ICLR | Title
Variational Autoencoder with Arbitrary Conditioning
Abstract
We propose a single neural probabilistic model based on variational autoencoder that can be conditioned on an arbitrary subset of observed features and then sample the remaining features in “one shot”. The features may be both real-valued and categorical. Training of the model is performed by stochastic variational Bayes. The experimental evaluation on synthetic data, as well as feature imputation and image inpainting problems, shows the effectiveness of the proposed approach and diversity of the generated samples.
1 INTRODUCTION
In recent years, a number of generative probabilistic models based on neural networks have been proposed. The most popular approaches include the variational autoencoder (VAE) (Kingma & Welling, 2013) and generative adversarial networks (GANs) (Goodfellow et al., 2014). They learn a distribution over objects p(x) and allow sampling from this distribution.
In many cases, we are interested in learning a conditional distribution p(x|y). For instance, if x is an image of a face, y could be the characteristics describing the face (whether glasses are present; the length of hair, etc.). The conditional variational autoencoder (Sohn et al., 2015) and conditional generative adversarial nets (Mirza & Osindero, 2014) are popular methods for this problem.
In this paper, we consider the problem of learning all conditional distributions of the form p(xI |xU\I), where U is the set of all features and I is its arbitrary subset. This problem generalizes both learning the joint distribution p(x) and learning the conditional distribution p(x|y). To tackle this problem, we propose a Variational Autoencoder with Arbitrary Conditioning (VAEAC) model. It is a latent variable model similar to VAE, but allows conditioning on an arbitrary subset of the features. The conditioning features affect the prior on the latent Gaussian variables which are used to generate unobserved features. The model is trained using stochastic gradient variational Bayes (Kingma & Welling, 2013).
We consider the two most natural applications of the proposed model. The first one is feature imputation, where the goal is to restore the missing features given the observed ones. The imputed values may be valuable by themselves or may improve the performance of other machine learning algorithms which process the dataset. Another application is image inpainting, in which the goal is to fill in an unobserved part of an image with artificial content in a realistic way. This can be used for removing unwanted objects from images or, vice versa, for completing partially occluded or corrupted objects.
∗The author is now at DeepMind.
The experimental evaluation shows that the proposed model successfully samples from the conditional distributions. The distribution over samples is close to the true conditional distribution. This property is very important when the true distribution has several modes. The model is shown to be effective in the feature imputation problem, which helps to increase the quality of subsequent discriminative models on different problems from the UCI datasets collection (Lichman, 2013). We demonstrate that the model can generate diverse and realistic image inpaintings on the MNIST (LeCun et al., 1998), Omniglot (Lake et al., 2015) and CelebA (Liu et al., 2015) datasets, and works even better than current state-of-the-art inpainting techniques in terms of peak signal-to-noise ratio (PSNR).
The paper is organized as follows. In section 2 we review the related works. In section 3 we briefly describe variational autoencoders and conditional variational autoencoders. In section 4 we define the problem, describe the VAEAC model and its training procedure. In section 5 we evaluate VAEAC. Section 6 concludes the paper. Appendix contains additional explanations, theoretical analysis, and experiments for VAEAC.
2 RELATED WORK
Universal Marginalizer (Douglas et al., 2017) is a model based on a feed-forward neural network which approximates marginals of unobserved features conditioned on observable values. A related idea of an autoregressive model of joint probability was previously proposed in Germain et al. (2015) and Uria et al. (2016). The description of the model and comparison with VAEAC are available in section 5.3.
Yoon et al. (2018) propose a GAN-based model called GAIN which solves the same problem as VAEAC. In contrast to VAEAC, GAIN does not use unobserved data during training, which makes it easier to apply to the missing features imputation problem. Nevertheless, this becomes a disadvantage when fully-observed training data is available but the missingness rate at the testing stage is high. For example, in the inpainting setting GAIN cannot learn the conditional distribution over MNIST digits given one horizontal line of the image, while VAEAC can (see appendix D.4). The comparison of VAEAC and GAIN on the missing feature imputation problem is given in section 5.1 and appendix D.2.
Rezende et al. (2014) [Appendix F], Sohl-Dickstein et al. (2015), Goyal et al. (2017), and Bordes et al. (2017) propose to fill missing data with noise and run a Markov chain with a learned transition operator. The stationary distribution of such chains approximates the true conditional distribution of the unobserved features. Bachman & Precup (2015) consider missing feature imputation in terms of a Markov decision process and propose an LSTM-based sequential decision making model to solve it. Nevertheless, these methods are computationally expensive at test time and require fully-observed training data.
Image inpainting is a classic computer vision problem. Most of the earlier methods rely on local and texture information or hand-crafted problem-specific features (Bertalmio et al., 2000). In past years multiple neural network based approaches have been proposed.
Pathak et al. (2016), Yeh et al. (2016) and Yang et al. (2017) use different kinds and combinations of adversarial, reconstruction, texture and other losses. Li et al. (2017) focus on face inpainting and use two adversarial losses and one semantic parsing loss to train the generative model. In Yeh et al. (2017), GANs are first trained on the whole training dataset. The inpainting is an optimization procedure that finds the latent variables that best explain the observed features. Then, the obtained latents are passed through the generative model to restore the unobserved portion of the image. We can say that VAEAC is a similar model which uses a prior network to find proper latent variables instead of solving an optimization problem.
All the described methods aim to produce a single realistic inpainting, while VAEAC is capable of sampling diverse inpaintings. Additionally, Yeh et al. (2016), Yang et al. (2017) and Yeh et al. (2017) have high test-time computational complexity of inpainting, because they require an optimization problem to be solved. On the other hand, VAEAC is a “single-shot” method with a low computational cost.
3 BACKGROUND
3.1 VARIATIONAL AUTOENCODER
Variational autoencoder (Kingma & Welling, 2013) (VAE) is a directed generative model with latent variables. The generative process in variational autoencoder is as follows: first, a latent variable z is generated from the prior distribution p(z), and then the data x is generated from the generative distribution pθ(x|z), where θ are the generative model’s parameters. This process induces the distribution pθ(x) = Ep(z)pθ(x|z). The distribution pθ(x|z) is modeled by a neural network with parameters θ. p(z) is a standard Gaussian distribution.
The parameters θ are tuned by maximizing the likelihood of the training data points {xi}Ni=1 from the true data distribution pd(x). In general, this optimization problem is challenging due to intractable posterior inference. However, a variational lower bound can be optimized efficiently using backpropagation and stochastic gradient descent:
log pθ(x) = E_{qφ(z|x)} [log pθ(x, z) − log qφ(z|x)] + DKL(qφ(z|x)‖p(z|x, θ))
≥ E_{qφ(z|x)} log pθ(x|z) − DKL(qφ(z|x)‖p(z)) = LVAE(x; θ, φ)    (1)
Here qφ(z|x) is a proposal distribution, parameterized by a neural network with parameters φ, that approximates the posterior p(z|x, θ). Usually this distribution is Gaussian with a diagonal covariance matrix. The closer qφ(z|x) is to p(z|x, θ), the tighter the variational lower bound LVAE(x; θ, φ). To compute the gradient of the variational lower bound with respect to φ, the reparameterization trick is used: z = µφ(x) + ε ◦ σφ(x), where ε ∼ N(0, I) and µφ and σφ are deterministic functions parameterized by neural networks. So the gradient can be estimated using the Monte-Carlo method for the first term and computing the second term analytically:
∂LVAE(x; θ, φ)/∂φ = E_{ε∼N(0,I)} [∂/∂φ log pθ(x|µφ(x) + ε ◦ σφ(x))] − ∂/∂φ DKL(qφ(z|x)‖p(z))    (2)
Thus LVAE(x; θ, φ) can be optimized using stochastic gradient ascent with respect to φ and θ.
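For concreteness, the following sketch shows a single-sample Monte-Carlo estimate of LVAE with the reparameterization trick and an analytic KL term for a diagonal Gaussian proposal; the `encoder` and `decoder` modules are hypothetical placeholders rather than the architectures used in this paper, and the decoder is assumed to return a torch distribution object.

```python
import torch

def vae_elbo(x, encoder, decoder):
    # Proposal q_phi(z|x): diagonal Gaussian parameterized by the encoder.
    mu, log_sigma = encoder(x)
    eps = torch.randn_like(mu)
    z = mu + eps * log_sigma.exp()  # reparameterization: z = mu + eps * sigma

    # Single-sample Monte-Carlo estimate of E_q log p_theta(x|z).
    rec_ll = decoder(z).log_prob(x).sum(dim=-1)

    # Analytic KL( q_phi(z|x) || N(0, I) ) for a diagonal Gaussian.
    kl = 0.5 * (mu ** 2 + (2 * log_sigma).exp() - 2 * log_sigma - 1).sum(dim=-1)

    return (rec_ll - kl).mean()  # maximize over theta and phi with SGD/Adam
```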
3.2 CONDITIONAL VARIATIONAL AUTOENCODER
The conditional variational autoencoder (Sohn et al., 2015) (CVAE) approximates the conditional distribution pd(x|y). It outperforms deterministic models when the distribution pd(x|y) is multi-modal (diverse x's are probable for a given y). For example, assume that x is a real-valued image. Then a deterministic regression model with mean squared error loss would predict an averaged, blurry value for x. On the other hand, CVAE learns the distribution of x, from which one can sample diverse and realistic objects.
Variational lower bound for CVAE can be derived similarly to VAE by conditioning all considered distributions on y:
LCVAE(x, y; θ, ψ, φ) = E_{qφ(z|x,y)} log pθ(x|z, y) − DKL(qφ(z|x, y)‖pψ(z|y)) ≤ log pθ,ψ(x|y)    (3)
Similarly to VAE, this objective is optimized using the reparameterization trick. Note that the prior distribution pψ(z|y) is conditioned on y and is modeled by a neural network with parameters ψ. Thus, CVAE uses three trainable neural networks, while VAE only uses two.
The authors also propose modifications of CVAE such as the Gaussian stochastic neural network and the hybrid model. These modifications can be applied to our model as well. Nevertheless, we don't use them because of a disadvantage which is described in appendix C.
4 VARIATIONAL AUTOENCODER WITH ARBITRARY CONDITIONING
4.1 PROBLEM STATEMENT
Consider a distribution pd(x) over a D-dimensional vector x with real or categorical components. The components of the vector are called features.
Let the binary vector b ∈ {0, 1}^D be the mask of unobserved features of the object. Then we describe the vector of unobserved features as xb = {xi : bi = 1}. For example, x(0,1,1,0,1) = (x2, x3, x5). Using this notation we denote by x1−b the vector of observed features.
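As a quick numeric illustration of this notation (a hypothetical example, not taken from the paper):

```python
import numpy as np

x = np.array([3.0, 1.5, -2.0, 0.7, 4.2])  # an object with D = 5 features
b = np.array([0, 1, 1, 0, 1])             # mask of unobserved features

x_b = x[b == 1]    # unobserved features (x2, x3, x5): [ 1.5 -2.   4.2]
x_obs = x[b == 0]  # observed features x_{1-b} (x1, x4): [3.  0.7]
```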
Our goal is to build a model of the conditional distribution pψ,θ(xb|x1−b, b) ≈ pd(xb|x1−b, b) for an arbitrary b, where ψ and θ are parameters that are used in our model at the testing stage.
However, the true distribution pd(xb|x1−b, b) is intractable without strong assumptions about pd(x). Therefore, our model pψ,θ(xb|x1−b, b) has to be more precise for some b and less precise for others. To formalize our requirements on the accuracy of the model, we introduce a distribution p(b) over the different unobserved feature masks. The distribution p(b) is arbitrary and may be defined by the user depending on the problem. Generally it should have full support over {0, 1}^D so that pψ,θ(xb|x1−b, b) can evaluate arbitrary conditionals. Nevertheless, this is not necessary if the model is used for specific kinds of conditioning (as we do in section 5.2).
Using p(b) we can introduce the following log-likelihood objective function for the model:
max_{ψ,θ} E_{pd(x)} E_{p(b)} log pψ,θ(xb|x1−b, b)    (4)
The special cases of the objective (4) are variational autoencoder (bi = 1 ∀i ∈ {1, . . . , D}) and conditional variational autoencoder (b is constant).
4.2 MODEL DESCRIPTION
The generative process of our model is similar to that of CVAE: for each object, we first generate z ∼ pψ(z|x1−b, b) using the prior network, and then sample the unobserved features xb ∼ pθ(xb|z, x1−b, b) using the generative network. This process induces the following model distribution over unobserved features:
pψ,θ(xb|x1−b, b) = E_{z∼pψ(z|x1−b,b)} pθ(xb|z, x1−b, b)    (5)
We use z ∈ R^d and a Gaussian distribution pψ over z, with parameters from a neural network with weights ψ: pψ(z|x1−b, b) = N(z|µψ(x1−b, b), σ²ψ(x1−b, b)I). The real-valued components of the distribution pθ(xb|z, x1−b, b) are defined likewise. Each categorical component i of the distribution pθ(xi|z, x1−b, b) is parameterized by a function wi,θ(z, x1−b, b), whose outputs are the logits of the probabilities for each category: xi ∼ Cat[Softmax(wi,θ(z, x1−b, b))]. Therefore the components of the latent vector z are conditionally independent given x1−b and b, and the components of xb are conditionally independent given z, x1−b and b.
The variables xb and x1−b have variable length that depends on b. So, in order to use architectures such as multi-layer perceptrons and convolutional neural networks, we implement x1−b as x ◦ (1 − b), where ◦ is an element-wise product, so that x1−b has a fixed length. The output of the generative network also has a fixed length, but we use only the unobserved components to compute the likelihood.
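A minimal sketch of this fixed-length input construction (our helper; concatenating the mask itself is one natural way to condition on both x1−b and b):

```python
import torch

def make_network_input(x, b):
    """Zero out the unobserved features and concatenate the mask,
    so that every network input has the same fixed length 2*D."""
    x_observed = x * (1.0 - b)  # x_{1-b} = x ∘ (1 - b)
    return torch.cat([x_observed, b], dim=-1)
```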
The theoretical analysis of the model is available in appendix B.1.
4.3 LEARNING VARIATIONAL AUTOENCODER WITH ARBITRARY CONDITIONING
4.3.1 VARIATIONAL LOWER BOUND
We can derive a lower bound for log pψ,θ(xb|x1−b, b) as for variational autoencoder:
log pψ,θ(xb|x1−b, b) = E_{qφ(z|x,b)} [log pψ,θ(xb, z|x1−b, b) − log qφ(z|x, b)] + DKL(qφ(z|x, b)‖pψ,θ(z|x, b))
≥ E_{qφ(z|x,b)} log pθ(xb|z, x1−b, b) − DKL(qφ(z|x, b)‖pψ(z|x1−b, b)) = LVAEAC(x, b; θ, ψ, φ)    (6)
Therefore we have the following variational lower bound optimization problem:
max_{θ,ψ,φ} E_{pd(x)} E_{p(b)} LVAEAC(x, b; θ, ψ, φ)    (7)
We use a fully-factorized Gaussian proposal distribution qφ, which allows us to apply the reparameterization trick and compute the KL divergence analytically in order to optimize (7).
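For reference, a sketch of the analytic KL term in (6) between two fully-factorized Gaussians (our helper, assuming both the proposal and prior networks output means and log standard deviations):

```python
import torch

def kl_diag_gaussians(mu_q, log_sigma_q, mu_p, log_sigma_p):
    """KL( N(mu_q, diag(sigma_q^2)) || N(mu_p, diag(sigma_p^2)) ), summed over latent dims."""
    log_var_ratio = 2 * (log_sigma_q - log_sigma_p)
    t1 = log_var_ratio.exp() + (mu_q - mu_p) ** 2 / (2 * log_sigma_p).exp()
    return 0.5 * (t1 - 1 - log_var_ratio).sum(dim=-1)
```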
4.3.2 PRIOR IN LATENT SPACE
During the optimization of objective (7), the parameters µψ and σψ of the prior distribution of z may tend to infinity, since there is no penalty for large values of these parameters. We usually observe a slow growth of ‖z‖2 during training. To prevent potential numerical instabilities, we put a Normal-Gamma prior on the parameters of the prior distribution. Formally, we redefine pψ(z|x1−b, b) as follows:
pψ(z, µψ, σψ|x1−b, b) = N(z|µψ, σ²ψ) N(µψ|0, σµ) Gamma(σψ|2, σσ)    (8)
As a result, the regularizers −µψ²/(2σµ²) and σσ(log(σψ) − σψ) are added to the model log-likelihood. The hyperparameter σµ is chosen to be large (10⁴) and σσ is taken to be a small positive number (10⁻⁴). This distribution is close to uniform near zero, so it doesn't affect the learning process significantly.
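In code these two regularizers can simply be added to the training objective; a sketch under the stated hyperparameter choices (inputs are assumed to be torch tensors):

```python
def prior_regularizer(mu_psi, sigma_psi, sigma_mu=1e4, sigma_sigma=1e-4):
    """Penalties induced by the Normal-Gamma prior (8), added to the log-likelihood."""
    mu_term = -(mu_psi ** 2) / (2 * sigma_mu ** 2)            # from N(mu_psi | 0, sigma_mu)
    sigma_term = sigma_sigma * (sigma_psi.log() - sigma_psi)  # from Gamma(sigma_psi | 2, sigma_sigma)
    return (mu_term + sigma_term).sum(dim=-1)
```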
4.3.3 MISSING FEATURES
The optimization objective (7) requires all features of each object at the training stage: some of the features are observed variables at the input of the model and the others are unobserved features used to evaluate the model. Nevertheless, in some problem settings the training data contains missing features too. We propose the following slight modification of problem (7) in order to cover such problems as well.
The missing values cannot be observed, so xi = ω ⇒ bi = 1, where ω denotes a missing value in the data. In order to meet this requirement, we redefine the mask distribution as conditioned on x: p(b) turns into p(b|x) in (4) and (7). In the reconstruction loss (5) we simply omit the missing features, i.e., marginalize them out:
log pθ(xb|z, x1−b, b) = Σ_{i: bi=1, xi≠ω} log pθ(xi|z, x1−b, b)    (9)
The proposal network must be able to distinguish the features that come from a real object from those that are simply missing. So we use an additional missing-features mask, which is fed to the proposal network together with the unobserved-features mask b and the object x.
The proposed modifications are evaluated in section 5.1.
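A sketch of the masked reconstruction term (9) for real-valued features with the unit-variance Gaussian likelihood used later in section 5.1 (the two masks follow our naming: b marks features unobserved by the mask, `missing` marks values equal to ω):

```python
import math
import torch

def masked_gaussian_log_likelihood(x, mu, b, missing):
    """Sum of log p_theta(x_i | z, x_{1-b}, b) over i with b_i = 1 and x_i != omega."""
    log_prob = -0.5 * (x - mu) ** 2 - 0.5 * math.log(2 * math.pi)  # sigma_theta = 1
    keep = b * (1.0 - missing)  # unobserved by the mask but actually present in the data
    return (log_prob * keep).sum(dim=-1)
```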
5 EXPERIMENTS
In this section we validate the performance of VAEAC using several real-world datasets. In the first set of experiments we evaluate the missing features imputation performance of VAEAC on various UCI datasets (Lichman, 2013). We compare imputations from our model with imputations from classical methods such as MICE (Buuren & Groothuis-Oudshoorn, 2010) and MissForest (Stekhoven & Bühlmann, 2011), and the recently proposed GAN-based method GAIN (Yoon et al., 2018). In the second set of experiments we use VAEAC to solve the image inpainting problem. We show inpaintings generated by VAEAC and compare our model with the models from Pathak et al. (2016), Yeh et al. (2017) and Li et al. (2017) in terms of the peak signal-to-noise ratio (PSNR) of the obtained inpaintings on the CelebA dataset (Liu et al., 2015). Finally, we evaluate VAEAC against the competing method called Universal Marginalizer (Douglas et al., 2017). Additional experiments can be found in appendices C and D. The code is available at https://github.com/tigvarts/vaeac.
5.1 MISSING FEATURES IMPUTATION
Datasets with missing features are widespread. Consider a dataset with D-dimensional objects x, where each feature may be missing (which we denote by xi = ω), and their target values y. The majority of discriminative methods do not support missing values in the objects. The procedure of filling in the missing feature values is called missing features imputation.
In this section we evaluate the quality of imputations produced by VAEAC. For evaluation we use datasets from the UCI repository (Lichman, 2013). Before training we randomly drop 50% of values in both the train and test sets. After that we impute the missing features using MICE (Buuren & Groothuis-Oudshoorn, 2010), MissForest (Stekhoven & Bühlmann, 2011), GAIN (Yoon et al., 2018) and VAEAC trained on the observed data. The details of the GAIN implementation are described in appendix A.4.
Our model learns the distribution of the imputations, so it is able to sample from this distribution. We replace each object with missing features by n = 10 objects with sampled imputations, so the size of the dataset increases n-fold. This procedure is called missing features multiple imputation. MICE and GAIN are also capable of multiple imputation (we use n = 10 for them in the experiments as well), but MissForest is not.
For more details about the experimental setup see appendices A.1, A.2, and A.4.
In table 1 we report the NRMSE (i.e., RMSE normalized by the standard deviation of each feature and then averaged over all features) of the imputations for continuous datasets and the proportion of falsely classified (PFC) for categorical ones. For multiple imputation methods we average the imputations of continuous variables and take the most frequent imputation for categorical ones, for each object.
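For clarity, a sketch of the NRMSE metric as described above (our helper with hypothetical names):

```python
import numpy as np

def nrmse(x_true, x_imputed, dropped_mask):
    """RMSE over dropped entries, normalized per feature by its std, averaged over features."""
    scores = []
    for j in range(x_true.shape[1]):
        m = dropped_mask[:, j].astype(bool)
        if not m.any():
            continue
        rmse = np.sqrt(np.mean((x_true[m, j] - x_imputed[m, j]) ** 2))
        scores.append(rmse / x_true[:, j].std())
    return float(np.mean(scores))
```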
We also train a linear or logistic regression and report the regression or classification performance after applying the imputations of the different methods in table 2. For multiple imputation methods we average the predictions for continuous targets and take the most frequent prediction for categorical ones, for each object in the test set.
As can be seen from tables 1 and 2, VAEAC can learn the joint data distribution and use it for missing feature imputation. The imputations are competitive with current state-of-the-art imputation methods in terms of RMSE, PFC, post-imputation regression R2-score and classification accuracy. Nevertheless, we don't claim that our method is state of the art in missing features imputation; for some datasets MICE or MissForest outperform it. Additional experiments can be found in appendix D.2.
5.2 IMAGE INPAINTING
The image inpainting problem has a number of different formulations. The formulation of interest here is as follows: some of the pixels of an image are unobserved and we want to restore them in a natural way. Unlike the majority of papers, we want to restore not just the single most probable inpainting, but the distribution over all possible inpaintings from which we can sample. This distribution is extremely multi-modal because there are often many different possible ways to inpaint an image.
Unlike in the previous subsection, here we have uncorrupted images without missing features in the training set, so p(b|x) = p(b). As discussed in section 2, state-of-the-art results use different adversarial losses to achieve sharper and more realistic samples. VAEAC can be adapted to the image inpainting problem by using a combination of those adversarial losses as a part of the reconstruction loss pθ(xb|z, x1−b, b). Nevertheless, such a construction is out of the scope of this research, so we leave it for future work. In the current work we show that the model can generate both diverse and realistic inpaintings.
In figures 1, 2, 3 and 4 we visualize image inpaintings produced by VAEAC on binarized MNIST (LeCun et al., 1998), Omniglot (Lake et al., 2015) and CelebA (Liu et al., 2015). The details of the learning procedure and descriptions of the datasets are available in appendices A.1 and A.3.
To the best of our knowledge, the most recent inpainting papers don't consider the diverse inpainting problem, where the goal is to build diverse image inpaintings, so there is no straightforward way to compare with these models. Nevertheless, we compute the peak signal-to-noise ratio (PSNR) for one random inpainting from VAEAC and the best PSNR among 10 random inpaintings from VAEAC. One inpainting might not be similar to the original image, so we also measure how well the inpainting which is most similar to the original image reconstructs it. We compare these two metrics, computed for certain masks, with the PSNRs for the same masks on CelebA from Yeh et al. (2017) and Li et al. (2017). The results are available in tables 3 and 4.
We observe that for the majority of the proposed masks our model outperforms the competing methods in terms of PSNR even with one sample, and for the rest (where the inpaintings are significantly diverse) the best PSNR over 10 inpaintings is larger than the same PSNR of the competing models. Even if PSNR does not completely reflect the visual quality of images and tends to favor blurry VAE samples over realistic GAN samples, the results show that VAEAC is able to solve the inpainting problem comparably to state-of-the-art methods. The disadvantage of VAEAC compared to Yeh et al. (2017) and Li et al. (2017) (but not Pathak et al. (2016)) is that it needs the distribution over masks at the training stage to be similar to the distribution over masks at the test stage. However, this is not a very strict limitation for practical usage.
5.3 UNIVERSAL MARGINALIZER
The Universal Marginalizer (Douglas et al., 2017) (UM) is a model which uses a single neural network to estimate the marginal distributions over the unobserved features. It optimizes the following objective:
max_θ E_{x∼pd(x)} E_{b∼p(b)} Σ_{i=1}^{D} bi log pθ(xi|x1−b, b)    (10)
For a given mask b we fix a permutation of its unobserved components: (i1, i2, . . . , i|b|), where |b| is the number of unobserved components. Using the learned model and the permutation we can generate objects from the joint distribution and estimate their probability using the chain rule:
log pθ(xb|x1−b, b) = Σ_{j=1}^{|b|} log pθ(x_{ij} | x_{1−(b−Σ_{k=1}^{j−1} e_{ik})}, b − Σ_{k=1}^{j−1} e_{ik})    (11)
For example, pθ(x1, x4, x5|x2, x3) = pθ(x4|x2, x3) pθ(x1|x2, x3, x4) pθ(x5|x1, x2, x3, x4). Conditional sampling or conditional likelihood estimation for one object requires |b| requests to UM to compute pθ(xi|x1−b, b). Each request is a forward pass through the neural network. In the case of conditional sampling, those requests cannot even be parallelized, because the input of the next request contains the output of the previous one.
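The sequential sampling procedure can be sketched as follows (our pseudocode-style helper; `um` is assumed to return a per-feature sampling distribution over all D components given the current observations and mask):

```python
import torch

def um_conditional_sample(um, x, b):
    """Sample the unobserved components one at a time, feeding each sample back in."""
    x, b = x.clone(), b.clone()
    unobserved = torch.nonzero(b, as_tuple=False).flatten()
    order = unobserved[torch.randperm(len(unobserved))]  # random permutation
    for i in order:                                      # |b| sequential forward passes
        x[i] = um(x * (1 - b), b).sample()[i]            # x_i ~ p_theta(x_i | x_{1-b}, b)
        b[i] = 0                                         # x_i becomes observed for later steps
    return x
```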
We propose a slight modification of the original UM training procedure which allows learning UM efficiently for any kind of masks including those considered in this paper. The details of the modification are described in appendix B.3.
1 The results are from Yeh et al. (2017).
2 The results are from Li et al. (2017).
Left: input. The gray pixels are unobserved. Middle: samples from VAEAC. Right: ground truth.
The results of using this modification of UM are provided in table 5. We can say that the relation between VAEAC and UM is similar to the relation between VAE and PixelCNN: the latter is much slower at the testing stage, but it easily captures local dependencies in the data, while the former is faster but assumes conditional independence of the outputs. Nevertheless, there are a number of cases where UM cannot learn the distribution well while VAEAC can. For example, when the data is real-valued and the marginal distributions have many local optima, there is no straightforward parametrization which allows UM to approximate them and, therefore, the conditioned joint distribution. An example of such a distribution and more illustrations comparing VAEAC and UM are available in appendix D.5.
6 CONCLUSION
In this paper we consider the problem of simultaneously learning all conditional distributions for a vector. This problem has a number of different special cases with practical applications. We propose a neural network based probabilistic model with Gaussian latent variables for learning conditional distributions. This model is scalable and efficient in inference and learning. We propose several tricks to improve optimization and give recommendations about the choice of hyperparameters. The model is successfully applied to feature imputation and inpainting tasks. The experimental results show that the model is competitive with state-of-the-art methods for both the missing features imputation and image inpainting problems.
APPENDIX
A EXPERIMENTAL DETAILS
A.1 NEURAL NETWORK ARCHITECTURES
In all experiments we use the Adam optimization method (Kingma & Ba, 2014), skip-connections between the prior network and the generative network inspired by Mao et al. (2016), Sønderby et al. (2016) and Ronneberger et al. (2015), and convolutional neural networks based on ResNet blocks (He et al., 2016).
Without skip-connections, all the information for the decoder goes through the latent variables. In image inpainting we found skip-connections very useful in terms of both log-likelihood improvement and image realism, because the latent variables are responsible for global information only while local information passes through the skip-connections. As a result, the border between the image and the inpainting becomes less conspicuous.
The main idea of neural networks architecture is reflected in figure 5.
The number of hidden layers, their widths and structure may be different.
The neural networks we used for image inpainting have He-Uniform initialization of the convolutional ResNet blocks, and the skip-connections are implemented using concatenation, not addition. The proposal network structure is exactly the same as the prior network, except for the skip-connections.
Also, one could use much simpler fully-connected networks with one hidden layer as the proposal, prior and generative networks in VAEAC and still obtain nice inpaintings on MNIST.
A.2 MISSING FEATURES IMPUTATION
We split the dataset into train and test sets with a size ratio of 3:1. Before training we randomly drop 50% of values in both the train and test sets. We repeat each experiment 5 times with different train-test splits and dropped features, and then average the results and compute their standard deviation.
As we show in appendix B.2, better results can be achieved when the model learns the concatenation of the object features x and the targets y. So we treat y as an additional feature that is always unobserved at testing time.
To train our model we use a distribution p(bi|x) in which p(bi = 1|xi = ω) = 1 and p(bi = 1|x) = 0.2 otherwise. Also, for VAEAC training we normalize the real-valued features, fix σθ = 1 in the generative model of VAEAC in order to optimize RMSE, and use 25% of the training data as a validation set to select the best model among all epochs of training.
For the test set, the classifier or regressor is applied to each of the n imputed objects and the predictions are combined. For regression problems we report the R2-score of the combined predictions, so we use averaging as the combination method. For classification problems we report accuracy, and therefore choose the mode. We consider the workflow where the imputed values of y are not fed to the classifier or regressor, to make a fair comparison of feature imputation quality.
NRMSE or PFC for a dataset is computed as the average NRMSE or PFC over all features of the dataset. The NRMSE of a feature is just the RMSE of its imputations divided by the standard deviation of the feature. The PFC of a feature is the proportion of imputations which are incorrect.
A.3 IMAGE INPAINTING DATASETS AND MASKS
MNIST is a dataset of 60000 train and 10000 test grayscale images of digits from 0 to 9 of size 28x28. We binarize all images in the dataset. For MNIST we consider the Bernoulli log-likelihood as the reconstruction loss: log pθ(xb|z, x1−b, b) = Σ_{i: bi=1} log Bernoulli(xi | pθ,i(z, x1−b, b)), where pθ,i(z, x1−b, b) is an output of the generative neural network. We use 16 latent variables. In the mask for this dataset the observed pixels form a three-pixel-wide horizontal line whose position is distributed uniformly.
Omniglot is a dataset of 19280 train and 13180 test black-and-white images of symbols from different alphabets of size 105x105. As above, the brightness of each pixel is treated as the Bernoulli probability of it being 1. The mask we use is a random rectangle, which is described below. We use 64 latent variables. We train the model for 50 epochs and choose the best model according to the IWAE log-likelihood estimate on the validation set after each epoch.
CelebA is a dataset of 162770 train, 19867 validation and 19962 test color images of celebrity faces of size 178x218. Before learning we normalize the channels of the dataset. We use the logarithm of a fully-factorized Gaussian distribution as the reconstruction loss. The mask we use is a random rectangle, which is described below. We use 32 latent variables.
The rectangular mask is a common shape of unobserved region in image inpainting. We use such masks for Omniglot and CelebA. We sample the corner points of the rectangles uniformly over the image, but reject rectangles whose area is less than a quarter of the image area.
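A sketch of this rejection-sampling mask generator (our implementation of the described procedure):

```python
import numpy as np

def random_rectangle_mask(height, width, min_area_fraction=0.25):
    """Sample a rectangular unobserved region; reject rectangles that are too small."""
    while True:
        y1, y2 = sorted(np.random.randint(0, height + 1, size=2))
        x1, x2 = sorted(np.random.randint(0, width + 1, size=2))
        if (y2 - y1) * (x2 - x1) >= min_area_fraction * height * width:
            mask = np.zeros((height, width), dtype=np.float32)
            mask[y1:y2, x1:x2] = 1.0  # 1 = unobserved
            return mask
```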
In Li et al. (2017) six different masks O1–O6 are used at the testing stage. We reconstructed the positions of the masks from the illustrations in the paper and give their coordinates in table 6. Visualizations of the masks are available in figure 10.
At the training stage we used a rectangular mask with uniformly sampled random corners. We reject masks with width or height less than 16pt. We use 64 latent variables and take the best model over 50 epochs based on the validation IWAE log-likelihood estimate. We can obtain slightly higher PSNR values than reported in table 4 if we use only masks O1–O6 at the training stage.
In Yeh et al. (2017) four types of masks are used. The center mask is an unobserved 32x32 square in the center of a 64x64 image. The half mask means that one of the upper, lower, left or right halves of the image is unobserved; all these types of half are equiprobable. The random mask uses a pixelwise-independent Bernoulli distribution with probability 0.8 to form the mask of unobserved pixels. The pattern mask is proposed in Pathak et al. (2016). As we deduced from the code3, the generation process is as follows: first we generate a 600x600 one-channel image with a uniform distribution over pixels, then bicubically interpolate it to an image of size 10000x10000, and then apply the Heaviside step function H(x − 0.25) (i.e., all points with value less than 0.25 are considered unobserved). To sample a mask, we sample a random position in this 10000x10000 binary image and crop a 64x64 mask. If less than 20% or more than 30% of the pixels are unobserved, then the mask is rejected and the position is sampled again. In the comparison with this paper in section 5.2 we use the same distribution over masks at the training and testing stages. We use VAEAC with 64 latent variables and take the best model over 50 epochs based on the validation IWAE log-likelihood estimate.
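A sketch of this pattern-mask generation, following our reading of the referenced code (the interpolation routine is our choice; note that the stated 10000x10000 pattern image is large, so in practice one may prefer a smaller pattern):

```python
import numpy as np
from scipy.ndimage import zoom

# One-time setup: a large binary pattern image, as described above.
noise = np.random.uniform(size=(600, 600))
big = zoom(noise, 10000 / 600, order=3)    # bicubic-like interpolation to 10000x10000
pattern = (big < 0.25).astype(np.float32)  # Heaviside threshold: 1 = unobserved

def sample_pattern_mask(size=64, low=0.2, high=0.3):
    while True:
        y = np.random.randint(0, pattern.shape[0] - size)
        x = np.random.randint(0, pattern.shape[1] - size)
        mask = pattern[y:y + size, x:x + size]
        if low <= mask.mean() <= high:  # reject masks outside the 20-30% range
            return mask
```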
A.4 GAIN IMPLEMENTATION DETAILS
For missing feature imputation we reimplemented GAIN in PyTorch based on the paper (Yoon et al., 2018) and the available TensorFlow source code for image inpainting4.
For categorical features we use one-hot encoding. We observe in experiments that it works better in terms of NRMSE and PFC than processing categorical features in GAIN as continuous ones and then rounding them to the nearest category.
For categorical features we also use the reconstruction loss LM(xi, x′i) = −(1/|Xi|) Σ_{j=1}^{|Xi|} xi,j log(x′i,j), where |Xi| is the number of categories of the i-th feature and xi,j is the j-th component of the one-hot encoding of the feature xi. Such an LM enforces an equal contribution of each categorical feature to the whole reconstruction loss.
We use one more modification of LM (x, x′) for binary and categorical features. Cross-entropy loss in LM penalizes incorrect reconstructions of categorical and binary features much more than incorrect reconstructions for continuous ones. To avoid such imbalance we mixed L2 and cross-entropy reconstruction losses for binary and categorical features with weights 0.8 and 0.2 respectively:
L′M(xi, x′i) = 0.2 · LM(xi, x′i) + 0.8 · { (1/|Xi|) Σ_{j=1}^{|Xi|} (xi,j − x′i,j)², if xi is categorical; (xi − x′i)², if xi is binary }    (12)
We observe in experiments that this modification also works better in terms of NRMSE and PFC than the original model.
We use a validation set which contains 5% of the observed features for best model selection (the hyperparameter is the number of iterations).
In the original GAIN paper the authors propose to use cross-validation for the hyperparameter α ∈ {0.1, 0.5, 1, 2, 10}. We observe that using α = 10 and a hint h = b ◦ m + 0.5(1 − b), where the vector b is sampled from a Bernoulli distribution with p = 0.01, provides better results in terms of NRMSE and PFC than the original model with every α ∈ {0.1, 0.5, 1, 2, 10}. Such a hint distribution makes the model theoretically inconsistent but works well in practice (see table 7).
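A sketch of this hint mechanism (our notation; m is the data missingness mask as in GAIN):

```python
import torch

def make_hint(m, p=0.01):
    """h = b ∘ m + 0.5 (1 - b): reveal the true mask at a Bernoulli(p) subset of
    positions and use the non-informative value 0.5 everywhere else."""
    b = torch.bernoulli(torch.full_like(m, p))
    return b * m + 0.5 * (1 - b)
```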
Table 7 shows that our modifications consistently provide imputations no worse, and often better, than the original GAIN (in terms of NRMSE and PFC, on the considered datasets). So in this paper, for the missing feature imputation problem, we report the results of our modification of GAIN.
3 https://github.com/pathak22/context-encoder/blob/master/train_random.lua#L273
4 https://github.com/jsyoon0823/GAIN
B THEORY
B.1 VAEAC UNIVERSALITY
The theoretical guarantees that VAEAC can model an arbitrary distribution are based on the same guarantees for the Conditional Variational Autoencoder (CVAE). We prove below that if CVAE can model each of the conditional distributions p(xb|x1−b), then VAEAC can model all of them. We can imagine 2^D CVAEs, one learned for each mask. Because neural networks are universal approximators, the VAEAC networks could model the union of the CVAE networks, so that the VAEAC network performs the transformation defined by the network of the CVAE corresponding to the given mask:
pψ,VAEAC(z|x1−b, b) = pψ,CVAE,1−b(z|x1−b)  ∀x, b
pθ,VAEAC(xb|z, x1−b, b) = pθ,CVAE,1−b(xb|z, x1−b)  ∀z, x, b
So if CVAE can model any distribution p(x|y), so can VAEAC. The guarantees for CVAE in the case of continuous variables are based on the fact that every smooth distribution can be approximated by a large enough mixture of Gaussians, which is a special case of CVAE's generative model. These guarantees can also be extended to the case of mixed categorical-continuous variables. Admittedly, there are distributions over categorical variables which CVAE with Gaussian prior and proposal distributions cannot learn. Nevertheless, this kind of limitation is not fundamental and is caused by a poor proposal distribution family.
B.2 WHY VAEAC NEEDS TARGET VALUES FOR MISSING FEATURES IMPUTATION?
Consider a dataset with D-dimensional objects x, where each feature may be missing (which we denote by xi = ω), and their target values y. In this section we show that better results are achieved when our model learns the concatenation of the object features x and the targets y. The following example shows why this is necessary. Consider a dataset where x1 = 1, x2 ∼ N(x2|y, 1), pd(y = 0) = pd(y = 5) = 0.5. In this case pd(x2|x1 = 1) = 0.5 N(x2|0, 1) + 0.5 N(x2|5, 1). We can see that generating data from pd(x2|x1) may only confuse the classifier, because with probability 0.5 it generates x2 ∼ N(0, 1) for y = 5 and x2 ∼ N(5, 1) for y = 0. On the other hand, pd(x2|x1, y) = N(x2|y, 1). Filling gaps using pd(x2|x1, y) can only improve the classifier or regressor by giving it some information from the joint distribution pd(x, y), thus simplifying the dependence to be learned at training time. So we treat y as an additional feature that is always unobserved at testing time.
B.3 UNIVERSAL MARGINALIZER: TRAINING PROCEDURE MODIFICATION
A problem the authors did not address in the original paper is the relation between the distribution of unobserved components p(b) at the testing stage and the distribution of masks in the requests to UM, p̂(b). The distribution over masks p(b) induces the distribution p̂(b), and in most cases p(b) ≠ p̂(b). The distribution p̂(b) also depends on the permutations (i1, i2, . . . , i|b|) that we use to generate objects.
We observed in experiments that UM must be trained using the unobserved mask distribution p̂(b). For example, if all masks from p(b) have a fixed number of unobserved components (e.g., D/2), then UM will never see an example of a mask with 1, 2, . . . , D/2 − 1 unobserved components, which is necessary to generate a sample conditioned on D/2 components. That leads to a drastically low likelihood estimate on the test set and unrealistic samples.
We developed a simple generative process for p̂(b) for arbitrary p(b), assuming the permutation of unobserved components (i1, i2, . . . , i|b|) is chosen uniformly at random: first we generate b0 ∼ p(b) and u ∼ U[0, 1], then b1 ∼ (Bernoulli(u))^D, and set b = b0 ◦ b1. A more complicated generative process exists for a sorted permutation where ij−1 < ij ∀j : 2 ≤ j ≤ |b|. In experiments we use the uniform distribution over permutations.
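A sketch of this generative process for p̂(b) (our helper; `sample_p_b` is assumed to draw a mask from the original distribution p(b)):

```python
import torch

def sample_um_training_mask(sample_p_b, d):
    """b = b0 ∘ b1 with b0 ~ p(b), u ~ U[0, 1], b1 ~ Bernoulli(u)^D."""
    b0 = sample_p_b()                      # original mask of shape (d,)
    u = float(torch.rand(()))              # shared Bernoulli probability
    b1 = torch.bernoulli(torch.full((d,), u))
    return b0 * b1
```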
C GAUSSIAN STOCHASTIC NEURAL NETWORK
The Gaussian stochastic neural network (13) and the hybrid model (14) were originally proposed in the paper on Conditional VAE (Sohn et al., 2015). The motivation the authors mention in the paper is as follows. During training the proposal distribution qφ(z|x, y) is used to generate the latent variables z, while at the testing stage the prior pψ(z|y) is used. The KL divergence tries to close the gap between the two distributions but, according to the authors, it is not enough. To overcome the issue the authors propose to use a hybrid model (14), a weighted mixture of the variational lower bound (3) and a single-sample Monte-Carlo estimate of the log-likelihood (13). The model corresponding to the second term is called a Gaussian Stochastic Neural Network (13), because it is a feed-forward neural network with a single Gaussian stochastic layer in the middle. Also, GSNN is a special case of CVAE where qφ(z|x, y) = pψ(z|y).
LGSNN(x, y; θ, ψ) = E_{pψ(z|y)} log pθ(x|z, y)    (13)
L(x, y; θ, ψ, φ) = α LCVAE(x, y; θ, ψ, φ) + (1 − α) LGSNN(x, y; θ, ψ), α ∈ [0, 1]    (14)
The authors report that the hybrid model and GSNN outperform CVAE in terms of segmentation accuracy on the majority of datasets.
We can also add that this technique seems to soften the “holes problem” (Makhzani et al., 2016). In Makhzani et al. (2016) the authors observe that vectors z from the prior distribution may be different enough from all vectors z from the proposal distribution at the training stage, so the generator network may be confused at the testing stage. Due to this problem, CVAE can have good reconstructions of y given z ∼ qφ(z|x, y), while samples of y given z ∼ pψ(z|x) are not realistic.
The same trick is applicable to our model as well:
LGSNN(x, b; θ, ψ) = E_{pψ(z|x1−b,b)} log pθ(xb|z, x1−b, b)    (15)
L(x, b; θ, ψ, φ) = α LVAEAC(x, b; θ, ψ, φ) + (1 − α) LGSNN(x, b; θ, ψ), α ∈ [0, 1]    (16)
In order to reflect the difference between sampling z from the prior and the proposal distributions, the authors of CVAE use two methods of log-likelihood estimation:
log pθ,ψ(x|y) ≈ log (1/S) Σ_{i=1}^{S} pθ(x|zi, y), zi ∼ pψ(z|y)    (17)
log pθ,ψ(x|y) ≈ log (1/S) Σ_{i=1}^{S} [pθ(x|zi, y) pψ(zi|y) / qφ(zi|x, y)], zi ∼ qφ(z|x, y)    (18)
The first estimator is called the Monte-Carlo estimator and the second one is called the Importance Sampling estimator (also known as IWAE). They are asymptotically equivalent, but in practice the Monte-Carlo estimator requires many more samples to obtain the same accuracy of estimation. A small S leads to underestimation of the log-likelihood for both Monte-Carlo and Importance Sampling (Burda et al., 2015), but for Monte-Carlo the underestimation is much stronger.
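A sketch of the Importance Sampling estimator (18), using log-sum-exp for numerical stability (our helper; the three callables are assumed to return log-densities):

```python
import math
import torch

def iwae_log_likelihood(x, y, log_p_x_zy, log_p_z_y, log_q_z_xy, sample_q, S=100):
    """log p(x|y) ≈ log (1/S) sum_i p(x|z_i, y) p(z_i|y) / q(z_i|x, y), z_i ~ q(z|x, y)."""
    log_w = []
    for _ in range(S):
        z = sample_q(x, y)
        log_w.append(log_p_x_zy(x, z, y) + log_p_z_y(z, y) - log_q_z_xy(z, x, y))
    log_w = torch.stack(log_w)          # importance log-weights, shape (S,)
    return torch.logsumexp(log_w, dim=0) - math.log(S)
```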
We perform an additional study of GSNN and the hybrid model and show that they have drawbacks when the target distribution p(x|y) has multiple different local maxima.
C.1 THEORETICAL STUDY
In this section we show why GSNN cannot learn distributions with several different modes and leads to blurry image samples.
For simplicity of notation, we consider the hybrid model for a standard VAE:
L(x; φ, ψ, θ) = α E_{z∼qφ(z|x)} log [pθ(x|z) pψ(z) / qφ(z|x)] + (1 − α) E_{z∼pψ(z)} log pθ(x|z)    (19)
The hybrid model (16) for VAEAC can be obtained from (19) by replacing x with xb and conditioning all distributions on x1−b and b. The validity of the further equations and conclusions remains for VAEAC after this replacement.
Consider now a categorical latent variable z which can take one of K values. Let x be a random variable with true distribution pd(x) to be modeled. Consider the following true data distribution: pd(x = xi) = 1/K for i ∈ {1, 2, . . . , K} and some values x1, x2, . . . , xK. So the true distribution has K different equiprobable modes. Suppose the generator network NNθ models a mapping from z to some vector of parameters vz = NNθ(z). Thus, we define the generative distribution as some function of these parameters: pθ(x|z) = f(x, vz). Therefore, the parameters θ are just the set v1, v2, . . . , vK. For simplicity we assume pψ(z) = 1/K. Taking pψ(z) = 1/K into account, we obtain the optimal q(z = i|x) = f(x, vi) / Σ_{j=1}^{K} f(x, vj). Using (19) and the above formulas for qφ, pψ and pθ, we obtain the following optimization problem:
max_{v1,v2,...,vK} (1/K) Σ_{i=1}^{K} [ α Σ_{j=1}^{K} q(j|xi) log ( (1/K) f(xi, vj) / q(j|xi) ) + (1 − α) Σ_{j=1}^{K} (1/K) log f(xi, vj) ], where q(j|xi) = f(xi, vj) / Σ_{k=1}^{K} f(xi, vk)    (20)
It is easy to show that (20) is equivalent to the following optimization problem:
max_{v1,v2,...,vK} Σ_{i=1}^{K} [ α log ( (1/K) Σ_{j=1}^{K} f(xi, vj) ) + (1 − α) Σ_{j=1}^{K} (1/K) log f(xi, vj) ]    (21)

It is clear from (21) that when α = 1 the log-likelihood of the initial model is optimized. On the other hand, when α = 0 the optimal point is v1 = v2 = · · · = vK = argmax_v Σ_{i=1}^{K} log f(xi, v), i.e., z doesn't influence the generative process, and for each z the generator produces the same v which maximizes the likelihood of the generative model f(x, v) for the given dataset of x's. For Bernoulli and Gaussian generative distributions f, such a v is just the average of all the modes x1, x2, . . . , xK. That explains the blurry images we observe later when using the GSNN model.
The same conclusion holds for continuous latent variables instead of categorical ones. Given K different modes in the true data distribution, VAE uses the proposal network to separate the prior distribution into K components (i.e., regions in the latent space), so that each region corresponds to one mode. On the other hand, in GSNN z is sampled independently of the mode which is to be reconstructed from it, so for each z the generator has to produce parameters suitable for all modes.
From this point of view, there is no difference between VAE and VAEAC. If the true conditional distribution has several different modes, then VAEAC can fit them all, while GSNN learns their average. If the true conditional distribution has one mode, GSNN and VAEAC are equivalent, and GSNN may even learn faster because it has fewer parameters.
The hybrid model is a trade-off between VAEAC and GSNN: the closer α is to zero, the more blurry and closer to the average the model distribution is. The exact dependence of the model distribution on α can be derived analytically for simple data distributions or evaluated experimentally. We perform such an experimental evaluation in the next sections.
C.2 SYNTHETIC DATA
In this section we show that VAEAC is capable of learning a complex multimodal distribution of synthetic data while GSNN and the hybrid model are not. Let x ∈ R² and p(b1 = 1) = p(b2 = 1) = 0.5. pd(x) = (1/8) Σ_{i=1}^{8} N(x|µi, (1/10)I), where µi ∼ N(µi|0, I). The distribution pd(x) is plotted in figure 6. The dataset contains 100000 points sampled from pd(x). We use a multi-layer perceptron with four ReLU layers of sizes 400-200-100-50, and 25-dimensional Gaussian latent variables.
For different mixture coefficients α we visualize samples from the learned distributions pψ,θ(x1, x2), pψ,θ(x1|x2), and pψ,θ(x2|x1). The observed features for the conditional distributions are generated from the marginal distributions p(x2) and p(x1) respectively.
We see in table 8 and in figure 7 that even with a very small weight, GSNN prevents the model from learning distributions with several local optima. GSNN also increases the Monte-Carlo log-likelihood estimate computed with a few samples while decreasing the much more precise Importance Sampling log-likelihood estimate. When α = 0.9 the whole distribution structure is lost.
Figure 6: Probability density function of the synthetic data distribution.
Figure 7: VAEAC for synthetic data. Rows: α = 1, α = 0.99, α = 0.9. Columns: x1 unknown (x2 ∼ p(x2), x1 ∼ pψ,θ(x1|x2)); x2 unknown (x1 ∼ p(x1), x2 ∼ pψ,θ(x2|x1)); x1, x2 unknown (x1, x2 ∼ pψ,θ(x1, x2)).
Figure 8: MNIST inpaintings: (a) VAEAC, (b) GSNN. Left: input (the gray pixels are unobserved). Middle: samples from the model. Right: ground truth.
We see that using α ≠ 1 ruins the multimodality of the restored distribution, so we highly recommend using α = 1 or at least α ≈ 1.
C.3 COMPARISON ON THE IMAGE INPAINTING PROBLEM
In figure 8 we can see that the inpaintings produced by GSNN are smooth, blurry and not diverse compared with those of VAEAC.
Table 9 shows that VAEAC learns the distribution over inpaintings better than GSNN in terms of test log-likelihood. Nevertheless, Monte-Carlo estimates with a small number of samples are sometimes better for GSNN, which indicates fewer local modes in the learned distribution and more blurriness in the samples.
D ADDITIONAL EXPERIMENTS
D.1 CONVERGENCE SPEED
In figure 9 one can see that VAEAC has a convergence speed similar to VAE in terms of iterations on the MNIST dataset. In our experiments we observed the same behaviour for other datasets. Each iteration of VAEAC is about 1.5 times slower than VAE due to the use of three networks instead of two.
D.2 MISSING FEATURES IMPUTATION
We evaluate the quality of imputations on different datasets (mostly from UCI (Lichman, 2013)). The evaluation is performed for VAEAC, GSNN (15) and NN (a neural network, which can be considered a special case of GSNN where pψ(z|x1−b, b) is a delta-function; it produces a single imputation). We compare these methods with MICE (Buuren & Groothuis-Oudshoorn, 2010), MissForest (Stekhoven & Bühlmann, 2011), and GAIN (Yoon et al., 2018).
We see that for some datasets MICE and MissForest outperform VAEAC, GSNN and NN. The reason is that for some datasets a random forest is a more natural structure than a neural network.
The results also show that VAEAC, GSNN and NN have similar imputation performance in terms of NRMSE, PFC, post-imputation R2-score and accuracy. Given the results from appendix C, we can take this as weak evidence that the distribution of imputations has only one local maximum for the datasets from (Lichman, 2013).
Left: input. The gray pixels are unobserved. Middle: samples from VAEAC. Right: ground truth.
D.3 FACE INPAINTINGS
In figure 10 we provide samples of VAEAC on the CelebA dataset for the masks from (Li et al., 2017).
D.4 GAIN FOR IMAGE INPAINTING
GAIN (Yoon et al., 2018) doesn't use unobserved data during training, which makes it easier to apply to the missing features imputation problem. Nevertheless, this becomes a disadvantage when fully-observed training data is available but the missingness rate at the testing stage is high.
We consider the horizontal line mask for MNIST which is described in appendix A.3. We use the released GAIN code5 with a different mask generator. The inpaintings from VAEAC, which uses the unobserved pixels during training, are available in figure 1. The inpaintings from GAIN, which ignores the unobserved pixels, are provided in figure 11. As can be seen in figure 11, GAIN fails to learn the conditional distribution for the given mask distribution p(b).
Nevertheless, we don’t claim that GAIN is not suitable for image inpainting. As it was shown in the supplementary of (Yoon et al., 2018) and in the corresponding code, GAIN is able to learn conditional distributions when p(b) is pixel-wise independent Bernoulli distribution with probability 0.5.
5 https://github.com/jsyoon0823/GAIN
Left: input. The gray pixels are unobserved. Middle: samples from the model. Right: ground truth.
Left: input. The gray pixels are unobserved. Middle: samples from the model. Right: ground truth.
D.5 UNIVERSAL MARGINALIZER: ILLUSTRATIONS
In figure 12 we provide samples of Universal Marginalizer (UM) and VAEAC for the same inputs.
Consider the case when the UM marginal distributions are parametrized with Gaussians. The simplest example of a distribution which UM cannot learn but VAEAC can is given in figure 13. | 1. What is the focus of the paper regarding learning conditional distributions?
2. What are the strengths and weaknesses of the proposed method compared to prior works in missing data imputation and image inpainting tasks?
3. How does the reviewer assess the generalization capability of the model in arbitrary conditional density learning?
4. What are some concerns or suggestions for future improvements regarding the experimental results and the sub-sampling issue in the training process? | Review
The paper presents a model for learning conditional distributions under arbitrary partitioning of the input into observed and masked parts. The idea is to extend the conditional VAE framework such that the posterior is a function of an arbitrary subset of observed variables. Accordingly, the reconstruction loss only penalizes the error in the reconstruction of the masked (unobserved) variables. The method is compared against 1) classical approaches to missing data imputation on UCI benchmarks; 2) recently proposed GANs for image inpainting; and 3) the universal marginalizer, which learns conditional densities using a feedforward/autoregressive architecture.
My concern about the experimental results on missing data imputation is that strong competitors such as Gondara et al.'17 and Yoon et al.'18, which report better results on UCI than classical approaches, are not included. Could you please comment? See also [1,2] for other autoencoding architectures for this task.
While the derivation of the method is principled, it assumes that either the mask is known during training or one can efficiently sample a distribution of masks to learn arbitrary conditional densities. Given the exponential number of valid masks in a general setting, one only subsamples a small portion during training. The question is whether the model can generalize well in this regime. The experimental results in this setting are not very encouraging, suggesting the proposed approach is effective only when the limited set of mask patterns is known in advance.
[1] Gondara, Lovedeep, and Ke Wang. "Multiple imputation using deep denoising autoencoders." arXiv preprint arXiv:1705.02737 (2017).
[2] Zhang, Hongbao, Pengtao Xie, and Eric Xing. "Missing Value Imputation Based on Deep Generative Models." arXiv preprint arXiv:1808.01684 (2018). |
ICLR | Title
An Information Theoretic Approach to Distributed Representation Learning
Abstract
The problem of distributed representation learning is one in which multiple sources of information X1, . . . , XK are processed separately so as to extract useful information about some statistically correlated ground truth Y. We investigate this problem from information-theoretic grounds. For both discrete memoryless (DM) and memoryless vector Gaussian models, we establish fundamental limits of learning in terms of optimal tradeoffs between relevance and complexity. We also develop a variational bound on the optimal tradeoff that generalizes the evidence lower bound (ELBO) to the distributed setting. Furthermore, we provide a variational inference type algorithm that allows computing this bound, in which the mappings are parametrized by neural networks and the bound is approximated by Markov sampling and optimized with stochastic gradient descent. Experimental results on synthetic and real datasets are provided to support the efficiency of the approaches and algorithms which we develop in this paper.
1 INTRODUCTION
Let a measurable variable X ∈ X and a target variable Y ∈ Y with unknown joint distribution PX,Y be given. In the classic problem of statistical learning, one wishes to infer an accurate predictor of the target variable Y ∈ Y based on observed realizations of X ∈ X. That is, for a given class F of admissible predictors φ : X → Ŷ and an additive loss function ` : Y × Ŷ → R that measures discrepancies between true values and their estimated fits, one aims at finding the mapping φ? ∈ F that minimizes the expected risk
CPX,Y (φ, `) = EPX,Y [`(Y, φ(X))]. (1)
Because the joint distribution PX,Y is unknown, in practice the risk in equation 1 (also called the population risk) cannot be computed directly; in the standard approach, one usually resorts to choosing the predictor with minimal risk on a training dataset consisting of n labeled samples {(xi, yi)}ni=1 that are drawn independently from the unknown joint distribution PX,Y. Also, it is important to restrict the set F of admissible predictors to a low-complexity class to prevent overfitting. This leads to the abstract inference problem shown in Figure 1.
In this paper, we study a generalization of this problem in which the prediction is to be performed in a distributed manner. The model is shown in Figure 2. Here, the prediction of the target variable Y ∈ Y is to be performed on the basis of samples of statistically correlated random variables (X1, . . . , XK) that are observed each at a distinct predictor. We investigate this problem in the case in which the loss function `(·) is the logarithmic-loss fidelity measure, given by
`log(y, ŷ) = log(1/ŷ(y))    (2)
where ŷ(·) designates a probability distribution on Y and ŷ(y) is the value of this distribution evaluated at the outcome y ∈ Y. The choice of a “good” loss function is often controversial in statistical learning theory, and although a complete and rigorous justification of the usage of the logarithmic loss as a fidelity measure in learning theory is still awaited, partial explanations appeared in Jiao et al. (2015) and, especially, in Painsky and Wornell (2018), where it is shown that, for binary classification problems, by minimizing the logarithmic loss one actually minimizes an upper bound on any choice of loss function that is smooth, proper (i.e., unbiased and Fisher consistent) and convex. Also, we constrain the complexity of the predictors by using mutual information as a regularizer term. This is in line with recent works Xu and Raginsky (2017); Russo and Zou (2015) that show that the generalization error can be upper-bounded using the mutual information between the input dataset and the output of the predictor – see also Bousquet and Elisseeff (2002); Shalev-Shwartz et al. (2010), where the stability of an algorithm is controlled by constraining the mutual information between its input and output.
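As a tiny numeric illustration of (2) (a hypothetical three-class example, not from the paper):

```python
import math

y_hat = {"cat": 0.7, "dog": 0.2, "bird": 0.1}  # predicted distribution on Y
y = "dog"                                       # true outcome
loss = math.log(1.0 / y_hat[y])                 # logarithmic loss = -log 0.2
print(round(loss, 3))                           # 1.609
```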
1.1 AN EXAMPLE: MULTI-VIEW LEARNING
In many data analytics problems, data is collected from various sources of information or feature extractors, and is intrinsically heterogeneous. For example, an image can be identified by its color or texture features, and a document may contain text and images. Conventional machine learning approaches concatenate all available data into one big row vector (or matrix) on which a suitable algorithm is then applied. Treating different observations as a single source might cause overfitting and is not physically meaningful because each group of data may have different statistical properties. Alternatively, one may partition the data into groups according to sample homogeneity, and each group of data can be regarded as a separate view. This paradigm, termed multi-view learning Xu et al. (2013), has received growing interest; and various algorithms exist, sometimes under references such as co-training Blum and Mitchell (1998); Dhillon et al. (2011); Kumar and Daumé (2011); Gönen and Alpaydın (2011), multiple kernel learning Gönen and Alpaydın (2011) and subspace learning Jia et al. (2010). By using distinct encoder mappings to represent distinct groups of data, and jointly optimizing over all mappings to remove redundancy, multi-view learning offers a degree of flexibility that is not only desirable in practice but is likely to result in better learning capability. Actually, as shown in Vapnik (2013), local learning algorithms produce fewer errors than global ones. Viewing the problem as one of function approximation, the intuition is that it is usually not easy to find a single function that has good predictive properties over the entire data space.
1.2 INFORMAL SUMMARY OF RESULTS
In this paper, we first characterize the optimal tradeoff between relevance and complexity for the distributed learning model of Figure 2 for both discrete memoryless (DM) and memoryless vector Gaussian models. While the result for the discrete data model (Theorem 1) is not difficult to establish using connections with Courtade and Weissman (2014, Appendix B), which we make explicit here, the result for the multivariate Gaussian data model (Theorem 2), which provides a sharp analytic characterization of optimal tradeoffs, is new and non-trivial (the proof of the converse part is not straightforward and was missing before this work in both the theoretic learning and information theoretic communities, including in the scalar case). Second, we develop a variational bound on the optimal tradeoff that can be seen as a generalization of the ELBO and the β-VAE criteria Higgins et al. (2016) to the distributed setting. Furthermore, for both DM and Gaussian models, we also provide a variational inference type algorithm which is parametrized by neural networks and allows one to compute the developed variational bound when the data distribution is not known. Specifically, the main contributions of this paper are:
• In Section 3.2, we find an explicit analytic characterization of optimal tradeoffs between relevance and complexity for the memoryless vector Gaussian model. The result generalizes the Gaussian Information Bottleneck method of Globerson and Tishby (2004); Chechik et al. (Feb. 2005) to the distributed learning scenario.
• In Section 3.3, we study the problem of maximizing relevance under a constraint on the sum complexity for which we establish a variational bound which generalizes the ELBO and the β-VAE criteria to the distributed setting.
• Section 3.4 is algorithmic-oriented. We develop a variational inference type algorithm which enables computation of the bound. This algorithm is obtained by parametrizing the encoders, the decoder, and the prior distributions via DNNs and using Monte-Carlo sampling. Also, it makes use of Kingma et al.'s re-parametrization trick Kingma and Welling (2013) and can be seen as a generalization of the variational information bottleneck algorithm in Alemi et al. (2017) to the distributed setting.
• Section 4 contains some experimental results on real datasets which show the efficiency of the approaches and algorithms that we develop in this paper.
Most relevant to this paper is the single-encoder Information Bottleneck (IB) method of Tishby et al. (1999) which readily and elegantly captures the above mentioned viewpoint of seeking the right balance between data fit and generalization by using the mutual information both as a cost function and as a regularizer term. Thus, the results of this paper can be seen as a generalization of those of Tishby et al. (1999) for the DM model and Globerson and Tishby (2004); Chechik et al. (Feb. 2005) for the Gaussian model to the distributed learning setting.
Remark: Due to space constraints, the proofs of the results of this paper are deferred to the appendices section, which also contains additional experimental results.
1.3 NOTATION
Throughout, upper case letters denote random variables, e.g., X; lower case letters denote realizations of random variables, e.g., x; and calligraphic letters denote sets, e.g., X . The cardinality of a set is denoted by |X |. For a random variable X with probability mass function (pmf) PX, we use PX(x) = p(x), x ∈ X for short. Boldface upper case letters denote vectors or matrices, e.g., X, where context should make the distinction clear. For random variables (X1, X2, . . .) and a set of integers K ⊆ N, XK denotes the set of random variables with indices in the set K, i.e., XK = {Xk : k ∈ K}. If K = ∅, XK = ∅. For k ∈ K we let XK/k = (X1, . . . , Xk−1, Xk+1, . . . , XK), and assume that X0 = XK+1 = ∅. Also, for zero-mean random vectors X and Y, the quantities Σx, Σx,y and Σx|y denote respectively the covariance matrix of the vector X, the covariance matrix of the vector (X, Y) and the conditional covariance matrix of X, conditionally on Y. Finally, for two probability measures PX and QX on the random variable X ∈ X , the relative entropy or Kullback-Leibler divergence is denoted as DKL(PX‖QX).
2 FORMAL PROBLEM FORMULATION
Let K ≥ 2 and (X1, . . . , XK , Y ) be a tuple of random variables with a given joint probability mass function (pmf) PX1,...,XK ,Y (x1, . . . , xK , y) for (x1, . . . , xK) ∈ X1 × . . .×XK and y ∈ Y , where Xk designates the alphabet of Xk and Y that of Y . Throughout, we assume that the Markov chain
X_k ↔ Y ↔ X_{K/k}  (3)

holds for all k ∈ K. That is, the joint pmf factorizes as

P_{X_1,...,X_K,Y}(x_1, ..., x_K, y) = P_Y(y) ∏_{k=1}^K P_{X_k|Y}(x_k|y).  (4)
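As a concrete illustration of equation 3 and equation 4, the following sketch samples a toy dataset in which binary features are conditionally independent noisy copies of a binary Y; the alphabet sizes and crossover probabilities are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n = 3, 5
p_y = np.array([0.5, 0.5])     # P_Y
eps = [0.1, 0.2, 0.3]          # P(X_k != Y), one noise level per encoder

y = rng.choice(2, size=n, p=p_y)
# X_k is a flipped copy of Y with probability eps[k]; the X_k are
# conditionally independent given Y, matching the Markov chain (3).
x = np.stack([np.where(rng.random(n) < eps[k], 1 - y, y) for k in range(K)])
print(y)
print(x)   # row k is the sequence X_k^n observed by encoder k
```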
The variable Y is a target variable, and we seek to characterize how accurately it can be predicted from a measurable random vector (X_1, ..., X_K) when the components of this vector are processed separately, each by a distinct encoder. More specifically, let {(X_{1,i}, ..., X_{K,i}, Y_i)}_{i=1}^n be a collection of n independent copies of (X_1, ..., X_K, Y). Encoder k ∈ K only observes the sequence X_k^n and generates a description J_k = φ_k(X_k^n) according to some mapping

φ_k : X_k^n → M_k^{(n)},  (5)

where M_k^{(n)} is an arbitrary set of descriptions. The range of allowable description sets will be specified below. A decoder ψ(·) collects all descriptions J_K = (J_1, ..., J_K) and returns an estimate Ŷ^n of Y^n as

ψ : M_1^{(n)} × ... × M_K^{(n)} → Ŷ^n.  (6)
The quality of the estimate Ŷ^n is measured in terms of the relevance, defined here as the information that the descriptions φ_1(X_1^n), ..., φ_K(X_K^n) collectively preserve about Y^n, as measured by Shannon mutual information¹

Δ^{(n)}(P_{X_K,Y}) = (1/n) Σ_{y^n, x_1^n, ..., x_K^n} P(y^n) ∏_{k=1}^K P(x_k^n|y^n) log [ P(y^n, ψ(φ_1(x_1^n), ..., φ_K(x_K^n))) / ( P(y^n) P(ψ(φ_1(x_1^n), ..., φ_K(x_K^n))) ) ]
 := (1/n) I_{P_{X_K,Y}}(Y^n; Ŷ^n),  (7)
where Ŷ^n = ψ(φ_1(X_1^n), ..., φ_K(X_K^n)) and the subscript P_{X_K,Y} indicates that the mutual information is computed under the joint distribution P_{X_K,Y}.

¹Alternatively, the relevance could be defined in a more operational manner by the average logarithmic-loss distortion or error E_{P_{X_K,Y}}[ℓ_log(Y^n, Ŷ^n)] = H(Y^n|Ŷ^n).
There are various ways to control the complexity of the encoding functions {φ_k}_{k=1}^K. In this paper, we do so by restricting their ranges. This is known as the minimum description length complexity measure Hinton and van Camp (1993). Specifically, the mapping φ_k(·) at Encoder k ∈ K needs to satisfy

R_k ≥ (1/n) log |φ_k(X_k^n)|  for all X_k^n ∈ X_k^n.  (8)
Definition 1 A tuple (Δ, R_1, ..., R_K) is said to be achievable if there exists an integer n, a family of encoding mappings {φ_k}_{k=1}^K and a decoder mapping ψ such that

Δ ≤ (1/n) I_{P_{X_K,Y}}( Y^n; ψ(φ_1(X_1^n), ..., φ_K(X_K^n)) )  (9)
R_k ≥ (1/n) log |φ_k(X_k^n)|  for all k ∈ K.  (10)

The relevance-complexity region IR_DIB is given by the closure of all achievable tuples (Δ, R_1, ..., R_K).
In some cases, for given R_K = (R_1, ..., R_K) and for ease of exposition, we will be content with the relevance-complexity function Δ(R_K, P_{X_K,Y}) defined as

Δ(R_K, P_{X_K,Y}) = max_{{φ_k}_{k=1}^K, ψ} Δ^{(n)}(P_{X_K,Y}),  (11)

where the maximization is subject to equation 8.
3 MAIN RESULTS
3.1 DISCRETE MEMORYLESS DATA MODEL
The following theorem (the proof of which can be found in the appendices section) provides a computable characterization of the relevance-complexity region IR_DIB. The result can be seen as a generalization of the single-encoder IB of Tishby et al. (1999) to the distributed learning model with K encoders.
Theorem 1 The relevance-complexity region IR_DIB of the distributed learning problem with P_{X_K,Y} for which the Markov chain equation 3 holds is given by the union of all tuples (Δ, R_1, ..., R_K) ∈ R_+^{K+1} that satisfy, for all S ⊆ K,

Δ ≤ Σ_{k∈S} [R_k − I(X_k; U_k | Y, T)] + I(Y; U_{S^c} | T),  (12)

for some set of pmfs P := {P_{U_1|X_1,T}, ..., P_{U_K|X_K,T}, P_T} with joint distribution of the form

P_T(t) P_Y(y) ∏_{k=1}^K P_{X_k|Y}(x_k|y) ∏_{k=1}^K P_{U_k|X_k,T}(u_k|x_k, t).  (13)
Remark 1 In Theorem 1, the random variable T stands for a convexification of the region, i.e., any convex combination of achievable relevance-complexity tuples is itself achievable. For given T = t, the result of Theorem 1 comprises the optimization over K conditional distributions {P_{U_k|X_k,t}}. For k ∈ K, the conditional distribution P_{U_k|X_k,t} represents a stochastic encoding of the feature X_k into a latent variable U_k. Intuitively, the latent variable U_k should capture all relevant information about Y that is contained in X_k and non-redundant with that carried by {U_i}_{i≠k}. The requirement of non-redundancy is mandated by the need to operate at the minimum possible complexity at which a desired relevance level is achievable (recall that minimum complexity, as expressed by the algorithm's input-output mutual information, translates directly into better generalization capability). Collectively, however, the set of all latent variables (U_1, ..., U_K) should be expressive enough to reproduce the target variable Y to within the desired relevance level.
Remark 2 Like for the single-encoder IB problem of Tishby et al. (1999) and an increasing number of works that followed, including Courtade and Weissman (2014, Section III-F), our approach here is asymptotic. In addition to leading to an exact characterization, the result also readily provides a lower bound on the performance in the non-asymptotic (e.g., one-shot) setting. For the latter setting, known approaches (e.g., the functional representation lemma of Li and El Gamal (2018)) would lead only to non-matching inner and outer bounds on the region of optimal tradeoff pairs, as is the case even for the single-encoder setting Li et al. (2018).
3.2 MEMORYLESS VECTOR GAUSSIAN DATA MODEL
We now turn to a continuous-alphabet setting. Here, (X_1, ..., X_K, Y) is a zero-mean Gaussian random vector such that

X_k = H_k Y + N_k  for all k ∈ K,  (14)

where H_k ∈ C^{n_k×n_y} models the linear model connecting the target variable Y ∈ C^{n_y} to the observation at encoder k, and N_k ∈ C^{n_k}, k = 1, ..., K, is the noise vector at encoder k, assumed to be Gaussian with zero mean and covariance matrix Σ_k, and independent from all other noises and the target variable Y. We denote by Σ_y the covariance matrix of the target vector Y ∈ C^{n_y}.
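For intuition, a sampler for the observation model of equation 14 might look as follows; it is real-valued rather than complex, and the dimensions are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
ny, dims = 2, [3, 4]                         # dim of Y and of each X_k
H = [rng.standard_normal((nk, ny)) for nk in dims]
Y = rng.standard_normal(ny)                  # Y ~ N(0, I)
X = [Hk @ Y + rng.standard_normal(Hk.shape[0]) for Hk in H]   # N_k ~ N(0, I)
print([x.shape for x in X])                  # one observation per encoder
```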
For this model, we find an explicit analytic characterization of optimal tradeoffs between relevance and complexity. The proof relies on deriving an outer bound on the region described by equation 12, and showing that it is achievable with Gaussian distribution, with no time-sharing. In doing so, we use techniques that rely on the de Bruijn identity and the properties of Fisher information and minimum mean square error (MMSE).
Theorem 2 The relevance-complexity region IR^G_DIB for the vector Gaussian model is given by the union of all tuples (Δ, R_1, ..., R_K) that satisfy, for all S ⊆ K,

Δ ≤ Σ_{k∈S} [ R_k + log |I − Σ_k^{1/2} Ω_k Σ_k^{1/2}| ] + log | Σ_{k∈S^c} Σ_y^{1/2} H_k^† Ω_k H_k Σ_y^{1/2} + I |,

for some 0 ⪯ Ω_k ⪯ Σ_k^{−1}.
Proof: The proof of the direct part follows by evaluating the region of Theorem 1, which can be extended to the case of continuous alphabets using standard discretization (quantization) arguments, with the choices T = ∅ and p(u_k|x_k, t) = CN(x_k, Σ_k^{1/2}(Ω_k − I)Σ_k^{1/2}). The main contribution in the proof is the converse part. This proof is technical and rather lengthy and, for this reason, is deferred to the appendices section.
In the special case in which K = 1, the result of Theorem 2 recovers that by Globerson and Tishby (2004) (see also Chechik et al. (Feb. 2005)) which establishes the optimal relevance-complexity tradeoff of the single-encoder Gaussian IB problem.
3.3 A VARIATIONAL BOUND
In this section, we consider the problem of learning encoder and decoder mappings that maximize the relevance level for a given (fixed) complexity level, i.e., those that perform in the vicinity of the boundary of the region IR_DIB. First, we derive a parametrization of the relevance-complexity region; then, we develop a variational bound which expresses the optimal encoder and decoder mappings as the solution to an optimization problem (an algorithm for solving this problem in the case of unknown distributions is given in the next section).
Let R_sum := Σ_{k=1}^K R_k. Also, let IR^sum_DIB denote the region of achievable (relevance, sum-complexity) pairs,

IR^sum_DIB := { (Δ, R_sum) ∈ R_+^2 : ∃(R_1, ..., R_K) ∈ R_+^K s.t. (Δ, R_1, ..., R_K) ∈ IR_DIB and Σ_{k=1}^K R_k = R_sum }.
Proposition 1 The relevance-complexity region under sum-complexity constraint IR^sum_DIB is given by the convex hull of all tuples (Δ, R_sum) ∈ R_+^2 satisfying Δ ≤ Δ(R_sum, P_{X_K,Y}), where

Δ(R_sum, P_{X_K,Y}) = max_P min{ I(Y; U_K), R_sum − Σ_{k=1}^K I(X_k; U_k|Y) },  (15)

and where the maximization is over the set of pmfs P := {P_{U_1|X_1}, ..., P_{U_K|X_K}} such that the joint pmf factorizes as p_Y(y) ∏_{k=1}^K p_{X_k|Y}(x_k|y) ∏_{k=1}^K p_{U_k|X_k}(u_k|x_k).
The next proposition provides a characterization of the pairs (Δ, R_sum) that lie on the boundary of IR^sum_DIB in terms of a nonnegative parameter s ≥ 0.

Proposition 2 For every pair (Δ, R_sum) ∈ R_+^2 that lies on the boundary of the relevance-complexity region IR^sum_DIB there exists s ≥ 0 such that (Δ, R_sum) = (Δ_s, R_s), where

Δ_s = (1/(1+s)) [ (1 + sK) H(Y) + s R_s + max_P L_s(P) ],  (16)

R_s = I(Y; U*_K) + Σ_{k=1}^K [ I(X_k; U*_k) − I(Y; U*_k) ],  (17)

and P* is the set of conditional pmfs P that maximize the cost function

L_s(P) := −H(Y|U_K) − s Σ_{k=1}^K [ H(Y|U_k) + I(X_k; U_k) ].  (18)
Using Proposition 2, it is clear that the encoders {P_{U_k|X_k}}_{k∈K} that achieve the relevance-complexity pair (Δ_s, R_s) can be computed by maximizing the regularized cost equation 18 for the corresponding value of s ≥ 0. The corresponding optimal decoder P_{Y|U_K} for these encoders is the conditional distribution induced by the maximizing P (cf. equation 22 below). Different relevance-complexity pairs (Δ_s, R_s) on the boundary of IR^sum_DIB, together with encoder and decoder mappings that achieve them, can be found by solving equation 18 for different values of s ≥ 0 and then evaluating equation 16 and equation 17 for the obtained solution.
The optimization of equation 18 generally requires computing marginal distributions involving the descriptions U_1, ..., U_K, which can be computationally costly. To overcome this limitation, in the following we derive a tight variational bound on L_s(P) which lower-bounds the DIB cost function with respect to some arbitrary distributions. Let us consider an arbitrary decoder Q_{Y|U_1,...,U_K}(y|u_1, ..., u_K) for y ∈ Y, u_1 ∈ U_1, ..., u_K ∈ U_K, K decoders Q_{Y|U_k}(y|u_k) for k ∈ K, y ∈ Y, u_k ∈ U_k, and latent variable priors Q_{U_k}(u_k), k ∈ K, u_k ∈ U_k. For short, we denote
Q := {Q_{Y|U_1,...,U_K}, Q_{Y|U_1}, ..., Q_{Y|U_K}, Q_{U_1}, ..., Q_{U_K}}.
Let us define the variational DIB cost function L^VB_s(P, Q) as

L^VB_s(P, Q) := E[log Q_{Y|U_K}(Y|U_K)] + s Σ_{k=1}^K ( E[log Q_{Y|U_k}(Y|U_k)] − D_KL(P_{U_k|X_k} ‖ Q_{U_k}) ),  (19)

where the first term is the average logarithmic loss and the sum over k acts as a regularizer.
The following lemma states that L^VB_s(P, Q) is a lower bound on L_s(P) for all distributions Q.

Lemma 1 For fixed pmfs P, we have

L_s(P) ≥ L^VB_s(P, Q), for all pmfs Q.  (20)

In addition, there exists a unique Q that achieves the maximum max_Q L^VB_s(P, Q) = L_s(P), given by

Q*_{U_k} = P_{U_k},  Q*_{Y|U_k} = P_{Y|U_k},  k = 1, ..., K,  (21)
Q*_{Y|U_1,...,U_K} = P_{Y|U_1,...,U_K},  (22)

where P_{U_k}, P_{Y|U_k} and P_{Y|U_1,...,U_K} are computed from the pmfs P.
Using the above, the optimization in equation 16 can be written in terms of the variational DIB cost function as

max_P L_s(P) = max_P max_Q L^VB_s(P, Q).  (23)
We close this section by noting that the cost function equation 19 can be seen as a generalization of the evidence lower bound (ELBO), as given in Rezende et al. (2014); Kingma and Welling (2013) for single-encoder learning, to the distributed setting. Also, in the specific case in which Y = (X_1, ..., X_K), the bound generalizes the ELBO used for VAEs to the case of an arbitrary number of encoders.
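As a numeric sanity check of Lemma 1 (and hence of equation 23), the following sketch evaluates L_s(P) of equation 18 and the variational cost L^VB_s(P, Q) of equation 19 on a tiny binary example with K = 2 and randomly drawn pmfs; for the arbitrary Q below the bound of equation 20 should hold, with equality attained only at the choice of equations 21-22.

```python
import numpy as np

rng = np.random.default_rng(7)
def rand_cond(r, c):
    m = rng.random((r, c))
    return m / m.sum(1, keepdims=True)

s = 0.3
py = np.array([0.4, 0.6])
px_y = [rand_cond(2, 2) for _ in range(2)]          # P_{X_k|Y}(x|y), rows y
pu_x = [rand_cond(2, 2) for _ in range(2)]          # P_{U_k|X_k}(u|x), rows x
# joint p(y, u1, u2) under the factorization of equation 13 (T omitted)
p = np.einsum('y,ya,yb,au,bv->yuv', py, px_y[0], px_y[1], pu_x[0], pu_x[1])

def H(pm):
    q = pm[pm > 0]
    return float(-(q * np.log(q)).sum())

# L_s(P) of equation 18
Ls = -(H(p) - H(p.sum(axis=0)))                     # -H(Y|U_1,U_2)
pyu = [p.sum(axis=2), p.sum(axis=1)]                # p(y, u_k)
px = [py @ m for m in px_y]                         # p(x_k)
for k in range(2):
    pxu = px[k][:, None] * pu_x[k]                  # p(x_k, u_k)
    I_xu = H(pxu.sum(1)) + H(pxu.sum(0)) - H(pxu)   # I(X_k; U_k)
    Ls -= s * ((H(pyu[k]) - H(pyu[k].sum(0))) + I_xu)

# L^VB_s(P, Q) of equation 19 for an arbitrary (random) Q
qy_u12 = rand_cond(4, 2).reshape(2, 2, 2)           # Q(y|u1,u2), last axis y
qy_u = [rand_cond(2, 2) for _ in range(2)]          # Q(y|u_k), rows u_k
qu = [rand_cond(1, 2)[0] for _ in range(2)]         # Q(u_k)
Lvb = np.einsum('yuv,uvy->', p, np.log(qy_u12))
for k in range(2):
    Lvb += s * (np.einsum('yu,uy->', pyu[k], np.log(qy_u[k]))
                - (px[k][:, None] * pu_x[k] * np.log(pu_x[k] / qu[k])).sum())
print(Ls, Lvb)    # L_s(P) >= L^VB_s(P, Q), cf. equation 20
```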
3.4 CASE OF UNKNOWN DISTRIBUTIONS: VARIATIONAL DISTRIBUTED IB ALGORITHM
In practice, only a set of training samples {(X_{1,i}, ..., X_{K,i}, Y_i)}_{i=1}^n is available. In this section, we provide a method to optimize equation 23 in this case by parametrizing the encoding and decoding distributions that are to be optimized using a family of distributions whose parameters are determined by deep neural networks (DNNs). This allows us to formulate equation 23 in terms of the DNN parameters and optimize it by using the reparametrization trick Kingma and Welling (2013), Monte Carlo sampling, and stochastic gradient descent (SGD) type algorithms.
Let F^e_{NN,k} denote the parametric family of encoding probability distributions P_{U_k|X_k} over U_k for each element of X_k. Each member of this collection, P_{U_k|X_k;γ^e_k}, is described by a parameter vector γ^e_k ∈ Γ^e_k ⊆ R^{l^e_k}, where Γ^e_k ⊆ R^{l^e_k} denotes the set of allowable parameter vectors. The parameter vector γ^e_k is the output of a DNN f_{θ_k} : X_k → Γ^e_k, with network parameters θ_k ∈ Θ_k ⊆ R^{d^e_k}, e.g., the weights of the network at all layers. The DNN f_{θ_k} takes X_k as input and outputs the parameter vector γ^e_k, determining one of the probability members P_{U_k|X_k;γ^e_k}. We have

F^e_{NN,k} = { P_{U_k|X_k;γ^e_k}(u_k|x_k), for u_k ∈ U_k, x_k ∈ X_k : γ^e_k = f_{θ_k}(x_k), θ_k ∈ Θ_k }.  (24)

For example, the family of multivariate Gaussian distributions is parametrized by the mean µ^θ_k and covariance matrix Σ^θ_k, i.e., γ_k := (µ^θ_k, Σ^θ_k). Therefore, given an observation X_k, γ_k := (µ^θ_k, Σ^θ_k) is determined by the output of the DNN f_{θ_k}, and F^e_{NN,k} is given by P_{U_k|X_k;γ_k}(u_k|x_k) = N(u_k; µ^θ_k, Σ^θ_k).
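As a bare-bones sketch of such an encoder family, the following numpy snippet implements a two-layer network f_θ that maps an observation x_k to the parameter vector γ^e_k = (µ, diag Σ); the layer sizes and random weights are placeholders, not values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d_x, d_h, d_u = 4, 16, 8
W1, b1 = rng.standard_normal((d_h, d_x)), np.zeros(d_h)
W2, b2 = rng.standard_normal((2 * d_u, d_h)), np.zeros(2 * d_u)

def f_theta(x: np.ndarray):
    """Map x_k to gamma^e_k = (mu, diagonal covariance) of P_{U_k|X_k}."""
    h = np.maximum(0.0, W1 @ x + b1)     # ReLU hidden layer
    gamma = W2 @ h + b2                  # parameter vector gamma^e_k
    mu, log_var = gamma[:d_u], gamma[d_u:]
    return mu, np.exp(log_var)           # exp ensures positive variances

mu, var = f_theta(rng.standard_normal(d_x))
```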
Similarly, for the decoders Q_{Y|U_k} over Y, define the family of distributions parametrized by a vector in Γ^d_k ⊆ R^{l^d_k} determined by the output of a DNN f_{φ_k} : U_k → Γ^d_k, with parameters φ_k ∈ Φ_k ⊆ R^{d^d_k}, as

F^d_{NN,k} = { Q_{Y|U_k;γ^d_k}(y|u_k), for y ∈ Y, u_k ∈ U_k : γ^d_k = f_{φ_k}(u_k), φ_k ∈ Φ_k },  (25)

and for the distribution Q_{Y|U_K} over Y for each element in U_1 × ··· × U_K, define the family of distributions parametrized by the output of the DNN f_{φ_K} : U_1 × ··· × U_K → Γ^d_K, with φ_K ∈ Φ_K ⊆ R^{d^d_K} and Γ^d_K ⊆ R^{l^d_K},

F^d_{NN,K} = { Q_{Y|U_1,...,U_K;γ^d_K}(y|u_1, ..., u_K), y ∈ Y, u_k ∈ U_k : γ^d_K = f_{φ_K}(u_1, ..., u_K), φ_K ∈ Φ_K }.  (26)
Finally, for the priors Q_{ϕ_k}(u_k) we define the family of distributions with parameter ϕ_k ∈ Ψ_k ⊆ R^{l^p_k},

F^p_{NN,k} = { Q_{U_k;ϕ_k}(u_k), for u_k ∈ U_k : ϕ_k ∈ Ψ_k }.

In the following, for brevity we use P_{θ_k}(u_k|x_k), Q_{φ_k}(y|u_k), Q_{φ_K}(y|u_K) and Q_{ϕ_k}(u_k) to denote the distributions parametrized by the DNNs f_{θ_k}, f_{φ_k}, f_{φ_K} and by ϕ_k, respectively.
By restricting the optimization of the variational DIB cost in equation 23 to encoders, decoders and priors within the families of distributions F^e_{NN,k}, F^d_{NN,k}, F^d_{NN,K}, F^p_{NN,k}, we get

max_P max_Q L^VB_s(P, Q) ≥ max_{θ,φ,ϕ} L^NN_s(θ, φ, ϕ),  (27)

where we use the notation θ := [θ_1, ..., θ_K], φ := [φ_1, ..., φ_K, φ_K] (the last entry parametrizing the joint decoder) and ϕ := [ϕ_1, ..., ϕ_K] to denote the DNN and prior parameters, and where the cost in equation 27 is given by

L^NN_s(θ, φ, ϕ) := E_{P_{Y,X}} E_{{P_{θ_k}(U_k|X_k)}} [ log Q_{φ_K}(Y|U_K) + s Σ_{k=1}^K ( log Q_{φ_k}(Y|U_k) − D_KL(P_{θ_k}(U_k|X_k) ‖ Q_{ϕ_k}(U_k)) ) ].  (28)
Next, we train the DNNs to maximize a Monte Carlo approximation of equation 27 over θ, φ, ϕ using SGD. We use the reparametrization trick Kingma and Welling (2013) to sample from P_{θ_k}(U_k|X_k). In particular, we consider F^e_{NN,k} to consist of a parametric family of distributions that can be sampled by first sampling a random variable Z_k with distribution P_{Z_k}(z_k), z_k ∈ Z_k, and then transforming the samples using some function g_{θ_k} : X_k × Z_k → U_k parametrized by θ_k, such that U_k = g_{θ_k}(x_k, Z_k) ∼ P_{θ_k}(U_k|x_k). The reparametrization trick reduces the original optimization to estimating θ_k of the deterministic function g_{θ_k} and allows one to compute estimates of the gradient using backpropagation Kingma and Welling (2013). The variational DIB cost in equation 27 can be approximated by sampling m independent samples {u_{k,i,j}}_{j=1}^m ∼ P_{θ_k}(u_k|x_{k,i}) for each training sample (x_{1,i}, ..., x_{K,i}, y_i), i = 1, ..., n. Sampling is performed by using u_{k,i,j} = g_{θ_k}(x_{k,i}, z_{k,j}) with {z_{k,j}}_{j=1}^m i.i.d. sampled from P_{Z_k}. We then have
L^emp_{s,i}(θ, φ, ϕ) := (1/m) Σ_{j=1}^m log Q_{φ_K}(y_i | u_{1,i,j}, ..., u_{K,i,j}) + (s/m) Σ_{j=1}^m Σ_{k=1}^K ( log Q_{φ_k}(y_i | u_{k,i,j}) − D_KL(P_{θ_k}(U_{k,i}|x_{k,i}) ‖ Q_{ϕ_k}(U_{k,i})) ).  (29)
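Putting the pieces together, the sketch below draws reparametrized samples u = µ + √var ⊙ z with z ∼ N(0, I) and assembles the per-sample cost of equation 29 for K = 2 and categorical Y; all encoder/decoder outputs and KL values are dummy placeholders standing in for network computations.

```python
import numpy as np

rng = np.random.default_rng(3)
s, m, K, n_classes = 0.1, 4, 2, 10
y_i = 3                                              # true label of sample i

# reparametrization trick: u_{k,i,j} = mu + sqrt(var) * z, z ~ N(0, I)
mu = [np.array([0.5, -1.0]), np.array([0.2, 0.3])]   # encoder means (placeholders)
var = [np.array([0.2, 0.8]), np.array([0.5, 0.5])]   # encoder variances
u = [mu[k] + np.sqrt(var[k]) * rng.standard_normal((m, 2)) for k in range(K)]

# placeholder decoder outputs Q_{phi_K}(y|u_1,u_2) and Q_{phi_k}(y|u_k), m draws each
q12 = np.full((m, n_classes), 1.0 / n_classes)
qk = [np.full((m, n_classes), 1.0 / n_classes) for _ in range(K)]
kl = [0.7, 1.2]    # closed-form Gaussian KL terms D_KL(P_{theta_k} || Q_{varphi_k})

cost = np.mean(np.log(q12[:, y_i]))                  # first term of equation 29
cost += s * sum(np.mean(np.log(qk[k][:, y_i])) - kl[k] for k in range(K))
print(cost)
```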
4 EXPERIMENTS: RESILIENCE TO NOISE, ROTATION AND OCCLUSION
In this experiment, we test the robustness of our method against noise, rotation and random occlusion on the MNIST dataset. Specifically, we combine two types of corruption: the first encoder observes a digit from MNIST that is occluded by a randomly rotated square (rotation angle uniformly distributed over [−45°, 45°]), and the second encoder observes a noisy version of the same digit corrupted by additive noise (noise level uniform between 0 and 3). The noisy pixels are clipped between 0 and 1, with more than 60% of the pixels occluded. These corruptions make the problem significantly more involved than standard MNIST (for which application of our algorithm leads to a relevance of about 99.9%).
We considered a deterministic CNN with dropout which achieves 99.8% on test data for the clean MNIST data. Then, we trained the same CNN architecture for each of the noisy inputs to the encoders, resulting in a relevance of 92.1% for the input to encoder 1 (randomly rotated occlusion) and 79.68% for the input to encoder 2 (noisy clipped image).
Figure 3: View 1: occluded. View 2: noisy. (Panels show view 1, view 2 and the original Y.)
Table 1: Used CNN architecture.

Encoder k:       conv. ker. [5,5,32]-ReLu, maxpool [2,2,2], conv. ker. [5,5,64]-ReLu, maxpool [2,2,2], dense [1024]-ReLu, dropout 0.4, dense [256]-ReLu
Latent space k:  dense [256]-ReLu
Decoder 12:      dense [256]-ReLu
Decoder k:       dense [256]-ReLu
Figure 4: Relevance vs. sum-complexity for n = 50,000 and s ∈ [10^{−10}, 1]. (Curves: C-IB with R_sum → ∞; D-VIB train, n = 50,000; D-VIB test, n = 50,000.)
We applied our D-VIB algorithm of Section 3.4 to this model with the CNN architecture of Table 1, in which Encoder k = 1, 2 is parametrized by an n_{u_k} = 256-dimensional multivariate Gaussian distribution N(µ^e_k, Σ^e_k) determined by the output of a DNN f_{θ_k} consisting of the concatenation of convolutional, dense and maxpool layers with ReLu activations and dropout. The output of the last layer is followed by a dense layer without activation that generates µ^e_k and Σ^e_k. The prior is chosen as Q_{ϕ_k}(u) = N(0, I). Each decoder takes the samples from P_{θ_k}(U_k|X_k) and processes its inputs with a dense-layer DNN (f_{φ_K} and f_{φ_k}), each with 256 neurons and ReLu activation, which outputs a vector ŷ_i of size |Y| = 10 normalized with a softmax, corresponding to a distribution over the one-hot encoding of the digit labels {0, ..., 9} from the K observations,
Q_{φ_k}(ŷ_k|u_k) = Softmax(f_{φ_k}(U_k)), k = 1, 2, and  (30)
Q_{φ_K}(ŷ|u_K) = Softmax(f_{φ_K}(U_1, U_2)),  (31)

where Softmax(p) for p ∈ R^d is a vector with i-th entry [Softmax(p)]_i = exp(p_i) / Σ_{j=1}^d exp(p_j). Figure 4 shows the relevance-complexity tradeoffs obtained using our D-VIB algorithm of Section 3.4, with n = 50,000 and 15 distinct s-values randomly chosen in the range [10^{−10}, 1]. For comparison, we also present the performance obtained using three methods among state-of-the-art multiview learning approaches: (i) applying a deterministic CNN to the two views concatenated (deterministic CNN), (ii) applying the single-encoder variational IB method of Alemi et al. (2017) to the two views concatenated (C-VIB), and (iii) learning one function for each view via distinct CNNs and optimizing all CNNs independently (independent CNNs). The achieved relevance is reported in Table 2. For other experimental results, see the appendices section.
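For completeness, a two-line numeric check of the Softmax normalization used in equations 30-31:

```python
import numpy as np

p = np.array([2.0, 1.0, 0.1])
soft = np.exp(p) / np.exp(p).sum()
print(soft, soft.sum())   # entries in (0, 1) that sum to 1
```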
We also mention that, at a high level, our algorithm D-VIB can be considered as performing some form of co-regularization (for instance, its Gaussian version is similar to the CCA of Hardoon et al. (2004)). Comparatively, the single-view algorithm C-VIB can be viewed as belonging to the family of co-training style algorithms (such as the co-EM of Nigam and Ghani (2000)) which, as mentioned in the recent survey Zhao et al. (2017), outperform single-view algorithms. The performance of D-VIB dominates that of C-VIB, which itself dominates co-EM.
5 PROOFS OF MAIN THEOREMS, PROPOSITIONS AND LEMMAS
5.1 AUXILIARY LEMMAS
Lemma 2 Dembo et al. (1991); Ekrem and Ulukus (2014) Let (X, Y) be a pair of random vectors with pmf p(x, y). We have

log |(πe) J^{−1}(X|Y)| ≤ h(X|Y) ≤ log |(πe) mmse(X|Y)|,

where the conditional Fisher information matrix is defined as

J(X|Y) := E[∇ log p(X|Y) ∇ log p(X|Y)^†],

and the minimum mean squared error (MMSE) matrix is

mmse(X|Y) := E[(X − E[X|Y])(X − E[X|Y])^†].
Lemma 3 Ekrem and Ulukus (2014) Let (V_1, V_2) be a random vector with finite second moments and N ∼ CN(0, Σ_N) independent of (V_1, V_2). Then

mmse(V_2 | V_1, V_2 + N) = Σ_N − Σ_N J(V_2 + N | V_1) Σ_N.
5.2 PROOF OF THEOREM 1
If K = 1, the distributed learning problem that we study boils down to the well-known Information Bottleneck (IB) problem of Tishby et al. (1999). The single-encoder IB problem is essentially a remote point-to-point source coding problem Dobrushin and Tsybakov (1962) in which distortion is measured under the logarithmic-loss fidelity criterion Harremoes and Tishby (2007). In accordance with this analogy, for K ≥ 2 consider the multiterminal source coding problem under logarithmic loss in which the sequence Y^n models a remote source that is observed by K spatially distributed agents; the agents observe noisy versions of the remote source and communicate independently with a decoder or Chief Executive Officer (CEO) over rate-constrained noise-free links. For instance, agent k, k ∈ K, observes X_k^n and uses R_k bits per sample to describe it to the decoder. The decoder wants to reconstruct the remote source Y^n to within a prescribed fidelity level, where the incurred distortion is measured using the logarithmic-loss criterion, i.e.,

ℓ_log(y^n, ŷ^n) = (1/n) log ( 1 / P̂_{Y^n|J}(y^n | φ_1(x_1^n), ..., φ_K(x_K^n)) ),  (32)

where J = (φ_1(X_1^n), ..., φ_K(X_K^n)).
Here, (Xn1 , . . . , XnK , Y n) is assumed to be distributed i.i.d. according to the n-product of the pmf PX1,...,XK ,Y , i.e., the Markov chain equation 3 holds.
Definition 2 A rate-distortion code (of blocklength n) for the CEO problem consists of K encoding functions

φ̃_k : X_k^n → {1, ..., M_k^{(n)}}, for k = 1, ..., K,  (33)

and a decoding function

ψ̃ : {1, ..., M_1^{(n)}} × ... × {1, ..., M_K^{(n)}} → Ŷ^n.  (34)

A distortion-rate tuple (D, R_1, ..., R_K) is achievable for the DM CEO source coding problem with side information if there exist a blocklength n, encoding functions {φ̃_k}_{k=1}^K and a decoding function ψ̃ such that

R_k ≥ (1/n) log M_k^{(n)}, for k = 1, ..., K,
D ≥ E[ ℓ_log( Y^n, ψ̃(φ̃_1(X_1^n), ..., φ̃_K(X_K^n)) ) ].

The distortion-rate region DR_CEO of the CEO model is defined as the closure of all non-negative tuples (D, R_1, ..., R_K) that are achievable.
Key to the proof of Theorem 1 is the following proposition, which states that IR_DIB and DR_CEO can be inferred from each other.

Proposition 3 (Δ, R_1, ..., R_K) ∈ IR_DIB if and only if (H(Y) − Δ, R_1, ..., R_K) ∈ DR_CEO.
Proof: Let, for k = 1, ..., K, J_k = φ_k(X_k^n) and J = (J_1, ..., J_K). Then,

E[ℓ_log(Y^n, Ŷ^n) | J = j] = Σ_{y^n∈Y^n} P(y^n|j) log( 1 / P̂(y^n|j) )  (35)
 = Σ_{y^n∈Y^n} P(y^n|j) log( P(y^n|j) / P̂(y^n|j) ) + H(Y^n | J = j)  (36)
 = D_KL( P(y^n|j) ‖ P̂(y^n|j) ) + H(Y^n | J = j)  (37)
 ≥ H(Y^n | J = j),  (38)

where equation 38 is due to the non-negativity of the Kullback-Leibler divergence, and the equality holds if and only if P̂(y^n|j) = P(y^n|j), where P(y^n|j) = Pr{Y^n = y^n | J = j}, for all j and y^n ∈ Y^n.

Let an achievable tuple (Δ, R_1, ..., R_K) ∈ IR_DIB be given. Then, there must exist functions {φ_k}_{k=1}^K such that equation 9 and equation 10 hold. Using equation 38, by letting the decoding function be ψ̃(J_K) = {P_{Y^n|J_K}(y^n|J_K)}, we have E[ℓ_log(Y^n, Ŷ^n) | J_K] = H(Y^n | J_K), which implies (H(Y) − Δ, R_1, ..., R_K) ∈ DR_CEO.
The result of Theorem 1 follows easily by combining (Courtade and Weissman, 2014, Theorem 10), which provides a single-letter characterization of the rate distortion region DR?CEO of the CEO problem, and Proposition 3.
5.3 PROOF OF THEOREM 2
The proof of the direct part of Theorem 2 follows by evaluating the region of Theorem 1 with the choice T = ∅ and p(u_k|x_k, t) = CN(x_k, Σ_k^{1/2}(Ω_k − I)Σ_k^{1/2}).
The proof of the converse part is as follows. Fix t ∈ T, S ⊆ K and a family of distributions {p(u_k|x_k, t)}_{k=1}^K such that the joint distribution factorizes as equation 13. Also, let 0 ⪯ Ω_{k,t} ⪯ Σ_k^{−1} and

mmse(X_k | Y, U_{k,t}, t) = Σ_k − Σ_k Ω_{k,t} Σ_k.  (39)

Such Ω_{k,t} always exists since

0 ⪯ mmse(X_k | Y, U_{k,t}, t) ⪯ Σ_k.  (40)
Then, we have

I(X_k; U_k | Y, t) ≥ log |Σ_k| − log |mmse(X_k | Y, U_{k,t}, t)|
 = − log |I − Σ_k^{1/2} Ω_{k,t} Σ_k^{1/2}|,  (41)

where the inequality is due to Lemma 2; and equation 41 is due to equation 39.
Also, we have

I(Y; U_{S^c,t} | t) ≤ log |Σ_y| − log |J^{−1}(Y | U_{S^c,t}, t)|  (42)
 = log | Σ_{k∈S^c} Σ_y^{1/2} H_k^† Ω_{k,t} H_k Σ_y^{1/2} + I |,  (43)

where equation 42 follows by using Lemma 2; and equation 43 holds by using the following equality

J(Y | U_{S^c,t}, t) = Σ_{k∈S^c} H_k^† Ω_{k,t} H_k + Σ_y^{−1},  (44)

the proof of which uses a connection between MMSE and Fisher information, as shown next.
For the proof of equation 44, first note that from the MMSE estimation of Gaussian random vectors El Gamal and Kim (2011), we have

Y = E[Y | X_{S^c}] + Z_{S^c} = Σ_{k∈S^c} G_k X_k + Z_{S^c},  (45)

where G_k = Σ_{y|x_{S^c}} H_k^† Σ_k^{−1} and Z_{S^c} ∼ CN(0, Σ_{y|x_{S^c}}), with

Σ_{y|x_{S^c}}^{−1} = Σ_y^{−1} + Σ_{k∈S^c} H_k^† Σ_k^{−1} H_k.  (46)
Note that Z_{S^c} is independent of X_{S^c} due to the orthogonality principle of the MMSE and its Gaussian distribution. Hence, it is also independent of U_{S^c,t}. We have

mmse( Σ_{k∈S^c} G_k X_k | Y, U_{S^c,t}, t ) = Σ_{k∈S^c} G_k mmse(X_k | Y, U_{S^c,t}, t) G_k^†  (47)
 = Σ_{y|x_{S^c}} Σ_{k∈S^c} H_k^† ( Σ_k^{−1} − Ω_{k,t} ) H_k Σ_{y|x_{S^c}},  (48)

where equation 47 follows since the cross terms are zero due to the Markov chain (U_{k,t}, X_k) ↔ Y ↔ (U_{K/k,t}, X_{K/k}); and equation 48 follows from equation 39 and the definition of G_k. Finally,

J(Y | U_{S^c,t}, t) = Σ_{y|x_{S^c}}^{−1} − Σ_{y|x_{S^c}}^{−1} mmse( Σ_{k∈S^c} G_k X_k | Y, U_{S^c,t}, t ) Σ_{y|x_{S^c}}^{−1}  (49)
 = Σ_{y|x_{S^c}}^{−1} − Σ_{k∈S^c} H_k^† ( Σ_k^{−1} − Ω_{k,t} ) H_k  (50)
 = Σ_y^{−1} + Σ_{k∈S^c} H_k^† Ω_{k,t} H_k,  (51)

where equation 49 is due to Lemma 3; equation 50 is due to equation 48; and equation 51 follows from equation 46.
Now, let Ω̄_k := Σ_{t∈T} p(t) Ω_{k,t}. The rest of the converse proof follows by averaging over the time-sharing random variable to get

I(X_k; U_k | Y, T) ≥ − Σ_{t∈T} p(t) log |I − Σ_k^{1/2} Ω_{k,t} Σ_k^{1/2}|
 ≥ − log |I − Σ_k^{1/2} Ω̄_k Σ_k^{1/2}|,  (52)

where equation 52 follows from the concavity of the log-det function and Jensen's inequality. Similarly to equation 52, from equation 43 and Jensen's inequality we have

I(Y; U_{S^c} | T) ≤ log | Σ_{k∈S^c} Σ_y^{1/2} H_k^† Ω̄_k H_k Σ_y^{1/2} + I |.  (53)

Finally, using equation 52 and equation 53 in equation 12, noting that Ω̄_k = Σ_{t∈T} p(t) Ω_{k,t} ⪯ Σ_k^{−1} since 0 ⪯ Ω_{k,t} ⪯ Σ_k^{−1}, and taking the union over Ω̄_k satisfying 0 ⪯ Ω̄_k ⪯ Σ_k^{−1}, completes the proof of the converse part and, hence, that of Theorem 2.
5.4 PROOF OF PROPOSITION 1
For simplicity of exposition, the proof is given for the case of K = 2 encoders; the proof for K > 2 follows similarly. By the definition of IR^sum_DIB, the relevance-complexity tuple (Δ, R_sum) ∈ R_+^2 is achievable for some random variables Y, X_1, X_2, U_1, U_2 with joint pmf satisfying equation 13 if it holds that

Δ ≤ I(Y; U_1, U_2)  (54)
Δ ≤ R_1 − I(X_1; U_1|Y) + I(Y; U_2)  (55)
Δ ≤ R_2 − I(X_2; U_2|Y) + I(Y; U_1)  (56)
Δ ≤ R_1 + R_2 − I(X_1; U_1|Y) − I(X_2; U_2|Y)  (57)
R_1 + R_2 ≤ R_sum.  (58)
Applying Fourier-Motzkin elimination to project out R_1 and R_2 reduces the system of inequalities equation 54–equation 58 to the following system:

Δ ≤ I(Y; U_1, U_2)  (59)
Δ ≤ R_sum − I(X_1; U_1|Y) − I(X_2; U_2|Y)  (60)
2Δ ≤ R_sum − I(X_1; U_1|Y) − I(X_2; U_2|Y) + I(Y; U_1) + I(Y; U_2).  (61)

It follows from the Markov chain U_1 ↔ X_1 ↔ Y ↔ X_2 ↔ U_2 that I(Y; U_1, U_2) ≤ I(Y; U_1) + I(Y; U_2). Therefore, inequality equation 61 is redundant, as it is implied by equation 59 and equation 60. This completes the proof of Proposition 1.
5.5 PROOF OF PROPOSITION 2
Suppose that P* yields the maximum in equation 16. Then,

(1 + s)Δ_s = (1 + sK)H(Y) + sR_s + L_s(P*)  (62)
 = (1 + sK)H(Y) + sR_s + ( −H(Y|U*_K) − s Σ_{k=1}^K [H(Y|U*_k) + I(X_k; U*_k)] )  (63)
 = (1 + sK)H(Y) + sR_s + ( −H(Y|U*_K) − s(R_s − I(Y; U*_K) + K H(Y)) )  (64)
 = (1 + s) I(Y; U*_K)  (65)
 ≤ (1 + s) Δ(R_s, P_{X_K,Y}),  (66)

where equation 63 is due to the definition of L_s(P) in equation 18; equation 64 follows since Σ_{k=1}^K [I(X_k; U*_k) + H(Y|U*_k)] = R_s − I(Y; U*_K) + K H(Y) by the definition of R_s in equation 17; and equation 66 follows from the definition in equation 15.
Conversely, if P* is the solution to the maximization in the function Δ(R_sum, P_{X_K,Y}) in equation 15 such that Δ(R_sum, P_{X_K,Y}) = Δ_s, then Δ_s ≤ I(Y; U*_K) and Δ_s ≤ R_sum − Σ_{k=1}^K I(X_k; U*_k|Y), and we have, for any s ≥ 0, that

Δ(R_sum, P_{X_K,Y}) = Δ_s
 ≤ Δ_s − (Δ_s − I(Y; U*_K)) − s( Δ_s − R_sum + Σ_{k=1}^K I(X_k; U*_k|Y) )
 = I(Y; U*_K) − sΔ_s + sR_sum − s Σ_{k=1}^K I(X_k; U*_k|Y)
 = H(Y) − sΔ_s + sR_sum − H(Y|U*_K) − s Σ_{k=1}^K [I(X_k; U*_k) + H(Y|U*_k)] + sK H(Y)  (67)
 ≤ H(Y) − sΔ_s + sR_sum + L*_s + sK H(Y)  (68)
 = H(Y) − sΔ_s + sR_sum + sK H(Y) − ((1 + sK)H(Y) + sR_s − (1 + s)Δ_s)  (69)
 = Δ_s + s(R_sum − R_s),  (70)

where in equation 67 we use Σ_{k=1}^K I(X_k; U_k|Y) = −K H(Y) + Σ_{k=1}^K [I(X_k; U_k) + H(Y|U_k)] due to the Markov chain U_k ↔ X_k ↔ Y ↔ (X_{K\k}, U_{K\k}); equation 68 follows since L*_s is the maximum over all possible distributions P (not necessarily the P* maximizing Δ(R_sum, P_{X_K,Y})); and equation 69 is due to equation 16.

Finally, equation 70 is valid for any R_sum ≥ 0 and s ≥ 0. Given s, and hence (Δ_s, R_s), choosing R_sum = R_s yields Δ(R_s, P_{X_K,Y}) ≤ Δ_s. Together with equation 66, this completes the proof of Proposition 2.
5.6 PROOF OF LEMMA 1
The proof follows by deriving the following bounds. For any conditional pmf Q_{Y|Z}(y|z), y ∈ Y, z ∈ Z (e.g., Z = U_K or Z = U_k), proceeding similarly to equation 38 and averaging over Z, we have

H(Y|Z) = E[− log Q_{Y|Z}(Y|Z)] − D_KL(P_{Y|Z} ‖ Q_{Y|Z}).  (71)

Similarly, we have

I(X_k; U_k) = H(U_k) − H(U_k|X_k)  (72)
 = E[− log Q_{U_k}(U_k)] − D_KL(P_{U_k} ‖ Q_{U_k}) − H(U_k|X_k)  (73)
 = E[D_KL(P_{U_k|X_k} ‖ Q_{U_k})] − D_KL(P_{U_k} ‖ Q_{U_k}).  (74)

Thus, we get

L_s(P) = L^VB_s(P, Q) + D_KL(P_{Y|U_K} ‖ Q_{Y|U_K}) + s Σ_{k=1}^K ( D_KL(P_{Y|U_k} ‖ Q_{Y|U_k}) + D_KL(P_{U_k} ‖ Q_{U_k}) )
 ≥ L^VB_s(P, Q),  (75)

where equation 75 holds by the non-negativity of relative entropy, and the equality is met if and only if Q* is as given by equation 21 and equation 22.
6 OTHER EXPERIMENTAL RESULTS (REGRESSION FOR UNKNOWN GAUSSIAN MODEL)
6.1 D-VIB ALGORITHM FOR VECTOR GAUSSIAN MODEL
For the vector Gaussian data model equation 14, the optimal distributions P and Q in equation 23 lie within the family of multivariate Gaussian distributions. Motivated by this observation, we consider the following parameterization for k ∈ K:

P_{θ_k}(u_k|x_k) = N(u_k; µ^e_k, Σ^e_k)  (76)
Q_{φ_K}(ŷ|u_K) = N(ŷ; µ^d_K, Σ^d_K)  (77)
Q_{φ_k}(ŷ|u_k) = N(ŷ; µ^d_k, Σ^d_k)  (78)
Q_{ϕ_k}(u_k) = N(0, I),  (79)

where µ^e_k, Σ^e_k are the outputs of a DNN f_{θ_k} with input X_k that encodes the observation into an n_{u_k}-dimensional Gaussian distribution, µ^d_K, Σ^d_K are the outputs of a DNN f_{φ_K} with inputs U_1, ..., U_K, sampled from P_{θ_k}(u_k|x_k), and µ^d_k, Σ^d_k are the outputs of a DNN f_{φ_k} with input U_k, k = 1, ..., K.
With the above choice of parametric encoders and decoders, and using a single sample (m = 1), the empirical DIB cost in equation 29 is given for the sample (x_{1,i}, ..., x_{K,i}, y_i) by

L^emp_{s,i}(θ, φ, ϕ) := −(1/2) ( (y_i − µ^d_{12,i})^T Σ^{d,−1}_{12,i} (y_i − µ^d_{12,i}) + log det(Σ^d_{12,i}) )
 − s Σ_{k=1}^K (1/2) ( (y_i − µ^d_{k,i})^T Σ^{d,−1}_{k,i} (y_i − µ^d_{k,i}) + log det(Σ^d_{k,i}) )
 − s Σ_{k=1}^K (1/2) ( (µ^e_{k,i})^T (µ^e_{k,i}) + log |Σ^{e,−1}_{k,i}| − n_{u_k} + tr{Σ^e_{k,i}} )
 − (n_y/2)(1 + sK) log(2π),
where (µ^d_{12,i}, Σ^d_{12,i}) denote the output of the DNN f_{φ_K} for the i-th sample (x_{1,i}, ..., x_{K,i}, y_i), and similarly for the other mean and covariance terms; and where we have used that each term in the empirical DIB cost equation 29 can be computed by noting that for d-dimensional Gaussian pmfs N(y; µ, Σ) we have

log N(y; µ, Σ) = −(1/2) ( (y − µ)^T Σ^{−1} (y − µ) + d log(2π) + log det(Σ) ),

and that the KL divergence between two multivariate Gaussian pmfs P_1 ∼ N(µ_1, Σ_1) and P_2 ∼ N(µ_2, Σ_2) in R^d is

D_KL(P_1 ‖ P_2) = (1/2) ( (µ_1 − µ_2)^T Σ_2^{−1} (µ_1 − µ_2) + log |Σ_2 Σ_1^{−1}| − d + tr{Σ_2^{−1} Σ_1} ).  (80)
The multivariate Gaussian parametrization of the encoders, decoders and prior distribution as given by equation 76–equation 79 can be used for other data models that are not necessarily Gaussian. For example, it is particularly suitable for regression problems in which Y lies in a continuous space. It is also very often used in conjunction with VAEs for generative problems Rezende et al. (2014); Kingma and Welling (2013).
6.2 REGRESSION FOR VECTOR GAUSSIAN DATA MODEL
Consider a distributed learning model with K = 2 encoders, each observing a noisy version of an n_y-dimensional Gaussian vector Y ∼ N(y; 0, I), as X_k = H_k Y + N_k, where H_k ∈ R^{n_k×n_y} and the noises are distributed as N_k ∼ N(0, I) for k = 1, 2.
For this model, the optimal accuracy-complexity region can be computed using Theorem 2. In what follows, we evaluate the performance of our D-VIB of the previous section for regression. The algorithm is trained using a dataset of n i.i.d. samples {(X_{1,i}, X_{2,i}, Y_i)}_{i=1}^n from the described vector Gaussian data model. We train the DNNs for various values of the parameter s. We use the multivariate Gaussian parameterization in equation 76–equation 79 for the DNN architecture shown in Table 3. Specifically, Encoder k, k = 1, 2, consists of three dense layers of 512 neurons each, followed by rectified linear unit (ReLu) activations. The output of encoder k is processed by a dense layer without nonlinear activation to generate µ^e_k and Σ^e_k of size 512 and 512 × 512, respectively. Each decoder consists of two dense layers of 512 neurons with ReLu activations. The outputs of decoders 1, 2 and 12 are each processed by a fully connected layer without activation to generate µ^d_k and Σ^d_k, and µ^d_{12} and Σ^d_{12}, of size 2 and 2 × 2.
Figure 5 shows the optimal relevance-complexity region of tuples (Δ, R_sum) obtained from Theorem 2 for a vector Gaussian model with K = 2 encoders, target variable dimension n_y = 1, and observation dimension n_1 = n_2 = 3. A set of 40,000 samples is split between training (30,000 samples) and test (10,000 samples). The figure depicts all accuracy-complexity pairs obtained by applying our D-VIB algorithm to this setting. The results are compared to the case of inference with known joint distribution (referred to as D-IB; see the next section) as well as to the case of centralized inference (C-IB). For the D-VIB algorithm, the DNN architecture for the coders is shown in Table 3. Figure 6 shows the evolution of the associated mean squared error (MSE) in the estimation of the label Y using our D-VIB algorithm. As can be seen from both figures, the performance of our D-VIB algorithm (which does not require knowledge of the joint label-feature distribution) is very close to that predicted by the theory, i.e., our Theorem 2.
Figure 7 shows similar curves for n_y = 2, n_1 = n_2 = 3 dimensions, for various sizes of the training dataset. As expected, larger training sets allow a more accurate prediction. Noteworthily, the fact that the performance during the training phase might be better than that of the centralized learning scenario is an indicator of overfitting. Related to this aspect, recall that although the D-VIB algorithm does not estimate the underlying distribution explicitly, intuitively it does so for the computation of the cost function. This is related to the fact that universal compressors also learn the actual distribution of the data that is being compressed. Recall that since the plug-in estimator of entropy is biased downward, estimates of the mutual information terms that are involved in the cost function are biased upward, which is an alternate explanation of the overfitting observed during the training phase.
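The downward bias of the plug-in entropy estimator invoked above is easy to reproduce numerically; the alphabet size and sample count below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
alphabet, n, trials = 8, 30, 2000
true_H = np.log(alphabet)             # entropy of the uniform source
est = []
for _ in range(trials):
    counts = np.bincount(rng.integers(alphabet, size=n), minlength=alphabet)
    q = counts / n
    est.append(-(q[q > 0] * np.log(q[q > 0])).sum())
print(true_H, np.mean(est))           # the plug-in estimate is lower on average
```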
Table 3: Used DNN architecture.

Encoder k:     dense [512]-ReLu, dense [512]-ReLu, dense [512]-ReLu
Lat. space k:  dense [256]-ReLu
Decoder 12:    dense [256]-ReLu
Decoder k:     dense [256]-ReLu
7 DISTRIBUTED BLAHUT-ARIMOTO TYPE ALGORITHMS
7.1 DISCRETE-ALPHABET SETTING
In this section, we derive an iterative method to optimize the variational DIB cost function in equation 23 when the data model is discrete and the joint distribution P_{X_K,Y} is either known or a good estimate of it can be obtained from the training samples. In these cases, the maximizing distributions P, Q of the variational DIB cost in equation 23 can be efficiently found by an alternating optimization procedure over P and Q, similar to the expectation-maximization (EM) algorithm Dempster et al. (1977) and the standard Blahut-Arimoto (BA) method Blahut (1972). An extension to the vector Gaussian data model, which involves random variables with continuous alphabets, is also provided. The main idea of the algorithm is that at iteration t, the optimal distributions P^(t) that maximize the variational D-IB bound L^VB_s(P, Q^(t)) for fixed Q^(t) can be obtained in closed form and, next, the maximizing pmfs Q^(t) for given P^(t) can also be found analytically. So, starting from an initialization P^(0) and Q^(0), the algorithm performs the following computations successively and in this order, until convergence:

P^(0) → Q^(0) → P^(1) → ... → P^(t) → Q^(t) → ...  (81)
We refer to this algorithm as the "Blahut-Arimoto Distributed Information Bottleneck Algorithm (BA-DIB)". Algorithm 1 describes the steps taken by BA-DIB to successively maximize L^VB_s(P, Q) by solving a concave optimization problem over P and over Q at each iteration. We have the following lemma, whose proof follows essentially by using the log-sum inequality Cover and Thomas (1991) and the convexity of the mapping x ↦ x log x.
Lemma 4 The function L^VB_s(P, Q) is concave in P and in Q.
For fixed P^(t), the optimal Q^(t) maximizing the variational D-IB bound in equation 19 follows from Lemma 1, as given by equation 21–equation 22. For fixed Q^(t), the optimal P^(t) can be found using the following lemma.
Lemma 5 For fixed Q, there exists a P that achieves the maximum max_P L^VB_s(P, Q), where P_{U_k|X_k} is given by

p*(u_k|x_k) = q(u_k) exp(−ψ_s(u_k, x_k)) / Σ_{u_k∈U_k} q(u_k) exp(−ψ_s(u_k, x_k)),  (82)

for u_k ∈ U_k and x_k ∈ X_k, k ∈ K, and where we define

ψ_s(u_k, x_k) := D_KL(P_{Y|x_k} ‖ Q_{Y|u_k}) + (1/s) E_{U_{K\k}|x_k}[ D_KL(P_{Y|U_{K\k},x_k} ‖ Q_{Y|U_{K\k},u_k}) ].  (83)
Proof: Due to its concavity, to maximize L^VB_s(P, Q) with respect to P for given Q, we add Lagrange multipliers λ_{x_k} ≥ 0 for each constraint Σ_{u_k∈U_k} p(u_k|x_k) = 1 with x_k ∈ X_k. For each s, λ_{x_k} ≥ 0 and p(u_k|x_k) can be explicitly found by solving the KKT conditions, e.g.,

∂/∂p(u_k|x_k) [ L^VB_s(P, Q) + Σ_{x_k∈X_k} λ_{x_k} ( Σ_{u_k∈U_k} p(u_k|x_k) − 1 ) ] = 0.

This completes the proof.
Algorithm 1 BA-DIB training algorithm for discrete data

1: inputs: discrete pmf P_{X_1,...,X_K,Y}, parameter s ≥ 0.
2: output: optimal P*_{U_k|X_k}, pair (Δ_s, R_s).
3: initialization: Set t = 0 and set P^(0) with p(u_k|x_k) = 1/|U_k| for u_k ∈ U_k, x_k ∈ X_k, k = 1, ..., K.
4: repeat
5:   Compute Q^(t+1) using equation 21 and equation 22.
6:   Compute P^(t+1) using equation 82.
7:   t ← t + 1
8: until convergence.
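The alternation of equation 81 is easiest to see in the single-encoder (K = 1) special case, where the update of equation 82 reduces to the classical Blahut-Arimoto-style IB iteration sketched below; the joint pmf and the tradeoff weight β are arbitrary illustrative choices, and the general K-encoder update would additionally involve the second divergence term of equation 83.

```python
import numpy as np

rng = np.random.default_rng(4)
nx, ny, nu, beta = 4, 3, 4, 5.0
p_xy = rng.random((nx, ny)); p_xy /= p_xy.sum()        # joint pmf p(x, y)
p_x = p_xy.sum(1); p_y_x = p_xy / p_x[:, None]         # p(x), p(y|x)

p_u_x = rng.random((nx, nu)); p_u_x /= p_u_x.sum(1, keepdims=True)
for _ in range(200):
    p_u = p_x @ p_u_x                                   # marginal of U
    p_y_u = (p_u_x * p_x[:, None]).T @ p_y_x / p_u[:, None]   # p(y|u)
    kl = (p_y_x[:, None, :]
          * np.log(p_y_x[:, None, :] / p_y_u[None, :, :])).sum(-1)
    p_u_x = p_u[None, :] * np.exp(-beta * kl)           # cf. equation 82
    p_u_x /= p_u_x.sum(1, keepdims=True)
```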
7.1.1 CONVERGENCE
Algorithm 1 essentially falls into the class of Successive Upper-Bound Minimization (SUM) algorithms Razaviyayn et al. (2013), in which L^VB_s(P, Q) acts as a globally tight lower bound on L_s(P). Algorithm 1 provides a sequence P^(t) for each iteration t, which converges to a stationary point of the optimization problem equation 23.
Proposition 4 Every limit point of the sequence P^(t) generated by Algorithm 1 is a stationary point of equation 23.
Proof: Let Q*(P) = arg max_Q L^VB_s(P, Q). Using Lemma 1, for every P′ ≠ P, it holds that

L^VB_s(P, Q*(P′)) ≤ L^VB_s(P, Q*(P)) = L_s(P).  (84)

Since L_s(P) and L^VB_s(P, Q*(P′)) satisfy the assumptions of (Razaviyayn et al., 2013, Proposition 1), L^VB_s(P, Q*(P′)) satisfies A1-A4 in Razaviyayn et al. (2013). Convergence to a stationary point of equation 23 follows from (Razaviyayn et al., 2013, Theorem 1).
The self-consistent equations equation 21, equation 22 and equation 83 satisfied by any stationary point of the D-IB problem extend those of the standard point-to-point IB problem Globerson and Tishby (2004) to the distributed IB problem with K ≥ 2 encoders. In particular, note the additional divergence term in equation 83.
7.2 GAUSSIAN SETTING
Recall Algorithm 1. For finite-alphabet sources the updating rules of Q^(t+1) and P^(t+1) in Algorithm 1 are relatively easy, but they become infeasible for continuous-alphabet sources. We leverage the optimality of Gaussian test channels, shown in Theorem 2, to restrict the optimization of P to Gaussian distributions, which are easily represented by a finite set of parameters, namely mean and covariance. We show that if the P^(t) are Gaussian distributions, then the P^(t+1) are also Gaussian distributions, which can be computed with an efficient update algorithm of the representing parameters. In particular, if at time t the k-th distribution P^(t)_{U_k|X_k} is given by

U^t_k = A^t_k X_k + Z^t_k,  (85)

where Z^t_k ∼ CN(0, Σ_{z^t_k}), we show that at time t + 1, for P^(t+1) updated as in equation 82, the encoder P^(t+1)_{U_k|X_k} corresponds to U^{t+1}_k = A^{t+1}_k X_k + Z^{t+1}_k, where Z^{t+1}_k ∼ CN(0, Σ_{z^{t+1}_k}) and Σ_{z^{t+1}_k}, A^{t+1}_k are updated as
Σ_{z^{t+1}_k} = ( (1 + 1/s) Σ^{−1}_{u^t_k|y} − (1/s) Σ^{−1}_{u^t_k|u^t_{K\k}} )^{−1},  (86)

A^{t+1}_k = Σ_{z^{t+1}_k} ( (1 + 1/s) Σ^{−1}_{u^t_k|y} A^t_k (I − Σ_{x_k|y} Σ^{−1}_{x_k}) − (1/s) Σ^{−1}_{u^t_k|u^t_{K\k}} A^t_k (I − Σ_{x_k|u^t_{K\k}} Σ^{−1}_{x_k}) ).  (87)
The detailed update procedure is given in Algorithm 2 (see the following section for the details of the derivations).
Algorithm 2 BA-DIB algorithm for the Gaussian vector D-IB

1: inputs: covariance Σ_{y,x_1,...,x_K}, parameter s ≥ 0.
2: output: optimal pairs (A*_k, Σ_{z*_k}), k = 1, ..., K.
3: initialization: Randomly set A^0_k and Σ_{z^0_k} ⪰ 0, k ∈ K.
4: repeat
5:   Compute Σ_{x_k|u^t_{K\k}} and update for k ∈ K:
       Σ_{u^t_k|y} = A^t_k Σ_{x_k|y} A^{t,†}_k + Σ_{z^t_k}  (88)
       Σ_{u^t_k|u^t_{K\k}} = A^t_k Σ_{x_k|u^t_{K\k}} A^{t,†}_k + Σ_{z^t_k}  (89)
6:   Compute Σ_{z^{t+1}_k} as in equation 86 for k ∈ K.
7:   Compute A^{t+1}_k as in equation 87 for k ∈ K.
8:   t ← t + 1.
9: until convergence.
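To make the covariance updates concrete, here is a minimal sketch of equations 88, 89 and 86 for a single encoder, with the conditional covariances Σ_{x_k|y} and Σ_{x_k|u_{K\k}} taken as given inputs (in Algorithm 2 they are computed from the joint covariance of the model); all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
nk, du, s = 3, 2, 1.0
Ak = 0.5 * rng.standard_normal((du, nk))
Sz = np.eye(du)
Sx_y = 0.5 * np.eye(nk)     # Sigma_{x_k|y}        (assumed given)
Sx_uc = 0.8 * np.eye(nk)    # Sigma_{x_k|u_{K\k}}  (assumed given)

Su_y = Ak @ Sx_y @ Ak.T + Sz       # equation 88
Su_uc = Ak @ Sx_uc @ Ak.T + Sz     # equation 89
Sz_new = np.linalg.inv((1 + 1 / s) * np.linalg.inv(Su_y)
                       - (1 / s) * np.linalg.inv(Su_uc))   # equation 86
print(Sz_new)
```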
7.2.1 DERIVATION OF ALGORITHM 2
We derive the update rules of Algorithm 2 and show that the Gaussian distribution is invariant to the update rules of Algorithm 1, in line with Theorem 2. First, we recall that if (X_1, X_2) are jointly Gaussian, then

P_{X_2|X_1=x_1} = CN(µ_{x_2|x_1}, Σ_{x_2|x_1}),  (90)

where µ_{x_2|x_1} := K_{x_2|x_1} x_1, with K_{x_2|x_1} := Σ_{x_2,x_1} Σ^{−1}_{x_1}.
Then, for Q^(t+1) computed as in equation 21 and equation 22 from P^(t), which is a set of Gaussian distributions, we have

Q^(t+1)_{Y|u_k} = CN(µ_{y|u^t_k}, Σ_{y|u^t_k}),  Q^(t+1)_{Y|u_K} = CN(µ_{y|u^t_K}, Σ_{y|u^t_K}).

Next, we look at the update of P^(t+1) as in equation 82 from the given Q^(t+1). First, we have that p(u^t_k) is the marginal of U^t_k, given by U^t_k ∼ CN(0, Σ_{u^t_k}), where Σ_{u^t_k} = A^t_k Σ_{x_k} A^{t,H}_k + Σ_{z^t_k}.
Then, to compute ψ_s(u^t_k, x_k), first we note that

E_{U_{K\k}|x_k}[D_KL(P_{Y|U_{K\k},x_k} ‖ Q_{Y|U_{K\k},u_k})] = D_KL(P_{Y,U_{K\k}|x_k} ‖ Q_{Y,U_{K\k}|u_k}) − D_KL(P_{U_{K\k}|x_k} ‖ Q_{U_{K\k}|u_k}),  (91)

and that for two generic multivariate Gaussian distributions P_1 ∼ CN(µ_1, Σ_1) and P_2 ∼ CN(µ_2, Σ_2) in C^N, the KL divergence is computed as in equation 80.
Applying equation 91 and equation 80 in equation 83, and noting that all involved distributions are Gaussian, it follows that ψ_s(u^t_k, x_k) is a quadratic form. Then, since p(u^t_k) is Gaussian, the product log(p(u^t_k) exp(−ψ_s(u^t_k, x_k))) is also a quadratic form, and identifying constant, first- and second-order terms, we can write

log p^(t+1)(u_k|x_k) = Z(x_k) + (u_k − µ_{u^{t+1}_k|x_k})^H Σ^{−1}_{z^{t+1}_k} (u_k − µ_{u^{t+1}_k|x_k}),  (92)

where Z(x_k) is a normalization term independent of u_k,

Σ^{−1}_{z^{t+1}_k} = Σ^{−1}_{u^t_k} + K^H_{y|u^t_k} Σ^{−1}_{y|u^t_k} K_{y|u^t_k} + (1/s) K^H_{y u^t_{K\k}|u^t_k} Σ^{−1}_{y u^t_{K\k}|u^t_k} K_{y u^t_{K\k}|u^t_k} − (1/s) K^H_{u^t_{K\k}|u^t_k} Σ^{−1}_{u^t_{K\k}|u^t_k} K_{u^t_{K\k}|u^t_k},  (93)

and

µ_{u^{t+1}_k|x_k} = Σ_{z^{t+1}_k} ( K^H_{y|u^t_k} Σ^{−1}_{y|u^t_k} µ_{y|x_k} + (1/s) K^H_{y u^t_{K\k}|u^t_k} Σ^{−1}_{y u^t_{K\k}|u^t_k} µ_{y u^t_{K\k}|x_k} − (1/s) K^H_{u^t_{K\k}|u^t_k} Σ^{−1}_{u^t_{K\k}|u^t_k} µ_{u^t_{K\k}|x_k} ).  (94)

This shows that p^(t+1)(u_k|x_k) is a multivariate Gaussian distribution and that U^{t+1}_k | {X_k = x_k} is also multivariate Gaussian, distributed as CN(µ_{u^{t+1}_k|x_k}, Σ_{z^{t+1}_k}).
Next, we simplify equation 93 and equation 94 to obtain the update rules equation 86 and equation 87. From the matrix inversion lemma, similarly to Chechik et al. (Feb. 2005), for (X_1, X_2) jointly Gaussian we have

Σ^{−1}_{x_2|x_1} = Σ^{−1}_{x_2} + K^H_{x_1|x_2} Σ^{−1}_{x_1|x_2} K_{x_1|x_2}.  (95)

Applying equation 95 in equation 93, we have

Σ^{−1}_{z^{t+1}_k} = Σ^{−1}_{u^t_k|y} + (1/s) Σ^{−1}_{u^t_k|y u^t_{K\k}} − (1/s) Σ^{−1}_{u^t_k|u^t_{K\k}}  (96)
 = (1 + 1/s) Σ^{−1}_{u^t_k|y} − (1/s) Σ^{−1}_{u^t_k|u^t_{K\k}},  (97)

where equation 97 is due to the Markov chain U_k ↔ Y ↔ U_{K\k}.
Then, also from the matrix inversion lemma, we have for jointly Gaussian (X_1, X_2),

Σ^{−1}_{x_2|x_1} Σ_{x_2,x_1} Σ^{−1}_{x_1} = Σ^{−1}_{x_2} Σ_{x_2,x_1} Σ^{−1}_{x_1|x_2}.  (98)

Applying equation 98 to equation 94, for the first term in equation 94 we have

K^H_{y|u^t_k} Σ^{−1}_{y|u^t_k} µ_{y|x_k} = Σ^{−1}_{u^t_k|y} Σ_{u^t_k,y} Σ^{−1}_y µ_{y|x_k}  (99)
 = Σ^{−1}_{u^t_k|y} A^t_k Σ_{x_k,y} Σ^{−1}_y Σ_{y,x_k} Σ^{−1}_{x_k} x_k = Σ^{−1}_{u^t_k|y} A^t_k (I − Σ_{x_k|y} Σ^{−1}_{x_k}) x_k,  (100)

where Σ_{u^t_k,y} = A^t_k Σ_{x_k,y}; and equation 100 is due to the definition of Σ_{x_k|y}.

Similarly, for the second term in equation 94, we have

K^H_{y u^t_{K\k}|u^t_k} Σ^{−1}_{y u^t_{K\k}|u^t_k} µ_{y u^t_{K\k}|x_k} = Σ^{−1}_{u^t_k|y u^t_{K\k}} A^t_k (I − Σ_{x_k|y u^t_{K\k}} Σ^{−1}_{x_k}) x_k  (101)
 = Σ^{−1}_{u^t_k|y} A^t_k (I − Σ_{x_k|y} Σ^{−1}_{x_k}) x_k,  (102)

where we use Σ_{u^t_k, y u^t_{K\k}} = A^t_k Σ_{x_k, y u^t_{K\k}}; and equation 102 is due to the Markov chain U_k ↔ Y ↔ U_{K\k}.

For the third term in equation 94,

K^H_{u^t_{K\k}|u^t_k} Σ^{−1}_{u^t_{K\k}|u^t_k} µ_{u^t_{K\k}|x_k} = Σ^{−1}_{u^t_k|u^t_{K\k}} A^t_k (I − Σ_{x_k|u^t_{K\k}} Σ^{−1}_{x_k}) x_k.  (103)

Equation 87 follows by noting that µ_{u^{t+1}_k|x_k} = A^{t+1}_k x_k and that, from equation 94, A^{t+1}_k can be identified as in equation 87.
Finally, we note that due to equation 85, Σ …

1. What is the main contribution of the paper regarding distributed representation?
2. What are the strengths and weaknesses of the paper in terms of its connection to the learning problem?
3. How does the author derive the fundamental trade-off between accuracy and complexity?
4. What is the purpose of considering the case where the joint distribution is unknown?
5. What are the concerns regarding the proposed method's ability to approximate the variational lower bound and its efficiency in training?

Review
This paper studies a distributed representation problem where multiple features X_1,...,X_K are processed (or encoded) separately to estimate (or decode) some quantity of interest Y.
The log loss is considered throughout, which amounts to measuring the mutual information between Y and \hat Y, defined as the "accuracy" of the estimation method. The average rate (measured in number of bits per sample) of the encoded feature is defined as the "complexity" of the representation method.
The author derived the fundamental trade-off between the accuracy and the complexity for any representation-estimation (or encoding-decoding) method.
The author also derived a variational representation of the optimal accuracy-complexity region, which also expresses the optimal encoder and decoder map as the solution of the optimization problem.
Finally, the author considered the case where the joint distribution of P_{X_1,...,X_K,Y} is unknown, and encoder and decoder are parameterized by neural networks, parameters of which are tuned using data.
I am inclined to reject the paper, for the following reasons.
1. The accuracy-complexity trade-off studied in the paper is more of a rate-distortion type of information-theoretic problem, where the joint distribution P_{X_1,...,X_K,Y} is assumed to be known. Its connection to the learning problem, where the joint distribution P_{X_1,...,X_K,Y} is unknown, is unclear. Even if the precise accuracy-complexity region is obtained, it says little about the sample complexity needed by a learning algorithm to achieve this region.
2. Deriving the optimal encoder-decoder mapping from the variational representation of the accuracy-complexity region also requires the joint distribution, which violates the basic assumption of the learning problem.
3. The author did consider the case where the joint distribution is unknown, and the encoder-decoder pair is learned from data. However, this learning problem is somewhat artificial: each encoder only encodes one of the features, but in order to encode optimally, it has to know the entire joint distribution, hence need to access all the features during training. This discrepancy of seeing different components of the data set during training and inference is not well-motivated.
The author mentioned "multi-view learning" at the beginning of the paper. It would be good if the author can elaborate more on this problem in Sec 4 of Experiment Results, and discuss with more detail on how the proposed method solves this problem and how it is different from the existing results, both in terms of the algorithm and the performance.
=================================================
Feedback to authors' reply
I got a better understanding of how the proposed learning algorithm works after reading the authors' reply.
I guess the idea for the case where the joint distribution is unknown is that, for encoding, each node uses its own training data (without accessing other nodes' data) to optimize its encoder separately, while for decoding, the master node trains the decoder using data available to all nodes to estimate the joint distribution.
In this way, the encoders and the decoder jointly optimize a variational lower bound of the optimal rate region.
If this is the case, I think the proposed method may have some value in practice.
But now the question is how good the variational lower bound is compared to the optimal region, how well this variational lower bound can be approximated by neural networks, and how efficiently the training can be done. Without theoretical analysis of these questions, one may only use experiments to assess the performance. From Table 2, it looks like the improvement of the proposed method over the existing methods is quite marginal.
In summary, I would like to thank the authors' valuable reply. I encourage the authors to study the gap between the variational lower bound and the optimal region, and maybe do more experiments to find a good use case of the proposed method. |
ICLR | Title
An Information Theoretic Approach to Distributed Representation Learning
Abstract
The problem of distributed representation learning is one in which multiple sources of information X1, . . . , XK are processed separately so as to extract useful information about some statistically correlated ground truth Y . We investigate this problem from informationtheoretic grounds. For both discrete memoryless (DM) and memoryless vector Gaussian models, we establish fundamental limits of learning in terms of optimal tradeoffs between relevance and complexity. We also develop a variational bound on the optimal tradeoff that generalizes the evidence lower bound (ELBO) to the distributed setting. Furthermore, we provide a variational inference type algorithm that allows to compute this bound and in which the mappings are parametrized by neural networks and the bound approximated by Markov sampling and optimized with stochastic gradient descent. Experimental results on synthetic and real datasets are provided to support the efficiency of the approaches and algorithms which we develop in this paper.
1 INTRODUCTION
Let a measurable variable X ∈ X and a target variable Y ∈ Y with unknown joint distribution PX,Y be given. In the classic problem of statistical learning, one wishes to infer an accurate predictor of the target variable Y ∈ Y based on observed realizations of X ∈ X . That is, for a given class F of admissible predictors φ : X → Ŷ and an additive loss function ` : Y → Ŷ that measures discrepancies between true values and their estimated fits, one aims at finding the mapping φ? ∈ F that minimizes the expected risk
CPX,Y (φ, `) = EPX,Y [`(Y, φ(X))]. (1)
Because the joint distribution PX,Y is unknown, in practice the risk equation 1 (also called population risk) cannot be computed directly; and, in the standard approach, one usually resorts to choosing the predictor with minimal risk on a training dataset consisting of n labeled samples {(xi, yi)}ni=1 that are drawn independently from the unknown joint distribution PX,Y . Also, it is important to restrict the set F of admissible predictors to a low-complexity class to prevent overfitting. This leads to the abstract inference problem shown in Figure 1.
In this paper, we study a generalization of this problem in which the prediction is to be performed in a distributed manner. The model is shown in Figure 2. Here, the prediction of the target variable Y ∈ Y is to be performed on the basis of samples of statistically correlated random variables (X1, . . . , XK) that are observed each at a distinct predictor. We investigate this problem in the case in which the loss function `(·) is the logarithmic-loss fidelity measure, given by
$$\ell_{\log}(y, \hat{y}) = \log\left(\frac{1}{\hat{y}(y)}\right) \qquad (2)$$
where ŷ(·) designates a probability distribution on Y and ŷ(y) is the value of this distribution evaluated for the outcome y ∈ Y. The choice of a “good” loss function is often controversial in statistical learning theory, and although a complete and rigorous justification of the usage of logarithmic loss as a fidelity measure in learning theory is still awaited, partial explanations appeared in Jiao et al. (2015) and, especially, in Painsky and Wornell (2018), where it is shown that, for binary classification problems, by minimizing the logarithmic loss one actually minimizes an upper bound to any choice of loss function that is smooth, proper (i.e., unbiased and Fisher consistent) and convex. Also, we constrain the complexity of the predictors by using mutual information as a regularizer term. This is in line with recent works Xu and Raginsky (2017); Russo and Zou (2015) that show that the generalization error can be upper-bounded using the mutual information between the input dataset and the output of the predictor – see also Bousquet and Elisseeff (2002); Shalev-Shwartz et al. (2010) where the stability of an algorithm is controlled by constraining the mutual information between its input and output.
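As a concrete illustration of equation 2 (our addition, not part of the original text), the following minimal Python sketch evaluates the logarithmic loss of a soft predictor over a finite label set; the function name and example values are hypothetical:

```python
import numpy as np

def log_loss(y, y_hat):
    """Logarithmic loss of equation 2: -log of the probability mass
    that the soft predictor y_hat assigns to the true label y."""
    return -np.log(y_hat[y])

y_hat = np.array([0.2, 0.7, 0.1])   # a soft prediction over 3 labels
print(log_loss(1, y_hat))           # ~0.357: mass 0.7 on the true label
print(log_loss(0, y_hat))           # ~1.609: mass 0.2, larger loss
```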
1.1 AN EXAMPLE: MULTI-VIEW LEARNING
In many data analytics problems, data is collected from various sources of information or feature extractors, and is intrinsically heterogeneous. For example, an image can be identified by its color or texture features, and a document may contain text and images. Conventional machine learning approaches concatenate all available data into one big row vector (or matrix) on which a suitable algorithm is then applied. Treating different observations as a single source might cause overfitting and is not physically meaningful, because each group of data may have different statistical properties. Alternatively, one may partition the data into groups according to sample homogeneity, and each group of data is regarded as a separate view. This paradigm, termed multi-view learning Xu et al. (2013), has received growing interest, and various algorithms exist, sometimes under references such as co-training Blum and Mitchell (1998); Dhillon et al. (2011); Kumar and Daumé (2011); Gönen and Alpaydın (2011), multiple kernel learning Gönen and Alpaydın (2011) and subspace learning Jia et al. (2010). By using distinct encoder mappings to represent distinct groups of data, and jointly optimizing over all mappings to remove redundancy, multi-view learning offers a degree of flexibility that is not only desirable in practice but is likely to result in better learning capability. Actually, as shown in Vapnik (2013), local learning algorithms produce fewer errors than global ones. Viewing the problem as one of function approximation, the intuition is that it is usually not easy to find a unique function that holds good predictability properties in the entire data space.
1.2 INFORMAL SUMMARY OF RESULTS
In this paper, first we characterize the optimal tradeoff between relevance and complexity for the distributed learning model of Figure 2 for both discrete memoryless (DM) and memoryless vector Gaussian models. While the result for the discrete data model (Theorem 1) is not difficult to establish using connections with Courtade and Weissman (2014, Appendix B), which we make explicit here, the result for the multivariate Gaussian data model (Theorem 2), which provides a sharp analytic characterization of optimal tradeoffs, is new and non-trivial (the proof of the converse part is not straightforward and was missing before this work in both the learning theory and information theory communities, including in the scalar case). Second, we develop a variational bound on the optimal tradeoff that can be seen as a generalization of the ELBO and the β-VAE criteria Higgins et al. (2016) to the distributed setting. Furthermore, for both DM and Gaussian models, we also provide a variational inference type algorithm which is parametrized by neural networks and allows one to compute the developed variational bound when the data distribution is not known. Specifically, the main contributions of this paper are:
• In Section 3.2, we find an explicit analytic characterization of optimal tradeoffs between relevance and complexity for the memoryless vector Gaussian model. The result generalizes the Gaussian Information Bottleneck method of Globerson and Tishby (2004); Chechik et al. (Feb. 2005) to the distributed learning scenario.
• In Section 3.3, we study the problem of maximizing relevance under a constraint on the sum complexity for which we establish a variational bound which generalizes the ELBO and the β-VAE criteria to the distributed setting.
• Section 3.4 is algorithm-oriented. We develop a variational inference type algorithm which enables computing the bound. This algorithm is obtained by parametrizing the encoders, the decoder, and the prior distributions via DNNs and using Monte-Carlo sampling. Also, it makes use of Kingma et al.'s re-parametrization trick Kingma and Welling (2013) and can be seen as a generalization of the variational information bottleneck algorithm in Alemi et al. (2017) to the distributed setting.
• Section 4 contains some experimental results on real datasets which show the efficiency of the approaches and algorithms that we develop in this paper.
Most relevant to this paper is the single-encoder Information Bottleneck (IB) method of Tishby et al. (1999), which readily and elegantly captures the above-mentioned viewpoint of seeking the right balance between data fit and generalization by using mutual information both as a cost function and as a regularizer term. Thus, the results of this paper can be seen as a generalization of those of Tishby et al. (1999) for the DM model and Globerson and Tishby (2004); Chechik et al. (Feb. 2005) for the Gaussian model to the distributed learning setting.
Remark: Due to space constraints, the proofs of the results of this paper are deferred to the appendices section, which also contains additional experimental results.
1.3 NOTATION
Throughout, upper case letters denote random variables, e.g., X; lower case letters denote realizations of random variables, e.g., x; and calligraphic letters denote sets, e.g., X. The cardinality of a set is denoted by |X|. For a random variable X with probability mass function (pmf) P_X, we use P_X(x) = p(x), x ∈ X, for short. Boldface upper case letters denote vectors or matrices, e.g., X, where context should make the distinction clear. For random variables (X_1, X_2, . . .) and a set of integers K ⊆ N, X_K denotes the set of random variables with indices in the set K, i.e., X_K = {X_k : k ∈ K}. If K = ∅, X_K = ∅. For k ∈ K we let X_{K/k} = (X_1, . . . , X_{k−1}, X_{k+1}, . . . , X_K), and assume that X_0 = X_{K+1} = ∅. Also, for zero-mean random vectors X and Y, the quantities Σ_x, Σ_{x,y} and Σ_{x|y} denote, respectively, the covariance matrix of the vector X, the cross-covariance matrix of the pair (X, Y), and the conditional covariance matrix of X given Y. Finally, for two probability measures P_X and Q_X on the random variable X ∈ X, the relative entropy or Kullback-Leibler divergence is denoted as D_KL(P_X‖Q_X).
2 FORMAL PROBLEM FORMULATION
Let K ≥ 2 and (X1, . . . , XK , Y ) be a tuple of random variables with a given joint probability mass function (pmf) PX1,...,XK ,Y (x1, . . . , xK , y) for (x1, . . . , xK) ∈ X1 × . . .×XK and y ∈ Y , where Xk designates the alphabet of Xk and Y that of Y . Throughout, we assume that the Markov chain
$$X_k \,\text{--}\, Y \,\text{--}\, X_{\mathcal{K}/k} \qquad (3)$$
holds for all k ∈ K. That is, the joint pmf factorizes as
$$P_{X_1,\dots,X_K,Y}(x_1,\dots,x_K,y) = P_Y(y)\prod_{k=1}^{K} P_{X_k|Y}(x_k|y). \qquad (4)$$
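To make the conditional-independence structure of equation 4 concrete, here is a short numpy sketch (ours; names are illustrative) that draws i.i.d. samples by first drawing Y and then drawing each X_k from P_{X_k|Y} independently across k:

```python
import numpy as np

def sample_model(n, p_y, p_x_given_y, rng):
    """Draw n i.i.d. samples following equation 4.
    p_y: (|Y|,) pmf of Y; p_x_given_y: list of K arrays, the k-th of
    shape (|Y|, |X_k|) whose rows are the pmfs P_{X_k|Y}(.|y)."""
    y = rng.choice(len(p_y), size=n, p=p_y)
    xs = [np.array([rng.choice(p.shape[1], p=p[yi]) for yi in y])
          for p in p_x_given_y]
    return xs, y  # each X_k depends on the others only through Y

rng = np.random.default_rng(0)
```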
The variable Y is a target variable, and we seek to characterize how accurately it can be predicted from a measurable random vector (X_1, . . . , X_K) when the components of this vector are processed separately, each by a distinct encoder. More specifically, let {(X_{1,i}, . . . , X_{K,i}, Y_i)}_{i=1}^n be a collection of n independent copies of (X_1, . . . , X_K, Y). Encoder k ∈ K only observes the sequence X_k^n and generates a description J_k = φ_k(X_k^n) according to some mapping
$$\phi_k : \mathcal{X}_k^n \to \mathcal{M}_k^{(n)}, \qquad (5)$$
where M_k^{(n)} is an arbitrary set of descriptions. The range of allowable description sets will be specified below. A decoder ψ(·) collects all descriptions J_K = (J_1, . . . , J_K) and returns an estimate Ŷ^n of Y^n as
$$\psi : \mathcal{M}_1^{(n)} \times \dots \times \mathcal{M}_K^{(n)} \to \hat{\mathcal{Y}}^n. \qquad (6)$$
The relevance of the estimation Ŷ^n is defined here as the information that the descriptions φ_1(X_1^n), . . . , φ_K(X_K^n) collectively preserve about Y^n, as measured by Shannon mutual information¹
$$\Delta^{(n)}(P_{X_{\mathcal{K}},Y}) = \frac{1}{n} \sum_{y^n, x_1^n, \dots, x_K^n} P(y^n) \prod_{k=1}^K P(x_k^n|y^n) \log \frac{P\big(y^n, \psi(\phi_1(x_1^n), \dots, \phi_K(x_K^n))\big)}{P(y^n)\, P\big(\psi(\phi_1(x_1^n), \dots, \phi_K(x_K^n))\big)} := \frac{1}{n} I_{P_{X_{\mathcal{K}},Y}}(Y^n; \hat{Y}^n), \qquad (7)$$
¹Alternatively, the relevance could be defined in a more operational manner by the average logarithmic-loss distortion or error E_{P_{X_K,Y}}[ℓ_log(Y^n, Ŷ^n)] = H(Y^n|Ŷ^n).
where Ŷ^n = ψ(φ_1(X_1^n), . . . , φ_K(X_K^n)) and the subscript P_{X_K,Y} indicates that the mutual information is computed under the joint distribution P_{X_K,Y}.
There are various ways to control the complexity of the encoding functions {φ_k}_{k=1}^K. In this paper, we do so by restricting their ranges. This is known as the minimum description length complexity measure Hinton and van Camp (1993). Specifically, the mapping φ_k(·) at Encoder k ∈ K needs to satisfy
$$R_k \geq \frac{1}{n} \log |\phi_k(\mathcal{X}_k^n)| \quad \text{for all } X_k^n \in \mathcal{X}_k^n. \qquad (8)$$
Definition 1 A tuple (∆, R_1, . . . , R_K) is said to be achievable if there exists an integer n, a family of encoding mappings {φ_k}_{k=1}^K and a decoder mapping ψ such that
$$\Delta \leq \frac{1}{n} I_{P_{X_{\mathcal{K}},Y}}\big(Y^n; \psi(\phi_1(X_1^n), \dots, \phi_K(X_K^n))\big) \qquad (9)$$
$$R_k \geq \frac{1}{n} \log |\phi_k(\mathcal{X}_k^n)| \quad \text{for all } k \in \mathcal{K}. \qquad (10)$$
The relevance-complexity region IR_DIB is given by the closure of all achievable tuples (∆, R_1, . . . , R_K).
In some cases, for given R_K = (R_1, . . . , R_K) and for ease of exposition, we will be content with the relevance-complexity function ∆(R_K, P_{X_K,Y}) defined as
$$\Delta(R_{\mathcal{K}}, P_{X_{\mathcal{K}},Y}) = \max_{\{\phi_k\}_{k=1}^K,\, \psi} \Delta^{(n)}(P_{X_{\mathcal{K}},Y}), \qquad (11)$$
where the maximization is subject to equation 8.
3 MAIN RESULTS
3.1 DISCRETE MEMORYLESS DATA MODEL
The following theorem (the proof of which can be found in the appendices section) provides a computable characterization of the relevance-complexity region IRDIB. The result can be seen as a generalization of Tishby et al. Tishby et al. (1999) single encoder IB to the distributed learning model with K encoders.
Theorem 1 The relevance-complexity region IR_DIB of the distributed learning problem with P_{X_K,Y} for which the Markov chain equation 3 holds is given by the union of all tuples (∆, R_1, . . . , R_K) ∈ R_+^{K+1} that satisfy, for all S ⊆ K,
$$\Delta \leq \sum_{k\in S}\big[R_k - I(X_k; U_k|Y, T)\big] + I(Y; U_{S^c}|T), \qquad (12)$$
for some set of pmfs P := {P_{U_1|X_1,T}, . . . , P_{U_K|X_K,T}, P_T} with joint distribution of the form
$$P_T(t)\, P_Y(y) \prod_{k=1}^K P_{X_k|Y}(x_k|y) \prod_{k=1}^K P_{U_k|X_k,T}(u_k|x_k,t). \qquad (13)$$
Remark 1 In Theorem 1, the random variable T stands for a convexification of the region, i.e., a convex combination of achievable relevance-complexity tuples is itself achievable. For given T = t, the result of Theorem 1 comprises the optimization over K conditional distributions {P_{U_k|X_k,t}}. For k ∈ K, the conditional distribution P_{U_k|X_k,t} represents a stochastic encoding of the feature X_k into a latent variable U_k. Intuitively, the latent variable U_k should capture all the relevant information about Y that is contained in X_k and is non-redundant with that carried by {U_i}_{i≠k}. The requirement of non-redundancy is mandated by the need to operate at the minimum possible complexity at which a desired relevance level is achievable (recall that minimum complexity, as expressed by the algorithm's input-output mutual information, translates directly into a better generalization capability). Collectively, however, the set of all latent variables (U_1, . . . , U_K) should be expressive enough to reproduce the target variable Y to within the desired relevance level.
Remark 2 As for the single-encoder IB problem of Tishby et al. (1999) and an increasing number of works that followed, including Courtade and Weissman (2014, Section III-F), our approach here is asymptotic. In addition to leading to an exact characterization, the result also readily provides a lower bound on the performance in the non-asymptotic (e.g., one-shot) setting. For the latter setting, known approaches (e.g., the functional representation lemma of Li and El Gamal (2018)) would lead to only non-matching inner and outer bounds on the region of optimal tradeoff pairs, as is the case even for the single-encoder setting Li et al. (2018).
3.2 MEMORYLESS VECTOR GAUSSIAN DATA MODEL
We now turn to a continuous-alphabet setting. Here, (X_1, . . . , X_K, Y) is a zero-mean Gaussian random vector such that
$$X_k = H_k Y + N_k \quad \text{for all } k \in \mathcal{K}, \qquad (14)$$
where H_k ∈ C^{n_k×n_y} models the linear map connecting the target variable Y ∈ C^{n_y} to the observation at encoder k, and N_k ∈ C^{n_k}, k = 1, . . . , K, is the noise vector at encoder k, assumed to be Gaussian with zero mean and covariance matrix Σ_k, and independent from all other noises and from the target variable Y. We denote by Σ_y the covariance matrix of the target vector Y ∈ C^{n_y}.
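For intuition, a numpy sketch (ours) that generates real-valued data from the observation model in equation 14, with the simplifying assumptions Σ_y = I and unit-variance noises (the same choices as in the regression experiment of Section 6.2):

```python
import numpy as np

def sample_gaussian_views(n, H, rng):
    """n samples of (Y, X_1, ..., X_K) with X_k = H_k Y + N_k,
    Y ~ N(0, I) and N_k ~ N(0, I); H is a list of (n_k, n_y) arrays."""
    ny = H[0].shape[1]
    Y = rng.standard_normal((n, ny))
    X = [Y @ Hk.T + rng.standard_normal((n, Hk.shape[0])) for Hk in H]
    return Y, X

rng = np.random.default_rng(0)
Y, X = sample_gaussian_views(1000, [np.ones((3, 1)), np.ones((3, 1))], rng)
```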
For this model, we find an explicit analytic characterization of optimal tradeoffs between relevance and complexity. The proof relies on deriving an outer bound on the region described by equation 12, and showing that it is achievable with Gaussian distributions, with no time-sharing. In doing so, we use techniques that rely on the de Bruijn identity and the properties of Fisher information and the minimum mean square error (MMSE).
Theorem 2 The relevance-complexity region IR_GDIB for the vector Gaussian model is given by the union of all tuples (∆, R_1, . . . , R_K) that satisfy, for all S ⊆ K,
$$\Delta \leq \sum_{k\in S}\left[R_k + \log\left|I - \Sigma_k^{1/2}\Omega_k\Sigma_k^{1/2}\right|\right] + \log\left|\sum_{k\in S^c} \Sigma_y^{1/2} H_k^\dagger \Omega_k H_k \Sigma_y^{1/2} + I\right|,$$
for some 0 ⪯ Ω_k ⪯ Σ_k^{-1}.
Proof: The proof of the direct part follows by evaluating the region of Theorem 1, which can be extended to the case of continuous alphabets using standard discretization (quantization) arguments, with the choices T = ∅ and p(u_k|x_k, t) = CN(x_k, Σ_k^{1/2}(Ω_k − I)Σ_k^{1/2}). The main contribution in the proof is the converse part. This proof is technical and rather lengthy and, for this reason, is deferred to the appendices section.
In the special case in which K = 1, the result of Theorem 2 recovers that of Globerson and Tishby (2004) (see also Chechik et al. (Feb. 2005)), which establishes the optimal relevance-complexity tradeoff of the single-encoder Gaussian IB problem.
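The bound of Theorem 2 can be evaluated numerically for given matrices. The sketch below (our addition; real-valued matrices are assumed, so † becomes a transpose) computes the right-hand side for one subset S, using the fact that the determinants are unchanged if the symmetric square roots are replaced by Cholesky factors:

```python
import numpy as np

def gaussian_dib_rhs(S, R, H, Sigma, Sigma_y, Omega):
    """Right-hand side of Theorem 2 for a subset S of {0, ..., K-1}.
    R[k]: complexity of encoder k; H[k], Sigma[k], Omega[k]: the model
    and test-channel matrices, with 0 <= Omega[k] <= inv(Sigma[k])."""
    K = len(R)
    val = 0.0
    for k in S:
        Lk = np.linalg.cholesky(Sigma[k])  # Lk @ Lk.T = Sigma_k
        val += R[k] + np.linalg.slogdet(
            np.eye(Lk.shape[0]) - Lk.T @ Omega[k] @ Lk)[1]
    Ly = np.linalg.cholesky(Sigma_y)
    A = np.eye(Sigma_y.shape[0])
    for k in set(range(K)) - set(S):
        A += Ly.T @ H[k].T @ Omega[k] @ H[k] @ Ly
    return val + np.linalg.slogdet(A)[1]
```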
3.3 A VARIATIONAL BOUND
In this section, we consider the problem of learning encoder and decoder mappings that maximize the relevance level for a given (fixed) complexity level, i.e., those that perform in the vicinity of the boundary of the region IR_DIB. First, we derive a parametrization of the relevance-complexity region; then, we develop a variational bound which expresses the optimal encoder and decoder mappings as the solution to an optimization problem (an algorithm for solving this problem in the case of unknown distributions is given in the next section).
Let R_sum := Σ_{k=1}^K R_k. Also, let IR^sum_DIB denote the region of achievable (relevance, sum-complexity) pairs,
$$\mathcal{IR}^{\text{sum}}_{\text{DIB}} := \Big\{ (\Delta, R_{\text{sum}}) \in \mathbb{R}_+^2 : \exists (R_1, \dots, R_K) \in \mathbb{R}_+^K \ \text{s.t.}\ (\Delta, R_1, \dots, R_K) \in \mathcal{IR}_{\text{DIB}} \ \text{and}\ \sum_{k=1}^K R_k = R_{\text{sum}} \Big\}.$$
Proposition 1 The relevance-complexity region under sum-complexity constraint IR^sum_DIB is given by the convex hull of all tuples (∆, R_sum) ∈ R_+^2 satisfying ∆ ≤ ∆(R_sum, P_{X_K,Y}), where
$$\Delta(R_{\text{sum}}, P_{X_{\mathcal{K}},Y}) = \max_{\mathbf{P}} \min\left\{ I(Y; U_{\mathcal{K}}),\; R_{\text{sum}} - \sum_{k=1}^K I(X_k; U_k | Y) \right\}, \qquad (15)$$
and where the maximization is over the set of pmfs P := {P_{U_1|X_1}, . . . , P_{U_K|X_K}} such that the joint pmf factorizes as p_Y(y) ∏_{k=1}^K p_{X_k|Y}(x_k|y) ∏_{k=1}^K p_{U_k|X_k}(u_k|x_k).
The next proposition provides a characterization of the pairs (∆, R_sum) that lie on the boundary of IR^sum_DIB in terms of a nonnegative parameter s ≥ 0.
Proposition 2 For every pair (∆, R_sum) ∈ R_+^2 that lies on the boundary of the relevance-complexity region IR^sum_DIB there exists s ≥ 0 such that (∆, R_sum) = (∆_s, R_s), where
$$\Delta_s = \frac{1}{1+s}\left[(1+sK)H(Y) + sR_s + \max_{\mathbf{P}} \mathcal{L}_s(\mathbf{P})\right], \qquad (16)$$
$$R_s = I(Y; U^*_{\mathcal{K}}) + \sum_{k=1}^K \big[I(X_k; U^*_k) - I(Y; U^*_k)\big], \qquad (17)$$
and P* is the set of conditional pmfs P that maximize the cost function
$$\mathcal{L}_s(\mathbf{P}) := -H(Y|U_{\mathcal{K}}) - s \sum_{k=1}^K \big[H(Y|U_k) + I(X_k; U_k)\big]. \qquad (18)$$
Using Proposition 2, it is clear that the encoders {P_{U_k|X_k}}_{k∈K} that achieve the relevance-complexity pair (∆_s, R_s) can be computed by maximizing the regularized cost equation 18 for the corresponding value of s ≥ 0. The corresponding optimal decoder P_{Y|U_K} for these encoders is the conditional distribution induced by the maximizing pmfs. Different relevance-complexity pairs (∆_s, R_s) on the boundary of IR^sum_DIB, and the encoder and decoder mappings that achieve them, can be found by solving equation 18 for different values of s ≥ 0 and then evaluating equation 16 and equation 17 for the obtained solution.
The optimization of equation 18 generally requires computing marginal distributions involving the descriptions U_1, . . . , U_K, which can be computationally costly. To overcome this limitation, in the following we derive a tight variational bound on L_s(P) which lower-bounds the DIB cost function using some arbitrary distributions. Let us consider an arbitrary decoder Q_{Y|U_1,...,U_K}(y|u_1, . . . , u_K) for y ∈ Y, u_1 ∈ U_1, . . . , u_K ∈ U_K, K decoders Q_{Y|U_k}(y|u_k) for k ∈ K, y ∈ Y, u_k ∈ U_k, and latent variable priors Q_{U_k}(u_k), k ∈ K, u_k ∈ U_k. For short, we denote
Q := {Q_{Y|U_1,...,U_K}, Q_{Y|U_1}, . . . , Q_{Y|U_K}, Q_{U_1}, . . . , Q_{U_K}}.
Let us define the variational DIB cost function L_s^VB(P, Q) as
$$\mathcal{L}_s^{\text{VB}}(\mathbf{P},\mathbf{Q}) := \underbrace{\mathbb{E}\big[\log Q_{Y|U_{\mathcal{K}}}(Y|U_{\mathcal{K}})\big]}_{\text{av. logarithmic-loss}} + s \sum_{k=1}^K \underbrace{\Big(\mathbb{E}\big[\log Q_{Y|U_k}(Y|U_k)\big] - D_{\text{KL}}(P_{U_k|X_k}\|Q_{U_k})\Big)}_{\text{regularizer}}. \qquad (19)$$
The following lemma states that L_s^VB(P, Q) is a lower bound on L_s(P) for all distributions Q.
Lemma 1 For fixed pmfs P, we have
$$\mathcal{L}_s(\mathbf{P}) \geq \mathcal{L}_s^{\text{VB}}(\mathbf{P},\mathbf{Q}), \quad \text{for all pmfs } \mathbf{Q}. \qquad (20)$$
In addition, there exists a unique Q that achieves the maximum max_Q L_s^VB(P, Q) = L_s(P), given by
$$Q^*_{U_k} = P_{U_k}, \quad Q^*_{Y|U_k} = P_{Y|U_k}, \quad k = 1, \dots, K, \qquad (21)$$
$$Q^*_{Y|U_1,\dots,U_K} = P_{Y|U_1,\dots,U_K}, \qquad (22)$$
where P_{U_k}, P_{Y|U_k} and P_{Y|U_1,...,U_K} are computed from the pmfs P.
Using the above, the optimization in equation 16 can be written in terms of the variational DIB cost function as
$$\max_{\mathbf{P}} \mathcal{L}_s(\mathbf{P}) = \max_{\mathbf{P}} \max_{\mathbf{Q}} \mathcal{L}_s^{\text{VB}}(\mathbf{P},\mathbf{Q}). \qquad (23)$$
We close this section by noting that the cost function equation 19 can be seen as a generalization of the evidence lower bound (ELBO), as given in Rezende et al. (2014); Kingma and Welling (2013) for single-encoder learning, to the distributed setting. Also, in the specific case in which Y = (X_1, . . . , X_K), the bound generalizes the ELBO used for VAEs to the case of an arbitrary number of encoders.
3.4 CASE OF UNKNOWN DISTRIBUTIONS: VARIATIONAL DISTRIBUTED IB ALGORITHM
In practice, only a set of training samples {(X_{1,i}, . . . , X_{K,i}, Y_i)}_{i=1}^n is available. In this section, we provide a method to optimize equation 23 in this case by parametrizing the encoding and decoding distributions to be optimized using a family of distributions whose parameters are determined by deep neural networks (DNNs). This allows us to formulate equation 23 in terms of the DNN parameters and optimize it by using the reparametrization trick Kingma and Welling (2013), Monte Carlo sampling, as well as stochastic gradient descent (SGD) type algorithms.
Let F^e_{NN,k} denote the parametric family of encoding probability distributions P_{U_k|X_k} over U_k for each element of X_k. Each member of this collection, P_{U_k|X_k;γ^e_k}, is described by a parameter vector γ^e_k ∈ Γ^e_k ⊆ R^{l^e_k}, where Γ^e_k denotes the set of allowable parameter vectors. The parameter vector γ^e_k is the output of a DNN f_{θ_k} : X_k → Γ^e_k with network parameters θ_k ∈ Θ_k ⊆ R^{d^e_k}, e.g., the weights of the network at all layers. The DNN f_{θ_k} takes X_k as input and outputs the parameter vector γ^e_k, determining one of the probability members P_{U_k|X_k;γ^e_k}. We have
$$\mathcal{F}^e_{\text{NN},k} = \Big\{ P_{U_k|X_k;\gamma^e_k}(u_k|x_k),\ \text{for } u_k \in \mathcal{U}_k,\ x_k \in \mathcal{X}_k : \gamma^e_k = f_{\theta_k}(x_k),\ \theta_k \in \Theta_k \Big\}. \qquad (24)$$
For example, the family of multivariate Gaussian distributions is parametrized by the mean µ^θ_k and covariance matrix Σ^θ_k, i.e., γ_k := (µ^θ_k, Σ^θ_k). Therefore, given an observation X_k, γ_k := (µ^θ_k, Σ^θ_k) is determined by the output of the DNN f_{θ_k}, and F^e_{NN,k} is given by P_{U_k|X_k;γ_k}(u_k|x_k) = N(u_k; µ^θ_k, Σ^θ_k).
Similarly, for decoders Q_{Y|U_k} over Y, define the family of distributions parametrized by a vector in Γ^d_k ⊆ R^{l^d_k} determined by the output of a DNN f_{φ_k} : U_k → Γ^d_k with parameters φ_k ∈ Φ_k ⊆ R^{d^d_k}, as
$$\mathcal{F}^d_{\text{NN},k} = \Big\{ Q_{Y|U_k;\gamma^d_k}(y|u_k),\ \text{for } y \in \mathcal{Y},\ u_k \in \mathcal{U}_k : \gamma^d_k = f_{\phi_k}(u_k),\ \phi_k \in \Phi_k \Big\}, \qquad (25)$$
and for the distribution Q_{Y|U_K} over Y for each element in U_1 × ··· × U_K, define the family of distributions parametrized by the output of the DNN f_{φ_K} : U_1 × ··· × U_K → Γ^d_K, with φ_K ∈ Φ_K ⊆ R^{d^d_K} and Γ^d_K ⊆ R^{l^d_K},
$$\mathcal{F}^d_{\text{NN},\mathcal{K}} = \Big\{ Q_{Y|U_1,\dots,U_K;\gamma^d_{\mathcal{K}}}(y|u_1,\dots,u_K),\ y \in \mathcal{Y},\ u_k \in \mathcal{U}_k : \gamma^d_{\mathcal{K}} = f_{\phi_{\mathcal{K}}}(u_1,\dots,u_K),\ \phi_{\mathcal{K}} \in \Phi_{\mathcal{K}} \Big\}. \qquad (26)$$
Finally, for the distributions Q_{ϕ_k}(u_k) we define the family of distributions with parameter ϕ_k ∈ Ψ_k ⊆ R^{l^p_k},
$$\mathcal{F}^p_{\text{NN},k} = \Big\{ Q_{U_k;\varphi_k}(u_k),\ \text{for } u_k \in \mathcal{U}_k : \varphi_k \in \Psi_k \Big\}.$$
In the following, for brevity we use P_{θ_k}(u_k|x_k), Q_{φ_k}(y|u_k), Q_{φ_K}(y|u_K) and Q_{ϕ_k}(u_k) to denote the distributions parametrized by the DNNs f_{θ_k}, f_{φ_k}, f_{φ_K} and by ϕ_k, respectively.
By restricting the optimization of the variational DIB cost in equation 23 to the encoders, decoders and priors within the families of distributions F^e_{NN,k}, F^d_{NN,k}, F^d_{NN,K}, F^p_{NN,k}, we get
$$\max_{\mathbf{P}} \max_{\mathbf{Q}} \mathcal{L}_s^{\text{VB}}(\mathbf{P},\mathbf{Q}) \geq \max_{\boldsymbol{\theta},\boldsymbol{\phi},\boldsymbol{\varphi}} \mathcal{L}_s^{\text{NN}}(\boldsymbol{\theta},\boldsymbol{\phi},\boldsymbol{\varphi}), \qquad (27)$$
where we use the notation θ := [θ_1, . . . , θ_K], φ := [φ_1, . . . , φ_K, φ_K] and ϕ := [ϕ_1, . . . , ϕ_K] to denote the DNN and prior parameters and, the cost in equation 27 is given by
$$\mathcal{L}_s^{\text{NN}}(\boldsymbol{\theta},\boldsymbol{\phi},\boldsymbol{\varphi}) := \mathbb{E}_{P_{Y,X}}\mathbb{E}_{\{P_{\theta_k}(U_k|X_k)\}}\Big[\log Q_{\phi_{\mathcal{K}}}(Y|U_{\mathcal{K}}) + s \sum_{k=1}^K \Big(\log Q_{\phi_k}(Y|U_k) - D_{\text{KL}}(P_{\theta_k}(U_k|X_k)\|Q_{\varphi_k}(U_k))\Big)\Big]. \qquad (28)$$
Next, we train the DNNs to maximize a Monte Carlo approximation of equation 27 over θ, φ, ϕ using SGD. We use the reparametrization trick Kingma and Welling (2013) to sample from P_{θ_k}(U_k|X_k). In particular, we consider F^e_{NN,k} to consist of a parametric family of distributions that can be sampled by first sampling a random variable Z_k with distribution P_{Z_k}(z_k), z_k ∈ Z_k, and then transforming the samples using some function g_{θ_k} : X_k × Z_k → U_k parametrized by θ_k, such that U_k = g_{θ_k}(x_k, Z_k) ∼ P_{θ_k}(U_k|x_k). The reparametrization trick reduces the original optimization to estimating θ_k of the deterministic function g_{θ_k} and allows computing estimates of the gradient using backpropagation Kingma and Welling (2013). The variational DIB cost in equation 27 can be approximated by sampling m independent samples {u_{k,i,j}}_{j=1}^m ∼ P_{θ_k}(u_k|x_{k,i}) for each training sample (x_{1,i}, . . . , x_{K,i}, y_i), i = 1, . . . , n. Sampling is performed by using u_{k,i,j} = g_{θ_k}(x_{k,i}, z_{k,j}) with {z_{k,j}}_{j=1}^m i.i.d. sampled from P_{Z_k}. We then have
$$\mathcal{L}_{s,i}^{\text{emp}}(\boldsymbol{\theta},\boldsymbol{\phi},\boldsymbol{\varphi}) := \frac{1}{m}\sum_{j=1}^m \log Q_{\phi_{\mathcal{K}}}(y_i|u_{1,i,j},\dots,u_{K,i,j}) + \frac{s}{m}\sum_{j=1}^m \sum_{k=1}^K \Big(\log Q_{\phi_k}(y_i|u_{k,i,j}) - D_{\text{KL}}(P_{\theta_k}(U_{k,i}|x_{k,i})\|Q_{\varphi_k}(U_{k,i}))\Big). \qquad (29)$$
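The following PyTorch sketch (ours, not the authors' released code) assembles the negative of the empirical cost in equation 29 for K = 2, assuming diagonal Gaussian encoders such as the module sketched earlier, m = 1 Monte Carlo sample, classifier decoders returning logits, and a closed-form KL to the N(0, I) prior; all module names are hypothetical:

```python
import torch
import torch.nn.functional as F

def reparam(mu, log_var):
    # u = mu + sigma * z with z ~ N(0, I): the reparametrization trick
    return mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)

def kl_to_std_normal(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), one value per sample
    return 0.5 * torch.sum(mu ** 2 + log_var.exp() - log_var - 1, dim=1)

def dvib_loss(enc1, enc2, dec12, dec1, dec2, x1, x2, y, s):
    mu1, lv1 = enc1(x1); u1 = reparam(mu1, lv1)
    mu2, lv2 = enc2(x2); u2 = reparam(mu2, lv2)
    lq12 = F.log_softmax(dec12(torch.cat([u1, u2], dim=1)), dim=1)
    lq1 = F.log_softmax(dec1(u1), dim=1)
    lq2 = F.log_softmax(dec2(u2), dim=1)
    main = lq12.gather(1, y[:, None]).squeeze(1)
    reg = (lq1.gather(1, y[:, None]).squeeze(1) - kl_to_std_normal(mu1, lv1)
           + lq2.gather(1, y[:, None]).squeeze(1) - kl_to_std_normal(mu2, lv2))
    return -(main + s * reg).mean()  # SGD minimizes the negative cost
```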
4 EXPERIMENTS: RESILIENCE TO NOISE, ROTATION AND OCCLUSION
In this experiment, we test the robustness of our method against noise, rotation and random occlusion on the MNIST dataset. Specifically, we combine two types of corruption: the first encoder observes a digit from MNIST that is occluded by a square which is rotated randomly (rotation angle uniformly distributed over [−45°, 45°]), and the second encoder observes a noisy version of the same digit corrupted by additive noise (noise level uniform between 0 and 3). The noisy pixels are clipped between 0 and 1, with more than 60% of the pixels occluded. These corruptions make the problem significantly more involved than standard MNIST (for which application of our algorithm leads to a relevance of about 99.9%).
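For reproducibility of this setup, a numpy sketch (ours; the exact occlusion size and placement used in the paper may differ) that builds the two views from a clean digit — a randomly rotated square occlusion for view 1 and clipped additive noise for view 2:

```python
import numpy as np
from scipy.ndimage import rotate

def make_views(img, rng, occ_size=16):
    """img: (28, 28) array in [0, 1]. Returns (occluded, noisy) views."""
    # View 1: a square mask rotated by a random angle in [-45, 45] degrees
    mask = rotate(np.ones((occ_size, occ_size)), rng.uniform(-45, 45),
                  reshape=False, order=0)
    v1 = img.copy()
    r, c = rng.integers(0, 28 - occ_size, size=2)
    v1[r:r + occ_size, c:c + occ_size][mask > 0.5] = 0.0
    # View 2: additive noise with a random level in [0, 3], clipped to [0, 1]
    v2 = np.clip(img + rng.uniform(0, 3) * rng.random(img.shape), 0.0, 1.0)
    return v1, v2
```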
We considered a deterministic CNN with dropout which achieves 99.8% accuracy on the clean MNIST test data. Then, we trained the same CNN architecture on each of the noisy inputs to the encoders, resulting in a relevance of 92.1% from the input to encoder 1 (randomly rotated occlusion) and 79.68% from the input to encoder 2 (noisy clipped image).
Figure 3: View 1: occluded. View 2: noisy. (Panels show the two views together with the original Y.)
Table 1: CNN architecture.
Encoder k: conv. ker. [5,5,32]-ReLU, maxpool [2,2,2]; conv. ker. [5,5,64]-ReLU, maxpool [2,2,2]; dense [1024]-ReLU, dropout 0.4; dense [256]-ReLU.
Latent space k: dense [256]-ReLU.
Decoder 12: dense [256]-ReLU.
Decoder k: dense [256]-ReLU.
Figure 4: Relevance vs. sum-complexity for n = 50,000 and s ∈ [10^{-10}, 1]. (Curves: C-IB with R_sum → ∞; D-VIB train, n = 50,000; D-VIB test, n = 50,000.)
We applied our D-VIB algorithm of Section 3.4 to this model with the CNN architecture of Table 1, in which Encoder k = 1, 2 is parametrized by an n_{u_k} = 256 dimensional multivariate Gaussian distribution N(µ^e_k, Σ^e_k) determined by the output of a DNN f_{θ_k} consisting of the concatenation of convolutional, dense and maxpool layers with ReLU activations and dropout. The output of the last layer is followed by a dense layer without activation that generates µ^e_k and Σ^e_k. The prior is chosen as Q_{ϕ_k}(u) = N(0, I). Each decoder takes the samples from P_{θ_k}(U_k|X_k) and processes its inputs with a dense-layer DNN (f_{φ_K} and f_{φ_k}), each with 256 neurons and ReLU activation, which outputs a vector ŷ_i of size |Y| = 10 normalized with a softmax, corresponding to a distribution over the one-hot encoding of the digit labels {0, . . . , 9} from the K observations,
$$Q_{\phi_k}(\hat{y}_k|u_k) = \text{Softmax}(f_{\phi_k}(U_k)), \quad k = 1, 2, \qquad (30)$$
$$Q_{\phi_{\mathcal{K}}}(\hat{y}|u_{\mathcal{K}}) = \text{Softmax}(f_{\phi_{\mathcal{K}}}(U_1, U_2)), \qquad (31)$$
where Softmax(p) for p ∈ R^d is a vector with i-th entry [Softmax(p)]_i = exp(p_i)/Σ_{j=1}^d exp(p_j). Figure 4 shows the relevance-complexity tradeoffs obtained using our D-VIB algorithm of Section 3.4, with n = 50,000 and 15 distinct s-values randomly chosen in the range [10^{-10}, 1]. For comparison, we also present the performance obtained using three state-of-the-art multi-view learning approaches: (i) applying a deterministic CNN to the two views concatenated (deterministic CNN), (ii) applying the single-encoder variational IB method of Alemi et al. to the two views concatenated (C-VIB), and (iii) learning one function for each view via distinct CNNs and optimizing all CNNs independently (independent CNNs). The achieved relevance is reported in Table 2. For other experimental results, see the appendices section.
We also mention that, at a high level, our algorithm D-VIB can be considered as performing some form of co-regularization (for instance, its Gaussian version is similar to the CCA of Hardoon et al. (2004)). Comparatively, the single-view algorithm C-VIB can be viewed as belonging to the family of co-training style algorithms (such as the co-EM of Nigam and Ghani (2000)) which, as mentioned in the recent survey Zhao et al. (2017), improve over single-view algorithms. The performance of D-VIB dominates that of C-VIB, which itself dominates co-EM.
5 PROOFS OF MAIN THEOREMS, PROPOSITIONS AND LEMMAS
5.1 AUXILIARY LEMMAS
Lemma 2 Dembo et al. (1991); Ekrem and Ulukus (2014) Let (X, Y) be a pair of random vectors with pmf p(x, y). We have
$$\log\big|(\pi e)\, J^{-1}(X|Y)\big| \leq h(X|Y) \leq \log\big|(\pi e)\,\text{mmse}(X|Y)\big|,$$
where the conditional Fisher information matrix is defined as
$$J(X|Y) := \mathbb{E}\big[\nabla \log p(X|Y)\, \nabla \log p(X|Y)^\dagger\big],$$
and the minimum mean squared error (MMSE) matrix is
$$\text{mmse}(X|Y) := \mathbb{E}\big[(X - \mathbb{E}[X|Y])(X - \mathbb{E}[X|Y])^\dagger\big].$$
Lemma 3 Ekrem and Ulukus (2014) Let (V_1, V_2) be a random vector with finite second moments and N ∼ CN(0, Σ_N) independent of (V_1, V_2). Then
$$\text{mmse}(V_2|V_1, V_2 + N) = \Sigma_N - \Sigma_N J(V_2 + N|V_1)\Sigma_N.$$
5.2 PROOF OF THEOREM 1
If K = 1, the distributed learning problem that we study boils down to the well-known Information Bottleneck (IB) problem of Tishby et al. (1999). The single-encoder IB problem is essentially a remote point-to-point source coding problem Dobrushin and Tsybakov (1962) in which distortion is measured under the logarithmic loss fidelity criterion Harremoes and Tishby (2007). In accordance with this analogy, for K ≥ 2 consider the multiterminal source coding problem under logarithmic loss in which the sequence Y^n models a remote source that is observed by K spatially distributed agents; the agents observe noisy versions of the remote source and communicate independently with a decoder or Chief Executive Officer (CEO) over rate-constrained noise-free links. For instance, agent k, k ∈ K, observes X_k^n and uses R_k bits per sample to describe it to the decoder. The decoder wants to reconstruct the remote source Y^n to within a prescribed fidelity level, where the incurred distortion is measured using the logarithmic loss criterion, i.e.,
$$\ell_{\log}(y^n, \hat{y}^n) = \frac{1}{n}\log\frac{1}{\hat{P}_{Y^n|J}(y^n|\phi_1(x_1^n), \dots, \phi_K(x_K^n))}, \qquad (32)$$
where J = (φ_1(X_1^n), . . . , φ_K(X_K^n)).
Here, (X_1^n, . . . , X_K^n, Y^n) is assumed to be distributed i.i.d. according to the n-fold product of the pmf P_{X_1,...,X_K,Y}, i.e., the Markov chain equation 3 holds.
Definition 2 A rate-distortion code (of blocklength n) for the CEO problem consists of K encoding functions
$$\tilde{\phi}_k : \mathcal{X}_k^n \to \{1, \dots, M_k^{(n)}\}, \quad \text{for } k = 1, \dots, K, \qquad (33)$$
and a decoding function
$$\tilde{\psi} : \{1, \dots, M_1^{(n)}\} \times \dots \times \{1, \dots, M_K^{(n)}\} \to \hat{\mathcal{Y}}^n. \qquad (34)$$
A distortion-rate tuple (D, R_1, . . . , R_K) is achievable for the DM CEO source coding problem with side information if there exist a blocklength n, encoding functions {φ̃_k}_{k=1}^K and a decoding function ψ̃ such that
$$R_k \geq \frac{1}{n}\log M_k^{(n)}, \quad \text{for } k = 1, \dots, K,$$
$$D \geq \mathbb{E}\big[\ell_{\log}\big(Y^n, \tilde{\psi}(\tilde{\phi}_1(X_1^n), \dots, \tilde{\phi}_K(X_K^n))\big)\big].$$
The distortion-rate region DRCEO of the CEO model is defined as the closure of all non-negative tuples (D,R1, . . . , RK) that are achievable.
Key to the proof of Theorem 1 is the following proposition, which states that IR_DIB and DR_CEO can be inferred from each other.
Proposition 3 (∆, R_1, . . . , R_K) ∈ IR_DIB if and only if (H(Y) − ∆, R_1, . . . , R_K) ∈ DR_CEO.
Proof: Let, for k = 1, . . . , K, J_k = φ_k(X_k^n) and J = (J_1, . . . , J_K). Then,
$$\mathbb{E}[\ell_{\log}(Y^n, \hat{Y}^n)|J = j] = \sum_{y^n \in \mathcal{Y}^n} P(y^n|j)\log\frac{1}{\hat{P}(y^n|j)} \qquad (35)$$
$$= \sum_{y^n \in \mathcal{Y}^n} P(y^n|j)\log\frac{P(y^n|j)}{\hat{P}(y^n|j)} + H(Y^n|J = j) \qquad (36)$$
$$= D_{\text{KL}}\big(P(y^n|j)\,\|\,\hat{P}(y^n|j)\big) + H(Y^n|J = j) \qquad (37)$$
$$\geq H(Y^n|J = j), \qquad (38)$$
where equation 38 is due to the non-negativity of the Kullback-Leibler divergence, and the equality holds if and only if P̂(y^n|j) = P(y^n|j), where P(y^n|j) = Pr{Y^n = y^n|J = j}, for all j and y^n ∈ Y^n.
Let an achievable tuple (∆, R_1, . . . , R_K) ∈ IR_DIB be given. Then, there must exist functions {φ_k}_{k=1}^K such that equation 9 and equation 10 hold. Using equation 38, by letting the decoding function be ψ̃(J_K) = {P_{Y^n|J_K}(y^n|J_K)}, we have E[ℓ_log(Y^n, Ŷ^n)|J_K] = H(Y^n|J_K), which implies (H(Y) − ∆, R_1, . . . , R_K) ∈ DR_CEO.
The result of Theorem 1 follows easily by combining (Courtade and Weissman, 2014, Theorem 10), which provides a single-letter characterization of the rate-distortion region DR*_CEO of the CEO problem, and Proposition 3.
5.3 PROOF OF THEOREM 2
The proof of the direct part of Theorem 2 follows by evaluating the region of Theorem 1 with the choice T = ∅ and p(u_k|x_k, t) = CN(x_k, Σ_k^{1/2}(Ω_k − I)Σ_k^{1/2}).
The proof of the converse part is as follows. Fix t ∈ T, S ⊆ K and a family of distributions {p(u_k|x_k, t)}_{k=1}^K such that the joint distribution factorizes as equation 13. Also, let 0 ⪯ Ω_{k,t} ⪯ Σ_k^{-1} be such that
$$\text{mmse}(X_k|Y, U_{k,t}, t) = \Sigma_k - \Sigma_k\Omega_{k,t}\Sigma_k. \qquad (39)$$
Such Ω_{k,t} always exists since
$$0 \preceq \text{mmse}(X_k|Y, U_{k,t}, t) \preceq \Sigma_k. \qquad (40)$$
Then, we have
$$I(X_k; U_k|Y, t) \geq \log|\Sigma_k| - \log|\text{mmse}(X_k|Y, U_{k,t}, t)| = -\log\big|I - \Sigma_k^{1/2}\Omega_{k,t}\Sigma_k^{1/2}\big|, \qquad (41)$$
where the inequality is due to Lemma 2, and equation 41 is due to equation 39.
Also, we have
$$I(Y; U_{S^c,t}|t) \leq \log|\Sigma_y| - \log|J^{-1}(Y|U_{S^c,t}, t)| \qquad (42)$$
$$= \log\Big|\sum_{k\in S^c}\Sigma_y^{1/2} H_k^\dagger \Omega_{k,t} H_k \Sigma_y^{1/2} + I\Big|, \qquad (43)$$
where equation 42 follows by using Lemma 2, and equation 43 holds by using the equality
$$J(Y|U_{S^c,t}, t) = \sum_{k\in S^c} H_k^\dagger \Omega_{k,t} H_k + \Sigma_y^{-1}, \qquad (44)$$
the proof of which uses a connection between MMSE and Fisher information, as shown next.
For the proof of equation 44, first note that from the MMSE estimation of Gaussian random vectors El Gamal and Kim (2011), we have
$$Y = \mathbb{E}[Y|X_{S^c}] + Z_{S^c} = \sum_{k\in S^c} G_k X_k + Z_{S^c}, \qquad (45)$$
where G_k = Σ_{y|x_{S^c}} H_k^† Σ_k^{-1} and Z_{S^c} ∼ CN(0, Σ_{y|x_{S^c}}), with
$$\Sigma_{y|x_{S^c}}^{-1} = \Sigma_y^{-1} + \sum_{k\in S^c} H_k^\dagger \Sigma_k^{-1} H_k. \qquad (46)$$
Note that Z_{S^c} is independent of X_{S^c} due to the orthogonality principle of the MMSE and its Gaussian distribution. Hence, it is also independent of U_{S^c,t}. We have
$$\text{mmse}\Big(\sum_{k\in S^c} G_k X_k\,\Big|\,Y, U_{S^c,t}, t\Big) = \sum_{k\in S^c} G_k\,\text{mmse}(X_k|Y, U_{S^c,t}, t)\, G_k^\dagger \qquad (47)$$
$$= \Sigma_{y|x_{S^c}}\sum_{k\in S^c} H_k^\dagger\big(\Sigma_k^{-1} - \Omega_{k,t}\big)H_k \Sigma_{y|x_{S^c}}, \qquad (48)$$
where equation 47 follows since the cross terms are zero due to the Markov chain (U_{k,t}, X_k) −− Y −− (U_{K/k,t}, X_{K/k}), and equation 48 follows from equation 39 and the definition of G_k. Finally,
$$J(Y|U_{S^c,t}, t) = \Sigma_{y|x_{S^c}}^{-1} - \Sigma_{y|x_{S^c}}^{-1}\,\text{mmse}\Big(\sum_{k\in S^c} G_k X_k\,\Big|\,Y, U_{S^c,t}, t\Big)\,\Sigma_{y|x_{S^c}}^{-1} \qquad (49)$$
$$= \Sigma_{y|x_{S^c}}^{-1} - \sum_{k\in S^c} H_k^\dagger\big(\Sigma_k^{-1} - \Omega_{k,t}\big)H_k \qquad (50)$$
$$= \Sigma_y^{-1} + \sum_{k\in S^c} H_k^\dagger \Omega_{k,t} H_k, \qquad (51)$$
where equation 49 is due to Lemma 3, equation 50 is due to equation 48, and equation 51 follows from equation 46.
Now, let Ω̄_k := Σ_{t∈T} p(t)Ω_{k,t}. The rest of the converse proof follows by averaging over the time-sharing random variable to get
$$I(X_k; U_k|Y, T) \geq -\sum_{t\in\mathcal{T}} p(t)\log\big|I - \Sigma_k^{1/2}\Omega_{k,t}\Sigma_k^{1/2}\big| \geq -\log\big|I - \Sigma_k^{1/2}\bar{\Omega}_k\Sigma_k^{1/2}\big|, \qquad (52)$$
where equation 52 follows from the concavity of the log-det function and Jensen's inequality. Similarly to equation 52, from equation 43 and Jensen's inequality we have
$$I(Y; U_{S^c}|T) \leq \log\Big|\sum_{k\in S^c}\Sigma_y^{1/2} H_k^\dagger \bar{\Omega}_k H_k \Sigma_y^{1/2} + I\Big|. \qquad (53)$$
Finally, using equation 52 and equation 53 in equation 12, noting that Ω̄_k = Σ_{t∈T} p(t)Ω_{k,t} ⪯ Σ_k^{-1} since 0 ⪯ Ω_{k,t} ⪯ Σ_k^{-1}, and taking the union over Ω̄_k satisfying 0 ⪯ Ω̄_k ⪯ Σ_k^{-1}, completes the proof of the converse part and, hence, that of Theorem 2.
5.4 PROOF OF PROPOSITION 1
For simplicity of exposition, the proof is given for the case of K = 2 encoders; the proof for K > 2 follows similarly. By the definition of IR^sum_DIB, the relevance-complexity tuple (∆, R_sum) ∈ R_+^2 is achievable for some random variables Y, X_1, X_2, U_1, U_2 with joint pmf satisfying equation 13 if it holds that
$$\Delta \leq I(Y; U_1, U_2) \qquad (54)$$
$$\Delta \leq R_1 - I(X_1; U_1|Y) + I(Y; U_2) \qquad (55)$$
$$\Delta \leq R_2 - I(X_2; U_2|Y) + I(Y; U_1) \qquad (56)$$
$$\Delta \leq R_1 + R_2 - I(X_1; U_1|Y) - I(X_2; U_2|Y) \qquad (57)$$
$$R_1 + R_2 \leq R_{\text{sum}}. \qquad (58)$$
The application of Fourier-Motzkin elimination to project out R_1 and R_2 reduces the system of inequalities equation 54–equation 58 to the following system of inequalities:
$$\Delta \leq I(Y; U_1, U_2) \qquad (59)$$
$$\Delta \leq R_{\text{sum}} - I(X_1; U_1|Y) - I(X_2; U_2|Y) \qquad (60)$$
$$2\Delta \leq R_{\text{sum}} - I(X_1; U_1|Y) - I(X_2; U_2|Y) + I(Y; U_1) + I(Y; U_2). \qquad (61)$$
It follows from the Markov chain U_1 −− X_1 −− Y −− X_2 −− U_2 that I(Y; U_1, U_2) ≤ I(Y; U_1) + I(Y; U_2). Therefore, inequality equation 61 is redundant, as it is implied by equation 59 and equation 60. This completes the proof of Proposition 1.
5.5 PROOF OF PROPOSITION 2
Suppose that P* yields the maximum in equation 16. Then,
$$(1+s)\Delta_s = (1+sK)H(Y) + sR_s + \mathcal{L}_s(\mathbf{P}^*) \qquad (62)$$
$$= (1+sK)H(Y) + sR_s + \Big(-H(Y|U^*_{\mathcal{K}}) - s\sum_{k=1}^K\big[H(Y|U^*_k) + I(X_k; U^*_k)\big]\Big) \qquad (63)$$
$$= (1+sK)H(Y) + sR_s + \big(-H(Y|U^*_{\mathcal{K}}) - s(R_s - I(Y; U^*_{\mathcal{K}}) + KH(Y))\big) \qquad (64)$$
$$= (1+s)I(Y; U^*_{\mathcal{K}}) \qquad (65)$$
$$\leq (1+s)\Delta(R_s, P_{X_{\mathcal{K}},Y}), \qquad (66)$$
where equation 63 is due to the definition of L_s(P) in equation 18; equation 64 follows since we have Σ_{k=1}^K [I(X_k; U*_k) + H(Y|U*_k)] = R_s − I(Y; U*_K) + KH(Y) from the definition of R_s in equation 17; and equation 66 follows from the definition in equation 15.
Conversely, if P* is the solution to the maximization in the function ∆(R_sum, P_{X_K,Y}) in equation 15 such that ∆(R_sum, P_{X_K,Y}) = ∆_s, then ∆_s ≤ I(Y; U*_K) and ∆_s ≤ R_sum − Σ_{k=1}^K I(X_k; U*_k|Y), and we have, for any s ≥ 0, that
$$\Delta(R_{\text{sum}}, P_{X_{\mathcal{K}},Y}) = \Delta_s$$
$$\leq \Delta_s - (\Delta_s - I(Y; U^*_{\mathcal{K}})) - s\Big(\Delta_s - R_{\text{sum}} + \sum_{k=1}^K I(X_k; U^*_k|Y)\Big)$$
$$= I(Y; U^*_{\mathcal{K}}) - s\Delta_s + sR_{\text{sum}} - s\sum_{k=1}^K I(X_k; U^*_k|Y)$$
$$= H(Y) - s\Delta_s + sR_{\text{sum}} - H(Y|U^*_{\mathcal{K}}) - s\sum_{k=1}^K\big[I(X_k; U^*_k) + H(Y|U^*_k)\big] + sKH(Y) \qquad (67)$$
$$\leq H(Y) - s\Delta_s + sR_{\text{sum}} + \mathcal{L}^*_s + sKH(Y) \qquad (68)$$
$$= H(Y) - s\Delta_s + sR_{\text{sum}} + sKH(Y) - \big((1+sK)H(Y) + sR_s - (1+s)\Delta_s\big) \qquad (69)$$
$$= \Delta_s + s(R_{\text{sum}} - R_s), \qquad (70)$$
where in equation 67 we use Σ_{k=1}^K I(X_k; U_k|Y) = −KH(Y) + Σ_{k=1}^K [I(X_k; U_k) + H(Y|U_k)], which is due to the Markov chain U_k −− X_k −− Y −− (X_{K\k}, U_{K\k}); equation 68 follows since L*_s is the maximum over all possible distributions P (not necessarily the P* maximizing ∆(R_sum, P_{X_K,Y})); and equation 69 is due to equation 16.
Finally, equation 70 is valid for any R_sum ≥ 0 and s ≥ 0. Given s, and hence (∆_s, R_s), choosing R_sum = R_s yields ∆(R_s, P_{X_K,Y}) ≤ ∆_s. Together with equation 66, this completes the proof of Proposition 2.
5.6 PROOF OF LEMMA 1
The proof follows by deriving the following bounds. For any conditional pmf Q_{Y|Z}(y|z), y ∈ Y, z ∈ Z (e.g., Z = U_K or Z = U_k), proceeding similarly to equation 38 and averaging over Z, we have
$$H(Y|Z) = \mathbb{E}[-\log Q_{Y|Z}(Y|Z)] - D_{\text{KL}}(P_{Y|Z}\|Q_{Y|Z}). \qquad (71)$$
Similarly, we have
$$I(X_k; U_k) = H(U_k) - H(U_k|X_k) \qquad (72)$$
$$= \mathbb{E}[-\log Q_{U_k}(U_k)] - D_{\text{KL}}(P_{U_k}\|Q_{U_k}) - H(U_k|X_k) \qquad (73)$$
$$= \mathbb{E}\big[D_{\text{KL}}(P_{U_k|X_k}\|Q_{U_k})\big] - D_{\text{KL}}(P_{U_k}\|Q_{U_k}). \qquad (74)$$
Thus, we get
$$\mathcal{L}_s(\mathbf{P}) = \mathcal{L}_s^{\text{VB}}(\mathbf{P},\mathbf{Q}) + D_{\text{KL}}(P_{Y|U_{\mathcal{K}}}\|Q_{Y|U_{\mathcal{K}}}) + s\sum_{k=1}^K\big(D_{\text{KL}}(P_{Y|U_k}\|Q_{Y|U_k}) + D_{\text{KL}}(P_{U_k}\|Q_{U_k})\big) \geq \mathcal{L}_s^{\text{VB}}(\mathbf{P},\mathbf{Q}), \qquad (75)$$
where equation 75 holds by the non-negativity of relative entropy, and the equality is met if and only if Q* is as given by equation 21 and equation 22.
6 OTHER EXPERIMENTAL RESULTS (REGRESSION FOR UNKNOWN GAUSSIAN MODEL)
6.1 D-VIB ALGORITHM FOR VECTOR GAUSSIAN MODEL
For the vector Gaussian data model equation 14, the optimal distributions P and Q in equation 23 lie within the family of multivariate Gaussian distributions. Motivated by this observation, we consider the following parametrization for k ∈ K:
$$P_{\theta_k}(u_k|x_k) = \mathcal{N}(u_k; \mu^e_k, \Sigma^e_k) \qquad (76)$$
$$Q_{\phi_{\mathcal{K}}}(\hat{y}|u_{\mathcal{K}}) = \mathcal{N}(\hat{y}; \mu^d_{\mathcal{K}}, \Sigma^d_{\mathcal{K}}) \qquad (77)$$
$$Q_{\phi_k}(\hat{y}|u_k) = \mathcal{N}(\hat{y}; \mu^d_k, \Sigma^d_k) \qquad (78)$$
$$Q_{\varphi_k}(u_k) = \mathcal{N}(0, I), \qquad (79)$$
where µ^e_k, Σ^e_k are the outputs of a DNN f_{θ_k} with input X_k that encodes the observations into an n_{u_k}-dimensional Gaussian distribution, µ^d_K, Σ^d_K are the outputs of a DNN f_{φ_K} with inputs U_1, . . . , U_K sampled from P_{θ_k}(u_k|x_k), and µ^d_k, Σ^d_k are the outputs of a DNN f_{φ_k} with input U_k, k = 1, . . . , K.
With the above choice of parametric encoders and decoders, and using a single sample (m = 1), the empirical DIB cost in equation 29 is given for the sample (x_{1,i}, . . . , x_{K,i}, y_i) by
$$\mathcal{L}_{s,i}^{\text{emp}}(\boldsymbol{\theta},\boldsymbol{\phi},\boldsymbol{\varphi}) := -\frac{1}{2}\Big((y_i - \mu^d_{12,i})^T \Sigma^{d,-1}_{12,i}(y_i - \mu^d_{12,i}) + \log\det(\Sigma^d_{12,i})\Big)$$
$$- s\sum_{k=1}^K \frac{1}{2}\Big((y_i - \mu^d_{k,i})^T \Sigma^{d,-1}_{k,i}(y_i - \mu^d_{k,i}) + \log\det(\Sigma^d_{k,i})\Big)$$
$$- s\sum_{k=1}^K \frac{1}{2}\Big((\mu^e_{k,i})^T \mu^e_{k,i} + \log|\Sigma^{e,-1}_{k,i}| - n_{u_k} + \text{tr}\{\Sigma^e_{k,i}\}\Big) - \frac{n_y}{2}(1+sK)\log(2\pi),$$
where (µ^d_{12,i}, Σ^d_{12,i}) denote the output of the DNN f_{φ_K} for the i-th sample (x_{1,i}, . . . , x_{K,i}, y_i), and similarly for the other mean and covariance terms; and where we have used that each term in the empirical DIB cost equation 29 can be computed by noting that for a d-dimensional Gaussian pmf N(y; µ, Σ) we have
$$\log\mathcal{N}(y; \mu, \Sigma) = -\frac{1}{2}\Big((y-\mu)^T\Sigma^{-1}(y-\mu) + d\log(2\pi) + \log\det(\Sigma)\Big),$$
and that the KL divergence between two multivariate Gaussian pmfs P_1 ∼ N(µ_1, Σ_1) and P_2 ∼ N(µ_2, Σ_2) in R^d is
$$D_{\text{KL}}(P_1\|P_2) = \frac{1}{2}\Big((\mu_1-\mu_2)^T\Sigma_2^{-1}(\mu_1-\mu_2) + \log|\Sigma_2\Sigma_1^{-1}| - d + \text{tr}\{\Sigma_2^{-1}\Sigma_1\}\Big). \qquad (80)$$
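A direct numpy transcription (ours) of the KL divergence formula in equation 80, which is the form used in the third group of terms of the empirical cost above:

```python
import numpy as np

def gaussian_kl(mu1, S1, mu2, S2):
    """D_KL( N(mu1, S1) || N(mu2, S2) ) as in equation 80."""
    d = mu1.shape[0]
    S2_inv = np.linalg.inv(S2)
    diff = mu1 - mu2
    return 0.5 * (diff @ S2_inv @ diff
                  + np.linalg.slogdet(S2 @ np.linalg.inv(S1))[1]
                  - d + np.trace(S2_inv @ S1))
```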
The multivariate Gaussian parametrization of the encoders, decoders and prior distribution given by equation 76–equation 79 can be used for other data models that are not necessarily Gaussian. For example, it is particularly suitable for regression problems in which Y lies in a continuous space. It is also very often used in conjunction with VAE generative models Rezende et al. (2014); Kingma and Welling (2013).
6.2 REGRESSION FOR VECTOR GAUSSIAN DATA MODEL
Consider a distributed learning model with K = 2 encoders, each observing a noisy version of an n_y-dimensional Gaussian vector Y ∼ N(y; 0, I), as X_k = H_k Y + N_k, where H_k ∈ R^{n_k×n_y} and the noises are distributed as N_k ∼ N(0, I) for k = 1, 2.
For this model, the optimal relevance-complexity region can be computed using Theorem 2. In what follows, we evaluate the performance of our D-VIB algorithm of the previous section for regression. The algorithm is trained using a dataset of n i.i.d. samples {(X_{1,i}, X_{2,i}, Y_i)}_{i=1}^n from the described vector Gaussian data model. We train the DNNs for various values of the parameter s, using the multivariate Gaussian parametrization in equation 76–equation 79 with the DNN architecture shown in Table 3. Specifically, Encoder k, k = 1, 2, consists of three dense layers of 512 neurons each, followed by rectified linear unit (ReLU) activations. The output of encoder k is processed by a dense layer without nonlinear activation to generate µ^e_k and Σ^e_k of size 512 and 512 × 512, respectively. Each decoder consists of two dense layers of 512 neurons with ReLU activations. The outputs of decoders 1, 2 and 12 are each processed by a fully connected layer without activation to generate µ^d_k, Σ^d_k and µ^d_12, Σ^d_12, of sizes 2 and 2 × 2.
Figure 5 shows the optimal relevance-complexity region of tuples (∆, R_sum) obtained from Theorem 2 for a vector Gaussian model with K = 2 encoders, target variable dimension n_y = 1, and observation dimensions n_1 = n_2 = 3. A set of 40,000 samples is split among training (30,000 samples) and test (10,000 samples). The figure depicts all relevance-complexity pairs obtained by application of our D-VIB algorithm to this setting. The results are compared to the case of inference with known joint distribution (referred to as D-IB; see the next section) as well as to the case of centralized inference (C-IB). For the D-VIB algorithm, the DNN architecture for the coders is shown in Table 3. Figure 6 shows the evolution of the associated mean squared error (MSE) in the estimation of the label Y using our D-VIB algorithm. As can be seen from both figures, the performance of our D-VIB algorithm (which does not require knowledge of the joint label-feature distribution) is very close to that predicted by the theory, i.e., our Theorem 2.
Figure 7 shows similar curves for n_y = 2, n_1 = n_2 = 3 dimensions, for various sizes of the training dataset. As expected, larger training sets allow a more accurate prediction. Noteworthy, the performance during the training phase might be better than that of the centralized learning scenario; this is an indicator of overfitting. Related to this aspect, recall that although the D-VIB algorithm does not estimate the underlying distribution explicitly, intuitively it does so for the computation of the cost function. This is related to the fact that universal compressors also learn the actual distribution of the data that is being compressed. Recall that since the plug-in estimator of entropy is biased downward, estimates of the mutual information terms that are involved in the cost function are biased upward, which is an alternate explanation of the observed overfitting during the training phase.
Table 3: Used DNN architecture.
Encoder k: dense [512]-ReLU; dense [512]-ReLU; dense [512]-ReLU.
Latent space k: dense [256]-ReLU.
Decoder 12: dense [256]-ReLU.
Decoder k: dense [256]-ReLU.
7 DISTRIBUTED BLAHUT-ARIMOTO TYPE ALGORITHMS
7.1 DISCRETE-ALPHABET SETTING
In this section, we derive an iterative method to optimize the variational DIB cost function in equation 23 when the data model is discrete and the joint distribution P_{X_K,Y} is either known or a good estimate of it can be obtained from the training samples. In these cases, the maximizing distributions P, Q of the variational DIB cost in equation 23 can be efficiently found by an alternating optimization procedure over P and Q, similar to the expectation-maximization (EM) algorithm Dempster et al. (1977) and the standard Blahut-Arimoto (BA) method Blahut (1972). An extension to the vector Gaussian data model, which involves random variables with continuous alphabets, is also provided. The main idea of the algorithm is that, at iteration t, the optimal distributions P^(t) that maximize the variational D-IB bound L_s^VB(P, Q^(t)) for fixed Q^(t) can be found in closed form and, next, the maximizing pmfs Q^(t) for given P^(t) can also be found analytically. So, starting from an initialization P^(0) and Q^(0), the algorithm performs the following computations successively, in this order, until convergence:
$$\mathbf{P}^{(0)} \to \mathbf{Q}^{(0)} \to \mathbf{P}^{(1)} \to \dots \to \mathbf{P}^{(t)} \to \mathbf{Q}^{(t)} \to \dots \qquad (81)$$
We refer to this algorithm as the “Blahut-Arimoto Distributed Information Bottleneck Algorithm (BA-DIB)”. Algorithm 1 describes the steps taken by BA-DIB to successively maximize L_s^VB(P, Q) by solving a concave optimization problem over P and over Q at each iteration. We have the following lemma, whose proof follows essentially by using the log-sum inequality Cover and Thomas (1991) and the convexity of the mapping x ↦ x log x.
Lemma 4 The function L_s^VB(P, Q) is concave in P and in Q.
For fixed P^(t), the optimal Q^(t) maximizing the variational D-IB bound in equation 19 follows from Lemma 1 and is given by equation 21–equation 22. For fixed Q^(t), the optimal P^(t) can be found using the following lemma.
Lemma 5 For fixed Q, there exists a P that achieves the maximum max_P L_s^VB(P, Q), where P_{U_k|X_k} is given by
$$p^*(u_k|x_k) = \frac{q(u_k)\exp\big(-\psi_s(u_k, x_k)\big)}{\sum_{u_k\in\mathcal{U}_k} q(u_k)\exp\big(-\psi_s(u_k, x_k)\big)}, \qquad (82)$$
for u_k ∈ U_k and x_k ∈ X_k, k ∈ K, and where we define
$$\psi_s(u_k, x_k) := D_{\text{KL}}(P_{Y|x_k}\|Q_{Y|u_k}) + \frac{1}{s}\,\mathbb{E}_{U_{\mathcal{K}\setminus k}|x_k}\big[D_{\text{KL}}(P_{Y|U_{\mathcal{K}\setminus k},x_k}\|Q_{Y|U_{\mathcal{K}\setminus k},u_k})\big]. \qquad (83)$$
Proof: Due to its concavity, to maximize L_s^VB(P, Q) with respect to P for given Q, we add Lagrange multipliers λ_{x_k} ≥ 0 for each constraint Σ_{u_k∈U_k} p(u_k|x_k) = 1 with x_k ∈ X_k. For each s, λ_{x_k} ≥ 0 and p(u_k|x_k) can be explicitly found by solving the KKT conditions, e.g.,
$$\frac{\partial}{\partial p(u_k|x_k)}\left[\mathcal{L}_s^{\text{VB}}(\mathbf{P},\mathbf{Q}) + \sum_{x_k\in\mathcal{X}_k}\lambda_{x_k}\Big(\sum_{u_k\in\mathcal{U}_k} p(u_k|x_k) - 1\Big)\right] = 0.$$
This completes the proof.
Algorithm 1 BA-DIB training algorithm for discrete data
1: inputs: discrete pmf P_{X_1,...,X_K,Y}, parameter s ≥ 0.
2: output: optimal P*_{U_k|X_k}, pair (∆_s, R_s).
3: initialization: Set t = 0 and set P^(0) with p(u_k|x_k) = 1/|U_k| for u_k ∈ U_k, x_k ∈ X_k, k = 1, . . . , K.
4: repeat
5:   Compute Q^(t+1) using equation 21 and equation 22.
6:   Compute P^(t+1) using equation 82.
7:   t ← t + 1
8: until convergence.
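A numpy sketch (ours) of the inner encoder update of Algorithm 1 for the single-encoder case K = 1, where the expectation term of equation 83 vanishes and ψ_s(u, x) reduces to D_KL(P_{Y|x} ‖ Q_{Y|u}); array shapes and names are illustrative:

```python
import numpy as np

def ba_encoder_update(q_u, q_y_given_u, p_y_given_x):
    """Encoder update of equation 82 for K = 1.
    q_u: (U,) marginal over codewords; q_y_given_u: (U, Y) decoder;
    p_y_given_x: (X, Y) data model. Returns the (X, U) matrix p*(u|x)."""
    # psi[x, u] = sum_y p(y|x) * log( p(y|x) / q(y|u) )
    ratio = p_y_given_x[:, None, :] / q_y_given_u[None, :, :]
    psi = np.einsum('xy,xuy->xu', p_y_given_x, np.log(ratio))
    unnorm = q_u[None, :] * np.exp(-psi)
    return unnorm / unnorm.sum(axis=1, keepdims=True)
```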
7.1.1 CONVERGENCE
Algorithm 1 essentially falls into the class of Successive Upper-Bound Minimization (SUM) algorithms Razaviyayn et al. (2013), in which L_s^VB(P, Q) acts as a globally tight lower bound on L_s(P). Algorithm 1 provides a sequence P^(t) for each iteration t, which converges to a stationary point of the optimization problem equation 23.
Proposition 4 Every limit point of the sequence P^(t) generated by Algorithm 1 converges to a stationary point of equation 23.
Proof: Let Q*(P) = arg max_Q L_s^VB(P, Q). Using Lemma 1, for every P′ ≠ P, it holds that
$$\mathcal{L}_s^{\text{VB}}(\mathbf{P}, \mathbf{Q}^*(\mathbf{P}')) \leq \mathcal{L}_s^{\text{VB}}(\mathbf{P}, \mathbf{Q}^*(\mathbf{P})) = \mathcal{L}_s(\mathbf{P}). \qquad (84)$$
Since L_s(P) and L_s^VB(P, Q*(P′)) satisfy the assumptions of (Razaviyayn et al., 2013, Proposition 1), L_s^VB(P, Q*(P′)) satisfies A1-A4 in Razaviyayn et al. (2013). Convergence to a stationary point of equation 23 follows from (Razaviyayn et al., 2013, Theorem 1).
The self-consistent equations equation 21, equation 22 and equation 83 satisfied by any stationary point of the D-IB problem extend those of the standard point-to-point IB problem Globerson and Tishby (2004) to the distributed IB problem with K ≥ 2 encoders. In particular, note the additional divergence term in equation 83.
7.2 GAUSSIAN SETTING
Recall Algorithm 1. For finite-alphabet sources the updating rules of Q^(t+1) and P^(t+1) in Algorithm 1 are relatively easy, but they become infeasible for continuous-alphabet sources. We leverage the optimality of Gaussian test channels, shown in Theorem 2, to restrict the optimization of P to Gaussian distributions, which are easily represented by a finite set of parameters, namely mean and covariance. We show that if P^(t) are Gaussian distributions, then P^(t+1) are also Gaussian distributions, which can be computed with an efficient update algorithm of the representing parameters. In particular, if at time t the k-th distribution P^(t)_{U_k|X_k} is given by
$$U_k^t = A_k^t X_k + Z_k^t, \qquad (85)$$
where Z_k^t ∼ CN(0, Σ_{z_k^t}), we show that at t+1, for P^(t+1) updated as in equation 82, the encoder P^(t+1)_{U_k|X_k} corresponds to U_k^{t+1} = A_k^{t+1} X_k + Z_k^{t+1}, where Z_k^{t+1} ∼ CN(0, Σ_{z_k^{t+1}}) and Σ_{z_k^{t+1}}, A_k^{t+1} are updated as
$$\Sigma_{z_k^{t+1}} = \left(\Big(1+\frac{1}{s}\Big)\Sigma_{u_k^t|y}^{-1} - \frac{1}{s}\Sigma_{u_k^t|u_{\mathcal{K}\setminus k}^t}^{-1}\right)^{-1}, \qquad (86)$$
$$A_k^{t+1} = \Sigma_{z_k^{t+1}}\left(\Big(1+\frac{1}{s}\Big)\Sigma_{u_k^t|y}^{-1} A_k^t\big(I - \Sigma_{x_k|y}\Sigma_{x_k}^{-1}\big) - \frac{1}{s}\Sigma_{u_k^t|u_{\mathcal{K}\setminus k}^t}^{-1} A_k^t\big(I - \Sigma_{x_k|u_{\mathcal{K}\setminus k}^t}\Sigma_{x_k}^{-1}\big)\right). \qquad (87)$$
The detailed update procedure is given in Algorithm 2 (see the following section for the details of the derivations).
Algorithm 2 BA-DIB algorithm for the Gaussian vector D-IB
1: inputs: covariance Σ_{y,x_1,...,x_K}, parameter s ≥ 0.
2: output: optimal pairs (A*_k, Σ_{z*_k}), k = 1, . . . , K.
3: initialization: Set randomly A^0_k and Σ_{z^0_k} ≻ 0, k ∈ K.
4: repeat
5:   Compute Σ_{x_k|u^t_{K\k}} and update, for k ∈ K,
$$\Sigma_{u_k^t|y} = A_k^t \Sigma_{x_k|y} A_k^{t,\dagger} + \Sigma_{z_k^t}, \qquad (88)$$
$$\Sigma_{u_k^t|u_{\mathcal{K}\setminus k}^t} = A_k^t \Sigma_{x_k|u_{\mathcal{K}\setminus k}^t} A_k^{t,\dagger} + \Sigma_{z_k^t}. \qquad (89)$$
6:   Compute Σ_{z_k^{t+1}} as in equation 86 for k ∈ K.
7:   Compute A_k^{t+1} as in equation 87 for k ∈ K.
8:   t ← t + 1.
9: until convergence.
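A numpy sketch (ours; real-valued matrices, so † becomes a transpose) of steps 5–7 of Algorithm 2 for a single encoder k, taking the conditional covariances Σ_{x_k|y} and Σ_{x_k|u^t} of the other encoders' outputs as precomputed inputs:

```python
import numpy as np

def ba_gaussian_step(A, Sz, Sxy, Sxu, Sx, s):
    """One update of (A_k, Sigma_{z_k}) via equations 86-89.
    A, Sz: current A_k^t and Sigma_{z_k^t}; Sxy: Sigma_{x_k|y};
    Sxu: conditional covariance of x_k given the other U's; Sx: Sigma_{x_k}."""
    inv = np.linalg.inv
    Su_y = A @ Sxy @ A.T + Sz                                    # equation 88
    Su_u = A @ Sxu @ A.T + Sz                                    # equation 89
    Sz_new = inv((1 + 1 / s) * inv(Su_y) - (1 / s) * inv(Su_u))  # equation 86
    I = np.eye(Sx.shape[0])
    A_new = Sz_new @ ((1 + 1 / s) * inv(Su_y) @ A @ (I - Sxy @ inv(Sx))
                      - (1 / s) * inv(Su_u) @ A @ (I - Sxu @ inv(Sx)))  # equation 87
    return A_new, Sz_new
```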
7.2.1 DERIVATION OF ALGORITHM 2
We derive the update rules of Algorithm 2 and show that the Gaussian distribution is invariant under the update rules of Algorithm 1, in line with Theorem 2. First, we recall that if (X_1, X_2) are jointly Gaussian, then
$$P_{X_2|X_1=x_1} = \mathcal{CN}(\mu_{x_2|x_1}, \Sigma_{x_2|x_1}), \qquad (90)$$
where µ_{x_2|x_1} := K_{x_2|x_1} x_1, with K_{x_2|x_1} := Σ_{x_2,x_1}Σ_{x_1}^{-1}.
Then, for Q^(t+1) computed as in equation 21 and equation 22 from P^(t), which is a set of Gaussian distributions, we have
$$Q^{(t+1)}_{Y|u_k} = \mathcal{CN}(\mu_{y|u_k^t}, \Sigma_{y|u_k^t}), \qquad Q^{(t+1)}_{Y|u_{\mathcal{K}}} = \mathcal{CN}(\mu_{y|u_{\mathcal{K}}^t}, \Sigma_{y|u_{\mathcal{K}}^t}).$$
Next, we look at the update of P^(t+1) as in equation 82 from the given Q^(t+1). First, we have that p(u_k^t) is the marginal of U_k^t, given by U_k^t ∼ CN(0, Σ_{u_k^t}), where Σ_{u_k^t} = A_k^t Σ_{x_k} A_k^{t,H} + Σ_{z_k^t}.
Then, to compute ψ_s(u_k^t, x_k), first we note that
$$\mathbb{E}_{U_{\mathcal{K}\setminus k}|x_k}\big[D_{\text{KL}}(P_{Y|U_{\mathcal{K}\setminus k},x_k}\|Q_{Y|U_{\mathcal{K}\setminus k},u_k})\big] = D_{\text{KL}}(P_{Y,U_{\mathcal{K}\setminus k}|x_k}\|Q_{Y,U_{\mathcal{K}\setminus k}|u_k}) - D_{\text{KL}}(P_{U_{\mathcal{K}\setminus k}|x_k}\|Q_{U_{\mathcal{K}\setminus k}|u_k}), \qquad (91)$$
and that for two generic multivariate Gaussian distributions P_1 ∼ CN(µ_1, Σ_1) and P_2 ∼ CN(µ_2, Σ_2) in C^N, the KL divergence is computed as in equation 80.
Applying equation 91 and equation 80 in equation 83, and noting that all involved distributions are Gaussian, it follows that ψ_s(u_k^t, x_k) is a quadratic form. Then, since p(u_k^t) is Gaussian, the product log(p(u_k^t) exp(−ψ_s(u_k^t, x_k))) is also a quadratic form, and identifying constant, first- and second-order terms, we can write
$$\log p^{(t+1)}(u_k|x_k) = Z(x_k) + (u_k - \mu_{u_k^{t+1}|x_k})^H \Sigma_{z_k^{t+1}}^{-1}(u_k - \mu_{u_k^{t+1}|x_k}), \qquad (92)$$
where Z(x_k) is a normalization term independent of u_k,
$$\Sigma_{z_k^{t+1}}^{-1} = \Sigma_{u_k^t}^{-1} + K_{y|u_k^t}^H \Sigma_{y|u_k^t}^{-1} K_{y|u_k^t} + \frac{1}{s} K_{y u_{\mathcal{K}\setminus k}^t|u_k^t}^H \Sigma_{y u_{\mathcal{K}\setminus k}^t|u_k^t}^{-1} K_{y u_{\mathcal{K}\setminus k}^t|u_k^t} - \frac{1}{s} K_{u_{\mathcal{K}\setminus k}^t|u_k^t}^H \Sigma_{u_{\mathcal{K}\setminus k}^t|u_k^t}^{-1} K_{u_{\mathcal{K}\setminus k}^t|u_k^t}, \qquad (93)$$
and
$$\mu_{u_k^{t+1}|x_k} = \Sigma_{z_k^{t+1}}\Big(K_{y|u_k^t}^H \Sigma_{y|u_k^t}^{-1} \mu_{y|x_k} + \frac{1}{s} K_{y u_{\mathcal{K}\setminus k}^t|u_k^t}^H \Sigma_{y u_{\mathcal{K}\setminus k}^t|u_k^t}^{-1} \mu_{y u_{\mathcal{K}\setminus k}^t|x_k} - \frac{1}{s} K_{u_{\mathcal{K}\setminus k}^t|u_k^t}^H \Sigma_{u_{\mathcal{K}\setminus k}^t|u_k^t}^{-1} \mu_{u_{\mathcal{K}\setminus k}^t|x_k}\Big). \qquad (94)$$
This shows that p^(t+1)(u_k|x_k) is a multivariate Gaussian distribution and that U_k^{t+1}|{X_k = x_k} is also multivariate Gaussian, distributed as CN(µ_{u_k^{t+1}|x_k}, Σ_{z_k^{t+1}}).
Next, we simplify equation 93 and equation 94 to obtain the update rules equation 86 and equation 87. From the matrix inversion lemma, similarly to Chechik et al. (Feb. 2005), for (X_1, X_2) jointly Gaussian we have
$$\Sigma_{x_2|x_1}^{-1} = \Sigma_{x_2}^{-1} + K_{x_1|x_2}^H \Sigma_{x_1|x_2}^{-1} K_{x_1|x_2}. \qquad (95)$$
Applying equation 95 in equation 93, we have
$$\Sigma_{z_k^{t+1}}^{-1} = \Sigma_{u_k^t|y}^{-1} + \frac{1}{s}\Sigma_{u_k^t|y u_{\mathcal{K}\setminus k}^t}^{-1} - \frac{1}{s}\Sigma_{u_k^t|u_{\mathcal{K}\setminus k}^t}^{-1} \qquad (96)$$
$$= \Big(1+\frac{1}{s}\Big)\Sigma_{u_k^t|y}^{-1} - \frac{1}{s}\Sigma_{u_k^t|u_{\mathcal{K}\setminus k}^t}^{-1}, \qquad (97)$$
where equation 97 is due to the Markov chain U_k −− Y −− U_{K\k}.
Then, also from the matrix inversion lemma, we have for jointly Gaussian (X_1, X_2),
$$\Sigma_{x_2|x_1}^{-1}\Sigma_{x_2,x_1}\Sigma_{x_1}^{-1} = \Sigma_{x_2}^{-1}\Sigma_{x_2,x_1}\Sigma_{x_1|x_2}^{-1}. \qquad (98)$$
Applying equation 98 to equation 94, for the first term in equation 94 we have
$$K_{y|u_k^t}^H \Sigma_{y|u_k^t}^{-1} \mu_{y|x_k} = \Sigma_{u_k^t|y}^{-1}\Sigma_{u_k^t,y}\Sigma_y^{-1}\mu_{y|x_k} \qquad (99)$$
$$= \Sigma_{u_k^t|y}^{-1} A_k^t \Sigma_{x_k,y}\Sigma_y^{-1}\Sigma_{y,x_k}\Sigma_{x_k}^{-1} x_k = \Sigma_{u_k^t|y}^{-1} A_k^t\big(I - \Sigma_{x_k|y}\Sigma_{x_k}^{-1}\big)x_k, \qquad (100)$$
where Σ_{u_k^t,y} = A_k^t Σ_{x_k,y}, and equation 100 is due to the definition of Σ_{x_k|y}.
Similarly, for the second term in equation 94, we have
$$K_{y u_{\mathcal{K}\setminus k}^t|u_k^t}^H \Sigma_{y u_{\mathcal{K}\setminus k}^t|u_k^t}^{-1} \mu_{y u_{\mathcal{K}\setminus k}^t|x_k} = \Sigma_{u_k^t|y u_{\mathcal{K}\setminus k}^t}^{-1} A_k^t\big(I - \Sigma_{x_k|y u_{\mathcal{K}\setminus k}^t}\Sigma_{x_k}^{-1}\big)x_k \qquad (101)$$
$$= \Sigma_{u_k^t|y}^{-1} A_k^t\big(I - \Sigma_{x_k|y}\Sigma_{x_k}^{-1}\big)x_k, \qquad (102)$$
where we use Σ_{u_k^t, y u_{K\k}^t} = A_k^t Σ_{x_k, y u_{K\k}^t}, and equation 102 is due to the Markov chain U_k −− Y −− U_{K\k}.
For the third term in equation 94,
$$K_{u_{\mathcal{K}\setminus k}^t|u_k^t}^H \Sigma_{u_{\mathcal{K}\setminus k}^t|u_k^t}^{-1} \mu_{u_{\mathcal{K}\setminus k}^t|x_k} = \Sigma_{u_k^t|u_{\mathcal{K}\setminus k}^t}^{-1} A_k^t\big(I - \Sigma_{x_k|u_{\mathcal{K}\setminus k}^t}\Sigma_{x_k}^{-1}\big)x_k. \qquad (103)$$
Equation 87 follows by noting that µ_{u_k^{t+1}|x_k} = A_k^{t+1} x_k and that, from equation 94, A_k^{t+1} can be identified as in equation 87.
Finally, we note that due to equation 85, Σ |
1. What is the focus and contribution of the paper on multi-view learning?
2. What are the strengths of the paper, particularly in its theoretical analysis?
3. Do you have any questions regarding the paper's notation and clarity?
4. How does the reviewer assess the comprehensiveness and comparative nature of the experimental results?
5. Are there any concerns about the implementation and practicality of the proposed algorithm? | Review | Review
The paper extended the Gaussian Information Bottleneck method to the case of multi-view learning and provided a variational bound for the accuracy optimization with a constraint on the sum complexity. It also proposed an algorithm to learn the distributed representation without any prior knowledge of the data distribution.
The multi-view learning problem has been quite well studied in the literature. The paper reformulated the multi-view learning problem as a Bayesian inference problem and provided a solid analysis of it.
The writing of the paper was pretty hard for me to follow, with a lot of notation that is not defined clearly.
* For example, I can roughly guess that U in Theorem 1 represents the learned descriptors, but what is the variable T in Theorem 1?
* What is \Omega in Theorem 2?
The experimental results don't look very comprehensive at all, as the method was mostly compared with variations of the proposed algorithm, and the comparison doesn't include any other multi-view learning algorithms.
The algorithms in the experimental results are not very clearly defined. I don't see much explanation of what exactly D-VIB and C-VIB are. There is some formulation of the algorithm in Section 3.4, but it only gives a loss function briefly. I'm not sure many practitioners will be able to implement this algorithm from the description here.
ICLR | Title
An Information Theoretic Approach to Distributed Representation Learning
Abstract
The problem of distributed representation learning is one in which multiple sources of information X1, . . . , XK are processed separately so as to extract useful information about some statistically correlated ground truth Y . We investigate this problem from informationtheoretic grounds. For both discrete memoryless (DM) and memoryless vector Gaussian models, we establish fundamental limits of learning in terms of optimal tradeoffs between relevance and complexity. We also develop a variational bound on the optimal tradeoff that generalizes the evidence lower bound (ELBO) to the distributed setting. Furthermore, we provide a variational inference type algorithm that allows to compute this bound and in which the mappings are parametrized by neural networks and the bound approximated by Markov sampling and optimized with stochastic gradient descent. Experimental results on synthetic and real datasets are provided to support the efficiency of the approaches and algorithms which we develop in this paper.
1 INTRODUCTION
Let a measurable variable X ∈ X and a target variable Y ∈ Y with unknown joint distribution PX,Y be given. In the classic problem of statistical learning, one wishes to infer an accurate predictor of the target variable Y ∈ Y based on observed realizations of X ∈ X . That is, for a given class F of admissible predictors φ : X → Ŷ and an additive loss function ` : Y → Ŷ that measures discrepancies between true values and their estimated fits, one aims at finding the mapping φ? ∈ F that minimizes the expected risk
CPX,Y (φ, `) = EPX,Y [`(Y, φ(X))]. (1)
Because the joint distribution PX,Y is unknown, in practice the risk equation 1 (also called population risk) cannot be computed directly; and, in the standard approach, one usually resorts to choosing the predictor with minimal risk on a training dataset consisting of n labeled samples {(xi, yi)}ni=1 that are drawn independently from the unknown joint distribution PX,Y . Also, it is important to restrict the set F of admissible predictors to a low-complexity class to prevent overfitting. This leads to the abstract inference problem shown in Figure 1.
In this paper, we study a generalization of this problem in which the prediction is to be performed in a distributed manner. The model is shown in Figure 2. Here, the prediction of the target variable Y ∈ Y is to be performed on the basis of samples of statistically correlated random variables (X1, . . . , XK) that are observed each at a distinct predictor. We investigate this problem in the case in which the loss function `(·) is the logarithmic-loss fidelity measure, given by
ℓ_log(y, ŷ) = log(1/ŷ(y)), (2)
where ŷ(·) designates a probability distribution on Y and ŷ(y) is the value of this distribution evaluated for the outcome y ∈ Y. The choice of a "good" loss function is often controversial in statistical learning theory, and although a complete and rigorous justification of the usage of logarithmic loss as a fidelity measure in learning theory is still awaited, partial explanations appeared in Jiao et al. (2015) and, especially, in Painsky and Wornell (2018), where it is shown that, for binary classification problems, minimizing the logarithmic loss actually minimizes an upper bound on any loss function that is smooth, proper (i.e., unbiased and Fisher consistent) and convex. Also, we constrain the complexity of the predictors by using mutual information as a regularizer term. This is in line with recent works Xu and Raginsky (2017); Russo and Zou (2015) that show that the generalization error can be upper-bounded using the mutual information between the input dataset and the output of the predictor – see also Bousquet and Elisseeff (2002); Shalev-Shwartz et al. (2010), where the stability of an algorithm is controlled by constraining the mutual information between its input and output.
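As a concrete illustration (our own minimal sketch, not from the paper; the label set and the predicted distribution are hypothetical), the logarithmic loss of equation 2 for a soft predictor over a finite label set can be computed as:

```python
import numpy as np

def log_loss(y, y_hat):
    """Logarithmic loss of equation 2: log(1 / y_hat(y)) for a soft prediction y_hat."""
    return -np.log(y_hat[y])

y_hat = np.array([0.7, 0.2, 0.1])   # a predicted distribution over |Y| = 3 labels
print(log_loss(0, y_hat))           # ~0.357: confident and correct -> small loss
print(log_loss(2, y_hat))           # ~2.303: confident and wrong  -> large loss
```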
1.1 AN EXAMPLE: MULTI-VIEW LEARNING
In many data analytics problems, data is collected from various sources of information or feature extractors; and is intrinsically heterogeneous. For example, an image can be identified by its color or texture features; and a document may contain text and images. Conventional machine learning approaches concatenate all available data into one big row vector (or matrix) on which a suitable algorithm is then applied. Treating different observations as a single source might cause overfitting and is not physically meaningful because each group of data may have different statistical properties. Alternatively, one may partition the data into groups according to sample homogeneity, with each group of data regarded as a separate view. This paradigm, termed multi-view learning Xu et al. (2013), has received growing interest; and various algorithms exist, sometimes under references such as co-training Blum and Mitchell (1998); Dhillon et al. (2011); Kumar and Daumé (2011); Gönen and Alpaydın (2011), multiple kernel learning Gönen and Alpaydın (2011) and subspace learning Jia et al. (2010). By using distinct encoder mappings to represent distinct groups of data, and jointly optimizing over all mappings to remove redundancy, multi-view learning offers a degree of flexibility that is not only desirable in practice but is likely to result in better learning capability. Actually, as shown in Vapnik (2013), local learning algorithms produce fewer errors than global ones. Viewing the problem as one of function approximation, the intuition is that it is usually difficult to find a single function that has good predictability properties over the entire data space.
1.2 INFORMAL SUMMARY OF RESULTS
In this paper, first we characterize the optimal tradeoff between relevance and complexity for the distributed learning model of Figure 2 for both discrete memoryless (DM) and memoryless vector Gaussian models. While the result for the discrete data model (Theorem 1) is not difficult to establish using connections with Courtade and Weissman (2014, Appendix B) which we explicit here, the result for the multivariate Gaussian data model (Theorem 2), which provides a sharp analytic characterization of optimal tradeoffs, is new and non-trivial (the proof of the converse part is not straightforward and was missing before this work in both theoretic learning and information theoretic communities including in the scalar case). Second, we develop a variational bound on the optimal tradeoff that can be seen as a generalization of the ELBO and the β-VAE criteria Higgins et al. (2016) to the distributed setting. Furthermore, for both DM and Gaussian models, we also provide a variational inference type algorithm which is parametrized by neural networks and allows to compute the developed variational bound when the data distribution is not known. Specifically, the main contributions of this paper are:
• In Section 3.2, we find an explicit analytic characterization of optimal tradeoffs between relevance and complexity for the memoryless vector Gaussian model. The result generalizes the Gaussian Information Bottleneck method of Globerson and Tishby (2004); Chechik et al. (Feb. 2005) to the distributed learning scenario.
• In Section 3.3, we study the problem of maximizing relevance under a constraint on the sum complexity for which we establish a variational bound which generalizes the ELBO and the β-VAE criteria to the distributed setting.
• Section 3.4 is algorithmic-oriented. We develop a variational inference type algorithm which enables computation of the bound. This algorithm is obtained by parametrizing the encoders, the decoder, and the prior distributions via DNNs and using Monte-Carlo sampling. Also, it makes usage of Kingma et al.'s re-parametrization trick Kingma and Welling (2013) and can be seen as a generalization of the variational information bottleneck algorithm in Alemi et al. (2017) to the distributed setting.
• Section 4 contains some experimental results on real datasets which show the efficiency of the approaches and algorithms that we develop in this paper.
Most relevant to this paper is the single-encoder Information Bottleneck (IB) method of Tishby et al. (1999) which readily and elegantly captures the above mentioned viewpoint of seeking the right balance between data fit and generalization by using the mutual information both as a cost function and as a regularizer term. Thus, the results of this paper can be seen as a generalization of those of Tishby et al. (1999) for the DM model and Globerson and Tishby (2004); Chechik et al. (Feb. 2005) for the Gaussian model to the distributed learning setting.
Remark: Due to space constraints, the proofs of the results of this paper are deferred to the appendices section, which also contains additional experimental results.
1.3 NOTATION
Throughout, upper case letters denote random variables, e.g., X; lower case letters denote realizations of random variables, e.g., x; and calligraphic letters denote sets, e.g., X. The cardinality of a set is denoted by |X|. For a random variable X with probability mass function (pmf) P_X, we use P_X(x) = p(x), x ∈ X, for short. Boldface upper case letters denote vectors or matrices, e.g., X, where context should make the distinction clear. For random variables (X_1, X_2, ...) and a set of integers K ⊆ N, X_K denotes the set of random variables with indices in the set K, i.e., X_K = {X_k : k ∈ K}. If K = ∅, X_K = ∅. For k ∈ K we let X_{K/k} = (X_1, ..., X_{k−1}, X_{k+1}, ..., X_K), and assume that X_0 = X_{K+1} = ∅. Also, for zero-mean random vectors X and Y, the quantities Σ_x, Σ_{x,y} and Σ_{x|y} denote respectively the covariance matrix of the vector X, the cross-covariance matrix of the pair (X, Y), and the conditional covariance matrix of X given Y. Finally, for two probability measures P_X and Q_X on the random variable X ∈ X, the relative entropy or Kullback-Leibler divergence is denoted as D_KL(P_X‖Q_X).
2 FORMAL PROBLEM FORMULATION
Let K ≥ 2 and (X1, . . . , XK , Y ) be a tuple of random variables with a given joint probability mass function (pmf) PX1,...,XK ,Y (x1, . . . , xK , y) for (x1, . . . , xK) ∈ X1 × . . .×XK and y ∈ Y , where Xk designates the alphabet of Xk and Y that of Y . Throughout, we assume that the Markov chain
X_k −◦− Y −◦− X_{K/k} (3)

holds for all k ∈ K. That is, the joint pmf factorizes as

P_{X_1,...,X_K,Y}(x_1, ..., x_K, y) = P_Y(y) ∏_{k=1}^K P_{X_k|Y}(x_k|y). (4)
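For concreteness, a pmf with the factorization of equation 4 can be simulated by first drawing Y and then drawing each X_k conditionally independently given Y. The sketch below (our own; the alphabet sizes and Dirichlet-random pmfs are hypothetical) makes the Markov structure of equation 3 explicit.

```python
import numpy as np

rng = np.random.default_rng(0)
K, Y_card, X_card = 2, 4, 8

P_Y = rng.dirichlet(np.ones(Y_card))                                            # P_Y(y)
P_X_given_Y = [rng.dirichlet(np.ones(X_card), size=Y_card) for _ in range(K)]   # P_{Xk|Y}

def sample(n):
    """Draw n i.i.d. samples from the factorization of equation 4."""
    y = rng.choice(Y_card, size=n, p=P_Y)
    xs = [np.array([rng.choice(X_card, p=P_X_given_Y[k][yi]) for yi in y])
          for k in range(K)]
    return xs, y

(x1, x2), y = sample(5)   # X1 and X2 are conditionally independent given Y
```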
The variable Y is a target variable, and we seek to characterize how accurately it can be predicted from a measurable random vector (X_1, ..., X_K) when the components of this vector are processed separately, each by a distinct encoder. More specifically, let {(X_{1,i}, ..., X_{K,i}, Y_i)}_{i=1}^n be a collection of n independent copies of (X_1, ..., X_K, Y). Encoder k ∈ K only observes the sequence X_k^n and generates a description J_k = φ_k(X_k^n) according to some mapping

φ_k : X_k^n → M_k^{(n)}, (5)

where M_k^{(n)} is an arbitrary set of descriptions. The range of allowable description sets will be specified below. A decoder ψ(·) collects all descriptions J_K = (J_1, ..., J_K) and returns an estimate Ŷ^n of Y^n as

ψ : M_1^{(n)} × ... × M_K^{(n)} → Ŷ^n. (6)
The relevance of the estimation Ŷ^n is measured in terms of the information that the descriptions φ_1(X_1^n), ..., φ_K(X_K^n) collectively preserve about Y^n, as measured by Shannon mutual information¹

∆^{(n)}(P_{X_K,Y}) = (1/n) ∑_{y^n, x_1^n, ..., x_K^n} P(y^n) ∏_{k=1}^K P(x_k^n|y^n) log [ P(y^n, ψ(φ_1(x_1^n), ..., φ_K(x_K^n))) / (P(y^n) P(ψ(φ_1(x_1^n), ..., φ_K(x_K^n)))) ]
:= (1/n) I_{P_{X_K,Y}}(Y^n; Ŷ^n), (7)

¹Alternatively, the relevance could be defined in a more operational manner by the average logarithmic loss distortion or error E_{P_{X_K,Y}}[ℓ_log(Y^n, Ŷ^n)] = H(Y^n|Ŷ^n).

where Ŷ^n = ψ(φ_1(X_1^n), ..., φ_K(X_K^n)) and the subscript P_{X_K,Y} indicates that the mutual information is computed under the joint distribution P_{X_K,Y}.
There are various ways to control the complexity of the encoding functions {φ_k}_{k=1}^K. In this paper, we do so by restricting their ranges. This is known as the minimum description length complexity measure Hinton and van Camp (1993). Specifically, the mapping φ_k(·) at Encoder k ∈ K needs to satisfy

R_k ≥ (1/n) log |φ_k(X_k^n)| for all X_k^n ∈ X_k^n. (8)
Definition 1 A tuple (∆, R_1, ..., R_K) is said to be achievable if there exists an integer n, a family of encoding mappings {φ_k}_{k=1}^K and a decoder mapping ψ such that

∆ ≤ (1/n) I_{P_{X_K,Y}}(Y^n; ψ(φ_1(X_1^n), ..., φ_K(X_K^n))), (9)
R_k ≥ (1/n) log |φ_k(X_k^n)| for all k ∈ K. (10)

The relevance-complexity region IR_DIB is given by the closure of all achievable tuples (∆, R_1, ..., R_K).
In some cases, for given R_K = (R_1, ..., R_K), for ease of exposition we will be content with the relevance-complexity function ∆(R_K, P_{X_K,Y}) defined as

∆(R_K, P_{X_K,Y}) = max_{{φ_k}_{k=1}^K, ψ} ∆^{(n)}(P_{X_K,Y}), (11)

where the maximization is subject to equation 8.
3 MAIN RESULTS
3.1 DISCRETE MEMORYLESS DATA MODEL
The following theorem (the proof of which can be found in the appendices section) provides a computable characterization of the relevance-complexity region IR_DIB. The result can be seen as a generalization of the single-encoder IB of Tishby et al. (1999) to the distributed learning model with K encoders.
Theorem 1 The relevance-complexity region IR_DIB of the distributed learning problem with P_{X_K,Y} for which the Markov chain equation 3 holds is given by the union of all tuples (∆, R_1, ..., R_K) ∈ R_+^{K+1} that satisfy, for all S ⊆ K,

∆ ≤ ∑_{k∈S} [R_k − I(X_k; U_k|Y, T)] + I(Y; U_{S^c}|T), (12)

for some set of pmfs P := {P_{U_1|X_1,T}, ..., P_{U_K|X_K,T}, P_T} with joint distribution of the form

P_T(t) P_Y(y) ∏_{k=1}^K P_{X_k|Y}(x_k|y) ∏_{k=1}^K P_{U_k|X_k,T}(u_k|x_k, t). (13)
Remark 1 In Theorem 1, the random variable T stands for a convexification of the region, i.e., a convex combination of achievable relevance-complexity tuples is itself achievable. For given T = t, the result of Theorem 1 comprises the optimization over K conditional distributions {P_{U_k|X_k,t}}. For k ∈ K, the conditional distribution P_{U_k|X_k,t} represents a stochastic encoding of the feature X_k into a latent variable U_k. Intuitively, the latent variable U_k should capture all the relevant information about Y that is contained in X_k and is non-redundant with that carried by {U_i}_{i≠k}. The requirement of non-redundancy is mandated by the need to operate at the minimum possible complexity at which a desired relevance level is achievable (recall that minimum complexity, as expressed by the algorithm's input-output mutual information, translates directly into better generalization capability). Collectively, however, the set of all latent variables (U_1, ..., U_K) should be expressive enough to reproduce the target variable Y to within the desired relevance level.
Remark 2 Like the single-encoder IB problem of Tishby et al. (1999) and an increasing number of works that followed, including Courtade and Weissman (2014, Section III-F), our approach here is asymptotic. In addition to leading to an exact characterization, the result also readily provides a lower bound on the performance in the non-asymptotic (e.g., one shot) setting. For the latter setting, known approaches (e.g., the functional representation lemma of Li and El Gamal (2018)) would lead only to non-matching inner and outer bounds on the region of optimal tradeoff pairs, as is the case even for the single-encoder setting Li et al. (2018).
3.2 MEMORYLESS VECTOR GAUSSIAN DATA MODEL
We now turn to a continuous-alphabet setting. Here, (X_1, ..., X_K, Y) is a zero-mean Gaussian random vector such that

X_k = H_k Y + N_k for all k ∈ K, (14)

where H_k ∈ C^{n_k×n_y} models the linear model connecting the target variable Y ∈ C^{n_y} to the observation at encoder k, and N_k ∈ C^{n_k}, k = 1, ..., K, is the noise vector at encoder k, assumed to be Gaussian with zero mean and covariance matrix Σ_k, and independent from all other noises and the target variable Y. We denote by Σ_y the covariance matrix of the target vector Y ∈ C^{n_y}.
For this model, we find an explicit analytic characterization of optimal tradeoffs between relevance and complexity. The proof relies on deriving an outer bound on the region described by equation 12, and showing that it is achievable with Gaussian distribution, with no time-sharing. In doing so, we use techniques that rely on the de Bruijn identity and the properties of Fisher information and minimum mean square error (MMSE).
Theorem 2 The relevance-complexity region IR_GDIB for the vector Gaussian model is given by the union of all tuples (∆, R_1, ..., R_K) that satisfy, for all S ⊆ K,

∆ ≤ ∑_{k∈S} [R_k + log|I − Σ_k^{1/2} Ω_k Σ_k^{1/2}|] + log|∑_{k∈S^c} Σ_y^{1/2} H_k^† Ω_k H_k Σ_y^{1/2} + I|,

for some 0 ⪯ Ω_k ⪯ Σ_k^{-1}.
Proof: The proof of the direct part follows by evaluating the region of Theorem 1, which can be extended to the case of continuous alphabets using standard discretization (quantization) arguments, with the choices T = ∅ and p(u_k|x_k, t) = CN(x_k, Σ_k^{1/2}(Ω_k − I)Σ_k^{1/2}). The main contribution in the proof is the converse part. This proof is technical and rather lengthy and, for this reason, is deferred to the appendices section.
In the special case in which K = 1, the result of Theorem 2 recovers that by Globerson and Tishby (2004) (see also Chechik et al. (Feb. 2005)) which establishes the optimal relevance-complexity tradeoff of the single-encoder Gaussian IB problem.
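To make Theorem 2 concrete, the sketch below (ours; the dimensions, random channel matrices and the scalar choice Ω_k = ωI are hypothetical, with Σ_k = I and Σ_y = I for simplicity) evaluates the right-hand side of the bound over all subsets S for K = 2 and takes the minimum, which gives the relevance achievable at the chosen complexities for that Ω.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
ny, nk, K = 2, 3, 2
H = [rng.standard_normal((nk, ny)) for _ in range(K)]   # Sigma_k = I, Sigma_y = I
R = [2.0, 2.0]                                          # complexities (nats)
omega = 0.8                                             # Omega_k = omega * I, 0 <= omega < 1

def rhs(S):
    """Right-hand side of the Theorem 2 bound for a subset S of {0, ..., K-1}."""
    Sc = [k for k in range(K) if k not in S]
    rate_term = sum(R[k] + nk * np.log(1.0 - omega) for k in S)
    M = sum((omega * H[k].T @ H[k] for k in Sc), start=np.zeros((ny, ny))) + np.eye(ny)
    return rate_term + np.linalg.slogdet(M)[1]

subsets = [S for r in range(K + 1) for S in combinations(range(K), r)]
delta = min(rhs(S) for S in subsets)  # achievable relevance at (R1, R2) for this Omega
print(delta)
```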
3.3 A VARIATIONAL BOUND
In this section, we consider the problem of learning encoder and decoder mappings that maximize the relevance level for a given (fixed) complexity level, i.e., those that perform in the vicinity of the boundary of the region IR_DIB. First, we derive a parametrization of the relevance-complexity region; and, then, we develop a variational bound which expresses the optimal encoder and decoder mappings as the solution to an optimization problem (an algorithm for solving this problem in the case of unknown distributions is given in the next section).
Let R_sum := ∑_{k=1}^K R_k. Also, let IR_DIB^sum denote the region of achievable (relevance, sum-complexity) pairs,

IR_DIB^sum := { (∆, R_sum) ∈ R_+^2 : ∃(R_1, ..., R_K) ∈ R_+^K s.t. (∆, R_1, ..., R_K) ∈ IR_DIB and ∑_{k=1}^K R_k = R_sum }.
Proposition 1 The relevance-complexity region under the sum-complexity constraint IR_DIB^sum is given by the convex hull of all tuples (∆, R_sum) ∈ R_+^2 satisfying ∆ ≤ ∆(R_sum, P_{X_K,Y}), where

∆(R_sum, P_{X_K,Y}) = max_P min { I(Y; U_K), R_sum − ∑_{k=1}^K I(X_k; U_k|Y) }, (15)

and where the maximization is over the set of pmfs P := {P_{U_1|X_1}, ..., P_{U_K|X_K}} such that the joint pmf factorizes as p_Y(y) ∏_{k=1}^K p_{X_k|Y}(x_k|y) ∏_{k=1}^K p_{U_k|X_k}(u_k|x_k).
The next proposition provides a characterization of the pairs (∆, Rsum) that lie on the boundary ofRIsumDIB in terms of a nonnegative parameter s ≥ 0.
Proposition 2 For every pair (∆, R_sum) ∈ R_+^2 that lies on the boundary of the relevance-complexity region IR_DIB^sum there exists s ≥ 0 such that (∆, R_sum) = (∆_s, R_s), where

∆_s = (1/(1+s)) [(1 + sK) H(Y) + s R_s + max_P L_s(P)], (16)

R_s = I(Y; U_K^*) + ∑_{k=1}^K [I(X_k; U_k^*) − I(Y; U_k^*)], (17)

and P^* is the set of conditional pmfs P that maximize the cost function

L_s(P) := −H(Y|U_K) − s ∑_{k=1}^K [H(Y|U_k) + I(X_k; U_k)]. (18)
Using Proposition 2, it is clear that the encoders {P_{U_k|X_k}}_{k∈K} that achieve the relevance-complexity pair (∆_s, R_s) can be computed by maximizing the regularized cost equation 18 for the corresponding value of s ≥ 0. The corresponding optimal decoder P_{Y|U_K} for these encoders is the conditional distribution induced by the encoders, as in equation 22. Different relevance-complexity pairs (∆_s, R_s) on the boundary of IR_DIB^sum, and encoder and decoder mappings that achieve them, can be found by solving equation 18 for different values of s ≥ 0 and then evaluating equation 16 and equation 17 for the obtained solution.
The optimization of equation 18 generally requires computing marginal distributions involving the descriptions U_1, ..., U_K, which can be computationally costly. To overcome this limitation, in the following we derive a tight variational bound on L_s(P) which lower-bounds the DIB cost function with respect to some arbitrary distributions. Let us consider an arbitrary decoder Q_{Y|U_1,...,U_K}(y|u_1, ..., u_K) for y ∈ Y, u_1 ∈ U_1, ..., u_K ∈ U_K, K decoders Q_{Y|U_k}(y|u_k) for k ∈ K, y ∈ Y, u_k ∈ U_k, and latent variable priors Q_{U_k}(u_k), k ∈ K, u_k ∈ U_k. For short, we denote

Q := {Q_{Y|U_1,...,U_K}, Q_{Y|U_1}, ..., Q_{Y|U_K}, Q_{U_1}, ..., Q_{U_K}}.

Let us define the variational DIB cost function L_s^VB(P,Q) as

L_s^VB(P,Q) := E[log Q_{Y|U_K}(Y|U_K)] + s ∑_{k=1}^K ( E[log Q_{Y|U_k}(Y|U_k)] − D_KL(P_{U_k|X_k}‖Q_{U_k}) ), (19)

where the first term is the average logarithmic loss and the sum acts as a regularizer.
The following lemma states that L_s^VB(P,Q) is a lower bound on L_s(P) for all distributions Q.

Lemma 1 For fixed pmfs P, we have

L_s(P) ≥ L_s^VB(P,Q), for all pmfs Q. (20)

In addition, there exists a unique Q that achieves the maximum max_Q L_s^VB(P,Q) = L_s(P), given by

Q_{U_k}^* = P_{U_k}, Q_{Y|U_k}^* = P_{Y|U_k}, k = 1, ..., K, (21)
Q_{Y|U_1,...,U_K}^* = P_{Y|U_1,...,U_K}, (22)

where P_{U_k}, P_{Y|U_k} and P_{Y|U_1,...,U_K} are computed from the pmfs P.
Using the above, the optimization in equation 16 can be written in terms of the variational DIB cost function as

max_P L_s(P) = max_P max_Q L_s^VB(P,Q). (23)
We close this section by noting that the cost function equation 19 can be seen as a generalization of the evidence lower bound (ELBO) as given in Rezende et al. (2014); Kingma and Welling (2013) for the single-encoder learning to the distributed setting. Also, in the specific case in which Y = (X1, . . . , XK) the bound generalizes the ELBO used for VAEs to the case of an arbitrary number of encoders.
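To see this explicitly, the following short check (ours, in the paper's notation) specializes equation 19 to K = 1 and Y = X:

L_s^VB(P,Q) = (1+s) E[log Q_{X|U}(X|U)] − s D_KL(P_{U|X}‖Q_U) ∝ E[log Q_{X|U}(X|U)] − (s/(1+s)) D_KL(P_{U|X}‖Q_U),

which is the β-VAE criterion with β = s/(1+s), and recovers the standard ELBO as s → ∞ (β → 1).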
3.4 CASE OF UNKNOWN DISTRIBUTIONS: VARIATIONAL DISTRIBUTED IB ALGORITHM
In practice, only a set of training samples {(X_{1,i}, ..., X_{K,i}, Y_i)}_{i=1}^n is available. In this section, we provide a method to optimize equation 23 in this case by parametrizing the encoding and decoding distributions to be optimized using a family of distributions whose parameters are determined by deep neural networks (DNNs). This allows us to formulate equation 23 in terms of the DNN parameters and optimize it by using the reparametrization trick Kingma and Welling (2013), Monte Carlo sampling, as well as stochastic gradient descent (SGD) type algorithms.
Let F^e_{NN,k} denote the parametric family of encoding probability distributions P_{U_k|X_k} over U_k for each element of X_k. Each member of this collection, P_{U_k|X_k;γ^e_k}, is described by a parameter vector γ^e_k ∈ Γ^e_k ⊆ R^{l^e_k}, where Γ^e_k denotes the set of allowable parameter vectors. The parameter vector γ^e_k is the output of a DNN f_{θ_k} : X_k → Γ^e_k with network parameters θ_k ∈ Θ_k ⊆ R^{d^e_k}, e.g., the weights of the network at all layers. The DNN f_{θ_k} takes X_k as input and outputs the parameter vector γ^e_k, determining one of the members P_{U_k|X_k;γ^e_k}. We have

F^e_{NN,k} = { P_{U_k|X_k;γ^e_k}(u_k|x_k), for u_k ∈ U_k, x_k ∈ X_k : γ^e_k = f_{θ_k}(x_k), θ_k ∈ Θ_k }. (24)

For example, the family of multivariate Gaussian distributions is parametrized by the mean μ^θ_k and covariance matrix Σ^θ_k, i.e., γ_k := (μ^θ_k, Σ^θ_k). Therefore, given an observation X_k, γ_k := (μ^θ_k, Σ^θ_k) is determined by the output of the DNN f_{θ_k}, and F^e_{NN,k} is given by P_{U_k|X_k;γ_k}(u_k|x_k) = N(u_k; μ^θ_k, Σ^θ_k).

Similarly, for the decoders Q_{Y|U_k} over Y, define the family of distributions parametrized by a vector in Γ^d_k ⊆ R^{l^d_k} determined by the output of a DNN f_{φ_k} : U_k → Γ^d_k, with parameters φ_k ∈ Φ_k ⊆ R^{d^d_k}, as

F^d_{NN,k} = { Q_{Y|U_k;γ^d_k}(y|u_k), for y ∈ Y, u_k ∈ U_k : γ^d_k = f_{φ_k}(u_k), φ_k ∈ Φ_k }, (25)

and for the distribution Q_{Y|U_K} over Y for each element of U_1 × ... × U_K, define the family of distributions parametrized by the output of the DNN f_{φ_K} : U_1 × ... × U_K → Γ^d_K, with φ_K ∈ Φ_K ⊆ R^{d^d_K} and Γ^d_K ⊆ R^{l^d_K},

F^d_{NN,K} = { Q_{Y|U_1,...,U_K;γ^d_K}(y|u_1, ..., u_K), y ∈ Y, u_k ∈ U_k : γ^d_K = f_{φ_K}(u_1, ..., u_K), φ_K ∈ Φ_K }. (26)

Finally, for the priors Q_{ϕ_k}(u_k) we define the family of distributions with parameter ϕ_k ∈ Ψ_k ⊆ R^{l^p_k}:

F^p_{NN,k} = { Q_{U_k;ϕ_k}(u_k), for u_k ∈ U_k : ϕ_k ∈ Ψ_k }.
In the following, for brevity we use P_{θ_k}(u_k|x_k), Q_{φ_k}(y|u_k), Q_{φ_K}(y|u_K) and Q_{ϕ_k}(u_k) to denote the distributions parametrized by the DNNs f_{θ_k}, f_{φ_k}, f_{φ_K} and the parameters ϕ_k, respectively.
By restricting the optimization of the variational DIB cost in equation 23 to encoders, decoders and priors within the families of distributions F^e_{NN,k}, F^d_{NN,k}, F^d_{NN,K}, F^p_{NN,k}, we get

max_P max_Q L_s^VB(P,Q) ≥ max_{θ,φ,ϕ} L_s^NN(θ,φ,ϕ), (27)

where we use the notation θ := [θ_1, ..., θ_K], φ := [φ_1, ..., φ_K, φ_K] and ϕ := [ϕ_1, ..., ϕ_K] to denote the DNN and prior parameters, and the cost in equation 27 is given by

L_s^NN(θ,φ,ϕ) := E_{P_{Y,X}} E_{{P_{θ_k}(U_k|X_k)}} [ log Q_{φ_K}(Y|U_K) + s ∑_{k=1}^K ( log Q_{φ_k}(Y|U_k) − D_KL(P_{θ_k}(U_k|X_k)‖Q_{ϕ_k}(U_k)) ) ]. (28)
Next, we train the DNNs to maximize a Monte Carlo approximation of equation 27 over θ, φ, ϕ using SGD. We use the reparametrization trick Kingma and Welling (2013) to sample from P_{θ_k}(U_k|X_k). In particular, we consider F^e_{NN,k} to consist of a parametric family of distributions that can be sampled by first sampling a random variable Z_k with distribution P_{Z_k}(z_k), z_k ∈ Z_k, and then transforming the samples using some function g_{θ_k} : X_k × Z_k → U_k parametrized by θ_k, such that U_k = g_{θ_k}(x_k, Z_k) ~ P_{θ_k}(U_k|x_k). The reparametrization trick reduces the original optimization to estimating θ_k of the deterministic function g_{θ_k} and allows computing estimates of the gradient using backpropagation Kingma and Welling (2013). The variational DIB cost in equation 27 can be approximated by sampling m independent samples {u_{k,i,j}}_{j=1}^m ~ P_{θ_k}(u_k|x_{k,i}) for each training sample (x_{1,i}, ..., x_{K,i}, y_i), i = 1, ..., n. Sampling is performed by using u_{k,i,j} = g_{θ_k}(x_{k,i}, z_{k,j}) with {z_{k,j}}_{j=1}^m i.i.d. sampled from P_{Z_k}. We then have
L_{s,i}^emp(θ,φ,ϕ) := (1/m) ∑_{j=1}^m log Q_{φ_K}(y_i|u_{1,i,j}, ..., u_{K,i,j})
+ (s/m) ∑_{j=1}^m ∑_{k=1}^K ( log Q_{φ_k}(y_i|u_{k,i,j}) − D_KL(P_{θ_k}(U_{k,i}|x_{k,i})‖Q_{ϕ_k}(U_{k,i})) ). (29)
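A minimal sketch of this training objective in PyTorch, assuming Gaussian encoders with a standard normal prior (so the KL term has a closed form), softmax decoders over a label set, and m = 1 Monte Carlo sample; the architecture and dimensions below are placeholders, not those of the paper's experiments:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """f_theta_k: maps X_k to the parameters (mu, log-variance) of P_theta_k(U_k|X_k)."""
    def __init__(self, dim_x, dim_u):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_x, 128), nn.ReLU(), nn.Linear(128, 2 * dim_u))

    def forward(self, x):
        mu, logvar = self.net(x).chunk(2, dim=-1)
        return mu, logvar

def dvib_loss(encoders, decoders, joint_decoder, xs, y, s):
    """Negative of the empirical DIB cost (equation 29) with m = 1."""
    us, kl = [], 0.0
    for enc, x in zip(encoders, xs):
        mu, logvar = enc(x)
        u = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparametrization trick
        us.append(u)
        # Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) )
        kl = kl + 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1).mean()
    loss = F.cross_entropy(joint_decoder(torch.cat(us, dim=-1)), y)  # -E[log Q_{Y|U_K}]
    for dec, u in zip(decoders, us):
        loss = loss + s * F.cross_entropy(dec(u), y)                 # -s E[log Q_{Y|U_k}]
    return loss + s * kl

# Hypothetical usage for K = 2 views of flattened 28x28 images and 10 classes:
dim_x, dim_u, n_classes = 784, 64, 10
encoders = nn.ModuleList([Encoder(dim_x, dim_u) for _ in range(2)])
decoders = nn.ModuleList([nn.Linear(dim_u, n_classes) for _ in range(2)])
joint_decoder = nn.Linear(2 * dim_u, n_classes)
```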
4 EXPERIMENTS: RESILIENCE TO NOISE, ROTATION AND OCCLUSION
In this experiment, we test the robustness of our method against noise, rotation and random occlusion on the MNIST dataset. Specifically, we combine two types of random corruption: the first encoder observes a digit from MNIST that is occluded by a square which is rotated randomly (rotation angle uniformly distributed over [−45°, 45°]); and the second encoder observes a noisy version of the same digit corrupted by additive noise (noise level uniform between 0 and 3). The noisy pixels are clipped between 0 and 1, with more than 60% of the pixels occluded. These corruptions make the problem significantly more involved than standard MNIST (for which application of our algorithm leads to a relevance of about 99.9%).
We considered a deterministic CNN with dropout which achieves 99.8% test accuracy on the clean MNIST data. We then trained the same CNN architecture on each of the noisy inputs to the encoders, resulting in a relevance of 92.1% from the input to encoder 1 (randomly rotated occlusion) and 79.68% from the input to encoder 2 (noisy clipped image).
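The two corrupted views could be generated along the following lines (our sketch; the exact occlusion size and geometry used in the paper may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_views(digit, occ_size=12.0):
    """digit: (28, 28) array in [0, 1]. Returns the two corrupted views."""
    # View 1: square occlusion rotated by an angle uniform over [-45, 45] degrees.
    view1 = digit.copy()
    angle = np.deg2rad(rng.uniform(-45.0, 45.0))
    ii, jj = np.mgrid[0:28, 0:28]
    pts = np.stack([ii - 14.0, jj - 14.0])              # coordinates around the center
    rot = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
    local = np.tensordot(rot, pts, axes=1)              # rotate into the square's frame
    mask = (np.abs(local[0]) < occ_size / 2) & (np.abs(local[1]) < occ_size / 2)
    view1[mask] = 0.0
    # View 2: additive Gaussian noise with level uniform in [0, 3], clipped to [0, 1].
    level = rng.uniform(0.0, 3.0)
    view2 = np.clip(digit + level * rng.standard_normal(digit.shape), 0.0, 1.0)
    return view1, view2
```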
Figure 3: View 1: occluded. View 2: noisy. (Panels show the original digit Y and the two corrupted views.)
Table 1: CNN architecture.
Encoder k: conv. ker. [5,5,32]-ReLU, maxpool [2,2,2]; conv. ker. [5,5,64]-ReLU, maxpool [2,2,2]; dense [1024]-ReLU, dropout 0.4; dense [256]-ReLU
Latent space k: dense [256]-ReLU
Decoder 12: dense [256]-ReLU
Decoder k: dense [256]-ReLU
Figure 4: Relevance vs. sum-complexity R_sum for n = 50,000 and s ∈ [10^{-10}, 1]. (Curves: C-IB with R_sum → ∞; D-VIB train, n = 50,000; D-VIB test, n = 50,000.)
We applied our D-VIB algorithm of Section 3.4 to this model with the CNN architecture of Table 1, in which Encoder k = 1, 2 is parametrized by an n_{u_k} = 256-dimensional multivariate Gaussian distribution N(μ^e_k, Σ^e_k) determined by the output of a DNN f_{θ_k} consisting of the concatenation of convolutional, dense and maxpool layers with ReLU activations and dropout. The output of the last layer is followed by a dense layer without activation that generates μ^e_k and Σ^e_k. The prior is chosen as Q_{ϕ_k}(u) = N(0, I). Each decoder takes the samples from P_{θ_k}(U_k|X_k) and processes its inputs with a dense-layer DNN (f_{φ_K} and f_{φ_k}), each with 256 neurons and ReLU activation, which outputs a vector ŷ_i of size |Y| = 10 normalized with a softmax, corresponding to a distribution over the one-hot encoding of the digit labels {0, ..., 9} from the K observations:

Q_{φ_k}(ŷ_k|u_k) = Softmax(f_{φ_k}(U_k)), k = 1, 2, and (30)
Q_{φ_K}(ŷ|u_K) = Softmax(f_{φ_K}(U_1, U_2)), (31)
where Softmax(p) for p ∈ R^d is a vector with i-th entry [Softmax(p)]_i = exp(p_i)/∑_{j=1}^d exp(p_j). Figure 4 shows the relevance-complexity tradeoffs obtained using our D-VIB algorithm of Section 3.4, with n = 50,000 and 15 distinct s-values randomly chosen in the range [10^{-10}, 1]. For comparison, we also present the performance obtained using three methods among state-of-the-art multi-view learning approaches: (i) applying a deterministic CNN to the two views concatenated (deterministic CNN), (ii) applying the single-encoder variational IB method of Alemi et al. (2017) to the two views concatenated (C-VIB), and (iii) learning one function for each view via distinct CNNs and optimizing all CNNs independently (independent CNNs). The achieved relevance is reported in Table 2. For other experimental results, see the appendices section.
We also mention that at a high level our algorithm D-VIB can be considered as performing some form of co-regularization (for instance, its Gaussian version is similar to the CCA of Hardoon et al. (2004)). Comparatively, the single-view algorithm C-VIB can be viewed as belonging to the family of co-training style algorithms (such as the co-EM of Nigam and Ghani (2000)) which, as mentioned in the recent survey Zhao et al. (2017), improve over single-view algorithms. The performance of D-VIB dominates that of C-VIB, which itself dominates co-EM.
5 PROOFS OF MAIN THEOREMS, PROPOSITIONS AND LEMMAS
5.1 AUXILIARY LEMMAS
Lemma 2 Dembo et al. (1991); Ekrem and Ulukus (2014) Let (X, Y) be a pair of random vectors with pmf p(x, y). We have

log |(πe) J^{-1}(X|Y)| ≤ h(X|Y) ≤ log |(πe) mmse(X|Y)|,

where the conditional Fisher information matrix is defined as

J(X|Y) := E[∇ log p(X|Y) ∇ log p(X|Y)^†],

and the minimum mean squared error (MMSE) matrix is

mmse(X|Y) := E[(X − E[X|Y])(X − E[X|Y])^†].
Lemma 3 Ekrem and Ulukus (2014) Let (V_1, V_2) be a random vector with finite second moments and N ~ CN(0, Σ_N) independent of (V_1, V_2). Then

mmse(V_2|V_1, V_2 + N) = Σ_N − Σ_N J(V_2 + N|V_1) Σ_N.
5.2 PROOF OF THEOREM 1
If K = 1, the distributed learning problem that we study boils down to the well-known Information Bottleneck (IB) problem of Tishby et al. (1999). The single-encoder IB problem is essentially a remote point-to-point source coding problem Dobrushin and Tsybakov (1962) in which distortion is measured under the logarithmic loss fidelity criterion Harremoes and Tishby (2007). In accordance with this analogy, for K ≥ 2 consider the multiterminal source coding problem under logarithmic loss in which the sequence Y^n models a remote source that is observed by K spatially distributed agents; the agents observe noisy versions of the remote source and communicate independently with a decoder or Chief Executive Officer (CEO) over rate-constrained noise-free links. For instance, agent k, k ∈ K, observes X_k^n and uses R_k bits per sample to describe it to the decoder. The decoder wants to reconstruct the remote source Y^n to within a prescribed fidelity level, where the incurred distortion is measured using the logarithmic loss criterion, i.e.,

ℓ_log(y^n, ŷ^n) = (1/n) log ( 1 / P̂_{Y^n|J}(y^n|φ_1(x_1^n), ..., φ_K(x_K^n)) ), (32)

where J = (φ_1(X_1^n), ..., φ_K(X_K^n)).
Here, (X_1^n, ..., X_K^n, Y^n) is assumed to be distributed i.i.d. according to the n-product of the pmf P_{X_1,...,X_K,Y}, i.e., the Markov chain equation 3 holds.
Definition 2 A rate-distortion code (of blocklength n) for the CEO problem consists of K encoding functions

φ̃_k : X_k^n → {1, ..., M_k^{(n)}}, for k = 1, ..., K, (33)

and a decoding function

ψ̃ : {1, ..., M_1^{(n)}} × ... × {1, ..., M_K^{(n)}} → Ŷ^n. (34)

A distortion-rate tuple (D, R_1, ..., R_K) is achievable for the DM CEO source coding problem with side information if there exist a blocklength n, encoding functions {φ̃_k}_{k=1}^K and a decoding function ψ̃ such that

R_k ≥ (1/n) log M_k^{(n)}, for k = 1, ..., K,
D ≥ E[ ℓ_log( Y^n, ψ̃(φ̃_1(X_1^n), ..., φ̃_K(X_K^n)) ) ].
The distortion-rate region DR_CEO of the CEO model is defined as the closure of all non-negative tuples (D, R_1, ..., R_K) that are achievable.
Key to the proof of Theorem 1 is the following proposition, which states that IR_DIB and DR_CEO can be inferred from each other.

Proposition 3 (∆, R_1, ..., R_K) ∈ IR_DIB if and only if (H(Y) − ∆, R_1, ..., R_K) ∈ DR_CEO.
Proof: Let, for k = 1, ..., K, J_k = φ_k(X_k^n) and J = (J_1, ..., J_K). Then,

E[ℓ_log(Y^n, Ŷ^n)|J = j] = ∑_{y^n∈Y^n} P(y^n|j) log (1 / P̂(y^n|j)) (35)
= ∑_{y^n∈Y^n} P(y^n|j) log (P(y^n|j) / P̂(y^n|j)) + H(Y^n|J = j) (36)
= D_KL(P(y^n|j)‖P̂(y^n|j)) + H(Y^n|J = j) (37)
≥ H(Y^n|J = j), (38)

where equation 38 is due to the non-negativity of the Kullback-Leibler divergence, and equality holds if and only if P̂(y^n|j) = P(y^n|j), where P(y^n|j) = Pr{Y^n = y^n|J = j} for all j and y^n ∈ Y^n.

Let an achievable tuple (∆, R_1, ..., R_K) ∈ IR_DIB be given. Then, there must exist functions {φ_k}_{k=1}^K such that equation 9 and equation 10 hold. Using equation 38, by letting the decoding function be ψ̃(J_K) = {P_{Y^n|J_K}(y^n|J_K)}, we have E[ℓ_log(Y^n, Ŷ^n)|J_K] = H(Y^n|J_K), which implies (H(Y) − ∆, R_1, ..., R_K) ∈ DR_CEO.
The result of Theorem 1 follows easily by combining (Courtade and Weissman, 2014, Theorem 10), which provides a single-letter characterization of the rate-distortion region DR_CEO of the CEO problem, and Proposition 3.
5.3 PROOF OF THEOREM 2
The proof of the direct part of Theorem 2 follows by evaluating the region of Theorem 1 with the choice T = ∅ and p(u_k|x_k, t) = CN(x_k, Σ_k^{1/2}(Ω_k − I)Σ_k^{1/2}).
The proof of the converse part is as follows. Fix t ∈ T, S ⊆ K and a family of distributions {p(u_k|x_k, t)}_{k=1}^K such that the joint distribution factorizes as in equation 13. Also, let 0 ⪯ Ω_{k,t} ⪯ Σ_k^{-1} and

mmse(X_k|Y, U_{k,t}, t) = Σ_k − Σ_k Ω_{k,t} Σ_k. (39)

Such Ω_{k,t} always exists since

0 ⪯ mmse(X_k|Y, U_{k,t}, t) ⪯ Σ_k. (40)

Then, we have

I(X_k; U_k|Y, t) ≥ log |Σ_k| − log |mmse(X_k|Y, U_{k,t}, t)| = −log |I − Σ_k^{1/2} Ω_{k,t} Σ_k^{1/2}|, (41)

where the inequality is due to Lemma 2, and equation 41 is due to equation 39.
Also, we have

I(Y; U_{S^c,t}|t) ≤ log |Σ_y| − log |J^{-1}(Y|U_{S^c,t}, t)| (42)
= log |∑_{k∈S^c} Σ_y^{1/2} H_k^† Ω_{k,t} H_k Σ_y^{1/2} + I|, (43)

where equation 42 follows by using Lemma 2, and equation 43 holds by using the following equality,

J(Y|U_{S^c,t}, t) = ∑_{k∈S^c} H_k^† Ω_{k,t} H_k + Σ_y^{-1}, (44)

the proof of which uses a connection between MMSE and Fisher information, as shown next.
For the proof of equation 44, first note that from the MMSE estimation of Gaussian random vectors El Gamal and Kim (2011), we have

Y = E[Y|X_{S^c}] + Z_{S^c} = ∑_{k∈S^c} G_k X_k + Z_{S^c}, (45)

where G_k = Σ_{y|x_{S^c}} H_k^† Σ_k^{-1} and Z_{S^c} ~ CN(0, Σ_{y|x_{S^c}}), with

Σ_{y|x_{S^c}}^{-1} = Σ_y^{-1} + ∑_{k∈S^c} H_k^† Σ_k^{-1} H_k. (46)

Note that Z_{S^c} is independent of X_{S^c} due to the orthogonality principle of the MMSE and its Gaussian distribution. Hence, it is also independent of U_{S^c,t}. We have

mmse( ∑_{k∈S^c} G_k X_k | Y, U_{S^c,t}, t ) = ∑_{k∈S^c} G_k mmse(X_k|Y, U_{S^c,t}, t) G_k^† (47)
= Σ_{y|x_{S^c}} ∑_{k∈S^c} H_k^† (Σ_k^{-1} − Ω_{k,t}) H_k Σ_{y|x_{S^c}}, (48)

where equation 47 follows since the cross terms are zero due to the Markov chain (U_{k,t}, X_k) −◦− Y −◦− (U_{K/k,t}, X_{K/k}); and equation 48 follows due to equation 39 and the definition of G_k. Finally,

J(Y|U_{S^c,t}, t) = Σ_{y|x_{S^c}}^{-1} − Σ_{y|x_{S^c}}^{-1} mmse( ∑_{k∈S^c} G_k X_k | Y, U_{S^c,t}, t ) Σ_{y|x_{S^c}}^{-1} (49)
= Σ_{y|x_{S^c}}^{-1} − ∑_{k∈S^c} H_k^† (Σ_k^{-1} − Ω_{k,t}) H_k (50)
= Σ_y^{-1} + ∑_{k∈S^c} H_k^† Ω_{k,t} H_k, (51)

where equation 49 is due to Lemma 3; equation 50 is due to equation 48; and equation 51 follows due to equation 46.
Now, let Ω̄_k := ∑_{t∈T} p(t) Ω_{k,t}. The rest of the converse proof follows by averaging over the time-sharing random variable to get

I(X_k; U_k|Y, T) ≥ −∑_{t∈T} p(t) log |I − Σ_k^{1/2} Ω_{k,t} Σ_k^{1/2}| ≥ −log |I − Σ_k^{1/2} Ω̄_k Σ_k^{1/2}|, (52)

where equation 52 follows from the concavity of the log-det function and Jensen's inequality. Similarly to equation 52, from equation 43 and Jensen's inequality we have

I(Y; U_{S^c}|T) ≤ log |∑_{k∈S^c} Σ_y^{1/2} H_k^† Ω̄_k H_k Σ_y^{1/2} + I|. (53)

Finally, using equation 52 and equation 53 in equation 12, noting that Ω̄_k = ∑_{t∈T} p(t) Ω_{k,t} ⪯ Σ_k^{-1} since 0 ⪯ Ω_{k,t} ⪯ Σ_k^{-1}, and taking the union over Ω_k satisfying 0 ⪯ Ω_k ⪯ Σ_k^{-1}, completes the proof of the converse part and, hence, that of Theorem 2.
5.4 PROOF OF PROPOSITION 1
For simplicity of exposition, the proof is given for the case of K = 2 encoders; the proof for K > 2 follows similarly. By the definition of IR_DIB^sum, the relevance-complexity tuple (∆, R_sum) ∈ R_+^2 is achievable for some random variables Y, X_1, X_2, U_1, U_2 with joint pmf satisfying equation 13 if it holds that

∆ ≤ I(Y; U_1, U_2) (54)
∆ ≤ R_1 − I(X_1; U_1|Y) + I(Y; U_2) (55)
∆ ≤ R_2 − I(X_2; U_2|Y) + I(Y; U_1) (56)
∆ ≤ R_1 + R_2 − I(X_1; U_1|Y) − I(X_2; U_2|Y) (57)
R_1 + R_2 ≤ R_sum. (58)

The application of the Fourier-Motzkin elimination to project out R_1 and R_2 reduces the system of inequalities equation 54-equation 58 to the following system of inequalities:

∆ ≤ I(Y; U_1, U_2) (59)
∆ ≤ R_sum − I(X_1; U_1|Y) − I(X_2; U_2|Y) (60)
2∆ ≤ R_sum − I(X_1; U_1|Y) − I(X_2; U_2|Y) + I(Y; U_1) + I(Y; U_2). (61)

It follows, due to the Markov chain U_1 −◦− X_1 −◦− Y −◦− X_2 −◦− U_2, that I(Y; U_1, U_2) ≤ I(Y; U_1) + I(Y; U_2). Therefore, inequality equation 61 is redundant, as it is implied by equation 59 and equation 60. This completes the proof of Proposition 1.
5.5 PROOF OF PROPOSITION 2
Suppose that P* yields the maximum in equation 16. Then,

(1+s)∆_s = (1 + sK) H(Y) + s R_s + L_s(P*) (62)
= (1 + sK) H(Y) + s R_s + ( −H(Y|U_K^*) − s ∑_{k=1}^K [H(Y|U_k^*) + I(X_k; U_k^*)] ) (63)
= (1 + sK) H(Y) + s R_s + ( −H(Y|U_K^*) − s(R_s − I(Y; U_K^*) + K H(Y)) ) (64)
= (1+s) I(Y; U_K^*) (65)
≤ (1+s) ∆(R_s, P_{X_K,Y}), (66)

where equation 63 is due to the definition of L_s(P) in equation 18; equation 64 follows since ∑_{k=1}^K [I(X_k; U_k^*) + H(Y|U_k^*)] = R_s − I(Y; U_K^*) + K H(Y) from the definition of R_s in equation 17; and equation 66 follows from the definition in equation 15.
Conversely, if P* is the solution to the maximization in the function ∆(R_sum, P_{X_K,Y}) in equation 15 such that ∆(R_sum, P_{X_K,Y}) = ∆_s, then ∆_s ≤ I(Y; U_K^*) and ∆_s ≤ R_sum − ∑_{k=1}^K I(X_k; U_k^*|Y), and we have, for any s ≥ 0, that

∆(R_sum, P_{X_K,Y}) = ∆_s
≤ ∆_s − (∆_s − I(Y; U_K^*)) − s ( ∆_s − R_sum + ∑_{k=1}^K I(X_k; U_k^*|Y) )
= I(Y; U_K^*) − s∆_s + sR_sum − s ∑_{k=1}^K I(X_k; U_k^*|Y)
= H(Y) − s∆_s + sR_sum − H(Y|U_K^*) − s ∑_{k=1}^K [I(X_k; U_k^*) + H(Y|U_k^*)] + sK H(Y) (67)
≤ H(Y) − s∆_s + sR_sum + L_s^* + sK H(Y) (68)
= H(Y) − s∆_s + sR_sum + sK H(Y) − ((1 + sK) H(Y) + sR_s − (1+s)∆_s) (69)
= ∆_s + s(R_sum − R_s), (70)

where in equation 67 we used ∑_{k=1}^K I(X_k; U_k|Y) = −K H(Y) + ∑_{k=1}^K [I(X_k; U_k) + H(Y|U_k)] due to the Markov chain U_k −◦− X_k −◦− Y −◦− (X_{K\k}, U_{K\k}); equation 68 follows since L_s^* is the maximum over all possible distributions P (not necessarily the P* maximizing ∆(R_sum, P_{X_K,Y})); and equation 69 is due to equation 16.

Finally, equation 70 is valid for any R_sum ≥ 0 and s ≥ 0. Given s, and hence (∆_s, R_s), choosing R = R_s yields ∆(R_s, P_{X_K,Y}) ≤ ∆_s. Together with equation 66, this completes the proof of Proposition 2.
5.6 PROOF OF LEMMA 1
The proof follows by deriving the following bounds. For any conditional pmf Q_{Y|Z}(y|z), y ∈ Y, z ∈ Z (e.g., Z = U_K or Z = U_k), proceeding similarly to equation 38 and averaging over Z, we have

H(Y|Z) = E[−log Q_{Y|Z}(Y|Z)] − D_KL(P_{Y|Z}‖Q_{Y|Z}). (71)

Similarly, we have

I(X_k; U_k) = H(U_k) − H(U_k|X_k) (72)
= E[−log Q_{U_k}(U_k)] − D_KL(P_{U_k}‖Q_{U_k}) − H(U_k|X_k) (73)
= E[D_KL(P_{U_k|X_k}‖Q_{U_k})] − D_KL(P_{U_k}‖Q_{U_k}). (74)

Thus, we get

L_s(P) = L_s^VB(P,Q) + D_KL(P_{Y|U_K}‖Q_{Y|U_K}) + s ∑_{k=1}^K ( D_KL(P_{Y|U_k}‖Q_{Y|U_k}) + D_KL(P_{U_k}‖Q_{U_k}) )
≥ L_s^VB(P,Q), (75)

where equation 75 holds by the non-negativity of relative entropy, and the equality is met if and only if Q* is as given by equation 21 and equation 22.
6 OTHER EXPERIMENTAL RESULTS (REGRESSION FOR UNKNOWN GAUSSIAN MODEL)
6.1 D-VIB ALGORITHM FOR VECTOR GAUSSIAN MODEL
For the vector Gaussian data model equation 14, the optimal distributions P and Q in equation 23 lie within the family of multivariate Gaussian distributions. Motivated by this observation, we consider the following parametrization for k ∈ K:

P_{θ_k}(u_k|x_k) = N(u_k; μ_k^e, Σ_k^e) (76)
Q_{φ_K}(ŷ|u_K) = N(ŷ; μ_K^d, Σ_K^d) (77)
Q_{φ_k}(ŷ|u_k) = N(ŷ; μ_k^d, Σ_k^d) (78)
Q_{ϕ_k}(u_k) = N(0, I), (79)

where μ_k^e, Σ_k^e are the outputs of a DNN f_{θ_k} with input X_k that encodes the observations into an n_{u_k}-dimensional Gaussian distribution, μ_K^d, Σ_K^d are the outputs of a DNN f_{φ_K} with inputs U_1, ..., U_K, sampled from P_{θ_k}(u_k|x_k), and μ_k^d, Σ_k^d are the outputs of a DNN f_{φ_k} with input U_k, k = 1, ..., K.
With the above choice of parametric encoders and decoders, and using a single sample m = 1, the empirical DIB cost in equation 29 is given for the sample (x_{1,i}, ..., x_{K,i}, y_i) by

L_{s,i}^emp(θ,φ,ϕ) := −(1/2) ( (y_i − μ_{12,i}^d)^T Σ_{12,i}^{d,−1} (y_i − μ_{12,i}^d) + log det(Σ_{12,i}^d) )
− s ∑_{k=1}^K (1/2) ( (y_i − μ_{k,i}^d)^T Σ_{k,i}^{d,−1} (y_i − μ_{k,i}^d) + log det(Σ_{k,i}^d) )
− s ∑_{k=1}^K (1/2) ( (μ_{k,i}^e)^T μ_{k,i}^e + log |Σ_{k,i}^{e,−1}| − n_{u_k} + tr{Σ_{k,i}^e} )
− (n_y/2)(1 + sK) log(2π),

where (μ_{12,i}^d, Σ_{12,i}^d) denote the output of the DNN f_{φ_K} for the i-th sample (x_{1,i}, ..., x_{K,i}, y_i), and similarly for the other mean and covariance terms; and where we have used that each term in the empirical DIB cost equation 29 can be computed by noting that for d-dimensional Gaussian distributions N(y; μ, Σ) we have
log N(y; μ, Σ) = −(1/2) ( (y − μ)^T Σ^{-1} (y − μ) + d log(2π) + log det(Σ) ),

and the KL divergence between two multivariate Gaussian distributions P_1 ~ N(μ_1, Σ_1) and P_2 ~ N(μ_2, Σ_2) in R^d is

D_KL(P_1‖P_2) = (1/2) ( (μ_1 − μ_2)^T Σ_2^{-1} (μ_1 − μ_2) + log |Σ_2 Σ_1^{-1}| − d + tr{Σ_2^{-1} Σ_1} ). (80)
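Equation 80 can be implemented and sanity-checked directly; a minimal sketch (ours):

```python
import numpy as np

def kl_gauss(mu1, S1, mu2, S2):
    """KL( N(mu1, S1) || N(mu2, S2) ) for d-dimensional Gaussians, as in equation 80."""
    d = mu1.shape[0]
    dmu = mu1 - mu2
    S2_inv = np.linalg.inv(S2)
    return 0.5 * (dmu @ S2_inv @ dmu
                  + np.linalg.slogdet(S2)[1] - np.linalg.slogdet(S1)[1]
                  - d + np.trace(S2_inv @ S1))

d = 3
assert np.isclose(kl_gauss(np.zeros(d), np.eye(d), np.zeros(d), np.eye(d)), 0.0)
```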
The multivariate Gaussian parametrization of the encoders, decoders and prior distribution given by equation 76-equation 79 can be used for other data models that are not necessarily Gaussian. For example, it is particularly suitable for regression problems in which Y lies in a continuous space. It is also very often used in conjunction with VAE generative problems Rezende et al. (2014); Kingma and Welling (2013).
6.2 REGRESSION FOR VECTOR GAUSSIAN DATA MODEL
Consider a distributed learning model with K = 2 encoders, each observing a noisy version of an n_y-dimensional Gaussian vector Y ~ N(y; 0, I), as X_k = H_k Y + N_k, where H_k ∈ R^{n_k×n_y} and the noises are distributed as N_k ~ N(0, I) for k = 1, 2.
For this model, the optimal relevance-complexity region can be computed using Theorem 2. In what follows, we evaluate the performance of our D-VIB algorithm of the previous section for regression. The algorithm is trained using a dataset of n i.i.d. samples {(X_{1,i}, X_{2,i}, Y_i)}_{i=1}^n from the described vector Gaussian data model. We train the DNNs for various values of the parameter s. We use the multivariate Gaussian parametrization in equation 76-equation 79 for the DNN architecture shown in Table 3. Specifically, Encoder k, k = 1, 2, consists of three dense layers of 512 neurons each, followed by rectified linear unit (ReLU) activations. The output of encoder k is processed by a dense layer without nonlinear activation to generate μ_k^e and Σ_k^e of size 512 and 512 × 512, respectively. Each decoder consists of two dense layers of 512 neurons with ReLU activations. The outputs of decoders 1, 2 and 12 are each processed by a fully connected layer without activation to generate μ_k^d and Σ_k^d, and μ_12^d and Σ_12^d, of size 2 and 2 × 2.
Figure 5 shows the optimal relevance-complexity region of tuples (∆, R_sum) obtained from Theorem 2 for a vector Gaussian model with K = 2 encoders, target variable dimension n_y = 1, and observation dimensions n_1 = n_2 = 3. A set of 40,000 samples is split between training (30,000 samples) and test (10,000 samples). The figure depicts all relevance-complexity pairs obtained by applying our algorithm D-VIB to this setting. The results are compared to the case of inference with known joint distribution (referred to as D-IB, see the next section) as well as the case of centralized inference (C-IB). For the D-VIB algorithm, the DNN architecture for the encoders and decoders is shown in Table 3. Figure 6 shows the evolution of the associated mean squared error (MSE) in the estimation of the label Y using our D-VIB algorithm. As can be seen from both figures, the performance of our D-VIB algorithm (which does not require knowledge of the joint label-feature distribution) is very close to that predicted by the theory, i.e., our Theorem 2.
Figure 7 shows similar curves for n_y = 2, n_1 = n_2 = 3 dimensions, for various sizes of the training dataset. As expected, larger training sets allow more accurate prediction. Noteworthy, the fact that the performance during the training phase can be better than that of the centralized learning scenario is an indicator of overfitting. Related to this aspect, recall that although the D-VIB algorithm does not estimate the underlying distribution explicitly, it intuitively does so implicitly through the computation of the cost function; this is related to the fact that universal compressors also learn the actual distribution of the data being compressed. Since the plug-in estimator of entropy is biased downward, the estimates of the mutual information terms involved in the cost function are biased upward, which is an alternative explanation of the observed overfitting during the training phase.
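For reference, the data generation for this regression setup and the complexity-unconstrained centralized MMSE baseline can be reproduced as follows (our sketch; the random H_k are hypothetical instances).

```python
import numpy as np

rng = np.random.default_rng(0)
ny, n1, n2, n = 2, 3, 3, 30000
H1 = rng.standard_normal((n1, ny))
H2 = rng.standard_normal((n2, ny))

Y = rng.standard_normal((n, ny))              # Y ~ N(0, I)
X1 = Y @ H1.T + rng.standard_normal((n, n1))  # X1 = H1 Y + N1
X2 = Y @ H2.T + rng.standard_normal((n, n2))  # X2 = H2 Y + N2

# Centralized MMSE estimate E[Y|X1, X2] = Sigma_{y,x} Sigma_x^{-1} x = H^T (H H^T + I)^{-1} x.
H = np.vstack([H1, H2])
G = H.T @ np.linalg.inv(H @ H.T + np.eye(n1 + n2))
X = np.hstack([X1, X2])
mse = np.mean(np.sum((Y - X @ G.T) ** 2, axis=1))  # baseline MSE for D-VIB to approach
print(mse)
```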
Table 3: DNN architecture used.
Encoder k: dense [512]-ReLU; dense [512]-ReLU; dense [512]-ReLU
Latent space k: dense [256]-ReLU
Decoder 12: dense [256]-ReLU
Decoder k: dense [256]-ReLU
7 DISTRIBUTED BLAHUT-ARIMOTO TYPE ALGORITHMS
7.1 DISCRETE-ALPHABET SETTING
In this section, we derive an iterative method to optimize the variational DIB cost function in equation 23 when the data model is discrete and the joint distribution P_{X_K,Y} is either known or a good estimate of it can be obtained from the training samples. In these cases, the maximizing distributions P, Q of the variational DIB cost in equation 23 can be efficiently found by an alternating optimization procedure over P and Q, similar to the expectation-maximization (EM) algorithm Dempster et al. (1977) and the standard Blahut-Arimoto (BA) method Blahut (1972). An extension to the vector Gaussian data model, which involves random variables with continuous alphabets, is also provided. The main idea of the algorithm is that at iteration t, the optimal distributions P^{(t)} that maximize the variational D-IB bound L_s^VB(P, Q^{(t)}) for fixed Q^{(t)} can be found in closed form and, next, the maximizing pmfs Q^{(t)} for given P^{(t)} can also be found analytically. So, starting from an initialization P^{(0)} and Q^{(0)}, the algorithm performs the following computations successively and in this order, until convergence,
P(0) → Q(0) → P(1) → . . .→ P(t) → Q(t) → . . . (81)
We refer to such an algorithm as the "Blahut-Arimoto Distributed Information Bottleneck Algorithm (BA-DIB)". Algorithm 1 describes the steps taken by BA-DIB to successively maximize L_s^VB(P,Q) by solving a concave optimization problem over P and over Q at each iteration. We have the following lemma, whose proof follows essentially by using the log-sum inequality Cover and Thomas (1991) and the convexity of the mapping x ↦ x log x.
Lemma 4 The function L_s^VB(P,Q) is concave in P and in Q.

For fixed P^{(t)}, the optimal Q^{(t)} maximizing the variational D-IB bound in equation 19 follows from Lemma 1, as given by equation 21 and equation 22. For fixed Q^{(t)}, the optimal P^{(t)} can be found using the following lemma.
Lemma 5 For fixed Q, there exists a P that achieves the maximum max_P L_s^VB(P,Q), where P_{U_k|X_k} is given by

p*(u_k|x_k) = q(u_k) exp(−ψ_s(u_k, x_k)) / ∑_{u_k∈U_k} q(u_k) exp(−ψ_s(u_k, x_k)), (82)

for u_k ∈ U_k and x_k ∈ X_k, k ∈ K, and where we define

ψ_s(u_k, x_k) := D_KL(P_{Y|x_k}‖Q_{Y|u_k}) + (1/s) E_{U_{K\k}|x_k}[D_KL(P_{Y|U_{K\k},x_k}‖Q_{Y|U_{K\k},u_k})]. (83)
Proof: Due to its concavity, to maximize L_s^VB(P,Q) with respect to P for given Q, we add the Lagrange multipliers λ_{x_k} ≥ 0 for each constraint ∑_{u_k∈U_k} p(u_k|x_k) = 1 with x_k ∈ X_k. For each s, λ_{x_k} ≥ 0 and p(u_k|x_k) can be explicitly found by solving the KKT conditions, e.g.,

∂/∂p(u_k|x_k) [ L_s^VB(P,Q) + ∑_{x_k∈X_k} λ_{x_k} ( ∑_{u_k∈U_k} p(u_k|x_k) − 1 ) ] = 0.

This completes the proof.
Algorithm 1 BA-DIB training algorithm for discrete data

1: inputs: discrete pmf P_{X_1,...,X_K,Y}, parameter s ≥ 0.
2: output: optimal P*_{U_k|X_k}, pair (∆_s, R_s).
3: initialization: Set t = 0 and set P^{(0)} with p(u_k|x_k) = 1/|U_k| for u_k ∈ U_k, x_k ∈ X_k, k = 1, ..., K.
4: repeat
5:   Compute Q^{(t+1)} using equation 21 and equation 22.
6:   Compute P^{(t+1)} using equation 82.
7:   t ← t + 1
8: until convergence.
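As a minimal, verifiable instance of Algorithm 1, the sketch below (ours; small hypothetical alphabets) implements the single-encoder special case K = 1, for which equation 83 reduces to ψ_s(u, x) = (1 + 1/s) D_KL(P_{Y|x}‖Q_{Y|u}) since U_{K\k} is empty.

```python
import numpy as np

rng = np.random.default_rng(0)
X_card, Y_card, U_card, s = 8, 4, 4, 0.5

P_xy = rng.dirichlet(np.ones(X_card * Y_card)).reshape(X_card, Y_card)  # joint P_{X,Y}
P_x = P_xy.sum(axis=1)
P_y_given_x = P_xy / P_x[:, None]

P_u_given_x = rng.dirichlet(np.ones(U_card), size=X_card)  # init P^{(0)}

for _ in range(200):
    # Q-step (equations 21-22): q(u) and q(y|u) induced by the current encoder.
    q_u = P_x @ P_u_given_x
    q_y_given_u = (P_u_given_x * P_x[:, None]).T @ P_y_given_x / q_u[:, None]
    # P-step (equation 82): p(u|x) proportional to q(u) exp(-psi_s(u, x)).
    kl = np.einsum('xy,xuy->xu', P_y_given_x,
                   np.log(P_y_given_x[:, None, :] / q_y_given_u[None, :, :]))
    log_p = np.log(q_u)[None, :] - (1.0 + 1.0 / s) * kl
    P_u_given_x = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    P_u_given_x /= P_u_given_x.sum(axis=1, keepdims=True)
```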
7.1.1 CONVERGENCE
Algorithm 1 essentially falls into the class of Successive Upper-Bound Minimization (SUM) algorithms Razaviyayn et al. (2013), in which L_s^VB(P,Q) acts as a globally tight lower bound on L_s(P). Algorithm 1 provides a sequence P^{(t)} for each iteration t, which converges to a stationary point of the optimization problem equation 23.
Proposition 4 Every limit point of the sequence P(t) generated by Algorithm 1 converges to a stationary point of equation 23.
Proof: Let Q*(P) = arg max_Q L_s^VB(P,Q). Using Lemma 1, for every P′ ≠ P it holds that

L_s^VB(P, Q*(P′)) ≤ L_s^VB(P, Q*(P)) = L_s(P). (84)

Since L_s(P) and L_s^VB(P, Q*(P′)) satisfy the assumptions of (Razaviyayn et al., 2013, Proposition 1), L_s^VB(P, Q*(P′)) satisfies A1-A4 in Razaviyayn et al. (2013). Convergence to a stationary point of equation 23 follows from (Razaviyayn et al., 2013, Theorem 1).
The self-consistent equations equation 21, equation 22 and equation 83 satisfied by any stationary point of the D-IB problem extend those of the standard point-to-point IB problem Globerson and Tishby (2004) to the distributed IB problem with K ≥ 2 encoders. In particular, note the additional divergence term in equation 83.
7.2 GAUSSIAN SETTING
Recall Algorithm 1. For finite-alphabet sources the updating rules of Q^{(t+1)} and P^{(t+1)} in Algorithm 1 are relatively easy, but they become infeasible for continuous-alphabet sources. We leverage the optimality of Gaussian test channels, shown in Theorem 2, to restrict the optimization of P to Gaussian distributions, which are easily represented by a finite set of parameters, namely mean and covariance. We show that if P^{(t)} are Gaussian distributions, then P^{(t+1)} are also Gaussian distributions, which can be computed with an efficient update algorithm on their representing parameters. In particular, if at time t the k-th distribution P^{(t)}_{U_k|X_k} is given by

U_k^t = A_k^t X_k + Z_k^t, (85)

where Z_k^t ~ CN(0, Σ_{z_k^t}), we show that at t+1, for P^{(t+1)} updated as in equation 82, the encoder P^{(t+1)}_{U_k|X_k} corresponds to U_k^{t+1} = A_k^{t+1} X_k + Z_k^{t+1}, where Z_k^{t+1} ~ CN(0, Σ_{z_k^{t+1}}) and Σ_{z_k^{t+1}}, A_k^{t+1} are updated as
Σ_{z_k^{t+1}} = ( (1 + 1/s) Σ^{-1}_{u_k^t|y} − (1/s) Σ^{-1}_{u_k^t|u_{K\k}^t} )^{-1}, (86)

A_k^{t+1} = Σ_{z_k^{t+1}} ( (1 + 1/s) Σ^{-1}_{u_k^t|y} A_k^t (I − Σ_{x_k|y} Σ_{x_k}^{-1}) − (1/s) Σ^{-1}_{u_k^t|u_{K\k}^t} A_k^t (I − Σ_{x_k|u_{K\k}^t} Σ_{x_k}^{-1}) ). (87)
The detailed update procedure is given in Algorithm 2 (see the following section for the details of the derivations).
Algorithm 2 BA-DIB algorithm for the Gaussian Vector D-IB

1: inputs: covariance Σ_{y,x_1,...,x_K}, parameter s ≥ 0.
2: output: optimal pairs (A_k^*, Σ_{z_k^*}), k = 1, ..., K.
3: initialization: Set t = 0 and set randomly A_k^0 and Σ_{z_k^0} ≻ 0, k ∈ K.
4: repeat
5:   Compute Σ_{x_k|u_{K\k}^t} and update, for k ∈ K,
     Σ_{u_k^t|y} = A_k^t Σ_{x_k|y} A_k^{t,†} + Σ_{z_k^t}, (88)
     Σ_{u_k^t|u_{K\k}^t} = A_k^t Σ_{x_k|u_{K\k}^t} A_k^{t,†} + Σ_{z_k^t}. (89)
6:   Compute Σ_{z_k^{t+1}} as in equation 86 for k ∈ K.
7:   Compute A_k^{t+1} as in equation 87, k ∈ K.
8:   t ← t + 1.
9: until convergence.
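A direct numpy transcription of these updates for K = 2 (our sketch; real-valued matrices and identity noise covariances Σ_k = I are assumed for simplicity, so that Σ_{x_k|y} = I):

```python
import numpy as np

rng = np.random.default_rng(0)
ny, nx, nu, s = 2, 3, 2, 0.5
H = [rng.standard_normal((nx, ny)) for _ in range(2)]
Sy = np.eye(ny)
Sx = [H[k] @ Sy @ H[k].T + np.eye(nx) for k in range(2)]   # Sigma_{xk}
Sx12 = H[0] @ Sy @ H[1].T                                   # Sigma_{x1,x2}

A = [0.1 * rng.standard_normal((nu, nx)) for _ in range(2)]
Sz = [np.eye(nu) for _ in range(2)]

for _ in range(100):
    A_next, Sz_next = [], []
    for k in range(2):
        j = 1 - k
        Sxk_y = np.eye(nx)                                  # Sigma_{xk|y} = Sigma_k = I
        Su_j = A[j] @ Sx[j] @ A[j].T + Sz[j]                # Sigma_{uj}
        C = (Sx12 if k == 0 else Sx12.T) @ A[j].T           # Sigma_{xk,uj}
        Sxk_uj = Sx[k] - C @ np.linalg.solve(Su_j, C.T)     # Sigma_{xk|uj}
        Suk_y = A[k] @ Sxk_y @ A[k].T + Sz[k]               # equation 88
        Suk_uj = A[k] @ Sxk_uj @ A[k].T + Sz[k]             # equation 89
        Sz_new = np.linalg.inv((1 + 1 / s) * np.linalg.inv(Suk_y)
                               - (1 / s) * np.linalg.inv(Suk_uj))  # equation 86
        Sxk_inv = np.linalg.inv(Sx[k])
        A_new = Sz_new @ ((1 + 1 / s) * np.linalg.solve(Suk_y, A[k] @ (np.eye(nx) - Sxk_y @ Sxk_inv))
                          - (1 / s) * np.linalg.solve(Suk_uj, A[k] @ (np.eye(nx) - Sxk_uj @ Sxk_inv)))  # equation 87
        A_next.append(A_new)
        Sz_next.append(Sz_new)
    A, Sz = A_next, Sz_next
```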
7.2.1 DERIVATION OF ALGORITHM 2
We derive the update rules of Algorithm 2 and show that the Gaussian distribution is invariant to the update rules in Algorithm 1, in line with Theorem 2. First, we recall that if (X_1, X_2) are jointly Gaussian, then

P_{X_2|X_1=x_1} = CN(μ_{x_2|x_1}, Σ_{x_2|x_1}), (90)

where μ_{x_2|x_1} := K_{x_2|x_1} x_1, with K_{x_2|x_1} := Σ_{x_2,x_1} Σ_{x_1}^{-1}.
Then, for Q^{(t+1)} computed as in equation 21 and equation 22 from P^{(t)}, which is a set of Gaussian distributions, we have

Q^{(t+1)}_{Y|u_k} = CN(μ_{y|u_k^t}, Σ_{y|u_k^t}), Q^{(t+1)}_{Y|u_K} = CN(μ_{y|u_K^t}, Σ_{y|u_K^t}).

Next, we look at the update P^{(t+1)} as in equation 82 from the given Q^{(t+1)}. First, we have that p(u_k^t) is the marginal of U_k^t, given by U_k^t ~ CN(0, Σ_{u_k^t}), where Σ_{u_k^t} = A_k^t Σ_{x_k} A_k^{t,H} + Σ_{z_k^t}.
Then, to compute ψ_s(u_k^t, x_k), we first note that

E_{U_{K\k}|x_k}[D_KL(P_{Y|U_{K\k},x_k}‖Q_{Y|U_{K\k},u_k})] = D_KL(P_{Y,U_{K\k}|x_k}‖Q_{Y,U_{K\k}|u_k}) − D_KL(P_{U_{K\k}|x_k}‖Q_{U_{K\k}|u_k}), (91)

and that for two generic multivariate Gaussian distributions P_1 ~ CN(μ_1, Σ_1) and P_2 ~ CN(μ_2, Σ_2) in C^N, the KL divergence is computed as in equation 80.
Applying equation 91 and equation 80 in equation 83 and noting that all involved distributions are Gaussian, it follows that ψ_s(u_k^t, x_k) is a quadratic form. Then, since p(u_k^t) is Gaussian, the product log(p(u_k^t) exp(−ψ_s(u_k^t, x_k))) is also a quadratic form, and identifying constant, first and second order terms, we can write

log p^{(t+1)}(u_k|x_k) = Z(x_k) − (u_k − μ_{u_k^{t+1}|x_k})^H Σ^{-1}_{z_k^{t+1}} (u_k − μ_{u_k^{t+1}|x_k}), (92)

where Z(x_k) is a normalization term independent of u_k,

Σ^{-1}_{z_k^{t+1}} = Σ^{-1}_{u_k^t} + K^H_{y|u_k^t} Σ^{-1}_{y|u_k^t} K_{y|u_k^t} + (1/s) K^H_{y u_{K\k}^t|u_k^t} Σ^{-1}_{y u_{K\k}^t|u_k^t} K_{y u_{K\k}^t|u_k^t} − (1/s) K^H_{u_{K\k}^t|u_k^t} Σ^{-1}_{u_{K\k}^t|u_k^t} K_{u_{K\k}^t|u_k^t}, (93)

and

μ_{u_k^{t+1}|x_k} = Σ_{z_k^{t+1}} ( K^H_{y|u_k^t} Σ^{-1}_{y|u_k^t} μ_{y|x_k} + (1/s) K^H_{y u_{K\k}^t|u_k^t} Σ^{-1}_{y u_{K\k}^t|u_k^t} μ_{y u_{K\k}^t|x_k} − (1/s) K^H_{u_{K\k}^t|u_k^t} Σ^{-1}_{u_{K\k}^t|u_k^t} μ_{u_{K\k}^t|x_k} ). (94)

This shows that p^{(t+1)}(u_k|x_k) is a multivariate Gaussian distribution and that U_k^{t+1}|{X_k = x_k} is also multivariate Gaussian, distributed as CN(μ_{u_k^{t+1}|x_k}, Σ_{z_k^{t+1}}).
Next, we simplify equation 93 and equation 94 to obtain the update rules equation 86 and equation 87. From the matrix inversion lemma, similarly to Chechik et al. (Feb. 2005), for (X_1, X_2) jointly Gaussian we have

Σ^{-1}_{x_2|x_1} = Σ^{-1}_{x_2} + K^H_{x_1|x_2} Σ^{-1}_{x_1|x_2} K_{x_1|x_2}. (95)

Applying equation 95 in equation 93, we have

Σ^{-1}_{z_k^{t+1}} = Σ^{-1}_{u_k^t|y} + (1/s) Σ^{-1}_{u_k^t|y u_{K\k}^t} − (1/s) Σ^{-1}_{u_k^t|u_{K\k}^t} (96)
= (1 + 1/s) Σ^{-1}_{u_k^t|y} − (1/s) Σ^{-1}_{u_k^t|u_{K\k}^t}, (97)

where equation 97 is due to the Markov chain U_k −◦− Y −◦− U_{K\k}.
Then, also from the matrix inversion lemma, we have for jointly Gaussian (X_1, X_2),

Σ^{-1}_{x_2|x_1} Σ_{x_2,x_1} Σ^{-1}_{x_1} = Σ^{-1}_{x_2} Σ_{x_2,x_1} Σ^{-1}_{x_1|x_2}. (98)

Applying equation 98 to equation 94, for the first term in equation 94 we have

K^H_{y|u_k^t} Σ^{-1}_{y|u_k^t} μ_{y|x_k} = Σ^{-1}_{u_k^t|y} Σ_{u_k^t,y} Σ^{-1}_y μ_{y|x_k} (99)
= Σ^{-1}_{u_k^t|y} A_k^t Σ_{x_k,y} Σ^{-1}_y Σ_{y,x_k} Σ^{-1}_{x_k} x_k = Σ^{-1}_{u_k^t|y} A_k^t (I − Σ_{x_k|y} Σ^{-1}_{x_k}) x_k, (100)

where Σ_{u_k^t,y} = A_k^t Σ_{x_k,y}; and equation 100 is due to the definition of Σ_{x_k|y}.

Similarly, for the second term in equation 94, we have

K^H_{y u_{K\k}^t|u_k^t} Σ^{-1}_{y u_{K\k}^t|u_k^t} μ_{y u_{K\k}^t|x_k} = Σ^{-1}_{u_k^t|y u_{K\k}^t} A_k^t (I − Σ_{x_k|y u_{K\k}^t} Σ^{-1}_{x_k}) x_k (101)
= Σ^{-1}_{u_k^t|y} A_k^t (I − Σ_{x_k|y} Σ^{-1}_{x_k}) x_k, (102)

where we use Σ_{u_k^t, y u_{K\k}^t} = A_k^t Σ_{x_k, y u_{K\k}^t}; and equation 102 is due to the Markov chain U_k −◦− Y −◦− U_{K\k}.

For the third term in equation 94,

K^H_{u_{K\k}^t|u_k^t} Σ^{-1}_{u_{K\k}^t|u_k^t} μ_{u_{K\k}^t|x_k} = Σ^{-1}_{u_k^t|u_{K\k}^t} A_k^t (I − Σ_{x_k|u_{K\k}^t} Σ^{-1}_{x_k}) x_k. (103)

Equation 87 follows by noting that μ_{u_k^{t+1}|x_k} = A_k^{t+1} x_k, so that from equation 94, A_k^{t+1} can be identified as in equation 87.
Finally, we note that due to equation 85, Σ

1. What is the focus of the paper in terms of the problem addressed?
2. What are the key contributions of the authors regarding the distributed representation learning problem?
3. How does the paper approach the problem from an information-theoretic perspective?
4. Can you elaborate on the variational bound constructed by the authors and its purpose?
5. What kind of experimental results did the authors provide to support their approach?

Review
In this paper, the authors studied the distributed representation learning problem, where multiple sources of data are processed to provide information about Y. They studied this problem from an information-theoretic point of view. Their main contributions can be summarized as follows.
1. The optimal trade-off between accuracy and complexity was studied for the discrete memoryless data model as well as the memoryless vector Gaussian model.
2. A variational bound was constructed in order to connect the optimal encoder and decoder mappings with the solution of an optimization algorithm.
3. If only samples from an unknown distribution are available, an algorithm was proposed to find the optimal encoder and decoder. Moreover, some experiments were conducted to support the approach.
In general, I think the paper is well-organized. The definition of the problem and the motivation of the approach are clear. The theorems, algorithms and experiments are solid enough to support the whole story of this paper. Generally, I wish to see this paper accepted.
ICLR | Title
An Information Theoretic Approach to Distributed Representation Learning
Abstract
The problem of distributed representation learning is one in which multiple sources of information X1, . . . , XK are processed separately so as to extract useful information about some statistically correlated ground truth Y . We investigate this problem from informationtheoretic grounds. For both discrete memoryless (DM) and memoryless vector Gaussian models, we establish fundamental limits of learning in terms of optimal tradeoffs between relevance and complexity. We also develop a variational bound on the optimal tradeoff that generalizes the evidence lower bound (ELBO) to the distributed setting. Furthermore, we provide a variational inference type algorithm that allows to compute this bound and in which the mappings are parametrized by neural networks and the bound approximated by Markov sampling and optimized with stochastic gradient descent. Experimental results on synthetic and real datasets are provided to support the efficiency of the approaches and algorithms which we develop in this paper.
1 INTRODUCTION
Let a measurable variable X ∈ X and a target variable Y ∈ Y with unknown joint distribution PX,Y be given. In the classic problem of statistical learning, one wishes to infer an accurate predictor of the target variable Y ∈ Y based on observed realizations of X ∈ X . That is, for a given class F of admissible predictors φ : X → Ŷ and an additive loss function ` : Y → Ŷ that measures discrepancies between true values and their estimated fits, one aims at finding the mapping φ? ∈ F that minimizes the expected risk
CPX,Y (φ, `) = EPX,Y [`(Y, φ(X))]. (1)
Because the joint distribution PX,Y is unknown, in practice the risk equation 1 (also called population risk) cannot be computed directly; and, in the standard approach, one usually resorts to choosing the predictor with minimal risk on a training dataset consisting of n labeled samples {(xi, yi)}ni=1 that are drawn independently from the unknown joint distribution PX,Y . Also, it is important to restrict the set F of admissible predictors to a low-complexity class to prevent overfitting. This leads to the abstract inference problem shown in Figure 1.
In this paper, we study a generalization of this problem in which the prediction is to be performed in a distributed manner. The model is shown in Figure 2. Here, the prediction of the target variable Y ∈ Y is to be performed on the basis of samples of statistically correlated random variables (X1, . . . , XK) that are observed each at a distinct predictor. We investigate this problem in the case in which the loss function `(·) is the logarithmic-loss fidelity measure, given by
`log(y, ŷ) = log ( 1 ŷ(y) ) (2)
where ŷ(·) designates a probability distribution on Y and ŷ(y) is the value of this distribution evaluated for the outcome y ∈ Y . The choice of a ‘good” loss function is often controversial in statistical learning theory, and although a complete and rigorous justification of the usage of logarithmic loss as a fidelity measure in learning theory is still awaited, partial explanations appeared in Jiao et al. (2015) and, especially in Painsky and Wornell (2018) where it is shown that, for binary classification problems, by minimizing the logarithmic-loss one actually minimizes an upper bound to any choice of loss function that is smooth, proper (i.e., unbiased and Fisher consistent) and convex. Also, we constrain the complexity of the predictors by using mutual information as a regularizer term. This is inline with recent works Xu and Raginsky (2017); Russo and Zou (2015) that show that the generalization error can be upper-bounded using the mutual information between the input dataset and the output of the predictor – see also Bousquet and Elisseeff (2002); Shalev-Shwartz et al. (2010) where the stability of an algorithm is controlled by constraining the mutual information between its input and output.
1.1 AN EXAMPLE: MULTI-VIEW LEARNING
In many data analytics problems, data is collected from various sources of information or feature extractors and is intrinsically heterogeneous. For example, an image can be identified by its color or texture features, and a document may contain text and images. Conventional machine learning approaches concatenate all available data into one big row vector (or matrix) on which a suitable algorithm is then applied. Treating different observations as a single source might cause overfitting and is not physically meaningful because each group of data may have different statistical properties. Alternatively, one may partition the data into groups according to sample homogeneity, with each group of data regarded as a separate view. This paradigm, termed multi-view learning Xu et al. (2013), has received growing interest; and various algorithms exist, sometimes under references such as co-training Blum and Mitchell (1998); Dhillon et al. (2011); Kumar and Daumé (2011); Gönen and Alpaydın (2011), multiple kernel learning Gönen and Alpaydın (2011) and subspace learning Jia et al. (2010). By using distinct encoder mappings to represent distinct groups of data, and jointly optimizing over all mappings to remove redundancy, multi-view learning offers a degree of flexibility that is not only desirable in practice but is likely to result in better learning capability. Actually, as shown in Vapnik (2013), local learning algorithms produce fewer errors than global ones. Viewing the problem as one of function approximation, the intuition is that it is usually difficult to find a unique function that holds good predictability properties over the entire data space.
1.2 INFORMAL SUMMARY OF RESULTS
In this paper, we first characterize the optimal tradeoff between relevance and complexity for the distributed learning model of Figure 2 for both discrete memoryless (DM) and memoryless vector Gaussian models. While the result for the discrete data model (Theorem 1) is not difficult to establish using connections with Courtade and Weissman (2014, Appendix B), which we make explicit here, the result for the multivariate Gaussian data model (Theorem 2), which provides a sharp analytic characterization of optimal tradeoffs, is new and non-trivial (the proof of the converse part is not straightforward and was missing before this work in both the learning theory and information theory communities, including in the scalar case). Second, we develop a variational bound on the optimal tradeoff that can be seen as a generalization of the ELBO and the β-VAE criteria Higgins et al. (2016) to the distributed setting. Furthermore, for both DM and Gaussian models, we also provide a variational inference type algorithm which is parametrized by neural networks and allows computing the developed variational bound when the data distribution is not known. Specifically, the main contributions of this paper are:
• In Section 3.2, we find an explicit analytic characterization of optimal tradeoffs between relevance and complexity for the memoryless vector Gaussian model. The result generalizes the Gaussian Information Bottleneck method of Globerson and Tishby (2004); Chechik et al. (Feb. 2005) to the distributed learning scenario.
• In Section 3.3, we study the problem of maximizing relevance under a constraint on the sum complexity for which we establish a variational bound which generalizes the ELBO and the β-VAE criteria to the distributed setting.
• Section 3.4 is algorithm-oriented. We develop a variational inference type algorithm which enables computing the bound. This algorithm is obtained by parametrizing the encoders, the decoder, and the prior distributions via DNNs and using Monte-Carlo sampling. It also makes use of Kingma et al.'s re-parametrization trick Kingma and Welling (2013) and can be seen as a generalization of the variational information bottleneck algorithm in Alemi et al. (2017) to the distributed setting.
• Section 4 contains some experimental results on real datasets which show the efficiency of the approaches and algorithms that we develop in this paper.
Most relevant to this paper is the single-encoder Information Bottleneck (IB) method of Tishby et al. (1999), which readily and elegantly captures the above-mentioned viewpoint of seeking the right balance between data fit and generalization by using the mutual information both as a cost function and as a regularizer term. Thus, the results of this paper can be seen as a generalization of those of Tishby et al. (1999) for the DM model and Globerson and Tishby (2004); Chechik et al. (Feb. 2005) for the Gaussian model to the distributed learning setting.
Remark: Due to space constraints, the proofs of the results of this paper are deferred to the appendices section, which also contains additional experimental results.
1.3 NOTATION
Throughout, upper case letters denote random variables, e.g., X; lower case letters denote realizations of random variables, e.g., x; and calligraphic letters denote sets, e.g., X . The cardinality of a set is denoted by |X |. For a random variable X with probability mass function (pmf) PX , we use PX(x) = p(x), x ∈ X for short. Boldface upper case letters denote vectors or matrices, e.g., X, where context should make the distinction clear. For random variables (X1, X2, . . .) and a set of integers K ⊆ N, XK denotes the set of random variables with indices in the set K, i.e., XK = {Xk : k ∈ K}. If K = ∅, XK = ∅. For k ∈ K we let XK/k = (X1, . . . , Xk−1, Xk+1, . . . , XK), and assume that X0 = XK+1 = ∅. Also, for zero-mean random vectors X and Y, the quantities Σx, Σx,y and Σx|y denote, respectively, the covariance matrix of the vector X, the cross-covariance matrix of the pair (X,Y), and the conditional covariance matrix of X given Y. Finally, for two probability measures PX and QX on the random variable X ∈ X , the relative entropy or Kullback-Leibler divergence is denoted as DKL(PX‖QX).
2 FORMAL PROBLEM FORMULATION
Let K ≥ 2 and (X1, . . . , XK , Y ) be a tuple of random variables with a given joint probability mass function (pmf) PX1,...,XK ,Y (x1, . . . , xK , y) for (x1, . . . , xK) ∈ X1 × . . .×XK and y ∈ Y , where Xk designates the alphabet of Xk and Y that of Y . Throughout, we assume that the Markov chain
$$X_k - Y - X_{\mathcal{K}/k} \quad (3)$$
holds for all k ∈ K. That is, the joint pmf factorizes as
$$P_{X_1,\ldots,X_K,Y}(x_1,\ldots,x_K,y) = P_Y(y)\prod_{k=1}^{K} P_{X_k|Y}(x_k|y). \quad (4)$$
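To make the factorization concrete, a minimal sketch of sampling from a joint pmf of the form of equation 4; the specific pmfs below are hypothetical placeholders for K = 2 binary features:

```python
import numpy as np

rng = np.random.default_rng(0)
p_y = np.array([0.5, 0.5])                         # pmf of Y over {0, 1}
# p_xk_given_y[k][y] is the pmf of X_k given Y = y, for K = 2 encoders
p_xk_given_y = [np.array([[0.9, 0.1], [0.2, 0.8]]),
                np.array([[0.7, 0.3], [0.4, 0.6]])]

def sample(n):
    y = rng.choice(len(p_y), size=n, p=p_y)
    # X_1, ..., X_K are conditionally independent given Y
    xs = [np.array([rng.choice(2, p=p[yi]) for yi in y]) for p in p_xk_given_y]
    return xs, y
```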
The variable Y is a target variable; and we seek to characterize how accurately it can be predicted from a measurable random vector (X1, . . . , XK) when the components of this vector are processed separately, each by a distinct encoder. More specifically, let {(X1,i, . . . , XK,i, Yi)}ni=1 be a collection of n independent copies of (X1, . . . , XK , Y ). Encoder k ∈ K only observes the sequence $X_k^n$ and generates a description $J_k = \phi_k(X_k^n)$ according to some mapping
$$\phi_k : \mathcal{X}_k^n \to \mathcal{M}_k^{(n)}, \quad (5)$$
where $\mathcal{M}_k^{(n)}$ is an arbitrary set of descriptions. The range of allowable description sets will be specified below. A decoder ψ(·) collects all descriptions $J_{\mathcal{K}} = (J_1,\ldots,J_K)$ and returns an estimate $\hat{Y}^n$ of $Y^n$ as
$$\psi : \mathcal{M}_1^{(n)} \times \ldots \times \mathcal{M}_K^{(n)} \to \hat{\mathcal{Y}}^n. \quad (6)$$
The relevance of the estimation $\hat{Y}^n$ is defined here as the information that the descriptions $\phi_1(X_1^n),\ldots,\phi_K(X_K^n)$ collectively preserve about $Y^n$, as measured by Shannon mutual information¹
$$\Delta^{(n)}(P_{X_{\mathcal{K}},Y}) = \frac{1}{n} \sum_{y^n, x_1^n, \ldots, x_K^n} P(y^n) \prod_{k=1}^{K} P(x_k^n|y^n) \log \frac{P\big(y^n, \psi(\phi_1(x_1^n),\ldots,\phi_K(x_K^n))\big)}{P(y^n)\, P\big(\psi(\phi_1(x_1^n),\ldots,\phi_K(x_K^n))\big)} := \frac{1}{n} I_{P_{X_{\mathcal{K}},Y}}(Y^n; \hat{Y}^n), \quad (7)$$
¹Alternatively, the relevance could be defined in a more operational manner by the average logarithmic-loss distortion or error $\mathbb{E}_{P_{X_{\mathcal{K}},Y}}[\ell_{\log}(Y^n, \hat{Y}^n)] = H(Y^n|\hat{Y}^n)$.
where Ŷ n = ψ(φ1(Xn1 ), . . . , φK(XnK)) and the subscript PXK,Y indicates that the mutual information is computed under the joint distribution PXK,Y .
There are various ways to control the complexity of the encoding functions {φk}Kk=1. In this paper, we do so by restricting their ranges. This is known as the minimum description length complexity measure Hinton and van Camp (1993). Specifically, the mapping φk(·) at Encoder k ∈ K needs to satisfy
$$R_k \geq \frac{1}{n} \log |\phi_k(X_k^n)| \quad \text{for all } X_k^n \in \mathcal{X}_k^n. \quad (8)$$
Definition 1 A tuple (∆, R1, . . . , RK) is said to be achievable if there exist an integer n, a family of encoding mappings {φk}Kk=1 and a decoder mapping ψ such that
$$\Delta \leq \frac{1}{n} I_{P_{X_{\mathcal{K}},Y}}\big(Y^n; \psi(\phi_1(X_1^n),\ldots,\phi_K(X_K^n))\big), \quad (9)$$
$$R_k \geq \frac{1}{n} \log |\phi_k(X_k^n)| \quad \text{for all } k \in \mathcal{K}. \quad (10)$$
The relevance-complexity region IRDIB is given by the closure of all achievable tuples (∆, R1, . . . , RK).
In some cases, for given RK = (R1, . . . , RK) and for ease of exposition, we will be content with the relevance-complexity function ∆(RK, PXK,Y ) defined as
$$\Delta(R_{\mathcal{K}}, P_{X_{\mathcal{K}},Y}) = \max_{\{\phi_k\}_{k=1}^{K},\, \psi} \Delta^{(n)}(P_{X_{\mathcal{K}},Y}), \quad (11)$$
where the maximization is subject to equation 8.
3 MAIN RESULTS
3.1 DISCRETE MEMORYLESS DATA MODEL
The following theorem (the proof of which can be found in the appendices section) provides a computable characterization of the relevance-complexity region IRDIB. The result can be seen as a generalization of the single-encoder IB of Tishby et al. (1999) to the distributed learning model with K encoders.
Theorem 1 The relevance-complexity region IRDIB of the distributed learning problem with PXK,Y for which the Markov chain equation 3 holds is given by the union of all tuples $(\Delta, R_1, \ldots, R_K) \in \mathbb{R}_+^{K+1}$ that satisfy, for all $\mathcal{S} \subseteq \mathcal{K}$,
$$\Delta \leq \sum_{k \in \mathcal{S}} [R_k - I(X_k; U_k | Y, T)] + I(Y; U_{\mathcal{S}^c} | T), \quad (12)$$
for some set of pmfs $\mathbf{P} := \{P_{U_1|X_1,T}, \ldots, P_{U_K|X_K,T}, P_T\}$ with joint distribution of the form
$$P_T(t)\, P_Y(y) \prod_{k=1}^{K} P_{X_k|Y}(x_k|y) \prod_{k=1}^{K} P_{U_k|X_k,T}(u_k|x_k,t). \quad (13)$$
Remark 1 In Theorem 1, the random variable T stands for a convexification of the region, i.e., a convex combination of achievable relevance-complexity tuples is itself achievable. For given T = t, the result of Theorem 1 comprises the optimization over K conditional distributions $\{P_{U_k|X_k,t}\}_{k \in \mathcal{K}}$. For k ∈ K, the conditional distribution $P_{U_k|X_k,t}$ represents a stochastic encoding of the feature Xk into a latent variable Uk. Intuitively, the latent variable Uk should capture all relevant information about Y that is contained in Xk and is non-redundant with that carried by {Ui}i≠k. The requirement of non-redundancy is mandated by the need to operate at the minimum possible complexity at which a desired relevance level is achievable (recall that minimum complexity, as expressed by the algorithm's input-output mutual information, translates directly into better generalization capability). Collectively, however, the set of all latent variables (U1, . . . , UK) should be expressive enough to reproduce the target variable Y to within the desired relevance level.
Remark 2 As for the single-encoder IB problem of Tishby et al. (1999) and an increasing number of works that followed, including Courtade and Weissman (2014, Section III-F), our approach here is asymptotic. In addition to leading to an exact characterization, the result also readily provides a lower bound on the performance in the non-asymptotic (e.g., one-shot) setting. For the latter setting, known approaches (e.g., the functional representation lemma of Li and El Gamal (2018)) would lead to only non-matching inner and outer bounds on the region of optimal tradeoff pairs, as is the case even in the single-encoder setting Li et al. (2018).
3.2 MEMORYLESS VECTOR GAUSSIAN DATA MODEL
We now turn to a continuous-alphabet setting. Here, (X1, . . . ,XK ,Y) is a zero-mean Gaussian random vector such that
$$\mathbf{X}_k = \mathbf{H}_k \mathbf{Y} + \mathbf{N}_k \quad \text{for all } k \in \mathcal{K}, \quad (14)$$
where $\mathbf{H}_k \in \mathbb{C}^{n_k \times n_y}$ models the linear mapping connecting the target variable $\mathbf{Y} \in \mathbb{C}^{n_y}$ to the observation at encoder k, and $\mathbf{N}_k \in \mathbb{C}^{n_k}$, k = 1, . . . , K, is the noise vector at encoder k, assumed to be Gaussian with zero mean and covariance matrix $\Sigma_k$, and independent from all other noises and the target variable Y. We denote by $\Sigma_y$ the covariance matrix of the target vector $\mathbf{Y} \in \mathbb{C}^{n_y}$.
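For intuition, a minimal real-valued sketch (the model above is complex-valued) of drawing samples from this observation model; the dimensions and the matrices H_k below are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
ny, n1, n2, n = 2, 3, 3, 1000
H = [rng.standard_normal((n1, ny)), rng.standard_normal((n2, ny))]

Y = rng.standard_normal((n, ny))                        # Y ~ N(0, I)
X = [Y @ Hk.T + rng.standard_normal((n, Hk.shape[0]))   # X_k = H_k Y + N_k
     for Hk in H]
```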
For this model, we find an explicit analytic characterization of optimal tradeoffs between relevance and complexity. The proof relies on deriving an outer bound on the region described by equation 12, and showing that it is achievable with Gaussian distribution, with no time-sharing. In doing so, we use techniques that rely on the de Bruijn identity and the properties of Fisher information and minimum mean square error (MMSE).
Theorem 2 The relevance-complexity region $\mathcal{IR}^{\mathrm{G}}_{\mathrm{DIB}}$ for the vector Gaussian model is given by the union of all tuples $(\Delta, R_1, \ldots, R_K)$ that satisfy, for all $\mathcal{S} \subseteq \mathcal{K}$,
$$\Delta \leq \sum_{k \in \mathcal{S}} \left[ R_k + \log \left| \mathbf{I} - \Sigma_k^{1/2} \Omega_k \Sigma_k^{1/2} \right| \right] + \log \left| \sum_{k \in \mathcal{S}^c} \Sigma_y^{1/2} \mathbf{H}_k^{\dagger} \Omega_k \mathbf{H}_k \Sigma_y^{1/2} + \mathbf{I} \right|,$$
for some $0 \preceq \Omega_k \preceq \Sigma_k^{-1}$.
Proof: The proof of the direct part follows by evaluating the region of Theorem 1, which can be extended to the case of continuous alphabets using standard discretization (quantization) arguments, with the choices $T = \emptyset$ and $p(\mathbf{u}_k|\mathbf{x}_k,t) = \mathcal{CN}(\mathbf{x}_k, \Sigma_k^{1/2}(\Omega_k^{-1} - \mathbf{I})\Sigma_k^{1/2})$. The main contribution in the proof is the converse part. This proof is technical and rather lengthy and, for this reason, is deferred to the appendices section.
In the special case in which K = 1, the result of Theorem 2 recovers that by Globerson and Tishby (2004) (see also Chechik et al. (Feb. 2005)) which establishes the optimal relevance-complexity tradeoff of the single-encoder Gaussian IB problem.
3.3 A VARIATIONAL BOUND
In this section, we consider the problem of learning encoder and decoder mappings that maximize the relevance level for a given (fixed) complexity level, i.e., those that perform at the vicinity of the boundary of the region IRDIB. First, we derive a parametrization of the relevance-complexity region; then, we develop a variational bound which expresses the optimal encoder and decoder mappings as the solution to an optimization problem (an algorithm for solving this problem in the case of unknown distributions is given in the next section).
Let $R_{\mathrm{sum}} := \sum_{k=1}^{K} R_k$. Also, let $\mathcal{IR}^{\mathrm{sum}}_{\mathrm{DIB}}$ denote the region of achievable (relevance, sum-complexity) pairs,
$$\mathcal{IR}^{\mathrm{sum}}_{\mathrm{DIB}} := \Big\{ (\Delta, R_{\mathrm{sum}}) \in \mathbb{R}_+^2 : \exists (R_1,\ldots,R_K) \in \mathbb{R}_+^K \text{ s.t. } (\Delta, R_1,\ldots,R_K) \in \mathcal{IR}_{\mathrm{DIB}} \text{ and } \sum_{k=1}^{K} R_k = R_{\mathrm{sum}} \Big\}.$$
Proposition 1 The relevance-complexity region under a sum-complexity constraint, $\mathcal{IR}^{\mathrm{sum}}_{\mathrm{DIB}}$, is given by the convex hull of all tuples $(\Delta, R_{\mathrm{sum}}) \in \mathbb{R}_+^2$ satisfying $\Delta \leq \Delta(R_{\mathrm{sum}}, P_{X_{\mathcal{K}},Y})$, where
$$\Delta(R_{\mathrm{sum}}, P_{X_{\mathcal{K}},Y}) = \max_{\mathbf{P}} \min \Big\{ I(Y; U_{\mathcal{K}}),\; R_{\mathrm{sum}} - \sum_{k=1}^{K} I(X_k; U_k | Y) \Big\}, \quad (15)$$
and where the maximization is over the set of pmfs $\mathbf{P} := \{P_{U_1|X_1},\ldots,P_{U_K|X_K}\}$ such that the joint pmf factorizes as $p_Y(y) \prod_{k=1}^{K} p_{X_k|Y}(x_k|y) \prod_{k=1}^{K} p_{U_k|X_k}(u_k|x_k)$.
The next proposition provides a characterization of the pairs $(\Delta, R_{\mathrm{sum}})$ that lie on the boundary of $\mathcal{IR}^{\mathrm{sum}}_{\mathrm{DIB}}$ in terms of a nonnegative parameter $s \geq 0$.

Proposition 2 For every pair $(\Delta, R_{\mathrm{sum}}) \in \mathbb{R}_+^2$ that lies on the boundary of the relevance-complexity region $\mathcal{IR}^{\mathrm{sum}}_{\mathrm{DIB}}$ there exists $s \geq 0$ such that $(\Delta, R_{\mathrm{sum}}) = (\Delta_s, R_s)$, where
$$\Delta_s = \frac{1}{1+s} \Big[ (1 + sK) H(Y) + s R_s + \max_{\mathbf{P}} \mathcal{L}_s(\mathbf{P}) \Big], \quad (16)$$
$$R_s = I(Y; U^*_{\mathcal{K}}) + \sum_{k=1}^{K} [I(X_k; U^*_k) - I(Y; U^*_k)], \quad (17)$$
and $\mathbf{P}^*$ is the set of conditional pmfs $\mathbf{P}$ that maximize the cost function
$$\mathcal{L}_s(\mathbf{P}) := -H(Y|U_{\mathcal{K}}) - s \sum_{k=1}^{K} [H(Y|U_k) + I(X_k; U_k)]. \quad (18)$$
Using Proposition 2, it is clear that the encoders $\{P_{U_k|X_k}\}_{k \in \mathcal{K}}$ that achieve the relevance-complexity pair $(\Delta_s, R_s)$ can be computed by maximizing the regularized cost equation 18 for the corresponding value of $s \geq 0$. The corresponding optimal decoder $P_{Y|U_{\mathcal{K}}}$ for these encoders can be found as in equation 22 below. Different relevance-complexity pairs $(\Delta_s, R_s)$ on the boundary of $\mathcal{IR}^{\mathrm{sum}}_{\mathrm{DIB}}$, and the encoder and decoder mappings that achieve them, can be found by solving equation 18 for different values of $s \geq 0$ and then evaluating equation 16 and equation 17 for the obtained solution.
The optimization of equation 18 generally requires computing marginal distributions involving the descriptions $U_1,\ldots,U_K$, which can be computationally costly. To overcome this limitation, in the following we derive a tight variational bound on $\mathcal{L}_s(\mathbf{P})$, i.e., a lower bound on the DIB cost function in terms of some arbitrary distributions. Let us consider an arbitrary decoder $Q_{Y|U_1,\ldots,U_K}(y|u_1,\ldots,u_K)$ for $y \in \mathcal{Y}$, $u_1 \in \mathcal{U}_1,\ldots,u_K \in \mathcal{U}_K$, the $K$ decoders $Q_{Y|U_k}(y|u_k)$, $k \in \mathcal{K}$, for $y \in \mathcal{Y}$, $u_k \in \mathcal{U}_k$, and latent variable priors $Q_{U_k}(u_k)$, $k \in \mathcal{K}$, $u_k \in \mathcal{U}_k$. For short, we denote
Q := {QY |U1,...,UK , QY |U1 , . . . , QY |UK , QU1 , . . . , QUK}.
Let us define the variational DIB cost function $\mathcal{L}^{\mathrm{VB}}_s(\mathbf{P},\mathbf{Q})$ as
$$\mathcal{L}^{\mathrm{VB}}_s(\mathbf{P},\mathbf{Q}) := \underbrace{\mathbb{E}[\log Q_{Y|U_{\mathcal{K}}}(Y|U_{\mathcal{K}})]}_{\text{av. logarithmic-loss}} + s \underbrace{\sum_{k=1}^{K} \Big( \mathbb{E}[\log Q_{Y|U_k}(Y|U_k)] - D_{\mathrm{KL}}(P_{U_k|X_k} \| Q_{U_k}) \Big)}_{\text{regularizer}}. \quad (19)$$
The following lemma states that LVBs (P,Q) is a lower bound to Ls(P) for all distributions Q.
Lemma 1 For fixed pmfs $\mathbf{P}$, we have
$$\mathcal{L}_s(\mathbf{P}) \geq \mathcal{L}^{\mathrm{VB}}_s(\mathbf{P},\mathbf{Q}), \quad \text{for all pmfs } \mathbf{Q}. \quad (20)$$
In addition, there exists a unique $\mathbf{Q}$ that achieves the maximum $\max_{\mathbf{Q}} \mathcal{L}^{\mathrm{VB}}_s(\mathbf{P},\mathbf{Q}) = \mathcal{L}_s(\mathbf{P})$, given by
$$Q^*_{U_k} = P_{U_k}, \quad Q^*_{Y|U_k} = P_{Y|U_k}, \quad k = 1,\ldots,K, \quad (21)$$
$$Q^*_{Y|U_1,\ldots,U_K} = P_{Y|U_1,\ldots,U_K}, \quad (22)$$
where $P_{U_k}$, $P_{Y|U_k}$ and $P_{Y|U_1,\ldots,U_K}$ are computed from the pmfs $\mathbf{P}$.
Using the above, the optimization in equation 16 can be written in terms of the variational DIB cost function as
$$\max_{\mathbf{P}} \mathcal{L}_s(\mathbf{P}) = \max_{\mathbf{P}} \max_{\mathbf{Q}} \mathcal{L}^{\mathrm{VB}}_s(\mathbf{P},\mathbf{Q}). \quad (23)$$
We close this section by noting that the cost function equation 19 can be seen as a generalization of the evidence lower bound (ELBO) as given in Rezende et al. (2014); Kingma and Welling (2013) for the single-encoder learning to the distributed setting. Also, in the specific case in which Y = (X1, . . . , XK) the bound generalizes the ELBO used for VAEs to the case of an arbitrary number of encoders.
3.4 CASE OF UNKNOWN DISTRIBUTIONS: VARIATIONAL DISTRIBUTED IB ALGORITHM
In practice, only a set of training samples {(X1,i, . . . , XK,i, Yi)}ni=1 is available. In this section, we provide a method to optimize equation 23 in this case by parametrizing the encoding and decoding distributions to be optimized using a family of distributions whose parameters are determined by deep neural networks (DNNs). This allows us to formulate equation 23 in terms of the DNN parameters and optimize it using the reparametrization trick Kingma and Welling (2013), Monte Carlo sampling, and stochastic gradient descent (SGD) type algorithms.
Let $\mathcal{F}^e_{\mathrm{NN},k}$ denote the parametric family of encoding probability distributions $P_{U_k|X_k}$ over $\mathcal{U}_k$ for each element of $\mathcal{X}_k$. Each member of this collection, $P_{U_k|X_k;\gamma^e_k}$, is described by a parameter vector $\gamma^e_k \in \Gamma^e_k \subseteq \mathbb{R}^{l^e_k}$, where $\Gamma^e_k$ denotes the set of allowable parameter vectors. The parameter vector $\gamma^e_k$ is the output of a DNN $f_{\theta_k} : \mathcal{X}_k \to \Gamma^e_k$, with network parameters $\theta_k \in \Theta_k \subseteq \mathbb{R}^{d^e_k}$, e.g., the weights of the network at all layers. The DNN $f_{\theta_k}$ takes $X_k$ as input and outputs the parameter vector $\gamma^e_k$, determining one of the members $P_{U_k|X_k;\gamma^e_k}$. We have
$$\mathcal{F}^e_{\mathrm{NN},k} = \big\{ P_{U_k|X_k;\gamma^e_k}(u_k|x_k), \text{ for } u_k \in \mathcal{U}_k, x_k \in \mathcal{X}_k : \gamma^e_k = f_{\theta_k}(x_k), \theta_k \in \Theta_k \big\}. \quad (24)$$
For example, the family of multivariate Gaussian distributions is parametrized by the mean $\mu^{\theta}_k$ and covariance matrix $\Sigma^{\theta}_k$, i.e., $\gamma_k := (\mu^{\theta}_k, \Sigma^{\theta}_k)$. Therefore, given an observation $X_k$, $\gamma_k := (\mu^{\theta}_k, \Sigma^{\theta}_k)$ is determined by the output of the DNN $f_{\theta_k}$, and $\mathcal{F}^e_{\mathrm{NN},k}$ is given by $P_{U_k|X_k;\gamma_k}(u_k|x_k) = \mathcal{N}(u_k; \mu^{\theta}_k, \Sigma^{\theta}_k)$.
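As an illustration, a minimal PyTorch-style sketch of such a Gaussian encoder; the layer sizes are illustrative, and we assume a diagonal covariance parametrized through a log-variance head, a common simplification not mandated by the text:

```python
import torch.nn as nn

class GaussianEncoder(nn.Module):
    """f_theta_k: maps an observation x_k to the parameters of P(U_k | X_k)."""
    def __init__(self, x_dim, u_dim, hidden=512):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, u_dim)       # mean of the Gaussian
        self.logvar = nn.Linear(hidden, u_dim)   # diagonal covariance (log)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)
```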
Similarly, for the decoders $Q_{Y|U_k}$ over $\mathcal{Y}$, define the family of distributions parametrized by a vector in $\Gamma^d_k \subseteq \mathbb{R}^{l^d_k}$ determined by the output of a DNN $f_{\phi_k} : \mathcal{U}_k \to \Gamma^d_k$, with parameters $\phi_k \in \Phi_k \subseteq \mathbb{R}^{d^d_k}$, as
$$\mathcal{F}^d_{\mathrm{NN},k} = \big\{ Q_{Y|U_k;\gamma^d_k}(y|u_k), \text{ for } y \in \mathcal{Y}, u_k \in \mathcal{U}_k : \gamma^d_k = f_{\phi_k}(u_k), \phi_k \in \Phi_k \big\}, \quad (25)$$
and for the distribution $Q_{Y|U_{\mathcal{K}}}$ over $\mathcal{Y}$ for each element in $\mathcal{U}_1 \times \cdots \times \mathcal{U}_K$, define the family of distributions parametrized by the output of the DNN $f_{\phi_{\mathcal{K}}} : \mathcal{U}_1 \times \cdots \times \mathcal{U}_K \to \Gamma^d_{\mathcal{K}}$, with $\phi_{\mathcal{K}} \in \Phi_{\mathcal{K}} \subseteq \mathbb{R}^{d^d_{\mathcal{K}}}$ and $\Gamma^d_{\mathcal{K}} \subseteq \mathbb{R}^{l^d_{\mathcal{K}}}$,
$$\mathcal{F}^d_{\mathrm{NN},\mathcal{K}} = \big\{ Q_{Y|U_1,\ldots,U_K;\gamma^d_{\mathcal{K}}}(y|u_1,\ldots,u_K),\; y \in \mathcal{Y}, u_k \in \mathcal{U}_k : \gamma^d_{\mathcal{K}} = f_{\phi_{\mathcal{K}}}(u_1,\ldots,u_K), \phi_{\mathcal{K}} \in \Phi_{\mathcal{K}} \big\}. \quad (26)$$
Finally, for the priors $Q_{\varphi_k}(u_k)$ we define the family of distributions with parameter $\varphi_k \in \Psi_k \subseteq \mathbb{R}^{l^p_k}$:
$$\mathcal{F}^p_{\mathrm{NN},k} = \big\{ Q_{U_k;\varphi_k}(u_k), \text{ for } u_k \in \mathcal{U}_k : \varphi_k \in \Psi_k \big\}.$$
In the following, for brevity we use $P_{\theta_k}(u_k|x_k)$, $Q_{\phi_k}(y|u_k)$, $Q_{\phi_{\mathcal{K}}}(y|u_{\mathcal{K}})$ and $Q_{\varphi_k}(u_k)$ to denote the distributions parametrized by the DNNs $f_{\theta_k}$, $f_{\phi_k}$, $f_{\phi_{\mathcal{K}}}$ and the parameters $\varphi_k$, respectively.
By restricting the optimization of the variational DIB cost in equation 23 to encoders, decoders and priors within the families of distributions $\mathcal{F}^e_{\mathrm{NN},k}$, $\mathcal{F}^d_{\mathrm{NN},k}$, $\mathcal{F}^d_{\mathrm{NN},\mathcal{K}}$, $\mathcal{F}^p_{\mathrm{NN},k}$, we get
$$\max_{\mathbf{P}} \max_{\mathbf{Q}} \mathcal{L}^{\mathrm{VB}}_s(\mathbf{P},\mathbf{Q}) \geq \max_{\boldsymbol{\theta},\boldsymbol{\phi},\boldsymbol{\varphi}} \mathcal{L}^{\mathrm{NN}}_s(\boldsymbol{\theta},\boldsymbol{\phi},\boldsymbol{\varphi}), \quad (27)$$
where we use the notation $\boldsymbol{\theta} := [\theta_1,\ldots,\theta_K]$, $\boldsymbol{\phi} := [\phi_1,\ldots,\phi_K,\phi_{\mathcal{K}}]$ and $\boldsymbol{\varphi} := [\varphi_1,\ldots,\varphi_K]$ to denote the DNN and prior parameters, and the cost in equation 27 is given by
$$\mathcal{L}^{\mathrm{NN}}_s(\boldsymbol{\theta},\boldsymbol{\phi},\boldsymbol{\varphi}) := \mathbb{E}_{P_{Y,X}} \mathbb{E}_{\{P_{\theta_k}(U_k|X_k)\}} \Big[ \log Q_{\phi_{\mathcal{K}}}(Y|U_{\mathcal{K}}) + s \sum_{k=1}^{K} \big( \log Q_{\phi_k}(Y|U_k) - D_{\mathrm{KL}}(P_{\theta_k}(U_k|X_k) \| Q_{\varphi_k}(U_k)) \big) \Big]. \quad (28)$$
Next, we train the DNNs to maximize a Monte Carlo approximation of equation 27 over $\boldsymbol{\theta},\boldsymbol{\phi},\boldsymbol{\varphi}$ using SGD. We use the reparameterization trick Kingma and Welling (2013) to sample from $P_{\theta_k}(U_k|X_k)$. In particular, we consider $\mathcal{F}^e_{\mathrm{NN},k}$ to consist of a parametric family of distributions that can be sampled by first sampling a random variable $Z_k$ with distribution $P_{Z_k}(z_k)$, $z_k \in \mathcal{Z}_k$, and then transforming the samples using some function $g_{\theta_k} : \mathcal{X}_k \times \mathcal{Z}_k \to \mathcal{U}_k$ parameterized by $\theta_k$, such that $U_k = g_{\theta_k}(x_k, Z_k) \sim P_{\theta_k}(U_k|x_k)$. The reparametrization trick reduces the original optimization to estimating $\theta_k$ of the deterministic function $g_{\theta_k}$ and allows computing estimates of the gradient using backpropagation Kingma and Welling (2013). The variational DIB cost in equation 27 can be approximated by sampling $m$ independent samples $\{u_{k,i,j}\}_{j=1}^m \sim P_{\theta_k}(u_k|x_{k,i})$ for each training sample $(x_{1,i},\ldots,x_{K,i},y_i)$, $i = 1,\ldots,n$. Sampling is performed using $u_{k,i,j} = g_{\theta_k}(x_{k,i}, z_{k,j})$ with $\{z_{k,j}\}_{j=1}^m$ i.i.d. sampled from $P_{Z_k}$. We then have
$$\mathcal{L}^{\mathrm{emp}}_{s,i}(\boldsymbol{\theta},\boldsymbol{\phi},\boldsymbol{\varphi}) := \frac{1}{m}\sum_{j=1}^{m} \log Q_{\phi_{\mathcal{K}}}(y_i|u_{1,i,j},\ldots,u_{K,i,j}) + \frac{s}{m}\sum_{j=1}^{m}\sum_{k=1}^{K} \Big( \log Q_{\phi_k}(y_i|u_{k,i,j}) - D_{\mathrm{KL}}(P_{\theta_k}(U_{k,i}|x_{k,i}) \| Q_{\varphi_k}(U_{k,i})) \Big). \quad (29)$$
4 EXPERIMENTS: RESILIENCE TO NOISE, ROTATION AND OCCLUSION
In this experiment, we test the robustness of our method against noise, rotation and random occlusion on the MNIST dataset. Specifically, we combine two types of random corruption: the first encoder observes a digit from MNIST that is occluded by a square which is rotated randomly (rotation angle uniformly distributed over [−45°, 45°]), and the second encoder observes a noisy version of the same digit corrupted by additive noise (noise level uniform between 0 and 3). The noisy pixels are clipped between 0 and 1, and more than 60% of the pixels are occluded. These corruptions make the problem significantly more involved than standard MNIST (for which application of our algorithm leads to a relevance of about 99.9%).
We considered a deterministic CNN with dropout which achieves 99.8% accuracy on test data for the clean MNIST dataset. Then, we trained the same CNN architecture on each of the noisy inputs to the encoders, resulting in a relevance of 92.1% for the input to encoder 1 (randomly rotated occlusion) and 79.68% for the input to encoder 2 (noisy clipped image).
Figure 3: View 1: occluded. View 2: noisy. (Image omitted; panels show the original digit Y and the two corrupted encoder views.)
Table 1: CNN architecture.
Encoder k:      conv. ker. [5,5,32]-ReLU; maxpool [2,2,2]; conv. ker. [5,5,64]-ReLU; maxpool [2,2,2]; dense [1024]-ReLU; dropout 0.4; dense [256]-ReLU
Latent space k: dense [256]-ReLU
Decoder 12:     dense [256]-ReLU
Decoder k:      dense [256]-ReLU
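A hypothetical PyTorch rendering of the encoder column of Table 1, assuming 28x28 single-channel inputs and "same" padding (the table does not specify padding):

```python
import torch.nn as nn

encoder_cnn = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2, 2),
    nn.Flatten(),
    nn.Linear(64 * 7 * 7, 1024), nn.ReLU(), nn.Dropout(0.4),
    nn.Linear(1024, 256), nn.ReLU(),
)
```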
Figure 4: Relevance vs. sum-complexity for n = 50.000 and s ∈ [10⁻¹⁰, 1]. (Plot omitted; axes: sum-complexity R_sum vs. relevance ∆; curves: C-IB with R_sum → ∞, D-VIB train n = 50000, D-VIB test n = 50000.)
We applied our D-VIB algorithm of Section 3.4 to this model with the CNN architecture of Table 1, in which Encoder k = 1, 2 is parametrized by an $n_{u_k} = 256$-dimensional multivariate Gaussian distribution $\mathcal{N}(\mu^e_k, \Sigma^e_k)$ determined by the output of a DNN $f_{\theta_k}$ consisting of the concatenation of convolution, dense and maxpool layers with ReLU activations and dropout. The output of the last layer is followed by a dense layer without activation that generates $\mu^e_k$ and $\Sigma^e_k$. The prior is chosen as $Q_{\varphi_k}(u) = \mathcal{N}(0, \mathbf{I})$. Each decoder takes the samples from $P_{\theta_k}(U_k|X_k)$ and processes its inputs with a dense-layer DNN ($f_{\phi_{\mathcal{K}}}$ and $f_{\phi_k}$), each with 256 neurons and ReLU activation, which outputs a vector $\hat{y}_i$ of size $|\mathcal{Y}| = 10$ normalized with a softmax, corresponding to a distribution over the one-hot encoding of the digit labels {0, . . . , 9} from the K observations,
$$Q_{\phi_k}(\hat{y}_k|u_k) = \mathrm{Softmax}(f_{\phi_k}(U_k)), \quad k = 1, 2, \quad \text{and} \quad (30)$$
$$Q_{\phi_{\mathcal{K}}}(\hat{y}|u_{\mathcal{K}}) = \mathrm{Softmax}(f_{\phi_{\mathcal{K}}}(U_1, U_2)), \quad (31)$$
where $\mathrm{Softmax}(p)$ for $p \in \mathbb{R}^d$ is a vector with $i$-th entry $[\mathrm{Softmax}(p)]_i = \exp(p_i)/\sum_{j=1}^d \exp(p_j)$.
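A minimal PyTorch sketch of these decoder heads; sizes follow Table 1, and the joint head's 512-dimensional input comes from concatenating U_1 and U_2, an assumption consistent with equation 31:

```python
import torch
import torch.nn as nn

dec_k = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 10))
dec_12 = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Softmax over the logits yields Q(y | u_k) and Q(y | u_1, u_2):
u1, u2 = torch.randn(8, 256), torch.randn(8, 256)
q_y_given_u1 = dec_k(u1).softmax(dim=-1)
q_y_given_u12 = dec_12(torch.cat([u1, u2], dim=-1)).softmax(dim=-1)
```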
Figure 4 shows the relevance-complexity tradeoffs obtained using our D-VIB algorithm of Section 3.4, with n = 50.000 and 15 distinct s-values randomly chosen in the range [10⁻¹⁰, 1]. For comparison, we also present the performance obtained using three methods among state-of-the-art multi-view learning approaches: (i) applying a deterministic CNN on the two views concatenated (deterministic CNN), (ii) applying the single-encoder variational IB method of Alemi et al. (2017) on the two views concatenated (C-VIB), and (iii) learning one function for each view via distinct CNNs and optimizing all CNNs independently (independent CNNs). The achieved relevance is reported in Table 2. For other experimental results, see the appendices section.
We also mention that at a high level our algorithm D-VIB can be considered as performing some form of co-regularization (for instance, its Gaussian version is similar to the CCA of Hardoon et al. (2004)). Comparatively, the single-view algorithm C-VIB can be viewed as belonging to the family of co-training style algorithms (such as the co-EM of Nigam and Ghani (2000)) which, as mentioned in the recent survey Zhao et al. (2017), improve over single-view algorithms. The performance of D-VIB dominates that of C-VIB, which itself dominates co-EM.
5 PROOFS OF MAIN THEOREMS, PROPOSITIONS AND LEMMAS
5.1 AUXILIARY LEMMAS
Lemma 2 (Dembo et al. (1991); Ekrem and Ulukus (2014)) Let (X, Y) be a pair of random vectors with pmf p(x, y). We have
$$\log |(\pi e) \mathbf{J}^{-1}(\mathbf{X}|\mathbf{Y})| \leq h(\mathbf{X}|\mathbf{Y}) \leq \log |(\pi e)\, \mathrm{mmse}(\mathbf{X}|\mathbf{Y})|,$$
where the conditional Fisher information matrix is defined as
$$\mathbf{J}(\mathbf{X}|\mathbf{Y}) := \mathbb{E}[\nabla \log p(\mathbf{X}|\mathbf{Y})\, \nabla \log p(\mathbf{X}|\mathbf{Y})^{\dagger}],$$
and the minimum mean squared error (MMSE) matrix is
$$\mathrm{mmse}(\mathbf{X}|\mathbf{Y}) := \mathbb{E}[(\mathbf{X} - \mathbb{E}[\mathbf{X}|\mathbf{Y}])(\mathbf{X} - \mathbb{E}[\mathbf{X}|\mathbf{Y}])^{\dagger}].$$
Lemma 3 (Ekrem and Ulukus (2014)) Let (V1, V2) be a random vector with finite second moments and $\mathbf{N} \sim \mathcal{CN}(0, \Sigma_N)$ independent of (V1, V2). Then
$$\mathrm{mmse}(\mathbf{V}_2|\mathbf{V}_1, \mathbf{V}_2 + \mathbf{N}) = \Sigma_N - \Sigma_N \mathbf{J}(\mathbf{V}_2 + \mathbf{N}|\mathbf{V}_1) \Sigma_N.$$
5.2 PROOF OF THEOREM 1
If K = 1, the distributed learning problem that we study boils down to the well-known Information Bottleneck (IB) problem of Tishby et al. (1999). The single-encoder IB problem is essentially a remote point-to-point source coding problem Dobrushin and Tsybakov (1962) in which distortion is measured under the logarithmic loss fidelity criterion Harremoes and Tishby (2007). In accordance with this analogy, for K ≥ 2 consider the multiterminal source coding problem under logarithmic loss in which the sequence $Y^n$ models a remote source that is observed by K spatially distributed agents; the agents observe noisy versions of the remote source and communicate independently with a decoder or Chief Executive Officer (CEO) over rate-constrained noise-free links. For instance, agent k, k ∈ K, observes $X_k^n$ and uses $R_k$ bits per sample to describe it to the decoder. The decoder wants to reconstruct the remote source $Y^n$ to within a prescribed fidelity level, where the incurred distortion is measured using the logarithmic loss criterion, i.e.,
$$\ell_{\log}(y^n, \hat{y}^n) = \frac{1}{n} \log \frac{1}{\hat{P}_{Y^n|J}(y^n|\phi_1(x_1^n),\ldots,\phi_K(x_K^n))}, \quad (32)$$
where J = (φ1(Xn1 ), . . . , φK(XnK)).
Here, (Xn1 , . . . , XnK , Y n) is assumed to be distributed i.i.d. according to the n-product of the pmf PX1,...,XK ,Y , i.e., the Markov chain equation 3 holds.
Definition 2 A rate-distortion code (of blocklength n) for the CEO problem consists of K encoding functions
$$\tilde{\phi}_k : \mathcal{X}_k^n \to \{1,\ldots,M_k^{(n)}\}, \quad \text{for } k = 1,\ldots,K, \quad (33)$$
and a decoding function
$$\tilde{\psi} : \{1,\ldots,M_1^{(n)}\} \times \ldots \times \{1,\ldots,M_K^{(n)}\} \to \hat{\mathcal{Y}}^n. \quad (34)$$
A distortion-rate tuple (D, R1, . . . , RK) is achievable for the DM CEO source coding problem with side information if there exist a blocklength n, encoding functions $\{\tilde{\phi}_k\}_{k=1}^K$ and a decoding function $\tilde{\psi}$ such that
$$R_k \geq \frac{1}{n} \log M_k^{(n)}, \quad \text{for } k = 1,\ldots,K,$$
$$D \geq \mathbb{E}\big[\ell_{\log}\big(Y^n, \tilde{\psi}(\tilde{\phi}_1(X_1^n),\ldots,\tilde{\phi}_K(X_K^n))\big)\big].$$
The distortion-rate region DRCEO of the CEO model is defined as the closure of all non-negative tuples (D,R1, . . . , RK) that are achievable.
Key to the proof of Theorem 1 is the following proposition, which states that $\mathcal{IR}_{\mathrm{DIB}}$ and $\mathcal{DR}_{\mathrm{CEO}}$ can be inferred from each other.
Proposition 3 $(\Delta, R_1,\ldots,R_K) \in \mathcal{IR}_{\mathrm{DIB}}$ if and only if $(H(Y)-\Delta, R_1,\ldots,R_K) \in \mathcal{DR}_{\mathrm{CEO}}$.
Proof: Let, for k = 1, . . . , K, $J_k = \phi_k(X_k^n)$ and $J = (J_1,\ldots,J_K)$. Then,
$$\mathbb{E}[\ell_{\log}(Y^n, \hat{Y}^n)|J = j] = \sum_{y^n \in \mathcal{Y}^n} P(y^n|j) \log\Big(\frac{1}{\hat{P}(y^n|j)}\Big) \quad (35)$$
$$= \sum_{y^n \in \mathcal{Y}^n} P(y^n|j) \log\Big(\frac{P(y^n|j)}{\hat{P}(y^n|j)}\Big) + H(Y^n|J = j) \quad (36)$$
$$= D_{\mathrm{KL}}(P(y^n|j)\|\hat{P}(y^n|j)) + H(Y^n|J = j) \quad (37)$$
$$\geq H(Y^n|J = j), \quad (38)$$
where equation 38 is due to the non-negativity of the Kullback-Leibler divergence, and equality holds if and only if $\hat{P}(y^n|j) = P(y^n|j)$, where $P(y^n|j) = \Pr\{Y^n = y^n|J = j\}$, for all $j$ and $y^n \in \mathcal{Y}^n$.
Let an achievable tuple $(\Delta, R_1,\ldots,R_K) \in \mathcal{IR}_{\mathrm{DIB}}$ be given. Then, there must exist functions $\{\phi_k\}_{k=1}^K$ such that equation 9 and equation 10 hold. Using equation 38, by letting the decoding function be $\tilde{\psi}(J_{\mathcal{K}}) = \{P_{Y^n|J_{\mathcal{K}}}(y^n|J_{\mathcal{K}})\}$, we have $\mathbb{E}[\ell_{\log}(Y^n, \hat{Y}^n)|J_{\mathcal{K}}] = H(Y^n|J_{\mathcal{K}})$, which implies $(H(Y)-\Delta, R_1,\ldots,R_K) \in \mathcal{DR}_{\mathrm{CEO}}$.
The result of Theorem 1 follows easily by combining (Courtade and Weissman, 2014, Theorem 10), which provides a single-letter characterization of the rate-distortion region $\mathcal{DR}^{\star}_{\mathrm{CEO}}$ of the CEO problem, with Proposition 3.
5.3 PROOF OF THEOREM 2
The proof of the direct part of Theorem 2 follows by evaluating the region of Theorem 1 with the choices $T = \emptyset$ and $p(\mathbf{u}_k|\mathbf{x}_k,t) = \mathcal{CN}(\mathbf{x}_k, \Sigma_k^{1/2}(\Omega_k^{-1} - \mathbf{I})\Sigma_k^{1/2})$.
The proof of the converse part is as follows. Fix $t \in \mathcal{T}$, $\mathcal{S} \subseteq \mathcal{K}$ and a family of distributions $\{p(\mathbf{u}_k|\mathbf{x}_k,t)\}_{k=1}^K$ such that the joint distribution factorizes as in equation 13. Also, let $0 \preceq \Omega_{k,t} \preceq \Sigma_k^{-1}$ be such that
$$\mathrm{mmse}(\mathbf{X}_k|\mathbf{Y}, \mathbf{U}_{k,t}, t) = \Sigma_k - \Sigma_k \Omega_{k,t} \Sigma_k. \quad (39)$$
Such $\Omega_{k,t}$ always exists since
$$0 \preceq \mathrm{mmse}(\mathbf{X}_k|\mathbf{Y}, \mathbf{U}_{k,t}, t) \preceq \Sigma_k. \quad (40)$$
Then, we have
$$I(\mathbf{X}_k; \mathbf{U}_k|\mathbf{Y}, t) \geq \log |\Sigma_k| - \log |\mathrm{mmse}(\mathbf{X}_k|\mathbf{Y}, \mathbf{U}_{k,t}, t)| = -\log |\mathbf{I} - \Sigma_k^{1/2} \Omega_{k,t} \Sigma_k^{1/2}|, \quad (41)$$
where the inequality is due to Lemma 2; and equation 41 is due to equation 39.
Also, we have
$$I(\mathbf{Y}; \mathbf{U}_{\mathcal{S}^c,t}|t) \leq \log |\Sigma_y| - \log |\mathbf{J}^{-1}(\mathbf{Y}|\mathbf{U}_{\mathcal{S}^c,t}, t)| \quad (42)$$
$$= \log \Big| \sum_{k \in \mathcal{S}^c} \Sigma_y^{1/2} \mathbf{H}_k^{\dagger} \Omega_{k,t} \mathbf{H}_k \Sigma_y^{1/2} + \mathbf{I} \Big|, \quad (43)$$
where equation 42 follows by using Lemma 2, and equation 43 holds by using the following equality,
$$\mathbf{J}(\mathbf{Y}|\mathbf{U}_{\mathcal{S}^c,t}, t) = \sum_{k \in \mathcal{S}^c} \mathbf{H}_k^{\dagger} \Omega_{k,t} \mathbf{H}_k + \Sigma_y^{-1}, \quad (44)$$
the proof of which uses a connection between MMSE and Fisher information as shown next.
For the proof of equation 44, first note that from the MMSE estimation of Gaussian random vectors El Gamal and Kim (2011), we have
$$\mathbf{Y} = \mathbb{E}[\mathbf{Y}|\mathbf{X}_{\mathcal{S}^c}] + \mathbf{Z}_{\mathcal{S}^c} = \sum_{k \in \mathcal{S}^c} \mathbf{G}_k \mathbf{X}_k + \mathbf{Z}_{\mathcal{S}^c}, \quad (45)$$
where $\mathbf{G}_k = \Sigma_{y|x_{\mathcal{S}^c}} \mathbf{H}_k^{\dagger} \Sigma_k^{-1}$ and $\mathbf{Z}_{\mathcal{S}^c} \sim \mathcal{CN}(0, \Sigma_{y|x_{\mathcal{S}^c}})$, with
$$\Sigma_{y|x_{\mathcal{S}^c}}^{-1} = \Sigma_y^{-1} + \sum_{k \in \mathcal{S}^c} \mathbf{H}_k^{\dagger} \Sigma_k^{-1} \mathbf{H}_k. \quad (46)$$
Note that $\mathbf{Z}_{\mathcal{S}^c}$ is independent of $\mathbf{X}_{\mathcal{S}^c}$ due to the orthogonality principle of the MMSE and its Gaussian distribution. Hence, it is also independent of $\mathbf{U}_{\mathcal{S}^c,t}$. We have
$$\mathrm{mmse}\Big(\sum_{k \in \mathcal{S}^c} \mathbf{G}_k \mathbf{X}_k \,\Big|\, \mathbf{Y}, \mathbf{U}_{\mathcal{S}^c,t}, t\Big) = \sum_{k \in \mathcal{S}^c} \mathbf{G}_k\, \mathrm{mmse}(\mathbf{X}_k|\mathbf{Y}, \mathbf{U}_{\mathcal{S}^c,t}, t)\, \mathbf{G}_k^{\dagger} \quad (47)$$
$$= \Sigma_{y|x_{\mathcal{S}^c}} \sum_{k \in \mathcal{S}^c} \mathbf{H}_k^{\dagger} \big(\Sigma_k^{-1} - \Omega_{k,t}\big) \mathbf{H}_k \Sigma_{y|x_{\mathcal{S}^c}}, \quad (48)$$
where equation 47 follows since the cross terms are zero due to the Markov chain $(\mathbf{U}_{k,t}, \mathbf{X}_k) - \mathbf{Y} - (\mathbf{U}_{\mathcal{K}/k,t}, \mathbf{X}_{\mathcal{K}/k})$, and equation 48 follows due to equation 39 and the definition of $\mathbf{G}_k$. Finally,
$$\mathbf{J}(\mathbf{Y}|\mathbf{U}_{\mathcal{S}^c,t}, t) = \Sigma_{y|x_{\mathcal{S}^c}}^{-1} - \Sigma_{y|x_{\mathcal{S}^c}}^{-1}\, \mathrm{mmse}\Big(\sum_{k \in \mathcal{S}^c} \mathbf{G}_k \mathbf{X}_k \,\Big|\, \mathbf{Y}, \mathbf{U}_{\mathcal{S}^c,t}, t\Big)\, \Sigma_{y|x_{\mathcal{S}^c}}^{-1} \quad (49)$$
$$= \Sigma_{y|x_{\mathcal{S}^c}}^{-1} - \sum_{k \in \mathcal{S}^c} \mathbf{H}_k^{\dagger} \big(\Sigma_k^{-1} - \Omega_{k,t}\big) \mathbf{H}_k \quad (50)$$
$$= \Sigma_y^{-1} + \sum_{k \in \mathcal{S}^c} \mathbf{H}_k^{\dagger} \Omega_{k,t} \mathbf{H}_k, \quad (51)$$
where equation 49 is due to Lemma 3; equation 50 is due to equation 48; and equation 51 follows due to equation 46.
Now, let $\bar{\Omega}_k := \sum_{t \in \mathcal{T}} p(t)\, \Omega_{k,t}$. The rest of the converse proof follows by averaging over the time-sharing random variable to get
$$I(\mathbf{X}_k; \mathbf{U}_k|\mathbf{Y}, T) \geq -\sum_{t \in \mathcal{T}} p(t) \log |\mathbf{I} - \Sigma_k^{1/2} \Omega_{k,t} \Sigma_k^{1/2}| \geq -\log |\mathbf{I} - \Sigma_k^{1/2} \bar{\Omega}_k \Sigma_k^{1/2}|, \quad (52)$$
where equation 52 follows from the concavity of the log-det function and Jensen's inequality. Similarly to equation 52, from equation 43 and Jensen's inequality we have
$$I(\mathbf{Y}; \mathbf{U}_{\mathcal{S}^c}|T) \leq \log \Big| \sum_{k \in \mathcal{S}^c} \Sigma_y^{1/2} \mathbf{H}_k^{\dagger} \bar{\Omega}_k \mathbf{H}_k \Sigma_y^{1/2} + \mathbf{I} \Big|. \quad (53)$$
Finally, using equation 52 and equation 53 in equation 12, noting that $\bar{\Omega}_k = \sum_{t \in \mathcal{T}} p(t)\, \Omega_{k,t} \preceq \Sigma_k^{-1}$ since $0 \preceq \Omega_{k,t} \preceq \Sigma_k^{-1}$, and taking the union over $\bar{\Omega}_k$ satisfying $0 \preceq \bar{\Omega}_k \preceq \Sigma_k^{-1}$, completes the proof of the converse part and, hence, that of Theorem 2.
5.4 PROOF OF PROPOSITION 1
For simplicity of exposition, the proof is given for the case of K = 2 encoders; the proof for K > 2 follows similarly. By the definition of $\mathcal{IR}^{\mathrm{sum}}_{\mathrm{DIB}}$, the relevance-complexity tuple $(\Delta, R_{\mathrm{sum}}) \in \mathbb{R}_+^2$ is achievable for some random variables $Y, X_1, X_2, U_1, U_2$ with joint pmf satisfying equation 13 if it holds that
$$\Delta \leq I(Y; U_1, U_2) \quad (54)$$
$$\Delta \leq R_1 - I(X_1; U_1|Y) + I(Y; U_2) \quad (55)$$
$$\Delta \leq R_2 - I(X_2; U_2|Y) + I(Y; U_1) \quad (56)$$
$$\Delta \leq R_1 + R_2 - I(X_1; U_1|Y) - I(X_2; U_2|Y) \quad (57)$$
$$R_1 + R_2 \leq R_{\mathrm{sum}}. \quad (58)$$
Applying Fourier-Motzkin elimination to project out $R_1$ and $R_2$ reduces the system of inequalities equation 54-equation 58 to the following system:
$$\Delta \leq I(Y; U_1, U_2) \quad (59)$$
$$\Delta \leq R_{\mathrm{sum}} - I(X_1; U_1|Y) - I(X_2; U_2|Y) \quad (60)$$
$$2\Delta \leq R_{\mathrm{sum}} - I(X_1; U_1|Y) - I(X_2; U_2|Y) + I(Y; U_1) + I(Y; U_2). \quad (61)$$
It follows, due to the Markov chain $U_1 - X_1 - Y - X_2 - U_2$, that $I(Y; U_1, U_2) \leq I(Y; U_1) + I(Y; U_2)$. Therefore, inequality equation 61 is redundant, as it is implied by equation 59 and equation 60. This completes the proof of Proposition 1.
5.5 PROOF OF PROPOSITION 2
Suppose that $\mathbf{P}^*$ yields the maximum in equation 16. Then,
$$(1+s)\Delta_s = (1+sK)H(Y) + sR_s + \mathcal{L}_s(\mathbf{P}^*) \quad (62)$$
$$= (1+sK)H(Y) + sR_s + \Big(-H(Y|U^*_{\mathcal{K}}) - s\sum_{k=1}^{K}[H(Y|U^*_k) + I(X_k; U^*_k)]\Big) \quad (63)$$
$$= (1+sK)H(Y) + sR_s + \big(-H(Y|U^*_{\mathcal{K}}) - s(R_s - I(Y; U^*_{\mathcal{K}}) + KH(Y))\big) \quad (64)$$
$$= (1+s)\, I(Y; U^*_{\mathcal{K}}) \quad (65)$$
$$\leq (1+s)\, \Delta(R_s, P_{X_{\mathcal{K}},Y}), \quad (66)$$
where equation 63 is due to the definition of $\mathcal{L}_s(\mathbf{P})$ in equation 18; equation 64 follows since $\sum_{k=1}^{K}[I(X_k; U^*_k) + H(Y|U^*_k)] = R_s - I(Y; U^*_{\mathcal{K}}) + KH(Y)$ from the definition of $R_s$ in equation 17; and equation 66 follows from the definition in equation 11.
Conversely, if $\mathbf{P}^*$ is the solution to the maximization in the function $\Delta(R_{\mathrm{sum}}, P_{X_{\mathcal{K}},Y})$ in equation 11 such that $\Delta(R_{\mathrm{sum}}, P_{X_{\mathcal{K}},Y}) = \Delta_s$, then $\Delta_s \leq I(Y; U^*_{\mathcal{K}})$ and $\Delta_s \leq R_{\mathrm{sum}} - \sum_{k=1}^K I(X_k; U^*_k|Y)$, and we have, for any $s \geq 0$, that
$$\Delta(R_{\mathrm{sum}}, P_{X_{\mathcal{K}},Y}) = \Delta_s$$
$$\leq \Delta_s - (\Delta_s - I(Y; U^*_{\mathcal{K}})) - s\Big(\Delta_s - R_{\mathrm{sum}} + \sum_{k=1}^{K} I(X_k; U^*_k|Y)\Big)$$
$$= I(Y; U^*_{\mathcal{K}}) - s\Delta_s + sR_{\mathrm{sum}} - s\sum_{k=1}^{K} I(X_k; U^*_k|Y)$$
$$= H(Y) - s\Delta_s + sR_{\mathrm{sum}} - H(Y|U^*_{\mathcal{K}}) - s\sum_{k=1}^{K}[I(X_k; U^*_k) + H(Y|U^*_k)] + sKH(Y) \quad (67)$$
$$\leq H(Y) - s\Delta_s + sR_{\mathrm{sum}} + \mathcal{L}^*_s + sKH(Y) \quad (68)$$
$$= H(Y) - s\Delta_s + sR_{\mathrm{sum}} + sKH(Y) - \big((1+sK)H(Y) + sR_s - (1+s)\Delta_s\big) \quad (69)$$
$$= \Delta_s + s(R_{\mathrm{sum}} - R_s), \quad (70)$$
where in equation 67 we use $\sum_{k=1}^{K} I(X_k; U_k|Y) = -KH(Y) + \sum_{k=1}^{K}[I(X_k; U_k) + H(Y|U_k)]$, which holds due to the Markov chain $U_k - X_k - Y - (X_{\mathcal{K}\setminus k}, U_{\mathcal{K}\setminus k})$; equation 68 follows since $\mathcal{L}^*_s$ is the maximum over all possible distributions $\mathbf{P}$ (not necessarily the $\mathbf{P}^*$ maximizing $\Delta(R_{\mathrm{sum}}, P_{X_{\mathcal{K}},Y})$); and equation 69 is due to equation 16.
Finally, equation 70 is valid for any $R_{\mathrm{sum}} \geq 0$ and $s \geq 0$. Given $s$, and hence $(\Delta_s, R_s)$, choosing $R_{\mathrm{sum}} = R_s$ yields $\Delta(R_s, P_{X_{\mathcal{K}},Y}) \leq \Delta_s$. Together with equation 66, this completes the proof of Proposition 2.
5.6 PROOF OF LEMMA 1
The proof follows by deriving the following bounds. For any conditional pmf $Q_{Y|Z}(y|z)$, $y \in \mathcal{Y}$, $z \in \mathcal{Z}$, e.g., $Z = U_{\mathcal{K}}$ or $Z = U_k$, proceeding similarly to equation 38 and averaging over $Z$, we have
$$H(Y|Z) = \mathbb{E}[-\log Q_{Y|Z}(Y|Z)] - D_{\mathrm{KL}}(P_{Y|Z}\|Q_{Y|Z}). \quad (71)$$
Similarly, we have
$$I(X_k; U_k) = H(U_k) - H(U_k|X_k) \quad (72)$$
$$= \mathbb{E}[-\log Q_{U_k}(U_k)] - D_{\mathrm{KL}}(P_{U_k}\|Q_{U_k}) - H(U_k|X_k) \quad (73)$$
$$= \mathbb{E}_{X_k}[D_{\mathrm{KL}}(P_{U_k|X_k}\|Q_{U_k})] - D_{\mathrm{KL}}(P_{U_k}\|Q_{U_k}). \quad (74)$$
Thus, we get
$$\mathcal{L}_s(\mathbf{P}) = \mathcal{L}^{\mathrm{VB}}_s(\mathbf{P},\mathbf{Q}) + D_{\mathrm{KL}}(P_{Y|U_{\mathcal{K}}}\|Q_{Y|U_{\mathcal{K}}}) + s\sum_{k=1}^{K}\big(D_{\mathrm{KL}}(P_{Y|U_k}\|Q_{Y|U_k}) + D_{\mathrm{KL}}(P_{U_k}\|Q_{U_k})\big) \geq \mathcal{L}^{\mathrm{VB}}_s(\mathbf{P},\mathbf{Q}), \quad (75)$$
where equation 75 holds by the non-negativity of relative entropy, and the equality is met if and only if $\mathbf{Q}^*$ is as given by equation 21 and equation 22.
6 OTHER EXPERIMENTAL RESULTS (REGRESSION FOR UNKNOWN GAUSSIAN MODEL)
6.1 D-VIB ALGORITHM FOR VECTOR GAUSSIAN MODEL
For the vector Gaussian data model equation 14, the optimal distributions $\mathbf{P}$ and $\mathbf{Q}$ in equation 23 lie within the family of multivariate Gaussian distributions. Motivated by this observation, we consider the following parameterization for $k \in \mathcal{K}$:
$$P_{\theta_k}(\mathbf{u}_k|\mathbf{x}_k) = \mathcal{N}(\mathbf{u}_k; \mu^e_k, \Sigma^e_k) \quad (76)$$
$$Q_{\phi_{\mathcal{K}}}(\hat{\mathbf{y}}|\mathbf{u}_{\mathcal{K}}) = \mathcal{N}(\hat{\mathbf{y}}; \mu^d_{\mathcal{K}}, \Sigma^d_{\mathcal{K}}) \quad (77)$$
$$Q_{\phi_k}(\hat{\mathbf{y}}|\mathbf{u}_k) = \mathcal{N}(\hat{\mathbf{y}}; \mu^d_k, \Sigma^d_k) \quad (78)$$
$$Q_{\varphi_k}(\mathbf{u}_k) = \mathcal{N}(0, \mathbf{I}), \quad (79)$$
where $\mu^e_k, \Sigma^e_k$ are the output of a DNN $f_{\theta_k}$ with input $\mathbf{X}_k$ that encodes the observations into an $n_{u_k}$-dimensional Gaussian distribution, $\mu^d_{\mathcal{K}}, \Sigma^d_{\mathcal{K}}$ are the outputs of a DNN $f_{\phi_{\mathcal{K}}}$ with inputs $\mathbf{U}_1,\ldots,\mathbf{U}_K$ sampled from $P_{\theta_k}(\mathbf{u}_k|\mathbf{x}_k)$, and $\mu^d_k, \Sigma^d_k$ are the output of a DNN $f_{\phi_k}$ with input $\mathbf{U}_k$, $k = 1,\ldots,K$.
With the above choice of parametric encoders and decoders, and using a single sample ($m = 1$), the empirical DIB cost in equation 29 is given for the sample $(\mathbf{x}_{1,i},\ldots,\mathbf{x}_{K,i},\mathbf{y}_i)$ by
$$\mathcal{L}^{\mathrm{emp}}_{s,i}(\boldsymbol{\theta},\boldsymbol{\phi},\boldsymbol{\varphi}) := -\frac{1}{2}\Big((\mathbf{y}_i - \mu^d_{12,i})^T \Sigma^{d,-1}_{12,i}(\mathbf{y}_i - \mu^d_{12,i}) + \log\det(\Sigma^d_{12,i})\Big)$$
$$- s\sum_{k=1}^{K}\frac{1}{2}\Big((\mathbf{y}_i - \mu^d_{k,i})^T \Sigma^{d,-1}_{k,i}(\mathbf{y}_i - \mu^d_{k,i}) + \log\det(\Sigma^d_{k,i})\Big)$$
$$- s\sum_{k=1}^{K}\frac{1}{2}\Big((\mu^e_{k,i})^T \mu^e_{k,i} + \log|\Sigma^{e,-1}_{k,i}| - n_{u_k} + \mathrm{tr}\{\Sigma^e_{k,i}\}\Big) - \frac{n_y}{2}(1 + sK)\log(2\pi),$$
where $(\mu^d_{12,i}, \Sigma^d_{12,i})$ denote the output of the DNN $f_{\phi_{\mathcal{K}}}$ for the $i$-th sample $(\mathbf{x}_{1,i},\ldots,\mathbf{x}_{K,i},\mathbf{y}_i)$, and similarly for the other mean and covariance terms; and where we have used that each term in the empirical DIB cost equation 29 can be computed by noting that for $d$-dimensional Gaussian pdfs $\mathcal{N}(\mathbf{y}; \mu, \Sigma)$ we have
$$\log \mathcal{N}(\mathbf{y}; \mu, \Sigma) = -\frac{1}{2}\Big((\mathbf{y} - \mu)^T \Sigma^{-1}(\mathbf{y} - \mu) + d\log(2\pi) + \log\det(\Sigma)\Big),$$
and the KL divergence between two multivariate Gaussian pdfs $P_1 \sim \mathcal{N}(\mu_1, \Sigma_1)$ and $P_2 \sim \mathcal{N}(\mu_2, \Sigma_2)$ in $\mathbb{R}^d$ is
$$D_{\mathrm{KL}}(P_1\|P_2) = \frac{1}{2}\Big((\mu_1 - \mu_2)^T \Sigma_2^{-1}(\mu_1 - \mu_2) + \log|\Sigma_2 \Sigma_1^{-1}| - d + \mathrm{tr}\{\Sigma_2^{-1}\Sigma_1\}\Big). \quad (80)$$
The multivariate Gaussian parametrization of the encoders, decoders and prior distribution as given by equation 76-equation 79 can be used for other data models that are not necessarily Gaussian. For example, it is particularly suitable for regression problems in which Y lies in a continuous space. It is also very often used in conjunction with VAE generative problems Rezende et al. (2014); Kingma and Welling (2013).
6.2 REGRESSION FOR VECTOR GAUSSIAN DATA MODEL
Consider a distributed learning model with K = 2 encoders, each observing a noisy version of an $n_y$-dimensional Gaussian vector $\mathbf{Y} \sim \mathcal{N}(\mathbf{y}; 0, \mathbf{I})$, as $\mathbf{X}_k = \mathbf{H}_k \mathbf{Y} + \mathbf{N}_k$, where $\mathbf{H}_k \in \mathbb{R}^{n_k \times n_y}$ and the noises are distributed as $\mathbf{N}_k \sim \mathcal{N}(0, \mathbf{I})$ for $k = 1, 2$.
For this model, the optimal relevance-complexity region can be computed using Theorem 2. In what follows, we evaluate the performance of our D-VIB algorithm of the previous section for regression. The algorithm is trained using a dataset of n i.i.d. samples {(X1,i,X2,i,Yi)}ni=1 from the described vector Gaussian data model. We train the DNNs for various values of the parameter s. We use the multivariate Gaussian parameterization in equation 76-equation 79 with the DNN architecture shown in Table 3. Specifically, Encoder k, k = 1, 2, consists of three dense layers of 512 neurons each, followed by rectified linear unit (ReLU) activations. The output of encoder k is processed by a dense layer without nonlinear activation to generate $\mu^e_k$ and $\Sigma^e_k$ of size 512 and 512 × 512, respectively. Each decoder consists of two dense layers of 512 neurons with ReLU activations. The outputs of decoders 1, 2 and 12 are each processed by a fully connected layer without activation to generate $\mu^d_k$ and $\Sigma^d_k$, and $\mu^d_{12}$ and $\Sigma^d_{12}$, of size 2 and 2 × 2.
Figure 5 shows the optimal relevance-complexity region of tuples $(\Delta, R_{\mathrm{sum}})$ obtained from Theorem 2 for a vector Gaussian model with K = 2 encoders, target variable dimension $n_y = 1$, and observation dimensions $n_1 = n_2 = 3$. A set of 40.000 samples was split between training (30.000 samples) and test (10.000 samples). The figure depicts all relevance-complexity pairs obtained by application of our D-VIB algorithm to this setting. The results are compared to the case of inference with known joint distribution (referred to as D-IB, see the next section) as well as the case of centralized inference (C-IB). For the D-VIB algorithm, the DNN architecture for the coders is shown in Table 3. Figure 6 shows the evolution of the associated mean squared error (MSE) in the estimation of the label Y using our D-VIB algorithm. As can be seen from both figures, the performance of our D-VIB algorithm (which does not require knowledge of the joint label-feature distribution) is very close to that predicted by the theory, i.e., by our Theorem 2.
Figure 7 shows similar curves for $n_y = 2$, $n_1 = n_2 = 3$ dimensions, for various sizes of the training dataset. As expected, large training sets allow a more accurate prediction. Notably, the fact that the performance during the training phase can be better than that of the centralized learning scenario is an indicator of overfitting. Related to this aspect, recall that although the D-VIB algorithm does not estimate the underlying distribution explicitly, intuitively it does so implicitly for the computation of the cost function. This is related to the fact that universal compressors also learn the actual distribution of the data being compressed. Since the plug-in estimator of entropy is biased downward, estimates of the mutual information terms involved in the cost function are biased upward, which is an alternate explanation of the observed overfitting during the training phase.
Table 3: Used DNN architecture.
Encoder k:      dense [512]-ReLU; dense [512]-ReLU; dense [512]-ReLU
Latent space k: dense [256]-ReLU
Decoder 12:     dense [256]-ReLU
Decoder k:      dense [256]-ReLU
7 DISTRIBUTED BLAHUT-ARIMOTO TYPE ALGORITHMS
7.1 DISCRETE-ALPHABET SETTING
In this section, we derive an iterative method to optimize the variational DIB cost function in equation 23 when the data model is discrete and the joint distribution PXK,Y is either known, or a good estimate of it can be obtained from the training samples. In these cases, the maximizing distributions P, Q of the variational DIB cost in equation 23 can be efficiently found by an alternating optimization procedure over P and Q, similar to the expectation-maximization (EM) algorithm Dempster et al. (1977) and the standard Blahut-Arimoto (BA) method Blahut (1972). An extension to the vector Gaussian data model, which involves random variables with continuous alphabets, is also provided. The main idea of the algorithm is that at iteration t, the optimal distributions P(t) that maximize the variational D-IB bound LVBs (P,Q(t)) for fixed Q(t) can be computed in closed form and, next, the maximizing pmfs Q(t) for given P(t) can also be found analytically. So, starting from an initialization P(0) and Q(0), the algorithm performs the following computations successively and in this order, until convergence,
P(0) → Q(0) → P(1) → . . .→ P(t) → Q(t) → . . . (81)
We refer to this algorithm as the "Blahut-Arimoto Distributed Information Bottleneck Algorithm (BA-DIB)". Algorithm 1 describes the steps taken by BA-DIB to successively maximize LVBs (P,Q) by solving a concave optimization problem over P and over Q at each iteration. We have the following lemma, whose proof follows essentially from the log-sum inequality Cover and Thomas (1991) and the convexity of the mapping x 7→ x log x.
Lemma 4 The function LVBs (P,Q) is concave in P and in Q.
For fixed P(t), the optimal Q(t) maximizing the variational D-IB bound in equation 19 follows from Lemma 1 as given by equation 21-equation 22. For fixed Q(t), the optimal P(t) can be found using the following lemma.
Lemma 5 For fixed $\mathbf{Q}$, there exists a $\mathbf{P}$ that achieves the maximum $\max_{\mathbf{P}} \mathcal{L}^{\mathrm{VB}}_s(\mathbf{P},\mathbf{Q})$, where $P_{U_k|X_k}$ is given by
$$p^*(u_k|x_k) = \frac{q(u_k)\exp(-\psi_s(u_k,x_k))}{\sum_{u_k \in \mathcal{U}_k} q(u_k)\exp(-\psi_s(u_k,x_k))}, \quad (82)$$
for $u_k \in \mathcal{U}_k$ and $x_k \in \mathcal{X}_k$, $k \in \mathcal{K}$, where we define
$$\psi_s(u_k,x_k) := D_{\mathrm{KL}}(P_{Y|x_k}\|Q_{Y|u_k}) + \frac{1}{s}\, \mathbb{E}_{U_{\mathcal{K}\setminus k}|x_k}\big[D_{\mathrm{KL}}(P_{Y|U_{\mathcal{K}\setminus k},x_k}\|Q_{Y|U_{\mathcal{K}\setminus k},u_k})\big]. \quad (83)$$
Proof: Due to its concavity, to maximize $\mathcal{L}^{\mathrm{VB}}_s(\mathbf{P},\mathbf{Q})$ with respect to $\mathbf{P}$ for given $\mathbf{Q}$, we add Lagrange multipliers $\lambda_{x_k} \geq 0$ for the constraints $\sum_{u_k \in \mathcal{U}_k} p(u_k|x_k) = 1$, $x_k \in \mathcal{X}_k$. For each $s$, $\lambda_{x_k} \geq 0$ and $p(u_k|x_k)$ can be explicitly found by solving the KKT conditions, e.g.,
$$\frac{\partial}{\partial p(u_k|x_k)}\Big[\mathcal{L}^{\mathrm{VB}}_s(\mathbf{P},\mathbf{Q}) + \sum_{x_k \in \mathcal{X}_k}\lambda_{x_k}\Big(\sum_{u_k \in \mathcal{U}_k} p(u_k|x_k) - 1\Big)\Big] = 0.$$
This completes the proof.
Algorithm 1 BA-DIB training algorithm for discrete data
1: inputs: discrete pmf PX1,...,XK,Y, parameter s ≥ 0.
2: output: optimal P*Uk|Xk, pair (∆s, Rs).
3: initialization: Set t = 0 and set P(0) with p(uk|xk) = 1/|Uk| for uk ∈ Uk, xk ∈ Xk, k = 1, . . . , K.
4: repeat
5:   Compute Q(t+1) using equation 21 and equation 22.
6:   Compute P(t+1) using equation 82.
7:   t ← t + 1
8: until convergence.
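To give a feel for these updates, a minimal NumPy sketch of the single-encoder (K = 1) special case, for which the second term of psi_s in equation 83 reduces to the same KL divergence, so that psi_s = (1 + 1/s) * KL(P(y|x) || Q(y|u)):

```python
import numpy as np

def ba_ib(p_xy, n_u, s, iters=200, seed=0, eps=1e-12):
    """BA-DIB sketch for the single-encoder (K = 1) special case."""
    rng = np.random.default_rng(seed)
    n_x, _ = p_xy.shape
    p_x = p_xy.sum(axis=1)
    p_y_x = p_xy / p_x[:, None]                      # P(y | x)
    p_u_x = rng.random((n_x, n_u))
    p_u_x /= p_u_x.sum(axis=1, keepdims=True)        # random init of P(u | x)
    for _ in range(iters):
        q_u = p_x @ p_u_x                            # Q(u), cf. eq. (21)
        q_y_u = (p_u_x * p_x[:, None]).T @ p_y_x / (q_u[:, None] + eps)
        log_ratio = (np.log(p_y_x[:, None, :] + eps)
                     - np.log(q_y_u[None, :, :] + eps))
        kl = np.einsum('xy,xuy->xu', p_y_x, log_ratio)   # KL(P(y|x)||Q(y|u))
        p_u_x = q_u[None, :] * np.exp(-(1 + 1 / s) * kl)  # eq. (82)
        p_u_x /= p_u_x.sum(axis=1, keepdims=True)
    return p_u_x, q_y_u
```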
7.1.1 CONVERGENCE
Algorithm 1 essentially falls into the class of the Successive Upper-Bound Minimization (SUM) algorithms Razaviyayn et al. (2013) in which LVBs (P,Q) acts as a globally tight lower bound on Ls(P). Algorithm 1 provides a sequence P(t) for each iteration t, which converges to a stationary point of the optimization problem equation 23.
Proposition 4 Every limit point of the sequence P(t) generated by Algorithm 1 is a stationary point of equation 23.
Proof: Let $\mathbf{Q}^*(\mathbf{P}) = \arg\max_{\mathbf{Q}} \mathcal{L}^{\mathrm{VB}}_s(\mathbf{P},\mathbf{Q})$. Using Lemma 1, for every $\mathbf{P}' \neq \mathbf{P}$, it holds that
$$\mathcal{L}^{\mathrm{VB}}_s(\mathbf{P},\mathbf{Q}^*(\mathbf{P}')) \leq \mathcal{L}^{\mathrm{VB}}_s(\mathbf{P},\mathbf{Q}^*(\mathbf{P})) = \mathcal{L}_s(\mathbf{P}). \quad (84)$$
Since $\mathcal{L}_s(\mathbf{P})$ and $\mathcal{L}^{\mathrm{VB}}_s(\mathbf{P},\mathbf{Q}^*(\mathbf{P}'))$ satisfy the assumptions of (Razaviyayn et al., 2013, Proposition 1), $\mathcal{L}^{\mathrm{VB}}_s(\mathbf{P},\mathbf{Q}^*(\mathbf{P}'))$ satisfies conditions A1-A4 in Razaviyayn et al. (2013). Convergence to a stationary point of equation 23 follows from (Razaviyayn et al., 2013, Theorem 1).
The self consistent equations equation 21, equation 22 and equation 83 satisfied by any stationary point of the D-IB problem extend those of the standard point-to-point IB problem Globerson and Tishby (2004) to the distributed IB problem with K ≥ 2 encoders. In particular, note the additional divergence term in equation 83.
7.2 GAUSSIAN SETTING
Recall Algorithm 1. For finite-alphabet sources, the updating rules of Q(t+1) and P(t+1) in Algorithm 1 are relatively easy, but they become unfeasible for continuous-alphabet sources. We leverage the optimality of Gaussian test channels, shown in Theorem 2, to restrict the optimization of P to Gaussian distributions, which are easily represented by a finite set of parameters, namely mean and covariance. We show that if P(t) are Gaussian distributions, then P(t+1) are also Gaussian distributions, which can be computed with an efficient update of their representing parameters. In particular, if at time t the k-th distribution $P^{(t)}_{U_k|X_k}$ is given by
$$\mathbf{U}_k^t = \mathbf{A}_k^t \mathbf{X}_k + \mathbf{Z}_k^t, \quad (85)$$
where $\mathbf{Z}_k^t \sim \mathcal{CN}(0, \Sigma_{z_k^t})$, we show that at $t+1$, for $\mathbf{P}^{(t+1)}$ updated as in equation 82, the encoder $P^{(t+1)}_{U_k|X_k}$ corresponds to $\mathbf{U}_k^{t+1} = \mathbf{A}_k^{t+1}\mathbf{X}_k + \mathbf{Z}_k^{t+1}$, where $\mathbf{Z}_k^{t+1} \sim \mathcal{CN}(0, \Sigma_{z_k^{t+1}})$ and $\Sigma_{z_k^{t+1}}, \mathbf{A}_k^{t+1}$ are updated as
$$\Sigma_{z_k^{t+1}} = \Big(\Big(1 + \frac{1}{s}\Big)\Sigma_{u_k^t|y}^{-1} - \frac{1}{s}\Sigma_{u_k^t|u_{\mathcal{K}\setminus k}^t}^{-1}\Big)^{-1}, \quad (86)$$
$$\mathbf{A}_k^{t+1} = \Sigma_{z_k^{t+1}}\Big(\Big(1 + \frac{1}{s}\Big)\Sigma_{u_k^t|y}^{-1}\mathbf{A}_k^t(\mathbf{I} - \Sigma_{x_k|y}\Sigma_{x_k}^{-1}) - \frac{1}{s}\Sigma_{u_k^t|u_{\mathcal{K}\setminus k}^t}^{-1}\mathbf{A}_k^t(\mathbf{I} - \Sigma_{x_k|u_{\mathcal{K}\setminus k}^t}\Sigma_{x_k}^{-1})\Big). \quad (87)$$
The detailed update procedure is given in Algorithm 2 (see the following section for the details of the derivations).
Algorithm 2 BA-DIB algorithm for the Gaussian vector D-IB
1: inputs: covariance Σy,x1,...,xK, parameter s ≥ 0.
2: output: optimal pairs (A*k, Σz*k), k = 1, . . . , K.
3: initialization: Set t = 0; randomly initialize A0k and Σz0k ≻ 0, k ∈ K.
4: repeat
5:   Compute Σxk|utK\k and update, for k ∈ K,
     $$\Sigma_{u_k^t|y} = \mathbf{A}_k^t \Sigma_{x_k|y} \mathbf{A}_k^{t,\dagger} + \Sigma_{z_k^t} \quad (88)$$
     $$\Sigma_{u_k^t|u_{\mathcal{K}\setminus k}^t} = \mathbf{A}_k^t \Sigma_{x_k|u_{\mathcal{K}\setminus k}^t} \mathbf{A}_k^{t,\dagger} + \Sigma_{z_k^t}. \quad (89)$$
6:   Compute Σzt+1k as in equation 86 for k ∈ K.
7:   Compute At+1k as in equation 87, k ∈ K.
8:   t ← t + 1.
9: until convergence.
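A minimal NumPy sketch of one pass of steps 5-7 for a single encoder k, assuming the conditional covariances of the current iterate have already been computed from the joint Gaussian model:

```python
import numpy as np

def ba_dib_gauss_step(A_k, Sig_zk, Sig_xk_y, Sig_xk_uKk, Sig_xk, s):
    """One update of (A_k, Sigma_zk) per eqs. (86)-(89). Sig_xk_y and
    Sig_xk_uKk are the conditional covariances of X_k given Y and given
    U_{K/k}, assumed precomputed from the current joint model."""
    inv = np.linalg.inv
    I = np.eye(Sig_xk.shape[0])
    Su_y = A_k @ Sig_xk_y @ A_k.T + Sig_zk            # eq. (88)
    Su_uKk = A_k @ Sig_xk_uKk @ A_k.T + Sig_zk        # eq. (89)
    Sz_new = inv((1 + 1 / s) * inv(Su_y) - (1 / s) * inv(Su_uKk))  # eq. (86)
    A_new = Sz_new @ ((1 + 1 / s) * inv(Su_y) @ A_k @ (I - Sig_xk_y @ inv(Sig_xk))
                      - (1 / s) * inv(Su_uKk) @ A_k
                        @ (I - Sig_xk_uKk @ inv(Sig_xk)))          # eq. (87)
    return A_new, Sz_new
```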
7.2.1 DERIVATION OF ALGORITHM 2
We derive the update rules of Algorithm 2 and show that the Gaussian distribution is invariant to the update rules in Algorithm 1, in line with Theorem 2. First, we recall that if $(\mathbf{X}_1, \mathbf{X}_2)$ are jointly Gaussian, then
$$P_{\mathbf{X}_2|\mathbf{X}_1 = \mathbf{x}_1} = \mathcal{CN}(\mu_{x_2|x_1}, \Sigma_{x_2|x_1}), \quad (90)$$
where $\mu_{x_2|x_1} := \mathbf{K}_{x_2|x_1}\mathbf{x}_1$, with $\mathbf{K}_{x_2|x_1} := \Sigma_{x_2,x_1}\Sigma_{x_1}^{-1}$.
Then, for Q(t+1) computed as in equation 21 and equation 22 from P(t), which is a set of Gaussian distributions, we have
$$Q^{(t+1)}_{\mathbf{Y}|\mathbf{u}_k} = \mathcal{CN}(\mu_{y|u_k^t}, \Sigma_{y|u_k^t}), \qquad Q^{(t+1)}_{\mathbf{Y}|\mathbf{u}_{\mathcal{K}}} = \mathcal{CN}(\mu_{y|u_{\mathcal{K}}^t}, \Sigma_{y|u_{\mathcal{K}}^t}).$$
Next, we look at the update $\mathbf{P}^{(t+1)}$ as in equation 82 from a given $\mathbf{Q}^{(t+1)}$. First, we have that $p(\mathbf{u}_k^t)$ is the marginal of $\mathbf{U}_k^t$, given by $\mathbf{U}_k^t \sim \mathcal{CN}(0, \Sigma_{u_k^t})$, where $\Sigma_{u_k^t} = \mathbf{A}_k^t\Sigma_{x_k}\mathbf{A}_k^{t,H} + \Sigma_{z_k^t}$.
Then, to compute $\psi_s(\mathbf{u}_k^t, \mathbf{x}_k)$, first, we note that
$$\mathbb{E}_{U_{\mathcal{K}\setminus k}|x_k}\big[D_{\mathrm{KL}}(P_{Y|U_{\mathcal{K}\setminus k},x_k}\|Q_{Y|U_{\mathcal{K}\setminus k},u_k})\big] = D_{\mathrm{KL}}(P_{Y,U_{\mathcal{K}\setminus k}|x_k}\|Q_{Y,U_{\mathcal{K}\setminus k}|u_k}) - D_{\mathrm{KL}}(P_{U_{\mathcal{K}\setminus k}|x_k}\|Q_{U_{\mathcal{K}\setminus k}|u_k}), \quad (91)$$
and that for two generic multivariate Gaussian distributions $P_1 \sim \mathcal{CN}(\mu_1, \Sigma_1)$ and $P_2 \sim \mathcal{CN}(\mu_2, \Sigma_2)$ in $\mathbb{C}^N$, the KL divergence is computed as in equation 80.
Applying equation 91 and equation 80 in equation 83, and noting that all involved distributions are Gaussian, it follows that $\psi_s(\mathbf{u}_k^t, \mathbf{x}_k)$ is a quadratic form. Then, since $p(\mathbf{u}_k^t)$ is Gaussian, the product $\log(p(\mathbf{u}_k^t)\exp(-\psi_s(\mathbf{u}_k^t, \mathbf{x}_k)))$ is also a quadratic form, and identifying constant, first- and second-order terms, we can write
$$\log p^{(t+1)}(\mathbf{u}_k|\mathbf{x}_k) = Z(\mathbf{x}_k) + (\mathbf{u}_k - \mu_{u_k^{t+1}|x_k})^H \Sigma_{z_k^{t+1}}^{-1} (\mathbf{u}_k - \mu_{u_k^{t+1}|x_k}), \quad (92)$$
where $Z(\mathbf{x}_k)$ is a normalization term independent of $\mathbf{u}_k$,
$$\Sigma_{z_k^{t+1}}^{-1} = \Sigma_{u_k^t}^{-1} + \mathbf{K}_{y|u_k^t}^H \Sigma_{y|u_k^t}^{-1} \mathbf{K}_{y|u_k^t} + \frac{1}{s}\mathbf{K}_{y u_{\mathcal{K}\setminus k}^t|u_k^t}^H \Sigma_{y u_{\mathcal{K}\setminus k}^t|u_k^t}^{-1} \mathbf{K}_{y u_{\mathcal{K}\setminus k}^t|u_k^t} - \frac{1}{s}\mathbf{K}_{u_{\mathcal{K}\setminus k}^t|u_k^t}^H \Sigma_{u_{\mathcal{K}\setminus k}^t|u_k^t}^{-1} \mathbf{K}_{u_{\mathcal{K}\setminus k}^t|u_k^t}, \quad (93)$$
and
$$\mu_{u_k^{t+1}|x_k} = \Sigma_{z_k^{t+1}}\Big(\mathbf{K}_{y|u_k^t}^H \Sigma_{y|u_k^t}^{-1} \mu_{y|x_k} + \frac{1}{s}\mathbf{K}_{y,u_{\mathcal{K}\setminus k}^t|u_k^t} \Sigma_{y,u_{\mathcal{K}\setminus k}^t|u_k^t}^{-1} \mu_{y,u_{\mathcal{K}\setminus k}^t|x_k} - \frac{1}{s}\mathbf{K}_{u_{\mathcal{K}\setminus k}^t|u_k^t} \Sigma_{u_{\mathcal{K}\setminus k}^t|u_k^t}^{-1} \mu_{u_{\mathcal{K}\setminus k}^t|x_k}\Big). \quad (94)$$
This shows that $p^{(t+1)}(\mathbf{u}_k|\mathbf{x}_k)$ is a multivariate Gaussian distribution and that $\mathbf{U}_k^{t+1}|\{\mathbf{X}_k = \mathbf{x}_k\}$ is also multivariate Gaussian, distributed as $\mathcal{CN}(\mu_{u_k^{t+1}|x_k}, \Sigma_{z_k^{t+1}})$.
Next, we simplify equation 93 and equation 94 to obtain the update rules equation 86 and equation 87. From the matrix inversion lemma, similarly to Chechik et al. (Feb. 2005), for $(\mathbf{X}_1, \mathbf{X}_2)$ jointly Gaussian we have
$$\Sigma_{x_2|x_1}^{-1} = \Sigma_{x_2}^{-1} + \mathbf{K}_{x_1|x_2}^H \Sigma_{x_1|x_2}^{-1} \mathbf{K}_{x_1|x_2}. \quad (95)$$
Applying equation 95 in equation 93, we have
$$\Sigma_{z_k^{t+1}}^{-1} = \Sigma_{u_k^t|y}^{-1} + \frac{1}{s}\Sigma_{u_k^t|y u_{\mathcal{K}\setminus k}^t}^{-1} - \frac{1}{s}\Sigma_{u_k^t|u_{\mathcal{K}\setminus k}^t}^{-1} \quad (96)$$
$$= \Big(1 + \frac{1}{s}\Big)\Sigma_{u_k^t|y}^{-1} - \frac{1}{s}\Sigma_{u_k^t|u_{\mathcal{K}\setminus k}^t}^{-1}, \quad (97)$$
where equation 97 is due to the Markov chain $U_k - Y - U_{\mathcal{K}\setminus k}$.
Then, also from the matrix inversion lemma, we have for jointly Gaussian $(\mathbf{X}_1, \mathbf{X}_2)$,
$$\Sigma_{x_2|x_1}^{-1}\Sigma_{x_2,x_1}\Sigma_{x_1}^{-1} = \Sigma_{x_2}^{-1}\Sigma_{x_2,x_1}\Sigma_{x_1|x_2}^{-1}. \quad (98)$$
Applying equation 98 to equation 94, for the first term in equation 94 we have
$$\mathbf{K}_{y|u_k^t}^H \Sigma_{y|u_k^t}^{-1} \mu_{y|x_k} = \Sigma_{u_k^t|y}^{-1} \Sigma_{y,u_k^t} \Sigma_y^{-1} \mu_{y|x_k} \quad (99)$$
$$= \Sigma_{u_k^t|y}^{-1} \mathbf{A}_k^t \Sigma_{x_k,y} \Sigma_y^{-1} \Sigma_{y,x_k} \Sigma_{x_k}^{-1} \mathbf{x}_k = \Sigma_{u_k^t|y}^{-1} \mathbf{A}_k^t (\mathbf{I} - \Sigma_{x_k|y}\Sigma_{x_k}^{-1}) \mathbf{x}_k, \quad (100)$$
where $\Sigma_{y,u_k^t} = \mathbf{A}_k^t \Sigma_{x_k,y}$; and equation 100 is due to the definition of $\Sigma_{x_k|y}$.
Similarly, for the second term in equation 94, we have
$$\mathbf{K}_{y u_{\mathcal{K}\setminus k}^t|u_k^t} \Sigma_{y u_{\mathcal{K}\setminus k}^t|u_k^t}^{-1} \mu_{y,u_{\mathcal{K}\setminus k}^t|x_k} = \Sigma_{u_k^t|y u_{\mathcal{K}\setminus k}^t}^{-1} \mathbf{A}_k^t (\mathbf{I} - \Sigma_{x_k|y u_{\mathcal{K}\setminus k}^t}\Sigma_{x_k}^{-1}) \mathbf{x}_k \quad (101)$$
$$= \Sigma_{u_k^t|y}^{-1} \mathbf{A}_k^t (\mathbf{I} - \Sigma_{x_k|y}\Sigma_{x_k}^{-1}) \mathbf{x}_k, \quad (102)$$
where we use $\Sigma_{u_k^t, y u_{\mathcal{K}\setminus k}^t} = \mathbf{A}_k^t \Sigma_{x_k, y u_{\mathcal{K}\setminus k}^t}$; and equation 102 is due to the Markov chain $U_k - Y - U_{\mathcal{K}\setminus k}$.
For the third term in equation 94,
$$\mathbf{K}_{u_{\mathcal{K}\setminus k}^t|u_k^t} \Sigma_{u_{\mathcal{K}\setminus k}^t|u_k^t}^{-1} \mu_{u_{\mathcal{K}\setminus k}^t|x_k} = \Sigma_{u_k^t|u_{\mathcal{K}\setminus k}^t}^{-1} \mathbf{A}_k^t (\mathbf{I} - \Sigma_{x_k|u_{\mathcal{K}\setminus k}^t}\Sigma_{x_k}^{-1}) \mathbf{x}_k. \quad (103)$$
Equation 87 follows by noting that $\mu_{u_k^{t+1}|x_k} = \mathbf{A}_k^{t+1}\mathbf{x}_k$, so that from equation 94 $\mathbf{A}_k^{t+1}$ can be identified as in equation 87.
Finally, we note that due to equation 85, Σ
1. How does the model incorporate model complexity through mutual information and MDL?
2. Is the mathematical analysis in the paper too complex for the proposed model?
3. Are there any comparisons with other methods in the literature for integrating multiple data sources?
4. Could the authors provide clearer definitions for Discrete Memoryless and Memoryless Vector Gaussian Models?
5. Is the Markov chain representation in equation (3) well defined?
6. Are X_k^n and X_{k,n} equivalent?
7. Would it be more readable to explicitly define Shannon Mutual Information in equation (6)?
8. Why are Gaussian pmfs used for discrete variables in the second paragraph on page 5? | Review | Review
I am not an expert in this area and the paper involves a lot of derivations and proofs, but I did not check the correctness of those derivations. In summary, this paper proposed a framework for integrating multiple data sources for representing data. In the framework, each data source was mapped to a latent variable by using a nonlinear function, which is called an encoder; then the mapped latent variables were jointly mapped to the target data by using another nonlinear function, which is called a decoder. To make this idea work, the paper used mutual information as the objective function to control the accuracy of the model, and at the same time, to avoid overfitting, the paper proposed to use MDL as a measure to control the complexity of the model. If I am right, this is the whole picture of the proposed model. My questions are the following:
1) I am not very clear how the model complexity was automatically incorporated into the objective function. It seems to me that the objective function was finally equation (29), and then the neural networks for the encoder and decoder were optimized. If this was the case, how was the model complexity incorporated, that is, how was R_k used in the model? Were the values R_k constant in the model - I mean, are they fixed constant values? How were these values, i.e., R_k, chosen?
2) I am a mathematician, but to be honest, I feel that the maths in the paper is huge and heavy, and I thought it could not need to be that complex for the model. The consequence is that it makes the paper hard to read. This is a personal feeling; you could just ignore this point.
3) Experiments: there are a lot of papers describing how to integrate data sources, at least for the MNIST example. It would be interesting to compare the proposed method to the literature. The experiment in 4.1 is obviously a toy data problem - I mean, although the data is real, the data was generated using noise and rotations. It would be more interesting to apply the method to a real-world problem.
4) I think it would be more friendly to explicitly define the concepts of Discrete Memoryless and Memoryless Vector Gaussian Models.
5) The Markov chain represented in equation (3) is not well defined. I do not understand these notations.
6) Before the equation (4), are X_k^n and X_{k,n} equivalent? I am confused by this notation.
7) In equation (6), it is more readable to explicitly define the Shannon Mutual Information.
8) The second paragraph on Page 5: you use Gaussian pmfs here, but a pmf denotes a discrete variable, while a Gaussian, I assume, is continuous.
ICLR | Title
Mutual Information Continuity-constrained Estimator
Abstract
The estimation of mutual information (MI) is vital to a variety of applications in machine learning. Recent developments in neural approaches have shown encouraging potential in estimating the MI between high-dimensional variables based on their latent representations. However, these estimators are prone to high variances owing to inevitable outlier events. Recent approaches mitigate the outlier issue by smoothing the partition function using clipping or averaging strategies; however, these estimators either break the lower bound condition or sacrifice the level of accuracy. Accordingly, we propose the Mutual Information Continuity-constrained Estimator (MICE). MICE alternatively smooths the partition function by constraining the Lipschitz constant of the log-density ratio estimator, thus alleviating the induced variances without clipping or averaging. Our proposed estimator outperforms most of the existing estimators in terms of bias and variance in the standard benchmark. In addition, we propose an experiment extension based on the standard benchmark, where variables are drawn from a multivariate normal distribution with correlations between the samples in a batch. The experimental results imply that when the i.i.d. assumption is unfulfilled, our proposed estimator can be more accurate than the existing approaches, in which the MI tends to be underestimated. Finally, we demonstrate that MICE mitigates mode collapse in the kernel density estimation task.
1 INTRODUCTION
Mutual information (MI) estimation is essential in various machine learning applications, including learning representations (Oord et al., 2018; Chen et al., 2016; Bachman et al., 2019; Hjelm et al., 2018; Sordoni et al., 2021), feature selection (Battiti, 1994; Estévez et al., 2009), feature disentanglement (Higgins et al., 2018; Esmaeili et al., 2019; Colombo et al., 2021), and reinforcement learning (Oord et al., 2018; Bachman et al., 2019; Li et al., 2016). Some conventional non-parametric approaches have been proposed to estimate MI (Estévez et al., 2009; Fraser & Swinney, 1986; Moon et al., 1995; Kwak & Choi, 2002). Despite promising results, Belghazi et al. (2018) and Poole et al. (2019) indicated that these estimators have limited capability to scale up well with the sample size or dimension (Gao et al., 2015) and are therefore hard to utilize in general-purpose applications.
Recent studies focus on scalable MI estimation through variational bounds maximization (Oord et al., 2018; Belghazi et al., 2018; Poole et al., 2019) or minimization (Cheng et al., 2020) using neural networks or a convex maximum-entropy method (Samo, 2021). These neural estimators have been adopted in some remarkable self-supervised applications, such as computer vision (Chen et al., 2020; He et al., 2020; Grill et al., 2020; Chen & He, 2020; Chen et al., 2021) and speech recognition (Schneider et al., 2019; Baevski et al., 2019), with the aim of maximizing the shared information between different views with respect to space or time. In MI estimation, neural networks (also known as critics) have been used to approximate the log-density ratio. These MI estimators generally characterize the Kullback-Leibler (KL) divergence (Kullback & Leibler, 1951) using a dual representation and subsequently formulate MI lower bounds.
Although multiple applications have attained promising results, two significant issues have not been fully addressed. As the first issue, the existing MI estimators can be debilitated by significant bias and variance owing to inevitable outlier events. It was pointed out by (Poole et al., 2019; Song & Ermon, 2020) that the exponential partition function causes a high-variance issue. It implies
that estimators leveraging f -divergence representations could suffer from the high-variance issue. Numerous studies have been conducted to address this problem. Previous approaches such as Mutual Information Neural Estimation (MINE) (Belghazi et al., 2018) and Contrastive Predictive Coding (CPC) (Oord et al., 2018) reduce the variances by adopting different types of averaging. Based on MINE, the Smoothed Mutual Information Lower-bound Estimator (SMILE) (Song & Ermon, 2020) limits the range of the critic with a hyper-parameter, enabling estimates with low bias and variance. For the second issue, as summarized in (Oord et al., 2018; Belghazi et al., 2018; Poole et al., 2019; Nguyen et al., 2010), most of the existing MI estimators are tested on a standard benchmark where random variables are drawn independently. However, the benchmark is insufficient for an analysis of videos or audio signals in which data frames could be correlated.
In this paper, we address the high-variance issue with a novel Mutual Information Continuity-constrained Estimator (MICE) that constrains the Lipschitz constant of the critic via its spectral norm (Miyato et al., 2018), and we block the unstable gradients generated from the partition function. MICE underestimates less in the extended benchmark because the partition function is smoothed by the scale of the spectral norm instead of hard clipping, which could overly restrict the range of the density ratio. The experimental results show that MICE has a competitive bias-variance trade-off compared to SMILE in the standard benchmark, without selecting a clipping threshold. Based on the standard benchmark, we propose an extension in which random variables are correlated within a batch. Our proposed method is robust when samples are not independent, compared to existing variational estimators that underestimate MI drastically when slight correlations are involved. Finally, in the kernel density estimation (KDE) experiment, we demonstrate that using MICE as MI regularization alleviates mode collapse (Che et al., 2016; Dumoulin et al., 2016; Srivastava et al., 2017) in the training of generative adversarial networks (GANs) (Goodfellow et al., 2014). Our contributions are as follows:
• We address the high-variance issue of an existing unbiased estimator by constraining the Lipschitz constant of the log-density ratio estimator and by gradient stabilization.
• We prove that MICE is a strongly consistent estimator of MI.
• In the proposed experiment extension, the results show that MICE outperforms existing estimators under the condition in which the i.i.d. assumption is not fulfilled.
• A GAN regularized by MICE can capture more modes in the KDE experiment and ease the mode collapse problem.
2 RELATED WORK
For a pair of random variables (X,Y ) over the probability space X × Y , the mutual information I(X;Y ) between X and Y can be defined as the KL divergence of the joint distribution P(X,Y ) and the product of the marginals PX and PY :
I(X;Y) = D_{KL}(P_{(X,Y)} \,\|\, P_X \otimes P_Y) \quad (1)
where DKL is the KL divergence. Next, we start with a common characterization of KL divergence, the Donsker–Varadhan (DV) representation (Donsker & Varadhan, 1983), which is adopted by MINE (Belghazi et al., 2018) and SMILE (Song & Ermon, 2020).
Lemma 1 (Donsker–Varadhan (DV)) Given two probability distributions P and Q over X :
D_{KL}(P \,\|\, Q) = \sup_{T: \mathcal{X} \to \mathbb{R}} \left\{ E_P[T] - \log E_Q[e^T] \right\} \triangleq I_{DV} \quad (2)
for some bounded function T : \mathcal{X} \to \mathbb{R} such that the expectations are finite. In particular, if P and Q are specified as P_{(X,Y)} and P_X \otimes P_Y, MI can be estimated by maximizing the DV representation. It should be noted that the equality holds when T = \log dP/dQ + C for some constant C \in \mathbb{R}. In (Broniatowski & Keziou, 2009; Nowozin et al., 2016), a general variational estimation of f-divergences is introduced. For any convex, lower-semicontinuous function f, there exists a convex conjugate f^* such that f(u) = \sup_{t \in \mathrm{dom}(f^*)}\{tu - f^*(t)\}, where u belongs to the domain of f.
Therefore, f -divergences can be estimated by taking supremum over an arbitrary class of functions T : X → R:
D_f(P \,\|\, Q) = \int_{\mathcal{X}} q(x) \sup_{t \in \mathrm{dom}(f^*)} \left\{ t \, \frac{p(x)}{q(x)} - f^*(t) \right\} dx \quad (3)
\geq \sup_{T: \mathcal{X} \to \mathbb{R}} \left\{ E_P[T] - E_Q[f^*(T)] \right\} \quad (4)
The derivation from Equation 3 to Equation 4 is based on Jensen's inequality because the supremum is swapped out of the integration. Here, the KL divergence can be obtained by specifying f(u) = u \log u, thus f^*(T) = e^{T-1}, yielding the Nguyen-Wainwright-Jordan (NWJ) lower bound (Nguyen et al., 2010). Similarly, MI can be estimated by setting P = P_{(X,Y)} and Q = P_X \otimes P_Y. Lemma 2 (Nguyen, Wainwright, and Jordan (NWJ) (Nguyen et al., 2010)) Given two probability distributions P and Q over \mathcal{X},
D_{KL}(P \,\|\, Q) \geq \sup_{T_\theta: \mathcal{X} \to \mathbb{R}} \left\{ E_P[T_\theta] - E_Q[e^{T_\theta - 1}] \right\} \triangleq I_{NWJ} \quad (5)
where the equality holds when T_\theta = 1 + \log \frac{dP}{dQ}.
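For concreteness, the following minimal NumPy sketch evaluates the empirical I_NWJ from critic outputs; the function and argument names (nwj_estimate, joint_scores, marginal_scores) are our own illustrative choices, not notation from the paper.

import numpy as np

def nwj_estimate(joint_scores, marginal_scores):
    # joint_scores:    T_theta(x_i, y_i) for pairs from P_(X,Y), shape (n,)
    # marginal_scores: T_theta(x_i, y_j), i != j, for pairs from P_X x P_Y
    # Empirical form of Equation 5: E_P[T_theta] - E_Q[e^{T_theta - 1}].
    return joint_scores.mean() - np.exp(marginal_scores - 1.0).mean()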
Note that I_NWJ is unbiased, since no nonlinear function is applied outside the expectation on the right-hand side. Although I_DV and I_NWJ are tight with a sufficiently large hypothesis set of T_θ, the partition function induces large variances. The following approaches aim to solve the high-variance issue by averaging or clipping the partition function. For instance, MINE (Belghazi et al., 2018) proposed a neural information measure based on taking the supremum of I_DV over a neural network T_θ: \mathcal{X} \times \mathcal{Y} \to \mathbb{R} parameterized by θ. Lemma 3 (Mutual Information Neural Estimation (MINE) (Belghazi et al., 2018)) Let P and Q be two probability distributions over \mathcal{X}.
I(X;Y) \geq \sup_{T_\theta: \mathcal{X} \to \mathbb{R}} \left\{ E_{P_{(X,Y)}}[T_\theta] - \log \mathrm{EMA}\left( E_{P_X \otimes P_Y}[e^{T_\theta}] \right) \right\} \triangleq I_{MINE} \quad (6)
In this manner, MINE collects cross-batch statistics to evaluate a bias-corrected estimate, reducing the bias and variance simultaneously. In contrast to MINE, which uses the exponential moving average (EMA) to reduce variances induced by the partition function, (Song & Ermon, 2020) proposed to reduce variances by putting limits on the range of the log-density ratio.
Lemma 4 (Smoothed Mutual Information Lower-bound Estimator (SMILE) (Song & Ermon, 2020)) Let P and Q be two probability distributions over X
I(X;Y) \geq \sup_{T_\theta: \mathcal{X} \to \mathbb{R}} \left\{ E_{P_{(X,Y)}}[T_\theta] - \log E_{P_X \otimes P_Y}[e^{\max(\min(T_\theta, \tau), -\tau)}] \right\} \triangleq I_{SMILE} \quad (7)
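A minimal sketch of the clipped estimator in Equation 7, assuming SciPy's logsumexp for a numerically stable log-mean-exp; the variable names are illustrative.

import numpy as np
from scipy.special import logsumexp

def smile_estimate(joint_scores, marginal_scores, tau=1.0):
    # Clip the log-density ratios to [-tau, tau] before the partition term.
    clipped = np.clip(marginal_scores, -tau, tau)
    # log E_Q[e^{clip(T, -tau, tau)}], computed as a stable log-mean-exp.
    log_partition = logsumexp(clipped) - np.log(len(clipped))
    return joint_scores.mean() - log_partition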
Another multi-sample estimator, Contrastive Predictive Coding (CPC) (Oord et al., 2018), uses the cross-entropy between the positive and negative samples as an objective
E_{\Pi_j p(x_j, y_j)} \left[ \frac{1}{n} \sum_{i=1}^{n} \log \frac{f(x_i, y_i)}{\frac{1}{n} \sum_{j=1}^{n} f(x_i, y_j)} \right] \triangleq I_{CPC} \quad (8)
where f(x, y) = e^{x^\top W y} is a log-bilinear function with a trainable parameter W, and the expectation is taken over the distribution with density \Pi_j p(x_j, y_j). Note that I_CPC is tight when f(x, y) = \log p(y|x) + c(y), where c(y) is an arbitrary function that depends on y. However, (Oord et al., 2018) indicated that this bound is loose when I(X;Y) > \log n, requiring an exponentially large batch size to achieve accurate estimates with high confidence (Song & Ermon, 2020).
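The following sketch evaluates the empirical I_CPC from an (n, n) matrix of critic scores, where entry (i, j) holds log f(x_i, y_j) and the diagonal holds the positive pairs; the names are illustrative.

import numpy as np
from scipy.special import logsumexp

def cpc_estimate(scores):
    # scores[i, j] = log f(x_i, y_j); diagonal entries are the positive pairs.
    n = scores.shape[0]
    # log( (1/n) * sum_j f(x_i, y_j) ) for each row i.
    log_row_mean = logsumexp(scores, axis=1) - np.log(n)
    # Equation 8: average of log f(x_i, y_i) minus the row-wise normalizer.
    return (np.diag(scores) - log_row_mean).mean()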
3 LIMITATIONS ON DV REPRESENTATION
3.1 MAXIMUM OF LOG-DENSITY RATIO ESTIMATE DOMINATING THE PARTITION FUNCTION
According to (Poole et al., 2019; Song & Ermon, 2020), the partition function E_Q[e^{T_θ(x,y)}] is the source of high variances and biases. This expression is highly dependent on the maximum of the log-density ratio in a batch. We demonstrate this by showing the relationship between the LogSumExp (LSE, also known as a smooth approximation to the maximum function) operation and the maximum function as follows
\mathrm{LSE}(T_\theta(x_1, y_1), \dots, T_\theta(x_n, y_{n-1})) > \max\{T_\theta(x_1, y_1), \dots, T_\theta(x_n, y_{n-1})\}
\frac{1}{n(n-1)} \sum_{i=1}^{n} \sum_{j=1}^{n-1} e^{T_\theta(x_i, y_j)} > \frac{1}{n(n-1)} e^{\max\{T_\theta(x_1, y_1), \dots, T_\theta(x_n, y_{n-1})\}} \quad (9)
where T_θ(x_i, y_j) is the estimated log-density ratio \log dP/dQ for x and y drawn from Q. Note that because Q is the product of marginals, the total number of T_θ values sampled from Q is n(n − 1). (McAllester & Stratos, 2020) indicated that the partition function is dominated by extremely rare events which are never observed through sampling from P_X \otimes P_Y. They quantified the probability of outlier events using the outlier risk lemma.
Lemma 5 (Outlier risk lemma (McAllester & Stratos, 2020)) Given n samples (n ≥ 2) that follow the distribution P_X and a property Φ[x] such that P_X(Φ[x]) ≤ 1/n, the probability that no sample x satisfies Φ[x] is at least 1/4.
Here, PX(Φ[x]) is the probability of drawing x from PX such that statement Φ[x] holds. Lemma 5 can be easily proved based on the probability of sampling with replacement.
Letting P = P_{(X,Y)} and Q = P_X \otimes P_Y, for the DV representation, the best estimate of MI is established when
E_P[T_θ(x, y)] = I(X;Y) \quad (10)
E_Q[e^{T_θ(x,y)}] = 1 \quad (11)
The outlier risk lemma indicates that there is at least a probability of 1/4 that one can draw an unseen variable such that E_Q[e^{T_θ(x,y)}] > 1. By observing Equation 9, if a pair of unseen variables (x', y') were sampled, the partition function will be larger than e^{T_θ(x', y')}/(n(n − 1)); therefore, the estimates of the DV representation are of high bias and variance. Similarly, the best estimate of I_NWJ is established with the same Equation 10, but Equation 11 should be modified as E_Q[e^{T_θ(x,y)−1}] = 1.
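A quick numeric illustration of how a single rare high-scoring pair dominates the empirical partition term in Equation 9 (the numbers below are a toy example, not results from the paper):

import numpy as np
from scipy.special import logsumexp

def log_mean_exp(scores):
    return logsumexp(scores) - np.log(len(scores))

typical = np.zeros(999)                   # typical log-density ratios
with_outlier = np.append(typical, 10.0)   # one rare outlier pair
print(log_mean_exp(typical))              # 0.0
print(log_mean_exp(with_outlier))         # ~3.1: one event shifts the estimate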
3.2 NEITHER UPPER BOUND NOR LOWER BOUND ESTIMATORS
Based on the aforementioned limitations of the DV representation, I_MINE and I_SMILE focus on controlling the variance of the partition function. I_MINE reduces the variance by applying EMA to the partition function over all previous samples. According to (McAllester & Stratos, 2020), the worst case of the DV representation can be bounded under \log n. Because I_MINE implicitly enlarges the batch size with the scale of iteration (i.e., the number of covered samples at the i-th iteration is i × n, where n is the batch size), it can leverage the linearly increasing batch size to reduce the bias issue. Another method, adopted by I_SMILE, is controlling the range of the partition function by clipping the log-density ratio with a threshold τ in Equation 7.
In (Song & Ermon, 2020), the clipped density ratio r_τ = \max(\min(e^{T_θ(x,y)}, e^τ), e^{−τ}) is estimated by n random variables over the distribution Q = P_X \otimes P_Y. The variance of the bounded partition function E_Q[r_τ] satisfies \mathrm{Var}[E_Q[r_τ]] ≤ (e^τ − e^{−τ})^2 / (4n). According to (Song & Ermon, 2020), a trade-off between the bias and variance can be determined by the threshold τ: decreasing τ reduces the variance but increases the bias.
Although these estimators mitigate the high-variance issue and attain more accurate estimates, they are no longer upper or lower bounds on MI. This is because the modified partition function is no longer a normalizing term. As MINE applies EMA to E_Q[e^{T_θ(x,y)}] across batches, and there is at least a 1/4 chance that the outlier event occurs, the partition function eventually saturates at e^{T_{max}}/(4N^2 − N), where T_{max} is the maximum among all T_θ, and N is the amount of training data. As the range of the partition function of I_SMILE is limited within [e^{−τ}, e^{τ}], the MI would be overestimated when the log-density ratio is larger than τ and would not be underestimated only if τ → 0, because of Equation 11. In a nutshell, although these neither-upper-bound-nor-lower-bound estimators reach more accurate MI estimates than I_DV, they could overestimate MI to some unknown extent, as they are no longer guaranteed to stay below the MI. Moreover, I_MINE requires a large batch size to avoid yielding large errors; in addition, developing a criterion for selecting a proper threshold for I_SMILE is also challenging.
4 METHODOLOGY
4.1 MUTUAL INFORMATION CONTINUITY-CONSTRAINED ESTIMATOR
To alleviate the issue of outlier events dominating the partition function, we adopt two strategies: limiting the Lipschitz constant of the log-density ratio estimator, and gradient stabilization. The core idea of reducing variances is to smooth the critic. For instance, I_MINE and I_CPC apply averaging in different manners to the partition function to achieve a trade-off between bias and variance, and I_SMILE directly truncates the value of the density ratio using a hyper-parameter. Clearly, these approaches have certain flaws, in that averaging leads to high bias, and choosing proper thresholds for clipping requires prior knowledge. To avert these issues, we utilize spectral normalization, which constrains the spectral norm of the parameters in the last layer and consequently smooths the partition function. In (Miyato et al., 2018), the spectral norm of a weight matrix W is defined as
\sigma(W) := \max_{h: h \neq 0} \frac{\|Wh\|_2}{\|h\|_2} = \max_{\|h\|_2 \leq 1} \|Wh\|_2 \quad (12)
where h denotes any non-zero vector. The spectral norm σ(W) is equivalent to the largest singular value of W. Therefore, σ(W) is independent of h, so preconceptions regarding the data are no longer required. For the weight matrix W^l in the l-th layer of T^l, spectral normalization normalizes W^l with its spectral norm
W^l_{SN} := \frac{W^l}{\sigma(W^l)} \quad (13)
where W^l_{SN} is the normalized weight matrix such that \|T^l\|_{Lip} ≤ 1. Therefore, although we cannot avoid sampling unseen variables, we can still constrain the maximum value of the partition function by limiting the smoothness of the critic.
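In practice σ(W) is usually estimated without a full SVD via power iteration, as in (Miyato et al., 2018); the NumPy sketch below is a minimal version of that procedure, not the authors' implementation.

import numpy as np

def spectral_norm(W, n_iters=20):
    # Power iteration: u converges to the leading left-singular vector of W.
    u = np.random.randn(W.shape[0])
    v = None
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    return float(u @ W @ v)  # approximates the largest singular value

W = np.random.randn(256, 256)
W_sn = W / spectral_norm(W)  # normalized weight, as in Equation 13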
By leveraging spectral normalization, we propose the Mutual Information Continuity-constrained Estimator, which smooths the critic:
I(X;Y) = \sup_{T^{SN}_\theta: \mathcal{X} \to \mathbb{R}} \left\{ E_{P_{(X,Y)}}\left[ T^{SN}_\theta(x, y) \right] - E_{P_X \otimes P_Y}\left[ e^{T^{SN}_\theta(x,y) - 1} \right] \right\} \triangleq I_{MICE} \quad (14)
where T^{SN}_θ is a critic normalized by the spectral norm of the last layer. In contrast to previous approaches that focus on reducing the variance of the partition function, the proposed I_MICE shares the same parameters on both sides of Equation 14, and therefore it is guaranteed not to exceed the MI.
To quantify the maximal variance of the log-density ratio, we assume that T^{SN}_θ: \mathbb{R}^d \to \mathbb{R} is a multi-layer perceptron (MLP) with Lipschitz continuous activation functions.
Lemma 6 Let X be a random variable, and let g: \mathbb{R}^d \to \mathbb{R} be an MLP with any Lipschitz continuous activation function. Let L_i be the Lipschitz constant of the i-th layer; then
\mathrm{Var}[g(X)] \leq E\left[ \|X - E(X)\|^2 \right] \prod_{i=1}^{I} L_i^2 \quad (15)
Here, we defer the proof to Section A.1. Lemma 6 shows that the variance of the critic is bounded above by the product of the squares of its per-layer Lipschitz constants. An inequality resembling Equation 9 that upper-bounds the partition function is shown below
\mathrm{LSE}(T_\theta(x_1, y_1), \dots, T_\theta(x_n, y_{n-1})) \leq \max\{T_\theta(x_1, y_1), \dots, T_\theta(x_n, y_{n-1})\} + \log n(n-1)
\frac{1}{n(n-1)} \sum_{i=1}^{n} \sum_{j=1}^{n-1} e^{T_\theta(x_i, y_j)} \leq \frac{1}{n(n-1)} \left( e^{\max\{T_\theta(x_1, y_1), \dots, T_\theta(x_n, y_{n-1})\}} + 1 \right) \quad (16)
Therefore, by Equation 15 and Equation 16, the variance of the partition function is reduced by limiting the Lipschitz constant L of the critic and controlling the variance of X, and the estimate is of lower variance with a smaller L determined by the network during optimization. Investigating Equation 14, because the partition function is exponential, its gradient with respect to T^{SN}_θ is still an exponential function, which causes the training to become unstable. Therefore, to further mitigate the high-variance issue, we prevent the gradients generated by the partition function from back-propagating and consequently stabilize the gradients. The training procedure using gradient stabilization is presented in Algorithm 1.
Algorithm 1: Mutual Information Continuity-constrained Estimator (MICE)
θ ← initialize network parameters from the uniform distribution U(−\sqrt{1/d}, \sqrt{1/d})
while not converged do
    Draw n pairs of samples (x_1, y_1), ..., (x_n, y_n) from the joint distribution P_{(X,Y)}
    Forward pass of MICE: T^{SN}_θ(x, y) ← MLP_θ(x, y)
    I_MICE(θ) ← (1/n) Σ_{i=1}^{n} T^{SN}_θ(x_i, y_i) − \log [ (1/(n(n−1))) Σ_{i≠j} e^{T^{SN}_θ(x_i, y_j)} ]
    Compute the gradients of the left-hand side of I_MICE with respect to θ: G(θ) ← ∇_θ I^{left}_MICE(θ)
    Update the network parameters: θ ← θ + G(θ)
end
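Below is a minimal PyTorch sketch of one MICE update, assuming torch.nn.utils.spectral_norm on the last layer, reading the gradient stabilization in Algorithm 1 as detaching the partition term so that only the left-hand term back-propagates, and following the log form of the partition term in Algorithm 1 rather than Equation 14. Adam stands in for the plain ascent step; the architecture and names are illustrative, not the authors' released code.

import math
import torch
import torch.nn as nn

d = 20
critic = nn.Sequential(
    nn.Linear(2 * d, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.utils.spectral_norm(nn.Linear(256, 1)),  # constrain the last layer (Equation 13)
)
opt = torch.optim.Adam(critic.parameters(), lr=1e-4)

def mice_step(x, y):
    # x, y: (n, d) tensors of paired samples from the joint distribution.
    n = x.shape[0]
    t_joint = critic(torch.cat([x, y], dim=1)).squeeze(-1)
    # Off-diagonal pairs approximate samples from the product of marginals.
    xr = x.unsqueeze(1).expand(n, n, d).reshape(n * n, d)
    yr = y.unsqueeze(0).expand(n, n, d).reshape(n * n, d)
    t_all = critic(torch.cat([xr, yr], dim=1)).squeeze(-1).view(n, n)
    t_marg = t_all[~torch.eye(n, dtype=torch.bool)]  # n(n-1) marginal scores
    log_partition = torch.logsumexp(t_marg, dim=0) - math.log(n * (n - 1))
    # Gradient stabilization: the partition term is detached, so only the
    # left-hand term of I_MICE generates gradients, as in Algorithm 1.
    loss = -(t_joint.mean() - log_partition.detach())
    opt.zero_grad()
    loss.backward()
    opt.step()
    return (t_joint.mean() - log_partition).item()  # current MI estimate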
4.2 CONSISTENCY
According to (Belghazi et al., 2018), an estimator I_n(X;Y) constructed using a statistics network over n samples is strongly consistent if for all ε > 0 there exists a positive integer N such that
∀n ≥ N, |I(X;Y) − I_n(X;Y)| ≤ ε, a.e. (17) Then, the authors separate the consistency question into approximation and estimation problems. In summary, to prove that MICE is strongly consistent, we first prove that there exists a neural network T_θ parameterized by θ in some compact domain Θ such that for all ε > 0, |I(X;Y) − I_Θ(X;Y)| ≤ ε, a.e. This ensures the existence of neural networks that can approximate the MI with arbitrary accuracy. Second, we prove that given a family of neural networks T_θ in some bounded domain, for all ε > 0, there exists an N ∈ ℕ such that for all n ≥ N, |I_n(X;Y) − I_Θ(X;Y)| ≤ ε, a.e., ensuring that given a sufficient number of samples, one can estimate the MI with some statistics network over the samples. Combining the above two results with the triangular inequality, we conclude that MICE is strongly consistent. We provide the details of the proofs in Section A.2.
5 EXPERIMENTS
5.1 STANDARD BENCHMARK
Dataset. The standard benchmark (Belghazi et al., 2018; Poole et al., 2019; Song & Ermon, 2020) contains two tasks, the Gaussian task and the Cubic task. For both tasks, we sample n random variables X, Y ∈ \mathbb{R}^d for a batch from a standard multivariate normal distribution with correlation ρ between X and Y. For the Cubic task, to examine how much the MI estimators degrade when a nonlinear transformation is involved, we estimate I(X;Y^3) = I(X;Y), which does not change the MI.
Critics. Following previous studies (Belghazi et al., 2018; Poole et al., 2019; Song & Ermon, 2020), we consider two types of critics: the joint critic (Belghazi et al., 2018) and the separable critic (Oord et al., 2018). The joint critic first lists all combinations of the random variables in a batch and computes the log-density ratio with an MLP \mathbb{R}^{2d} \to \mathbb{R}. The separable critic applies nonlinear mappings to the inputs with two MLPs, f, g: \mathbb{R}^d \to \mathbb{R}^{d'}, and subsequently estimates the log-density ratio by 〈f, g〉. The joint critic compares all combinations and thus has a computational complexity of O(n^2); for the separable critic, the computation of f and g can be parallelized, giving a complexity of O(n).
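Minimal PyTorch sketches of the two critic types (the hidden sizes loosely follow Section A.3; the class names are our own):

import torch
import torch.nn as nn

class SeparableCritic(nn.Module):
    # T(x, y) = <f(x), g(y)> with two embedding MLPs (Oord et al., 2018).
    def __init__(self, d=20, d_embed=32, hidden=256):
        super().__init__()
        def mlp():
            return nn.Sequential(nn.Linear(d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, d_embed))
        self.f, self.g = mlp(), mlp()

    def forward(self, x, y):
        # (n, n) score matrix for every pair in the batch; O(n) network passes.
        return self.f(x) @ self.g(y).T

class JointCritic(nn.Module):
    # T(x, y) computed from concatenated inputs (Belghazi et al., 2018).
    def __init__(self, d=20, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * d, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x, y):
        # Enumerates all n^2 pairs explicitly, hence the O(n^2) complexity.
        n, d = x.shape
        xr = x.unsqueeze(1).expand(n, n, d).reshape(n * n, d)
        yr = y.unsqueeze(0).expand(n, n, d).reshape(n * n, d)
        return self.net(torch.cat([xr, yr], dim=1)).view(n, n)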
In Figure 1, we show the performance of each estimator under different MI values. The top row shows the Gaussian task, and the bottom row shows the Cubic task. As described in Section 2, I_CPC is highly biased and bounded above by \log n, and the variance of I_NWJ increases along with the ground truth MI. Here, I_SMILE (τ = 1.0) and I_MICE have overall lower biases and variances, compared to I_CPC and I_NWJ, using both critics. Because I_SMILE is neither an upper bound nor a lower bound on MI, its MI estimates in the Gaussian task are sometimes slightly overestimated, whereas the moving mean of I_MICE almost never exceeds the ground truth MI. In the Cubic task, the joint critic degrades more severely than the separable critic for most estimators, except I_NWJ.
We show the bias-variance trade-offs of the estimators using the separable critic in Figure 2, where the top row illustrates the results of the Gaussian task, and the results of the Cubic task are shown in the bottom row. It is observed that I_CPC is severely biased, but its variance is much lower than all the other approaches. Although I_NWJ is theoretically unbiased, it has a large bias owing to the inevitable outliers, and its variance grows exponentially with MI, as (Song & Ermon, 2020) pointed out. I_MICE leverages the unbiasedness of I_NWJ and further reduces the variance by constraining the Lipschitz constant of the critic and by gradient stabilization. Comparing the results of I_SMILE and I_MICE using the joint critic, I_MICE converges faster than I_SMILE, possibly benefiting from the stabilized gradients. However, because we limit the Lipschitz constants in some layers of the critic, this could lead to lower flexibility, and thus I_MICE is slightly more biased than I_SMILE in the Cubic task. In brief, I_MICE simultaneously guarantees not to exceed the MI and remarkably relaxes the high-variance issue of I_NWJ.
5.2 EXTENSION OF STANDARD BENCHMARK
Sampling scheme. Next, we evaluate the MI estimators using an extension experiment based on the standard benchmark. As described in Section 5.1, random variables are sampled independently; that is, no correlations between samples are considered. However, we believe that, in practical scenarios, it is extremely difficult to create a batch in which all samples are independent. Therefore, based on the standard benchmark, we established an extension experiment in which random variables are sampled using the scheme below:
x_i = \hat{\rho}\, x_{i-1} + \sqrt{1 - \hat{\rho}^2}\, \epsilon, \quad \forall i = 2, \dots, n \quad (18)
y_i = \rho\, x_i + \sqrt{1 - \rho^2}\, \epsilon, \quad \forall i = 1, \dots, n \quad (19)
where x_1 and \epsilon are d-dimensional random variables following a standard normal distribution \mathcal{N}(0, I_d). Sampling variables using Equation 18 and Equation 19 is equivalent to sampling X = \{x_1, \dots, x_n\} and Y = \{y_1, \dots, y_n\} from a multivariate normal distribution
X, Y \sim \mathcal{N}\left(0, \begin{bmatrix} \Sigma_x & \rho\Sigma_x \\ \rho\Sigma_x & \Sigma_x \end{bmatrix}\right), \quad \Sigma_x = \begin{bmatrix} I_d & \hat{\rho} I_d & \hat{\rho}^2 I_d & \dots & \hat{\rho}^{n-1} I_d \\ \hat{\rho} I_d & I_d & \hat{\rho} I_d & \dots & \hat{\rho}^{n-2} I_d \\ \hat{\rho}^2 I_d & \hat{\rho} I_d & I_d & \dots & \hat{\rho}^{n-3} I_d \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \hat{\rho}^{n-1} I_d & \hat{\rho}^{n-2} I_d & \hat{\rho}^{n-3} I_d & \dots & I_d \end{bmatrix}
where \hat{\rho} is the correlation between each pair of two consecutive samples, i.e., x_i, x_{i+1} and y_i, y_{i+1}. In the extension benchmark, we follow the setting in Section 5.1 with an additional setting \hat{\rho} = 0.1, and the ground truth MI is increased by 2 after 4000 iterations during training. In general, a correlation coefficient of less than 0.3 is considered weak. As shown in Figure 3, data with a correlation of 0.1, which is even weaker than 0.3, degrades the other estimators, whereas I_MICE still produces relatively accurate estimates. To further explore this effect, additional experiments using different settings of \hat{\rho} are presented in Section A.3.
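A minimal NumPy sketch of this sampling scheme (Equations 18-19); the function and argument names are our own:

import numpy as np

def sample_correlated_batch(n, d, rho, rho_hat, rng=None):
    rng = rng or np.random.default_rng()
    x = np.empty((n, d))
    x[0] = rng.standard_normal(d)
    for i in range(1, n):
        eps = rng.standard_normal(d)
        x[i] = rho_hat * x[i - 1] + np.sqrt(1.0 - rho_hat ** 2) * eps   # Eq. 18
    y = rho * x + np.sqrt(1.0 - rho ** 2) * rng.standard_normal((n, d))  # Eq. 19
    return x, y

x, y = sample_correlated_batch(n=64, d=20, rho=0.5, rho_hat=0.1)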
In Figure 3, we demonstrate that the bias and MSE of the I_MICE estimates are much lower than those of the other estimators using the separable critic, for both the Gaussian task and the Cubic task.
There are two possible reasons why I_MICE outperforms the other approaches. First, as stated in Section 3, the partition function can be dominated by the nonzero log-density ratios when correlations between samples are involved. The other reason is that the gradients are stabilized by applying spectral normalization to the critic and by blocking the gradients generated by the partition function.
5.3 REGULARIZING GAN WITH MICE
GANs (Goodfellow et al., 2014) have recently shown powerful capabilities in real-world data generation. However, the well-known mode collapse problem plagues GANs, resulting in limited diversity. This is because the discriminator does not require the generator to capture all modes to decrease the loss function. (Belghazi et al., 2018) proposed to alleviate mode collapse by involving code variables C and jointly maximizing the MI between the generated data and C. Formally, a GAN regularized by MICE alternately optimizes the following two objectives:
L_D := E_{P_X}[\log D(X)] + E_{P_Z}[\log(1 - D(G(Z)))] \quad (20)
L_G := E_{P_Z}[\log(1 - D(G(Z)))] - \beta\, I_{MICE}(G(Z, C); C) \quad (21)
where D and G are the discriminator and the generator, and Z follows a standard uniform distribution. Comparing the results of the vanilla GAN and GAN + MICE in Figure 4, the vanilla GAN fails to model the structure, whereas GAN + MICE captures all 25 modes, showing the efficacy of mode collapse mitigation.
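A sketch of the regularized generator objective in Equation 21, assuming a discriminator D with outputs in (0, 1), a generator G taking the concatenated noise and code, and an I_MICE evaluator like the sketch after Algorithm 1; all names are illustrative, not the authors' released code.

import torch

def generator_loss(D, G, mice_estimate, z, c, beta=1.0):
    # Equation 21: adversarial term minus the MICE regularizer on (G(Z,C), C).
    fake = G(torch.cat([z, c], dim=1))
    adv = torch.log(1.0 - D(fake)).mean()
    return adv - beta * mice_estimate(fake, c)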
6 CONCLUSION
In this study, we comprehensively discuss the attributes and limitations of existing approaches to variational MI estimation. We show that energy-based estimators such as I_NWJ and I_DV are of high variance because they are susceptible to outlier events. Although neither-upper-bound-nor-lower-bound estimators achieve much more accurate approximations to MI in the standard benchmark, they run the risk of overestimating the MI. To address the above-mentioned issues, we propose an unbiased and consistent estimator of MI, I_MICE, which has been proven free from overestimation of the MI. We also argue that the standard benchmark is insufficient for evaluation, since samples can hardly be entirely uncorrelated in general cases. Therefore, we employ an additional benchmark to evaluate the performance of the estimators when the samples are correlated. In the standard benchmark, the proposed I_MICE performs slightly better than I_SMILE without requiring prior knowledge for selecting a clipping threshold. We empirically show that I_MICE is more accurate than the other estimators in the proposed additional benchmark. Finally, we show that regularizing GANs with MICE improves the ability of the GAN to capture multiple modes and consequently mitigates mode collapse.
A APPENDIX
A.1 PROOF OF LEMMA 6
Lemma 6 Let X be a random variable, and let g: \mathbb{R}^d \to \mathbb{R} be an MLP with any Lipschitz continuous activation function. Let L_i be the Lipschitz constant of the i-th layer; then
\mathrm{Var}[g(X)] \leq E\left[ \|X - E(X)\|^2 \right] \prod_{i=1}^{I} L_i^2 \quad (22)
Proof. First, we consider the i-th layer f_i with a Lipschitz continuous activation function, where f_i has Lipschitz constant L_i; then
\mathrm{Var}[f_i(X)] := E\left[ (f_i(X) - E[f_i(X)])^2 \right] \quad (23)
\leq E\left[ (f_i(X) - f_i(E[X]))^2 \right] \quad (24)
\leq L_i^2\, E\left[ \|X - E[X]\|^2 \right] \quad (25)
The first inequality stems from the fact that the mean of a random variable is the constant with the smallest MSE. By the definition of Lipschitz continuity, the second inequality holds because L_i is the Lipschitz constant of f_i. Second, let g be the composite function g = f_1 ∘ f_2 ∘ ⋯ ∘ f_I, where I is the number of layers in g; then
\mathrm{Var}[g(X)] \leq E\left[ \|X - E(X)\|^2 \right] \prod_{i=1}^{I} L_i^2 \quad (26)
which completes the proof.
A.2 PROOF OF CONSISTENCY
The proof of consistency generally follows the proofs in (Belghazi et al., 2018), with some modifications to fit MICE. To prove that MICE is strongly consistent, we first prove that for all ε > 0, there exists a class of neural networks T_θ parameterized by θ in some compact domain Θ such that
|I(X;Y) − I_Θ(X;Y)| ≤ ε \quad (27)
Next, we prove that given ε > 0, there exists N ∈ ℕ such that
|I_n(X;Y) − I_Θ(X;Y)| ≤ ε \quad (28)
As a consequence, combining the above results with the triangular inequality, we have ∀n ≥ N, |I(X;Y) − I_n(X;Y)| ≤ ε, which proves the consistency of MICE.
Proof. Let the optimal critic T^* = \log \frac{dP}{dQ}, where P and Q denote the joint distribution P_{(X,Y)} and the product of marginals P_X P_Y of the continuous random variables X and Y, respectively. By the definition of I_NWJ, we have
I(X;Y) − I_Θ(X;Y) = E_P[T^* − T_θ] + E_Q[e^{T^* − 1} − e^{T_θ − 1}] \quad (29)
Next, according to the universal approximation theorem (Hornik et al., 1989), one can choose a T_θ such that
E_P|T^* − T_θ| ≤ ε/2 \quad (30)
E_Q|T^* − T_θ| ≤ (ε/2)\, e^{−T_{max}+1} \quad (31)
where T^* is bounded above by T_{max}. Because exp(·) is Lipschitz continuous with constant e^{T_{max}−1} on (−∞, T_{max}−1], E_Q|e^{T^*−1} − e^{T_θ−1}| ≤ e^{T_{max}−1} E_Q|T^* − T_θ|, and consequently we have
E_Q|e^{T^*−1} − e^{T_θ−1}| ≤ ε/2 \quad (32)
Combining Equation 29, Equation 30, and Equation 32 with the triangular inequality, we have
|I(X;Y) − I_Θ(X;Y)| ≤ E_P|T^* − T_θ| + E_Q|e^{T^*−1} − e^{T_θ−1}| ≤ ε \quad (33)
So far we have proved that Equation 27 holds for T^* ≤ T_{max}. Next, we consider the subset {T^* > T_{max}} for a suitably chosen large value of T_{max}. Here, letting A be a subset of the input domain, we use the indicator function 1_A to partition the input domain. By the Lebesgue dominated convergence theorem, since T^* and e^{T^*} are integrable w.r.t. P and Q, we can choose T_{max} so that
E_P[1_{T^* > T_{max}}(T^*)] ≤ ε/4 \quad (34)
E_Q[1_{T^* > T_{max}}(e^{T^*−1})] ≤ ε/4 \quad (35)
Again, we can choose a function T_θ ≤ T_{max} such that
E_P|T^* − T_θ| ≤ ε/2 \quad (36)
E_Q[1_{T^* ≤ T_{max}}(|T^* − T_θ|)] ≤ (ε/2)\, e^{−T_{max}+1} \quad (37)
Combining Equation 35 and Equation 37 together,
E_Q[e^{T^*−1} − e^{T_θ−1}] = E_Q[1_{T^* ≤ T_{max}}(e^{T^*−1} − e^{T_θ−1})] + E_Q[1_{T^* > T_{max}}(e^{T^*−1} − e^{T_θ−1})]
≤ e^{T_{max}−1} E_Q[1_{T^* ≤ T_{max}}(T^* − T_θ)] + E_Q[1_{T^* > T_{max}}(e^{T^*−1})]
≤ ε/2 \quad (38)
Similar to the derivation of Equation 33, putting Equation 36 and Equation 38 together, we obtain
∀ε > 0, |I(X;Y) − I_Θ(X;Y)| ≤ ε \quad (39)
For the estimation problem, let ε > 0; given T_θ in some compact domain Θ ⊂ ℝ^d, there exists a positive integer N such that
∀n ≥ N, |I_n(X;Y) − I_Θ(X;Y)| ≤ ε \quad (40)
Here, we denote P_n and Q_n as the empirical versions of P and Q, respectively, and I_n is the MI estimate with n samples. By the triangular inequality we have
|I_n(X;Y) − I_Θ(X;Y)| ≤ \sup_{θ ∈ Θ} \left\{ |E_{P_n}[T_θ] − E_P[T_θ]| + |E_{Q_n}[e^{T_θ−1}] − E_Q[e^{T_θ−1}]| \right\} \quad (41)
Since Θ is compact (therefore bounded) and neural networks are continuous, T_θ and e^{T_θ} satisfy the uniform law of large numbers (Geer & van de Geer, 2000). Therefore, given ε > 0, we can choose a positive integer N such that ∀n ≥ N, with probability one,
\sup_{θ ∈ Θ} \{ |E_{P_n}[T_θ] − E_P[T_θ]| \} ≤ ε/2 \quad (42)
\sup_{θ ∈ Θ} \{ |E_{Q_n}[e^{T_θ−1}] − E_Q[e^{T_θ−1}]| \} ≤ ε/2 \quad (43)
According to the three inequalities above we derive Equation 40.
Finally, combining Equation 39 and Equation 40 with the triangular inequality, let ε > 0 and δ = 2ε; then there exists a positive integer N such that
∀n ≥ N, |I(X;Y) − I_n(X;Y)| ≤ |I(X;Y) − I_Θ(X;Y)| + |I_n(X;Y) − I_Θ(X;Y)| ≤ δ \quad (44)
which completes the proof.
A.3 ADDITIONAL EXPERIMENTS
Experimental Settings. The experiments in Section 5.1 and Section 5.2 are run on a GTX 1080 Ti GPU with 11 GB of VRAM. The MLPs utilized in the joint/separable critics have an input dimension of 20, two hidden layers with 256 hidden dimensions, and an output dimension of 32. In addition, ReLU (Agarap, 2018) is used as the activation function for both critics.
Performance of MI Estimators under Specific Correlations. We compare the performance of the MI estimators under specific ρ̂ settings (ρ̂ = 0.1, 0.2, 0.3, 0.4, and 0.5) using the separable critic. As shown in Figure 5, the MI estimators are more biased with larger correlations between samples. Among the MI estimators, I_NWJ is the most biased, since neither the partition function nor the critic is constrained, so the outliers lead to large variances and biases; this is the same reason that causes I_SMILE (τ=∞) to be inaccurate. As mentioned in Section 2, the estimates of I_SMILE (τ=∞) are more accurate than those of I_NWJ because I_SMILE (τ=∞) is equivalent to I_DV, which is sharper than I_NWJ. Although I_CPC is bounded above by \log n, it is consistent under different settings of correlation. I_SMILE (τ=1.0) is of low variance and bias compared to itself when τ = ∞, but the improvement is mainly in reducing the variance. Compared to the other MI estimators, our proposed I_MICE is the least biased and is robust when correlations between samples are involved.
Randomly Selected ρ̂. We provide an experiment in which correlations between samples are randomly initialized, which is a more complicated configuration than the extension benchmark in Section 5.2. Here, ρ̂ is randomly initialized from a uniform distribution ranging from 0.0 to 0.5. In Figure 6, each estimator using the separable critic shows roughly its average performance from Figure 5. The proposed I_MICE benefits the separable critic in that it is robust to random correlations. In Figure 7, we also observe that the variance of I_NWJ is very sensitive to the data, because the right-hand side of Equation 5 is an exponential function without the logarithm present in I_DV, and consequently it yields high MSE.
By constraining the continuity and stabilizing the gradients, I_MICE is robust when correlations between samples are involved, as compared with the other estimators, especially for the separable critic. This could benefit large-scale training that requires a lightweight model structure for the critic. | 1. What is the focus of the paper regarding mutual information estimation?
2. What are the strengths of the proposed approach, particularly in addressing outlier issues?
3. Do you have any concerns or questions about the methodology, such as the lack of a formal definition of Tθ^SN or the omission of gradients in gradient stabilization?
4. Are there any questions regarding the convergence of the algorithm or the limit point of theta?
5. How does the reviewer assess the clarity and quality of the paper's content, especially in comparison to prior works? | Summary Of The Paper
Review | Summary Of The Paper
This paper considers the mutual information estimation problem. The starting point of this paper is the variational formulation, which casts the mutual information as the optimal value of an infinite-dimensional supremum problem over all bounded critic functions. However, because the objective function contains an expectation of an exponential function, this optimal value is sensitive to outliers.
This paper proposes an estimator that is less sensitive to outlier. This estimator is constructed based on the following two ingredients:
restrict the set of possible critic functions to a smaller family using spectral norm normalization,
use gradient stabilization by ``avoiding gradients generated by the partition function from back-propagating".
The numerical experiments benchmark against existing methods (Belghazi et al 2018, Poole et al 2019, and Song & Ermon 2020) in various applications (correlated data, GANs, etc.) and deliver competitive results.
Review
The content of this paper is presented in Section 4. The paper provides a thorough discussion of existing methods along with important properties in Sections 2 and 3. The paper, however, falls short on clarity for the proposed method:
A formal definition of T_\theta^{SN} is lacking in Section 4.
A clear motivation for spectral normalization is lacking. There are also many other ways that we can regularize the weight matrix W. Why is the spectral norm appropriate for this application?
More details of the gradient stabilization should be included. What is the theoretical justification for omitting the gradients of the partition function? How can we guarantee that there is no significant loss of performance when we omit this gradient?
Does Algorithm 1 even converge? If we use gradient stabilization, what is the limit point of theta?
The current theoretical results (Lemma 6 and the consistency results) follow from minor adaptations of well-known results (such as Belghazi et al. (2018)).
Minor comments:
Algorithm 1 is poorly written: Why do we need to compute I_{MICE}(theta)? What is the left-hand side of I_{MICE}? |
ICLR | Title
Mutual Information Continuity-constrained Estimator
Abstract
The estimation of mutual information (MI) is vital to a variety of applications in machine learning. Recent developments in neural approaches have shown encouraging potential in estimating the MI between high-dimensional variables based on their latent representations. However, these estimators are prone to high variances owing to the inevitable outlier events. Recent approaches mitigate the outlier issue by smoothing the partition function using clipping or averaging strategies; however, these estimators either break the lower bound condition or sacrifice the level of accuracy. Accordingly, we propose Mutual Information Continuityconstrained Estimator (MICE). MICE alternatively smooths the partition function by constraining the Lipschitz constant of the log-density ratio estimator, thus alleviating the induced variances without clipping or averaging. Our proposed estimator outperforms most of the existing estimators in terms of bias and variance in the standard benchmark. In addition, we propose an experiment extension based on the standard benchmark, where variables are drawn from a multivariate normal distribution with correlations between each sample in a batch. The experimental results imply that when the i.i.d. assumption is unfulfilled, our proposed estimator can be more accurate than the existing approaches in which the MI tends to be underestimated. Finally, we demonstrate that MICE mitigates mode collapse in the kernel density estimation task.
1 INTRODUCTION
Mutual information (MI) estimation is essential in various machine learning applications, including learning representations (Oord et al., 2018; Chen et al., 2016; Bachman et al., 2019; Hjelm et al., 2018; Sordoni et al., 2021), feature selection (Battiti, 1994; Estévez et al., 2009), feature disentanglement (Higgins et al., 2018; Esmaeili et al., 2019; Colombo et al., 2021), and reinforcement learning (Oord et al., 2018; Bachman et al., 2019; Li et al., 2016). Some conventional non-parametric approaches have been proposed to estimate MI (Estévez et al., 2009; Fraser & Swinney, 1986; Moon et al., 1995; Kwak & Choi, 2002). Despite promising results, (Belghazi et al., 2018; Poole et al., 2019) indicated that these estimators have limited capability to scale up well with the sample size or dimension (Gao et al., 2015) therefore hard to be utilized in general purpose applications.
Recent studies focus on scalable MI estimation through variational bounds maximization (Oord et al., 2018; Belghazi et al., 2018; Poole et al., 2019) or minimization (Cheng et al., 2020) using neural networks or convex maximum-entropy method(Samo, 2021). These neural estimators have been adopted in some remarkable self-supervised applications, such as computer vision (Chen et al., 2020; He et al., 2020; Grill et al., 2020; Chen & He, 2020; Chen et al., 2021) and speech recognition (Schneider et al., 2019; Baevski et al., 2019), with the aim of maximizing the shared information between different views with respect to space or time. In MI estimation, the neural networks (also known as the critics) has been used to approximate the log-density ratio. These MI estimators generally characterize the Kullback-Leibler (KL) divergence (Kullback & Leibler, 1951) using a dual representation and subsequently formulate MI lower bounds.
Although multiple applications have attained promising results, two significant issues have not been fully addressed. As the first issue, the existing MI estimators can be debilitated by significant bias and variance owing to inevitable outlier events. It was pointed out by (Poole et al., 2019; Song & Ermon, 2020) that the exponential partition function causes a high-variance issue. It implies
that estimators leveraging f -divergence representations could suffer from the high-variance issue. Numerous studies have been conducted to address this problem. Previous approaches such as Mutual Information Neural Estimation (MINE) (Belghazi et al., 2018) and Contrastive Predictive Coding (CPC) (Oord et al., 2018) reduce the variances by adopting different types of averaging. Based on MINE, the Smoothed Mutual Information Lower-bound Estimator (SMILE) (Song & Ermon, 2020) limits the range of the critic with a hyper-parameter, enabling estimates with low bias and variance. For the second issue, as summarized in (Oord et al., 2018; Belghazi et al., 2018; Poole et al., 2019; Nguyen et al., 2010), most of the existing MI estimators are tested on a standard benchmark where random variables are drawn independently. However, the benchmark is insufficient for an analysis of videos or audio signals in which data frames could be correlated.
In this paper, we address the high variance issue by a novel Mutual Information Continuityconstrained Estimator (MICE) that constrains the Lipschitz constant of the critic by its spectral norm (Miyato et al., 2018), and we block the unstable gradients generated from the partition function. MICE is less underestimated in the extended benchmark because the partition function is smoothed by the scale of the spectral norm instead of hard clipping, which could overly restrict the range of the density ratio. The experimental results show that MICE has a competitive bias-variance trade-off compared to SMILE in the standard benchmark, without selecting a clipping threshold. Based on the standard benchmark, we propose an extension in which random variables are correlated within a batch. Our proposed method is robust when samples are not independent compared to existing variational estimators that underestimate MI drastically when slight correlations are involved. Finally, in the kernel density estimation (KDE) experiment, we demonstrate that using MICE as MI regularization alleviates mode collapse (Che et al., 2016; Dumoulin et al., 2016; Srivastava et al., 2017) in the training of generative adversarial networks (GANs) (Goodfellow et al., 2014). Our contributions are as follows:
• We address the high-variance issue of an existing unbiased estimator by constraining the Lipschitz constant of log-density ratio estimator and gradient stabilization.
• We prove that MICE is a strongly consistent estimator of MI.
• In the proposed experiment extension, the results show that MICE outperforms existing estimators under the condition in which the i.i.d. assumption is not fulfilled.
• A GAN regularized by MICE can capture more modes in the KDE experiment and ease the mode collapse problem.
2 RELATED WORK
For a pair of random variables (X,Y ) over the probability space X × Y , the mutual information I(X;Y ) between X and Y can be defined as the KL divergence of the joint distribution P(X,Y ) and the product of the marginals PX and PY :
I(X;Y ) = DKL(P(X,Y )‖PX ⊗ PY ) (1)
where DKL is the KL divergence. Next, we start with a common characterization of KL divergence, the Donsker–Varadhan (DV) representation (Donsker & Varadhan, 1983), which is adopted by MINE (Belghazi et al., 2018) and SMILE (Song & Ermon, 2020).
Lemma 1 (Donsker–Varadhan (DV)) Given two probability distributions P and Q over X :
DKL(P‖Q) = sup T :X→R {EP [T ]− logEQ[eT ] , IDV} (2)
for some bounded function T : X → R such that the expectations are finite. In particular, if P and Q are specified as P(X,Y ) and PX ⊗PY , MI can be estimated by maximizing the DV representation. It should be noted that the equation holds when T = log dP/dQ + C for some constant C ∈ R. In (Broniatowski & Keziou, 2009; Nowozin et al., 2016), a general variational estimation of f - divergences is introduced. For any convex, lower-semicontinuous function f , there exists a convex conjugate f∗ such that f(u) = supt∈dom(f∗){tu − f∗(t)}, where u belongs to the domain of f .
Therefore, f -divergences can be estimated by taking supremum over an arbitrary class of functions T : X → R:
Df (P‖Q) = ∫ X q(x) sup t∈dom(f∗) { t p(x) q(x) − f∗(t) } dx (3)
≥ sup T :X→R {EP [T ]− EQ[f∗(T )]} (4)
The derivation form Equation 3 to Equation 4 is based on Jensen’s inequality because the supremum is swapped out of the integration. Here, the KL divergence can be obtained by specifying f(u) = u log u, thus f∗(T ) = eT−1, yielding the Nguyen-Wainright-Jordan (NWJ) lower bound (Nguyen et al., 2010). Similarly, MI can be estimated by setting P = P(X,Y ) and Q = PX ⊗ PY . Lemma 2 (Nguyen, Wainright, and Jordan (NWJ) (Nguyen et al., 2010)) Given two probability distributions P and Q over X ,
DKL(P‖Q) ≥ sup Tθ:X→R
{ EP [Tθ]− EQ[eTθ−1] , INWJ } (5)
where the equation holds when Tθ = 1 + log dPdQ .
Note that INWJ is unbiased since no nonlinear function is taken on the right-hand side out of the expectation. Although IDV and INWJ are tight with a sufficient large hypothesis set of Tθ, the partition function induces large variances. The following approaches aim to solve the high-variance issue by averaging and clipping on the partition function. For instance, MINE (Belghazi et al., 2018) proposed a neural information measure based on taking supremum of IDV over a neural network Tθ : X × Y → R parameterized by θ. Lemma 3 (Mutual Information Neural Estimation (MINE) (Belghazi et al., 2018)) Let P and Q be two probability distributions over X
I(X;Y ) ≥ sup Tθ:X→R
{ EP(X,Y ) [Tθ]− log EMA ( EPX⊗PY [eTθ ] ) , IMINE } (6)
In this manner, MINE collects cross-batch statistics to evaluate bias-corrected estimate, reducing the bias and variance simultaneously. In contrast to MINE, which uses the exponential moving average (EMA) to reduce variances induced from the partition function, (Song & Ermon, 2020) proposed to reduce variances by putting limits on the range of the log-density ratio.
Lemma 4 (Smoothed Mutual Information Lower-bound Estimator (SMILE) (Song & Ermon, 2020)) Let P and Q be two probability distributions over X
I(X;Y ) ≥ sup Tθ:X→R
{ EP(X,Y ) [Tθ]− logEPX⊗PY [e max(min(Tθ,τ),−τ)] , ISMILE }
(7)
Another multi-sample estimator, Contrastive Predictive Coding (CPC) (Oord et al., 2018), uses the cross-entropy between the positive and negative samples as an objective
EΠjp(xj ,yj)
[ 1
n n∑ i=1 log f(xi, yi) 1 n ∑n j=1 f(xi, yj)
] , ICPC (8)
where f(x, y) = ex >Wy is a log-bilinear function with a trainable parameterW , and the expectation is taken over the distribution with density Πjp(xj , yj). Noted that ICPC is tight when f(x, y) = log p(y|x) + c(y), where c(y) is an arbitrary function that depends on y. However, (Oord et al., 2018) indicated that this bound is loose when I(X;Y ) > log n, requiring an exponentially large batch size to achieve accurate estimates with high confidence (Song & Ermon, 2020).
3 LIMITATIONS ON DV REPRESENTATION
3.1 MAXIMUM OF LOG-DENSITY RATIO ESTIMATE DOMINATING THE PARTITION FUNCTION
According to (Poole et al., 2019; Song & Ermon, 2020), the partition function EQ [ eTθ(x,y) ] is the rationale behind high variances and biases. This expression is highly dependent on the maximum
of the log-density ratio in a batch. We demonstrate this by showing the relationship of LogSumExp (LSE, also known as a smooth approximation to the maximum function) operation and the maximum function as follows
LSE(Tθ(x1, y1), . . . , Tθ(xn, yn−1)) > max{Tθ(x1, y1), . . . , Tθ(xn, yn−1)}
1
n(n− 1) n∑ i=1 n−1∑ j=1 eTθ(xi,yj) > 1 n(n− 1) emax{Tθ(x1,y1),...,Tθ(xn,yn−1)} (9)
where Tθ(xi, yj) is the estimated log-density ratio log dP/dQ where x and y are drawn from Q. Note that because Q is the product of marginals, the total number of Tθ sampled from Q is n(n − 1). (McAllester & Stratos, 2020) indicated that the partition function is dominated by extremely rare events which are never observed through the sampling from PX ⊗ PY . They quantified the probability of outlier events using the outlier risk lemma.
Lemma 5 (Outlier risk lemma (McAllester & Stratos, 2020)) Given n samples (n ≥ 2) that follow the distribution PX and a property Φ[x] such that PX(Φ[x]) ≤ 1/n, the probability that no sample x satisfies Φ[x]is at least 1/4.
Here, PX(Φ[x]) is the probability of drawing x from PX such that statement Φ[x] holds. Lemma 5 can be easily proved based on the probability of sampling with replacement.
Letting P = P(X,Y ) andQ = PX⊗PY , for DV representation, the best estimate of MI is established when
EP [Tθ(x, y)] = I(X;Y ) (10) EQ[eTθ(x,y)] = 1 (11)
The outlier risk lemma indicates that there is at least a probability of 1/4 that one can draw an unseen variable such that EQ[eTθ(x,y)] > 1. By observing Equation 9, if a pair of unseen variables (x′, y′) were sampled, the partition function will be larger than eTθ(x
′,y′)/(n(n− 1)); therefore, the estimates of DV representation are of high bias and variance. Similarly, the best estimate of INWJ is established with the same Equation 10, but Equation 11 should be modified as EQ[eTθ(x,y)−1] = 1.
3.2 NEITHER UPPER BOUND NOR LOWER BOUND ESTIMATORS
Based on the aforementioned limitations of the DV representation, the IMINE and ISMILE focus on controlling the variance of the partition function. IMINE reduces the variance by applying EMA to the partition function over all previous samples. According to (McAllester & Stratos, 2020), the worst case of the DV representation can be bounded under log n. Because IMINE implicitly enlarges the batch size with the scale of iteration (i.e, the number of covered samples at the ith iteration is i × n, where n is the batch size), it can leverage the linearly increasing batch size to reduce the bias issue. Another method adopted by ISMILE is controlling the range of the partition function by clipping the log-density ratio with a threshold τ in Equation 7.
In (Song & Ermon, 2020), the clipped density ratio rτ = max(min(eTθ(x,y), eτ ), e−τ ) is estimated by n random variables over the distribution Q = PX ⊗ PY . The variance of the bounded partition function EQ[rτ ] satisfies Var[EQ[rτ ]] ≤ (eτ − e−τ )2/4n. According to (Song & Ermon, 2020), a trade-off of the bias and variance can be determined by a threshold τ . Decreasing τ reduces the variance, but increases bias with such choice.
Although these estimators mitigate the high-variance issue and attain more accurate estimates, they are no longer upper or lower bounds on MI. This is because the modified partition function is no longer a normalizing term. As MINE applies EMA to EQ [ eTθ(x,y) ] across batches, and there is at least 1/4 chance that the outlier event occurs, the partition function eventually saturates at eTmax/(4N2 − N)), where Tmax is the maximum among all Tθ, and N is the amount of training data. As the range of the partition function of ISMILE is limited within [e−τ , eτ ], the MI would be overestimated when the log-density ratio is larger than τ and would not be underestimated only if τ → 0 because of Equation 11. In a nutshell, although these neither upper bound nor lower bound estimators reached more accurate MI estimates than IDV, these estimators could overestimate MI to some unknown extent as they are
not guaranteed to be bounded below the MI. Moreover, IMINE requires a large batch size to avoid from yielding large errors; in addition, the development of a criterion of selecting a proper threshold for ISMILE is also challenging.
4 METHODOLOGY
4.1 MUTUAL INFORMATION CONTINUITY-CONSTRAINED ESTIMATOR
To alleviate the issue of outlier events dominating the partition function, we adopt two strategies, which are limiting the Lipschitz constant of the log-density ratio estimator and gradient stabilization. The core idea of reducing variances is to smooth the critic. For instance, IMINE and ICPC adopt averaging in different manners on the partition function to achieve a trade-off between the bias and variance, and ISMILE directly truncates the value of density ratio using a hyper-parameter. Clearly, these approaches have certain flaws in that averaging leads to high bias, and it requires prior knowledge to choose the proper thresholds for clipping. To avert these issues, we utilize the spectral normalization that constrains the spectral norm of the parameters in the last layer, and consequently smooth the partition function. In (Miyato et al., 2018), the spectral norm of a weight matrix W is defined as
σ(W ) := max h:h6=0 ‖Wh‖2 ‖h‖2 = max ‖h‖2≤1 ‖Wh‖2 (12)
where h denotes any non-zero vector. The spectral norm σ(W ) is equivalent to the largest singular value of W . Therefore, σ(W ) is independent from h, so the preconceptions regarding the data is no longer required. For the weight matrix W l in the lth layer of T l, spectral normalization normalizes Wl with its spectral norm
W lSN := W l
σ(W l) (13)
where W lSN is the normalized weight matrix such that ‖T l‖Lip ≤ 1. Therefore, although we cannot avoid sampling unseen variables, we can still constrain the maximum value of the partition function by limiting the smoothness of the critic.
By leveraging the spectral normalization, we propose the Mutual Information Continuityconstrained Estimator that smooths the critic
I(X;Y ) = sup TSNθ :X→R
{ EP(X,Y ) [ T SNθ (x, y) ] − EPX⊗PY [ eT SN θ (x,y)−1 ] , IMICE } (14)
where T SNθ is a critic normalized by the spectral norm of the last layer. In contrast to previous approaches that focus on reducing the variances of the partition function, the proposed IMICE shares the same parameters in both sides of Equation 14, and therefore it is guaranteed to not exceed the MI.
To quantify the maximal variance of the log-density ratio, we assume that T SNθ : Rd → R is a multi-layer perceptron (MLP) with Lipschitz continuous activation functions.
Lemma 6 Let X be a random variable, and g(X) : Rd → R is an MLP with any Lipschitz continuous activation function. Let Li be the Lipschitz constant of the ith layer, then
Var[g(X)] ≤ E [ ‖X − E(X)‖2 ] I∏ i=1 L2i (15)
Here, we defer the proof in Section A.1. Lemma 6 shows that the variance of the critic is bounded above by the product of the square of its Lipschitz constants in each layer. An inequality resembles to Equation 9 that upper bounds the partition function is shown below
LSE(Tθ(x1, y1), . . . , Tθ(xn, yn−1)) ≤ max{Tθ(x1, y1), . . . , Tθ(xn, yn−1)}+ log n(n− 1)
1
n(n− 1) n∑ i=1 n−1∑ j=1 eTθ(xi,yj) ≤ 1 n(n− 1) ( emax{Tθ(x1,y1),...,Tθ(xn,yn−1)} + 1 ) (16)
Therefore, by Equation 15 and Equation 16, the variance of the partition function is reduced by limiting the Lipschitz constant L of the critic and controlling the variance of X , and the estimate is
of lower variance with smaller L determined by the network during the optimization. Investigating Equation 14, because the partition function is exponential, its gradient with respect to T SNθ is still an exponential function, which causes the training to become unstable. Therefore, to further mitigate the high variance issue and stabilize the gradients, we avoid gradients generated by the partition function from back-propagating and consequently stabilize the gradients. The training procedure using gradient stabilization is presented in Algorithm 1.
Algorithm 1: Mutual Information Continuity-constrained Estimator (MICE) θ ← initialize network parameters from uniform distribution U ( − √ 1 d , √ 1 d ) ; while not converge do Draw n pair of samples (x1, y1), . . . , (xn, yn) from the joint distribution P(X,Y ) Forward pass of MICE: T SNθ (x, y)←MLPθ(x, y) IMICE(θ)← 1n ∑ i=1 T SN θ (xi, yi)− log 1n(n−1) ∑ i 6=j e TSNθ (xi,yj)
Compute the gradients on the left-hand side of IMICE with respect to θ: G(θ)← ∇θI leftMICE(θ) Update the network parameters: θ ← θ + G(θ)
end
4.2 CONSISTENCY
According to (Belghazi et al., 2018), an estimator In(X;Y ) constructed using a statistics network over n samples is strongly consistent if for all > 0, and there exists a positive integer N such that
∀n ≥ N, |I(X;Y )− In(X;Y )| ≤ , a.e. (17) Then, the authors separate the consistency question into approximation and estimation problems. In summary, to prove that MICE is strongly consistent, we first prove that there exists a neural network Tθ parameterized by θ in some compact domain Θ ∈ R, such that for all > 0, |I(X;Y )− IΘ(X;Y )| ≤ , a.e. This ensures the existence of neural networks that can approximate the MI with arbitrary accuracy. Second, we prove that given a family of neural networks Tθ in some bounded domain, for all > 0, there exists an N ∈ N such that for all n ≥ N , |In(X;Y ) − IΘ(X;Y )| ≤ , a.e., ensuring that given sufficient number of samples, one can estimate the MI with some statistics networks over samples. Combining the above two results with triangular inequality, we conclude that MICE is strongly consistent. We provide the details of the proofs in Section A.2.
5 EXPERIMENTS
5.1 STANDARD BENCHMARK
Dataset. The standard benchmark (Belghazi et al., 2018; Poole et al., 2019; Song & Ermon, 2020) contains two tasks, the Gaussian task and the Cubic task. For both tasks, we sample n random variables X,Y ∈ Rd for a batch from a standard multivariate normal distribution with correlation ρ between X and Y . For the Cubic task, to examine how much the MI estimators degrade when a nonlinear transformation involved, we estimate I(X;Y 3) = I(X;Y ), which does not change the MI.
Critics. Following previous studies (Belghazi et al., 2018; Poole et al., 2019; Song & Ermon, 2020), we consider two types of critics: the joint critic (Belghazi et al., 2018) and the separable critic (Oord et al., 2018). The joint critic first lists all combinations of all random variables in a batch and computes the log-density ratio with an MLP R2d → R. The separable critic applies nonlinear mapping to the inputs with two MLPs, f, g : Rd → Rd′ , and subsequently estimates log-density ratio by 〈f, g〉. The joint critic compares all combinations, having the computational complexity of O(n2), and since the computation of f and g can be paralleled, thus having a complexity of O(n).
In Figure 1, we show the performance of each estimator under different MI values. The top row shows the Gaussian task, and the bottom row shows the Cubic task. As described in Section 2, I_CPC is highly biased and bounded above by log n, and the variance of I_NWJ increases with the ground-truth MI. Here, I_SMILE (τ = 1.0) and I_MICE have lower overall biases and variances than I_CPC and I_NWJ with both critics. Because I_SMILE is neither an upper nor a lower bound on MI, its estimates in the Gaussian task are sometimes slightly too high, whereas the moving mean of I_MICE almost never exceeds the ground-truth MI. In the Cubic task, the joint critic degrades more severely than the separable critic for most estimators, except I_NWJ.
We show the bias-variance trade-offs of the estimators using the separable critic in Figure 2, where the top row illustrates the results of the Gaussian task and the bottom row those of the Cubic task. We observe that I_CPC is severely biased, but its variance is much lower than that of all other approaches. Although I_NWJ is theoretically unbiased, it exhibits large bias owing to inevitable outliers, and its variance grows exponentially with MI, as (Song & Ermon, 2020) pointed out. I_MICE leverages the unbiasedness of I_NWJ and further reduces the variance by constraining the Lipschitz constant of the critic and by gradient stabilization. Comparing the results of I_SMILE and I_MICE with the joint critic, I_MICE converges faster than I_SMILE, possibly benefiting from the stabilized gradients. However, because we limit the Lipschitz constants of some layers of the critic, the critic may be less flexible, and thus I_MICE is slightly more biased than I_SMILE in the Cubic task. In brief, I_MICE is simultaneously guaranteed not to exceed the MI and remarkably relaxes the high-variance issue of I_NWJ.
5.2 EXTENSION OF STANDARD BENCHMARK
Sampling scheme. Next, we evaluate the MI estimators using an extension experiment based on the standard benchmark. As described in Section 5.1, random variables there are sampled independently; that is, no correlations between samples are considered. However, we believe that, in practical scenarios, it is extremely difficult to create a batch in which all samples are independent. Therefore, based on the standard benchmark, we establish an extension experiment in which random variables are sampled using the scheme below:
x_i = ρ̂ x_{i−1} + √(1 − ρ̂²) ε,  ∀i = 2, . . . , n (18)
y_i = ρ x_i + √(1 − ρ²) ε,  ∀i = 1, . . . , n (19)
where x_1 and ε are d-dimensional random variables following a standard normal distribution N(0, I_d). Sampling variables using Equation 18 and Equation 19 is equivalent to sampling X = {x_1, . . . , x_n} and Y = {y_1, . . . , y_n} from a multivariate normal distribution
X, Y ∼ N( 0, [ Σ_x  ρΣ_x ; ρΣ_x  Σ_x ] ),
where Σ_x is the block-Toeplitz matrix whose (i, j) block is ρ̂^{|i−j|} I_d, i.e.,
Σ_x = [ I_d  ρ̂I_d  ρ̂²I_d  · · ·  ρ̂^{n−1}I_d ; ρ̂I_d  I_d  ρ̂I_d  · · ·  ρ̂^{n−2}I_d ; ⋮  ⋮  ⋱  ⋮ ; ρ̂^{n−1}I_d  ρ̂^{n−2}I_d  · · ·  I_d ],
and ρ̂ is the correlation between each pair of consecutive samples, i.e., (x_i, x_{i+1}) and (y_i, y_{i+1}). In the extension benchmark, we follow the setting of Section 5.1 with the additional setting ρ̂ = 0.1, and the ground-truth MI is increased by 2 after every 4000 training iterations. A correlation coefficient below 0.3 is generally considered weak. As shown in Figure 3, data with a correlation of 0.1, even weaker than 0.3, degrades the other estimators, whereas I_MICE still produces relatively accurate estimates. To further explore this effect, additional experiments with different settings of ρ̂ are presented in Section A.3. A sampling sketch follows.
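A minimal NumPy sketch of the sampling scheme in Equations 18-19; the function name and signature are ours, not the paper's.

```python
import numpy as np

def sample_correlated_batch(n, d, rho, rho_hat, rng):
    # Consecutive x_i are AR(1)-correlated with coefficient rho_hat,
    # and each y_i correlates with x_i via rho (Equations 18-19).
    x = np.empty((n, d))
    x[0] = rng.standard_normal(d)
    for i in range(1, n):
        x[i] = rho_hat * x[i - 1] + np.sqrt(1.0 - rho_hat ** 2) * rng.standard_normal(d)
    y = rho * x + np.sqrt(1.0 - rho ** 2) * rng.standard_normal((n, d))
    return x, y

x, y = sample_correlated_batch(64, 20, rho=0.5, rho_hat=0.1, rng=np.random.default_rng(0))
```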
In Figure 3, we demonstrate that the bias and MSE of the estimate of IMICE are much lower than those of the other estimators using the separable critic for both the Gaussian task and the Cubic task.
There are two possible reasons why I_MICE outperforms the other approaches. First, as stated in Section 3, the partition function can be dominated by nonzero log-density ratios when correlations between samples are involved. Second, the gradients are stabilized by applying spectral normalization to the critic and by blocking the gradients generated by the partition function.
5.3 REGULARIZING GAN WITH MICE
GANs (Goodfellow et al., 2014) have recently shown powerful capabilities in real-world data generation. However, the well-known mode-collapse problem plagues GANs, with limited sample diversity as a consequence; the discriminator does not require the generator to capture all modes in order to decrease the loss. (Belghazi et al., 2018) proposed to alleviate mode collapse by introducing code variables C and jointly maximizing the MI between the generated data and C. Formally, a GAN regularized by MICE alternately optimizes the following two objectives:
L_D := E_{P_X}[log D(X)] + E_{P_Z}[log(1 − D(G(Z)))] (20)
L_G := E_{P_Z}[log(1 − D(G(Z)))] − β I_MICE(G(Z, C); C) (21)
where D,G are the discriminator and the generator, and Z follows a standard uniform distribution. Comparing the results of vanilla GAN and GAN + MICE in Figure 4, a vanilla GAN fails to model the structure, whereas GAN + MICE captures all 25 modes, showing the efficacy of mode collapse mitigation.
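The following is a hypothetical sketch of the generator update implied by Equation 21; all argument names (generator, discriminator, mice_estimate, sample_codes, g_opt) are our assumptions and do not come from the paper's code.

```python
import torch

def generator_step(generator, discriminator, mice_estimate, sample_codes,
                   g_opt, batch_size, z_dim, beta):
    z = torch.rand(batch_size, z_dim)   # Z ~ standard uniform
    c = sample_codes(batch_size)        # code variables C
    fake = generator(z, c)
    # Equation 21: non-saturating-free original GAN loss minus the MI reward.
    g_loss = torch.log(1.0 - discriminator(fake)).mean() \
             - beta * mice_estimate(fake, c)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return g_loss.item()
```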
6 CONCLUSION
In this study, we comprehensively discuss the attributes and limitations of existing approaches to variational MI estimation. We show that energy-based estimators such as I_NWJ and I_DV have high variances because they are susceptible to outlier events. Although estimators that are neither upper nor lower bounds achieve much more accurate approximations to the MI in the standard benchmark, they run the risk of overestimating it. To address the above-mentioned issues, we propose an unbiased and consistent estimator of MI, I_MICE, which is guaranteed not to overestimate the MI. We also argue that the standard benchmark is insufficient for evaluation, since samples can hardly be entirely uncorrelated in general. Therefore, we employ an additional benchmark that evaluates the performance of the estimators when the samples are correlated. In the standard benchmark, the proposed I_MICE performs slightly better than I_SMILE without requiring prior knowledge to select a clipping threshold. We empirically show that I_MICE is more accurate than the other estimators in the proposed additional benchmark. Finally, we show that regularizing GANs with MICE improves their ability to capture multiple modes and consequently mitigates mode collapse.
A APPENDIX
A.1 PROOF OF LEMMA 6
Lemma 6 Let X be a random variable, and let g : R^d → R be an MLP with any Lipschitz continuous activation function. Let L_i be the Lipschitz constant of the i-th layer; then
Var[g(X)] ≤ E[ ‖X − E[X]‖² ] ∏_{i=1}^{I} L_i² (22)
Proof. First, consider the i-th layer f_i with a Lipschitz continuous activation function and Lipschitz constant L_i. Then
Var[f_i(X)] := E[ (f_i(X) − E[f_i(X)])² ] (23)
≤ E[ (f_i(X) − f_i(E[X]))² ] (24)
≤ L_i² E[ ‖X − E[X]‖² ] (25)
The first inequality stems from the fact that the mean of a random variable is the constant with the smallest MSE; the second inequality holds by the definition of Lipschitz continuity, because L_i is the Lipschitz constant of f_i. Second, let g be the composite function g = f_1 ∘ f_2 ∘ · · · ∘ f_I, where I is the number of layers in g. Applying the bound layer by layer gives
Var[g(X)] ≤ E[ ‖X − E[X]‖² ] ∏_{i=1}^{I} L_i² (26)
which completes the proof.
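A quick Monte-Carlo sanity check of the bound is sketched below for a one-layer g with Lipschitz constant 1 (σ(W) = 1 after rescaling, and tanh is 1-Lipschitz); this is illustrative only and not part of the proof.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1, 5))
W /= np.linalg.norm(W, 2)        # rescale so the spectral norm is 1
X = rng.standard_normal((100000, 5))
g = np.tanh(X @ W.T)             # a 1-Lipschitz one-layer network
lhs = g.var()
rhs = (np.linalg.norm(X - X.mean(axis=0), axis=1) ** 2).mean()  # E||X - EX||^2
assert lhs <= rhs                # Lemma 6 with L_1 = 1
```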
A.2 PROOF OF CONSISTENCY
The proof of consistency generally follows the proofs in (Belghazi et al., 2018), with some modifications to fit MICE. To prove that MICE is strongly consistent, we first prove that for all ε > 0 there exists a class of neural networks T_θ parameterized by θ in some compact domain Θ such that
|I(X;Y) − I_Θ(X;Y)| ≤ ε (27)
Next, we prove that given ε > 0, there exists N ∈ ℕ such that
|I_n(X;Y) − I_Θ(X;Y)| ≤ ε (28)
As a consequence, combining the above results with the triangle inequality, we have ∀n ≥ N, |I(X;Y) − I_n(X;Y)| ≤ ε, which proves the consistency of MICE.
Proof. Let the optimal critic be T* = log dP/dQ, where P and Q denote the joint distribution P(X,Y) and the product of marginals P_X P_Y of the continuous random variables X and Y, respectively. By the definition of I_NWJ, we have
I(X;Y) − I_Θ(X;Y) = E_P[T* − T_θ] + E_Q[e^{T*−1} − e^{T_θ−1}] (29)
Next, according to the universal approximation theorem (Hornik et al., 1989), one can choose a T_θ such that
E_P|T* − T_θ| ≤ ε/2 (30)
E_Q|T* − T_θ| ≤ (ε/2) e^{−T_max+1} (31)
where T* is bounded above by T_max. Because exp(·) is Lipschitz continuous with constant e^{T_max−1} on (−∞, T_max − 1], E_Q|e^{T*−1} − e^{T_θ−1}| ≤ e^{T_max−1} E_Q|T* − T_θ|, and consequently we have
E_Q|e^{T*−1} − e^{T_θ−1}| ≤ ε/2 (32)
Combining Equation 29, Equation 30, and Equation 32 with the triangle inequality, we have
|I(X;Y) − I_Θ(X;Y)| ≤ E_P|T* − T_θ| + E_Q|e^{T*−1} − e^{T_θ−1}| ≤ ε (33)
So far we have proved that Equation 27 holds when T* ≤ T_max. Next, we consider the subset {T* > T_max} for a suitably chosen large value of T_max. For a subset A of the input domain, we use the indicator function 1_A to partition the input domain. By the Lebesgue dominated convergence theorem, since T* and e^{T*} are integrable w.r.t. P and Q, we can choose T_max so that
E_P[1_{T*>T_max} T*] ≤ ε/4 (34)
E_Q[1_{T*>T_max} e^{T*−1}] ≤ ε/4 (35)
Again, we can choose a function T_θ ≤ T_max such that
E_P|T* − T_θ| ≤ ε/2 (36)
E_Q[1_{T*≤T_max} |T* − T_θ|] ≤ (ε/2) e^{−T_max+1} (37)
Combining Equation 35 and Equation 37,
E_Q[e^{T*−1} − e^{T_θ−1}] = E_Q[1_{T*≤T_max}(e^{T*−1} − e^{T_θ−1})] + E_Q[1_{T*>T_max}(e^{T*−1} − e^{T_θ−1})]
≤ e^{T_max−1} E_Q[1_{T*≤T_max}(T* − T_θ)] + E_Q[1_{T*>T_max} e^{T*−1}]
≤ ε/2 (38)
Similar to the derivation of Equation 33, putting Equation 36 and Equation 38 together, we obtain
∀ε > 0, |I(X;Y) − I_Θ(X;Y)| ≤ ε (39)
For the estimation problem, let ε > 0; given T_θ in some compact domain Θ ⊂ R^d, there exists a positive integer N such that
∀n ≥ N, |I_n(X;Y) − I_Θ(X;Y)| ≤ ε (40)
Here, we denote by P_n and Q_n the empirical versions of P and Q, respectively, and I_n is the MI estimate with n samples. By the triangle inequality, we have
|I_n(X;Y) − I_Θ(X;Y)| ≤ sup_{θ∈Θ} { |E_{P_n}[T_θ] − E_P[T_θ]| + |E_{Q_n}[e^{T_θ−1}] − E_Q[e^{T_θ−1}]| } (41)
Since Θ is compact (and therefore bounded) and neural networks are continuous, T_θ and e^{T_θ} satisfy the uniform law of large numbers (Geer & van de Geer, 2000). Therefore, given ε > 0, we can choose a positive integer N such that for all n ≥ N, with probability one,
sup_{θ∈Θ} |E_{P_n}[T_θ] − E_P[T_θ]| ≤ ε/2 (42)
sup_{θ∈Θ} |E_{Q_n}[e^{T_θ−1}] − E_Q[e^{T_θ−1}]| ≤ ε/2 (43)
From the three inequalities above, we derive Equation 40.
Finally, combining Equation 39 and Equation 40 with the triangle inequality, let ε > 0 and δ = 2ε; then there exists a positive integer N such that
∀n ≥ N, |I(X;Y) − I_n(X;Y)| ≤ |I(X;Y) − I_Θ(X;Y)| + |I_n(X;Y) − I_Θ(X;Y)| ≤ δ (44)
which completes the proof.
A.3 ADDITIONAL EXPERIMENTS
Experimental Settings. The experiments in Section 5.1 and Section 5.2 are run on a GTX 1080 Ti GPU with 11 GB VRAM. The MLPs used in the joint/separable critics have an input dimension of 20, two hidden layers with 256 units each, and an output dimension of 32. ReLU (Agarap, 2018) is used as the activation function for both critics.
Performance of MI Estimators under Specific Correlations. We compare the performance of the MI estimators under specific ρ̂ settings (ρ̂ = 0.1, 0.2, 0.3, 0.4, and 0.5) using the separable critic. As shown in Figure 5, the MI estimators become more biased as the correlation between samples grows. Among them, I_NWJ is the most biased, since neither the partition function nor the critic is constrained, so outliers lead to large variances and biases; the same mechanism makes I_SMILE(τ=∞) inaccurate. As mentioned in Section 2, the estimates of I_SMILE(τ=∞) are more accurate than those of I_NWJ because I_SMILE(τ=∞) is equivalent to I_DV, which is sharper than I_NWJ. Although I_CPC is bounded above by log n, it is consistent across the different correlation settings. I_SMILE(τ=1.0) has lower variance and bias than its τ = ∞ variant, but the improvement mainly consists of reducing the variance. Compared to the other MI estimators, our proposed I_MICE is the least biased and is robust when correlations between samples are involved.
Randomly Selected ρ̂. We provide an experiment in which the correlations between samples are randomly initialized, a more complicated configuration than the extension benchmark in Section 5.2. Here, ρ̂ is randomly drawn from a uniform distribution ranging from 0.0 to 0.5. In Figure 6, each estimator using the separable critic roughly attains its average performance from Figure 5. The proposed I_MICE benefits the separable critic in that it is robust to random correlations. In Figure 7, we also observe that the variance of I_NWJ is very sensitive to the data, because the right-hand side of Equation 5 contains an exponential function without a logarithm (unlike I_DV), which consequently yields a high MSE.
By constraining the continuity of the critic and stabilizing the gradients, I_MICE is robust when correlations between samples are involved, compared with the other estimators, especially for the separable critic. This could benefit large-scale training that requires a lightweight model structure for the critic. | 1. What is the focus of the paper regarding mutual information estimation?
2. What are the strengths of the proposed MICE estimator compared to other existing methods?
3. What are the weaknesses of the paper, particularly in comparing the MICE with the NWJ estimator?
4. Do you have any minor comments or suggestions for improving the paper? | Summary Of The Paper
Review | Summary Of The Paper
The paper reviews some existing estimators of mutual information and points out that these estimators suffer from significant bias and/or high variance. A new estimator of mutual information, called the Mutual Information Continuity-constrained Estimator (MICE), is then proposed. Several properties of MICE are shown, such as strong consistency and a simple expression for the upper bound of the variance of the critic. An algorithm is presented to calculate MICE. Experiments suggest that MICE outperforms most of the existing estimators in terms of bias and variance on the standard benchmark.
Review
STRENGTHS:
(a) Unlike most of the existing estimators discussed in the paper, the proposed estimator, MICE, is free from the overestimation of mutual information. Some other desirable properties hold for the MICE such as strong consistency and a simple expression for the upper bound of the variance of the critic.
(b) A sufficient number of experiments are carried out to demonstrate that the MICE provides a better performance than most of the existing estimators in terms of bias, variance and MSE.
(c) The paper presents a well-summarized review of existing estimators of mutual information.
WEAKNESSES:
(d) Apart from its critic, MICE has a similar form to the estimator of Nguyen et al. (2010), the NWJ estimator, given in Lemma 2. However, the theoretical comparison between MICE and the NWJ estimator is not sufficiently investigated in the paper. In particular, it would be good to clarify whether the theoretical results such as Lemma 6, Inequality (16), and strong consistency also hold for the NWJ estimator or not.
(e) There is not enough discussion to compare the computational cost of the MICE with that of the existing estimators. It could be nice to add this discussion apart from the computational complexity of the critic given in Section 5.1.
MINOR COMMENTS:
(f) p.9, l.9 up: It is claimed that MICE is an unbiased estimator, but it is not proved in the paper.
(g) p.9, l.9 up: a unbiased ===> an unbiased |
ICLR | Title
Mutual Information Continuity-constrained Estimator
Abstract
The estimation of mutual information (MI) is vital to a variety of applications in machine learning. Recent developments in neural approaches have shown encouraging potential for estimating the MI between high-dimensional variables based on their latent representations. However, these estimators are prone to high variance owing to inevitable outlier events. Recent approaches mitigate the outlier issue by smoothing the partition function using clipping or averaging strategies; however, these estimators either break the lower-bound condition or sacrifice accuracy. Accordingly, we propose the Mutual Information Continuity-constrained Estimator (MICE). MICE instead smooths the partition function by constraining the Lipschitz constant of the log-density-ratio estimator, thus alleviating the induced variance without clipping or averaging. Our proposed estimator outperforms most of the existing estimators in terms of bias and variance on the standard benchmark. In addition, we propose an extension of the standard benchmark in which variables are drawn from a multivariate normal distribution with correlations between the samples in a batch. The experimental results imply that when the i.i.d. assumption is unfulfilled, our proposed estimator can be more accurate than the existing approaches, which tend to underestimate the MI. Finally, we demonstrate that MICE mitigates mode collapse in the kernel density estimation task.
1 INTRODUCTION
Mutual information (MI) estimation is essential in various machine learning applications, including learning representations (Oord et al., 2018; Chen et al., 2016; Bachman et al., 2019; Hjelm et al., 2018; Sordoni et al., 2021), feature selection (Battiti, 1994; Estévez et al., 2009), feature disentanglement (Higgins et al., 2018; Esmaeili et al., 2019; Colombo et al., 2021), and reinforcement learning (Oord et al., 2018; Bachman et al., 2019; Li et al., 2016). Several conventional non-parametric approaches have been proposed to estimate MI (Estévez et al., 2009; Fraser & Swinney, 1986; Moon et al., 1995; Kwak & Choi, 2002). Despite promising results, (Belghazi et al., 2018; Poole et al., 2019) indicated that these estimators have limited capability to scale with the sample size or dimension (Gao et al., 2015) and are therefore hard to utilize in general-purpose applications.
Recent studies focus on scalable MI estimation through variational bound maximization (Oord et al., 2018; Belghazi et al., 2018; Poole et al., 2019) or minimization (Cheng et al., 2020) using neural networks, or through a convex maximum-entropy method (Samo, 2021). These neural estimators have been adopted in remarkable self-supervised applications, such as computer vision (Chen et al., 2020; He et al., 2020; Grill et al., 2020; Chen & He, 2020; Chen et al., 2021) and speech recognition (Schneider et al., 2019; Baevski et al., 2019), with the aim of maximizing the shared information between different views with respect to space or time. In MI estimation, neural networks (also known as critics) have been used to approximate the log-density ratio. These MI estimators generally characterize the Kullback-Leibler (KL) divergence (Kullback & Leibler, 1951) using a dual representation and subsequently formulate MI lower bounds.
Although multiple applications have attained promising results, two significant issues have not been fully addressed. As the first issue, the existing MI estimators can be debilitated by significant bias and variance owing to inevitable outlier events. It was pointed out by (Poole et al., 2019; Song & Ermon, 2020) that the exponential partition function causes a high-variance issue. It implies
that estimators leveraging f-divergence representations can suffer from the high-variance issue. Numerous studies have been conducted to address this problem. Previous approaches such as Mutual Information Neural Estimation (MINE) (Belghazi et al., 2018) and Contrastive Predictive Coding (CPC) (Oord et al., 2018) reduce the variance by adopting different types of averaging. Based on MINE, the Smoothed Mutual Information Lower-bound Estimator (SMILE) (Song & Ermon, 2020) limits the range of the critic with a hyper-parameter, enabling estimates with low bias and variance. As for the second issue, as summarized in (Oord et al., 2018; Belghazi et al., 2018; Poole et al., 2019; Nguyen et al., 2010), most existing MI estimators are tested on a standard benchmark where random variables are drawn independently. However, this benchmark is insufficient for the analysis of videos or audio signals, in which data frames can be correlated.
In this paper, we address the high-variance issue with a novel Mutual Information Continuity-constrained Estimator (MICE) that constrains the Lipschitz constant of the critic via its spectral norm (Miyato et al., 2018) and blocks the unstable gradients generated by the partition function. MICE underestimates less in the extended benchmark because the partition function is smoothed by the scale of the spectral norm instead of hard clipping, which can overly restrict the range of the density ratio. The experimental results show that MICE has a competitive bias-variance trade-off compared to SMILE on the standard benchmark, without the need to select a clipping threshold. Based on the standard benchmark, we propose an extension in which random variables are correlated within a batch. Our proposed method is robust when samples are not independent, whereas existing variational estimators underestimate MI drastically when slight correlations are involved. Finally, in the kernel density estimation (KDE) experiment, we demonstrate that using MICE as an MI regularizer alleviates mode collapse (Che et al., 2016; Dumoulin et al., 2016; Srivastava et al., 2017) in the training of generative adversarial networks (GANs) (Goodfellow et al., 2014). Our contributions are as follows:
• We address the high-variance issue of an existing unbiased estimator by constraining the Lipschitz constant of the log-density-ratio estimator and by gradient stabilization.
• We prove that MICE is a strongly consistent estimator of MI.
• In the proposed experiment extension, the results show that MICE outperforms existing estimators under the condition in which the i.i.d. assumption is not fulfilled.
• A GAN regularized by MICE can capture more modes in the KDE experiment and ease the mode collapse problem.
2 RELATED WORK
For a pair of random variables (X,Y ) over the probability space X × Y , the mutual information I(X;Y ) between X and Y can be defined as the KL divergence of the joint distribution P(X,Y ) and the product of the marginals PX and PY :
I(X;Y ) = DKL(P(X,Y )‖PX ⊗ PY ) (1)
where DKL is the KL divergence. Next, we start with a common characterization of KL divergence, the Donsker–Varadhan (DV) representation (Donsker & Varadhan, 1983), which is adopted by MINE (Belghazi et al., 2018) and SMILE (Song & Ermon, 2020).
Lemma 1 (Donsker–Varadhan (DV)) Given two probability distributions P and Q over X :
D_KL(P‖Q) = sup_{T:X→R} { E_P[T] − log E_Q[e^T] } ≜ I_DV (2)
for some bounded function T : X → R such that the expectations are finite. In particular, if P and Q are specified as P(X,Y) and P_X ⊗ P_Y, MI can be estimated by maximizing the DV representation. Note that equality holds when T = log dP/dQ + C for some constant C ∈ R. In (Broniatowski & Keziou, 2009; Nowozin et al., 2016), a general variational estimation of f-divergences is introduced. For any convex, lower-semicontinuous function f, there exists a convex conjugate f* such that f(u) = sup_{t∈dom(f*)}{tu − f*(t)}, where u belongs to the domain of f.
Therefore, f-divergences can be estimated by taking the supremum over an arbitrary class of functions T : X → R:
D_f(P‖Q) = ∫_X q(x) sup_{t∈dom(f*)} { t (p(x)/q(x)) − f*(t) } dx (3)
≥ sup_{T:X→R} { E_P[T] − E_Q[f*(T)] } (4)
The derivation from Equation 3 to Equation 4 is based on Jensen's inequality, because the supremum is moved outside the integral. Here, the KL divergence is obtained by specifying f(u) = u log u, thus f*(T) = e^{T−1}, yielding the Nguyen-Wainwright-Jordan (NWJ) lower bound (Nguyen et al., 2010). Similarly, MI can be estimated by setting P = P(X,Y) and Q = P_X ⊗ P_Y.
Lemma 2 (Nguyen, Wainwright, and Jordan (NWJ) (Nguyen et al., 2010)) Given two probability distributions P and Q over X,
D_KL(P‖Q) ≥ sup_{T_θ:X→R} { E_P[T_θ] − E_Q[e^{T_θ−1}] } ≜ I_NWJ (5)
where equality holds when T_θ = 1 + log dP/dQ.
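A minimal sketch of evaluating Equation 5 from a batch, assuming the critic scores for all pairings have already been computed; the function name and the score-matrix interface are ours.

```python
import torch

def nwj_lower_bound(scores):
    # I_NWJ from an (n, n) matrix of critic scores, scores[i, j] = T(x_i, y_j).
    # The diagonal holds joint samples; off-diagonal pairings approximate
    # samples from the product of marginals P_X (x) P_Y.
    n = scores.shape[0]
    joint = scores.diagonal().mean()
    off_diag = scores[~torch.eye(n, dtype=torch.bool)]
    return joint - torch.exp(off_diag - 1.0).mean()
```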
Note that I_NWJ is unbiased, since no nonlinear function is applied outside the expectations on the right-hand side. Although I_DV and I_NWJ are tight for a sufficiently large hypothesis set of T_θ, the partition function induces large variances. The following approaches aim to solve the high-variance issue by averaging and clipping the partition function. For instance, MINE (Belghazi et al., 2018) proposed a neural information measure based on taking the supremum of I_DV over a neural network T_θ : X × Y → R parameterized by θ.
Lemma 3 (Mutual Information Neural Estimation (MINE) (Belghazi et al., 2018)) Let P and Q be two probability distributions over X;
I(X;Y) ≥ sup_{T_θ:X→R} { E_{P(X,Y)}[T_θ] − log EMA( E_{P_X⊗P_Y}[e^{T_θ}] ) } ≜ I_MINE (6)
In this manner, MINE collects cross-batch statistics to evaluate bias-corrected estimate, reducing the bias and variance simultaneously. In contrast to MINE, which uses the exponential moving average (EMA) to reduce variances induced from the partition function, (Song & Ermon, 2020) proposed to reduce variances by putting limits on the range of the log-density ratio.
Lemma 4 (Smoothed Mutual Information Lower-bound Estimator (SMILE) (Song & Ermon, 2020)) Let P and Q be two probability distributions over X
I(X;Y) ≥ sup_{T_θ:X→R} { E_{P(X,Y)}[T_θ] − log E_{P_X⊗P_Y}[e^{max(min(T_θ, τ), −τ)}] } ≜ I_SMILE (7)
Another multi-sample estimator, Contrastive Predictive Coding (CPC) (Oord et al., 2018), uses the cross-entropy between the positive and negative samples as an objective
E_{Π_j p(x_j, y_j)} [ (1/n) Σ_{i=1}^{n} log ( f(x_i, y_i) / ( (1/n) Σ_{j=1}^{n} f(x_i, y_j) ) ) ] ≜ I_CPC (8)
where f(x, y) = e^{x^⊤ W y} is a log-bilinear function with a trainable parameter W, and the expectation is taken over the distribution with density Π_j p(x_j, y_j). Note that I_CPC is tight when f(x, y) = log p(y|x) + c(y), where c(y) is an arbitrary function of y. However, (Oord et al., 2018) indicated that this bound is loose when I(X;Y) > log n, requiring an exponentially large batch size to achieve accurate estimates with high confidence (Song & Ermon, 2020).
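Equation 8 is the standard softmax cross-entropy over candidate pairings, as the following sketch makes explicit; the score-matrix interface (scores[i, j] = log f(x_i, y_j)) is our assumption.

```python
import math
import torch
import torch.nn.functional as F

def cpc_lower_bound(scores):
    # I_CPC (Equation 8): log n minus the cross-entropy of identifying the
    # matching y_i for each x_i among the batch; bounded above by log n.
    n = scores.shape[0]
    return math.log(n) - F.cross_entropy(scores, torch.arange(n))
```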
3 LIMITATIONS ON DV REPRESENTATION
3.1 MAXIMUM OF LOG-DENSITY RATIO ESTIMATE DOMINATING THE PARTITION FUNCTION
According to (Poole et al., 2019; Song & Ermon, 2020), the partition function EQ [ eTθ(x,y) ] is the rationale behind high variances and biases. This expression is highly dependent on the maximum
of the log-density ratio in a batch. We demonstrate this by showing the relationship between the LogSumExp operation (LSE, a smooth approximation to the maximum function) and the maximum function:
LSE(T_θ(x_1, y_1), . . . , T_θ(x_n, y_{n−1})) > max{T_θ(x_1, y_1), . . . , T_θ(x_n, y_{n−1})}
⟹ (1/(n(n−1))) Σ_{i=1}^{n} Σ_{j=1}^{n−1} e^{T_θ(x_i, y_j)} > (1/(n(n−1))) e^{max{T_θ(x_1, y_1), . . . , T_θ(x_n, y_{n−1})}} (9)
where T_θ(x_i, y_j) is the estimated log-density ratio log dP/dQ with x and y drawn from Q. Note that because Q is the product of marginals, the total number of scores T_θ sampled from Q is n(n − 1). (McAllester & Stratos, 2020) indicated that the partition function is dominated by extremely rare events that are never observed when sampling from P_X ⊗ P_Y. They quantified the probability of such outlier events using the outlier risk lemma.
Lemma 5 (Outlier risk lemma (McAllester & Stratos, 2020)) Given n samples (n ≥ 2) drawn from the distribution P_X and a property Φ[x] such that P_X(Φ[x]) ≤ 1/n, the probability that no sample x satisfies Φ[x] is at least 1/4.
Here, P_X(Φ[x]) is the probability of drawing x from P_X such that the statement Φ[x] holds. Lemma 5 is easily proved from the probability of sampling with replacement.
Letting P = P(X,Y) and Q = P_X ⊗ P_Y, the best estimate of MI under the DV representation is attained when
E_P[T_θ(x, y)] = I(X;Y) (10)
E_Q[e^{T_θ(x,y)}] = 1 (11)
The outlier risk lemma indicates that with probability at least 1/4 one draws an unseen variable such that E_Q[e^{T_θ(x,y)}] > 1. By Equation 9, if a pair of unseen variables (x′, y′) is sampled, the partition function becomes larger than e^{T_θ(x′,y′)}/(n(n−1)); therefore, the estimates under the DV representation have high bias and variance. Similarly, the best estimate of I_NWJ satisfies the same Equation 10, but Equation 11 is modified to E_Q[e^{T_θ(x,y)−1}] = 1.
3.2 NEITHER UPPER BOUND NOR LOWER BOUND ESTIMATORS
Based on the aforementioned limitations of the DV representation, I_MINE and I_SMILE focus on controlling the variance of the partition function. I_MINE reduces the variance by applying an EMA to the partition function over all previous samples. According to (McAllester & Stratos, 2020), the worst case of the DV representation can be bounded below log n. Because I_MINE implicitly enlarges the batch size with the number of iterations (i.e., the number of covered samples at the i-th iteration is i × n, where n is the batch size), it can leverage the linearly increasing batch size to reduce the bias. Another method, adopted by I_SMILE, controls the range of the partition function by clipping the log-density ratio with a threshold τ in Equation 7.
In (Song & Ermon, 2020), the clipped density ratio rτ = max(min(eTθ(x,y), eτ ), e−τ ) is estimated by n random variables over the distribution Q = PX ⊗ PY . The variance of the bounded partition function EQ[rτ ] satisfies Var[EQ[rτ ]] ≤ (eτ − e−τ )2/4n. According to (Song & Ermon, 2020), a trade-off of the bias and variance can be determined by a threshold τ . Decreasing τ reduces the variance, but increases bias with such choice.
Although these estimators mitigate the high-variance issue and attain more accurate estimates, they are no longer upper or lower bounds on MI, because the modified partition function is no longer a normalizing term. As MINE applies an EMA to E_Q[e^{T_θ(x,y)}] across batches, and there is at least a 1/4 chance that an outlier event occurs, the partition function eventually saturates at e^{T_max}/(4N² − N), where T_max is the maximum among all T_θ and N is the amount of training data. As the range of the partition function of I_SMILE is limited to [e^{−τ}, e^{τ}], the MI is overestimated when the log-density ratio exceeds τ, and this overestimation vanishes only as τ → 0, because of Equation 11. In a nutshell, although these neither-upper-bound-nor-lower-bound estimators reach more accurate MI estimates than I_DV, they can overestimate MI to some unknown extent, as they are not guaranteed to stay below the MI. Moreover, I_MINE requires a large batch size to avoid yielding large errors; in addition, developing a criterion for selecting a proper threshold for I_SMILE is also challenging.
4 METHODOLOGY
4.1 MUTUAL INFORMATION CONTINUITY-CONSTRAINED ESTIMATOR
To alleviate the issue of outlier events dominating the partition function, we adopt two strategies: limiting the Lipschitz constant of the log-density-ratio estimator, and gradient stabilization. The core idea behind reducing the variance is to smooth the critic. For instance, I_MINE and I_CPC apply different forms of averaging to the partition function to trade off bias and variance, and I_SMILE directly truncates the density ratio using a hyper-parameter. These approaches have certain flaws: averaging leads to high bias, and choosing proper clipping thresholds requires prior knowledge. To avert these issues, we utilize spectral normalization, which constrains the spectral norm of the parameters of the last layer and consequently smooths the partition function. In (Miyato et al., 2018), the spectral norm of a weight matrix W is defined as
σ(W) := max_{h:h≠0} ‖Wh‖₂ / ‖h‖₂ = max_{‖h‖₂≤1} ‖Wh‖₂ (12)
where h denotes any non-zero vector. The spectral norm σ(W) equals the largest singular value of W. Since σ(W) is independent of h, no preconceptions about the data are required. For the weight matrix W^l in the l-th layer of T^l, spectral normalization normalizes W^l by its spectral norm:
W^l_SN := W^l / σ(W^l) (13)
where W lSN is the normalized weight matrix such that ‖T l‖Lip ≤ 1. Therefore, although we cannot avoid sampling unseen variables, we can still constrain the maximum value of the partition function by limiting the smoothness of the critic.
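In PyTorch, this normalization is available as a built-in hook that estimates σ(W) by power iteration and divides the weight by it. The sketch below applies it only to the critic head; the layer size is illustrative.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Constraining a layer as in Equation 13.
head = spectral_norm(nn.Linear(256, 1))
```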
By leveraging spectral normalization, we propose the Mutual Information Continuity-constrained Estimator, which smooths the critic:
I(X;Y) = sup_{T^SN_θ:X→R} { E_{P(X,Y)}[T^SN_θ(x, y)] − E_{P_X⊗P_Y}[e^{T^SN_θ(x,y)−1}] } ≜ I_MICE (14)
where T^SN_θ is a critic normalized by the spectral norm of its last layer. In contrast to previous approaches that focus on reducing the variance of the partition function directly, the proposed I_MICE shares the same parameters on both sides of Equation 14, and it is therefore guaranteed not to exceed the MI.
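For reference, Equation 14 mirrors the NWJ form with two MICE-specific modifications: the spectrally normalized critic and, as described after Lemma 6 below, a detached partition term. A minimal sketch (the score-matrix interface is our assumption):

```python
import torch

def mice_from_scores(scores):
    # Evaluates Equation 14 from an (n, n) matrix of spectrally normalized
    # critic scores; detaching the partition term sketches the gradient
    # stabilization introduced later (illustrative only).
    n = scores.shape[0]
    joint = scores.diagonal().mean()
    partition = torch.exp(scores[~torch.eye(n, dtype=torch.bool)] - 1.0).mean()
    return joint - partition.detach()
```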
To quantify the maximal variance of the log-density ratio, we assume that T SNθ : Rd → R is a multi-layer perceptron (MLP) with Lipschitz continuous activation functions.
Lemma 6 Let X be a random variable, and let g : R^d → R be an MLP with any Lipschitz continuous activation function. Let L_i be the Lipschitz constant of the i-th layer; then
Var[g(X)] ≤ E[ ‖X − E[X]‖² ] ∏_{i=1}^{I} L_i² (15)
We defer the proof to Section A.1. Lemma 6 shows that the variance of the critic is bounded above by the product of the squared Lipschitz constants of its layers. An inequality resembling Equation 9 that upper-bounds the partition function is shown below:
LSE(T_θ(x_1, y_1), . . . , T_θ(x_n, y_{n−1})) ≤ max{T_θ(x_1, y_1), . . . , T_θ(x_n, y_{n−1})} + log n(n−1)
⟹ (1/(n(n−1))) Σ_{i=1}^{n} Σ_{j=1}^{n−1} e^{T_θ(x_i, y_j)} ≤ (1/(n(n−1))) ( e^{max{T_θ(x_1, y_1), . . . , T_θ(x_n, y_{n−1})}} + 1 ) (16)
Therefore, by Equation 15 and Equation 16, the variance of the partition function is reduced by limiting the Lipschitz constant L of the critic and controlling the variance of X , and the estimate is
of lower variance for smaller L, which is determined by the network during optimization. Investigating Equation 14, because the partition function is exponential, its gradient with respect to T^SN_θ is also an exponential function, which destabilizes training. Therefore, to further mitigate the high-variance issue, we prevent the gradients generated by the partition function from back-propagating. The training procedure using this gradient stabilization is presented in Algorithm 1.
Algorithm 1: Mutual Information Continuity-constrained Estimator (MICE)
θ ← initialize network parameters from the uniform distribution U(−√(1/d), √(1/d))
while not converged do
    Draw n pairs of samples (x_1, y_1), . . . , (x_n, y_n) from the joint distribution P(X,Y)
    Forward pass of MICE: T^SN_θ(x, y) ← MLP_θ(x, y)
    I_MICE(θ) ← (1/n) Σ_{i=1}^{n} T^SN_θ(x_i, y_i) − log [ (1/(n(n−1))) Σ_{i≠j} exp(T^SN_θ(x_i, y_j)) ]
    Compute the gradients of the left-hand term of I_MICE with respect to θ: G(θ) ← ∇_θ I^left_MICE(θ)
    Update the network parameters: θ ← θ + G(θ)
end
4.2 CONSISTENCY
According to (Belghazi et al., 2018), an estimator I_n(X;Y) constructed using a statistics network over n samples is strongly consistent if for all ε > 0 there exists a positive integer N such that
∀n ≥ N, |I(X;Y) − I_n(X;Y)| ≤ ε, a.e. (17)
The authors then separate the consistency question into an approximation and an estimation problem. In summary, to prove that MICE is strongly consistent, we first prove that there exists a neural network T_θ parameterized by θ in some compact domain Θ such that for all ε > 0, |I(X;Y) − I_Θ(X;Y)| ≤ ε, a.e. This ensures the existence of neural networks that can approximate the MI with arbitrary accuracy. Second, we prove that given a family of neural networks T_θ in some bounded domain, for all ε > 0 there exists an N ∈ ℕ such that for all n ≥ N, |I_n(X;Y) − I_Θ(X;Y)| ≤ ε, a.e., ensuring that, given a sufficient number of samples, one can estimate the MI with some statistics network over the samples. Combining the above two results with the triangle inequality, we conclude that MICE is strongly consistent. We provide the details of the proofs in Section A.2.
5 EXPERIMENTS
5.1 STANDARD BENCHMARK
Dataset. The standard benchmark (Belghazi et al., 2018; Poole et al., 2019; Song & Ermon, 2020) contains two tasks, the Gaussian task and the Cubic task. For both tasks, we sample n random variables X, Y ∈ R^d per batch from a standard multivariate normal distribution with correlation ρ between X and Y. For the Cubic task, to examine how much the MI estimators degrade when a nonlinear transformation is involved, we estimate I(X;Y³) = I(X;Y), since the transformation does not change the MI.
Critics. Following previous studies (Belghazi et al., 2018; Poole et al., 2019; Song & Ermon, 2020), we consider two types of critics: the joint critic (Belghazi et al., 2018) and the separable critic (Oord et al., 2018). The joint critic enumerates all combinations of random variables in a batch and computes the log-density ratio with an MLP R^{2d} → R. The separable critic applies nonlinear mappings to the inputs with two MLPs, f, g : R^d → R^{d′}, and subsequently estimates the log-density ratio by ⟨f, g⟩. Because the joint critic compares all combinations, it has computational complexity O(n²); since the computations of f and g can be parallelized, the separable critic has complexity O(n).
In Figure 1, we show the performance of each estimator under different MI values. The top row shows the Gaussian task, and the bottom row shows the Cubic task. As described in Section 2, I_CPC is highly biased and bounded above by log n, and the variance of I_NWJ increases with the ground-truth MI. Here, I_SMILE (τ = 1.0) and I_MICE have lower overall biases and variances than I_CPC and I_NWJ with both critics. Because I_SMILE is neither an upper nor a lower bound on MI, its estimates in the Gaussian task are sometimes slightly too high, whereas the moving mean of I_MICE almost never exceeds the ground-truth MI. In the Cubic task, the joint critic degrades more severely than the separable critic for most estimators, except I_NWJ.
We show the bias-variance trade-offs of the estimators using the separable critic in Figure 2, where the top row illustrates the results of the Gaussian task and the bottom row those of the Cubic task. We observe that I_CPC is severely biased, but its variance is much lower than that of all other approaches. Although I_NWJ is theoretically unbiased, it exhibits large bias owing to inevitable outliers, and its variance grows exponentially with MI, as (Song & Ermon, 2020) pointed out. I_MICE leverages the unbiasedness of I_NWJ and further reduces the variance by constraining the Lipschitz constant of the critic and by gradient stabilization. Comparing the results of I_SMILE and I_MICE with the joint critic, I_MICE converges faster than I_SMILE, possibly benefiting from the stabilized gradients. However, because we limit the Lipschitz constants of some layers of the critic, the critic may be less flexible, and thus I_MICE is slightly more biased than I_SMILE in the Cubic task. In brief, I_MICE is simultaneously guaranteed not to exceed the MI and remarkably relaxes the high-variance issue of I_NWJ.
5.2 EXTENSION OF STANDARD BENCHMARK
Sampling scheme. Next, we evaluate the MI estimators using an extension experiment based on the standard benchmark. As described in Section 5.1, random variables there are sampled independently; that is, no correlations between samples are considered. However, we believe that, in practical scenarios, it is extremely difficult to create a batch in which all samples are independent. Therefore, based on the standard benchmark, we establish an extension experiment in which random variables are sampled using the scheme below:
x_i = ρ̂ x_{i−1} + √(1 − ρ̂²) ε,  ∀i = 2, . . . , n (18)
y_i = ρ x_i + √(1 − ρ²) ε,  ∀i = 1, . . . , n (19)
where x_1 and ε are d-dimensional random variables following a standard normal distribution N(0, I_d). Sampling variables using Equation 18 and Equation 19 is equivalent to sampling X = {x_1, . . . , x_n} and Y = {y_1, . . . , y_n} from a multivariate normal distribution
X, Y ∼ N( 0, [ Σ_x  ρΣ_x ; ρΣ_x  Σ_x ] ),
where Σ_x is the block-Toeplitz matrix whose (i, j) block is ρ̂^{|i−j|} I_d, i.e.,
Σ_x = [ I_d  ρ̂I_d  ρ̂²I_d  · · ·  ρ̂^{n−1}I_d ; ρ̂I_d  I_d  ρ̂I_d  · · ·  ρ̂^{n−2}I_d ; ⋮  ⋮  ⋱  ⋮ ; ρ̂^{n−1}I_d  ρ̂^{n−2}I_d  · · ·  I_d ],
and ρ̂ is the correlation between each pair of consecutive samples, i.e., (x_i, x_{i+1}) and (y_i, y_{i+1}). In the extension benchmark, we follow the setting of Section 5.1 with the additional setting ρ̂ = 0.1, and the ground-truth MI is increased by 2 after every 4000 training iterations. A correlation coefficient below 0.3 is generally considered weak. As shown in Figure 3, data with a correlation of 0.1, even weaker than 0.3, degrades the other estimators, whereas I_MICE still produces relatively accurate estimates. To further explore this effect, additional experiments with different settings of ρ̂ are presented in Section A.3.
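As a compact illustration, the block-Toeplitz covariance Σ_x above can be constructed as follows (NumPy sketch; the helper name is ours):

```python
import numpy as np

def sigma_x(n, d, rho_hat):
    # Block (i, j) of the covariance equals rho_hat**|i - j| * I_d.
    idx = np.arange(n)
    return np.kron(rho_hat ** np.abs(idx[:, None] - idx[None, :]), np.eye(d))
```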
In Figure 3, we demonstrate that the bias and MSE of the estimate of IMICE are much lower than those of the other estimators using the separable critic for both the Gaussian task and the Cubic task.
There are two possible reasons why I_MICE outperforms the other approaches. First, as stated in Section 3, the partition function can be dominated by nonzero log-density ratios when correlations between samples are involved. Second, the gradients are stabilized by applying spectral normalization to the critic and by blocking the gradients generated by the partition function.
5.3 REGULARIZING GAN WITH MICE
GANs (Goodfellow et al., 2014) have recently shown powerful capabilities in real-world data generation. However, the well-known mode-collapse problem plagues GANs, with limited sample diversity as a consequence; the discriminator does not require the generator to capture all modes in order to decrease the loss. (Belghazi et al., 2018) proposed to alleviate mode collapse by introducing code variables C and jointly maximizing the MI between the generated data and C. Formally, a GAN regularized by MICE alternately optimizes the following two objectives:
L_D := E_{P_X}[log D(X)] + E_{P_Z}[log(1 − D(G(Z)))] (20)
L_G := E_{P_Z}[log(1 − D(G(Z)))] − β I_MICE(G(Z, C); C) (21)
where D,G are the discriminator and the generator, and Z follows a standard uniform distribution. Comparing the results of vanilla GAN and GAN + MICE in Figure 4, a vanilla GAN fails to model the structure, whereas GAN + MICE captures all 25 modes, showing the efficacy of mode collapse mitigation.
6 CONCLUSION
In this study, we comprehensively discuss the attributes and limitations of existing approaches to variational MI estimation. We show that energy-based estimators such as I_NWJ and I_DV have high variances because they are susceptible to outlier events. Although estimators that are neither upper nor lower bounds achieve much more accurate approximations to the MI in the standard benchmark, they run the risk of overestimating it. To address the above-mentioned issues, we propose an unbiased and consistent estimator of MI, I_MICE, which is guaranteed not to overestimate the MI. We also argue that the standard benchmark is insufficient for evaluation, since samples can hardly be entirely uncorrelated in general. Therefore, we employ an additional benchmark that evaluates the performance of the estimators when the samples are correlated. In the standard benchmark, the proposed I_MICE performs slightly better than I_SMILE without requiring prior knowledge to select a clipping threshold. We empirically show that I_MICE is more accurate than the other estimators in the proposed additional benchmark. Finally, we show that regularizing GANs with MICE improves their ability to capture multiple modes and consequently mitigates mode collapse.
A APPENDIX
A.1 PROOF OF LEMMA 6
Lemma 6 Let X be a random variable, and let g : R^d → R be an MLP with any Lipschitz continuous activation function. Let L_i be the Lipschitz constant of the i-th layer; then
Var[g(X)] ≤ E[ ‖X − E[X]‖² ] ∏_{i=1}^{I} L_i² (22)
Proof. First, consider the i-th layer f_i with a Lipschitz continuous activation function and Lipschitz constant L_i. Then
Var[f_i(X)] := E[ (f_i(X) − E[f_i(X)])² ] (23)
≤ E[ (f_i(X) − f_i(E[X]))² ] (24)
≤ L_i² E[ ‖X − E[X]‖² ] (25)
The first inequality stems from the fact that the mean of a random variable is the constant with the smallest MSE; the second inequality holds by the definition of Lipschitz continuity, because L_i is the Lipschitz constant of f_i. Second, let g be the composite function g = f_1 ∘ f_2 ∘ · · · ∘ f_I, where I is the number of layers in g. Applying the bound layer by layer gives
Var[g(X)] ≤ E[ ‖X − E[X]‖² ] ∏_{i=1}^{I} L_i² (26)
which completes the proof.
A.2 PROOF OF CONSISTENCY
The proof of consistency generally follows the proofs in (Belghazi et al., 2018), with some modifications to fit MICE. To prove that MICE is strongly consistent, we first prove that for all ε > 0 there exists a class of neural networks T_θ parameterized by θ in some compact domain Θ such that
|I(X;Y) − I_Θ(X;Y)| ≤ ε (27)
Next, we prove that given ε > 0, there exists N ∈ ℕ such that
|I_n(X;Y) − I_Θ(X;Y)| ≤ ε (28)
As a consequence, combining the above results with the triangle inequality, we have ∀n ≥ N, |I(X;Y) − I_n(X;Y)| ≤ ε, which proves the consistency of MICE.
Proof. Let the optimal critic be T* = log dP/dQ, where P and Q denote the joint distribution P(X,Y) and the product of marginals P_X P_Y of the continuous random variables X and Y, respectively. By the definition of I_NWJ, we have
I(X;Y) − I_Θ(X;Y) = E_P[T* − T_θ] + E_Q[e^{T*−1} − e^{T_θ−1}] (29)
Next, according to the universal approximation theorem (Hornik et al., 1989), one can choose a T_θ such that
E_P|T* − T_θ| ≤ ε/2 (30)
E_Q|T* − T_θ| ≤ (ε/2) e^{−T_max+1} (31)
where T* is bounded above by T_max. Because exp(·) is Lipschitz continuous with constant e^{T_max−1} on (−∞, T_max − 1], E_Q|e^{T*−1} − e^{T_θ−1}| ≤ e^{T_max−1} E_Q|T* − T_θ|, and consequently we have
E_Q|e^{T*−1} − e^{T_θ−1}| ≤ ε/2 (32)
Combining Equation 29, Equation 30, and Equation 32 with the triangle inequality, we have
|I(X;Y) − I_Θ(X;Y)| ≤ E_P|T* − T_θ| + E_Q|e^{T*−1} − e^{T_θ−1}| ≤ ε (33)
So far we have proved that Equation 27 holds when T* ≤ T_max. Next, we consider the subset {T* > T_max} for a suitably chosen large value of T_max. For a subset A of the input domain, we use the indicator function 1_A to partition the input domain. By the Lebesgue dominated convergence theorem, since T* and e^{T*} are integrable w.r.t. P and Q, we can choose T_max so that
E_P[1_{T*>T_max} T*] ≤ ε/4 (34)
E_Q[1_{T*>T_max} e^{T*−1}] ≤ ε/4 (35)
Again, we can choose a function T_θ ≤ T_max such that
E_P|T* − T_θ| ≤ ε/2 (36)
E_Q[1_{T*≤T_max} |T* − T_θ|] ≤ (ε/2) e^{−T_max+1} (37)
Combining Equation 35 and Equation 37,
E_Q[e^{T*−1} − e^{T_θ−1}] = E_Q[1_{T*≤T_max}(e^{T*−1} − e^{T_θ−1})] + E_Q[1_{T*>T_max}(e^{T*−1} − e^{T_θ−1})]
≤ e^{T_max−1} E_Q[1_{T*≤T_max}(T* − T_θ)] + E_Q[1_{T*>T_max} e^{T*−1}]
≤ ε/2 (38)
Similar to the derivation of Equation 33, putting Equation 36 and Equation 38 together, we obtain
∀ε > 0, |I(X;Y) − I_Θ(X;Y)| ≤ ε (39)
For the estimation problem, let ε > 0; given T_θ in some compact domain Θ ⊂ R^d, there exists a positive integer N such that
∀n ≥ N, |I_n(X;Y) − I_Θ(X;Y)| ≤ ε (40)
Here, we denote by P_n and Q_n the empirical versions of P and Q, respectively, and I_n is the MI estimate with n samples. By the triangle inequality, we have
|I_n(X;Y) − I_Θ(X;Y)| ≤ sup_{θ∈Θ} { |E_{P_n}[T_θ] − E_P[T_θ]| + |E_{Q_n}[e^{T_θ−1}] − E_Q[e^{T_θ−1}]| } (41)
Since Θ is compact (and therefore bounded) and neural networks are continuous, T_θ and e^{T_θ} satisfy the uniform law of large numbers (Geer & van de Geer, 2000). Therefore, given ε > 0, we can choose a positive integer N such that for all n ≥ N, with probability one,
sup_{θ∈Θ} |E_{P_n}[T_θ] − E_P[T_θ]| ≤ ε/2 (42)
sup_{θ∈Θ} |E_{Q_n}[e^{T_θ−1}] − E_Q[e^{T_θ−1}]| ≤ ε/2 (43)
From the three inequalities above, we derive Equation 40.
Finally, combining Equation 39 and Equation 40 with the triangle inequality, let ε > 0 and δ = 2ε; then there exists a positive integer N such that
∀n ≥ N, |I(X;Y) − I_n(X;Y)| ≤ |I(X;Y) − I_Θ(X;Y)| + |I_n(X;Y) − I_Θ(X;Y)| ≤ δ (44)
which completes the proof.
A.3 ADDITIONAL EXPERIMENTS
Experimental Settings. The experiments in Section 5.1 and Section 5.2 are run on a GTX 1080 Ti GPU with 11 GB VRAM. The MLPs used in the joint/separable critics have an input dimension of 20, two hidden layers with 256 units each, and an output dimension of 32. ReLU (Agarap, 2018) is used as the activation function for both critics.
Performance of MI Estimators under Specific Correlations. We compare the performance of the MI estimators under specific ρ̂ settings (ρ̂ = 0.1, 0.2, 0.3, 0.4, and 0.5) using the separable critic. As shown in Figure 5, the MI estimators become more biased as the correlation between samples grows. Among them, I_NWJ is the most biased, since neither the partition function nor the critic is constrained, so outliers lead to large variances and biases; the same mechanism makes I_SMILE(τ=∞) inaccurate. As mentioned in Section 2, the estimates of I_SMILE(τ=∞) are more accurate than those of I_NWJ because I_SMILE(τ=∞) is equivalent to I_DV, which is sharper than I_NWJ. Although I_CPC is bounded above by log n, it is consistent across the different correlation settings. I_SMILE(τ=1.0) has lower variance and bias than its τ = ∞ variant, but the improvement mainly consists of reducing the variance. Compared to the other MI estimators, our proposed I_MICE is the least biased and is robust when correlations between samples are involved.
Randomly Selected ρ̂. We provide an experiment in which the correlations between samples are randomly initialized, a more complicated configuration than the extension benchmark in Section 5.2. Here, ρ̂ is randomly drawn from a uniform distribution ranging from 0.0 to 0.5. In Figure 6, each estimator using the separable critic roughly attains its average performance from Figure 5. The proposed I_MICE benefits the separable critic in that it is robust to random correlations. In Figure 7, we also observe that the variance of I_NWJ is very sensitive to the data, because the right-hand side of Equation 5 contains an exponential function without a logarithm (unlike I_DV), which consequently yields a high MSE.
By constraining the continuity of the critic and stabilizing the gradients, I_MICE is robust when correlations between samples are involved, compared with the other estimators, especially for the separable critic. This could benefit large-scale training that requires a lightweight model structure for the critic. | 1. What is the main contribution of the paper regarding mutual information estimation?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to prior works such as SMILE and GANs?
3. How does the reviewer assess the effectiveness and reliability of the proposed estimator in various scenarios?
4. Are there any concerns or suggestions regarding the writing style, organization, and inclusion of relevant literature?
5. How does the reviewer evaluate the significance and novelty of the paper's content, especially in relation to recent advancements in MI estimation? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a new estimator, named MICE, for mutual information estimation. The heuristic is to smooth the partition function by constraining the Lipschitz constant of the log-density-ratio estimator. The author(s) show that the proposed estimator can be more accurate and reliable than prior approaches when the i.i.d. assumption is unfulfilled.
Review
Strengths: (1) Considering the non-i.i.d. scenarios is a good call. (2) Clear writing and an adequate summary of the existing literature. (3) Using Lipschitz constraints to regulate the variance is an interesting idea.
Weakness: (1) Section 2 heavily repeats well-known results; extensively quoting known theories does not add value to the paper. There is no need to spend so much space reviewing; please significantly shrink this section (or move it to the Appendix). (2) I would consider the comparison to SMILE unfair, as the paper gives no detail on how the Lipschitz constant is tuned (while SMILE uses fixed truncation). (3) Section 4: why do you only normalize T_{\theta}^{SN} by the spectral norm of the last layer? The right way would be to normalize T across all layers. It feels a bit strange to regularize only the output layer with the spectral norm, which does not adequately control the Lipschitz constant (the neural net can easily rescale the preceding layers' outputs to cope with a smaller output Lipschitz constant). (4) In Section 4.2, if the critic functions have a bounded Lipschitz constant, then MICE is guaranteed to be a biased estimator whenever the true log-likelihood ratio violates the constraint. Since it is impossible to prove that a biased estimator is consistent, the theory is flawed. (5) The experiments appear weak, since MICE is not evaluated on real-world data. Sections 5.1 & 5.2 are very basic toy models. Also, Section 5.3 is not a challenging problem. It is meaningless to use a vanilla GAN as the baseline; most recent GAN variants can recover this multi-modal Gaussian without additional regularization. In addition, other MI estimators might also achieve the same results; you have to show cases where MICE is a clear winner. (6) Some of the more recent literature on MI estimation (and its applications), such as [1][2], is missing from the current discussion. Note that they all try to improve the estimation. Also, the discussion should give specific attention to [Poole, 2019]'s outlook on future directions for MI estimation; please clarify how the proposed approach addresses open problems such as high-MI estimation and optimization (due to the fundamental tension pointed out by [McAllester, 2020]).
[1] Mroueh, Youssef, et al. "Improved Mutual Information Estimation." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 10. 2021. [2] Gupta, Umang, et al. "Controllable Guarantees for Fair Outcomes via Contrastive Information Estimation." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 9. 2021.
Also, Equation 11 strongly connects to the idea from [3], which achieves variance reduction by exploiting the empirical partition function. Some discussion or comparison is advised. [3] Guo, Qing, et al. "Tight Mutual Information Estimation With Contrastive Fenchel-Legendre Optimization." arXiv:2107.01131.
And the author(s) should cite [4, 5], which introduced the Lipschitz constraint to the machine learning context. [4] Arjovsky, Martin, Soumith Chintala, and Léon Bottou. "Wasserstein generative adversarial networks." International conference on machine learning. PMLR, 2017. [5] Arjovsky, Martin, and Léon Bottou. "Towards principled methods for training generative adversarial networks." arXiv:1701.04862 (2017).
Minor: CPC would better be renamed to InfoNCE, which is the well-known term.
ICLR | Title
Energy Consumption-Aware Tabular Benchmarks for Neural Architecture Search
Abstract
The demand for large-scale computational resources for Neural Architecture Search (NAS) has been lessened by tabular benchmarks for NAS. Evaluating NAS strategies is now possible on extensive search spaces and at a moderate computational cost. But so far, NAS has mainly focused on maximising performance on some hold-out validation/test set. However, energy consumption is a partially conflicting objective that should not be neglected. We hypothesise that constraining NAS to include the energy consumption of training the models could reveal a subspace of undiscovered architectures that are more computationally efficient with a smaller carbon footprint. To support the hypothesis, an existing tabular benchmark for NAS is augmented with the energy consumption of each architecture. We then perform multi-objective optimisation that includes energy consumption as an additional objective. We demonstrate the usefulness of multi-objective NAS for uncovering the trade-off between performance and energy consumption as well as for finding more energy-efficient architectures. The updated tabular benchmark, EC-NAS-Bench, is open-sourced to encourage the further exploration of energy consumption-aware NAS.
1 INTRODUCTION
The design of neural architectures is a complex task. While general guidelines for producing suitable neural architectures have been proposed, neural architecture design still requires expert domain knowledge, experience, and not least substantial effort (Philipp, 2021; Zoph & Le, 2016; Ren et al., 2020). This led to an upsurge in research on automated exploration and design of neural architectures cast as an optimisation problem – neural architecture search (NAS) (Baker et al., 2016; Zoph & Le, 2016; Real et al., 2017).
NAS strategies explore neural architectures in a predefined search space relying on model training and evaluation to determine the model’s fitness (i.e., validation/test set score) to adjust the search strategy and extract the best performing architecture (Ren et al., 2020). NAS strategies have shown great promise in discovering novel architecture designs yielding state-of-the-art model performance (Liu et al., 2017; 2018; Lin et al., 2021; Baker et al., 2017). However, it can be prohibitively expensive to perform NAS (Tan & Le, 2019b) due to the demand for large-scale computational resources and the associated carbon footprint of NAS (Schwartz et al., 2019; Anthony et al., 2020).
The introduction of tabular benchmarks for NAS significantly lessened the computational challenges mentioned above by facilitating the evaluation of NAS strategies on a limited search space of architectures (Klein & Hutter, 2019; Dong & Yang, 2020). Predictive models and zero- and one-shot models (Wen et al., 2019; Lin et al., 2021; Zela et al., 2020) have reduced time-consuming model training and thereby increased the efficiency of NAS strategies. Most recently, surrogate NAS benchmarks (Zela et al., 2022) have been proposed for arbitrary expansion of architecture search spaces for NAS.
Notwithstanding the aforementioned major contributions to the advancement of NAS research, the prime objective of NAS has been maximising a performance objective on some hold-out validation/test set. NAS strategies can be evaluated effectively, yet the search strategies do not intentionally aim to find computationally efficient architectures. That is, NAS may efficiently determine model performance at a moderate computational cost, but energy efficiency is generally not an objective of NAS.
We hypothesise that adding the energy consumption of training models as a NAS objective could reveal a sub-space of computationally efficient models that also have a smaller carbon footprint. In order to find efficient architectures without sacrificing cardinal performance requirements, we propose the use of NAS strategies that will optimise for multiple objectives.
Our main contributions.
1. We provide an energy consumption-aware tabular benchmark for NAS based on NAS-Bench-101 (Ying et al., 2019). For each architecture, we added its training energy consumption, power consumption and carbon footprint. We hope that the new dataset will foster the development of environmentally friendly deep learning systems.
2. We also introduce a surrogate energy model to predict the training energy cost for a given architecture in a large search space (about 423k architectures).
3. To exemplify the use of the new benchmark, we devise a simple multi-objective optimisation algorithm for NAS and apply it to optimise generalisation accuracy as well as energy consumption.
4. We demonstrate the usefulness of multi-objective architecture exploration for revealing the trade-off between performance and energy efficiency and for finding efficient architectures obeying accuracy constraints. This is also demonstrated with other baseline multi-objective methods.
2 ENERGY CONSUMPTION-AWARE BENCHMARKS - EC-NAS-Bench
Our energy consumption-aware tabular benchmark EC-NAS-Bench is based on NAS-Bench-101 (Ying et al., 2019). We closely follow their specification of architectures; however, the search space of architectures that are considered, the evaluation approach and the metrics provided for each architecture are different. This section briefly presents EC-NAS-Bench and its differences from NAS-Bench-101.
2.1 ARCHITECTURE DESIGN
Network Topology. All architectures considered are convolutional neural networks (CNNs) designed for the task of image classification on CIFAR-10 (Krizhevsky, 2009). Each neural network comprises a convolutional stem layer followed by three repeats of three stacked cells and a downsampling layer. Finally, a global pooling layer and a dense softmax layer are used. The space of architectures, X, is limited to the topological space of cells, where each cell is a configurable feedforward network.
Cell Encoding. The individual cells are represented as directed acyclic graphs (DAGs). Each DAG, G(V, M), has N = |V| vertices (or nodes) and edges described by the binary adjacency matrix M ∈ {0, 1}^{N×N}. The set of operations (labels) that each node can realise is given by L′ = {input, output} ∪ L, where L = {3x3conv, 1x1conv, 3x3maxpool}. Two of the N nodes are always fixed as the input and output of the network. The remaining N − 2 nodes can take up one of the labels in L. The connections between nodes of the DAG are encoded in the upper-triangular adjacency matrix with no self-connections (zero main-diagonal entries). For a given architecture A, every entry α_{i,j} ∈ M_A denotes an edge from node i to node j, with operations i, j ∈ L, and its labelled adjacency matrix is L_A ∈ M_A × L′.
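For concreteness, the sketch below shows this cell encoding in Python and validates the structural constraints (the helper names are ours; this is not the released EC-NAS-Bench API):

```python
import numpy as np

def valid_cell(adjacency: np.ndarray, labels: list, max_edges: int = 9) -> bool:
    """Check an (adjacency, labels) pair: strictly upper-triangular adjacency
    (a DAG with no self-connections), an edge budget, and fixed endpoints."""
    upper = np.array_equal(adjacency, np.triu(adjacency, k=1))
    within_budget = int(adjacency.sum()) <= max_edges
    endpoints = labels[0] == "input" and labels[-1] == "output"
    return upper and within_budget and endpoints

# A 5-node cell: input -> 3x3conv -> 1x1conv -> 3x3maxpool -> output,
# plus a skip connection from the input to the output.
M = np.array([[0, 1, 0, 0, 1],
              [0, 0, 1, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0]], dtype=int)
L = ["input", "3x3conv", "1x1conv", "3x3maxpool", "output"]
assert valid_cell(M, L)
```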
Search space. The number of DAGs grows exponentially with N and L (Ying et al., 2019). We restrict the search space in EC-NAS-Bench by imposing |V| ≤ 5 and at most 9 edges (|{α_{i,j} ≠ 0}| ≤ 9), referred to as the 5V space. The search space with |V| ≤ 4, called the 4V space, is also considered. In contrast, NAS-Bench-101 considers the search space for |V| ≤ 7. With these imposed restrictions on the search space of EC-NAS-Bench, 91, 2532 and 423k unique architectures are identified in the 4V, 5V and 7V spaces, respectively.
2.2 ENERGY CONSUMPTION-AWARENESS
Resource-constrained NAS for obtaining efficient architectures has been explored mainly by optimising the total number of floating point operations (FPOs) (Tan & Le, 2019a). Optimising for FPOs, however, might not be entirely indicative of the efficiency of models (Henderson et al., 2020). It has been reported that models with fewer FPOs can have bottleneck operations that consume the bulk of the training time (Howard et al., 2017), and some models with high FPOs have lower inference time (Jeon & Kim, 2018). Energy-consumption-optimised hyperparameter selection for large language models, outside of NAS settings, has recently been investigated by Puvis de Chavannes et al. (2021).
The energy consumption during the training of a model encapsulates facets of architecture efficiency that are not entirely taken into consideration when using standard resource constraints such as FPOs, computational time and the number of parameters. Energy consumption accounts for both hardware and software variations in the experimental set-ups. To foster a new direction for NAS to find more efficient architectures, we use energy consumption as the additional objective along with standard performance measures.
2.3 QUANTIFYING ENERGY CONSUMPTION
About 75% of the total energy costs during the training of a neural network are incurred by hardware accelerators such as graphics processing units (GPUs) or tensor processing units (TPUs) (Dodge et al., 2022). The remaining energy consumption is mainly due to the central processing units (CPUs) and dynamic random access memory (DRAM). Additional energy consumed by the supporting infrastructure, such as cooling and power systems and dissipation, is usually accounted for by the power usage effectiveness (PUE), an overhead factor. Several open-source tools published in the past couple of years, such as experiment-impact-tracker (Henderson et al., 2020), Carbontracker (Anthony et al., 2020) and CodeCarbon (Schmidt et al., 2021), provide convenient ways to track and log the energy consumption of neural networks by taking these factors into consideration.
In EC-NAS-Bench, the energy consumption of training and evaluating the neural architectures is estimated by modifying the tool Carbontracker (Anthony et al., 2020). Our version of the tool monitors the GPUs, CPUs and DRAM and estimates the total energy costs, E (kWh), aggregate carbon footprint (kgCO2eq) based on the instantaneous carbon intensity of the regions and the total computation time, T (s). The complete set of metrics that are measured and reported in EC-NAS-Bench are listed in Table 1.
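A minimal sketch of how such per-epoch energy logging looks in a training loop, using the public Carbontracker API (our EC-NAS-Bench modifications for the additional metrics in Table 1 are not shown, and train_one_epoch is a hypothetical training step):

```python
from carbontracker.tracker import CarbonTracker

max_epochs = 4  # each model is trained for 4 epochs (see Section 2.4)
tracker = CarbonTracker(epochs=max_epochs, components="all", log_dir="./ct_logs")

for epoch in range(max_epochs):
    tracker.epoch_start()
    train_one_epoch(model, loader)  # hypothetical training step
    tracker.epoch_end()

tracker.stop()  # writes the aggregate energy (kWh) and CO2eq to the log
```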
2.4 ARCHITECTURE PERFORMANCE AND EFFICIENCY
Training Pipeline. Architectures from the 4V and 5V spaces are trained on CIFAR-10 (Krizhevsky, 2009) using 40k samples and evaluated on 10k validation and 10k test samples (60k in total). Each model is trained on an in-house Slurm cluster on a single NVIDIA Quadro RTX 6000 GPU with 24 GB memory and two Intel CPUs. The training strategy, or hyper-parameter setting, is similar to that of NAS-Bench-101 (Klein & Hutter, 2019). Predicting the energy consumption of longer model runs from a few training epochs has been shown to be robust when performed on the same hardware (Anthony et al., 2020). To refrain from retraining and re-evaluating all the models in NAS-Bench-101, we train each model for only 4 epochs and then obtain surrogate time and energy measurements by linear scaling, as sketched below. We then tabulate these measurements along with the corresponding mean performance metrics for each model from NAS-Bench-101 and obtain metrics for training and evaluating each model for 12, 36 and 108 epochs.
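The linear scaling used to extrapolate the 4-epoch measurements could look like the following sketch (our naming, not the released code):

```python
def scale_measurements(energy_kwh: float, time_s: float, target_epochs: int,
                       measured_epochs: int = 4) -> tuple[float, float]:
    """Linearly extrapolate energy (kWh) and time (s) from a short run."""
    factor = target_epochs / measured_epochs
    return energy_kwh * factor, time_s * factor

# Example 4-epoch readings, extrapolated to the tabulated budgets.
for budget in (12, 36, 108):
    e, t = scale_measurements(0.012, 203.0, budget)
    print(f"{budget} epochs: {e:.3f} kWh, {t:.0f} s")
```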
Metrics. We report the operations, the number of parameters, and the performance metrics in EC-NAS-Bench, as in NAS-Bench-101; additionally, we include efficiency measures in terms of the energy consumption and the carbon footprint of training each model. The primary focus of the efficiency metrics is to quantify the resource costs specific to model training; however, we also report the total resource costs, which include computational overhead, e.g., data movements. For completeness, we also provide carbon intensity measures at training time, the timestamp, and the average energy consumption of the computing resources. We have made the metrics of each architecture readily accessible to encourage the development of NAS strategies for exploring efficient architectures. The metrics relevant to this work are listed in Table 1.
2.5 SURROGATE DATASET FOR 7V-SPACE
The 4V and 5V search spaces are the primary spaces used in this work to reduce the overall resource consumption to populate the energy measurements in the tabular benchmark datasets. However, even the 5V space has only a fraction of possible architectures compared to the 7V space published in Ying et al. (2019), which has about 423k architectures. Computing the energy consumption as done for 4V and 5V datasets on the 7V space is prohibitively expensive1.
We instead sample a subset of architectures from the 7V space and obtain the actual energy costs for 4300 architectures. Using these measurements, we train a multi-layered perceptron (MLP) based surrogate energy prediction model. The MLP takes the graph-encoded architecture and the number of parameters as input and predicts the energy consumption for a given number of epochs. This surrogate model is similar to recent surrogate NAS methods that have been shown to be more efficient (Zela et al., 2022). Details of the surrogate model used to predict the energy measurements for the 7V space are provided in Appendix D; a minimal sketch is given below.
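A minimal PyTorch sketch of this surrogate model, with the layer sizes, activation, loss and optimiser from Appendix D (the training loop and data loading are omitted):

```python
import torch
import torch.nn as nn

class SurrogateEnergyModel(nn.Module):
    """MLP mapping a 36-dim architecture encoding (upper-triangular adjacency
    entries, categorical op labels and the parameter count) to energy (kWh)."""
    def __init__(self, in_features: int = 36):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 128), nn.GELU(),
            nn.Linear(128, 64), nn.GELU(),
            nn.Linear(64, 32), nn.GELU(),
            nn.Linear(32, 1),  # no activation on the final layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)

model = SurrogateEnergyModel()
optimiser = torch.optim.Adam(model.parameters(), lr=5e-3)
loss_fn = nn.L1Loss()  # the paper minimises an L1-norm loss
```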
The resulting surrogate 7V dataset with the energy measurements yields a close approximation of the actual training energy costs, as shown in Figure 2-a). The Pearson correlation between the actual and predicted energy measurements is 0.9977. In Figure 2-b), we also show that the mean absolute error between the predicted and actual energy measurements plateaus at about 3000 training architectures, justifying the model's use for predictions on the remaining 7V space. The standard deviation is estimated over 10 random initialisations of the surrogate model per training dataset size.
2.6 INFLUENCE OF HARDWARE ON EC-NAS-Bench
The energy consumption of the architectures in the 4V and 5V spaces was obtained on a single Quadro RTX 6000 GPU. While the energy measurements tabulated in EC-NAS-Bench are specific to this hardware setting, we argue that the trends across the architectures hold independent of the actual hardware used. To demonstrate this, we trained the architectures in the 4V space on four different NVIDIA GPUs spanning multiple generations: Titan XP, RTX 3060, RTX 3090 and Quadro RTX 6000.
While the energy consumed by each model on specific hardware is different, the trends compared to other models are maintained across different GPUs. This is captured in Figure 3, where the energy
1Our estimates showed that it would require 770 GPU days of compute.
consumption for each architecture in the 4V space on all four GPUs is reported. This trend supports the claim that when NAS is constrained on energy consumption and performance, the resulting models remain the same irrespective of the specific hardware used.
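This kind of cross-hardware stability can be checked with a rank correlation over the per-architecture energy readings; a sketch with illustrative numbers:

```python
from scipy.stats import spearmanr

# Energy (kWh) for the same five architectures on two GPUs (illustrative values).
energy_quadro_rtx_6000 = [0.21, 0.34, 0.12, 0.55, 0.40]
energy_titan_xp = [0.25, 0.41, 0.15, 0.66, 0.47]

rho, p = spearmanr(energy_quadro_rtx_6000, energy_titan_xp)
print(f"Spearman rank correlation: {rho:.3f} (p = {p:.3g})")
```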
3 NAS STRATEGIES WITH EC-NAS-Bench
Given a tabular benchmark, such as EC-NAS-Bench, that can be used to query the model-training energy consumption in addition to other standard metrics, NAS strategies can be used to search for energy-efficient architectures. We next present multi-objective optimisation as a suitable strategy to uncover the trade-off between performance and efficiency, which supports an energy-aware architecture choice.
3.1 MULTI-OBJECTIVE OPTIMISATION
Multi-objective optimisation (MOO) simultaneously optimises several, potentially conflicting objectives. The goal of MOO is to find or to approximate the set of Pareto-optimal solutions, where a solution is Pareto-optimal if it cannot be improved in one objective without getting worse in another.
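In code, Pareto dominance and non-dominated filtering (for minimisation) reduce to a few lines; a self-contained sketch:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Objectives: (energy kWh, -validation accuracy); lower is better for both.
print(non_dominated([(1.03, -0.944), (0.32, -0.932), (0.90, -0.930)]))
# -> [(1.03, -0.944), (0.32, -0.932)]; the third point is dominated.
```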
In this work, we introduce a simple evolutionary MOO algorithm (SEMOA) based on Krause et al. (2016). The algorithm is simple, but derived from canonical principles of derivative-free multi-criteria optimisation, such as hypervolume maximisation. Details of SEMOA are presented in Appendix A.1. We also use several existing MOO algorithms: random search, Speeding up Evolutionary Multi-Objective Algorithms (SH-EMOA) and Mixed Surrogate Expected Hypervolume Improvement (MS-EHVI), as implemented in Izquierdo et al. (2021), to demonstrate the usefulness of EC-NAS-Bench.
3.2 EVALUATION OF NAS STRATEGIES
Experimental Setup. We conduct experiments on EC-NAS-Bench by adapting the presented MOO algorithm to perform both single-objective optimisation (SOO) and MOO. In the former, we naturally find only one solution when optimising a single objective; in the latter, when optimising multiple, diverse objectives, we find the empirical Pareto front. We run the algorithm in the 4V and 5V spaces on models trained for 108 epochs. The optimisation is performed over 100 evolutions with a population size of 20. All the experiments are conducted on a desktop workstation with a single NVIDIA RTX 3090 GPU with 24 GB memory and an Intel(R) Core(TM) i7-10700F CPU @ 2.90GHz.
Performance Criteria. For the multi-objective optimisation, we use the validation accuracy (Pv) and the training energy cost, E(kWh), as the two objectives to be jointly optimised using the MOO algorithm. For the single-objective optimisation, we only use Pv as the performance objective. We
use energy cost rather than, e.g., training time, considering that E is agnostic to parallel computing. We note that it is possible to use any of the metrics provided in Table 1 for the purpose of single- and multi-objective optimisation. As the MOO algorithm minimises the objectives, we simply use the negative of the objectives in cases where the quantities are to be maximised; for instance, we optimise −Pv as accuracy is a maximisation objective.

Training costs. In aggregate, EC-NAS-Bench had a total estimated training cost of 124.214 GPU days, 2021.02 kWh and 259.047 kgCO2eq for the 5V space. The 4V space had a total estimated training cost of 3.854 GPU days, 63.792 kWh and 5.981 kgCO2eq. The actual training costs for the 5V space were only 3.105 GPU days, 50.525 kWh and 6.476 kgCO2eq. Actual training costs of the 4V space were 0.096 GPU days, 1.594 kWh and 0.149 kgCO2eq.
In total, we saved an estimated compute cost of 121.109 GPU days, 1970.495 kWh and 252.571 kgCO2eq for the 5V space, and 3.758 GPU days, 48.931 kWh and 6.327 kgCO2eq for the 4V space. We obtain ≈ 97% reduction in computing resources and energy consumption in all efficiency measures.
4 RESULTS
Multi-objective exploration of 5V space. The key results from the experiments on EC-NAS-Bench using the multi-objective optimisation of E and −Pv are shown in Figure 4-a), b) and c). Pareto fronts over multiple random initialisations of the four MOO algorithms (SEMOA (ours), Random Search, SH-EMOA and MS-EHVI) are visualised as attainment curves in Figure 4-a), which summarises the median solutions attained over the multiple runs (Fonseca et al., 2001). All the MOO algorithms are able to explore the search space reasonably well, yielding attainment curves that largely look similar.
The Pareto front obtained from our MOO algorithm, SEMOA, for one run is shown in Figure 4-b). It also shows the extrema (r0, r1) on both ends of the front, each preferring one of the objectives, whereas the knee point (rk) offers the best trade-off between the two objectives. These three points are shown with different colours and markers: the two extrema (A_{r_0}, blue; A_{r_1}, green) and the knee point (A_{r_k}, yellow). We compute the bend angles to find the knee point as suggested by Deb & Gupta (2011); a sketch of this computation is given below.
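One way to compute such bend angles on a Pareto front sorted by the first objective is sketched below (this is our reading of the Deb & Gupta (2011) criterion; the paper's exact implementation may differ):

```python
import math

def knee_by_bend_angle(front):
    """Return the interior point of a Pareto front (sorted by the first
    objective) with the sharpest bend, i.e. the smallest angle between
    the two edges meeting at that point."""
    assert len(front) >= 3, "need at least one interior point"
    best_idx, best_angle = 1, math.pi
    for i in range(1, len(front) - 1):
        (x0, y0), (x1, y1), (x2, y2) = front[i - 1], front[i], front[i + 1]
        v1 = (x0 - x1, y0 - y1)  # edge towards the previous point
        v2 = (x2 - x1, y2 - y1)  # edge towards the next point
        cos_a = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
        angle = math.acos(max(-1.0, min(1.0, cos_a)))
        if angle < best_angle:
            best_idx, best_angle = i, angle
    return front[best_idx]

# Objectives: (energy kWh, -validation accuracy), sorted by energy.
print(knee_by_bend_angle([(0.10, -0.80), (0.32, -0.932), (1.03, -0.944)]))
```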
The architectures corresponding to the two extrema (A_{r_0}, A_{r_1}) and the knee point (A_{r_k}) for a single MOO run are visualised in the radar plot in Figure 4-b). The exact performance metrics for these three models are also reported in Table 2. The solution covering the largest area is one of the extremal points (A_{r_0}, blue), with high accuracy (0.944) but also a larger footprint in energy consumption (1.032 kWh), computation time (5482.78 s) and the number of parameters (21.22M) compared to the other extremum (A_{r_1}, green) or the knee point (A_{r_k}, yellow). The model corresponding to the knee point (A_{r_k}) provides a large reduction in energy consumption (0.324 kWh) at the expense of a small reduction in performance (0.932).
Single-objective exploration. We optimise only the validation accuracy, Pv, to simulate standard NAS practice. The resulting solution is shown in the last row of Table 2. This SOO model achieves the highest validation accuracy (0.944). However, the footprint of the solution along the energy consumption, computation time and number-of-parameters axes is larger than that of the solutions from the MOO algorithm.
Multi-objective exploration of 7V space. The MOO results for the surrogate 7V space resemble the trends observed in the 5V space, as shown in Figure 5. As with the 5V space, the attainment curves of all four MOO algorithms look similar. Visibly, the MS-EHVI method seems to underperform slightly, as indicated by the protrusion around its knee point compared to the other methods, which largely overlap. A single Pareto front of SEMOA is also shown in Figure 5-b), with trends comparable to those of the 5V-space results in Figure 4.
5 DISCUSSIONS
Single versus multi-objective optimisation. The performance trends of the SOO and MOO solutions are clearly captured in Table 2. The knee-point solution, A_{r_k}, from MOO yields an architecture that consumes about 70% less energy with only about a 1% degradation in performance. Depending on the downstream task, this could be a reasonable trade-off. If the degradation in performance cannot be tolerated, the Pareto front offers other candidate solutions for practitioners to choose from. For instance, the extremum solution (A_{r_0}) offers essentially the same performance as the SOO solution while consuming about 32% less energy.
Training time is not an alternative to energy consumption. The original NAS-Bench-101 already reports the training time (Ying et al., 2019). In single-hardware regimes, this could serve as a measure of energy consumption, as training time mostly correlates with energy consumption. However, as most neural architecture training is performed on multiple GPUs with large-scale parallelism, training time alone cannot capture the efficiency of models. Aggregate energy consumption can take parallel hardware and the associated overheads into consideration. Even in single-GPU training settings, optimising energy consumption could yield more energy-efficient models. For instance, a small architecture trained on a large GPU still has a large energy consumption due to the under-utilisation of the hardware resources. In such instances, a larger model could (to a certain extent) yield more performance improvement for the total energy consumed (Pv/E).

Energy-efficient tabular NAS benchmark for obtaining efficient architectures. Tabular benchmarks such as NAS-Bench-101 (Ying et al., 2019) were introduced to reduce the resources required to perform NAS. However, even the one-time cost of generating a tabular benchmark dataset is massive. Surrogate NAS benchmarks are being studied to alleviate these costs, where the models are not exhaustively trained and evaluated. Instead, the performance metrics of architectures are estimated based on smaller training costs. For instance, this is achieved using predictive modelling based on learning curves (Yan et al., 2021), gradient approximations (Xu et al., 2021), or by fitting surrogate models to a subset of architectures (Zela et al., 2022). Similar to these attempts, the proposed EC-NAS-Bench dataset does not train all the models but bases its predictions on training the models only for 4 epochs, as described in Section 2.4. This results in about a 97% reduction compared to creating the dataset from scratch, as shown in Table 3. Thus, EC-NAS-Bench is an energy-efficient tabular benchmark that can be used to obtain energy-efficient architectures, as demonstrated in Section 4.
Carbon-footprint aware NAS. The EC-NAS-Bench dataset reports several metrics per architecture, as shown in Table 1. Combinations of these metrics and the use of MOO could allow for the exploration of architecture spaces that have interesting properties. For instance, NAS can be performed to directly optimise the carbon footprint of deep learning models. Although instantaneous energy consumption and carbon footprint are
linearly correlated, when measured over a longer duration (>5 min) these quantities differ due to fluctuations of the instantaneous carbon intensity (Anthony et al., 2020). These carbon intensity fluctuations are caused by variations in the power sources feeding the grid (Henderson et al., 2020). This can have implications when training models for a longer duration or on cloud instances that can be distributed over data centres in different countries (Dodge et al., 2022). By reporting the instantaneous and aggregate carbon footprint of model training in EC-NAS-Bench, we facilitate the possibility of carbon-footprint-aware NAS (Selvan et al., 2022). In this work, we focused only on energy-consumption awareness to work around the temporal and spatial variations of the carbon intensity.
Energy Consumption-aware Few-shot NAS. Tabular benchmarks such as NAS-Bench-101 (Ying et al., 2019) provide an efficient way to explore different NAS strategies, where the model training cost is only incurred once. One restriction of such tabular benchmarks is that they are specific to a set of architectures (e.g., feedforward convolutional neural networks) and datasets (e.g., CIFAR-10). Developing tabular benchmarks for all possible network architectures and datasets is alleviated using one- or few-shot learning methods (Zhao et al., 2021; Zela et al., 2020). Integrating surrogate models for predicting learning dynamics (Zela et al., 2022) and energy measurements using the surrogate model in Section 2.5 could bridge the divide between few-shot and surrogate tabular benchmark datasets that are also energy consumption-aware. We have demonstrated the integration of surrogate energy models with an existing tabular benchmark dataset, and extending this to surrogate benchmark datasets is straightforward.
Limitations. Constraining the number of vertices in the DAGs results in sparser search spaces for the optimisation strategy. The optimisation strategy will therefore be more sensitive to initialisation and the choice of random seeds, and the empirical Pareto front will appear more rigid, as seen in the attainment plot in Figure 4-c, even when multiple initialisations and trials are carried out. We also only demonstrated non-surrogate experiments on the 4V and 5V spaces.
To reduce the computation cost, in EC-NAS-Bench we used the surrogate time and energy measurements that do not model training time variability. We also query the performance metrics from the three repeats of NAS-Bench-101 and update EC-NAS-Bench with their mean performance metrics.
All these limitations are primarily driven by the need to minimise the energy consumption of these experiments. While these are at the expense of variability, we argue that the resulting reduction in the energy consumption justifies these choices. Further, the results from these small-scale experiments have been shown to extend to larger space of architectures (Ying et al., 2019).
6 CONCLUSIONS AND FUTURE WORK
In this work, we presented an updated tabular benchmark dataset, EC-NAS-Bench, which tabulates the energy consumption and carbon footprint of training models in addition to standard performance measures. Using multi-objective optimisation strategies, we showed that Pareto-optimal solutions offer appealing trade-offs between the performance measures and the energy consumption of model training. We showed that large reductions (about 70%) in energy consumption are possible with <1% reduction in performance.
In addition to providing energy consumption measures, the EC-NAS-Bench benchmark provides metrics such as average carbon footprint and power consumption of CPUs, GPUs and DRAM. We hope this will foster interest in the development of models that are efficient and environmentally friendly by optimising for their energy consumption and carbon footprint.
A MULTI-OBJECTIVE OPTIMISATION
Formally, let the MOO problem be described by f : X → R^m, f(x) ↦ (f_1(x), . . . , f_m(x)). Here X denotes the search space of the optimisation problem and m refers to the number of objectives. We assume w.l.o.g. that all objectives are to be minimised. For two points x, x′ ∈ X we say that x′ dominates x and write x′ ≺ x if ∀i ∈ {1, . . . , m} : f_i(x′) ≤ f_i(x) ∧ ∃j ∈ {1, . . . , m} : f_j(x′) < f_j(x). For X′, X′′ ⊆ X we say that X′ dominates X′′ and write X′ ≺ X′′ if ∀x′′ ∈ X′′ : ∃x′ ∈ X′ : x′ ≺ x′′. The subset of non-dominated solutions in a set X′ ⊆ X is given by ndom(X′) = {x | x ∈ X′ ∧ ∄x′ ∈ X′ \ {x} : x′ ≺ x}. The Pareto front of a set X′ ⊂ X is defined as F(X′) = {f(x) | x ∈ ndom(X′)} and, thus, the goal of MOO can be formalised as approximating F(X). In iterative MOO, the strategy is to step-wise improve a set of candidate solutions towards a sufficiently good approximation of F(X). For the design of an MOO algorithm, it is important to have a way to rank two sets X′ and X′′ w.r.t. the overall MOO goal even if neither X′ ≺ X′′ nor X′′ ≺ X′. This ranking can be done by the hypervolume measure. The hypervolume measure or S-metric (see Zitzler & Thiele, 1999) of a set X′ ⊆ X is the volume of the union of regions in R^m that are dominated by X′ and bounded by some appropriately chosen reference point r ∈ R^m:
S_r(X′) := Λ( ⋃_{x ∈ X′} [f_1(x), r_1] × · · · × [f_m(x), r_m] ),
where Λ(·) is the Lebesgue measure. The hypervolume is, up to weighting objectives, the only strictly Pareto-compliant measure (Zitzler et al., 2003) in the sense that given two sets X′ and X′′ we have S_r(X′) > S_r(X′′) if X′ dominates X′′. As stated by Bringmann et al. (2013), the worst-case approximation factor of a Pareto front F(X′) obtained from any hypervolume-optimal set X′ with size |X′| = µ is asymptotically equal to the best worst-case approximation factor achievable by any set of size µ, namely Θ(1/µ) for additive approximation and 1 + Θ(1/µ) for relative approximation (Bringmann & Friedrich, 2013). Now we define the contributing hypervolume of an individual x ∈ X′ as
Δ_r(x, X′) := S_r(X′) − S_r(X′ \ {x}).
The value Δ_r(x, X′) quantifies how much a candidate solution x contributes to the total hypervolume of X′ and can be regarded as a measure of the relevance of the point. Therefore, the contributing hypervolume is a popular criterion in MOO algorithms (e.g. Beume et al., 2007; Igel et al., 2007; Bader & Zitzler, 2011; Krause et al., 2016). If we iteratively optimise some solution set P, then points x with low Δ_r(x, P) are candidates in an already crowded region of the current Pareto front F(P), while points with high Δ_r(x, P) mark areas that are promising to explore further.
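For two objectives, the hypervolume and each point's contribution can be computed exactly with a sweep; a sketch for minimisation problems:

```python
def hypervolume_2d(front, ref):
    """Hypervolume dominated by a 2-D front w.r.t. reference point ref
    (both objectives minimised; the front is assumed non-dominated)."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(set(front)):  # ascending in f1, so descending in f2
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

def contribution(x, front, ref):
    """Contributing hypervolume Delta_r(x, front)."""
    rest = [p for p in front if p != x]
    return hypervolume_2d(front, ref) - hypervolume_2d(rest, ref)

front = [(1.0, 2.0), (2.0, 1.0)]
print(hypervolume_2d(front, ref=(3.0, 3.0)))        # -> 3.0
print(contribution((1.0, 2.0), front, (3.0, 3.0)))  # -> 1.0
```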
A.1 SEMOA: SIMPLE EVOLUTIONARY MULTI-OBJECTIVE OPTIMISATION ALGORITHM
In this study, we used a simple MOO algorithm based on hypervolume maximisation, outlined in Algorithm 1 and inspired by Krause et al. (2016). The algorithm iteratively updates a set P of candidate solutions, starting from a set of random network architectures. Dominated solutions are removed from P. Then λ new architectures are generated by first selecting λ architectures from P and then modifying these architectures according to the perturbation described in Procedure 2. The λ new architectures are added to P and the next iteration starts. In Procedure 2, the probability p_edge for changing (i.e., either adding or removing) an edge is chosen such that, in expectation, two edges are changed, and the probability p_node for changing a node is set such that, in expectation, every second perturbation changes the label of a node.
The selection of the λ > m architectures from the current solution set is described in Procedure 3. We always select the extreme points in P that minimise a single objective (thus, the precise choice of the reference point r is of lesser importance). The other λ − m points are randomly chosen, preferring points with a higher contributing hypervolume. The points in P are ranked according to their hypervolume contribution. The probability of being selected depends linearly on the rank. We use linear ranking selection (Baker, 1985; Greffenstette & Baker, 1989), where the parameter controlling the slope is set to η⁺ = 2. Always selecting the extreme points and focusing on points with a large contributing hypervolume leads to a wide spread of non-dominated solutions.
Algorithm 1 SEMOA for NAS strategy
Input: objective f = (f_1, . . . , f_m), maximum number of iterations n
Output: set of non-dominated solutions P

1: Initialize P ⊂ X (e.g., randomly) ▷ Initial random architectures
2: P ← ndom(P) ▷ Discard dominated solutions
3: for i ← 1 to n do ▷ Loop over iterations
4:   O ← LinearRankSample(P, λ) ▷ Get λ points from P
5:   O ← Perturb(O) ▷ Change the architectures
6:   Compute f(x) for all x ∈ O ▷ Evaluate architectures
7:   P ← ndom(P ∪ O) ▷ Discard dominated points
8: end for
9: return P
Procedure 2 Perturb(O)
Input: set of architectures O, variation probabilities for edges and nodes p_edge and p_node
Output: set of modified architectures O∗

1: for all M_A ∈ O do ▷ Loop over matrices
2:   repeat
3:     for all α_{i,j} ∈ M_A do ▷ Loop over entries
4:       With probability p_edge flip α_{i,j}
5:     end for
6:     for all l ∈ L_A do ▷ Loop over labels
7:       With probability p_node change the label of l
8:     end for
9:   until M_A has changed
10: end for
11: return O∗
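A compact Python sketch of the overall loop (self-contained; uniform parent sampling stands in for the linear ranking of Procedure 3, and the architecture representation and mutation are left abstract):

```python
import random

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def ndom(archive):
    """Keep only non-dominated (architecture, objectives) pairs."""
    return [(a, f) for (a, f) in archive
            if not any(dominates(g, f) for (_, g) in archive if g is not f)]

def semoa(objective, random_arch, perturb, n_iters=100, lam=8, init_size=20):
    """Simplified SEMOA: maintain a non-dominated archive, draw lam parents,
    perturb and evaluate them, then re-filter. Hypervolume-based ranking
    (Procedure 3) is replaced by uniform sampling for brevity."""
    archive = [(a, objective(a)) for a in (random_arch() for _ in range(init_size))]
    archive = ndom(archive)
    for _ in range(n_iters):
        parents = [random.choice(archive)[0] for _ in range(lam)]
        offspring = [perturb(p) for p in parents]
        archive = ndom(archive + [(o, objective(o)) for o in offspring])
    return archive
```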
A.2 MULTI-OBJECTIVE OPTIMISATION BASELINES
Hyperparameters for the MOO baseline methods. All baseline methods utilise the tabular benchmarks of EC-NAS-Bench for exploring and optimising architectures. The methods' hyperparameters are chosen to circumvent unfair advantages gained through increased compute time, e.g., the number of iterations or function evaluations. Although we allocate similar resources to the baseline methods, it is difficult to guarantee fairness when comparing them, given the disparity in their algorithmic approaches.
The bag-of-baselines implementation discussed in Izquierdo et al. (2021) is used and modified for compatibility with the tabular benchmarks of EC-NAS-Bench. Each experiment is run for 10 trials using different initial seeds. All developed code will be made public after the blind-review period ends.
Random Search. The baseline methods, except for Random Search, apply evolutionary search heuristics to optimise architectures in the search space. The random search implementation samples architectures from the search space uniformly at random, each time querying an architecture for a random epoch budget. Random search is run for 1000 iterations, since the other baseline methods, where applicable, also run for 1000 iterations.
Speeding up Evolutionary Multi-Objective Algorithm (SH-EMOA). As with all our baselines, we use the implementation in Izquierdo et al. (2021). We define a problem and search space following the bag-of-baselines API to allow model evaluation for different epoch budgets simply by querying the tabular benchmarks of EC-NAS-Bench. We initialise the algorithm with a population size of 250 and restrict the search to 1000 function evaluations for budgets between 4 and 108. However, we force the algorithm to only use the budgets 4, 12, 36 and 108, which are available in our search space. The remaining hyperparameters are left at their defaults, which cover a uniform mutation type for architecture perturbation and tournament-style parent selection for offspring generation.
Mixed Surrogate Expected Hypervolume Improvement (MS-EHVI). This evolutionary algorithm, too, is initialised with a population size of 250. We choose to generate 50 samples to lessen computation time, and we merely pass an auxiliary function to discretise parameters to fit the experimental setup using tabular benchmarks.
Procedure 3 LinearRankSample(P, λ)
Input: set P ⊂ X of candidate solutions; number λ of elements to be selected; reference point r ∈ R^m; parameter controlling the preference for better-ranked points η⁺ ∈ [1, 2]
Output: O ⊂ P, |O| = λ

1: O ← ∅
2: for i ← 1 to m do
3:   O ← O ∪ argmin_{x ∈ P} f_i(x) ▷ Always add extremes
4: end for
5: Compute Δ_r(x, P) for all x ∈ P ▷ Compute contributing hypervolume
6: Sort P according to Δ_r(x, P)
7: Define a discrete probability distribution π over P, where π_i = (1/|P|) (η⁺ − 2(η⁺ − 1)(i − 1)/(|P| − 1)) is the probability of the element x_i with the i-th largest contributing hypervolume
8: for i ← 1 to λ − m do ▷ Randomly select remaining points
9:   Draw x ∼ π ▷ Select points with larger Δ_r with higher probability
10:  O ← O ∪ {x}
11: end for
12: return O
Simple Evolutionary Multi-Objective Algorithm (SEMOA). Our MOO algorithm is described in subsection A.1. The key hyperparameters are the initial population size, which we set to 250 as for the baseline methods, and, likewise, we run the algorithm for 1000 iterations.
B MEASUREMENTS FROM CARBONTRACKER
We modify the open-source tool Carbontracker (Anthony et al., 2020) to measure the additional metrics reported in Table 1. Measurements take into account the energy usage of graphics processing units (GPUs), central processing units (CPUs), and dynamic random access memory (DRAM). Note that the energy usage for CPUs includes the power usage of DRAM. Power usage information is monitored, logged every 10 seconds, and reported as the average power usage during model training. Power is measured in watts (W) and averaged over 10-second intervals during model training. The integral of power over a time interval, i.e., the energy, is then reported in units of kilowatt-hours (kWh), with 1 kWh = 3.6 · 10^6 joules (J). Additionally, the emission of greenhouse gasses (GHG) is measured in equivalent units of grams of carbon dioxide (CO2eq). The CO2eq is estimated by using the carbon intensity (the CO2eq emitted to produce one kilowatt-hour (kWh) of electricity) to express the carbon footprint of model training. The carbon intensity values are fetched from the carbon intensity data provider every 15 minutes during model training.
Measurements from the aforementioned components alone do not give an accurate depiction of the carbon footprint of model training, since the energy consumption of the supporting infrastructure (e.g., the data centre) is not considered. Therefore, the energy and carbon footprint estimates are adjusted by multiplying the power measurements by the PUE of the data centre hosting the compute resources. We use a PUE of 1.59, the global average for data centres in 2020 (Ascierto & Lawrence, 2020).
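The resulting footprint arithmetic is simple; a sketch using the constants from this appendix (the carbon intensity value below is illustrative):

```python
PUE = 1.59  # global average data-centre PUE for 2020 (Ascierto & Lawrence, 2020)

def footprint(device_energy_kwh: float, carbon_intensity_g_per_kwh: float):
    """Scale the measured device energy by the PUE, then convert the total
    energy to grams of CO2eq using the grid's carbon intensity."""
    total_kwh = device_energy_kwh * PUE
    co2eq_g = total_kwh * carbon_intensity_g_per_kwh
    return total_kwh, co2eq_g

# 0.5 kWh measured across GPU/CPU/DRAM at 120 gCO2eq/kWh (illustrative).
print(footprint(0.5, 120.0))  # -> (0.795, 95.4)
```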
C ADDITIONAL RESULTS
The results in Figure 4 and Figure 5 were reported for the 5V and 7V spaces, respectively. The EC-NAS-Bench dataset also contains the complete 4V space. In this section, we report the MOO solutions based on the 4V search space. The trends observed for the 5V and 7V spaces also hold for this smaller space.
D SURROGATE ENERGY MODEL
The MLP-based surrogate model used to predict the training energy consumption E of the 7V space is given as f_θ(·) : x ∈ R^F → E ∈ R, where θ are the trainable parameters and x comprises the features obtained from the architecture specifications. Using the cell/graph encoding of architectures introduced in Section 2.1, we populate x with the upper-triangular entries of the adjacency matrix, the operations {input, 1x1conv, 3x3conv, 3x3maxpool, output} mapped to the categorical variables [1, 2, 3, 4, 5], respectively, and the total number of parameters. For the 7V space this results in x ∈ R^{36}. We use a simple four-layered MLP with gelu(·) activation functions, except for the final layer, which transforms the input in the sequence 36 → 128 → 64 → 32 → 1. The surrogate energy model is trained using actual energy measurements from 4300 randomly sampled architectures from the 7V space. The model was implemented in PyTorch (Paszke et al., 2019) and trained on an NVIDIA RTX 3060 GPU. Using a training, validation and test split of ratio [0.6, 0.1, 0.3], we train f_θ(·) for 200 epochs with an initial learning rate of 5 × 10^{-3} to minimise the L1-norm loss between the predicted and actual energy measurements using the Adam optimiser (Kingma & Ba, 2015).

| 1. What is the main contribution of the paper, and how does it extend NASBench-101?
2. What are the strengths and weaknesses of the proposed MOO algorithm?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What is the reviewer's concern regarding the energy efficiency in few-shot scenarios? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper presents a new benchmark that extends NAS-Bench-101 with training energy footprints. The authors also present an MOO algorithm to perform NAS.
Strengths And Weaknesses
Strength:
curate a new dataset and perform NAS on it.
the authors also evaluate their algorithms on the NAS benchmark.
the paper is quite easy to follow, it is more like a tool paper that proposes a new benchmark for others to use.
Weakness:
I don't think it is necessary for such a paper to introduce a new optimizer and then spend a large amount of content on it. Instead, I think it is more reasonable for the authors to reuse the latest MOO algorithms, such as [1] and [2], to perform NAS on their dataset and then release them as baselines.
[1] Zhao, Yiyang, et al. "Multi-objective Optimization by Learning Space Partitions." arXiv preprint arXiv:2110.03173 (2021).
[2] Daulton, Samuel, Maximilian Balandat, and Eytan Bakshy. "Parallel bayesian optimization of multiple noisy objectives with expected hypervolume improvement." Advances in Neural Information Processing Systems 34 (2021): 2187-2200.
The experiments spend too much effort on the optimizer and not enough on the dataset. I'd like to see more analysis of the dataset, since I assume the focus of this paper is introducing a new dataset.
Clarity, Quality, Novelty And Reproducibility
I'd like the authors to clarify the following question. Currently there are training-based evaluation (most accurate but most expensive), one-shot evaluation (least accurate but least expensive), and few-shot evaluation (between one-shot and training-based) [3]. In one-shot or few-shot NAS, users only train one or a few supernets, so it should be the most energy-efficient. How do we quantify the energy efficiency in few-shot scenarios [3]? Thank you.
[3] Zhao, Yiyang, et al. "Few-shot neural architecture search." International Conference on Machine Learning. PMLR, 2021. |
ICLR | Title
Energy Consumption-Aware Tabular Benchmarks for Neural Architecture Search
Abstract
The demand for large-scale computational resources for Neural Architecture Search (NAS) has been lessened by tabular benchmarks for NAS. Evaluating NAS strategies is now possible on extensive search spaces and at a moderate computational cost. But so far, NAS has mainly focused on maximising performance on some hold-out validation/test set. However, energy consumption is a partially conflicting objective that should not be neglected. We hypothesise that constraining NAS to include the energy consumption of training the models could reveal a subspace of undiscovered architectures that are more computationally efficient with a smaller carbon footprint. To support the hypothesis, an existing tabular benchmark for NAS is augmented with the energy consumption of each architecture. We then perform multi-objective optimisation that includes energy consumption as an additional objective. We demonstrate the usefulness of multi-objective NAS for uncovering the trade-off between performance and energy consumption as well as for finding more energy-efficient architectures. The updated tabular benchmark, EC-NAS-Bench, is open-sourced to encourage the further exploration of energy consumption-aware NAS.
1 INTRODUCTION
The design of neural architectures is a complex task. While general guidelines for producing suitable neural architectures have been proposed, neural architecture design still requires expert domain knowledge, experience, and not least substantial effort (Philipp, 2021; Zoph & Le, 2016; Ren et al., 2020). This led to an upsurge in research on automated exploration and design of neural architectures cast as an optimisation problem – neural architecture search (NAS) (Baker et al., 2016; Zoph & Le, 2016; Real et al., 2017).
NAS strategies explore neural architectures in a predefined search space relying on model training and evaluation to determine the model’s fitness (i.e., validation/test set score) to adjust the search strategy and extract the best performing architecture (Ren et al., 2020). NAS strategies have shown great promise in discovering novel architecture designs yielding state-of-the-art model performance (Liu et al., 2017; 2018; Lin et al., 2021; Baker et al., 2017). However, it can be prohibitively expensive to perform NAS (Tan & Le, 2019b) due to the demand for large-scale computational resources and the associated carbon footprint of NAS (Schwartz et al., 2019; Anthony et al., 2020).
The introduction of tabular benchmarks for NAS significantly lessened the computational challenges mentioned above by facilitating the evaluation of NAS strategies on a limited search space of architectures (Klein & Hutter, 2019; Dong & Yang, 2020). Predictive models and zero- and one-shot models (Wen et al., 2019; Lin et al., 2021; Zela et al., 2020) have reduced time-consuming model training and thereby increased the efficiency of NAS strategies. Most recently, surrogate NAS benchmarks (Zela et al., 2022) have been proposed for arbitrary expansion of architecture search spaces for NAS.
Notwithstanding the aforementioned major contributions to the advancement of NAS research, the prime objective of NAS has been maximising a performance objective on some hold-out test/validation test. NAS strategies can be evaluated effectively, yet the search strategies do not intentionally aim to find computationally efficient architectures. That is, the NAS may efficiently determine model performance at a moderate computational cost, but energy efficiency is generally not an objective of NAS.
We hypothesise that adding the energy consumption of training models as a NAS objective could reveal a sub-space of computationally efficient models that also have a smaller carbon footprint. In order to find efficient architectures without sacrificing cardinal performance requirements, we propose the use of NAS strategies that will optimise for multiple objectives.
Our main contributions.
1. We provide an energy consumption-aware tabular benchmark for NAS based on NASBench-101 (Ying et al., 2019). For each architecture, we added its training energy consumption, power consumption and carbon footprint. We hope that the new data set will foster the development of environmentally friendly deep learning systems.
2. We also introduce a surrogate energy model to predict the training energy cost for a given architecture in a large search space (about 423k architectures)
3. To exemplify the use of the new benchmark, we devise a simple multi-objective optimisation algorithm for NAS and apply it to optimise generalisation accuracy as well as energy consumption.
4. We demonstrate the usefulness of multi-objective architecture exploration for revealing the trade-off between performance and energy efficiency and for finding efficient architectures obeying accuracy constraints. This is also demonstrated with other baseline multi-objective methods.
2 ENERGY CONSUMPTION-AWARE BENCHMARKS - EC-NAS-Bench
Our energy consumption-aware tabular benchmark EC-NAS-Bench is based on Nas-Bench-101 (Ying et al., 2019). We closely follow their specification of architectures; however, the search space of architectures that are considered, the evaluation approach and the metrics provided for each architecture is different. This section will briefly present EC-NAS-Bench and its differences to NAS-Bench-101.
2.1 ARCHITECTURE DESIGN
Network Topology. All architectures considered are convolutional neural networks (CNNs) designed for the task of image classification on CIFAR-10 (Krizhevsky, 2009). Each neural network comprises a convolutional stem layer followed by three repeats of three stacked cells and a downsampling layer. Finally, a global pooling layer and a dense softmax layer are used. The space of architectures, X, is limited to the topological space of cells, where each cell is a configurable feedforward network.
Cell Encoding. The individual cells are represented as directed acyclic graphs (DAGs). Each DAG, G(V,M), has N = |V | vertices (or nodes) and edges described in the binary adjacency matrix M ∈ {0, 1}N×N . The set of op-
erations (labels) that each node can realise is given by L′ = {input, output} ∪ L, where L = {3x3conv, 1x1conv, 3x3maxpool}. Two of the N nodes are always fixed as input and output to the network. The remaining N − 2 nodes can take up one of the labels in L. The connections between nodes of the DAG are encoded in the upper-triangular adjacency matrix with no self-connections (zero main diagonal entries). For a given architecture, A, every entry αi,j ∈ MA denotes an edge, from node i to node j with operations i, j ∈ L and its labelled adjacency matrix, LA ∈ MA × L′.
Search space. The number of DAGs grows exponentially with N and L (Ying et al., 2019). We restrict the search space in EC-NAS-Bench by imposing |V | ≤ 5 and |A ̸= 0| ≤ 9, referred to as the 5V space. The search space with |V | ≤ 4 called 4V space is also considered. In contrast, NAS-Bench-101 considers the search space for |V | ≤ 7. With these imposed restrictions on the
search space of EC-NAS-Bench, 91, 2532 and 423k unique architectures are identified from the 4V, 5V and 7v spaces, respectively.
2.2 ENERGY CONSUMPTION-AWARENESS
Resource-constrained NAS for obtaining efficient architectures has been explored mainly by optimising the total number of floating point operations (FPOs) (Tan & Le, 2019a). Optimising for FPOs, however, might not be entirely indicative of the efficiency of models (Henderson et al., 2020). It has been reported that models with fewer FPOs have bottleneck operations that can consume the bulk of the training time (Howard et al., 2017), and some models with high FPOs have lower inference time (Jeon & Kim, 2018). Energy consumption optimised hyperparameter selection outside of NAS settings for large language models has been recently investigated in Puvis de Chavannes et al. (2021).
The energy consumption during the training of a model encapsulates facets of architecture efficiency that are not entirely taken into consideration when using standard resource constraints such as FPOs, computational time and the number of parameters. Energy consumption accounts for both hardware and software variations in the experimental set-ups. To foster a new direction for NAS to find more efficient architectures, we use energy consumption as the additional objective along with standard performance measures.
2.3 QUANTIFYING ENERGY CONSUMPTION
About 75% of the total energy costs during training a neural network are incurred by hardware accelerators such as graphics processing units (GPUs) or tensor processing units (TPUs) (Dodge et al., 2022). The remaining energy consumption is mainly due to the central processing units (CPUs) and dynamic random access memory (DRAM). Additional energy consumed by the supporting infrastructure, such as cooling- and power systems and dissipation, is usually accounted for by the power usage effectiveness (PUE), which is an overhead factor. Several open-source tools have been published in the past couple of years, such as experiment-impact-tracker (Henderson et al., 2020), Carbontracker (Anthony et al., 2020) and CodeCarbon (Schmidt et al., 2021) provide convenient ways to track and log the energy consumption of neural networks by taking these factors into consideration.
In EC-NAS-Bench, the energy consumption of training and evaluating the neural architectures is estimated by modifying the tool Carbontracker (Anthony et al., 2020). Our version of the tool monitors the GPUs, CPUs and DRAM and estimates the total energy costs, E (kWh), aggregate carbon footprint (kgCO2eq) based on the instantaneous carbon intensity of the regions and the total computation time, T (s). The complete set of metrics that are measured and reported in EC-NAS-Bench are listed in Table 1.
2.4 ARCHITECTURE PERFORMANCE AND EFFICIENCY
Training Pipeline. Architectures from the 4V and 5V space are trained on CIFAR-10 (Krizhevsky, 2009) using 40k samples and evaluated on 10k validation and test samples (60k total). Each model is trained on an in-house Slurm cluster on a single NVIDIA Quadro RTX 6000 GPU with 24 GB memory and two Intel CPUs. The training strategy, or hyper-parameter setting, is similar to that of NAS-Bench-101 (Klein & Hutter, 2019). Pre-
dicting the energy consumption of longer model runs from a few training epochs has been shown to be robust when performed on the same hardware (Anthony et al., 2020). To refrain from retraining and re-evaluating all the models in NAS-Bench-101, we train each model for only 4 epochs and then obtain surrogate time and energy measurements by linear scaling. We then tabulate these measurements along with the corresponding mean performance metrics for each model from NASBench-101 and obtain metrics for training and evaluating each model for 12, 36 and 108 epochs.
Metrics. We report the operations, no. parameters, and performance metrics in EC-NAS-Bench, as in NAS-Bench-101, and additionally, we include efficiency measures in terms of energy consumption and the carbon footprint for training each model. The primary focus for efficiency metrics is to quantify the resource costs specific to model training; however, we also report the total resource costs, which include computational overhead, e.g., data movements. For completeness, we also provide carbon intensity measures at training time, timestamp, and average energy consumption of computing resources. We have made the metrics of each architecture readily accessible to encourage the development of NAS strategies for exploring efficient architectures. The metrics reported relevant to this work can be seen in Table 1.
2.5 SURROGATE DATASET FOR 7V-SPACE
The 4V and 5V search spaces are the primary spaces used in this work to reduce the overall resource consumption to populate the energy measurements in the tabular benchmark datasets. However, even the 5V space has only a fraction of possible architectures compared to the 7V space published in Ying et al. (2019), which has about 423k architectures. Computing the energy consumption as done for 4V and 5V datasets on the 7V space is prohibitively expensive1.
We instead sample a subset of architectures from the 7V space and obtain the actual energy costs for 4300 architectures. Using these measurements we train a multi-layered perceptron (MLP) based surrogate energy prediction model. The MLP takes the graph-encoded architecture and the number of parameters as input and predicts the energy consumption for a given number of epochs. This surrogate model is similar to recent surrogate NAS methods that have shown to be more efficient Zela et al. (2022). Details of the surrogate model used to predict the energy measurements for the 7V space are provided Appendix D.
The resulting surrogate 7V dataset with the energy measurements yields a close approximation of the actual training energy costs as shown in Figure 2-a). The Pearson correlation between the actual and predicted energy measurements is 0.9977. In Figure 2-b), we also show that the mean absolute error of the predicted- and actual energy measurements plateau with about 3000 architectures, justifying its use to predict on the remaining 7V space. The standard deviation is estimated over 10 random initialisations of the surrogate model per training dataset size.
2.6 INFLUENCE OF HARDWARE ON EC-NAS-Bench
The energy consumption of the architectures in the 4V and 5V spaces were obtained on a single RTX Quadro 6000 GPU. While the energy measurements tabulated in EC-NAS-Bench are specific to these hardware settings, we argue that the trends across the architectures hold independent of the actual hardware used. To demonstrate this, we trained the architectures in the 4V space on four different (Nvidia) GPUs spanning multiple generations: Titan XP, RTX 3060, RTX 3090 and RTX Quadro 6000.
While the energy consumed by each model on specific hardware is different, the trends compared to other models are maintained across different GPUs. This is captured in Figure 3, where the energy
1Our estimates showed that it would require 770 GPU days of compute.
consumption for each architecture in the 4V space on all four GPUs is reported. This trend confirms the fact that when NAS is constrained on energy consumption and performance the resulting models would remain the same irrespective of the specific hardware used.
3 NAS STRATEGIES WITH EC-NAS-Bench
Given a tabular benchmark which can be used to query for model training energy consumption in addition to other standard metrics such as in EC-NAS-Bench, NAS strategies can be used to search for energy-efficient architectures. We next present multi-objective optimisation as a suitable strategy to uncover the trade-off between performance and efficiency, which supports an energyaware architecture choice.
3.1 MULTI-OBJECTIVE OPTIMISATION
Multi-objective optimisation (MOO) simultaneously optimises several, potentially conflicting objectives. The goal of MOO is to find or to approximate the set of Pareto-optimal solutions, where a solution is Pareto-optimal if it cannot be improved in one objective without getting worse in another.
In this work, we introduce a simple evolutionary MOO algorithm (SEMOA) based on Krause et al. (2016). The algorithm is simple, but derived from canonical principles of derivative-free multicriteria optimisation, such as hypervolume maximisation. Details of SEMOA are presented in Appendix A.2. We also use several existing MOO algorithms: random search, Speeding up Evolutionary Multi-Objective Algorithms (SHEMOA) and Mixed Surrogate Expected Hypervolume Improvement (MSEHVI), implemented in Izquierdo et al. (2021) to demonstrate the usefulness of EC-NAS-Bench .
3.2 EVALUATION OF NAS STRATEGIES
Experimental Setup. We conduct experiments on EC-NAS-Bench by adapting the presented MOO-algorithm to perform both single-objective optimisation (SOO) and MOO. In the former, we will naturally find only one solution when optimising a single objective. In contrast, when optimising multiple, diverse objectives, we will find the empirical Pareto-front in the latter. We run the algorithm in the 4V and 5V space of models trained for 108 epochs. The optimisation is performed over 100 evolutions with a population size of 20. All the experiments are conducted on a desktop workstation with a single NVIDIA RTX 3090 GPU with 24GB memory and Intel(R) Core(TM) i7-10700F CPU @ 2.90GHz.
Performance Criteria. For the multi-objective optimisation, we use the validation accuracy (Pv) and the training energy cost, E (kWh), as the two objectives to be jointly optimised using the MOO algorithm. For the single-objective optimisation, we only use Pv as the performance objective. We use energy cost rather than, e.g., training time, considering that E is agnostic to parallel computing. We note that it is possible to use any of the provided metrics in Table 1 for the purpose of single- and multi-objective optimisation. As the MOO algorithm minimises the objectives, we simply use the negative of the objectives in cases where the quantities are to be maximised; for instance, we optimise −Pv as accuracy is a maximisation objective.

Training costs. In aggregate, EC-NAS-Bench had a total estimated training cost of 124.214 GPU days, 2021.02 kWh and 259.047 kgCO2eq for the 5V space. The 4V space had a total estimated training cost of 3.854 GPU days, 63.792 kWh and 5.981 kgCO2eq. The actual training costs for the 5V space were only 3.105 GPU days, 50.525 kWh and 6.476 kgCO2eq. Actual training costs of the 4V space were 0.096 GPU days, 1.594 kWh and 0.149 kgCO2eq.
In total, we saved an estimated compute cost of 121.109 GPU days, 1970.495 kWh and 252.571 kgCO2eq for the 5V space, and 3.758 GPU days, 48.931 kWh and 6.327 kgCO2eq for the 4V space. We obtain an ≈97% reduction in computing resources and energy consumption across all efficiency measures.
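As a quick consistency check for the 5V space, the savings above follow by subtracting the actual from the estimated costs: 124.214 − 3.105 = 121.109 GPU days, 2021.02 − 50.525 = 1970.495 kWh, and 259.047 − 6.476 = 252.571 kgCO2eq; since 121.109/124.214 ≈ 0.975, roughly 97% of the estimated compute is avoided.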
4 RESULTS
Multi-objective exploration of 5V space. The key results from the experiments on EC-NAS-Bench using the multi-objective optimisation of E and −Pv are shown in Figure 4-a), b) and c). Pareto fronts over multiple random initialisations of the four MOO algorithms (SEMOA (ours), Random Search, SH-EMOA and MS-EHVI) are visualised as attainment curves in Figure 4-a), which summarise the median solutions attained over the multiple runs (Fonseca et al., 2001). All the MOO algorithms are able to explore the search space reasonably well, yielding attainment curves that largely look similar.
The Pareto front obtained from our MOO algorithm, SEMOA, for one run is shown in Figure 4-b). It also shows the extrema (r0, r1) on both ends of the front, each preferring one of the objectives, whereas the knee point (rk) offers the best trade-off between the two objectives. These three points are shown in different colours and markers: the two extrema (Ar0/blue, Ar1/green) and the knee point (Ark/yellow). We compute the bend-angles to find the knee point, as suggested by Deb & Gupta (2011).
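As a sketch of the knee-point selection, the following implements one plausible reading of the bend-angle heuristic on a two-objective front; the exact variant of Deb & Gupta (2011) used in the paper may differ:

```python
# Sketch: knee-point selection on a two-objective Pareto front via bend
# angles. One plausible reading of the bend-angle idea of Deb & Gupta
# (2011); the exact variant used in the paper may differ.
import numpy as np

def knee_point(front: np.ndarray) -> int:
    """front: (n, 2) array of minimisation objectives, n >= 3.
    Returns the index (into the front sorted by the first objective)
    of the point with the sharpest bend."""
    f = front[np.argsort(front[:, 0])]                      # sort by first objective
    f = (f - f.min(axis=0)) / (np.ptp(f, axis=0) + 1e-12)   # normalise to [0, 1]
    bend = np.full(len(f), -np.inf)
    for i in range(1, len(f) - 1):
        u, v = f[i - 1] - f[i], f[i + 1] - f[i]             # vectors to neighbours
        cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
        bend[i] = np.pi - np.arccos(np.clip(cos, -1.0, 1.0))
    return int(np.argmax(bend))
```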
The architectures corresponding to the two extrema (Ar0, Ar1) and the corresponding knee point (Ark) for a single MOO run are visualised in the radar plot in Figure 4-b). The exact performance metrics for the three models in Figure 4-b) are also reported in Table 2. The solution covering the largest area is one of the extremal points (Ar0, blue), with high accuracy (0.944) but also a larger footprint in energy consumption (1.032 kWh), computation time (5482.78 s) and number of parameters (21.22M) compared to the other extremum (Ar1, green) or the knee point (Ark, yellow). The model corresponding to the knee point (Ark) provides a large reduction in energy consumption (0.324 kWh) at the expense of a small reduction in performance (0.932).
Single-objective exploration. We optimise only the validation accuracy, Pv, to simulate standard NAS practices. The resulting solution is shown in the last row of Table 2. This SOO model achieves the highest validation accuracy (0.944). However, the footprint of the solution along the energy consumption, computation time and number-of-parameters axes is larger than that of the solutions from the MOO algorithm.
Multi-objective exploration of 7V space. The MOO results for the surrogate 7V space resemble the trends observed in the 5V space, as shown in Figure 5. As with the 5V space, the attainment curves of all four MOO algorithms look similar. Visibly, the MS-EHVI method seems to underperform, as indicated by the protrusion around the knee point, whereas the other methods largely overlap. A single Pareto front of SEMOA is also shown in Figure 5-b), with trends comparable to those of the 5V space results in Figure 4.
5 DISCUSSIONS
Single versus multi-objective optimisation. The performance trends of the SOO and MOO solutions are clearly captured in Table 2. The knee point solution, Ark, from MOO yields an architecture that consumes about 70% less energy and has only about 1% degradation in performance. Depending on the downstream tasks, this could be a reasonable trade-off. If the degradation in performance cannot be tolerated, the Pareto front offers other candidate solutions for practitioners to choose from. For instance, the extremum solution (Ar0) offers essentially the same performance as the SOO solution while consuming about 32% less energy.
Training time is not an alternative to energy consumption. The original NAS-Bench-101 already reports the training time (Ying et al., 2019). In single-hardware regimes, this could serve as a proxy for energy consumption, as training time mostly correlates with it. However, as most neural architecture training is performed on multiple GPUs with large-scale parallelism, training time alone cannot capture the efficiency of models. Aggregate energy consumption can take parallel hardware and the associated overheads into consideration. Even in single-GPU training settings, optimising energy consumption can still favour energy-efficient models. For instance, a small architecture trained on a large GPU still has a larger energy consumption due to the under-utilisation of the hardware resources. In such instances, a larger model could (to a certain extent) yield more performance improvement for the total energy consumed (Pv/E).

Energy-efficient tabular NAS benchmark for obtaining efficient architectures. Tabular benchmarks such as NAS-Bench-101 (Ying et al., 2019) were introduced to reduce the resources required to perform NAS. However, even the one-time cost of generating a tabular benchmark dataset is massive. Surrogate NAS benchmarks are being studied to alleviate these costs, where the models are not exhaustively trained and evaluated. Instead, the performance metrics of architectures are estimated based on smaller training costs. For instance, this is achieved using predictive modelling based on learning curves (Yan et al., 2021), gradient approximations (Xu et al., 2021), or by fitting surrogate models to a subset of architectures (Zela et al., 2022). Similar to these attempts, the proposed EC-NAS-Bench dataset does not train all the models but bases its predictions on training the models only for 4 epochs, as described in Section 2.4. This results in about a 97% reduction compared to creating the dataset from scratch, as shown in Table 3. Thus, EC-NAS-Bench is an energy-efficient tabular benchmark that can be used to obtain energy-efficient architectures, as demonstrated in Section 4.
Carbon-footprint aware NAS. The EC-NAS-Bench dataset reports several metrics per architecture, as shown in Table 1. Combinations of these metrics and the use of MOO could allow for the exploration of architecture spaces that have interesting properties. For instance, NAS can be performed to directly optimise the carbon footprint of deep learning models. Although instantaneous energy consumption and carbon footprint are linearly correlated, when measured over a longer duration (>5 min) these quantities differ due to the fluctuations of the instantaneous carbon intensity (Anthony et al., 2020). These carbon intensity fluctuations are caused by the variations of the power sources to the grid (Henderson et al., 2020). This can have implications when training models for a longer duration or on cloud instances that can be distributed over data centres in different countries (Dodge et al., 2022). By reporting the instantaneous and aggregate carbon footprint of model training in EC-NAS-Bench, we facilitate the possibility of carbon footprint-aware NAS (Selvan et al., 2022). In this work, we focused only on energy consumption awareness to work around the temporal and spatial variations of the carbon intensity.
Energy Consumption aware Few-shot NAS. Tabular benchmarks such as NAS-Bench-101 (Ying et al., 2019) provide an efficient way to explore different NAS strategies where the model training cost is incurred only once. One restriction of such tabular benchmarks is that they are specific to a set of architectures (e.g., feedforward convolutional neural networks) and datasets (e.g., CIFAR-10). Developing tabular benchmarks for all possible network architectures and datasets is alleviated using one- or few-shot learning methods (Zhao et al., 2021; Zela et al., 2020). Integrating surrogate models for predicting learning dynamics (Zela et al., 2022) and energy measurements using the surrogate model in Section 2.5 could bridge the divide between few-shot and surrogate tabular benchmark datasets that are also energy consumption-aware. We have demonstrated the integration of surrogate energy models with existing tabular benchmark datasets, and extending these to surrogate benchmark datasets is straightforward.
Limitations. Constraining the number of vertices in the DAGs results in sparser search spaces for the optimisation strategy. The optimisation strategy will therefore be more sensitive to initialisation and choice of random seeds, and the empirical Pareto front will appear to be more rigid, as seen in the attainment plot in Figure 4-c), even when multiple initialisations and trials are carried out. We also only trained architectures exhaustively in the 4V and 5V spaces; the 7V space relies on the surrogate energy model.
To reduce the computation cost, in EC-NAS-Bench we used the surrogate time and energy measurements that do not model training time variability. We also query the performance metrics from the three repeats of NAS-Bench-101 and update EC-NAS-Bench with their mean performance metrics.
All these limitations are primarily driven by the need to minimise the energy consumption of these experiments. While they come at the expense of variability, we argue that the resulting reduction in energy consumption justifies these choices. Further, results from such small-scale experiments have been shown to extend to larger spaces of architectures (Ying et al., 2019).
6 CONCLUSIONS AND FUTURE WORK
In this work, we presented an updated tabular benchmark dataset, EC-NAS-Bench, which tabulates the energy consumption and carbon footprint of training models, in addition to standard performance measures. Using multi-objective optimisation strategies, we showed that Pareto-optimal solutions offer appealing trade-offs between the performance measures and the energy consumption of model training. We showed that large reductions (about 70%) in energy consumption are possible with <1% reduction in performance.
In addition to providing energy consumption measures, the EC-NAS-Bench benchmark provides metrics such as average carbon footprint and power consumption of CPUs, GPUs and DRAM. We hope this will foster interest in the development of models that are efficient and environmentally friendly by optimising for their energy consumption and carbon footprint.
A MULTI-OBJECTIVE OPTIMISATION
Formally, let the MOO problem be described by f : X → R^m, f(x) ↦ (f_1(x), …, f_m(x)). Here X denotes the search space of the optimisation problem and m refers to the number of objectives. We assume w.l.o.g. that all objectives are to be minimised. For two points x, x′ ∈ X we say that x′ dominates x and write x′ ≺ x if ∀i ∈ {1, …, m} : f_i(x′) ≤ f_i(x) ∧ ∃j ∈ {1, …, m} : f_j(x′) < f_j(x). For X′, X′′ ⊆ X we say that X′ dominates X′′ and write X′ ≺ X′′ if ∀x′′ ∈ X′′ : ∃x′ ∈ X′ : x′ ≺ x′′. The subset of non-dominated solutions in a set X′ ⊆ X is given by ndom(X′) = {x | x ∈ X′ ∧ ∄x′ ∈ X′ \ {x} : x′ ≺ x}. The Pareto front of a set X′ ⊂ X is defined as F(X′) = {f(x) | x ∈ ndom(X′)} and, thus, the goal of MOO can be formalised as approximating F(X). In iterative MOO, the strategy is to step-wise improve a set of candidate solutions towards a sufficiently good approximation of F(X).

For the design of a MOO algorithm, it is important to have a way to rank two sets X′ and X′′ w.r.t. the overall MOO goal even if neither X′ ≺ X′′ nor X′′ ≺ X′. This ranking can be done by the hypervolume measure. The hypervolume measure or S-metric (see Zitzler & Thiele, 1999) of a set X′ ⊆ X is the volume of the union of regions in R^m that are dominated by X′ and bounded by some appropriately chosen reference point r ∈ R^m:

S_r(X′) := Λ( ⋃_{x ∈ X′} [f_1(x), r_1] × ⋯ × [f_m(x), r_m] ),
where Λ(·) is the Lebesgue measure. The hypervolume is, up to weighting objectives, the only strictly Pareto compliant measure (Zitzler et al., 2003) in the sense that given two sets X′ and X′′ we have S(X′) > S(X′′) if X′ dominates X′′. As stated by Bringmann et al. (2013), the worst-case approximation factor of a Pareto front F(X′) obtained from any hypervolume-optimal set X′ with size |X′| = µ is asymptotically equal to the best worst-case approximation factor achievable by any set of size µ, namely Θ(1/µ) for additive approximation and 1 + Θ(1/µ) for relative approximation (Bringmann & Friedrich, 2013). Now we define the contributing hypervolume of an individual x ∈ X′ as

∆_r(x, X′) := S_r(X′) − S_r(X′ \ {x}).

The value ∆_r(x, X′) quantifies how much a candidate solution x contributes to the total hypervolume of X′ and can be regarded as a measure of the relevance of the point. Therefore, the contributing hypervolume is a popular criterion in MOO algorithms (e.g. Beume et al., 2007; Igel et al., 2007; Bader & Zitzler, 2011; Krause et al., 2016). If we iteratively optimise some solution set P, then points x with low ∆_r(x, P) are candidates in an already crowded region of the current Pareto front F(P), while points with high ∆_r(x, P) mark areas that are promising to explore further.
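For intuition, a minimal sketch of the two-objective (m = 2) hypervolume and the contributing hypervolume follows; higher-dimensional hypervolume computation is more involved and not attempted here:

```python
# Sketch: 2-D hypervolume (S-metric) w.r.t. a reference point r, and the
# contributing hypervolume of each point (minimisation in both objectives).
import numpy as np

def hypervolume_2d(points: np.ndarray, r: np.ndarray) -> float:
    """Area dominated by `points` and bounded by reference point r."""
    p = points[np.all(points < r, axis=1)]      # keep points that dominate r
    p = p[np.argsort(p[:, 0])]                  # sweep along the first objective
    hv, prev_f2 = 0.0, r[1]
    for f1, f2 in p:
        if f2 < prev_f2:                        # skip dominated points
            hv += (r[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def contributions(points: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Delta_r(x, X') = S_r(X') - S_r(X' minus {x}) for every x."""
    total = hypervolume_2d(points, r)
    return np.array([total - hypervolume_2d(np.delete(points, i, axis=0), r)
                     for i in range(len(points))])
```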
A.1 SEMOA: SIMPLE EVOLUTIONARY MULTI-OBJECTIVE OPTIMISATION ALGORITHM
In this study, we used a simple MOO algorithm based on hypervolume maximisation, outlined in Algorithm 1 and inspired by Krause et al. (2016). The algorithm iteratively updates a set P of candidate solutions, starting from a set of random network architectures. Dominated solutions are removed from P. Then λ new architectures are generated by first selecting λ architectures from P and then modifying these architectures according to the perturbation described in Procedure 2. The λ new architectures are added to P and the next iteration starts. In Procedure 2, the probability p_edge for changing (i.e., either adding or removing) an edge is chosen such that, in expectation, two edges are changed, and the probability p_node for changing a node is set such that, in expectation, every second perturbation changes the label of a node.
The selection of the λ > m architectures from the current solution set is described in Procedure 3. We always select the extreme points in P that minimise a single objective (thus, the precise choice of the reference point r is of lesser importance). The other λ − m points are randomly chosen, preferring points with higher contributing hypervolume. The points in P are ranked according to their hypervolume contribution. The probability of being selected depends linearly on the rank. We use linear ranking selection (Baker, 1985; Grefenstette & Baker, 1989), where the parameter controlling the slope is set to η⁺ = 2. Always selecting the extreme points and focusing on points with large contributing hypervolume leads to a wide spread of non-dominated solutions.
Algorithm 1 SEMOA for NAS strategy
Input: objective f = (f_1, …, f_m), maximum number of iterations n
Output: set of non-dominated solutions P
1: Initialize P ⊂ X (e.g., randomly) ▷ Initial random architectures
2: P ← ndom(P) ▷ Discard dominated solutions
3: for i ← 1 to n do ▷ Loop over iterations
4:   O ← LinearRankSample(P, λ) ▷ Get λ points from P
5:   O ← Perturb(O) ▷ Change the architectures
6:   Compute f(x) for all x ∈ O ▷ Evaluate architectures
7:   P ← ndom(P ∪ O) ▷ Discard dominated points
8: end for
9: return P
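A minimal Python rendering of Algorithm 1 is sketched below; evaluate stands in for an EC-NAS-Bench query, perturb for Procedure 2, and the selection step is simplified to uniform sampling in place of the linear ranking of Procedure 3 (all names are illustrative):

```python
# Sketch: the SEMOA loop of Algorithm 1. `evaluate` stands in for an
# EC-NAS-Bench query and `perturb` for Procedure 2; selection is
# simplified to uniform sampling instead of Procedure 3.
import random

def dominates(fa, fb):
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

def ndom(P):
    """Discard dominated (architecture, objectives) pairs."""
    return [(x, f) for x, f in P
            if not any(dominates(g, f) for _, g in P if g is not f)]

def semoa(init_archs, evaluate, perturb, n_iters=100, lam=8):
    P = ndom([(x, evaluate(x)) for x in init_archs])
    for _ in range(n_iters):
        parents = random.sample(P, min(lam, len(P)))
        offspring = [perturb(x) for x, _ in parents]
        P = ndom(P + [(x, evaluate(x)) for x in offspring])
    return P
```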
Procedure 2 Perturb(O)
Input: set of architectures O, variation probabilities for edges and nodes p_edge and p_node
Output: set of modified architectures O*
1: for all M_A ∈ O do ▷ Loop over matrices
2:   repeat
3:     for all α_{i,j} ∈ M_A do ▷ Loop over entries
4:       With probability p_edge flip α_{i,j}
5:     end for
6:     for all l ∈ L_A do ▷ Loop over labels
7:       With probability p_node change the label of l
8:     end for
9:   until M_A has changed
10: end for
11: return O*

Procedure 3 LinearRankSample(P, λ)
Input: set P ⊂ X of candidate solutions, number λ of elements to be selected; reference point r ∈ R^m, parameter controlling the preference for better ranked points η⁺ ∈ [1, 2]
Output: O ⊂ P, |O| = λ
1: O ← ∅
2: for i ← 1 to m do
3:   O ← O ∪ argmin_{x∈P} f_i(x) ▷ Always add extremes
4: end for
5: Compute ∆_r(x, P) for all x ∈ P ▷ Compute contributing hypervolume
6: Sort P according to ∆_r(x, P)
7: Define a discrete probability distribution π over P where
     π_i = (1/|P|) (η⁺ − 2(η⁺ − 1)(i − 1)/(|P| − 1))
   is the probability of the element x_i with the ith largest contributing hypervolume
8: for i ← 1 to λ − m do ▷ Randomly select remaining points
9:   Draw x ∼ π ▷ Select points with larger ∆_r with higher probability
10:  O ← O ∪ {x}
11: end for
12: return O

A.2 MULTI-OBJECTIVE OPTIMISATION BASELINES

Hyperparameters for the MOO baseline methods. All baseline methods utilise the tabular benchmarks of EC-NAS-Bench for exploring and optimising architectures. The methods' hyperparameters are chosen to circumvent unfair advantages gained by increased compute time, e.g., number of iterations or function evaluations. Although we allocate similar resources for the baseline methods, it is difficult to reason about fairness when comparing the baselines, considering the disparity in their algorithmic approaches.

The bag-of-baselines implementation discussed in Izquierdo et al. (2021) is used and modified for compatibility with the tabular benchmarks of EC-NAS-Bench. Each experiment is run for 10 trials using different initial seeds. All developed code will be made public once the blind-review period ends.

Random Search. The baseline methods, except for Random Search, apply evolutionary search heuristics to optimise architectures in the search space. The random search implementation samples architectures from the search space uniformly at random, each time querying an architecture for a random epoch budget. Random search is run for 1000 iterations, as the other baseline methods, where applicable, will also run for 1000 iterations.

Speeding up Evolutionary Multi-Objective Algorithm (SH-EMOA). As with all our baselines, we use the implementation in Izquierdo et al. (2021). We define a problem and search space following the bag-of-baselines API to allow model evaluation for different epoch budgets simply by querying the tabular benchmarks of EC-NAS-Bench. We initialise the algorithm with a population size of 250 and restrict the search to 1000 function evaluations for budgets between 4 and 108. However, we force the algorithm to only use budgets 4, 12, 36 and 108, which are available in our search space. The remaining hyperparameters we leave as default, which covers a uniform mutation type for architecture perturbation and tournament-style parent selection for offspring generation.

Mixed Surrogate Expected Hypervolume Improvement (MS-EHVI). This evolutionary algorithm, too, is initialised with a population size of 250. We choose to generate 50 samples to lessen computation time, and we merely pass an auxiliary function to discretise parameters to fit with the experimental setup using tabular benchmarks.
Simple Evolutionary Multi-Objective Algorithm (SEMOA). Our MOO algorithm is described in Appendix A.1. The key hyperparameters are the initial population size, which we set to 250, similar to the baseline methods; likewise, we run the algorithm for 1000 iterations.
B MEASUREMENTS FROM CARBONTRACKER
We modify the open-source tool Carbontracker (Anthony et al., 2020) to measure the additional metrics reported in Table 1. Measurements take into account the energy usage of graphics processing units (GPUs), central processing units (CPUs) and dynamic random access memory (DRAM). Note that the reported energy usage for CPUs includes the power usage of DRAM. Power usage is monitored and logged every 10 seconds; power is reported in watts (W) as the average over these 10-second intervals during model training. Energy, the power integrated over time, is then reported in kilowatt-hours (kWh), with 1 kWh = 3.6·10^6 joules (J). Additionally, the emission of greenhouse gases (GHG) is measured in equivalent units of grams of carbon dioxide (gCO2eq). The CO2eq is estimated using the carbon intensity, i.e., the CO2eq emitted to produce one kilowatt-hour (kWh) of electricity, to express the carbon footprint of model training. The carbon intensity is fetched from a carbon intensity data provider every 15 minutes during model training.
Measurements from the aforementioned components alone do not give an accurate depiction of the carbon footprint of model training, since the energy consumption of the supporting infrastructure (e.g., the data centre) is not considered. Therefore, the energy and carbon footprint estimates are amended by multiplying the power measurements by the PUE of the data centre hosting the compute resources. We use a PUE of 1.59, which is the global average for data centres in 2020 (Ascierto & Lawrence, 2020).
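The accounting described above reduces to a few multiplications; a minimal sketch, where the 250 W average draw and the carbon intensity value are hypothetical and the PUE of 1.59 is from the text:

```python
# Sketch: aggregate energy and carbon accounting as described above.
def training_footprint(avg_power_w: float, duration_s: float,
                       pue: float = 1.59, carbon_intensity: float = 120.0):
    """avg_power_w: mean measured power draw (W) of GPU + CPU + DRAM.
    carbon_intensity: gCO2eq per kWh (hypothetical constant here; the
    modified Carbontracker fetches it live every 15 minutes)."""
    energy_kwh = avg_power_w * duration_s / 3.6e6   # W*s (J) -> kWh
    energy_kwh *= pue                               # data-centre overhead
    co2_g = energy_kwh * carbon_intensity           # gCO2eq
    return energy_kwh, co2_g

# Example: a hypothetical 250 W average draw for 5482.78 s of training.
print(training_footprint(250.0, 5482.78))
```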
C ADDITIONAL RESULTS
The results in Figure 4 and Figure 5 were reported for the 5V and 7V spaces, respectively. The EC-NAS-Bench dataset also contains the complete 4V space. In this section, we report the MOO solutions based on the 4V search space. The trends observed for the 5V and 7V spaces also hold for this smaller space.
D SURROGATE ENERGY MODEL
The MLP-based surrogate model used to predict the training energy consumption of the 7V space, E, is given as f_θ(·) : x ∈ R^F → E ∈ R, where θ are the trainable parameters and x comprises the features obtained from the architecture specifications. Using the cell/graph encoding of architectures introduced in Section 2.1, we populate x with the upper-triangular entries of the adjacency matrix, the operations {input, 1x1conv, 3x3conv, 3x3maxpool, output} mapped to the categorical variables [1, 2, 3, 4, 5], respectively, and the total number of parameters. For the 7V space this results in x ∈ R^36. We use a simple four-layer MLP with gelu(·) activation functions, except for the final layer, which transforms the input in the sequence 36 → 128 → 64 → 32 → 1. The surrogate energy model is trained using actual energy measurements from 4300 randomly sampled architectures from the 7V space. The model was implemented in PyTorch (Paszke et al., 2019) and trained on an Nvidia RTX 3060 GPU. Using a training, validation and test split of ratio [0.6, 0.1, 0.3], we train f_θ(·) for 200 epochs with an initial learning rate of 5 × 10^−3 to minimise the L1-norm loss between the predicted and actual energy measurements using the Adam optimiser (Kingma & Ba, 2015).
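A minimal PyTorch sketch of the surrogate model as described above (class and variable names are illustrative):

```python
# Sketch of the surrogate energy model described above: a four-layer MLP
# with GELU activations, 36 -> 128 -> 64 -> 32 -> 1, trained with an L1
# loss and Adam (lr 5e-3).
import torch
import torch.nn as nn

class SurrogateEnergyModel(nn.Module):
    def __init__(self, in_features: int = 36):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 128), nn.GELU(),
            nn.Linear(128, 64), nn.GELU(),
            nn.Linear(64, 32), nn.GELU(),
            nn.Linear(32, 1),                  # no activation on the output
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)         # predicted energy E (kWh)

model = SurrogateEnergyModel()
optimiser = torch.optim.Adam(model.parameters(), lr=5e-3)
loss_fn = nn.L1Loss()                          # L1 norm between pred and actual
```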
1. What is the focus and contribution of the paper regarding NAS?
2. What are the strengths of the proposed energy consumption-aware tabular benchmark?
3. What are the weaknesses of the paper, particularly in terms of search space modification and result clarity?
4. Do you have any questions or concerns about the multi-objective architecture exploration used in the paper?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
This work proposes an energy consumption-aware tabular benchmark for NAS based on NAS-Bench-101. For each architecture, it adds the training energy consumption, power consumption, and carbon footprint. This work also demonstrates the usefulness of multi-objective architecture exploration for finding energy-efficient architectures without sacrificing much accuracy. The MOO algorithm used is based on existing work (Krause et al., 2016).
Strengths And Weaknesses
Strengths:
- Energy consumption is indeed an important factor to consider in NAS. This benchmark can help facilitate further research and development on energy-efficient NAS.
- This work presents an interesting showcase of using MOO to find energy-efficient architectures.
Questions and concerns:
- This benchmark changed the search space of NAS-Bench-101. For example, instead of evaluating a 7V space, this work tests 5V and 4V spaces. Can the authors justify why this change is made?
- Clarity of the results: The results presented in Figure 2 are not very clear to me. Figure 2a shows the Pareto front obtained from one of the MOO runs. But what do the different markers and colors mean? It is my understanding that the blue, green, and yellow points correspond to the three types of architecture, including two extrema and a knee point. Why is the red one, i.e., the optimal solution in SOO, not shown in the figure?
- It would be great if the authors could establish benchmarked baseline performance on this new benchmark.
Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is clear overall. But there are some places in the empirical evaluation that are not very clear.
Quality: The quality of the presentation can be improved.
Reproducibility: The authors mentioned that the proposed benchmark is open-sourced. This makes me believe the reproducibility is not an issue. |
The resulting surrogate 7V dataset with the energy measurements yields a close approximation of the actual training energy costs as shown in Figure 2-a). The Pearson correlation between the actual and predicted energy measurements is 0.9977. In Figure 2-b), we also show that the mean absolute error of the predicted- and actual energy measurements plateau with about 3000 architectures, justifying its use to predict on the remaining 7V space. The standard deviation is estimated over 10 random initialisations of the surrogate model per training dataset size.
2.6 INFLUENCE OF HARDWARE ON EC-NAS-Bench
The energy consumption of the architectures in the 4V and 5V spaces were obtained on a single RTX Quadro 6000 GPU. While the energy measurements tabulated in EC-NAS-Bench are specific to these hardware settings, we argue that the trends across the architectures hold independent of the actual hardware used. To demonstrate this, we trained the architectures in the 4V space on four different (Nvidia) GPUs spanning multiple generations: Titan XP, RTX 3060, RTX 3090 and RTX Quadro 6000.
While the energy consumed by each model on specific hardware is different, the trends compared to other models are maintained across different GPUs. This is captured in Figure 3, where the energy
1Our estimates showed that it would require 770 GPU days of compute.
consumption for each architecture in the 4V space on all four GPUs is reported. This trend confirms the fact that when NAS is constrained on energy consumption and performance the resulting models would remain the same irrespective of the specific hardware used.
3 NAS STRATEGIES WITH EC-NAS-Bench
Given a tabular benchmark which can be used to query for model training energy consumption in addition to other standard metrics such as in EC-NAS-Bench, NAS strategies can be used to search for energy-efficient architectures. We next present multi-objective optimisation as a suitable strategy to uncover the trade-off between performance and efficiency, which supports an energyaware architecture choice.
3.1 MULTI-OBJECTIVE OPTIMISATION
Multi-objective optimisation (MOO) simultaneously optimises several, potentially conflicting objectives. The goal of MOO is to find or to approximate the set of Pareto-optimal solutions, where a solution is Pareto-optimal if it cannot be improved in one objective without getting worse in another.
In this work, we introduce a simple evolutionary MOO algorithm (SEMOA) based on Krause et al. (2016). The algorithm is simple, but derived from canonical principles of derivative-free multicriteria optimisation, such as hypervolume maximisation. Details of SEMOA are presented in Appendix A.2. We also use several existing MOO algorithms: random search, Speeding up Evolutionary Multi-Objective Algorithms (SHEMOA) and Mixed Surrogate Expected Hypervolume Improvement (MSEHVI), implemented in Izquierdo et al. (2021) to demonstrate the usefulness of EC-NAS-Bench .
3.2 EVALUATION OF NAS STRATEGIES
Experimental Setup. We conduct experiments on EC-NAS-Bench by adapting the presented MOO-algorithm to perform both single-objective optimisation (SOO) and MOO. In the former, we will naturally find only one solution when optimising a single objective. In contrast, when optimising multiple, diverse objectives, we will find the empirical Pareto-front in the latter. We run the algorithm in the 4V and 5V space of models trained for 108 epochs. The optimisation is performed over 100 evolutions with a population size of 20. All the experiments are conducted on a desktop workstation with a single NVIDIA RTX 3090 GPU with 24GB memory and Intel(R) Core(TM) i7-10700F CPU @ 2.90GHz.
Performance Criteria. For the multi-objective optimisation, we use the validation accuracy (Pv) and the training energy cost, E(kWh), as the two objectives to be jointly optimised using the MOO algorithm. For the single-objective optimisation, we only use Pv as the performance objective. We
use energy cost rather than, e.g., training time, considering that E is agnostic to parallel computing. We note that it is possible to use any of the provided metrics in Table 1 for the purpose of singleand multi-objective optimisation. As the MOO algorithm minimises the objectives, we simply use the negative of the objectives in cases where the quantities are to be maximised; for instance, we optimise −Pv as accuracy is a maximisation objective. Training costs In aggregate, EC-NAS-Bench had a total estimated training cost of 124.214 GPU days, 2021.02 kWh and 259.047 kgCO2eq for the 5V space. The 4V space had a total estimated training cost of 3.854 GPU days, 63.792 kWh and 5.981 kgCO2eq. The actual training costs for the 5V space were only 3.105 GPU days, 50.525 kWh and 6.476 kgCO2eq. Actual training costs of the 4V space were 0.096 GPU days, 1.594 kWh and 0.149 kgCO2eq.
In total, we saved an estimated compute cost of 121.109 GPU days, 1970.495 kWh and 252.571 kgCO2eq for the 5V space, and 3.758 GPU days, 48.931 kWh and 6.327 kgCO2eq for the 4V space. We obtain ≈ 97% reduction in computing resources and energy consumption in all efficiency measures.
4 RESULTS
Multi-objective exploration of 5V space. The key results from the experiments on EC-NAS-Bench using the multi-objective optimisation of E and −Pv are shown in Figure 4-a),b) and c). Pareto fronts over multiple random initialisations of the four MOO algorithms: SEMOA (ours), Random Search, SHEMOA, MSEHVI, are visualised as attainment curves in Figure 4-a) which summarises the median solutions attained over the multiple runs(Fonseca et al., 2001). All the MOO algorithms are able to explore the search space reasonably well, yielding attainment curves that largely look similar.
The Pareto front obtained from the our MOO algorithm, SEMOA, for one run is shown in Figure 4- b). It also shows the extrema (r0, r1) on both ends of the front preferring one of the objectives, whereas the knee point (rk) offers the best trade-off between the two objectives. These three points are shown in different colours and markers, where the two extrema (Ar0 /Blue, Ar1 /Green) and the knee point (Ark /yellow). We compute the bend-angles to find the knee point as suggested by Deb & Gupta (2011).
The architectures corresponding to the two extrema (Ar0 ,Ar1 ) and the corresponding knee point (Ark ) for a single MOO run are visualised in the radar plot in Figure 4-b). The exact performance metrics for the three models in Figure 4-b) are also reported in Table 2. The solution covering the largest area is one of the extremal points (Ar0 , blue) with high accuracy (0.944) but also a larger footprint in the energy consumption (1.032kWh), computation time (5482.78s) and the number of
parameters (21.22M) compared to the other extremum (Ar1 , green) or the knee point (Ark , yellow). The model corresponding to the knee point (Ark ) provides a large reduction in the energy consumption (0.324kWh) at the expense of a small reduction in performance (0.932).
Single-objective exploration. We optimise only the validation accuracy, Pv , to simulate standard NAS practices. The resulting solution is shown the last row of Table 2. This SOO model achieves the highest validation accuracy (0.944). However, the footprint of the solution along the energy consumption, computation time and the number of parameter axes are larger than those from the MOO algorithm.
Multi-objective exploration of 7V space. The MOO results for the surrogate 7V space resemble the trends observed in the 5V space, as shown in Figure 5. As with the 5V space all the attainment curves of all the four MOO algorithms look similar. Visibly, the MSEHVI method seems to underperform compared to other models due to the protrusion around the knee-point compared to the other models, which are largely overlapping. A single Pareto front of SEMOA are also showed in Figure 5-b), with trends comparable with those in the 5V space results in Figure 4.
5 DISCUSSIONS
Single versus multi-objective optimisation. The performance trends of the SOO and MOO solutions are clearly captured in Table 2. The knee point solution, Ark , from MOO, yields an architecture that consumes about 70% less energy and has only about 1% degradation in performance. Depending on the downstream tasks, this could be a reasonable trade-off. If the degradation in performance cannot be tolerated, the Pareto front offers other candidate solutions for the practitioners to choose from. For instance, the extremum solution (Ar0 ) offers basically the same performance as the SOO solution by consuming about 32% less energy.
Training time is not an alternative to energy consumption. The original NAS-Bench-101 already reports the training time (Ying et al., 2019). In single hardware regimes, this could serve as a measure of the energy consumption, as training time mostly correlates with the energy consumption. However, as most neural architecture training is performed on multiple GPUs with large-scale parallelism, training time alone cannot capture the efficiency of models. Aggregate energy consumption can take parallel hardware and the associated overheads into consideration. Even in single GPU
training settings, energy consumption could optimise for energy-efficient models. For instance, a small architecture trained on a large GPU still has larger energy consumption due to the underutilisation of the hardware resources. In such instances, a larger model could (to a certain extent) yield more performance improvements for the total energy consumed (Pv/E). Energy efficient tabular NAS benchmark for obtaining efficient architectures. Tabular benchmarks such as NAS-Bench-101 (Ying et al., 2019) were introduced to reduce the resources required to perform NAS. However, even the one-time cost of generating a tabular benchmark dataset is massive. Surrogate NAS benchmarks are being studied to alleviate these costs, where the models are not exhaustively trained and evaluated. Instead, the performance metrics of architectures are estimated based on smaller training costs. For instance, this is achieved using predictive modelling based on learning curves (Yan et al., 2021), gradient approximations (Xu et al., 2021), or by fitting surrogate models to a subset of architectures (Zela et al., 2022). Similar to these attempts, the proposed EC-NAS-Bench dataset does not train all the models but bases its predictions on training the models only for 4 epochs, as described in Section 3.2. This results in about 97% reduction if the dataset were to be created from scratch, as shown in Table 3. Thus, EC-NAS-Bench is an energyefficient tabular benchmark that can be used to obtain energy-efficient architectures as demonstrated in Section 4.
Carbon-footprint aware NAS. The EC-NAS-Bench dataset reports several metrics per architecture, as shown in Table 1. Combinations of these metrics and the use of MOO could allow for the exploration of architecture spaces that have interesting properties. For instance, NAS can be performed to directly optimise the carbon footprint of deep learning models. Although instantaneous energy consumption and carbon footprint are
linearly correlated, when measured over a longer duration (>5m) these quantities differ due to the fluctuations of the instantaneous carbon intensity (Anthony et al., 2020). These carbon intensity fluctuations are caused by the variations of the power sources to the grid (Henderson et al., 2020). This can have implications when training models for a longer duration or on cloud instances that can be distributed over data centres in different countries (Dodge et al., 2022). By reporting instantaneous and aggregate carbon footprint of model training in EC-NAS-Bench we facilitate the possibility of carbon footprint aware NAS (Selvan et al., 2022). In this work, we focused only on energy consumption awareness to work around the temporal- and spatial variations of the carbon intensity.
Energy Consumption aware Few-shot NAS. Tabular benchmarks such as NAS-Bench-101 (Ying et al., 2019) provide an efficient way to explore different NAS strategies where the model training cost is only incurred once. One restriction with such tabular benchmarks is that they are specific to a set of architectures (fx: feedforward convolutional neural networks) and datasets (fx: CIFAR10). Developing tabular benchmarks for all possible network architectures and datasets is alleviated using one- or few- shot learning methods (Zhao et al., 2021; Zela et al., 2020). Integrating surrogate models for predicting learning dynamics (Zela et al., 2022) and energy measurements using the surrogate model in Section 2.5 could bridge the divide between few-shot and surrogate tabular benchmark datasets that are also energy consumption-aware. We have demonstrated the integration of surrogate energy models with existing tabular benchmark datasets, and extending these to surrogate benchmark datasets is straightforward.
Limitations. Constraining the number of vertices in the DAGs results in sparser search spaces for the optimisation strategy. The optimisation strategy will therefore be more sensitive to initialisation and choice of random seeds, and the empirical Pareto front will appear to be more rigid, as seen in the attainment plot in Figure 4-c, even when multiple initialisations and trials are carried out. We also only demonstrated experiments on the 4V and 5V spaces.
To reduce the computation cost, in EC-NAS-Bench we used the surrogate time and energy measurements that do not model training time variability. We also query the performance metrics from the three repeats of NAS-Bench-101 and update EC-NAS-Bench with their mean performance metrics.
All these limitations are primarily driven by the need to minimise the energy consumption of these experiments. While these are at the expense of variability, we argue that the resulting reduction in the energy consumption justifies these choices. Further, the results from these small-scale experiments have been shown to extend to larger space of architectures (Ying et al., 2019).
6 CONCLUSIONS AND FUTURE WORK
In this work, we presented an updated tabular benchmark dataset, EC-NAS-Bench, which tabulates the energy consumption and carbon footprint of training models, in addition to standard performance measures. Using multi-objective optimisation strategies, we showed that Pareto-optimal solutions offer appealing trade-offs between the performance measures and the energy consumption of model training. We qualitatively showed that large reductions (about 70%) in energy consumption are possible with <1% reduction in performance.
In addition to providing energy consumption measures, the EC-NAS-Bench benchmark provides metrics such as average carbon footprint and power consumption of CPUs, GPUs and DRAM. We hope this will foster interest in the development of models that are efficient and environmentally friendly by optimising for their energy consumption and carbon footprint.
A MULTI-OBJECTIVE OPTIMISATION
Formally, let the MOO problem be described by f : X → Rm, f(x) 7→ (f1(x), . . . , fm(x)). Here X denotes the search space of the optimization problem and m refers to the number of objectives. We assume w.l.o.g. that all objectives are to be minimized. For two points x, x′ ∈ X we say that x′ dominates x and write x′ ≺ x if ∀i ∈ {1, . . . ,m} : fi(x′) ≤ fi(x) ∧ ∃j ∈ {1, . . . ,m} : fj(x′) < fj(x). For X ′, X ′′ ⊆ X we say that X ′ dominates X ′′ and write X ′ ≺ X ′′ if ∀x′′ ∈ X ′′ : ∃x′ ∈ X ′ : x′ ≺ X ′′. The subset of non-dominated solutions in a set X ′ ⊆ X is given by ndom(X ′) = {x | x ∈ X ′∧∄x′ ∈ X ′ \{x} : x′ ≺ x}. The Pareto front of a set X ′ ⊂ X defined as F(X ′) = {f(x) |x ∈ ndom(X ′)} and, thus, the goal of MOO can be formalised as approximating F(X). In iterative MOO, the strategy is to step-wise improve a set of candidate solutions towards a sufficiently good approximation of F(X). For the design of a MOO algorithm, it is important to have a way to rank two sets X ′ and X ′′ w.r.t. the overall MOO goal even if neither X ′ ≺ X ′′ nor X ′′ ≺ X ′. This ranking can be done by the hypervolume measure. The hypervolume measure or S-metric (see Zitzler & Thiele, 1999) of a set X ′ ⊆ X is the volume of the union of regions in Rm that are dominated by X ′ and bounded by some appropriately chosen reference point r ∈ Rm:
Sr(X ′) := Λ ( ⋃
x∈X′
[ f1(x), r1 ] × · · · × [ fm(x), rm ]) ,
where Λ( · ) is the Lebesgue measure. The hypervolume is, up to weighting objectives, the only strictly Pareto compliant measure (Zitzler et al., 2003) in the sense that given two sets X ′ and X ′′ we have S(X ′) > S(X ′′) if X ′ dominates X ′′. As stated by Bringmann et al. (2013), the worst-case approximation factor of a Pareto front F(X ′) obtained from any hypervolume-optimal set X ′ with size |X ′| = µ is asymptotically equal to the best worst-case approximation factor achievable by any set of size µ, namely Θ(1/µ) for additive approximation and 1+Θ(1/µ) for relative approximation (Bringmann & Friedrich, 2013). Now we define the contributing hypervolume of an individual x ∈ X ′ as
∆r(x,X ′) := Sr(X ′)− Sr(X ′ \ {x}) .
The value ∆(x,X ′) quantifies how much a candidate solution x contributed to the total hypervolume of X ′ and can be regarded as a measure of the relevance of the point. Therefore, the contributing hypervolume is a popular criterion in MOO algorithms (e.g. Beume et al., 2007; Igel et al., 2007; Bader & Zitzler, 2011; Krause et al., 2016). If we iteratively optimize some solution set P , then points x with low ∆(x, P ) are candidates in an already crowded region of the current Pareto front F(P ), while points with high ∆(x, P ) mark areas that are promising to explore further.
A.1 SEMOA: SIMPLE EVOLUTIONARY MULTI-OBJECTIVE OPTIMISATION ALGORITHM
In this study, we used a simple MOO algorithm based on hypervolume maximisation outlined in Algorithm 1 inspired by Krause et al. (2016). The algorithm iteratively updates a set P of candidate solutions, starting from a set of random network architectures. Dominated solutions are removed from P . Then λ new architectures are generated by first selecting λ architectures from P and then modifying these architectures according to the perturbation described in Procedure 2. The λ new architectures are added to P and the next iteration starts. In Procedure 2, the probability pedge for changing (i.e., either adding or removing) an edge is chosen such that in expectation, two edges are changed, and the probability pnode for changing a node is set such that in expectation every second perturbation changes the label of a node.
The selection of the λ > m architectures from the current solution set is described in Procedure 3. We always select the extreme points in P that minimize a single objective (thus, the precise choice of the reference point r is of lesser importance). The remaining λ − m points are randomly chosen, preferring points with higher contributing hypervolume. The points in P are ranked according to their hypervolume contribution. The probability of being selected depends linearly on the rank. We use linear ranking selection (Baker, 1985; Grefenstette & Baker, 1989), where the parameter controlling the slope is set to η+ = 2. Always selecting the extreme points and focusing on points with large contributing hypervolume leads to a wide spread of non-dominated solutions.
Algorithm 1 SEMOA for NAS strategy
Input: objective f = (f1, . . . , fm), maximum number of iterations n
Output: set of non-dominated solutions P
1: Initialize P ⊂ X (e.g., randomly) ▷ Initial random architectures
2: P ← ndom(P) ▷ Discard dominated solutions
3: for i ← 1 to n do ▷ Loop over iterations
4: O ← LinearRankSample(P, λ) ▷ Get λ points from P
5: O ← Perturb(O) ▷ Change the architectures
6: Compute f(x) for all x ∈ O ▷ Evaluate architectures
7: P ← ndom(P ∪ O) ▷ Discard dominated points
8: end for
9: return P
Procedure 2 Perturb(O)
Input: set of architectures O, variation probabilities for edges and nodes pedge and pnode
Output: set of modified architectures O∗
1: for all MA ∈ O do ▷ Loop over matrices
2: repeat
3: for all αi,j ∈ MA do ▷ Loop over entries
4: With probability pedge flip αi,j
5: end for
6: for all l ∈ LA do ▷ Loop over labels
7: With probability pnode change the label of l
8: end for
9: until MA has changed
10: end for
11: return O∗
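A rough Python rendering of Procedure 2 is sketched below (our own sketch, assuming architectures are stored as integer NumPy adjacency matrices plus label lists; the helper names are illustrative):

import numpy as np

LABELS = ["3x3conv", "1x1conv", "3x3maxpool"]

def perturb(adj, labels, p_edge, p_node, rng=None):
    """Flip upper-triangular edges with probability p_edge and relabel
    interior nodes with probability p_node; repeat until something changed.
    Assumes `adj` is an int-typed upper-triangular adjacency matrix."""
    rng = rng or np.random.default_rng()
    adj, labels = adj.copy(), list(labels)
    before = (adj.tobytes(), tuple(labels))
    while (adj.tobytes(), tuple(labels)) == before:
        n = adj.shape[0]
        for i in range(n):
            for j in range(i + 1, n):
                if rng.random() < p_edge:
                    adj[i, j] ^= 1                  # add or remove the edge
        for k in range(1, len(labels) - 1):         # input/output stay fixed
            if rng.random() < p_node:
                labels[k] = rng.choice([op for op in LABELS if op != labels[k]])
    return adj, labels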
A.2 MULTI-OBJECTIVE OPTIMISATION BASELINES
Hyperparameters for the MOO baseline methods All baseline methods utilise the tabular benchmarks of EC-NAS-Bench for exploring and optimising architectures. The methods’ hyperparameters are chosen to circumvent unfair advantages gained by increased compute time, e.g., through the number of iterations or function evaluations. Although we allocate similar resources to the baseline methods, it is difficult to guarantee fairness when comparing them, given the disparity in the algorithmic approaches of the baselines.
The bag-of-baselines implementation discussed in Izquierdo et al. (2021) is used and modified for compatibility with the tabular benchmarks of EC-NAS-Bench. Each experiment is run for 10 trials using different initial seeds. All developed code will be made public once the blind-review period ends.
Random Search The baseline methods, except for Random Search, apply evolutionary search heuristics to optimize architectures in the search space. The random search implementation samples architectures from the search space uniformly at random, each time querying an architecture for a random epoch budget. Random search is run for 1000 iterations, since the other baseline methods, where applicable, also run for 1000 iterations.
Speeding up Evolutionary Multi-Objective Algorithm (SH-EMOA) As with all our baselines, we use the implementation in Izquierdo et al. (2021). We define a problem and search space following the bag-of-baselines API to allow model evaluation for different epoch budgets simply by querying the tabular benchmarks of EC-NAS-Bench. We initialize the algorithm with a population size of 250 and restrict the search to 1000 function evaluations for budgets between 4 and 108. However, we force the algorithm to only use the budgets 4, 12, 36 and 108, which are available in our search space. The remaining hyperparameters we leave at their defaults, which cover a uniform mutation type for architecture perturbation and tournament-style parent selection for offspring generation.
Mixed Surrogate Expected Hypervolume Improvement (MS-EHVI) This evolutionary algorithm, too, is initialized with a population size of 250. We choose to generate 50 samples to lessen computation time, and we merely pass an auxiliary function to discretize parameters to fit the experimental setup using the tabular benchmarks.
Procedure 3 LinearRankSample(P, λ)
Input: set P ⊂ X of candidate solutions, number λ of elements to be selected; reference point r ∈ Rm; parameter controlling the preference for better ranked points η+ ∈ [1, 2]
Output: O ⊂ P, |O| = λ
1: O ← ∅
2: for i ← 1 to m do
3: O ← O ∪ {argmin_{x∈P} fi(x)} ▷ Always add extremes
4: end for
5: Compute ∆r(x, P) for all x ∈ P ▷ Compute contributing hypervolume
6: Sort P according to ∆r(x, P)
7: Define the discrete probability distribution π over P, where πi = (1/|P|) (η+ − 2(η+ − 1)(i − 1)/(|P| − 1)) is the probability of the element xi with the ith largest contributing hypervolume
8: for i ← 1 to λ − m do ▷ Randomly select remaining points
9: Draw x ∼ π ▷ Select points with larger ∆r with higher probability
10: O ← O ∪ {x}
11: end for
12: return O
Simple Evolutionary Multi-Objective Algorithm (SEMOA) Our MOO algorithm is described in subsection A.1. The key hyperparameters are the initial population size, which we set to 250, similar to the baseline methods, and likewise, we run the algorithm for 1000 iterations.
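The linear ranking distribution used in Procedure 3 is easy to verify numerically; a small sketch of ours (assuming |P| > 1):

import numpy as np

def linear_rank_probs(n, eta_plus=2.0):
    """pi_i for i = 1..n, where i = 1 has the largest contributing
    hypervolume; eta_plus = 2 puts the most mass on the best ranks."""
    i = np.arange(1, n + 1)
    return (eta_plus - 2.0 * (eta_plus - 1.0) * (i - 1) / (n - 1)) / n

print(linear_rank_probs(5))   # [0.4 0.3 0.2 0.1 0.0], sums to 1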
B MEASUREMENTS FROM CARBONTRACKER
We modify the open-source tool Carbontracker (Anthony et al., 2020) to measure the additional metrics reported in Table 1. Measurements take into account the energy usage of Graphical Processing Units (GPU), Central Processing Units (CPU), and Dynamic Random Access Memory (DRAM). Note that the energy usage reported for CPUs includes the power usage of DRAM. Power usage is monitored, logged every 10 seconds, and reported in watts (W) as the average over these 10-second intervals during model training. The power integrated over a time interval, i.e., the energy, is then reported in units of kilowatt-hours (kWh), with 1 kWh = 3.6·10^6 joules (J). Additionally, the emission of greenhouse gases (GHG) is measured in equivalent units of grams of carbon dioxide (CO2eq). The CO2eq is estimated using the carbon intensity, i.e., the CO2eq units necessary to produce one kilowatt-hour (kWh) of electricity, to express the carbon footprint of model training. The carbon intensity is fetched from the carbon intensity data provider every 15 minutes during model training.
Measurements from the aforementioned components alone do not give an accurate depiction of the carbon footprint of model training, since the energy consumption of the supporting infrastructure (e.g., the data centre) is not considered. Therefore, the quality of the energy and carbon footprint estimates is improved by multiplying the power measurements by the PUE of the data centre hosting the compute resources. We use a PUE of 1.59, which is the global average for data centres in 2020 (Ascierto & Lawrence, 2020).
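A back-of-the-envelope Python sketch of this bookkeeping (all numbers below are placeholders, not measurements from the benchmark):

AVG_POWER_W = {"gpu": 180.0, "cpu_incl_dram": 60.0}  # placeholder averages (W)
TRAIN_HOURS = 1.5                                     # placeholder duration
PUE = 1.59                                            # global average for 2020
CARBON_INTENSITY = 0.128                              # kgCO2eq per kWh (placeholder)

energy_kwh = sum(AVG_POWER_W.values()) / 1000.0 * TRAIN_HOURS * PUE
co2eq_kg = energy_kwh * CARBON_INTENSITY
print(f"E = {energy_kwh:.3f} kWh, footprint = {1000 * co2eq_kg:.1f} gCO2eq")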
C ADDITIONAL RESULTS
The results in Figure 4 and Figure 5 were reported for the 5V and 7V spaces, respectively. The EC-NAS-Bench dataset also contains the complete 4V space. In this section we report the MOO solutions for the 4V search space. The trends observed for the 5V and 7V spaces also hold for this smaller space.
D SURROGATE ENERGY MODEL
The MLP-based surrogate model used to predict the training energy consumption of the 7V space, E, is given as fθ(·) : x ∈ RF → E ∈ R, where θ are the trainable parameters and x comprises the features obtained from the architecture specifications. Using the cell/graph encoding of architectures introduced in Section 2.1, we populate x with the upper-triangular entries of the adjacency matrix, the operations {input, 1x1conv, 3x3conv, 3x3maxpool, output} mapped to the categorical variables [1, 2, 3, 4, 5], respectively, and the total number of parameters. For the 7V space this results in x ∈ R36. We use a simple four-layered MLP with gelu(·) activation functions, except for the final layer, which transforms the input in the sequence 36 → 128 → 64 → 32 → 1. The surrogate energy model is trained using actual energy measurements from 4300 randomly sampled architectures from the 7V space. The model was implemented in PyTorch (Paszke et al., 2019) and trained on an Nvidia RTX 3060 GPU. Using a training, validation and test split of ratio [0.6, 0.1, 0.3], we train fθ(·) for 200 epochs with an initial learning rate of 5 × 10^-3 to minimise the L1-norm loss between the predicted and actual energy measurements using the Adam optimiser (Kingma & Ba, 2015). | 1. What is the focus and contribution of the paper regarding NAS?
2. What are the strengths of the proposed approach, particularly in terms of energy consumption awareness?
3. What are the weaknesses of the paper, especially regarding the moving target of energy consumption?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or questions about the multi-objective optimization algorithm used in the paper? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes an energy consumption-aware tabular benchmark for NAS based on NAS-Bench-101, called EC-NAS-Bench. EC-NAS-Bench contains the training energy consumption, power consumption and carbon footprint of each architecture in the benchmark. The author performs single-objective and multi-objective optimization (based on the algorithm proposed in the paper "Multi-objective optimization with unbounded solution sets") on EC-NAS-Bench, and notices that multi-objective optimization is able to find architectures with about 70% energy reduction and <1% performance degradation.
Strengths And Weaknesses
Strength
Incorporating energy consumption data in NAS is an important problem. The author took the first step in this direction. Experiments show that the approach is able to find architectures that consume much less energy while achieving comparable performance.
Weakness
Different from the performance of a model, the energy consumption of neural networks is a moving target. It is related to factors like the global economy and the computational hardware. The author has not discussed how they will incorporate potential changes in the energy/power consumption and carbon footprint of the models.
The multi-objective optimization algorithm is largely based on the cited paper "Multi-objective optimization with unbounded solution sets". Are there novel components proposed in the paper?
Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is overall well-written. The author can improve the description of the MOO algorithm (e.g., give more details in the Appendix)
Quality + Novelty: The reviewer is concerned that it may be difficult for other researchers to continue their experiments on EC-NAS-Bench. The reason is that new hardware appears every year and EC-NAS-Bench needs to be updated accordingly. The reviewer agrees that incorporating energy consumption in NAS is an important topic, but EC-NAS-Bench won't be very useful when new hardware appears.
Reproducibility: The author attached the source code. The reviewer has not run the source code and has only reviewed the content. It looks reproducible.
ICLR | Title
Energy Consumption-Aware Tabular Benchmarks for Neural Architecture Search
Abstract
The demand for large-scale computational resources for Neural Architecture Search (NAS) has been lessened by tabular benchmarks for NAS. Evaluating NAS strategies is now possible on extensive search spaces and at a moderate computational cost. But so far, NAS has mainly focused on maximising performance on some hold-out validation/test set. However, energy consumption is a partially conflicting objective that should not be neglected. We hypothesise that constraining NAS to include the energy consumption of training the models could reveal a subspace of undiscovered architectures that are more computationally efficient with a smaller carbon footprint. To support the hypothesis, an existing tabular benchmark for NAS is augmented with the energy consumption of each architecture. We then perform multi-objective optimisation that includes energy consumption as an additional objective. We demonstrate the usefulness of multi-objective NAS for uncovering the trade-off between performance and energy consumption as well as for finding more energy-efficient architectures. The updated tabular benchmark, EC-NAS-Bench, is open-sourced to encourage the further exploration of energy consumption-aware NAS.
1 INTRODUCTION
The design of neural architectures is a complex task. While general guidelines for producing suitable neural architectures have been proposed, neural architecture design still requires expert domain knowledge, experience, and not least substantial effort (Philipp, 2021; Zoph & Le, 2016; Ren et al., 2020). This led to an upsurge in research on automated exploration and design of neural architectures cast as an optimisation problem – neural architecture search (NAS) (Baker et al., 2016; Zoph & Le, 2016; Real et al., 2017).
NAS strategies explore neural architectures in a predefined search space relying on model training and evaluation to determine the model’s fitness (i.e., validation/test set score) to adjust the search strategy and extract the best performing architecture (Ren et al., 2020). NAS strategies have shown great promise in discovering novel architecture designs yielding state-of-the-art model performance (Liu et al., 2017; 2018; Lin et al., 2021; Baker et al., 2017). However, it can be prohibitively expensive to perform NAS (Tan & Le, 2019b) due to the demand for large-scale computational resources and the associated carbon footprint of NAS (Schwartz et al., 2019; Anthony et al., 2020).
The introduction of tabular benchmarks for NAS significantly lessened the computational challenges mentioned above by facilitating the evaluation of NAS strategies on a limited search space of architectures (Klein & Hutter, 2019; Dong & Yang, 2020). Predictive models and zero- and one-shot models (Wen et al., 2019; Lin et al., 2021; Zela et al., 2020) have reduced time-consuming model training and thereby increased the efficiency of NAS strategies. Most recently, surrogate NAS benchmarks (Zela et al., 2022) have been proposed for arbitrary expansion of architecture search spaces for NAS.
Notwithstanding the aforementioned major contributions to the advancement of NAS research, the prime objective of NAS has been maximising a performance objective on some hold-out test/validation test. NAS strategies can be evaluated effectively, yet the search strategies do not intentionally aim to find computationally efficient architectures. That is, the NAS may efficiently determine model performance at a moderate computational cost, but energy efficiency is generally not an objective of NAS.
We hypothesise that adding the energy consumption of training models as a NAS objective could reveal a sub-space of computationally efficient models that also have a smaller carbon footprint. In order to find efficient architectures without sacrificing cardinal performance requirements, we propose the use of NAS strategies that will optimise for multiple objectives.
Our main contributions.
1. We provide an energy consumption-aware tabular benchmark for NAS based on NAS-Bench-101 (Ying et al., 2019). For each architecture, we added its training energy consumption, power consumption and carbon footprint. We hope that the new data set will foster the development of environmentally friendly deep learning systems.
2. We also introduce a surrogate energy model to predict the training energy cost for a given architecture in a large search space (about 423k architectures).
3. To exemplify the use of the new benchmark, we devise a simple multi-objective optimisation algorithm for NAS and apply it to optimise generalisation accuracy as well as energy consumption.
4. We demonstrate the usefulness of multi-objective architecture exploration for revealing the trade-off between performance and energy efficiency and for finding efficient architectures obeying accuracy constraints. This is also demonstrated with other baseline multi-objective methods.
2 ENERGY CONSUMPTION-AWARE BENCHMARKS - EC-NAS-Bench
Our energy consumption-aware tabular benchmark EC-NAS-Bench is based on NAS-Bench-101 (Ying et al., 2019). We closely follow their specification of architectures; however, the search space of architectures that are considered, the evaluation approach and the metrics provided for each architecture are different. This section will briefly present EC-NAS-Bench and its differences to NAS-Bench-101.
2.1 ARCHITECTURE DESIGN
Network Topology. All architectures considered are convolutional neural networks (CNNs) designed for the task of image classification on CIFAR-10 (Krizhevsky, 2009). Each neural network comprises a convolutional stem layer followed by three repeats of three stacked cells and a downsampling layer. Finally, a global pooling layer and a dense softmax layer are used. The space of architectures, X, is limited to the topological space of cells, where each cell is a configurable feedforward network.
Cell Encoding. The individual cells are represented as directed acyclic graphs (DAGs). Each DAG, G(V,M), has N = |V | vertices (or nodes) and edges described in the binary adjacency matrix M ∈ {0, 1}N×N . The set of operations (labels) that each node can realise is given by L′ = {input, output} ∪ L, where L = {3x3conv, 1x1conv, 3x3maxpool}. Two of the N nodes are always fixed as the input and output of the network. The remaining N − 2 nodes can take up one of the labels in L. The connections between nodes of the DAG are encoded in the upper-triangular adjacency matrix with no self-connections (zero main-diagonal entries). For a given architecture A, every entry αi,j ∈ MA denotes an edge from node i to node j with operations i, j ∈ L, and its labelled adjacency matrix is LA ∈ MA × L′.
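To make the encoding concrete, here is a minimal Python sketch (ours; the specific cell shown is arbitrary) of an architecture in the 5V space:

import numpy as np

N = 5                                       # |V| for the 5V space
adj = np.zeros((N, N), dtype=int)           # upper-triangular adjacency matrix MA
adj[0, 1] = adj[1, 2] = adj[2, 4] = adj[0, 4] = 1

labels = ["input", "3x3conv", "1x1conv", "3x3maxpool", "output"]
assert (adj == np.triu(adj, k=1)).all()     # feed-forward, no self-connections
assert adj.sum() <= 9                       # edge budget of the 5V space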
Search space. The number of DAGs grows exponentially with N and L (Ying et al., 2019). We restrict the search space in EC-NAS-Bench by imposing |V | ≤ 5 and allowing at most 9 edges (non-zero entries in MA), referred to as the 5V space. The search space with |V | ≤ 4, called the 4V space, is also considered. In contrast, NAS-Bench-101 considers the search space for |V | ≤ 7. With these imposed restrictions on the search space of EC-NAS-Bench, 91, 2532 and 423k unique architectures are identified in the 4V, 5V and 7V spaces, respectively.
2.2 ENERGY CONSUMPTION-AWARENESS
Resource-constrained NAS for obtaining efficient architectures has been explored mainly by optimising the total number of floating point operations (FPOs) (Tan & Le, 2019a). Optimising for FPOs, however, might not be entirely indicative of the efficiency of models (Henderson et al., 2020). It has been reported that models with fewer FPOs have bottleneck operations that can consume the bulk of the training time (Howard et al., 2017), and some models with high FPOs have lower inference time (Jeon & Kim, 2018). Energy consumption optimised hyperparameter selection outside of NAS settings for large language models has been recently investigated in Puvis de Chavannes et al. (2021).
The energy consumption during the training of a model encapsulates facets of architecture efficiency that are not entirely taken into consideration when using standard resource constraints such as FPOs, computational time and the number of parameters. Energy consumption accounts for both hardware and software variations in the experimental set-ups. To foster a new direction for NAS to find more efficient architectures, we use energy consumption as the additional objective along with standard performance measures.
2.3 QUANTIFYING ENERGY CONSUMPTION
About 75% of the total energy costs during training of a neural network are incurred by hardware accelerators such as graphics processing units (GPUs) or tensor processing units (TPUs) (Dodge et al., 2022). The remaining energy consumption is mainly due to the central processing units (CPUs) and dynamic random access memory (DRAM). Additional energy consumed by the supporting infrastructure, such as cooling and power systems and dissipation, is usually accounted for by the power usage effectiveness (PUE), which is an overhead factor. Several open-source tools have been published in the past couple of years, such as experiment-impact-tracker (Henderson et al., 2020), Carbontracker (Anthony et al., 2020) and CodeCarbon (Schmidt et al., 2021), which provide convenient ways to track and log the energy consumption of neural networks by taking these factors into consideration.
In EC-NAS-Bench, the energy consumption of training and evaluating the neural architectures is estimated by modifying the tool Carbontracker (Anthony et al., 2020). Our version of the tool monitors the GPUs, CPUs and DRAM and estimates the total energy costs, E (kWh), aggregate carbon footprint (kgCO2eq) based on the instantaneous carbon intensity of the regions and the total computation time, T (s). The complete set of metrics that are measured and reported in EC-NAS-Bench are listed in Table 1.
2.4 ARCHITECTURE PERFORMANCE AND EFFICIENCY
Training Pipeline. Architectures from the 4V and 5V spaces are trained on CIFAR-10 (Krizhevsky, 2009) using 40k samples and evaluated on 10k validation and 10k test samples (60k total). Each model is trained on an in-house Slurm cluster on a single NVIDIA Quadro RTX 6000 GPU with 24 GB memory and two Intel CPUs. The training strategy, or hyper-parameter setting, is similar to that of NAS-Bench-101 (Klein & Hutter, 2019). Predicting the energy consumption of longer model runs from a few training epochs has been shown to be robust when performed on the same hardware (Anthony et al., 2020). To refrain from retraining and re-evaluating all the models in NAS-Bench-101, we train each model for only 4 epochs and then obtain surrogate time and energy measurements by linear scaling. We then tabulate these measurements along with the corresponding mean performance metrics for each model from NAS-Bench-101, obtaining metrics for training and evaluating each model for 12, 36 and 108 epochs.
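The linear scaling step amounts to a one-liner (a sketch of our understanding; the 0.02 kWh measurement below is a placeholder):

def scale_energy(e_4_epochs_kwh, target_epochs):
    """Linear surrogate: energy is assumed to grow proportionally with
    the number of epochs when measured on the same hardware."""
    return e_4_epochs_kwh / 4.0 * target_epochs

for budget in (12, 36, 108):
    print(budget, scale_energy(0.02, budget))   # 0.06, 0.18, 0.54 kWh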
Metrics. We report the operations, no. parameters, and performance metrics in EC-NAS-Bench, as in NAS-Bench-101, and additionally, we include efficiency measures in terms of energy consumption and the carbon footprint for training each model. The primary focus for efficiency metrics is to quantify the resource costs specific to model training; however, we also report the total resource costs, which include computational overhead, e.g., data movements. For completeness, we also provide carbon intensity measures at training time, timestamp, and average energy consumption of computing resources. We have made the metrics of each architecture readily accessible to encourage the development of NAS strategies for exploring efficient architectures. The metrics reported relevant to this work can be seen in Table 1.
2.5 SURROGATE DATASET FOR 7V-SPACE
The 4V and 5V search spaces are the primary spaces used in this work to reduce the overall resource consumption to populate the energy measurements in the tabular benchmark datasets. However, even the 5V space has only a fraction of possible architectures compared to the 7V space published in Ying et al. (2019), which has about 423k architectures. Computing the energy consumption as done for 4V and 5V datasets on the 7V space is prohibitively expensive1.
We instead sample a subset of architectures from the 7V space and obtain the actual energy costs for 4300 architectures. Using these measurements, we train a multi-layered perceptron (MLP) based surrogate energy prediction model. The MLP takes the graph-encoded architecture and the number of parameters as input and predicts the energy consumption for a given number of epochs. This surrogate model is similar to recent surrogate NAS methods, which have been shown to be more efficient (Zela et al., 2022). Details of the surrogate model used to predict the energy measurements for the 7V space are provided in Appendix D.
The resulting surrogate 7V dataset with the energy measurements yields a close approximation of the actual training energy costs, as shown in Figure 2-a). The Pearson correlation between the actual and predicted energy measurements is 0.9977. In Figure 2-b), we also show that the mean absolute error between the predicted and actual energy measurements plateaus at about 3000 architectures, justifying the model's use to predict the remaining 7V space. The standard deviation is estimated over 10 random initialisations of the surrogate model per training dataset size.
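A PyTorch sketch matching the architecture described in Appendix D is given below (the random tensors stand in for the actual feature/energy data, which we do not reproduce here):

import torch
import torch.nn as nn

class EnergyMLP(nn.Module):
    """36 -> 128 -> 64 -> 32 -> 1 with GELU activations, per Appendix D."""
    def __init__(self, in_features=36):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 128), nn.GELU(),
            nn.Linear(128, 64), nn.GELU(),
            nn.Linear(64, 32), nn.GELU(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

model = EnergyMLP()
optimiser = torch.optim.Adam(model.parameters(), lr=5e-3)
loss_fn = nn.L1Loss()                        # L1 between predicted and measured kWh
x, y = torch.randn(16, 36), torch.rand(16)   # placeholder batch
optimiser.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimiser.step()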
2.6 INFLUENCE OF HARDWARE ON EC-NAS-Bench
The energy consumption of the architectures in the 4V and 5V spaces was obtained on a single Quadro RTX 6000 GPU. While the energy measurements tabulated in EC-NAS-Bench are specific to this hardware setting, we argue that the trends across the architectures hold independently of the actual hardware used. To demonstrate this, we trained the architectures in the 4V space on four different (Nvidia) GPUs spanning multiple generations: Titan XP, RTX 3060, RTX 3090 and Quadro RTX 6000.
While the energy consumed by each model on specific hardware is different, the trends relative to other models are maintained across different GPUs. This is captured in Figure 3, where the energy consumption for each architecture in the 4V space on all four GPUs is reported. This trend confirms that when NAS is constrained on energy consumption and performance, the resulting models remain the same irrespective of the specific hardware used.
1Our estimates showed that it would require 770 GPU days of compute.
3 NAS STRATEGIES WITH EC-NAS-Bench
Given a tabular benchmark which can be used to query the model-training energy consumption in addition to other standard metrics, such as EC-NAS-Bench, NAS strategies can be used to search for energy-efficient architectures. We next present multi-objective optimisation as a suitable strategy to uncover the trade-off between performance and efficiency, which supports an energy-aware architecture choice.
3.1 MULTI-OBJECTIVE OPTIMISATION
Multi-objective optimisation (MOO) simultaneously optimises several, potentially conflicting objectives. The goal of MOO is to find or to approximate the set of Pareto-optimal solutions, where a solution is Pareto-optimal if it cannot be improved in one objective without getting worse in another.
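For intuition, Pareto dominance and the non-dominated subset can be expressed in a few lines of Python (our sketch, for minimisation; the example pairs are illustrative (error, kWh) values):

def dominates(a, b):
    """a dominates b: no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def ndom(points):
    """Subset of `points` not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

print(ndom([(0.06, 1.0), (0.07, 0.3), (0.08, 0.5)]))
# -> [(0.06, 1.0), (0.07, 0.3)]; (0.08, 0.5) is dominated by (0.07, 0.3)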
In this work, we introduce a simple evolutionary MOO algorithm (SEMOA) based on Krause et al. (2016). The algorithm is simple, but derived from canonical principles of derivative-free multi-criteria optimisation, such as hypervolume maximisation. Details of SEMOA are presented in Appendix A.1. We also use several existing MOO algorithms: random search, Speeding up Evolutionary Multi-Objective Algorithms (SH-EMOA) and Mixed Surrogate Expected Hypervolume Improvement (MS-EHVI), implemented in Izquierdo et al. (2021), to demonstrate the usefulness of EC-NAS-Bench.
3.2 EVALUATION OF NAS STRATEGIES
Experimental Setup. We conduct experiments on EC-NAS-Bench by adapting the presented MOO algorithm to perform both single-objective optimisation (SOO) and MOO. In the former, we naturally find only one solution when optimising a single objective; in the latter, optimising multiple, diverse objectives yields the empirical Pareto front. We run the algorithm in the 4V and 5V spaces on models trained for 108 epochs. The optimisation is performed over 100 evolutions with a population size of 20. All the experiments are conducted on a desktop workstation with a single NVIDIA RTX 3090 GPU with 24GB memory and an Intel(R) Core(TM) i7-10700F CPU @ 2.90GHz.
Performance Criteria. For the multi-objective optimisation, we use the validation accuracy (Pv) and the training energy cost, E(kWh), as the two objectives to be jointly optimised using the MOO algorithm. For the single-objective optimisation, we only use Pv as the performance objective. We
use energy cost rather than, e.g., training time, considering that E is agnostic to parallel computing. We note that it is possible to use any of the metrics provided in Table 1 for the purpose of single- and multi-objective optimisation. As the MOO algorithm minimises the objectives, we simply use the negative of the objectives in cases where the quantities are to be maximised; for instance, we optimise −Pv as accuracy is a maximisation objective.
Training costs. In aggregate, EC-NAS-Bench had a total estimated training cost of 124.214 GPU days, 2021.02 kWh and 259.047 kgCO2eq for the 5V space. The 4V space had a total estimated training cost of 3.854 GPU days, 63.792 kWh and 5.981 kgCO2eq. The actual training costs for the 5V space were only 3.105 GPU days, 50.525 kWh and 6.476 kgCO2eq. The actual training costs of the 4V space were 0.096 GPU days, 1.594 kWh and 0.149 kgCO2eq.
In total, we saved an estimated compute cost of 121.109 GPU days, 1970.495 kWh and 252.571 kgCO2eq for the 5V space, and 3.758 GPU days, 48.931 kWh and 6.327 kgCO2eq for the 4V space. We obtain ≈ 97% reduction in computing resources and energy consumption in all efficiency measures.
4 RESULTS
Multi-objective exploration of 5V space. The key results from the experiments on EC-NAS-Bench using the multi-objective optimisation of E and −Pv are shown in Figure 4-a), b) and c). Pareto fronts over multiple random initialisations of the four MOO algorithms, SEMOA (ours), Random Search, SH-EMOA and MS-EHVI, are visualised as attainment curves in Figure 4-a), which summarise the median solutions attained over the multiple runs (Fonseca et al., 2001). All the MOO algorithms are able to explore the search space reasonably well, yielding attainment curves that look largely similar.
The Pareto front obtained from our MOO algorithm, SEMOA, for one run is shown in Figure 4-b). It also shows the extrema (r0, r1) at both ends of the front, each preferring one of the objectives, whereas the knee point (rk) offers the best trade-off between the two objectives. These three points are shown with different colours and markers: the two extrema (Ar0, blue; Ar1, green) and the knee point (Ark, yellow). We compute the bend-angles to find the knee point, as suggested by Deb & Gupta (2011).
The architectures corresponding to the two extrema (Ar0, Ar1) and the corresponding knee point (Ark) for a single MOO run are visualised in the radar plot in Figure 4-c). The exact performance metrics for the three models are also reported in Table 2. The solution covering the largest area is one of the extremal points (Ar0, blue) with high accuracy (0.944) but also a larger footprint in energy consumption (1.032 kWh), computation time (5482.78 s) and the number of parameters (21.22M) compared to the other extremum (Ar1, green) or the knee point (Ark, yellow). The model corresponding to the knee point (Ark) provides a large reduction in energy consumption (0.324 kWh) at the expense of a small reduction in performance (0.932).
Single-objective exploration. We optimise only the validation accuracy, Pv, to simulate standard NAS practice. The resulting solution is shown in the last row of Table 2. This SOO model achieves the highest validation accuracy (0.944). However, the footprint of the solution along the energy consumption, computation time and number-of-parameters axes is larger than that of the solutions from the MOO algorithm.
Multi-objective exploration of 7V space. The MOO results for the surrogate 7V space resemble the trends observed in the 5V space, as shown in Figure 5. As with the 5V space, the attainment curves of all four MOO algorithms look similar. Visibly, the MS-EHVI method seems to underperform compared to the other methods, which largely overlap, as indicated by the protrusion around its knee point. A single Pareto front of SEMOA is also shown in Figure 5-b), with trends comparable to those of the 5V space results in Figure 4.
5 DISCUSSIONS
Single versus multi-objective optimisation. The performance trends of the SOO and MOO solutions are clearly captured in Table 2. The knee-point solution, Ark, from MOO yields an architecture that consumes about 70% less energy with only about 1% degradation in performance. Depending on the downstream task, this could be a reasonable trade-off. If the degradation in performance cannot be tolerated, the Pareto front offers other candidate solutions for practitioners to choose from. For instance, the extremum solution (Ar0) offers basically the same performance as the SOO solution while consuming about 32% less energy.
Training time is not an alternative to energy consumption. The original NAS-Bench-101 already reports the training time (Ying et al., 2019). In single hardware regimes, this could serve as a measure of the energy consumption, as training time mostly correlates with the energy consumption. However, as most neural architecture training is performed on multiple GPUs with large-scale parallelism, training time alone cannot capture the efficiency of models. Aggregate energy consumption can take parallel hardware and the associated overheads into consideration. Even in single GPU
training settings, optimising energy consumption could favour energy-efficient models. For instance, a small architecture trained on a large GPU still has a larger energy consumption due to the under-utilisation of the hardware resources. In such instances, a larger model could (to a certain extent) yield more performance improvement for the total energy consumed (Pv/E).
Energy-efficient tabular NAS benchmark for obtaining efficient architectures. Tabular benchmarks such as NAS-Bench-101 (Ying et al., 2019) were introduced to reduce the resources required to perform NAS. However, even the one-time cost of generating a tabular benchmark dataset is massive. Surrogate NAS benchmarks are being studied to alleviate these costs, where the models are not exhaustively trained and evaluated. Instead, the performance metrics of architectures are estimated based on smaller training costs. For instance, this is achieved using predictive modelling based on learning curves (Yan et al., 2021), gradient approximations (Xu et al., 2021), or by fitting surrogate models to a subset of architectures (Zela et al., 2022). Similar to these attempts, the proposed EC-NAS-Bench dataset does not train all the models but bases its predictions on training the models only for 4 epochs, as described in Section 3.2. This results in about a 97% reduction compared to creating the dataset from scratch, as shown in Table 3. Thus, EC-NAS-Bench is an energy-efficient tabular benchmark that can be used to obtain energy-efficient architectures, as demonstrated in Section 4.
Carbon-footprint aware NAS. The EC-NAS-Bench dataset reports several metrics per architecture, as shown in Table 1. Combinations of these metrics and the use of MOO could allow for the exploration of architecture spaces that have interesting properties. For instance, NAS can be performed to directly optimise the carbon footprint of deep learning models. Although instantaneous energy consumption and carbon footprint are
linearly correlated, when measured over a longer duration (>5 min) these quantities differ due to the fluctuations of the instantaneous carbon intensity (Anthony et al., 2020). These carbon intensity fluctuations are caused by variations in the power sources feeding the grid (Henderson et al., 2020). This can have implications when training models for a longer duration or on cloud instances that can be distributed over data centres in different countries (Dodge et al., 2022). By reporting the instantaneous and aggregate carbon footprint of model training in EC-NAS-Bench, we facilitate the possibility of carbon footprint-aware NAS (Selvan et al., 2022). In this work, we focused only on energy-consumption awareness to work around the temporal and spatial variations of the carbon intensity.
Energy Consumption aware Few-shot NAS. Tabular benchmarks such as NAS-Bench-101 (Ying et al., 2019) provide an efficient way to explore different NAS strategies where the model training cost is only incurred once. One restriction with such tabular benchmarks is that they are specific to a set of architectures (e.g., feed-forward convolutional neural networks) and datasets (e.g., CIFAR-10). Developing tabular benchmarks for all possible network architectures and datasets is alleviated by one- or few-shot learning methods (Zhao et al., 2021; Zela et al., 2020). Integrating surrogate models for predicting learning dynamics (Zela et al., 2022) and energy measurements using the surrogate model in Section 2.5 could bridge the divide between few-shot and surrogate tabular benchmark datasets that are also energy consumption-aware. We have demonstrated the integration of surrogate energy models with existing tabular benchmark datasets, and extending these to surrogate benchmark datasets is straightforward.
Limitations. Constraining the number of vertices in the DAGs results in sparser search spaces for the optimisation strategy. The optimisation strategy will therefore be more sensitive to initialisation and choice of random seeds, and the empirical Pareto front will appear to be more rigid, as seen in the attainment plot in Figure 4-c, even when multiple initialisations and trials are carried out. We also only demonstrated experiments on the 4V and 5V spaces.
To reduce the computation cost, in EC-NAS-Bench we used the surrogate time and energy measurements that do not model training time variability. We also query the performance metrics from the three repeats of NAS-Bench-101 and update EC-NAS-Bench with their mean performance metrics.
All these limitations are primarily driven by the need to minimise the energy consumption of these experiments. While these choices come at the expense of variability, we argue that the resulting reduction in energy consumption justifies them. Further, results from such small-scale experiments have been shown to extend to larger spaces of architectures (Ying et al., 2019).
6 CONCLUSIONS AND FUTURE WORK
In this work, we presented an updated tabular benchmark dataset, EC-NAS-Bench, which tabulates the energy consumption and carbon footprint of training models, in addition to standard performance measures. Using multi-objective optimisation strategies, we showed that Pareto-optimal solutions offer appealing trade-offs between the performance measures and the energy consumption of model training. We qualitatively showed that large reductions (about 70%) in energy consumption are possible with <1% reduction in performance.
In addition to providing energy consumption measures, the EC-NAS-Bench benchmark provides metrics such as average carbon footprint and power consumption of CPUs, GPUs and DRAM. We hope this will foster interest in the development of models that are efficient and environmentally friendly by optimising for their energy consumption and carbon footprint.
A MULTI-OBJECTIVE OPTIMISATION
Formally, let the MOO problem be described by f : X → Rm, f(x) ↦ (f1(x), . . . , fm(x)). Here X denotes the search space of the optimization problem and m refers to the number of objectives. We assume w.l.o.g. that all objectives are to be minimized. For two points x, x′ ∈ X we say that x′ dominates x and write x′ ≺ x if ∀i ∈ {1, . . . ,m} : fi(x′) ≤ fi(x) ∧ ∃j ∈ {1, . . . ,m} : fj(x′) < fj(x). For X′, X′′ ⊆ X we say that X′ dominates X′′ and write X′ ≺ X′′ if ∀x′′ ∈ X′′ : ∃x′ ∈ X′ : x′ ≺ x′′. The subset of non-dominated solutions in a set X′ ⊆ X is given by ndom(X′) = {x | x ∈ X′ ∧ ∄x′ ∈ X′ \ {x} : x′ ≺ x}. The Pareto front of a set X′ ⊆ X is defined as F(X′) = {f(x) | x ∈ ndom(X′)} and, thus, the goal of MOO can be formalised as approximating F(X). In iterative MOO, the strategy is to step-wise improve a set of candidate solutions towards a sufficiently good approximation of F(X). For the design of a MOO algorithm, it is important to have a way to rank two sets X′ and X′′ w.r.t. the overall MOO goal even if neither X′ ≺ X′′ nor X′′ ≺ X′. This ranking can be done by the hypervolume measure. The hypervolume measure or S-metric (see Zitzler & Thiele, 1999) of a set X′ ⊆ X is the volume of the union of regions in Rm that are dominated by X′ and bounded by some appropriately chosen reference point r ∈ Rm:
Sr(X′) := Λ( ⋃_{x∈X′} [f1(x), r1] × · · · × [fm(x), rm] ),
where Λ( · ) is the Lebesgue measure. The hypervolume is, up to weighting of objectives, the only strictly Pareto-compliant measure (Zitzler et al., 2003) in the sense that, given two sets X′ and X′′, we have Sr(X′) > Sr(X′′) if X′ dominates X′′. As stated by Bringmann et al. (2013), the worst-case approximation factor of a Pareto front F(X′) obtained from any hypervolume-optimal set X′ with size |X′| = µ is asymptotically equal to the best worst-case approximation factor achievable by any set of size µ, namely Θ(1/µ) for additive approximation and 1 + Θ(1/µ) for relative approximation (Bringmann & Friedrich, 2013). Now we define the contributing hypervolume of an individual x ∈ X′ as
∆r(x,X′) := Sr(X′) − Sr(X′ \ {x}).
The value ∆r(x,X′) quantifies how much a candidate solution x contributes to the total hypervolume of X′ and can be regarded as a measure of the relevance of the point. Therefore, the contributing hypervolume is a popular criterion in MOO algorithms (e.g. Beume et al., 2007; Igel et al., 2007; Bader & Zitzler, 2011; Krause et al., 2016). If we iteratively optimize some solution set P, then points x with low ∆r(x, P) are candidates in an already crowded region of the current Pareto front F(P), while points with high ∆r(x, P) mark areas that are promising to explore further.
A.1 SEMOA: SIMPLE EVOLUTIONARY MULTI-OBJECTIVE OPTIMISATION ALGORITHM
In this study, we used a simple MOO algorithm based on hypervolume maximisation, outlined in Algorithm 1 and inspired by Krause et al. (2016). The algorithm iteratively updates a set P of candidate solutions, starting from a set of random network architectures. Dominated solutions are removed from P. Then λ new architectures are generated by first selecting λ architectures from P and then modifying these architectures according to the perturbation described in Procedure 2. The λ new architectures are added to P and the next iteration starts. In Procedure 2, the probability pedge for changing (i.e., either adding or removing) an edge is chosen such that, in expectation, two edges are changed, and the probability pnode for changing a node is set such that, in expectation, every second perturbation changes the label of a node.
The selection of the λ > m architectures from the current solution set is described in Procedure 3. We always select the extreme points in P that minimize a single objective (thus, the precise choice of the reference point r is of lesser importance). The remaining λ − m points are randomly chosen, preferring points with higher contributing hypervolume. The points in P are ranked according to their hypervolume contribution. The probability of being selected depends linearly on the rank. We use linear ranking selection (Baker, 1985; Grefenstette & Baker, 1989), where the parameter controlling the slope is set to η+ = 2. Always selecting the extreme points and focusing on points with large contributing hypervolume leads to a wide spread of non-dominated solutions.
Algorithm 1 SEMOA for NAS strategy
Input: objective f = (f1, . . . , fm), maximum number of iterations n
Output: set of non-dominated solutions P
1: Initialize P ⊂ X (e.g., randomly) ▷ Initial random architectures
2: P ← ndom(P) ▷ Discard dominated solutions
3: for i ← 1 to n do ▷ Loop over iterations
4: O ← LinearRankSample(P, λ) ▷ Get λ points from P
5: O ← Perturb(O) ▷ Change the architectures
6: Compute f(x) for all x ∈ O ▷ Evaluate architectures
7: P ← ndom(P ∪ O) ▷ Discard dominated points
8: end for
9: return P
Procedure 2 Perturb(O)
Input: set of architectures O, variation probabilities for edges and nodes pedge and pnode
Output: set of modified architectures O∗
1: for all MA ∈ O do ▷ Loop over matrices
2: repeat
3: for all αi,j ∈ MA do ▷ Loop over entries
4: With probability pedge flip αi,j
5: end for
6: for all l ∈ LA do ▷ Loop over labels
7: With probability pnode change the label of l
8: end for
9: until MA has changed
10: end for
11: return O∗
A.2 MULTI-OBJECTIVE OPTIMISATION BASELINES
Hyperparameters for the MOO baseline methods All baseline methods utilise the tabular benchmarks of EC-NAS-Bench for exploring and optimising architectures. The methods’ hyperparameters are chosen to circumvent unfair advantages gained by increased compute time, e.g., through the number of iterations or function evaluations. Although we allocate similar resources to the baseline methods, it is difficult to guarantee fairness when comparing them, given the disparity in the algorithmic approaches of the baselines.
The bag-of-baselines implementation discussed in Izquierdo et al. (2021) is used and modified for compatibility with the tabular benchmarks of EC-NAS-Bench. Each experiment is run for 10 trials using different initial seeds. All developed code will be made public once the blind-review period ends.
Random Search The baseline methods, except for Random Search, apply evolutionary search heuristics to optimize architectures in the search space. The random search implementation samples architectures from the search space uniformly at random, each time querying an architecture for a random epoch budget. Random search is run for 1000 iterations, since the other baseline methods, where applicable, also run for 1000 iterations.
Speeding up Evolutionary Multi-Objective Algorithm (SH-EMOA) As with all our baselines, we use the implementation in Izquierdo et al. (2021). We define a problem and search space following the bag-of-baselines API to allow model evaluation for different epoch budgets simply by querying the tabular benchmarks of EC-NAS-Bench. We initialize the algorithm with a population size of 250 and restrict the search to 1000 function evaluations for budgets between 4 and 108. However, we force the algorithm to only use the budgets 4, 12, 36 and 108, which are available in our search space. The remaining hyperparameters we leave at their defaults, which cover a uniform mutation type for architecture perturbation and tournament-style parent selection for offspring generation.
Mixed Surrogate Expected Hypervolume Improvement (MS-EHVI) This evolutionary algorithm, too, is initialized with a population size of 250. We choose to generate 50 samples to lessen computation time, and we merely pass an auxiliary function to discretize parameters to fit the experimental setup using the tabular benchmarks.
Procedure 3 LinearRankSample(P, λ)
Input: set P ⊂ X of candidate solutions, number λ of elements to be selected; reference point r ∈ Rm; parameter controlling the preference for better ranked points η+ ∈ [1, 2]
Output: O ⊂ P, |O| = λ
1: O ← ∅
2: for i ← 1 to m do
3: O ← O ∪ {argmin_{x∈P} fi(x)} ▷ Always add extremes
4: end for
5: Compute ∆r(x, P) for all x ∈ P ▷ Compute contributing hypervolume
6: Sort P according to ∆r(x, P)
7: Define the discrete probability distribution π over P, where πi = (1/|P|) (η+ − 2(η+ − 1)(i − 1)/(|P| − 1)) is the probability of the element xi with the ith largest contributing hypervolume
8: for i ← 1 to λ − m do ▷ Randomly select remaining points
9: Draw x ∼ π ▷ Select points with larger ∆r with higher probability
10: O ← O ∪ {x}
11: end for
12: return O
Simple Evolutionary Multi-Objective Algorithm (SEMOA) Our MOO algorithm is described in subsection A.1. The key hyperparameters are the initial population size, which we set to 250, similar to the baseline methods, and likewise, we run the algorithm for 1000 iterations.
B MEASUREMENTS FROM CARBONTRACKER
We modify the open-source tool Carbontracker (Anthony et al., 2020) to measure the additional metrics reported in Table 1. Measurements take into account the energy usage of Graphical Processing Units (GPU), Central Processing Units (CPU), and Dynamic Random Access Memory (DRAM). Note that the energy usage reported for CPUs includes the power usage of DRAM. Power usage is monitored, logged every 10 seconds, and reported in watts (W) as the average over these 10-second intervals during model training. The power integrated over a time interval, i.e., the energy, is then reported in units of kilowatt-hours (kWh), with 1 kWh = 3.6·10^6 joules (J). Additionally, the emission of greenhouse gases (GHG) is measured in equivalent units of grams of carbon dioxide (CO2eq). The CO2eq is estimated using the carbon intensity, i.e., the CO2eq units necessary to produce one kilowatt-hour (kWh) of electricity, to express the carbon footprint of model training. The carbon intensity is fetched from the carbon intensity data provider every 15 minutes during model training.
Measurements from the aforementioned components alone do not give an accurate depiction of the carbon footprint of model training, since the energy consumption of the supporting infrastructure (e.g., the data centre) is not considered. Therefore, the quality of the energy and carbon footprint estimates is improved by multiplying the power measurements by the PUE of the data centre hosting the compute resources. We use a PUE of 1.59, which is the global average for data centres in 2020 (Ascierto & Lawrence, 2020).
C ADDITIONAL RESULTS
The results in Figure 4 and Figure 5 were reported for the 5V and 7V spaces, respectively. The EC-NAS-Bench dataset also contains the complete 4V space. In this section we report the MOO solutions for the 4V search space. The trends observed for the 5V and 7V spaces also hold for this smaller space.
D SURROGATE ENERGY MODEL
The MLP-based surrogate model used to predict the training energy consumption of the 7V space, E, is given as fθ(·) : x ∈ RF → E ∈ R, where θ are the trainable parameters and x comprises the features obtained from the architecture specifications. Using the cell/graph encoding of architectures introduced in Section 2.1, we populate x with the upper-triangular entries of the adjacency matrix, the operations {input, 1x1conv, 3x3conv, 3x3maxpool, output} mapped to the categorical variables [1, 2, 3, 4, 5], respectively, and the total number of parameters. For the 7V space this results in x ∈ R36. We use a simple four-layered MLP with gelu(·) activation functions, except for the final layer, which transforms the input in the sequence 36 → 128 → 64 → 32 → 1. The surrogate energy model is trained using actual energy measurements from 4300 randomly sampled architectures from the 7V space. The model was implemented in PyTorch (Paszke et al., 2019) and trained on an Nvidia RTX 3060 GPU. Using a training, validation and test split of ratio [0.6, 0.1, 0.3], we train fθ(·) for 200 epochs with an initial learning rate of 5 × 10^-3 to minimise the L1-norm loss between the predicted and actual energy measurements using the Adam optimiser (Kingma & Ba, 2015). | 1. What is the focus and contribution of the paper regarding tabular benchmarking?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of resource efficiency and search spaces?
3. Do you have any concerns about the provided benchmark and its limitations?
4. How could the paper improve regarding multi-objective algorithms and statistical analysis?
5. What is your assessment of the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a tabular benchmark that includes various energy consumption metrics for a subset of networks in the NAS-Bench-101 search space. The authors provide simple multi-objective and single-objective baselines and run those on the benchmark.
Strengths And Weaknesses
I find the proposed benchmark and the motivation for using such a benchmark beneficial to the community, as awareness of resource-efficient NAS increases. The paper is in general easy to follow and well-structured. The authors also release their codebase together with the API. However, I find the paper could improve with the following:
A larger and novel search space. The smaller versions of the NAS-Bench-101 search space seem far from realistic, in my opinion. With 91 and 2532 architectures, respectively, they do not provide a realistic testbed, and even the simplest search algorithms would reach optimal solutions pretty quickly. Moreover, these spaces do not provide support for one-shot NAS methods.
More multi-objective algorithms to evaluate. The authors only evaluate a single multi-objective algorithm in Section 4. It would be beneficial if they would add more methods to this section. Check [1] for some simple methods.
More on the benchmark than the algorithms used. The authors spend most of Section 3 describing the multi-objective optimization algorithm they use to evaluate on their benchmark. While such a detailed description is appreciated, it does not serve the main purpose of the paper, and therefore I do not find it necessary to include in the main paper. Rather, more statistics and analysis of the benchmark would be useful.
-- References --
[1] Bag of Baselines for Multi-objective Joint Neural Architecture Search and Hyperparameter Optimization. Guerrero-Viu et al. 2021
Clarity, Quality, Novelty And Reproducibility
The paper is in general easy to follow. The authors provide the code with the supplementary material. The main issue with this submission is novelty. When proposing a new benchmark, there should be some novel component in terms of the search space, benchmark construction (e.g., the surrogate model used in surrogate benchmarks), or empirical evaluations that provide major unforeseen insights into the field. Unfortunately, none of these criteria is fulfilled.
ICLR | Title
Uncertainty Sets for Image Classifiers using Conformal Prediction
Abstract
Convolutional image classifiers can achieve high predictive accuracy, but quantifying their uncertainty remains an unresolved challenge, hindering their deployment in consequential settings. Existing uncertainty quantification techniques, such as Platt scaling, attempt to calibrate the network’s probability estimates, but they do not have formal guarantees. We present an algorithm that modifies any classifier to output a predictive set containing the true label with a user-specified probability, such as 90%. The algorithm is simple and fast like Platt scaling, but provides a formal finite-sample coverage guarantee for every model and dataset. Our method modifies an existing conformal prediction algorithm to give more stable predictive sets by regularizing the small scores of unlikely classes after Platt scaling. In experiments on both Imagenet and Imagenet-V2 with ResNet-152 and other classifiers, our scheme outperforms existing approaches, achieving coverage with sets that are often factors of 5 to 10 smaller than a stand-alone Platt scaling baseline.
1 INTRODUCTION
Imagine you are a doctor making a high-stakes medical decision based on diagnostic information from a computer vision classifier. What would you want the classifier to output in order to make the best decision? This is not a casual hypothetical; such classifiers are already used in medical settings (e.g., Razzak et al., 2018; Lundervold & Lundervold, 2019; Li et al., 2014). A maximum-likelihood diagnosis with an accompanying probability may not be the most essential piece of information. To ensure the health of the patient, you must also rule in or rule out harmful diagnoses. In other words, even if the most likely diagnosis is a stomach ache, it is equally or more important to rule out stomach cancer. Therefore, you would want the classifier to give you—in addition to an estimate of the most likely outcome—actionable uncertainty quantification, such as a set of predictions that provably covers the true diagnosis with a high probability (e.g., 90%). This is called a prediction set (see Figure 1). Our paper describes a method for constructing prediction sets from any pre-trained image classifier that are formally guaranteed to contain the true class with the desired probability, are relatively small, and are practical to implement. Our method modifies a conformal predictor (Vovk et al., 2005) given in Romano et al. (2020) for the purpose of modern image classification in order to make it more stable in the presence of noisy small probability estimates. Just as importantly, we provide extensive evaluations and code for conformal prediction in computer vision.
Formally, for a discrete response Y ∈ Y = {1, . . . ,K} and a feature vector X ∈ Rd, we desire an uncertainty set function, C(X), mapping a feature vector to a subset of {1, . . . ,K} such that
P (Y ∈ C(X)) ≥ 1− α, (1)
for a pre-specified confidence level α such as 10%. Conformal predictors like our method can modify any black-box classifier to output predictive sets that are rigorously guaranteed to satisfy the desired coverage property shown in Eq. (1). For evaluations, we focus on Imagenet classification
∗Equal contribution. Blog: https://people.eecs.berkeley.edu/~angelopoulos/blog/posts/conformal-classification
using convolutional neural networks (CNNs) as the base classifiers, since this is a particularly challenging testbed. In this setting, X would be the image and Y would be the class label. Note that the guarantee in Eq. (1) is marginal over X and Y—it holds on average, not for a particular image X .
A first approach toward this goal might be to assemble the set by including classes from highest to lowest probability (e.g., after Platt scaling and a softmax function; see Platt et al., 1999; Guo et al., 2017) until their sum just exceeds the threshold 1 − α. We call this strategy naive and formulate it precisely in Algorithm 1. There are two problems with naive: first, the probabilities output by CNNs are known to be incorrect (Nixon et al., 2019), so the sets from naive do not achieve coverage. Second, image classification models’ tail probabilities are often badly miscalibrated, leading to large sets that do not faithfully articulate the uncertainty of the model; see Section 2.3. Moreover, smaller sets that achieve the same coverage level can be generated with other methods.
The coverage problem can be solved by picking a new threshold using holdout samples. For example, with α =10%, if choosing sets that contain 93% estimated probability achieves 90% coverage on the holdout set, we use the 93% cutoff instead. We refer to this algorithm, introduced in Romano et al. (2020), as Adaptive Prediction Sets (APS). The APS procedure provides coverage but still produces large sets. To fix this, we introduce a regularization technique that tempers the influence of these noisy estimates, leading to smaller, more stable sets. We describe our proposed algorithm, Regularized Adaptive Prediction Sets (RAPS), in Algorithms 2 and 3 (with APS as a special case). As we will see in Section 2, both APS and RAPS are always guaranteed to satisfy Eq. (1)—regardless of model and dataset. Furthermore, we show that RAPS is guaranteed to have better performance than choosing a fixed-size set. Both methods impose negligible computational requirements in both training and evaluation, and output useful estimates of the model’s uncertainty on a new image given, say, 1000 held-out examples.
In Section 3 we conduct the most extensive evaluation of conformal prediction in deep learning to date on Imagenet and Imagenet-V2. We find that RAPS sets always have smaller average size than naive and APS sets. For example, using a ResNeXt-101, naive does not achieve coverage, while APS and RAPS achieve it almost exactly. However, APS sets have an average size of 19, while RAPS sets have an average size of 2 at α = 10% (Figure 2 and Table 1). We will provide an accompanying codebase that implements our method as a wrapper for any PyTorch classifier, along with code to exactly reproduce all of our experiments.
1.1 RELATED WORK
Reliably estimating predictive uncertainty for neural networks is an unsolved problem. Historically, the standard approach has been to train a Bayesian neural network to learn a distribution over network weights (Quinonero-Candela et al., 2005; MacKay, 1992; Neal, 2012; Kuleshov et al., 2018; Gal, 2016). This approach requires computational and algorithmic modifications; other approaches avoid these via ensembles (Lakshminarayanan et al., 2017; Jiang et al., 2018) or approximations of Bayesian inference (Riquelme et al., 2018; Sensoy et al., 2018). These methods also have major practical limitations; for example, ensembling requires training many copies of a neural network adversarially. Therefore, the most widely used strategy is ad-hoc traditional calibration of the softmax scores with Platt scaling (Platt et al., 1999; Guo et al., 2017; Nixon et al., 2019).
This work develops a method for uncertainty quantification based on conformal prediction. Originating in the online learning literature, conformal prediction is an approach for generating predictive sets that satisfy the coverage property in Eq. (1) (Vovk et al., 1999; 2005). We use a convenient data-splitting version known as split conformal prediction that enables conformal prediction methods to be deployed for essentially any predictor (Papadopoulos et al., 2002; Lei et al., 2018). While mechanically very different from traditional calibration as discussed above, we will refer to our approach as conformal calibration to highlight that the two methodologies have overlapping but different goals.
Conformal prediction is a general framework, not a specific algorithm—important design decisions must be made to achieve the best performance for each context. To this end, Romano et al. (2020) and Cauchois et al. (2020) introduce techniques aimed at achieving coverage that is similar across regions of feature space, whereas Vovk et al. (2003); Hechtlinger et al. (2018) and Guan & Tibshirani (2019) introduce techniques aimed at achieving equal coverage for each class. While these methods have conceptual appeal, thus far there has been limited empirical evaluation of this general approach for state-of-the-art CNNs. Concretely, the only works that we are aware of that include some evaluation of conformal methods on ImageNet—the gold standard for benchmarking computer vision methods—are Hechtlinger et al. (2018), Park et al. (2019), Cauchois et al. (2020), and Messoudi et al. (2020), although in all four cases further experiments are needed to more fully evaluate their operating characteristics for practical deployment. At the heart of conformal prediction is the conformal score: a measure of similarity between labeled examples which is used to compare a new point against those in a held-out set. Our theoretical contribution can be summarized as a modification of the conformal score from Romano et al. (2020) to have smaller, more stable sets. Lastly, there are alternative approaches to returning prediction sets not based on conformal prediction (Pearce et al., 2018; Zhang et al., 2018). These methods can be used as input to a conformal procedure to potentially improve performance, but they do not have finite-sample coverage guarantees when used alone.
2 METHODS
In developing uncertainty set methods to improve upon naive, we are guided by three desiderata. First and most importantly, the coverage desideratum says the sets must provide 1−α coverage, as discussed above. Secondly, the size desideratum says we want sets of small size, since these convey more detailed information and may be more useful in practice. Lastly, the adaptiveness desideratum says we want the sets to communicate instance-wise uncertainty: they should be smaller for easy test-time examples than for hard ones; see Figure 1 for an illustration. Coverage and size are obviously competing objectives, but size and adaptiveness are also often in tension. The size desideratum seeks small sets, while the adaptiveness desideratum seeks larger sets when the classifier is uncertain.
Algorithm 1 Naive Prediction Sets
Input: α, sorted scores s, associated permutation of classes I, boolean rand
 1: procedure NAIVE(α, s, I, rand)
 2:   L ← 1
 3:   while Σ_{i=1}^{L} s_i < 1 − α do        ▷ Stop once 1 − α probability is exceeded
 4:     L ← L + 1
 5:   if rand then                            ▷ Break ties randomly (explained in Appendix B)
 6:     U ← Unif(0, 1)
 7:     V ← (Σ_{i=1}^{L} s_i − (1 − α)) / s_L
 8:     if U ≤ V then
 9:       L ← L − 1
10:   return {I_1, ..., I_L}
Output: The 1 − α prediction set, {I_1, ..., I_L}
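For concreteness, Algorithm 1 admits a minimal Python sketch (a sketch, not the authors' released code; probs is an assumed softmax vector for a single example):

import numpy as np

def naive_set(probs, alpha, rand=True, rng=np.random.default_rng(0)):
    # Order classes from most to least likely and accumulate their scores.
    order = np.argsort(probs)[::-1]
    s = probs[order]
    csum = np.cumsum(s)
    # Smallest L with the top-L mass >= 1 - alpha (the while loop in Algorithm 1).
    L = int(np.searchsorted(csum, 1 - alpha)) + 1
    if rand:  # randomized tie-breaking from Appendix B
        V = (csum[L - 1] - (1 - alpha)) / s[L - 1]
        if rng.uniform() <= V:
            L -= 1
    return order[:L]

On a perfectly calibrated model this would deliver the intended coverage; the point above is that it does not on real CNNs.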
For example, always predicting a set of size five could achieve coverage, but it is not adaptive. As noted above, both APS and RAPS achieve correct coverage, and we will show that RAPS improves upon APS according to the other two desiderata.
We now turn to the specifics of our proposed method. We begin in Subsection 2.1 by describing an abstract data-splitting procedure called conformal calibration that enables the near-automatic construction of valid predictive sets (that is, sets satisfying Eq. (1)). Subsequently, in Subsection 2.2, we provide a detailed presentation of our procedure, with commentary in Section 2.3. In Subsection 2.4 we discuss the optimality of our procedure, proving that it is at least as good as the procedure that returns sets of a fixed size, unlike alternative approaches.
2.1 CONFORMAL CALIBRATION
We first review a general technique for producing valid prediction sets, following the articulation in Gupta et al. (2019). Consider a procedure that outputs a predictive set for each observation, and further suppose that this procedure has a tuning parameter τ that controls the size of the sets. (In RAPS, τ is the cumulative sum of the sorted, penalized classifier scores.) We take a small independent conformal calibration set of data, and then choose the tuning parameter τ such that the predictive sets are large enough to achieve 1 − α coverage on this set. See Figure 3 for an illustration. This calibration step yields a choice of τ, and the resulting set is formally guaranteed to have coverage 1 − α on a future test point from the same distribution; see Theorem 1 below. Formally, let (X_i, Y_i)_{i=1,...,n} be an independent and identically distributed (i.i.d.) set of variables that was not used for model training. Further, let C(x, u, τ) : R^d × [0, 1] × R → 2^Y be a set-valued function that takes a feature vector x to a subset of the possible labels. The second argument u is included to allow for randomized procedures; let U_1, . . . , U_n be i.i.d. uniform [0, 1] random variables that will serve as the second argument for each data point. Suppose that the sets are indexed by τ such that they are nested, meaning larger values of τ lead to larger sets:
C(x, u, τ_1) ⊆ C(x, u, τ_2) if τ_1 ≤ τ_2.   (2)

To find a function that will achieve 1 − α coverage on test data, we select the smallest τ that gives at least 1 − α coverage on the conformal calibration set, with a slight correction to account for the finite sample size:
τ̂_ccal = inf{ τ : |{i : Y_i ∈ C(X_i, U_i, τ)}| / n ≥ ⌈(n + 1)(1 − α)⌉ / n }.   (3)
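In code, once each calibration example's conformal score s_i = inf{τ : Y_i ∈ C(X_i, U_i, τ)} is available (this is the quantity used in the proof in Appendix A), Eq. (3) reduces to picking a generalized quantile; a minimal sketch:

import numpy as np

def calibrate_tau(cal_scores, alpha):
    # tau_hat is the ceil((n + 1)(1 - alpha))-th smallest calibration score (Eq. 3).
    n = len(cal_scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return np.sort(cal_scores)[min(k, n) - 1]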
The set function C(x, u, τ) with this data-driven choice of τ is guaranteed to have correct finite-sample coverage on a fresh test observation, as stated formally next. Theorem 1 (Conformal calibration coverage guarantee). Suppose (X_i, Y_i, U_i)_{i=1,...,n} and (X_{n+1}, Y_{n+1}, U_{n+1}) are i.i.d. and let C(x, u, τ) be a set-valued function satisfying the nesting property in Eq. (2). Suppose further that the sets C(x, u, τ) grow to include all labels for large enough τ: for all x ∈ R^d, C(x, u, τ) = Y for some τ. Then for τ̂_ccal defined as in Eq. (3), we have the following coverage guarantee:
P(Y_{n+1} ∈ C(X_{n+1}, U_{n+1}, τ̂_ccal)) ≥ 1 − α.
This is the same coverage property as Eq. (1) in the introduction, written in a more explicit manner. The result is not new—a special case of this result leveraging sample-splitting first appears in the regression setting in Papadopoulos et al. (2002), and the core idea of conformal prediction was introduced even earlier; see (Vovk et al., 2005).
As a technical remark, the theorem also holds if the observations satisfy the weaker condition of exchangeability; see Vovk et al. (2005). In addition, for most families of set-valued functions C(x, u, τ) there is a matching upper bound:
P(Y_{n+1} ∈ C(X_{n+1}, U_{n+1}, τ̂_ccal)) ≤ 1 − α + 1/(n + 1).
Roughly speaking, this will hold whenever the sets grow smoothly in τ . See Lei et al. (2018) for a formal statement of the required conditions.
2.2 OUR METHOD
Conformal calibration is a powerful general idea, allowing one to achieve the coverage desideratum for any choice of sets C(x, u, τ). Nonetheless, this is not yet a full solution, since the quality of the resulting prediction sets can vary dramatically depending on the design of C(x, u, τ). In particular, we recall the size and adaptiveness desiderata from Section 1—we want our uncertainty sets to be as small as possible while faithfully articulating the instance-wise uncertainty of each test point. In this section, we explicitly give our algorithm, which can be viewed as a special case of conformal calibration with the uncertainty sets C designed to extract information from CNNs. Our algorithm has three main ingredients. First, for a feature vector x, the base model computes class probabilities π̂_x ∈ R^K, and we order the classes from most probable to least probable. Then, we add a regularization term to promote small predictive sets. Finally, we conformally calibrate the penalized prediction sets to guarantee coverage on future test points.
Formally, let ρ_x(y) = Σ_{y′=1}^{K} π̂_x(y′) I{π̂_x(y′) > π̂_x(y)} be the total probability mass of the set of labels that are more likely than y. These are all the labels that will be included before y is included. In addition, let o_x(y) = |{y′ ∈ Y : π̂_x(y′) ≥ π̂_x(y)}| be the rank of y among the labels based on the probabilities π̂. For example, if y is the third most likely label, then o_x(y) = 3.¹ We take

C*(x, u, τ) := { y : ρ_x(y) + π̂_x(y) · u + λ · (o_x(y) − k_reg)_+ ≤ τ },   (4)

with the λ · (o_x(y) − k_reg)_+ term acting as the regularization,
where (z)_+ denotes the positive part of z and λ, k_reg ≥ 0 are regularization hyperparameters that are introduced to encourage small set sizes. See Figure 3 for a visualization of a RAPS predictive set and Appendix E for a discussion of how to select k_reg and λ.
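A per-label sketch of the score inside Eq. (4) (probs is the assumed vector π̂_x; setting λ = 0 recovers the APS score):

import numpy as np

def raps_score(probs, y, u, lam, k_reg):
    rho = probs[probs > probs[y]].sum()   # mass of labels strictly more likely than y
    o = int((probs >= probs[y]).sum())    # rank o_x(y), assuming distinct probabilities
    return rho + probs[y] * u + lam * max(o - k_reg, 0)

A label y then belongs to C*(x, u, τ) exactly when raps_score(probs, y, u, lam, k_reg) ≤ τ.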
Since this is the heart of our proposal, we carefully parse each term. First, the ρx(y) term increases as y ranges from the most probable to least probable label, so our sets will prefer to include the y that are predicted to be the most probable. The second term, π̂x(y) · u, is a randomized term to handle the fact that the value will jump discretely with the inclusion of each new y. The randomization term can never impact more than one value of y: there is at most one value of y such that y ∈ C(x, 0, τ) but y /∈ C(x, 1, τ). These first two terms can be viewed as the CDF transform after arranging the classes from most likely to least likely, randomized in the usual way to result in a continuous uniform random variable (cf. Romano et al., 2020). We discuss randomization further in Appendix B.
Lastly, the regularization promotes small set sizes: for values of y that occur farther down the ordered list of classes, the term λ · (ox(y)− kreg)+ makes that value of y require a higher value of τ before it is included in the predictive set. For example, if kreg = 5, then the sixth most likely value of y has an extra penalty of size λ, so it will never be included until τ exceeds ρx(y) + π̂x(y) · u + λ, whereas it enters when τ exceeds ρx(y) + π̂x(y) · u in the nonregularized version. Our method has the following coverage property: Proposition 1 (RAPS coverage guarantee). Suppose (Xi, Yi, Ui)i=1,...,n and (Xn+1, Yn+1, Un+1) are i.i.d. and let C∗(x, u, τ) be defined as in Eq. (4). Suppose further that π̂x(y) > 0 for all x and y. Then for τ̂ccal defined as in Eq. (3), we have the following coverage guarantee:
1 − α ≤ P(Y_{n+1} ∈ C*(X_{n+1}, U_{n+1}, τ̂_ccal)) ≤ 1 − α + 1/(n + 1).
¹ For ease of notation, we assume distinct probabilities. Otherwise, label-ordering ties should be broken randomly.
Algorithm 2 RAPS Conformal Calibration
Input: α; s ∈ [0, 1]^{n×K}, I ∈ {1, ..., K}^{n×K}, and one-hot y ∈ {0, 1}^K corresponding respectively to the sorted scores, the associated permutation of indexes, and labels for each of n examples in the calibration set; k_reg; λ; boolean rand
 1: procedure RAPSC(α, s, I, y, λ)
 2:   for i ∈ {1, ..., n} do
 3:     L_i ← {j : I_{i,j} = y_i}
 4:     E_i ← Σ_{j=0}^{L_i} s_{i,j} + λ(L_i − k_reg + 1)_+
 5:     if rand then
 6:       U ∼ Unif(0, 1)
 7:       E_i ← E_i − s_{i,L_i} + U · s_{i,L_i}
 8:   τ̂_ccal ← the ⌈(1 − α)(1 + n)⌉ largest value in {E_i}_{i=1}^{n}
 9:   return τ̂_ccal
Output: The generalized quantile, τ̂_ccal        ▷ The value in Eq. (3)
Algorithm 3 RAPS Prediction Sets
Input: α, sorted scores s and the associated permutation of classes I for a test-time example, τ̂_ccal from Algorithm 2, k_reg, λ, boolean rand
 1: procedure RAPS(α, s, I, τ̂_ccal, k_reg, λ, rand)
 2:   L ← |{ j ∈ Y : Σ_{i=0}^{j} s_i + λ(j − k_reg)_+ ≤ τ̂_ccal }| + 1
 3:   V ← (τ̂_ccal − Σ_{i=0}^{L−1} s_i − λ(L − k_reg)_+ + s_{L−1}) / s_{L−1}
 4:   if rand and V ≤ U ∼ Unif(0, 1) then
 5:     L ← L − 1
 6:   return C = {I_1, ..., I_L}                 ▷ The L most likely classes
Output: The 1 − α prediction set, C             ▷ The set in Eq. (4)
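The two algorithms admit a compact vectorized sketch (a sketch mirroring, but distinct from, the authors' released PyTorch wrapper; srt holds per-row descending-sorted scores and true_pos the 0-based position of each true label, both assumed precomputed; τ̂ is taken as the ⌈(1 − α)(n + 1)⌉-th smallest score, consistent with Eq. (3) and the proof of Theorem 1):

import numpy as np

def raps_calibrate(srt, true_pos, alpha, lam, k_reg, rng=np.random.default_rng(0)):
    n = srt.shape[0]
    rows = np.arange(n)
    csum = np.cumsum(srt, axis=1)
    E = csum[rows, true_pos] + lam * np.maximum(true_pos + 1 - k_reg, 0)
    U = rng.uniform(size=n)                               # randomization (rand = True)
    E = E - srt[rows, true_pos] + U * srt[rows, true_pos]
    k = int(np.ceil((1 - alpha) * (n + 1)))
    return np.sort(E)[min(k, n) - 1]

def raps_predict(s, I, tau, lam, k_reg, rng=np.random.default_rng(0)):
    K = len(s)
    penal = np.cumsum(s) + lam * np.maximum(np.arange(1, K + 1) - k_reg, 0)
    L = min(int(np.searchsorted(penal, tau, side="right")) + 1, K)
    V = (tau - penal[L - 1] + s[L - 1]) / s[L - 1]        # keep boundary class w.p. V
    if rng.uniform() >= V:
        L -= 1
    return I[:max(L, 0)]

Setting λ = 0 recovers the APS procedure.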
Note that the first inequality is a corollary of Theorem 1, and the second inequality is a special case of the remark in Section 2.1. The restriction that π̂x(y) > 0 is not necessary for the first inequality.
2.3 WHY REGULARIZE?
In our experiments, the sets from APS are larger than necessary, because APS is sensitive to the noisy probability estimates far down the list of classes. This noise leads to a permutation problem of unlikely classes, where ordering of the classes with small probability estimates is determined mostly by random chance. If 5% of the true classes from the calibration set are deep in the tail due to the permutation problem, APS will choose large 95% predictive sets; see Figure 2. The inclusion of the RAPS regularization causes the algorithm to avoid using the unreliable probabilities in the tail; see Figure 4. We discuss how RAPS improves the adaptiveness of APS in Section 4 and Appendix E.
2.4 OPTIMALITY CONSIDERATIONS
To complement these experimental results, we now formally prove that RAPS with the correct regularization parameters will always dominate the simple procedure that returns a fixed set size. (Section 3.5 shows the parameters are easy to select and RAPS is not sensitive to their values.) For a feature vector x, let ŷ_(j)(x) be the label with the j-th highest predicted probability. We define the top-k predictive sets to be {ŷ_(1)(x), . . . , ŷ_(k)(x)}. Proposition 2 (RAPS dominates top-k sets). Suppose (X_i, Y_i, U_i)_{i=1,...,n} and (X_{n+1}, Y_{n+1}, U_{n+1}) are i.i.d. draws. Let k* be the smallest k such that the top-k predictive sets have coverage at least ⌈(n + 1)(1 − α)⌉/n on the conformal calibration points (X_i, Y_i)_{i=1,...,n}. Take C*(x, u, τ) as in Eq. (4) with any k_reg ≤ k* and λ = 1. Then with τ̂_ccal chosen as in Eq. (3), we have

C*(X_{n+1}, U_{n+1}, τ̂_ccal) ⊆ {ŷ_(1)(X_{n+1}), . . . , ŷ_(k*)(X_{n+1})}.
In words, the RAPS procedure with heavy regularization will be at least as good as the top-k procedure in the sense that it has smaller or same average set size while maintaining the desired coverage level. This is not true of either the naive baseline or the APS procedure; Table 2 shows that these two procedures usually return predictive sets with size much larger than k∗.
3 EXPERIMENTS
In this section we report on experiments that study the performance of the predictive sets from naive, APS, and RAPS, evaluating each based on the three desiderata above. We begin with a brief preview of the experiments. In Experiment 1, we evaluate naive, APS, and RAPS on Imagenet-Val. Both APS and RAPS provided almost exact coverage, while naive sets had coverage slightly below the specified level. APS has larger sets on average than naive and RAPS. RAPS has a much smaller average set size than APS and naive. In Experiment 2, we repeat Experiment 1 on Imagenet-V2, and the conclusions still hold. In Experiment 3, we produce histograms of set sizes for naive, APS, and RAPS for several different values of λ, illustrating a simple tradeoff between set size and adaptiveness. In Experiment 4, we compute histograms of RAPS sets stratified by image difficulty, showing that RAPS sets are smaller for easier images than for difficult ones. In Experiment 5, we report the performance of RAPS with many values of the tuning parameters.
In our experiments, we use nine standard, pretrained Imagenet classifiers from the torchvision repository (Paszke et al., 2019) with standard normalization, resize, and crop parameters. Before applying naive, APS, or RAPS, we calibrated the classifiers using the standard temperature scaling/Platt scaling procedure as in Guo et al. (2017) on the calibration set. Thereafter, naive, APS, and RAPS were applied, with RAPS using a data-driven choice of parameters described in Appendix E. We use the randomized versions of these algorithms—see Appendix B for a discussion.
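The temperature-scaling step can be sketched as follows (a minimal version of the Guo et al. (2017) procedure, using SGD rather than the LBFGS optimizer of the original; logits and labels are assumed tensors from the calibration split):

import torch

def temperature_scale(logits, labels, lr=0.01, steps=500):
    # Fit a single scalar temperature T by minimizing NLL on held-out data.
    # (A proper implementation would use LBFGS and constrain T > 0.)
    T = torch.nn.Parameter(torch.ones(1))
    opt = torch.optim.SGD([T], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(logits / T, labels)
        loss.backward()
        opt.step()
    return torch.softmax(logits / T.detach(), dim=1)   # calibrated probabilities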
3.1 EXPERIMENT 1: COVERAGE VS SET SIZE ON IMAGENET
In this experiment, we calculated the coverage and mean set size of each procedure for two different choices of α. Over 100 trials, we randomly sampled two subsets of Imagenet-Val: one conformal calibration subset of size 20K and one evaluation subset of size 20K. The median-of-means over trials for both coverage and set size are reported in Table 1. Figure 2 illustrates the performances of naive, APS, and RAPS; RAPS has much smaller sets than both naive and APS, while achieving coverage. We also report results from a conformalized fixed-k procedure, which finds the smallest fixed set size achieving coverage on the holdout set, k∗, then predicts sets of size k∗ − 1 or k∗ on new examples in order to achieve exact coverage; see Algorithm 4 in Appendix E.
3.2 EXPERIMENT 2: COVERAGE VS SET SIZE ON IMAGENET-V2
The same procedure as Experiment 1 was repeated on Imagenet-V2, with exactly the same normalization, resize, and crop parameters. The size of the calibration and evaluation sets was 5K, since Imagenet-V2 is a smaller dataset. The result shows that our method can still provide coverage even for models trained on different distributions, as long as the conformal calibration set comes from the new distribution. The variance of the coverage is higher due to having less data.
3.3 EXPERIMENT 3: SET SIZES OF NAIVE , APS, AND RAPS ON IMAGENET
We investigate the effect of regularization in more detail. For three values of λ, we collected the set sizes produced by each of naive, APS, and RAPS and report their histograms in Figure 4.
3.4 EXPERIMENT 4: ADAPTIVENESS OF RAPS ON IMAGENET
We now show that RAPS sets are smaller for easy images than hard ones, addressing the adaptiveness desideratum. Table 4 reports the size-stratified coverages of RAPS at the 90% level with kreg = 5 and different choices of λ. When λ is small, RAPS allows sets to be large. But when λ = 1, RAPS
clips sets to be a maximum of size 5. Table 7 (in the Appendix) stratifies by image difficulty, showing that RAPS sets are small for easy examples and large for hard ones. Experiments 3 and 4 together illustrate the tradeoff between adaptiveness and size: as the average set size decreases, the RAPS procedure truncates sets larger than the smallest fixed set that provides coverage, taming the heavy tail of the APS procedure. Since RAPS with large λ undercovers hard examples, it must compensate by taking larger sets for easy examples to ensure the 1− α marginal coverage guarantee. However, the size only increases slightly since easy images are more common than hard ones, and the total probability mass can often exceed τ̂ccal by including only one more class. If this behavior is not desired, we can instead automatically pick λ to optimize the adaptiveness of RAPS; see Section 4.
3.5 EXPERIMENT 5: CHOICE OF TUNING PARAMETERS
While any value of the tuning parameters λ and kreg lead to coverage (Proposition 1), some values will lead to smaller sets. In Experiments 1 and 2, we chose kreg and λ adaptively from data (see Appendix E), achieving strong results for all models and choices of the coverage level. Table 3 gives the performance of RAPS with many choices of kreg and λ for ResNet-152.
4 ADAPTIVENESS AND CONDITIONAL COVERAGE
In this section, we point to a definition of adaptiveness that is more natural for the image classification setting than the existing notion of conditional coverage. We show that APS does not satisfy conditional coverage, and that RAPS with small λ outperforms it in terms of adaptiveness.
We say that a set-valued predictor C : Rd → 2Y satisfies exact conditional coverage if P (Y ∈ C(X) | X = x) = 1 − α for each x. Distribution-free guarantees on conditional coverage are impossible (Vovk, 2012; Lei & Wasserman, 2014), but many algorithms try to satisfy it approximately (Romano et al., 2019; 2020; Cauchois et al., 2020). In a similar spirit, Tibshirani et al. (2019) suggest a notion of local conditional coverage, where one asks for coverage in a neighborhood of each point, weighted according to a chosen kernel. Cauchois et al. (2020) introduce the worst-case slab metric for measuring violations of the conditional coverage property. We present a different way of measuring violations of conditional coverage.
Proposition 3. Suppose P (Y ∈ C(X) | X = x) = 1− α for each x ∈ Rd. Then, P (Y ∈ C(X) | {|C(X)| ∈ A}) = 1− α for any A ⊂ {0, 1, 2, . . . }.
In words, if conditional coverage holds, then coverage holds after stratifying by set size. Based on this result, in Appendix E we introduce the size-stratified coverage violation criterion, a simple and pragmatic way of quantifying adaptiveness. Then, we automatically tune λ on this metric so that RAPS markedly outperforms the adaptiveness of APS (see Table 8).
[Residue of a results table, apparently Table 3 (RAPS average set size for ResNet-152 across regularization settings): rows are k_reg ∈ {1, 2, 5, 10, 50}; columns correspond to increasing values of λ (column headers lost in extraction).]

k_reg =  1:  11.2  10.2  7.0  3.6  2.9  2.3  2.1  2.3  2.2  2.2
k_reg =  2:  11.2  10.2  7.1  3.7  3.0  2.4  2.1  2.3  2.2  2.2
k_reg =  5:  11.2  10.2  7.2  3.9  3.4  2.9  2.6  2.5  2.5  2.5
k_reg = 10:  11.2  10.2  7.4  4.5  4.0  3.6  3.4  3.4  3.4  3.4
k_reg = 50:  11.2  10.6  8.7  7.2  7.0  6.9  6.9  6.9  6.9  6.9
In Table 4, we report on the coverage of APS and RAPS, stratified by the size of the prediction set. Turning our attention to the λ = 0 column, we see that when APS outputs a set of size 101–1000, APS has coverage 97%, substantially higher than the 90% nominal rate. By Proposition 3, we conclude that APS is not achieving exact conditional coverage, because the scores are far from the oracle probabilities. The APS procedure still achieves marginal coverage by overcovering hard examples and undercovering easy ones, an undesirable behavior. Alternatively, RAPS can be used to regularize the set sizes—for λ = 0.001 to λ = 0.01 the coverage stratified by set size is more balanced. In summary, even purely based on the adaptiveness desideratum, RAPS with light regularization is preferable to APS. Note that as the size of the training data increases, as long as π̂ is consistent, naive and APS will become more stable, and so we expect less regularization will be needed.
Lastly, we argue that conditional coverage is a poor notion of adaptiveness when the best possible model (i.e., one fit on infinite data) has high accuracy. Given such a model, the oracle procedure from Romano et al. (2020) would return the correct label with probability 1 − α and the empty set with probability α. That is, having correct conditional coverage for high-signal problems where Y is perfectly determined by X requires a perfect classifier. In our experiments on ImageNet, APS does not approximate this behavior. Therefore, conditional coverage is not the right goal for prediction sets with realistic sample sizes. Proposition 3 suggests a relaxation. We could require that we have the right coverage, no matter the size of the prediction set: P(Y ∈ C(X) | {|C(X)| ∈ A}) ≥ 1 − α for any A ⊂ {0, 1, 2, . . . }; Appendix E.2 develops this idea. We view this as a promising way to reason about adaptiveness in high-signal problems such as image classification.
5 DISCUSSION
For classification tasks with many possible labels, our method enables a researcher to take any base classifier and return predictive sets guaranteed to achieve a pre-specified error level, such as 90%, while retaining small average size. It is simple to deploy, so it is an attractive, automatic way to quantify the uncertainty of image classifiers—an essential task in such settings as medical diagnostics, self-driving vehicles, and flagging dangerous internet content. Predictive sets in computer vision (from RAPS and other conformal methods) have many further uses, since they systematically identify hard test-time examples. Finding such examples is useful in active learning where one only has resources to label a small number of points. In a different direction, one can improve efficiency of a classifier by using a cheap classifier outputting a prediction set first, and an expensive one only when the cheap classifier outputs a large set (a cascade; see, e.g., Li et al., 2015, and Fisch et al., 2021, for an implementation of conformal prediction in this setting). One can also use predictive sets during model development to identify failure cases and outliers and suggest strategies for improving its performance. Prediction sets are most useful for problems with many classes; returning to our initial medical motivation, we envision RAPS could be used by a doctor to automatically screen for a large number of diseases (e.g. via a blood sample) and refer the patient to relevant specialists.
A PROOFS
Theorem 1. Let s(x, u, y) = inf{τ : y ∈ C(x, u, τ)}, and let s_i = s(X_i, U_i, Y_i) for i = 1, . . . , n. Then
{y : s(x, u, y) ≤ τ} = {y : y ∈ C(x, u, τ)}
because C(x, u, τ) is a finite set growing in τ by the assumption in Eq. (2). Thus,
{τ : |{i : s_i ≤ τ}| ≥ ⌈(1 − α)(n + 1)⌉} = { τ : |{i : Y_i ∈ C(X_i, U_i, τ)}| / n ≥ ⌈(n + 1)(1 − α)⌉ / n }.
Considering the left expression, the infimum over τ of the set on the left-hand side is the ⌈(1 − α)(n + 1)⌉-th smallest value of the s_i, so this is the value of τ̂_ccal. Since s_1, . . . , s_n, s(X_{n+1}, U_{n+1}, Y_{n+1}) are exchangeable random variables, |{i : s(X_{n+1}, U_{n+1}, Y_{n+1}) > s_i}| is stochastically dominated by the discrete uniform distribution on {0, 1, . . . , n}. We thus have that
P(Y_{n+1} ∉ C(X_{n+1}, U_{n+1}, τ̂_ccal)) = P(s(X_{n+1}, U_{n+1}, Y_{n+1}) > τ̂_ccal)
= P(|{i : s(X_{n+1}, U_{n+1}, Y_{n+1}) > s_i}| ≥ ⌈(n + 1)(1 − α)⌉)
= P( |{i : s(X_{n+1}, U_{n+1}, Y_{n+1}) > s_i}| / (n + 1) ≥ ⌈(n + 1)(1 − α)⌉ / (n + 1) ) ≤ α.
Proposition 1. The lower bound follows from Theorem 1. To prove the upper bound, using the result from Theorem 2.2 of Lei et al. (2018) it suffices to show that the variables s(X_i, U_i, Y_i) = inf{τ : Y_i ∈ C(X_i, U_i, τ)} are almost surely distinct. To this end, note that

s(X_i, U_i, Y_i) = ρ_{X_i}(Y_i) + π̂_{X_i}(Y_i) · U_i + λ(o_{X_i}(Y_i) − k_reg)_+,
and due to the middle term of the sum, these values are distinct almost surely provided π̂Xi(Yi) > 0.
Proposition 2. We first show that τ̂_ccal ≤ 1 + k* − k_reg. Note that since at least ⌈(1 − α)(n + 1)⌉ of the conformal calibration points are covered by a set of size k*, at least ⌈(1 − α)(n + 1)⌉ of the E_i in Algorithm 2 are less than or equal to 1 + k* − k_reg. Thus, by the definition of τ̂_ccal, it is less than or equal to 1 + k* − k_reg. Then, note that by the definition of C* in Eq. (4), we have that

|C*(X_{n+1}, U_{n+1}, τ̂_ccal)| ≤ k*

as long as τ̂_ccal ≤ 1 + k* − k_reg, since for the (k* + 1)-th most likely class, the sum in Eq. (4) will exceed λ · (1 + k* − k_reg) = (1 + k* − k_reg) ≥ τ̂_ccal, and so the (k* + 1)-th class will not be in the set.
Proposition 3. Suppose P(Y ∈ C(X) | X = x) = 1 − α for each x ∈ R^d. Then,

P(Y ∈ C(X) | |C(X)| ∈ A) = [ ∫ P(Y ∈ C(x) | X = x) I{|C(x)| ∈ A} dP(x) ] / P(|C(X)| ∈ A)
= [ ∫ (1 − α) I{|C(x)| ∈ A} dP(x) ] / P(|C(X)| ∈ A) = 1 − α.
B RANDOMIZED PREDICTORS
The reader may wonder why we choose to use a randomized procedure. The randomization is needed to achieve 1 − α coverage exactly, which we will explain via an example. Note that the randomization is of little practical importance, since the predictive set output by the randomized procedure will differ from that of the non-randomized procedure by at most one element.
Turning to an example, assume for a particular input image we expect a set of size k to have 91% coverage, and a set of size k − 1 to have 89% coverage. In order to achieve our desired coverage of 90%, we randomly choose size k or k − 1 with equal probability. In general, the probabilities will not be equal, but rather chosen so the weighted average of the two coverages is exactly 90%. If a user of our method desires deterministic sets, it is easy to turn off this randomization with a single flag, resulting in slightly conservative sets.
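In code, the randomized rounding in the example above looks like this (a sketch; the coverage numbers are the hypothetical ones from the text, and k = 5 is an arbitrary illustration):

import numpy as np

def randomized_size(k, cov_k, cov_km1, target, rng=np.random.default_rng(0)):
    # Choose size k with probability p so that p * cov_k + (1 - p) * cov_km1 = target.
    p = (target - cov_km1) / (cov_k - cov_km1)
    return k if rng.uniform() < p else k - 1

# randomized_size(k=5, cov_k=0.91, cov_km1=0.89, target=0.90) picks each size half the time.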
C IMAGENET AND IMAGENETV2 RESULTS FOR α = 5%
We repeated Experiments 1 and 2 with α = 5%. See the results in Tables 5 and 6.
D COVERAGE AND SIZE CONDITIONAL ON IMAGE DIFFICULTY
In order to probe the adaptiveness properties of APS and RAPS we stratified coverage and size by image difficulty (the position of the true label in the list of most likely to least likely classes, based on the classifier predictions) in Table 7. With increasing λ, coverage decreases for more difficult images and increases for easier ones. In the most difficult regime, even though APS can output large sets, those sets still rarely contain the true class. This suggests regularization is a sensible way to stabilize the sets. As a final word on Table 7, notice that as λ increases, coverage improves for the more common medium-difficulty examples, although not for very rare and difficult ones.
E CHOOSING kreg AND λ TO OPTIMIZE SET SIZE AND ADAPTIVENESS
This section describes two procedures for picking kreg and λ that optimize for set size or adaptiveness, outperforming APS in both cases.
E.1 OPTIMIZING SET SIZE WITH RAPS
Algorithm 4 Adaptive Fixed-K
Input: α; I ∈ {1, ..., K}^{n×K} and one-hot y ∈ {0, 1}^K corresponding respectively to the classes from highest to lowest estimated probability mass, and labels for each of n examples in the dataset
 1: procedure GET-KSTAR(α, I, y)
 2:   for i ∈ {1, ..., n} do
 3:     L_i ← {j : I_{i,j} = y_i}
 4:   k̂* ← the ⌈(1 − α)(1 + n)⌉ largest value in {L_i}_{i=1}^{n}
 5:   return k̂*
Output: The estimate of the smallest fixed set size that achieves coverage, k̂*
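A sketch of Algorithm 4, reusing the 0-based true_pos convention from the sketch after Algorithm 3 (the quantile is taken over the smallest ranks, matching Eq. (3)):

import numpy as np

def get_kstar(true_pos, alpha):
    # Smallest fixed set size whose top-k sets cover ceil((1 - alpha)(n + 1)) calibration points.
    n = len(true_pos)
    ranks = np.sort(true_pos + 1)            # L_i: 1-based rank of the true label
    k = int(np.ceil((1 - alpha) * (n + 1)))
    return int(ranks[min(k, n) - 1])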
To produce Tables 1, 5, 2, and 6, we chose k_reg and λ adaptively. This required an extra data-splitting step, where a small amount of tuning data {(x_i, y_i)}_{i=1}^{m} was used to estimate k*, and then k_reg is set to k*. Taking m ≈ 1000 was sufficient, since the algorithm is fairly insensitive to k_reg (see Table 3). Then, k̂* was calculated with Algorithm 4. We produced the Imagenet-V2 tables with m = 1000 and the Imagenet tables with m = 10000.
After choosing k̂∗, we chose λ to have small set size. We used the same tuning data to pick k̂∗ and λ for simplicity (this does not invalidate our coverage guarantee since conformal calibration still uses fresh data). A coarse grid search on λ sufficed, since small parameter variations have little impact on RAPS. For example, we chose the λ ∈ {0.001, 0.01, 0.1, 0.2, 0.5} that achieved the smallest size on the m holdout samples in order to produce Tables 1, 5, 2, and 6. We include a subroutine that automatically chooses k̂∗ and λ to optimize size in our GitHub codebase.
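The grid search can be sketched as follows, reusing raps_calibrate from the sketch after Algorithm 3 (the half/half split of the tuning data and the deterministic size proxy are assumptions for illustration; grid values are those from the text):

import numpy as np

def pick_lambda_for_size(srt, true_pos, alpha, k_reg,
                         grid=(0.001, 0.01, 0.1, 0.2, 0.5),
                         rng=np.random.default_rng(0)):
    half = srt.shape[0] // 2
    K = srt.shape[1]
    best_lam, best_size = grid[0], np.inf
    for lam in grid:
        # Calibrate tau on one half, then measure the mean set size on the other half.
        tau = raps_calibrate(srt[:half], true_pos[:half], alpha, lam, k_reg, rng)
        penal = np.cumsum(srt[half:], axis=1) \
                + lam * np.maximum(np.arange(1, K + 1) - k_reg, 0)
        sizes = (penal <= tau).sum(axis=1) + 1
        if sizes.mean() < best_size:
            best_lam, best_size = lam, sizes.mean()
    return best_lam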
E.2 OPTIMIZING ADAPTIVENESS WITH RAPS
In this appendix, we show empirically that RAPS with an automatically chosen set of kreg and λ improves the adaptiveness of APS. Recall our discussion in Section 4 and Proposition 3, wherein we propose size-stratified coverage as a useful definition of adaptiveness in image classification. After picking kreg as in Appendix E, we can choose λ using the same tuning data to optimize this notion of adaptiveness.
We now describe a particular manifestation of our adaptiveness criterion that we will use to optimize λ. Consider disjoint set-size strata {S_j}_{j=1}^{s}, where ∪_{j=1}^{s} S_j = {1, . . . , |Y|}. Then define the indices of examples stratified by the prediction set size of each example from algorithm C as J_j = { i : |C(X_i, U_i)| ∈ S_j }. Then we can define the size-stratified coverage violation of an algorithm C on strata {S_j}_{j=1}^{s} as

SSCV(C, {S_j}_{j=1}^{s}) = sup_j | |{i : Y_i ∈ C(X_i, U_i), i ∈ J_j}| / |J_j| − (1 − α) |.   (5)
In words, Eq. (5) is the worst-case deviation of C from exact coverage when it outputs sets of a certain size. Computing the size-stratified coverage violation thus only requires post-stratifying the results of C on a set of labeled examples. If conditional coverage held, the worst stratum coverage violation would be 0 by Proposition 3.
To maximize adaptiveness, we’d like to choose λ to minimize the size-stratified coverage violation of RAPS. Write Cλ to mean the RAPS procedure for a fixed choice of kreg and λ. Then we would like to pick
λ = argmin_{λ′} SSCV(C_{λ′}, {S_j}_{j=1}^{s}).   (6)
In our experiments, we choose a relatively coarse partitioning of the possible set sizes: 0-1, 2-3, 4-10, 11-100, and 101-1000. Then, we chose the λ ∈ {0.00001, 0.0001, 0.0008, 0.001, 0.0015, 0.002} which minimized the size-stratified coverage violation on the tuning set. The results in Table 8 show RAPS always outperforms the adaptiveness of APS on the test set, even with this coarse, automated choice of parameters. The table reports the median size-stratified coverage violation over 10 independent trials of APS and RAPS with automated parameter tuning.
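A sketch computing Eq. (5) on labeled holdout results (set_sizes and covered are assumed arrays holding |C(X_i)| and 1{Y_i ∈ C(X_i)} respectively; strata from the text):

import numpy as np

def sscv(set_sizes, covered, alpha,
         strata=((0, 1), (2, 3), (4, 10), (11, 100), (101, 1000))):
    # Worst-case deviation from 1 - alpha coverage across set-size strata (Eq. 5).
    worst = 0.0
    for lo, hi in strata:
        idx = (set_sizes >= lo) & (set_sizes <= hi)
        if idx.any():
            worst = max(worst, abs(covered[idx].mean() - (1 - alpha)))
    return worst

λ in Eq. (6) is then the grid value with the smallest sscv on the tuning set.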
F COMPARISON WITH LEAST AMBIGUOUS SET-VALUED CLASSIFIERS
In this section, we compare RAPS to the Least Ambiguous Set-valued Classifier (LAC) method introduced in Sadinle et al. (2019), an alternative conformal procedure that is designed to have small sets. The LAC method provably gives the smallest possible average set size in the case where the input probabilities are correct, with the idea that these sets should be small even when the estimated probabilities are only approximately correct. In the notation of this paper, the LAC method considers nested sets of the following form:
C_LAC(x, τ) := {y : π̂_x(y) ≥ 1 − τ}, which can be calibrated as before using τ̂_ccal from Eq. (3).
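A sketch of the LAC procedure under this formulation (probs is an assumed (n, K) array of π̂ on the calibration split; the conformal score is 1 − π̂_x(y)):

import numpy as np

def lac_calibrate(probs, labels, alpha):
    # Score 1 - pi_hat(Y): y enters C_LAC(x, tau) once tau >= 1 - pi_hat(y).
    n = len(labels)
    scores = 1.0 - probs[np.arange(n), labels]
    k = int(np.ceil((1 - alpha) * (n + 1)))
    return np.sort(scores)[min(k, n) - 1]

def lac_predict(p, tau):
    return np.flatnonzero(p >= 1.0 - tau)    # the set C_LAC(x, tau)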
We first compare naive, APS, RAPS, and LAC in terms of power and coverage in Table 9. In this experiment, we tuned RAPS to have small set size as described in Appendix E.1. We see that LAC also achieves correct coverage, as expected since it is a conformal method and satisfies the guarantee from Theorem 1. We further see that it has systematically smaller sets than RAPS, although the difference is slight compared to the gap between APS and RAPS or APS and LAC.
We next compare RAPS to LAC in terms of adaptiveness, tuning RAPS as in Section E.2. First, in Table 10, we report on the coverage of LAC for images of different difficulties, and see that LAC has dramatically worse coverage for hard images than for easy ones. Comparing this to RAPS in Table 7, we see that RAPS also has worse coverage for more difficult images, although the gap is much smaller for RAPS. Next, in Table 11, we report on the SSCV metric of adaptiveness (and conditional coverage) for APS, RAPS, and LAC. We find that APS and RAPS have much better adaptiveness than LAC, with RAPS being the overall winner. The results of all of these comparisons are expected: LAC is not targeting adaptiveness and is instead trying to achieve the smallest possible set size. It succeeds at its goal, sacrificing adaptiveness to do so.
| 1. What is the focus of the paper regarding prediction sets in classification tasks?
2. What are the strengths of the proposed method, particularly in terms of adaptiveness and empirical performance?
3. What are the weaknesses of the paper, especially regarding the choice of regularization and sensitivity to tuning parameters?
4. How does the proposed method compare to other set-valued classifiers, specifically those that directly minimize the cardinality of prediction sets or intervals?
5. Can the authors provide theoretical proof of the optimality of their approach, or comment on its relation to high-quality prediction interval methods?
6. How does the uncertainty set approach relate to classification with rejection/abstain methods? | Review | Review
##########################################################################
Summary:
Prediction sets are used to quantify the uncertainty of classification. The naive approach, which includes labels until a pre-specified coverage probability is satisfied, often leads to large prediction sets. Adaptive Prediction Sets (APS) can output prediction sets with the desired coverage, but the set sizes are still not satisfactorily small and the results are unstable, especially when many probability estimates fall into the tail of the distribution.
In order to make the prediction stable and the sets as small as possible under a pre-specified coverage probability, this paper extends APS to Regularized Adaptive Prediction Sets (RAPS) by penalizing classes with small probabilities beyond the k classes already included, which leads to small prediction sets. The regularization is an interesting idea for minimizing prediction sets, and differs from previous works, most of which directly minimize a quantity related to the cardinality of the prediction sets or intervals. Empirically, compared with other set-valued classifiers extracting information from the same base CNN model, the proposed method performs significantly better in terms of set size at a fixed pre-specified coverage. Moreover, this work shows adaptiveness: it allows large prediction sets for difficult instances and small prediction sets for easy instances.
##########################################################################
Reasons for score:
Overall, I vote for accepting. I think the method is well motivated and the solution is simple and portable (it can be applied to many base methods). However, there could be more discussion of several aspects of the problem.
##########################################################################
Pros:
Studies an important problem.
The proposed method is easy to implement and can be applied to general scores or be used to improve base conformal prediction methods.
Very impressive empirical performance.
##########################################################################
Cons:
Theoretically, the "optimal" set-valued classifier is based on P(Y=k | X=x). In this sense, the naive approach can be viewed as a plug-in approach where the score is an estimate of P(Y=k | X=x). When regularization is applied, something must be lost. This is much like the lasso in high-dimensional regression, where a penalty function makes the coefficient estimates biased (trading bias for sparsity). It is unclear what is lost here with regularization. Is the solution no longer "Fisher consistent" in some sense?
More to the point: it seems that the proposed method is cut out for problems with MANY classes. I wonder whether it will perform just as well for traditional problems with only a few classes (as in the medical field).
Choosing good values for k_reg and lambda seems critical. How sensitive is the result to k_reg? Is there any general theory or guideline for tuning the parameter lambda? In the experiments, the validation (calibration) data sets have huge sample sizes, which may be common in image data domains but can be unrealistic for broader application domains. I wonder whether the good performance relies largely on the large validation (calibration) sample size.
##########################################################################
Questions during rebuttal period:
The goal of narrowing the prediction set size is achieved with the help of regularization, which does not directly try to minimize the cardinality of the prediction set. Can it be theoretically proven to be asymptotically optimal? Is there any comparison to such direct approaches? There is a literature on high-quality prediction intervals which directly minimizes the prediction size.
Tim Pearce, Mohamed Zaki, Alexandra Brintrup, and Andy Neely. High-quality prediction intervals for deep learning: A distribution-free, ensembled approach. arXiv preprint arXiv:1802.07167, 2018.
Any comments on the relation of the uncertainty set approach with the classification with rejection/abstain methods?
Zhang, C., Wang, W. and Qiao, X. (2018), “On Reject and Refine Options in Multicategory Classification,” Journal of the American Statistical Association, 113 (522), pp. 730–745.
Ramaswamy HG, Tewari A, Agarwal S. Consistent algorithms for multiclass classification with an abstain option. Electronic Journal of Statistics. 2018;12(1):530-54.
#########################################################################
Small comments:
Most of the figures and tables are far from their descriptions, which makes the paper hard to read.
ICLR | Title
Uncertainty Sets for Image Classifiers using Conformal Prediction
Abstract
Convolutional image classifiers can achieve high predictive accuracy, but quantifying their uncertainty remains an unresolved challenge, hindering their deployment in consequential settings. Existing uncertainty quantification techniques, such as Platt scaling, attempt to calibrate the network’s probability estimates, but they do not have formal guarantees. We present an algorithm that modifies any classifier to output a predictive set containing the true label with a user-specified probability, such as 90%. The algorithm is simple and fast like Platt scaling, but provides a formal finite-sample coverage guarantee for every model and dataset. Our method modifies an existing conformal prediction algorithm to give more stable predictive sets by regularizing the small scores of unlikely classes after Platt scaling. In experiments on both Imagenet and Imagenet-V2 with ResNet-152 and other classifiers, our scheme outperforms existing approaches, achieving coverage with sets that are often factors of 5 to 10 smaller than a stand-alone Platt scaling baseline.
1 INTRODUCTION
Imagine you are a doctor making a high-stakes medical decision based on diagnostic information from a computer vision classifier. What would you want the classifier to output in order to make the best decision? This is not a casual hypothetical; such classifiers are already used in medical settings (e.g., Razzak et al., 2018; Lundervold & Lundervold, 2019; Li et al., 2014). A maximumlikelihood diagnosis with an accompanying probability may not be the most essential piece of information. To ensure the health of the patient, you must also rule in or rule out harmful diagnoses. In other words, even if the most likely diagnosis is a stomach ache, it is equally or more important to rule out stomach cancer. Therefore, you would want the classifier to give you—in addition to an estimate of the most likely outcome—actionable uncertainty quantification, such as a set of predictions that provably covers the true diagnosis with a high probability (e.g., 90%). This is called a prediction set (see Figure 1). Our paper describes a method for constructing prediction sets from any pre-trained image classifier that are formally guaranteed to contain the true class with the desired probability, relatively small, and practical to implement. Our method modifies a conformal predictor (Vovk et al., 2005) given in Romano et al. (2020) for the purpose of modern image classification in order to make it more stable in the presence of noisy small probability estimates. Just as importantly, we provide extensive evaluations and code for conformal prediction in computer vision.
Formally, for a discrete response Y ∈ Y = {1, . . . ,K} and a feature vector X ∈ Rd, we desire an uncertainty set function, C(X), mapping a feature vector to a subset of {1, . . . ,K} such that
P (Y ∈ C(X)) ≥ 1− α, (1)
for a pre-specified confidence level α such as 10%. Conformal predictors like our method can modify any black-box classifier to output predictive sets that are rigorously guaranteed to satisfy the desired coverage property shown in Eq. (1). For evaluations, we focus on Imagenet classification
∗Equal contribution. Blog: https://people.eecs.berkeley.edu/˜angelopoulos/blog/ posts/conformal-classification
using convolutional neural networks (CNNs) as the base classifiers, since this is a particularly challenging testbed. In this setting, X would be the image and Y would be the class label. Note that the guarantee in Eq. (1) is marginal over X and Y—it holds on average, not for a particular image X .
A first approach toward this goal might be to assemble the set by including classes from highest to lowest probability (e.g., after Platt scaling and a softmax function; see Platt et al., 1999; Guo et al., 2017) until their sum just exceeds the threshold 1 − α. We call this strategy naive and formulate it precisely in Algorithm 1. There are two problems with naive: first, the probabilities output by CNNs are known to be incorrect (Nixon et al., 2019), so the sets from naive do not achieve coverage. Second, image classification models’ tail probabilities are often badly miscalibrated, leading to large sets that do not faithfully articulate the uncertainty of the model; see Section 2.3. Moreover, smaller sets that achieve the same coverage level can be generated with other methods.
The coverage problem can be solved by picking a new threshold using holdout samples. For example, with α =10%, if choosing sets that contain 93% estimated probability achieves 90% coverage on the holdout set, we use the 93% cutoff instead. We refer to this algorithm, introduced in Romano et al. (2020), as Adaptive Prediction Sets (APS). The APS procedure provides coverage but still produces large sets. To fix this, we introduce a regularization technique that tempers the influence of these noisy estimates, leading to smaller, more stable sets. We describe our proposed algorithm, Regularized Adaptive Prediction Sets (RAPS), in Algorithms 2 and 3 (with APS as a special case). As we will see in Section 2, both APS and RAPS are always guaranteed to satisfy Eq. (1)—regardless of model and dataset. Furthermore, we show that RAPS is guaranteed to have better performance than choosing a fixed-size set. Both methods impose negligible computational requirements in both training and evaluation, and output useful estimates of the model’s uncertainty on a new image given, say, 1000 held-out examples.
In Section 3 we conduct the most extensive evaluation of conformal prediction in deep learning to date on Imagenet and Imagenet-V2. We find that RAPS sets always have smaller average size than naive and APSsets. For example, using a ResNeXt-101, naive does not achieve coverage, while APS and RAPS achieve it almost exactly. However, APS sets have an average size of 19, while RAPS sets have an average size of 2 at α = 10% (Figure 2 and Table 1). We will provide an accompanying codebase that implements our method as a wrapper for any PyTorch classifier, along with code to exactly reproduce all of our experiments.
1.1 RELATED WORK
Reliably estimating predictive uncertainty for neural networks is an unsolved problem. Historically, the standard approach has been to train a Bayesian neural network to learn a distribution over network weights (Quinonero-Candela et al., 2005; MacKay, 1992; Neal, 2012; Kuleshov et al., 2018; Gal, 2016). This approach requires computational and algorithmic modifications; other approaches avoid these via ensembles (Lakshminarayanan et al., 2017; Jiang et al., 2018) or approximations of Bayesian inference (Riquelme et al., 2018; Sensoy et al., 2018). These methods also have major practical limitations; for example, ensembling requires training many copies of a neural network adversarially. Therefore, the most widely used strategy is ad-hoc traditional calibration of the softmax scores with Platt scaling (Platt et al., 1999; Guo et al., 2017; Nixon et al., 2019).
This work develops a method for uncertainty quantification based on conformal prediction. Originating in the online learning literature, conformal prediction is an approach for generating predictive sets that satisfy the coverage property in Eq. (1) (Vovk et al., 1999; 2005). We use a convenient data-splitting version known as split conformal prediction that enables conformal prediction meth-
ods to be deployed for essentially any predictor (Papadopoulos et al., 2002; Lei et al., 2018). While mechanically very different from traditional calibration as discussed above, we will refer to our approach as conformal calibration to highlight that the two methodologies have overlapping but different goals.
Conformal prediction is a general framework, not a specific algorithm—important design decisions must be made to achieve the best performance for each context. To this end, Romano et al. (2020) and Cauchois et al. (2020) introduce techniques aimed at achieving coverage that is similar across regions of feature space, whereas Vovk et al. (2003); Hechtlinger et al. (2018) and Guan & Tibshirani (2019) introduce techniques aimed at achieving equal coverage for each class. While these methods have conceptual appeal, thus far there has been limited empirical evaluation of this general approach for state-of-the-art CNNs. Concretely, the only works that we are aware of that include some evaluation of conformal methods on ImageNet—the gold standard for benchmarking computer vision methods—are Hechtlinger et al. (2018), Park et al. (2019), Cauchois et al. (2020), and Messoudi et al. (2020), although in all four cases further experiments are needed to more fully evaluate their operating characteristics for practical deployment. At the heart of conformal prediction is the conformal score - a measure of similarity between labeled examples which is used to compare a new point to among those in a hold out set. Our theoretical contribution can be summarized as a modification of the conformal score from Romano et al. (2020) to have smaller, more stable sets. Lastly, there are alternative approaches to returning prediction sets not based on conformal prediction (Pearce et al., 2018; Zhang et al., 2018). These methods can be used as input to a conformal procedure to potentially improve performance, but they do not have finite-sample coverage guarantees when used alone.
2 METHODS
In developing uncertainty set methods to improve upon naive, we are guided by three desiderata. First and most importantly, the coverage desideratum says the sets must provide 1−α coverage, as discussed above. Secondly, the size desideratum says we want sets of small size, since these convey more detailed information and may be more useful in practice. Lastly, the adaptiveness desideratum says we want the sets to communicate instance-wise uncertainty: they should be smaller for easy test-time examples than for hard ones; see Figure 1 for an illustration. Coverage and size are obviously competing objectives, but size and adaptiveness are also often in tension. The size desideratum seeks small sets, while the adaptiveness desideratum seeks larger sets when the classi-
Algorithm 1 Naive Prediction Sets Input: α, sorted scores s, associated permutation of classes I , boolean rand
1: procedure NAIVE(α, s, I, rand) 2: L← 1 3: while ∑L i=1 si < 1− α do . Stop if 1− α probability exceeded 4: L← L+ 1 5: if rand then . Break ties randomly (explained in Appendix B) 6: U ← Unif(0, 1) 7: V ← ( ∑L i=1 si − (1− α))/sL 8: if U ≤ V then 9: L← L− 1
10: return { I1, ..., IL } Output: The 1− α prediction set, { I1, ..., IL }
fier is uncertain. For example, always predicting a set of size five could achieve coverage, but it is not adaptive. As noted above, both APSand RAPS achieve correct coverage, and we will show that RAPS improves upon APS according to the other two desiderata.
We now turn to the specifics of our proposed method. We begin in Subsection 2.1 by describing an abstract data-splitting procedure called conformal calibration that enables the near-automatic construction of valid predictive sets (that is, sets satisfying Eq. (1)). Subsequently, in Subsection 2.2, we provide a detailed presentation of our procedure, with commentary in Section 2.3. In Subsection 2.4 we discuss the optimality of our procedure, proving that it is at least as good as the procedure that returns sets of a fixed size, unlike alternative approaches.
2.1 CONFORMAL CALIBRATION
We first review a general technique for producing valid prediction sets, following the articulation in Gupta et al. (2019). Consider a procedure that outputs a predictive set for each observation, and further suppose that this procedure has a tuning parameter τ that controls the size of the sets. (In RAPS, τ is the cumulative sum of the sorted, penalized classifier scores.) We take a small independent conformal calibration set of data, and then choose the tuning parameter τ such that the predictive sets are large enough to achieve 1 − α coverage on this set. See Figure 3 for an illustration. This calibration step yields a choice of τ , and the resulting set is formally guaranteed to have coverage 1− α on a future test point from the same distribution; see Theorem 1 below. Formally, let (Xi, Yi)i=1,...,n be an independent and identically distributed (i.i.d.) set of variables that was not used for model training. Further, let C(x, u, τ) : Rd × [0, 1] × R → 2Y be a setvalued function that takes a feature vector x to a subset of the possible labels. The second argument u is included to allow for randomized procedures; let U1, . . . , Un be i.i.d. uniform [0, 1] random variables that will serve as the second argument for each data point. Suppose that the sets are indexed by τ such that they are nested, meaning larger values of τ lead to larger sets:
C(x, u, τ_1) ⊆ C(x, u, τ_2) if τ_1 ≤ τ_2. (2)

To find a function that will achieve 1 − α coverage on test data, we select the smallest τ that gives at least 1 − α coverage on the conformal calibration set, with a slight correction to account for the finite sample size:
τ̂_ccal = inf{ τ : |{i : Y_i ∈ C(X_i, U_i, τ)}| / n ≥ ⌈(n + 1)(1 − α)⌉ / n }. (3)
The set function C(x, u, τ) with this data-driven choice of τ is guaranteed to have correct finite-sample coverage on a fresh test observation, as stated formally next.

Theorem 1 (Conformal calibration coverage guarantee). Suppose (X_i, Y_i, U_i), i = 1, ..., n, and (X_{n+1}, Y_{n+1}, U_{n+1}) are i.i.d. and let C(x, u, τ) be a set-valued function satisfying the nesting property in Eq. (2). Suppose further that the sets C(x, u, τ) grow to include all labels for large enough τ: for all x ∈ R^d, C(x, u, τ) = Y for some τ. Then for τ̂_ccal defined as in Eq. (3), we have the following coverage guarantee:
P(Y_{n+1} ∈ C(X_{n+1}, U_{n+1}, τ̂_ccal)) ≥ 1 − α.
This is the same coverage property as Eq. (1) in the introduction, written in a more explicit manner. The result is not new—a special case of this result leveraging sample-splitting first appears in the regression setting in Papadopoulos et al. (2002), and the core idea of conformal prediction was introduced even earlier; see (Vovk et al., 2005).
As a technical remark, the theorem also holds if the observations only satisfy the weaker condition of exchangeability; see Vovk et al. (2005). In addition, for most families of set-valued functions C(x, u, τ) there is a matching upper bound:
P(Y_{n+1} ∈ C(X_{n+1}, U_{n+1}, τ̂_ccal)) ≤ 1 − α + 1 / (n + 1).
Roughly speaking, this will hold whenever the sets grow smoothly in τ . See Lei et al. (2018) for a formal statement of the required conditions.
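For reference, conformal calibration itself reduces to a quantile computation: by the nesting property, the infimum in Eq. (3) equals the ⌈(n + 1)(1 − α)⌉-th smallest conformal score, which is exactly how Theorem 1 is proved in Appendix A. A minimal sketch, assuming `scores` holds the calibration-set values of s(X_i, U_i, Y_i):

```python
# A minimal sketch of conformal calibration (Eq. (3)); `scores` holds
# s(X_i, U_i, Y_i) = inf{tau : Y_i in C(X_i, U_i, tau)} for the n
# calibration points, so the infimum in Eq. (3) is a sample quantile.
import math
import numpy as np

def conformal_calibrate(scores, alpha):
    n = len(scores)
    k = math.ceil((n + 1) * (1 - alpha))   # finite-sample correction
    return np.sort(scores)[k - 1]          # k-th smallest score
```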
2.2 OUR METHOD
Conformal calibration is a powerful general idea, allowing one to achieve the coverage desideratum for any choice of sets C(x, u, τ). Nonetheless, this is not yet a full solution, since the quality of the resulting prediction sets can vary dramatically depending on the design of C(x, u, τ). In particular, we recall the size and adaptiveness desiderata from Section 1—we want our uncertainty sets to be as small as possible while faithfully articulating the instance-wise uncertainty of each test point. In this section, we explicitly give our algorithm, which can be viewed as a special case of conformal calibration with the uncertainty sets C designed to extract information from CNNs. Our algorithm has three main ingredients. First, for a feature vector x, the base model computes class probabilities π̂_x ∈ R^K, and we order the classes from most probable to least probable. Then, we add a regularization term to promote small predictive sets. Finally, we conformally calibrate the penalized prediction sets to guarantee coverage on future test points.
Formally, let ρ_x(y) = Σ_{y′=1}^{K} π̂_x(y′) · 1{π̂_x(y′) > π̂_x(y)} be the total probability mass of the set of labels that are more likely than y. These are all the labels that will be included before y is included. In addition, let o_x(y) = |{y′ ∈ Y : π̂_x(y′) ≥ π̂_x(y)}| be the rank of y among the labels based on the probabilities π̂. For example, if y is the third most likely label, then o_x(y) = 3.¹ We take
C*(x, u, τ) := { y : ρ_x(y) + π̂_x(y) · u + λ · (o_x(y) − k_reg)^+ ≤ τ }, (4)

(the λ term above is the regularization)
where (z)+ denotes the positive part of z and λ, kreg ≥ 0 are regularization hyperparameters that are introduced to encourage small set sizes. See Figure 3 for a visualization of a RAPS predictive set and Appendix E for a discussion of how to select kreg and λ.
Since this is the heart of our proposal, we carefully parse each term. First, the ρx(y) term increases as y ranges from the most probable to least probable label, so our sets will prefer to include the y that are predicted to be the most probable. The second term, π̂x(y) · u, is a randomized term to handle the fact that the value will jump discretely with the inclusion of each new y. The randomization term can never impact more than one value of y: there is at most one value of y such that y ∈ C(x, 0, τ) but y /∈ C(x, 1, τ). These first two terms can be viewed as the CDF transform after arranging the classes from most likely to least likely, randomized in the usual way to result in a continuous uniform random variable (cf. Romano et al., 2020). We discuss randomization further in Appendix B.
Lastly, the regularization promotes small set sizes: for values of y that occur farther down the ordered list of classes, the term λ · (o_x(y) − k_reg)^+ makes that value of y require a higher value of τ before it is included in the predictive set. For example, if k_reg = 5, then the sixth most likely value of y has an extra penalty of size λ, so it will never be included until τ exceeds ρ_x(y) + π̂_x(y) · u + λ, whereas it enters when τ exceeds ρ_x(y) + π̂_x(y) · u in the non-regularized version. Our method has the following coverage property:

Proposition 1 (RAPS coverage guarantee). Suppose (X_i, Y_i, U_i), i = 1, ..., n, and (X_{n+1}, Y_{n+1}, U_{n+1}) are i.i.d. and let C*(x, u, τ) be defined as in Eq. (4). Suppose further that π̂_x(y) > 0 for all x and y. Then for τ̂_ccal defined as in Eq. (3), we have the following coverage guarantee:
1 − α ≤ P(Y_{n+1} ∈ C*(X_{n+1}, U_{n+1}, τ̂_ccal)) ≤ 1 − α + 1 / (n + 1).
¹ For ease of notation, we assume distinct probabilities; otherwise, label-ordering ties should be broken randomly.
Algorithm 2 RAPS Conformal Calibration
Input: α; s ∈ [0, 1]^{n×K}, I ∈ {1, ..., K}^{n×K}, and one-hot labels y ∈ {0, 1}^{n×K}, corresponding respectively to the sorted scores, the associated permutation of indexes, and the labels for each of the n examples in the calibration set; k_reg; λ; boolean rand
1: procedure RAPSC(α, s, I, y, λ)
2:   for i ∈ {1, ..., n} do
3:     L_i ← the rank j such that class I_{i,j} is the true label y_i
4:     E_i ← Σ_{j=1}^{L_i} s_{i,j} + λ(L_i − k_reg + 1)^+
5:     if rand then
6:       U ∼ Unif(0, 1)
7:       E_i ← E_i − s_{i,L_i} + U · s_{i,L_i}
8:   τ̂_ccal ← the ⌈(1 − α)(1 + n)⌉-th largest value in {E_i}_{i=1}^{n}
9:   return τ̂_ccal
Output: The generalized quantile, τ̂_ccal   ▷ The value in Eq. (3)
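A vectorized numpy sketch of Algorithm 2 follows; it assumes `probs` is an (n, K) array of temperature-scaled softmax outputs on the calibration set and `labels` an (n,) integer array of true classes, with variable names that are ours rather than the paper's reference implementation.

```python
# A vectorized numpy sketch of Algorithm 2, assuming `probs` is an (n, K)
# array of (temperature-scaled) softmax outputs on the calibration set and
# `labels` an (n,) integer array of true classes.
import math
import numpy as np

def raps_calibrate(probs, labels, alpha, k_reg, lam, rand=True,
                   rng=np.random.default_rng()):
    n, _ = probs.shape
    order = np.argsort(-probs, axis=1)              # classes, most likely first
    s = np.take_along_axis(probs, order, axis=1)    # sorted scores
    L = np.argmax(order == labels[:, None], axis=1) + 1   # rank of true label
    E = np.cumsum(s, axis=1)[np.arange(n), L - 1] \
        + lam * np.maximum(L - k_reg + 1, 0)
    if rand:
        U = rng.uniform(size=n)
        E = E - s[np.arange(n), L - 1] + U * s[np.arange(n), L - 1]
    k = math.ceil((1 - alpha) * (n + 1))
    return np.sort(E)[::-1][k - 1]                  # k-th largest E_i
```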
Algorithm 3 RAPS Prediction Sets
Input: α, sorted scores s and the associated permutation of classes I for a test-time example, τ̂_ccal from Algorithm 2, k_reg, λ, boolean rand
1: procedure RAPS(α, s, I, τ̂_ccal, k_reg, λ, rand)
2:   L ← |{ j ∈ Y : Σ_{i=1}^{j} s_i + λ(j − k_reg)^+ ≤ τ̂_ccal }| + 1
3:   V ← (τ̂_ccal − Σ_{i=1}^{L} s_i − λ(L − k_reg)^+ + s_L) / s_L
4:   if rand and V ≤ U ∼ Unif(0, 1) then
5:     L ← L − 1
6:   return C = {I_1, ..., I_L}   ▷ The L most likely classes
Output: The 1 − α confidence set, C   ▷ The set in Eq. (4)
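And a matching sketch of Algorithm 3 for a single test example, under the same conventions (and the same hedges) as the calibration sketch above:

```python
# A sketch of Algorithm 3 for one test example, matching raps_calibrate.
import numpy as np

def raps_predict(probs, tau, k_reg, lam, rand=True,
                 rng=np.random.default_rng()):
    order = np.argsort(-probs)
    s = probs[order]
    ranks = np.arange(1, len(s) + 1)
    scores = np.cumsum(s) + lam * np.maximum(ranks - k_reg, 0)
    L = min(int(np.sum(scores <= tau)) + 1, len(s))
    if rand:
        # randomly drop the L-th class so coverage is exact, not conservative
        V = (tau - scores[L - 1] + s[L - 1]) / s[L - 1]
        if V <= rng.uniform():
            L -= 1
    return order[:L]
```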
Note that the first inequality is a corollary of Theorem 1, and the second inequality is a special case of the remark in Section 2.1. The restriction that π̂x(y) > 0 is not necessary for the first inequality.
2.3 WHY REGULARIZE?
In our experiments, the sets from APS are larger than necessary because APS is sensitive to the noisy probability estimates far down the list of classes. This noise leads to a permutation problem among the unlikely classes, where the ordering of the classes with small probability estimates is determined mostly by random chance. If 5% of the true classes from the calibration set land deep in the tail due to the permutation problem, APS will choose large 95% predictive sets; see Figure 2. The inclusion of the RAPS regularization causes the algorithm to avoid using the unreliable probabilities in the tail; see Figure 4. We discuss how RAPS improves the adaptiveness of APS in Section 4 and Appendix E.
2.4 OPTIMALITY CONSIDERATIONS
To complement these experimental results, we now formally prove that RAPS with the correct regularization parameters will always dominate the simple procedure that returns a fixed set size. (Section 3.5 shows that the parameters are easy to select and that RAPS is not sensitive to their values.) For a feature vector x, let ŷ_(j)(x) be the label with the j-th highest predicted probability. We define the top-k predictive sets to be {ŷ_(1)(x), ..., ŷ_(k)(x)}.

Proposition 2 (RAPS dominates top-k sets). Suppose (X_i, Y_i, U_i), i = 1, ..., n, and (X_{n+1}, Y_{n+1}, U_{n+1}) are i.i.d. draws. Let k* be the smallest k such that the top-k predictive sets have coverage at least ⌈(n + 1)(1 − α)⌉/n on the conformal calibration points (X_i, Y_i), i = 1, ..., n. Take C*(x, u, τ) as in Eq. (4) with any k_reg ≤ k* and λ = 1. Then with τ̂_ccal chosen as in Eq. (3), we have
C*(X_{n+1}, U_{n+1}, τ̂_ccal) ⊆ {ŷ_(1)(X_{n+1}), ..., ŷ_(k*)(X_{n+1})}.
In words, the RAPS procedure with heavy regularization will be at least as good as the top-k procedure in the sense that it has smaller or same average set size while maintaining the desired coverage level. This is not true of either the naive baseline or the APS procedure; Table 2 shows that these two procedures usually return predictive sets with size much larger than k∗.
3 EXPERIMENTS
In this section we report on experiments that study the performance of the predictive sets from naive, APS, and RAPS, evaluating each based on the three desiderata above. We begin with a brief preview of the experiments. In Experiment 1, we evaluate naive, APS, and RAPS on Imagenet-Val. Both APS and RAPS provided almost exact coverage, while naive sets had coverage slightly below the specified level; RAPS had a much smaller average set size than either APS or naive. In Experiment 2, we repeat Experiment 1 on Imagenet-V2, and the conclusions still hold. In Experiment 3, we produce histograms of set sizes for naive, APS, and RAPS for several different values of λ, illustrating a simple tradeoff between set size and adaptiveness. In Experiment 4, we compute histograms of RAPS sets stratified by image difficulty, showing that RAPS sets are smaller for easier images than for difficult ones. In Experiment 5, we report the performance of RAPS with many values of the tuning parameters.
In our experiments, we use nine standard, pretrained Imagenet classifiers from the torchvision repository (Paszke et al., 2019) with standard normalization, resize, and crop parameters. Before applying naive, APS, or RAPS, we calibrated the classifiers using the standard temperature scaling/Platt scaling procedure as in Guo et al. (2017) on the calibration set. Thereafter, naive, APS, and RAPS were applied, with RAPS using a data-driven choice of parameters described in Appendix E. We use the randomized versions of these algorithms—see Appendix B for a discussion.
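For completeness, a minimal PyTorch sketch of the temperature-scaling step (Guo et al., 2017) is shown below; `logits` and `labels` are assumed to come from the held-out calibration split, and the optimizer settings are illustrative rather than the exact ones used in our pipeline.

```python
# A minimal sketch of temperature scaling on held-out calibration data:
# fit a scalar T by minimizing the negative log-likelihood, then use
# softmax(logits / T) as the calibrated probabilities.
import torch

def fit_temperature(logits, labels, max_iter=50):
    T = torch.nn.Parameter(torch.ones(1) * 1.3)
    opt = torch.optim.LBFGS([T], lr=0.01, max_iter=max_iter)
    nll = torch.nn.CrossEntropyLoss()

    def closure():
        opt.zero_grad()
        loss = nll(logits / T, labels)
        loss.backward()
        return loss

    opt.step(closure)
    return T.item()
```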
3.1 EXPERIMENT 1: COVERAGE VS SET SIZE ON IMAGENET
In this experiment, we calculated the coverage and mean set size of each procedure for two different choices of α. Over 100 trials, we randomly sampled two subsets of Imagenet-Val: one conformal calibration subset of size 20K and one evaluation subset of size 20K. The median-of-means over trials for both coverage and set size are reported in Table 1. Figure 2 illustrates the performances of naive, APS, and RAPS; RAPS has much smaller sets than both naive and APS, while achieving coverage. We also report results from a conformalized fixed-k procedure, which finds the smallest fixed set size achieving coverage on the holdout set, k∗, then predicts sets of size k∗ − 1 or k∗ on new examples in order to achieve exact coverage; see Algorithm 4 in Appendix E.
3.2 EXPERIMENT 2: COVERAGE VS SET SIZE ON IMAGENET-V2
The same procedure as Experiment 1 was repeated on Imagenet-V2, with exactly the same normalization, resize, and crop parameters. The size of the calibration and evaluation sets was 5K, since Imagenet-V2 is a smaller dataset. The result shows that our method can still provide coverage even for models trained on different distributions, as long as the conformal calibration set comes from the new distribution. The variance of the coverage is higher due to having less data.
3.3 EXPERIMENT 3: SET SIZES OF NAIVE, APS, AND RAPS ON IMAGENET
We investigate the effect of regularization in more detail. For three values of λ, we collected the set sizes produced by each of naive, APS, and RAPS and report their histograms in Figure 4.
3.4 EXPERIMENT 4: ADAPTIVENESS OF RAPS ON IMAGENET
We now show that RAPS sets are smaller for easy images than hard ones, addressing the adaptiveness desideratum. Table 4 reports the size-stratified coverages of RAPS at the 90% level with k_reg = 5 and different choices of λ. When λ is small, RAPS allows sets to be large, but when λ = 1, RAPS clips sets to a maximum size of 5. Table 7 (in the Appendix) stratifies by image difficulty, showing that RAPS sets are small for easy examples and large for hard ones. Experiments 3 and 4 together illustrate the tradeoff between adaptiveness and size: as the average set size decreases, the RAPS procedure truncates sets larger than the smallest fixed set that provides coverage, taming the heavy tail of the APS procedure. Since RAPS with large λ undercovers hard examples, it must compensate by taking larger sets for easy examples to ensure the 1 − α marginal coverage guarantee. However, the size only increases slightly, since easy images are more common than hard ones and the total probability mass can often exceed τ̂_ccal by including only one more class. If this behavior is not desired, we can instead automatically pick λ to optimize the adaptiveness of RAPS; see Section 4.
3.5 EXPERIMENT 5: CHOICE OF TUNING PARAMETERS
While any value of the tuning parameters λ and k_reg leads to coverage (Proposition 1), some values will lead to smaller sets than others. In Experiments 1 and 2, we chose k_reg and λ adaptively from data (see Appendix E), achieving strong results for all models and choices of the coverage level. Table 3 gives the performance of RAPS with many choices of k_reg and λ for ResNet-152.
4 ADAPTIVENESS AND CONDITIONAL COVERAGE
In this section, we point to a definition of adaptiveness that is more natural for the image classification setting than the existing notion of conditional coverage. We show that APS does not satisfy conditional coverage, and that RAPS with small λ outperforms it in terms of adaptiveness.
We say that a set-valued predictor C : Rd → 2Y satisfies exact conditional coverage if P (Y ∈ C(X) | X = x) = 1 − α for each x. Distribution-free guarantees on conditional coverage are impossible (Vovk, 2012; Lei & Wasserman, 2014), but many algorithms try to satisfy it approximately (Romano et al., 2019; 2020; Cauchois et al., 2020). In a similar spirit, Tibshirani et al. (2019) suggest a notion of local conditional coverage, where one asks for coverage in a neighborhood of each point, weighted according to a chosen kernel. Cauchois et al. (2020) introduce the worst-case slab metric for measuring violations of the conditional coverage property. We present a different way of measuring violations of conditional coverage.
Proposition 3. Suppose P (Y ∈ C(X) | X = x) = 1− α for each x ∈ Rd. Then, P (Y ∈ C(X) | {|C(X)| ∈ A}) = 1− α for any A ⊂ {0, 1, 2, . . . }.
In words, if conditional coverage holds, then coverage holds after stratifying by set size. Based on this result, in Appendix E we introduce the size-stratified coverage violation criterion, a simple and pragmatic way of quantifying adaptiveness. We then automatically tune λ on this metric so that RAPS markedly outperforms the adaptiveness of APS (see Table 8).
[Extraction residue of a results table (likely Table 3): rows correspond to k_reg ∈ {1, 2, 5, 10, 50} and entries are average set sizes across a grid of λ values; column headers were lost in extraction.]
In Table 4, we report the coverage of APS and RAPS, stratified by the size of the prediction set. Turning our attention to the λ = 0 column, we see that when APS outputs a set of size 101–1000, APS has coverage 97%, substantially higher than the 90% nominal rate. By Proposition 3, we conclude that APS is not achieving exact conditional coverage, because the scores are far from the oracle probabilities. The APS procedure still achieves marginal coverage by overcovering hard examples and undercovering easy ones, an undesirable behavior. Alternatively, RAPS can be used to regularize the set sizes—for λ = 0.001 to λ = 0.01 the coverage stratified by set size is more balanced. In summary, even purely based on the adaptiveness desideratum, RAPS with light regularization is preferable to APS. Note that as the size of the training data increases, as long as π̂ is consistent, naive and APS will become more stable, and so we expect less regularization will be needed.
Lastly, we argue that conditional coverage is a poor notion of adaptiveness when the best possible model (i.e., one fit on infinite data) has high accuracy. Given such a model, the oracle procedure from Romano et al. (2020) would return the correct label with probability 1 − α and the empty set with probability α. That is, having correct conditional coverage for high-signal problems where Y is perfectly determined by X requires a perfect classifier. In our experiments on ImageNet, APS does not approximate this behavior. Therefore, conditional coverage is not the right goal for prediction sets with realistic sample sizes. Proposition 3 suggests a relaxation. We could require that we have the right coverage, no matter the size of the prediction set: P(Y ∈ C(X) | |C(X)| ∈ A) ≥ 1 − α for any A ⊂ {0, 1, 2, ...}; Appendix E.2 develops this idea. We view this as a promising way to reason about adaptiveness in high-signal problems such as image classification.
5 DISCUSSION
For classification tasks with many possible labels, our method enables a researcher to take any base classifier and return predictive sets guaranteed to achieve a pre-specified coverage level, such as 90%, while retaining small average size. It is simple to deploy, so it is an attractive, automatic way to quantify the uncertainty of image classifiers—an essential task in such settings as medical diagnostics, self-driving vehicles, and flagging dangerous internet content. Predictive sets in computer vision (from RAPS and other conformal methods) have many further uses, since they systematically identify hard test-time examples. Finding such examples is useful in active learning, where one only has resources to label a small number of points. In a different direction, one can improve the efficiency of a classifier by first running a cheap classifier that outputs a prediction set, and invoking an expensive one only when the cheap classifier outputs a large set (a cascade; see, e.g., Li et al., 2015); see Fisch et al. (2021) for an implementation of conformal prediction in this setting. One can also use predictive sets during model development to identify failure cases and outliers and suggest strategies for improving performance. Prediction sets are most useful for problems with many classes; returning to our initial medical motivation, we envision RAPS could be used by a doctor to automatically screen for a large number of diseases (e.g., via a blood sample) and refer the patient to relevant specialists.
A PROOFS
Theorem 1. Let s(x, u, y) = inf{τ : y ∈ C(x, u, τ)}, and let s_i = s(X_i, U_i, Y_i) for i = 1, ..., n. Then
{y : s(x, u, y) ≤ τ} = {y : y ∈ C(x, u, τ)}
because C(x, u, τ) is a finite set growing in τ by the assumption in Eq. (2). Thus,
{ τ : |{i : s_i ≤ τ}| ≥ ⌈(1 − α)(n + 1)⌉ } = { τ : |{i : Y_i ∈ C(X_i, U_i, τ)}| / n ≥ ⌈(n + 1)(1 − α)⌉ / n }.
Considering the expression on the left, the infimum over τ of that set is the ⌈(1 − α)(n + 1)⌉-th smallest value of the s_i, so this is the value of τ̂_ccal. Since s_1, ..., s_n, s(X_{n+1}, U_{n+1}, Y_{n+1}) are exchangeable random variables, |{i : s(X_{n+1}, U_{n+1}, Y_{n+1}) > s_i}| is stochastically dominated by the discrete uniform distribution on {0, 1, ..., n}. We thus have that
P(Y_{n+1} ∉ C(X_{n+1}, U_{n+1}, τ̂_ccal)) = P(s(X_{n+1}, U_{n+1}, Y_{n+1}) > τ̂_ccal)
= P(|{i : s(X_{n+1}, U_{n+1}, Y_{n+1}) > s_i}| ≥ ⌈(n + 1)(1 − α)⌉)
= P( |{i : s(X_{n+1}, U_{n+1}, Y_{n+1}) > s_i}| / (n + 1) ≥ ⌈(n + 1)(1 − α)⌉ / (n + 1) ) ≤ α.
Proposition 1. The lower bound follows from Theorem 1. To prove the upper bound, by Theorem 2.2 of Lei et al. (2018) it suffices to show that the variables s(X_i, U_i, Y_i) = inf{τ : Y_i ∈ C(X_i, U_i, τ)} are almost surely distinct. To this end, note that
s(X_i, U_i, Y_i) = ρ_{X_i}(Y_i) + π̂_{X_i}(Y_i) · U_i + λ(o_{X_i}(Y_i) − k_reg)^+,

and due to the middle term of the sum, these values are distinct almost surely provided π̂_{X_i}(Y_i) > 0.
Proposition 2. We first show that τ̂_ccal ≤ 1 + k* − k_reg. Note that since at least ⌈(1 − α)(n + 1)⌉ of the conformal calibration points are covered by a set of size k*, at least ⌈(1 − α)(n + 1)⌉ of the E_i in Algorithm 2 are less than or equal to 1 + k* − k_reg. Thus, by the definition of τ̂_ccal, it is less than or equal to 1 + k* − k_reg. Then, note that by the definition of C* in Eq. (4), we have that
|C*(X_{n+1}, U_{n+1}, τ̂_ccal)| ≤ k*

as long as τ̂_ccal ≤ 1 + k* − k_reg, since for the (k* + 1)-th most likely class, the sum in Eq. (4) will exceed λ · (1 + k* − k_reg) = (1 + k* − k_reg) ≥ τ̂_ccal, and so the (k* + 1)-th class will not be in the set.
Proposition 3. Suppose P(Y ∈ C(X) | X = x) = 1 − α for each x ∈ R^d. Then,

P(Y ∈ C(X) | |C(X)| ∈ A) = ∫_x P(Y ∈ C(x) | X = x) · 1{|C(x)| ∈ A} dP(x) / P(|C(X)| ∈ A)
= ∫_x (1 − α) · 1{|C(x)| ∈ A} dP(x) / P(|C(X)| ∈ A) = 1 − α.
B RANDOMIZED PREDICTORS
The reader may wonder why we choose to use a randomized procedure. The randomization is needed to achieve 1 − α coverage exactly, which we will explain via an example. Note that the randomization is of little practical importance, since the predictive set output by the randomized procedure will differ from that of the non-randomized procedure by at most one element.
Turning to an example, assume for a particular input image we expect a set of size k to have 91% coverage, and a set of size k − 1 to have 89% coverage. In order to achieve our desired coverage of 90%, we randomly choose size k or k − 1 with equal probability. In general, the probabilities will not be equal, but rather chosen so the weighted average of the two coverages is exactly 90%. If a user of our method desires deterministic sets, it is easy to turn off this randomization with a single flag, resulting in slightly conservative sets.
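The mixing probability can be computed directly. A worked sketch of the example above, with the two achievable coverages as hypothetical inputs:

```python
# The probability p of using size k solves p*0.91 + (1 - p)*0.89 = 0.90.
cov_k, cov_km1, target = 0.91, 0.89, 0.90
p = (target - cov_km1) / (cov_k - cov_km1)
print(p)   # 0.5: choose size k or k - 1 with equal probability
```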
C IMAGENET AND IMAGENETV2 RESULTS FOR α = 5%
We repeated Experiments 1 and 2 with α = 5%. See the results in Tables 5 and 6.
D COVERAGE AND SIZE CONDITIONAL ON IMAGE DIFFICULTY
In order to probe the adaptiveness properties of APS and RAPS we stratified coverage and size by image difficulty (the position of the true label in the list of most likely to least likely classes, based on the classifier predictions) in Table 7. With increasing λ, coverage decreases for more difficult images and increases for easier ones. In the most difficult regime, even though APS can output large sets, those sets still rarely contain the true class. This suggests regularization is a sensible way to stabilize the sets. As a final word on Table 7, notice that as λ increases, coverage improves for the more common medium-difficulty examples, although not for very rare and difficult ones.
E CHOOSING kreg AND λ TO OPTIMIZE SET SIZE AND ADAPTIVENESS
This section describes two procedures for picking kreg and λ that optimize for set size or adaptiveness, outperforming APS in both cases.
E.1 OPTIMIZING SET SIZE WITH RAPS
Algorithm 4 Adaptive Fixed-K
Input: α; I ∈ {1, ..., K}^{n×K} and one-hot labels y ∈ {0, 1}^{n×K}, corresponding respectively to the classes from highest to lowest estimated probability mass and the labels for each of the n examples in the dataset
1: procedure GET-KSTAR(α, I, y)
2:   for i ∈ {1, ..., n} do
3:     L_i ← the rank j such that class I_{i,j} is the true label y_i
4:   k̂* ← the ⌈(1 − α)(1 + n)⌉-th largest value in {L_i}_{i=1}^{n}
5:   return k̂*
Output: The estimate of the smallest fixed set size that achieves coverage, k̂*
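Algorithm 4 is essentially a quantile of true-label ranks; a numpy sketch, assuming `ranks` holds the (1-indexed) position of the true label in each example's sorted class list, as in Algorithm 2:

```python
# A sketch of Algorithm 4: k* estimated as a quantile of true-label ranks.
import math
import numpy as np

def get_kstar(ranks, alpha):
    n = len(ranks)
    k = math.ceil((1 - alpha) * (1 + n))
    return int(np.sort(ranks)[::-1][k - 1])   # k-th largest rank
```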
To produce Tables 1, 5, 2, and 6, we chose k_reg and λ adaptively. This required an extra data-splitting step, where a small amount of tuning data {(x_i, y_i)}_{i=1}^{m} was used to estimate k*; k_reg was then set to the estimate k̂*, calculated with Algorithm 4. Taking m ≈ 1000 was sufficient, since the algorithm is fairly insensitive to k_reg (see Table 3). We produced the Imagenet-V2 tables with m = 1000 and the Imagenet tables with m = 10000.
After choosing k̂∗, we chose λ to have small set size. We used the same tuning data to pick k̂∗ and λ for simplicity (this does not invalidate our coverage guarantee since conformal calibration still uses fresh data). A coarse grid search on λ sufficed, since small parameter variations have little impact on RAPS. For example, we chose the λ ∈ {0.001, 0.01, 0.1, 0.2, 0.5} that achieved the smallest size on the m holdout samples in order to produce Tables 1, 5, 2, and 6. We include a subroutine that automatically chooses k̂∗ and λ to optimize size in our GitHub codebase.
E.2 OPTIMIZING ADAPTIVENESS WITH RAPS
In this appendix, we show empirically that RAPS with an automatically chosen set of kreg and λ improves the adaptiveness of APS. Recall our discussion in Section 4 and Proposition 3, wherein we propose size-stratified coverage as a useful definition of adaptiveness in image classification. After picking kreg as in Appendix E, we can choose λ using the same tuning data to optimize this notion of adaptiveness.
We now describe a particular manifestation of our adaptiveness criterion that we will use to optimize λ. Consider disjoint set-size strata {S_j}_{j=1}^{s}, where ∪_{j=1}^{s} S_j = {1, ..., |Y|}. Then define the indexes of examples stratified by the prediction set size of each example from algorithm C as J_j = { i : |C(X_i, U_i)| ∈ S_j }. Then we can define the size-stratified coverage violation of an algorithm C on strata {S_j}_{j=1}^{s} as

SSCV(C, {S_j}_{j=1}^{s}) = sup_j | |{i : Y_i ∈ C(X_i, U_i), i ∈ J_j}| / |J_j| − (1 − α) |. (5)
In words, Eq. (5) is the worst-case deviation of C from exact coverage when it outputs sets of a certain size. Computing the size-stratified coverage violation thus only requires post-stratifying the results of C on a set of labeled examples. If conditional coverage held, the worst stratum coverage violation would be 0 by Proposition 3.
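A sketch of this computation, where `sets` is a list of predicted label sets, `labels` the true classes, and `strata` a list of (lo, hi) size ranges—all names are ours:

```python
# A sketch of Eq. (5) via post-stratification on a labeled evaluation set.
import numpy as np

def sscv(sets, labels, alpha, strata):
    sizes = np.array([len(S) for S in sets])
    covered = np.array([y in S for S, y in zip(sets, labels)])
    worst = 0.0
    for lo, hi in strata:
        idx = (sizes >= lo) & (sizes <= hi)
        if idx.sum() == 0:
            continue   # empty strata contribute nothing
        worst = max(worst, abs(covered[idx].mean() - (1 - alpha)))
    return worst
```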
To maximize adaptiveness, we would like to choose λ to minimize the size-stratified coverage violation of RAPS. Write C_λ to mean the RAPS procedure for a fixed choice of k_reg and λ. Then we would like to pick

λ = argmin_{λ′} SSCV(C_{λ′}, {S_j}_{j=1}^{s}). (6)
In our experiments, we choose a relatively coarse partitioning of the possible set sizes: 0–1, 2–3, 4–10, 11–100, and 101–1000. Then, we chose the λ ∈ {0.00001, 0.0001, 0.0008, 0.001, 0.0015, 0.002} which minimized the size-stratified coverage violation on the tuning set. The results in Table 8 show that RAPS always outperforms the adaptiveness of APS on the test set, even with this coarse, automated choice of parameters. The table reports the median size-stratified coverage violation over 10 independent trials of APS and RAPS with automated parameter tuning.
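A sketch of this coarse grid search, reusing the hypothetical raps_calibrate, raps_predict, and sscv helpers sketched earlier (in practice the tuning split should be distinct from the final conformal calibration split):

```python
# A sketch of Eq. (6): pick lambda minimizing SSCV on the tuning data.
strata = [(0, 1), (2, 3), (4, 10), (11, 100), (101, 1000)]
grid = [1e-5, 1e-4, 8e-4, 1e-3, 1.5e-3, 2e-3]

def pick_lambda(probs, labels, alpha, k_reg):
    best_lam, best_v = grid[0], float("inf")
    for lam in grid:
        tau = raps_calibrate(probs, labels, alpha, k_reg, lam)
        sets = [raps_predict(p, tau, k_reg, lam) for p in probs]
        v = sscv(sets, labels, alpha, strata)
        if v < best_v:
            best_lam, best_v = lam, v
    return best_lam
```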
F COMPARISON WITH LEAST AMBIGUOUS SET-VALUED CLASSIFIERS
In this section, we compare RAPS to the Least Ambiguous Set-valued Classifier (LAC) method introduced in Sadinle et al. (2019), an alternative conformal procedure that is designed to have small sets. The LAC method provably gives the smallest possible average set size in the case where the input probabilities are correct, with the idea that these sets should be small even when the estimated probabilities are only approximately correct. In the notation of this paper, the LAC method considers nested sets of the following form:
C_LAC(x, τ) := {y : π̂_x(y) ≥ 1 − τ}, which can be calibrated as before using τ̂_ccal from Eq. (3).
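Because the LAC score is simply 1 − π̂_x(y), the full method fits in a few lines; a minimal sketch under the same array conventions as our earlier snippets:

```python
# A minimal sketch of LAC: score 1 - pi_hat_x(y), calibrated as in Eq. (3).
import math
import numpy as np

def lac_calibrate(probs, labels, alpha):
    n = len(labels)
    scores = 1.0 - probs[np.arange(n), labels]
    k = math.ceil((n + 1) * (1 - alpha))
    return np.sort(scores)[k - 1]

def lac_predict(probs, tau):                   # probs: one test example
    return np.nonzero(1.0 - probs <= tau)[0]   # {y : pi_hat(y) >= 1 - tau}
```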
We first compare naive, APS, RAPS, and LAC in terms of power and coverage in Table 9. In this experiment, we tuned RAPS to have small set size as described in Appendix E.1. We see that LAC also achieves correct coverage, as expected since it is a conformal method and satisfies the guarantee from Theorem 1. We further see that it has systematically smaller sets than RAPS, although the difference is slight compared to the gap between APS and RAPS or APS and LAC.
We next compare RAPS to LAC in terms of adaptiveness, tuning RAPS as in Section E.2. First, in Table 10, we report the coverage of LAC for images of different difficulties, and see that LAC has dramatically worse coverage for hard images than for easy ones. Comparing this to RAPS in Table 7, we see that RAPS also has worse coverage for more difficult images, although the gap is much smaller for RAPS. Next, in Table 11, we report the SSCV metric of adaptiveness (and conditional coverage) for APS, RAPS, and LAC. We find that APS and RAPS have much better adaptiveness than LAC, with RAPS being the overall winner. The results of all of these comparisons are expected: LAC is not targeting adaptiveness and is instead trying to achieve the smallest possible set size. It succeeds at its goal, sacrificing adaptiveness to do so.
| 1. What is the focus of the paper regarding conformalized procedures for classification tasks?
2. What are the strengths of the proposed method, particularly in terms of its simplicity and theoretical guarantees?
3. What are the weaknesses of the paper, especially regarding the choice of hyperparameters and the potential issue of over-coverage?
4. How does the reviewer assess the effectiveness of the proposed solution in addressing the issue of large uncertainty sets?
5. Are there any practical situations where the proposed method might be desirable, despite the reviewer's reservations?
6. How does the reviewer recommend improving the paper, including the inclusion of essential sections and tables, and the use of better metrics for performance comparisons? | Review | Review
Summary
The paper proposes a new conformalized procedure for computing uncertainty sets in classification tasks. The key feature of the method is that the size of the uncertainty sets is regularized via a penalty on the size. The issue of large uncertainty sets produced by conformalized procedures is an interesting one, which the paper does well to highlight. The proposed solution of using an additive regularizer is reasonable, and appears to be effective for sensible choices of the hyper-parameters. However, the paper has some significant weaknesses.
Strengths
The method is easy to understand and to implement.
The method satisfies some meaningful theoretical guarantees. (The proofs have been checked for correctness.)
Weaknesses (and Questions)
I am not 100% convinced that one would want to prune extremely large uncertainty sets arising out of the APS procedure. Because the marginal coverage must be maintained, this forces some small uncertainty sets to be enlarged. However, in many practical classification tasks (including the experiments in this paper), it tends also to be the case that difficult examples are less common than the easier ones. It is hard for me to imagine a situation in which one would wish to over-cover typical examples and under-cover atypical ones to an even larger extent, even if the over-coverage is only by one or two classes.
However, perhaps there are real world situations in which this is precisely the case. It would be nice if the authors could offer additional insights on this point.
The method requires the user to choose two hyper-parameters k_reg and λ, with some combinations leading to decidedly less desirable results. For the particular case of k_reg = k*, perhaps one could look at the RAPS solution as "interpolating" between the APS solution and the top-k* solution. Related to #1 above, it isn't clear to me why this would be desirable in any way. Furthermore, although the hyper-parameters can be loosely interpreted as having to do with the size and the tradeoff with adaptivity, respectively, the precise relationship does not appear to be understood for general cases, making it more challenging to predict the behavior of the method. This is in contrast to the APS, which can be motivated via an oracle procedure with certain optimality properties.
Recommendation
A borderline reject. The paper has some significant weaknesses, but does contain some interesting insights.
Additional Feedback
Section B in the appendix is essential reading for anyone looking to implement the procedure. I think it ought to be included in the paper.
It looks to me that the method could suffer from poor choices of k_reg and λ. To prevent such choices, if k_reg = k* is in some sense an oracle choice, wouldn't it be better to give the procedure with the particular hyper-parameter tuning procedure already incorporated?
Personally, I found Tables 4 and 5 far more informative than Figure 4. I would prefer to see the tables in the paper.
I am not sure if the median-of-means is the right metric for performance comparisons. One way to view the RAPS is that it "tames" the tail of the distribution of the size of uncertainty sets produced by the APS. Personally, I would prefer a direct comparison of the distributions, as in the tables in the appendix. However, even if a summary measure must be used, there are probably better options.
I find the comment in Section 3.2 "The result shows that our method can still provide exact coverage under a significant distribution shift" somewhat misleading. In a split conformal setup, the only distribution shift of any import (with regards to the marginal coverage guarantee) is the one between the calibration set and a new test point.
Update
Please see my reply to the authors' message. |
ICLR | Title
Uncertainty Sets for Image Classifiers using Conformal Prediction
Abstract
Convolutional image classifiers can achieve high predictive accuracy, but quantifying their uncertainty remains an unresolved challenge, hindering their deployment in consequential settings. Existing uncertainty quantification techniques, such as Platt scaling, attempt to calibrate the network’s probability estimates, but they do not have formal guarantees. We present an algorithm that modifies any classifier to output a predictive set containing the true label with a user-specified probability, such as 90%. The algorithm is simple and fast like Platt scaling, but provides a formal finite-sample coverage guarantee for every model and dataset. Our method modifies an existing conformal prediction algorithm to give more stable predictive sets by regularizing the small scores of unlikely classes after Platt scaling. In experiments on both Imagenet and Imagenet-V2 with ResNet-152 and other classifiers, our scheme outperforms existing approaches, achieving coverage with sets that are often factors of 5 to 10 smaller than a stand-alone Platt scaling baseline.
1 INTRODUCTION
Imagine you are a doctor making a high-stakes medical decision based on diagnostic information from a computer vision classifier. What would you want the classifier to output in order to make the best decision? This is not a casual hypothetical; such classifiers are already used in medical settings (e.g., Razzak et al., 2018; Lundervold & Lundervold, 2019; Li et al., 2014). A maximumlikelihood diagnosis with an accompanying probability may not be the most essential piece of information. To ensure the health of the patient, you must also rule in or rule out harmful diagnoses. In other words, even if the most likely diagnosis is a stomach ache, it is equally or more important to rule out stomach cancer. Therefore, you would want the classifier to give you—in addition to an estimate of the most likely outcome—actionable uncertainty quantification, such as a set of predictions that provably covers the true diagnosis with a high probability (e.g., 90%). This is called a prediction set (see Figure 1). Our paper describes a method for constructing prediction sets from any pre-trained image classifier that are formally guaranteed to contain the true class with the desired probability, relatively small, and practical to implement. Our method modifies a conformal predictor (Vovk et al., 2005) given in Romano et al. (2020) for the purpose of modern image classification in order to make it more stable in the presence of noisy small probability estimates. Just as importantly, we provide extensive evaluations and code for conformal prediction in computer vision.
Formally, for a discrete response Y ∈ Y = {1, . . . ,K} and a feature vector X ∈ Rd, we desire an uncertainty set function, C(X), mapping a feature vector to a subset of {1, . . . ,K} such that
P (Y ∈ C(X)) ≥ 1− α, (1)
for a pre-specified confidence level α such as 10%. Conformal predictors like our method can modify any black-box classifier to output predictive sets that are rigorously guaranteed to satisfy the desired coverage property shown in Eq. (1). For evaluations, we focus on Imagenet classification using convolutional neural networks (CNNs) as the base classifiers, since this is a particularly challenging testbed. In this setting, X would be the image and Y would be the class label. Note that the guarantee in Eq. (1) is marginal over X and Y—it holds on average, not for a particular image X.

∗Equal contribution. Blog: https://people.eecs.berkeley.edu/~angelopoulos/blog/posts/conformal-classification
A first approach toward this goal might be to assemble the set by including classes from highest to lowest probability (e.g., after Platt scaling and a softmax function; see Platt et al., 1999; Guo et al., 2017) until their sum just exceeds the threshold 1 − α. We call this strategy naive and formulate it precisely in Algorithm 1. There are two problems with naive: first, the probabilities output by CNNs are known to be incorrect (Nixon et al., 2019), so the sets from naive do not achieve coverage. Second, image classification models’ tail probabilities are often badly miscalibrated, leading to large sets that do not faithfully articulate the uncertainty of the model; see Section 2.3. Moreover, smaller sets that achieve the same coverage level can be generated with other methods.
The coverage problem can be solved by picking a new threshold using holdout samples. For example, with α =10%, if choosing sets that contain 93% estimated probability achieves 90% coverage on the holdout set, we use the 93% cutoff instead. We refer to this algorithm, introduced in Romano et al. (2020), as Adaptive Prediction Sets (APS). The APS procedure provides coverage but still produces large sets. To fix this, we introduce a regularization technique that tempers the influence of these noisy estimates, leading to smaller, more stable sets. We describe our proposed algorithm, Regularized Adaptive Prediction Sets (RAPS), in Algorithms 2 and 3 (with APS as a special case). As we will see in Section 2, both APS and RAPS are always guaranteed to satisfy Eq. (1)—regardless of model and dataset. Furthermore, we show that RAPS is guaranteed to have better performance than choosing a fixed-size set. Both methods impose negligible computational requirements in both training and evaluation, and output useful estimates of the model’s uncertainty on a new image given, say, 1000 held-out examples.
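As a concrete illustration of this recalibration step, a minimal numpy sketch is given below; `probs` and `labels` are assumed to come from the holdout set, and the finite-sample correction matches Eq. (3) later in the paper.

```python
# A minimal sketch of the recalibration idea just described: find the
# smallest cumulative-probability cutoff attaining 1 - alpha empirical
# coverage on a holdout set (`probs` is (n, K), `labels` is (n,)).
import math
import numpy as np

def adjusted_cutoff(probs, labels, alpha):
    n = len(labels)
    order = np.argsort(-probs, axis=1)
    s = np.take_along_axis(probs, order, axis=1)
    rank = np.argmax(order == labels[:, None], axis=1)   # 0-indexed rank
    mass = np.cumsum(s, axis=1)[np.arange(n), rank]      # mass at true label
    k = math.ceil((n + 1) * (1 - alpha))
    return np.sort(mass)[k - 1]   # e.g., use 0.93 instead of 0.90
```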
In Section 3 we conduct the most extensive evaluation of conformal prediction in deep learning to date on Imagenet and Imagenet-V2. We find that RAPS sets always have smaller average size than naive and APS sets. For example, using a ResNeXt-101, naive does not achieve coverage, while APS and RAPS achieve it almost exactly. However, APS sets have an average size of 19, while RAPS sets have an average size of 2 at α = 10% (Figure 2 and Table 1). We will provide an accompanying codebase that implements our method as a wrapper for any PyTorch classifier, along with code to exactly reproduce all of our experiments.
1.1 RELATED WORK
Reliably estimating predictive uncertainty for neural networks is an unsolved problem. Historically, the standard approach has been to train a Bayesian neural network to learn a distribution over network weights (Quinonero-Candela et al., 2005; MacKay, 1992; Neal, 2012; Kuleshov et al., 2018; Gal, 2016). This approach requires computational and algorithmic modifications; other approaches avoid these via ensembles (Lakshminarayanan et al., 2017; Jiang et al., 2018) or approximations of Bayesian inference (Riquelme et al., 2018; Sensoy et al., 2018). These methods also have major practical limitations; for example, ensembling requires training many copies of a neural network adversarially. Therefore, the most widely used strategy is ad-hoc traditional calibration of the softmax scores with Platt scaling (Platt et al., 1999; Guo et al., 2017; Nixon et al., 2019).
This work develops a method for uncertainty quantification based on conformal prediction. Originating in the online learning literature, conformal prediction is an approach for generating predictive sets that satisfy the coverage property in Eq. (1) (Vovk et al., 1999; 2005). We use a convenient data-splitting version known as split conformal prediction that enables conformal prediction methods to be deployed for essentially any predictor (Papadopoulos et al., 2002; Lei et al., 2018). While mechanically very different from traditional calibration as discussed above, we will refer to our approach as conformal calibration to highlight that the two methodologies have overlapping but different goals.
Conformal prediction is a general framework, not a specific algorithm—important design decisions must be made to achieve the best performance in each context. To this end, Romano et al. (2020) and Cauchois et al. (2020) introduce techniques aimed at achieving coverage that is similar across regions of feature space, whereas Vovk et al. (2003), Hechtlinger et al. (2018), and Guan & Tibshirani (2019) introduce techniques aimed at achieving equal coverage for each class. While these methods have conceptual appeal, thus far there has been limited empirical evaluation of this general approach for state-of-the-art CNNs. Concretely, the only works that we are aware of that include some evaluation of conformal methods on ImageNet—the gold standard for benchmarking computer vision methods—are Hechtlinger et al. (2018), Park et al. (2019), Cauchois et al. (2020), and Messoudi et al. (2020), although in all four cases further experiments are needed to more fully evaluate their operating characteristics for practical deployment. At the heart of conformal prediction is the conformal score, a measure of similarity between labeled examples that is used to compare a new point to those in a holdout set. Our theoretical contribution can be summarized as a modification of the conformal score from Romano et al. (2020) that yields smaller, more stable sets. Lastly, there are alternative approaches to returning prediction sets that are not based on conformal prediction (Pearce et al., 2018; Zhang et al., 2018). These methods can be used as input to a conformal procedure to potentially improve performance, but they do not have finite-sample coverage guarantees when used alone.
2 METHODS
In developing uncertainty set methods to improve upon naive, we are guided by three desiderata. First and most importantly, the coverage desideratum says the sets must provide 1 − α coverage, as discussed above. Secondly, the size desideratum says we want sets of small size, since these convey more detailed information and may be more useful in practice. Lastly, the adaptiveness desideratum says we want the sets to communicate instance-wise uncertainty: they should be smaller for easy test-time examples than for hard ones; see Figure 1 for an illustration. Coverage and size are obviously competing objectives, but size and adaptiveness are also often in tension. The size desideratum seeks small sets, while the adaptiveness desideratum seeks larger sets when the classifier is uncertain. For example, always predicting a set of size five could achieve coverage, but it is not adaptive. As noted above, both APS and RAPS achieve correct coverage, and we will show that RAPS improves upon APS according to the other two desiderata.

Algorithm 1 Naive Prediction Sets
Input: α, sorted scores s, associated permutation of classes I, boolean rand
1: procedure NAIVE(α, s, I, rand)
2:   L ← 1
3:   while Σ_{i=1}^{L} s_i < 1 − α do   ▷ Stop once 1 − α probability is exceeded
4:     L ← L + 1
5:   if rand then   ▷ Break ties randomly (explained in Appendix B)
6:     U ← Unif(0, 1)
7:     V ← (Σ_{i=1}^{L} s_i − (1 − α)) / s_L
8:     if U ≤ V then
9:       L ← L − 1
10:  return {I_1, ..., I_L}
Output: The 1 − α prediction set, {I_1, ..., I_L}
We now turn to the specifics of our proposed method. We begin in Subsection 2.1 by describing an abstract data-splitting procedure called conformal calibration that enables the near-automatic construction of valid predictive sets (that is, sets satisfying Eq. (1)). Subsequently, in Subsection 2.2, we provide a detailed presentation of our procedure, with commentary in Section 2.3. In Subsection 2.4 we discuss the optimality of our procedure, proving that it is at least as good as the procedure that returns sets of a fixed size, unlike alternative approaches.
2.1 CONFORMAL CALIBRATION
We first review a general technique for producing valid prediction sets, following the articulation in Gupta et al. (2019). Consider a procedure that outputs a predictive set for each observation, and further suppose that this procedure has a tuning parameter τ that controls the size of the sets. (In RAPS, τ is a threshold on the cumulative sum of the sorted, penalized classifier scores.) We take a small independent conformal calibration set of data, and then choose the tuning parameter τ such that the predictive sets are large enough to achieve 1 − α coverage on this set. See Figure 3 for an illustration. This calibration step yields a choice of τ, and the resulting set is formally guaranteed to have coverage 1 − α on a future test point from the same distribution; see Theorem 1 below. Formally, let (X_i, Y_i), i = 1, ..., n, be an independent and identically distributed (i.i.d.) set of variables that was not used for model training. Further, let C(x, u, τ) : R^d × [0, 1] × R → 2^Y be a set-valued function that takes a feature vector x to a subset of the possible labels. The second argument u is included to allow for randomized procedures; let U_1, ..., U_n be i.i.d. uniform [0, 1] random variables that will serve as the second argument for each data point. Suppose that the sets are indexed by τ such that they are nested, meaning larger values of τ lead to larger sets:
C(x, u, τ_1) ⊆ C(x, u, τ_2) if τ_1 ≤ τ_2. (2)

To find a function that will achieve 1 − α coverage on test data, we select the smallest τ that gives at least 1 − α coverage on the conformal calibration set, with a slight correction to account for the finite sample size:
τ̂_ccal = inf{ τ : |{i : Y_i ∈ C(X_i, U_i, τ)}| / n ≥ ⌈(n + 1)(1 − α)⌉ / n }. (3)
The set function C(x, u, τ) with this data-driven choice of τ is guaranteed to have correct finite-sample coverage on a fresh test observation, as stated formally next.

Theorem 1 (Conformal calibration coverage guarantee). Suppose (X_i, Y_i, U_i), i = 1, ..., n, and (X_{n+1}, Y_{n+1}, U_{n+1}) are i.i.d. and let C(x, u, τ) be a set-valued function satisfying the nesting property in Eq. (2). Suppose further that the sets C(x, u, τ) grow to include all labels for large enough τ: for all x ∈ R^d, C(x, u, τ) = Y for some τ. Then for τ̂_ccal defined as in Eq. (3), we have the following coverage guarantee:
P(Y_{n+1} ∈ C(X_{n+1}, U_{n+1}, τ̂_ccal)) ≥ 1 − α.
This is the same coverage property as Eq. (1) in the introduction, written in a more explicit manner. The result is not new—a special case of this result leveraging sample-splitting first appears in the regression setting in Papadopoulos et al. (2002), and the core idea of conformal prediction was introduced even earlier; see (Vovk et al., 2005).
As a technical remark, the theorem also holds if the observations only satisfy the weaker condition of exchangeability; see Vovk et al. (2005). In addition, for most families of set-valued functions C(x, u, τ) there is a matching upper bound:
P(Y_{n+1} ∈ C(X_{n+1}, U_{n+1}, τ̂_ccal)) ≤ 1 − α + 1 / (n + 1).
Roughly speaking, this will hold whenever the sets grow smoothly in τ . See Lei et al. (2018) for a formal statement of the required conditions.
2.2 OUR METHOD
Conformal calibration is a powerful general idea, allowing one to achieve the coverage desideratum for any choice of sets C(x, u, τ). Nonetheless, this is not yet a full solution, since the quality of the resulting prediction sets can vary dramatically depending on the design of C(x, u, τ). In particular, we recall the size and adaptiveness desiderata from Section 1—we want our uncertainty sets to be as small as possible while faithfully articulating the instance-wise uncertainty of each test point. In this section, we explicitly give our algorithm, which can be viewed as a special case of conformal calibration with the uncertainty sets C designed to extract information from CNNs. Our algorithm has three main ingredients. First, for a feature vector x, the base model computes class probabilities π̂_x ∈ R^K, and we order the classes from most probable to least probable. Then, we add a regularization term to promote small predictive sets. Finally, we conformally calibrate the penalized prediction sets to guarantee coverage on future test points.
Formally, let ρ_x(y) = Σ_{y′=1}^{K} π̂_x(y′) · 1{π̂_x(y′) > π̂_x(y)} be the total probability mass of the set of labels that are more likely than y. These are all the labels that will be included before y is included. In addition, let o_x(y) = |{y′ ∈ Y : π̂_x(y′) ≥ π̂_x(y)}| be the rank of y among the labels based on the probabilities π̂. For example, if y is the third most likely label, then o_x(y) = 3.¹ We take
C*(x, u, τ) := { y : ρ_x(y) + π̂_x(y) · u + λ · (o_x(y) − k_reg)^+ ≤ τ }, (4)

(the λ term above is the regularization)
where (z)+ denotes the positive part of z and λ, kreg ≥ 0 are regularization hyperparameters that are introduced to encourage small set sizes. See Figure 3 for a visualization of a RAPS predictive set and Appendix E for a discussion of how to select kreg and λ.
Since this is the heart of our proposal, we carefully parse each term. First, the ρx(y) term increases as y ranges from the most probable to least probable label, so our sets will prefer to include the y that are predicted to be the most probable. The second term, π̂x(y) · u, is a randomized term to handle the fact that the value will jump discretely with the inclusion of each new y. The randomization term can never impact more than one value of y: there is at most one value of y such that y ∈ C(x, 0, τ) but y /∈ C(x, 1, τ). These first two terms can be viewed as the CDF transform after arranging the classes from most likely to least likely, randomized in the usual way to result in a continuous uniform random variable (cf. Romano et al., 2020). We discuss randomization further in Appendix B.
Lastly, the regularization promotes small set sizes: for values of y that occur farther down the ordered list of classes, the term λ · (o_x(y) − k_reg)^+ makes that value of y require a higher value of τ before it is included in the predictive set. For example, if k_reg = 5, then the sixth most likely value of y has an extra penalty of size λ, so it will never be included until τ exceeds ρ_x(y) + π̂_x(y) · u + λ, whereas it enters when τ exceeds ρ_x(y) + π̂_x(y) · u in the non-regularized version. Our method has the following coverage property:

Proposition 1 (RAPS coverage guarantee). Suppose (X_i, Y_i, U_i), i = 1, ..., n, and (X_{n+1}, Y_{n+1}, U_{n+1}) are i.i.d. and let C*(x, u, τ) be defined as in Eq. (4). Suppose further that π̂_x(y) > 0 for all x and y. Then for τ̂_ccal defined as in Eq. (3), we have the following coverage guarantee:
1 − α ≤ P(Y_{n+1} ∈ C*(X_{n+1}, U_{n+1}, τ̂_ccal)) ≤ 1 − α + 1 / (n + 1).
¹ For ease of notation, we assume distinct probabilities; otherwise, label-ordering ties should be broken randomly.
Algorithm 2 RAPS Conformal Calibration
Input: α; s ∈ [0, 1]^{n×K}, I ∈ {1, ..., K}^{n×K}, and one-hot labels y ∈ {0, 1}^{n×K}, corresponding respectively to the sorted scores, the associated permutation of indexes, and the labels for each of the n examples in the calibration set; k_reg; λ; boolean rand
1: procedure RAPSC(α, s, I, y, λ)
2:   for i ∈ {1, ..., n} do
3:     L_i ← the rank j such that class I_{i,j} is the true label y_i
4:     E_i ← Σ_{j=1}^{L_i} s_{i,j} + λ(L_i − k_reg + 1)^+
5:     if rand then
6:       U ∼ Unif(0, 1)
7:       E_i ← E_i − s_{i,L_i} + U · s_{i,L_i}
8:   τ̂_ccal ← the ⌈(1 − α)(1 + n)⌉-th largest value in {E_i}_{i=1}^{n}
9:   return τ̂_ccal
Output: The generalized quantile, τ̂_ccal   ▷ The value in Eq. (3)
Algorithm 3 RAPS Prediction Sets
Input: α, sorted scores s and the associated permutation of classes I for a test-time example, τ̂_ccal from Algorithm 2, k_reg, λ, boolean rand
1: procedure RAPS(α, s, I, τ̂_ccal, k_reg, λ, rand)
2:   L ← |{ j ∈ Y : Σ_{i=1}^{j} s_i + λ(j − k_reg)^+ ≤ τ̂_ccal }| + 1
3:   V ← (τ̂_ccal − Σ_{i=1}^{L} s_i − λ(L − k_reg)^+ + s_L) / s_L
4:   if rand and V ≤ U ∼ Unif(0, 1) then
5:     L ← L − 1
6:   return C = {I_1, ..., I_L}   ▷ The L most likely classes
Output: The 1 − α confidence set, C   ▷ The set in Eq. (4)
Note that the first inequality is a corollary of Theorem 1, and the second inequality is a special case of the remark in Section 2.1. The restriction that π̂x(y) > 0 is not necessary for the first inequality.
2.3 WHY REGULARIZE?
In our experiments, the sets from APS are larger than necessary because APS is sensitive to the noisy probability estimates far down the list of classes. This noise leads to a permutation problem among the unlikely classes, where the ordering of the classes with small probability estimates is determined mostly by random chance. If 5% of the true classes from the calibration set land deep in the tail due to the permutation problem, APS will choose large 95% predictive sets; see Figure 2. The inclusion of the RAPS regularization causes the algorithm to avoid using the unreliable probabilities in the tail; see Figure 4. We discuss how RAPS improves the adaptiveness of APS in Section 4 and Appendix E.
2.4 OPTIMALITY CONSIDERATIONS
To complement these experimental results, we now formally prove that RAPS with the correct regularization parameters will always dominate the simple procedure that returns a fixed set size. (Section 3.5 shows that the parameters are easy to select and that RAPS is not sensitive to their values.) For a feature vector x, let ŷ_(j)(x) be the label with the j-th highest predicted probability. We define the top-k predictive sets to be {ŷ_(1)(x), ..., ŷ_(k)(x)}.

Proposition 2 (RAPS dominates top-k sets). Suppose (X_i, Y_i, U_i), i = 1, ..., n, and (X_{n+1}, Y_{n+1}, U_{n+1}) are i.i.d. draws. Let k* be the smallest k such that the top-k predictive sets have coverage at least ⌈(n + 1)(1 − α)⌉/n on the conformal calibration points (X_i, Y_i), i = 1, ..., n. Take C*(x, u, τ) as in Eq. (4) with any k_reg ≤ k* and λ = 1. Then with τ̂_ccal chosen as in Eq. (3), we have
C*(X_{n+1}, U_{n+1}, τ̂_ccal) ⊆ {ŷ_(1)(X_{n+1}), ..., ŷ_(k*)(X_{n+1})}.
In words, the RAPS procedure with heavy regularization will be at least as good as the top-k procedure in the sense that it has smaller or same average set size while maintaining the desired coverage level. This is not true of either the naive baseline or the APS procedure; Table 2 shows that these two procedures usually return predictive sets with size much larger than k∗.
3 EXPERIMENTS
In this section we report on experiments that study the performance of the predictive sets from naive, APS, and RAPS, evaluating each based on the three desiderata above. We begin with a brief preview of the experiments. In Experiment 1, we evaluate naive, APS, and RAPS on Imagenet-Val. Both APS and RAPS provided almost exact coverage, while naive sets had coverage slightly below the specified level; RAPS had a much smaller average set size than either APS or naive. In Experiment 2, we repeat Experiment 1 on Imagenet-V2, and the conclusions still hold. In Experiment 3, we produce histograms of set sizes for naive, APS, and RAPS for several different values of λ, illustrating a simple tradeoff between set size and adaptiveness. In Experiment 4, we compute histograms of RAPS sets stratified by image difficulty, showing that RAPS sets are smaller for easier images than for difficult ones. In Experiment 5, we report the performance of RAPS with many values of the tuning parameters.
In our experiments, we use nine standard, pretrained Imagenet classifiers from the torchvision repository (Paszke et al., 2019) with standard normalization, resize, and crop parameters. Before applying naive, APS, or RAPS, we calibrated the classifiers using the standard temperature scaling/Platt scaling procedure as in Guo et al. (2017) on the calibration set. Thereafter, naive, APS, and RAPS were applied, with RAPS using a data-driven choice of parameters described in Appendix E. We use the randomized versions of these algorithms—see Appendix B for a discussion.
3.1 EXPERIMENT 1: COVERAGE VS SET SIZE ON IMAGENET
In this experiment, we calculated the coverage and mean set size of each procedure for two different choices of α. Over 100 trials, we randomly sampled two subsets of Imagenet-Val: one conformal calibration subset of size 20K and one evaluation subset of size 20K. The median-of-means over trials for both coverage and set size are reported in Table 1. Figure 2 illustrates the performances of naive, APS, and RAPS; RAPS has much smaller sets than both naive and APS, while achieving coverage. We also report results from a conformalized fixed-k procedure, which finds the smallest fixed set size achieving coverage on the holdout set, k∗, then predicts sets of size k∗ − 1 or k∗ on new examples in order to achieve exact coverage; see Algorithm 4 in Appendix E.
3.2 EXPERIMENT 2: COVERAGE VS SET SIZE ON IMAGENET-V2
The same procedure as Experiment 1 was repeated on Imagenet-V2, with exactly the same normalization, resize, and crop parameters. The size of the calibration and evaluation sets was 5K, since Imagenet-V2 is a smaller dataset. The result shows that our method can still provide coverage even for models trained on different distributions, as long as the conformal calibration set comes from the new distribution. The variance of the coverage is higher due to having less data.
3.3 EXPERIMENT 3: SET SIZES OF NAIVE, APS, AND RAPS ON IMAGENET
We investigate the effect of regularization in more detail. For three values of λ, we collected the set sizes produced by each of naive, APS, and RAPS and report their histograms in Figure 4.
3.4 EXPERIMENT 4: ADAPTIVENESS OF RAPS ON IMAGENET
We now show that RAPS sets are smaller for easy images than hard ones, addressing the adaptiveness desideratum. Table 4 reports the size-stratified coverages of RAPS at the 90% level with kreg = 5 and different choices of λ. When λ is small, RAPS allows sets to be large. But when λ = 1, RAPS
clips sets to a maximum size of 5. Table 7 (in the Appendix) stratifies by image difficulty, showing that RAPS sets are small for easy examples and large for hard ones. Experiments 3 and 4 together illustrate the tradeoff between adaptiveness and size: as the average set size decreases, the RAPS procedure truncates sets larger than the smallest fixed set that provides coverage, taming the heavy tail of the APS procedure. Since RAPS with large λ undercovers hard examples, it must compensate by taking larger sets for easy examples to ensure the $1-\alpha$ marginal coverage guarantee. However, the size only increases slightly, since easy images are more common than hard ones and the total probability mass can often exceed $\hat{\tau}_{ccal}$ by including only one more class. If this behavior is not desired, we can instead automatically pick λ to optimize the adaptiveness of RAPS; see Section 4.
3.5 EXPERIMENT 5: CHOICE OF TUNING PARAMETERS
While any value of the tuning parameters λ and kreg lead to coverage (Proposition 1), some values will lead to smaller sets. In Experiments 1 and 2, we chose kreg and λ adaptively from data (see Appendix E), achieving strong results for all models and choices of the coverage level. Table 3 gives the performance of RAPS with many choices of kreg and λ for ResNet-152.
4 ADAPTIVENESS AND CONDITIONAL COVERAGE
In this section, we point to a definition of adaptiveness that is more natural for the image classification setting than the existing notion of conditional coverage. We show that APS does not satisfy conditional coverage, and that RAPS with small λ outperforms it in terms of adaptiveness.
We say that a set-valued predictor $C : \mathbb{R}^d \to 2^{\mathcal{Y}}$ satisfies exact conditional coverage if $P(Y \in C(X) \mid X = x) = 1 - \alpha$ for each $x$. Distribution-free guarantees on conditional coverage are impossible (Vovk, 2012; Lei & Wasserman, 2014), but many algorithms try to satisfy it approximately (Romano et al., 2019; 2020; Cauchois et al., 2020). In a similar spirit, Tibshirani et al. (2019) suggest a notion of local conditional coverage, where one asks for coverage in a neighborhood of each point, weighted according to a chosen kernel. Cauchois et al. (2020) introduce the worst-case slab metric for measuring violations of the conditional coverage property. We present a different way of measuring violations of conditional coverage.
Proposition 3. Suppose $P(Y \in C(X) \mid X = x) = 1 - \alpha$ for each $x \in \mathbb{R}^d$. Then, $P(Y \in C(X) \mid |C(X)| \in A) = 1 - \alpha$ for any $A \subset \{0, 1, 2, \dots\}$.
In words, if conditional coverage holds, then coverage holds after stratifying by set size. Based on this result, in Appendix E we introduce the size-stratified coverage violation criterion, a simple and pragmatic way of quantifying adaptiveness. We then automatically tune λ on this metric, so that RAPS markedly outperforms the adaptiveness of APS (see Table 8).
Average RAPS set size for each $k_{reg}$ (rows) as $\lambda$ increases across the columns:
$k_{reg} = 1$: 11.2, 10.2, 7.0, 3.6, 2.9, 2.3, 2.1, 2.3, 2.2, 2.2
$k_{reg} = 2$: 11.2, 10.2, 7.1, 3.7, 3.0, 2.4, 2.1, 2.3, 2.2, 2.2
$k_{reg} = 5$: 11.2, 10.2, 7.2, 3.9, 3.4, 2.9, 2.6, 2.5, 2.5, 2.5
$k_{reg} = 10$: 11.2, 10.2, 7.4, 4.5, 4.0, 3.6, 3.4, 3.4, 3.4, 3.4
$k_{reg} = 50$: 11.2, 10.6, 8.7, 7.2, 7.0, 6.9, 6.9, 6.9, 6.9, 6.9
In Table 4, we report the coverage of APS and RAPS, stratified by the size of the prediction set. Turning our attention to the $\lambda = 0$ column, we see that when APS outputs a set of size 101–1000, it has coverage 97%, substantially higher than the 90% nominal rate. By Proposition 3, we conclude that APS is not achieving exact conditional coverage, because the scores are far from the oracle probabilities. The APS procedure still achieves marginal coverage by overcovering hard examples and undercovering easy ones, an undesirable behavior. Alternatively, RAPS can be used to regularize the set sizes: for $\lambda = 0.001$ to $\lambda = 0.01$, the coverage stratified by set size is more balanced. In summary, even purely based on the adaptiveness desideratum, RAPS with light regularization is preferable to APS. Note that as the size of the training data increases, as long as $\hat{\pi}$ is consistent, naive and APS will become more stable, and so we expect less regularization will be needed.
Lastly, we argue that conditional coverage is a poor notion of adaptiveness when the best possible model (i.e., one fit on infinite data) has high accuracy. Given such a model, the oracle procedure from Romano et al. (2020) would return the correct label with probability $1 - \alpha$ and the empty set with probability $\alpha$. That is, having correct conditional coverage for high-signal problems where $Y$ is perfectly determined by $X$ requires a perfect classifier. In our experiments on ImageNet, APS does not approximate this behavior. Therefore, conditional coverage is not the right goal for prediction sets with realistic sample sizes. Proposition 3 suggests a relaxation. We could require that we have the right coverage no matter the size of the prediction set: $P(Y \in C(X) \mid |C(X)| \in A) \ge 1 - \alpha$ for any $A \subset \{0, 1, 2, \dots\}$; Appendix E.2 develops this idea. We view this as a promising way to reason about adaptiveness in high-signal problems such as image classification.
5 DISCUSSION
For classification tasks with many possible labels, our method enables a researcher to take any base classifier and return predictive sets guaranteed to achieve a pre-specified error level, such as 90%, while retaining small average size. It is simple to deploy, so it is an attractive, automatic way to quantify the uncertainty of image classifiers—an essential task in such settings as medical diagnostics, self-driving vehicles, and flagging dangerous internet content. Predictive sets in computer vision (from RAPS and other conformal methods) have many further uses, since they systematically identify hard test-time examples. Finding such examples is useful in active learning, where one only has resources to label a small number of points. In a different direction, one can improve the efficiency of a classifier by first using a cheap classifier that outputs a prediction set, invoking an expensive one only when the cheap classifier outputs a large set (a cascade; see, e.g., Li et al. (2015), and see Fisch et al. (2021) for an implementation of conformal prediction in this setting). One can also use predictive sets during model development to identify failure cases and outliers and suggest strategies for improving performance. Prediction sets are most useful for problems with many classes; returning to our initial medical motivation, we envision RAPS could be used by a doctor to automatically screen for a large number of diseases (e.g., via a blood sample) and refer the patient to relevant specialists.
A PROOFS
Theorem 1. Let $s(x, u, y) = \inf\{\tau : y \in C(x, u, \tau)\}$, and let $s_i = s(X_i, U_i, Y_i)$ for $i = 1, \dots, n$. Then
$$\{y : s(x, u, y) \le \tau\} = \{y : y \in C(x, u, \tau)\}$$
because $C(x, u, \tau)$ is a finite set growing in $\tau$ by the assumption in Eq. (2). Thus,
$$\left\{\tau : |\{i : s_i \le \tau\}| \ge \lceil (1-\alpha)(n+1) \rceil \right\} = \left\{\tau : \frac{|\{i : Y_i \in C(X_i, U_i, \tau)\}|}{n} \ge \frac{\lceil (n+1)(1-\alpha) \rceil}{n} \right\}.$$
Considering the left expression, the infimum over $\tau$ of the set on the left-hand side is the $\lceil (1-\alpha)(n+1) \rceil$-th smallest value of the $s_i$, so this is the value of $\hat{\tau}_{ccal}$. Since $s_1, \dots, s_n, s(X_{n+1}, U_{n+1}, Y_{n+1})$ are exchangeable random variables, $|\{i : s(X_{n+1}, U_{n+1}, Y_{n+1}) > s_i\}|$ is stochastically dominated by the discrete uniform distribution on $\{0, 1, \dots, n\}$. We thus have that
$$P(Y_{n+1} \notin C(X_{n+1}, U_{n+1}, \hat{\tau}_{ccal})) = P(s(X_{n+1}, U_{n+1}, Y_{n+1}) > \hat{\tau}_{ccal}) = P\left(|\{i : s(X_{n+1}, U_{n+1}, Y_{n+1}) > s_i\}| \ge \lceil (n+1)(1-\alpha) \rceil\right)$$
$$= P\left(\frac{|\{i : s(X_{n+1}, U_{n+1}, Y_{n+1}) > s_i\}|}{n+1} \ge \frac{\lceil (n+1)(1-\alpha) \rceil}{n+1}\right) \le \alpha.$$
Proposition 1. The lower bound follows from Theorem 1. To prove the upper bound, using the result from Theorem 2.2 of Lei et al. (2018), it suffices to show that the variables $s(X_i, U_i, Y_i) = \inf\{\tau : Y_i \in C(X_i, U_i, \tau)\}$ are almost surely distinct. To this end, note that
$$s(X_i, U_i, Y_i) = \rho_{X_i}(Y_i) + \hat{\pi}_{X_i}(Y_i) \cdot U_i + \lambda \left(o_{X_i}(Y_i) - k_{reg}\right)^+,$$
and, due to the middle term of the sum, these values are distinct almost surely provided $\hat{\pi}_{X_i}(Y_i) > 0$.
Proposition 2. We first show that $\hat{\tau}_{ccal} \le 1 + k^* - k_{reg}$. Since at least $\lceil (1-\alpha)(n+1) \rceil$ of the conformal calibration points are covered by a set of size $k^*$, at least $\lceil (1-\alpha)(n+1) \rceil$ of the $E_i$ in Algorithm 2 are less than or equal to $1 + k^* - k_{reg}$. Thus, by the definition of $\hat{\tau}_{ccal}$, it is less than or equal to $1 + k^* - k_{reg}$. Then, by the definition of $C^*$ in Eq. (4), we have
$$|C^*(X_{n+1}, U_{n+1}, \hat{\tau}_{ccal})| \le k^*$$
as long as $\hat{\tau}_{ccal} \le 1 + k^* - k_{reg}$, since for the $(k^*+1)$-st most likely class, the sum in Eq. (4) will exceed $\lambda \cdot (1 + k^* - k_{reg}) = 1 + k^* - k_{reg} \ge \hat{\tau}_{ccal}$, and so the $(k^*+1)$-st class will not be in the set.
Proposition 3. Suppose $P(Y \in C(X) \mid X = x) = 1 - \alpha$ for each $x \in \mathbb{R}^d$. Then,
$$P(Y \in C(X) \mid |C(X)| \in A) = \frac{\int_x P(Y \in C(x) \mid X = x)\, \mathbb{I}\{|C(x)| \in A\}\, dP(x)}{P(|C(X)| \in A)} = \frac{\int_x (1-\alpha)\, \mathbb{I}\{|C(x)| \in A\}\, dP(x)}{P(|C(X)| \in A)} = 1 - \alpha.$$
B RANDOMIZED PREDICTORS
The reader may wonder why we choose to use a randomized procedure. The randomization is needed to achieve $1 - \alpha$ coverage exactly, which we will explain via an example. Note that the randomization is of little practical importance, since the predictive set output by the randomized procedure will differ from that of the non-randomized procedure by at most one element.
Turning to an example, assume for a particular input image we expect a set of size k to have 91% coverage, and a set of size k − 1 to have 89% coverage. In order to achieve our desired coverage of 90%, we randomly choose size k or k − 1 with equal probability. In general, the probabilities will not be equal, but rather chosen so the weighted average of the two coverages is exactly 90%. If a user of our method desires deterministic sets, it is easy to turn off this randomization with a single flag, resulting in slightly conservative sets.
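A minimal sketch of this randomization, using the coverages from the example above; `cov_km1` and `cov_k` are assumed to be the estimated coverages of the two candidate set sizes.

```python
import numpy as np

def randomized_size(k, cov_km1, cov_k, target=0.90, rng=np.random.default_rng()):
    """Return set size k with probability p and k-1 otherwise, where p solves
    p * cov_k + (1 - p) * cov_km1 = target (exact coverage in expectation)."""
    p = (target - cov_km1) / (cov_k - cov_km1)
    return k if rng.random() < p else k - 1

# Example from the text: cov_km1 = 0.89, cov_k = 0.91 -> p = 0.5,
# so sizes k and k-1 are chosen with equal probability.
```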
C IMAGENET AND IMAGENETV2 RESULTS FOR α = 5%
We repeated Experiments 1 and 2 with α = 5%. See the results in Tables 5 and 6.
D COVERAGE AND SIZE CONDITIONAL ON IMAGE DIFFICULTY
In order to probe the adaptiveness properties of APS and RAPS, we stratified coverage and size by image difficulty (the position of the true label in the list of classes sorted from most to least likely, based on the classifier's predictions) in Table 7. With increasing λ, coverage decreases for more difficult images and increases for easier ones. In the most difficult regime, even though APS can output large sets, those sets still rarely contain the true class. This suggests regularization is a sensible way to stabilize the sets. As a final word on Table 7, notice that as λ increases, coverage improves for the more common medium-difficulty examples, although not for very rare and difficult ones.
E CHOOSING kreg AND λ TO OPTIMIZE SET SIZE AND ADAPTIVENESS
This section describes two procedures for picking kreg and λ that optimize for set size or adaptiveness, outperforming APS in both cases.
E.1 OPTIMIZING SET SIZE WITH RAPS
Algorithm 4 Adaptive Fixed-K

Input: $\alpha$; $I \in \{1, \dots, K\}^{n \times K}$ and one-hot labels $y \in \{0, 1\}^{n \times K}$, corresponding respectively to the classes sorted from highest to lowest estimated probability mass and to the true labels, for each of the $n$ examples in the dataset.

1: procedure GET-KSTAR($\alpha$, $I$, $y$)
2:   for $i \in \{1, \dots, n\}$ do
3:     $L_i \leftarrow$ the rank $j$ such that $I_{i,j}$ is the true class of example $i$
4:   $\hat{k}^* \leftarrow$ the $\lceil (1 - \alpha)(1 + n) \rceil$-th smallest value in $\{L_i\}_{i=1}^n$
5:   return $\hat{k}^*$

Output: The estimate of the smallest fixed set size that achieves coverage, $\hat{k}^*$.
To produce Tables 1, 5, 2, and 6, we chose $k_{reg}$ and $\lambda$ adaptively. This required an extra data-splitting step, where a small amount of tuning data $\{x_i, y_i\}_{i=1}^m$ was used to estimate $k^*$ with Algorithm 4, after which $k_{reg}$ was set to $\hat{k}^*$. Taking $m \approx 1000$ was sufficient, since the algorithm is fairly insensitive to $k_{reg}$ (see Table 3). We produced the Imagenet-V2 tables with $m = 1000$ and the Imagenet tables with $m = 10000$.
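A plain-Python version of Algorithm 4, assuming the rank of each true label on the tuning split has been precomputed (1 = most likely class); this is a sketch, not the released implementation.

```python
import numpy as np

def get_kstar(alpha, true_ranks):
    """Adaptive Fixed-K (Algorithm 4): the smallest fixed set size whose
    top-k sets achieve the conformally adjusted (1 - alpha) coverage.
    true_ranks[i] is the 1-indexed rank of example i's true label."""
    n = len(true_ranks)
    rank = int(np.ceil((1 - alpha) * (1 + n)))
    return int(np.sort(true_ranks)[rank - 1])
```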
After choosing k̂∗, we chose λ to have small set size. We used the same tuning data to pick k̂∗ and λ for simplicity (this does not invalidate our coverage guarantee since conformal calibration still uses fresh data). A coarse grid search on λ sufficed, since small parameter variations have little impact on RAPS. For example, we chose the λ ∈ {0.001, 0.01, 0.1, 0.2, 0.5} that achieved the smallest size on the m holdout samples in order to produce Tables 1, 5, 2, and 6. We include a subroutine that automatically chooses k̂∗ and λ to optimize size in our GitHub codebase.
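That grid search is a one-liner; `mean_raps_size` is a hypothetical helper that calibrates RAPS on the m tuning points with the given λ and returns the resulting average set size.

```python
# Coarse grid from the text; k_star comes from get_kstar above.
lam_grid = [0.001, 0.01, 0.1, 0.2, 0.5]
lam = min(lam_grid, key=lambda l: mean_raps_size(l, k_reg=k_star, alpha=alpha))
```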
E.2 OPTIMIZING ADAPTIVENESS WITH RAPS
In this appendix, we show empirically that RAPS with an automatically chosen set of kreg and λ improves the adaptiveness of APS. Recall our discussion in Section 4 and Proposition 3, wherein we propose size-stratified coverage as a useful definition of adaptiveness in image classification. After picking kreg as in Appendix E, we can choose λ using the same tuning data to optimize this notion of adaptiveness.
We now describe a particular manifestation of our adaptiveness criterion that we will use to optimize $\lambda$. Consider disjoint set-size strata $\{S_j\}_{j=1}^s$, where $\bigcup_{j=1}^s S_j = \{1, \dots, |\mathcal{Y}|\}$. Then define the indexes of examples stratified by the prediction set size of each example from algorithm $C$ as $J_j = \{i : |C(X_i, U_i, \hat{\tau}_{ccal})| \in S_j\}$. Then we can define the size-stratified coverage violation of an algorithm $C$ on strata $\{S_j\}_{j=1}^s$ as
$$\mathrm{SSCV}\left(C, \{S_j\}_{j=1}^s\right) = \sup_j \left| \frac{|\{i : Y_i \in C(X_i, U_i, \hat{\tau}_{ccal}),\ i \in J_j\}|}{|J_j|} - (1 - \alpha) \right|. \quad (5)$$
In words, Eq. (5) is the worst-case deviation of C from exact coverage when it outputs sets of a certain size. Computing the size-stratified coverage violation thus only requires post-stratifying the results of C on a set of labeled examples. If conditional coverage held, the worst stratum coverage violation would be 0 by Proposition 3.
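A minimal sketch of this computation; `set_sizes` and `covered` are assumed to hold, for each labeled example, the size of its prediction set and whether the true label was covered.

```python
import numpy as np

def sscv(set_sizes, covered, strata, alpha):
    """Size-stratified coverage violation from Eq. (5): the worst absolute
    deviation from 1 - alpha after post-stratifying examples by set size."""
    worst = 0.0
    for lo, hi in strata:
        in_stratum = (set_sizes >= lo) & (set_sizes <= hi)
        if in_stratum.any():  # empty strata contribute nothing
            worst = max(worst, abs(covered[in_stratum].mean() - (1 - alpha)))
    return worst

strata = [(0, 1), (2, 3), (4, 10), (11, 100), (101, 1000)]  # bins used in our experiments
```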
To maximize adaptiveness, we would like to choose $\lambda$ to minimize the size-stratified coverage violation of RAPS. Write $C_\lambda$ to mean the RAPS procedure for a fixed choice of $k_{reg}$ and $\lambda$. Then we would like to pick
$$\lambda = \arg\min_{\lambda'} \mathrm{SSCV}\left(C_{\lambda'}, \{S_j\}_{j=1}^s\right). \quad (6)$$
In our experiments, we choose a relatively coarse partitioning of the possible set sizes: 0–1, 2–3, 4–10, 11–100, and 101–1000. Then, we chose the λ ∈ {0.00001, 0.0001, 0.0008, 0.001, 0.0015, 0.002} that minimized the size-stratified coverage violation on the tuning set. The results in Table 8 show that RAPS always outperforms the adaptiveness of APS on the test set, even with this coarse, automated choice of parameters. The table reports the median size-stratified coverage violation over 10 independent trials of APS and RAPS with automated parameter tuning.
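Using the `sscv` helper sketched above, this automated choice reduces to a grid search; `raps_sets_on_tuning` is a hypothetical helper that calibrates RAPS with the given λ and returns `(set_sizes, covered)` on the tuning split.

```python
lam_grid = [0.00001, 0.0001, 0.0008, 0.001, 0.0015, 0.002]
lam = min(lam_grid,
          key=lambda l: sscv(*raps_sets_on_tuning(l), strata, alpha=0.10))
```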
F COMPARISON WITH LEAST AMBIGUOUS SET-VALUED CLASSIFIERS
In this section, we compare RAPS to the Least Ambiguous Set-valued Classifier (LAC) method introduced in Sadinle et al. (2019), an alternative conformal procedure that is designed to have small sets. The LAC method provably gives the smallest possible average set size when the input probabilities are correct, with the idea that these sets should remain small even when the estimated probabilities are only approximately correct. In the notation of this paper, the LAC method considers nested sets of the following form:
$$C_{LAC}(x, \tau) := \{y : \hat{\pi}_x(y) \ge 1 - \tau\},$$
which can be calibrated as before using $\hat{\tau}_{ccal}$ from Eq. (3).
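A sketch of the LAC sets; `probs` is assumed to be an (n, K) array of estimated class probabilities, with τ obtained from the same conformal quantile as Eq. (3) applied to the calibration scores $1 - \hat{\pi}_{X_i}(Y_i)$.

```python
import numpy as np

def lac_sets(probs, tau):
    """C_LAC(x, tau) = {y : pi_hat_x(y) >= 1 - tau}; returns a boolean
    (n, K) membership mask over classes."""
    return probs >= 1.0 - tau
```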
We first compare naive, APS, RAPS, and LAC in terms of power and coverage in Table 9. In this experiment, we tuned RAPS to have small set size as described in Appendix E.1. We see that LAC also achieves correct coverage, as expected, since it is a conformal method and satisfies the guarantee from Theorem 1. We further see that it has systematically smaller sets than RAPS, although the difference is slight compared to the gap between APS and RAPS or between APS and LAC.
We next compare RAPS to LAC in terms of adaptiveness, tuning RAPS as in Section E.2. First, in Table 10, we report on the coverage of LAC for images of different difficulties and see that LAC has dramatically worse coverage for hard images than for easy ones. Comparing this to RAPS in Table 7, we see that RAPS also has worse coverage for more difficult images, although the gap is much smaller for RAPS. Next, in Table 11, we report on the SSCV metric of adaptiveness (and conditional coverage) for APS, RAPS, and LAC. We find that APS and RAPS have much better adaptiveness than LAC, with RAPS being the overall winner. The results of all of these comparisons are expected: LAC does not target adaptiveness and instead tries to achieve the smallest possible set size. It succeeds at this goal, sacrificing adaptiveness to do so.
| 1. What is the main contribution of the paper regarding adaptive prediction sets?
2. How does the proposed regularization work, and what are its advantages?
3. Are there any limitations or potential improvements regarding the number of classes and conditional guarantees?
4. Can the approach be applied to other problems beyond image classification?
5. How does the reviewer assess the strengths and weaknesses of the paper? | Review | Review
This paper proposes a regularized generalization of adaptive prediction sets (APS by Romano et al 2020) that results in smaller prediction sets that still maintain the correct coverage level for statistical validity.
The basic idea of the new regularization is very well-explained and quite elegant: roughly speaking, there is an indifference between classes of low probability so we should penalize including them in the prediction set. Despite the simplicity of this idea, I consider this paper's contribution to be a significant advance and I suspect this idea to be useful beyond classification problems with images, although from my understanding, a crucial assumption is that the number of classes needs to be sufficiently large (e.g., the regularization will not really yield any benefit if it's binary classification). I also found the experiments to be well-chosen.
My main comment instead is perhaps more of a follow-up: would it be possible to get a conditional guarantee rather than a marginal guarantee (Proposition 1) by adapting proof ideas from Tibshirani et al.'s "Conformal Prediction Under Covariate Shift" (2019)? This seems more aligned with the healthcare-related question the paper starts with. In the healthcare context, the patient would ideally want the coverage to be conditional rather than marginal (i.e., in Proposition 1, for the probability that's being sandwiched, we want a version conditioned on $X_{n+1}$ landing near an anchor feature vector, to suggest randomly sampling feature vectors similar to, for instance, a specific patient in the healthcare context). Separately, it would be interesting if the approach from Tibshirani et al. can help RAPS construct even smaller prediction sets than what you get in bold for Table 2 (in the marginal coverage setting and not worrying about conditional coverage).
Strengths:
elegant, well-explained method with crisp theory
nice suite of experiments
helps popularize conformal prediction, which I think more machine learning researchers should know about
Weakness:
the paper opens up with a high-stakes motivation but I don't think it really gets back to this high-stakes problem properly; I'd suggest discussing conditional coverage more (also some high-stakes classification problems have very few classes, which the proposed regularization doesn't provide much benefit for?) |
ICLR | Title
Multi-Dataset Multi-Task Framework for Learning Molecules and Protein-target Interactions Properties
Abstract
Molecular property prediction and protein-target interaction prediction with deep learning are becoming increasingly popular in drug discovery pipelines in recent years. An important factor that limits the development of these two areas is the insufficiency of labeled data. One promising direction to address this problem is to learn shared embeddings from multiple prediction tasks within one molecular type, e.g., molecule or protein, because different tasks might actually share similar coarse-grained structural information. Unlike the previous methods, in this paper we first argue that, due to the possible local structural similarity between molecules and protein-target complexes, coarse-grained latent embeddings can be found across different molecular types. To take advantage of this, we propose a new Multi-Dataset Multi-Task Graph Learning (MDMT-GL) framework, where we are able to make the most of the labeled data by simultaneously training molecule property prediction and protein-target interaction prediction together. MDMT-GL augments molecular representations with equivariant properties, 2D local structures, and 3D geometric information. MDMT-GL can learn coarse-grained embeddings for molecules and proteins, and also distinguish fine-grained representations in various downstream prediction tasks with unique characteristics. Experimentally, we implement and evaluate MDMT-GL on 2 molecular dynamics datasets and 2 protein-target datasets, consisting of 825 tasks and over 3 million data points. MDMT-GL achieves state-of-the-art performance on several tasks and shows competitive performance on others. These experimental results confirm that molecules and proteins indeed share some coarse-grained structures, that the coarse-grained embedding is trainable, and that their fine-grained embeddings are more representative. To the best of our knowledge, this is the first work to train multi-task learning across different molecular types and to verify the structural similarity between molecules and protein-target complexes.
1 INTRODUCTION
The discovery and development of a new drug could take more than a decade and cost billions of dollars Hughes et al. (2011); Sliwoski et al. (2014). Therefore, to reduce costs, predicting the properties of molecules and protein-target complexes (e.g., heat capacity, force field, binding affinity) becomes an essential component in the early stage of the drug discovery pipeline. Molecules and complexes are always represented as graph-structured data Li et al. (2021); Maziarka et al. (2020); Thölke & De Fabritiis (2022), where atoms and bonds are nodes and edges, respectively, and graph neural networks are well suited to learning representations from relational datasets Kipf & Welling (2016); Luan et al. (2021); Hua et al. (2022). As a result, graph-based deep learning methods that learn molecular graph representations have achieved great success in predicting molecule properties Schütt et al. (2018; 2021); Klicpera et al. (2020); Thölke & De Fabritiis (2022) and protein-target interactions Lim et al. (2019), but the data we have at hand are often insufficient, which limits model performance Sliwoski et al. (2014); Liu et al. (2022). Thus, reducing the amount of labeled data needed for the effective prediction of molecular and protein-target properties becomes a challenge in drug discovery.
To address the aforementioned issue, multi-task learning for molecular property prediction Tan et al. (2021) and protein-target interaction prediction Lee & Kim (2019); Hu et al. (2021); Liu et al. (2022) is gradually drawing attention from the drug discovery community. These models always deal with a single molecular type, i.e., a molecule (or complex) is only used for multiple molecule property (or protein-target interaction) prediction tasks. The difficulty stems from the fact that knowledge from different molecular types cannot be easily decomposed and shared. However, we argue that, due to the internal geometric and local structural similarities between molecules and protein-target complexes, they should share similar coarse-grained latent embeddings Jain (2000); Bender & Glen (2004); Löfblom et al. (2010). Hence, we believe that representations of molecules and complexes could be coarse-grained and that a coarse-grained latent embedding could be learned jointly under one learning framework. The embeddings should share internal geometric and local structural information across molecules and complexes from an atomic perspective. Ultimately, the learning of protein representations can benefit from the learning of molecule representations, and vice versa.
Therefore, we propose a new learning framework, Multi-Dataset Multi-Task Graph learning (MDMT-GL), for molecular property prediction and protein-target interaction prediction. MDMT-GL aims to make the best use of labeled data by transferring knowledge between molecules and complexes. The cross-dataset paradigm for multi-task learning enables the shared embedding to be a more informative representation than under the single-dataset paradigm. To the best of our knowledge, MDMT-GL is the first work to train molecular property prediction and protein-target interaction prediction together and to verify the structural similarities between molecules and protein-target complexes. In addition to this major contribution, we also extend the 2D graph transformer proposed by Kim et al. (2021) into a 3D equivariant graph transformer for molecular dynamics, and the model is capable of capturing high-order atom interactions in 3D space. Moreover, unlike multi-task learning within a single dataset, the data imbalance between different datasets leads to a task imbalance problem that is detrimental to multi-task learning. To treat each task equally, we propose a weighted loss to balance the importance of the tasks, which is novel for MDMT-GL. The details of MDMT-GL are discussed in Sec. 3. Furthermore, in Sec. 4, the experimental results support our argument and show that molecules and complexes can share some similar coarse-grained structures, and that the geometric and structural similarities can be learned to leverage any molecular prediction task.
2 RELATED WORK
2.1 MOLECULAR MULTI-TASK LEARNING
Molecular Multi-Task Learning (MTL) is mainly used to address the data insufficiency problem in drug discovery. Liu et al. (2019c) uses a general architecture of a shared representation module and multiple task-specific prediction modules for MTL. Tan et al. (2021) stacks a base regressor and classifier with an additional training stage on the expanded molecular feature space for the prediction of molecular properties. Lee & Kim (2019) finds that similarity within a target group can affect the performance of MTL in the prediction of protein binding. Liu et al. (2022) exploits knowledge of task relations and constructs a task-relation graph to maximize the performance of MTL in protein targeting. However, the aforementioned methods do not transfer knowledge between molecules and protein-target complexes. Existing models only perform MTL on a single dataset type, i.e., molecule or protein, and MTL between molecules and proteins has never been explored. In this work, we aim to make use of the shared information between molecules and proteins across various tasks, so that we can make the best use of the labeled data.
2.2 GRAPH NEURAL NETWORKS FOR PROPERTY PREDICTION
In drug discovery, people apply message-passing-based models to predict the properties of molecules and proteins. Schütt et al. (2018) respects essential quantum chemical constraints and models quantum interactions by modeling interactions of atoms at arbitrary positions in a molecule. Satorras et al. (2021) proposes a graph neural network, which is equivariant to rotations, translations, reflections, and permutations in 3D geometry, to model molecular dynamics. Thölke & De Fabritiis (2022) builds on top of the graph transformer and develops an equivariant graph transformer to predict quantum molecule properties. Lim et al. (2019) learns drug-target interactions by extracting the graph features of intermolecular interactions directly from 3D structural information on the protein-ligand binding pose. Li et al. (2021) proposes a structure-aware interactive graph neural network to preserve the
distance and angle information among atoms to learn interactions between proteins and ligands. Overall, our architecture mainly consists of two equivariant graph transformers that focus on long-range atom interactions and featurization of atomic types and coordinates, and a graph neural network to preserve local structure information.
3 MULTI-DATASET MULTI-TASK FRAMEWORK FOR LEARNING MOLECULES AND PROTEIN-TARGET COMPLEXES
As discussed in Sec. 1, the labeled data for molecules and protein-target complexes are often insufficient. Therefore, we strive to make the most of the available labeled data from molecule and protein datasets for various tasks. In other words, we aim to design an architecture that can learn simultaneously from different molecular and protein datasets, in which learning protein representations can benefit from learning molecule representations and vice versa. The core technical difficulty is how to identify their coarse-grained similar internal geometry and local structures, and to also differentiate their fine-grained representations for different conformation structures.
To achieve the goal, we divide our model into four components (1) a coarse-grained module, (2) a fine-grained data-specific module, (3) a task-specific prediction module, and (4) a multi-dataset multi-task loss (see the whole architecture in Fig. 1 and App. A).
The function of each module is as follows: (1) The coarse-grained module is designed to learn a coarse-grained representation of molecules and protein-target complexes, from which common geometric and structural information in molecules and complexes can be obtained. We will discuss the details in Sec. 3.1. (2) The fine-grained module will process the molecule-specific and complex-specific representations separately. We will discuss it in Sec. 3.2. (3) Then, the data-type-specific representations are fed into different task-specific prediction modules to make predictions for various tasks; the details are discussed in Sec. 3.3. (4) Finally, weighted losses of all tasks are used to balance the importance of different tasks. We describe how to compute the MDMT loss in Sec. 3.4. The whole framework can be trained in an end-to-end manner. In Sec. 4, we experimentally show that the representations could be coarse-grained between molecules and protein-target complexes.
3.1 COARSE-GRAINED MODULE
Although having different conformation structures and dynamics, molecules and protein-target complexes are made of basic atoms and bonds, and should thus share fundamental internal geometric and local structural information Jain (2000); Bender & Glen (2004); Löfblom et al. (2010). For example, the carbon dioxide molecule O=C=O and methanoic acid H(C=O)OH have different conformation structures and different force fields, but they share the same carbon atom C and similar local structures around the carbon atoms, e.g., double bond with oxygen O. Thus, two carbon atoms could potentially share coarse-grained information about their local structures. The coarse-grained module is designed to capture such atomic-level similarities so that generalizable features between molecules and proteins can be learned.
To capture the atomic-level similarities, we give each basic atom a unique learnable embedding Schütt et al. (2018); Klicpera et al. (2020); Thölke & De Fabritiis (2022), which is shared by all compounds in all tasks across different datasets (see the atom-wise embedding layer in Fig. 1). This is the first time that atomic-level coarse-grained representations are exploited in the MDMT setting for molecules and proteins. To be more specific, an input molecule or complex $m = [a_1, a_2, \dots, a_{N_m}]^T \in \mathbb{N}^{N_m \times 1}$ is a 1D vector of the atoms that build $m$, where $N_m$ is the number of atoms in $m$ and $a_i$ is the atomic number of the $i$-th atom in the periodic table. The molecular embedding is $z_m = f_{atom}(m)$, where $f_{atom} : \mathbb{N}^{N_m \times 1} \to \mathbb{R}^{N_m \times d}$ projects a 1D molecule vector onto a 2D learnable embedding, where each row of the embedding represents a hidden atom feature and $d$ is the dimension of the embedding space. Take the carbon dioxide molecule O=C=O for example: its input is the 1D vector representation $[8, 6, 8]^T$, where 8 and 6 are the atomic numbers of oxygen and carbon, and its embedding follows $z_{O=C=O} = [f_O(8), f_C(6), f_O(8)]^T \in \mathbb{R}^{3 \times d}$, where $f_C(6)$ and $f_O(8)$ are the learnable embeddings for carbon and oxygen, respectively.
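A minimal PyTorch sketch of this shared atom-wise embedding layer; the maximum atomic number and embedding width are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class AtomEmbedding(nn.Module):
    """f_atom: maps a 1D vector of atomic numbers to a 2D learnable
    embedding shared by all molecules and complexes across datasets."""
    def __init__(self, max_z=100, d=128):
        super().__init__()
        self.embed = nn.Embedding(max_z, d)

    def forward(self, m):      # m: (N_m,) long tensor of atomic numbers
        return self.embed(m)   # z_m: (N_m, d)

z = AtomEmbedding()(torch.tensor([8, 6, 8]))  # O=C=O -> shape (3, 128)
```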
2D molecular local structures, 3D molecular geometric information, and the equivariant property are important for coarse-grained representations to preserve physical constraints Löfblom et al. (2010); Schütt et al. (2021). Therefore, to obtain the above capacities, we augment the coarse-grained representation $z_m$ with an augmentation network $f_{aug}$, which can be any equivariant graph neural network Satorras et al. (2021); Schütt et al. (2021); Thölke & De Fabritiis (2022). The augmentation network $f_{aug}$ takes $z_m$, edge (bond) indices $e_m \in [0, 1]^{N_m \times N_m}$, edge (bond) features $f_m \in \mathbb{R}^{E_m \times f_e}$, and atom positions $r_m \in \mathbb{R}^{N_m \times 3}$ as input, and produces $\hat{z}_m = f_{aug}(z_m, r_m, e_m, f_m) \in \mathbb{R}^{N_m \times d}$, an equivariant coarse-grained representation (see the augmentation network in Fig. 1). This design enables $\hat{z}_m$ to learn the shared fundamental internal geometric and structural information across different tasks and datasets while preserving the equivariant property.
In conclusion, the coarse-grained module consists of two components: (1) an atom-wise embedding layer and (2) an augmentation network. The atom-wise embedding layer produces an atom-wise coarse-grained representation $z_m$ for every input molecule or complex $m$, and the augmentation network augments every coarse-grained representation with the equivariant property via 2D local structures and 3D geometric information to produce an equivariant coarse-grained representation $\hat{z}_m$.
In addition to the equivariant coarse-grained representations, different molecular types require fine-grained data-type specific representations to capture differences in conformation structure and geometric information for performing different downstream tasks. In Sec. 3.2, we will introduce the fine-grained data-specific module and discuss the initiative to have it.
3.2 FINE-GRAINED DATA-SPECIFIC MODULE
Previously, in Sec. 3.1, we discussed how the coarse-grained module learns atom-wise coarse-grained representations to make full use of labeled molecules and complexes. We now discuss the motivation for making the coarse-grained representations fine-grained for downstream use.
The chain of a protein-target complex (normally from 100 to more than 1000 atoms) is always significantly longer than the chain of a molecule (normally from 1 to 60 atoms), which makes atom-wise interactions highly different: two atoms that are far apart in a long chain could still potentially interact, and such high-order long-range interactions should be captured between atoms in a complex, whereas they are neither prominent nor required for molecules Luan et al. (2019); Morris et al. (2019). For example, oxidoreductase $C_{879}H_{1426}N_{250}O_{260}S_3$ is a protein of 2818 atoms, while carbon dioxide $CO_2$ is a molecule that has only 3 atoms.
With this in mind, to distinguish the different conformation structures resulting from the chain-size difference between molecules and complexes, the fine-grained data-specific module processes the coarse-grained representations $\hat{z}_m$ of molecules and complexes in different ways. To be more specific, we use high-order graph networks for large graphs Morris et al. (2019), such as complexes, to capture high-order interactions, and shallow graph networks for small graphs, such as molecules, where high-order interactions are not prominent Luan et al. (2019).
Therefore, we divide our fine-grained module into two data-specific networks: (1) a fine-grained complex network $f_{ptc}$ that has the ability to capture high-order long-range interactions for atoms in complexes (see the long-chain complex network in Fig. 1), and (2) a shallow fine-grained molecule network $f_{mol}$ for molecules (see the short-chain molecule network in Fig. 1).
The fine-grained complex network $f_{ptc}$ can be any high-order graph neural network Li et al. (2021); Kim et al. (2021); Thölke & De Fabritiis (2022). We adopt and extend the 2D high-order transformer Kim et al. (2021) into a 3D equivariant transformer (see App. A); our fine-grained complex network $f_{ptc}$ is capable of capturing any-order atom interactions while preserving the equivariant property, which is novel. The fine-grained protein-target complex embedding follows $\tilde{z}_{ptc} = f_{ptc}(\hat{z}_m, r_m, e_m, f_m, x_m) \in \mathbb{R}^{N_m \times d'}$, where $x_m \in \mathbb{R}^{N_m \times f_n}$ denotes the atom features and $d'$ denotes the dimension of the embedding.
For the fine-grained molecule network $f_{mol}$, the idea is fairly simple. Since the equivariant property is closely related to high-order long-range interactions in 3D space Satorras et al. (2021), which are not required for small molecule graphs, we only need a shallow graph neural network as the fine-grained molecule network $f_{mol}$ to model local message passing in short-chain molecules Kipf & Welling (2016); Luan et al. (2020); Hua et al. (2022). Considering the computational cost, we choose the simplest graph convolutional network Kipf & Welling (2016) for $f_{mol}$. The fine-grained molecule embedding follows $\tilde{z}_{mol} = f_{mol}(\hat{z}_m, e_m, f_m, x_m) \in \mathbb{R}^{N_m \times d'}$.
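A sketch of such a shallow molecule network, assuming PyTorch Geometric (the paper does not specify a library); the depth and widths are illustrative.

```python
import torch.nn as nn
from torch_geometric.nn import GCNConv

class MoleculeNet(nn.Module):
    """f_mol: two GCN layers over the augmented coarse-grained
    representation z_hat, returning the fine-grained embedding."""
    def __init__(self, d=128, d_out=128):
        super().__init__()
        self.conv1 = GCNConv(d, d)
        self.conv2 = GCNConv(d, d_out)
        self.act = nn.SiLU()

    def forward(self, z_hat, edge_index):
        h = self.act(self.conv1(z_hat, edge_index))
        return self.conv2(h, edge_index)  # z~_mol: (N_m, d_out)
```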
Overall, we have a fine-grained complex network $f_{ptc}$, which is a high-order equivariant graph network, and a fine-grained molecule network $f_{mol}$, which is a shallow graph network. We treat molecules and protein-target complexes differently in the fine-grained data-specific networks because complexes are always significantly longer than molecules and the high-order long-range interactions need to be captured among them. For a coarse-grained representation $\hat{z}_m$, if it is originally a protein-target complex, it will be embedded by the complex network $f_{ptc}$; if it is originally a molecule, it will be embedded by the molecule network $f_{mol}$.
3.3 TASK-SPECIFIC PREDICTION MODULE
The task-specific prediction module distinguishes the representations $\tilde{z}_{ptc}$ and $\tilde{z}_{mol}$ and generates the output $\hat{y}_{task}$ for each task. In the multi-task learning setting, each task should have its own specific prediction network $f_{task}$ Collobert & Weston (2008); Liu et al. (2019c); Aribandi et al. (2021) (see the task-specific prediction module in Fig. 1). In practice, our task-specific prediction module consists of 825 output networks corresponding to 825 prediction tasks from the following 4 datasets.
QM9 (12 prediction networks) QM9 is a dataset of molecules consisting of 12 tasks Ramakrishnan et al. (2014). We use the specialized output networks in Thölke & De Fabritiis (2022) for the prediction of the molecular dipole moment $\mu$ and the electronic spatial extent $\langle R^2 \rangle$. The gated equivariant blocks Weiler et al. (2018); Schütt et al. (2021) are used for the remaining 10 tasks.
MD17 (14 prediction networks) MD17 is a dataset of molecules consisting of 7 sub-datasets (Aspirin, Ethanol, Malondialdehyde, Naphthalene, Salicylic Acid, Toluene, Uracil) Chmiela et al. (2017). There are 14 tasks in total, where each sub-dataset has 2 prediction tasks, for molecular energy $E$ and forces $\vec{F}$. We use the gated equivariant blocks proposed in Weiler et al. (2018); Schütt et al. (2021) to predict $E$, and $\vec{F}$ is calculated as the negative gradient of $E$ with respect to the atomic coordinates, $\vec{F} = -\partial E / \partial \vec{r}$ Thölke & De Fabritiis (2022). ChEMBL (798 prediction networks) ChEMBL is a protein-target dataset originally proposed in Mendez et al. (2019). Furthermore, 3 sub-datasets, ChEMBL10, ChEMBL50, and ChEMBL100, were developed by Mayr et al. (2018); Liu et al. (2022) for multi-task learning, and each sub-dataset contains 406, 263, and 129 regression tasks, respectively. We apply a linear function over $\tilde{z}_{ptc}$ and apply sum pooling to get an output for each regression task.
PDBbind (1 prediction network) PDBbind Wang et al. (2005) is a protein-target dataset consisting of 1 regression task for protein-ligand binding affinity prediction. We apply a linear function over $\tilde{z}_{ptc}$ and apply sum pooling to predict the protein-ligand binding affinity.
The loss $L_i$ for each task is calculated based on the outputs $\hat{y}_i$ from each task and the ground-truth labels $y_i$, where $i$ is the task number. All $L_i$ are weighted and summed into a multi-dataset multi-task loss $L_{MDMT}$ for optimization. One principle for the design of $L_{MDMT}$ is to treat each task as equally important. This principle holds naturally in conventional multi-task learning Mayr et al. (2018). But when it comes to the multi-dataset setting, the data imbalance between different molecular datasets breaks this principle. In Sec. 3.4, we discuss this problem and how to address it through the design of the weighted loss $L_{MDMT}$.
3.4 MULTI-DATASET MULTI-TASK LOSS
In MDMT-GL, we will face the data imbalance problem. The problem only occurs when we train our model on different datasets simultaneously, e.g., molecules and protein-target complexes, because the number of labeled molecules is always greater than the number of labeled protein-target complexes, and the model will focus more on molecule datasets than protein datasets. This problem is special for multi-dataset setting and does not exist in previous works on multi-task learning with a single molecular type Tan et al. (2021); Lee & Kim (2019); Hu et al. (2021); Liu et al. (2022).
To address this issue, we propose a weighted loss, specific to MDMT-GL, to address the data imbalance problem between different molecular datasets. We are motivated to design the loss so that all tasks are treated equally regardless of the size of labeled training data.
Suppose that we have $U$ tasks and originally $n_1, n_2, \dots, n_U$ labeled training data points for each task. We obtain predictions $\hat{y}_{i,1}, \hat{y}_{i,2}, \dots, \hat{y}_{i,n_i}$ for task $i$ and compare them with the ground-truth labels $y_{i,1}, y_{i,2}, \dots, y_{i,n_i}$ to form the loss of the $i$-th task, $L_i = \sum_{j=1}^{n_i} l_i(y_{i,j}, \hat{y}_{i,j})$. The multi-dataset multi-task loss $L_{MDMT} = \sum_{i=1}^U c_i L_i$ is a weighted sum of the $L_i$. To balance the weights of the $L_i$, we want $\sum_{i=1}^{n_1} c_1 = \sum_{i=1}^{n_2} c_2 = \cdots = \sum_{i=1}^{n_U} c_U$, which leads to $c_1 n_1 = c_2 n_2 = \cdots = c_U n_U$. In practice, suppose $n_{min} = \min(n_1, n_2, \dots, n_U) = n_k$; then we set $c_k = 1$ and, for any $i \ne k$, $c_i = \frac{n_{min}}{n_i}$. We will discuss the implementation in Sec. 4.
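A sketch of this weight computation; `counts` is assumed to hold the number of labeled training points per task.

```python
def task_weights(counts):
    """c_k = 1 for the smallest task and c_i = n_min / n_i otherwise,
    so that c_i * n_i is constant and every task contributes equally."""
    n_min = min(counts)
    return [n_min / n for n in counts]

# e.g., task_weights([110_000, 11_000, 1_000]) -> [~0.009, ~0.09, 1.0]
```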
4 EXPERIMENTS
In this section, we evaluate the Multi-Dataset Multi-Task Graph Learning framework (MDMT-GL) on real-world molecule and protein-target complex datasets, and show that our proposed learning method can be used to better learn molecule and complex representations. We briefly introduced our datasets in Sec. 3.3. We conduct experiments across 2 molecule datasets and 2 complex datasets, consisting of 825 tasks and 3,139,011 labeled molecular graphs. We divide the experiment section into two subsections: discussions of the molecule datasets in Sec. 4.1 and discussions of the protein datasets in Sec. 4.2. In more detail, we discuss model performance on QM9 in Sec. 4.1.1, on MD17 in Sec. 4.1.2, on ChEMBL in Sec. 4.2.1, and on PDBbind in Sec. 4.2.2.
4.1 MOLECULE DATASETS
In this section, we discuss our model performance on molecule datasets including QM9 Ramakrishnan et al. (2014) and MD17 Chmiela et al. (2017). We compare our MDMT-GL with several classic baselines and the state-of-the-art models in Tab. 1&2. The experimental results show that learning molecule representations can benefit from learning protein representations.
4.1.1 QM9
Data The QM9 dataset reports computed geometric, thermodynamic, energetic, and electronic properties for locally optimized geometries. We use the same data split as in Schütt et al. (2018); Klicpera et al. (2020); Thölke & De Fabritiis (2022), where the labeled molecules are divided into 110,000 / 10,000 / 10,831 for training / validation / testing.
Comparison We compare MDMT-GL with several popular baselines and state-of-the-art models, including SchNet Schütt et al. (2018), EGNN Satorras et al. (2021), PhysNet Unke & Meuwly (2019), DimeNet++ Klicpera et al. (2020), Cormorant Anderson et al. (2019), PaiNN Schütt et al. (2021), and Equivariant Transformer (ET) Thölke & De Fabritiis (2022), and report the results in Tab. 1. The results of baselines are obtained from Thölke & De Fabritiis (2022), and the MDMT-GL results are averaged over three runs.
From Tab. 1, we can observe that MDMT-GL outperforms the most popular baselines with significant improvements on 6 out of 12 QM9 targets, including $\epsilon_{HOMO}$, $\epsilon_{LUMO}$, $\Delta\epsilon$, $U_0$, $U$, and $G$. MDMT-GL shows very competitive performance and delivers significant improvements on the challenging molecular chemical property prediction problem via multi-dataset learning.
4.1.2 MD17
Data MD17 consists of molecular dynamics trajectories of small organic molecules, including both energies and forces. We use the same data split as in previous works Schütt et al. (2018); Klicpera et al. (2020); Thölke & De Fabritiis (2022). For each sub-dataset, we split the data into a training set with 950 molecules and a validation set with 50 molecules, leaving the remaining molecules for testing.
Comparison We compare MDMT-GL with several popular baselines and state-of-the-art models, including SchNet Schütt et al. (2018), PhysNet Unke & Meuwly (2019), DimeNet Klicpera et al. (2020), PaiNN Schütt et al. (2021), and Equivariant Transformer (ET) Thölke & De Fabritiis (2022), and report the results in Tab. 2. The baseline results are obtained from Thölke & De Fabritiis (2022), and the MDMT-GL results are averaged over three runs.
From Tab. 2, we can observe that MDMT-GL outperforms the most popular baselines with significant improvements on 8 out of 14 MD17 sub-datasets, except energy and forces for naphthalene, forces for salicylic acid, energy and forces for toluene, and forces for uracil. MDMT-GL shows very competitive performance and delivers significant improvements in the challenging molecular dynamics trajectory prediction problem via multi-dataset learning.
4.2 PROTEIN-TARGET DATASETS
In this section, we discuss our model performance on protein-target complex datasets including ChEMBL Mendez et al. (2019) and PDBbind Wang et al. (2005). We compare our MDMT-GL with several classic baselines and the state-of-the-art model in Tab. 3&4. The experimental results show that learning protein representations can benefit from learning molecule representations.
4.2.1 CHEMBL
Data The ChEMBL dataset was originally proposed by Mendez et al. (2019) for protein targeting, but the authors of Liu et al. (2022) modify the original dataset and provide three sub-datasets, ChEMBL10, ChEMBL50, and ChEMBL100, for multi-task learning. Liu et al. (2022) claim the task numbers are 382/ 152/ 132 (666 tasks in total) for ChEMBL10/ ChEMBL50/ ChEMBL100, but we actually get 406/ 263/ 129 (798 tasks in total) when running their data generation steps. We therefore run and test the baselines and MDMT-GL on the 406/ 263/ 129 tasks and report results averaged over three runs. We use the same data split as in Liu et al. (2022), splitting the labeled data in the ratio of 80%/ 10%/ 10% for training/ validation/ testing.
Comparison We compare MDMT-GL with several classic multi-task learning baselines and state-of-the-art models, including Multi-Task Learning (MTL) Mayr et al. (2018), Uncertainty Weighting (UW) Kendall et al. (2018), GradNorm Chen et al. (2018), Dynamic Weight Average (DWA) Liu et al. (2019b), Loss-Balanced Task Weighting (LBTW) Liu et al. (2019a), State Graph Neural Network (SGNN) Liu et al. (2022), and Energy-Based State Graph Neural Network (SGNN-EBM) Liu et al. (2022), and report the averaged results in Tab. 3.
From Tab. 3, we can observe that MDMT-GL outperforms all popular baselines with marginal improvements in AUC-ROC score on ChEMBL10, ChEMBL50, and ChEMBL100. By simultaneously learning other molecular datasets and tasks, the MDMT-GL framework can make the best use of the data and improve prediction results for protein targeting.
4.2.2 PDBBIND
Data The PDBbind dataset provides 3D binding structures of protein-ligand complexes with experimentally determined binding affinities. In our experiment, we use the PDBbind2016 dataset, which is the most widely used PDBbind dataset in previous works Lim et al. (2019); Li et al. (2021). We use the same data split as in Li et al. (2021).
Comparison We compare MDMT-GL with several classic baselines and state-of-the-art models, including Spatial Graph Convolution Network (SGCN) Danel et al. (2020), GNN-DTI Lim et al. (2019), DMPNN Yang et al. (2019), Molecule Attention Transformer (MAT) Maziarka et al. (2020), DimeNet Klicpera et al. (2020), CMPNN Song et al. (2020), and Structure-aware Interactive Graph Network (SIGN) Li et al. (2021). The baseline results are obtained from Li et al. (2021), and MDMT-GL results are averaged over three runs.
From Tab. 4, we can observe that MDMT-GL outperforms all popular baselines with significant improvements in RMSE, MAE, SD, and R scores on the PDBbind dataset. MDMT-GL shows very competitive performance and delivers significant improvements on the challenging protein-binding affinity prediction problem via multi-dataset learning.
Overall, we can see that the Multi-Dataset Multi-Task Graph Learning framework (MDMT-GL) is very competitive on all tasks. We can conclude that MDMT-GL enables the learning of protein representations to benefit the learning of molecule representations, and vice versa. The strong experimental results show that our proposed learning method makes the most of the labeled training data, and that this learning framework can mitigate the lack of labeled data in drug discovery.
5 CONCLUSION AND FUTURE WORK
In conclusion, our proposed Multi-Dataset Multi-Task Graph Learning (MDMT-GL) framework is able to address the data insufficiency problem by concurrently training the representations of molecules and protein-target complexes for multiple prediction tasks. The strong experimental results show that there does exist transferable information between molecules and protein-target complexes and that it is learnable. We can also say that the learning of protein representations can facilitate the learning of molecule representations, and vice versa. In the future, we could incorporate quantum chemical constraints and prior knowledge into the coarse-grained network to capture more informative coarse-grained embeddings.
A MODEL ARCHITECTURE
We introduce the full MDMT-GL architecture. Suppose we are given input molecular data of $N_m$ atoms and $E_m$ edges: its atomic numbers $m \in \mathbb{N}^{N_m \times 1}$, atom features $x_m \in \mathbb{R}^{N_m \times f_n}$, atom positions $r_m \in \mathbb{R}^{N_m \times 3}$ in 3D space, edge indices $e_m \in [0, 1]^{N_m \times N_m}$, and edge features $f_m \in \mathbb{R}^{E_m \times f_e}$, where $f_n, f_e$ denote the numbers of node features and edge features, respectively.
First, we embed the atomic numbers $m$ into an atom-wise coarse-grained representation $z_m$ by an atom-embedding transformation:
$$z_m = W^{atom} m \in \mathbb{R}^{N_m \times d},$$
where $d$ is the hidden feature dimension.
Then the coarse-grained representation zm will be augmented to an equivariant coarse-grained representation ẑm by an augmentation network. There are many equivariant graph neural network options Satorras et al. (2021); Schütt et al. (2021); Thölke & De Fabritiis (2022), and our choice is the equivariant transformer proposed in Thölke & De Fabritiis (2022).
Before the coarse-grained representation gets augmented, an exponential normal radial basis function that resembles a continuous-filter convolution is used to filter the neighborhood of an atom Schütt et al. (2018). The distance $d_{ij}$ between atoms $i, j$ is expanded as:
$$e^{RBF}_k(d_{ij}) = \phi(d_{ij}) \exp\left(-\beta_k \left(\exp(-d_{ij}) - \mu_k\right)^2\right), \qquad \phi(d_{ij}) = \begin{cases} \frac{1}{2}\left(\cos\left(\frac{\pi d_{ij}}{d_{cut}}\right) + 1\right), & \text{if } d_{ij} \le d_{cut} \\ 0, & \text{if } d_{ij} > d_{cut} \end{cases}$$
where $\beta_k, \mu_k$ are fixed parameters specifying the center and width of radial basis function $k$. $\beta_k$ is initialized as $(2K^{-1}(1 - \exp(-d_{cut})))^{-2}$, and $\mu_k$ is initialized with values equally spaced between $\exp(-d_{cut})$ and 1 for all $k$, as proposed by Unke & Meuwly (2019). The cosine cutoff $\phi(d_{ij})$ is used to ensure a smooth transition to 0 as $d_{ij}$ approaches $d_{cut}$.
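A PyTorch sketch of this expansion, following the formulas above; the number of basis functions and the cutoff are illustrative.

```python
import math
import torch

def exp_normal_rbf(d, d_cut=5.0, K=64):
    """Exponential normal RBF expansion with cosine cutoff.
    d: (E,) tensor of interatomic distances -> (E, K) features."""
    beta = torch.full((K,), (2.0 / K * (1.0 - math.exp(-d_cut))) ** -2)
    mu = torch.linspace(math.exp(-d_cut), 1.0, K)
    phi = 0.5 * (torch.cos(math.pi * d / d_cut) + 1.0) * (d <= d_cut)
    return phi.unsqueeze(-1) * torch.exp(-beta * (torch.exp(-d).unsqueeze(-1) - mu) ** 2)
```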
The neighborhood embedding $n_m$ for $m$ is then defined as:
$$n_m \in \mathbb{R}^{N_m \times d}, \qquad n_{m,i} = \sum_{j=1}^{N} z_{m,j} \odot W^{Filter} e^{RBF}(d_{ij}) \in \mathbb{R}^d,$$
where each row $i$ corresponds to the neighborhood embedding of atom $i$ of $m$. We update the coarse-grained representation $z_m$ with the neighborhood embedding $n_m$:
$$z_m = \mathrm{LayerNorm}\left(W^{Transform}[z_m, n_m] + b^{Transform}\right).$$
Then the coarse-grained representation $z_m$ is augmented by an equivariant transformer layer proposed in Thölke & De Fabritiis (2022). The interatomic distances are projected into two multidimensional filters $D^K, D^V$:
$$D^K = \sigma\left(W^{D^K} e^{RBF}(d_{ij}) + b^{D^K}\right), \qquad D^V = \sigma\left(W^{D^V} e^{RBF}(d_{ij}) + b^{D^V}\right).$$
And the attention is weighted by the cosine cutoff to ensure that atoms with a distance greater than $d_{cut}$ do not interact:
$$A = \mathrm{Activation}\left(\sum_{k}^{F} Q_k \odot K_k \odot D^K_k\right) \cdot \phi(d_{ij}), \qquad Q = W^{Q_1} z_m \ \text{ and } \ K = W^{K_1} z_m.$$
The attention mechanism's value is also split into three vectors of equal dimension:
$$s^1_{m,ij}, s^2_{m,ij}, s^3_{m,ij} = \mathrm{split}\left(V_j \odot D^V_{ij}\right) \in \mathbb{R}^d, \qquad V = W^{V_1} z_m,$$
and
$$y_m \in \mathbb{R}^{N_m \times 3d}, \qquad y_{m,i} = W^{O_1}\left(\sum_j^{N} A_{ij} \cdot s^3_{ij}\right),$$
where $y_{m,i}$, $s^1_{m,ij}$, and $s^2_{m,ij}$ correspond to features and two filters. Then the features $y_m$ are split into three features of equal size, $q^1_m, q^2_m, q^3_m \in \mathbb{R}^{N_m \times d}$:
$$\Delta z_m = q^1_m + q^2_m \odot \left\langle W^{Linear_1} v_m, W^{Linear_2} v_m \right\rangle \in \mathbb{R}^{N_m \times d},$$
where $v_m \in \mathbb{R}^{N_m \times 3}$ is initialized to zero, i.e., $v_m = 0^{N_m \times 3}$. And for $v_m$,
$$\Delta v_m = w_m + q^3_m \odot W^{Linear_3} v_m, \qquad w_{m,i} = \sum_j^{N} s^1_{m,ij} \odot v_{m,j} + s^2_{m,ij} \odot \frac{r_{m,i} - r_{m,j}}{\|r_{m,i} - r_{m,j}\|},$$
and $z_m = z_m + \Delta z_m$, $v_m = v_m + \Delta v_m$. More details on the transformer can be found in Thölke & De Fabritiis (2022). After iterative updates, we obtain our equivariant coarse-grained representation $\hat{z}_m$,
$$\hat{z}_m = \mathrm{LayerNorm}\left(z_m + \sum_l \Delta z_m\right) \in \mathbb{R}^{N_m \times d}.$$
Then the equivariant coarse-grained representation is combined with the node and edge features:
$$\hat{z}_m = \mathrm{LayerNorm}\left(W^C [\hat{z}_m, x_m, W^E f_m]\right) \in \mathbb{R}^{N_m \times d}.$$
If ẑm is originally a protein-target complex then will be encoded by an equivariant high-order graph neural network Satorras et al. (2021); Schütt et al. (2021); Thölke & De Fabritiis (2022). And our choice is to develop Kim et al. (2021) to an equivariant graph transformer for complex network, it follows
$$\hat{\mathbf{z}}_m = \mathrm{Enc}_{k\to l}(\hat{\mathbf{z}}_m) = \mathrm{Attn}_{k\to l}(\hat{\mathbf{z}}_m) + L^2_{l\to l}\!\left(\mathrm{Activation}\!\left(L^1_{l\to l}\!\left(\mathrm{Attn}_{k\to l}(\hat{\mathbf{z}}_m)\right)\right)\right) \in \mathbb{R}^{N_m^l \times d'},$$
$$\mathrm{Attn}_{k\to l}(\hat{\mathbf{z}}_m)_j = \sum_{h=1}^{H} \sum_{\mu} \sum_{i} \alpha^{h,\mu}_{i,j}\, \hat{\mathbf{z}}_{m,i}\, W^{V_2}_{h,\mu} W^{O}_{h,\mu},$$
where in the first layer $k = 1$, $H$ is the number of heads, $L^1_{l\to l}: \mathbb{R}^{N_m^l \times d} \to \mathbb{R}^{N_m^l \times d'}$, and $L^2_{l\to l}: \mathbb{R}^{N_m^l \times d'} \to \mathbb{R}^{N_m^l \times d}$. And to compute each attention tensor $\alpha^{h,\mu} \in \mathbb{R}^{n^{k+l}}$ from $\hat{\mathbf{z}}_m \in \mathbb{R}^{n^{k} \times d}$,
$$\alpha^{h,\mu}_{i,j} = \begin{cases} \dfrac{\sigma(Q^{\mu}_j, K^{\mu}_i)}{\sum_{i \mid (i,j) \in \mu} \sigma(Q^{\mu}_j, K^{\mu}_i)}, & (i,j) \in \mu \\ 0, & \text{otherwise,} \end{cases} \qquad Q^{\mu} = L^{\mu}_{k\to l}(\hat{\mathbf{z}}_m) \text{ and } K^{\mu} = L^{\mu}_{k\to k}(\hat{\mathbf{z}}_m).$$
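The distinctive piece here is that the softmax-style normalization runs only over index pairs inside an equivalence class $\mu$. A minimal sketch of that masked normalization for first-order inputs, taking $\sigma$ to be the exponential of a scaled dot product (an assumption on our part; Kim et al. (2021) define the full order-$k$ machinery):

```python
import torch

def masked_class_attention(q: torch.Tensor, k: torch.Tensor,
                           mask: torch.Tensor) -> torch.Tensor:
    """alpha_{i,j} = sigma(Q_j, K_i) / sum_{i | (i,j) in mu} sigma(Q_j, K_i)
    for (i, j) in mu, and 0 otherwise; mu is given as a boolean mask."""
    # q, k: (N, d); mask: (N, N) with mask[i, j] = True iff (i, j) is in mu.
    d = q.shape[-1]
    scores = torch.exp((k.unsqueeze(1) * q.unsqueeze(0)).sum(-1) / d ** 0.5)
    scores = scores * mask.float()                            # zero outside mu
    denom = scores.sum(dim=0, keepdim=True).clamp(min=1e-9)   # sum over i per j
    return scores / denom                                     # rows i, columns j

alpha = masked_class_attention(torch.randn(4, 8), torch.randn(4, 8),
                               torch.eye(4, dtype=torch.bool))
```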
More details can be found in Kim et al. (2021). We augment $\hat{\mathbf{z}}_m \in \mathbb{R}^{N_m^l \times d'}$ to an equivariant form by
$$\mathbf{s}^1_{m,ij}, \mathbf{s}^2_{m,ij}, \mathbf{s}^3_{m,ij} = \mathrm{split}(V_j) \in \mathbb{R}^{l \times d'}, \qquad V = W^{V_2} \hat{\mathbf{z}}_m,$$
and
$$\mathbf{y}_m \in \mathbb{R}^{N_m^l \times 3d'}, \qquad \mathbf{y}_{m,i} = W^{O_2}\!\left(\sum_{j=1}^{N_m} \alpha_{i,j} \cdot \mathbf{s}^3_{ij}\right).$$
Then the features $\mathbf{y}_m$ are split into three features of equal size $\mathbf{q}^1_m, \mathbf{q}^2_m, \mathbf{q}^3_m \in \mathbb{R}^{N_m^l \times d'}$:
$$\Delta \hat{\mathbf{z}}_m = \mathbf{q}^1_m + \mathbf{q}^2_m \odot \left\langle W^{\mathrm{Linear1'}} \mathbf{v}_m, W^{\mathrm{Linear2'}} \mathbf{v}_m \right\rangle \in \mathbb{R}^{N_m^l \times d'};$$
notice that $\mathbf{v}_m \in \mathbb{R}^{N_m^l \times 3}$ is set to $0$ in the beginning, i.e., initially $\mathbf{v}_m = \mathbf{0}^{N_m^l \times 3}$. And for $\mathbf{v}_m$,
$$\Delta \mathbf{v}_m = \mathbf{w}_m + \mathbf{q}^3_m \odot W^{\mathrm{Linear3'}} \mathbf{v}_m, \qquad \mathbf{w}_{m,i} = \sum_{j=1}^{N_m} \mathbf{s}^1_{m,ij} \odot \mathbf{v}_{m,j} + \mathbf{s}^2_{m,ij} \odot \frac{\mathbf{r}_{m,i} - \mathbf{r}_{m,j}}{\lVert \mathbf{r}_{m,i} - \mathbf{r}_{m,j} \rVert},$$
and $\hat{\mathbf{z}}_m = \hat{\mathbf{z}}_m + \Delta \hat{\mathbf{z}}_m \in \mathbb{R}^{N_m^l \times d'}$, $\mathbf{v}_m = \mathbf{v}_m + \Delta \mathbf{v}_m \in \mathbb{R}^{N_m^l \times 3}$. In the last layer, we set $l = 1$ and obtain our equivariant fine-grained complex representation $\tilde{\mathbf{z}}_{\mathrm{ptc}}$:
$$\tilde{\mathbf{z}}_{\mathrm{ptc}} = \mathrm{LayerNorm}(\tilde{\mathbf{z}}_{\mathrm{ptc}}) \in \mathbb{R}^{N_m \times d'}.$$
We now have the fine-grained representation for the protein-target complex.
Otherwise, if $\hat{\mathbf{z}}_m$ is originally a molecule, then it will be encoded by a shallow graph neural network Kipf & Welling (2016); Luan et al. (2020); Hua et al. (2022). And our choice is the simplest graph convolutional network Kipf & Welling (2016):
$$\tilde{\mathbf{z}}_{\mathrm{mol}} = \mathrm{LayerNorm}\!\left(\mathrm{Activation}(\mathbf{e}_m \hat{\mathbf{z}}_m W_m)\right) \in \mathbb{R}^{N_m \times d'}.$$
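A hedged sketch of this shallow molecule encoder over a dense adjacency (names are illustrative; in practice the raw $\mathbf{e}_m$ would typically be replaced by the symmetrically normalized adjacency of Kipf & Welling (2016)):

```python
import torch
import torch.nn as nn

class MoleculeEncoder(nn.Module):
    """z_mol = LayerNorm(Activation(e_m z_hat_m W_m)) with a dense adjacency."""

    def __init__(self, d: int = 128, d_out: int = 128):
        super().__init__()
        self.w = nn.Linear(d, d_out, bias=False)  # W_m
        self.norm = nn.LayerNorm(d_out)

    def forward(self, adj: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # adj: (N, N) edge indices e_m in {0, 1}; z: (N, d) augmented features.
        return self.norm(torch.relu(adj @ self.w(z)))

z_mol = MoleculeEncoder()(torch.eye(3), torch.randn(3, 128))  # (3, 128)
```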
Then it will be fed into the downstream task-specific prediction module.

1. What is the focus and contribution of the paper on molecular property and protein-target interaction prediction?
2. What are the strengths of the proposed approach, particularly in its ability to handle multi-task learning and dataset imbalance?
3. What are the weaknesses of the paper, especially regarding hyperparameter tuning and random initialization?
4. Do you have any concerns about the significance of the improvements reported in the paper, or the choice of baseline methods?
5. Would added discussion and explanations of the results help improve the understanding of the method's performance?
6. Were any ablation analyses performed to test the contributions of different parts of the model?
7. How could the paper improve its clarity, particularly in the motivation section?
8. How does the reviewer assess the quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
The paper presents a Multi-Dataset Multi-Task Graph learning (MDMT-GL) framework for molecular property and protein-target interaction prediction. The combined prediction task helps alleviate the challenges of finding quality labeled data for each task. The novelty of the framework is that it is the only method to train molecular property prediction and protein-target interaction prediction together. The framework consists of 4 components - (1) a coarse-grained module to learn coarse representations of the molecules and protein-target complexes (2) a fine-grained module to process the molecule and complex specific representations (3) data-type specific prediction modules (4) task-wise weighted loss. The paper also presents an extension of the 2D graph transformer into a 3D graph transformer and a weighted loss for the multi-task setting to address dataset imbalance. The proposed method is benchmarked on four datasets, two for molecule structures and two for protein-target. The method achieves state-of-the-art performance on the protein-target prediction tasks and showed competitive results on the molecule prediction tasks.
Strengths And Weaknesses
Strengths:
The methodology and experiments sections are well-developed. Each framework component is explained with concrete examples, formulas, and explanations.
The experiments showcase the diverse capabilities of the method and report state-of-the-art results compared to existing methods.
Weaknesses:
The main weakness of the paper is that there is no mention of hyper-parameter tuning or random initialization choice for the method. Many of the results in the experiments, especially on the molecule datasets, differ very slightly from the previous methods (e.g., MD17 Aspirin energy), which could be attributed to the random initialization. Providing details on how the best method parameters for the method were determined would be helpful in establishing the significance of the results.
Additionally, explaining what each improvement in the method entails is needed. Is a 0.001 change significant in the field? What are the implications of such a change on each metric used? Similarly, how does the paper define significance when reporting significant improvements on 6 out of 12 QM9 targets?
Some description of the baseline methods in the experimental setup would be useful to understand what methods the paper is improving over. Also, were the baseline methods re-trained on the chosen dataset? Was there any hyper-parameter tuning performed? If not, is that fair?
The paper would benefit from added discussion in the result section. Some results could use explanations and interpretations based on domain knowledge as to what the performance reflects about the method. For example, since the authors stated that without a weighted loss, the model will focus more on molecule datasets than protein datasets, but the results reflect that the framework did strictly better in the protein case and not in the molecule case. Did the new weighted loss overcompensate for the difference? Or is there another intuitive explanation for why the protein benchmarks were consistently higher than the molecule ones?
Was any ablation analysis done to test how different contributions to the model affect its performance?
Minor points:
Some of the text in Figure 1 is impossible to read. Are those labels relevant for understanding the method?
The model architecture details in the appendix are hard to follow, and the extension of 2D high-order transformer to 3D equivariant transformer could be better described.
Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is overall well-written, and the methodology and experiments parts are very clear.
I suggest adding more clarity to the motivation. It seems that the main motivation was “reducing the requirement for labeled data needed for the effective prediction of molecular and protein target properties…” but that is unclear for two reasons. 1) How extensive is the shortage of labeled data? The datasets presented in the paper would suggest plenty of diverse labeled data on both tasks. 2) How is the requirement for labeled data strictly connected to multi-task learning? Many methods tackle this problem, including unsupervised learning and few-shot/zero-shot learning methods. If those methods are not applicable in this scenario, the paper should further address the choice of using multi-task learning.
Quality: The paper presents quality experiments. The main suggestion here is adding hyperparameter tuning to further solidify the framework’s fairness compared to other methods.
Novelty: The paper presents novel ideas, especially in training a framework for protein and molecule prediction, as that has not been done before (to the best of my knowledge). However, the novelty of the loss function seems less convincing. Weighted loss has been used for various tasks, including multi-task learning.
Reproducibility: Without more details on how the experiments were conducted, it may be difficult to reproduce the results of this work. |
1. What is the focus of the paper regarding molecular predictive tasks?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its architecture and training regime?
3. Do you have any concerns about the experimental results and their relation to the approach's contributions?
4. How would you assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
The authors propose to improve on molecular predictive tasks by formulating a multi-task problem over the union of ligand and protein inputs using a connectionist approach.
Strengths And Weaknesses
Good: Improving the protein ligand binding accuracy is of interest.
Bad: The paper is not sufficiently clear and detailed. In addition it does not seem to introduce any novel idea in the architecture space or loss/training regime space. The authors seem to take some equivariant message passing architectures from literature (on top of a simple atom embedding layer), chose one for ligand inputs and one for protein inputs, and jointly train the system on a large number of tasks (with a loss that is weighted by the number of instances available per task). The approach does not meet the minimal novelty requirements.
It is not clear if the experimental results reported are due to the larger training set rather than to any architectural contribution.
No ablation studies are reported to understand the relative importance of the parts.
Clarity, Quality, Novelty And Reproducibility
The paper is not reproducible, with most parts described at an insufficient level of detail.
The text is at times hard to understand: what does it mean that <<the high-order long-range interactions always exist, which should be captured between atoms in a complex but are not solid and required for molecules>>?
The main figure is too small and hence unreadable.
The statement: <<We adopt and develop the 2D high- order transformer Kim et al. (2021) to a 3D equivariant transformer (see App. A), our fine-grained complex network fptc is capable of capturing any-order atom interactions and preserving equivariant property, which is novel.>> seems to indicate a major contribution that is not described in any detail in the main text. |
ICLR | Title
Multi-Dataset Multi-Task Framework for Learning Molecules and Protein-target Interactions Properties
Abstract
Molecular property prediction and protein-target interaction prediction with deep learning are becoming increasingly popular in drug discovery pipelines in recent years. An important factor that limits the development of these two areas is the insufficiency of labeled data. One promising direction to address this problem is to learn shared embedding from multiple prediction tasks within one molecular type, e.g., molecule or protein, because different tasks might actually share similar coarse-grained structural information. Unlike the previous methods, in this paper, we first argue that, due to the possible local structural similarity between molecules and protein-target complexes, coarse-grained latent embeddings can be found across different molecular types. To take advantage of this, we propose a new Multi-Dataset Multi-Task Graph Learning (MDMT-GL) framework, where we are able to make the most use of the labeled data by simultaneously training molecule property prediction and protein-target interaction prediction together. MDMT-GL augments molecular representations with equivariant properties, 2D local structures, and 3D geometric information. MDMT-GL can learn coarsegrained embeddings for molecules and proteins, and also distinguish fine-grained representations in various downstream prediction tasks with unique characteristics. Experimentally, we implement and evaluate MDMT-GL on 2 molecular dynamic datasets and 2 protein-target datasets, consisting of 825 tasks and over 3 million data points. MDMT-GL achieves state-of-the-art performance on several tasks and shows competitive performance on others. These experimental results confirm that molecules and proteins indeed share some coarse-grained structures and that the coarse-grained embedding is trainable, and their fine-grained embeddings are more representative. To the best of our knowledge, this is the first work to train multi-task learning across different molecular types, and to verify the structural similarity between the molecules and the protein-target complexes.
N/A
Molecular property prediction and protein-target interaction prediction with deep learning are becoming increasingly popular in drug discovery pipelines in recent years. An important factor that limits the development of these two areas is the insufficiency of labeled data. One promising direction to address this problem is to learn shared embedding from multiple prediction tasks within one molecular type, e.g., molecule or protein, because different tasks might actually share similar coarse-grained structural information. Unlike the previous methods, in this paper, we first argue that, due to the possible local structural similarity between molecules and protein-target complexes, coarse-grained latent embeddings can be found across different molecular types. To take advantage of this, we propose a new Multi-Dataset Multi-Task Graph Learning (MDMT-GL) framework, where we are able to make the most use of the labeled data by simultaneously training molecule property prediction and protein-target interaction prediction together. MDMT-GL augments molecular representations with equivariant properties, 2D local structures, and 3D geometric information. MDMT-GL can learn coarsegrained embeddings for molecules and proteins, and also distinguish fine-grained representations in various downstream prediction tasks with unique characteristics. Experimentally, we implement and evaluate MDMT-GL on 2 molecular dynamic datasets and 2 protein-target datasets, consisting of 825 tasks and over 3 million data points. MDMT-GL achieves state-of-the-art performance on several tasks and shows competitive performance on others. These experimental results confirm that molecules and proteins indeed share some coarse-grained structures and that the coarse-grained embedding is trainable, and their fine-grained embeddings are more representative. To the best of our knowledge, this is the first work to train multi-task learning across different molecular types, and to verify the structural similarity between the molecules and the protein-target complexes.
1 INTRODUCTION
The discovery and development of a new drug could take more than a decade and cost billions of dollars Hughes et al. (2011); Sliwoski et al. (2014). Therefore, to reduce costs, predicting the properties of molecules and protein-target complexes (e.g., heat capacity, force field, binding affinity) become an essential component for the early stage of the drug discovery pipeline. Molecules and complexes are always represented as graph-structured data Li et al. (2021); Maziarka et al. (2020); Thölke & De Fabritiis (2022), where atoms and bonds are nodes and edges, respectively, and graph neural networks are in favor of learning representations from relational datasets Kipf & Welling (2016); Luan et al. (2021); Hua et al. (2022). As a result, graph-based deep learning methods that learn molecular graph representations have achieved great success in predicting molecule properties Schütt et al. (2018; 2021); Klicpera et al. (2020); Thölke & De Fabritiis (2022) and protein-target interactions Lim et al. (2019), but the data we have at hand are often insufficient, which will limit model performance Sliwoski et al. (2014); Liu et al. (2022). Thus, reducing the requirement for labeled data needed for the effective prediction of molecular and protein target properties becomes a challenge in drug discovery.
To address the aforementioned issue, multi-task learning for molecular property prediction Tan et al. (2021) and protein-target interaction prediction Lee & Kim (2019); Hu et al. (2021); Liu et al. (2022) is gradually drawing attention from the drug discovery community. Their models always deal with a single molecular type, i.e., a molecule (or complex) is only used for multiple molecule property (or protein-target interaction) prediction tasks. The difficulty stems from the fact that knowledge from different molecular types cannot be easily decomposed and shared. However, we argue that due to the internal geometric and local structural similarities between the molecule and the protein-target complex, they should share similar coarse-grained latent embeddings Jain (2000); Bender & Glen (2004); Löfblom et al. (2010). Hence, we believe that representations of molecules and complexes could be coarse-grained and a coarse-grained latent embedding could be learned together under one learning framework. Embodiments should share internal geometric and local structural information across molecules and complexes from atomic perspectives. Eventually, the learning of protein representations can benefit from the learning of molecule representations, and vice versa.
Therefore, we propose a new learning framework, Multi-Dataset Multi-Task Graph learning (MDMTGL) for molecular property prediction and protein-target interaction prediction. MDMT-GL aims to make the best use of labeled data by transferring knowledge between molecules and complexes. The cross-dataset paradigm for multi-task learning enables the shared embedding to be more informative representations than the single-dataset paradigm. To the best of our knowledge, MDMT-GL is the first work to train molecular property prediction and protein-target interaction prediction together and to verify the structural similarities between the molecule and the protein-target complex. In addition to the major contribution, we also develop the 2D graph transformer proposed by Kim et al. (2021) into a 3D equivariant graph transformer for molecular dynamics, and the model is capable of capturing high-order atom interactions in 3D space. Moreover, unlike multi-task learning within a single dataset, the data imbalance of different datasets will lead to the task imbalance problem which is fatal to multi-task learning. To treat each task equally, we propose a weighted loss to balance the importance of the tasks, which is novel for MDMT-GL. The details of MDMT-GL are discussed in Sec. 3. Furthermore, in Sec. 4, the experimental results support our argument and show that molecules and complexes can share some similar coarse-grained structures, and the geometric and structural similarities can be learned to leverage any molecular prediction task.
2 RELATED WORK
2.1 MOLECULAR MULTI-TASK LEARNING
Molecular Multi-Task Learning (MTL) is mainly used to address the data insufficiency problem in drug discovery. Liu et al. (2019c) uses a general architecture of a shared representation module and multiple task-specific prediction modules for MTL. Tan et al. (2021) stacks a base regressor and classifier with an additional training stage on the expanded molecular feature space for the prediction of molecular properties. Lee & Kim (2019) finds that similarity within a target group can affect the performance of MTL in the prediction of protein binding. Liu et al. (2022) possesses the knowledge of task relations and constructs a task-relation graph to maximize the performance of MTL in protein targeting. However, the aforementioned methods do not transfer knowledge between molecules and protein-target complexes. Existing models only perform MTL on the same dataset, i.e., molecule or protein, but the MTL between molecule and protein has never been explored. In this work, we aim to make use of the shared information between molecules and proteins across various tasks, so that we can make the most and best use of the labeled data.
2.2 GRAPH NEURAL NETWORKS FOR PROPERTY PREDICTION
In drug discovery, message-passing-based models are applied to predict the properties of molecules and proteins. Schütt et al. (2018) respects essential quantum chemical constraints and models quantum interactions between atoms at arbitrary positions in a molecule. Satorras et al. (2021) proposes a graph neural network that is equivariant to rotations, translations, reflections, and permutations in 3D geometry to model molecular dynamics. Thölke & De Fabritiis (2022) builds on top of the graph transformer and develops an equivariant graph transformer to predict quantum molecular properties. Lim et al. (2019) learns drug-target interactions by extracting graph features of intermolecular interactions directly from the 3D structural information of the protein-ligand binding pose. Li et al. (2021) proposes a structure-aware interactive graph neural network to preserve the
distance and angle information among atoms to learn interactions between proteins and ligands. Overall, our architecture mainly consists of two equivariant graph transformers, which focus on long-range atom interactions and the featurization of atomic types and coordinates, and a graph neural network that preserves local structural information.
3 MULTI-DATASET MULTI-TASK FRAMEWORK FOR LEARNING MOLECULES AND PROTEIN-TARGET COMPLEXES
As discussed in Sec. 1, the labeled data for molecules and protein-target complexes are often insufficient. Therefore, we strive to make the most of the available labeled data from molecule and protein datasets for various tasks. In other words, we aim to design an architecture that can learn simultaneously from different molecular and protein datasets, in which learning protein representations can benefit from learning molecule representations and vice versa. The core technical difficulty is how to identify their coarse-grained similar internal geometry and local structures, and to also differentiate their fine-grained representations for different conformation structures.
To achieve this goal, we divide our model into four components: (1) a coarse-grained module, (2) a fine-grained data-specific module, (3) a task-specific prediction module, and (4) a multi-dataset multi-task loss (see the whole architecture in Fig. 1 and App. A).
The function of each module is as follows. (1) The coarse-grained module is designed to learn a coarse-grained representation of molecules and protein-target complexes, from which geometric and structural information common to molecules and complexes can be obtained. We discuss the details in Sec. 3.1. (2) The fine-grained module processes the molecule-specific and complex-specific representations separately. We discuss it in Sec. 3.2. (3) The data-type-specific representations are then fed into different task-specific prediction modules to make predictions for the various tasks; the details are discussed in Sec. 3.3. (4) Finally, weighted losses over all tasks are used to balance the importance of the different tasks. We describe how to compute the MDMT loss in Sec. 3.4. The whole framework can be trained in an end-to-end manner. In Sec. 4, we show experimentally that the representations can indeed be coarse-grained across molecules and protein-target complexes.
3.1 COARSE-GRAINED MODULE
Despite having different conformation structures and dynamics, molecules and protein-target complexes are made of the same basic atoms and bonds, and should thus share fundamental internal geometric and local structural information Jain (2000); Bender & Glen (2004); Löfblom et al. (2010). For example, the carbon dioxide molecule O=C=O and methanoic acid H(C=O)OH have different conformation structures and different force fields, but they share the same carbon atom C and similar local structures around the carbon atoms, e.g., a double bond with oxygen O. Thus, the two carbon atoms can potentially share coarse-grained information about their local structures. The coarse-grained module is designed to capture such atomic-level similarities so that generalizable features between molecules and proteins can be learned.
To capture the atomic-level similarities, we give each basic atom a unique learnable embedding Schütt et al. (2018); Klicpera et al. (2020); Thölke & De Fabritiis (2022), which is shared by all compounds in all tasks across different datasets (see the atom-wise embedding layer in Fig. 1). This is the first time that atomic-level coarse-grained representations are exploited in the MDMT setting for molecules and proteins. To be more specific, an input molecule or complex $m = [a_1, a_2, \ldots, a_{N_m}]^T \in \mathbb{N}^{N_m \times 1}$ is a 1D vector of the atoms that build $m$, where $N_m$ is the number of atoms in $m$ and $a_i$ is the atomic number of atom $i$ in the periodic table. The molecular embedding is $z_m = f_{atom}(m)$, where $f_{atom}: \mathbb{N}^{N_m \times 1} \to \mathbb{R}^{N_m \times d}$ projects a 1D molecule vector onto a 2D learnable embedding, each row of which represents a hidden atom feature, and $d$ is the dimension of the embedding space. Take the carbon dioxide molecule O=C=O for example: its input is the 1D vector $[8, 6, 8]^T$, where 8 and 6 are the atomic numbers of oxygen and carbon, and its embedding is $z_{O=C=O} = [f_O(8), f_C(6), f_O(8)]^T \in \mathbb{R}^{3 \times d}$, where $f_C(6)$ and $f_O(8)$ are the learnable embeddings for carbon and oxygen, respectively.
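For concreteness, the following is a minimal PyTorch sketch of such a shared atom-wise embedding layer (our illustration, not the authors' released code; the class name, vocabulary size, and embedding dimension are hypothetical):

```python
import torch
import torch.nn as nn

class AtomEmbedding(nn.Module):
    """Shared atom-wise embedding f_atom: maps atomic numbers to learnable vectors.

    A single table is shared by every molecule and complex across all datasets,
    which is what lets coarse-grained atomic information transfer between tasks.
    """
    def __init__(self, num_atom_types: int = 119, d: int = 128):
        super().__init__()
        # Row a_i of the table is the learnable embedding of the element with
        # atomic number a_i (index 0 is unused / reserved for padding).
        self.table = nn.Embedding(num_atom_types, d)

    def forward(self, m: torch.Tensor) -> torch.Tensor:
        # m: (N_m,) integer vector of atomic numbers -> (N_m, d) embedding z_m.
        return self.table(m)

# Carbon dioxide O=C=O as a 1D vector of atomic numbers [8, 6, 8]:
f_atom = AtomEmbedding()
z_co2 = f_atom(torch.tensor([8, 6, 8]))  # shape (3, 128)
```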
2D molecular local structures, 3D molecular geometric information, and equivariance are important for coarse-grained representations to preserve physical constraints Löfblom et al. (2010); Schütt et al. (2021). Therefore, to obtain these capacities, we augment the coarse-grained representation $z_m$ with an augmentation network $f_{aug}$, which can be any equivariant graph neural network Satorras et al. (2021); Schütt et al. (2021); Thölke & De Fabritiis (2022). The augmentation network $f_{aug}$ takes $z_m$, edge (bond) indices $e_m \in [0, 1]^{N_m \times N_m}$, edge (bond) features $f_m \in \mathbb{R}^{E_m \times f_e}$, and atom positions $r_m \in \mathbb{R}^{N_m \times 3}$ as input, and produces $\hat{z}_m = f_{aug}(z_m, r_m, e_m, f_m) \in \mathbb{R}^{N_m \times d}$, an equivariant coarse-grained representation (see the augmentation network in Fig. 1). This design enables $\hat{z}_m$ to learn the shared fundamental internal geometric and structural information across different tasks and datasets while preserving equivariance.
In conclusion, the coarse-grained module consists of two components: (1) an atom-wise embedding layer and (2) an augmentation network. The atom-wise embedding layer produces an atom-wise coarse-grained representation $z_m$ for every input molecule or complex $m$, and the augmentation network endows every coarse-grained representation with equivariance using 2D local structures and 3D geometric information, producing an equivariant coarse-grained representation $\hat{z}_m$.
In addition to the equivariant coarse-grained representations, different molecular types require fine-grained, data-type-specific representations to capture differences in conformation structure and geometric information for different downstream tasks. In Sec. 3.2, we introduce the fine-grained data-specific module and motivate its design.
3.2 FINE-GRAINED DATA-SPECIFIC MODULE
In Sec. 3.1, we discussed how the coarse-grained module learns atom-wise coarse-grained representations that make use of labeled molecules and complexes. We now motivate why these coarse-grained representations must be made fine-grained for downstream use.
The chain of a protein-target complex (normally from 100 to more than 1000 atoms) is always significantly longer than the chain of a molecule (normally from 1 to 60 atoms), which makes atom-wise interactions highly different: two atoms that are far apart in a long chain can still interact, so high-order long-range interactions always exist and should be captured between atoms in a complex, whereas they are neither pronounced nor required for small molecules Luan et al. (2019); Morris et al. (2019). For example, oxidoreductase C879H1426N250O260S3 is a protein of 2818 atoms, while carbon dioxide CO2 is a molecule of only 3 atoms.
With this in mind, to distinguish the different conformation structures resulting from the chain-size difference between molecules and complexes, the fine-grained data-specific module processes the coarse-grained representations $\hat{z}_m$ of molecules and complexes in different ways. To be more specific, we use high-order graph networks for large graphs Morris et al. (2019), such as complexes, to capture high-order interactions, and shallow graph networks for small graphs, such as molecules, where high-order interactions are not pronounced Luan et al. (2019).
Therefore, we divide our fine-grained module into two data-specific networks, (1) a fine-grained complex network fptc that has the ability to capture high-order long-range interactions for atoms in complexes (see the long-chain complex network in Fig. 1), and (2) a shallow fine-grained molecule network fmol for molecules (see the short-chain molecule network in Fig. 1).
The fine-grained complex network $f_{ptc}$ can be any high-order graph neural network Li et al. (2021); Kim et al. (2021); Thölke & De Fabritiis (2022). We develop the 2D high-order transformer of Kim et al. (2021) into a 3D equivariant transformer (see App. A), so that our fine-grained complex network $f_{ptc}$ is capable of capturing any-order atom interactions while preserving equivariance, which is novel. The fine-grained protein-target complex embedding is $\tilde{z}_{ptc} = f_{ptc}(\hat{z}_m, r_m, e_m, f_m, x_m) \in \mathbb{R}^{N_m \times d'}$, where $x_m \in \mathbb{R}^{N_m \times f_n}$ are the atom features and $d'$ denotes the dimension of the embedding.
For the fine-grained molecule network $f_{mol}$, the idea is simple. Since equivariance is closely related to high-order long-range interactions in 3D space Satorras et al. (2021), which are not required for small molecule graphs, a shallow graph neural network suffices as the fine-grained molecule network $f_{mol}$ to model local message passing in short-chain molecules Kipf & Welling (2016); Luan et al. (2020); Hua et al. (2022). Considering the computational cost, we choose the simplest graph convolutional network Kipf & Welling (2016) for $f_{mol}$. The fine-grained molecule embedding is $\tilde{z}_{mol} = f_{mol}(\hat{z}_m, e_m, f_m, x_m) \in \mathbb{R}^{N_m \times d'}$.
Overall, we have a fine-grained complex network $f_{ptc}$, which is a high-order equivariant graph network, and a fine-grained molecule network $f_{mol}$, which is a shallow graph network. We treat molecules and protein-target complexes differently in the fine-grained data-specific networks because complexes are always significantly longer than molecules, and high-order long-range interactions need to be captured among their atoms. A coarse-grained representation $\hat{z}_m$ is embedded by the complex network $f_{ptc}$ if it originates from a protein-target complex, and by the molecule network $f_{mol}$ if it originates from a molecule.
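A minimal sketch of this routing follows (the module and flag names are our assumptions; in practice, the dataset of origin determines which branch is taken):

```python
import torch.nn as nn

class FineGrainedModule(nn.Module):
    """Routes equivariant coarse-grained representations to a data-specific network."""
    def __init__(self, f_ptc: nn.Module, f_mol: nn.Module):
        super().__init__()
        self.f_ptc = f_ptc  # high-order equivariant transformer for complexes
        self.f_mol = f_mol  # shallow GCN for small molecules

    def forward(self, z_hat, r, e, f, x, is_complex: bool):
        if is_complex:
            # Long-chain complex: capture high-order, long-range interactions.
            return self.f_ptc(z_hat, r, e, f, x)
        # Short-chain molecule: local message passing suffices.
        return self.f_mol(z_hat, e, f, x)
```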
3.3 TASK-SPECIFIC PREDICTION MODULE
The task-specific prediction module distinguishes the representations $\tilde{z}_{ptc}$ and $\tilde{z}_{mol}$ and generates the output $\hat{y}_{task}$ for each task. In the multi-task learning setting, each task should have its own specific prediction network $f_{task}$ Collobert & Weston (2008); Liu et al. (2019c); Aribandi et al. (2021) (see the task-specific prediction module in Fig. 1). In practice, our task-specific prediction module consists of 825 output networks corresponding to 825 prediction tasks from the following 4 datasets.
QM9 (12 prediction networks) QM9 is a dataset of molecules consisting of 12 tasks Ramakrishnan et al. (2014). We use the specialized output networks in Thölke & De Fabritiis (2022) for the prediction of molecular dipole moment µ and the prediction of electronic spatial extent 〈R2〉. The gated equivariant blocks Weiler et al. (2018); Schütt et al. (2021) are used for the remaining 10 tasks.
MD17 (14 prediction networks) MD17 is a dataset of molecules consisting of 7 sub-datasets (Aspirin, Ethanol, Malondialdehyde, Naphthalene, Salicylic Acid, Toluene, Uracil) Chmiela et al. (2017). There are 14 tasks in total, where each sub-dataset has 2 prediction tasks, for the molecular energy $E$ and the forces $\vec{F}$. We use the gated equivariant blocks proposed in Weiler et al. (2018); Schütt et al. (2021) to predict $E$, and $\vec{F}$ is calculated as the negative gradient of $E$ with respect to the atomic coordinates, $\vec{F} = -\partial E / \partial \vec{r}$ Thölke & De Fabritiis (2022).

ChEMBL (798 prediction networks) ChEMBL is a protein-target dataset originally proposed in Mendez et al. (2019). Three sub-datasets, ChEMBL10, ChEMBL50, and ChEMBL100, were developed by Mayr et al. (2018); Liu et al. (2022) for multi-task learning, containing 406, 263, and 129 regression tasks, respectively. We apply a linear function over $\tilde{z}_{ptc}$ followed by sum pooling to get an output for each regression task.
PDBbind (1 prediction network) PDBbind Wang et al. (2005) is a protein-target dataset consisting of 1 regression task, protein-ligand binding affinity prediction. We apply a linear function over $\tilde{z}_{ptc}$ followed by sum pooling to predict the protein-ligand binding affinity.
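To illustrate, a hedged sketch of such a "linear function plus sum pooling" head, as used for the ChEMBL and PDBbind tasks, might look as follows (class name and dimensions are placeholders):

```python
import torch
import torch.nn as nn

class LinearSumPoolHead(nn.Module):
    """One task-specific regression head: per-atom linear map, then sum over atoms."""
    def __init__(self, d_in: int, d_out: int = 1):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)

    def forward(self, z_ptc: torch.Tensor) -> torch.Tensor:
        # z_ptc: (N_m, d') fine-grained embedding -> (d_out,) task prediction.
        return self.linear(z_ptc).sum(dim=0)

# 825 independent heads, one per task (e.g., 798 for ChEMBL, 1 for PDBbind, ...):
heads = nn.ModuleList([LinearSumPoolHead(d_in=128) for _ in range(825)])
print(heads[0](torch.randn(50, 128)).shape)  # torch.Size([1])
```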
The loss $L_i$ for each task is calculated from the outputs $\hat{y}_i$ of that task and the ground-truth labels $y_i$, where $i$ is the task index. All $L_i$ are weighted and summed into a multi-dataset multi-task loss $L_{MDMT}$ for optimization. One design principle for $L_{MDMT}$ is to treat each task as equally important. This principle holds naturally in conventional multi-task learning Mayr et al. (2018), but in the multi-dataset setting, the data imbalance between different molecular datasets breaks it. In Sec. 3.4, we discuss this problem and how the design of the weighted loss $L_{MDMT}$ addresses it.
3.4 MULTI-DATASET MULTI-TASK LOSS
In MDMT-GL, we face a data imbalance problem. The problem only occurs when we train our model on different datasets simultaneously, e.g., molecules and protein-target complexes: the number of labeled molecules is always greater than the number of labeled protein-target complexes, so the model will focus more on the molecule datasets than on the protein datasets. This problem is specific to the multi-dataset setting and does not exist in previous work on multi-task learning with a single molecular type Tan et al. (2021); Lee & Kim (2019); Hu et al. (2021); Liu et al. (2022).
We therefore propose a weighted loss, specific to MDMT-GL, for the data imbalance problem between different molecular datasets. The loss is designed so that all tasks are treated equally regardless of the amount of labeled training data.
Suppose that we have $U$ tasks with $n_1, n_2, \ldots, n_U$ labeled training examples, respectively. For task $i$ we obtain predictions $\hat{y}_{i,1}, \hat{y}_{i,2}, \ldots, \hat{y}_{i,n_i}$ and compare them with the ground-truth labels $y_{i,1}, y_{i,2}, \ldots, y_{i,n_i}$ to form the loss of the $i$-th task, $L_i = \sum_{j=1}^{n_i} l_i(y_{i,j}, \hat{y}_{i,j})$. The multi-dataset multi-task loss $L_{MDMT} = \sum_{i=1}^{U} c_i L_i$ is a weighted sum of the $L_i$. To balance the weights of the $L_i$, we want $\sum_{j=1}^{n_1} c_1 = \sum_{j=1}^{n_2} c_2 = \cdots = \sum_{j=1}^{n_U} c_U$, which leads to $c_1 n_1 = c_2 n_2 = \cdots = c_U n_U$. In practice, let $n_{min} = \min(n_1, n_2, \ldots, n_U) = n_k$; then we set $c_k = 1$ and, for any $i \neq k$, $c_i = \frac{n_{min}}{n_i}$. We will discuss the implementation in Sec. 4.
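A minimal sketch of this weighting scheme, assuming the per-task losses have already been computed:

```python
def mdmt_loss(task_losses, task_sizes):
    """L_MDMT = sum_i c_i * L_i with c_k = 1 for the smallest task and c_i = n_min / n_i."""
    n_min = min(task_sizes)
    weights = [n_min / n_i for n_i in task_sizes]  # c_i * n_i is constant across tasks
    return sum(c * L for c, L in zip(weights, task_losses))

# Three tasks with 1000, 100, and 10 labeled examples:
print(mdmt_loss([0.5, 0.8, 1.2], [1000, 100, 10]))  # 0.01*0.5 + 0.1*0.8 + 1*1.2 = 1.285
```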
4 EXPERIMENTS
In this section, we evaluate the Multi-Dataset Multi-Task Graph Learning framework (MDMT-GL) on real-world molecule and protein-target complex datasets and show that our proposed learning method leads to better molecule and complex representations. The datasets are briefly introduced in Sec. 3.3. We conduct experiments across 2 molecule datasets and 2 complex datasets, consisting of 825 tasks and 3,139,011 labeled molecular graphs. We divide the experiment section into two subsections: molecule datasets are discussed in Sec. 4.1 and protein datasets in Sec. 4.2. In more detail, we discuss the performance of the model on QM9 in Sec. 4.1.1, on MD17 in Sec. 4.1.2, on ChEMBL in Sec. 4.2.1, and on PDBbind in Sec. 4.2.2.
4.1 MOLECULE DATASETS
In this section, we discuss our model performance on molecule datasets including QM9 Ramakrishnan et al. (2014) and MD17 Chmiela et al. (2017). We compare our MDMT-GL with several classic baselines and the state-of-the-art models in Tab. 1&2. The experimental results show that learning molecule representations can benefit from learning protein representations.
4.1.1 QM9
Data QM9 dataset reports computed geometric, thermodynamic, energetic, and electronic properties for locally optimized geometries. We use the same data split as in Schütt et al. (2018); Klicpera et al. (2020); Thölke & De Fabritiis (2022), where the labeled molecules are divided into 110,000 / 10,000 / 10,831 for training / validation / testing.
Comparison We compare MDMT-GL with several popular baselines and state-of-the-art models, including SchNet Schütt et al. (2018), EGNN Satorras et al. (2021), PhysNet Unke & Meuwly (2019), DimeNet++ Klicpera et al. (2020), Cormorant Anderson et al. (2019), PaiNN Schütt et al. (2021), and Equivariant Transformer (ET) Thölke & De Fabritiis (2022), and report the results in Tab. 1. The results of baselines are obtained from Thölke & De Fabritiis (2022), and the MDMT-GL results are averaged over three runs.
From Tab. 1, we can observe that MDMT-GL outperforms most popular baselines with significant improvements on 6 out of 12 QM9 targets, namely $\epsilon_{HOMO}$, $\epsilon_{LUMO}$, $\Delta\epsilon$, $U_0$, $U$, and $G$. MDMT-GL shows very competitive performance and delivers significant improvements on the challenging molecular chemical property prediction problem via multi-dataset learning.
4.1.2 MD17
Data MD17 consists of molecular dynamics trajectories of small organic molecules, including both energies and forces. We use the same data split as in previous works Schütt et al. (2018); Klicpera et al. (2020); Thölke & De Fabritiis (2022). For each sub-dataset, we split the data into a training set with 950 molecules and a validation set with 50 molecules, leaving the remaining molecules for testing.
Comparison We compare MDMT-GL with several popular baselines and state-of-the-art models, including SchNet Schütt et al. (2018), PhysNet Unke & Meuwly (2019), DimeNet Klicpera et al. (2020), PaiNN Schütt et al. (2021), and Equivariant Transformer (ET) Thölke & De Fabritiis (2022), and report the results in Tab. 2. The baseline results are obtained from Thölke & De Fabritiis (2022), and the MDMT-GL results are averaged over three runs.
From Tab. 2, we can observe that MDMT-GL outperforms the most popular baselines with significant improvements on 8 out of 14 MD17 tasks, the exceptions being energy and forces for naphthalene, forces for salicylic acid, energy and forces for toluene, and forces for uracil. MDMT-GL shows very competitive performance and delivers significant improvements on the challenging molecular dynamics trajectory prediction problem via multi-dataset learning.
4.2 PROTEIN-TARGET DATASETS
In this section, we discuss our model performance on protein-target complex datasets including ChEMBL Mendez et al. (2019) and PDBbind Wang et al. (2005). We compare our MDMT-GL with several classic baselines and the state-of-the-art model in Tab. 3&4. The experimental results show that learning protein representations can benefit from learning molecule representations.
4.2.1 CHEMBL
Data The ChEMBL dataset was originally proposed by Mendez et al. (2019) for protein targeting; the authors of Liu et al. (2022) modify the original dataset and provide three sub-datasets, ChEMBL10, ChEMBL50, and ChEMBL100, for multi-task learning. Liu et al. (2022) report 382/152/132 tasks (666 in total) for ChEMBL10/ChEMBL50/ChEMBL100, but we actually obtain 406/263/129 tasks (798 in total) when running their data generation steps. We therefore run and test the baselines and MDMT-GL on 406/263/129 tasks and report results averaged over three runs. We use the same data split as Liu et al. (2022), splitting the labeled data 80%/10%/10% for training/validation/testing.
Comparison We compare MDMT-GL with several classic multi-task learning baselines and state-of-the-art models, including Multi-Task Learning (MTL) Mayr et al. (2018), Uncertainty Weighting (UW) Kendall et al. (2018), GradNorm Chen et al. (2018), Dynamic Weight Average (DWA) Liu et al. (2019b), Loss-Balanced Task Weighting (LBTW) Liu et al. (2019a), State Graph Neural Network (SGNN) Liu et al. (2022), and Energy-Based State Graph Neural Network (SGNN-EBM) Liu et al. (2022), and report the averaged results in Tab. 3.
From Tab. 3, we can observe that MDMT-GL outperforms all popular baselines, with marginal improvements in AUC-ROC score on ChEMBL10, ChEMBL50, and ChEMBL100. By simultaneously learning from other molecular datasets and tasks, the MDMT-GL framework makes the best use of the data and improves the protein-targeting predictions.
4.2.2 PDBBIND
Data The PDBbind dataset provides 3D binding structures of protein-ligand complexes with experimentally determined binding affinities. In our experiment, we use the PDBbind2016 dataset, the most widely used PDBbind dataset in previous works Lim et al. (2019); Li et al. (2021). We use the same data split as in Li et al. (2021).
Comparison We compare MDMT-GL with several classic baselines and state-of-the-art models, including Spatial Graph Convolution Network (SGCN) Danel et al. (2020), GNN-DTI Lim et al. (2019), DMPNN Yang et al. (2019), Molecule Attention Transformer (MAT) Maziarka et al. (2020), DimeNet Klicpera et al. (2020), CMPNN Song et al. (2020), and Structure-aware Interactive Graph Network (SIGN) Li et al. (2021). The baseline results are obtained from Li et al. (2021), and MDMT-GL results are averaged over three runs.
From Tab. 4, we can observe that MDMT-GL outperforms all popular baselines with significant improvements in RMSE, MAE, SD, and R scores on the PDBbind dataset. MDMT-GL shows very competitive performance and delivers significant improvements on the challenging protein-binding affinity prediction problem via multi-dataset learning.
Overall, we can see that the Multi-Dataset Multi-Task Graph Learning framework (MDMT-GL) is very competitive across all tasks. We conclude that MDMT-GL enables the learning of protein representations to benefit the learning of molecule representations, and vice versa. The strong experimental results show that our proposed learning method makes the best use of the labeled training data, and that this learning framework can mitigate the lack of labeled data in drug discovery.
5 CONCLUSION AND FUTURE WORK
In conclusion, our proposed Multi-Dataset Multi-Task Graph Learning (MDMT-GL) framework addresses the data insufficiency problem by concurrently training the representations of molecules and protein-target complexes for multiple prediction tasks. The strong experimental results show that transferable information between molecules and protein-target complexes exists and is learnable. Equivalently, the learning of protein representations can facilitate the learning of molecule representations, and vice versa. In the future, we could incorporate quantum chemical constraints and prior knowledge into the coarse-grained network to capture more informative coarse-grained embeddings.
A MODEL ARCHITECTURE
We introduce the full MDMT-GL architecture. Suppose we are given input molecular data with $N_m$ atoms and $E_m$ edges: atom numbers $m \in \mathbb{N}^{N_m \times 1}$, atom features $x_m \in \mathbb{R}^{N_m \times f_n}$, atom positions $r_m \in \mathbb{R}^{N_m \times 3}$ in 3D space, edge indices $e_m \in [0, 1]^{N_m \times N_m}$, and edge features $f_m \in \mathbb{R}^{E_m \times f_e}$, where $f_n$ and $f_e$ denote the numbers of node features and edge features, respectively.
First, we embed the atom numbers $m$ into an atom-wise coarse-grained representation $z_m$ via an atom-embedding transformation:

$$z_m = W_{atom}\, m \in \mathbb{R}^{N_m \times d},$$

where $d$ is the hidden feature dimension.
Then the coarse-grained representation zm will be augmented to an equivariant coarse-grained representation ẑm by an augmentation network. There are many equivariant graph neural network options Satorras et al. (2021); Schütt et al. (2021); Thölke & De Fabritiis (2022), and our choice is the equivariant transformer proposed in Thölke & De Fabritiis (2022).
Before the coarse-grained representation is augmented, an exponential normal radial basis function, resembling a continuous-filter convolution, filters the neighborhood of an atom Schütt et al. (2018). The radial basis expansion of the distance $d_{ij}$ between atoms $i$ and $j$ is defined as:

$$e^{RBF}_k(d_{ij}) = \phi(d_{ij}) \exp\!\left(-\beta_k \left(\exp(-d_{ij}) - \mu_k\right)^2\right), \qquad \phi(d_{ij}) = \begin{cases} \frac{1}{2}\left(\cos\!\left(\frac{\pi d_{ij}}{d_{cut}}\right) + 1\right), & d_{ij} \le d_{cut}, \\ 0, & d_{ij} > d_{cut}, \end{cases}$$

where $\beta_k, \mu_k$ are fixed parameters specifying the center and width of radial basis function $k$. Each $\beta_k$ is initialized as $\left(2K^{-1}\left(1 - \exp(-d_{cut})\right)\right)^{-2}$, and the $\mu_k$ are initialized with values equally spaced between $\exp(-d_{cut})$ and 1, as proposed by Unke & Meuwly (2019). The cosine cutoff $\phi(d_{ij})$ ensures a smooth transition to 0 as $d_{ij}$ approaches $d_{cut}$.
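A NumPy sketch of this expansion (our illustration; the values of K and d_cut are placeholders):

```python
import numpy as np

def cosine_cutoff(d, d_cut):
    # Smoothly decays to 0 as d approaches d_cut; exactly 0 beyond it.
    return np.where(d <= d_cut, 0.5 * (np.cos(np.pi * d / d_cut) + 1.0), 0.0)

def exp_normal_rbf(d, K=64, d_cut=5.0):
    """e^RBF_k(d) = phi(d) * exp(-beta_k * (exp(-d) - mu_k)^2) for k = 1..K."""
    mu = np.linspace(np.exp(-d_cut), 1.0, K)                      # centers
    beta = np.full(K, (2.0 / K * (1.0 - np.exp(-d_cut))) ** -2)   # widths
    d = np.atleast_1d(d)[:, None]                                 # (n, 1)
    return cosine_cutoff(d, d_cut) * np.exp(-beta * (np.exp(-d) - mu) ** 2)

print(exp_normal_rbf(np.array([1.0, 6.0])).shape)  # (2, 64); second row is all zeros
```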
The neighborhood embedding $n_m$ for $m$ is then defined as:

$$n_m \in \mathbb{R}^{N_m \times d}, \qquad n_{m,i} = \sum_{j=1}^{N} z_{m,j} \odot W_{Filter}\, e^{RBF}(d_{ij}) \in \mathbb{R}^d,$$

where row $i$ corresponds to the neighborhood embedding of atom $i$ of $m$. We update the coarse-grained representation $z_m$ with the neighborhood embedding $n_m$:

$$z_m = \mathrm{LayerNorm}\left(W_{Transform}[z_m, n_m] + b_{Transform}\right).$$
The coarse-grained representation $z_m$ is then augmented by an equivariant transformer layer proposed in Thölke & De Fabritiis (2022). The interatomic distances are projected into two multi-dimensional filters $D^K, D^V$:

$$D^K = \sigma\!\left(W_{D^K}\, e^{RBF}(r_{m,ij}) + b_{D^K}\right), \qquad D^V = \sigma\!\left(W_{D^V}\, e^{RBF}(r_{m,ij}) + b_{D^V}\right).$$

The attention is weighted by the cosine cutoff to ensure that atoms at a distance greater than $d_{cut}$ do not interact:

$$A = \mathrm{Activation}\!\left(\sum_{k}^{F} Q_k \odot K_k \odot D^K_k\right) \cdot \phi(d_{ij}), \qquad Q = W_{Q_1} z_m, \quad K = W_{K_1} z_m.$$
The value of the attention mechanism is also split into three vectors of equal dimension:

$$s^1_{m,ij}, s^2_{m,ij}, s^3_{m,ij} = \mathrm{split}\!\left(V_j \odot D^V_{ij}\right) \in \mathbb{R}^d, \qquad V = W_{V_1} z_m,$$

and

$$y_m \in \mathbb{R}^{N_m \times 3d}, \qquad y_{m,i} = W_{O_1}\!\left(\sum_{j}^{N} A_{ij} \cdot s^3_{ij}\right),$$

where $y_{m,i}$ corresponds to features and $s^1_{m,ij}, s^2_{m,ij}$ to two filters. The features $y_m$ are then split into three features of equal size, $q^1_m, q^2_m, q^3_m \in \mathbb{R}^{N_m \times d}$:
$$\Delta z_m = q^1_m + q^2_m \odot \langle W_{Linear1}\, v_m, W_{Linear2}\, v_m \rangle \in \mathbb{R}^{N_m \times d},$$

where the vector features $v_m \in \mathbb{R}^{N_m \times 3}$ are initialized to zero, i.e., $v_m = 0^{N_m \times 3}$. For $v_m$,

$$\Delta v_m = w_m + q^3_m \odot W_{Linear3}\, v_m, \qquad w_{m,i} = \sum_{j}^{N} s^1_{m,ij} \odot v_{m,j} + s^2_{m,ij} \odot \frac{r_{m,i} - r_{m,j}}{\|r_{m,i} - r_{m,j}\|},$$

and $z_m = z_m + \Delta z_m$, $v_m = v_m + \Delta v_m$. More details on the transformer can be found in Thölke & De Fabritiis (2022). After iterative updates, we obtain our equivariant coarse-grained representation $\hat{z}_m$,

$$\hat{z}_m = \mathrm{LayerNorm}\!\left(z_m + \sum_l \Delta z_m\right) \in \mathbb{R}^{N_m \times d}.$$
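To make the shapes in the scalar/vector update explicit, here is a dense NumPy paraphrase of one update step (a sketch under our assumptions; we give the vector features a channel dimension, (N, 3, d), as in the equivariant transformer, although the text above writes them as N_m x 3):

```python
import numpy as np

def equivariant_update(q1, q2, q3, s1, s2, v, r, W1, W2, W3):
    """One scalar/vector update step, paraphrasing the equations above.

    Shapes: q1, q2, q3 -> (N, d); s1, s2 -> (N, N, d); v -> (N, 3, d);
    r -> (N, 3); W1, W2, W3 -> (d, d).
    """
    # w_i = sum_j s1_ij * v_j + s2_ij * (r_i - r_j) / ||r_i - r_j||
    diff = r[:, None, :] - r[None, :, :]                        # (N, N, 3)
    dist = np.linalg.norm(diff, axis=-1, keepdims=True)
    unit = np.divide(diff, dist, out=np.zeros_like(diff), where=dist > 0)
    w = (np.einsum("ijd,jcd->icd", s1, v)
         + np.einsum("ijd,ijc->icd", s2, unit))                 # (N, 3, d)
    # Delta z = q1 + q2 * <W_Linear1 v, W_Linear2 v> (inner product over space)
    dz = q1 + q2 * np.einsum("icd,icd->id", v @ W1, v @ W2)     # (N, d)
    # Delta v = w + q3 * W_Linear3 v
    dv = w + q3[:, None, :] * (v @ W3)                          # (N, 3, d)
    return dz, dv

rng = np.random.default_rng(0)
N, d = 4, 8
dz, dv = equivariant_update(*(rng.normal(size=s) for s in
                              [(N, d), (N, d), (N, d), (N, N, d), (N, N, d),
                               (N, 3, d), (N, 3), (d, d), (d, d), (d, d)]))
print(dz.shape, dv.shape)  # (4, 8) (4, 3, 8)
```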
The equivariant coarse-grained representation is then combined with the node and edge features:

$$\hat{z}_m = \mathrm{LayerNorm}\!\left(W_C[\hat{z}_m, x_m, W_E f_m]\right) \in \mathbb{R}^{N_m \times d}.$$
If $\hat{z}_m$ originates from a protein-target complex, it is encoded by an equivariant high-order graph neural network Satorras et al. (2021); Schütt et al. (2021); Thölke & De Fabritiis (2022). Our choice is to develop Kim et al. (2021) into an equivariant graph transformer for the complex network, which follows
$$\hat{z}_m = \mathrm{Enc}_{k \to l}(\hat{z}_m) = \mathrm{Attn}_{k \to l}(\hat{z}_m) + L^2_{l \to l}\!\left(\mathrm{Activation}\!\left(L^1_{l \to l}\!\left(\mathrm{Attn}_{k \to l}(\hat{z}_m)\right)\right)\right) \in \mathbb{R}^{N^l_m \times d'},$$

$$\mathrm{Attn}_{k \to l}(\hat{z}_m)_j = \sum_{h=1}^{H} \sum_{\mu} \sum_{i} \alpha^{h,\mu}_{i,j}\, \hat{z}_{m,i}\, W^{V_2}_{h,\mu} W^{O}_{h,\mu},$$

where in the first layer $k = 1$, $H$ is the number of heads, $L^1_{l \to l}: \mathbb{R}^{N^l_m \times d} \to \mathbb{R}^{N^l_m \times d'}$, and $L^2_{l \to l}: \mathbb{R}^{N^l_m \times d'} \to \mathbb{R}^{N^l_m \times d}$. Each attention $\alpha^{h,\mu} \in \mathbb{R}^{n^{k+l}}$ is computed from $\hat{z}_m \in \mathbb{R}^{n^k \times d}$ as

$$\alpha^{h,\mu}_{i,j} = \begin{cases} \dfrac{\sigma(Q^{\mu}_j, K^{\mu}_i)}{\sum_{i' \mid (i',j) \in \mu} \sigma(Q^{\mu}_j, K^{\mu}_{i'})}, & (i,j) \in \mu, \\[4pt] 0, & \text{otherwise,} \end{cases} \qquad Q^{\mu} = L^{\mu}_{k \to l}(\hat{z}_m), \quad K^{\mu} = L^{\mu}_{k \to k}(\hat{z}_m).$$
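The normalization over an equivalence class µ amounts to a masked, column-wise normalization of pairwise scores; a toy NumPy sketch (with σ taken to be an exponentiated dot product, which is an assumption, and the order-k tuple indexing flattened away):

```python
import numpy as np

def masked_attention(Q, K, mask):
    """alpha_{i,j} = sigma(Q_j, K_i) / sum_{i'|(i',j) in mu} sigma(Q_j, K_{i'}) on mu, else 0."""
    sigma = np.exp(Q @ K.T).T                    # entry (i, j) holds sigma(Q_j, K_i)
    scores = np.where(mask, sigma, 0.0)          # zero outside the class mu
    denom = scores.sum(axis=0, keepdims=True)    # normalize over i for each j
    return np.divide(scores, denom, out=np.zeros_like(scores), where=denom > 0)

# Toy example: 3 "atoms"; mask[i, j] is True iff the pair (i, j) belongs to mu.
rng = np.random.default_rng(0)
Q, K = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
mask = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1]], dtype=bool)
alpha = masked_attention(Q, K, mask)
print(alpha.sum(axis=0))  # each column sums to 1 over its masked entries
```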
More details can be found in Kim et al. (2021). We augment $\hat{z}_m \in \mathbb{R}^{N^l_m \times d'}$ to an equivariant form by

$$s^1_{m,ij}, s^2_{m,ij}, s^3_{m,ij} = \mathrm{split}(V_j) \in \mathbb{R}^{l \times d'}, \qquad V = W_{V_2}\, \hat{z}_m,$$

and

$$y_m \in \mathbb{R}^{N^l_m \times 3d'}, \qquad y_{m,i} = W_{O_2}\!\left(\sum_{j}^{N} a_{i,j} \cdot s^3_{ij}\right).$$

The features $y_m$ are then split into three features of equal size, $q^1_m, q^2_m, q^3_m \in \mathbb{R}^{N^l_m \times d'}$:

$$\Delta \hat{z}_m = q^1_m + q^2_m \odot \langle W_{Linear1'}\, v_m, W_{Linear2'}\, v_m \rangle \in \mathbb{R}^{N^l_m \times d'},$$

where $v_m \in \mathbb{R}^{N^l_m \times 3}$ is again initialized to zero, i.e., $v_m = 0^{N^l_m \times 3}$. For $v_m$,

$$\Delta v_m = w_m + q^3_m \odot W_{Linear3'}\, v_m, \qquad w_{m,i} = \sum_{j}^{N} s^1_{m,ij} \odot v_{m,j} + s^2_{m,ij} \odot \frac{r_{m,i} - r_{m,j}}{\|r_{m,i} - r_{m,j}\|},$$

and $\tilde{z}_{ptc} = \hat{z}_m + \Delta \hat{z}_m \in \mathbb{R}^{N^l_m \times d'}$, $v_m = v_m + \Delta v_m \in \mathbb{R}^{N^l_m \times 3}$. In the last layer, we set $l = 1$ and obtain our equivariant fine-grained complex representation

$$\tilde{z}_{ptc} = \mathrm{LayerNorm}(\tilde{z}_{ptc}) \in \mathbb{R}^{N_m \times d'}.$$
This gives the fine-grained representation for the protein-target complex.
If $\hat{z}_m$ instead originates from a molecule, it is encoded by a shallow graph neural network Kipf & Welling (2016); Luan et al. (2020); Hua et al. (2022). Our choice is the simplest graph convolutional network Kipf & Welling (2016):

$$\tilde{z}_{mol} = \mathrm{LayerNorm}\!\left(\mathrm{Activation}\!\left(e_m\, \hat{z}_m\, W_m\right)\right) \in \mathbb{R}^{N_m \times d'}.$$
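A minimal sketch of this molecule branch (the activation and normalization choices are our assumptions):

```python
import torch
import torch.nn as nn

class MoleculeGCNLayer(nn.Module):
    """z_mol = LayerNorm(Activation(e_m z_hat_m W_m)), with e_m the adjacency."""
    def __init__(self, d: int, d_out: int):
        super().__init__()
        self.W = nn.Linear(d, d_out, bias=False)
        self.norm = nn.LayerNorm(d_out)
        self.act = nn.SiLU()

    def forward(self, e_m: torch.Tensor, z_hat_m: torch.Tensor) -> torch.Tensor:
        # e_m: (N_m, N_m) adjacency; z_hat_m: (N_m, d) coarse-grained representation.
        return self.norm(self.act(self.W(e_m @ z_hat_m)))

layer = MoleculeGCNLayer(d=128, d_out=64)
print(layer(torch.eye(5), torch.randn(5, 128)).shape)  # torch.Size([5, 64])
```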
It is then fed into the downstream task-specific prediction module.

1. What is the main contribution of the paper regarding molecular property prediction?
2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to share parameters across different modalities?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What questions does the reviewer have regarding the paper, such as the need for more information on hyperparameter tuning, ablation studies, and model details?

Summary Of The Paper
Summary
Multi-Dataset Multi-Task Framework for Learning Molecules and Protein-target Interactions Properties
What is the problem?
Predicting relevant biomedical properties of molecules of various types (including macromolecules like proteins), despite the lack of large labeled datasets in these modalities.
Why is it impactful?
This form of biomedical prediction is increasingly important in computationally aided drug discovery.
Why is it technically challenging/interesting (e.g., why do naive approaches not work)?
There are foundationally two main technical challenges: First, relative to the size of the space, we have very limited labeled data, and, second, the input modality is not a well understood modality like text or images, but instead is composed of graphs of highly varying size with a limited underlying vocabulary.
The main technical challenge that this work addresses is the limited dataset sizes available.
Why have existing approaches failed?
Existing approaches have been limited to processing different molecule types (here, this seems to mean strictly small molecules vs. macromolecular complexes) separately.
What is this paper's contribution?
The authors propose a multi-modal, multi-task learning solution where parameters are shared across both small molecules and macromolecular complexes.
In particular, their model first embeds all molecules in a shared space, by leveraging a shared atomic embedding layer, followed by a shared EGNN to produce modestly contextualized atomic embeddings for the molecular structure. Next, small molecules are processed with a simple graph convolutional neural network, and macromolecules are processed with a 3D equivariant graph transformer. The authors claim their modifications to Kim et al., 2021, to form their 3D equivariant graph transformer, are technically sophisticated, but they do not offer a sufficiently clear explanation of this to justify the claim. All details of this component are relegated to the appendix, and are presented in a very dense, very unclear format which hinders my ability to gauge the true sophistication of that contribution.
After these modality-specialized encoders, independent losses are computed across all tasks of interest, before being pooled with a weighted average operation where weights are determined to equalize the overall contribution of each task despite differing amounts of training data.
The authors do omit some important model details, such as what cutoffs their model imposes on molecule size and how they decide whether or not to use the molecule or protein embedding system (given that their model relies on the fact that macromolecules are just bigger molecules, do they use a size cutoff to determine which network a molecule should be routed through?).
How do these methods compare to prior works?
The authors claim that their work is the first to jointly embed small molecules and macro molecules, which is true to the best of my (limited) knowledge. I do not see any other aspects here that show significant novelty.
How do they validate their contributions?
The authors validate their work across various molecular and protein-specific benchmarks, finding significant improvements with their approach as a general rule. However, these improvements are hard to contextualize, as the authors are missing key details about their experiments, such as how they performed hyperparameter tuning and architecture search, how their model performed under various ablations, and any statistical significance assessments of their results.
Strengths and Weaknesses
Key Strengths (reasons I would advocate this paper be accepted)
I think it is a great idea to try to process molecules and proteins together, as these two modalities are highly related at both macro and micro levels.
You report strong numbers here, in particular on Chembl and PDBBind.
Key Weaknesses (reasons I would advocate this paper be rejected)
You don't do enough here to prove that the novel aspects of your approach are really providing a robust improvement over prior work. This problem is an aggregate of several subproblems:
You don't report variances on any of your numbers, despite the fact that you claim you've run multiple samples. Especially given how small the differences are between your results and baselines, in some instances, you need to report variance numbers and run statistical significance tests to assess claimed improvements, as otherwise I'm left wondering if your results are just due to chance.
You don't describe how you performed hyperparameter tuning, which makes me suspicious that your results could be due to disparate "tuning" effort between baselines and your model.
You don't provide any ablation studies to prove that the improvements you show are actually due to the novel aspects of your model. For example, is it really the case that sharing parameters between molecules and proteins drives any of your performance improvements here, or is it just the architectures you use? Is it really the case that the architectures you've chosen for your modality-specific encoders should actually differ, or would it work well if you swapped them / only used one? How would your system work if you embedded proteins via graphs or sequences of amino acids, as is standard, rather than atoms (which would obviously break the embedding sharing between modalities).
You lack sufficient information here to fully explain your model. In particular, your description of your fine-grained network, which you claim is novel, is far too dense and shouldn't be as fully relegated to the appendix. This, combined with the lack of details on things like hyperparameter tuning, make your work very non-reproducible.
Minor Strengths (things I like, but wouldn't sway me on their own)
The paper reads relatively well in general.
Minor Weaknesses (things I dislike, but wouldn't sway me on their own)
You're overselling your work here, in some borderline, but non-trivial ways. At its core, you're proposing a method that shares an embedding layer across molecules and proteins, then shares model encoders across tasks within the molecule/protein categories, and is jointly trained across a variety of tasks. This is interesting, and I like the almost hierarchical manner of parameter sharing in this multi-task learning setup. However, you describe your work as though you've invented a fully new model architecture (including new encoders) to operate over multiple datasets and an unspecified number of molecule types simultaneously, which sounds much more impressive. I was left a little underwhelmed when I realized your approach's actual scope is limited to joint embedding of traditional "molecules" and "proteins" and only operates over "multiple datasets" in a slightly novel, but still pretty standard multi-task learning format.
You've included unnecessary details at a number of points, which I think has limited your ability to include more relevant pieces of information. For example, you could greatly minimize your description of the atomic embedding later, the loss weighting details (as this is not particularly technically sophisticated), and details on your experimental tasks setup (by removing repeated details that are shared across all setups, such as the number of repetitions, and relegating information about task-specific descriptions and metrics to a table (or even a supplementary table, if necessary). This isn't really a big deal, except that you are missing important information.
You have slightly limited novelty here, as the only really novel part is that you jointly embed proteins and molecules.
Clarity, Quality, Novelty, and Reproducibility
Clarity
In general, the clarity of this work is good -- here are a few small things:
You should use parenthetical in-text citations (with \citep) rather than just dumping them into the text.
Your bolding in your results tables is off. For ethanol energy, you bold MDMT-GL even though ET reports a lower MAE.
You should revise Figure 1 for clarity and to remove what appears to be hand-written or hand-drawn labels on the blue arrows?
Embedding layers are very standard -- the example and extended content at the bottom of page 3 describing those is likely unnecessary.
I've never seen anyone refer to a macromolecule in the way you do oxidoreductase --- to me, the notation you use seems to imply the molecule is structurally arranged as CCCCC...CHHH....H... which is obviously wrong. I might recommend just removing the formula as, regardless of whether this is an appropriate notation, the formula is not relevant to your point.
Quality
I think the work could be of good quality, but there isn't enough detail included to be sure.
Novelty
The work has modest novelty. I've not seen anyone process proteins and molecules together in this way, but that is the only component of this that seems significantly novel to me.
Reproducibility
The work is definitely not reproducible -- it lacks major details that would be needed.
ICLR

Title
Multi-Dataset Multi-Task Framework for Learning Molecules and Protein-target Interactions Properties
Abstract
Molecular property prediction and protein-target interaction prediction with deep learning have become increasingly popular in drug discovery pipelines in recent years. An important factor that limits the development of these two areas is the insufficiency of labeled data. One promising direction to address this problem is to learn a shared embedding from multiple prediction tasks within one molecular type, e.g., molecule or protein, because different tasks might actually share similar coarse-grained structural information. Unlike the previous methods, in this paper, we first argue that, due to the possible local structural similarity between molecules and protein-target complexes, coarse-grained latent embeddings can be found across different molecular types. To take advantage of this, we propose a new Multi-Dataset Multi-Task Graph Learning (MDMT-GL) framework, where we are able to make the most of the labeled data by simultaneously training molecule property prediction and protein-target interaction prediction together. MDMT-GL augments molecular representations with equivariant properties, 2D local structures, and 3D geometric information. MDMT-GL can learn coarse-grained embeddings for molecules and proteins, and also distinguish fine-grained representations in various downstream prediction tasks with unique characteristics. Experimentally, we implement and evaluate MDMT-GL on 2 molecule datasets and 2 protein-target datasets, consisting of 825 tasks and over 3 million data points. MDMT-GL achieves state-of-the-art performance on several tasks and shows competitive performance on others. These experimental results confirm that molecules and proteins indeed share some coarse-grained structures, that the coarse-grained embedding is trainable, and that their fine-grained embeddings are more representative. To the best of our knowledge, this is the first work to train multi-task learning across different molecular types, and to verify the structural similarity between molecules and protein-target complexes.
1 INTRODUCTION
The discovery and development of a new drug can take more than a decade and cost billions of dollars Hughes et al. (2011); Sliwoski et al. (2014). Therefore, to reduce costs, predicting the properties of molecules and protein-target complexes (e.g., heat capacity, force field, binding affinity) has become an essential component of the early stage of the drug discovery pipeline. Molecules and complexes are usually represented as graph-structured data Li et al. (2021); Maziarka et al. (2020); Thölke & De Fabritiis (2022), where atoms and bonds are nodes and edges, respectively, and graph neural networks are well suited to learning representations from such relational data Kipf & Welling (2016); Luan et al. (2021); Hua et al. (2022). As a result, graph-based deep learning methods that learn molecular graph representations have achieved great success in predicting molecule properties Schütt et al. (2018; 2021); Klicpera et al. (2020); Thölke & De Fabritiis (2022) and protein-target interactions Lim et al. (2019), but the data at hand are often insufficient, which limits model performance Sliwoski et al. (2014); Liu et al. (2022). Thus, reducing the amount of labeled data needed for the effective prediction of molecular and protein-target properties becomes a challenge in drug discovery.
To address the aforementioned issue, multi-task learning for molecular property prediction Tan et al. (2021) and protein-target interaction prediction Lee & Kim (2019); Hu et al. (2021); Liu et al. (2022) is gradually drawing attention from the drug discovery community. Their models always deal with a single molecular type, i.e., a molecule (or complex) is only used for multiple molecule property (or protein-target interaction) prediction tasks. The difficulty stems from the fact that knowledge from different molecular types cannot be easily decomposed and shared. However, we argue that due to the internal geometric and local structural similarities between the molecule and the protein-target complex, they should share similar coarse-grained latent embeddings Jain (2000); Bender & Glen (2004); Löfblom et al. (2010). Hence, we believe that representations of molecules and complexes could be coarse-grained and a coarse-grained latent embedding could be learned together under one learning framework. Embodiments should share internal geometric and local structural information across molecules and complexes from atomic perspectives. Eventually, the learning of protein representations can benefit from the learning of molecule representations, and vice versa.
Therefore, we propose a new learning framework, Multi-Dataset Multi-Task Graph learning (MDMTGL) for molecular property prediction and protein-target interaction prediction. MDMT-GL aims to make the best use of labeled data by transferring knowledge between molecules and complexes. The cross-dataset paradigm for multi-task learning enables the shared embedding to be more informative representations than the single-dataset paradigm. To the best of our knowledge, MDMT-GL is the first work to train molecular property prediction and protein-target interaction prediction together and to verify the structural similarities between the molecule and the protein-target complex. In addition to the major contribution, we also develop the 2D graph transformer proposed by Kim et al. (2021) into a 3D equivariant graph transformer for molecular dynamics, and the model is capable of capturing high-order atom interactions in 3D space. Moreover, unlike multi-task learning within a single dataset, the data imbalance of different datasets will lead to the task imbalance problem which is fatal to multi-task learning. To treat each task equally, we propose a weighted loss to balance the importance of the tasks, which is novel for MDMT-GL. The details of MDMT-GL are discussed in Sec. 3. Furthermore, in Sec. 4, the experimental results support our argument and show that molecules and complexes can share some similar coarse-grained structures, and the geometric and structural similarities can be learned to leverage any molecular prediction task.
2 RELATED WORK
2.1 MOLECULAR MULTI-TASK LEARNING
Molecular Multi-Task Learning (MTL) is mainly used to address the data insufficiency problem in drug discovery. Liu et al. (2019c) uses a general architecture of a shared representation module and multiple task-specific prediction modules for MTL. Tan et al. (2021) stacks a base regressor and classifier with an additional training stage on the expanded molecular feature space for the prediction of molecular properties. Lee & Kim (2019) finds that similarity within a target group can affect the performance of MTL in the prediction of protein binding. Liu et al. (2022) possesses the knowledge of task relations and constructs a task-relation graph to maximize the performance of MTL in protein targeting. However, the aforementioned methods do not transfer knowledge between molecules and protein-target complexes. Existing models only perform MTL on the same dataset, i.e., molecule or protein, but the MTL between molecule and protein has never been explored. In this work, we aim to make use of the shared information between molecules and proteins across various tasks, so that we can make the most and best use of the labeled data.
2.2 GRAPH NEURAL NETWORKS FOR PROPERTY PREDICTION
In drug discovery, people apply message-passing-based models to predict the properties of molecules and proteins. Schütt et al. (2018) respects essential quantum chemical constraints and models quantum interactions by modeling interactions of atoms at arbitrary positions in a molecule. Satorras et al. (2021) proposes a graph neural network, which is equivariant to rotations, translations, reflections, and permutations in 3D geometry, to model molecular dynamics. Thölke & De Fabritiis (2022) builds on top of the graph transformer and develops an equivariant graph transformer to predict quantum molecule properties. Lim et al. (2019) learns drug-target interactions by extracting the graph features of intermolecular interactions directly from 3D structural information on the protein-ligand binding pose. Li et al. (2021) proposes a structure-aware interactive graph neural network to preserve the
distance and angle information among atoms to learn interactions between proteins and ligands. Overall, our architecture mainly consists of two equivariant graph transformers that focus on longrange atom interactions and featurization of atomic types and coordinates, and a graph neural network to preserve local structure information.
3 MULTI-DATASET MULTI-TASK FRAMEWORK FOR LEARNING MOLECULES AND PROTEIN-TARGET COMPLEXES
As discussed in Sec. 1, the labeled data for molecules and protein-target complexes are often insufficient. Therefore, we strive to make the most of the available labeled data from molecule and protein datasets for various tasks. In other words, we aim to design an architecture that can learn simultaneously from different molecular and protein datasets, in which learning protein representations can benefit from learning molecule representations and vice versa. The core technical difficulty is how to identify their coarse-grained similar internal geometry and local structures, and to also differentiate their fine-grained representations for different conformation structures.
To achieve the goal, we divide our model into four components (1) a coarse-grained module, (2) a fine-grained data-specific module, (3) a task-specific prediction module, and (4) a multi-dataset multi-task loss (see the whole architecture in Fig. 1 and App. A).
The function of each module is as follows: (1) The coarse-grained module is designed to learn a coarse-grained representation of molecules and protein-target complexes. Common geometric and structural information can be obtained in molecules and complexes can be obtained. We will discuss the details in Sec. 3.1. (2) The fine-grained module will process the molecules-specific and complexes-specific representations separately. We will discuss it in Sec. 3.2. (3) Then, the data-type-specific representations are fed into different task-specific prediction modules to make predictions for various tasks, the details are discussed in Sec. 3.3. (4) Finally, weighted losses of all tasks are used to balance the importance of different tasks. We describe how to compute the MDMT loss in Sec. 3.4. The whole framework can be trained in an end-to-end manner. In Sec. 4, we experimentally show that the representations could be coarse-grained between molecules and protein-target complexes.
3.1 COARSE-GRAINED MODULE
Although having different conformation structures and dynamics, molecules and protein-target complexes are made of basic atoms and bonds, and should thus share fundamental internal geometric and local structural information Jain (2000); Bender & Glen (2004); Löfblom et al. (2010). For example, the carbon dioxide molecule O=C=O and methanoic acid H(C=O)OH have different conformation structures and different force fields, but they share the same carbon atom C and similar local structures around the carbon atoms, e.g., double bond with oxygen O. Thus, two carbon atoms could potentially share coarse-grained information about their local structures. The coarse-grained module is designed to capture such atomic-level similarities so that generalizable features between molecules and proteins can be learned.
To capture the atomic-level similarities, we give each basic atom a unique learnable embedding Schütt et al. (2018); Klicpera et al. (2020); Thölke & De Fabritiis (2022), which is shared by all compounds in all tasks across different datasets (see the atom-wise embedding layer in Fig. 1).This is the first time that the atomic-level coarse-grained representations are exploited in the MDMT setting for molecules and proteins. To be more specific, an input molecule or complex m = [a1, a2, ..., aNm ]
T ∈ NNm×1 is a 1D vector of the atoms that build m, where Nm is the number of atoms in m, ai is the number of atoms in the periodic table. The molecular embedding is zm = fatom(m), where fatom : NNm×1 → RNm×d projects a 1D molecule vector onto a 2D learnable embedding, where each row of the embedding represents a hidden atom feature, and d is the dimension of the embedding space. Take the carbon dioxide molecule O=C=O for example, its input is a 1D vector representation [8, 6, 8]T , where 8 and 6 are the number of atoms of oxygen and carbon in the periodic table, and its embedding follows zO=C=O = [fO(8), fC(6), fO(8)]T ∈ R3×d, where fC(6), fO(8) are the learnable embeddings for carbon and oxygen, respectively.
2D molecular local structures, 3D molecular geometric information, and equivariant property are important for coarse-grained representations to preserve physical constraints Löfblom et al. (2010);
Schütt et al. (2021). Therefore, to obtain the above capacities, we augment the coarse-fined representation zm by the following augmentation network faug, which can be an equivariant graph neural network Satorras et al. (2021); Schütt et al. (2021); Thölke & De Fabritiis (2022). The augmentation network faug takes zm, edge (bond) indices em ∈ [0, 1]Nm×Nm , edge (bond) features fm ∈ REm×fe and atom positions rm ∈ RNm×3 as input, and produces ẑm = faug(zm, rm, em,fm) ∈ RNm×d, which is an equivariant coarse-fined representation (see the augmentation network in Fig. 1). This design enables ẑm to learn the shared fundamental internal geometric and structural information across different tasks and datasets while preserving equivariant property.
In conclusion, the coarse-grained module consists of two components: (1) an atom-wise embedding layer and (2) an augmentation network. Atom-wise embedding layer is used to obtain an atom-wise coarse-grained representation zm for every input molecule or complex m, and the augmentation network augments every coarse-grained representation with equivariant property by 2D local structures and 3D geometric information to produce an equivariant coarse-grained representation ẑm.
In addition to the equivariant coarse-grained representations, different molecular types require fine-grained data-type specific representations to capture differences in conformation structure and geometric information for performing different downstream tasks. In Sec. 3.2, we will introduce the fine-grained data-specific module and discuss the initiative to have it.
3.2 FINE-GRAINED DATA-SPECIFIC MODULE
Previously in Sec. 3.1, we discuss how the coarse-grained module can learn atom-wise atom-wise coarse-grained representations to utilize the use of labeled molecules and complexes. And we discuss the initiative and reason to make coarse-grained representations fine-grained for downstream uses.
The chain of a protein-target complex (normally from 100 to more than 1000 atoms) is always significantly longer than the chain of a molecule (normally from 1 to 60 atoms), thus making atomwise interactions highly different, i.e., two atoms might be farther away in a long chain. They could potentially interact, and the high-order long-range interactions always exist, which should be captured between atoms in a complex but are not solid and required for molecules Luan et al. (2019); Morris et al. (2019). For example, oxidoreductase C879H1426N250O260S3 is a protein of 2818 atoms while carbon dioxide CO2 is a molecule that has only 3 atoms.
With this in mind, to distinguish the different conformation structures resulting from the chain-size difference between molecules and complexes, in the fine-grained data-specific module, we process coarse-grained representations ẑm of molecules and complexes in different ways. To be more specific, we use high-order graph networks for large graphs Morris et al. (2019) like complexes to capture high-order interactions, and shallow graph networks for small graphs like molecules where high-order interactions are not solid Luan et al. (2019).
Therefore, we divide our fine-grained module into two data-specific networks, (1) a fine-grained complex network fptc that has the ability to capture high-order long-range interactions for atoms in complexes (see the long-chain complex network in Fig. 1), and (2) a shallow fine-grained molecule network fmol for molecules (see the short-chain molecule network in Fig. 1).
The fine-grained complex network fptc can be any high-order graph neural network Li et al. (2021); Kim et al. (2021); Thölke & De Fabritiis (2022). We adopt and extend the 2D high-order transformer Kim et al. (2021) into a 3D equivariant transformer (see App. A), so our fine-grained complex network fptc is capable of capturing any-order atom interactions while preserving the equivariant property, which is novel. The fine-grained protein-target complex embedding follows z̃ptc = fptc(ẑm, rm, em, fm, xm) ∈ RNm×d′, where xm ∈ RNm×fn denotes atom features and d′ denotes the dimension of the embedding.
For the fine-grained molecule network fmol, the idea is fairly simple. Since the equivariant property is closely related to high-order long-range interactions in 3D space Satorras et al. (2021), which are not required in small molecule graphs, we only need a shallow graph neural network as the fine-grained molecule network fmol to model local message passing in short-chain molecules Kipf & Welling (2016); Luan et al. (2020); Hua et al. (2022). Considering the computational cost, we choose the simplest graph convolutional network Kipf & Welling (2016) for fmol. The fine-grained molecule embedding follows z̃mol = fmol(ẑm, em, fm, xm) ∈ RNm×d′.
Overall, we have a fine-grained complex network fptc, which is a high-order equivariant graph network, and a fine-grained molecule network fmol, which is a shallow graph network. We treat molecules and protein-target complexes differently in the fine-grained data-specific networks because complexes are always significantly longer than molecules and high-order long-range interactions need to be captured among their atoms. For a coarse-grained representation ẑm, if it originally comes from a protein-target complex, it is embedded by the complex network fptc; if it originally comes from a molecule, it is embedded by the molecule network fmol, as sketched below.
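To make this routing concrete, here is a minimal sketch of the dispatch; the class and argument names (`FineGrainedModule`, `is_complex`) are hypothetical placeholders for exposition, not the actual implementation.

```python
import torch.nn as nn

class FineGrainedModule(nn.Module):
    """Dispatch coarse-grained representations to a data-type-specific network.

    `complex_net` and `molecule_net` stand in for the high-order equivariant
    network f_ptc and the shallow GCN f_mol described above.
    """
    def __init__(self, complex_net: nn.Module, molecule_net: nn.Module):
        super().__init__()
        self.complex_net = complex_net
        self.molecule_net = molecule_net

    def forward(self, z_hat, r, e, f, x, is_complex: bool):
        if is_complex:
            # long chains: capture high-order long-range interactions
            return self.complex_net(z_hat, r, e, f, x)
        # short chains: shallow local message passing suffices
        return self.molecule_net(z_hat, e, f, x)
```

In practice, the `is_complex` flag would be determined by the dataset a sample comes from.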
3.3 TASK-SPECIFIC PREDICTION MODULE
The task-specific prediction module distinguishes the fine-grained representations z̃ptc and z̃mol and generates the output ŷtask for each task. In the multi-task learning setting, each task should have its own specific prediction network ftask Collobert & Weston (2008); Liu et al. (2019c); Aribandi et al. (2021) (see the task-specific prediction module in Fig. 1). In practice, our task-specific prediction module consists of 825 output networks corresponding to 825 prediction tasks from the following 4 datasets.
QM9 (12 prediction networks) QM9 is a dataset of molecules consisting of 12 tasks Ramakrishnan et al. (2014). We use the specialized output networks in Thölke & De Fabritiis (2022) for the prediction of molecular dipole moment µ and the prediction of electronic spatial extent 〈R2〉. The gated equivariant blocks Weiler et al. (2018); Schütt et al. (2021) are used for the remaining 10 tasks.
MD17 (14 prediction networks) MD17 is a dataset of molecules consisting of 7 sub-datasets (Aspirin, Ethanol, Malondialdehyde, Naphthalene, Salicylic Acid, Toluene, Uracil) Chmiela et al. (2017). There are 14 tasks in total, where each sub-dataset has 2 prediction tasks for molecular energy E and forces F⃗. We use the gated equivariant blocks proposed in Weiler et al. (2018); Schütt et al. (2021) to predict E, and F⃗ is calculated as the negative gradient of E with respect to the atomic coordinates, F⃗ = −∂E/∂r⃗ Thölke & De Fabritiis (2022).

ChEMBL (798 prediction networks) ChEMBL is a protein-target dataset originally proposed in Mendez et al. (2019). Furthermore, 3 sub-datasets ChEMBL10, ChEMBL50, ChEMBL100 were developed by Mayr et al. (2018); Liu et al. (2022) for multi-task learning, and each sub-dataset contains 406, 263, and 129 regression tasks, respectively. We apply a linear function over z̃ptc followed by sum pooling to get an output for each regression task.
PDBbind (1 prediction network) PDBbind Wang et al. (2005) is a protein-target dataset consisting of 1 regression task for protein-ligand binding affinity prediction. We apply a linear function over z̃ptc and apply sum pooling to predict protein-ligand binding affinity.
The loss Li for each task is calculated based on the outputs ŷi for that task and the ground-truth labels yi, where i is the task index. All Li are weighted and summed up into a multi-dataset multi-task loss LMDMT for optimization. One principle of the LMDMT design is to treat each task as equally important. This principle naturally holds in conventional multi-task learning Mayr et al. (2018). But when it comes to the multi-dataset setting, the data imbalance between different molecular datasets breaks this principle. In Sec. 3.4, we will discuss this problem and how to address it through the design of the weighted loss LMDMT.
3.4 MULTI-DATASET MULTI-TASK LOSS
In MDMT-GL, we face the data imbalance problem. The problem only occurs when we train our model on different datasets simultaneously, e.g., molecules and protein-target complexes, because the number of labeled molecules is always greater than the number of labeled protein-target complexes, so the model will focus more on molecule datasets than on protein datasets. This problem is specific to the multi-dataset setting and does not exist in previous works on multi-task learning with a single molecular type Tan et al. (2021); Lee & Kim (2019); Hu et al. (2021); Liu et al. (2022).
To address this issue, we propose a weighted loss, specific to MDMT-GL, that mitigates the data imbalance between different molecular datasets. We design the loss so that all tasks are treated equally regardless of the size of their labeled training data.
Suppose that we have U tasks with n_1, n_2, \ldots, n_U labeled training examples for each task. For task i we obtain predictions \hat{y}_{i,1}, \hat{y}_{i,2}, \ldots, \hat{y}_{i,n_i} and compare them with ground-truth labels y_{i,1}, y_{i,2}, \ldots, y_{i,n_i}, giving the loss of the i-th task

L_i = \sum_{j=1}^{n_i} l_i(y_{i,j}, \hat{y}_{i,j}).

The multi-dataset multi-task loss is the weighted sum

L_{MDMT} = \sum_{i=1}^{U} c_i L_i.

To balance the weights of the L_i, we want

\sum_{j=1}^{n_1} c_1 = \sum_{j=1}^{n_2} c_2 = \cdots = \sum_{j=1}^{n_U} c_U,

which leads to c_1 n_1 = c_2 n_2 = \cdots = c_U n_U. In practice, suppose n_{min} = \min(n_1, n_2, \ldots, n_U) = n_k; then we set c_k = 1 and, for any i \ne k, we have c_i = n_{min}/n_i. We will discuss the implementation in Sec. 4.
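As a concrete illustration, the following minimal Python sketch implements this weighting scheme; the dictionary-based interface is an assumption made for exposition, not the actual training code.

```python
# Minimal sketch of the balanced multi-dataset multi-task loss.
def mdmt_loss(task_losses, task_sizes):
    """task_losses: maps task i -> summed per-task loss L_i.
    task_sizes:  maps task i -> number of labeled training examples n_i."""
    n_min = min(task_sizes.values())
    total = 0.0
    for task, loss in task_losses.items():
        c = n_min / task_sizes[task]  # c_i = n_min / n_i, so c_i * n_i is constant
        total += c * loss
    return total
```

With these weights, every task contributes the same effective number of examples (n_min) to the loss, regardless of the size of its dataset.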
4 EXPERIMENTS
In this section, we evaluate the Multi-Dataset Multi-Task Graph Learning framework (MDMT-GL) on real-world molecule and protein-target complex datasets, and show that our proposed learning method can be used to better learn molecule and complex representations. We briefly introduce our datasets in Sec. 3.3. We conduct experiments across 2 molecule datasets and 2 complex datasets, consisting of 825 tasks and 3,139,011 labeled molecular graphs. We divide the experiment section into two subsections: discussions of molecule datasets in Sec. 4.1 and discussions of protein datasets in Sec. 4.2. In more detail, we discuss the performance of the model on QM9 in Sec. 4.1.1, on MD17 in Sec. 4.1.2, on ChEMBL in Sec. 4.2.1, and on PDBbind in Sec. 4.2.2.
4.1 MOLECULE DATASETS
In this section, we discuss our model performance on molecule datasets including QM9 Ramakrishnan et al. (2014) and MD17 Chmiela et al. (2017). We compare our MDMT-GL with several classic baselines and the state-of-the-art models in Tab. 1&2. The experimental results show that learning molecule representations can benefit from learning protein representations.
4.1.1 QM9
Data The QM9 dataset reports computed geometric, thermodynamic, energetic, and electronic properties for locally optimized geometries. We use the same data split as in Schütt et al. (2018); Klicpera et al. (2020); Thölke & De Fabritiis (2022), where the labeled molecules are divided into 110,000 / 10,000 / 10,831 for training / validation / testing.
Comparison We compare MDMT-GL with several popular baselines and state-of-the-art models, including SchNet Schütt et al. (2018), EGNN Satorras et al. (2021), PhysNet Unke & Meuwly (2019), DimeNet++ Klicpera et al. (2020), Cormorant Anderson et al. (2019), PaiNN Schütt et al. (2021), and Equivariant Transformer (ET) Thölke & De Fabritiis (2022), and report the results in Tab. 1. The results of baselines are obtained from Thölke & De Fabritiis (2022), and the MDMT-GL results are averaged over three runs.
From Tab. 1, we can observe that MDMT-GL outperforms most popular baselines with significant improvements on 6 out of 12 QM9 targets, including ϵ_HOMO, ϵ_LUMO, ∆ϵ, U_0, U, and G. MDMT-GL shows very competitive performance and delivers significant improvements in the challenging molecular chemical property prediction problem via multi-dataset learning.
4.1.2 MD17
Data MD17 consists of molecular dynamics trajectories of small organic molecules, including both energies and forces. We use the same data split as in previous works Schütt et al. (2018); Klicpera et al. (2020); Thölke & De Fabritiis (2022). For each sub-dataset, we split the data into a training set with 950 molecules and a validation set with 50 molecules, leaving the remaining molecules for testing.
Comparison We compare MDMT-GL with several popular baselines and state-of-the-art models, including SchNet Schütt et al. (2018), PhysNet Unke & Meuwly (2019), DimeNet Klicpera et al. (2020), PaiNN Schütt et al. (2021), and Equivariant Transformer (ET) Thölke & De Fabritiis (2022), and report the results in Tab. 2. The baseline results are obtained from Thölke & De Fabritiis (2022), and the MDMT-GL results are averaged over three runs.
From Tab. 2, we can observe that MDMT-GL outperforms the most popular baselines with significant improvements on 8 out of 14 MD17 tasks, the exceptions being energy and forces for naphthalene, forces for salicylic acid, energy and forces for toluene, and forces for uracil. MDMT-GL shows very competitive performance and delivers significant improvements in the challenging molecular dynamics trajectory prediction problem via multi-dataset learning.
4.2 PROTEIN-TARGET DATASETS
In this section, we discuss our model performance on protein-target complex datasets including ChEMBL Mendez et al. (2019) and PDBbind Wang et al. (2005). We compare our MDMT-GL with several classic baselines and the state-of-the-art model in Tab. 3&4. The experimental results show that learning protein representations can benefit from learning molecule representations.
4.2.1 CHEMBL
Data The ChEMBL dataset is originally proposed by Mendez et al. (2019) for protein-targeting, but the authors of Liu et al. (2022) modify the original dataset and provide three sub-datasets ChEMBL10, ChEMBL50, ChEMBL100 for multi-task learning. In Liu et al. (2022), they claim the task numbers are 382/ 152/ 132 (666 tasks in total) for ChEMBL10/ ChEMBL50/ ChEMBL100, but we actually get 406/ 263/ 129 (798 tasks in total) when running their data generation steps. So, we run and test baselines and MDMT-GL on 406/ 263/ 129 tasks, and report the averaged results over three runs. We use the same data split as in Liu et al. (2022), splitting the labeled data into the ratio of 80%/ 10%/ 10% for training/ validation/ testing.
Comparison We compare MDMT-GL with several classic multi-task learning baselines and stateof-the-art models, including Multi-Task Learning (MTL) Mayr et al. (2018), Uncertainty Weighing (UW) Kendall et al. (2018), GradNorm Chen et al. (2018), Dynamic Weight Average (DWA) Liu et al. (2019b), Loss-Balanced Task Weighting (LBTW) Liu et al. (2019a), State Graph Neural Network (SGNN) Liu et al. (2022), and Energy-Based State Graph Neural Network (SGNN-EBM) Liu et al. (2022), and report the averaged results in Tab. 3.
From Tab. 3, we can observe that MDMT-GL outperforms all popular baselines with marginal improvements in AUC-ROC score on ChEMBL10, ChEMBL50, and ChEMBL100. By simultaneously learning from other molecular datasets and tasks, the MDMT-GL framework can make the best use of the data and improve its protein-targeting predictions.
4.2.2 PDBBIND
Data The PDBbind dataset provides 3D binding structures of protein-ligand complexes with experimentally determined binding affinities. In our experiment, we use the PDBbind2016 dataset, which is the most widely used PDBbind dataset in previous works Lim et al. (2019); Li et al. (2021). We use the same data split as in Li et al. (2021).
Comparison We compare MDMT-GL with several classic baselines and state-of-the-art models, including Spatial Graph Convolution Network (SGCN) Danel et al. (2020), GNN-DTI Lim et al. (2019), DMPNN Yang et al. (2019), Molecule Attention Transformer (MAT) Maziarka et al. (2020), DimeNet Klicpera et al. (2020), CMPNN Song et al. (2020), and Structure-aware Interactive Graph Network (SIGN) Liu et al. (2022). The baseline results are obtained from Li et al. (2021), and MDMT-GL results are averaged over three runs.
From Tab. 4, we can observe that MDMT-GL outperforms all popular baselines with significant improvements in RMSE, MAE, SD, and R scores on the PDBbind dataset. MDMT-GL shows very competitive performance and delivers significant improvements on the challenging protein-binding affinity prediction problem via multi-dataset learning.
Overall, we can see that the Multi-Dataset Multi-Task Graph Learning framework (MDMT-GL) is very competitive on all tasks. We can conclude that MDMT-GL enables the learning of protein representations to benefit the learning of molecule representations, and vice versa. The strong experimental results show that our proposed learning method makes full use of the labeled training data, and that this learning framework can mitigate the lack of labeled data in drug discovery.
5 CONCLUSION AND FUTURE WORK
In conclusion, our proposed Multi-Dataset Multi-Task Graph Learning (MDMT-GL) framework is able to address the data insufficiency problem by concurrently training the representations of molecules and protein-target complexes for multiple prediction tasks. The strong experimental results show that there does exist transferable information between molecules and protein-target complexes and that it is learnable. We can also say that the learning of protein representations can facilitate the learning of molecule representations, and vice versa. In the future, we could incorporate quantum chemical constraints and prior knowledge into the coarse-grained network to capture more informative coarse-grained embeddings.
A MODEL ARCHITECTURE
We introduce the full MDMT-GL architecture. Suppose we are given an input molecular data of Nm atoms and Em edges, its atom numbers m ∈ N^{Nm×1}, atom features xm ∈ R^{Nm×fn}, atom positions rm ∈ R^{Nm×3} in 3D space, edge indices em ∈ [0, 1]^{Nm×Nm}, and edge features fm ∈ R^{Em×fe}, where fn, fe denote the numbers of node features and edge features, respectively.
First, we embed the atom numbers m to an atom-wise coarse-grained representation zm by an atom-embedding transformation:
zm = Watom m ∈ RNm×d
where d is the hidden feature dimension.
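For illustration, this embedding amounts to a lookup table indexed by atomic number; the vocabulary size of 100 and hidden size d = 128 below are illustrative choices, not those used in the paper.

```python
import torch
import torch.nn as nn

# Sketch of z_m = W_atom m as an embedding lookup over atomic numbers.
d = 128
atom_embedding = nn.Embedding(num_embeddings=100, embedding_dim=d)

m = torch.tensor([6, 8, 1, 1])  # atomic numbers, e.g. C, O, H, H
z_m = atom_embedding(m)         # coarse-grained representation, shape (N_m, d)
```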
Then the coarse-grained representation zm will be augmented to an equivariant coarse-grained representation ẑm by an augmentation network. There are many equivariant graph neural network options Satorras et al. (2021); Schütt et al. (2021); Thölke & De Fabritiis (2022), and our choice is the equivariant transformer proposed in Thölke & De Fabritiis (2022).
Before the coarse-grained representation gets augmented, an exponential normal radial basis function that resembles a continuous-filter convolution is used to filter the neighborhood of an atom Schütt et al. (2018). The distance d_{ij} between atoms i and j is expanded as:
e^{RBF}_k = \phi(d_{ij}) \exp\left(-\beta_k\left(\exp(-d_{ij}) - \mu_k\right)^2\right), \quad \phi(d_{ij}) = \begin{cases} \frac{1}{2}\left(\cos\left(\frac{\pi d_{ij}}{d_{cut}}\right) + 1\right), & \text{if } d_{ij} \le d_{cut} \\ 0, & \text{if } d_{ij} > d_{cut} \end{cases}
where \beta_k, \mu_k are fixed parameters specifying the center and width of radial basis function k. \beta_k is initialized as (2K^{-1}(1 - \exp(-d_{cut})))^{-2}, and \mu_k is initialized with values equally spaced between \exp(-d_{cut}) and 1 for all k, as proposed by Unke & Meuwly (2019). The cosine cutoff \phi(d_{ij}) is used to ensure a smooth transition to 0 as d_{ij} approaches d_{cut}.
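A minimal sketch of this radial basis expansion, under the initialization just described, is given below; the defaults K = 50 and d_cut = 5.0 are illustrative.

```python
import math
import torch

def exp_normal_rbf(d_ij: torch.Tensor, K: int = 50, d_cut: float = 5.0) -> torch.Tensor:
    """Exponential normal radial basis expansion with cosine cutoff.

    d_ij: pairwise distances of any shape; returns shape (*d_ij.shape, K).
    """
    mu = torch.linspace(math.exp(-d_cut), 1.0, K)
    beta = torch.full((K,), (2.0 / K * (1.0 - math.exp(-d_cut))) ** -2)
    # cosine cutoff, smoothly decaying to 0 at d_cut
    phi = 0.5 * (torch.cos(math.pi * d_ij / d_cut) + 1.0) * (d_ij <= d_cut).float()
    rbf = torch.exp(-beta * (torch.exp(-d_ij.unsqueeze(-1)) - mu) ** 2)
    return phi.unsqueeze(-1) * rbf
```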
The neighborhood embedding nm for m is then defined as:
n_m \in R^{N_m \times d}, \quad n_{m,i} = \sum_{j=1}^{N} z_{m,j} \odot W_{Filter}\, e^{RBF}(d_{ij}) \in R^d,
where each row i corresponds to the neighborhood embedding of atom i of m. We update the coarse-grained representations zm with the neighborhood embedding nm:
zm = LayerNorm(WTransform[zm,nm] + bTransform).
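The following sketch implements the neighborhood embedding and update above; it takes a precomputed RBF expansion as input, the dimensions are illustrative, and masking of self-interactions is omitted for brevity.

```python
import torch
import torch.nn as nn

class NeighborEmbedding(nn.Module):
    """Sketch of n_{m,i} = sum_j z_{m,j} * (W_Filter e^RBF(d_ij)) and the
    subsequent update z_m = LayerNorm(W_Transform [z_m, n_m] + b_Transform)."""
    def __init__(self, K: int, d: int):
        super().__init__()
        self.filter = nn.Linear(K, d)          # W_Filter
        self.transform = nn.Linear(2 * d, d)   # W_Transform, b_Transform
        self.norm = nn.LayerNorm(d)

    def forward(self, z: torch.Tensor, e_rbf: torch.Tensor) -> torch.Tensor:
        # z: (N, d) atom representations; e_rbf: (N, N, K) distance expansion
        w = self.filter(e_rbf)                 # (N, N, d) continuous filters
        n = (z.unsqueeze(0) * w).sum(dim=1)    # n_i = sum_j z_j * w_ij
        return self.norm(self.transform(torch.cat([z, n], dim=-1)))
```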
Then the coarse-grained representation zm is augmented by an equivariant transformer layer proposed in Thölke & De Fabritiis (2022). The interatomic distances are projected into two multidimensional filters DK , DV :
D^K = \sigma(W_{DK}\, e^{RBF}(r_{m,ij}) + b_{DK}), \quad D^V = \sigma(W_{DV}\, e^{RBF}(r_{m,ij}) + b_{DV}).
And attention is weighted by a cosine cutoff to ensure that atoms with a distance greater than dcut do not interact:
A = \mathrm{Activation}\left(\sum_{k=1}^{F} Q_k \odot K_k \odot D^K_k\right) \cdot \phi(d_{ij}), \quad Q = W_{Q1} z_m \text{ and } K = W_{K1} z_m.

The attention mechanism's value is also split into three vectors of equal dimension:

s^1_{m,ij}, s^2_{m,ij}, s^3_{m,ij} = \mathrm{split}(V_j \odot D^V_{ij}) \in R^d, \quad V = W_{V1} z_m,

and

y_m \in R^{N_m \times 3d}, \quad y_{m,i} = W_{O1}\left(\sum_{j=1}^{N} A_{ij} \cdot s^3_{m,ij}\right),

where y_{m,i} corresponds to features and s^1_{m,ij}, s^2_{m,ij} to two filters. Then the features y_m are split into three features of equal size, q^1_m, q^2_m, q^3_m \in R^{N_m \times d}:

\Delta z_m = q^1_m + q^2_m \odot \langle W_{Linear1} v_m, W_{Linear2} v_m \rangle \in R^{N_m \times d},

where v_m \in R^{N_m \times 3} is set to 0 in the beginning, i.e., initially v_m = 0^{N_m \times 3}. For v_m,

\Delta v_m = w_m + q^3_m \odot W_{Linear3} v_m, \quad w_{m,i} = \sum_{j=1}^{N} s^1_{m,ij} \odot v_{m,j} + s^2_{m,ij} \odot \frac{r_{m,i} - r_{m,j}}{\lVert r_{m,i} - r_{m,j} \rVert},

and z_m \leftarrow z_m + \Delta z_m, v_m \leftarrow v_m + \Delta v_m. More details on the transformer can be found in Thölke & De Fabritiis (2022). After iterative updates over layers l, we obtain our equivariant coarse-grained representation ẑm:

\hat{z}_m = \mathrm{LayerNorm}\left(z_m + \sum_{l} \Delta z^{(l)}_m\right) \in R^{N_m \times d}.
Then the equivariant coarse-grained representation is combined with node and edge features:

\hat{z}_m = \mathrm{LayerNorm}(W_C [\hat{z}_m, x_m, W_E f_m]) \in R^{N_m \times d}.
If ẑm originally comes from a protein-target complex, then it will be encoded by an equivariant high-order graph neural network Satorras et al. (2021); Schütt et al. (2021); Thölke & De Fabritiis (2022). Our choice is to develop Kim et al. (2021) into an equivariant graph transformer for the complex network, which follows
\hat{z}_m = \mathrm{Enc}_{k\to l}(\hat{z}_m) = \mathrm{Attn}_{k\to l}(\hat{z}_m) + L^2_{l\to l}(\mathrm{Activation}(L^1_{l\to l}(\mathrm{Attn}_{k\to l}(\hat{z}_m)))) \in R^{N_m^l \times d'},

\mathrm{Attn}_{k\to l}(\hat{z}_m)_j = \sum_{h=1}^{H} \sum_{\mu} \sum_{i} \alpha^{h,\mu}_{i,j}\, \hat{z}_{m,i}\, W^{V2}_{h,\mu} W^{O}_{h,\mu},

where in the first layer k = 1, H is the number of heads, L^1_{l\to l}: R^{N_m^l \times d} \to R^{N_m^l \times d'}, L^2_{l\to l}: R^{N_m^l \times d'} \to R^{N_m^l \times d}. To compute each attention \alpha^{h,\mu} \in R^{n^{k+l}} from \hat{z}_m \in R^{n^k \times d},

\alpha^{h,\mu}_{i,j} = \begin{cases} \dfrac{\sigma(Q^{\mu}_j, K^{\mu}_i)}{\sum_{i|(i,j)\in\mu} \sigma(Q^{\mu}_j, K^{\mu}_i)}, & (i, j) \in \mu \\ 0, & \text{otherwise} \end{cases}, \quad Q^{\mu} = L^{\mu}_{k\to l}(\hat{z}_m) \text{ and } K^{\mu} = L^{\mu}_{k\to k}(\hat{z}_m).
More details can be found in Kim et al. (2021). We augment \hat{z}_m \in R^{N_m^l \times d'} to an equivariant form by

s^1_{m,ij}, s^2_{m,ij}, s^3_{m,ij} = \mathrm{split}(V_j) \in R^{l \times d'}, \quad V = W_{V2} \hat{z}_m,

and

y_m \in R^{N_m^l \times 3d'}, \quad y_{m,i} = W_{O2}\left(\sum_{j=1}^{N} \alpha_{i,j} \cdot s^3_{ij}\right).

Then the features y_m are split into three features of equal size, q^1_m, q^2_m, q^3_m \in R^{N_m^l \times d'}:

\Delta \hat{z}_m = q^1_m + q^2_m \odot \langle W_{Linear1'} v_m, W_{Linear2'} v_m \rangle \in R^{N_m^l \times d'},

where v_m \in R^{N_m^l \times 3} is set to 0 in the beginning, i.e., initially v_m = 0^{N_m^l \times 3}. For v_m,

\Delta v_m = w_m + q^3_m \odot W_{Linear3'} v_m, \quad w_{m,i} = \sum_{j=1}^{N} s^1_{m,ij} \odot v_{m,j} + s^2_{m,ij} \odot \frac{r_{m,i} - r_{m,j}}{\lVert r_{m,i} - r_{m,j} \rVert},

and \hat{z}_m \leftarrow \hat{z}_m + \Delta \hat{z}_m \in R^{N_m^l \times d'}, v_m \leftarrow v_m + \Delta v_m \in R^{N_m^l \times 3}. In the last layer, we set l = 1 and obtain our equivariant fine-grained complex representation

\tilde{z}_{ptc} = \mathrm{LayerNorm}(\hat{z}_m) \in R^{N_m \times d'}.
We now have the fine-grained representation for protein-target complexes.
Or, if ẑm originally comes from a molecule, then it will be encoded by a shallow graph neural network Kipf & Welling (2016); Luan et al. (2020); Hua et al. (2022). Our choice is the simplest graph convolutional network Kipf & Welling (2016):

\tilde{z}_{mol} = \mathrm{LayerNorm}(\mathrm{Activation}(e_m \hat{z}_m W_m)) \in R^{N_m \times d'}.
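As a sketch, this molecule branch reduces to a single graph convolution over a dense adjacency; adjacency normalization and edge features are omitted for brevity, so this is an illustration rather than the exact model.

```python
import torch
import torch.nn as nn

class FineGrainedMoleculeNet(nn.Module):
    """Sketch of f_mol: z_mol = LayerNorm(Activation(e_m z_hat W_m))."""
    def __init__(self, d: int, d_out: int):
        super().__init__()
        self.lin = nn.Linear(d, d_out)    # W_m
        self.norm = nn.LayerNorm(d_out)

    def forward(self, z_hat: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # z_hat: (N, d) coarse-grained representations; adj: (N, N) adjacency e_m
        return self.norm(torch.relu(adj @ self.lin(z_hat)))
```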
Then it will be fed into the downstream task-specific prediction module. | 1. What is the focus and contribution of the paper on molecular property prediction and protein-target interaction prediction?
2. What are the strengths of the proposed approach, particularly in terms of its ability to transfer knowledge between molecules and complexes?
3. What are the weaknesses of the paper, especially regarding the explanation of the weighted loss function?
4. Do you have any concerns about the experimental results and their interpretation?
5. Is there a need for an ablation study to further validate the effectiveness of the proposed method? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a new learning framework, Multi-Dataset Multi-Task Graph learning (MDMT- GL) for molecular property prediction and protein-target interaction prediction. MDMT-GL aims to make the best use of labeled data by transferring knowledge between molecules and complexes. The cross-dataset paradigm for multi-task learning enables the shared embedding to be more informative representations than the single-dataset paradigm. Experimentally, the authors implement and evaluate MDMT-GL on 2 molecular dynamic datasets and 2 protein-target datasets, consisting of 825 tasks and over 3 million data points. MDMT-GL achieves state-of-the-art performance on several tasks and shows competitive performance on others. These experimental results confirm that molecules and proteins indeed share some coarse-grained structures and that the coarse-grained embedding is trainable, and their fine-grained embeddings are more representative.
Strengths And Weaknesses
##########################################################################
Pros:
The paper is the first work to train multi-task learning across different molecular types and to verify the structural similarity between the molecules and the protein-target complexes, which is an interesting idea for molecular property prediction and protein-target interaction prediction.
The proposed method achieves state-of-the-art performance on several tasks and shows competitive performance on others and these experimental results confirm that molecules and proteins indeed share some coarse-grained structures and that the coarse-grained embedding is trainable, and their fine-grained embeddings are more representative.
This paper also developed the 2D graph transformer into a 3D equivariant graph transformer for molecular dynamics, and the model is capable of capturing high-order atom interactions in 3D space. Moreover, unlike multi-task learning within a single dataset, the data imbalance of different datasets will lead to the task imbalance problem which is fatal to multi-task learning. To treat each task equally, the authors propose a weighted loss to balance the importance of the tasks, which is novel for MDMT-GL.
The paper is well-written and the comparison of benchmark methods is also interesting to read.
##########################################################################
Cons:
The core part of MDMT-GL is the MULTI-DATASET MULTI-TASK LOSS; however, the authors didn't explain in detail how the weighted loss addresses the data imbalance problem between different molecular datasets. How is the weighted loss designed, and what is the insight behind it? How are the weights set? This part needs more explanation.
In the experiment section, although the authors clearly list the data and comparisons with baselines, analysis of the experimental results is lacking. MDMT-GL does not achieve SOTA results on some metrics, and the authors need to add some explanation for this part.
Missing ablation study part.
Clarity, Quality, Novelty And Reproducibility
The quality, clarity, and originality are good but lack some insight. |
ICLR | Title
On Anytime Learning at Macroscale
Abstract
Classical machine learning frameworks assume access to a possibly large dataset in order to train a predictive model. In many practical applications, however, data does not arrive all at once, but in large batches over time. This creates a natural trade-off between the accuracy of a model and the time to obtain such a model. A greedy predictor could produce non-trivial predictions by immediately training on batches as soon as these become available, but it may also make sub-optimal use of future data. On the other hand, a tardy predictor could wait for a long time to aggregate several batches into a larger dataset, but ultimately deliver a much better performance. In this work, we consider such a streaming learning setting, which we dub anytime learning at macroscale (ALMA). It is an instance of anytime learning applied not at the level of a single chunk of data, but at the level of the entire sequence of large batches. We first formalize this learning setting, we then introduce metrics to assess how well learners perform on the given task for a given memory and compute budget, and finally we test about thirty baseline approaches on three standard benchmarks repurposed for anytime learning at macroscale. Our findings indicate that no model strikes the best trade-off across the board. While replay-based methods attain the lowest error rate, they also incur a 5 to 10 times increase of compute. Approaches that grow capacity over time do offer better scaling in terms of training flops, but they also underperform simpler ensembling methods in terms of error rate. Overall, ALMA offers both a good abstraction of the typical learning setting faced everyday by practitioners, and a set of unsolved modeling problems for those interested in efficient learning of dynamic models.
1 INTRODUCTION
Empirical risk minimization (Vapnik, 1998) is the dominant framework to formalize the learning process of a supervised task, and it has been critical to the success of large scale training of deep learning systems on a wide variety of applications. Within this framework, training data is assumed to be provided to the learner all at once. Alternatively, when the dataset is very large (essentially infinite), data is streamed to the learner one minibatch at a time, assuming that the rate at which samples are received matches the model's processing time to learn from them.
Learning over streams of data has been studied in the machine learning domain for a long time (see Section 2 and Figure 1 for more details) with different assumptions: for instance, in online learning it is usually assumed that datapoints come one by one and have to be processed as soon as they are received. In continual learning, the streaming of data usually corresponds to a stream of large datasets corresponding to different tasks to solve, etc. In this paper, we define a simple yet important setting where there is a single task to solve, and where training data often comes at a slower rate than a model can process it. Moreover, it comes in relatively large batches once in a while. While poorly studied, this setting corresponds to practical applications encountered in production pipelines. For instance, it is faced by teams deploying language modeling applications (e.g., content moderation) who build models trained on large amounts of data like filtered versions of Common Crawl, which are dumps of the internet. However, new snapshots are available every month, as new content is generated over time. Therefore datasets keep getting bigger every few months and models need to be retrained accordingly. Similarly, visual object recognition datasets used in deployed applications are often extended every few months to include new images with their corresponding annotations.
* Authors contributed equally
Practically, there are two main approaches to integrate the information present in a new batch of data into an existing model. If a lot of computational resources are available, a new and bigger model is instantiated and trained from scratch on the union of the old training set with the new batch of data. However, since this is a computationally very intensive process, retraining is typically done only rarely, once several batches of data have been collected. We call this approach "tardy" large-scale learning, since a predictor is available only at a later time. Another option, particularly suitable when computational resources are scarce and a predictor is needed quickly, is to simply finetune the old model on the new data as this arrives. Note that, in that setting, methods from the data stream domain or from the online learning domain that are based on the idea of processing any datapoint just once are not suitable, since they have been developed for different use-cases.
This trade-off is emblematic of anytime learning, a learning setting where a learner has to provide good predictions at any point in time, while improving its performance over time as more and more data is observed. From an anytime learning perspective, neither training a large model after all data is received nor finetuning on the newly added batch of data is satisfying. The former approach is a poor anytime learner because one needs to wait for a long time before obtaining a useful predictor. The latter approach is a poor anytime learner because it typically cannot leverage very well future batches of data, since the model has a fixed capacity, determined on a small portion of the overall dataset, and because the model is inherently trained on non-i.i.d. data.
In this work, we aim at exploring this accuracy versus time trade-off of anytime learning, not at the level of a single batch of data, but at the macroscale of the entire sequence of batches. This is a setting which more closely mimics practical applications, that we call anytime learning at mascroscale (ALMA). In this learning setting, we assume that the time to train a model is negligible compared to the interval of time between two consecutive batches of data (and therefore we do not care about how quickly a learner adapts to a new batch), yet efficiency matters in the sense that for the same performance a predictor that uses less compute and memory is preferable. In summary, we are interested in a learner that i) yields high accuracy, ii) can make non-trivial predictions at any point in time while iii) limiting its computational and memory resources.
Our first contribution is to formalize the ALMA problem and to introduce metrics to evaluate learners (§3). We consider three different axes: error rate, memory and amount of computation. By measuring these quantities against time, via an area under the curve, we account not only for the final performance but also for the whole training trajectory over the sequence of large batches of data.
Our second contribution is an extensive empirical evaluation (§5) of various models (§4) that strike different trade-offs between accuracy and time to obtain a useful predictor. In particular, we explore models that fall in between greedy finetuning and tardy large-scale learning, and investigate models that leverage batches of data at an intermediate rate. We also consider a rich family of modular architectures, from plain ensembling methods to hierarchical mixture of experts, and several variants thereof, including those that have access to a replay buffer storing all previous batches of data and those that can grow capacity over time.
Our findings across three different benchmarks, including a large scale language modeling one, can be summarized as follows. a) An intermediate waiting time offers the best trade-off between accuracy and time to yield such a predictor. However, b) there is no single approach striking the best trade-off between performance and efficiency for various model sizes. c) Retraining from scratch a big model does offer the lowest error rate but sacrifices efficiency. d) Interestingly, large models are the most statistically efficient even when considering small datasets (like MNIST) and fully
connected networks. e) While approaches to grow capacity exhibit gains in terms of computational efficiency, these do not even outperform simple ensembles. Overall, our work points at several research opportunities to improve modeling in a streaming setting of broad practical relevance, rather than pointing at any particular solution. We have also released code to reproduce our experiments and the entire platform implementing ALMA.
2 RELATED WORK
ALMA relates to several other learning frameworks: offline learning, continual learning, online learning and transfer learning, as illustrated in Figure 1. i) It shares the same assumptions of classical empirical risk minimization (ERM) (Vapnik, 1998) at the level of each batch of data. However, it overall violates ERM's assumption of i.i.d. observations, because data points come in a stream of data chunks. ii) Because of this, ALMA relates to continual learning (CL) (Ring, 1994; Thrun, 1994; Ring, 1997; Thrun, 1998), with the key difference that the data distribution across batches (or tasks) is assumed stationary in ALMA. Therefore, ALMA can be seen as a special case of CL with a single task to solve. iii) ALMA relates also to online learning (Bottou, 1998) since it assumes that data are coming in a stream, an assumption also made in the concept drift literature (Lu et al., 2018). However, in online learning examples are streamed one at a time (or at random from a large dataset), while in ALMA the learner receives large batches of data sequentially. In ALMA, received data can be processed multiple times, as opposed to the online learning setting, which usually assumes that any new datapoint has to be processed as soon as it is available and will not be reused in future updates. iv) Finally, ALMA relates more broadly to transfer learning (Pan & Yang, 2010), as the problem of adapting to a new batch of data can be interpreted as leveraging knowledge acquired on previous batches to more efficiently learn from the new batch of data.
Of course, ALMA relates to anytime learning (Grefenstette & Ramsey, 1992; Ramsey & Grefenstette, 1994), which has been recently applied to compare various autoML frameworks (Liu et al., 2020). However, in this work we are not interested in assessing the anytime learning ability at the level of each chunk of data, but only at a coarser granularity, at the level of the entire stream of chunks. Inspired by Liu et al. (2020), we consider the area under the curve of error rate against time to measure performance, but in order to account also for compute and memory budget, we add to our evaluation metrics also the area under the curve for memory and compute.
From the more theoretical side, there has been work about sub-bagging (Bühlmann & Yu, 2002) (bagging using subsets of a larger dataset) which is similar to our setting but without the sequential aspect of it. In this context, Breiman (1999) proposed a model similar to our growing ensembling (gEns), Bühlmann & Yu (2002) studied sub-bagging as a way to make the prediction of tree classifiers more robust while Zou et al. (2021) studied the consistency of the estimator in this setting. We defer to future studies the analysis of ALMA, while in this work we focus on the empirical evaluation.
Shifting the discussion to prior work on models that adjust their capacity dynamically, Waterhouse & Robinson (1995) introduced an approach to grow a hierarchical mixture of experts model (Jordan & Jacobs, 1994). This is a tree structured model where experts are at the leaves and gating functions are at non-terminal nodes. The tree determines a hierarchical partition of the input space into regions that are associated to each expert. This approach was made more efficient in later work by (Fritsch et al., 1996). In this work we consider a baseline (gMoE) that extends this prior work to hierarchical mixture of experts (Eigen et al., 2014; Denoyer & Gallinari, 2015; Lepikhin et al., 2020).
Growing architectures have also been studied in CL. For instance, Fernando et al. (2017) and Veniat et al. (2021) proposed a modular architecture that is assembled for every task, possibly reusing previously trained modules. The major difference with our work is that in our case routing is input dependent as opposed to task dependent. Yoon et al. (2018) instead proposed a method to incrementally and smoothly add hidden units. Similarly, Wen et al. (2020) proposed a heuristic approach to automatically adjust the network depth. Wang et al. (2017) considered growing both depth and width when finetuning to a new task. Liu et al. (2019a) and Wu et al. (2020) proposed approaches to grow architectures in depth and width by leveraging Taylor approximation and greedy selection. In our work, we benchmark against this last variant. None of these approaches have been applied to the ALMA setting to date.
Finally, some of our findings are built upon and extend recent empirical evaluations studying the scaling properties of language models (Kaplan et al., 2020a; Li et al., 2020b). In this study, we
confirm the conclusion that bigger models generalize better and are more statistically efficient, not only in language modeling tasks using a transformer architecture, but also in smaller scale computer vision tasks using both fully connected and convolutional architectures.
3 LEARNING SETTING
In anytime learning at macroscale (ALMA), we assume that there exists an underlying data distribution p(x, y) with input x ∈ RD and desired label y ∈ {1, . . . , C}. Notice that extensions to regression and unsupervised learning (where y is missing) are trivial, and therefore in this work we focus on classification problems for simplicity of exposition. An important property of ALMA is that data is presented to the learner as a stream SB of B consecutive batches of examples. Let Di be a collection of N ≫ 0 i.i.d. samples randomly drawn from p(x, y), for i ∈ {1, . . . , B}. The stream is then defined as the ordered sequence SB = {D1, . . . ,DB}. We refer to each dataset Di as a mega-batch, as it is composed of a large number of examples. Typically a learner m : RD → {1, . . . , C} updates its parameters by processing a mini-batch of n ≪ N examples at a time from each mega-batch Di, and by iterating several times over each mega-batch before being presented with the next mega-batch. Since the learner cannot access future mega-batches, overall the data distribution is not i.i.d., even though samples drawn from each mega-batch are i.i.d., and cross-validation is performed using a subset of the current mega-batch. A learner could decide to use previous mega-batches when learning on the current mega-batch, but this will increase its compute usage.
Finally, we assume that the time it takes a learner to update its internal parameters after having observed a mega-batch is much less than the interval between the arrival of two consecutive megabatches. In other words, the rate at which data arrives is slower than the processing time of the model, and therefore the model could decide to iterate several times over the data at its disposal to improve its prediction accuracy.
3.1 METRICS
We evaluate learners in the ALMA setting across three axes, namely: accuracy, memory and computation. Let t be the time at which the t-th mega-batch arrives; this data can be used by the model to update its parameters or it is simply aggregated to previous mega-batches for later use.
We compute the error rate of model m at time t (after the arrival of the t-th mega-batch) and compute the area under the curve obtained varying t from 0 till the total number of mega-batches B; the resulting cumulative error rate (CER) is:
CER = \sum_{t=0}^{B} \frac{1}{|D^{Ts}|} \sum_{(x,y) \in D^{Ts}} \mathbb{1}\left[m(x; \theta_t) \ne y\right] \quad (1)
where m(x; θt) is the model at time t equipped with parameters θt, DTs is the test set, |DTs| is the number of examples in the test set, and \mathbb{1}[m(x; θt) ≠ y] is one if the model prediction does not match the ground truth label and zero otherwise. The outer sum computes the discrete integral of the error rate over time. CER is going to be small only when the error rate is small throughout the whole stream. CER is instead large for a tardy model that waits till the very last mega-batch to update the model, even though eventually this may obtain a very low final error rate. While not perfect, CER provides a good summary of the performance of a system across time. Still, to fully capture the differences between two models, one needs a deeper look at the performance across time, as illustrated in Figure 2 for instance.
Similarly, we compute the cumulative memory usage and compute as:
Mem = \sum_{t=0}^{B} |\theta_t|, \quad Comp = \sum_{t=0}^{B} O(m(\cdot; \theta_t)) \quad (2)
where |θt| is the number of free parameters of the model at time t, and O(m(·; θt)) is the number of flops used by the model to process the t-th mega-batch. Once again, by measuring the area under the curves obtained by tracking these quantities over time we obtain a holistic assessment of memory and compute throughout the whole stream. A model can obtain small Mem and Comp only if it does not consume memory and if it is computationally parsimonious throughout the entire duration of the stream.
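Since all three metrics are discrete areas under per-mega-batch curves, they reduce to plain sums over time; a minimal sketch, assuming the per-time-step quantities have already been measured:

```python
def cumulative_metrics(error_rates, num_params, flops):
    """Each argument is a list indexed by mega-batch time t = 0..B."""
    cer = sum(error_rates)   # Eq. (1): cumulative error rate
    mem = sum(num_params)    # Eq. (2): cumulative memory
    comp = sum(flops)        # Eq. (2): cumulative compute
    return cer, mem, comp
```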
Algorithm 1 Training in the ALMA setting
1: procedure TRAIN(m, w, replay, grow)  . m is the model, w is the waiting time
2:     t ← 1
3:     D ← ∅
4:     while t < B do  . For each stage
5:         if replay then  . Acquire w mega-batches
6:             D ← D ∪ Dt ∪ ... ∪ Dt+w−1
7:         else
8:             D ← Dt ∪ ... ∪ Dt+w−1
9:         t ← t + w
10:        if grow then
11:            m.grow()  . Grow the model if the model is a growing model
12:        m.train(D)  . Fine-tune or retrain from scratch m on the collected dataset
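A Python rendering of Algorithm 1 is sketched below; `grow()` and `train_on()` are hypothetical methods standing in for the model-specific growth and (re)training procedures.

```python
def train_alma(model, stream, w, replay=False, grow=False):
    """stream: list of mega-batches D_1..D_B; w: waiting time in mega-batches."""
    data = []
    for t in range(0, len(stream), w):  # one training stage every w mega-batches
        stage = stream[t:t + w]
        if replay:
            data = data + stage         # accumulate all past mega-batches
        else:
            data = stage                # train only on the newly arrived ones
        if grow:
            model.grow()                # add capacity if the model is a growing model
        model.train_on(data)            # fine-tune or retrain from scratch
    return model
```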
4 LEARNING ALGORITHMS
In this section, we describe the methods we tested in the ALMA setting. They generally follow the learning procedure shown in Algorithm 1. At a high level, we consider two families of models, those with a monolithic architecture and those with a modular architecture (e.g., ensembling). The latter are amenable to grow over time by adding new modules to the existing set. We will start by describing fixed architectures (§4.1) and then conclude with growing architectures (§4.2). All models are also given the option to replay previous mega-batches.
4.1 FIXED ARCHITECTURES
The first family of methods trains models with a fixed architecture. These models are sequentially trained over new mega-batches and exhibit a fixed memory footprint. We consider three models:
Single Model (SM): This is a standard multi-layer neural network (e.g., fully connected neural network or transformer) trained by stochastic gradient descent. It can be initialized randomly or from the parameters of the model trained on the previous mega-batch. The initialization choice is determined via cross-validation.
Ensemble of Models (Ens): The second approach is the simplest modular approach, consisting of an ensemble of N neural networks with the same architecture, each being trained independently on the same sequence of data. The output of the overall model at test time is the average probability distribution produced by each component1. The advantage of Ens is that training and inference can be trivially parallelized, making it easy to scale up model parameters. The disadvantage is that inference requires N times more compute than what is required by each component.
Uniform Mixture of Models (UMix): A potential drawback of Ens is that evaluation and training are inconsistent. UMix addresses this by training a model whose prediction is the average (in logit space) of the predictions produced by N networks. While this requires synchronization during training, now both training and evaluation use the same model.
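The difference between Ens and UMix can be summarized in a few lines; this sketch assumes `models` is a list of networks returning logits.

```python
import torch

def ens_predict(models, x):
    """Ens: average the probability distributions of independently trained nets."""
    probs = [torch.softmax(m(x), dim=-1) for m in models]
    return torch.stack(probs).mean(dim=0)

def umix_logits(models, x):
    """UMix: average in logit space; the same function is trained and evaluated."""
    return torch.stack([m(x) for m in models]).mean(dim=0)
```

Ens only averages at test time, whereas UMix optimizes the averaged prediction directly, so its training and evaluation are consistent.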
4.2 GROWING ARCHITECTURES
In the previous section, the number of parameters and the architecture of the model are fixed throughout the model’s lifetime. However, as more data is observed, it is interesting to consider dynamic architectures that grow over time, because these may save compute and memory during the earlier stages of learning while providing more predictive power during the later stages. We consider three growing approaches:
1Classical bagging approaches and majority vote strategies have been also explored without significant difference.
Growing Ensemble (gEns): Like the Ens model, gEns is also a combination of neural networks trained independently. While Ens considers N networks that are, at each stage, trained over the new chunk of data, gEns replaces this step by a growing step where n neural networks are added. In our implementation, only these n neural networks are trained over the new data, while the other neural networks (trained on previous mega-batches) are kept fixed.
Growing Mixture of Experts (gMoE): A hierarchical mixture of experts model (MoE) is an architecture where at layer l the output representation is z_l = \sum_{j=1}^{k} g(j|z_{l-1})\, h(z_{l-1}|j), where g is the gating or routing function and h(·|j) is the j-th expert. Compared to Ens, MoE has exponentially many more components albeit with a lot of parameter sharing. Another advantage is that by selecting only one (or a few) experts, the computational cost is independent of the number of experts, assuming the cost of gating is negligible compared to the cost of executing the experts. The main issue is that MoEs are notoriously harder to train (Eigen et al., 2014; Denoyer & Gallinari, 2015; Lepikhin et al., 2020). In this work, we consider a growing version of MoE, which we denote with gMoE, whereby experts are added over time. See Appendix A for more details.
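A minimal sketch of one soft MoE layer follows; the two-layer MLP experts and linear gate are illustrative choices, and a hard-gating variant would replace the softmax with a (straight-through) sample.

```python
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    """Sketch of f(z) = sum_j g(j|z) h(z|j) with a soft gate."""
    def __init__(self, d: int, num_experts: int):
        super().__init__()
        self.gate = nn.Linear(d, num_experts)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
             for _ in range(num_experts)])

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        g = torch.softmax(self.gate(z), dim=-1)             # (batch, k) gate weights
        h = torch.stack([e(z) for e in self.experts], -1)   # (batch, d, k) expert outputs
        return (h * g.unsqueeze(1)).sum(dim=-1)             # sum_j g(j|z) h(z|j)
```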
Firefly (Wu et al., 2020) (FF): FF is a method which progressively grows neural networks, jointly optimizing both the model architecture and parameters. Growth includes both a width expansion by adding new hidden units (or feature maps) as well as a depth expansion by adding new layers. Importantly, this is an example of non-modular method unlike Ens or gMoE, which is potentially more expressive but also more inefficient at inference time because there is no structured sparsity that can be leveraged to speed up computation.
5 EXPERIMENTS
In this section we first describe how standard benchmarks can be repurposed for ALMA, we then provide the details of the models we tested, and we finally conclude with an analysis of the results we obtained, aiming to understand which method attains the best trade-off between time, accuracy, compute and memory usage.
Datasets We consider a variety of datasets. The first dataset is MNIST (LeCun et al., 1998), which consists of a training set with 60,000 quasi-binary handwritten digits of size 28x28 pixels, and a test set with 10,000 examples. The second dataset is CIFAR 10 (Krizhevsky, 2009) that has a training set with 50,000 images of size 32x32 pixels belonging to 10 classes such as bird, car, horse, ship, truck, etc. The third dataset, used for our large-scale language modeling evaluation, is a portion of the collection of English language text introduced in Liu et al. (2019b), consisting of Books, Wikipedia and Common Crawl. We consider 4 (large) mega-batches for training and one additional mega-batch for evaluation, each consisting of approximately 440M words; we also hold out a validation set with approximately 0.5M words of Common Crawl for model selection. We use a byte-pair encoding (BPE) (Sennrich et al., 2016) vocabulary with 50,000 units, following Radford et al. (2019). This dataset is fairly representative of what practitioners might face when maintaining a deployed system with new data arriving every few months.
Given a dataset like any of the above, we construct a benchmark for ALMA evaluation as follows: 1) we randomly partition the training set into B mega-batches with equal number of training examples (B = 50 for MNIST and CIFAR 10, and 4 for the text dataset), 2) from each mega-batch we extract 10% of the data to build the mega-batch validation set (except for the large scale language modeling dataset where we use the provided validation set), and 3) we create a learning experience by doing one pass over the sequence of mega-batches. For each mega-batch, the learner can query as many mini-batches as desired. The learner can also decide not to train on the data of a mega-batch right away but instead to wait and accumulate data across a few consecutive mega-batches. While the learner observes data, it is also tested on the test set. This is not used for validation purposes, but only for final reporting as shown in §5.1.
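The benchmark construction is a simple shuffle-and-split; below is a sketch under the assumption that the training set fits in memory as arrays.

```python
import numpy as np

def make_alma_stream(X, y, B, val_frac=0.1, seed=0):
    """Shuffle the training set, split it into B equal mega-batches, and hold
    out val_frac of each mega-batch for validation."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    stream = []
    for idx in np.array_split(order, B):
        n_val = int(val_frac * len(idx))
        stream.append({"train": (X[idx[n_val:]], y[idx[n_val:]]),
                       "valid": (X[idx[:n_val]], y[idx[:n_val]])})
    return stream
```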
Models We evaluate the six approaches presented in §4, and for each of them we consider various waiting times, a version with and without replay, and at least two model sizes. For each setting, we cross-validate over several hyper-parameters such as initialization type, learning rate, stopping criterion, growth rate, etc.
Next, we describe in detail the architecture used on each dataset. Further experimental details to aid reproducibility are reported in Appendix B. On MNIST the backbone architecture of SM is a three layer fully connected neural network with ReLU units. We considered two hidden unit sizes, namely 4 and 32 (denoted by [s] and [b], respectively), which lets us simulate the regime of big data relative to the size of the network and explore how to grow architectures without worrying about overfitting. Similarly, the components of Ens, gEns and UMix are SM networks of the same size as stated above; gMoE also starts off as SM and adds modules (at the first two layers) that have the same size as the original layer of SM. When varying the waiting time, i.e., the number of mega-batches that are aggregated before initiating a new training session, we use the suffix "_w" to indicate its value.
On CIFAR 10, the methods and notations are the same as in MNIST. The only difference is that the backbone architecture is a scaled down version of a VGG19 convolutional neural network (Simonyan & Zisserman, 2015), where the number of intermediate feature maps is the same for each layer and equal to either 4 or 32. On this dataset, we also consider FF starting off from the same VGG19 backbone.
For the language modeling task SM is a Switch Transformer (Fedus et al., 2021), which is a hard mixture of experts model with an additional load balancing loss term and hard capacity constraint applied during training to prevent uneven expert utilization. Following Fedus et al. (2021), we fix the weight of the balancing loss term to 0.01 and use a capacity factor of 1, ensuring relatively uniform expert utilization. We train the model using Adam (Kingma & Ba, 2015) and tune the learning rate and dropout on the validation set. In the growing setting we copy the expert weights and gating network weights corresponding to the top-k experts incurring the largest loss, where k is typically between 2 and 4. We consider two model sizes: a base model with 6 layers and model dimension of 512, for a total of 40M shared parameters and 6M additional parameters per expert; and a large model with 12 layers and model dimension of 768, for a total of 96M shared parameters and 28M additional parameters per expert. We use an input sequence length of 512 tokens and we do not use replay given the large chunk sizes.
5.1 RESULTS
In Fig. 2 we start by analyzing learning curves on CIFAR 10 for a subset of the methods as a function of the waiting time. We then dive into analyzing all methods on both MNIST (Tab. 1) and CIFAR 10
(Tab. 2), using the optimal empirical value of waiting time. We conclude by confirming the major findings at scale on the language modeling task (Tab. 3).
Fig. 2 shows the test error rate as a function of the number of mega-batches received for both the small (left) and the large (right) model. We observe that an intermediate waiting time (in this case equal to 5) strikes the best trade-off between accuracy and time for all methods, since curves with waiting time equal to 5 have the lowest area under the curve. Greedy methods using waiting time equal to 1 achieve lower error rate only during the very beginning of the stream. Second, we observe that bigger models (SM and Ens) not only generalize better but they are also statistically more efficient: the small Ens obtained almost 35% error rate by the end of its learning experience, which is worse than the error rate obtained by the large Ens just after having observed one tenth of the entire stream. The statistical efficiency of large models does not apply only to large transformers (Kaplan et al., 2020a), but also to fully connected (we obtained similar results on MNIST) and convolutional models.
Next, using the waiting time that yielded the lowest cumulative error rate, we compare all methods discussed in §4, focusing our discussion on Tab. 2 of CIFAR 10 as same conclusions apply to MNIST as well (see Tab. 1).
First, replay lowers the CER by a relative amount of about 10% at the cost of increasing the cumulative training flops by a factor of more than 5, which is rather substantial. Notice that retraining from scratch using memory replay, as reported here in parentheses, is nowadays the dominant approach to deal with sequential datasets.
Second, Ens works better than UMix for larger models, and vice versa. We surmise that ensembling may alleviate overfitting of large models, but coordinating the components of the ensemble like UMix does, is more effective in an underfitting regime (i.e with small models). Ens thus looks like a good method to train large architectures without suffering of the overfitting aspect and may be used when the complexity of the task is not known a priori.
Third, all growing approaches perform rather similarly, particularly when starting from larger backbones, although they strike slightly different trade-offs. For instance, gMoE is the most efficient at test time, while FF yields a lower error rate. Interestingly, none of the approaches that grow architectures currently manages to beat Ens in terms of error rate when starting from a large backbone, although they require substantially fewer flops at inference time. Finally, while methods derived from SM (for the same size of the initial backbone, see rows with the same color in the table) all manage to beat SM, it is also worth noting that for the same number of parameters SM is still the best performing method, unless there is overfitting. In particular, Ens with 12550 parameters achieves a CER of 2440 while SM with 11710 parameters obtains a CER of 2038 while requiring much less compute; same considerations apply also to the gMoE with 29550 parameters compared to SM with 31660 parameters. Therefore there is no single model striking a much better trade-off, and more advanced approaches do not outperform simpler methods like Ens.
The results on the large scale language modeling task reported in Tab. 3 show that bigger models perform better (the larger the number of parameters the lower the PPL for a given model class) and are also more statistically efficient (for instance the base SM_w1 attains 26.53 after seeing the whole stream, while the large SM_w1 obtains 22.47 just after seeing the first chunk of data), consistent with recent related work (Kaplan et al., 2020b; Li et al., 2020a). We also observe that Ens is a strong performer, with Ens_w1 and gEns_w1 models dominating SM models in all settings. Surprisingly, ensembles trained on distinct data chunks (gEns_w1; t1 or t3) perform no better than ensembles trained on a single data chunk (Ens_w1; t0). For instance, among Base 2-model ensembles (4@2), Ens_w1 achieves a perplexity of 26.20 using a single data chunk (t0), while gEns_w1 achieves a perplexity of 26.27 using models trained on each of the two data chunks (t1). Finally, if test time inference is a concern, then gMoE is a preferable choice since its runtime is comparable to SM.
6 CONCLUSION AND PERSPECTIVES
In this work we introduced the anytime learning at macroscale (ALMA) setting, which is an instance of anytime learning under the assumption that data is observed as a sequence of large batches. ALMA better mimics the learning scenarios faced by machine learning practitioners, who want to efficiently solve a task but from time to time receive more data to train on. We introduced metrics that enable the assessment in terms of error rate, memory usage and compute throughout the entire learning experience. Equipped with these tools, we then evaluated several approaches on three different datasets, including large scale language modeling. We found that methods that update parameters at an intermediate rate tend to yield a better trade-off, and that bigger models tend to generalize better. In particular, models that grow capacity over time generalize better particularly when the initial model is smaller, and ensembling is a very strong baseline.
A cynical interpretation of our finding that bigger models generalize better could lead the reader to conclude that everything can be solved by starting with a big model. However, as data is added over time, so is computation. It is often the case that researchers working on large-scale learning instantiate the biggest possible model to train on their task, but a few months later they can manage to launch even bigger models thanks to compute and engineering advances. How can the larger model leverage what has been learned from the previously trained model? Is there a modeling choice that strikes a better trade-off than retraining from scratch? More generally, what are good approaches to extract information from a new batch of data to integrate it into an existing model? While we do not provide a full answer to these questions, we do offer a framework to study them and several strong baseline approaches to compare against and build upon.
7 REPRODUCIBILITY STATEMENT
We have made several efforts to ensure that the results provided in the paper are fully reproducible. We first provide a clean codebase from which all the computer vision results in this paper are generated. In this codebase, one can find the exact hyperparameters used for each method in the provided configurations. We have attached a readme to the code in order to guide users in running it. For the LM experiments, as stated in the appendix, we use fairseq (Ott et al., 2019) and provide the required information to replicate our results.
APPENDIX
A GROWING MIXTURES OF EXPERTS
Growing Mixture of Experts (gMoE): A mixture of experts (MoE) is a sequence of non-linear functions, each of which is potentially a mixture of experts (omitting the dependence on parameters):
m(x) = f^l(f^{l-1}(\dots f^1(x) \dots)), \quad \text{with} \quad f^i(z) = \sum_{j=1}^{k} g^i(j \mid z)\, h^i(z \mid j)
where g^i is the gating function at the i-th layer, which outputs a categorical distribution over the experts, and h^i(· | j) is the j-th expert at layer i. The gating function can be “soft”, in which case it outputs non-zero weights for each expert via a softmax, or “hard”, in which case only one expert is selected through multinomial sampling (and learned through the straight-through estimator in this paper (Bengio et al., 2013)). At test time in the “hard” case, we select the expert with the largest probability. The appeal of mixtures of experts is their high expressivity, and the fact that experts can be easily added to increase the capacity of the model. The gMoE model is the growing version where, at each stage as illustrated in Fig. 3, new experts are added at each layer; details about the precise expansion process are given below.
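To make the layer definition above concrete, below is a minimal PyTorch sketch of a single soft-gated MoE layer. The module and parameter names are our own illustration, not those of the released codebase.

import torch
import torch.nn as nn

class SoftMoELayer(nn.Module):
    """One layer f^i(z) = sum_j g^i(j|z) h^i(z|j) with soft gating."""
    def __init__(self, dim_in, dim_out, num_experts):
        super().__init__()
        # g^i: categorical distribution over experts via a softmax.
        self.gate = nn.Linear(dim_in, num_experts)
        # h^i(.|j): one expert per index, all with the same architecture.
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim_in, dim_out), nn.ReLU())
             for _ in range(num_experts)]
        )

    def forward(self, z):
        weights = torch.softmax(self.gate(z), dim=-1)                # (batch, k)
        outputs = torch.stack([h(z) for h in self.experts], dim=-1)  # (batch, dim_out, k)
        return (outputs * weights.unsqueeze(1)).sum(dim=-1)          # weighted mixture

A hard-gated variant would instead sample one expert index from weights during training and take its argmax at test time.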
The key design considerations are: when to grow, what to grow and how to grow. Here, we will refer to our default setting which favors simplicity, unless otherwise specified.
A growth step is triggered at each stage, ensuring linear growth over time. We grow by adding one expert at each layer, making sure that all experts within a layer have the same architecture, albeit with different parameters. In order to grow, we look at which expert has the largest associated cumulative loss; we call this the losing expert. The cumulative loss is defined as the sum of the losses of examples in the validation set that have been routed through a particular expert; each expert thus has an associated cumulative loss value. The rationale is to identify, at each layer, the expert responsible for the largest contribution to the total loss.
To avoid a drop in the loss function and to preserve its differentiability when splitting an expert, we propose a tree-based approach where the losing expert is split into two experts with exactly the same parameters, as illustrated in Fig. 3: two children leaves are derived and we instantiate a new gate for the children, which decides whether an input example routed to the old expert should now go to the right or the left expert child. The parameters of the new gate are initialized at random, while the parameters of the new experts are exact copies of those of the losing expert that we split.
More formally, if s is the losing expert, then the term g^i(s \mid z)\, h^i(z \mid s) is replaced by:

\sum_{k=1}^{2} g^i(s \mid z)\, g^i(k \mid z, s)\, h^i(z \mid s, k) \qquad (3)
where gi(k|z, s) is the newly introduced gate, and z is the input of the gating and experts. Over time, the gating function learns to partition its input space into a binary tree (if we start from a single expert), and the gating value of an expert is the product of the gating probabilities on the path from root to the leaf expert. Both the gating tree structure and the particular initialization scheme guarantee that the growth step is smooth and fully differentiable, in particular, the loss before and after the growth step is the same.
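The split in Eq. (3) can be sketched as follows, reusing the SoftMoELayer container from the snippet above; in a full implementation the returned triple would replace expert s as an internal tree node. This is an illustrative sketch under those assumptions, not the authors' exact code.

import copy
import torch.nn as nn

def split_losing_expert(layer, s, dim_in):
    """Split expert s into two identical children plus a fresh binary gate.

    Because the children are exact copies of the parent and the new gate's
    probabilities sum to one, g(s|z) * [g(1|z,s) h_1(z) + g(2|z,s) h_2(z)]
    equals g(s|z) * h_s(z): the loss is unchanged at the moment of the split.
    """
    parent = layer.experts[s]
    left, right = parent, copy.deepcopy(parent)  # same parameters initially
    sub_gate = nn.Linear(dim_in, 2)              # new gate, random init
    return left, right, sub_gate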
If we consider each path in the MoE model to be a different model, then with L layers of k experts each there are k^L possible paths through the model, hence the number of paths grows exponentially with the number of layers. This can be seen as an ensemble with exponentially many components, which remains tractable because the components share parameters.
Algorithm 2 gMoE
 1: k: number of mega-batches to aggregate
 2: D = ∅
 3: function TRAIN(D_i, i)
 4:   D += D_i
 5:   if i mod k == 0 then
 6:     Extract D_VAL and D_TR from D
 7:     while m is not converged do
 8:       (x, y) ∼ D_TR                ▷ In practice, sample mini-batches.
 9:       m.update(x, y)
10:     D = ∅
11:     m.grow(D_VAL)                  ▷ Growth step can be done at a different rate too.
12: function GROW(D_VAL)
13:   for each layer in the network do
14:     Let i be the losing expert on D_VAL, i.e. the expert incurring the largest cumulative loss
15:     Turn the corresponding gating output into an internal node and derive 2 gate children
16:     Initialize the new experts by copying the parameters from the old parent expert
17:     Initialize the new gate between the two siblings at random
B HYPER-PARAMETER SETTINGS
B.1 COMPUTER VISION EXPERIMENTS
For each mega-batch received, we keep 10% of the data to perform cross-validation. All experiments are run on a single 16GB Quadro GP100 GPU. We apply data normalization for each dataset considered. A training minibatch size of 128 is used. UMix and Ens models have N = 5 in all experiments. For gEns, we train one new model (n = 1) at every mega-batch, so the total number of models depends on the number of mega-batches. For Firefly we use a growth rate of 0.25, meaning that at every growth phase we add approximately a quarter of the initial number of parameters.
B.1.1 MNIST
Models are trained for 100 epochs, and we report results with soft gating. We use the AdaDelta (Zeiler (2012)) optimizer with a default learning rate of 1. We use an MLP with 2 hidden layers of varying width (e.g. 4, 8 or 32 neurons).
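For concreteness, a sketch of how such a backbone could be built in PyTorch; the depth and widths follow the text, while the function name and the 10-class output (standard for MNIST) are our own additions.

import torch.nn as nn

def make_mnist_backbone(width: int = 32) -> nn.Sequential:
    """MLP with 2 hidden ReLU layers of the given width (e.g. 4, 8 or 32)."""
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(28 * 28, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, 10),
    )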
B.1.2 CIFAR-10
Models are trained for 200 epochs, as this was shown to be long enough to allow the model to converge with a learning rate of 0.01. We use Stochastic Gradient Descent with a momentum value of 0.9 and weight decay of 1× 10−4. During training, we apply random horizontal flips and select random image crops with padding of 4 pixels. For the architecture, we use the same reduced VGG with batch normalization as prescribed in Wu et al. (2020). All layers are initialized with the same number of channels (e.g. 4, 8, or 32 channels). For the Firefly experiments, we keep all the Firefly-specific hyperparameters at the default values suggested in the authors' public codebase. We make one exception to this, namely we adapt the growth ratio to result in linear (rather than exponential) growth.
B.2 LANGUAGE MODELING EXPERIMENTS
All the language models are trained using fairseq (Ott et al., 2019) with a maximum of eight 32GB GPUs (NVIDIA V100), optimized with Adam (Kingma & Ba, 2014) using β1 = 0.9, β2 = 0.98, ε = 1e-8. The learning rate is warmed up over the first several hundred updates (between 500 and 4000) and then linearly decayed to 0 over the remaining updates, with a peak value tuned between 2e-4 and 5e-3. Models are trained up to 120,000 updates with a local batch size of 8 sequences per GPU, with gradient accumulation as needed to achieve a total batch size of 192 sequences; each sequence has 512 tokens. We fix the Switch Transformer balancing loss term to 0.01 and use a capacity factor of 1, following Fedus et al. (2021).
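A sketch of the warmup-then-linear-decay schedule just described; the default values below are picked from within the stated ranges and the function name is our own.

def lr_at_step(step, peak_lr=5e-4, warmup_steps=4000, total_steps=120_000):
    """Warm up linearly to peak_lr, then decay linearly to 0."""
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    remaining = max(0, total_steps - step)
    return peak_lr * remaining / max(1, total_steps - warmup_steps)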
C ADDITIONAL COMPUTER VISION RESULTS
In this section we show the impact of several variants of our framework. Namely, we report results for (a) a varying number of mega-batches, (b) whether to use preemption or not, and (c) whether to initialize from scratch or simply finetuning when replay is performed.
C.1 CIFAR
In the following results, we vary the number of mega-batches. Below you can find results for MB = 20.
C.1.1 DIFFERENT MBS
C.1.2 PREEMPTED RESULTS
We also consider the use of a patience term when training the model. When the validation accuracy has not improved over 25 consecutive epochs, we stop training for the given learning phase. As expected, we observe gains in compute efficiency, with a small loss in performance.
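A minimal sketch of this patience rule; the list-based interface is our own convention.

def should_stop(val_accs, patience=25):
    """Stop the current learning phase if validation accuracy has not
    improved over the last `patience` epochs."""
    if len(val_accs) <= patience:
        return False
    best_before = max(val_accs[:-patience])
    return max(val_accs[-patience:]) <= best_before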
C.1.3 INITIALIZING FROM SCRATCH
Below we show results comparing the performance of re-training models from scratch on all the data seen so far vs. simply finetuning the current model(s) on all the data. Main numbers are finetuned models; numbers in parentheses are models trained from scratch.
Table 8: CIFAR-10 MB = 10 results with Replay. Numbers in () are models (re)initialized from scratch at the start of a new MB
C.2 MNIST | 1. What is the focus of the paper, particularly in introducing the concept of Anytime Learning at Macroscale?
2. What are the concerns regarding the motivation behind the setup, especially when compared to online learning?
3. How do randomized algorithms such as Thompson-sampling based methods perform in online learning, and how does this relate to the paper's claims? | Summary Of The Paper
Review | Summary Of The Paper
The paper introduces a novel setup called Anytime Learning at Macroscale. In this setup the learner receives the examples as a sequence of large batches, and is required to output a model after processing each batch. This model is used to give prediction for the next batch. The overall performance is then sum of the average losses on the individual batches.
Review
My main concern is with the motivation of the setup. In particular, it does not seem to be very different from online learning. The paper does discuss the differences between the two setups, however the arguments are a bit superficial. In particular, it is claimed that in online learning the examples are streamed one at a time as opposed to being received in large batches - but this does not seem to completely handicap online learning methods. Randomized algorithms (such as Thompson-sampling based methods) should perform reasonably well. (For each example in batch i, apply the model obtained after processing all the examples in batches 1, 2, ..., i-1.) It would be great if the authors could elaborate on this.
ICLR | Title
On Anytime Learning at Macroscale
Abstract
Classical machine learning frameworks assume access to a possibly large dataset in order to train a predictive model. In many practical applications however, data does not arrive all at once, but in large batches over time. This creates a natural trade-off between the accuracy of a model and the time to obtain such a model. A greedy predictor could produce non-trivial predictions by immediately training on batches as soon as these become available, but it may also make sub-optimal use of future data. On the other hand, a tardy predictor could wait for a long time to aggregate several batches into a larger dataset, but ultimately deliver a much better performance. In this work, we consider such a streaming learning setting, which we dub anytime learning at macroscale (ALMA). It is an instance of anytime learning applied not at the level of a single chunk of data, but at the level of the entire sequence of large batches. We first formalize this learning setting, we then introduce metrics to assess how well learners perform on the given task for a given memory and compute budget, and finally we test about thirty baseline approaches on three standard benchmarks repurposed for anytime learning at macroscale. Our findings indicate that no model strikes the best trade-off across the board. While replay-based methods attain the lowest error rate, they also incur a 5 to 10 times increase in compute. Approaches that grow capacity over time do offer better scaling in terms of training flops, but they also underperform simpler ensembling methods in terms of error rate. Overall, ALMA offers both a good abstraction of the typical learning setting faced every day by practitioners, and a set of unsolved modeling problems for those interested in efficient learning of dynamic models.
1 INTRODUCTION
Empirical risk minimization (Vapnik, 1998) is the dominant framework to formalize the learning process of a supervised task, and it has been critical to the success of large scale training of deep learning systems on a wide variety of applications. Within this framework, training data is assumed to be provided to the learner all at once. Alternatively, when the dataset is very large (essentially infinite), data is streamed to the learner one minibatch at a time, assuming that the rate at which samples are received matches the model's processing time to learn from them.
Learning over streams of data has been studied in the machine learning domain for a long time (see Section 2 and Figure 1 for more details) under different assumptions: for instance, in online learning it is usually assumed that datapoints arrive one by one and have to be processed as soon as they are received; in continual learning, the stream usually corresponds to a sequence of large datasets, each associated with a different task to solve; etc. In this paper, we define a simple yet important setting where there is a single task to solve, and where training data often comes at a slower rate than a model can process it. Moreover, it comes in relatively large batches once in a while. While poorly studied, this setting corresponds to practical applications encountered in production pipelines. For instance, it is faced by teams deploying language modeling applications (e.g., content moderation), who build models trained on large amounts of data like filtered versions of Common Crawl, which are dumps of the internet. However, new snapshots are available every month, as new content is generated over time. Therefore datasets keep getting bigger every few months and models need to be retrained accordingly. Similarly, visual object recognition datasets used in deployed applications are often extended every few months to include new images with their corresponding annotations.
* Authors contributed equally
Practically, there are two main approaches to integrate the information present in a new batch of data into an existing model. If a lot of computational resources are available, a new and bigger model is instantiated and trained from scratch on the union of the old training set with the new batch of data. However, since this is a computationally very intensive process, retraining is typically done only rarely, once several batches of data have been collected. We call this approach “tardy” large-scale learning, since a predictor is available only at a later time. Another option, particularly suitable when computational resources are scarce and a predictor is needed quickly, is to simply finetune the old model on the new data as it arrives. Note that, in this setting, methods from the data stream or online learning domains that are based on the idea of processing each datapoint just once are not suitable, since they have been developed for different use-cases.
This trade-off is emblematic of anytime learning, a learning setting where a learner has to provide good predictions at any point in time, while improving its performance over time as more and more data is observed. From an anytime learning perspective, neither training a large model after all data is received nor finetuning on the newly added batch of data is satisfying. The former approach is a poor anytime learner because one needs to wait for a long time before obtaining a useful predictor. The latter approach is a poor anytime learner because it typically cannot leverage future batches of data very well, since the model has a fixed capacity, determined on a small portion of the overall dataset, and because the model is inherently trained on non-i.i.d. data.
In this work, we aim at exploring this accuracy versus time trade-off of anytime learning, not at the level of a single batch of data, but at the macroscale of the entire sequence of batches. This is a setting which more closely mimics practical applications, that we call anytime learning at macroscale (ALMA). In this learning setting, we assume that the time to train a model is negligible compared to the interval of time between two consecutive batches of data (and therefore we do not care about how quickly a learner adapts to a new batch), yet efficiency matters in the sense that for the same performance a predictor that uses less compute and memory is preferable. In summary, we are interested in a learner that i) yields high accuracy, ii) can make non-trivial predictions at any point in time while iii) limiting its computational and memory resources.
Our first contribution is to formalize the ALMA problem and to introduce metrics to evaluate learners (§3). We consider three different axes: error rate, memory and amount of computation. By measuring these quantities against time, via an area under the curve, we account not only for the final performance but also for the whole training trajectory over the sequence of large batches of data.
Our second contribution is an extensive empirical evaluation (§5) of various models (§4) that strike different trade-offs between accuracy and time to obtain a useful predictor. In particular, we explore models that fall in between greedy finetuning and tardy large-scale learning, and investigate models that leverage batches of data at an intermediate rate. We also consider a rich family of modular architectures, from plain ensembling methods to hierarchical mixture of experts, and several variants thereof, including those that have access to a replay buffer storing all previous batches of data and those that can grow capacity over time.
Our findings across three different benchmarks, including a large scale language modeling one, can be summarized as follows. a) An intermediate waiting time offers the best trade-off between accuracy and time to yield such a predictor. However, b) there is no single approach striking the best trade-off between performance and efficiency for various model sizes. c) Retraining from scratch a big model does offer the lowest error rate but sacrifices efficiency. d) Interestingly, large models are the most statistically efficient even when considering small datasets (like MNIST) and fully
connected networks. e) While approaches that grow capacity exhibit gains in terms of computational efficiency, these do not even outperform simple ensembles. Overall, our work points at several research opportunities to improve modeling in a streaming setting of broad practical relevance, rather than pointing at any particular solution. We have also released code to reproduce our experiments and the entire platform implementing ALMA.
2 RELATED WORK
ALMA relates to several other learning frameworks: offline learning, continual learning, online learning and transfer learning, as illustrated in Figure 1. i) It shares the same assumptions as classical empirical risk minimization (ERM) (Vapnik, 1998) at the level of each batch of data. However, it overall violates ERM's assumption of i.i.d. observations, because data points come in a stream of data chunks. ii) Because of this, ALMA relates to continual learning (CL) (Ring, 1994; Thrun, 1994; Ring, 1997; Thrun, 1998), with the key difference that the data distribution across batches (or tasks) is assumed stationary in ALMA. Therefore, ALMA can be seen as a special case of CL with a single task to solve. iii) ALMA relates also to online learning (Bottou, 1998), since it assumes that data is coming in a stream, an assumption also made in the concept drift literature (Lu et al., 2018). However, in online learning examples are streamed one at a time (or at random from a large dataset), while in ALMA the learner receives large batches of data sequentially. In ALMA, received data can be processed multiple times, as opposed to the online learning setting that usually assumes that any new datapoint has to be processed as soon as it is available and will not be reused in future updates. iv) Finally, ALMA relates more broadly to transfer learning (Pan & Yang, 2010), as the problem of adapting to a new batch of data can be interpreted as leveraging knowledge acquired on previous batches to more efficiently learn from the new batch of data.
Of course, ALMA relates to anytime learning (Grefenstette & Ramsey, 1992; Ramsey & Grefenstette, 1994), which has been recently applied to compare various autoML frameworks (Liu et al., 2020). However, in this work we are not interested in assessing the anytime learning ability at the level of each chunk of data, but only at a coarser granularity, at the level of the entire stream of chunks. Inspired by Liu et al. (2020), we consider the area under the curve of error rate against time to measure performance, but in order to account also for compute and memory budget, we add to our evaluation metrics also the area under the curve for memory and compute.
From the more theoretical side, there has been work about sub-bagging (Bühlmann & Yu, 2002) (bagging using subsets of a larger dataset) which is similar to our setting but without the sequential aspect of it. In this context, Breiman (1999) proposed a model similar to our growing ensembling (gEns), Bühlmann & Yu (2002) studied sub-bagging as a way to make the prediction of tree classifiers more robust while Zou et al. (2021) studied the consistency of the estimator in this setting. We defer to future studies the analysis of ALMA, while in this work we focus on the empirical evaluation.
Shifting the discussion to prior work on models that adjust their capacity dynamically, Waterhouse & Robinson (1995) introduced an approach to grow a hierarchical mixture of experts model (Jordan & Jacobs, 1994). This is a tree structured model where experts are at the leaves and gating functions are at non-terminal nodes. The tree determines a hierarchical partition of the input space into regions that are associated to each expert. This approach was made more efficient in later work by (Fritsch et al., 1996). In this work we consider a baseline (gMoE) that extends this prior work to hierarchical mixture of experts (Eigen et al., 2014; Denoyer & Gallinari, 2015; Lepikhin et al., 2020).
Growing architectures have also been studied in CL. For instance, Fernando et al. (2017) and Veniat et al. (2021) proposed a modular architecture that is assembled for every task, possibly reusing previously trained modules. The major difference with our work is that in our case routing is input dependent as opposed to task dependent. Yoon et al. (2018) instead proposed a method to incrementally and smoothly add hidden units. Similarly, Wen et al. (2020) proposed a heuristic approach to automatically adjust the network depth. Wang et al. (2017) considered growing both depth and width when finetuning to a new task. Liu et al. (2019a) and Wu et al. (2020) proposed approaches to grow architectures in depth and width by leveraging Taylor approximation and greedy selection. In our work, we benchmark against this last variant. None of these approaches have been applied to the ALMA setting to date.
Finally, some of our findings are built upon and extend recent empirical evaluations studying the scaling properties of language models (Kaplan et al., 2020a; Li et al., 2020b). In this study, we
confirm the conclusion that bigger models generalize better and are more statistically efficient, not only in language modeling tasks using a transformer architecture, but also in smaller scale computer vision tasks using both fully connected and convolutional architectures.
3 LEARNING SETTING
In anytime learning at macroscale (ALMA), we assume that there exists an underlying data distribution p(x, y) with input x ∈ R^D and desired label y ∈ {1, . . . , C}. Notice that extensions to regression and unsupervised learning (where y is missing) are trivial, and therefore in this work we focus on classification problems for simplicity of exposition. An important property of ALMA is that data is presented to the learner as a stream S_B of B consecutive batches of examples. Let D_i be a collection of N ≫ 0 i.i.d. samples randomly drawn from p(x, y), for i ∈ {1, . . . , B}. The stream is then defined as the ordered sequence S_B = {D_1, . . . , D_B}. We refer to each dataset D_i as a mega-batch, as it is composed of a large number of examples. Typically a learner m : R^D → {1, . . . , C} updates its parameters by processing a mini-batch of n ≪ N examples at a time from each mega-batch D_i, and by iterating several times over each mega-batch before being presented with the next mega-batch. Since the learner cannot access future mega-batches, overall the data distribution is not i.i.d., even though samples drawn from each mega-batch are i.i.d., and cross-validation is performed using a subset of the current mega-batch. A learner could decide to use previous mega-batches when learning on the current mega-batch, but this will increase its compute usage.
Finally, we assume that the time it takes a learner to update its internal parameters after having observed a mega-batch is much less than the interval between the arrival of two consecutive megabatches. In other words, the rate at which data arrives is slower than the processing time of the model, and therefore the model could decide to iterate several times over the data at its disposal to improve its prediction accuracy.
3.1 METRICS
We evaluate learners in the ALMA setting across three axes, namely: accuracy, memory and computation. Let t be the time at which the t-th mega-batch arrives; this data can be used by the model to update its parameters or it is simply aggregated to previous mega-batches for later use.
We compute the error rate of model m at time t (after the arrival of the t-th mega-batch) and take the area under the curve obtained by varying t from 0 to the total number of mega-batches B; the resulting cumulative error rate (CER) is:
\text{CER} = \sum_{t=0}^{B} \frac{1}{|\mathcal{D}^{Ts}|} \sum_{(x,y) \in \mathcal{D}^{Ts}} \mathbb{1}\!\left[ m(x; \theta_t) \neq y \right] \qquad (1)
where m(x; θ_t) is the model at time t equipped with parameters θ_t, D^{Ts} is the test set, |D^{Ts}| is the number of examples in the test set, and the indicator 1[m(x; θ_t) ≠ y] is one if the model prediction does not match the ground truth label and zero otherwise. The outer sum computes the discrete integral of the error rate over time. CER is going to be small only when the error rate is small throughout the whole stream. CER is instead large for a tardy model that waits till the very last mega-batch to update the model, even though eventually this may obtain a very low final error rate. While not perfect, CER provides a good summary of the performance of a system across time. Nevertheless, to fully capture the differences between two models, a deeper look at the performance across time is needed, as illustrated in Figure 2 for instance.
Similarly, we compute the cumulative memory usage and compute as:
\text{Mem} = \sum_{t=0}^{B} |\theta_t|, \qquad \text{Comp} = \sum_{t=0}^{B} O\!\left(m(\cdot\,; \theta_t)\right) \qquad (2)
where |θt| is the number of free parameters of the model at time t, and O(m(·; θt)) is the number of flops used by the model to process the t-th mega-batch. Once again, by measuring the area under the curves obtained by tracking these quantities over time we obtain a holistic assessment of memory and compute throughout the whole stream. A model can obtain small Mem and Comp only if it does not consume memory and if it is computationally parsimonious throughout the entire duration of the stream.
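For illustration, Eqs. (1)-(2) could be computed from per-time-step model snapshots as follows; the snapshot fields, and the assumption that each stored model maps an input directly to a predicted label, are our own conventions rather than part of the released code.

def cumulative_metrics(snapshots, test_set):
    """Compute CER, Mem and Comp (Eqs. 1-2).

    `snapshots` has one entry per time step t, with the model m_t (a callable
    returning a predicted label), its parameter count, and the flops it used.
    """
    cer = sum(
        sum(1 for x, y in test_set if s["model"](x) != y) / len(test_set)
        for s in snapshots
    )
    mem = sum(s["num_params"] for s in snapshots)
    comp = sum(s["flops"] for s in snapshots)
    return cer, mem, comp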
Algorithm 1 Training in the ALMA setting
 1: procedure TRAIN(m, w, replay, grow)      ▷ m is the model, w is the waiting time
 2:   t ← 1
 3:   D ← ∅
 4:   while t < B do                         ▷ For each stage
 5:     if replay then                       ▷ Acquire w mega-batches
 6:       D ← D ∪ D_t ∪ ... ∪ D_{t+w−1}
 7:     else
 8:       D ← D_t ∪ ... ∪ D_{t+w−1}
 9:     t ← t + w
10:     if grow then
11:       m.grow()                           ▷ Grow the model if the model is a growing model
12:     m.train(D)                           ▷ Fine-tune or retrain from scratch m on the collected dataset
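A Python rendering of Algorithm 1, for readers who prefer runnable sketches; the model interface (.grow(), .fit()) and the representation of mega-batches as lists of examples are assumptions made for illustration.

from itertools import chain

def alma_train(model, stream, wait=1, replay=False, grow=False):
    """Train `model` over a stream of mega-batches, as in Algorithm 1."""
    pending, seen = [], []
    for t, mega_batch in enumerate(stream, start=1):
        pending.append(mega_batch)
        seen.append(mega_batch)
        if t % wait != 0:
            continue                      # keep accumulating for this stage
        data = seen if replay else pending
        if grow:
            model.grow()                  # expand capacity before training
        model.fit(list(chain.from_iterable(data)))  # fine-tune or retrain
        pending = []
    return model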
4 LEARNING ALGORITHMS
In this section, we describe the methods we tested in the ALMA setting. They generally follow the learning procedure shown in Algorithm 1. At a high level, we consider two families of models, those with a monolithic architecture and those with a modular architecture (e.g., ensembling). The latter are amenable to grow over time by adding new modules to the existing set. We will start by describing fixed architectures (§4.1) and then conclude with growing architectures (§4.2). All models are also given the option to replay previous mega-batches.
4.1 FIXED ARCHITECTURES
The first family of methods trains models with a fixed architecture. These models are sequentially trained over new mega-batches and exhibit a fixed memory footprint. We consider three models:
Single Model (SM): This is a standard multi-layer neural network (e.g., a fully connected neural network or a transformer) trained by stochastic gradient descent. It can be initialized from random or from the parameters of the model trained on the previous mega-batch. The initialization choice is determined via cross-validation.
Ensemble of Models (Ens): The second approach is the simplest modular approach, consisting of an ensemble of N neural networks with the same architecture, each being trained independently on the same sequence of data. The output of the overall model at test time is the average probability distribution produced by each component1. The advantage of Ens is that training and inference can be trivially parallelized, enabling to scale up model parameters very easily. The disadvantage is that inference requires N times more compute than what is required by each component.
Uniform Mixture of Models (UMix): A potential drawback of Ens is that evaluation and training are inconsistent. UMix addresses this by training a model whose prediction is the average (in logit space) of the predictions produced by N networks. While this requires synchronization during training, now both training and evaluation use the same model.
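To make the distinction concrete, here is a minimal PyTorch sketch of the two prediction rules (the function names are ours):

import torch

def ens_predict(models, x):
    """Ens: average the probability distributions of N independent networks."""
    probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
    return probs.mean(dim=0)

def umix_predict(models, x):
    """UMix: average predictions in logit space; the same rule is used
    during both training and evaluation."""
    logits = torch.stack([m(x) for m in models])
    return torch.softmax(logits.mean(dim=0), dim=-1)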
4.2 GROWING ARCHITECTURES
In the previous section, the number of parameters and the architecture of the model are fixed throughout the model’s lifetime. However, as more data is observed, it is interesting to consider dynamic architectures that grow over time, because these may save compute and memory during the earlier stages of learning while providing more predictive power during the later stages. We consider three growing approaches:
1Classical bagging approaches and majority vote strategies have also been explored, without significant differences.
Growing Ensemble (gEns): Like the Ens model, gEns is also a combination of neural networks trained independently. While Ens considers N networks that are, at each stage, trained over the new chunk of data, gEns replaces this step with a growing step where n neural networks are added. In our implementation, only these n neural networks are trained over the new data, while the other neural networks (trained on previous mega-batches) are kept fixed.
Growing Mixture of Experts (gMoE): A hierarchical mixture of experts model (MoE) is an architecture where at layer l the output representation is z_l = \sum_{j=1}^{k} g(j \mid z_{l-1})\, h(z_{l-1} \mid j), where g is the gating or routing function and h(· | j) is the j-th expert. Compared to Ens, MoE has exponentially many more components, albeit with a lot of parameter sharing. Another advantage is that by selecting only one (or a few) experts, the computational cost is independent of the number of experts, assuming the cost of gating is negligible compared to the cost of executing the experts. The main issue is that MoEs are notoriously harder to train (Eigen et al., 2014; Denoyer & Gallinari, 2015; Lepikhin et al., 2020). In this work, we consider a growing version of MoE, which we denote by gMoE, whereby experts are added over time. See Appendix A for more details.
Firefly (Wu et al., 2020) (FF): FF is a method which progressively grows neural networks, jointly optimizing both the model architecture and parameters. Growth includes both a width expansion by adding new hidden units (or feature maps) as well as a depth expansion by adding new layers. Importantly, this is an example of a non-modular method, unlike Ens or gMoE, which is potentially more expressive but also less efficient at inference time because there is no structured sparsity that can be leveraged to speed up computation.
5 EXPERIMENTS
In this section we first describe how standard benchmarks can be repurposed for ALMA, we then provide the details of the models we tested, and we finally conclude with an analysis of the results we obtained, aiming to understand which method attains the best trade-off between time, accuracy, compute and memory usage.
Datasets We consider a variety of datasets. The first dataset is MNIST (LeCun et al., 1998), which consists of a training set with 60,000 quasi-binary handwritten digits of size 28x28 pixels, and a test set with 10,000 examples. The second dataset is CIFAR 10 (Krizhevsky, 2009) that has a training set with 50,000 images of size 32x32 pixels belonging to 10 classes such as bird, car, horse, ship, truck, etc. The third dataset, used for our large-scale language modeling evaluation, is a portion of the collection of English language text introduced in Liu et al. (2019b), consisting of Books, Wikipedia and Common Crawl. We consider 4 (large) mega-batches for training and one additional mega-batch for evaluation, each consisting of approximately 440M words; we also hold out a validation set with approximately 0.5M words of Common Crawl for model selection. We use a byte-pair encoding (BPE) (Sennrich et al., 2016) vocabulary with 50,000 units, following Radford et al. (2019). This dataset is fairly representative of what practitioners might face when maintaining a deployed system with new data arriving every few months.
Given a dataset like any of the above, we construct a benchmark for ALMA evaluation as follows: 1) we randomly partition the training set into B mega-batches with equal number of training examples (B = 50 for MNIST and CIFAR 10, and 4 for the text dataset), 2) from each mega-batch we extract 10% of the data to build the mega-batch validation set (except for the large scale language modeling dataset where we use the provided validation set), and 3) we create a learning experience by doing one pass over the sequence of mega-batches. For each mega-batch, the learner can query as many mini-batches as desired. The learner can also decide not to train on the data of a mega-batch right away but instead to wait and accumulate data across a few consecutive mega-batches. While the learner observes data, it is also tested on the test set. This is not used for validation purposes, but only for final reporting as shown in §5.1.
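A minimal sketch of this benchmark construction; details beyond the stated procedure (shuffling, the dictionary layout) are our own assumptions.

import random

def make_alma_stream(train_set, num_mega_batches, val_fraction=0.1, seed=0):
    """Split a dataset into B equal mega-batches, holding out 10% of each
    mega-batch as its validation set."""
    examples = list(train_set)
    random.Random(seed).shuffle(examples)
    size = len(examples) // num_mega_batches
    stream = []
    for b in range(num_mega_batches):
        chunk = examples[b * size:(b + 1) * size]
        n_val = int(val_fraction * len(chunk))
        stream.append({"train": chunk[n_val:], "val": chunk[:n_val]})
    return stream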
Models We evaluate the six approaches presented in §4, and for each of them we consider various waiting times, a version with and without replay, and at least two model sizes. For each setting, we cross-validate over several hyper-parameters such as initialization type, learning rate, stopping criterion, growth rate, etc.
Next, we describe in detail the architecture used on each dataset. Further experimental details to aid reproducibility are reported in Appendix B. On MNIST the backbone architecture of SM is a three layer fully connected neural network with ReLU units. We considered two hidden unit sizes, namely 4 and 32 (denoted by [s] and [b], respectively), which lets us simulate the regime of big data relative to the size of the network and explore how to grow architectures without worrying about overfitting. Similarly, the components of Ens, gEns and UMix are SM networks of the same size as stated above; gMoE also starts off as SM and adds modules (at the first two layers) that have the same size as the original layer of SM. When varying the waiting time, i.e., the number of mega-batches that are aggregated before initiating a new training session, we use the suffix “_w” to indicate its value.
On CIFAR 10, the methods and notations are the same as in MNIST. The only difference is that the backbone architecture is a scaled down version of a VGG19 convolutional neural network (Simonyan & Zisserman, 2015), where the number of intermediate feature maps is the same for each layer and equal to either 4 or 32. On this dataset, we also consider FF starting off from the same VGG19 backbone.
For the language modeling task SM is a Switch Transformer (Fedus et al., 2021), which is a hard mixture of experts model with an additional load balancing loss term and hard capacity constraint applied during training to prevent uneven expert utilization. Following Fedus et al. (2021), we fix the weight of the balancing loss term to 0.01 and use a capacity factor of 1, ensuring relatively uniform expert utilization. We train the model using Adam (Kingma & Ba, 2015) and tune the learning rate and dropout on the validation set. In the growing setting we copy the expert weights and gating network weights corresponding to the top-k experts incurring the largest loss, where k is typically between 2 and 4. We consider two model sizes: a base model with 6 layers and model dimension of 512, for a total of 40M shared parameters and 6M additional parameters per expert; and a large model with 12 layers and model dimension of 768, for a total of 96M shared parameters and 28M additional parameters per expert. We use an input sequence length of 512 tokens and we do not use replay given the large chunk sizes.
5.1 RESULTS
In Fig. 2 we start by analyzing learning curves on CIFAR 10 for a subset of the methods as a function of the waiting time. We then dive into analyzing all methods on both MNIST (Tab. 1) and CIFAR 10
(Tab. 2), using the optimal empirical value of waiting time. We conclude by confirming the major findings at scale on the language modeling task (Tab. 3).
Fig. 2 shows the test error rate as a function of the number of mega-batches received for both the small (left) and the large (right) model. We observe that an intermediate waiting time (in this case equal to 5) strikes the best trade-off between accuracy and time for all methods, since curves with waiting time equal to 5 have the lowest area under the curve. Greedy methods using waiting time equal to 1 achieve lower error rate only during the very beginning of the stream. Second, we observe that bigger models (SM and Ens) not only generalize better but they are also statistically more efficient: the small Ens obtained almost 35% error rate by the end of its learning experience, which is worse than the error rate obtained by the large Ens just after having observed one tenth of the entire stream. The statistical efficiency of large models does not apply only to large transformers (Kaplan et al., 2020a), but also to fully connected (we obtained similar results on MNIST) and convolutional models.
Next, using the waiting time that yielded the lowest cumulative error rate, we compare all methods discussed in §4, focusing our discussion on Tab. 2 of CIFAR 10 as same conclusions apply to MNIST as well (see Tab. 1).
First, replay lowers the CER by a relative amount of about 10% at the cost of increasing the cumulative training flops by a factor of more than 5, which is rather substantial. Notice that retraining from scratch using memory replay, as reported here in parentheses, is nowadays the dominant approach to deal with sequential datasets.
Second, Ens works better than UMix for larger models, and vice versa. We surmise that ensembling may alleviate overfitting of large models, but coordinating the components of the ensemble as UMix does is more effective in an underfitting regime (i.e., with small models). Ens thus looks like a good method to train large architectures without suffering from overfitting, and may be used when the complexity of the task is not known a priori.
Third, all growing approaches perform rather similarly, particularly when starting from larger backbones, although they strike slightly different trade-offs. For instance, gMoE is the most efficient at test time, while FF yields a lower error rate. Interestingly, none of the approaches that grow architectures currently manages to beat Ens in terms of error rate when starting from a large backbone, although they require substantially fewer flops at inference time. Finally, while methods derived from SM (for the same size of the initial backbone, see rows with the same color in the table) all manage to beat SM, it is also worth noting that for the same number of parameters SM is still the best performing method, unless there is overfitting. In particular, Ens with 12550 parameters achieves a CER of 2440, while SM with 11710 parameters obtains a CER of 2038 and requires much less compute; the same considerations apply to gMoE with 29550 parameters compared to SM with 31660 parameters. Therefore there is no single model striking a much better trade-off, and more advanced approaches do not outperform simpler methods like Ens.
The results on the large scale language modeling task reported in Tab. 3 show that bigger models perform better (the larger the number of parameters the lower the PPL for a given model class) and are also more statistically efficient (for instance the base SM_w1 attains 26.53 after seeing the whole stream, while the large SM_w1 obtains 22.47 just after seeing the first chunk of data), consistent with recent related work (Kaplan et al., 2020b; Li et al., 2020a). We also observe that Ens is a strong performer, with Ens_w1 and gEns_w1 models dominating SM models in all settings. Surprisingly, ensembles trained on distinct data chunks (gEns_w1; t1 or t3) perform no better than ensembles trained on a single data chunk (Ens_w1; t0). For instance, among Base 2-model ensembles (4@2), Ens_w1 achieves a perplexity of 26.20 using a single data chunk (t0), while gEns_w1 achieves a perplexity of 26.27 using models trained on each of the two data chunks (t1). Finally, if test time inference is a concern, then gMoE is a preferable choice since its runtime is comparable to SM.
6 CONCLUSION AND PERSPECTIVES
In this work we introduced the anytime learning at macroscale (ALMA) setting, which is an instance of anytime learning under the assumption that data is observed as a sequence of large batches. ALMA better mimics the learning scenarios faced by machine learning practitioners, who want to efficiently solve a task but from time to time receive more data to train on. We introduced metrics that enable assessment in terms of error rate, memory usage and compute throughout the entire learning experience. Equipped with these tools, we then evaluated several approaches on three different datasets, including large scale language modeling. We found that methods that update parameters at an intermediate rate tend to yield a better trade-off, and that bigger models tend to generalize better. In particular, models that grow capacity over time generalize better, particularly when the initial model is smaller, and ensembling is a very strong baseline.
A cynical interpretation of our finding that bigger models generalize better could lead the reader to conclude that it can all be solved by starting with a big model. However, as data is added over time, so is computation. It is often the case that researchers working on large-scale learning instantiate the biggest possible model to train on their task, but a few months later they manage to launch even bigger models thanks to compute and engineering advances. How can the larger model leverage what has been learned from the previously trained model? Is there a modeling choice that strikes a better trade-off than retraining from scratch? More generally, what are good approaches to extract information from a new batch of data and integrate it into an existing model? While we do not provide a full answer to these questions, we do offer a framework to study them and several strong baseline approaches to compare against and build upon.
7 REPRODUCIBILITY STATEMENT
We have made several efforts to ensure that the results provided in the paper are fully reproducible. We first provide a clean codebase from which all the computer vision results in this paper are generated. In this codebase, one can find the exact hyperparameters used for each method in the provided configurations. We have attached a readme to the code in order to guide users in running it. For the LM experiments, as stated in the appendix, we use fairseq (Ott et al., 2019) and provide the required information to replicate our results.
APPENDIX
A GROWING MIXTURES OF EXPERTS
Growing Mixture of Experts (gMoE): A mixture of experts (MoE) is a sequence of non-linear functions, each of which is potentially a mixture of experts (omitting the dependence on parameters):
m(x) = f^l(f^{l-1}(\dots f^1(x) \dots)), \quad \text{with} \quad f^i(z) = \sum_{j=1}^{k} g^i(j \mid z)\, h^i(z \mid j)
where g^i is the gating function at the i-th layer, which outputs a categorical distribution over the experts, and h^i(· | j) is the j-th expert at layer i. The gating function can be “soft”, in which case it outputs non-zero weights for each expert via a softmax, or “hard”, in which case only one expert is selected through multinomial sampling (and learned through the straight-through estimator in this paper (Bengio et al., 2013)). At test time in the “hard” case, we select the expert with the largest probability. The appeal of mixtures of experts is their high expressivity, and the fact that experts can be easily added to increase the capacity of the model. The gMoE model is the growing version where, at each stage as illustrated in Fig. 3, new experts are added at each layer; details about the precise expansion process are given below.
The key design considerations are: when to grow, what to grow and how to grow. Here, we will refer to our default setting which favors simplicity, unless otherwise specified.
A growth step is triggered at each stage, ensuring linear growth over time. We grow by adding one expert at each layer, making sure that all experts within a layer have the same architecture, albeit with different parameters. In order to grow, we look at which expert has the largest associated cumulative loss; we call this the losing expert. The cumulative loss is defined as the sum of the losses of examples in the validation set that have been routed through a particular expert; each expert thus has an associated cumulative loss value. The rationale is to identify, at each layer, the expert responsible for the largest contribution to the total loss.
To avoid a drop in the loss function and to preserve its differentiability when splitting an expert, we propose a tree-based approach where the losing expert is split into two experts with exactly the same parameters, as illustrated in Fig. 3: two children leaves are derived and we instantiate a new gate for the children, which decides whether an input example routed to the old expert should now go to the right or the left expert child. The parameters of the new gate are initialized at random, while the parameters of the new experts are exact copies of those of the losing expert that we split.
More formally, if s is the losing expert, then the term g^i(s \mid z)\, h^i(z \mid s) is replaced by:

\sum_{k=1}^{2} g^i(s \mid z)\, g^i(k \mid z, s)\, h^i(z \mid s, k) \qquad (3)
where gi(k|z, s) is the newly introduced gate, and z is the input of the gating and experts. Over time, the gating function learns to partition its input space into a binary tree (if we start from a single expert), and the gating value of an expert is the product of the gating probabilities on the path from root to the leaf expert. Both the gating tree structure and the particular initialization scheme guarantee that the growth step is smooth and fully differentiable, in particular, the loss before and after the growth step is the same.
If we consider each path in the MoE model to be a different model, then with L layers of k experts each there are k^L possible paths through the model, hence the number of paths grows exponentially with the number of layers. This can be seen as an ensemble with exponentially many components, which remains tractable because the components share parameters.
Algorithm 2 gMoE
 1: k: number of mega-batches to aggregate
 2: D = ∅
 3: function TRAIN(D_i, i)
 4:   D += D_i
 5:   if i mod k == 0 then
 6:     Extract D_VAL and D_TR from D
 7:     while m is not converged do
 8:       (x, y) ∼ D_TR                ▷ In practice, sample mini-batches.
 9:       m.update(x, y)
10:     D = ∅
11:     m.grow(D_VAL)                  ▷ Growth step can be done at a different rate too.
12: function GROW(D_VAL)
13:   for each layer in the network do
14:     Let i be the losing expert on D_VAL, i.e. the expert incurring the largest cumulative loss
15:     Turn the corresponding gating output into an internal node and derive 2 gate children
16:     Initialize the new experts by copying the parameters from the old parent expert
17:     Initialize the new gate between the two siblings at random
B HYPER-PARAMETER SETTINGS
B.1 COMPUTER VISION EXPERIMENTS
For each mega-batch received, we keep 10% of the data to perform cross-validation. All experiments are run on a single 16GB Quadro GP100 GPU. We apply data normalization for each dataset considered. A training minibatch size of 128 is used. UMix and Ens models have N = 5 in all experiments. For gEns, we train one new model (n = 1) at every mega-batch, so the total number of models depends on the number of mega-batches. For Firefly we use a growth rate of 0.25, meaning that at every growth phase we add approximately a quarter of the initial number of parameters.
B.1.1 MNIST
Models are trained for 100 epochs, and we report results with soft gating. We use the AdaDelta (Zeiler (2012)) optimizer with a default learning rate of 1. We use an MLP with 2 hidden layers of varying width (e.g. 4, 8 or 32 neurons).
B.1.2 CIFAR-10
Models are trained for 200 epochs, as this was shown to be long enough to allow the model to converge with a learning rate of 0.01. We use Stochastic Gradient Descent with a momentum value of 0.9 and weight decay of 1× 10−4. During training, we apply random horizontal flips and select random image crops with padding of 4 pixels. For the architecture, we use the same reduced VGG with batch normalization as prescribed in Wu et al. (2020). All layers are initialized with the same number of channels (e.g. 4, 8, or 32 channels). For the Firefly experiments, we keep all the Firefly-specific hyperparameters at the default values suggested in the authors' public codebase. We make one exception to this, namely we adapt the growth ratio to result in linear (rather than exponential) growth.
B.2 LANGUAGE MODELING EXPERIMENTS
All the language models are trained using fairseq (Ott et al., 2019) with a maximum of eight 32GB GPUs (NVIDIA V100), optimized with Adam (Kingma & Ba, 2014) using β1 = 0.9, β2 = 0.98, ε = 1e-8. The learning rate is warmed up over the first several hundred updates (between 500 and 4000) and then linearly decayed to 0 over the remaining updates, with a peak value tuned between 2e-4 and 5e-3. Models are trained up to 120,000 updates with a local batch size of 8 sequences per GPU, with gradient accumulation as needed to achieve a total batch size of 192 sequences; each sequence has 512 tokens. We fix the Switch Transformer balancing loss term to 0.01 and use a capacity factor of 1, following Fedus et al. (2021).
C ADDITIONAL COMPUTER VISION RESULTS
In this section we show the impact of several variants of our framework. Namely, we report results for (a) a varying number of mega-batches, (b) whether to use preemption or not, and (c) whether to initialize from scratch or simply finetuning when replay is performed.
C.1 CIFAR
In the following results, we vary the number of mega-batches. Below you can find results for MB = 20.
C.1.1 DIFFERENT MBS
C.1.2 PREEMPTED RESULTS
We also consider the use of a patience term when training the model. When the validation accuracy has not improved over 25 consecutive epochs, we stop training for the given learning phase. As expected, we observe gains in compute efficiency, with a small loss in performance.
C.1.3 INITIALIZING FROM SCRATCH
Below we show results comparing the performance of re-training models from scratch on all the data seen so far vs. simply finetuning the current model(s) on all the data. Main numbers are finetuned models; numbers in parentheses are models trained from scratch.
Table 8: CIFAR-10 MB = 10 results with Replay. Numbers in () are models (re)initialized from scratch at the start of a new MB
C.2 MNIST | 1. What is the focus of the paper regarding empirical evaluation in an anytime learning setting?
2. What are the strengths and weaknesses of the proposed approach in comparison to prior works in data stream mining research?
3. How does the reviewer assess the significance and novelty of the paper's contributions?
4. What are some potential limitations of the paper regarding its experimental settings and dataset choices?
5. Are there any concerns or suggestions regarding the future directions mentioned in the paper? | Summary Of The Paper
Review | Summary Of The Paper
The authors describe a framework to perform empirical evaluation of an anytime learning setting where data is available in a streaming minibatch fashion. With a primary aim to measure performance of a classifier across variety of practical settings of such streaming data to not only achieve high accuracy, but also provide non-trivial prediction anytime using limited computational resources. Using multiple benchmark datasets, the paper concludes that methods with intermediate parameter updates are better on the accuracy to computational efficiency tradeoff, and larger models generalize better.
Review
The paper is well written. The authors document the approach they have considered and the metrics used to evaluate the various experimental settings.
As I started to read the paper, the problem setting seemed very similar to the ones used in data stream mining research over the past decade. Though the authors state that the primary differentiator from the stream setting is that the models use what they call "mega-batches" as streams rather than streaming single data instances, I fail to see any theoretical or empirical difference between the two approaches. There exist multiple popular data stream frameworks such as MOA (Massive Online Analysis) that are used exactly for the problem setting described in the paper. So, the primary contribution of the paper seems to be in extensively evaluating the model complexity and approaches over various data and problem settings. Moreover, the majority of the future questions that the authors hope to answer have been studied across various papers (in similar forms). Please refer to Lu, Jie, et al. "Learning under concept drift: A review." IEEE Transactions on Knowledge and Data Engineering 31.12 (2018): 2346-2363.
The second objection is that the authors seem to use datasets that are simple by today's standards to derive their conclusions. Though the conclusions in the paper are fair and not surprising, a more complex set of datasets may provide a stronger result. Furthermore, it is important to note that there are other factors that influence classifier performance beyond the batch size and the data size available for training. The data itself may be imbalanced, non-standard, etc. So, by using more datasets, these issues could potentially be alleviated.
ICLR | Title
On Anytime Learning at Macroscale
Abstract
Classical machine learning frameworks assume access to a possibly large dataset in order to train a predictive model. In many practical applications however, data does not arrive all at once, but in large batches over time. This creates a natural trade-off between the accuracy of a model and the time to obtain such a model. A greedy predictor could produce non-trivial predictions by immediately training on batches as soon as these become available, but it may also make sub-optimal use of future data. On the other hand, a tardy predictor could wait for a long time to aggregate several batches into a larger dataset, but ultimately deliver a much better performance. In this work, we consider such a streaming learning setting, which we dub anytime learning at macroscale (ALMA). It is an instance of anytime learning applied not at the level of a single chunk of data, but at the level of the entire sequence of large batches. We first formalize this learning setting, we then introduce metrics to assess how well learners perform on the given task for a given memory and compute budget, and finally we test about thirty baseline approaches on three standard benchmarks repurposed for anytime learning at macroscale. Our findings indicate that no model strikes the best trade-off across the board. While replay-based methods attain the lowest error rate, they also incur a 5 to 10 times increase in compute. Approaches that grow capacity over time do offer better scaling in terms of training flops, but they also underperform simpler ensembling methods in terms of error rate. Overall, ALMA offers both a good abstraction of the typical learning setting faced every day by practitioners, and a set of unsolved modeling problems for those interested in efficient learning of dynamic models.
1 INTRODUCTION
Empirical risk minimization (Vapnik, 1998) is the dominant framework to formalize the learning process of a supervised task, and it has been critical to the success of large scale training of deep learning systems on a wide variety of applications. Within this framework, training data is assumed to be provided to the learner all at once. Alternatively, when the dataset is very large (essentially infinite), data is streamed to the learner one minibatch at the time, assuming that the rate at which samples are received matches the model’s processing time to learn from them.
Learning over streams of data has been studied in the machine learning domain for a long time (see Section 2 and Figure 1 for more details) with different assumptions: for instance, in online learning it is usually assumed that datapoints arrive one by one and have to be processed as soon as they are received; in continual learning, the streaming of data usually corresponds to a stream of large datasets corresponding to different tasks to solve, etc. In this paper, we define a simple yet important setting where there is a single task to solve, and where training data often comes at a slower rate than a model can process it. Moreover, it comes in relatively large batches once in a while. While poorly studied, this setting corresponds to practical applications encountered in production pipelines. For instance, it is faced by teams deploying language modeling applications (e.g., content moderation), who build models that are trained on large amounts of data like filtered versions of Common Crawl, which are dumps of the internet. However, new snapshots are available every month, as new content is generated over time. Therefore datasets keep getting bigger every few months and models need to be retrained accordingly. Similarly, visual object recognition datasets used in deployed applications are often extended every few months to include new images with their corresponding annotations.
* Authors contributed equally
Practically, there are two main approaches to integrate the information present in a new batch of data into an existing model. If a lot of computational resources are available, a new and bigger model is instantiated and trained from scratch on the union of the old training set with the new batch of data. However, since this is a computationally very intensive process, retraining is typically done only rarely, once several batches of data have been collected. We call this approach “tardy” large-scale learning, since a predictor is available only at a later time. Another option, particularly suitable when computational resources are scarce and a predictor is needed quickly, is to simply finetune the old model on the new data as it arrives. Note that, in such settings, methods from the data stream or online learning domains that are based on the idea of processing any datapoint just once are not suitable, since they have been developed for different use cases.
This trade-off is emblematic of anytime learning, a learning setting where a learner has to provide good predictions at any point in time, while improving its performance over time as more and more data is observed. From an anytime learning perspective, neither training a large model after all data is received nor finetuning on the newly added batch of data is satisfying. The former approach is a poor anytime learner because one needs to wait for a long time before obtaining a useful predictor. The latter approach is a poor anytime learner because it typically cannot leverage future batches of data very well, since the model has a fixed capacity, determined on a small portion of the overall dataset, and because the model is inherently trained on non-i.i.d. data.
In this work, we aim at exploring this accuracy versus time trade-off of anytime learning, not at the level of a single batch of data, but at the macroscale of the entire sequence of batches. This is a setting which more closely mimics practical applications, which we call anytime learning at macroscale (ALMA). In this learning setting, we assume that the time to train a model is negligible compared to the interval of time between two consecutive batches of data (and therefore we do not care about how quickly a learner adapts to a new batch), yet efficiency matters in the sense that, for the same performance, a predictor that uses less compute and memory is preferable. In summary, we are interested in a learner that i) yields high accuracy, ii) can make non-trivial predictions at any point in time, while iii) limiting its computational and memory resources.
Our first contribution is to formalize the ALMA problem and to introduce metrics to evaluate learners (§3). We consider three different axes: error rate, memory and amount of computation. By measuring these quantities against time, via an area under the curve, we account not only for the final performance but also for the whole training trajectory over the sequence of large batches of data.
Our second contribution is an extensive empirical evaluation (§5) of various models (§4) that strike different trade-offs between accuracy and time to obtain a useful predictor. In particular, we explore models that fall in between greedy finetuning and tardy large-scale learning, and investigate models that leverage batches of data at an intermediate rate. We also consider a rich family of modular architectures, from plain ensembling methods to hierarchical mixture of experts, and several variants thereof, including those that have access to a replay buffer storing all previous batches of data and those that can grow capacity over time.
Our findings across three different benchmarks, including a large scale language modeling one, can be summarized as follows. a) An intermediate waiting time offers the best trade-off between accuracy and time to yield such a predictor. However, b) there is no single approach striking the best trade-off between performance and efficiency for various model sizes. c) Retraining from scratch a big model does offer the lowest error rate but sacrifices efficiency. d) Interestingly, large models are the most statistically efficient even when considering small datasets (like MNIST) and fully
connected networks. e) While approaches to grow capacity exhibit gains in terms of computational efficiency, these do not even outperform simple ensembles. Overall, our work points at several research opportunities to improve modeling in a streaming setting of broad practical relevance, rather than pointing at any particular solution. We have also released code to reproduce our experiments and the entire platform implementing ALMA.
2 RELATED WORK
ALMA relates to several other learning frameworks: offline learning, continual learning, online learning and transfer learning, as illustrated in Figure 1. i) It shares the same assumptions of classical empirical risk minimization (ERM) (Vapnik, 1998) at the level of each batch of data. However, it overall violates ERM's assumption of i.i.d. observations, because data points come in a stream of data chunks. ii) Because of this, ALMA relates to continual learning (CL) (Ring, 1994; Thrun, 1994; Ring, 1997; Thrun, 1998), with the key difference that the data distribution across batches (or tasks) is assumed stationary in ALMA. Therefore, ALMA can be seen as a special case of CL with a single task to solve. iii) ALMA relates also to online learning (Bottou, 1998) since it assumes that data are coming in a stream, an assumption also made in the concept drift literature (Lu et al., 2018). However, in online learning examples are streamed one at a time (or sampled at random from a large dataset), while in ALMA the learner receives large batches of data sequentially. In ALMA, received data can be processed multiple times, as opposed to the online learning setting, which usually assumes that any new datapoint has to be processed as soon as it is available and will not be reused in future updates. iv) Finally, ALMA relates more broadly to transfer learning (Pan & Yang, 2010), as the problem of adapting to a new batch of data can be interpreted as leveraging knowledge acquired on previous batches to more efficiently learn from the new batch of data.
Of course, ALMA relates to anytime learning (Grefenstette & Ramsey, 1992; Ramsey & Grefenstette, 1994), which has been recently applied to compare various autoML frameworks (Liu et al., 2020). However, in this work we are not interested in assessing the anytime learning ability at the level of each chunk of data, but only at a coarser granularity, at the level of the entire stream of chunks. Inspired by Liu et al. (2020), we consider the area under the curve of error rate against time to measure performance, but in order to account also for compute and memory budget, we add to our evaluation metrics also the area under the curve for memory and compute.
From the more theoretical side, there has been work about sub-bagging (Bühlmann & Yu, 2002) (bagging using subsets of a larger dataset) which is similar to our setting but without the sequential aspect of it. In this context, Breiman (1999) proposed a model similar to our growing ensembling (gEns), Bühlmann & Yu (2002) studied sub-bagging as a way to make the prediction of tree classifiers more robust while Zou et al. (2021) studied the consistency of the estimator in this setting. We defer to future studies the analysis of ALMA, while in this work we focus on the empirical evaluation.
Shifting the discussion to prior work on models that adjust their capacity dynamically, Waterhouse & Robinson (1995) introduced an approach to grow a hierarchical mixture of experts model (Jordan & Jacobs, 1994). This is a tree structured model where experts are at the leaves and gating functions are at non-terminal nodes. The tree determines a hierarchical partition of the input space into regions that are associated to each expert. This approach was made more efficient in later work by (Fritsch et al., 1996). In this work we consider a baseline (gMoE) that extends this prior work to hierarchical mixture of experts (Eigen et al., 2014; Denoyer & Gallinari, 2015; Lepikhin et al., 2020).
Growing architectures have also been studied in CL. For instance, Fernando et al. (2017) and Veniat et al. (2021) proposed a modular architecture that is assembled for every task, possibly reusing previously trained modules. The major difference with our work is that in our case routing is input dependent as opposed to task dependent. Yoon et al. (2018) instead proposed a method to incrementally and smoothly add hidden units. Similarly, Wen et al. (2020) proposed a heuristic approach to automatically adjust the network depth. Wang et al. (2017) considered growing both depth and width when finetuning to a new task. Liu et al. (2019a) and Wu et al. (2020) proposed approaches to grow architectures in depth and width by leveraging Taylor approximation and greedy selection. In our work, we benchmark against this last variant. None of these approaches have been applied to the ALMA setting to date.
Finally, some of our findings are built upon and extend recent empirical evaluations studying the scaling properties of language models (Kaplan et al., 2020a; Li et al., 2020b). In this study, we
confirm the conclusion that bigger models generalize better and are more statistically efficient, not only in language modeling tasks using a transformer architecture, but also in smaller scale computer vision tasks using both fully connected and convolutional architectures.
3 LEARNING SETTING
In anytime learning at macroscale (ALMA), we assume that there exists an underlying data distribution p(x, y) with input x ∈ R^D and desired label y ∈ {1, . . . , C}. Notice that extensions to regression and unsupervised learning (where y is missing) are trivial, and therefore in this work we focus on classification problems for simplicity of exposition. An important property of ALMA is that data is presented to the learner as a stream S_B of B consecutive batches of examples. Let D_i be a collection of N ≫ 0 i.i.d. samples randomly drawn from p(x, y), for i ∈ {1, . . . , B}. The stream is then defined as the ordered sequence S_B = {D_1, . . . , D_B}. We refer to each dataset D_i as a mega-batch, as it is composed of a large number of examples. Typically a learner m : R^D → {1, . . . , C} updates its parameters by processing a mini-batch of n ≪ N examples at a time from each mega-batch D_i, and by iterating several times over each mega-batch before being presented with the next mega-batch. Since the learner cannot access future mega-batches, overall the data distribution is not i.i.d., even though samples drawn from each mega-batch are i.i.d., and cross-validation is performed using a subset of the current mega-batch. A learner could decide to use previous mega-batches when learning on the current mega-batch, but this will increase its compute usage.
Finally, we assume that the time it takes a learner to update its internal parameters after having observed a mega-batch is much less than the interval between the arrival of two consecutive mega-batches. In other words, the rate at which data arrives is slower than the processing time of the model, and therefore the model could decide to iterate several times over the data at its disposal to improve its prediction accuracy.
3.1 METRICS
We evaluate learners in the ALMA setting across three axes, namely: accuracy, memory and computation. Let t be the time at which the t-th mega-batch arrives; this data can be used by the model to update its parameters or it is simply aggregated to previous mega-batches for later use.
We compute the error rate of model m at time t (after the arrival of the t-th mega-batch) and compute the area under the curve obtained by varying t from 0 to the total number of mega-batches B; the resulting cumulative error rate (CER) is:
CER = \sum_{t=0}^{B} \frac{1}{|\mathcal{D}_{Ts}|} \sum_{(x,y) \in \mathcal{D}_{Ts}} \mathbb{1}\!\left[ m(x; \theta_t) \neq y \right] \qquad (1)
where m(x; θ_t) is the model at time t equipped with parameters θ_t, D_Ts is the test set, |D_Ts| is the number of examples in the test set, and 1[m(x; θ_t) ≠ y] is one if the model prediction does not match the ground truth label and zero otherwise. The outer sum computes the discrete integral of the error rate over time. CER is small only when the error rate is small throughout the whole stream. CER is instead large for a tardy model that waits until the very last mega-batch to update the model, even though such a model may eventually obtain a very low final error rate. While not perfect, CER provides a good summary of the performance of a system across time. However, to fully capture the differences between two models, one needs a closer look at the performance across time, as illustrated for instance in Figure 2.
Similarly, we compute the cumulative memory usage and compute as:
Mem = \sum_{t=0}^{B} |\theta_t|, \qquad Comp = \sum_{t=0}^{B} O\!\left(m(\cdot\,; \theta_t)\right) \qquad (2)
where |θ_t| is the number of free parameters of the model at time t, and O(m(·; θ_t)) is the number of flops used by the model to process the t-th mega-batch. Once again, by measuring the area under the curves obtained by tracking these quantities over time, we obtain a holistic assessment of memory and compute throughout the whole stream. A model can obtain small Mem and Comp only if it maintains a small memory footprint and is computationally parsimonious throughout the entire duration of the stream.
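For concreteness, all three cumulative metrics reduce to simple discrete sums over per-mega-batch logs. The following Python sketch (the logging format and function names are our own illustration, not part of the paper's released code) computes them:

# Discrete integrals of Eqs. 1 and 2, assuming the learner logs, after each
# mega-batch t, its test error rate, its parameter count |theta_t|, and the
# flops spent during that stage.
def error_rate(model, test_set):
    # model(x) is assumed to return the predicted class label for input x.
    return sum(model(x) != y for x, y in test_set) / len(test_set)

def cumulative_metrics(error_rates, param_counts, flops_per_stage):
    cer = sum(error_rates)        # Eq. 1: area under the error-rate curve
    mem = sum(param_counts)       # Eq. 2, left: area under the model-size curve
    comp = sum(flops_per_stage)   # Eq. 2, right: area under the compute curve
    return cer, mem, comp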
Algorithm 1 Training in the ALMA setting
1: procedure TRAIN(m, w, replay, grow)  ▷ m is the model, w is the waiting time
2:     t ← 1
3:     D ← ∅
4:     while t < B do  ▷ For each stage
5:         if replay then  ▷ Acquire w mega-batches
6:             D ← D ∪ D_t ∪ ... ∪ D_{t+w−1}
7:         else
8:             D ← D_t ∪ ... ∪ D_{t+w−1}
9:         t ← t + w
10:        if grow then
11:            m.grow()  ▷ Grow the model if the model is a growing model
12:        m.train(D)  ▷ Fine-tune or retrain from scratch m on the collected dataset
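A direct Python transcription of Algorithm 1 may help fix ideas; model.grow() and model.train() stand for the abstract interface used in the pseudocode and are assumptions of this sketch, not a released API:

def train_alma(model, stream, w, replay=False, grow=False):
    """stream: list of B mega-batches; w: waiting time (in mega-batches)."""
    buffer, t = [], 0
    while t < len(stream):
        acquired = stream[t:t + w]       # wait until w mega-batches have arrived
        if replay:
            buffer.extend(acquired)      # keep all past mega-batches as well
        else:
            buffer = list(acquired)      # train only on the newly acquired data
        t += w
        if grow:
            model.grow()                 # only for growing models (gEns, gMoE, FF)
        model.train(buffer)              # fine-tune, or retrain from scratch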
4 LEARNING ALGORITHMS
In this section, we describe the methods we tested in the ALMA setting. They generally follow the learning procedure shown in Algorithm 1. At a high level, we consider two families of models: those with a monolithic architecture and those with a modular architecture (e.g., ensembling). The latter can grow over time by adding new modules to the existing set. We will start by describing fixed architectures (§4.1) and then conclude with growing architectures (§4.2). All models are also given the option to replay previous mega-batches.
4.1 FIXED ARCHITECTURES
The first family of methods trains models with a fixed architecture. These models are sequentially trained over new mega-batches and exhibit a fixed memory footprint. We consider three models:
Single Model (SM): This is a standard multi-layer neural network (e.g., a fully connected neural network or a transformer) trained by stochastic gradient descent. It can be initialized from random or from the parameters of the model trained on the previous mega-batch. The initialization choice is determined via cross-validation.
Ensemble of Models (Ens): The second approach is the simplest modular approach, consisting of an ensemble of N neural networks with the same architecture, each being trained independently on the same sequence of data. The output of the overall model at test time is the average probability distribution produced by each component1. The advantage of Ens is that training and inference can be trivially parallelized, enabling to scale up model parameters very easily. The disadvantage is that inference requires N times more compute than what is required by each component.
Uniform Mixture of Models (UMix): A potential drawback of Ens is that evaluation and training are inconsistent. UMix addresses this by training a model whose prediction is the average (in logit space) of the predictions produced by N networks. While this requires synchronization during training, now both training and evaluation use the same model.
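The difference between the two aggregation rules can be stated in a few lines of PyTorch; this is an illustrative sketch, not the authors' implementation:

import torch

def ens_predict(models, x):
    # Ens: members are trained independently; at test time their predicted
    # probability distributions are averaged.
    probs = [torch.softmax(m(x), dim=-1) for m in models]
    return torch.stack(probs).mean(dim=0)

def umix_logits(models, x):
    # UMix: the average is taken in logit space, and the training loss is
    # backpropagated through it, so training and evaluation are consistent.
    return torch.stack([m(x) for m in models]).mean(dim=0)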
4.2 GROWING ARCHITECTURES
In the previous section, the number of parameters and the architecture of the model are fixed throughout the model’s lifetime. However, as more data is observed, it is interesting to consider dynamic architectures that grow over time, because these may save compute and memory during the earlier stages of learning while providing more predictive power during the later stages. We consider three growing approaches:
1Classical bagging approaches and majority vote strategies have been also explored without significant difference.
Growing Ensemble (gEns): Like the Ens model, gEns is also a combination of neural networks trained independently. While Ens considers N networks that are, at each stage, trained over the new chunk of data, gEns replaces this step by a growing step where n neural networks are added. In our implementation, only these n neural networks are trained over the new data, while the other neural networks (trained on previous mega-batches) are kept fixed.
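A sketch of one gEns growth stage, reusing the hypothetical train() interface of Algorithm 1:

def gens_stage(ensemble, make_model, new_data, n=1):
    # Previously trained members stay frozen; only the n new networks
    # are fitted on the newly arrived mega-batch(es).
    for _ in range(n):
        member = make_model()    # freshly initialized network
        member.train(new_data)
        ensemble.append(member)
    return ensemble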
Growing Mixture of Experts (gMoE): A hierarchical mixture of experts model (MoE) is an architecture where at layer l the output representation is z_l = \sum_{j=1}^{k} g(j \mid z_{l-1})\, h(z_{l-1} \mid j), where g is the gating or routing function and h(·|j) is the j-th expert. Compared to Ens, MoE has exponentially many more components, albeit with a lot of parameter sharing. Another advantage is that by selecting only one (or a few) experts, the computational cost is independent of the number of experts, assuming the cost of gating is negligible compared to the cost of executing the experts. The main issue is that MoEs are notoriously harder to train (Eigen et al., 2014; Denoyer & Gallinari, 2015; Lepikhin et al., 2020). In this work, we consider a growing version of MoE, which we denote with gMoE, whereby experts are added over time. See Appendix A for more details.
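A minimal soft-gated MoE layer implementing the expression above could look as follows; this is an illustrative sketch (linear experts, soft gating), while the hard-gating variant with straight-through estimation is described in Appendix A:

import torch
import torch.nn as nn

class MoELayer(nn.Module):
    def __init__(self, dim, num_experts):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)                # g(.|z)
        self.experts = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(num_experts))   # h(z|j)

    def forward(self, z):
        w = torch.softmax(self.gate(z), dim=-1)                # (batch, k)
        h = torch.stack([e(z) for e in self.experts], dim=1)   # (batch, k, dim)
        return (w.unsqueeze(-1) * h).sum(dim=1)                # weighted sum z_l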
Firefly (Wu et al., 2020) (FF): FF is a method which progressively grows neural networks, jointly optimizing both the model architecture and parameters. Growth includes both a width expansion by adding new hidden units (or feature maps) as well as a depth expansion by adding new layers. Importantly, this is an example of non-modular method unlike Ens or gMoE, which is potentially more expressive but also more inefficient at inference time because there is no structured sparsity that can be leveraged to speed up computation.
5 EXPERIMENTS
In this section we first describe how standard benchmarks can be repurposed for ALMA, we then provide the details of the models we tested, and we finally conclude with an analysis of the results we obtained, aiming to understand which method attains the best trade-off between time, accuracy, compute and memory usage.
Datasets We consider a variety of datasets. The first dataset is MNIST (LeCun et al., 1998), which consists of a training set with 60,000 quasi-binary handwritten digits of size 28x28 pixels, and a test set with 10,000 examples. The second dataset is CIFAR 10 (Krizhevsky, 2009) that has a training set with 50,000 images of size 32x32 pixels belonging to 10 classes such as bird, car, horse, ship, truck, etc. The third dataset, used for our large-scale language modeling evaluation, is a portion of the collection of English language text introduced in Liu et al. (2019b), consisting of Books, Wikipedia and Common Crawl. We consider 4 (large) mega-batches for training and one additional mega-batch for evaluation, each consisting of approximately 440M words; we also hold out a validation set with approximately 0.5M words of Common Crawl for model selection. We use a byte-pair encoding (BPE) (Sennrich et al., 2016) vocabulary with 50,000 units, following Radford et al. (2019). This dataset is fairly representative of what practitioners might face when maintaining a deployed system with new data arriving every few months.
Given a dataset like any of the above, we construct a benchmark for ALMA evaluation as follows: 1) we randomly partition the training set into B mega-batches with equal number of training examples (B = 50 for MNIST and CIFAR 10, and 4 for the text dataset), 2) from each mega-batch we extract 10% of the data to build the mega-batch validation set (except for the large scale language modeling dataset where we use the provided validation set), and 3) we create a learning experience by doing one pass over the sequence of mega-batches. For each mega-batch, the learner can query as many mini-batches as desired. The learner can also decide not to train on the data of a mega-batch right away but instead to wait and accumulate data across a few consecutive mega-batches. While the learner observes data, it is also tested on the test set. This is not used for validation purposes, but only for final reporting as shown in §5.1.
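The stream construction amounts to a random partition with a per-mega-batch validation split; the sketch below assumes in-memory NumPy arrays and illustrative function names:

import numpy as np

def make_alma_stream(xs, ys, B, val_frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(xs))
    stream = []
    for chunk in np.array_split(order, B):    # B equally sized mega-batches
        n_val = int(val_frac * len(chunk))    # 10% held out per mega-batch
        val, train = chunk[:n_val], chunk[n_val:]
        stream.append(((xs[train], ys[train]), (xs[val], ys[val])))
    return stream  # list of (train, validation) mega-batches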
Models We evaluate the six approaches presented in §4, and for each of them we consider various waiting times, a version with and without replay, and at least two model sizes. For each setting, we cross-validate over several hyper-parameters such as initialization type, learning rate, stopping criterion, growth rate, etc.
Next, we describe in detail the architecture used on each dataset. Further experimental details to aid reproducibility are reported in Appendix B. On MNIST the backbone architecture of SM is a three-layer fully connected neural network with ReLU units. We considered two hidden unit sizes, namely 4 and 32 (denoted by [s] and [b], respectively), which let us simulate the regime of big data relative to the size of the network and explore how to grow architectures without worrying about overfitting. Similarly, the components of Ens, gEns and UMix are SM networks of the same size as stated above; gMoE also starts off as SM and adds modules (at the first two layers) that have the same size as the original layer of SM. When varying the waiting time, i.e., the number of mega-batches that are aggregated before initiating a new training session, we use the suffix “_w” to indicate its value.
On CIFAR 10, the methods and notations are the same as in MNIST. The only difference is that the backbone architecture is a scaled down version of a VGG19 convolutional neural network (Simonyan & Zisserman, 2015), where the number of intermediate feature maps is the same for each layer and equal to either 4 or 32. On this dataset, we also consider FF starting off from the same VGG19 backbone.
For the language modeling task SM is a Switch Transformer (Fedus et al., 2021), which is a hard mixture of experts model with an additional load balancing loss term and hard capacity constraint applied during training to prevent uneven expert utilization. Following Fedus et al. (2021), we fix the weight of the balancing loss term to 0.01 and use a capacity factor of 1, ensuring relatively uniform expert utilization. We train the model using Adam (Kingma & Ba, 2015) and tune the learning rate and dropout on the validation set. In the growing setting we copy the expert weights and gating network weights corresponding to the top-k experts incurring the largest loss, where k is typically between 2 and 4. We consider two model sizes: a base model with 6 layers and model dimension of 512, for a total of 40M shared parameters and 6M additional parameters per expert; and a large model with 12 layers and model dimension of 768, for a total of 96M shared parameters and 28M additional parameters per expert. We use an input sequence length of 512 tokens and we do not use replay given the large chunk sizes.
5.1 RESULTS
In Fig. 2 we start by analyzing learning curves on CIFAR 10 for a subset of the methods as a function of the waiting time. We then dive into analyzing all methods on both MNIST (Tab. 1) and CIFAR 10
(Tab. 2), using the optimal empirical value of waiting time. We conclude by confirming the major findings at scale on the language modeling task (Tab. 3).
Fig. 2 shows the test error rate as a function of the number of mega-batches received for both the small (left) and the large (right) model. We observe that an intermediate waiting time (in this case equal to 5) strikes the best trade-off between accuracy and time for all methods, since curves with waiting time equal to 5 have the lowest area under the curve. Greedy methods using waiting time equal to 1 achieve lower error rate only during the very beginning of the stream. Second, we observe that bigger models (SM and Ens) not only generalize better but they are also statistically more efficient: the small Ens obtained almost 35% error rate by the end of its learning experience, which is worse than the error rate obtained by the large Ens just after having observed one tenth of the entire stream. The statistical efficiency of large models does not apply only to large transformers (Kaplan et al., 2020a), but also to fully connected (we obtained similar results on MNIST) and convolutional models.
Next, using the waiting time that yielded the lowest cumulative error rate, we compare all methods discussed in §4, focusing our discussion on Tab. 2 of CIFAR 10 as same conclusions apply to MNIST as well (see Tab. 1).
First, replay lowers the CER by a relative amount of about 10% at the cost of increasing the cumulative training flops by a factor of more than 5, which is rather substantial. Notice that retraining from scratch using memory replay, as reported here in parentheses, is nowadays the dominant approach to deal with sequential datasets.
Second, Ens works better than UMix for larger models, and vice versa. We surmise that ensembling may alleviate overfitting of large models, but coordinating the components of the ensemble, as UMix does, is more effective in an underfitting regime (i.e., with small models). Ens thus looks like a good method to train large architectures without suffering from overfitting, and may be used when the complexity of the task is not known a priori.
Third, all growing approaches perform rather similarly, particularly when starting from larger backbones, although they strike slightly different trade-offs. For instance, gMoE is the most efficient at test time, while FF yields a lower error rate. Interestingly, none of the approaches that grow architectures currently manages to beat Ens in terms of error rate when starting from a large backbone, although they require substantially fewer flops at inference time. Finally, while methods derived from SM (for the same size of the initial backbone, see rows with the same color in the table) all manage to beat SM, it is also worth noting that for the same number of parameters SM is still the best performing method, unless there is overfitting. In particular, Ens with 12550 parameters achieves a CER of 2440 while SM with 11710 parameters obtains a CER of 2038 while requiring much less compute; same considerations apply also to the gMoE with 29550 parameters compared to SM with 31660 parameters. Therefore there is no single model striking a much better trade-off, and more advanced approaches do not outperform simpler methods like Ens.
The results on the large scale language modeling task reported in Tab. 3 show that bigger models perform better (the larger the number of parameters the lower the PPL for a given model class) and are also more statistically efficient (for instance the base SM_w1 attains 26.53 after seeing the whole stream, while the large SM_w1 obtains 22.47 just after seeing the first chunk of data), consistent with recent related work (Kaplan et al., 2020b; Li et al., 2020a). We also observe that Ens is a strong performer, with Ens_w1 and gEns_w1 models dominating SM models in all settings. Surprisingly, ensembles trained on distinct data chunks (gEns_w1; t1 or t3) perform no better than ensembles trained on a single data chunk (Ens_w1; t0). For instance, among Base 2-model ensembles (4@2), Ens_w1 achieves a perplexity of 26.20 using a single data chunk (t0), while gEns_w1 achieves a perplexity of 26.27 using models trained on each of the two data chunks (t1). Finally, if test time inference is a concern, then gMoE is a preferable choice since its runtime is comparable to SM.
6 CONCLUSION AND PERSPECTIVES
In this work we introduced the anytime learning at macroscale (ALMA) setting, which is an instance of anytime learning under the assumption that data is observed as a sequence of large batches. ALMA better mimics the learning scenarios faced by machine learning practitioners, who want to efficiently solve a task but from time to time receive more data to train on. We introduced metrics that enable assessment in terms of error rate, memory usage and compute throughout the entire learning experience. Equipped with these tools, we then evaluated several approaches on three different datasets, including large scale language modeling. We found that methods that update parameters at an intermediate rate tend to yield a better trade-off, and that bigger models tend to generalize better. In particular, models that grow capacity over time generalize better, particularly when the initial model is smaller, and ensembling is a very strong baseline.
A cynical interpretation of our finding that bigger models generalize better could lead the reader to conclude that everything can be solved by starting with a big model. However, as data is added over time, so is computation. It is often the case that researchers working on large-scale learning instantiate the biggest possible model to train on their task, but a few months later they can manage to launch even bigger models thanks to compute and engineering advances. How can the larger model leverage what has been learned from the previously trained model? Is there a modeling choice that strikes a better trade-off than retraining from scratch? More generally, what are good approaches to extract information from a new batch of data and integrate it into an existing model? While we do not provide a full answer to these questions, we do offer a framework to study them and several strong baseline approaches to compare against and build upon.
7 REPRODUCIBILITY STATEMENT
We have made several efforts to ensure that the results provided in the paper are fully reproducible. We first provide a clean codebase from which all the computer vision results in this paper are generated. In this codebase, one can find the exact hyperparameters used for each method in the provided configurations. We have attached a readme to the code in order to guide users running our code. For the LM experiments, as stated in the appendix, we use fairseq (Ott et al., 2019) and provide the required information to replicate our results.
APPENDIX
A GROWING MIXTURES OF EXPERTS
Growing Mixture of Experts (gMoE): A mixture of expert (MoE) is a sequence of non-linear functions, each of which is potentially a mixture of experts (omitting the dependence on parameters):
m(x) = f^{l}\!\left(f^{l-1}(\dots f^{1}(x) \dots)\right), \quad \text{with} \quad f^{i}(z) = \sum_{j=1}^{k} g^{i}(j \mid z)\, h^{i}(z \mid j)
where g^i is the gating function at the i-th layer, which outputs a categorical distribution over the number of experts, and h^i(·|j) is the j-th expert at layer i. The gating function can be “soft”, in which case it outputs non-zero weights for each expert via a softmax, or “hard”, in which case only one expert is selected through multinomial sampling (and learned through the straight-through estimator in this paper (Bengio et al., 2013)). At test time in the “hard” case, we select the expert with the largest probability. The interest of mixtures of experts is that they have high expressivity, and experts can easily be added to increase the capacity of the model. The gMoE model is the growing version where, at each stage, as illustrated in Fig. 3, new experts are added at each layer – details about the precise expansion process are given below.
The key design considerations are: when to grow, what to grow and how to grow. Here, we will refer to our default setting which favors simplicity, unless otherwise specified.
A growth step is triggered at each stage, ensuring a linear growth over time. We grow by adding one expert at each layer, making sure that all experts within a layer have the same architecture albeit with different parameters. In order to grow, we look at which expert has the largest associated cumulative loss; we call this the losing expert. The cumulative loss is defined as the sum of the losses of the validation examples that have been routed through a particular expert; each expert thus has an associated cumulative loss value. The rationale is to identify, at each layer, the expert responsible for the largest contribution to the total loss.
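Selecting the losing expert is a bookkeeping exercise over the validation set; a minimal sketch, assuming the router's per-example decisions have been recorded:

def losing_expert(val_losses, routed_expert_ids, num_experts):
    totals = [0.0] * num_experts
    for loss, j in zip(val_losses, routed_expert_ids):
        totals[j] += loss                      # cumulative loss of expert j
    return max(range(num_experts), key=totals.__getitem__)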
To avoid a drop in the loss function and to preserve differentiability when splitting an expert, we propose a tree-based approach where the losing expert is split into two experts with exactly the same parameters, as illustrated in Fig. 3: two child leaves are derived, and we instantiate a new gate for the children which decides whether an input example routed to the old expert should now go to the left or the right child. The parameters of the new gate are initialized at random, while the parameters of the new experts are exact copies of those of the losing expert that we split.
More formally, if s is the losing expert, then the term g^i(s|z) h^i(z|s) is replaced by:

\sum_{k=1}^{2} g^{i}(s \mid z)\, g^{i}(k \mid z, s)\, h^{i}(z \mid s, k) \qquad (3)
where g^i(k|z, s) is the newly introduced gate, and z is the input of the gating and experts. Over time, the gating function learns to partition its input space into a binary tree (if we start from a single expert), and the gating value of an expert is the product of the gating probabilities on the path from the root to the leaf expert. Both the gating tree structure and the particular initialization scheme guarantee that the growth step is smooth and fully differentiable; in particular, the loss before and after the growth step is the same.
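The split itself can be implemented by deep-copying the losing expert and attaching a randomly initialized binary gate; since both children start identical, the layer's output, and hence the loss, is unchanged at the instant of the split. A PyTorch sketch under these assumptions:

import copy
import torch.nn as nn

def split_expert(expert, dim):
    left = copy.deepcopy(expert)    # both children are exact copies of the parent
    right = copy.deepcopy(expert)
    child_gate = nn.Linear(dim, 2)  # new gate g^i(k|z, s), initialized at random
    return left, right, child_gate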
If we consider each path in the MoE model to be a different model, then with L layers of k MoE components each, there are k^L possible paths through the MoE model; hence the number of paths grows exponentially with the number of layers. One can think of this as an ensemble with exponentially many components, which is still tractable because the components share parameters.
Algorithm 2 gMoE
1: k: number of mega-batches to aggregate
2: D = ∅
3: function TRAIN(D_i, i)
4:     D += D_i
5:     if i mod k == 0 then
6:         Extract D_VAL and D_TR from D
7:         while m is not converged do
8:             (x, y) ∼ D_TR  ▷ In practice, sample mini-batches.
9:             m.update(x, y)
10:        D = ∅
11:        m.grow(D_VAL)  ▷ Growth step can be done at a different rate too.
12: function GROW(D_VAL)
13:     for each layer in the network do
14:         Let i be the losing expert on D_VAL, i.e., the expert incurring the largest cumulative loss.
15:         Turn the corresponding gating output into an internal node and derive 2 gate children.
16:         Initialize the new experts by copying the parameters from the old parent expert.
17:         Initialize the new gating between the two siblings at random.
B HYPER-PARAMETER SETTINGS
B.1 COMPUTER VISION EXPERIMENTS
For each mega-batch received, we keep 10% of the data to perform cross-validation. All experiments are run on a single 16GB Quadro GP100 GPU. We apply data normalization for each dataset considered. A training minibatch size of 128 is used. UMix and Ens models have N = 5 in all experiments. For gEns, we train n = 1 new model at every mega-batch, so the total number of models depends on the number of mega-batches. For Firefly we use a growth rate of 0.25, meaning that at every growth phase we add approximately a quarter of the initial number of parameters.
B.1.1 MNIST
Models are trained for 100 epochs, and we report results with soft gating. We use the AdaDelta optimizer (Zeiler, 2012) with a default learning rate of 1. We use an MLP with 2 hidden layers of varying width (e.g., 4, 8, or 32 neurons).
B.1.2 CIFAR-10
Models are trained for 200 epochs, as this was shown to be long enough to allow the model to converge with a learning rate of 0.01. We use Stochastic Gradient Descent with a momentum value of 0.9 and weight decay of 1 × 10−4. During training, we apply random horizontal flips and select random image crops with padding of 4 pixels. For the architecture, we use the same reduced VGG with batch normalization as prescribed in Wu et al. (2020). All layers are initialized with the same number of channels (e.g., 4, 8, or 32 channels). For the Firefly experiments, we keep all the Firefly-specific hyperparameters at the default values suggested in the authors' public codebase. We make one exception to this, namely we adapt the growth ratio to result in linear (rather than exponential) growth.
B.2 LANGUAGE MODELING EXPERIMENTS
All the language models are trained using fairseq (Ott et al., 2019) with a maximum of eight 32GB GPUs (NVIDIA V100), optimized with Adam (Kingma & Ba, 2014) using β1 = 0.9, β2 = 0.98,
ε = 1e-8. The learning rate is warmed up over the first several hundred updates (between 500 and 4000) and then linearly decayed to 0 over the remaining updates, with a peak value tuned between 2e-4 and 5e-3. Models are trained for up to 120,000 updates with a local batch size of 8 sequences per GPU, with gradient accumulation as needed to achieve a total batch size of 192 sequences; each sequence has 512 tokens. We fix the Switch Transformer balancing loss term to 0.01 and use a capacity factor of 1, following Fedus et al. (2021).
C ADDITIONAL COMPUTER VISION RESULTS
In this section we show the impact of several variants of our framework. Namely, we report results for (a) a varying number of mega-batches, (b) whether to use preemption or not, and (c) whether to initialize from scratch or simply finetuning when replay is performed.
C.1 CIFAR
In the following results, we vary the number of mega-batches. Below you can find results for MB = 20.
C.1.1 DIFFERENT MBS
C.1.2 PREEMPTED RESULTS
We also consider the use of a patience term when training the model. When the validation accuracy has not improved over 25 consecutive epochs, we stop training for the given learning phase. As expected, we observe gains in compute efficiency, with a small loss in performance.
C.1.3 INITIALIZING FROM SCRATCH
Below we show results comparing the performance of re-training models from scratch on all the data seen so far vs. simply finetuning the current model(s) on all the data. Main numbers are finetuned models; numbers in parentheses are models trained from scratch.
Table 8: CIFAR-10 MB = 10 results with Replay. Numbers in () are models (re)initialized from scratch at the start of a new MB
C.2 MNIST | 1. What is the focus of the paper regarding batch learning and its significance in applied ML/AI problems?
2. What are the strengths and weaknesses of the proposed approach in terms of computational cost, model size, and error rate?
3. How does the reviewer assess the relevance and practicality of the cumulative error rate in the paper's context?
4. What are the concerns and potential strategies for evaluation in non-iid data settings?
5. How does the reviewer evaluate the scalability of the problem and its separation from smaller-scale problems?
6. Are there any underlying scaling laws that affect the performance of different learning strategies? | Summary Of The Paper
Review | Summary Of The Paper
The authors consider a batch learning problem, in which large batches of data arrive in series. They explore the performance of several types of algorithms in terms of their computational cost, model size, and error rate.
Review
The problem presented by the authors is relevant to applied ML/AI problems, which are always a work in progress. Further, improving the efficiency of learning is desirable. So, the problem is reasonably well motivated.
I'm skeptical about the value of the cumulative error rate. From a practical point of view, a data engineer might be concerned with the questions: how good is the model I have now? What would be the impact of further data collection? If one has collected a set of batches, the performance of prior models is not terribly relevant.
The non-iid nature of the data is mentioned, and it is mentioned that cross-validation is carried out only on the current batch. These concepts could be explored more thoroughly. What are the issues with evaluation in this setting? Should I hold out a portion of every batch to use for evaluation? What other evaluation strategies are possible? What are their strengths and weaknesses?
In continuous streaming settings, there may be distribution shift over time. That is not addressed in this work.
The authors make a big point about the scale of the problem. Obviously this makes naive approaches less appealing. But how does the problem scale? What really separates (if anything) the macroscale problem from more mundane-sized problems? Are there underlying scaling laws at work that shift the performance of each learning strategy?
ICLR | Title
On Anytime Learning at Macroscale
Abstract
Classical machine learning frameworks assume access to a possibly large dataset in order to train a predictive model. In many practical applications, however, data does not arrive all at once, but in large batches over time. This creates a natural trade-off between accuracy of a model and time to obtain such a model. A greedy predictor could produce non-trivial predictions by immediately training on batches as soon as these become available, but it may also make sub-optimal use of future data. On the other hand, a tardy predictor could wait for a long time to aggregate several batches into a larger dataset, but ultimately deliver a much better performance. In this work, we consider such a streaming learning setting, which we dub anytime learning at macroscale (ALMA). It is an instance of anytime learning applied not at the level of a single chunk of data, but at the level of the entire sequence of large batches. We first formalize this learning setting, we then introduce metrics to assess how well learners perform on the given task for a given memory and compute budget, and finally we test about thirty baseline approaches on three standard benchmarks repurposed for anytime learning at macroscale. Our findings indicate that no model strikes the best trade-off across the board. While replay-based methods attain the lowest error rate, they also incur a 5 to 10 times increase in compute. Approaches that grow capacity over time do offer better scaling in terms of training flops, but they also underperform simpler ensembling methods in terms of error rate. Overall, ALMA offers both a good abstraction of the typical learning setting faced every day by practitioners, and a set of unsolved modeling problems for those interested in efficient learning of dynamic models.
1 INTRODUCTION
Empirical risk minimization (Vapnik, 1998) is the dominant framework to formalize the learning process of a supervised task, and it has been critical to the success of large scale training of deep learning systems on a wide variety of applications. Within this framework, training data is assumed to be provided to the learner all at once. Alternatively, when the dataset is very large (essentially infinite), data is streamed to the learner one minibatch at the time, assuming that the rate at which samples are received matches the model’s processing time to learn from them.
Learning over streams of data has been studied in the machine learning domain for a long time (see Section 2 and Figure 1 for more details) with different assumptions: for instance, in online learning it is usually assumed that datapoints arrive one by one and have to be processed as soon as they are received; in continual learning, the streaming of data usually corresponds to a stream of large datasets corresponding to different tasks to solve, etc. In this paper, we define a simple yet important setting where there is a single task to solve, and where training data often comes at a slower rate than a model can process it. Moreover, it comes in relatively large batches once in a while. While poorly studied, this setting corresponds to practical applications encountered in production pipelines. For instance, it is faced by teams deploying language modeling applications (e.g., content moderation), who build models that are trained on large amounts of data like filtered versions of Common Crawl, which are dumps of the internet. However, new snapshots are available every month, as new content is generated over time. Therefore datasets keep getting bigger every few months and models need to be retrained accordingly. Similarly, visual object recognition datasets used in deployed applications are often extended every few months to include new images with their corresponding annotations.
* Authors contributed equally
Practically, there are two main approaches to integrate the information present in a new batch of data into an existing model. If a lot of computational resources are available, a new and bigger model is instantiated and trained from scratch on the union of the old training set with the new batch of data. However, since this is a computationally very intensive process, retraining is typically done only rarely, once several batches of data have been collected. We call this approach “tardy” large-scale learning, since a predictor is available only at a later time. Another option, particularly suitable when computational resources are scarce and a predictor is needed quickly, is to simply finetune the old model on the new data as it arrives. Note that, in such settings, methods from the data stream or online learning domains that are based on the idea of processing any datapoint just once are not suitable, since they have been developed for different use cases.
This trade-off is emblematic of anytime learning, a learning setting where a learner has to provide good predictions at any point in time, while improving its performance over time as more and more data is observed. From an anytime learning perspective, neither training a large model after all data is received nor finetuning on the newly added batch of data is satisfying. The former approach is a poor anytime learner because one needs to wait for a long time before obtaining a useful predictor. The latter approach is a poor anytime learner because it typically cannot leverage future batches of data very well, since the model has a fixed capacity, determined on a small portion of the overall dataset, and because the model is inherently trained on non-i.i.d. data.
In this work, we aim at exploring this accuracy versus time trade-off of anytime learning, not at the level of a single batch of data, but at the macroscale of the entire sequence of batches. This is a setting which more closely mimics practical applications, which we call anytime learning at macroscale (ALMA). In this learning setting, we assume that the time to train a model is negligible compared to the interval of time between two consecutive batches of data (and therefore we do not care about how quickly a learner adapts to a new batch), yet efficiency matters in the sense that, for the same performance, a predictor that uses less compute and memory is preferable. In summary, we are interested in a learner that i) yields high accuracy, ii) can make non-trivial predictions at any point in time, while iii) limiting its computational and memory resources.
Our first contribution is to formalize the ALMA problem and to introduce metrics to evaluate learners (§3). We consider three different axes: error rate, memory and amount of computation. By measuring these quantities against time, via an area under the curve, we account not only for the final performance but also for the whole training trajectory over the sequence of large batches of data.
Our second contribution is an extensive empirical evaluation (§5) of various models (§4) that strike different trade-offs between accuracy and time to obtain a useful predictor. In particular, we explore models that fall in between greedy finetuning and tardy large-scale learning, and investigate models that leverage batches of data at an intermediate rate. We also consider a rich family of modular architectures, from plain ensembling methods to hierarchical mixture of experts, and several variants thereof, including those that have access to a replay buffer storing all previous batches of data and those that can grow capacity over time.
Our findings across three different benchmarks, including a large scale language modeling one, can be summarized as follows. a) An intermediate waiting time offers the best trade-off between accuracy and time to yield such a predictor. However, b) there is no single approach striking the best trade-off between performance and efficiency for various model sizes. c) Retraining from scratch a big model does offer the lowest error rate but sacrifices efficiency. d) Interestingly, large models are the most statistically efficient even when considering small datasets (like MNIST) and fully
connected networks. e) While approaches to grow capacity exhibit gains in terms of computational efficiency, these do not even outperform simple ensembles. Overall, our work points at several research opportunities to improve modeling in a streaming setting of broad practical relevance, rather than pointing at any particular solution. We have also released code to reproduce our experiments and the entire platform implementing ALMA.
2 RELATED WORK
ALMA relates to several other learning frameworks: offline learning, continual learning, online learning and transfer learning, as illustrated in Figure 1. i) It shares the same assumptions of classical empirical risk minimization (ERM) (Vapnik, 1998) at the level of each batch of data. However, it overall violates ERM's assumption of i.i.d. observations, because data points come in a stream of data chunks. ii) Because of this, ALMA relates to continual learning (CL) (Ring, 1994; Thrun, 1994; Ring, 1997; Thrun, 1998), with the key difference that the data distribution across batches (or tasks) is assumed stationary in ALMA. Therefore, ALMA can be seen as a special case of CL with a single task to solve. iii) ALMA relates also to online learning (Bottou, 1998) since it assumes that data are coming in a stream, an assumption also made in the concept drift literature (Lu et al., 2018). However, in online learning examples are streamed one at a time (or sampled at random from a large dataset), while in ALMA the learner receives large batches of data sequentially. In ALMA, received data can be processed multiple times, as opposed to the online learning setting, which usually assumes that any new datapoint has to be processed as soon as it is available and will not be reused in future updates. iv) Finally, ALMA relates more broadly to transfer learning (Pan & Yang, 2010), as the problem of adapting to a new batch of data can be interpreted as leveraging knowledge acquired on previous batches to more efficiently learn from the new batch of data.
Of course, ALMA relates to anytime learning (Grefenstette & Ramsey, 1992; Ramsey & Grefenstette, 1994), which has been recently applied to compare various autoML frameworks (Liu et al., 2020). However, in this work we are not interested in assessing the anytime learning ability at the level of each chunk of data, but only at a coarser granularity, at the level of the entire stream of chunks. Inspired by Liu et al. (2020), we consider the area under the curve of error rate against time to measure performance, but in order to account also for compute and memory budget, we add to our evaluation metrics also the area under the curve for memory and compute.
From the more theoretical side, there has been work about sub-bagging (Bühlmann & Yu, 2002) (bagging using subsets of a larger dataset) which is similar to our setting but without the sequential aspect of it. In this context, Breiman (1999) proposed a model similar to our growing ensembling (gEns), Bühlmann & Yu (2002) studied sub-bagging as a way to make the prediction of tree classifiers more robust while Zou et al. (2021) studied the consistency of the estimator in this setting. We defer to future studies the analysis of ALMA, while in this work we focus on the empirical evaluation.
Shifting the discussion to prior work on models that adjust their capacity dynamically, Waterhouse & Robinson (1995) introduced an approach to grow a hierarchical mixture of experts model (Jordan & Jacobs, 1994). This is a tree structured model where experts are at the leaves and gating functions are at non-terminal nodes. The tree determines a hierarchical partition of the input space into regions that are associated to each expert. This approach was made more efficient in later work by (Fritsch et al., 1996). In this work we consider a baseline (gMoE) that extends this prior work to hierarchical mixture of experts (Eigen et al., 2014; Denoyer & Gallinari, 2015; Lepikhin et al., 2020).
Growing architectures have also been studied in CL. For instance, Fernando et al. (2017) and Veniat et al. (2021) proposed a modular architecture that is assembled for every task, possibly reusing previously trained modules. The major difference with our work is that in our case routing is input dependent as opposed to task dependent. Yoon et al. (2018) instead proposed a method to incrementally and smoothly add hidden units. Similarly, Wen et al. (2020) proposed a heuristic approach to automatically adjust the network depth. Wang et al. (2017) considered growing both depth and width when finetuning to a new task. Liu et al. (2019a) and Wu et al. (2020) proposed approaches to grow architectures in depth and width by leveraging Taylor approximation and greedy selection. In our work, we benchmark against this last variant. None of these approaches have been applied to the ALMA setting to date.
Finally, some of our findings are built upon and extend recent empirical evaluations studying the scaling properties of language models (Kaplan et al., 2020a; Li et al., 2020b). In this study, we
confirm the conclusion that bigger models generalize better and are more statistically efficient, not only in language modeling tasks using a transformer architecture, but also in smaller scale computer vision tasks using both fully connected and convolutional architectures.
3 LEARNING SETTING
In anytime learning at macroscale (ALMA), we assume that there exists an underlying data distribution p(x, y) with input x ∈ R^D and desired label y ∈ {1, . . . , C}. Notice that extensions to regression and unsupervised learning (where y is missing) are trivial, and therefore in this work we focus on classification problems for simplicity of exposition. An important property of ALMA is that data is presented to the learner as a stream S_B of B consecutive batches of examples. Let D_i be a collection of N ≫ 0 i.i.d. samples randomly drawn from p(x, y), for i ∈ {1, . . . , B}. The stream is then defined as the ordered sequence S_B = {D_1, . . . , D_B}. We refer to each dataset D_i as a mega-batch, as it is composed of a large number of examples. Typically a learner m : R^D → {1, . . . , C} updates its parameters by processing a mini-batch of n ≪ N examples at a time from each mega-batch D_i, and by iterating several times over each mega-batch before being presented with the next mega-batch. Since the learner cannot access future mega-batches, overall the data distribution is not i.i.d., even though samples drawn from each mega-batch are i.i.d., and cross-validation is performed using a subset of the current mega-batch. A learner could decide to use previous mega-batches when learning on the current mega-batch, but this will increase its compute usage.
Finally, we assume that the time it takes a learner to update its internal parameters after having observed a mega-batch is much smaller than the interval between the arrival of two consecutive mega-batches. In other words, data arrives more slowly than the model can process it, and therefore the model can decide to iterate several times over the data at its disposal to improve its prediction accuracy.
3.1 METRICS
We evaluate learners in the ALMA setting along three axes, namely: accuracy, memory and computation. Let t be the time at which the t-th mega-batch arrives; this data can be used by the model to update its parameters, or simply aggregated with previous mega-batches for later use.
We compute the error rate of model m at time t (after the arrival of the t-th mega-batch) and compute the area under the curve obtained by varying t from 0 to the total number of mega-batches B; the resulting cumulative error rate (CER) is:
\mathrm{CER} = \sum_{t=0}^{B} \frac{1}{|\mathcal{D}_{Ts}|} \sum_{(x,y) \in \mathcal{D}_{Ts}} \mathbb{1}[m(x; \theta_t) \neq y] \quad (1)
where m(x; θ_t) is the model at time t equipped with parameters θ_t, D_Ts is the test set, |D_Ts| is the number of examples in the test set, and 1[m(x; θ_t) ≠ y] is one if the model prediction does not match the ground-truth label and zero otherwise. The outer sum computes the discrete integral of the error rate over time. CER is small only when the error rate is small throughout the whole stream; it is instead large for a tardy model that waits until the very last mega-batch to update itself, even if this eventually yields a very low final error rate. While not perfect, CER provides a good summary of the performance of a system across time. To fully capture the differences between two models, however, a closer look at performance over time is needed, as illustrated for instance in Figure 2.
Similarly, we compute the cumulative memory usage and compute cost as:
\mathrm{Mem} = \sum_{t=0}^{B} |\theta_t|, \qquad \mathrm{Comp} = \sum_{t=0}^{B} O(m(\cdot; \theta_t)) \quad (2)
where |θ_t| is the number of free parameters of the model at time t, and O(m(·; θ_t)) is the number of flops used by the model to process the t-th mega-batch. Once again, by measuring the area under the curves obtained by tracking these quantities over time, we obtain a holistic assessment of memory and compute throughout the whole stream. A model can obtain small Mem and Comp only if it consumes little memory and is computationally parsimonious throughout the entire duration of the stream.
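As a sketch, all three cumulative metrics can be accumulated from per-stage measurements; the model interface (`predict`, `num_params`) and the `flops_per_stage` log are assumed here for illustration, not prescribed by the paper.

```python
def cumulative_metrics(models, test_set, flops_per_stage):
    """Discrete integrals of error rate, memory and compute over the stream.

    `models[t]` is the model right after mega-batch t; `flops_per_stage[t]` is
    the number of flops spent processing mega-batch t.
    """
    cer = mem = comp = 0.0
    for t, m in enumerate(models):
        errors = sum(m.predict(x) != y for x, y in test_set)
        cer += errors / len(test_set)    # Eq. (1): error rate at time t
        mem += m.num_params()            # Eq. (2): |theta_t|
        comp += flops_per_stage[t]       # Eq. (2): O(m(.; theta_t))
    return cer, mem, comp
```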
Algorithm 1 Training in the ALMA setting
 1: procedure TRAIN(m, w, replay, grow)        ▷ m is the model, w is the waiting time
 2:     t ← 1
 3:     D ← ∅
 4:     while t < B do                         ▷ For each stage
 5:         if replay then                     ▷ Acquire w mega-batches
 6:             D ← D ∪ D_t ∪ ... ∪ D_{t+w−1}
 7:         else
 8:             D ← D_t ∪ ... ∪ D_{t+w−1}
 9:         t ← t + w
10:         if grow then
11:             m.grow()                       ▷ Grow the model if it is a growing model
12:         m.train(D)                         ▷ Fine-tune or retrain m from scratch on the collected dataset
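A direct Python transcription of Algorithm 1 could look as follows; `model.grow()` and `model.train()` stand in for the method-specific growth and fine-tuning routines.

```python
def train_alma(model, stream, w, replay=False, grow=False):
    """Train `model` on a stream of mega-batches with waiting time `w`.

    With `replay`, all mega-batches seen so far are kept; otherwise only the
    w most recent ones are used at each stage.
    """
    data = []
    for t in range(0, len(stream), w):
        stage = [ex for D in stream[t:t + w] for ex in D]  # next w mega-batches
        data = data + stage if replay else stage
        if grow:
            model.grow()        # only for growing architectures
        model.train(data)       # fine-tune, or retrain from scratch
    return model
```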
4 LEARNING ALGORITHMS
In this section, we describe the methods we tested in the ALMA setting. They generally follow the learning procedure shown in Algorithm 1. At a high level, we consider two families of models: those with a monolithic architecture and those with a modular architecture (e.g., ensembles). The latter can grow over time by adding new modules to the existing set. We start by describing fixed architectures (§4.1) and then conclude with growing architectures (§4.2). All models are also given the option to replay previous mega-batches.
4.1 FIXED ARCHITECTURES
The first family of methods trains models with a fixed architecture. These models are sequentially trained over new mega-batches and exhibit a fixed memory footprint. We consider three models:
Single Model (SM): This is a standard multi-layer neural network (e.g., a fully connected network or a transformer) trained by stochastic gradient descent. It can be initialized at random or from the parameters of the model trained on the previous mega-batch; the initialization choice is determined via cross-validation.
Ensemble of Models (Ens): The second approach is the simplest modular approach, consisting of an ensemble of N neural networks with the same architecture, each trained independently on the same sequence of data. The output of the overall model at test time is the average of the probability distributions produced by each component1. The advantage of Ens is that training and inference can be trivially parallelized, making it easy to scale up model parameters. The disadvantage is that inference requires N times more compute than a single component.
Uniform Mixture of Models (UMix): A potential drawback of Ens is that training and evaluation are inconsistent: each component is trained in isolation, yet at test time predictions are averaged. UMix addresses this by training a model whose prediction is the average (in logit space) of the predictions produced by N networks. While this requires synchronization during training, both training and evaluation now use the same model.
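The difference between Ens and UMix boils down to where the averaging happens; a sketch, assuming PyTorch-style components that return logits:

```python
import torch
import torch.nn.functional as F

def ens_predict(models, x):
    # Ens: components are trained independently; probabilities are averaged
    # only at test time.
    probs = torch.stack([F.softmax(m(x), dim=-1) for m in models])
    return probs.mean(dim=0)

def umix_logits(models, x):
    # UMix: average in logit space; the same averaged model is used for the
    # training loss, so training and evaluation are consistent.
    return torch.stack([m(x) for m in models]).mean(dim=0)
```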
4.2 GROWING ARCHITECTURES
In the previous section, the number of parameters and the architecture of the model are fixed throughout the model’s lifetime. However, as more data is observed, it is interesting to consider dynamic architectures that grow over time, because these may save compute and memory during the earlier stages of learning while providing more predictive power during the later stages. We consider three growing approaches:
1Classical bagging approaches and majority-vote strategies have also been explored, without significant differences.
Growing Ensemble (gEns): Like Ens, gEns is a combination of neural networks trained independently. While Ens trains all N networks on each new chunk of data at every stage, gEns replaces this step with a growing step in which n new networks are added. In our implementation, only these n networks are trained on the new data, while the networks trained on previous mega-batches are kept fixed.
Growing Mixture of Experts (gMoE): A hierarchical mixture of experts (MoE) is an architecture where the output representation at layer l is z_l = \sum_{j=1}^{k} g(j|z_{l-1}) h(z_{l-1}|j), where g is the gating (routing) function and h(·|j) is the j-th expert. Compared to Ens, MoE has exponentially many more components, albeit with a lot of parameter sharing. Another advantage is that by selecting only one (or a few) experts, the computational cost is independent of the number of experts, assuming the cost of gating is negligible compared to the cost of executing the experts. The main issue is that MoEs are notoriously harder to train (Eigen et al., 2014; Denoyer & Gallinari, 2015; Lepikhin et al., 2020). In this work, we consider a growing version of MoE, denoted gMoE, whereby experts are added over time. See Appendix A for more details.
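For concreteness, a minimal soft-gated MoE layer implementing z_l = Σ_j g(j|z_{l−1}) h(z_{l−1}|j); the expert architecture and sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftMoELayer(nn.Module):
    """One mixture-of-experts layer with a soft (dense) gating function."""

    def __init__(self, dim, num_experts):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)      # g(.|z)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
            for _ in range(num_experts)              # h(.|j)
        )

    def forward(self, z):
        g = F.softmax(self.gate(z), dim=-1)                    # (batch, k)
        h = torch.stack([e(z) for e in self.experts], dim=-1)  # (batch, dim, k)
        return torch.einsum("bdk,bk->bd", h, g)                # sum_j g_j * h_j
```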
Firefly (Wu et al., 2020) (FF): FF progressively grows neural networks, jointly optimizing both the model architecture and its parameters. Growth includes both a width expansion, by adding new hidden units (or feature maps), and a depth expansion, by adding new layers. Importantly, this is an example of a non-modular method, unlike Ens or gMoE: it is potentially more expressive but also less efficient at inference time, because there is no structured sparsity that can be leveraged to speed up computation.
5 EXPERIMENTS
In this section we first describe how standard benchmarks can be repurposed for ALMA, we then provide the details of the models we tested, and we finally conclude with an analysis of the results we obtained, aiming to understand which method attains the best trade-off between time, accuracy, compute and memory usage.
Datasets We consider a variety of datasets. The first dataset is MNIST (LeCun et al., 1998), which consists of a training set with 60,000 quasi-binary handwritten digits of size 28x28 pixels, and a test set with 10,000 examples. The second dataset is CIFAR 10 (Krizhevsky, 2009) that has a training set with 50,000 images of size 32x32 pixels belonging to 10 classes such as bird, car, horse, ship, truck, etc. The third dataset, used for our large-scale language modeling evaluation, is a portion of the collection of English language text introduced in Liu et al. (2019b), consisting of Books, Wikipedia and Common Crawl. We consider 4 (large) mega-batches for training and one additional mega-batch for evaluation, each consisting of approximately 440M words; we also hold out a validation set with approximately 0.5M words of Common Crawl for model selection. We use a byte-pair encoding (BPE) (Sennrich et al., 2016) vocabulary with 50,000 units, following Radford et al. (2019). This dataset is fairly representative of what practitioners might face when maintaining a deployed system with new data arriving every few months.
Given a dataset like any of the above, we construct a benchmark for ALMA evaluation as follows: 1) we randomly partition the training set into B mega-batches with equal number of training examples (B = 50 for MNIST and CIFAR 10, and 4 for the text dataset), 2) from each mega-batch we extract 10% of the data to build the mega-batch validation set (except for the large scale language modeling dataset where we use the provided validation set), and 3) we create a learning experience by doing one pass over the sequence of mega-batches. For each mega-batch, the learner can query as many mini-batches as desired. The learner can also decide not to train on the data of a mega-batch right away but instead to wait and accumulate data across a few consecutive mega-batches. While the learner observes data, it is also tested on the test set. This is not used for validation purposes, but only for final reporting as shown in §5.1.
Models We evaluate the six approaches presented in §4, and for each of them we consider various waiting times, versions with and without replay, and at least two model sizes. For each setting, we cross-validate over several hyper-parameters such as initialization type, learning rate, stopping criterion, and growth rate.
Next, we describe in detail the architecture used on each dataset; further experimental details to aid reproducibility are reported in Appendix B. On MNIST the backbone architecture of SM is a three-layer fully connected neural network with ReLU units. We considered two hidden-layer sizes, namely 4 and 32 (denoted by [s] and [b], respectively), which lets us simulate a regime of big data relative to the size of the network and explore how to grow architectures without worrying about overfitting. Similarly, the components of Ens, gEns and UMix are SM networks of the same sizes as stated above; gMoE also starts off as SM and adds modules (at the first two layers) that have the same size as the original layer of SM. When varying the waiting time, i.e., the number of mega-batches that are aggregated before initiating a new training session, we use the suffix "_w" to indicate its value.
On CIFAR 10, the methods and notations are the same as in MNIST. The only difference is that the backbone architecture is a scaled down version of a VGG19 convolutional neural network (Simonyan & Zisserman, 2015), where the number of intermediate feature maps is the same for each layer and equal to either 4 or 32. On this dataset, we also consider FF starting off from the same VGG19 backbone.
For the language modeling task SM is a Switch Transformer (Fedus et al., 2021), which is a hard mixture of experts model with an additional load balancing loss term and hard capacity constraint applied during training to prevent uneven expert utilization. Following Fedus et al. (2021), we fix the weight of the balancing loss term to 0.01 and use a capacity factor of 1, ensuring relatively uniform expert utilization. We train the model using Adam (Kingma & Ba, 2015) and tune the learning rate and dropout on the validation set. In the growing setting we copy the expert weights and gating network weights corresponding to the top-k experts incurring the largest loss, where k is typically between 2 and 4. We consider two model sizes: a base model with 6 layers and model dimension of 512, for a total of 40M shared parameters and 6M additional parameters per expert; and a large model with 12 layers and model dimension of 768, for a total of 96M shared parameters and 28M additional parameters per expert. We use an input sequence length of 512 tokens and we do not use replay given the large chunk sizes.
5.1 RESULTS
In Fig. 2 we start by analyzing learning curves on CIFAR 10 for a subset of the methods as a function of the waiting time. We then dive into analyzing all methods on both MNIST (Tab. 1) and CIFAR 10
(Tab. 2), using the optimal empirical value of waiting time. We conclude by confirming the major findings at scale on the language modeling task (Tab. 3).
Fig. 2 shows the test error rate as a function of the number of mega-batches received, for both the small (left) and the large (right) model. First, we observe that an intermediate waiting time (here equal to 5) strikes the best trade-off between accuracy and time for all methods, since the corresponding curves have the lowest area under the curve; greedy methods with waiting time 1 achieve a lower error rate only at the very beginning of the stream. Second, we observe that bigger models (SM and Ens) not only generalize better but are also statistically more efficient: the small Ens reaches almost 35% error rate by the end of its learning experience, which is worse than the error rate obtained by the large Ens after observing just one tenth of the entire stream. This statistical efficiency of large models applies not only to large transformers (Kaplan et al., 2020a), but also to fully connected (we obtained similar results on MNIST) and convolutional models.
Next, using the waiting time that yielded the lowest cumulative error rate, we compare all methods discussed in §4, focusing our discussion on Tab. 2 of CIFAR 10 as same conclusions apply to MNIST as well (see Tab. 1).
First, replay lowers the CER by a relative amount of about 10% at the cost of increasing the cumulative training flops by a factor of more than 5, which is rather substantial. Notice that retraining from scratch using memory replay, as reported here in parentheses, is nowadays the dominant approach to deal with sequential datasets.
Second, Ens works better than UMix for larger models, and vice versa. We surmise that ensembling may alleviate overfitting of large models, while coordinating the components of the ensemble, as UMix does, is more effective in an underfitting regime (i.e., with small models). Ens thus looks like a good way to train large architectures without suffering from overfitting, and may be used when the complexity of the task is not known a priori.
Third, all growing approaches perform rather similarly, particularly when starting from larger backbones, although they strike slightly different trade-offs. For instance, gMoE is the most efficient at test time, while FF yields a lower error rate. Interestingly, none of the approaches that grow architectures currently manages to beat Ens in terms of error rate when starting from a large backbone, although they require substantially fewer flops at inference time. Finally, while methods derived from SM (for the same size of the initial backbone, see rows with the same color in the table) all manage to beat SM, it is also worth noting that for the same number of parameters SM is still the best performing method, unless there is overfitting. In particular, Ens with 12550 parameters achieves a CER of 2440 while SM with 11710 parameters obtains a CER of 2038 while requiring much less compute; same considerations apply also to the gMoE with 29550 parameters compared to SM with 31660 parameters. Therefore there is no single model striking a much better trade-off, and more advanced approaches do not outperform simpler methods like Ens.
The results on the large scale language modeling task reported in Tab. 3 show that bigger models perform better (the larger the number of parameters the lower the PPL for a given model class) and are also more statistically efficient (for instance the base SM_w1 attains 26.53 after seeing the whole stream, while the large SM_w1 obtains 22.47 just after seeing the first chunk of data), consistent with recent related work (Kaplan et al., 2020b; Li et al., 2020a). We also observe that Ens is a strong performer, with Ens_w1 and gEns_w1 models dominating SM models in all settings. Surprisingly, ensembles trained on distinct data chunks (gEns_w1; t1 or t3) perform no better than ensembles trained on a single data chunk (Ens_w1; t0). For instance, among Base 2-model ensembles (4@2), Ens_w1 achieves a perplexity of 26.20 using a single data chunk (t0), while gEns_w1 achieves a perplexity of 26.27 using models trained on each of the two data chunks (t1). Finally, if test time inference is a concern, then gMoE is a preferable choice since its runtime is comparable to SM.
6 CONCLUSION AND PERSPECTIVES
In this work we introduced the anytime learning at macroscale (ALMA) setting, an instance of anytime learning under the assumption that data is observed as a sequence of large batches. ALMA better mimics the learning scenario faced by machine learning practitioners, who want to solve a task efficiently but from time to time receive more data to train on. We introduced metrics that enable assessment in terms of error rate, memory usage and compute throughout the entire learning experience. Equipped with these tools, we evaluated several approaches on three different datasets, including large-scale language modeling. We found that methods that update parameters at an intermediate rate tend to yield a better trade-off, and that bigger models tend to generalize better. In particular, models that grow capacity over time generalize better, particularly when the initial model is small, and ensembling is a very strong baseline.
A cynical interpretation of our finding that bigger models generalize better could lead the reader to conclude that everything can be solved by starting with a big model. However, as data is added over time, so is computation. It is often the case that researchers working on large-scale learning instantiate the biggest possible model to train on their task, but a few months later they manage to launch even bigger models thanks to compute and engineering advances. How can the larger model leverage what has been learned by the previously trained model? Is there a modeling choice that strikes a better trade-off than retraining from scratch? More generally, what are good approaches for extracting information from a new batch of data and integrating it into an existing model? While we do not provide a full answer to these questions, we do offer a framework to study them and several strong baselines to compare against and build upon.
7 REPRODUCIBILITY STATEMENT
We have made several efforts to ensure that the results provided in the paper are fully reproducible. We provide a clean codebase from which all the computer vision results in this paper are generated; in this codebase, one can find the exact hyper-parameters used for each method in the provided configurations, along with a readme to guide users running our code. For the LM experiments, as stated in the appendix, we use fairseq (Ott et al., 2019) and provide the information required to replicate our results.
APPENDIX
A GROWING MIXTURES OF EXPERTS
Growing Mixture of Experts (gMoE): A mixture of experts (MoE) is a sequence of non-linear functions, each of which is potentially a mixture of experts (omitting the dependence on parameters):

m(x) = f^l(f^{l-1}(\ldots f^1(x) \ldots)), \quad \text{with} \quad f^i(z) = \sum_{j=1}^{k} g^i(j|z) \, h^i(z|j)
where g^i is the gating function at the i-th layer, which outputs a categorical distribution over the experts, and h^i(·|j) is the j-th expert at layer i. The gating function can be "soft", in which case it outputs non-zero weights for each expert via a softmax, or "hard", in which case only one expert is selected through multinomial sampling (and learned through the straight-through estimator in this paper (Bengio et al., 2013)). At test time in the "hard" case, we select the expert with the largest probability. The appeal of mixtures of experts is their high expressivity, and experts can easily be added to increase the capacity of the model. gMoE is the growing version where, at each stage as illustrated in Fig. 3, new experts are added at each layer; the precise expansion process is detailed below.
The key design considerations are: when to grow, what to grow and how to grow. Here, we will refer to our default setting which favors simplicity, unless otherwise specified.
A growth step is triggered at each stage, ensuring linear growth over time. We grow by adding one expert at each layer, making sure that all experts within a layer have the same architecture, albeit with different parameters. To decide what to grow, we look at the expert with the largest associated cumulative loss; we call it the losing expert. The cumulative loss of an expert is the sum of the losses of the validation examples that were routed through it, so each expert has an associated cumulative loss value. The rationale is to identify, at each layer, the expert responsible for the largest contribution to the total loss.
To avoid a drop in the loss function and to preserve differentiability when splitting an expert, we propose a tree-based approach in which the losing expert is split into two experts with exactly the same parameters, as illustrated in Fig. 3: two children leaves are derived, and we instantiate a new gate for the children which decides whether an input example routed to the old expert should now go to the left or right child. The parameters of the new gate are initialized at random, while the parameters of the new experts are exact copies of those of the losing expert being split.
More formally, if s is the losing expert then the term g^i(s|z) h^i(z|s) is replaced by:

\sum_{k=1}^{2} g^i(s|z) \, g^i(k|z, s) \, h^i(z|s, k) \quad (3)
where g^i(k|z, s) is the newly introduced gate, and z is the input of the gating and expert functions. Over time, the gating function learns to partition its input space into a binary tree (if we start from a single expert), and the gating value of an expert is the product of the gating probabilities on the path from the root to the leaf expert. Both the tree structure of the gating and this particular initialization scheme guarantee that the growth step is smooth and fully differentiable; in particular, the loss before and after the growth step is the same.
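Because the two children start as exact copies and the new gate's outputs sum to one, the layer output at the moment of the split equals the parent's output. A sketch of such a smooth split; the `SplitNode` helper is a hypothetical name, not from the paper.

```python
import copy
import torch.nn as nn
import torch.nn.functional as F

class SplitNode(nn.Module):
    """Internal gating node created when a losing expert is split."""

    def __init__(self, parent_expert, dim):
        super().__init__()
        self.gate = nn.Linear(dim, 2)              # g^i(k|z, s), random init
        self.left = parent_expert                  # same parameters ...
        self.right = copy.deepcopy(parent_expert)  # ... exact copy

    def forward(self, z):
        w = F.softmax(self.gate(z), dim=-1)        # (batch, 2), sums to 1
        # Equals parent_expert(z) right after the split, so the loss is
        # unchanged; the random gate breaks the tie as training continues.
        return w[:, :1] * self.left(z) + w[:, 1:] * self.right(z)

# Usage: layer.experts[s] = SplitNode(layer.experts[s], dim)
```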
If we consider each path in the MoE model to be a different model, then with L layers of k MoE components each there are k^L possible paths through the model; hence the number of paths grows exponentially with the number of layers. One can think of this as an ensemble with exponentially many components, which remains tractable because components share parameters.
Algorithm 2 gMoE
 1: k: number of mega-batches to aggregate
 2: D = ∅
 3: function TRAIN(D_i, i)
 4:     D += D_i
 5:     if i mod k == 0 then
 6:         Extract D_VAL and D_TR from D
 7:         while m is not converged do
 8:             (x, y) ∼ D_TR                 ▷ In practice, sample mini-batches.
 9:             m.update(x, y)
10:         D = ∅
11:         m.grow(D_VAL)                     ▷ The growth step can be done at a different rate too.
12: function GROW(D_VAL)
13:     for each layer in the network do
14:         Let i be the losing expert on D_VAL, i.e., the expert incurring the largest cumulative loss.
15:         Turn the corresponding gating output into an internal node and derive 2 gate children.
16:         Initialize the new experts by copying the parameters of the old parent expert.
17:         Initialize the new gate between the two siblings at random.
B HYPER-PARAMETER SETTINGS
B.1 COMPUTER VISION EXPERIMENTS
For each mega-batch received, we keep 10% of the data for cross-validation. All experiments are run on a single 16GB Quadro GP100 GPU. We apply data normalization for each dataset considered and use a training mini-batch size of 128. UMix and Ens use N = 5 in all experiments. For gEns, we train one new model (n = 1) at every mega-batch, so the total number of models depends on the number of mega-batches. For Firefly we use a growth rate of 0.25, meaning that at every growth phase we add approximately a quarter of the initial number of parameters.
B.1.1 MNIST
Models are trained for 100 epochs, and we report results with soft gating. We use the AdaDelta optimizer (Zeiler, 2012) with its default learning rate of 1. We use an MLP with 2 hidden layers of varying width (e.g., 4, 8 or 32 neurons).
B.1.2 CIFAR-10
Models are trained for 200 epochs, which was shown to be long enough for the model to converge with a learning rate of 0.01. We use stochastic gradient descent with a momentum of 0.9 and a weight decay of 1 × 10⁻⁴. During training, we apply random horizontal flips and select random image crops with a padding of 4 pixels. For the architecture, we use the same reduced VGG with batch normalization as prescribed in Wu et al. (2020); all layers are initialized with the same number of channels (e.g., 4, 8, or 32). For the Firefly experiments, we keep all the Firefly-specific hyper-parameters at the default values suggested in the authors' public codebase, with one exception: we adapt the growth ratio to yield linear (rather than exponential) growth.
B.2 LANGUAGE MODELING EXPERIMENTS
All the language models are trained using fairseq (Ott et al., 2019) with a maximum of eight 32GB GPUs (NVIDIA V100), optimized with Adam (Kingma & Ba, 2014) using β₁ = 0.9, β₂ = 0.98, ε = 1e-8. The learning rate is warmed up over the first several hundred updates (between 500 and 4000) and then linearly decayed to 0 over the remaining updates, with a peak value tuned between 2e-4 and 5e-3. Models are trained for up to 120,000 updates with a local batch size of 8 sequences per GPU, with gradient accumulation as needed to achieve a total batch size of 192 sequences; each sequence has 512 tokens. We fix the Switch Transformer balancing loss weight to 0.01 and use a capacity factor of 1, following Fedus et al. (2021).
C ADDITIONAL COMPUTER VISION RESULTS
In this section we show the impact of several variants of our framework. Namely, we report results for (a) a varying number of mega-batches, (b) whether to use preemption or not, and (c) whether to initialize from scratch or simply finetune when replay is performed.
C.1 CIFAR
In the following results, we vary the number of mega-batches. Below we report results for MB = 20.
C.1.1 DIFFERENT MBS
C.1.2 PREEMPTED RESULTS
We also consider the use of a patience term when training the model: when the validation accuracy has not improved over 25 consecutive epochs, we stop training for the given learning phase. As expected, we observe gains in compute efficiency, with a small loss in performance.
C.1.3 INITIALIZING FROM SCRATCH
Below we show results, comparing the performance of re-training models from scratch on all the data seen so far vs simply finetuning the current model(s) on all the data. Main numbers are finetuned models, numbers in parentheses are trained from scratch.
Table 8: CIFAR-10 MB = 10 results with Replay. Numbers in () are models (re)initialized from scratch at the start of a new MB
C.2 MNIST

1. What is the focus and contribution of the paper on anytime learning at macroscale?
2. What are the strengths of the proposed approach, particularly in its setting and metrics?
3. What are the weaknesses of the paper regarding its organization, formulation, and clarity?
4. How does the reviewer assess the quality and clarity of the paper's content?
5. Are there any concerns or questions regarding the proposed method, its comparison to other works, and its applicability?

Summary Of The Paper
Summary: This paper proposes anytime learning at macroscale (ALMA), which is anytime learning under the assumption that data is observed as a sequence of large batches. The paper introduces metrics that can be used to assess the error rate, memory, and compute throughout the entire learning process. The authors evaluate multiple learning models on different datasets in the ALMA setting. They observe that methods that update parameters at a moderate rate tend to yield a better tradeoff, while bigger models tend to generalize better.
Review
The problem setting of anytime learning at macroscale is interesting and novel to me. How to efficiently learn data in a streaming fashion is a practical challenge. The proposed learning setting targets the level of the entire sequence of large datasets.
Although the method overall is valuable and interesting, the paper is poorly organized and thus hard to understand. It is difficult to find some critical details or notation definitions that are related but kept apart in the paper. The method lacks an integrated and principled formulation from which the techniques are derived. Though the authors show some theoretical results, it is hard to relate them to the objective and the algorithm steps explicitly and tightly. The contribution is unclear. Here are some detailed comments:
What does "organically generated" mean?
" both training ... and finetuning ... are not satisfying" should use "neither...nor..."
What does "constrained capacity" ?
What is the definition of macroscale?
Unclear the difference between ALMA and other learning frameworks.
Related work compares ALMA to lots of different prior work, but it is poorly organized and I am not sure why those prior work should be considered as comparisons.
It is suspicious to say that "extensions to regression and unsupervised learning (where y is missing) are trivial".
In fixed architecture, why "A potential drawback of Ens is that evaluation and training are inconsistent"?
In Growing Mixture of Experts, "Compared to Ens, MoE has exponentially many more components". I am not sure where "exponentially" comes from.
Quality: The submission is technically sound. The claims in the contribution are supported by empirical results. It is a complete piece of work.
Clarity: The experimental details are also very specific, such that reproducing the results should be possible. |
ICLR

Title
Inductive-Biases for Contrastive Learning of Disentangled Representations
Abstract
Learning disentangled representations is a core machine learning task. It has been shown that this task requires inductive biases. Recent work on class-content disentanglement has shown excellent performance, but required generative modeling of the entire dataset, which can be very demanding. Current discriminative approaches are typically based on adversarial training and do not reach comparable accuracy. In this paper, we investigate how to transfer the inductive biases implicit in generative approaches to contrastive methods. Based on our findings we propose a new, non-adversarial and non-generative method named ABCD: Augmentation Based Contrastive Disentanglement. ABCD uses contrastive representation learning relying only on content-invariant augmentations to achieve domain-disentangled representations. The discriminative approach makes ABCD much faster to train than generative approaches. We evaluate ABCD on image translation and retrieval tasks, and obtain state-of-the-art results.
1 INTRODUCTION
The task of learning domain-invariant representations of data is key in machine learning, as it has many important downstream applications, including cross-domain matching, synthesizing analogies across domains (image translation), domain adaptation and generalization, and learning to make fair decisions. The required representations must satisfy two goals: i) invariance: the representation of a sample must not reveal the domain from which it was collected; ii) alignment: similar representations should correspond to similar ground-truth hidden attributes. It is intuitive that learning such representations is a discriminative task which does not require generative modeling of the data. For example, while looking at an image of a car, a human can immediately infer its type, camera pose and color, without reconstructing every image pixel. Counter-intuitively, the best-performing approaches for learning domain-invariant representations are generative and typically use some form of variational autoencoder (VAE). These methods have reported strong results on multiple benchmark datasets (Gabbay & Hoshen, 2020b; Bouchacourt et al., 2018; Denton et al., 2017). Although discriminative approaches based on adversarial training have been proposed, the representations that they learn have typically not equalled those of generative approaches. The parameter sensitivity of adversarial training makes such approaches tricky to train, which may explain the performance gap.
We begin with the observation that although generative, VAE-based approaches are guaranteed to learn disentangled representations (under some restrictive conditions), they are not guaranteed to learn aligned representations across different domains. Remarkably, in practice, generative models often do learn aligned representations. As this is not enforced by the objective, we hypothesize it is due to inductive biases implicit in generative models. To test our hypothesis, we first determine these inductive biases: we perform experiments that evaluate the invariance of autoencoder models to different image transformations. Our findings reveal that although the list of invariances differs between autoencoders and datasets, some invariances are shared by all models trained on multiple datasets. In particular, the most preserved invariances are blur, high-contrast and high-saturation transformations. We therefore hypothesize that these transformations play a key role in the ability of generative models to learn aligned invariant representations.
To test our hypotheses, we adapt contrastive learning for domain-invariant representation learning using the above inductive biases. First, we show that the denominator of the contrastive objective alone is sufficient for learning a domain-invariant representation, by enforcing that the representation of every image is far from those of all other images in the same domain. Unfortunately, it is insufficient for learning aligned representations. To allow domain alignment, we use the inductive biases of the generative models: for every image, we learn a representation that is similar to that of its augmented version, using the transformations to which autoencoders were invariant. We show that the choice of invariance is critical, and that using the standard transformations (e.g., SimSiam augmentations) results in poor alignment or poor disentanglement.
We therefore introduce a new approach named ABCD: Augmentation Based Contrastive Disentanglement. Beyond the modifications to the standard contrastive objective mentioned above, ABCD enjoys the best of both worlds: i) it is non-generative: it does not require reconstructing every pixel in the training set, which complicates and slows down training; ii) it is discriminative but non-adversarial: the optimization is simple and does not suffer from the sensitive parameter tuning that plagues discriminative, adversarial approaches.
We evaluate our method at two levels: i) direct measurement of the disentanglement and alignment of the learned representations; ii) downstream tasks, namely cross-domain image translation and retrieval. We show that our method learns domain-invariant representations that are aligned across domains. Compared to generative approaches, ABCD is faster to train as it does not require training a generator. ABCD is shown to achieve state-of-the-art performance on unsupervised image translation and retrieval tasks.
Our contributions include:
1. Developing an understanding of the inductive biases of generative models responsible for their strong domain alignment performance.
2. A new contrastive method that enjoys the inductive biases of generative models while being non-generative and non-adversarial.
3. An evaluation of our approach both at the representation level and also on downstream tasks.
2 RELATED WORK
Learning class-content disentangled representations. The task of separating between labeled and unlabelled attributes has been extensively researched. The objective is to learn a representation of the unlabelled attributes which is: i) independent of the labeled attributes; ii) informative about the unlabelled attributes. Several methods use adversarial training (Denton et al., 2017; Szabó et al., 2018; Mathieu et al., 2016). Other methods use non-adversarial approaches, e.g., cycle consistency (Harsh Jha et al., 2018), group accumulation (Bouchacourt et al., 2018) or latent optimization (Gabbay & Hoshen, 2020a; 2021b). All the above methods are generative and require reconstruction of the entire training dataset. Here, we propose a discriminative approach that does not require learning to reconstruct the dataset, which is much faster and less computationally demanding.
Contrastive representation learning. Over the last several years, significant progress in selfsupervised representation learning was achieved by methods relying on pairs of augmented samples. Most recent methods use the constraint that the neural representations of different augmentations of the same image should be equal. Non-contrastive methods Chen & He (2020); Grill et al. (2020); Richemond et al. (2020) use the above constraint with various other tricks for learning representations. As the above formulation is prone to collapse, contrastive methods Ye et al. (2019); Hjelm et al. (2019); Wu et al. (2018); van den Oord et al. (2018); Hjelm et al. (2019); He et al. (2020); Chen et al. (2020c); Misra & Maaten (2020); Chen et al. (2020a;b) add an additional uniformity constraint that prohibits collapse of the representation to a single point. Our method adapts the contrastive objective for the task of class-content disentanglement.
Contrastive approaches for disentanglement. Recently, Zimmermann et al. (2021) proposed a seminal approach for contrastive learning of disentangled representations. They tackle the ambitious setting of unsupervised disentanglement, and therefore make strong assumptions on the distribution of the true factors of variation as well as requiring temporal sequences of images at training time. Our method applies to the different (and less ambitious) setting of class-content disentanglement - where we assume class supervision on the training data but do not require image sequences or
making particular assumptions on the evolution of unlabeled true factors. Our technical approaches are consequently very different.
Applications of disentangled representations. Learning disentangled representations has many applications, including controllable image generation (Zhu et al., 2018), image manipulation (Gabbay & Hoshen, 2020b; 2021a; Wu et al., 2021) and domain adaptation (Peng et al., 2019). Furthermore, it is believed that better disentangled representations will have future impact on model interpretability (Hsu et al., 2017), abstract reasoning (van Steenkiste et al., 2019) and fairness (Creager et al., 2019). In this work, we concentrate on applications to cross-domain translation and retrieval.
3 UNRAVELING THE INDUCTIVE BIASES OF GENERATIVE DISENTANGLEMENT MODELS
We receive as input a set of training samples x_1, x_2, . . . , x_N. Each training sample x has a labelled attribute y and unlabelled attributes u which are not correlated with y. In this paper, we assume that the labeled attribute y is a single categorical variable. The objective is to learn an encoder E which encodes each image x as a code c = E(x). We require the code c to satisfy two requirements: i) Disentanglement: there should not exist a function that can predict the labelled attribute y given the representation c; in other words, the representation should not be informative of the labelled attribute. ii) Alignment: there should exist a function that can predict u given the code c; in other words, the representation c should be informative of the unlabelled attributes.
3.1 DISENTANGLEMENT OBJECTIVES DO NOT ENSURE UNKNOWN ATTRIBUTE IS IDENTIFIABLE
It has been established by Locatello et al. (2019) that any disentanglement method must have some source for inductive bias for the disentanglement to be possible. As the class-content disentanglement setting has labeled examples, it may be hoped this should enable recovery of the unlabeled attributes. Indeed, previous research confirmed that generative models have been empirically successful at learning disentangled representations. In this section, we will argue that standard class-content disentanglement objectives do not provide enough guidance for learning aligned-disentangled representations and therefore that inductive bias is necessary.
Both VAE- and GAN-based disentanglement methods learn a representation c that satisfies two properties: i) the representation is independent of the class, p(c|y) = p(c); ii) there exists a function G such that x = G(c, y) for every image x. Most methods also force p(c|y) = N(0, I). Although this ensures independence from y, we explain that it does not force identifiability of u given c. As a simple demonstration, we exhibit an unidentifiable case that satisfies the two requirements above. Assume that p(u) = N(0, I) (u ∈ R^d) and that we learned representations c s.t. c = u for images with y = 0 and c = Pu for images with y = 1 (where P is a permutation matrix). It is clear that p(c|y) = p(c). Also, as we assume there exists a function G* s.t. x = G*(y, u), it is easy to construct a function x = G(y, c) = G*(y, (P^y)^T c). However, given c and without knowledge of y, it is not possible to recover u (as it may be either c or P^T c depending on the value of y). This shows that the objective by itself is insufficient for learning a representation c that has an injective mapping to the unknown attribute u.
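A quick numerical illustration of this construction (dimensions and sample sizes are arbitrary): c is marginally N(0, I) in both classes, yet u cannot be recovered from c alone.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
P = np.eye(d)[rng.permutation(d)]           # a fixed permutation matrix

u = rng.standard_normal((10000, d))         # true unlabeled attributes, N(0, I)
y = rng.integers(0, 2, size=10000)          # class labels

c = np.where(y[:, None] == 0, u, u @ P.T)   # c = u if y = 0, c = Pu if y = 1
# A permuted spherical Gaussian is still a spherical Gaussian, so
# p(c|y) = N(0, I) for both classes and c is disentangled from y ...
print(np.allclose(np.cov(c[y == 0].T), np.eye(d), atol=0.1))  # True
print(np.allclose(np.cov(c[y == 1].T), np.eye(d), atol=0.1))  # True
# ... but without y there is no way to tell whether u is c or P^T c.
```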
3.2 INVESTIGATING THE INDUCTIVE BIASES OF GENERATIVE MODELS
In this section, we investigate the inductive biases of generators. We only investigate one class of possible inductive biases: the invariance of the generator to particular image transformations. We propose the following experiment: i) train an autoencoder AE on an image dataset without any augmentations, i.e., min_{AE} \sum_{x \in X} ‖x − AE(x)‖², where X is the training set; ii) transform the original images from the test set of the dataset with a range of image augmentations T; iii) evaluate the invariance of the outputs of the autoencoder. We use two invariance metrics, f_unnorm and f_norm, to evaluate how much the distance between the original and transformed images changes when evaluated on autoencoder outputs.
f_{unnorm} = dist(AE(x), AE(f(x))) \quad (1)
We use the perceptual loss as the distance function. If an autoencoder is invariant to a particular transformation, both metrics should be small. The normalized metric is sensitive to smaller transformations, and the unnormalized metric is sensitive to larger transformations.
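A sketch of how these metrics could be computed; the normalization by dist(x, f(x)) for f_norm is our assumption about the normalized variant (its equation does not appear above), and `dist` stands in for the perceptual distance.

```python
def invariance_metrics(ae, dist, images, transform):
    """Average unnormalized / normalized invariance of an autoencoder.

    Small values of both metrics indicate that the autoencoder is invariant
    to the transformation.
    """
    unnorm = norm = 0.0
    for x in images:
        fx = transform(x)
        d_out = dist(ae(x), ae(fx))     # Eq. (1)
        unnorm += d_out
        norm += d_out / dist(x, fx)     # assumed form of the normalized metric
    n = len(images)
    return unnorm / n, norm / n
```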
We conducted the experiment on three datasets: Cars3D (Krause et al., 2013), CelebA (Liu et al., 2015) and Edges2Shoes (shoes only) (Yu & Grauman, 2014). We evaluated 14 augmentations from the TorchVision library; the full results are presented in the appendix. Here, we present the metrics averaged over the three datasets. We observe that autoencoders are highly invariant to blur, high saturation and high contrast, and mostly equivariant to horizontal flipping and color changes. As these are the inductive biases of generative methods, this suggests that providing these biases to discriminative methods can potentially transfer some of the attractive qualities of generative methods.
4 ABCD: A CONTRASTIVE METHOD FOR REPRESENTATION DISENTANGLEMENT
In this section we introduce ABCD, a new, discriminative approach for class-content disentanglement.
As explained in Sec. 3, disentanglement methods learn representations c that are disentangled from the labeled attribute y, s.t. p(c|y) = p(c). Although typically adversarial or VAE objectives are used, here we propose a contrastive objective. It was shown by Wang & Isola (2020) that the denominator of the contrastive objective, \sum_j -\log(\sum_i \mathbb{1}_{i \neq j} e^{sim(E(x_i), E(x_j))}), encourages the learned feature space of the encoder E to be uniformly distributed on the unit sphere. We propose to use this objective to learn an encoder E that outputs a disentangled representation c for an image x. The key is to apply the contrastive objective to the images of each class y separately (while sharing the same encoder across all classes); this ensures that the representations c of each class y are distributed uniformly on the unit sphere. As p(c|y) is equal for all values of y, we have p(c|y) = p(c) and c is independent of the class. Additionally, as each image in the training set corresponds to a unique combination of c and class y, it is possible in principle to construct a function such that x = G(y, c). The representations learned in this fashion therefore satisfy standard disentanglement objectives.
We conduct an experiment evaluating the learned representations on the SmallNORB dataset, where the labeled attribute y is the object type while the unlabeled attribute u is the object pose. After learning the encoder E, we compute the representation c = E(x) for every image x. We then train one deep classifier that attempts to predict u from c and another that attempts to predict y from c. The results are presented in Tab. 4: although the learned representations are disentangled, they do not uniquely identify u. It is apparent that the trivial contrastive formulation above does not provide the inductive biases required for learning identifiable representations.
To transfer the inductive biases from generative models to our contrastive formulation, we enforce the invariance of the learned representations to the transformations that generative models were found to be invariant to. Specifically, we add images augmented by blur, high contrast and color saturation as positive examples. The objective becomes:
\mathcal{L}_{contrastive}(x_i) = -\log \frac{e^{sim(E(x_i), E(f(x_i)))}}{\sum_j \mathbb{1}_{y_i = y_j} \, e^{sim(E(x_i), E(x_j))}} \quad (3)
where y_i is the class from which x_i is drawn and f is randomly selected from the three augmentations listed above. We rerun the experiment above, now using the transferred inductive biases. The results are presented in Tab. 4. We now see that the representations remain disentangled, but they are also informative of the unknown attribute u, i.e., the pose can now be predicted from the learned representation. We conduct a further experiment where, instead of using negative examples from the same class only, we use negative examples from the entire mini-batch (across all classes). The results on SmallNORB in this setting, shown in Tab. 4, illustrate that our modification to the denominator is key to making our approach work.
A key aspect of our approach is using transformations to which generative models were found to be invariant. It is natural to ask whether standard augmentations, e.g., those used in SimSiam or other augmentation-based representation learning methods, would suffice. To test this hypothesis, we repeated the same experiment as above but with all the augmentations used by SimSiam rather than the three invariant transformations from Sec. 3.2. We report the results in Tab. 4: using transformations to which generators are not invariant hurts disentanglement. To understand why including bad transformations can hurt performance, assume that a transformation changes the content; it would then exclude content information from the code. However, as the class is also excluded from the code by the uniformity constraint, it becomes impossible to satisfy the contrastive objective, reducing performance. This is expressed either in reduced disentanglement or in reduced alignment.
To summarize, we train an encoder that takes in an image x and returns code c. The encoder is trained using the contrastive objective in Eq. 3. Although, at first sight, our objective might appear very similar to the standard contrastive objective, there are two key differences: i) the negative examples in the denominator are only taken from the same class as the target image, rather than all images. We showed theoretically and empirically that this simple modification is critical. ii) the augmentations used correspond to the three transformations that generators are invariant to. This was also shown to be critical for the performance of the method.
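Putting the two modifications together, a PyTorch-style sketch of the objective in Eq. 3; the temperature and the exclusion of self-similarity from the denominator are our assumptions (consistent with Sec. 5.1), and each class is assumed to have several samples per mini-batch.

```python
import torch
import torch.nn.functional as F

def abcd_loss(encoder, x, x_aug, y, tau=0.2):
    """Contrastive loss with per-class negatives (Eq. 3, up to a temperature).

    `x_aug` contains content-invariant augmentations of `x` (blur, high
    contrast, high saturation); negatives for x_i are restricted to samples
    sharing its class label y_i.
    """
    c = F.normalize(encoder(x), dim=-1)
    c_aug = F.normalize(encoder(x_aug), dim=-1)
    pos = (c * c_aug).sum(dim=-1) / tau                 # sim with own augmentation
    sim = c @ c.t() / tau                               # pairwise similarities
    same = y[:, None] == y[None, :]                     # same-class mask
    self_mask = torch.eye(len(y), dtype=torch.bool, device=y.device)
    sim = sim.masked_fill(~same | self_mask, float("-inf"))
    return (torch.logsumexp(sim, dim=-1) - pos).mean()  # -log(pos / sum of negatives)
```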
5 EXPERIMENTS
In this section, we evaluate our method against generative and adversarial approaches. In Sec. 5.2, we evaluate the disentanglement and alignment of the learned representations. In Sec. 5.3, we evaluate performance on downstream tasks, specifically, cross-domain translation and retrieval.
5.1 IMPLEMENTATION DETAILS
Architecture. We use a ResNet18 encoder. In line with other methods such as LORD (that uses a perceptual loss), we use ImageNet pretrained weights.
Optimization hyperparameters. We use a learning rate of 0.001. For SmallNORB and Cars3D we train our method for 200 epochs, using a batch size of 512 composed of 32 images from each of 16 different classes. Since the classes in the CelebA dataset are smaller, we use 16 classes and 8 samples from each one in every batch.
Temperature. We tune the temperature constant of the contrastive loss between 0.1 and 0.3; we use 0.1, 0.2 and 0.3 for CelebA, SmallNORB and Cars3D, respectively.
Baselines. We implement ML-VAE and DrNet using their default parameters. We tried to replace their encoders with a ResNet18, but this degraded performance, so we report their best results. We train LORD's second-stage encoder using a ResNet18 as well, for 200 epochs on CelebA and Cars3D, and for 300 epochs on SmallNORB (as 200 were not sufficient for convergence).
Augmentations. As mentioned in Sec. 4, we use Gaussian blurring, high-contrast and high-saturation transformations as our positive augmentations.
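With torchvision, the three augmentations can be sampled as below; the specific magnitudes (kernel size, contrast and saturation ranges) are our guesses, as the paper does not list them.

```python
import random
from torchvision import transforms

# One of the three generator-invariant transformations, applied at random.
blur = transforms.GaussianBlur(kernel_size=5, sigma=(0.5, 2.0))
high_contrast = transforms.ColorJitter(contrast=(1.5, 2.0))      # only > 1
high_saturation = transforms.ColorJitter(saturation=(1.5, 2.0))  # only > 1

def augment(x):
    return random.choice([blur, high_contrast, high_saturation])(x)
```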
5.2 DIRECT REPRESENTATION EVALUATIONS
In this section, we conduct direct evaluations of the learned representations.
Experimental setup. We evaluate the two key aspects of the representation: i) disentanglement: prediction accuracy of the domain y given the code c, where low accuracy reflects a high degree of disentanglement; ii) alignment: prediction accuracy of the hidden attribute u given the code c. Note that the second metric requires ground-truth labels for the hidden attributes, which are typically available for synthetic datasets such as Cars3D and SmallNORB but not for real datasets like CelebA; we therefore provide this metric for the synthetic datasets only. We conducted the experiment for our method and LORD, as well as for DrNet and ML-VAE, which represent adversarial and non-adversarial baselines.
Results. We report results on Cars3D, SmallNORB and CelebA. On Cars3D, both our method and LORD achieve excellent (nearly perfect) performance, as expected given that this dataset is relatively simple. ML-VAE and DrNet, however, do not perform as well on this dataset, in line with the results reported in LORD. On SmallNORB, our method is able to achieve disentangled representations whereas none of the other methods can. Note that this SmallNORB benchmark is the original version and not the simplified version developed in the LORD paper: in this setting only the object category is known, whereas both pose and lighting are unknown. The poor disentanglement of the other methods allows them to include more information about the unknown attributes in the code; however, it is clear that our method provides a better trade-off between disentanglement and content alignment than the alternatives (as it is the only one achieving good disentanglement). Finally, on CelebA we provide better disentanglement than the competing methods. As there are no ground-truth labels for the unknown attributes in CelebA, we do not provide the alignment analysis.
5.2.1 TRAINING TIME ANALYSIS
We compare the training time of our method with that of the current state of the art, LORD (Gabbay & Hoshen, 2020b). Both algorithms were run on a single NVIDIA Quadro RTX 6000 for 200 epochs on all datasets. For LORD, we report two timings: the end of the latent-optimization stage and the end of the amortized stage. Results are presented in Tab. 4. We observe that our method is an order of magnitude faster than LORD.
5.3 DOWNSTREAM APPLICATION
5.3.1 IMAGE TRANSLATION
Experimental setup. Although the objective of our method is to learn strong representations rather than image generation, we provide some qualitative image translation results. For each image grid, we extract the domain y (object category) from the left column, while the unlabeled attributes (typically pose or lighting) are taken from the top row. We present results for our method and LORD.
Results. We observe that LORD and our method achieve excellent results on Cars3D. LORD, however, fails on SmallNORB: although it is able to transfer the pose, it fails to transfer the lighting. Our method, on the other hand, is able to extract the correct representations from the relevant images.
5.3.2 CROSS DOMAIN RETRIEVAL
In this section we demonstrate the performance of our method on a discriminative downstream task.
Experimental setup. We evaluate the cross-domain retrieval task: given an image from one domain and a set of images from another domain, the objective is to recover the image whose unlabelled attributes are most similar to those of the target image. We evaluate the performance of our learned encoder against those of the competing disentanglement methods: LORD, DrNet and ML-VAE. We compute results on Cars3D and SmallNORB; we do not provide quantitative results on CelebA as its unknown attributes are not labeled. We evaluate the methods by their top-1 and top-5 retrieval performance.
Results. Our quantitative evaluation is presented in Tab. 7. Our method dominates all other methods on all metrics.
6 DISCUSSION AND CONCLUSION
We presented a discriminative, non-adversarial method for learning disentangled and aligned representations. This was achieved by transferring the inductive biases of generative models to a contrastive learning approach. We made several important modifications to the contrastive loss and found that they are critical for our method to work. We evaluated our method and found that it indeed learns disentangled and aligned representations, that it is about an order of magnitude faster to train than competing approaches, and that it achieves better results than strong baselines on several tasks and datasets.
Naturally, our method has several limitations that can be addressed in future work. We discuss some of those below:
Non-generative inductive biases. Our method currently replicates the inductive biases of generative models; it therefore does not offer useful inductive biases beyond those that generative models already have. By designing new augmentations, future work may be able to extend the range of inductive biases.
Batch-size sensitivity. Our method is based on SimCLR, whose performance is positively correlated with the batch size. Future work may investigate using other frameworks, e.g., MoCo-v2, that have a reduced dependence on the batch size.
A APPENDIX
A.1 INDUCTIVE BIAS ANALYSIS

1. What is the focus of the paper regarding image representation?
2. What are the strengths and weaknesses of the proposed method in comparison to generative methods?
3. How does the reviewer assess the clarity and completeness of the experimental evaluation?
4. What are the limitations of the paper regarding its claims and applicability to other data types?
5. Do you have any questions or suggestions regarding the presentation and referencing of the paper?

Summary Of The Paper
The paper proposes a method to learn disentangled representations of images, specifically (as acknowledged only in sec2) in the class vs content disentanglement setting, not in the unsupervised setting. The method is designed to be non-generative, non-adversarial, and to exhibit similar performance (but lower runtime) to generative methods. The method is based on reusing augmentations found to generate invariances in generative contrastive methods.
The paper also contributes an analysis and experiments on the impact of the shape of the contrastive learning objective, justifying the shape of the objective of the proposed method (eq 3).
Review
The paper's topic is important and justified by the intuition that contrastive learning should, in principle, be possible without generative learning. The paper accordingly brings forward a discriminative objective. The method seems innovative despite the simplicity of the adaptation. It requires the identification of useful augmentations from the analysis of a VAE such as in sec3 table 1: this represents a strong dependence on an ancillary experiment; as a consequence, I am not convinced that the set of 3 isolated augmentations is universal, and certainly porting to a new type of images (e.g., photos, images with background) or other media (audio, speech...) will require new, similar experiments.
The method is based on a few simple observations for which the intuition is given clearly and through motivating experiments, which support the selected methodology well (sec3 and 4). The method exposition is clear.
The experiments are quite complete, with both an intrinsic and extrinsic evaluation, and make sense to me. Reporting is clear, but could be more detailed (e.g., where does the ResNet come from?). The discussion of results is not well motivated by the results tables. Some results visible in the tables are not commented on, e.g. table 3, SmallNorb Ours Factors is relatively low. Extrinsic evaluation could be stronger if it were quantitative.
The paper reads relatively clearly, with a few shortcomings.
From the title, the abstract and the first paragraph of the introduction, the paper seems to universally address contrastive learning independently of the data type. Only accidentally does the reader learn that the paper's claim is restricted to images; I would not accept to extend the claim to other data types without corresponding experiments. I suggest rewriting the title, the abstract, the introduction to clarify this fact.
Table numbers are wrong
Competitor methods are introduced late, if at all; for instance LORD's citation is only in sec5.2.1. DrNet and ML-VAE are not referenced explicitly. In several places the paper implicitly assumes that the reader has read all the cited material, instead of properly referring to it.
sec3 How do you identify equivariance from the results table?
Typos, grammar and spelling
Hyphens should be removed from several expressions: inductive-biases in the title (!), adversarial-training, generative-approaches, parameter-sensitivity, slows-down, in-principle
SmallNorm for SmallNORB in several places
inline with -> in line
groundtruth -> ground truth
sec5.1 accordingly: do you mean respectively? |
ICLR | Title
Inductive Biases for Contrastive Learning of Disentangled Representations
Abstract
Learning disentangled representations is a core machine learning task. It has been shown that this task requires inductive biases. Recent work on class-content disentanglement has shown excellent performance, but required generative modeling of the entire dataset, which can be very demanding. Current discriminative approaches are typically based on adversarial training and do not reach comparable accuracy. In this paper, we investigate how to transfer the inductive biases implicit in generative approaches to contrastive methods. Based on our findings, we propose a new, non-adversarial and non-generative method named ABCD: Augmentation Based Contrastive Disentanglement. ABCD uses contrastive representation learning relying only on content-invariant augmentations to achieve domain-disentangled representations. The discriminative approach makes ABCD much faster to train relative to generative approaches. We evaluate ABCD on image translation and retrieval tasks, and obtain state-of-the-art results.
1 INTRODUCTION
The task of learning domain-invariant representations of data is key in machine learning, as it has many important downstream applications. Some of these include cross-domain matching, synthesizing analogies across domains (image translation), domain adaptation and generalization, and learning to make fair decisions. The required representations must satisfy two goals: i) invariance: the representation of a sample must not reveal the domain from which it was collected; ii) alignment: similar representations should correspond to similar ground-truth hidden attributes. It is intuitive that learning such representations is a discriminative task which does not require generative modeling of the data. For example, while looking at an image of a car, a human can immediately infer its type, camera pose and color, without reconstructing every image pixel. Counter-intuitively, the best performing approaches for learning domain-invariant representations are generative and typically use some form of variational autoencoder (VAE). These methods reported strong results on multiple benchmark datasets (Gabbay & Hoshen, 2020b; Bouchacourt et al., 2018; Denton et al., 2017). Although discriminative approaches based on adversarial training have been proposed, the representations that they learn have typically not equalled those of generative approaches. The parameter sensitivity of adversarial training makes such approaches tricky to train, which may explain the performance gap.
We begin with the observation that although generative, VAE-based approaches are guaranteed to learn disentangled representations (under some restrictive conditions), they are not guaranteed to learn aligned representations across different domains. Remarkably, in practice, generative models often learn aligned representations. As this is not enforced by the objective, we hypothesize this is due to inductive biases implicit in generative models. To test our hypothesis, we first determine the inductive biases of generative models. We perform experiments that evaluate the invariance of autoencoder models to different image transformations. Our findings reveal that although the list of invariances differs between autoencoders and datasets, some invariances are shared by all models trained on multiple datasets. In particular, the most preserved invariances are blur and high-contrast and high-saturation transformations. We therefore hypothesize that these transformations play a key role in the ability of generative models to learn aligned invariant representations.
To test our hypotheses, we adapt contrastive learning for domain-invariant representation learning using the above inductive biases. First, we show that just the denominator of the contrastive objective is sufficient for learning domain-invariant representations, by enforcing that the representation of every image is far from those of all other images in the same domain. Unfortunately, it is insufficient for learning aligned representations. To allow domain alignment, we use the inductive biases of the generative models. Specifically, for every image, we learn a representation that is similar to that of its augmented version, using the transformations to which autoencoders were invariant. We show the choice of invariance is critical and that using the standard transformations (e.g., SimSiam augmentations) results in poor alignment or poor disentanglement.
We therefore introduce a new approach named ABCD: Augmentation Based Contrastive Disentanglement. We find that, beyond the modifications to the standard contrastive objective mentioned above, ABCD enjoys the best of both worlds, as it is: i) non-generative: it does not require reconstruction of every pixel in the training set, which complicates and slows down the training process; ii) discriminative but non-adversarial: the optimization is simple and does not suffer from the sensitive parameter tuning that plagues discriminative, adversarial approaches.
We evaluate our method at various levels: i) direct measurement of the disentanglement and alignment of the learned representations; ii) downstream tasks: cross-domain image translation and retrieval. We show that our method learns domain-invariant representations that are aligned across domains. When compared to generative approaches, ABCD is faster to train as it does not require training a generator. ABCD is shown to achieve state-of-the-art performance on unsupervised image translation and retrieval tasks.
Our contributions include:
1. Developing an understanding of the inductive biases of generative models responsible for their strong domain alignment performance.
2. A new contrastive method that enjoys the inductive biases of generative models while being non-generative and non-adversarial.
3. An evaluation of our approach both at the representation level and also on downstream tasks.
2 RELATED WORK
Learning class-content disentangled representations. The task of separating between labeled and unlabelled attributes has been extensively researched. The objective is to learn a representation for the unlabelled attributes which is: i) independent of the labeled attributes; ii) informative of the unlabelled attributes. Several methods use adversarial training (Denton et al., 2017; Szabó et al., 2018; Mathieu et al., 2016). Other methods use non-adversarial approaches, e.g., cycle consistency (Harsh Jha et al., 2018), group accumulation (Bouchacourt et al., 2018) or latent optimization (Gabbay & Hoshen, 2020a; 2021b). All the above methods are generative and require reconstruction of the entire training dataset. Here, we propose a discriminative approach that does not require learning to reconstruct the dataset, which is much faster and less computationally demanding.
Contrastive representation learning. Over the last several years, significant progress in self-supervised representation learning was achieved by methods relying on pairs of augmented samples. Most recent methods use the constraint that the neural representations of different augmentations of the same image should be equal. Non-contrastive methods Chen & He (2020); Grill et al. (2020); Richemond et al. (2020) use the above constraint with various other tricks for learning representations. As the above formulation is prone to collapse, contrastive methods Ye et al. (2019); Hjelm et al. (2019); Wu et al. (2018); van den Oord et al. (2018); Hjelm et al. (2019); He et al. (2020); Chen et al. (2020c); Misra & Maaten (2020); Chen et al. (2020a;b) add an additional uniformity constraint that prohibits collapse of the representation to a single point. Our method adapts the contrastive objective for the task of class-content disentanglement.
Contrastive approaches for disentanglement. Recently, Zimmermann et al. (2021) proposed a seminal approach for contrastive learning of disentangled representations. They tackle the ambitious setting of unsupervised disentanglement, and therefore make strong assumptions on the distribution of the true factors of variation as well as requiring temporal sequences of images at training time. Our method applies to the different (and less ambitious) setting of class-content disentanglement - where we assume class supervision on the training data but do not require image sequences or
making particular assumptions on the evolution of unlabeled true factors. Our technical approaches are consequently very different.
Applications of disentangled representations. Learning disentangled representations has many applications, including controllable image generation (Zhu et al., 2018), image manipulation (Gabbay & Hoshen, 2020b; 2021a; Wu et al., 2021) and domain adaptation (Peng et al., 2019). Furthermore, it is believed that better disentangled representations will have future impact on model interpretability (Hsu et al., 2017), abstract reasoning (van Steenkiste et al., 2019) and fairness (Creager et al., 2019). In this work, we concentrate on applications to cross-domain translation and retrieval.
3 UNRAVELING THE INDUCTIVE BIASES OF GENERATIVE DISENTANGLEMENT MODELS
We receive as input a set of training samples x_1, x_2, ..., x_N. Each training sample x has labeled attributes y and also unlabelled attributes u which are not correlated with y. In this paper, we assume that the labeled attribute y is a single, categorical variable. The objective is to learn an encoder E, which encodes each image x as a code c = E(x). We require the code c to satisfy two requirements: i) Disentanglement: there should not exist a function that can predict the labeled attribute y given the representation c; in other words, the representation should not be informative of the labeled attribute. ii) Alignment: there should exist a function that can predict u given code c; in other words, the representation c should be informative of the unlabelled attributes.
3.1 DISENTANGLEMENT OBJECTIVES DO NOT ENSURE UNKNOWN ATTRIBUTE IS IDENTIFIABLE
It has been established by Locatello et al. (2019) that any disentanglement method must have some source of inductive bias for disentanglement to be possible. As the class-content disentanglement setting has labeled examples, it may be hoped that this should enable recovery of the unlabeled attributes. Indeed, previous research confirmed that generative models have been empirically successful at learning disentangled representations. In this section, we will argue that standard class-content disentanglement objectives do not provide enough guidance for learning aligned, disentangled representations and therefore that inductive bias is necessary.
Both VAE and GAN-based disentanglement methods learn a representation c that satisfies two properties: i) the representation is independent of the class, p(c|y) = p(c); ii) there exists a function G such that x = G(c, y) for every image x. Most methods also force p(c|y) = N(0, I). Although this ensures independence from y, we explain why this does not force identifiability of u given c. As a simple demonstration, we will show an unidentifiable case that satisfies the two requirements above. Assume that p(u) = N(0, I) (u ∈ R^d) and that we learned representations c s.t. c = u for images with y = 0 and c = Pu for images with y = 1 (where P is a permutation matrix). It is clear that p(c|y) = p(c). Also, as we assume there exists a function G* s.t. x = G*(y, u), it is easy to construct a function x = G(y, c) = G*(y, (P^y)^T c), since P is a permutation (hence orthogonal) and therefore u = (P^y)^T c. However, given c and without knowledge of y, it is not possible to recover u (as it may be either c or P^T c depending on the value of y). This shows that the objective by itself is insufficient for learning a representation c that has an injective mapping to the unknown attribute u.
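To make the ambiguity concrete, here is a tiny runnable illustration (a minimal sketch with made-up values; the variable names are ours, not the paper's):

```python
# Sketch of the unidentifiability example in Sec. 3.1: the same learned
# code c corresponds to different unknown attributes u depending on the
# (unobserved) class y, so u cannot be recovered from c alone.
import numpy as np

P = np.array([[0, 1], [1, 0]])   # a permutation matrix
u = np.array([0.3, -1.2])        # the true unknown attributes

c_y0 = u          # images with y = 0 are encoded as c = u
c_y1 = P @ u      # images with y = 1 are encoded as c = P u

# Knowing y, u is recoverable via u = (P^y)^T c (P is orthogonal):
for y, c in [(0, c_y0), (1, c_y1)]:
    u_recovered = np.linalg.matrix_power(P, y).T @ c
    assert np.allclose(u_recovered, u)
# Without y, c could equal either u or P u, so u is unidentifiable.
```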
3.2 INVESTIGATING THE INDUCTIVE BIASES OF GENERATIVE MODELS
In this section, we investigate the inductive biases of generators. We investigate only one class of possible inductive biases: invariance of the generator to particular image transformations. We propose the following experiment: i) train an autoencoder AE on an image dataset without any augmentations, s.t. min_AE ∑_{x∈X} ‖x − AE(x)‖², where X is the training set; ii) transform the original images from the test set of the dataset with a range of image augmentations T; iii) evaluate the invariance of the outputs of the autoencoder. We use the following two invariance metrics, f_unnorm and f_norm, for evaluating how much the distance between the original and transformed images changes when evaluated on autoencoder outputs.
f_unnorm = dist(AE(x), AE(f(x)))    (1)
We use the perceptual loss as the distance function. If an autoencoder is invariant to a particular transformation, both metrics should be small. The normalized metric is sensitive to smaller transformations, and the unnormalized metric is sensitive to larger transformations.
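As a concrete illustration, the probe can be implemented in a few lines (a minimal sketch; `ae` and `perceptual_dist` are hypothetical stand-ins for the trained autoencoder and the perceptual distance used in the paper, and the augmentation parameters are our illustrative choices):

```python
# Sketch of the invariance probe: compare autoencoder outputs on original
# vs. transformed test images for a set of candidate augmentations.
import torch
import torchvision.transforms as T

augmentations = {
    "blur": T.GaussianBlur(kernel_size=9, sigma=2.0),
    "high_contrast": T.ColorJitter(contrast=(1.5, 1.5)),
    "high_saturation": T.ColorJitter(saturation=(1.5, 1.5)),
    "horizontal_flip": T.RandomHorizontalFlip(p=1.0),
}

@torch.no_grad()
def invariance_scores(ae, perceptual_dist, images):
    """Return f_unnorm = dist(AE(x), AE(f(x))) averaged over a batch."""
    base = ae(images)
    return {name: perceptual_dist(base, ae(f(images))).mean().item()
            for name, f in augmentations.items()}
```

A low score for a transformation f indicates that the autoencoder is invariant to f.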
We conducted the experiment on three datasets: Cars3D (Krause et al., 2013), CelebA (Liu et al., 2015) and Edges2Shoes (shoes only) (Yu & Grauman, 2014). The 14 augmentations from the TorchVision library were evaluated. The full results are presented in the appendix. Here, we present the metrics averaged over the three datasets. We observe that autoencoders are highly invariant to blur, high saturation and high contrast. They are mostly equivariant to horizontal flipping and color changes. As these are the inductive biases of generative methods, this suggests that providing these biases to discriminative methods can potentially transfer some of the attractive qualities of generative methods.
4 ABCD: A CONTRASTIVE METHOD FOR REPRESENTATION DISENTANGLEMENT
In this section we introduce ABCD, a new, discriminative approach for class-content disentanglement.
As explained in Sec. 3, disentanglement methods learn representations c that are disentangled from the labeled attribute y s.t. p(c|y) = p(c). Although typically adversarial or VAE objectives are used, here we propose to use a contrastive objective. It was shown by Wang & Isola (2020) that the denominator of the contrastive objective, ∑_j −log(∑_i 1_{i≠j} e^{sim(E(x_i), E(x_j))}), encourages the learned feature space of the encoder E to be uniformly distributed on the unit sphere. We propose to use this objective to learn an encoder E that produces a disentangled representation c for an image x. The key is to apply the contrastive objective to the images of each class y separately (but share the same encoder for all classes); this ensures that the representations c of each class y are distributed uniformly on the unit sphere. As p(c|y) is equal for all values of y, we have p(c|y) = p(c), and c is independent of the class. Additionally, as for each image in the training set there exists a unique combination of c and class y, it is possible in principle to construct a function such that x = G(y, c). The representations learned in this fashion therefore satisfy standard disentanglement objectives.
We conduct an experiment to evaluate the learned representations on the SmallNORB dataset, where the labeled attribute y is the object type while the unlabeled attribute u is the object pose. After learning the encoder E, we compute the representation c = E(x) for every image x. We train one deep classifier that attempts to predict u from c and another that attempts to predict y from c. The results are presented in Tab. 4. This shows that although the learned representations are disentangled,
they do not uniquely identify u. It is apparent that the trivial contrastive formulation above does not provide the inductive biases required for learning identifiable representations.
To transfer the inductive biases from generative models to our contrastive formulation, we enforce invariance of the learned representations to the transformations to which generative models were found to be invariant. Specifically, we add images augmented by blur, high contrast, and color saturation as positive examples. The objective becomes:
L_contrastive(x_i) = −log ( e^{sim(E(x_i), E(f(x_i)))} / ∑_j 1_{y_i = y_j} e^{sim(E(x_i), E(x_j))} )    (3)
where y_i denotes the class (domain) of x_i and f is randomly selected from the augmentations listed above. We rerun the experiment above, now using the transferred inductive biases. The results are presented in Tab. 4. We now see that the representations remain disentangled, but they are now also informative of the unknown attribute u, i.e., the pose can now be predicted from the learned representation. We conduct a further experiment where, instead of using negative examples from the same class, we use negative examples from the entire mini-batch (across all classes). Results on SmallNORB when negative examples from all classes are used are shown in Tab. 4. This illustrates that our modification to the denominator is key for making our approach work.
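For clarity, the class-conditional objective in Eq. 3 can be sketched as follows (a minimal sketch assuming an encoder E with L2-normalized outputs; the temperature value and masking details are our illustrative choices, not taken from the paper):

```python
# Sketch of the class-conditional contrastive loss (Eq. 3): the positive is
# the augmented view, and negatives are drawn only from the same class.
import torch
import torch.nn.functional as F

def class_conditional_contrastive_loss(E, x, x_aug, y, temperature=0.2):
    z = F.normalize(E(x), dim=1)           # codes of the original images
    z_aug = F.normalize(E(x_aug), dim=1)   # codes of the augmented images
    pos = (z * z_aug).sum(dim=1) / temperature     # sim(E(x_i), E(f(x_i)))
    sim = (z @ z.t()) / temperature                # in-batch pairwise sims
    same_class = y.unsqueeze(0) == y.unsqueeze(1)
    not_self = ~torch.eye(len(y), dtype=torch.bool, device=y.device)
    neg = ((same_class & not_self).float() * sim.exp()).sum(dim=1)
    return (pos.exp() / (neg + pos.exp())).log().neg().mean()
```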
A key aspect of our approach is using transformations to which generative models were found to be invariant. It is imperative to investigate whether standard augmentations, e.g., those used in SimSiam (or other augmentation-based representation learning methods), would suffice. To test this hypothesis, we repeated the same experiment as above but with all the augmentations used by SimSiam rather than the three invariant transformations from Sec. 3.2. We report the results in Tab. 4. We can see that using transformations to which generators are not invariant hurts disentanglement. To understand how including bad transformations can hurt performance, suppose a transformation changes the content; it would then exclude content information from being included in the code. However, as the class is also excluded from the code by the uniformity constraint, it will not be possible to satisfy the contrastive objective, causing reduced performance. This will be expressed either in reduced disentanglement or in reduced alignment.
To summarize, we train an encoder that takes in an image x and returns code c. The encoder is trained using the contrastive objective in Eq. 3. Although, at first sight, our objective might appear very similar to the standard contrastive objective, there are two key differences: i) the negative examples in the denominator are only taken from the same class as the target image, rather than all images. We showed theoretically and empirically that this simple modification is critical. ii) the augmentations used correspond to the three transformations that generators are invariant to. This was also shown to be critical for the performance of the method.
5 EXPERIMENTS
In this section, we evaluate our method against generative and adversarial approaches. In Sec. 5.2, we evaluate the disentanglement and alignment of the learned representations. In Sec. 5.3, we evaluate performance on downstream tasks, specifically, cross-domain translation and retrieval.
5.1 IMPLEMENTATION DETAILS
Architecture. We use a ResNet18 encoder. In line with other methods such as LORD (that uses a perceptual loss), we use ImageNet pretrained weights.
Optimization hyperparameters. We use a learning rate of 0.001. For SmallNORB and Cars3D we train our method for 200 epochs, using a batch size of 512, composed of 32 images drawn from each of 16 different classes. Since the classes in the CelebA dataset are smaller, we use 16 classes and 8 samples from each one in each batch.
Temperature. We tune the temperature constant for contrastive learning between the values of 0.1 and 0.3. We use 0.1, 0.2 and 0.3 for CelebA, SmallNORB and Cars3D, respectively.
Baselines. We implement ML-VAE and DrNet using their default parameters. We tried to replace their encoders with ResNet18, but this resulted in degraded performance. We therefore report their best results. We train LORD's second-stage encoder using a ResNet18 as well. We trained it for 200 epochs for CelebA and Cars3D, and for 300 epochs on SmallNORB (as 200 were not sufficient for convergence).
Augmentations. As mentioned in Sec. 4, we used Gaussian blurring, high-contrast and high-saturation transformations as our positive augmentations.
5.2 DIRECT REPRESENTATION EVALUATIONS
In this section, we conduct direct evaluations of the learned representations.
Experimental setup. We evaluate the two key aspects of the representation, namely: i) disentanglement - prediction accuracy of the domain y given the code c; low accuracy would reflect a high degree of disentanglement. ii) alignment - prediction accuracy of the hidden attribute u given the code c. Note that this metric requires ground-truth labels for the hidden attributes, which are typically available for synthetic datasets such as Cars3D and SmallNORB, but not for real datasets like CelebA. We therefore provide this metric for the synthetic datasets only. We conducted the experiment for our method and LORD, as well as DrNet and ML-VAE, which represent adversarial and non-adversarial baselines.
Results. We report results on Cars3D, SmallNORB and CelebA. We observe that on Cars3D, both our method and LORD achieve excellent (nearly perfect) performance. This is expected, as this dataset is relatively simple. We can see, however, that ML-VAE and DrNet did not perform as well on this dataset. This is in line with the results reported in LORD. On SmallNORB, our method was able to achieve disentangled representations whereas none of the other methods could. Note that this SmallNORB benchmark is the original version and not the simplified version developed in the LORD paper. In this setting, only the object category is known whereas both pose and lighting are unknown. The poor disentanglement of the other methods allows them to include more information on the unknown attributes in the code. However, it is clear that our method provides a better tradeoff between disentanglement and content alignment than the alternative methods (as it is the only one that allows good disentanglement). Finally, on CelebA we provide better disentanglement than the competing methods. As there are no ground-truth labels for the unknown attributes in CelebA, we did not provide this analysis.
5.2.1 TRAINING TIME ANALYSIS
We provide a training time comparison between our method and the current SOTA, LORD (Gabbay & Hoshen, 2020b). Both algorithms were run on a single NVIDIA Quadro RTX 6000 for 200 epochs for all datasets. For LORD, we present two different timings: the end of the latent optimization stage, and the end of the amortized stage. Results are presented in Tab. 4. We can observe that our method is an order of magnitude faster than LORD.
5.3 DOWNSTREAM APPLICATION
5.3.1 IMAGE TRANSLATION
Experimental setup. Although the objective of our method is to learn strong representations rather than image generation, we provide some qualitative image translation results. For each image set, we extract the domain y (object category) from the left column, while the unlabeled attributes (typically pose or lighting) are taken from the top row. We present results for our method and LORD.
Results. We observe that LORD and our method achieve excellent results on Cars3D. We see, however, that LORD fails on SmallNORB. Although it is able to transfer the pose, it fails to transfer the lighting. On the other hand, our method is able to extract the correct representations from the relevant images.
5.3.2 CROSS DOMAIN RETRIEVAL
In this section we demonstrate the performance of our method on a discriminative downstream task.
Experimental setup. We evaluate the cross-domain retrieval task. Given an image from one domain, and a set of images from another domain, our objective is to recover the image whose unlabelled attributes are most similar to those of the target image. We evaluate the performance of our learned encoder against those of the competing disentanglement methods: LORD, DrNet and ML-VAE. We compute results on Cars3D and on SmallNORB. We did not provide quantitative results on CelebA as its unknown attributes are not labeled. We evaluate the methods by their top-1 and top-5 retrieval performance.
Results. Our quantitative evaluation is presented in Tab. 7. We can see that our method outperforms all other methods on all metrics.
6 DISCUSSION AND CONCLUSION
We presented a discriminative, non-adversarial method for learning disentangled and aligned representations. This was achieved by transferring the inductive biases of generative models to a contrastive learning approach. We made several important modifications to the contrastive loss and found that they are critical for our method to work. We evaluated our method and found that it indeed learns disentangled and aligned representations. Our method is about an order of magnitude faster than competing approaches. It was also found to achieve better results than strong baselines on several tasks and datasets.
Naturally, our method has several limitations that can be addressed in future work. We discuss some of those below:
Non-generative inductive biases. Our method currently replicates the inductive biases of generative models. It therefore does not benefit from other useful inductive biases that generative models lack. By designing new augmentations, future work may be able to extend the range of inductive biases.
Batch-size sensitivity. Our method is based on SimCLR, whose performance is positively correlated with the batch size. Future work may investigate using other frameworks, e.g., MoCo-v2, that have a reduced dependence on the batch size.
A APPENDIX
A.1 INDUCTIVE BIAS ANALYSIS | 1. What is the focus of the paper, and what are the authors' contributions to the field of contrastive self-supervised learning?
2. What are the strengths of the proposed approach, particularly regarding disentanglement and feature representation?
3. What are the weaknesses of the paper, including unclear or unsupported claims, confusing examples, and inadequate experimentation?
4. How might the authors improve their work by addressing these issues and providing more solid evidence for their conclusions? | Summary Of The Paper
Review | Summary Of The Paper
This paper investigates the disentanglement effects achieved by the contrastive self-supervised learning approach. The authors claim that providing a set of data augmentation methods to which a typical VAE model is invariant will help to learn features that are disentangled from the class of the objects. In the experimental analysis, the authors define two metrics to measure the quality of the feature disentanglement and demonstrate that their proposed method has good performance.
Review
The paper is addressing one very important problem of contrastive self-supervised learning, it shows that under certain well-designed conditions (the inductive bias), the approach could learn disentangled feature representations (possibly due to the uniformity of the embedding space). I highly appreciate this contribution, which might be impactful to this community.
However, there are many weaknesses in the current version of this paper, which make it less solid and satisfying to read:
A clear definition for ``inductive-bias'' needs to be made and discussed in the related work part. The initial motivation of using the content invariant augmentation should also be clearly explained based on this.
Some claims in the paper are not well supported. For instance, in the introduction, the authors say ``in practice, generative models often learn aligned representations''. Claims like this are extremely confusing: the types of generative models vary, and the aligned representations are not well explained. I have to guess a lot about what the authors want to say and have no idea if these claims are correct.
The example in section 3.1 is unclear. What are the methods that force a standard normal distribution? IMO, the GANs do not, and the VAEs learn separated normal distributions for all the hidden dimensions. Also, what does the (P^y)^T mean? I have totally no idea what this example wants to prove.
The experiments are not convincing. There are no detailed explanations of the metrics introduced (domain accuracy and content mean accuracy), though intuitively I can guess what they mean. However, since the performance of the compared methods (LORD and DrNet) is not evaluated on these metrics, they should be clearly defined and every detail of the experiments should be clearly shown. On the other hand, there are a few more works to compare to (e.g., OverLORD), with different tasks (the same as in other papers); the current experimental results are not strong enough.
In the conclusion, the authors say ``we made several important modifications to the contrastive loss''. So what are they? |
ICLR | Title
Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies
Abstract
Learning continuous representations of discrete objects such as text, users, movies, and URLs lies at the heart of many applications including language and user modeling. When using discrete objects as input to neural networks, we often ignore the underlying structures (e.g., natural groupings and similarities) and embed the objects independently into individual vectors. As a result, existing methods do not scale to large vocabulary sizes. In this paper, we design a simple and efficient embedding algorithm that learns a small set of anchor embeddings and a sparse transformation matrix. We call our method ANCHOR & TRANSFORM (ANT) as the embeddings of discrete objects are a sparse linear combination of the anchors, weighted according to the transformation matrix. ANT is scalable, flexible, and end-to-end trainable. We further provide a statistical interpretation of our algorithm as a Bayesian nonparametric prior for embeddings that encourages sparsity and leverages natural groupings among objects. By deriving an approximate inference algorithm based on Small Variance Asymptotics, we obtain a natural extension that automatically learns the optimal number of anchors instead of having to tune it as a hyperparameter. On text classification, language modeling, and movie recommendation benchmarks, we show that ANT is particularly suitable for large vocabulary sizes and demonstrates stronger performance with fewer parameters (up to 40× compression) as compared to existing compression baselines. Code for our experiments can be found at https://github.com/pliang279/ sparse_discrete.
1 INTRODUCTION
Most machine learning models, including neural networks, operate on vector spaces. Therefore, when working with discrete objects such as text, we must define a method of converting objects into vectors. The standard way to map objects to continuous representations involves: 1) defining the vocabulary V = {v_1, ..., v_{|V|}} as the set of all objects, and 2) learning a |V| × d embedding matrix that defines a d-dimensional continuous representation for each object. This method has two main shortcomings. Firstly, when |V| is large (e.g., millions of words/users/URLs), this embedding matrix does not scale elegantly and may constitute up to 80% of all trainable parameters (Jozefowicz et al., 2016). Secondly, despite being discrete, these objects usually have underlying structures such as natural groupings and similarities among them. Assigning each object to an individual vector assumes independence and foregoes opportunities for statistical strength sharing. As a result, there has been a large amount of interest in learning sparse interdependent representations for large vocabularies rather than the full embedding matrix for cheaper training, storage, and inference.
In this paper, we propose a simple method to learn sparse representations that uses a global set of vectors, which we call the anchors, and expresses the embeddings of discrete objects as sparse linear combinations of these anchors, as shown in Figure 1. One can consider these anchors to represent latent topics or concepts. Therefore, we call the resulting method ANCHOR & TRANSFORM (ANT). The approach is reminiscent of low-rank and sparse coding approaches; however, surprisingly, these methods have not been elegantly integrated with deep networks in the literature. Competitive attempts are often complex (e.g., optimized with RL (Joglekar et al., 2019)), involve multiple training stages (Ginart et al., 2019; Liu et al., 2017), or require post-processing (Svenstrup et al., 2017; Guo et al., 2017; Aharon et al., 2006; Awasthi & Vijayaraghavan, 2018). We derive a simple optimization objective which learns these anchors and sparse transformations in an end-to-end manner. ANT is
∗work done during an internship at Google.
scalable and flexible, allowing the user to define these anchors and add more constraints on the transformations, possibly in a domain/task-specific manner. We find that our proposed method demonstrates stronger performance with fewer parameters (up to 40× compression) on multiple tasks (text classification, language modeling, and recommendation) as compared to existing baselines.
We further provide a statistical interpretation of our algorithm as a Bayesian nonparametric (BNP) prior for neural embeddings that encourages sparsity and leverages natural groupings among objects. Specifically, we show its equivalence to an Indian Buffet Process (IBP; Griffiths & Ghahramani (2005)) prior for embedding matrices. While such BNP priors have proven to be flexible tools in graphical models for encouraging hierarchies (Teh & Jordan, 2010), sparsity (Knowles & Ghahramani, 2011), and other structural constraints (Roy et al., 2016), the associated inference methods are usually complex, hand-designed for each setup, and non-differentiable. Our proposed method opens the door towards integrating priors (e.g., IBP) with neural representation learning. These theoretical connections lead to practical insights: by asymptotically analyzing the likelihood of our model in the small-variance limit using Small Variance Asymptotics (SVA; Roweis (1998)), we obtain a natural extension, NBANT, that automatically learns the optimal number of anchors to achieve a balance between performance and compression instead of having to tune it as a hyperparameter.
2 RELATED WORK
Prior work in learning sparse embeddings of discrete structures falls into three categories:
Matrix compression techniques such as low-rank approximations (Acharya et al., 2019; Grachev et al., 2019; Markovsky, 2011), quantizing (Han et al., 2016), pruning (Anwar et al., 2017; Dong et al., 2017; Wen et al., 2016), or hashing (Chen et al., 2015; Guo et al., 2017; Qi et al., 2017) have been applied to embedding matrices. However, it is not trivial to learn sparse low-rank representations of large matrices, especially in conjunction with neural networks. To the best of our knowledge, we are the first to present the integration of sparse low-rank representations with their nonparametric extension, and to demonstrate its effectiveness on many tasks in balancing the tradeoffs between performance and sparsity. We also outperform many baselines based on low-rank compression (Grachev et al., 2019), sparse coding (Chen et al., 2016b), and pruning (Liu et al., 2017).
Reducing representation size: These methods reduce the dimension d for different objects. Chen et al. (2016a) divides the embedding into buckets which are assigned to objects in order of importance, Joglekar et al. (2019) learns d by solving a discrete optimization problem with RL, and Baevski & Auli (2019) reduces dimensions for rarer words. These methods resort to RL or are difficult to tune with many hyperparameters. Each object is also modeled independently without information sharing.
Task specific methods include learning embeddings of only common words for language modeling (Chen et al., 2016b; Luong et al., 2015), and vocabulary selection for text classification (Chen et al., 2019). Other methods reconstruct pre-trained embeddings using codebook learning (Chen et al., 2018; Shu & Nakayama, 2018) or low rank tensors (Sedov & Yang, 2018). However, these methods cannot work for general tasks. For example, methods that only model a subset of objects cannot be used for retrieval because it would never retrieve the dropped objects. Rare objects might be highly relevant to a few users so it might not be ideal to completely ignore them. Similarly, task-specific methods such as subword (Bojanowski et al., 2017) and wordpiece (Wu et al., 2016) embeddings, while useful for text, do not generalize to general applications such as item and query retrieval.
3 ANCHOR & TRANSFORM
Suppose we are presented with data X ∈ V^N, Y ∈ R^{N×c} drawn from some joint distribution p(x, y), where the support of x is over a discrete set V (the vocabulary) and N is the size of the training set. The entries in Y can be either discrete (classification) or continuous (regression). The goal is to learn a d-dimensional representation {e_1, ..., e_{|V|}} for each object by learning an embedding matrix E ∈ R^{|V|×d} where row i is the representation e_i of object i. A model f_θ with parameters θ is then used to predict y, i.e., ŷ_i = f_θ(x_i; E) = f_θ(E[x_i]).
At a high level, to encourage statistical sharing between objects, we assume that the embedding of each object is obtained by linearly superimposing a small set of anchor objects. For example, when the objects considered are words, the anchors may represent latent abstract concepts (of unknown cardinality) and each word is a weighted mixture of different concepts. More generally, the model assumes that there are some unknown number of anchors, A = {a_1, ..., a_{|A|}}. The embedding e_i for object i is generated by first choosing whether object i possesses each anchor a_k ∈ R^d. The selected anchors then each contribute some weight to the representation of object i. Therefore, instead of learning the large embedding matrix E directly, ANT consists of two components:
Algorithm 1 ANCHOR & TRANSFORM algorithm for learning sparse representations of discrete objects.
ANCHOR & TRANSFORM:
1: Anchor: initialize anchor embeddings A.
2: Transform: initialize T as a sparse matrix.
3: Optionally + domain info: initialize domain sparsity matrix S(G) as a sparse matrix (see Appendix F).
4: for each batch (X, Y) do
5:   Compute loss L = ∑_i D_φ(y_i, f_θ(x_i; TA))
6:   A, T, θ = UPDATE(∇L, η).
7:   T = max{(T − ηλ_2) ⊙ S(G) + T ⊙ (1 − S(G)), 0}.
8: end for
9: return anchor embeddings A and transformations T.
1) ANCHOR: Learn embeddings A ∈ R^{|A|×d} of a small set of anchor objects A = {a_1, ..., a_{|A|}}, |A| << |V|, that are representative of all discrete objects.
2) TRANSFORM: Learn a sparse transformation T from A to E. Each of the discrete objects is induced by some transformation from (a few) anchor objects. To ensure sparsity, we want nnz(T) << |V| × d.
A and T are trained end-to-end for task-specific representations. To enforce sparsity, we use an ℓ_1 penalty on T and constrain its domain to be non-negative to reduce redundancy in the transformations (positive and negative entries canceling out).
min_{T ≥ 0, A, θ} ∑_i D_φ(y_i, f_θ(x_i; TA)) + λ_2 ‖T‖_1,    (1)
where D_φ is a suitable Bregman divergence between predicted and true labels, and ‖T‖_1 denotes the sum of absolute values of the entries of T. Most deep learning frameworks directly use subgradient descent to solve eq (1), but unfortunately such an approach will not yield exact sparsity. Instead, we perform optimization by proximal gradient descent (rather than approximate subgradient methods, which have poorer convergence around non-smooth regions, e.g., sparse regions) to ensure exact zero entries in T:
A^{t+1}, T^{t+1}, θ^{t+1} = UPDATE(∇ ∑_i D_φ(y_i, f_θ(x_i; T^t A^t)), η),    (2)
T^{t+1} = PROX_{ηλ_2}(T^{t+1}) = max(T^{t+1} − ηλ_2, 0),    (3)
where η is the learning rate, and UPDATE is a gradient update rule (e.g., SGD (Lecun et al., 1998), ADAM (Kingma & Ba, 2015), YOGI (Zaheer et al., 2018)). PROX_{ηλ_2} is a composition of two proximal operators: 1) soft-thresholding (Beck & Teboulle, 2009) at ηλ_2, which is the proximal operator of λ_2‖T‖_1, and 2) max(⋅, 0), due to the non-negative domain for T. We implement this proximal operator on top of the YOGI optimizer for our experiments.
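A minimal sketch of the update, assuming T is kept dense for readability (the paper stores T sparsely and folds the operator into the YOGI optimizer; the helper names `bregman_div` and `model` are illustrative placeholders):

```python
# Sketch of Eqs. (2)-(3): a standard gradient step on all parameters,
# followed by the proximal step max(T - eta*lambda2, 0) on T, which both
# soft-thresholds small entries to exact zeros and keeps T non-negative.
import torch

def proximal_step(T: torch.Tensor, lr: float, lam2: float) -> None:
    with torch.no_grad():
        T.copy_(torch.clamp(T - lr * lam2, min=0.0))

# usage inside the training loop:
#   loss = bregman_div(y, model(x, T @ A))   # prediction loss D_phi
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
#   proximal_step(T, lr=optimizer.param_groups[0]["lr"], lam2=1e-4)
```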
Together, equations (2) and (3) give us an iterative process for end-to-end learning of A and T, along with θ, for specific tasks (Algorithm 1). T is implemented as a sparse matrix by only storing its non-zero entries and indices. Since nnz(T) << |V| × d, this makes storage of T extremely efficient as compared to traditional approaches of computing the entire |V| × d embedding matrix. We also provide implementation tips to further speed up training and ways to incorporate ANT with existing speedup techniques like softmax sampling (Mikolov et al., 2013) or noise-contrastive estimation (Mnih & Teh, 2012) in Appendix H. After training, we only store |A| × d + nnz(T) << |V| × d entries that define the complete embedding matrix, thereby using fewer parameters than the traditional |V| × d matrix. General-purpose matrix compression techniques such as hashing (Qi et al., 2017), pruning (Dong et al., 2017), and quantizing (Han et al., 2016) are compatible with our method: the matrix A and the non-zero entries of T can be further compressed and stored.
We first discuss practical methods for anchor selection (§3.1). In Appendix F we describe several ways to incorporate domain knowledge into the anchor selection and transform process. We also provide a statistical interpretation of ANT as a sparsity promoting generative process using an IBP prior and derive approximate inference based on SVA (§3.2). This gives rise to a nonparametric version of ANT that automatically learns the optimal number of anchors.
3.1 ANCHOR: SELECTING THE ANCHORS A
Inspired by research integrating initialization strategies based on clustering (Teh et al., 2007) and Coresets (Bachem et al., 2015) with Bayesian nonparametrics, we describe several practical methods to select anchor objects that are most representative of all objects (refer to Appendix D for a comparison of initialization strategies).
Frequency and TF-IDF: For tasks where frequency or TF-IDF (Ramos, 1999) are useful for prediction, the objects can simply be sorted by frequency and the most common objects selected as the anchor points. While this might make sense for tasks such as language modeling (Luong et al., 2015; Chen et al., 2016b), choosing the most frequent objects might not cover rare objects that are not well represented by common anchors.
[Figure 2: Anchor initialization by frequency followed by clustering. Anchors are first initialized with frequent words (e.g., “the”, “good”) in a pretrained space (e.g., GloVe/co-occurrence); successive k-means++ clustering steps then select the remaining anchors to span the space.]
Clustering: To ensure that all objects are close to some anchor, we use k-means++ initialization (Arthur & Vassilvitskii, 2007). Given a feature space representative of the relationships between objects, such as GloVe (Pennington et al., 2014) for words or a co-occurrence matrix (Haralick et al., 1973) for more general objects, k-means++ initialization picks cluster centers that span the entire space. This can augment other strategies, such as initializing anchors using frequency followed by clustering to select the remaining anchors (see Figure 2; a code sketch of the seeding procedure follows below).
Random basis vectors: Initialize A to a set of random basis vectors. This simple yet powerful method captures the case where we have less knowledge about the objects (i.e., without access to any pretrained representation/similarity space).
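The k-means++ seeding itself is simple to sketch (a minimal sketch; `X` is a (|V|, d) array of pretrained features such as GloVe vectors, and `seed_ids` are optional frequency-based anchors, both our notation):

```python
# Sketch of k-means++ anchor selection: after optional frequency-based
# seeds, each new anchor is sampled with probability proportional to its
# squared distance from the nearest anchor chosen so far.
import numpy as np

def kmeanspp_anchors(X, num_anchors, seed_ids=(), seed=0):
    rng = np.random.default_rng(seed)
    chosen = list(seed_ids) or [int(rng.integers(len(X)))]
    # squared distance of every object to its nearest chosen anchor
    d2 = np.min([((X - X[c]) ** 2).sum(1) for c in chosen], axis=0)
    while len(chosen) < num_anchors:
        c = int(rng.choice(len(X), p=d2 / d2.sum()))
        chosen.append(c)
        d2 = np.minimum(d2, ((X - X[c]) ** 2).sum(1))
    return np.array(chosen)
```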
3.2 STATISTICAL INTERPRETATION AS A BAYESIAN NONPARAMETRIC PRIOR
To provide a statistical interpretation of ANT, we first analyze a generative process for discrete representations that is consistent with our algorithm. Given a set of anchors A = {a_1, ..., a_{|A|}}, we use a binary latent variable z_ik ∈ {0, 1} to indicate whether object i possesses anchor k and a positive latent variable w_ik ∈ R_{≥0} to denote the weight that anchor k contributes towards object i. Therefore, the representation e_i is given by e_i = ∑_k w_ik z_ik a_k. Ideally, we want the vector z_i to be sparse for efficient learning and storage. More formally, suppose there are K := |A| anchors; then:
• Z ∈ R^{|V|×K} ∼ IBP(a, b); A ∈ R^{K×d} ∼ P(A) = N(0, 1); W ∈ R^{|V|×K} ∼ P(W) = Exp(1)
• for i = 1, ..., N:
  - ŷ_i = f_θ(x_i; (Z ∘ W)A)
  - y_i ∼ p(y_i | x_i; Z, W, A) = exp{−D_φ(y_i, ŷ_i)} b_φ(y_i)
In this generative process, the selection matrix Z follows a two-parameter Indian Buffet Process (IBP; Griffiths & Ghahramani (2005)) prior (Ghahramani et al., 2007). Not only does this BNP prior allow for a potentially infinite number of anchors, but it also encourages each object to select only a small subset of anchors, resulting in a sparse z_i (see Appendix A for details). We place a standard Gaussian prior on the continuous anchor embeddings a_k and an exponential prior on the weights W, which give the actual non-negative transformation weights for the non-zero entries defined in Z. E = (Z ∘ W)A is the final embedding learnt by our model, representing a d-dimensional continuous representation {e_1, ..., e_{|V|}} for each discrete object, where row i is the representation e_i of object i. Finally, a neural model f_θ with parameters θ is used to predict y_i given the embedded representations, i.e., ŷ_i = f_θ(x_i; (Z ∘ W)A) = f_θ((Z ∘ W)A[x_i]).
Likelihood Model/Loss: We assume that the final emission model y_i | ŷ_i belongs to the exponential family. Since exponential family distributions have a corresponding Bregman divergence (Banerjee et al. (2005); see Appendix C for examples), we choose D_φ(y_i, ŷ_i) as the corresponding Bregman divergence between predicted and true labels. Appropriate choices of D_φ recover the cross-entropy and MSE losses. b_φ(y_i) does not depend on any learnable parameter or variable and can be ignored.
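For concreteness, two standard members of this exponential-family/Bregman correspondence (these are textbook facts from Banerjee et al. (2005), not results specific to this paper) are:

```latex
% Gaussian emission  => squared (MSE) loss:
\phi(y) = \tfrac{1}{2}\|y\|^2
  \;\Rightarrow\; D_\phi(y, \hat{y}) = \tfrac{1}{2}\|y - \hat{y}\|^2
% Categorical emission (y, \hat{y} on the simplex) => KL / cross-entropy loss:
\phi(y) = \textstyle\sum_c y_c \log y_c
  \;\Rightarrow\; D_\phi(y, \hat{y}) = \textstyle\sum_c y_c \log \frac{y_c}{\hat{y}_c}
```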
Joint likelihood: Under the generative model as defined above, the joint likelihood is given by:
log p(Y, Z, W, A | X) ∝ ∑_i log p(y_i | x_i; Z, W, A) + log p(Z) + log p(W) + log p(A)
= ∑_i {−D_φ(y_i, f_θ(x_i; (Z ∘ W)A)) + log b_φ(y_i)} + log p(Z) + log p(W) + log p(A).
However, calculating the posterior or MAP estimate is hard, especially due to the presence of the non-linear deep network fθ. Approximate inference methods such as MCMC, variational inference, or probabilistic programming would be computationally and statistically inefficient since it would involve sampling, evaluating, or training the model multiple times. To tackle this problem, we perform approximate inference via Small Variance Asymptotics (SVA), which captures the benefits of rich latent-variable models while providing a framework for scalable optimization (Broderick et al., 2013a; Jiang et al., 2012; Roychowdhury et al., 2013).
Approximate Inference via SVA: To use SVA, we introduce a scaling variable β and shrink the variance of the emission probability by taking β → ∞. The scaled emission probability becomes
p(y_i | x_i; Z, W, A) = exp{−β D_φ(y_i, ŷ_i)} b_{βφ}(y_i).    (4)
Following Broderick et al. (2013a), we modulate the number of features in the large-β limit by choosing constants λ_1 > λ_2 > 0 and setting the IBP hyperparameters a = exp(−βλ_1) and b = exp(βλ_2). This prevents a limiting objective function that favors a trivial cluster assignment (every data point assigned to its own separate feature). Maximizing the asymptotic joint likelihood (after taking limits, i.e., lim_{β→∞} (1/β) log p(Y, Z, W, A | X)) results in the following objective function:
min_{T ≥ 0, A, θ, K} ∑_i D_φ(y_i, f_θ(x_i; TA)) + λ_2 ‖T‖_0 + (λ_1 − λ_2)K,    (5)
where we have combined the variables Z and W, with their constraints, into one variable T. The exponential prior on W results in a non-negative domain for T. Please refer to Appendix B for derivations. Note that eq (5) suggests a natural objective function for learning representations that minimize the prediction loss D_φ(y_i, f_θ(x_i; TA)) while ensuring sparsity of T, as measured by the ℓ_0-norm, and using as few anchors as possible (K). Therefore, optimizing eq (5) gives rise to a nonparametric version of ANT, which we call NBANT, that automatically learns the optimal number of anchors. To perform optimization over the number of anchors, our algorithm starts with a small |A| = 10 and, at every epoch, either adds anchors (i.e., adding a new row to A and a new column to T) or deletes anchors to minimize eq (5), depending on the trend of the objective evaluated on the validation set. We outline the exact algorithm in Appendix G along with more implementation details; a sketch of the schedule follows below.
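A minimal sketch of this schedule (all helper names, `train_epoch`, `val_loss`, `grow`, `shrink`, `nnz_T`, are hypothetical placeholders; the real procedure is the algorithm in Appendix G):

```python
# Sketch of the NBANT anchor schedule: after each epoch, evaluate the
# objective of Eq. (5) on validation data and grow or shrink |A| in the
# direction that decreases it.
def nbant_schedule(model, lam1, lam2, num_epochs):
    best, direction = float("inf"), +1   # start small (e.g., |A| = 10) and grow
    for _ in range(num_epochs):
        train_epoch(model)
        # Eq. (5): prediction loss + lam2 * nnz(T) + (lam1 - lam2) * K
        obj = val_loss(model) + lam2 * model.nnz_T() \
              + (lam1 - lam2) * model.num_anchors
        if obj > best:
            direction = -direction       # objective worsened: reverse course
        best = min(best, obj)
        grow(model) if direction > 0 else shrink(model)
    return model
```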
Analogously, we can derive the finite-case objective function for a fixed number of anchors K:
min_{T ≥ 0, A, θ} ∑_i D_φ(y_i, f_θ(x_i; TA)) + λ_2 ‖T‖_0,    (6)
which, together with an ℓ_1 penalty on T as a convex relaxation of the ℓ_0 penalty, recovers the objective function in eq (1). The solution of this finite version along with K yields the Pareto front. Different values of λ_1 in eq (5) can be used for model selection along the front, as elucidated in Appendix L.
4 EXPERIMENTS
To evaluate ANT, we experiment on text classification, language modeling, and movie recommendation tasks. Experimental details are in Appendix J and full results are in Appendix K.
4.1 TEXT CLASSIFICATION
Setup: We follow the setting in Chen et al. (2019) with four datasets: AG-News (V = 62K) (Zhang et al., 2015), DBPedia (V = 563K) (Lehmann et al., 2015), Sogou-News (V = 254K) (Zhang et al., 2015), and Yelp-review (V = 253K) (Zhang et al., 2015). We use a CNN for classification (Kim, 2014). ANT is used to replace the input embedding, and domain knowledge is derived from WordNet and co-occurrence in the training set. We record test accuracy and the number of parameters used in the embedding only. For ANT, the number of parameters is computed as |A| × d + nnz(T).
Baselines: On top of the CNN, we compare to the following compression approaches. Vocabulary selection methods: 1) FREQUENCY, where only embeddings for the most frequent words are learnt (Chen et al., 2016b; Luong et al., 2015); 2) TF-IDF, which only learns embeddings for words with high TF-IDF score (Ramos, 1999); 3) GL (group lasso), which aims to find underlying sparse structures in the embedding matrix via row-wise ℓ_2 regularization (Liu et al., 2015; Park et al., 2016; Wen et al., 2016); 4) VVD (variational vocabulary dropout), which performs variational dropout for vocabulary selection (Chen et al., 2019). We also compare to 5) SPARSEVD (sparse variational dropout), which performs variational dropout on all parameters (Chirkova et al., 2018); 6) SPARSEVD-VOC, which uses multiplicative weights for vocabulary sparsification (Chirkova et al., 2018); and 7) a SPARSE CODE model that learns a sparse code to reconstruct pretrained word representations (Chen et al., 2016b). All CNN architectures are the same for all baselines, with details in Appendix J.1.
Results on AG-News are in Table 1 and results for the other datasets are in Appendix K.1. We observe that restricting T ≥ 0 using an exponential prior is important in reducing redundancy in the entries. Domain knowledge from WordNet and co-occurrence also succeeded in reducing the total number of (non-zero) embedding parameters to 0.40M, a compression of 40×, outperforming the existing approaches.
4.2 LANGUAGE MODELING
Setup: We perform experiments on word-level Penn Treebank (PTB) (V = 10K) (Marcus et al., 1993) and WikiText-103 (V = 267K) (Merity et al., 2017) with LSTM (Hochreiter & Schmidhuber, 1997) and AWD-LSTM (Merity et al., 2018). We use ANT as the input embedding tied to the output embedding. Domain knowledge is derived from WordNet and co-occurrence on the training set. We record the test perplexity and the number of (non-zero) embedding parameters.
Baselines: We compare to SPARSEVD and SPARSEVD-VOC, as well as low-rank (LR) and tensor-train (TT) model compression techniques (Grachev et al., 2019). Note that applying variational vocabulary selection to language modeling with tied weights is non-trivial, since one is unable to predict next words when words are dynamically dropped out. We also compare against methods that compress the trained embedding matrix as a post-processing step before evaluation: POST-SPARSE HASH (post-processing using sparse hashing) (Guo et al., 2017) and POST-SPARSE HASH + k-SVD (Awasthi & Vijayaraghavan, 2018; Guo et al., 2017), which uses k-SVD (the basis of dictionary learning/sparse coding) (Aharon et al., 2006) to solve for a sparse embedding matrix instead of the ad-hoc projection in Guo et al. (2017). Comparing to these post-processing methods demonstrates that end-to-end training of sparse embeddings is superior to post-hoc compression.
Results: On PTB (Table 2), we improve both perplexity and compression as compared to previously proposed methods. We observe that sparsity is important: baseline methods that only perform lower-rank compression with dense factors (e.g., LR LSTM) tend to suffer in performance and use many parameters, while ANT retains performance with much better compression. ANT also outperforms post-processing methods (POST-SPARSE HASH); we hypothesize this is because these post-processing methods accumulate errors in both language modeling and embedding reconstruction. Using an anchor size of 500/1,000 reaches a good perplexity/compression trade-off: we reach within 2 points of perplexity with a 5× reduction in parameters and within 7 points of perplexity with a 10× reduction. Using AWD-LSTM, ANT with 1,000 dynamic basis vectors is able to compress parameters by 10× while achieving 72.0 perplexity. Incorporating domain knowledge allows us to further compress the parameters by another 10× and achieve 70.0 perplexity, resulting in 100× total compression.
On WikiText-103, we train using sampled softmax (Bengio & Senecal, 2008) (due to the large vocabulary) for 500,000 steps. To the best of our knowledge, we could not find literature on compressing language models on WikiText-103. We tried general compression techniques like low-rank tensor and tensor-train factorization (Grachev et al., 2019), but these did not scale. As an alternative, we consider a HASH EMBED baseline that retains the frequent k words and hashes the remaining words into 1,000 OOV buckets (Svenstrup et al., 2017). We vary k ∈ {1×10^5, 5×10^4, 1×10^4} (details in Appendix J.3). From Table 2 (bottom), we reach within 3 points of perplexity with ∼16× reduction in parameters and within 13 points with ∼80× reduction, outperforming the frequency and hashing baselines. We observe that ANT's improvement over post-compression methods (POST-SPARSE HASH) is larger on WikiText than on PTB, suggesting that ANT is particularly suitable for large vocabularies.
4.3 RECOMMENDER SYSTEMS
Setup: We perform experiments on both movie and product recommendation tasks. For movie recommendation, we follow Ginart et al. (2019) and experiment on MovieLens 25M (Harper & Konstan, 2015) with 126K users and 59K movies. We also present results for MovieLens 1M in Appendix K.3. For product recommendation, we show that ANT scales to Amazon Product Reviews (Ni et al., 2019), the largest existing dataset for recommender systems, with 233M reviews spanning 43.5M users and 15.2M products. Following Wan et al. (2020), we ensure that the users and products in the test set have appeared in the training data for generalization.
Baselines: We compare to a baseline Matrix Factorization (MF) model (Koren et al., 2009) with full embedding matrices for movies and users, and to Mixed Dimension (MIXDIM) embeddings (Ginart et al., 2019), a compression technique that assigns different dimensions to different users/items based on popularity. We also compare to SPARSE CBOW (Sun et al., 2016), which learns a sparse E by placing an ℓ1 penalty over all entries of E and optimizing using online subgradient descent, and to SLIMMING (Liu et al., 2017), which performs subgradient descent before pruning small weights by setting them to 0. Such methods learn embeddings for objects independently, without statistical strength sharing among related objects. We also test NBANT using the algorithm derived from the Bayesian nonparametric interpretation of ANT.
Results: From Table 3, ANT outperforms standard matrix factorization and dense mixed dimensional embeddings for performance and compression. NBANT is also able to automatically select an optimal
number of anchors (6/8) to achieve solutions along the performance-compression Pareto front. In Figure 3, we plot the value of eq (5) across values of |A| after a comprehensive hyperparameter sweep on ANT across 1,000 settings. In comparison, NBANT optimizes |A| directly and reaches a good value of eq (5) in a single run, without having to tune |A| as a hyperparameter, thereby achieving the best balance between performance and compression. Please refer to Appendix K.3 for more results and discussion on NBANT.
For product recommendation, we first experiment on a commonly used subset of the data, Amazon Electronics (with 9.84M users and 0.76M products), to ensure that our results match published baselines (Wan et al., 2020), before scaling our experiment to the entire dataset. From Table 4, we find that ANT compresses embeddings by 25× on Amazon Electronics while maintaining performance, and 10× on the full Amazon reviews dataset.
Online NBANT: Since NBANT automatically grows/contracts |A| during training, we can further extend NBANT to an online version that sees a stream of batches without revisiting previous ones (Bryant & Sudderth, 2012). We treat each batch as a new set of incoming data and train on that batch until convergence, modifying |A| as in Algorithm 2 before moving on to the next batch. In this significantly more challenging online setting, NBANT is still able to learn well and achieves an MSE of 0.875 with 1.25M non-zero parameters. Interestingly, this online version of NBANT settled on a similar range of final user (8) and item (8) anchors as the non-online version (see Table 3), which confirms the robustness of NBANT in finding relevant anchors automatically. In Appendix K.3 we discuss more observations about online NBANT, including ways of learning |A|.
4.4 DISCUSSION AND OBSERVATIONS
Here we list some general observations regarding the importance of various design decisions in ANT:
1) Sparsity is important: Baselines that compress with dense factors (e.g., LR, TT) suffer in performance despite using many parameters, whereas ANT retains performance with better compression.
2) Choice of A: We provide results on more clustering initializations in Appendix D. In general, performance is robust w.r.t. choice of A. While frequency and clustering work better, using a dynamic basis also performs well. Thus, it is beneficial to use any extra information about the discrete objects (e.g., domain knowledge or having a good representation space like GloVe to perform clustering).
Table 5: Word association results after training language models with ANT on the word-level PTB dataset. Left: the non-anchor words most induced by a given anchor word. Right: the largest (non-anchor, anchor) entries learnt in T after sparse ℓ1-regularization. Bottom: movie clusters obtained by sorting movies with the highest coefficients for each anchor embedding.

Anchor word → most induced non-anchor words:
year → august, night, week, month, monday, summer, spring
stock → bonds, certificates, debt, notes, securities, mortgages

Largest (non-anchor, anchor) word pairs: (trading, brokerage), (stock, junk), (year, summer), (york, angeles), (year, month), (government, administration)

Movie cluster → genre:
God's Not Dead, Sex and the City, Sex and the City 2, The Twilight Saga: Breaking Dawn - Part 1, The Princess Diaries 2: Royal Engagement, The Last Song, Legally Blonde 2: Red, White & Blonde, The Twilight Saga: Eclipse, Maid in Manhattan, The Twilight Saga: Breaking Dawn - Part 2 → romance, comedy
Nostalghia, Last Days, Chimes at Midnight, Lessons of Darkness, Sonatine, Band of Outsiders, Gerry, Cyclo, Mishima: A Life in Four Chapters, George Washington → drama, indie
3) Anchors and sparse transformations learned: We visualize the important transformations (large entries) learned between anchors and non-anchors in Table 5. On the left, we show the most associated non-anchors for a given anchor word and find that the induced non-anchors are highly plausible: stock accurately contributes to bonds, certificates, securities, and so on. On the right, we show the largest (non-anchor, anchor) pairs learned, where we find related concepts such as (billion, trillion) and (government, administration). On MovieLens, for each anchor, we sort the movies according to the magnitude of their transformation coefficients, which automatically discovers movie clusters based on underlying genres. We obtain a genre purity ratio of 61.7% by comparing the automatically discovered movie clusters with the true genre tags provided in MovieLens.
4) Zero transformations learned: For MovieLens, we find that ANT assigns 2,673 out of 59,047 movies an entirely zero row, of which 84% had only 1 rating (i.e., very rare movies). Therefore, compression automatically discovers very rare objects (those with a single labeled point). On WikiText-103, rare words (e.g., Anarky, Perl, Voorhis, Gaudí, Lat, Bottomley, Nescopeck) are also automatically assigned zero rows when performing high compression (54.2 ppl with 0.4M params). Certain rare words that might be predictive, however, are assigned non-zero rows in T, such as: sociologists, deadlines, indestructible, causeways, outsourced, glacially, heartening, unchallenging, roughest.
5) Choice of λ1, λ2: Tuning λ1 allows us to perform model selection by controlling the trade-off between ∣A∣ (model complexity) and performance. By applying eq (5) on our trained models in Table 2, choosing a small λ1 = 2 × 10−5 prefers more anchors (∣A∣ = 1,000) and better performance (ppl = 79.4), while a larger λ1 = 1 × 10−1 selects fewer anchors (∣A∣ = 100) with a compromise in performance (ppl = 106.6). Tuning λ2 allows us to control the tradeoff between sparsity and performance (see details in Appendix L).
6) Convergence: In Figure 4, we plot the empirical convergence of the validation loss across epochs. ANT converges as fast as the (non-sparse) MF baseline, and faster than the compression baselines MIXDIM (Ginart et al., 2019) and SPARSE CBOW (Sun et al., 2016). ANT also converges to the best validation loss.
7) Scalability: In addition to fast convergence, ANT also works effectively on large datasets such as MovieLens 25M (162K users, 59K movies, 25M examples) and WikiText-103 (267K unique words, 103M tokens). For each epoch on MovieLens 25M, standard MF takes 165s on a GTX 980 Ti GPU while ANT takes 176s for |A| = 5 and 180s for |A| = 20. ANT also scales to the largest recommendation dataset, Amazon Reviews, with 25M users and 9M products.
5 CONCLUSION
This paper presented ANCHOR & TRANSFORM to learn sparse embeddings of large vocabularies using a small set of anchor embeddings and a sparse transformation from anchors to all objects. We also showed a statistical interpretation via integrating IBP priors with neural representation learning. Asymptotic analysis of the likelihood using SVA yields an extension that automatically learns the optimal number of anchors. On text classification, language modeling, and recommender systems, ANT outperforms existing approaches with respect to accuracy and sparsity.
B DERIVATION OF OBJECTIVE FUNCTION VIA SVA
In this section we derive our objective function using Small Variance Asymptotics (SVA) (Jiang et al., 2012). Recall that the generative process in our model is given by:
• Z ∈ R^{|V|×K} ∼ IBP(a, b)
• A ∈ R^{K×d} ∼ P(A) = N(0, 1)
• W ∈ R^{|V|×K} ∼ P(W) = Exponential(1)
• for i = 1, ..., N:
  – ŷ_i = f_θ(x_i; (Z ○ W)A)
  – y_i ∼ p(y_i | x_i; Z, W, A) = exp{−D_φ(y_i, ŷ_i)} b_φ(y_i)
The joint log-likelihood under our generative model above is therefore:

log p(Y, Z, W, A | X) ∝ ∑_i log p(y_i | x_i, Z, W, A) + log p(Z) + log p(W) + log p(A)
= ∑_i {−D_φ(y_i, f_θ(x_i, (Z ○ W)A)) + log b_φ(y_i)} + log p(Z) + log p(W) + log p(A).   (10)
To use SVA, an approximate objective function for finding point estimates is obtained by taking the limit of the emission probability variances down to zero. We begin by introducing a scaling variable β and shrinking the variance of the emission probability to 0 by taking β → ∞. The scaled emission probability becomes

p(y_i | x_i, Z, W, A) = exp{−β D_φ(y_i, ŷ_i)} b_{βφ}(y_i).   (11)

Following Broderick et al. (2013a), we modulate the number of features in the large-β limit by choosing constants λ_1 > λ_2 > 0 and setting the IBP hyperparameters with β as follows:

a = exp(−βλ_1),   b = exp(βλ_2).   (12)

This prevents a limiting objective function that favors a trivial cluster assignment (every data point assigned to its own separate feature).
We now take the limit of the log-likelihood term by term:

lim_{β→∞} (1/β) log p(Y, A, W, Z | X)   (13)
= ∑_i lim_{β→∞} (1/β) log p(y_i | x_i, Z, W, A) + lim_{β→∞} (1/β) log p(Z) + lim_{β→∞} (1/β) log p(W) + lim_{β→∞} (1/β) log p(A).   (14)

• lim_{β→∞} (1/β) log p(y_i | x_i, Z, W, A) = lim_{β→∞} (1/β)(−β D_φ(y_i, ŷ_i) + log b_{βφ}(y_i)) = −D_φ(y_i, ŷ_i) + O(1).
• lim_{β→∞} (1/β) log p(Z) = −λ_2∥Z∥_0 − (λ_1 − λ_2)K; see the derivation below.
• lim_{β→∞} (1/β) log p(W) = 0 if W ≥ 0, and −∞ otherwise.
• lim_{β→∞} (1/β) log p(A) = 0, since log p(A) = O(1).
For convenience, we re-write the limit of the IBP prior as three parts:

lim_{β→∞} (1/β) log p(Z)
= lim_{β→∞} (1/β) log [ (ab)^K / ∏_{h=1}^{2^{|V|}−1} K_h! ]   (a)
+ lim_{β→∞} (1/β) log exp(−ab H_{|V|})   (b)
+ ∑_{k=1}^K lim_{β→∞} (1/β) log [ Γ(m_k) Γ(|V| − m_k + b) / Γ(|V| + b) ]   (c)   (15)

where H_{|V|} = ∑_{j=1}^{|V|} 1/(b + j − 1).
For part (a):

lim_{β→∞} (1/β) log [ (ab)^K / ∏_{h=1}^{2^{|V|}−1} K_h! ]
= lim_{β→∞} (1/β) log [ exp(−β(λ_1 − λ_2)K) / ∏_{h=1}^{2^{|V|}−1} K_h! ]
= lim_{β→∞} (1/β) × (−β(λ_1 − λ_2)K) − lim_{β→∞} (1/β) × O(1)
= −(λ_1 − λ_2)K.   (16)
For part (b):

lim_{β→∞} (1/β) log exp(−ab H_{|V|}) = lim_{β→∞} (1/β) × (−ab H_{|V|})
= lim_{β→∞} [ −exp(−β(λ_1 − λ_2))/β × ∑_{j=1}^{|V|} 1/(exp(βλ_2) + j − 1) ]
= 0.   (17)
For part (c), using Γ(|V| + b)/Γ(|V| − m_k + b) = ∏_{j=1}^{m_k} (|V| − j + b):

lim_{β→∞} (1/β) log [ Γ(m_k) Γ(|V| − m_k + b) / Γ(|V| + b) ]
= lim_{β→∞} (1/β) log Γ(m_k) − lim_{β→∞} (1/β) ∑_{j=1}^{m_k} log(|V| − j + b)
= 0 − ∑_{j=1}^{m_k} lim_{β→∞} (1/β) log(|V| − j + exp(βλ_2))
= −∑_{j=1}^{m_k} λ_2
= −λ_2 m_k.   (18)
Recall that m_k is the number of objects that use anchor k, i.e., the number of non-zero entries in the k-th column of Z. Summing over all k gives the total number of non-zero entries in Z, which is exactly the ℓ0 norm of Z, i.e., ∥Z∥_0.
Therefore, the MAP estimate under SVA, given by

max lim_{β→∞} (1/β) log p(Y, A, W, Z | X),   (19)

is equivalent to optimizing the following objective function:

max_{Z ∈ {0,1}^{|V|×K}, W ≥ 0, A, θ, K} ∑_i −D_φ(y_i, f_θ(x_i, (Z ○ W)A)) − λ_2∥Z∥_0 − (λ_1 − λ_2)K,   (20)

where the exponential prior on W results in a limiting domain for W that is non-negative. Note that we can combine the optimization variables Z and W, together with their constraints, into a single variable T = Z ○ W ≥ 0. We can also switch from a maximization to a minimization problem by absorbing the negative sign. Finally, we arrive at the desired objective:

min_{T ≥ 0, A, θ, K} ∑_i D_φ(y_i, f_θ(x_i, TA)) + λ_2∥T∥_0 + (λ_1 − λ_2)K.   (21)
C EXPONENTIAL FAMILY DISTRIBUTIONS AS BREGMAN DIVERGENCES
In this section we provide some results that relate exponential family distributions and Bregman divergences. As a result, we can relate the likelihood models from Sec. 3.2 to appropriate Bregman divergences: a probabilistic observation model can be translated into a loss function minimizing the corresponding Bregman divergence, which is more amenable to deep network training using gradient-based methods. We begin by defining the Bregman divergence below and stating the relationship formally in Theorem 1.

Definition 1. (Bregman, 1967) Let φ: S → R, S = dom(φ), be a strictly convex function defined on a convex set S ⊂ R^d such that φ is differentiable on ri(S), assumed to be non-empty. The Bregman divergence D_φ: S × ri(S) → [0, ∞) is defined as

D_φ(x, y) = φ(x) − φ(y) − ⟨x − y, ∇φ(y)⟩,   (22)

where ∇φ(y) represents the gradient vector of φ evaluated at y.

Theorem 1. (Banerjee et al., 2005) There is a bijection between regular exponential families and regular Bregman divergences. In particular, any exponential family distribution p(x|θ) = p_0(x) exp(⟨x, θ⟩ − g(θ)) can be written as p(x|µ) = exp(−D_φ(x, µ)) b_φ(x), where φ is the Legendre dual of the log-partition function g(θ) and µ = ∇_θ g(θ).
From Theorem 1, we can see that maximizing the log-likelihood log p(x|θ) is the same as minimizing the Bregman divergence D_φ(x, µ). Note that we can ignore b_φ(x), as it depends only on the observed data and not on any parameters. We now illustrate some common examples of exponential families (like Gaussian and categorical), derive their corresponding Bregman divergences, and connect them to the usual loss functions used in deep networks (like MSE and cross-entropy).
Example 1: Gaussian distribution. (Banerjee et al., 2005) We start with unit-variance spherical Gaussian distributions with mean µ, which have densities of the form:

p(x; µ) = (1/√((2π)^d)) exp(−(1/2)∥x − µ∥²₂).   (23)

Using the log-partition function of the Gaussian distribution, we can calculate that φ(x) = (1/2)∥x∥²₂, which yields the Bregman divergence

D_φ(x, µ) = φ(x) − φ(µ) − ⟨x − µ, ∇φ(µ)⟩   (24)
= (1/2)∥x∥²₂ − (1/2)∥µ∥²₂ − ⟨x − µ, µ⟩   (25)
= (1/2)∥x − µ∥²₂   (mean squared error).   (26)

Thus, D_φ(x, µ) together with the constant

b_φ(x) = 1/√((2π)^d)   (27)

recovers the Gaussian density p(x) = exp(−D_φ(x, µ)) b_φ(x). Therefore, when we assume that the labels have a Gaussian emission model, the corresponding Bregman divergence D_φ(x, µ) = (1/2)∥x − µ∥²₂ recovers the squared loss commonly used for regression.
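As a quick sanity check of this correspondence, the following minimal sketch (ours, not from the paper's code release) computes a Bregman divergence for an arbitrary convex φ via autograd and verifies that φ(x) = (1/2)∥x∥²₂ recovers the squared error:

```python
import torch

def bregman_divergence(phi, x, y):
    """D_phi(x, y) = phi(x) - phi(y) - <x - y, grad phi(y)>, via autograd."""
    y = y.detach().requires_grad_(True)
    phi_y = phi(y)
    (grad_y,) = torch.autograd.grad(phi_y, y)
    return phi(x) - phi_y.detach() - torch.dot(x - y.detach(), grad_y)

phi = lambda v: 0.5 * v.pow(2).sum()      # phi for the unit-variance Gaussian
x = torch.tensor([1.0, 2.0])
mu = torch.tensor([0.5, 1.5])
print(bregman_divergence(phi, x, mu))     # tensor(0.2500)
print(0.5 * (x - mu).pow(2).sum())        # tensor(0.2500), the squared error
```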
Example 2: Multinomial distribution. (Banerjee et al., 2005) Another widely used exponential family is the family of multinomial distributions:

p(x, q) = (N! / ∏_{j=1}^d x_j!) ∏_{j=1}^d q_j^{x_j},   (28)

where x_j ∈ Z₊ are frequencies of events with ∑_{j=1}^d x_j = N, and q_j ≥ 0 are probabilities of events with ∑_{j=1}^d q_j = 1. The multinomial density can be expressed as the density of an exponential distribution in x = {x_j}_{j=1}^{d−1} with natural parameter θ = (log(q_j/q_d))_{j=1}^{d−1}, cumulant function g(θ) = −N log q_d, and expectation parameter µ = ∇g(θ) = [N q_j]_{j=1}^{d−1}. The Legendre dual φ of g is given by

φ(µ) = N ∑_{j=1}^d (µ_j/N) log(µ_j/N) = N ∑_{j=1}^d q_j log q_j.   (29)

As a result, the multinomial density can be expressed via the Bregman divergence

D_φ(x, µ) = ∑_{j=1}^d x_j log x_j − ∑_{j=1}^d x_j log µ_j,   (30)

where the first term is constant in µ and the second term is the cross-entropy loss, together with the constant

b_φ(x) = (∏_{j=1}^d x_j^{x_j} / N^N) (N! / ∏_{j=1}^d x_j!),   (31)

which recovers the multinomial density p(x) = exp(−D_φ(x, µ)) b_φ(x). Therefore, when the labels are generated from a multinomial distribution, the corresponding Bregman divergence D_φ(x, µ) = −∑_{j=1}^d x_j log µ_j + constant recovers the cross-entropy loss commonly used for classification.
D LEARNING THE ANCHOR EMBEDDINGS A
Here we provide several other strategies for initializing the anchor embeddings:
• Sparse lasso and variational dropout (Chen et al., 2019). Given the strong performance of sparse lasso and variational dropout as vocabulary selection methods, it would be interesting to use them to first select the important task-specific words before jointly learning their representations and their transformations to other words. However, sparse lasso and variational dropout require first training a model to completion, unlike frequency- and clustering-based vocabulary selection methods that can be performed during data preprocessing.
• Coresets involve constructing a reduced data set which can be used as a proxy for the full data set, with provable guarantees such that the same algorithm run on the coreset and the full data set gives approximately similar results (Phillips, 2016; Har-Peled & Mazumdar, 2004). Coresets can be computed approximately and quickly (Bachem et al., 2017) and can be used to initialize the set of anchors A.
In general, there is a trade-off between how quickly we can choose the anchor objects and their performance. Randomly picking anchor objects (which is equivalent to initializing the anchor embeddings with dynamic basis vectors) is similar to learning a low-rank factorization of the embedding matrix (Sedov & Yang, 2018), which works well in general but can be improved upon for task-specific applications or with domain knowledge. Stronger vocabulary selection methods like variational dropout and group lasso perform better but take significantly longer to learn. We found that intermediate methods such as frequency and clustering, combined with WordNet/co-occurrence information, work well while keeping the preprocessing and training stages relatively quick.
In Appendix K we provide more results for different initialization strategies, including those based on clustering. In general, performance is robust with respect to the choice of A among the strategies considered (i.e., random, frequency, and clustering). While frequency and clustering work better, using a set of dynamic basis embeddings still gives strong performance, especially when combined with domain knowledge from WordNet and co-occurrence statistics. This implies that when the user has more information about the discrete objects (e.g., a good representation space in which to perform clustering), they should use it. However, for a completely new set of discrete objects, simply using low-rank basis embeddings with sparsity also works well.
E TRANSFORM: LEARNING A SPARSE T
In addition to a simple sparse linear transformation, we describe some extensions that improve the sparsity and expressivity of the learned representations.

Reducing redundancy in representations: To further reduce redundancy in our sparse representations, we perform orthogonal regularization of the dynamic basis vectors A by adding the loss term L(A) = ∑_{i≠j} |a_i^⊺ a_j| to the loss function in eq (1). This ensures that different basis vectors a_i and a_j are orthogonal, rather than linear combinations of one another, which would lead to redundancies across different learnt entries in T.

Mixture of anchors: In general, different initialization strategies may bring different advantages. For example, using a mixture of random basis vectors has been shown to help model multi-sense embeddings (Athiwaratkun et al., 2018; Nguyen et al., 2017). One can define a set of M anchor embeddings A_1, ..., A_M, each initialized by a different strategy and of possibly different sizes.

Nonlinear mixture of transformations: To complement learning multiple sets of anchor embeddings A_1, ..., A_M, the straightforward extension of the TRANSFORM step would be to learn a separate linear transformation for each anchor embedding and sum the results: E = ∑_{m=1}^M T_m A_m. However, the expressive power of this linear combination is equivalent to one set of anchor embeddings obtained by concatenating A_1, ..., A_M with a single linear transformation. To truly exhibit the advantage of multiple anchors, we transform and combine them in a nonlinear fashion, e.g., E = ∑_{m=1}^M softmax(T_m) A_m (softmax over the rows of T_m, Figure 5). Different transformations can be learned for different initializations of anchors. This is connected to the multi-head attention mechanism in the Transformer (Vaswani et al., 2017), where softmax(T_m) are the softmax-activated (sparse) attention weights and A_m the values to attend over. The result is an embedding matrix formed via a nonlinear mixture of anchors (each initialized with a different strategy) and sparse transformations.
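To make the nonlinear TRANSFORM step concrete, here is a minimal dense PyTorch sketch (all sizes and variable names are illustrative; in practice each T_m is stored sparse and E is reconstructed implicitly):

```python
import torch
import torch.nn.functional as F

V, d, K, M = 10000, 200, 50, 2     # vocabulary, dim, anchors per set, number of sets
A = [torch.randn(K, d, requires_grad=True) for _ in range(M)]   # anchor sets A_1..A_M
T = [torch.randn(V, K, requires_grad=True) for _ in range(M)]   # transformations T_1..T_M

# Nonlinear mixture: row-wise softmax over each T_m, then sum the per-set mixtures.
E = sum(F.softmax(T_m, dim=-1) @ A_m for T_m, A_m in zip(T, A))  # (V, d)

def ortho_penalty(A_m):
    """Orthogonal regularizer L(A) = sum over i != j of |a_i^T a_j|."""
    G = A_m @ A_m.t()
    return (G - torch.diag(torch.diag(G))).abs().sum()

loss_reg = sum(ortho_penalty(A_m) for A_m in A)
```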
F INCORPORATING DOMAIN KNOWLEDGE
ANT also allows incorporating domain knowledge about object relationships. Suppose we are given a relationship graph G = (V, E) where each object is a vertex v ∈ V and an edge (u, v) ∈ E exists between objects u and v if they are related. Real-world instantiations of such a graph include 1) WordNet (Miller, 1995) or ConceptNet (Liu & Singh, 2004) for semantic relations between words, 2) word co-occurrence matrices (Haralick et al., 1973), and 3) movie clustering datasets (Leskovec & Krevl, 2014). From these graphs, we extract related positive pairs P = {(u, v) ∈ E} and unrelated negative pairs N = {(u, v) ∉ E}. We incorporate domain information as follows (see Figure 6 for a visual example):

Positive pairs: To incorporate a positive pair (u, v), we do not enforce sparsity on T_{u,v}. This allows ANT to freely learn the transformation between related objects u and v without being penalized for sparsity. On the other hand, transformations between negative pairs will be sparsely penalized. In other words, before computing the ℓ1-penalty, we element-wise multiply T with a domain sparsity matrix S(G), where S(G)_{u,v} = 0 for (u, v) ∈ P (entries not ℓ1-penalized) and S(G)_{u,v} = 1 otherwise (entries are ℓ1-penalized), resulting in the following modified objective:

min_{T ≥ 0, A, θ} ∑_i D_φ(y_i, f_θ(x_i, TA)) + λ_2∥T ⊙ S(G)∥_1.   (32)
Since we perform proximal GD, this is equivalent to only soft-thresholding the entries between unrelated objects, i.e., T = max{(T − ηλ_2) ⊙ S(G) + T ⊙ (1 − S(G)), 0}. Note that this strategy is applicable when anchors are selected using the frequency method.

Negative pairs: For negative pairs, we add an additional constraint that unrelated pairs should not share entries in their linear combination coefficients of the anchor embeddings. In other words, we
add the loss term

L(T, N) = ∑_{(u,v)∈N} |t_u|^⊺ |t_v|   (33)
to the loss in eq (1), where each term |t_u|^⊺ |t_v| discourages t_u and t_v from sharing similar entries. This strategy can be used regardless of the way anchors are selected. We acknowledge that there are other ways to incorporate domain knowledge into the general ANT framework; these serve only as initial examples of such methods.
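A minimal sketch of both mechanisms, assuming PyTorch with dense tensors for clarity (S and neg_pairs stand in for S(G) and N; shapes and pairs are illustrative):

```python
import torch

def masked_prox(T, S, eta, lam2):
    """Soft-threshold only entries with S == 1 (unrelated pairs),
    leave positive-pair entries untouched, then clamp at zero."""
    return torch.clamp((T - eta * lam2) * S + T * (1 - S), min=0.0)

def negative_pair_penalty(T, neg_pairs):
    """Eq (33): sum over (u, v) in N of |t_u|^T |t_v|."""
    return sum(T[u].abs().dot(T[v].abs()) for u, v in neg_pairs)

T = torch.rand(6, 4)                  # 6 objects, 4 anchors
S = torch.ones_like(T)
S[0, 1] = 0.0                         # (object 0, anchor 1) is a positive pair
T = masked_prox(T, S, eta=0.1, lam2=0.5)
penalty = negative_pair_penalty(T, [(2, 3)])
```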
G NONPARAMETRIC ANCHOR & TRANSFORM
In this section we provide details for our non-parametric extension of ANT. Recall that our full objective function derived via small variance asymptotics is given by:
min_{T ≥ 0, A, θ, K} ∑_i D_φ(y_i, f_θ(x_i; TA)) + λ_2∥T∥_0 + (λ_1 − λ_2)K,   (34)
which suggests a natural objective function in learning representations that minimize the prediction loss Dφ(yi, fθ(xi;TA)) while ensuring sparsity of T as measured by the `0-norm and using as few anchors as possible (K). Therefore, optimizing eq (5) gives rise to a nonparametric version of ANT, which we call NBANT, that automatically learns the optimal number of anchors. To perform optimization over the number of anchors, our algorithm starts with a small initial number of anchors K = ∣A∣ = 10 and either adds ∆K anchors (i.e., adding ∆K new rows to A and ∆K new sparse columns to T) or deletes ∆K anchors to minimize eq (34) at every epoch depending on the trend of the objective evaluated on the training set. We detail the full Algorithm 2, and highlight the main changes as compared to ANT.
Practically, this algorithm involves the same number of training epochs and batches per epoch as the vanilla ANT method. To enable sharing of trained anchors, we change the indices from which A and T are read, so that partially trained removed anchors are still stored in case more anchors need to be added again.
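The per-epoch grow/shrink decision can be sketched as follows (our simplified reading of Algorithm 2, with the trend reduced to comparing the last two epochs; names are illustrative):

```python
def adjust_num_anchors(obj_history, K, delta_K, K_min=1):
    """Grow or shrink the number of anchors based on the trend of eq (34),
    evaluated once per epoch on held-out data."""
    if len(obj_history) < 2:
        return K                                  # not enough history yet
    if obj_history[-1] < obj_history[-2]:         # objective decreasing: add anchors
        return K + delta_K
    if obj_history[-1] > obj_history[-2]:         # objective increasing: remove anchors
        return max(K - delta_K, K_min)
    return K                                      # plateau: keep current K
```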
H EFFICIENT LEARNING AND INFERENCE
The naive method for learning E from anchor embeddings A and the sparse transformations T still scales linearly with ∣V ∣ × d. Here we describe some tips on how to perform efficient learning and inference of the anchor embeddings A and the sparse transformations T:
• Store T as a sparse matrix by only storing its non-zero entries and indices. From our experiments, we have shown that nnz(T) << |V| × d, which makes storage efficient.
• For inference, use sparse matrix multiplication, as supported in TensorFlow and PyTorch, to compute E = TA (or its non-linear extensions). This decreases the running time from scaling with |V| × d to scaling with nnz(T). For training, however, the built-in sparse representations of most deep learning frameworks are not optimal, as they do not support changing the non-zero locations in a sparse matrix, and a priori it is not easy to find the optimal set of non-zero locations.
• During training, instead, implicitly construct E from its anchors and transformations. In fact, we can do better: instead of constructing the entire E matrix to embed a single datapoint x ∈ R^{1×|V|}, we can first index x into T, i.e., xT ∈ R^{1×|A|}, before performing a sparse matrix multiplication with A, i.e., (xT)A ∈ R^{1×d} (see the sketch after Algorithm 2 below). We are essentially taking advantage of the associative property of matrix multiplication and of the fact that xT is a simple indexing step and (xT)A is an efficient sparse matrix multiplication. To enable fast row slicing into the sparse matrix, we store the matrix in adjacency-list or COO format. (We move away from CSR, as adding/deleting a non-zero location is very expensive.) When the gradient comes back, we only update the corresponding rows in T. The gradient will be sparse as well due to the ℓ1-prox operator.
• The above trick solves the problem for tasks where the embedding is used only at the input, e.g., classification. For tasks like language modeling, where the embedding is used at the output as well, one can combine the above trick with speedup techniques like softmax sampling (Bengio & Senecal, 2008; Mikolov et al., 2013) or noise-contrastive estimation (Gutmann & Hyvarinen, 2010; Mnih & Teh, 2012), which would anyway be used for large vocabulary sizes. To elaborate, consider the case of sampled softmax (Bengio & Senecal, 2008): we normally generate the negative sample indices, so we can first index into T using the true and negative indices before performing a sparse matrix multiplication with A. This way we do not have to instantiate the entire E via an expensive matrix multiplication.
• When training is completed, only store the non-zero entries of T, or store T as a sparse matrix, to reconstruct E for inference.
• To save time when initializing the anchor embeddings and incorporating domain knowledge, precompute the necessary statistics such as frequency statistics, co-occurrence statistics, and object relation statistics. We use a small context size of 10 to measure the co-occurrence of two words to save time. When using WordNet to discover word relations, we only search for immediate relations between words instead of propagating relations across multiple steps (although this could further improve performance).
• In order to incorporate domain knowledge in the sparsity structure, we again store 1 − S(G) using sparse matrices. Recall that S(G) has an entry equal to 1 for entries representing unrelated objects that should be ℓ1-penalized, which makes S(G) quite dense since most anchor and non-anchor objects are unrelated. Hence we store 1 − S(G) instead, which has few non-zero entries, located only at the (non-anchor, anchor) entries of related objects. Element-wise multiplications are also replaced by sparse element-wise multiplications when computing T ⊙ S(G) and T ⊙ (1 − S(G)).

Algorithm 2 NBANT: Nonparametric Bayesian ANT. Differences from ANT are highlighted in red.
ANCHOR & TRANSFORM:
1: Anchor: initialize the initial K = |A| and corresponding anchor embeddings A ∈ R^{K×d}.
2: Transform: initialize T ∈ R^{|V|×K} as a sparse matrix.
3: for each epoch do
4:   for each batch (X, Y) do
5:     Compute loss L = ∑_i D_φ(y_i, f_θ(x_i; TA))
6:     A, T, θ = UPDATE(∇L, η).
7:     T = max{T − ηλ_2, 0}.
8:   end for
9:   Compute eq (34) using the current values of K, A, T on the validation set.
10:  if eq (34) is on a decreasing trend then
11:    K = K + ∆K; add ∆K rows to A and ∆K (sparse) columns to T.
12:  else if eq (34) is on an increasing trend then
13:    K = K − ∆K; remove ∆K rows from A and ∆K (sparse) columns from T.
14:  else
15:    keep current values of K, A, T.
16:  end if
17: end for
18: return anchor embeddings A and transformations T.
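The row-slice-then-multiply trick in the list above can be sketched as follows, assuming PyTorch's sparse COO tensors support row slicing via index_select (all shapes, indices, and values below are illustrative):

```python
import torch

V, K, d = 100000, 500, 200
A = torch.randn(K, d)

# T in COO format: only the non-zero (row, col) indices and values are stored.
idx = torch.tensor([[0, 0, 1], [3, 7, 7]])          # first row: rows; second: cols
val = torch.tensor([0.9, 0.4, 1.2])
T = torch.sparse_coo_tensor(idx, val, (V, K)).coalesce()

def embed(token_ids):
    """(xT)A: slice the needed rows of sparse T, then one small matmul with A."""
    rows = T.index_select(0, token_ids)             # (batch, K), still sparse
    return torch.sparse.mm(rows, A)                 # (batch, d), dense

e = embed(torch.tensor([0, 1]))                      # embeddings for tokens 0 and 1
```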
I GENERALITY OF ANT
We show that under certain structural assumptions on the anchor embeddings and transformation matrices, ANT reduces to the following task-specific methods for learning sparse representations: 1) Frequency (Chen et al., 2016b), TF-IDF, Group Lasso (Wen et al., 2016), and variational dropout (Chen et al., 2019) based vocabulary selection, 2) Low-rank factorization (Grachev et al., 2019), and 3) Compositional code learning (Shu & Nakayama, 2018; Chen et al., 2018). Hence, ANT is general and unifies some of the work on sparse representation learning done independently in different research areas.
Frequency-based vocabulary selection (Luong et al., 2015; Chen et al., 2016b): Initialize A with the |A| most frequent objects, set T_{a,a} = 1 for all a ∈ A, and T = 0 otherwise. Then E = TA consists of the embeddings of the |A| most frequent objects, with zero embeddings for all others. During training, gradients are used to update A but not T (i.e., only embeddings for frequent objects are learned). By changing the selection of A, ANT also reduces to other vocabulary selection methods such as TF-IDF (Ramos, 1999), Group Lasso (Wen et al., 2016), and variational dropout (Chen et al., 2019).
Low-rank factorization (Acharya et al., 2019; Markovsky, 2011; Grachev et al., 2019): Initialize A by a mixture of random basis embeddings (just one anchor per set) A_1, ..., A_M ∈ R^{1×d} and do not enforce any sparsity on the transformations T_1, ..., T_M ∈ R^{|V|×1}. If we further restrict ourselves to only linear combinations E = ∑_{m=1}^M T_m A_m, this is equivalent to implicitly learning the M low-rank factors a_1, ..., a_M, t_1, ..., t_M that reconstruct embedding matrices of rank at most M.
Compositional code learning (Shu & Nakayama, 2018; Chen et al., 2018): Initialize A by a mixture of random basis embeddings A_1, ..., A_M, initialize transformations T_1, ..., T_M, and apply a linear combination E = ∑_{m=1}^M T_m A_m. For sparsity regularization, set row i of S(G)_m as a reverse one-hot vector with entry d_{mi} = 0 and all other entries 1. In other words, index d_{mi} of row T_{mi} is not regularized, and all other entries are ℓ1-regularized with an extremely high λ_2 such that row T_{mi} essentially becomes a one-hot vector with dimension d_{mi} = 1. This results in learning a codebook where each object in V is mapped to only one anchor in each mixture.
Therefore, ANT encompasses several popular methods for learning sparse representations, and gives further additional flexibility in defining various initialization strategies, applying nonlinear mixtures of transformations, and incorporating domain knowledge via object relationships.
J EXPERIMENTAL DETAILS
Here we provide more details for our experiments including hyperparameters used, design decisions, and comparison with baseline methods. We also include the anonymized code in the supplementary material.
J.1 TEXT CLASSIFICATION

Base CNN model: For all text classification experiments, the base model is a CNN (Lecun et al., 1998) with layers of 2D convolutions and 2D max pooling, before a dense layer to the output softmax. The code was adapted from https://github.com/wenhuchen/Variational-Vocabulary-Selection and the architecture hyperparameters are provided in Table 6. The only differences are the output dimensions, which are 4 for AG-News, 14 for DBPedia, 5 for Sogou-News, and 5 for Yelp-review.
Anchor: We experiment with dynamic, frequency, and clustering initialization strategies. The number of anchors |A| is a hyperparameter that is selected using the validation set, with |A| ∈ {10, 20, 50, 80, 100, 500, 1,000}. Smaller values of |A| allow us to control for fewer anchors and a smaller transformation matrix T at the expense of performance.
Transformation: We experiment with sparse linear transformations for T. λ_2 is a hyperparameter that is selected using the validation set. Larger values of λ_2 allow us to control for more sparse entries in T at the expense of performance. For experiments on dynamic mixtures, we use a softmax-based nonlinear combination E = ∑_{m=1}^M softmax(T_m) A_m, where the softmax is performed over the rows of
T_m. Note that applying a softmax activation to the rows of T_m makes all entries dense, so during training we store T_m as sparse matrices (which is efficient since T_m has few non-zero entries) and implicitly reconstruct E.
Domain knowledge: When incorporating domain knowledge in ANT, we use both WordNet and co-occurrence statistics. For WordNet, we use the public WordNet interface provided by NLTK (http://www.nltk.org/howto/wordnet.html). For each word we search for its immediate related words among its hypernyms, hyponyms, synonyms, and antonyms. This defines the relationship graph. For co-occurrence statistics, we define a co-occurrence context size of 10 on the training data. Two words are defined to be related if they co-occur within this context size.
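A sketch of the immediate-relation extraction with NLTK's WordNet interface (assuming the wordnet corpus has been downloaded; the helper name is ours):

```python
from nltk.corpus import wordnet as wn    # requires nltk.download("wordnet") once

def related_words(word):
    """Immediate WordNet relations: synonyms, antonyms, hypernyms, hyponyms."""
    related = set()
    for syn in wn.synsets(word):
        for lemma in syn.lemmas():
            related.add(lemma.name())                            # synonyms
            related.update(a.name() for a in lemma.antonyms())   # antonyms
        for rel in syn.hypernyms() + syn.hyponyms():
            related.update(l.name() for l in rel.lemmas())
    related.discard(word)
    return related

print(sorted(related_words("stock"))[:10])
```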
A note on baselines: Note that the reported results on SPARSEVD and SPARSEVD-VOC (Chirkova et al., 2018) have a different embedding size: 300 instead of 256. This is because they use pre-trained word2vec or GloVe embeddings to initialize their model before compression is performed.
J.2 LANGUAGE MODELING ON PTB
Base LSTM model: Our base model is a 2-layer LSTM with an embedding size of 200 and hidden layer size of 200. The code was adapted from https://github.com/salesforce/awd-lstm-lm and the full table of hyperparameters is provided in Table 7.
Base AWD-LSTM model: In addition to experiments on a vanilla LSTM model as presented in the main text, we also performed experiments using a 3-layer AWD-LSTM with an embedding size of 400 and hidden layer size of 1,150. The full hyperparameters used can be found in Table 8.
Anchor: We experiment with dynamic, frequency, and clustering initialization strategies. The number of anchors |A| is a hyperparameter that is selected using the validation set, with |A| ∈ {10, 20, 50, 80, 100, 500, 1,000}. Smaller values of |A| allow us to control for fewer anchors and a smaller transformation matrix T at the expense of performance.
Domain knowledge: When incorporating domain knowledge in ANT, we use both WordNet and co-occurrence statistics. For WordNet, we use the public WordNet interface provided by NLTK (http://www.nltk.org/howto/wordnet.html). For each word we search for its immediate related words among its hypernyms, hyponyms, synonyms, and antonyms. This defines the relationship graph. For co-occurrence statistics, we define a co-occurrence context size of 10 on the training data. Two words are defined to be related if they co-occur within this context size.
A note on baselines: We also used some of the baseline results as presented in Grachev et al. (2019). Their presented results differ from our computations in two aspects: they include the LSTM parameters on top of the embedding parameters, and they also count the embedding parameters twice since they do not perform weight tying (Press & Wolf, 2017) (see equation (6) of Grachev et al. (2019)). To account for this, the results of SPARSEVD and SPARSEVD-VOC (Chirkova et al., 2018), as well as the results of the various LR and TT low-rank compression methods (Grachev et al., 2019), were modified by subtracting off the LSTM parameters (200 × 200 × 16). This is derived since each of the 8 weight matrices W_{i,f,o,c}, U_{i,f,o,c} in an LSTM layer is of size 200 × 200, and there are 2 LSTM layers. We then divide by two to account for weight tying. In the main text, we compared with the strongest baselines as reported in Grachev et al. (2019): these were the methods that performed low-rank decomposition on the input embedding (|V| × d), the output embedding (d × |V|), and the intermediate hidden layers of the model. For full results, please refer to Grachev et al. (2019).
Note that the reported results on SPARSEVD and SPARSEVD-VOC (Chirkova et al., 2018) have a different embedding size and hidden layer size of 256 instead of 200, although these numbers are close enough for fair comparison. In our experiments we additionally implemented an LSTM with an embedding size of 256 and hidden layer size of 256 so that we can directly compare with their reported numbers.
For baselines that perform post-processing compression of the embedding matrix, POST-SPARSE HASH (post-processing using sparse hashing) (Guo et al., 2017) and POST-SPARSE HASH+k-SVD (improving sparse hashing using k-SVD) (Guo et al., 2017; Awasthi & Vijayaraghavan, 2018), we choose two settings: the first using 500 anchors and 10 nearest neighbors to these anchor points, and the second using 1,000 anchors and 20 nearest neighbors. The first model uses 500 × d + ∣V ∣ × 10 non-zero embedding parameters while the second model uses 1,000 × d + ∣V ∣ × 20 parameters. For AWD-LSTM on PTB, this is equivalent to 0.3M and 0.6M embedding parameters respectively which is comparable to the number of non-zero parameters used by our method.
J.3 LANGUAGE MODELING ON WIKITEXT-103
Base AWD-LSTM model: Our base model is a 4 layer AWD-LSTM with an embedding size of 400 and hidden layer size of 2,500. The code was adapted from https://github.com/ salesforce/awd-lstm-lm and the hyperparameters used can be found in Table 9.
A note on baselines: While Baevski & Auli (2019) adapt embedding dimensions according to word frequencies, their goal is not to compress embedding parameters and they use 44.9M (dense) parameters in their adaptive embedding layer, while we use only 2M. Their embedding parameters are calculated by their reported bucket sizes and embedding sizes (three bands of size 20K (d = 1024), 40K (d = 256) and 200K (d = 64)). Their perplexity results are also obtained using a Transformer model with 250M params while our AWD-LSTM model uses 130M params.
For the HASH EMBED baseline that retains the k most frequent words and hashes the remaining words into 1,000 OOV buckets (Svenstrup et al., 2017), we vary k ∈ {1 × 10^5, 5 × 10^4, 1 × 10^4} to obtain results across various parameter settings.
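For concreteness, the HASH EMBED lookup can be sketched as below (the multiplicative hash is an illustrative choice, not necessarily the one used by Svenstrup et al. (2017)):

```python
def hash_embed_row(word_id, k, n_buckets=1000):
    """Top-k frequent words (ids 0..k-1, sorted by frequency) keep their own
    embedding rows; all remaining words share n_buckets hashed OOV rows."""
    if word_id < k:
        return word_id
    return k + (word_id * 2654435761) % n_buckets   # simple multiplicative hash

# The embedding table then has only k + n_buckets rows instead of |V| rows.
```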
J.4 MOVIE RECOMMENDATION ON MOVIELENS
Base MF model: We show the hyperparameters used for the MF model in Table 10. We use the Yogi optimizer (Zaheer et al., 2018) to learn the parameters.
ANT and NBANT: We build ANT on top of the MF model while keeping the base hyperparameters constant. For ANT, we apply compression to both the movie and user embedding matrices individually. NBANT involves defining the starting value of K = |A| and a ∆K value which determines the rate of increase or decrease in K. For MovieLens 25M we use a larger initial |A| and ∆K since it is a larger dataset and takes longer to train, so we wanted the increase and decrease in anchors to be faster (see Table 10). Beyond this initial setting, we found that performance is robust with respect to the initial value of K and ∆K, so we did not tune these parameters. In practice, we tie the updates of the number of user anchors and movie anchors instead of optimizing over both independently. Therefore, we start with the same number of initial user and movie anchors before incrementing or decrementing them by the same ∆K at the same time. We found that this simplification did not affect performance, and NBANT was still able to find an optimal number of anchors for a good trade-off between performance and compression.
K MORE RESULTS
In the

1. What is the main contribution of the paper on low-rank approximation of embeddings?
2. What are the strengths of the paper, particularly in terms of memory cost reduction and experimental results?
3. What are the weaknesses of the paper regarding its Bayesian non-parametric interpretation and experimental limitations?
4. How does the reviewer assess the significance of the proposed method in practical applications, such as recommendation systems?
5. Are there any suggestions or requests for additional experiments or improvements in the paper?

Review
This paper introduces a low-rank approximation of embeddings using "anchors". It also proposes a probabilistic interpretation of the method as a nonparametric Bayesian dictionary learning model, which can be inferred by optimizing the small-variance asymptotic objective.
What I agree with the authors on: i) Using properly chosen basis vectors may greatly reduce the memory cost for embeddings, especially for huge vocabulary sizes (e.g., over 100 million). ii) The initialization of the basis vectors is extremely important, and they should be updated through training. iii) The experimental results in this paper look reasonable.
What I feel confused about: i) Why interpret this method in a Bayesian nonparametric way? To be more specific: i.1) The final objective function (5) does not involve Bayesian posterior inference. If you want a point estimate of the sparse representation + learnable anchors, you don't need a Bayesian model. i.2) Bayesian nonparametrics are useful because they can automatically learn the model size, in your case |A|. You mention this point in Figure 3, but there is no online learning result showing that your model has the capacity to grow through training. One example is "Truly Nonparametric Online Variational Inference for Hierarchical Dirichlet Processes" by M. Bryant and E. Sudderth.
ii) The vocabulary size in your experiments is decent, but not very big. In a recommendation system, the vocabulary size can be the number of users, which is at least 100M, while the embedding size is typically around 16 to 64 in real systems. The proposed method could be a huge gain for storing such a huge embedding table, but I cannot see an experiment at this vocabulary level. Even rough results at this level could make this paper much stronger.
iii) This is a minor point, but AUC results on MovieLens, besides MSE, could reflect the ranking quality in recommendations.
Overall, this paper proposes a practical solution to cut embedding storage. But the Bayesian interpretation is not persuasive and there are still missing pieces in the experiments. |
ICLR | Title
Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies
Abstract
Learning continuous representations of discrete objects such as text, users, movies, and URLs lies at the heart of many applications including language and user modeling. When using discrete objects as input to neural networks, we often ignore the underlying structures (e.g., natural groupings and similarities) and embed the objects independently into individual vectors. As a result, existing methods do not scale to large vocabulary sizes. In this paper, we design a simple and efficient embedding algorithm that learns a small set of anchor embeddings and a sparse transformation matrix. We call our method ANCHOR & TRANSFORM (ANT) as the embeddings of discrete objects are a sparse linear combination of the anchors, weighted according to the transformation matrix. ANT is scalable, flexible, and end-to-end trainable. We further provide a statistical interpretation of our algorithm as a Bayesian nonparametric prior for embeddings that encourages sparsity and leverages natural groupings among objects. By deriving an approximate inference algorithm based on Small Variance Asymptotics, we obtain a natural extension that automatically learns the optimal number of anchors instead of having to tune it as a hyperparameter. On text classification, language modeling, and movie recommendation benchmarks, we show that ANT is particularly suitable for large vocabulary sizes and demonstrates stronger performance with fewer parameters (up to 40× compression) as compared to existing compression baselines. Code for our experiments can be found at https://github.com/pliang279/ sparse_discrete.
1 INTRODUCTION
Most machine learning models, including neural networks, operate on vector spaces. Therefore, when working with discrete objects such as text, we must define a method of converting objects into vectors. The standard way to map objects to continuous representations involves: 1) defining the vocabulary V = {v1, ..., v∣V ∣} as the set of all objects, and 2) learning a ∣V ∣ × d embedding matrix that defines a d dimensional continuous representation for each object. This method has two main shortcomings. Firstly, when ∣V ∣ is large (e.g., million of words/users/URLs), this embedding matrix does not scale elegantly and may constitute up to 80% of all trainable parameters (Jozefowicz et al., 2016). Secondly, despite being discrete, these objects usually have underlying structures such as natural groupings and similarities among them. Assigning each object to an individual vector assumes independence and foregoes opportunities for statistical strength sharing. As a result, there has been a large amount of interest in learning sparse interdependent representations for large vocabularies rather than the full embedding matrix for cheaper training, storage, and inference.
In this paper, we propose a simple method to learn sparse representations that uses a global set of vectors, which we call the anchors, and expresses the embeddings of discrete objects as a sparse linear combination of these anchors, as shown in Figure 1. One can consider these anchors to represent latent topics or concepts. Therefore, we call the resulting method ANCHOR & TRANSFORM (ANT). The approach is reminiscent of low-rank and sparse coding approaches; however, surprisingly, these methods have not been elegantly integrated with deep networks in the literature. Competitive attempts are often complex (e.g., optimized with RL (Joglekar et al., 2019)), involve multiple training stages (Ginart et al., 2019; Liu et al., 2017), or require post-processing (Svenstrup et al., 2017; Guo et al., 2017; Aharon et al., 2006; Awasthi & Vijayaraghavan, 2018). We derive a simple optimization objective which learns these anchors and sparse transformations in an end-to-end manner. ANT is
∗work done during an internship at Google.
scalable, flexible, and allows the user flexibility in defining these anchors and adding more constraints on the transformations, possibly in a domain/task specific manner. We find that our proposed method demonstrates stronger performance with fewer parameters (up to 40× compression) on multiple tasks (text classification, language modeling, and recommendation) as compared to existing baselines.
We further provide a statistical interpretation of our algorithm as a Bayesian nonparametric (BNP) prior for neural embeddings that encourages sparsity and leverages natural groupings among objects. Specifically, we show its equivalence to an Indian Buffet Process (IBP; Griffiths & Ghahramani (2005)) prior for embedding matrices. While such BNP priors have proven to be a flexible tool in graphical models to encourage hierarchies (Teh & Jordan, 2010), sparsity (Knowles & Ghahramani, 2011), and other structural constraints (Roy et al., 2016), their inference methods are usually complex, hand-designed for each setup, and non-differentiable. Our proposed method opens the door towards integrating priors (e.g., IBP) with neural representation learning. These theoretical connections lead to practical insights: by asymptotically analyzing the likelihood of our model in the small variance limit using Small Variance Asymptotics (SVA; Roweis (1998)), we obtain a natural extension, NBANT, that automatically learns the optimal number of anchors to achieve a balance between performance and compression instead of having to tune it as a hyperparameter.
2 RELATED WORK
Prior work in learning sparse embeddings of discrete structures falls into three categories:
Matrix compression techniques such as low-rank approximations (Acharya et al., 2019; Grachev et al., 2019; Markovsky, 2011), quantizing (Han et al., 2016), pruning (Anwar et al., 2017; Dong et al., 2017; Wen et al., 2016), or hashing (Chen et al., 2015; Guo et al., 2017; Qi et al., 2017) have been applied to embedding matrices. However, it is not trivial to learn sparse low-rank representations of large matrices, especially in conjunction with neural networks. To the best of our knowledge, we are the first to integrate sparse low-rank representations with neural networks, present their nonparametric extension, and demonstrate their effectiveness on many tasks in balancing the trade-off between performance and sparsity. We also outperform many baselines based on low-rank compression (Grachev et al., 2019), sparse coding (Chen et al., 2016b), and pruning (Liu et al., 2017).
Reducing representation size: These methods reduce the dimension d for different objects. Chen et al. (2016a) divides the embedding into buckets which are assigned to objects in order of importance, Joglekar et al. (2019) learns d by solving a discrete optimization problem with RL, and Baevski & Auli (2019) reduces dimensions for rarer words. These methods resort to RL or are difficult to tune with many hyperparameters. Each object is also modeled independently without information sharing.
Task specific methods include learning embeddings of only common words for language modeling (Chen et al., 2016b; Luong et al., 2015), and vocabulary selection for text classification (Chen et al., 2019). Other methods reconstruct pre-trained embeddings using codebook learning (Chen et al., 2018; Shu & Nakayama, 2018) or low rank tensors (Sedov & Yang, 2018). However, these methods cannot work for general tasks. For example, methods that only model a subset of objects cannot be used for retrieval because it would never retrieve the dropped objects. Rare objects might be highly relevant to a few users so it might not be ideal to completely ignore them. Similarly, task-specific methods such as subword (Bojanowski et al., 2017) and wordpiece (Wu et al., 2016) embeddings, while useful for text, do not generalize to general applications such as item and query retrieval.
3 ANCHOR & TRANSFORM
Suppose we are presented with data X ∈ V^N, Y ∈ R^{N×c} drawn from some joint distribution p(x, y), where the support of x is over a discrete set V (the vocabulary) and N is the size of the training set. The entries in Y can be either discrete (classification) or continuous (regression). The goal is to learn a d-dimensional representation {e_1, ..., e_{|V|}} for each object by learning an embedding matrix E ∈ R^{|V|×d} where row i is the representation e_i of object i. A model f_θ with parameters θ is then used to predict y, i.e., ŷ_i = f_θ(x_i; E) = f_θ(E[x_i]).
At a high level, to encourage statistical sharing between objects, we assume that the embedding of each object is obtained by linearly superimposing a small set of anchor objects. For example, when the objects considered are words, the anchors may represent latent abstract concepts (of unknown cardinality) and each word is a weighted mixture of different concepts. More generally, the model assumes that there are some unknown number of anchors, A = {a1, ...,a∣A∣}. The embedding ei for object i is generated by first choosing whether the object possesses each anchor ak ∈ Rd. The selected anchors then each contribute some weight to the representation of object i. Therefore, instead of learning the large embedding matrix E directly, ANT consists of two components:
Algorithm 1 ANCHOR & TRANSFORM algorithm for learning sparse representations of discrete objects.
ANCHOR & TRANSFORM:
1: Anchor: initialize anchor embeddings A.
2: Transform: initialize T as a sparse matrix.
3: Optionally + domain info: initialize domain sparsity matrix S(G) as a sparse matrix (see Appendix F).
4: for each batch (X, Y) do
5:   Compute loss L = ∑_i D_φ(y_i, f_θ(x_i; TA))
6:   A, T, θ = UPDATE(∇L, η).
7:   T = max{(T − ηλ_2) ⊙ S(G) + T ⊙ (1 − S(G)), 0}.
8: end for
9: return anchor embeddings A and transformations T.
1) ANCHOR: Learn embeddings A ∈ R∣A∣×d of a small set of anchor objects A = {a1, ...,a∣A∣}, ∣A∣ << ∣V ∣ that are representative of all discrete objects.
2) TRANSFORM: Learn a sparse transformation T from A to E. Each of the discrete objects is induced by some transformation from (a few) anchor objects. To ensure sparsity, we want nnz(T) << ∣V ∣ × d.
A and T are trained end-to-end for task-specific representations. To enforce sparsity, we use an ℓ1 penalty on T and constrain its domain to be non-negative to reduce redundancy in the transformations (positive and negative entries canceling out).
min_{T ≥ 0, A, θ} ∑_i D_φ(y_i, f_θ(x_i; TA)) + λ_2∥T∥_1,   (1)
where Dφ is a suitable Bregman divergence between predicted and true labels, and ∥T∥1 denotes the sum of absolute values. Most deep learning frameworks directly use subgradient descent to solve eq (1), but unfortunately, such an approach will not yield sparsity. Instead, we perform optimization by proximal gradient descent (rather than approximate subgradient methods which have poorer convergence around non-smooth regions, e.g., sparse regions) to ensure exact zero entries in T:
A^{t+1}, T^{t+1}, θ^{t+1} = UPDATE (∇ ∑_i D_φ(y_i, f_θ(x_i; T^t A^t)), η),   (2)

T^{t+1} = PROX_{ηλ_2}(T^{t+1}) = max (T^{t+1} − ηλ_2, 0),   (3)
where η is the learning rate, and UPDATE is a gradient update rule (e.g., SGD (Lecun et al., 1998), ADAM (Kingma & Ba, 2015), YOGI (Zaheer et al., 2018)). PROXηλ2 is a composition of two proximal operators: 1) soft-thresholding (Beck & Teboulle, 2009) at ηλ2 which results from subgradient descent on λ2∥T∥1, and 2) max(⋅,0) due to the non-negative domain for T. We implement this proximal operator on top of the YOGI optimizer for our experiments.
Together, equations (2) and (3) give us an iterative process for end-to-end learning of A and T along with θ for specific tasks (Algorithm 1). T is implemented as a sparse matrix by only storing its non-zero entries and indices. Since nnz(T) << |V| × d, this makes storage of T extremely efficient as compared to traditional approaches of computing the entire |V| × d embedding matrix. We also provide implementation tips to further speed up training, and ways to combine ANT with existing speedup techniques like softmax sampling (Mikolov et al., 2013) or noise-contrastive estimation (Mnih & Teh, 2012), in Appendix H. After training, we only store |A| × d + nnz(T) << |V| × d entries that define the complete embedding matrix, thereby using fewer parameters than the traditional |V| × d matrix. General purpose matrix compression techniques such as hashing (Qi et al., 2017), pruning (Dong
et al., 2017), and quantizing (Han et al., 2016) are compatible with our method: the matrices A and nnz(T) can be further compressed and stored.
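A minimal sketch of this iterative process in PyTorch (SGD stands in for the YOGI optimizer used in the paper, and T is kept dense for clarity; in practice T is stored sparse as described in Appendix H):

```python
import torch

V, K, d = 1000, 20, 16
A = torch.randn(K, d, requires_grad=True)
T = torch.rand(V, K, requires_grad=True)
opt = torch.optim.SGD([A, T], lr=1e-2)
lam2, eta = 1e-3, 1e-2

def train_step(batch_loss):
    """One iteration of eqs (2)-(3): gradient update, then prox on T."""
    opt.zero_grad()
    batch_loss.backward()
    opt.step()                                    # eq (2): update A, T, theta
    with torch.no_grad():                         # eq (3): soft-threshold + clamp
        T.copy_(torch.clamp(T - eta * lam2, min=0.0))

# Stand-in squared loss on the reconstructed embeddings E = TA.
train_step(((T @ A - torch.randn(V, d)) ** 2).mean())
```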
We first discuss practical methods for anchor selection (§3.1). In Appendix F we describe several ways to incorporate domain knowledge into the anchor selection and transform process. We also provide a statistical interpretation of ANT as a sparsity promoting generative process using an IBP prior and derive approximate inference based on SVA (§3.2). This gives rise to a nonparametric version of ANT that automatically learns the optimal number of anchors.
3.1 ANCHOR: SELECTING THE ANCHORS A
Inspired by research integrating initialization strategies based on clustering (Teh et al., 2007) and coresets (Bachem et al., 2015) with Bayesian nonparametrics, we describe several practical methods to select anchor objects that are most representative of all objects (refer to Appendix D for a comparison of initialization strategies).
Frequency and TF-IDF: For tasks where frequency or TF-IDF (Ramos, 1999) are useful for prediction, the objects can simply be sorted by frequency and the most common objects selected as the anchor points. While this might make sense for tasks such as language modeling (Luong et al., 2015; Chen et al., 2016b), choosing the most frequent objects might not cover rare objects that are not well represented by common anchors.
[Figure 2: Clustering-based anchor selection. Anchors are initialized with the most frequent words (e.g., "the", "good") in a pretrained space (e.g., GloVe or co-occurrence statistics), and successive k-means++ clustering steps pick additional anchors that span the space.]
Clustering: To ensure that all objects are close to some anchor, we use k-means++ initialization (Arthur & Vassilvitskii, 2007). Given a feature space representative of the relationships between objects, such as GloVe (Pennington et al., 2014) for words or a co-occurrence matrix (Haralick et al., 1973) for more general objects, k-means++ initialization picks cluster centers that span the entire space. This can augment other strategies, such as initializing some anchors using frequency followed by clustering to complete the remaining anchors (see Figure 2).
Random basis vectors: Initialize A to a set of random basis vectors. This simple yet powerful method captures the case where we have little prior knowledge about the objects (i.e., without access to any pretrained representation or similarity space).
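A small sketch of the clustering-based selection, assuming access to a pretrained representation space (e.g., a GloVe matrix) and scikit-learn's kmeans_plusplus; the frequency-then-clustering split and the variable names (glove_matrix, word_counts) are hypothetical illustrations:

```python
import numpy as np
from sklearn.cluster import kmeans_plusplus

def select_anchors(pretrained, counts, n_anchors, n_frequent):
    """Pick anchor indices: most frequent objects first, then k-means++
    over a pretrained space to cover the rest. Assumes n_frequent < n_anchors."""
    frequent = np.argsort(-counts)[:n_frequent]
    remaining = np.setdiff1d(np.arange(len(counts)), frequent)
    _, picked = kmeans_plusplus(pretrained[remaining],
                                n_clusters=n_anchors - n_frequent,
                                random_state=0)
    return np.concatenate([frequent, remaining[picked]])

# e.g., anchors = select_anchors(glove_matrix, word_counts, 500, 100)
```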
3.2 STATISTICAL INTERPRETATION AS A BAYESIAN NONPARAMETRIC PRIOR
To provide a statistical interpretation of ANT, we first analyze a generative process for discrete representations that is consistent with our algorithm. Given a set of anchors, A = {a1, ...,a∣A∣}, we use a binary latent variable zik ∈ {0,1} to indicate whether object i possesses anchor k and a positive latent variable wik ∈ R≥0 to denote the weight that anchor k contributes towards object i. Therefore, the representation ei is given by ei = ∑k wikzikak. Ideally, we want the vector zi to be sparse for efficient learning and storage. More formally, suppose there are K ∶= ∣A∣ anchors, then:
• Z ∈ R^{|V|×K} ∼ IBP(a, b);  A ∈ R^{K×d} ∼ P(A) = N(0, 1);  W ∈ R^{|V|×K} ∼ P(W) = Exp(1)
• for i = 1, ⋯, N:
  – ŷ_i = f_θ(x_i; (Z ∘ W)A)
  – y_i ∼ p(y_i | x_i; Z, W, A) = exp{−D_φ(y_i, ŷ_i)} b_φ(y_i)
In this generative process, the selection matrix Z follows a two-parameter Indian Buffet Process (IBP; Griffiths & Ghahramani (2005)) prior (Ghahramani et al., 2007). Not only does this BNP prior allow for a potentially infinite number of anchors, but it also encourages each object to select only a small subset of anchors, resulting in a sparse z_i (see Appendix A for details). We place a standard Gaussian prior on the continuous anchor embeddings a_k and an exponential prior on the weights W, which give the actual non-negative transformation weights for the non-zero entries defined in Z. E = (Z ∘ W)A is the final embedding learnt by our model: it is a d-dimensional continuous representation {e_1, ..., e_|V|} of the discrete objects, where row i is the representation e_i of object i. Finally, a neural model f_θ with parameters θ is used to predict y_i given the embedded representations, i.e., ŷ_i = f_θ(x_i; (Z ∘ W)A) = f_θ((Z ∘ W)A[x_i]).
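For intuition, the NumPy sketch below samples embeddings from a finite truncation of this process; the one-parameter IBP stick-breaking construction (with a hypothetical concentration of 2) is used as a simple stand-in for the two-parameter IBP:

```python
import numpy as np

rng = np.random.default_rng(0)
V, K, d = 1000, 20, 16                    # finite truncation with K anchors
pi = np.cumprod(rng.beta(2.0, 1.0, K))    # stick-breaking probabilities (decreasing)
Z = rng.binomial(1, pi, size=(V, K))      # sparse binary anchor selections z_ik
W = rng.exponential(1.0, size=(V, K))     # nonnegative weights w_ik ~ Exp(1)
A = rng.normal(0.0, 1.0, size=(K, d))     # anchor embeddings a_k ~ N(0, 1)
E = (Z * W) @ A                           # e_i = sum_k w_ik z_ik a_k
```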
Likelihood Model/Loss: We assume that the final emission model yi∣ŷi belongs to the exponential family. Since exponential family distributions have a corresponding Bregman divergence (Banerjee et al. (2005); see Appendix C for examples), we choose Dφ(yi, ŷi) as the corresponding Bregman divergence between predicted and true labels. Appropriate choices for Dφ recover cross-entropy and MSE losses. bφ(yi) does not depend on any learnable parameter or variable and can be ignored.
Joint likelihood: Under the generative model as defined above, the joint likelihood is given by:

log p(Y, Z, W, A | X) ∝ ∑_i log p(y_i | x_i; Z, W, A) + log p(Z) + log p(W) + log p(A)
                      = ∑_i {−D_φ(y_i, f_θ(x_i; (Z ∘ W)A)) + log b_φ(y_i)} + log p(Z) + log p(W) + log p(A).
However, calculating the posterior or MAP estimate is hard, especially due to the presence of the non-linear deep network fθ. Approximate inference methods such as MCMC, variational inference, or probabilistic programming would be computationally and statistically inefficient since it would involve sampling, evaluating, or training the model multiple times. To tackle this problem, we perform approximate inference via Small Variance Asymptotics (SVA), which captures the benefits of rich latent-variable models while providing a framework for scalable optimization (Broderick et al., 2013a; Jiang et al., 2012; Roychowdhury et al., 2013).
Approximate Inference via SVA: To use SVA, we introduce a scaling variable β and shrink the variance of the emission probability by taking β → ∞. The scaled emission probability becomes

p(y_i | x_i; Z, W, A) = exp{−β D_φ(y_i, ŷ_i)} b_{βφ}(y_i).   (4)

Following Broderick et al. (2013a), we modulate the number of features in the large-β limit by choosing constants λ_1 > λ_2 > 0 and setting the IBP hyperparameters a = exp(−βλ_1) and b = exp(βλ_2). This prevents a limiting objective function that favors a trivial cluster assignment (every data point assigned to its own separate feature). Maximizing the asymptotic joint likelihood (after taking limits, i.e., lim_{β→∞} (1/β) log p(Y, Z, W, A | X)) results in the following objective function:
min_{T≥0, A, θ, K} ∑_i D_φ(y_i, f_θ(x_i; TA)) + λ_2 ∥T∥_0 + (λ_1 − λ_2)K,   (5)
where we have combined the variables Z and W with their constraints into one variable T. The exponential prior for W results in a non-negative domain for T. Please refer to Appendix B for derivations. Note that eq (5) suggests a natural objective function for learning representations that minimize the prediction loss D_φ(y_i, f_θ(x_i; TA)) while ensuring sparsity of T as measured by the ℓ_0-norm and using as few anchors as possible (K). Therefore, optimizing eq (5) gives rise to a nonparametric version of ANT, which we call NBANT, that automatically learns the optimal number of anchors. To perform optimization over the number of anchors, our algorithm starts with a small |A| = 10 and either adds anchors (i.e., adding a new row to A and a new column to T) or deletes anchors to minimize eq (5) at every epoch, depending on the trend of the objective evaluated on the validation set. We outline the exact algorithm in Appendix G along with more implementation details.
Analogously, we can derive the finite-case objective function for a fixed number of anchors K:

min_{T≥0, A, θ} ∑_i D_φ(y_i, f_θ(x_i; TA)) + λ_2 ∥T∥_0,   (6)

which, together with an ℓ_1 penalty on T as a convex relaxation of the ℓ_0 penalty, recovers the objective function in eq (1). The solutions of this finite version across values of K trace out the Pareto front, and different values of λ_1 in eq (5) can be used for model selection along the front, as elucidated in Appendix L.
4 EXPERIMENTS
To evaluate ANT, we experiment on text classification, language modeling, and movie recommendation tasks. Experimental details are in Appendix J and full results are in Appendix K.
4.1 TEXT CLASSIFICATION
Setup: We follow the setting in Chen et al. (2019) with four datasets: AG-News (V = 62K) (Zhang et al., 2015), DBPedia (V = 563K) (Lehmann et al., 2015), Sogou-News (V = 254K) (Zhang et al., 2015), and Yelp-review (V = 253K) (Zhang et al., 2015). We use a CNN for classification (Kim, 2014). ANT is used to replace the input embedding, and domain knowledge is derived from WordNet and co-occurrence in the training set. We record test accuracy and the number of parameters used in the embedding layer only. For ANT, the number of parameters is computed as |A| × d + nnz(T).
Baselines: On top of the CNN, we compare to the following compression approaches. Vocabulary selection methods: 1) FREQUENCY, where only embeddings for the most frequent words are learnt (Chen et al., 2016b; Luong et al., 2015), 2) TF-IDF, which only learns embeddings for words with high TF-IDF score (Ramos, 1999), 3) GL (group lasso), which aims to find underlying sparse structures in the embedding matrix via row-wise ℓ_2 regularization (Liu et al., 2015; Park et al., 2016; Wen et al., 2016), 4) VVD (variational vocabulary dropout), which performs variational dropout for vocabulary selection (Chen et al., 2019). We also compare to 5) SPARSEVD (sparse variational dropout), which performs variational dropout on all parameters (Chirkova et al., 2018), 6) SPARSEVD-VOC, which uses multiplicative weights for vocabulary sparsification (Chirkova et al., 2018), and 7) a SPARSE CODE model that learns a sparse code to reconstruct pretrained word representations (Chen et al., 2016b). All CNN architectures are the same for all baselines, with details in Appendix J.1.
Results on AG-News are in Table 1 and results for the other datasets are in Appendix K.1. We observe that restricting T ≥ 0 using an exponential prior is important in reducing redundancy in the entries. Domain knowledge from WordNet and co-occurrence also succeeded in reducing the total (non-zero) embedding parameters to 0.40M, a 40× compression, while outperforming the existing approaches.
4.2 LANGUAGE MODELING
Setup: We perform experiments on word-level Penn Treebank (PTB) (V = 10K) (Marcus et al., 1993) and WikiText-103 (V = 267K) (Merity et al., 2017) with LSTM (Hochreiter & Schmidhuber, 1997) and AWD-LSTM (Merity et al., 2018). We use ANT as the input embedding tied to the output embedding. Domain knowledge is derived from WordNet and co-occurrence on the training set. We record the test perplexity and the number of (non-zero) embedding parameters.
Baselines: We compare to SPARSEVD and SPARSEVD-VOC, as well as to low-rank (LR) and tensor-train (TT) model compression techniques (Grachev et al., 2019). Note that the application of variational vocabulary selection to language modeling with tied weights is non-trivial, since one is unable to predict next words when words are dynamically dropped out. We also compare against methods that compress the trained embedding matrix as a post-processing step before evaluation: POST-SPARSE HASH (post-processing using sparse hashing) (Guo et al., 2017) and POST-SPARSE HASH+k-SVD (Awasthi & Vijayaraghavan, 2018; Guo et al., 2017), which uses k-SVD (the basis of dictionary learning/sparse coding) (Aharon et al., 2006) to solve for a sparse embedding matrix instead of the ad-hoc projection in Guo et al. (2017). Comparing to these post-processing methods demonstrates that end-to-end training of sparse embeddings is superior to post-compression.
Results: On PTB (Table 2), we improve both perplexity and compression as compared to previously proposed methods. We observe that sparsity is important: baseline methods that only perform lower-rank compression with dense factors (e.g., LR LSTM) tend to suffer in performance and use many parameters, while ANT retains performance with much better compression. ANT also outperforms post-processing methods (POST-SPARSE HASH); we hypothesize that this is because post-processing methods accumulate errors in both language modeling and embedding reconstruction. Using an anchor size of 500/1,000 reaches a good perplexity/compression trade-off: we reach within 2 points of perplexity with a 5× reduction in parameters and within 7 points with a 10× reduction. Using AWD-LSTM, ANT with 1,000 dynamic basis vectors is able to compress parameters by 10× while achieving 72.0 perplexity. Incorporating domain knowledge allows us to compress the parameters by another 10× and achieve 70.0 perplexity, resulting in 100× total compression.
On WikiText-103, we train using sampled softmax (Bengio & Senecal, 2008) (due to the large vocabulary) for 500,000 steps. To the best of our knowledge, we could not find literature on compressing language models on WikiText-103. We tried general compression techniques like low-rank and tensor-train factorization (Grachev et al., 2019), but these did not scale. As an alternative, we consider a HASH EMBED baseline that retains the frequent k words and hashes the remaining words into 1,000 OOV buckets (Svenstrup et al., 2017). We vary k ∈ {1×10^5, 5×10^4, 1×10^4} (details in Appendix J.3). From Table 2 (bottom), we reach within 3 points of perplexity with a ∼16× reduction in parameters and within 13 points with a ∼80× reduction, outperforming the frequency and hashing baselines. We observe that ANT's improvement over post-compression methods (POST-SPARSE HASH) is larger on WikiText than on PTB, suggesting that ANT is particularly suitable for large vocabularies.
4.3 RECOMMENDER SYSTEMS
Setup: We perform experiments on both movie and product recommendation tasks. For movie recommendation, we follow Ginart et al. (2019) and experiment on MovieLens 25M (Harper & Konstan, 2015) with 126K users and 59K movies. We also present results for MovieLens 1M in Appendix K.3. On product recommendation, we show that ANT scales to Amazon Product Reviews (Ni et al., 2019), the largest existing dataset for recommender systems, with 233M reviews spanning 43.5M users and 15.2M products. Following Wan et al. (2020), we ensured that the users and products in the test set appear in the training data for generalization.
Baselines: We compare to a baseline Matrix Factorization (MF) model (Koren et al., 2009) with full embedding matrices for movies and users and to Mixed Dimension (MIXDIM) embeddings (Ginart et al., 2019), a compression technique that assigns different dimension to different users/items based on popularity. We also compare to SPARSE CBOW (Sun et al., 2016) which learns sparse E by placing an `1 penalty over all entries of E and optimizing using online subgradient descent, and SLIMMING (Liu et al., 2017), which performs subgradient descent before pruning small weights by setting them to 0. Such methods learn embeddings for objects independently without statistical strength sharing among related objects. We also test NBANT using the algorithm derived from the Bayesian nonparametric interpretation of ANT.
Results: From Table 3, ANT outperforms standard matrix factorization and dense mixed-dimension embeddings in both performance and compression. NBANT is also able to automatically select an optimal
number of anchors (6/8) to achieve solutions along the performance-compression Pareto front. In Figure 3, we plot the value of eq (5) across values of |A| after a comprehensive hyperparameter sweep on ANT across 1,000 settings. In comparison, NBANT optimizes |A| and reaches a good value of eq (5) in a single run without having to tune |A| as a hyperparameter, thereby achieving the best balance between performance and compression. Please refer to Appendix K.3 for more results and discussion on NBANT.
For product recommendation, we first experiment on a commonly used subset of the data, Amazon Electronics (with 9.84M users and 0.76M products), to ensure that our results match published baselines (Wan et al., 2020), before scaling our experiment to the entire dataset. From Table 4, we find that ANT compresses embeddings by 25× on Amazon Electronics while maintaining performance, and 10× on the full Amazon reviews dataset.
Online NBANT: Since NBANT automatically grows/contracts |A| during training, we can further extend NBANT to an online version that sees a stream of batches without revisiting previous ones (Bryant & Sudderth, 2012). We treat each batch as a new set of incoming data, train on that batch until convergence, and modify |A| as in Algorithm 2 before moving on to the next batch. In this significantly more challenging online setting, NBANT is still able to learn well and achieves an MSE of 0.875 with 1.25M non-zero parameters. Interestingly, this online version of NBANT settled on a similar range of final user (8) and item (8) anchors as the non-online version (see Table 3), which confirms the robustness of NBANT in finding relevant anchors automatically. In Appendix K.3 we discuss more observations around online NBANT, including ways of learning |A|.
4.4 DISCUSSION AND OBSERVATIONS
Here we list some general observations regarding the importance of various design decisions in ANT:
1) Sparsity is important: Baselines that compress with dense factors (e.g., LR, TT) suffer in performance while using many parameters, while ANT retains performance with better compression.
2) Choice of A: We provide results on more clustering initializations in Appendix D. In general, performance is robust w.r.t. choice of A. While frequency and clustering work better, using a dynamic basis also performs well. Thus, it is beneficial to use any extra information about the discrete objects (e.g., domain knowledge or having a good representation space like GloVe to perform clustering).
Table 5: Word association results after training language models with ANT on the word-level PTB dataset. Left: the non-anchor words most induced by a given anchor word. Right: the largest (non-anchor, anchor) entries learnt in T after sparse ℓ_1-regularization. Bottom: movie clusters obtained by sorting movies with the highest coefficients for each anchor embedding.

Anchor word | Most induced non-anchor words
year  | august, night, week, month, monday, summer, spring
stock | bonds, certificates, debt, notes, securities, mortgages

Largest (non-anchor, anchor) pairs: (trading, brokerage), (stock, junk), (year, summer), (york, angeles), (year, month), (government, administration)

Movies | Genre
God's Not Dead; Sex and the City; Sex and the City 2; The Twilight Saga: Breaking Dawn - Part 1; The Princess Diaries 2: Royal Engagement; The Last Song; Legally Blonde 2: Red, White & Blonde; The Twilight Saga: Eclipse; Maid in Manhattan; The Twilight Saga: Breaking Dawn - Part 2 | romance, comedy
Nostalghia; Last Days; Chimes at Midnight; Lessons of Darkness; Sonatine; Band of Outsiders; Gerry; Cyclo; Mishima: A Life in Four Chapters; George Washington | drama, indie
3) Anchors and sparse transformations learned: We visualize the important transformations (large entries) learned between anchors and non-anchors in Table 5. On the left, we show the most associated non-anchors for a given anchor word and find that the induced non-anchors are highly plausible: stock accurately contributes to bonds, certificates, securities, and so on. On the right, we show the largest (non-anchor, anchor) pairs learned, where we find related concepts such as (billion, trillion) and (government, administration). On MovieLens, for each anchor we sort the movies according to the magnitude of their transformation coefficients, which automatically discovers movie clusters based on underlying genres. We obtain a genre purity ratio of 61.7% by comparing the automatically discovered movie clusters with the true genre tags provided in MovieLens.
4) Zero transformations learned: For MovieLens, we find that ANT assigns 2,673 out of 59,047 movies to an all-zero row, of which 84% had only 1 rating (i.e., very rare movies). Therefore, compression automatically discovers very rare objects (those with a single labeled point). On WikiText-103, rare words (e.g., Anarky, Perl, Voorhis, Gaudí, Lat, Bottomley, Nescopeck) are also automatically assigned zero rows when performing high compression (54.2 ppl with 0.4M params). Certain rare words that might be predictive, however, are assigned non-zero rows in T, such as: sociologists, deadlines, indestructible, causeways, outsourced, glacially, heartening, unchallenging, roughest.
5) Choice of λ1, λ2: Tuning λ1 allows us to perform model selection by controlling the trade-off between ∣A∣ (model complexity) and performance. By applying eq (5) on our trained models in Table 2, choosing a small λ1 = 2 × 10−5 prefers more anchors (∣A∣ = 1,000) and better performance (ppl = 79.4), while a larger λ1 = 1 × 10−1 selects fewer anchors (∣A∣ = 100) with a compromise in performance (ppl = 106.6). Tuning λ2 allows us to control the tradeoff between sparsity and performance (see details in Appendix L).
6) Convergence: In Figure 4, we plot the empirical convergence of validation loss across epochs. ANT converges as fast as the (non-sparse) MF baseline, and faster than compression baselines MixDim (Ginart et al., 2019) and Sparse CBOW (Sun et al., 2016). ANT also converges to the best validation loss.
7) Scalability: In addition to fast convergence, ANT also works effectively on large datasets such as MovieLens 25M (162K users, 59K movies, 25M examples) and WikiText-103 (267K unique words, 103M tokens). For each epoch on MovieLens 25M, standard MF takes 165s on a GTX 980 Ti GPU while ANT takes 176s for |A| = 5 and 180s for |A| = 20. ANT also scales to the largest recommendation dataset, Amazon reviews, with 25M users and 9M products.
5 CONCLUSION
This paper presented ANCHOR & TRANSFORM to learn sparse embeddings of large vocabularies using a small set of anchor embeddings and a sparse transformation from anchors to all objects. We also showed a statistical interpretation via integrating IBP priors with neural representation learning. Asymptotic analysis of the likelihood using SVA yields an extension that automatically learns the optimal number of anchors. On text classification, language modeling, and recommender systems, ANT outperforms existing approaches with respect to accuracy and sparsity.
B DERIVATION OF OBJECTIVE FUNCTION VIA SVA
In this section we derive our objective function using Small Variance Asymptotics (SVA) (Jiang et al., 2012). Recall that the generative process in our model is given by:
• Z ∈ R^{|V|×K} ∼ IBP(a, b)
• A ∈ R^{K×d} ∼ P(A) = N(0, 1)
• W ∈ R^{|V|×K} ∼ P(W) = Exponential(1)
• for i = 1, ⋯, N:
  – ŷ_i = f_θ(x_i; (Z ∘ W)A)
  – y_i ∼ p(y_i | x_i; Z, W, A) = exp{−D_φ(y_i, ŷ_i)} b_φ(y_i)
The joint log-likelihood under the generative model above is therefore:

log p(Y, Z, W, A | X) ∝ ∑_i log p(y_i | x_i, Z, W, A) + log p(Z) + log p(W) + log p(A)
                      = ∑_i {−D_φ(y_i, f_θ(x_i, (Z ∘ W)A)) + log b_φ(y_i)} + log p(Z) + log p(W) + log p(A).   (10)
To use SVA, an approximate objective function for finding point estimates is obtained by taking the limit of the emission probability variances down to zero. We begin by introducing a scaling variable β and shrinking the variance of the emission probability to 0 by taking β → ∞. The scaled emission probability becomes

p(y_i | x_i, Z, W, A) = exp{−β D_φ(y_i, ŷ_i)} b_{βφ}(y_i).   (11)

Following Broderick et al. (2013a), we modulate the number of features in the large-β limit by choosing constants λ_1 > λ_2 > 0 and setting the IBP hyperparameters with β as follows:

a = exp(−βλ_1),   b = exp(βλ_2).   (12)

This prevents a limiting objective function that favors a trivial cluster assignment (every data point assigned to its own separate feature).
We now take the limit of the log-likelihood term by term:

lim_{β→∞} (1/β) log p(Y, A, W, Z | X)   (13)
= ∑_i lim_{β→∞} (1/β) log p(y_i | x_i, Z, W, A) + lim_{β→∞} (1/β) log p(Z) + lim_{β→∞} (1/β) log p(W) + lim_{β→∞} (1/β) log p(A).   (14)

• lim_{β→∞} (1/β) log p(y_i | x_i, Z, W, A) = lim_{β→∞} (1/β)(−β D_φ(y_i, ŷ_i) + log b_{βφ}(y_i)) = −D_φ(y_i, ŷ_i) + O(1).
• lim_{β→∞} (1/β) log p(Z) = −λ_2 ∥Z∥_0 − (λ_1 − λ_2)K (derived below).
• lim_{β→∞} (1/β) log p(W) = 0 if W ≥ 0, and −∞ otherwise.
• lim_{β→∞} (1/β) log p(A) = 0, since log p(A) = O(1).
For convenience, we rewrite the limit of the IBP prior as

lim_{β→∞} (1/β) log p(Z)
  = lim_{β→∞} (1/β) log [ (ab)^K / ∏_{h=1}^{2^{|V|}−1} K_h! ]                          (a)
  + lim_{β→∞} (1/β) log exp(−ab H_{|V|})                                               (b)
  + ∑_{k=1}^{K} lim_{β→∞} (1/β) log [ Γ(m_k) Γ(|V| − m_k + b) / Γ(|V| + b) ].   (15)   (c)
For part (a):

lim_{β→∞} (1/β) log [ (ab)^K / ∏_{h=1}^{2^{|V|}−1} K_h! ]
  = lim_{β→∞} (1/β) log [ exp(−β(λ_1 − λ_2)K) / ∏_{h=1}^{2^{|V|}−1} K_h! ]
  = lim_{β→∞} (1/β) · (−β(λ_1 − λ_2)K) − lim_{β→∞} (1/β) · O(1)
  = −(λ_1 − λ_2)K.   (16)
For part (b), where H_{|V|} = ∑_{j=1}^{|V|} 1/(b + j − 1):

lim_{β→∞} (1/β) log exp(−ab H_{|V|}) = lim_{β→∞} (1/β) · (−ab H_{|V|})
  = lim_{β→∞} −(exp(−β(λ_1 − λ_2)) / β) · ∑_{j=1}^{|V|} 1/(exp(βλ_2) + j − 1)
  = 0.   (17)
For part (c):

lim_{β→∞} (1/β) log [ Γ(m_k) Γ(|V| − m_k + b) / Γ(|V| + b) ]
  = lim_{β→∞} (1/β) log Γ(m_k) − lim_{β→∞} (1/β) ∑_{j=1}^{m_k} log(|V| − j + b)
  = 0 − ∑_{j=1}^{m_k} lim_{β→∞} (1/β) log(|V| − j + exp(βλ_2))
  = −∑_{j=1}^{m_k} λ_2
  = −λ_2 m_k.   (18)
We know that m_k is the number of objects which use anchor k, i.e., the number of non-zero entries in the k-th column of Z. Summing over all k gives the total number of non-zero entries in Z, which is exactly the ℓ_0 norm of Z, i.e., ∥Z∥_0.
Therefore, the MAP estimate under SVA, given by

max lim_{β→∞} (1/β) log p(Y, A, W, Z | X),   (19)

is equivalent to optimizing the following objective function:

max_{Z∈{0,1}^{|V|×K}, W≥0, A, θ, K} ∑_i −D_φ(y_i, f_θ(x_i, (Z ∘ W)A)) − λ_2 ∥Z∥_0 − (λ_1 − λ_2)K,   (20)

where the exponential prior for W results in a limiting non-negative domain for W. Note that we can combine the optimizing variables Z and W with their constraints into one variable T ≥ 0, and switch from a maximization to a minimization problem by absorbing the negative sign. Finally, we arrive at the desired objective:

min_{T≥0, A, θ, K} ∑_i D_φ(y_i, f_θ(x_i, TA)) + λ_2 ∥T∥_0 + (λ_1 − λ_2)K.   (21)
C EXPONENTIAL FAMILY DISTRIBUTIONS AS BREGMAN DIVERGENCES
In this section we provide some results relating exponential family distributions and Bregman divergences. As a result, we can relate the likelihood models from Sec. 3.2 to appropriate Bregman divergences. Thus, a probabilistic observation model can be translated to a loss function minimizing the Bregman divergence, which is more amenable to deep network training using gradient-based methods. We begin by defining the Bregman divergence and stating the relationship formally in Theorem 1.

Definition 1. (Bregman, 1967) Let φ : S → R, S = dom(φ), be a strictly convex function defined on a convex set S ⊂ R^d such that φ is differentiable on ri(S), assumed to be non-empty. The Bregman divergence D_φ : S × ri(S) → [0, ∞) is defined as

D_φ(x, y) = φ(x) − φ(y) − ⟨x − y, ∇φ(y)⟩,   (22)

where ∇φ(y) represents the gradient vector of φ evaluated at y.

Theorem 1. (Banerjee et al., 2005) There is a bijection between regular exponential families and regular Bregman divergences. In particular, any exponential family distribution p(x|θ) = p_0(x) exp(⟨x, θ⟩ − g(θ)) can be written as p(x|µ) = exp(−D_φ(x, µ)) b_φ(x), where φ is the Legendre dual of the log-partition function g(θ) and µ = ∇_θ g(θ).
From Theorem 1, we can see that maximizing the log-likelihood log p(x|θ) is the same as minimizing the Bregman divergence D_φ(x, µ). Note that we can ignore b_φ(x), as it depends only on the observed data and not on any parameters. We now illustrate some common examples of exponential families (like Gaussian and categorical), derive their corresponding Bregman divergences, and connect them to the usual loss functions used in deep networks (like MSE and cross-entropy).
Example 1: Gaussian distribution. (Banerjee et al., 2005) We start with the unit-variance spherical Gaussian distributions with mean µ, which have densities of the form:

p(x; µ) = (1/√((2π)^d)) exp(−(1/2) ∥x − µ∥_2^2).   (23)

Using the log-partition function for the Gaussian distribution, we can calculate that φ(x) = (1/2) ∥x∥_2^2, which yields a Bregman divergence equal to:

D_φ(x, µ) = φ(x) − φ(µ) − ⟨x − µ, ∇φ(µ)⟩   (24)
          = (1/2) ∥x∥_2^2 − (1/2) ∥µ∥_2^2 − ⟨x − µ, µ⟩   (25)
          = (1/2) ∥x − µ∥_2^2,  the mean squared error.   (26)

Thus, D_φ(x, µ), together with the constant b_φ(x) given by

b_φ(x) = 1/√((2π)^d),   (27)

recovers the Gaussian density p(x) = exp(−D_φ(x, µ)) b_φ(x). Therefore, when we assume that labels have a Gaussian emission model, the corresponding Bregman divergence D_φ(x, µ) = (1/2)∥x − µ∥_2^2 recovers the squared loss commonly used for regression.
Example 2: Multinomial distribution. (Banerjee et al., 2005) Another exponential family that is widely used is the family of multinomial distributions:

p(x, q) = (N! / ∏_{j=1}^{d} x_j!) ∏_{j=1}^{d} q_j^{x_j},   (28)

where x_j ∈ Z_+ are frequencies of events, ∑_{j=1}^{d} x_j = N, and q_j ≥ 0 are probabilities of events with ∑_{j=1}^{d} q_j = 1. The multinomial density can be expressed as the density of an exponential distribution in x = {x_j}_{j=1}^{d−1} with natural parameter θ = (log(q_j/q_d))_{j=1}^{d−1}, cumulant function g(θ) = −N log q_d, and expectation parameter µ = ∇g(θ) = [N q_j]_{j=1}^{d−1}. The Legendre dual φ of g is given by

φ(µ) = N ∑_{j=1}^{d} (µ_j/N) log(µ_j/N) = N ∑_{j=1}^{d} q_j log q_j.   (29)

As a result, the multinomial density can be expressed as a Bregman divergence equal to:

D_φ(x, µ) = ∑_{j=1}^{d} x_j log x_j − ∑_{j=1}^{d} x_j log µ_j,   (30)

where the first term is constant in µ and the second term is the cross-entropy loss, and constant b_φ(x) given by

b_φ(x) = (∏_{j=1}^{d} x_j^{x_j} / N^N) · (N! / ∏_{j=1}^{d} x_j!),   (31)

which recovers the multinomial density p(x) = exp(−D_φ(x, µ)) b_φ(x). Therefore, when the labels are generated from a multinomial distribution, the corresponding Bregman divergence D_φ(x, µ) = −∑_{j=1}^{d} x_j log µ_j + constant recovers the cross-entropy loss commonly used for classification.
D LEARNING THE ANCHOR EMBEDDINGS A
Here we provide several other strategies for initializing the anchor embeddings:
• Sparse lasso and variational dropout (Chen et al., 2019): Given the strong performance of sparse lasso and variational dropout as vocabulary selection methods, it would be interesting to use them to first select the important task-specific words before jointly learning their representations and their transformations to other words. However, sparse lasso and variational dropout require first training a model to completion, unlike frequency- and clustering-based vocabulary selection methods that can be performed during data preprocessing.
• Coresets involve constructing a reduced data set which can be used as a proxy for the full data set, with provable guarantees such that the same algorithm run on the coreset and the full data set gives approximately similar results (Phillips, 2016; Har-Peled & Mazumdar, 2004). Coresets can be computed approximately and quickly (Bachem et al., 2017) and can be used to initialize the set of anchors A.
In general, there is a trade-off between how quickly we can choose the anchor objects and their performance. Randomly picking anchor objects (which is equivalent to initializing the anchor embeddings with dynamic basis vectors) becomes similar to learning a low-rank factorization of the embedding matrix (Sedov & Yang, 2018), which works well in general but can be improved for task-specific applications or with domain knowledge. Stronger vocabulary selection methods like variational dropout and group lasso would perform better but take significantly longer to learn. We found that intermediate methods such as frequency and clustering, combined with WordNet/co-occurrence information, work well while ensuring that the preprocessing and training stages remain relatively quick.
In Appendix K we provide more results for different initialization strategies, including those based on clustering. In general, performance is robust with respect to the choice of A among the strategies considered (i.e., random, frequency, and clustering). While frequency and clustering work better, using a set of dynamic basis embeddings still gives strong performance, especially when combined with domain knowledge from WordNet and co-occurrence statistics. This implies that when the user has more information about the discrete objects (e.g., a good representation space in which to perform clustering), they should use it. However, for a completely new set of discrete objects, simply using low-rank basis embeddings with sparsity also works well.
E TRANSFORM: LEARNING A SPARSE T
In addition to a simple sparse linear transformation, we describe some extensions that improve the sparsity and expressivity of the learned representations.

Reducing redundancy in representations: To further reduce redundancy in our sparse representations, we perform orthogonal regularization of the dynamic basis vectors A by adding the loss term L(A) = ∑_{i≠j} |a_i^⊤ a_j| to the loss function in eq (1). This ensures that different basis vectors a_i and a_j are orthogonal rather than linear combinations of one another, which would lead to redundancies across different learnt entries in T.

Mixture of anchors: In general, different initialization strategies may bring about different advantages. For example, using a mixture of random basis vectors has been shown to help model multi-sense embeddings (Athiwaratkun et al., 2018; Nguyen et al., 2017). One can define a set of M anchor embeddings A_1, ..., A_M, each initialized by different strategies and of possibly different sizes.

Nonlinear mixture of transformations: To complement learning multiple sets of anchor embeddings A_1, ..., A_M, the straightforward extension of the TRANSFORM step would be to learn a separate linear transformation for each anchor embedding and sum the results: E = ∑_{m=1}^{M} T_m A_m. However, the expressive power of this linear combination is equivalent to a single set of anchor embeddings obtained by concatenating A_1, ..., A_M with one linear transformation. To truly exhibit the advantage of multiple anchors, we transform and combine them in a nonlinear fashion, e.g., E = ∑_{m=1}^{M} softmax(T_m) A_m (softmax over the rows of T_m, Figure 5). Different transformations can be learned for different initializations of anchors. This is connected with the multi-head attention mechanism in the Transformer (Vaswani et al., 2017), where softmax(T_m) are the softmax-activated (sparse) attention weights and A_m the values to attend over. The result is an embedding matrix formed via a nonlinear mixture of anchors (each initialized with different strategies) and sparse transformations.
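A minimal PyTorch sketch of these two extensions; dense T_m tensors are used here for clarity, whereas in practice the transformations would be stored sparsely:

```python
import torch

def orthogonal_penalty(A):
    """L(A) = sum_{i != j} |a_i^T a_j|: discourage redundant basis vectors."""
    gram = A @ A.t()
    return (gram - torch.diag(torch.diag(gram))).abs().sum()

def nonlinear_mixture(Ts, As):
    """E = sum_m softmax(T_m) A_m, softmax over the rows of each T_m."""
    return sum(torch.softmax(T, dim=1) @ A for T, A in zip(Ts, As))
```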
F INCORPORATING DOMAIN KNOWLEDGE
ANT also allows incorporating domain knowledge about object relationships. Suppose we are given some relationship graph G = (V, E) where each object is a vertex v ∈ V and an edge (u, v) ∈ E exists between objects u and v if they are related. Real-world instantiations of such a graph include 1) WordNet (Miller, 1995) or ConceptNet (Liu & Singh, 2004) for semantic relations between words, 2) word co-occurrence matrices (Haralick et al., 1973), and 3) movie clustering datasets (Leskovec & Krevl, 2014). From these graphs, we extract related positive pairs P = {(u, v) ∈ E} and unrelated negative pairs N = {(u, v) ∉ E}. We incorporate domain information as follows (see Figure 6 for a visual example):

Positive pairs: To incorporate a positive pair (u, v), we do not enforce sparsity on T_{u,v}. This allows ANT to freely learn the transformation between related objects u and v without being penalized for sparsity, while transformations between negative pairs remain sparsely penalized. In other words, before computing the ℓ_1-penalty, we element-wise multiply T with a domain sparsity matrix S(G), where S(G)_{u,v} = 0 for (u, v) ∈ P (entries not ℓ_1-penalized) and S(G)_{u,v} = 1 otherwise (entries ℓ_1-penalized), resulting in the modified objective:

min_{T≥0, A, θ} ∑_i D_φ(y_i, f_θ(x_i, TA)) + λ_2 ∥T ⊙ S(G)∥_1.   (32)

Since we perform proximal GD, this is equivalent to only soft-thresholding the entries between unrelated objects, i.e., T = max{(T − ηλ_2) ⊙ S(G) + T ⊙ (1 − S(G)), 0}. Note that this strategy is applicable when anchors are selected using the frequency method.

Negative pairs: For negative pairs, we add an additional constraint that unrelated pairs should not share entries in the linear combination coefficients of their anchor embeddings. In other words, we
add the loss term

L(T, N) = ∑_{(u,v)∈N} |t_u|^⊤ |t_v|   (33)

to the loss in eq (1), where each inner product discourages t_u and t_v from sharing similar entries. This strategy can be used regardless of the way anchors are selected. We acknowledge that there are other ways to incorporate domain knowledge into the general ANT framework; these serve as initial examples of such methods.
G NONPARAMETRIC ANCHOR & TRANSFORM
In this section we provide details for our nonparametric extension of ANT. Recall that the full objective function derived via small variance asymptotics is given by:

min_{T≥0, A, θ, K} ∑_i D_φ(y_i, f_θ(x_i; TA)) + λ_2 ∥T∥_0 + (λ_1 − λ_2)K,   (34)
which suggests a natural objective function for learning representations that minimize the prediction loss D_φ(y_i, f_θ(x_i; TA)) while ensuring sparsity of T as measured by the ℓ_0-norm and using as few anchors as possible (K). Therefore, optimizing eq (34) gives rise to a nonparametric version of ANT, which we call NBANT, that automatically learns the optimal number of anchors. To perform optimization over the number of anchors, our algorithm starts with a small initial number of anchors K = |A| = 10 and either adds ∆K anchors (i.e., adding ∆K new rows to A and ∆K new sparse columns to T) or deletes ∆K anchors to minimize eq (34) at every epoch, depending on the trend of the objective evaluated on the validation set. We detail the full Algorithm 2 and highlight the main changes as compared to ANT.
Practically, this algorithm involves the same number of training epochs and batches per epoch as the vanilla ANT method. To enable sharing of trained anchors, we change the indices from which A and T are read so that partially trained removed anchors are still stored, in case more anchors need to be added again.
H EFFICIENT LEARNING AND INFERENCE
The naive method for learning E from anchor embeddings A and the sparse transformations T still scales linearly with ∣V ∣ × d. Here we describe some tips on how to perform efficient learning and inference of the anchor embeddings A and the sparse transformations T:
• Store T as a sparse matrix by only storing its non-zero entries and indices. Our experiments show that nnz(T) << |V| × d, which makes storage efficient.
• For inference, use sparse matrix multiplication, as supported in TensorFlow and PyTorch, to compute E = TA (or its nonlinear extensions). This decreases the running time from scaling with |V| × d to scaling with nnz(T). For training, the built-in sparse representations of most deep learning frameworks such as PyTorch or TensorFlow are not optimal, since they do not support changing the non-zero locations in a sparse matrix, and a priori it is not easy to find the optimal set of non-zero locations.
Algorithm 2 NBANT: Nonparametric Bayesian ANT. Differences from ANT are highlighted in red.
ANCHOR & TRANSFORM:
 1: Anchor: initialize initial K = |A| and corresponding anchor embeddings A ∈ R^{K×d}.
 2: Transform: initialize T ∈ R^{|V|×K} as a sparse matrix.
 3: for each epoch do
 4:   for each batch (X, Y) do
 5:     Compute loss L = ∑_i D_φ(y_i, f_θ(x_i; TA))
 6:     A, T, θ = UPDATE(∇L, η).
 7:     T = max{T − ηλ_2, 0}.
 8:   end for
 9:   Compute eq (34) using the current values of K, A, T on the validation set.
10:   if eq (34) is on a decreasing trend then
11:     K = K + ∆K; add ∆K rows to A and ∆K (sparse) columns to T.
12:   else if eq (34) is on an increasing trend then
13:     K = K − ∆K; remove ∆K rows from A and ∆K (sparse) columns from T.
14:   else
15:     keep the current values of K, A, T.
16:   end if
17: end for
18: return anchor embeddings A and transformations T.
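A minimal sketch of the grow/shrink rule in lines 10-16 of Algorithm 2, assuming dense tensors for clarity; in practice removed anchors are cached (as noted above) rather than discarded, and the per-epoch training and validation logic is handled elsewhere:

```python
import torch

def nbant_epoch_update(A, T, history, delta_K, d):
    """Grow or shrink the anchor set based on the trend of eq (34).

    history: values of eq (34) on the validation set, one per epoch.
    """
    if len(history) >= 2 and history[-1] < history[-2]:    # decreasing: grow
        A = torch.cat([A, torch.randn(delta_K, d)], dim=0)
        T = torch.cat([T, torch.zeros(T.shape[0], delta_K)], dim=1)
    elif len(history) >= 2 and history[-1] > history[-2]:  # increasing: shrink
        A, T = A[:-delta_K], T[:, :-delta_K]
    return A, T                                            # else: keep K as is
```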
• During training, instead, implicitly construct E from its anchors and transformations. In fact, we can do better: instead of constructing the entire E matrix to embed a single datapoint x ∈ R^{1×|V|}, we first index x into T, i.e., xT ∈ R^{1×|A|}, before performing a sparse matrix multiplication with A, i.e., (xT)A ∈ R^{1×d}. We are essentially taking advantage of the associativity of matrix multiplication and the fact that xT is a simple indexing step while (xT)A is an effective sparse matrix multiplication (see the sketch after this list). To enable fast row slicing into the sparse matrix, we store the matrix in adjacency-list or COO format (we move away from CSR since adding or deleting a non-zero location is very expensive). When the gradient comes back, we only update the corresponding rows in T; the gradient is sparse as well due to the ℓ_1-prox operator.
• The above trick solves the problem for tasks where the embedding is used only at the input, e.g., classification. For tasks like language modeling, where the embedding is used at the output as well, one can combine the trick with speedup techniques such as softmax sampling (Bengio & Senecal, 2008; Mikolov et al., 2013) or noise-contrastive estimation (Gutmann & Hyvarinen, 2010; Mnih & Teh, 2012), which would be used anyway for large vocabulary sizes. To elaborate, consider sampled softmax (Bengio & Senecal, 2008): we generate the negative sample indices, then first index into T using the true and negative indices before performing a sparse matrix multiplication with A. This way we do not have to instantiate the entire E via an expensive matrix multiplication.
• When training is completed, only store the non-zero entries of T, or store T as a sparse matrix, to reconstruct E for inference.
• To save time when initializing the anchor embeddings and incorporating domain knowledge, precompute the necessary statistics such as frequency statistics, co-occurrence statistics, and object relation statistics. We use a small context size of 10 to measure the co-occurrence of two words to save time. When using WordNet to discover word relations, we only search for immediate relations between words instead of propagating relations across multiple steps (although this could further improve performance).
• In order to incorporate domain knowledge in the sparsity structure, we again store 1 − S(G) using sparse matrices. Recall that S(G) has an entry equal to 1 for entries representing unrelated objects that should be ℓ_1-penalized, which makes S(G) quite dense since most anchor and non-anchor objects are unrelated. Hence we store 1 − S(G) instead, which has non-zero entries only at (non-anchor, anchor) positions for related objects. Element-wise multiplications are also replaced by sparse element-wise multiplications when computing T ⊙ S(G) and T ⊙ (1 − S(G)).
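The batched embedding trick from the list above can be sketched as follows (hypothetical sizes; index_select on a COO tensor stands in for an efficient sparse row-slice kernel):

```python
import torch

V, K, d = 10000, 100, 64                    # hypothetical sizes
A = torch.randn(K, d)
T = torch.rand(V, K)
T[T < 0.95] = 0.0                           # pretend training left T sparse
T_sparse = T.to_sparse()                    # COO: indices + values only

x = torch.tensor([3, 17, 42])               # batch of object indices
rows = torch.index_select(T_sparse, 0, x)   # cheap row slice: (batch, K) sparse
emb = torch.sparse.mm(rows, A)              # (xT)A without materializing E = TA
```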
I GENERALITY OF ANT
We show that under certain structural assumptions on the anchor embeddings and transformation matrices, ANT reduces to the following task-specific methods for learning sparse representations: 1) Frequency (Chen et al., 2016b), TF-IDF, Group Lasso (Wen et al., 2016), and variational dropout (Chen et al., 2019) based vocabulary selection, 2) Low-rank factorization (Grachev et al., 2019), and 3) Compositional code learning (Shu & Nakayama, 2018; Chen et al., 2018). Hence, ANT is general and unifies some of the work on sparse representation learning done independently in different research areas.
Frequency-based vocabulary selection (Luong et al., 2015; Chen et al., 2016b): Initialize A with the |A| most frequent objects and set T_{a,a} = 1 for all a ∈ A, with T = 0 otherwise. Then E = TA consists of embeddings of the |A| most frequent objects, with zero embeddings for all others. During training, gradients are used to update A but not T (i.e., only embeddings for frequent objects are learned). By changing the selection of A, ANT also reduces to other vocabulary selection methods such as TF-IDF (Ramos, 1999), Group Lasso (Wen et al., 2016), and variational dropout (Chen et al., 2019).
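As a sketch, this reduction amounts to a particular frozen initialization of T (the helper below is a hypothetical illustration; counts is a tensor of object frequencies):

```python
import torch

def frequency_ant_init(counts, n_anchors, d):
    """T_{a,a} = 1 for the n_anchors most frequent objects, 0 elsewhere;
    E = TA then contains embeddings only for frequent objects, and
    gradients update A alone when T is frozen."""
    V = counts.shape[0]
    anchors = torch.argsort(counts, descending=True)[:n_anchors]
    A = torch.randn(n_anchors, d)
    T = torch.zeros(V, n_anchors)
    T[anchors, torch.arange(n_anchors)] = 1.0
    return A, T
```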
Low-rank factorization (Acharya et al., 2019; Markovsky, 2011; Grachev et al., 2019): Initialize A by a mixture of random basis embeddings (just 1 anchor per set) A_1, ..., A_M ∈ R^{1×d} and do not enforce any sparsity on the transformations T_1, ..., T_M ∈ R^{|V|×1}. If we further restrict ourselves to only linear combinations E = ∑_{m=1}^{M} T_m A_m, this is equivalent to implicitly learning the M low-rank factors a_1, ..., a_M, t_1, ..., t_M that reconstruct embedding matrices of rank at most M.
Compositional code learning (Shu & Nakayama, 2018; Chen et al., 2018): Initialize A by a mixture of random basis embeddings A_1, ..., A_M, initialize transformations T_1, ..., T_M, and apply a linear combination E = ∑_{m=1}^{M} T_m A_m. For sparsity regularization, set row i of S(G)_m as a reverse one-hot vector with entry d_{mi} = 0 and all else 1. In other words, index d_{mi} of row T_{mi} is not regularized, and all other entries are ℓ_1-regularized with extremely high λ_2 such that row T_{mi} essentially becomes a one-hot vector with dimension d_{mi} = 1. This results in learning a codebook where each object in V is mapped to only one anchor in each mixture.
Therefore, ANT encompasses several popular methods for learning sparse representations, and gives further additional flexibility in defining various initialization strategies, applying nonlinear mixtures of transformations, and incorporating domain knowledge via object relationships.
J EXPERIMENTAL DETAILS
Here we provide more details for our experiments including hyperparameters used, design decisions, and comparison with baseline methods. We also include the anonymized code in the supplementary material.
J.1 TEXT CLASSIFICATION

Base CNN model: For all text classification experiments, the base model is a CNN (Lecun et al., 1998) with layers of 2D convolutions and 2D max pooling, before a dense layer to the output softmax. The code was adapted from https://github.com/wenhuchen/Variational-Vocabulary-Selection and the architecture hyperparameters are provided in Table 6. The only differences are the output dimensions: 4 for AG-News, 14 for DBPedia, 5 for Sogou-News, and 5 for Yelp-review.
Anchor: We experiment with dynamic, frequency, and clustering initialization strategies. The number of anchors |A| is a hyperparameter selected using the validation set, with range {10, 20, 50, 80, 100, 500, 1,000}. Smaller values of |A| allow fewer anchors and a smaller transformation matrix T at the expense of performance.
Transformation: We experiment with sparse linear transformations for T. λ_2 is a hyperparameter selected using the validation set; larger values of λ_2 give more sparse entries in T at the expense of performance. For experiments on dynamic mixtures, we use a softmax-based nonlinear combination E = ∑_{m=1}^{M} softmax(T_m) A_m, where the softmax is performed over the rows of T_m. Note that applying a softmax activation to the rows of T_m makes all entries dense, so during training we store the T_m as sparse matrices (which is efficient since each T_m has few non-zero entries) and implicitly reconstruct E.
Domain knowledge: When incorporating domain knowledge in ANT, we use both WordNet and co-occurrence statistics. For WordNet, we use the public WordNet interface provided by NLTK (http://www.nltk.org/howto/wordnet.html). For each word we search for its immediate related words among its hypernyms, hyponyms, synonyms, and antonyms; this defines the relationship graph. For co-occurrence statistics, we use a co-occurrence context size of 10 on the training data: two words are defined to be related if they co-occur within this context size.
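A sketch of the immediate-relation extraction using NLTK's WordNet interface (assumes the wordnet corpus has been downloaded via nltk.download('wordnet')):

```python
from nltk.corpus import wordnet as wn

def immediate_relations(word):
    """Words one WordNet step away: synonyms, antonyms, hypernyms, hyponyms."""
    related = set()
    for syn in wn.synsets(word):
        for lemma in syn.lemmas():
            related.add(lemma.name())                          # synonyms
            related.update(a.name() for a in lemma.antonyms()) # antonyms
        for rel in syn.hypernyms() + syn.hyponyms():
            related.update(l.name() for l in rel.lemmas())
    related.discard(word)
    return related
```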
A note on baselines: Note that the reported results on SPARSEVD and SPARSEVD-VOC (Chirkova et al., 2018) have a different embedding size: 300 instead of 256. This is because they use pre-trained word2vec or GloVe embeddings to initialize their model before compression is performed.
J.2 LANGUAGE MODELING ON PTB
Base LSTM model: Our base model is a 2-layer LSTM with an embedding size of 200 and hidden layer size of 200. The code was adapted from https://github.com/salesforce/awd-lstm-lm and the full table of hyperparameters is provided in Table 7.
Base AWD-LSTM model: In addition to experiments on a vanilla LSTM model as presented in the main text, we also performed experiments using a 3-layer AWD-LSTM with an embedding size of 400 and hidden layer size of 1,150. The full hyperparameters used can be found in Table 8.
Anchor: We experiment with dynamic, frequency, and clustering initialization strategies. The number of anchors |A| is a hyperparameter selected using the validation set, with range {10, 20, 50, 80, 100, 500, 1,000}. Smaller values of |A| allow fewer anchors and a smaller transformation matrix T at the expense of performance.
Domain knowledge: When incorporating domain knowledge in ANT, we use both WordNet and co-occurrence statistics. For WordNet, we use the public WordNet interface provided by NLTK (http://www.nltk.org/howto/wordnet.html). For each word we search for its immediate related words among its hypernyms, hyponyms, synonyms, and antonyms; this defines the relationship graph. For co-occurrence statistics, we use a co-occurrence context size of 10 on the training data: two words are defined to be related if they co-occur within this context size.
A note on baselines: We also used some of the baseline results as presented in Grachev et al. (2019). Their presented results differ from our computations in two aspects: they include the LSTM parameters on top of the embedding parameters, and they count the embedding parameters twice since they do not perform weight tying (Press & Wolf, 2017) (see equation (6) of Grachev et al. (2019)). To account for this, the results of SPARSEVD and SPARSEVD-VOC (Chirkova et al., 2018), as well as the results of the various LR and TT low-rank compression methods (Grachev et al., 2019), were modified by subtracting off the LSTM parameters (200 × 200 × 16). This is derived since each of the 8 weight matrices W_{i,f,o,c}, U_{i,f,o,c} in an LSTM layer is of size 200 × 200, and there are 2 LSTM layers. We then divide by two to account for weight tying. In the main text, we compared with the strongest baselines as reported in Grachev et al. (2019): these were the methods that perform low-rank decomposition on the input embedding (|V| × d), the output embedding (d × |V|), and the intermediate hidden layers of the model. For full results, please refer to Grachev et al. (2019).
Note that the reported results on SPARSEVD and SPARSEVD-VOC (Chirkova et al., 2018) have a different embedding size and hidden layer size of 256 instead of 200, although these numbers are close enough for fair comparison. In our experiments we additionally implemented an LSTM with an embedding size of 256 and hidden layer size of 256 so that we can directly compare with their reported numbers.
For baselines that perform post-processing compression of the embedding matrix, POST-SPARSE HASH (post-processing using sparse hashing) (Guo et al., 2017) and POST-SPARSE HASH+k-SVD (improving sparse hashing using k-SVD) (Guo et al., 2017; Awasthi & Vijayaraghavan, 2018), we choose two settings: the first using 500 anchors and 10 nearest neighbors to these anchor points, and the second using 1,000 anchors and 20 nearest neighbors. The first model uses 500 × d + ∣V ∣ × 10 non-zero embedding parameters while the second model uses 1,000 × d + ∣V ∣ × 20 parameters. For AWD-LSTM on PTB, this is equivalent to 0.3M and 0.6M embedding parameters respectively which is comparable to the number of non-zero parameters used by our method.
J.3 LANGUAGE MODELING ON WIKITEXT-103
Base AWD-LSTM model: Our base model is a 4-layer AWD-LSTM with an embedding size of 400 and hidden layer size of 2,500. The code was adapted from https://github.com/salesforce/awd-lstm-lm and the hyperparameters used can be found in Table 9.
A note on baselines: While Baevski & Auli (2019) adapt embedding dimensions according to word frequencies, their goal is not to compress embedding parameters and they use 44.9M (dense) parameters in their adaptive embedding layer, while we use only 2M. Their embedding parameters are calculated by their reported bucket sizes and embedding sizes (three bands of size 20K (d = 1024), 40K (d = 256) and 200K (d = 64)). Their perplexity results are also obtained using a Transformer model with 250M params while our AWD-LSTM model uses 130M params.
For the HASH EMBED baseline that retains the frequent k words and hashes the remaining words into 1,000 OOV buckets (Svenstrup et al., 2017), we vary k ∈ {1×10^5, 5×10^4, 1×10^4} to obtain results across various parameter settings.
J.4 MOVIE RECOMMENDATION ON MOVIELENS
Base MF model: We show the hyperparameters used for the MF model in Table 10. We use the Yogi optimizer (Zaheer et al., 2018) to learn the parameters.
ANT and NBANT: We build ANT on top of the MF model while keeping the base hyperparameters constant. For ANT, we apply compression to both movie and user embedding matrices individually. NBANT involves defining the starting value of K = |A| and a ∆K value which determines the rate of increase or decrease in K. For MovieLens 25M we use a larger initial |A| and ∆K since it is a larger dataset and takes longer to train, so we wanted the increase and decrease in anchors to be faster (see Table 10). Beyond this initial setting, we found that performance is robust with respect to the initial values of K and ∆K, so we did not tune these parameters. In practice, we tie the updates of the number of user anchors and movie anchors instead of optimizing over both independently: we start with the same number of initial user and movie anchors and increment or decrement them by the same ∆K at the same time. We found that this simplification did not affect performance, and NBANT was still able to find an optimal number of anchors for a good trade-off between performance and compression.
K MORE RESULTS
In the

1. What is the main contribution of the paper, and how does it differ from previous works in the field?
2. How does the proposed method, ANT, solve the problem of learning sparse embeddings, and what are its advantages over dense counterparts?
3. Can you explain the pipeline of the end-to-end training process of ANT and how it differs from other methods?
4. How does the statistical interpretation of the approach, using a generative formulation of the embedding vectors in terms of the latent vectors, contribute to the understanding of the method's effectiveness?
5. What are the strengths and weaknesses of the experimental results, particularly in comparison to other methods in the same domain?
6. Are there any limitations or potential drawbacks to the proposed method that could be explored further in future research?

Review
This paper proposes ANT to solve the problem of learning sparse embeddings instead of dense counterparts for tasks like text classification, language modeling and recommendation systems. When the vocabulary size |V| runs into several 100Ks or millions, it is impractical to store one dense vector per label. Hence the paper proposes to only store a few anchor/latent vectors (the matrix is A with |A|<<|V|). All label vectors are expressed as linear combinations of a 'few' anchor vectors. To train this end-to-end, we need a transformation matrix T such that T*A = E (E is the V×d embedding matrix). T has to be structured, i.e., each row of T has to be sparse and positive only (although negative weights are also fine; I'm not sure if weight redundancy is that important).
This pipeline is trained end-to-end using the YOGI optimizer for regular gradient updates and proximal gradient descent for T, which does soft thresholding with a lower bound of 0 (accomplishing both the sparsity and positivity parts).
This design admits multiple ways of initializing the anchors A. And the authors perform experiments with both frequent token vectors and random anchor vectors (both have their merits, random seems to be a robust choice).
The authors provide a statistical interpretation of their approach using a generative formulation of the embedding vectors in terms of the latent vectors (using an Indian Buffet Process membership matrix Z).
The experiments span two major domains, NLP and information retrieval. Across multiple NLP datasets, ANT outperforms Sparse-Coding (Chen et al. 2016) and Post-Sparse-Hash (Guo et al. 2017). On the IR task with the MovieLens dataset, the primary comparison is against SLIMMING (Liu et al. 2017). While gains are substantial on the NLP tasks, they seem minimal on the MovieLens task.
I've listed most pros above. The cons are here:
The idea seems a little similar to Compositional Embeddings (Shi et al. 2020, Ginart et al. 2019). It might warrant a discussion or comparison.
There are other sparse embedding methods like SNRM (Zamani et al. 2018) and SOLAR - Sparse Orthogonal ... (Medini et al. 2020) which might be comparison candidates, at least for IR tasks.
The precision in Table 1 for ANT anomalously increases when |A| is reduced. Any explanation as to why this happens? The information bottleneck is supposed to reduce precision, right?
Title
Anchor & Transform: Learning Sparse Embeddings for Large Vocabularies
Abstract
Learning continuous representations of discrete objects such as text, users, movies, and URLs lies at the heart of many applications including language and user modeling. When using discrete objects as input to neural networks, we often ignore the underlying structures (e.g., natural groupings and similarities) and embed the objects independently into individual vectors. As a result, existing methods do not scale to large vocabulary sizes. In this paper, we design a simple and efficient embedding algorithm that learns a small set of anchor embeddings and a sparse transformation matrix. We call our method ANCHOR & TRANSFORM (ANT) as the embeddings of discrete objects are a sparse linear combination of the anchors, weighted according to the transformation matrix. ANT is scalable, flexible, and end-to-end trainable. We further provide a statistical interpretation of our algorithm as a Bayesian nonparametric prior for embeddings that encourages sparsity and leverages natural groupings among objects. By deriving an approximate inference algorithm based on Small Variance Asymptotics, we obtain a natural extension that automatically learns the optimal number of anchors instead of having to tune it as a hyperparameter. On text classification, language modeling, and movie recommendation benchmarks, we show that ANT is particularly suitable for large vocabulary sizes and demonstrates stronger performance with fewer parameters (up to 40× compression) as compared to existing compression baselines. Code for our experiments can be found at https://github.com/pliang279/ sparse_discrete.
1 INTRODUCTION
Most machine learning models, including neural networks, operate on vector spaces. Therefore, when working with discrete objects such as text, we must define a method of converting objects into vectors. The standard way to map objects to continuous representations involves: 1) defining the vocabulary V = {v1, ..., v∣V ∣} as the set of all objects, and 2) learning a ∣V ∣ × d embedding matrix that defines a d dimensional continuous representation for each object. This method has two main shortcomings. Firstly, when ∣V ∣ is large (e.g., million of words/users/URLs), this embedding matrix does not scale elegantly and may constitute up to 80% of all trainable parameters (Jozefowicz et al., 2016). Secondly, despite being discrete, these objects usually have underlying structures such as natural groupings and similarities among them. Assigning each object to an individual vector assumes independence and foregoes opportunities for statistical strength sharing. As a result, there has been a large amount of interest in learning sparse interdependent representations for large vocabularies rather than the full embedding matrix for cheaper training, storage, and inference.
In this paper, we propose a simple method to learn sparse representations that uses a global set of vectors, which we call the anchors, and expresses the embeddings of discrete objects as a sparse linear combination of these anchors, as shown in Figure 1. One can consider these anchors to represent latent topics or concepts. Therefore, we call the resulting method ANCHOR & TRANSFORM (ANT). The approach is reminiscent of low-rank and sparse coding approaches; however, surprisingly, these methods have not been elegantly integrated with deep networks in the literature. Competitive attempts are often complex (e.g., optimized with RL (Joglekar et al., 2019)), involve multiple training stages (Ginart et al., 2019; Liu et al., 2017), or require post-processing (Svenstrup et al., 2017; Guo et al., 2017; Aharon et al., 2006; Awasthi & Vijayaraghavan, 2018). We derive a simple optimization objective which learns these anchors and sparse transformations in an end-to-end manner. ANT is
∗work done during an internship at Google.
scalable, flexible, and allows the user flexibility in defining these anchors and adding more constraints on the transformations, possibly in a domain/task specific manner. We find that our proposed method demonstrates stronger performance with fewer parameters (up to 40× compression) on multiple tasks (text classification, language modeling, and recommendation) as compared to existing baselines.
We further provide a statistical interpretation of our algorithm as a Bayesian nonparametric (BNP) prior for neural embeddings that encourages sparsity and leverages natural groupings among objects. Specifically, we show its equivalence to an Indian Buffet Process (IBP; Griffiths & Ghahramani (2005)) prior for embedding matrices. While such BNP priors have proven to be flexible tools in graphical models to encourage hierarchies (Teh & Jordan, 2010), sparsity (Knowles & Ghahramani, 2011), and other structural constraints (Roy et al., 2016), the associated inference methods are usually complex, hand-designed for each setup, and non-differentiable. Our proposed method opens the door towards integrating priors (e.g., IBP) with neural representation learning. These theoretical connections lead to practical insights: by asymptotically analyzing the likelihood of our model in the small variance limit using Small Variance Asymptotics (SVA; Roweis (1998)), we obtain a natural extension, NBANT, that automatically learns the optimal number of anchors to achieve a balance between performance and compression instead of having to tune it as a hyperparameter.
2 RELATED WORK
Prior work in learning sparse embeddings of discrete structures falls into three categories:
Matrix compression techniques such as low rank approximations (Acharya et al., 2019; Grachev et al., 2019; Markovsky, 2011), quantizing (Han et al., 2016), pruning (Anwar et al., 2017; Dong et al., 2017; Wen et al., 2016), or hashing (Chen et al., 2015; Guo et al., 2017; Qi et al., 2017) have been applied to embedding matrices. However, it is not trivial to learn sparse low-rank representations of large matrices, especially in conjunction with neural networks. To the best of our knowledge, we are the first to present the integration of sparse low-rank representations, their non-parametric extension, and demonstrate its effectiveness on many tasks in balancing the tradeoffs between performance & sparsity. We also outperform many baselines based on low-rank compression (Grachev et al., 2019), sparse coding (Chen et al., 2016b), and pruning (Liu et al., 2017).
Reducing representation size: These methods reduce the dimension d for different objects. Chen et al. (2016a) divides the embedding into buckets which are assigned to objects in order of importance, Joglekar et al. (2019) learns d by solving a discrete optimization problem with RL, and Baevski & Auli (2019) reduces dimensions for rarer words. These methods resort to RL or are difficult to tune with many hyperparameters. Each object is also modeled independently without information sharing.
Task specific methods include learning embeddings of only common words for language modeling (Chen et al., 2016b; Luong et al., 2015), and vocabulary selection for text classification (Chen et al., 2019). Other methods reconstruct pre-trained embeddings using codebook learning (Chen et al., 2018; Shu & Nakayama, 2018) or low rank tensors (Sedov & Yang, 2018). However, these methods cannot work for general tasks. For example, methods that only model a subset of objects cannot be used for retrieval because it would never retrieve the dropped objects. Rare objects might be highly relevant to a few users so it might not be ideal to completely ignore them. Similarly, task-specific methods such as subword (Bojanowski et al., 2017) and wordpiece (Wu et al., 2016) embeddings, while useful for text, do not generalize to general applications such as item and query retrieval.
3 ANCHOR & TRANSFORM
Suppose we are presented with data X ∈ V^N, Y ∈ R^{N×c} drawn from some joint distribution p(x, y), where the support of x is over a discrete set V (the vocabulary) and N is the size of the training set. The entries in Y can be either discrete (classification) or continuous (regression). The goal is to learn a d-dimensional representation {e1, ..., e_{|V|}} for each object by learning an embedding matrix E ∈ R^{|V|×d} where row i is the representation ei of object i. A model fθ with parameters θ is then used to predict y, i.e., ŷi = fθ(xi; E) = fθ(E[xi]).
At a high level, to encourage statistical sharing between objects, we assume that the embedding of each object is obtained by linearly superimposing a small set of anchor objects. For example, when the objects considered are words, the anchors may represent latent abstract concepts (of unknown cardinality) and each word is a weighted mixture of different concepts. More generally, the model assumes that there are some unknown number of anchors, A = {a1, ...,a∣A∣}. The embedding ei for object i is generated by first choosing whether the object possesses each anchor ak ∈ Rd. The selected anchors then each contribute some weight to the representation of object i. Therefore, instead of learning the large embedding matrix E directly, ANT consists of two components:
Algorithm 1 ANCHOR & TRANSFORM algorithm for learning sparse representations of discrete objects.

ANCHOR & TRANSFORM:
1: Anchor: initialize anchor embeddings A.
2: Transform: initialize T as a sparse matrix.
3: Optionally + domain info: initialize domain sparsity matrix S(G) as a sparse matrix (see Appendix F).
4: for each batch (X, Y) do
5:   Compute loss L = ∑_i Dφ(yi, fθ(xi; TA))
6:   A, T, θ = UPDATE(∇L, η).
7:   T = max{(T − ηλ2) ⊙ S(G) + T ⊙ (1 − S(G)), 0}.
8: end for
9: return anchor embeddings A and transformations T.
1) ANCHOR: Learn embeddings A ∈ R∣A∣×d of a small set of anchor objects A = {a1, ...,a∣A∣}, ∣A∣ << ∣V ∣ that are representative of all discrete objects.
2) TRANSFORM: Learn a sparse transformation T from A to E. Each of the discrete objects is induced by some transformation from (a few) anchor objects. To ensure sparsity, we want nnz(T) << ∣V ∣ × d.
A and T are trained end-to-end for task-specific representations. To enforce sparsity, we use an ℓ1 penalty on T and constrain its domain to be non-negative to reduce redundancy in transformations (positive and negative entries canceling out).

min_{T≥0, A, θ} ∑_i Dφ(yi, fθ(xi; TA)) + λ2 ∥T∥1,   (1)
where Dφ is a suitable Bregman divergence between predicted and true labels, and ∥T∥1 denotes the sum of absolute values. Most deep learning frameworks directly use subgradient descent to solve eq (1), but unfortunately, such an approach will not yield sparsity. Instead, we perform optimization by proximal gradient descent (rather than approximate subgradient methods which have poorer convergence around non-smooth regions, e.g., sparse regions) to ensure exact zero entries in T:
A^{t+1}, T^{t+1}, θ^{t+1} = UPDATE(∇ ∑_i Dφ(yi, fθ(xi; T^t A^t)), η),   (2)

T^{t+1} = PROX_{ηλ2}(T^{t+1}) = max(T^{t+1} − ηλ2, 0),   (3)
where η is the learning rate, and UPDATE is a gradient update rule (e.g., SGD (Lecun et al., 1998), ADAM (Kingma & Ba, 2015), YOGI (Zaheer et al., 2018)). PROXηλ2 is a composition of two proximal operators: 1) soft-thresholding (Beck & Teboulle, 2009) at ηλ2 which results from subgradient descent on λ2∥T∥1, and 2) max(⋅,0) due to the non-negative domain for T. We implement this proximal operator on top of the YOGI optimizer for our experiments.
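As a concrete illustration, the following is a minimal PyTorch sketch of one training step with the proximal update of eq (3) applied after the gradient step of eq (2). The sizes, the toy loss, and the use of Adam in place of YOGI are our own simplifications for illustration, not the released implementation:

```python
import torch

# Illustrative sizes: vocabulary |V|, anchors |A|, embedding dim d (all assumed).
V, K, d, eta, lam2 = 1000, 32, 64, 1e-3, 1e-4

A = torch.randn(K, d, requires_grad=True)      # anchor embeddings
T = torch.rand(V, K, requires_grad=True)       # dense here for clarity; stored sparse in practice
opt = torch.optim.Adam([A, T], lr=eta)         # stand-in for YOGI

def task_loss(x_idx, y):
    emb = T[x_idx] @ A                         # embed a batch: rows of T weight the anchors
    return ((emb.sum(dim=1) - y) ** 2).mean()  # placeholder for D_phi(y, f_theta(x; TA))

x_idx, y = torch.randint(0, V, (16,)), torch.randn(16)

opt.zero_grad()
task_loss(x_idx, y).backward()
opt.step()                                     # gradient update, eq (2)

with torch.no_grad():                          # proximal step, eq (3):
    T.copy_(torch.clamp(T - eta * lam2, min=0.0))  # soft-threshold, then clamp to T >= 0
```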
Together, equations (2) and (3) give us an iterative process for end-to-end learning of A and T along with θ for specific tasks (Algorithm 1). T is implemented as a sparse matrix by only storing its non-zero entries and indices. Since nnz(T) << ∣V ∣× d, this makes storage of T extremely efficient as compared to traditional approaches of computing the entire ∣V ∣×d embedding matrix. We also provide implementation tips to further speedup training and ways to incorporate ANT with existing speedup techniques like softmax sampling (Mikolov et al., 2013) or noise-contrastive estimation (Mnih & Teh, 2012) in Appendix H. After training, we only store ∣A∣ × d + nnz(T) << ∣V ∣ × d entries that define the complete embedding matrix, thereby using fewer parameters than the traditional ∣V ∣ × d matrix. General purpose matrix compression techniques such as hashing (Qi et al., 2017), pruning (Dong
et al., 2017), and quantizing (Han et al., 2016) are compatible with our method: the matrices A and nnz(T) can be further compressed and stored.
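To make the parameterization concrete, here is a minimal sketch of an embedding module that never materializes E; the class name and shapes are ours, and T is kept dense for readability (the paper stores only its non-zero entries):

```python
import torch
import torch.nn as nn

class AnchorTransformEmbedding(nn.Module):
    """Embeds object i as row i of T times A, i.e., E[i] = T[i] @ A."""
    def __init__(self, vocab_size, num_anchors, dim):
        super().__init__()
        self.A = nn.Parameter(0.01 * torch.randn(num_anchors, dim))   # anchors
        self.T = nn.Parameter(0.01 * torch.rand(vocab_size, num_anchors))

    def forward(self, idx):            # idx: LongTensor of object indices
        return self.T[idx] @ self.A    # (..., dim); the full E is never built

    def l1_penalty(self):              # the lambda_2 * ||T||_1 term of eq (1)
        return self.T.abs().sum()

emb = AnchorTransformEmbedding(vocab_size=10000, num_anchors=100, dim=300)
vecs = emb(torch.tensor([3, 17, 42]))  # embeddings for three objects, shape (3, 300)
```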
We first discuss practical methods for anchor selection (§3.1). In Appendix F we describe several ways to incorporate domain knowledge into the anchor selection and transform process. We also provide a statistical interpretation of ANT as a sparsity promoting generative process using an IBP prior and derive approximate inference based on SVA (§3.2). This gives rise to a nonparametric version of ANT that automatically learns the optimal number of anchors.
3.1 ANCHOR: SELECTING THE ANCHORS A
Inspired by research integrating initialization strategies based on clustering (Teh et al., 2007) and Coresets (Bachem et al., 2015) with Bayesian nonparametrics, we describe several practical methods to select anchor objects that are most representative of all objects (refer to Appendix D for a comparison of initialization strategies.).
Frequency and TF-IDF: For tasks where frequency or TF-IDF (Ramos, 1999) are useful for prediction, the objects can simply be sorted by frequency and the most common objects selected as the anchor points. While this might make sense for tasks such as language modeling (Luong et al., 2015; Chen et al., 2016b), choosing the most frequent objects might not cover rare objects that are not well represented by common anchors.
[Figure 2: Anchor initialization by clustering in a pretrained space (e.g., GloVe/co-occurrence). Anchors are first initialized with frequent words (e.g., “the”, “good”); clustering steps 1–3 then pick the remaining anchors to span the space.]
Clustering: To ensure that all objects are close to some anchor, we use k-means++ initialization (Arthur & Vassilvitskii, 2007). Given a feature space representative of the relationships between objects, such as Glove (Pennington et al., 2014) for words or a co-occurrence matrix (Haralick et al., 1973) for more general objects, k-means++ initialization picks cluster centers to span the entire space. This can augment other strategies, such as initializing anchors using frequency followed by clustering to complete remaining anchors (see Figure 2).
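For instance, given pretrained vectors for the vocabulary, anchor indices can be seeded with k-means++; below is a sketch using scikit-learn's kmeans_plusplus. The random stand-in vectors and vocabulary are assumptions for illustration:

```python
import numpy as np
from sklearn.cluster import kmeans_plusplus

rng = np.random.RandomState(0)
vectors = rng.randn(5000, 50)              # stand-in for pretrained (e.g., GloVe) vectors
words = [f"w{i}" for i in range(5000)]     # stand-in vocabulary

num_anchors = 100
# k-means++ seeding picks well-spread points; `indices` are the chosen rows.
_, indices = kmeans_plusplus(vectors, n_clusters=num_anchors, random_state=0)
anchor_words = [words[i] for i in indices]
A_init = vectors[indices]                  # used to initialize the anchor matrix A
```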
Random basis vectors: Initialize A to a set of random basis vectors. This simple yet powerful method captures the case where we have less knowledge about the objects (i.e., without access to any pretrained representation/similarity space).
3.2 STATISTICAL INTERPRETATION AS A BAYESIAN NONPARAMETRIC PRIOR
To provide a statistical interpretation of ANT, we first analyze a generative process for discrete representations that is consistent with our algorithm. Given a set of anchors, A = {a1, ...,a∣A∣}, we use a binary latent variable zik ∈ {0,1} to indicate whether object i possesses anchor k and a positive latent variable wik ∈ R≥0 to denote the weight that anchor k contributes towards object i. Therefore, the representation ei is given by ei = ∑k wikzikak. Ideally, we want the vector zi to be sparse for efficient learning and storage. More formally, suppose there are K ∶= ∣A∣ anchors, then:
• Z ∈ R^{|V|×K} ∼ IBP(a, b); A ∈ R^{K×d} ∼ P(A) = N(0, 1); W ∈ R^{|V|×K} ∼ P(W) = Exp(1)
• for i = 1, ..., N:
  – ŷi = fθ(xi; (Z ∘ W)A)
  – yi ∼ p(yi | xi; Z, W, A) = exp{−Dφ(yi, ŷi)} bφ(yi)
In this generative process, the selection matrix Z follows a two-parameter Indian Buffet Process (IBP; Griffiths & Ghahramani (2005)) prior (Ghahramani et al., 2007). Not only does this BNP prior allow for a potentially infinite number of anchors, but it also encourages each object to only select a small subset of anchors, resulting in a sparse zi (see Appendix A for details). We place a standard Gaussian prior on the continuous anchors embeddings ak and an exponential prior on the weights W which give the actual non-negative transformation weights for the non-zero entries defined in Z. E = (Z ○W)A is the final embedding learnt by our model which represents a d-dimensional continuous representation {e1, ...,e∣V ∣} for each discrete object where row i is the representation ei of object i. Finally, a neural model fθ with parameters θ is used to predict yi given the embedded representations, i.e., ŷi = fθ(xi; (Z ○W)A) = fθ((Z ○W)A[xi]).
Likelihood Model/Loss: We assume that the final emission model yi∣ŷi belongs to the exponential family. Since exponential family distributions have a corresponding Bregman divergence (Banerjee et al. (2005); see Appendix C for examples), we choose Dφ(yi, ŷi) as the corresponding Bregman divergence between predicted and true labels. Appropriate choices for Dφ recover cross-entropy and MSE losses. bφ(yi) does not depend on any learnable parameter or variable and can be ignored.
Joint likelihood: Under the generative model as defined above, the joint likelihood is given by:

log p(Y, Z, W, A | X) ∝ ∑_i log p(yi | xi; Z, W, A) + log p(Z) + log p(W) + log p(A)
  = ∑_i {−Dφ(yi, fθ(xi; (Z ∘ W)A)) + log bφ(yi)} + log p(Z) + log p(W) + log p(A).
However, calculating the posterior or MAP estimate is hard, especially due to the presence of the non-linear deep network fθ. Approximate inference methods such as MCMC, variational inference, or probabilistic programming would be computationally and statistically inefficient since it would involve sampling, evaluating, or training the model multiple times. To tackle this problem, we perform approximate inference via Small Variance Asymptotics (SVA), which captures the benefits of rich latent-variable models while providing a framework for scalable optimization (Broderick et al., 2013a; Jiang et al., 2012; Roychowdhury et al., 2013).
Approximate Inference via SVA: To use SVA, we introduce a scaling variable β and shrink the variance of the emission probability by taking β →∞. The scaled probability emission becomes
p(yi∣xi;Z,W,A) = exp{−βDφ(yi, ŷi)} bβφ(yi). (4) Following Broderick et al. (2013a), we modulate the number of features in the large-β limit by choosing constants λ1 > λ2 > 0 and setting the IBP hyperparameters a = exp(−βλ1) and b = exp(βλ2). This prevents a limiting objective function that favors a trivial cluster assignment (every data point assigned to its own separate feature). Maximizing the asymptotic joint likelihood (after taking limits, i.e., limβ→∞ 1β log p(Y,Z,W,A∣X)) results in the following objective function:
min_{T≥0, A, θ, K} ∑_i Dφ(yi, fθ(xi; TA)) + λ2 ∥T∥0 + (λ1 − λ2)K,   (5)

where we have combined the variables Z and W with their constraints into one variable T. The exponential prior for W results in a non-negative domain for T. Please refer to Appendix B for derivations. Note that eq (5) suggests a natural objective function in learning representations that minimize the prediction loss Dφ(yi, fθ(xi; TA)) while ensuring sparsity of T as measured by the ℓ0-norm and using as few anchors as possible (K). Therefore, optimizing eq (5) gives rise to a nonparametric version of ANT, which we call NBANT, that automatically learns the optimal number of anchors. To perform optimization over the number of anchors, our algorithm starts with a small ∣A∣ = 10 and either adds anchors (i.e., adding a new row to A and a new column to T) or deletes anchors to minimize eq (5) at every epoch, depending on the trend of the objective evaluated on the validation set. We outline the exact algorithm in Appendix G along with more implementation details.
Analogously, we can derive the finite-case objective function for a fixed number of anchors K:

min_{T≥0, A, θ} ∑_i Dφ(yi, fθ(xi; TA)) + λ2 ∥T∥0,   (6)
which, together with an ℓ1 penalty on T as a convex relaxation for the ℓ0 penalty, recovers the objective function in eq (1). The solution for this finite version along with K yields the Pareto front. Different values of λ1 in eq (5) can be used for model selection along the front, as elucidated in Appendix L.
4 EXPERIMENTS
To evaluate ANT, we experiment on text classification, language modeling, and movie recommendation tasks. Experimental details are in Appendix J and full results are in Appendix K.
4.1 TEXT CLASSIFICATION
Setup: We follow the setting in Chen et al. (2019) with four datasets: AG-News (V = 62K) (Zhang et al., 2015), DBPedia (V = 563K) (Lehmann et al., 2015), Sogou-News (V = 254K) (Zhang et al., 2015), and Yelp-review (V = 253K) (Zhang et al., 2015). We use a CNN for classification (Kim, 2014). ANT is used to replace the input embedding and domain knowledge is derived from WordNet and co-occurrence in the training set. We record test accuracy and number of parameters used in the embedding only. For ANT, num params is computed as ∣A∣ × d + nnz(T).
Baselines: On top of the CNN, we compare to the following compression approaches. Vocabulary selection methods: 1) FREQUENCY, where only embeddings for the most frequent words are learnt (Chen et al., 2016b; Luong et al., 2015), 2) TF-IDF, which only learns embeddings for words with high TF-IDF score (Ramos, 1999), 3) GL (group lasso), which aims to find underlying sparse structures in the embedding matrix via row-wise ℓ2 regularization (Liu et al., 2015; Park et al., 2016; Wen et al., 2016), 4) VVD (variational vocabulary dropout), which performs variational dropout for vocabulary selection (Chen et al., 2019). We also compare to 5) SPARSEVD (sparse variational dropout), which performs variational dropout on all parameters (Chirkova et al., 2018), 6) SPARSEVD-VOC, which uses multiplicative weights for vocabulary sparsification (Chirkova et al., 2018), and 7) a SPARSE CODE model that learns a sparse code to reconstruct pretrained word representations (Chen et al., 2016b). All CNN architectures are the same for all baselines, with details in Appendix J.1.
Results on AG-News are in Table 1 and results for other datasets are in Appendix K.1. We observe that restricting T ≥ 0 using an exponential prior is important in reducing redundancy in the entries. Domain knowledge from WordNet and co-occurrence also succeeded in reducing the total (non-zero) embedding parameters to 0.40M, a compression of 40× and outperforming the existing approaches.
4.2 LANGUAGE MODELING
Setup: We perform experiments on word-level Penn Treebank (PTB) (V = 10K) (Marcus et al., 1993) and WikiText-103 (V = 267K) (Merity et al., 2017) with LSTM (Hochreiter & Schmidhuber, 1997) and AWD-LSTM (Merity et al., 2018). We use ANT as the input embedding tied to the output embedding. Domain knowledge is derived from WordNet and co-occurrence on the training set. We record the test perplexity and the number of (non-zero) embedding parameters.
Baselines: We compare to SPARSEVD and SPARSEVD-VOC, as well as low-rank (LR) and tensortrain (TT) model compression techniques (Grachev et al., 2019). Note that the application of variational vocabulary selection to language modeling with tied weights is non-trivial since one is unable to predict next words when words are dynamically dropped out. We also compare against methods that compress the trained embedding matrix as a post-processing step before evaluation: POST-SPARSE HASH (post-processing using sparse hashing) (Guo et al., 2017) and POST-SPARSE HASH+k-SVD (Awasthi & Vijayaraghavan, 2018; Guo et al., 2017) which uses k-SVD (which is the basis of dictionary learning/sparse coding) (Aharon et al., 2006) to solve for a sparse embedding matrix, instead of adhoc-projection in (Guo et al., 2017). Comparing to these post-processing methods demonstrates that end-to-end training of sparse embeddings is superior to post-compression.
Results: On PTB (Table 2), we improve the perplexity and compression as compared to previously proposed methods. We observe that sparsity is important: baseline methods that only perform lowerrank compression with dense factors (e.g., LR LSTM) tend to suffer in performance and use many parameters, while ANT retains performance with much better compression. ANT also outperforms post-processing methods (POST-SPARSE HASH), we hypothesize this is because these post-processing methods accumulate errors in both language modeling as well as embedding reconstruction. Using an anchor size of 500/1,000 reaches a good perplexity/compression trade-off: we reach within 2 points perplexity with 5× reduction in parameters and within 7 points perplexity with 10× reduction. Using AWD-LSTM, ANT with 1,000 dynamic basis vectors is able to compress parameters by 10× while achieving 72.0 perplexity. Incorporating domain knowledge allows us to further compress the parameters by another 10× and achieve 70.0 perplexity, which results in 100× total compression.
On WikiText-103, we train using sampled softmax (Bengio & Senecal, 2008) (due to the large vocabulary) for 500,000 steps. To the best of our knowledge, we could not find literature on compressing language models on WikiText-103. We tried general compression techniques like low-rank tensor and tensor-train factorization (Grachev et al., 2019), but these did not scale. As an alternative, we consider a HASH EMBED baseline that retains the frequent k words and hashes the remaining words into 1,000 OOV buckets (Svenstrup et al., 2017). We vary k ∈ {1×10^5, 5×10^4, 1×10^4} (details in Appendix J.3). From Table 2 (bottom), we reach within 3 perplexity with ∼16× reduction in parameters and within 13 perplexity with ∼80× reduction, outperforming the frequency and hashing baselines. We observe that ANT's improvement over post-compression methods (POST-SPARSE HASH) is larger on WikiText than PTB, suggesting that ANT is particularly suitable for large vocabularies.
4.3 RECOMMENDER SYSTEMS
Setup: We perform experiments on both movie and product recommendation tasks. For movie recommendations, we follow Ginart et al. (2019) and we experiment on MovieLens 25M (Harper & Konstan, 2015) with 126K users and 59K movies. We also present results for MovieLens 1M in Appendix K.3. On product recommendation, we show that ANT scales to Amazon Product reviews (Ni et al., 2019), the largest existing dataset for recommender systems with 233M reviews spanning 43.5M users and 15.2M products. Following Wan et al. (2020), we ensured that the users and products in the test set have appeared in the training data for generalization.
Baselines: We compare to a baseline Matrix Factorization (MF) model (Koren et al., 2009) with full embedding matrices for movies and users and to Mixed Dimension (MIXDIM) embeddings (Ginart et al., 2019), a compression technique that assigns different dimension to different users/items based on popularity. We also compare to SPARSE CBOW (Sun et al., 2016) which learns sparse E by placing an `1 penalty over all entries of E and optimizing using online subgradient descent, and SLIMMING (Liu et al., 2017), which performs subgradient descent before pruning small weights by setting them to 0. Such methods learn embeddings for objects independently without statistical strength sharing among related objects. We also test NBANT using the algorithm derived from the Bayesian nonparametric interpretation of ANT.
Results: From Table 3, ANT outperforms standard matrix factorization and dense mixed dimensional embeddings for performance and compression. NBANT is also able to automatically select an optimal
number of anchors (6/8) to achieve solutions along the performance-compression Pareto front. In Figure 3, we plot the value of eq (5) across values of ∣A∣ after a comprehensive hyperparameter sweep on ANT across 1000 settings. In comparison, NBANT optimizes ∣A∣ and reaches a good value of eq (5) in a single run without having to tune ∣A∣ as a hyperparameter, thereby achieving best balance between performance and compression. Please refer to Appendix K.3 for more results and discussion on NBANT.
For product recommendation, we first experiment on a commonly used subset of the data, Amazon Electronics (with 9.84M users and 0.76M products), to ensure that our results match published baselines (Wan et al., 2020), before scaling our experiment to the entire dataset. From Table 4, we find that ANT compresses embeddings by 25× on Amazon Electronics while maintaining performance, and 10× on the full Amazon reviews dataset.
Online NBANT: Since NBANT automatically grows/contracts ∣A∣ during training, we can further extend NBANT to an online version that sees a stream of batches without revisiting previous ones (Bryant & Sudderth, 2012). We treat each batch as a new set of incoming data and train on that batch until convergence, modifying ∣A∣ as in Algorithm 2 before moving on to the next batch. In this significantly more challenging online setting, NBANT is still able to learn well and achieves an MSE of 0.875 with 1.25M non-zero parameters. Interestingly, this online version of NBANT settled on a similar range of final user (8) and item (8) anchors as compared to the non-online version (see Table 3), which confirms the robustness of NBANT in finding relevant anchors automatically. In Appendix K.3 we discuss more observations about online NBANT, including ways of learning ∣A∣.
4.4 DISCUSSION AND OBSERVATIONS
Here we list some general observations regarding the importance of various design decisions in ANT:
1) Sparsity is important: Baselines that compress with dense factors (e.g., LR, TT) suffer in performance while using many parameters, while ANT retains performance with better compression.
2) Choice of A: We provide results on more clustering initializations in Appendix D. In general, performance is robust w.r.t. choice of A. While frequency and clustering work better, using a dynamic basis also performs well. Thus, it is beneficial to use any extra information about the discrete objects (e.g., domain knowledge or having a good representation space like GloVe to perform clustering).
Table 5: Word association results after training language models with ANT on the word-level PTB dataset. Left: the non-anchor words most induced by a given anchor word. Right: the largest (non-anchor, anchor) entries learnt in T after sparse ℓ1-regularization. Bottom: movie clusters obtained by sorting movies with the highest coefficients with each anchor embedding.

Anchor word → most induced non-anchor words:
- year → august, night, week, month, monday, summer, spring
- stock → bonds, certificates, debt, notes, securities, mortgages

Largest (non-anchor, anchor) word pairs: (trading, brokerage), (stock, junk), (year, summer), (york, angeles), (year, month), (government, administration).

Movie cluster → genre:
- God’s Not Dead, Sex and the City, Sex and the City 2, The Twilight Saga: Breaking Dawn - Part 1, The Princess Diaries 2: Royal Engagement, The Last Song, Legally Blonde 2: Red, White & Blonde, The Twilight Saga: Eclipse, Maid in Manhattan, The Twilight Saga: Breaking Dawn - Part 2 → romance, comedy
- Nostalghia, Last Days, Chimes at Midnight, Lessons of Darkness, Sonatine, Band of Outsiders, Gerry, Cyclo, Mishima: A Life in Four Chapters, George Washington → drama, indie
3) Anchors and sparse transformations learned: We visualize the important transformations (large entries) learned between anchors and non-anchors in Table 5. Left, we show the most associated non-anchors for a given anchor word and find that the induced non-anchors are highly plausible: stock accurately contributes to bonds, certificates, securities, and so on. Right, we show the largest (non-anchor, anchor) pairs learned, where we find related concepts such as (billion, trillion) and (government, administration). On MovieLens, for each anchor, we sort the movies according to the magnitude of their transformation coefficients which automatically discovers movie clusters based on underlying genres. We obtain a genre purity ratio of 61.7% by comparing automatically discovered movie clusters with the true genre tags provided in MovieLens.
4) Zero transformations learned: For MovieLens, we find that ANT assigns 2673 out of 59047 movies to an entire zero row, of which 84% only had 1 rating (i.e., very rare movies). Therefore, compression automatically discovers very rare objects (1 labeled point). On WikiText-103, rare words (e.g., Anarky, Perl, Voorhis, Gaudí, Lat, Bottomley, Nescopeck) are also automatically assigned zero rows when performing high compression (54.2 ppl with 0.4M params). Certain rare words that might be predictive, however, are assigned non-zero rows in T, such as: sociologists, deadlines, indestructible, causeways, outsourced, glacially, heartening, unchallenging, roughest.
5) Choice of λ1, λ2: Tuning λ1 allows us to perform model selection by controlling the trade-off between ∣A∣ (model complexity) and performance. By applying eq (5) on our trained models in Table 2, choosing a small λ1 = 2 × 10−5 prefers more anchors (∣A∣ = 1,000) and better performance (ppl = 79.4), while a larger λ1 = 1 × 10−1 selects fewer anchors (∣A∣ = 100) with a compromise in performance (ppl = 106.6). Tuning λ2 allows us to control the tradeoff between sparsity and performance (see details in Appendix L).
6) Convergence: In Figure 4, we plot the empirical convergence of validation loss across epochs. ANT converges as fast as the (non-sparse) MF baseline, and faster than compression baselines MixDim (Ginart et al., 2019) and Sparse CBOW (Sun et al., 2016). ANT also converges to the best validation loss.
7) Scalability: In addition to fast convergence, ANT also works effectively on large datasets such as Movielens 25M (162K users, 59K movies, 25M examples) and WikiText-103 (267K unique words, 103M tokens). For each epoch on Movielens 25M, standard MF takes 165s on a GTX 980 Ti GPU while ANT takes 176s for ∣A∣ = 5 and 180s for ∣A∣ = 20. ANT also scales to the largest recommendation dataset, Amazon reviews, with 25M users and 9M products.
5 CONCLUSION
This paper presented ANCHOR & TRANSFORM to learn sparse embeddings of large vocabularies using a small set of anchor embeddings and a sparse transformation from anchors to all objects. We also showed a statistical interpretation via integrating IBP priors with neural representation learning. Asymptotic analysis of the likelihood using SVA yields an extension that automatically learns the optimal number of anchors. On text classification, language modeling, and recommender systems, ANT outperforms existing approaches with respect to accuracy and sparsity.
B DERIVATION OF OBJECTIVE FUNCTION VIA SVA
In this section we derive our objective function using Small Variance Asymptotics (SVA) (Jiang et al., 2012). Recall that the generative process in our model is given by:
• Z ∈ R^{|V|×K} ∼ IBP(a, b)
• A ∈ R^{K×d} ∼ P(A) = N(0, 1)
• W ∈ R^{|V|×K} ∼ P(W) = Exponential(1)
• for i = 1, ..., N:
  – ŷi = fθ(xi; (Z ∘ W)A)
  – yi ∼ p(yi | xi; Z, W, A) = exp{−Dφ(yi, ŷi)} bφ(yi)
The joint log-likelihood under our generative model above is therefore:

log p(Y, Z, W, A | X) ∝ ∑_i log p(yi | xi, Z, W, A) + log p(Z) + log p(W) + log p(A)
  = ∑_i {−Dφ(yi, fθ(xi, (Z ∘ W)A)) + log bφ(yi)} + log p(Z) + log p(W) + log p(A).   (10)
To use SVA, an approximate objective function for finding point estimates is obtained by taking the limit of the emission probability variances down to zero. We begin by introducing a scaling variable β and shrinking the variance of the emission probability to 0 by taking β → ∞. The scaled emission probability becomes

p(yi | xi, Z, W, A) = exp{−β Dφ(yi, ŷi)} b_{βφ}(yi).   (11)

Following Broderick et al. (2013a), we modulate the number of features in the large-β limit by choosing constants λ1 > λ2 > 0 and setting the IBP hyperparameters with β as follows:

a = exp(−βλ1),  b = exp(βλ2).   (12)

This prevents a limiting objective function that favors a trivial cluster assignment (every data point assigned to its own separate feature).
We now take the limit of the log-likelihood term by term:

lim_{β→∞} (1/β) log p(Y, A, W, Z | X)   (13)
  = ∑_i lim_{β→∞} (1/β) log p(yi | xi, Z, W, A) + lim_{β→∞} (1/β) log p(Z) + lim_{β→∞} (1/β) log p(W) + lim_{β→∞} (1/β) log p(A).   (14)

• lim_{β→∞} (1/β) log p(yi | xi, Z, W, A) = lim_{β→∞} (1/β) (−β Dφ(yi, ŷi) + log b_{βφ}(yi)) = −Dφ(yi, ŷi) + O(1).
• lim_{β→∞} (1/β) log p(Z) = −λ2 ∥Z∥0 − (λ1 − λ2)K (derived below).
• lim_{β→∞} (1/β) log p(W) = 0 if W ≥ 0, and −∞ otherwise.
• lim_{β→∞} (1/β) log p(A) = 0 since log p(A) = O(1).
For convenience, we re-write the limit of the IBP prior as

lim_{β→∞} (1/β) log p(Z) = lim_{β→∞} (1/β) log [ (ab)^K / ∏_{h=1}^{2^{|V|}−1} K_h! ]  (a)
  + lim_{β→∞} (1/β) log exp(−ab H_{|V|})  (b)
  + ∑_{k=1}^{K} lim_{β→∞} (1/β) log [ Γ(m_k) Γ(|V| − m_k + b) / Γ(|V| + b) ]  (c)   (15)

For part (a):

lim_{β→∞} (1/β) log [ (ab)^K / ∏_{h=1}^{2^{|V|}−1} K_h! ]
  = lim_{β→∞} (1/β) log [ exp(−β(λ1 − λ2)K) / ∏_{h=1}^{2^{|V|}−1} K_h! ]
  = lim_{β→∞} (1/β) × (−β(λ1 − λ2)K) − lim_{β→∞} (1/β) × O(1)
  = −(λ1 − λ2)K   (16)

For part (b):

lim_{β→∞} (1/β) log exp(−ab H_{|V|}) = lim_{β→∞} (1/β) × (−ab H_{|V|})
  = lim_{β→∞} − (exp(−β(λ1 − λ2)) / β) × ∑_{j=1}^{|V|} 1 / (exp(βλ2) + j − 1)
  = 0   (17)

For part (c):

lim_{β→∞} (1/β) log [ Γ(m_k) Γ(|V| − m_k + b) / Γ(|V| + b) ]
  = lim_{β→∞} (1/β) log Γ(m_k) − lim_{β→∞} (1/β) ∑_{j=1}^{m_k} log(|V| − j + b)
  = 0 − ∑_{j=1}^{m_k} lim_{β→∞} log(|V| − j + exp(βλ2)) / β
  = − ∑_{j=1}^{m_k} λ2
  = −λ2 m_k   (18)
We know that m_k is the number of objects that use anchor k, i.e., the number of non-zero entries in the k-th column of Z. Summing over all k gives the total number of non-zero entries in Z, which is equivalent to the L0 norm of Z, i.e., ∥Z∥0.
Therefore, the MAP estimate under SVA, given by

max lim_{β→∞} (1/β) log p(Y, A, W, Z | X),   (19)

is equivalent to optimizing the following objective function:

max_{Z∈{0,1}^{|V|×K}, W≥0, A, θ, K} ∑_i −Dφ(yi, fθ(xi, (Z ∘ W)A)) − λ2 ∥Z∥0 − (λ1 − λ2)K,   (20)

where the exponential prior for W results in a limiting non-negative domain for W. Note that we can combine the optimizing variables Z and W with their constraints into one variable T ≥ 0, and switch from a maximization to a minimization problem by absorbing the negative sign. Finally, we arrive at the desired objective:

min_{T≥0, A, θ, K} ∑_i Dφ(yi, fθ(xi, TA)) + λ2 ∥T∥0 + (λ1 − λ2)K.   (21)
C EXPONENTIAL FAMILY DISTRIBUTIONS AS BREGMAN DIVERGENCES
In this section we provide some results that relate exponential family distributions and Bregman divergences. As a result, we can relate the likelihood models from Sec. 3.2 to appropriate Bregman divergences. Thus, a probabilistic observation model can be translated to a loss function minimizing the Bregman divergence, which is more amenable to deep network training using gradient-based methods. We begin by defining the Bregman divergence below and stating the relationship formally in Theorem 1.

Definition 1. (Bregman, 1967) Let φ : S → R, S = dom(φ), be a strictly convex function defined on a convex set S ⊂ R^d such that φ is differentiable on ri(S), assumed to be non-empty. The Bregman divergence Dφ : S × ri(S) → [0, ∞) is defined as

Dφ(x, y) = φ(x) − φ(y) − ⟨x − y, ∇φ(y)⟩,   (22)

where ∇φ(y) represents the gradient vector of φ evaluated at y.

Theorem 1. (Banerjee et al., 2005) There is a bijection between regular exponential families and regular Bregman divergences. In particular, any exponential family distribution p(x | θ) = p0(x) exp(⟨x, θ⟩ − g(θ)) can be written as p(x | µ) = exp(−Dφ(x, µ)) bφ(x), where φ is the Legendre dual of the log-partition function g(θ) and µ = ∇θ g(θ).
From Theorem 1, we can see that maximizing the log-likelihood log p(x | θ) is the same as minimizing the Bregman divergence Dφ(x, µ). Note that we can ignore bφ(x) as it depends only on observed data and does not depend on any parameters. We now illustrate some common examples of exponential families (like Gaussian and categorical), derive their corresponding Bregman divergences, and connect them to the usual loss functions used in deep networks (like MSE and cross-entropy).
Example 1: Gaussian distribution. (Banerjee et al., 2005) We start with the unit-variance spherical Gaussian distributions with mean µ, which have densities of the form:

p(x; µ) = (1 / √((2π)^d)) exp(−(1/2) ∥x − µ∥_2^2).   (23)

Using the log-partition function for the Gaussian distribution, we can calculate that φ(x) = (1/2) ∥x∥_2^2, which yields a Bregman divergence equal to:

Dφ(x, µ) = φ(x) − φ(µ) − ⟨x − µ, ∇φ(µ)⟩   (24)
  = (1/2) ∥x∥_2^2 − (1/2) ∥µ∥_2^2 − ⟨x − µ, µ⟩   (25)
  = (1/2) ∥x − µ∥_2^2  (mean squared error).   (26)

Thus, Dφ(x, µ) along with the constant bφ(x) given by

bφ(x) = 1 / √((2π)^d),   (27)

recovers the Gaussian density p(x) = exp(−Dφ(x, µ)) bφ(x). Therefore, when we assume that the labels have a Gaussian emission model, the corresponding Bregman divergence Dφ(x, µ) = (1/2) ∥x − µ∥_2^2 recovers the squared loss commonly used for regression.
Example 2: Multinomial distribution. (Banerjee et al., 2005) Another exponential family that is widely used is the family of multinomial distributions:

p(x, q) = (N! / ∏_{j=1}^{d} xj!) ∏_{j=1}^{d} qj^{xj},   (28)

where xj ∈ Z+ are frequencies of events, ∑_{j=1}^{d} xj = N, and qj ≥ 0 are probabilities of events, ∑_{j=1}^{d} qj = 1. The multinomial density can be expressed as the density of an exponential distribution in x = {xj}_{j=1}^{d−1} with natural parameter θ = [log(qj/qd)]_{j=1}^{d−1}, cumulant function g(θ) = −N log qd, and expectation parameter µ = ∇g(θ) = [N qj]_{j=1}^{d−1}. The Legendre dual φ of g is given by

φ(µ) = N ∑_{j=1}^{d} (µj/N) log(µj/N) = N ∑_{j=1}^{d} qj log qj.   (29)

As a result, the multinomial density can be expressed as a Bregman divergence equal to:

Dφ(x, µ) = ∑_{j=1}^{d} xj log xj (a constant in µ) − ∑_{j=1}^{d} xj log µj (the cross-entropy loss),   (30)

and constant bφ(x) given by

bφ(x) = (∏_{j=1}^{d} xj^{xj} / N^N) (N! / ∏_{j=1}^{d} xj!),   (31)

which recovers the multinomial density p(x) = exp(−Dφ(x, µ)) bφ(x). Therefore, when the labels are generated from a multinomial distribution, the corresponding Bregman divergence Dφ(x, µ) = −∑_{j=1}^{d} xj log µj + constant recovers the cross-entropy loss commonly used for classification.
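The two correspondences above can be checked numerically with a generic Bregman divergence; this small self-contained script (ours, for illustration only) verifies the squared-error and cross-entropy cases:

```python
import numpy as np

def bregman(phi, grad_phi, x, y):
    return phi(x) - phi(y) - np.dot(x - y, grad_phi(y))

x = np.array([0.2, 0.5, 0.3])   # both vectors sum to 1 (N = 1 in eq (28))
mu = np.array([0.3, 0.4, 0.3])

# phi(x) = 0.5 ||x||^2 gives D_phi(x, mu) = 0.5 ||x - mu||^2, the squared loss (eq 26).
sq = bregman(lambda v: 0.5 * v @ v, lambda v: v, x, mu)
assert np.isclose(sq, 0.5 * np.sum((x - mu) ** 2))

# phi(x) = sum_j x_j log x_j gives D_phi(x, mu) = sum_j x_j log(x_j / mu_j),
# i.e., the cross-entropy of eq (30) up to a constant in x (for x, mu on the simplex).
ce = bregman(lambda v: np.sum(v * np.log(v)), lambda v: np.log(v) + 1.0, x, mu)
assert np.isclose(ce, np.sum(x * np.log(x / mu)))
print("Bregman checks passed:", sq, ce)
```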
D LEARNING THE ANCHOR EMBEDDINGS A
Here we provide several other strategies for initializing the anchor embeddings:
• Sparse lasso and variational dropout (Chen et al., 2019). Given the strong performance of sparse lasso and variational dropout as vocabulary selection methods, it would be interesting to use them to first select the important task-specific words before jointly learning their representations and their transformations to other words. However, sparse lasso and variational dropout require first training a model to completion, unlike frequency- and clustering-based vocabulary selection methods that can be performed during data preprocessing.

• Coresets involve constructing a reduced data set which can be used as a proxy for the full data set, with provable guarantees such that the same algorithm run on the coreset and the full data set gives approximately similar results (Phillips, 2016; Har-Peled & Mazumdar, 2004). Coresets can be computed approximately and quickly (Bachem et al., 2017) and can be used to initialize the set of anchors A.
In general, there is a trade-off between how quickly we can choose the anchor objects and their performance. Randomly picking anchor objects (which is equivalent to initializing the anchor embeddings with dynamic basis vectors) becomes similar to learning a low-rank factorization of the embedding matrix (Sedov & Yang, 2018), which works well in general but can be improved for task-specific applications or with domain knowledge. Stronger vocabulary selection methods like variational dropout and group lasso perform better but take significantly longer to learn. We found that intermediate methods such as frequency and clustering, combined with WordNet/co-occurrence information, work well while keeping the preprocessing and training stages relatively quick.

In Appendix K we provide more results for different initialization strategies, including those based on clustering. In general, performance is robust with respect to the choice of A among the ones considered (i.e., random, frequency, and clustering). While frequency and clustering work better, using a set of dynamic basis embeddings still gives strong performance, especially when combined with domain knowledge from WordNet and co-occurrence statistics. This implies that when the user has more information about the discrete objects (e.g., a good representation space to perform clustering), they should use it. However, for a completely new set of discrete objects, simply using low-rank basis embeddings with sparsity also works well.
E TRANSFORM: LEARNING A SPARSE T
In addition to a simple sparse linear transformation, we describe some extensions that improve the sparsity and expressivity of the learned representations.

Reducing redundancy in representations: To further reduce redundancy in our sparse representations, we perform orthogonal regularization of the dynamic basis vectors A by adding the loss term L(A) = ∑_{i≠j} |a_i^⊤ a_j| to the loss function in eq (1). This ensures that different basis vectors a_i and a_j are orthogonal instead of being linear combinations of one another, which would lead to redundancies across different learnt entries in T.

Mixture of anchors: In general, different initialization strategies may bring different advantages. For example, using a mixture of random basis vectors has been shown to help model multi-sense embeddings (Athiwaratkun et al., 2018; Nguyen et al., 2017). One can define a set of M anchor embeddings A_1, ..., A_M, each initialized by different strategies and of possibly different sizes.

Nonlinear mixture of transformations: To complement learning multiple sets of anchor embeddings A_1, ..., A_M, the straightforward extension of the TRANSFORM step would be to learn a separate linear transformation for each anchor embedding and sum the result: E = ∑_{m=1}^{M} T_m A_m. However, the expressive power of this linear combination is equivalent to one set of anchor embeddings equal to the concatenation of A_1, ..., A_M with one linear transformation. To truly exhibit the advantage of multiple anchors, we transform and combine them in a nonlinear fashion, e.g., E = ∑_{m=1}^{M} softmax(T_m) A_m (softmax over the rows of T_m, Figure 5). Different transformations can be learned for different initializations of anchors. This is connected with the multi-head attention mechanism in the Transformer (Vaswani et al., 2017), where softmax(T_m) are the softmax-activated (sparse) attention weights and A_m the values to attend over. The result is an embedding matrix formed via a nonlinear mixture of anchors (each initialized with different strategies) and sparse transformations.
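A minimal sketch of the nonlinear mixture and the orthogonality penalty (shapes and initializations below are illustrative only):

```python
import torch

V, d, M, K = 10000, 300, 4, 50                     # M anchor sets with K anchors each
A = [torch.randn(K, d) for _ in range(M)]          # anchor sets A_1, ..., A_M
T = [torch.randn(V, K) for _ in range(M)]          # one transformation per set

# E = sum_m softmax(T_m) A_m, with the softmax over the rows (anchor axis) of T_m.
E = sum(torch.softmax(Tm, dim=1) @ Am for Tm, Am in zip(T, A))   # (V, d)

def ortho_penalty(Am):
    """L(A) = sum_{i != j} |a_i^T a_j|: penalize non-orthogonal basis vectors."""
    G = Am @ Am.t()
    return (G - torch.diag(torch.diag(G))).abs().sum()

reg = sum(ortho_penalty(Am) for Am in A)
```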
F INCORPORATING DOMAIN KNOWLEDGE
ANT also allows incorporating domain knowledge about object relationships. Suppose we are given some relationship graph G = (V, E) where each object is a vertex v ∈ V and an edge (u, v) ∈ E exists between objects u and v if they are related. Real-world instantiations of such a graph include 1) WordNet (Miller, 1995) or ConceptNet (Liu & Singh, 2004) for semantic relations between words, 2) word co-occurrence matrices (Haralick et al., 1973), and 3) movie clustering datasets (Leskovec & Krevl, 2014). From these graphs, we extract related positive pairs P = {(u, v) ∈ E} and unrelated negative pairs N = {(u, v) ∉ E}. We incorporate domain information as follows (see Figure 6 for a visual example):

Positive pairs: To incorporate a positive pair (u, v), we do not enforce sparsity on T_{u,v}. This allows ANT to freely learn the transformation between related objects u and v without being penalized for sparsity. On the other hand, transformations between negative pairs will be sparsely penalized. In other words, before computing the ℓ1-penalty, we element-wise multiply T with a domain sparsity matrix S(G) where S(G)_{u,v} = 0 for (u, v) ∈ P (entries not ℓ1-penalized) and S(G)_{u,v} = 1 otherwise (entries are ℓ1-penalized), resulting in the following modified objective:

min_{T≥0, A, θ} ∑_i Dφ(yi, fθ(xi, TA)) + λ2 ∥T ⊙ S(G)∥1.   (32)
Since we perform proximal GD, this is equivalent to only soft-thresholding the entries between unrelated objects, i.e., T = max{(T − ηλ2)⊙ S(G) +T⊙ (1 − S(G)),0}. Note that this strategy is applicable when anchors are selected using the frequency method. Negative pairs: For negative pairs, we add an additional constraint that unrelated pairs should not share entries in their linear combination coefficients of the anchor embeddings. In other words, we
add the loss term

L(T, N) = ∑_{(u,v)∈N} |t_u|^⊤ |t_v|   (33)

to the loss in eq (1), where each inner product discourages t_u and t_v from sharing similar entries. This strategy can be used regardless of the way anchors are selected. We acknowledge that there are other ways to incorporate domain knowledge into the general ANT framework, and we only give some initial examples of such methods.
G NONPARAMETRIC ANCHOR & TRANSFORM
In this section we provide details for our non-parametric extension of ANT. Recall that our full objective function derived via small variance asymptotics is given by:
min_{T≥0, A, θ, K} ∑_i Dφ(yi, fθ(xi; TA)) + λ2 ∥T∥0 + (λ1 − λ2)K,   (34)
which suggests a natural objective function in learning representations that minimize the prediction loss Dφ(yi, fθ(xi;TA)) while ensuring sparsity of T as measured by the `0-norm and using as few anchors as possible (K). Therefore, optimizing eq (5) gives rise to a nonparametric version of ANT, which we call NBANT, that automatically learns the optimal number of anchors. To perform optimization over the number of anchors, our algorithm starts with a small initial number of anchors K = ∣A∣ = 10 and either adds ∆K anchors (i.e., adding ∆K new rows to A and ∆K new sparse columns to T) or deletes ∆K anchors to minimize eq (34) at every epoch depending on the trend of the objective evaluated on the training set. We detail the full Algorithm 2, and highlight the main changes as compared to ANT.
Practically, this algorithm involves the same number of training epochs, and the same number of batches per epoch, as the vanilla ANT method. To enable sharing of trained anchors, we change the indices from which A and T are read, so that partially trained removed anchors are still stored in case more anchors need to be added again.
H EFFICIENT LEARNING AND INFERENCE
The naive method for learning E from anchor embeddings A and the sparse transformations T still scales linearly with ∣V ∣ × d. Here we describe some tips on how to perform efficient learning and inference of the anchor embeddings A and the sparse transformations T:
• Store T as a sparse matrix by only storing its non-zero entries and indices. From our experiments, we have shown that nnz(T) << |V| × d, which makes storage efficient.

• For inference, use the sparse matrix multiply supported in TensorFlow and PyTorch to compute E = TA (or its non-linear extensions). This decreases the running time from scaling with |V| × d to scaling only with nnz(T).
Algorithm 2 NBANT: Nonparametric Bayesian ANT. Differences from ANT are highlighted in red.

ANCHOR & TRANSFORM:
1: Anchor: initialize K = ∣A∣ and corresponding anchor embeddings A ∈ R^{K×d}.
2: Transform: initialize T ∈ R^{|V|×K} as a sparse matrix.
3: for each epoch do
4:   for each batch (X, Y) do
5:     Compute loss L = ∑_i Dφ(yi, fθ(xi; TA))
6:     A, T, θ = UPDATE(∇L, η).
7:     T = max{T − ηλ2, 0}.
8:   end for
9:   Compute eq (34) using current values of K, A, T on the validation set.
10:  if eq (34) is on a decreasing trend then
11:    K = K + ∆K; add ∆K rows to A and ∆K (sparse) columns to T.
12:  else if eq (34) is on an increasing trend then
13:    K = K − ∆K; remove ∆K rows from A and ∆K (sparse) columns from T.
14:  else
15:    keep current values of K, A, T.
16:  end if
17: end for
18: return anchor embeddings A and transformations T.
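A condensed sketch of the per-epoch adjustment in lines 9–16 (here A and T are plain tensors, and the "trend" is simplified to comparing the last two validation objectives; the actual criterion may differ):

```python
import torch

def adjust_anchors(A, T, obj_history, delta_k=2):
    """Grow or shrink the anchor set based on the trend of eq (34) on validation."""
    if len(obj_history) < 2:
        return A, T
    if obj_history[-1] < obj_history[-2]:       # objective still decreasing: add anchors
        A = torch.cat([A, 0.01 * torch.randn(delta_k, A.shape[1])], dim=0)
        T = torch.cat([T, torch.zeros(T.shape[0], delta_k)], dim=1)
    elif obj_history[-1] > obj_history[-2]:     # objective increasing: remove anchors
        A, T = A[:-delta_k], T[:, :-delta_k]
    return A, T                                  # unchanged if the objective is flat
```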
• For training, using the built-in sparse representations of most deep learning frameworks like PyTorch or TensorFlow is not optimal, as they do not support changing the non-zero locations in a sparse matrix, and a priori it is not easy to find the optimal set of non-zero locations.

• During training, instead, implicitly construct E from its anchors and transformations. In fact, we can do better: instead of constructing the entire E matrix to embed a single datapoint x ∈ R^{1×|V|}, we can first index x into T, i.e., xT ∈ R^{1×|A|}, before performing a sparse matrix multiplication with A, i.e., (xT)A ∈ R^{1×d}. We are essentially taking advantage of the associative property of matrix multiplication and the fact that xT is a simple indexing step and (xT)A is an effective sparse matrix multiplication (see the sketch after this list). To enable fast row slicing into the sparse matrix, we store the matrix in adjacency-list or COO format. (We move away from CSR as adding/deleting a non-zero location is very expensive.) When the gradient comes back, we only update the corresponding rows in T. The gradient will be sparse as well due to the L1-prox operator.

• The above trick solves the problem for tasks where the embedding is used only at the input, e.g., classification. For tasks like language modeling, where the embedding is used at the output as well, one can combine the above trick with speedup techniques like softmax sampling (Bengio & Senecal, 2008; Mikolov et al., 2013) or noise-contrastive estimation (Gutmann & Hyvarinen, 2010; Mnih & Teh, 2012), which would be used anyway for large vocabulary sizes. To elaborate, consider the case of sampled softmax (Bengio & Senecal, 2008): we normally generate the negative sample indices, and we can first index into T using the true and negative indices before performing a sparse matrix multiplication with A. This way we do not have to instantiate the entire E via an expensive matrix multiplication.

• When training is completed, only store the non-zero entries of T, or store T as a sparse matrix, to reconstruct E for inference.

• To save time when initializing the anchor embeddings and incorporating domain knowledge, precompute the necessary statistics such as frequency statistics, co-occurrence statistics, and object relation statistics. We use a small context size of 10 to measure the co-occurrence of two words to save time. When using WordNet to discover word relations, we only search for immediate relations between words instead of propagating relations across multiple steps (although this could further improve performance).

• In order to incorporate domain knowledge in the sparsity structure, we again store 1 − S(G) using sparse matrices. Recall that S(G) has an entry equal to 1 for entries representing unrelated objects that should be ℓ1-penalized, which makes S(G) quite dense since most anchor and non-anchor objects are unrelated. Hence we store 1 − S(G) instead, which has non-zero entries only at (non-anchor, anchor) positions for related objects. Element-wise multiplications are also replaced by sparse element-wise multiplications when computing T ⊙ S(G) and T ⊙ (1 − S(G)).
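A sketch of the associativity trick, including the sampled-softmax case at the output (sizes and the negative-sampling shape are illustrative; T is dense here but would be a row-sliceable sparse matrix in practice):

```python
import torch

V, K, d, batch = 100000, 500, 256, 32
A = torch.randn(K, d)                        # small anchor matrix
T = torch.rand(V, K)                         # stored as COO/adjacency list in practice

idx = torch.randint(0, V, (batch,))          # input objects
emb = T[idx] @ A                             # (x T) A: index first, then a small matmul

# Output side with sampled softmax: embed only the true and sampled classes,
# so the full E = TA is never instantiated.
cand = torch.randint(0, V, (batch, 20))      # true + negative class indices
cand_emb = T[cand] @ A                       # (batch, 20, d)
```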
I GENERALITY OF ANT
We show that under certain structural assumptions on the anchor embeddings and transformation matrices, ANT reduces to the following task-specific methods for learning sparse representations: 1) Frequency (Chen et al., 2016b), TF-IDF, Group Lasso (Wen et al., 2016), and variational dropout (Chen et al., 2019) based vocabulary selection, 2) Low-rank factorization (Grachev et al., 2019), and 3) Compositional code learning (Shu & Nakayama, 2018; Chen et al., 2018). Hence, ANT is general and unifies some of the work on sparse representation learning done independently in different research areas.
Frequency-based vocabulary selection (Luong et al., 2015; Chen et al., 2016b): Initialize A with the ∣A∣ most frequent objects and set Ta,a = 1 for all a ∈ A, T = 0 otherwise. Then E = TA consists of embeddings of the ∣A∣ most frequent objects with zero embeddings for all others. During training, gradients are used to update A but not T (i.e., only embeddings for frequent objects are learned). By changing the selection of A, ANT also reduces to other vocabulary selection methods such as TF-IDF (Ramos, 1999), Group Lasso (Wen et al., 2016), and variational dropout (Chen et al., 2019)
Low-rank factorization (Acharya et al., 2019; Markovsky, 2011; Grachev et al., 2019): Initialize A by a mixture of random basis embeddings (just 1 anchor per set) A1, ...,AM ∈ R1×d and do not enforce any sparsity on the transformations T1, ...,TM ∈ R∣V ∣×1. If we further restrict ourselves to only linear combinations $E = \sum_{m=1}^{M} T_m A_m$, this is equivalent to implicitly learning the M low-rank factors a1, ...,aM , t1, ..., tM that reconstruct embedding matrices of rank at most M.
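A small numerical sketch of this equivalence (purely illustrative, with assumed sizes): with one anchor per mixture and dense transformations, the linear combination is exactly a factorization of rank at most M.

```python
import torch

V, d, M = 1000, 64, 8                         # assumed sizes for illustration
Ts = [torch.randn(V, 1) for _ in range(M)]    # t_m: one column each, no sparsity enforced
As = [torch.randn(1, d) for _ in range(M)]    # a_m: a single random basis anchor per set
E = sum(T @ A for T, A in zip(Ts, As))        # E = sum_m T_m A_m, a sum of rank-1 terms
assert torch.linalg.matrix_rank(E) <= M       # hence rank(E) <= M
```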
Compositional code learning (Shu & Nakayama, 2018; Chen et al., 2018): Initialize A by a mixture of random basis embeddings A1, ...,AM , initialize transformations T1, ...,TM , and apply a linear combination $E = \sum_{m=1}^{M} T_m A_m$. For sparsity regularization, set row i of S(G)mi as a reverse one-hot vector with entry dmi = 0 and all else 1. In other words, index dmi of row Tmi is not regularized, and all other entries are ℓ1-regularized with extremely high λ2, such that row Tmi essentially becomes a one-hot vector with entry dmi = 1. This results in learning a codebook where each object in V is mapped to only one anchor in each mixture.
Therefore, ANT encompasses several popular methods for learning sparse representations, and gives further additional flexibility in defining various initialization strategies, applying nonlinear mixtures of transformations, and incorporating domain knowledge via object relationships.
J EXPERIMENTAL DETAILS
Here we provide more details for our experiments including hyperparameters used, design decisions, and comparison with baseline methods. We also include the anonymized code in the supplementary material.
J.1 TEXT CLASSIFICATION
Base CNN model: For all text classification experiments, the base model is a CNN (Lecun et al., 1998) with layers of 2D convolutions and 2D max pooling, before a dense layer to the output softmax. The code was adapted from https://github.com/wenhuchen/Variational-Vocabulary-Selection and the architecture hyperparameters are provided in Table 6. The only differences are the output dimensions: 4 for AG-News, 14 for DBPedia, 5 for Sogou-News, and 5 for Yelp-review.
Anchor: We experiment with dynamic, frequency, and clustering initialization strategies. The number of anchors ∣A∣ is a hyperparameter that is selected using the validation set. The range of ∣A∣ is in {10,20,50,80,100,500,1,000}. Smaller values of ∣A∣ allow us to control for fewer anchors and a smaller transformation matrix T at the expense of performance.
Transformation: We experiment with sparse linear transformations for T. λ2 is a hyperparameter that is selected using the validation set. Larger values of λ2 allow us to control for more sparse entries in T at the expense of performance. For experiments on dynamic mixtures, we use a softmax-based nonlinear combination $E = \sum_{m=1}^{M} \mathrm{softmax}(T_m) A_m$, where the softmax is performed over the rows of Tm. Note that applying a softmax activation to the rows of Tm makes all entries dense, so during training we store Tm as sparse matrices (which is efficient since Tm has few non-zero entries) and implicitly reconstruct E.
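As an illustrative sketch of this dynamic mixture (names are ours; we assume the framework supports row selection on sparse COO tensors, as in recent PyTorch versions), only the rows needed for a batch are densified before the softmax:

```python
import torch
import torch.nn.functional as F

# Sketch of the dynamic mixture E = sum_m softmax(T_m) A_m, densifying only
# the rows needed for a batch. Each T_m is a sparse |V| x |A| tensor and
# each A_m a dense |A| x d anchor matrix.
def embed_batch(token_ids, Ts_sparse, As):
    out = 0
    for T, A in zip(Ts_sparse, As):                     # M mixtures
        rows = T.index_select(0, token_ids).to_dense()  # batch x |A|, dense on the fly
        out = out + F.softmax(rows, dim=1) @ A          # batch x d
    return out
```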
Domain knowledge: When incorporating domain knowledge in ANT, we use both WordNet and co-occurrence statistics. For WordNet, we use the public WordNet interface provided by NLTK: http://www.nltk.org/howto/wordnet.html. For each word we search for its immediate related words among its hypernyms, hyponyms, synonyms, and antonyms. This defines the relationship graph. For co-occurrence statistics, we define a co-occurrence context size of 10 on the training data. Two words are defined to be related if they co-occur within this context size.
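For illustration, a sketch of the one-step WordNet lookup via NLTK (the function name is ours):

```python
from nltk.corpus import wordnet as wn

# Sketch: gather immediate WordNet relations for a word, without
# propagating across multiple steps.
def related_words(word):
    related = set()
    for syn in wn.synsets(word):
        for lemma in syn.lemmas():
            related.add(lemma.name())                            # synonyms
            related.update(a.name() for a in lemma.antonyms())   # antonyms
        for rel in syn.hypernyms() + syn.hyponyms():             # one-step relations only
            related.update(l.name() for l in rel.lemmas())
    related.discard(word)
    return related
```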
A note on baselines: The reported results on SPARSEVD and SPARSEVD-VOC (Chirkova et al., 2018) use a different embedding size: 300 instead of 256. This is because they use pre-trained word2vec or GloVe embeddings to initialize their model before compression is performed.
J.2 LANGUAGE MODELING ON PTB
Base LSTM model: Our base model is a 2 layer LSTM with an embedding size of 200 and hidden layer size of 200. The code was adapted from https://github.com/salesforce/awd-lstm-lm and the full table of hyperparameters is provided in Table 7.
Base AWD-LSTM model: In addition to experiments on a vanilla LSTM model as presented in the main text, we also performed experiments using a 3 layer AWD-LSTM with an embedding size of 400 and hidden layer size of 1,150. The full hyperparameters used can be found in Table 8.
Anchor: We experiment with dynamic, frequency, and clustering initialization strategies. The number of anchors ∣A∣ is a hyperparameter that is selected using the validation set. The range of ∣A∣ is in {10,20,50,80,100,500,1,000}. Smaller values of ∣A∣ allow us to control for fewer anchors and a smaller transformation matrix T at the expense of performance.
Domain knowledge: When incorporating domain knowledge in ANT, we use both WordNet and co-occurrence statistics. For WordNet, we use the public WordNet interface provided by NLTK: http://www.nltk.org/howto/wordnet.html. For each word we search for its immediate related words among its hypernyms, hyponyms, synonyms, and antonyms. This defines the relationship graph. For co-occurrence statistics, we define a co-occurrence context size of 10 on the training data. Two words are defined to be related if they co-occur within this context size.
A note on baselines: We also used some of the baseline results as presented in Grachev et al. (2019). Their presented results differ from our computations in two aspects: they include the LSTM parameters on top of the embedding parameters, and they also count the embedding parameters twice since they do not perform weight tying (Press & Wolf, 2017) (see equation (6) of Grachev et al. (2019)). To account for this, the results of SPARSEVD and SPARSEVD-VOC (Chirkova et al., 2018), as well as the results of various LR and TT low rank compression methods (Grachev et al., 2019), were modified by subtracting off the LSTM parameters (200 × 200 × 16). This is derived since each of the 8 weight matrices Wi,f,o,c, Ui,f,o,c in an LSTM layer is of size 200 × 200, and there are 2 LSTM layers. We then divide by two to account for weight tying. In the main text, we compared with the strongest baselines as reported in Grachev et al. (2019): these were the methods that performed low rank decomposition on the input embedding (∣V ∣ × d), the output embedding (d × ∣V ∣), and the intermediate hidden layers of the model. For full results, please refer to Grachev et al. (2019).
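For concreteness, the adjustment amounts to the following arithmetic (our restatement of the procedure above):

```python
# 8 weight matrices of size 200 x 200 per LSTM layer, 2 layers
# = the "200 x 200 x 16" quantity above.
lstm_params = 200 * 200 * 8 * 2                         # = 640,000
adjust = lambda reported: (reported - lstm_params) / 2  # remove LSTM, undo double-counted embeddings
```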
Note that the reported results on SPARSEVD and SPARSEVD-VOC (Chirkova et al., 2018) have a different embedding size and hidden layer size of 256 instead of 200, although these numbers are close enough for fair comparison. In our experiments we additionally implemented an LSTM with an embedding size of 256 and hidden layer size of 256 so that we can directly compare with their reported numbers.
For baselines that perform post-processing compression of the embedding matrix, POST-SPARSE HASH (post-processing using sparse hashing) (Guo et al., 2017) and POST-SPARSE HASH+k-SVD (improving sparse hashing using k-SVD) (Guo et al., 2017; Awasthi & Vijayaraghavan, 2018), we choose two settings: the first using 500 anchors and 10 nearest neighbors to these anchor points, and the second using 1,000 anchors and 20 nearest neighbors. The first model uses 500 × d + ∣V ∣ × 10 non-zero embedding parameters while the second model uses 1,000 × d + ∣V ∣ × 20 parameters. For AWD-LSTM on PTB, this is equivalent to 0.3M and 0.6M embedding parameters respectively which is comparable to the number of non-zero parameters used by our method.
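As a sanity check on these numbers (our arithmetic, assuming the standard PTB vocabulary size of roughly 10,000 and d = 400 for AWD-LSTM):

```python
d, V = 400, 10000               # AWD-LSTM embedding size; assumed PTB vocab size
setting_1 = 500 * d + V * 10    # = 300,000 -> ~0.3M non-zero parameters
setting_2 = 1000 * d + V * 20   # = 600,000 -> ~0.6M non-zero parameters
```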
J.3 LANGUAGE MODELING ON WIKITEXT-103
Base AWD-LSTM model: Our base model is a 4 layer AWD-LSTM with an embedding size of 400 and hidden layer size of 2,500. The code was adapted from https://github.com/salesforce/awd-lstm-lm and the hyperparameters used can be found in Table 9.
A note on baselines: While Baevski & Auli (2019) adapt embedding dimensions according to word frequencies, their goal is not to compress embedding parameters, and they use 44.9M (dense) parameters in their adaptive embedding layer, while we use only 2M. Their embedding parameters are calculated from their reported bucket sizes and embedding sizes (three bands of size 20K (d = 1024), 40K (d = 256) and 200K (d = 64)). Their perplexity results are also obtained using a Transformer model with 250M params while our AWD-LSTM model uses 130M params.
For the HASH EMBED baseline that retains the frequent k words and hashes the remaining words into 1,000 OOV buckets (Svenstrup et al., 2017), we vary k ∈ {1 × 105,5 × 104,1 × 104} to obtain results across various parameter settings.
J.4 MOVIE RECOMMENDATION ON MOVIELENS
Base MF model: We show the hyperparameters used for the MF model in Table 10. We use the Yogi optimizer (Zaheer et al., 2018) to learn the parameters.
ANT and NBANT: We build ANT on top of the MF model while keeping the base hyperparameters constant. For ANT, we apply compression to both the movie and user embedding matrices individually. NBANT involves defining the starting value of K = ∣A∣, and a ∆K value which determines the rate of increase or decrease in K. For Movielens 25M we use a larger initial ∣A∣ and ∆K since it is a larger dataset and also takes longer to train, so we wanted the increase and decrease in anchors to be faster (see Table 10). Beyond this initial setting, we found that performance is robust with respect to the initial value of K and ∆K, so we did not tune these parameters. In practice, we tie the updates of the number of user anchors and movie anchors instead of optimizing over both independently. Therefore, we start with the same number of initial user and movie anchors before incrementing or decrementing them by the same ∆K at the same time. We found that this simplification did not affect performance, and NBANT was still able to find an optimal number of anchors for a good trade-off between performance and compression.
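A rough sketch of this tied schedule is below; the update rule and its sign convention are our guesses for illustration only, since the exact criterion is part of the NBANT objective:

```python
# Hypothetical sketch of the tied NBANT anchor schedule: K is shared between
# the user and movie embedding matrices and moves by delta_K each round,
# guided by a performance/compression trade-off score (criterion assumed).
def update_num_anchors(K, delta_K, tradeoff, best_tradeoff):
    if tradeoff > best_tradeoff:
        return K + delta_K          # trade-off still improving: allow more anchors
    return max(1, K - delta_K)      # otherwise shrink back toward fewer anchors
```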
K MORE RESULTS
In the | 1. What is the main contribution of the paper regarding token representation learning?
2. What are the strengths of the proposed approach, particularly in its two-step procedure?
3. Do you have any concerns regarding the incorporation of domain knowledge?
4. How effective is the method in reducing parameters while maintaining task performance?
5. Are there any questions regarding the training process, such as ensuring non-zero elements in T and selecting the regularization strength λ? | Review | Review
In this paper, the authors propose a method to learn efficient representations of discrete tokens. They take a two-step approach: in step 1, they learn "full-fledged" embeddings for a subset of anchor tokens. In step 2, they learn a sparse matrix that is used to relate all tokens to the set of chosen anchors. This two-step approach reduces the overall number of parameters. The sparse matrix T can also encode domain knowledge (e.g. knowledge graphs). In the experiment section, the authors show that their approach has good performance on several language tasks, with far fewer parameters.
In general the paper is well written and the flow is easy to follow. I find the main idea plausible and ingenious. For language tasks and word embeddings, anchoring methods have already been shown to be effective in several tasks (e.g. [1] http://papers.nips.cc/paper/8152-the-global-anchor-method-for-quantifying-linguistic-shifts-and-domain-adaptation). The authors take two steps forward: 1) instead of using the entire vocabulary for anchoring as in [1], the authors use a subset of tokens, which reduces the number of parameters; 2) they use a sparse T matrix to relate other tokens to anchors, which again rests on a reasonable prior: the meaning of a word can be efficiently defined by a few well-chosen anchors. Although this paper is probably related to other strains of research (e.g. learning manifolds for IR/NLP, where anchoring is also a key concept, which the authors could admittedly have surveyed more), I particularly liked the fact that the two-step procedure decomposes two tasks that are often mixed together for embedding tasks: learning representations vs learning relations.
While the authors claimed that they can further impose domain knowledge in the learning process (which I think is at least a good attempt), this part in general feels a bit less convincing. To be specific, there can be a variety of knowledge (like related-to, is-a-subset-of, analogy, etc.). It is not clear how the distinction between different types of knowledge can be incorporated. What the authors propose is lumping them into the notion of "positive pair" and relaxing constraints on them. This may or may not suffice (for the purpose of adding domain knowledge), but on paper, there is a chance that some finer structures of the domain knowledge may get lost. It's also not clear how much of the gain comes from incorporating domain knowledge (especially in the experiment section, where, for fair comparison purposes, other methods do not use domain knowledge); an ablation study might help.
Another question is about training. It's not obvious to me how to guarantee that for every row of T, there is at least 1 non-zero element. Is some specific tuning needed for rows corresponding to rare words? How is the regularization strength λ on T selected?
The reduction of parameters while keeping task performance is illustrated quite well in the experiment section. Their method does not reduce the theoretical complexity (still linear w.r.t. vocab size, as T must have at least one element per row), but in practice the reduction (which mostly comes from savings in dimensionality) is quite evident.
ICLR | Title
Compositional Attention Networks for Machine Reasoning
Abstract
We present Compositional Attention Networks, a novel fully differentiable neural network architecture, designed to facilitate explicit and expressive reasoning. While many types of neural networks are effective at learning and generalizing from massive quantities of data, this model moves away from monolithic blackbox architectures towards a design that provides a strong prior for iterative reasoning, allowing it to support explainable and structured learning, as well as generalization from a modest amount of data. The model builds on the great success of existing recurrent cells such as LSTMs: It sequences a single recurrent Memory, Attention, and Control (MAC) cell, and by careful design imposes structural constraints on the operation of each cell and the interactions between them, incorporating explicit control and soft attention mechanisms into their interfaces. We demonstrate the model’s strength and robustness on the challenging CLEVR dataset for visual reasoning, achieving a new state-of-the-art 98.9% accuracy, halving the error rate of the previous best model. More importantly, we show that the new model is more computationally efficient and data-efficient, requiring an order of magnitude less time and/or data to achieve good results.
1 INTRODUCTION
This paper considers how best to design neural networks to perform the iterative reasoning necessary for complex problem solving. Putting facts and observations together to arrive at conclusions is a central necessary ability as we work to move neural networks beyond their current great success with sensory perception tasks (LeCun et al., 1998; Krizhevsky et al., 2012) towards displaying Artificial General Intelligence.
Concretely, we develop a novel model that we apply to the CLEVR dataset (Johnson et al., 2016) for visual question answering (VQA). VQA (Antol et al., 2015; Gupta, 2017) is a challenging multimodal task that requires responding to natural language questions about images. However, Agrawal et al. (2016) show how the first generation of successful models on VQA tasks tend to acquire only superficial comprehension of both the image and the question, exploiting dataset biases rather than capturing a sound perception and reasoning process that would lead to the correct answer (Sturm, 2014). CLEVR was created to address this problem. As illustrated in figure 1, instances in the dataset consist of rendered images featuring 3D objects of several shapes, colors, materials and sizes, coupled with unbiased, compositional questions that require an array of challenging reasoning skills such as following transitive relations, counting objects and comparing their properties, without allowing any shortcuts around such reasoning. Notably, each instance in CLEVR is also accompanied by a tree-structured functional program that was both used to construct the question and reflects its reasoning procedure – a series of predefined operations – that can be composed together to answer it.
Most neural networks are essentially very large correlation engines that will home in on any statistical, potentially spurious pattern that allows them to model the observed data more accurately. In contrast, we seek to create a model structure that requires combining sound inference steps to solve a problem instance. At the other extreme, some approaches adopt symbolic structures that resemble the expression trees of programming languages to perform reasoning (Andreas et al., 2016b; Hu et al., 2017). In particular, some approaches to CLEVR use the supplied functional programs for supervised or semi-supervised training (Andreas et al., 2016a; Johnson et al., 2017). Not only do we wish to avoid using such supervision in our work, but we in general suspect that the rigidity of these structures and the use of an inventory of operation-specific neural modules undermines robustness and generalization, and at any rate requires more complex reinforcement learning methods.
To address these weaknesses, while still seeking to use a sound and transparent underlying reasoning process, we propose Compositional Attention Networks, a novel, fully differentiable, non-modular architecture for reasoning tasks. Our model is a straightforward recurrent neural network with attention; the novelty lies in the use of a new Memory, Attention and Composition (MAC) cell. The constrained and deliberate design of the MAC cell was developed as a kind of strong structural prior that encourages the network to solve problems by stringing together a sequence of transparent reasoning steps. MAC cells are versatile but constrained neural units. They explicitly separate out memory from control, both represented recurrently. The unit contains three sub-units: The control unit updates the control representation based on outside instructions (for VQA, the question), learning to successively attend to different parts of the instructions; the read unit gets information out of a knowledge base (for VQA, the image) based on the control signal and the previous memory; the write unit updates the memory based on soft self-attention to previous memories, controlled by the retrieved information and the control signal. A universal MAC unit with a single set of parameters is used throughout the reasoning process, but its behavior can vary widely based on the context in which it is applied – the input to the control unit and the contents of the knowledge base. With attention, our MAC network has the capacity to represent arbitrarily complex acyclic reasoning graphs in a soft manner, while having physically sequential structure. The result is a continuous counterpart to module networks that can be trained end-to-end simply by backpropagation.
We test the behavior of our new network on CLEVR and its associated datasets. On the primary CLEVR reasoning task, we achieve an accuracy of 98.9%, halving the error rate compared to the previous state-of-the-art FiLM model (Perez et al., 2017). In particular, we show that our architecture yields better performance on questions involving counting and aggregation. In supplementary studies, we show that the MAC network learns more quickly (both in terms of number of training epochs and training time) and more effectively from limited amounts of training data. Moreover, it also achieves a new state-of-the-art performance of 82.5% on the more varied and difficult human-authored questions of the CLEVR-Humans dataset. The careful design of our cell encourages compositionality, versatility and transparency. We achieve these properties by defining attention-based interfaces that constrict the cell’s input and output spaces, and so constrain the interactions both between and inside cells in order to guide them towards simple reasoning behaviors. Although each cell’s functionality has only a limited range of possible continuous reasoning behaviors, when chained together in a MAC network, the whole system becomes expressive and powerful. In the future, we believe that the architecture will also prove beneficial for other multi-step reasoning and inference tasks, for instance in machine comprehension and textual question answering.
2 RELATED WORK
There have been several prominent models that address the CLEVR task. By and large they can be partitioned into two groups: module networks, which in practice have all used the strong supervision provided in the form of tree-structured functional programs that accompany each data instance, and large, relatively unstructured end-to-end differentiable networks that complement a fairly standard stack of CNNs with components that aid in performing reasoning tasks. In contrast to modular approaches (Andreas et al., 2016a;b; Hu et al., 2017; Johnson et al., 2017), our model does not require additional supervision and makes use of a single computational cell chained in sequence (like an LSTM) rather than a collection of custom modules deployed in a rigid tree structure. In contrast to augmented CNN approaches (Santoro et al., 2017; Perez et al., 2017), we suggest that our approach supports relational reasoning with better generalization capacity and higher
computational efficiency. These approaches and other related work are discussed and contrasted in more detail in the supplementary material in section C.
3 COMPOSITIONAL ATTENTION NETWORKS
Compositional Attention Networks is an end-to-end architecture for question-answering tasks that sequentially performs an explicit reasoning process by stringing together small building blocks, called MAC cells, each of which is responsible for performing one reasoning step.
We now provide an overview of the model, and a detailed discussion of the MAC cell. The model is composed of three components: an Input unit, the core MAC network, and an output unit. A TensorFlow implementation of the network, along with pretrained models, will be made publicly available.
In this paper we explore the model in the context of VQA. However, it should be noted that while the input and output units are naturally domain-specific and should be designed to fit the task at hand, the MAC network has been designed to be generic and more broadly applicable, and may prove useful in contexts beyond those explored in the paper, such as machine comprehension or question answering over knowledge bases, which in our belief is a promising avenue for future work.
3.1 THE INPUT UNIT
The input unit processes the raw inputs given to the system into distributed vector representations. It receives a text question (or in general, a query) and an image (or in general, a Knowledge Base (KB)) and processes each of them with a matching sub-unit – here a biLSTM for the query and a CNN for the KB. More details can be found in the supplementary material, section A.
At the end of this stage, we get from the query sub-unit a series of biLSTM output states, which we refer to as contextual words, [cw1, ..., cwS], where S is the length of the question. In addition, we get $q = [\overleftarrow{cw_1}, \overrightarrow{cw_S}]$, the concatenation of the hidden states from the backward and forward LSTM passes. We refer to q as the question representation. Furthermore, we get from the Knowledge-Base sub-unit a static representation of the knowledge base. For the case of VQA, it will be represented by a continuous matrix KBV of dimension H, W, d, where H = W = 14 are the height and width of the transformed image, corresponding to each of its regions.
3.2 THE MAC CELL
The MAC network, which is the heart of our model, chains a sequence of small building blocks, called MAC cells, each responsible for performing one reasoning step. The model is provided access to a Knowledge Base (KB), which is, for the specific case of VQA, the given image. Then, upon receiving a query, i.e. a question, the model iteratively focuses, in p steps, on the query’s various parts, each of which reflects in turn the current reasoning step, which we term the control. Consequently, guided by this control, it retrieves the relevant information from the KB, which is then passed to the next cell in a recurrent fashion.
Drawing inspiration from the Model-View-Controller paradigm used in software design and from the commonly exercised separation between control and data paths in computer architecture, the MAC cell is composed of three units: control unit, read unit and write unit. Each has a clearly defined role and an interface through which it interacts with the other units. See figure 2.
The careful design and imposed interfaces that constrain the interaction between the units inside the MAC cell, as described below, serve as a structural prior that limits the space of hypotheses it can learn, thereby guiding it towards acquiring the intended reasoning behaviors. As such, this prior facilitates the learning process and mitigates overfitting issues.
In particular, and similar in spirit to Perez et al. (2017), we allow the question to interact with the Knowledge Base – the image in the case of VQA – only through indirect means: by guiding the cell to attend to different elements in the KB, as well as controlling its operation through gating mechanisms. Thus, in both cases, the interaction between these mediums, visual and textual, or knowledge and query, is mediated through probability distributions, either in the form of attention maps, or as gates, as further detailed below. This stands in stark contrast to many common approaches that fuse the
question and image together into the same vector space through linear combinations, multiplication, or concatenation. Rather, our controlled interaction distills the influence that the query should have in processing the Knowledge Base, casting it onto discrete probability distributions instead.
The MAC cell has been designed to replace the discrete and predefined “modules” used in the modular approach (Andreas et al., 2016a;b; Hu et al., 2017; Johnson et al., 2017). Rather, we create one universal and versatile cell that is applied across all the reasoning steps, sharing both its architecture as well as its parameters, across all of its instantiations. In contrast to the discrete modules, each trained to specialize to some specific elementary reasoning task, the MAC cell is capable of demonstrating a continuous range of possible reasoning behaviors conditioned on the context in which it is applied – namely, the inputs it receives from the prior cell.
Each cell MACi maintains two dual states: control ci and memory mi, both of which are continuous vectors of dimension d. The control ci represents the reasoning operation the MAC cell should accomplish in the current step – focusing only on some aspect of the whole question. This is represented by a weighted-average attention-based sum of the question words. The memory mi represents the current context information deemed relevant to respond to the query, or answer the question. This is represented practically by a weighted average over elements from the KB, or for the case of VQA, regions in the image. m0 and c0 are each initialized to a random vector parameter of dimension d. The memory and control states are passed from one cell to the next in a recurrent fashion, and used in a way reminiscent of Key-Value memory networks (Miller et al., 2016), as discussed below.
3.2.1 THE CONTROL UNIT
The control unit determines the reasoning operation that should be applied at this step. It receives the contextual words [cw1, ..., cwS ], the question representation q, and the control state from the previous MAC cell ci−1, all of which are vectors of dimension d.
We would like to allow our MAC cell to perform a continuously varied and adaptive range of behaviors, as demanded by the question. Therefore, we define the behavior of each cell to be a function of the contextual words [cw1, ..., cwS], weighted-averaged according to the attention distribution that the control unit produces at each step. This allows the cell to adapt its behavior – the reasoning operation it performs – to the question it receives, instead of having a fixed set of predefined behaviors as is the case in competing approaches (Andreas et al., 2016a;b; Johnson et al., 2017).
The formal specification of the control unit is shown in figure 3. The question q is linearly transformed into a vector qi of the same dimension, which in turn is concatenated with the previous control state ci−1 and linearly transformed again to a d-dimensional vector cqi.
$q_i = W_i^{d,d} \cdot q + b_i^d$ (1)
$cq_i = W^{2d,d}\,[q_i, c_{i-1}] + b^d$ (2)
Note that in contrast to all other parameters of the cell, which are shared across its instantiations at the different steps i = 1, ..., p, the parameters $W_i^{d,d}$ and $b_i^d$ are different for each iteration. This
is done to allow each cell to attend more readily to different aspects (i.e. parts) of the questions, depending on the index of the current step – its relative stage in the context of the whole reasoning process.
cqi represents the current reasoning operation we would like to perform in a continuous way, taking into account both the overall meaning of the question qi, as well as the words the model attended to in the previous step, ci−1.
However, we would like to prevent the cell from diverging in the reasoning operations it tries to perform, and instead anchor it back in the question words, by using them to represent the reasoning operation of the current step. We can achieve that by computing an attention distribution cvi over the contextual words [cw1, ..., cwS ] based on their similarity to cqi. Then, summing the contextual words according to the attention distribution cvi will allow us to have a new control state, ci, which is represented again in terms of words from the question. Intuitively, it is the gist of the question that is relevant to the reasoning operation we would like to perform in the current step.
$cv_{i,s} = \mathrm{softmax}(W^{d,1}(cq_i \circ cw_s) + b^1)$ (3a)
$c_i = \sum_{s=1}^{S} cv_{i,s} \cdot cw_s$ (3b)
Finally, the control unit returns the current control state ci, along with an attention map cvi over the contextual words.
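Putting equations (1)–(3) together, a minimal PyTorch sketch of the control unit could look as follows (module and variable names are ours; this is an illustration, not the authors’ TensorFlow implementation):

```python
import torch
import torch.nn as nn

# Minimal sketch of the control unit, eqs. (1)-(3).
# cws: S x d contextual words, q: d question vector, c_prev: d previous control.
class ControlUnit(nn.Module):
    def __init__(self, d, p):
        super().__init__()
        self.q_proj = nn.ModuleList([nn.Linear(d, d) for _ in range(p)])  # per-step W_i (eq. 1)
        self.cq_proj = nn.Linear(2 * d, d)                                # shared W^{2d,d} (eq. 2)
        self.attn = nn.Linear(d, 1)                                       # W^{d,1} (eq. 3a)

    def forward(self, i, q, cws, c_prev):
        q_i = self.q_proj[i](q)                                  # eq. (1)
        cq = self.cq_proj(torch.cat([q_i, c_prev], dim=-1))      # eq. (2)
        cv = torch.softmax(self.attn(cq * cws).squeeze(-1), 0)   # eq. (3a): attention over words
        return (cv.unsqueeze(-1) * cws).sum(dim=0)               # eq. (3b): new control c_i
```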
3.2.2 THE READ UNIT
The Read Unit is provided with access to the knowledge base KBV , along with the previous memory state mi−1 and the current control ci. It is responsible for retrieving relevant content from the Knowledge Base KBV for the reasoning task that the MAC cell should accomplish at this step, which is represented by the current control state ci, as explained above. Figure 4 shows a diagram.
The relevance of the new information is judged in two stages by the “relatedness” of each element in the KB (or for the case of VQA, each region in the image) to either the memory mi−1 that has accumulated relevant information from previous iterations, or to the current control ci, pointing towards the next piece of information that should be taken into account. Here, relatedness is measured by trained linear transformations comparing each element to the previous memory and the current control.
More formally, at the first stage, the interaction between each element $KB_{h,w}$, where $h = 1, \dots, H$ and $w = 1, \dots, W$, and the previous memory $m_{i-1}$ is computed by:
$m'_{i-1} = W^{d,d} \cdot m_{i-1} + b^d$ (4)
$KB'_{h,w} = W^{d,d} \cdot KB_{h,w} + b^d$ (5a)
$(I_{m-KB})_{h,w} = m'_{i-1} \circ KB'_{h,w}$ (5b)
These memory-KB interactions measure the relatedness of each element in the KB to the memory accumulated so far, which holds information that has been deemed relevant to handle previous reasoning steps towards addressing the question. They allow the model to perform transitive inference, retrieving a new piece of information that now seems important in light of the recent memory retrieved in a prior iteration.
However, there are cases that require the model to temporarily ignore current memories when choosing the new information to retrieve. Logical OR is a classical example: when the model has to look at two different objects at the same time, and assuming it stored one of them at the first iteration, it should briefly ignore it, considering new information that is relevant to the question but is unrelated to the memory. In order to achieve such a capability, the read unit concatenates the original KB elements to each corresponding memory-KB interaction, which are then projected back to d-dimensional space (equation 6a):
$I'_{m-KB} = W^{2d,d}\,[I_{m-KB}, KB_{h,w}] + b^d$ (6a)
$I_{cm-KB} = c_i \circ I'_{m-KB}$ (6b)
At the second stage, the read unit compares the current control ci with these memory-KB interactions, in order to focus on the information that is relevant to the current reasoning operation that the MAC cell seeks to accomplish. The result is then passed to a softmax layer yielding an attention map mvi over the KB, which is used in turn to retrieve the relevant information to perform the current reasoning step.
$mv_i = \mathrm{softmax}(W^{d,1} \cdot I_{cm-KB} + b^1)$ (7a)
$m_{new} = \sum_{h,w=1,1}^{H,W} (mv_i)_{h,w} \cdot KB_{h,w}$ (7b)
Finally, the read unit returns the newly retrieved information mnew, along with an attention map mvi over the Knowledge Base.
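A corresponding sketch of the read unit, following equations (4)–(7) over a flattened KB of H·W regions (again, names are ours and the details are illustrative assumptions, not the authors’ implementation):

```python
import torch
import torch.nn as nn

# Minimal sketch of the read unit, eqs. (4)-(7).
# kb: (H*W) x d flattened knowledge base, m_prev: d, c: d.
class ReadUnit(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.m_proj = nn.Linear(d, d)            # eq. (4)
        self.kb_proj = nn.Linear(d, d)           # eq. (5a)
        self.concat_proj = nn.Linear(2 * d, d)   # eq. (6a)
        self.attn = nn.Linear(d, 1)              # eq. (7a)

    def forward(self, kb, m_prev, c):
        I = self.m_proj(m_prev) * self.kb_proj(kb)         # eq. (5b): memory-KB interaction
        I = self.concat_proj(torch.cat([I, kb], dim=-1))   # eq. (6a): re-attach raw KB
        scores = self.attn(c * I).squeeze(-1)              # eqs. (6b), (7a)
        mv = torch.softmax(scores, dim=0)                  # attention map over the KB
        return (mv.unsqueeze(-1) * kb).sum(dim=0)          # eq. (7b): m_new
```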
To give an example of the read unit operation, assume a given question q such as “What object is located left to the blue ball?”, whose associated answer is “cube”. Initially, no cue is provided to the model to attend to that cube, since no direct information about it is present in the question. Instead, based on its comprehension of the question, the model may start by focusing on the blue ball at the first iteration, such that the memory state m1 will capture the blue ball. However, in the second iteration, the control unit, after re-examining the question, may realize it should now look left, storing the word “left” in c2. Then, when considering both m1 and c2, the read unit will realize it should perform a reasoning operation corresponding to the word “left” (stored in c2) given a memory representing the blue ball in m1, thereby allowing it to look left of the blue ball and find the cube.
3.2.3 THE WRITE UNIT
The Write Unit is responsible for creating the new memory state mi that will reflect all the information considered to be important to answer the question so far, i.e. up to the current iteration in the
reasoning process. It receives the last memory state mi−1 from the previous MAC cell, along with the newly retrieved information from the read unit in the current iteration, mnew. See figure 5 for a diagram.
In the main design we have explored, merging the new information with the previous memory state is done simply by a linear transformation.
$m'_i = W^{2d,d}\,[m_{new}, m_{i-1}] + b^d$ (8)
In addition, we have explored two variations of this design. The first, self-attention, allows considering any previous memories rather than just the last one mi−1, thus providing the network with the capacity to model non-sequential reasoning processes. The second variation is adding gating mechanisms to the writing unit. These may allow the model to dynamically adjust the practical length of the computation to the question complexity and stabilize the memory content throughout the sequential network (similarly to GRUs and LSTMs).
Self-Attention. The architecture we have presented so far allows the model to perform reasoning steps in a sequence, passing control and memory states from one cell to the following. However, we would like to grant the system more flexibility. Particularly, we would like to allow it to capture more complicated reasoning processes such as trees and graphs – directed acyclic graphs (DAGs) in particular – where several branches of reasoning sub-processes are merged together in later stages. Indeed, the CLEVR dataset includes cases where the questions embody a tree-like reasoning process, rather than just a sequence, which we would like to address correctly in our model.
We achieve this by adding self-attention connections between each MAC cell and all the prior cells. Since each cell can look at all the prior reasoning steps and their corresponding memories retrieved from the Knowledge Base, it can virtually capture any directed acyclic graph, while still having a physically sequential layout.
More formally, the current MAC cell, at the ith iteration, is granted access to c1, ..., ci−1 along with the corresponding m1, ...,mi−1, which have been computed by the prior MAC cells. It begins by computing the similarity between ci and c1, ..., ci−1, and uses it to derive an attention map over the prior MAC cells, sai,j for j = 1, ..., i − 1. This represents the relevance of the jth prior reasoning step to the current one, i (equation 9a).
Then, we average the previous memories according to this resulting attention map sai,j. We obtain msa, representing the information from all the other reasoning steps that is relevant to the current one (equation 9b).
This resembles the approach of Key-Value networks (Miller et al., 2016). The similarity between control states, corresponding to the reasoning operations that are performed in each prior step, allows the model to select which memories should be taken into account, when creating the new memory – namely, which branches of the reasoning process should be merged together at this point.
$sa_{i,j} = \mathrm{softmax}(W^{d,1}(c_i \circ c_j) + b^1)$ (9a)
$(m_{sa})_i = \sum_{j=1}^{i-1} sa_{i,j} \cdot m_j$ (9b)
Finally, we use $m_{sa}$ along with $m'_i$ to compute $m''_i$, the new memory content in this variation.
$m''_i = W^{2d,d}\,[(m_{sa})_i, m'_i] + b^d$ (10)
Memory Gate. The MAC network presented so far has some fixed number p of concatenated MAC cells, representing the length of the overall reasoning process we perform. However, not all questions require a reasoning sequence of the same length. Some questions are simpler, while others are more complex.
Motivated by this observation, we add a gate over the new memory computed at each step, that may selectively keep content of the previous memory mi−1 unchanged. Practically, the gate functions in a similar way to a highway network (Srivastava et al., 2015), where the gate value is conditioned on the current reasoning operation, ci.
$c'_i = W^{d,d} \cdot c_i + b^d$ (11a)
$m_i = \mathrm{sigmoid}(c'_i) \cdot m_{i-1} + (1 - \mathrm{sigmoid}(c'_i)) \cdot m''_i$ (11b)
The write unit returns the new memory state mi, which will be passed along with ci to the next MAC cell.
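A minimal sketch of the write unit in its gated variant, following equations (8) and (11); the self-attention variant (equations (9)–(10)) is omitted for brevity, and names are ours:

```python
import torch
import torch.nn as nn

# Minimal sketch of the gated write unit, eqs. (8) and (11).
class WriteUnit(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.merge = nn.Linear(2 * d, d)   # eq. (8)
        self.gate = nn.Linear(d, d)        # eq. (11a)

    def forward(self, m_new, m_prev, c):
        m = self.merge(torch.cat([m_new, m_prev], dim=-1))   # eq. (8)
        g = torch.sigmoid(self.gate(c))                      # eq. (11a): control-conditioned gate
        return g * m_prev + (1 - g) * m                      # eq. (11b)
```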
3.2.4 DISCUSSION
Overall, when designing the MAC cell, we have attempted to formulate the inner workings of an elementary yet generic reasoning skill: the model decomposes the problem into steps, focusing on one at a time. At each such step, it takes into account:
• The control ci: Some aspect of the task – pointing to the future work that is left to be done.
• The previous memory or memories: The partial solution or evidence the cell has acquired so far – pointing to the past work that has already been achieved.
• The newly retrieved information mnew: that is retrieved from the knowledge base KB and may or may not be transitively related to that partial solution or evidence - the present, or current work.
Considering these three sources of information together, the cell finally adds the new information up into its working memory, mi, progressing one more step towards the final answer.
3.3 THE OUTPUT UNIT
The output unit receives the question representation q, along with the memory state passed from the last MAC cell mp, where p is the number of MAC cells in the network – representing the number of reasoning steps in the whole process. It inspects both and predicts an answer based on their concatenation. Intuitively, we would like our model to consider both the question as well as the relevant information that has been progressively retrieved from the KB, deemed the necessary information to answer it.
Note that considering both q and mp is critical to answer the question. While mp represents the information collected from KB, we still need to recall what has been asked about it to be able to answer accordingly. This is especially true in our case, when all other interactions between the question and the KB are mediated through attention distributions, rather than being transformed into a shared continuous vector space.
The prediction is built out of a standard 2-layer fully-connected softmax-based classifier with hidden dimension d and output dimension that matches the number of possible answers in the dataset. The classifier receives [mp, q] as input and returns a probability distribution over the answers.
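For instance, a minimal sketch of such a classifier (the hidden nonlinearity and the illustrative sizes are our assumptions):

```python
import torch
import torch.nn as nn

d, n_answers = 512, 28   # illustrative sizes (d is the hidden dimension)
classifier = nn.Sequential(nn.Linear(2 * d, d), nn.ELU(), nn.Linear(d, n_answers))
m_p, q = torch.randn(d), torch.randn(d)            # final memory and question vectors
logits = classifier(torch.cat([m_p, q], dim=-1))   # softmax over logits gives the answer distribution
```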
4 EXPERIMENTS
We evaluate our model on the recent CLEVR dataset (Johnson et al., 2016). CLEVR is a synthetic dataset consisting of 700K tuples; each consists of a 3D-rendered image featuring objects of various shapes, colors, materials and sizes, coupled with compositional multi-step questions that measure performance on an array of challenging reasoning skills such as following transitive relations, counting objects and comparing their properties. In addition, each question is associated with a formal program, specifying the reasoning operations that should be performed to compute the answer, among 28 possibilities.
We first perform experiments on the original 700k CLEVR dataset (Johnson et al., 2016), comparing to prior work. As shown in table 1, our model matches or outperforms all existing models both in overall accuracy, as well as in each category, testing different reasoning skills. In particular, for the overall performance, we achieve 98.94% accuracy, more than halving the error rate of the prior best model, FiLM (Perez et al., 2017).
Counting and Numerical Comparison. Remarkably, our performance on questions testing counting and numerical comparisons is significantly higher than the competing models, which consistently struggle on this question type. Again, we nearly halve the corresponding error rate. These results demonstrate the aptitude of attention mechanisms to perform counting, reduction and aggregation, in contrast to alternative, CNN-based approaches.
Training Length and Computational Efficiency. We examine the learning curves of our model and competing models. We have trained all models on the same architecture and used the authors’ code for the other models. Aiming at equal settings for comparison, we ran all models, including ours, with learned random word vectors. In order to make sure the results are statistically significant, we ran each model multiple (10) times, and plotted the averages and confidence intervals (figure 4). The results show that our model learns significantly faster than the other leading methods, FiLM (Perez et al., 2017) and PG+EE (Johnson et al., 2017). While we do not have learning curves for the Relational Network model, Santoro et al. (2017) report approximately 1.4 million iterations to achieve 95.5% accuracy, which is equivalent to approximately 125 epochs, whereas our model achieves comparable accuracy after only 3 epochs, yielding a 40x reduction in the length of the training process.
Naturally, the smaller number of required training steps also translates to comparably shorter training time. Perez et al. (2017) report training time of 4 days, equivalent to 80 epochs, to reach accuracy of 97.7%. In contrast, we achieve higher accuracy in 6 epochs, taking 9.5 hours overall, leading to 10x reduction in training time.
4.1 DATA EFFICIENCY
We have explored the performance of our approach and other leading approaches on smaller subsets of the CLEVR dataset, in order to study the ability of models to generalize from smaller amounts of data. We sampled random subsets of CLEVR with 10%, 25% and 50% of its original 700k size, and used them to train our model and 3 other models proposed for the CLEVR task: FiLM (Perez et al., 2017), the strongly-supervised PG+EE (Johnson et al., 2017), and stacked-attention networks (Johnson et al., 2017; Yang et al., 2016).
As shown in figure 4, our model outperforms the other models by a wide margin for all subsets of the CLEVR dataset. For 50% of the data, equivalent to 350k samples, other models obtain accuracies ranging between 70% and 92%, while our model achieves 97.9%. The gap becomes larger as the dataset size reduces: for 25% of the data, equivalent to 175k samples, performance of other models is between 50% and 77%, while our model maintains a high 95.4% accuracy.
Finally, for 10% of the data – 70k samples, still a sizeable amount – our model is the only one that manages to generalize, with performance of 84.7% on average, whereas the other three models fail, achieving 47.6%–57.5%. Note that, as pointed out by Johnson et al. (2016), a simple baseline that predicts the most frequent answer for each of the question types already achieves 42.1%, suggesting that answering half of the questions correctly means that the competing models barely learn to generalize from the smaller dataset. These results demonstrate the robustness of our architecture and its key role as a structural prior guiding our network to learn the intended reasoning skills.
4.2 CLEVR HUMANS - NATURAL LANGUAGE QUESTIONS
We analyze our model performance on the CLEVR-Humans dataset (Johnson et al., 2017), consisting of natural language questions collected through crowdsourcing. As such, the dataset has diverse vocabulary and linguistic variations, and it also demands more varied reasoning skills.
Since the training set is relatively small, consisting of 18k samples, we use it to finetune a model pretrained on the standard CLEVR dataset. However, since most of the vocabulary in CLEVR-Humans is not covered by CLEVR, we do not train the word vectors during the pre-training stage, so as to prevent drift in their meaning compared to other uncovered words in CLEVR-Humans that may be semantically related.
As shown in table 2, our model achieves state-of-the-art performance on CLEVR-Humans both before and after fine-tuning. It surpasses the next-best FiLM model (Perez et al., 2017) by 6.6%, achieving 82.5%.
The results substantiate the model’s robustness against linguistic variations and noise, as well as its ability to adapt to diverse vocabulary and varied reasoning skills. Arguably, the soft attention performed over the question words allows the model to focus on the words that are most critical to answer the question and translate them to corresponding reasoning operations, giving less attention to irrelevant linguistic variations.
4.3 ABLATIONS
Based on the validation set, we have conducted an ablation study on our model to better understand the contribution of each of its components to the overall performance. We tested each setting on the standard 700K CLEVR dataset as well as on a 10% subset of the dataset. See table 3 for the numerical results. In addition, figure 4.3 presents the training curves for the different settings trained on the standard dataset. Overall, the results demonstrate the robustness of the model to hyperparameter variations such as network dimension and length, as well as the impact of different aspects and components of MAC on its performance.
Network Length. We have tested the model performance as a function of the network’s length – the number of MAC cells that were sequenced together. The results show the positive correlation between the network length and its performance. We can see that for 1 cell the scores are relatively low – 75%, but adding at least one more cell leads to a significant increase in performance above 95%. The performance keeps improving up to lengths 8-16 that achieve 98.9-99.1%. The results also teach us about the complexity of the dataset, by showing the relatively significant benefits of having at least 4 cells, each modeling a reasoning step.
Network Dimension. We have varied the state dimension to check the robustness of the model to hyperparameters. The results on the standard CLEVR dataset show that the model is able to maintain high performance with a dimension of 128, albeit after a longer training process, achieving 97.6%, compared to 98.94% achieved with a dimension of 512. However, for 10% of CLEVR, the larger 512-dimension allows an accuracy increase of 7.5% over a dimension of 128.
Weight Sharing. We have tested the impact that sharing weights between cells has on the model performance, for a network of length p = 12. The results show that for the standard dataset there is only a small difference of 1% between these settings. However, with less data, we see a much more significant drop of 16.9% in the unshared-parameters setting compared to the shared one. Indeed, we observe that a model with fewer parameters is more data-efficient and has a lower tendency to overfit the data.
Control Unit. We have performed several ablations in the control unit to understand its contribution to the overall model performance. Based on the results, first, we can see that the question information is crucial for the model to handle the questions, as indicated by the low performance of the model when no control signal is used whatsoever. Second, we have tested the model performance when using the continuous control state computed by equation (2) in section 3.2.1, without word-attention, in order to understand its relative contribution. Based on the results, we can indeed see that using word-attention is useful for accelerating the training process and achieving higher accuracies, both for the standard dataset as well as for the small subset, where using word-attention increases results by 21.4%. We also see that using the “contextual words” produced by the question-unit LSTM is useful in accelerating the model performance, when compared to using the word vectors directly.
Reading Unit. We have conducted several ablations for the reading unit to better understand its behavior and contribution to the performance of the model. The standard MAC reading unit uses the control state – which averages the question words based on attention distributions computed per each reasoning step. In this ablation experiment, we have tested using the full question representation q instead across all reasoning steps, to gain a better understanding of the contribution of word-attention to the model performance. Indeed, we can see that using q rather than the control state ci results in a significant drop in performance – 19.4% for the full CLEVR dataset and 19.5% for 10% of the data.
We have conducted an additional ablation experiment to better understand the contribution of using the KB features directly in the first-stage information retrieval process described in section 3.2.2, compared to using only the dot-products of the KB elements with the previous memory state mi−1. For the full CLEVR dataset, we can see that this component has only a small impact on the final performance – ultimately resulting in a 0.06% performance difference. However, for 10% of the data, we can see that the difference in performance when ablating this component is much larger – 11.2%.
Writing Unit Ablations. In our main MAC model variant, the memory unit merges the new information mnew with the previous memory state mi−1 by combining them through a linear transformation. In this experiment, we have explored other variations, such as assigning mnew to mi directly – ignoring previous memories – or doing a linear transformation based on mnew only. The results show that in fact such a variant is only slightly worse than our main variant – by 0.4%. We also conducted an experiment in which we merge the new information with the previous memory simply by having a gate that computes a weighted average of them. The results show that this variant performs equivalently to our standard linear-transformation variant.
Writing Unit Additions. We have explored the impact of the writing unit variants described in section 3.2.3 – adding self-attention, gating mechanisms, or both, compared to our standard main model that uses a linear transformation to merge the newly retrieved information mnew with the previous memory mi. For the complete CLEVR dataset we can see that indeed both these variants are very helpful in increasing the model performance. Compared to our standard MAC model that achieves 98.94% on the validation set, self-attention yields accuracy of 99.23%, gating yields 99.36% and adding both achieves 99.48%.
Output Unit. In our standard model, the final predictions made in the output unit are based on the final memory state mp as well as the question representation q (which stands for the final hidden states of the backward and forward passes of the LSTM). We have explored the contribution of the latter by testing the model performance when the prediction is based on the memory alone, for the complete and 10% datasets. We can see that in both settings basing the model’s predictions on the question representation allows faster training and higher accuracies. Notable is the gap in performance for 10% CLEVR – a 19.8% increase from using the question representation to make predictions. These results are intuitively reasonable, since the model is structured such that the memory holds only information that was retrieved from the image. Thus, questions that ask, for instance, about different aspects (such as color or shape) of the same object in the image may result in the same memory content, which thus does not directly contain enough information to answer such questions.
Position. In our standard model, similarly to the practice of competing models (Santoro et al., 2017; Perez et al., 2017; Hu et al., 2017), we have concatenated positional information to each region of the image, in order to increase the model’s capability to perform spatial reasoning. We have explored both simple linear maps over a constant range [−1, 1] as well as the more complex positional encoding suggested by Vaswani et al. (2017). However, the results for both the standard dataset and the 10% version show a negligible improvement at best when adding positional encoding information, demonstrating the capability of MAC to perform spatial reasoning without data augmentation.
Gate Bias Initialization. For our model variant with a gating mechanism (described in section 3.2.3) we have tested the effect of setting different values for the gate bias: −1, 0 and 1. For −1 the model is initialized to be biased toward keeping the previous memory value, whereas for 1 it will be biased toward using the new memory instead. We can see that for the complete dataset setting the bias to 1 is optimal – apparently since the model has enough data to learn to apply each cell effectively. In contrast, for the small 10% CLEVR data, setting the bias to 0 shows better performance, biasing the model toward using fewer cells overall, which ultimately results in a theoretically simpler model that can fit less data more effectively.
4.4 INTERPRETABILITY
We have looked into attention maps over the image and question that the model produces during its computation and provide a few examples in figure 4.4. The first example shows us how the model parses the question in steps, first focusing on the main entity that the question is about, then on
the relation of this entity to the “brown matte thing”, which is then located in the image. Finally, the model correctly focuses on the small brown cube and predicts the right answer – brown.
The second example shows a model with 4 cells instead of 6, which similarly parses the question in iterations and focuses on the relevant objects at each step, though we can see that the reasoning process looks somewhat different when the MAC network has fewer cells.
The last example shows how the model handles counting and OR operations. It starts by identifying the task – computing a number – and then attends to the red objects as well as the cylinder, one at a time, ultimately allowing it to respond correctly with the answer 2.
5 CONCLUSION
We have given a first demonstration of how a sequence of Memory, Attention and Control (MAC) cells combined into a Compositional Attention Network provides a very effective tool for neural reasoning. In future work, we wish to explore this promising architecture for other tasks and domains, including real-world VQA, machine comprehension and textual question answering.
A DETAILS OF INPUT UNIT
The input unit processes the raw inputs given to the system into distributed vector representations. It receives a text question (or in general, a query), and an image (or in general, a Knowledge Base (KB)) and processes each of them with a matching sub-unit. Here we provide details of the Query Unit and the Image Unit used in this work.
A.0.1 THE QUERY UNIT
We encode a query of S words into a continuous representation using a bidirectional LSTM (Hochreiter & Schmidhuber, 1997; Graves et al., 2013). Each word is associated with a word embedding ws, where s = 1, ..., S. In our case, we use GloVe word embeddings (Pennington et al., 2014). Then, these embeddings are processed by a bidirectional LSTM of dimension d that outputs:
• a matching sequence of d-dimensional output states, which we refer to as contextual words, [cw1, ..., cwS ]
• a d-dimensional hidden state $q = [\overleftarrow{cw_1}, \overrightarrow{cw_S}]$, the concatenation of the hidden states from the backward and forward passes. We refer to q as the question representation.
Intuitively, each contextual word cws represents the meaning of the s-th word in the context of the question, while the hidden state q represents the overall (compositional) meaning of the question.
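To make the query unit concrete, here is a minimal PyTorch sketch of it (our own illustrative code, not the released implementation; splitting the dimension d between the two LSTM directions is our assumption):

```python
import torch
import torch.nn as nn

class QueryUnit(nn.Module):
    # Encodes embedded question words into contextual words and a question vector q.
    def __init__(self, embed_dim=300, d=512):
        super().__init__()
        # Bidirectional LSTM; each direction has dimension d // 2 so that the
        # concatenated outputs are d-dimensional, matching the notation above.
        self.lstm = nn.LSTM(embed_dim, d // 2, bidirectional=True, batch_first=True)

    def forward(self, word_embeddings):           # (batch, S, embed_dim)
        cw, (h_n, _) = self.lstm(word_embeddings)
        # cw: contextual words [cw_1, ..., cw_S], shape (batch, S, d)
        # h_n: final hidden states of the two directions, shape (2, batch, d // 2);
        # concatenate backward then forward final states, matching q = [<-cw_1, ->cw_S].
        q = torch.cat([h_n[1], h_n[0]], dim=-1)   # question representation, (batch, d)
        return cw, q
```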
A.0.2 THE IMAGE UNIT
Given an image, and following prior work on CLEVR (Santoro et al., 2017; Perez et al., 2017), we extract conv4 features from ResNet101 (He et al., 2016) pretrained on ImageNet (Krizhevsky et al., 2012), which we treat as a fixed initial representation of the image, x, of dimension H,W,C where H = W = 14 are the height and width of the transformed image and C = 1024 is the number of channels. Each feature xh,w represents one region in the original image.
Similar to prior work (Hu et al., 2017; Santoro et al., 2017; Perez et al., 2017), we would like to allow our model to reason explicitly about spatial locations, as required by many of the questions in CLEVR, and therefore we concatenate to this representation a spatial map that represents each of the positions in the image. However, in contrast to prior work that uses a linear meshgrid feature map with 2 features, h and w, ranging from −1 to 1, we use the positional encoding scheme proposed by Vaswani et al. (2017), to allow a better representation of the positions:
$p_{(h,2i)} = \sin\!\left(h / 10000^{2i/p_d}\right) \qquad p_{(h,2i+1)} = \cos\!\left(h / 10000^{2i/p_d}\right)$
and similarly for w, where pd is a hyperparameter. Overall, the positional encoding of a feature at position (h,w) is [ph, pw], the concatenation of the positional encodings for h and w.
This positional encoding scheme allows a better correspondence between the distance of two positions (x, y) and (x′, y′) in the image and the vector similarity of their positional encodings, even when pd is larger than two.
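As an illustration, the spatial map described above could be computed as follows (a hedged sketch; the function name and the choice to interleave sin/cos in alternating columns are ours, and we assume pd is even):

```python
import torch

def positional_encoding(H, W, pd):
    # Sinusoidal encoding of Vaswani et al. (2017), computed separately for the
    # h and w axes and concatenated into an (H, W, 2 * pd) spatial map.
    def encode(length):
        pos = torch.arange(length, dtype=torch.float32).unsqueeze(1)  # (length, 1)
        i = torch.arange(pd // 2, dtype=torch.float32).unsqueeze(0)   # (1, pd / 2)
        angles = pos / (10000 ** (2 * i / pd))
        enc = torch.zeros(length, pd)
        enc[:, 0::2] = torch.sin(angles)   # p_(h, 2i)
        enc[:, 1::2] = torch.cos(angles)   # p_(h, 2i + 1)
        return enc

    ph = encode(H).unsqueeze(1).expand(H, W, pd)   # encoding of the h coordinate
    pw = encode(W).unsqueeze(0).expand(H, W, pd)   # encoding of the w coordinate
    return torch.cat([ph, pw], dim=-1)             # [p_h, p_w] at each position
```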
We then concatenate the obtained spatial map with x, receiving a spatially-aware image representation xp. Then, we pass this representation through two CNN layers with d output channels and obtain a final representation of the image, which we refer to as our Visual Knowledge Base (KBV) that is used in further components of the model.
B IMPLEMENTATION AND TRAINING DETAILS
For the question processing, we use GloVe (Pennington et al., 2014) word vectors with dimension 300. For the image processing, we extract conv4 features from ResNet101 (He et al., 2016) pretrained on ImageNet (Krizhevsky et al., 2012), with dimension H,W,C where H = W = 14 and C = 1024, followed by 2 CNN layers with kernel size 2. We use a MAC network with p = 12 cells, and train it using Adam (Kingma & Ba, 2014), with learning rate $10^{-4}$. We train our model for 10–20 epochs, with batch size 64, and use early stopping based on validation accuracies. During training, the moving averages of all weights of the model are maintained with an exponential decay rate of 0.999. At test time, the moving averages instead of the raw weights are used. We use dropout 0.85, and ELU (Clevert et al., 2015), which in our experience has shortened the training process compared to ReLU. The training process takes roughly 10–20 hours on a single Titan X GPU.
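For concreteness, the weight averaging described above can be maintained as in the following sketch (illustrative code with hypothetical names, not the released implementation):

```python
import torch

class EMA:
    # Maintains exponential moving averages of model parameters (decay 0.999,
    # as in the training details above); averaged weights are used at test time.
    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.shadow = {n: p.detach().clone()
                       for n, p in model.named_parameters() if p.requires_grad}

    def update(self, model):
        # Call once per training step, after the optimizer update.
        with torch.no_grad():
            for n, p in model.named_parameters():
                if n in self.shadow:
                    self.shadow[n].mul_(self.decay).add_(p, alpha=1 - self.decay)

    def copy_to(self, model):
        # Overwrite the raw weights with their moving averages for evaluation.
        with torch.no_grad():
            for n, p in model.named_parameters():
                if n in self.shadow:
                    p.copy_(self.shadow[n])
```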
C FURTHER DISCUSSION OF RELATED WORK
In this section we provide a detailed discussion of related work. Several models have been applied to the CLEVR task. These can be partitioned into two groups: module networks, which use the strong supervision provided as a tree-structured functional program associated with each instance, and end-to-end, fully differentiable networks that combine a fairly standard stack of CNNs with components that aid them in performing reasoning tasks. We also discuss the relation of MAC to other approaches, such as memory networks and neural computers.
C.1 MODULE NETWORKS
The modular approach (Andreas et al., 2016a;b; Hu et al., 2017; Johnson et al., 2017) first translates the given question into a tree-structured action plan, aiming to imitate the ground-truth programs provided as a form of strong supervision. Then, it constructs a tailor-made network that executes the plan on the image in multiple steps. This network is composed of discrete units selected out of a collection of predefined modules, each responsible for an elementary reasoning operation, such as identifying an object’s color, filtering objects by their shape, or comparing two amounts. Each module has its own set of learned parameters (Johnson et al., 2017), or even a hand-crafted design (Andreas et al., 2016a) to guide it towards its intended behavior.
Overall, this approach makes discrete choices at two levels: the identity of each module – the behavior it should learn among a fixed set of possible types of behaviors, and the network layout – the way in which these modules are wired together to compute the answer progressively. Hence, their differentiability is confined to the boundaries of a single module, disallowing end-to-end training.
Several key differences exist between our approaches. First, our model replaces the fixed module collection with one versatile and universal cell that shares both its architecture and parameters across all of its instantiations, and is applied across all the reasoning steps. Second, it replaces the dynamic recursive tree structures with a sequential topology, augmented by soft attention mechanisms, as done in Bahdanau et al. (2014). This confers on our network a virtual capacity to represent arbitrarily complex Directed Acyclic Graphs (DAGs) while still having an efficient and readily deployed physical sequential structure. Together, both of these relaxations allow us to effectively train our model end-to-end by backpropagation alone, whereas module networks demand a more involved training scheme that relies on the strongly-supervised programs at the first stage, and on various Reinforcement Learning (RL) techniques at the second. Furthermore, while our model can be trained without the strong supervisory programs, developing adaptive reasoning skills to address the task it is trained for, the modular approach's reliance on the questions' structured and formal representation hinders its applicability to real-world tasks.
C.2 AUGMENTED CONVOLUTIONAL NEURAL NETWORKS
Alternative approaches for the CLEVR task that do not rely on the provided programs as a strong supervision signal are Santoro et al. (2017) and Perez et al. (2017). Both complement standard multi-layer Convolutional Neural Networks (CNNs) with components that aid them in handling compositional and relational questions.
Relational Networks. Santoro et al. (2017) appends a Relation Network (RN) layer to the CNN. This layer inspects all pairs of pixels in the image, thereby enhancing the network capacity to reason over binary relations between objects. While this approach is very simple and elegant conceptually, it suffers from quadratic computational complexity, in contrast to our and other leading approaches. But beyond that, closer inspection reveals that this direct pairwise comparison might be unnecessary. Based on the analogy suggested by Santoro et al. (2017), according to which pixels are equivalent to objects and their pairwise interactions to relations, a RN layer attempts to grasp the induced graph between objects all at once in one shallow and broad layer. Conversely, our attention-based model proceeds in steps. It basically compares the image to its current memory and control for this step, aggregates the attended regions into the new memory, and repeats the process. By the same analogy, it traverses a narrow and deep path, progressively following transitive relations. Consequently, our model exhibits a relational capacity while circumventing the computational inefficiency.
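To make the quadratic cost concrete, the following is a schematic rendering of an RN-style layer as we describe it above (our paraphrase in code, not the code of Santoro et al. (2017); g and f stand for their small MLPs):

```python
import torch

def relation_network_layer(objects, q, g, f):
    # objects: (batch, N, d) image regions; q: (batch, d) question vector.
    # g and f are small MLPs. Every one of the N * N pairs is processed,
    # which is the source of the quadratic complexity discussed above.
    batch, N, d = objects.shape
    oi = objects.unsqueeze(2).expand(batch, N, N, d)
    oj = objects.unsqueeze(1).expand(batch, N, N, d)
    qq = q.unsqueeze(1).unsqueeze(1).expand(batch, N, N, q.size(-1))
    pair_features = g(torch.cat([oi, oj, qq], dim=-1))   # (batch, N, N, h)
    return f(pair_features.sum(dim=(1, 2)))              # aggregate over all pairs
```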
FiLM. FiLM (Perez et al., 2017) is a recently proposed method that interleaves standard CNN layers that process the given image with linear layers, reminiscent of layer normalization techniques (Ba et al., 2016; Ioffe & Szegedy, 2015). Each of these layers, called FiLM, is conditioned on the question: the question words are processed by a GRU, and its output is linearly transformed into matching scaling and bias parameters for each of the CNN layers, tilting its activations to reflect the specifics of the given question and affect the computation done over the image.
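Schematically, a FiLM layer as we describe it can be sketched as follows (our illustrative paraphrase, not the code of Perez et al. (2017)):

```python
import torch
import torch.nn as nn

class FiLMLayer(nn.Module):
    # Schematic FiLM conditioning: the question vector is mapped to per-channel
    # scale and shift parameters that modulate the CNN feature maps.
    def __init__(self, d_question, channels):
        super().__init__()
        self.to_gamma_beta = nn.Linear(d_question, 2 * channels)

    def forward(self, feature_maps, q):            # (batch, C, H, W), (batch, d)
        gamma, beta = self.to_gamma_beta(q).chunk(2, dim=-1)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)  # broadcast over H and W
        beta = beta.unsqueeze(-1).unsqueeze(-1)
        # The same affine transform is applied at every spatial location,
        # independent of the content there -- unlike attention.
        return gamma * feature_maps + beta
```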
Similarly to our model, this approach features distant modulation between the question and the image, where rather than being fused together into the same vector space, the question can affect the image processing only through constrained means – for the case of FiLM, linear transformations. However, since the same transformation is applied to all the activations homogeneously, agnostic to both their spatial location as well as the feature values, this approach does not allow the question to differentiate between regions in the image based on the objects or concepts they represent – on the content of the image. This stands in stark contrast to our attention-based model, which readily allows and actually encourages the question to inform the model about relevant regions to focus on. We speculate that this still distant, yet more direct interaction between the question and the data, or image, for the case of VQA, facilitates learning and increases generalizability. It may be more suitable to VQA tasks, and CLEVR in particular, where the questions demand the responder to focus on specific objects, and reason about their properties or relations, rather than respond based only on a holistic view of the image that may lead to sub-optimal results (Yang et al., 2016), as is the case for FiLM. Indeed, as demonstrated in section 4, there is significant evidence showing our model's better generalization capacity, allowing it to achieve high accuracies much faster, and from less data than FiLM and other competing methods.
C.3 MEMORY AND ATTENTION
Our architecture draws inspiration from recent research on memory and attention (Kumar et al., 2016; Xiong et al., 2016; Graves et al., 2014; 2016). Kumar et al. (2016); Xiong et al. (2016) propose the Dynamic Memory Network model that proceeds in an iterative process, applying soft attention to retrieve relevant information from a visual or textual KB, which is in turn accumulated into memory passed from one iteration to the next. However, in contrast to our model, it views the question as an atomic unit, whereas our model decomposes it into a multi-step action plan informing each cell in our sequential network about its current objective. Another key difference is the distant interaction between the question and the KB that characterizes our model. Conversely, DMN fuses their corresponding representations together into the same vector space.
Graves et al. (2016; 2014) complement a neural network with a memory array it can interact with through the means of soft attention. Analogously to our model, they partition the model into a core neural network, called the controller, as well as reading and writing heads that interact with an external memory array. However, a main point distinguishing our model from this approach is the use of dynamic memory, as in Kumar et al. (2016), instead of a fixed-array memory. Each MAC cell is associated with a memory state, our reading unit inspects only the latest memory passed from the previous state, and our writing unit creates a new memory state rather than writing to multiple slots in a fixed shared external memory. Notably, our approach is much more reminiscent of the widely successful RNN structure than of Graves et al. (2016; 2014).
Finally, our approach has potential ties to the VQA models of Hu et al. (2017); Lu et al. (2016), which also attend both to the question words and the image while progressively addressing the given question. However, both of these models have distinct specialized designs for each of their attention layers or modules, and have a discrete or fixed layout in which they are composed together. In contrast, our approach relaxes both of these limitations, having one universal cell design and one universal self-attending sequential network layout.
C.4 ATTENTION VS. CONVOLUTION
Compared to other leading methods, our model stands out by being heavily based on soft attention, whereas most competing approaches are CNN-based and, surprisingly, lack any attention mechanism. Since attention is commonly used in models designed for standard VQA (Antol et al., 2015; Gupta, 2017; Lu et al., 2016; Yang et al., 2016), it is reasonable to assume that it would be beneficial to incorporate such methods into visual reasoning systems for the CLEVR task as well. In fact, attention mechanisms should be especially useful for multi-step reasoning questions such as those present in CLEVR. Such questions refer to several relations between different objects in the image and feature a compositional structure that may be approached one step at a time. Thus, it should be beneficial for a cogent responder to have the capacity to selectively focus on one or some objects at each step, traversing the relevant relational links one after the other, both at the image level and at the question level.
Moreover, attention mechanisms enhance our model’s ability to perform reasoning skills that pertain to the aggregation of information across different regions, such as counting, finding a maximum value, or performing other reduction operations over information that is spread across the image. Indeed, as discussed in section 4, all existing models for visual reasoning, most of which lack any attention mechanism, struggle with the counting and numerical comparison questions present in CLEVR. Conversely, our model proves much more capable of performing these reasoning skills, outperforming the other approaches by a wide margin. Noticeably, incorporating soft attention into our model makes it much more adept at performing such aggregation reasoning skills, successfully addressing this type of question.
Finally, as pointed out by Lu et al. (2016); Yang et al. (2016), soft attention confers on the model robustness to noise introduced by irrelevant information present in the image, and a higher capacity for handling a larger and more diverse vocabulary, the latter being demonstrated in section 4. It allows the model to separate the wheat from the chaff, selectively attending to the relevant information only, and arguably, being more resilient to both visual and linguistic variations. | 1. What is the focus and contribution of the paper on visual reasoning and question answering?
2. What are the strengths of the proposed Compositional Attention Networks (CAN) model?
3. What are the weaknesses of the paper regarding its experimental analysis and comparisons with other works?
4. Do you have any concerns about the necessity of using both question and memory information in the model?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
Summary:
The paper presents a new model called Compositional Attention Networks (CAN) for visual reasoning. The complete model consists of an input unit, a sequence of the proposed Memory, Attention and Composition (MAC) cell, and an output unit. Experiments on CLEVR dataset shows that the proposed model outperforms previous models.
Strengths:
— The idea of building a compositional model for visual reasoning and visual question answering makes a lot of sense, and, I think, is the correct direction to go forward in these fields.
— The proposed model outperforms existing models pushing the state-of-the-art.
— The proposed model is computationally cheaper and generalizes well with less training data as compared to existing models.
— The proposed model has been described in detail in the paper.
Weaknesses:
— Given that the performance of the state-of-the-art on the CLEVR dataset is already very high (<5% error) and the performance numbers of the proposed model are not very far from the previous models, it is very important to report the variance in accuracies along with the mean accuracies to determine if the performance of the proposed model is statistically significantly better than the previous models.
— It is not clear which part of the proposed model leads to how much improvement in performance. Ablations studies are needed to justify the motivations for each of the components of the proposed model.
— Analysis of qualitative results (including attention maps, gate values, etc.) is needed to justify if the model is actually doing what the authors think it should do. For example, the authors mention an example on page 6 at the end of Section 3.2.2, but do not justify if this is actually what the model is doing.
— Why is it necessary to use both question and memory information to answer the question even when the question was already used to compute the memory information? I would think that including the question information helps in learning the language priors in the dataset. Have the authors looked at some qualitative examples where the model which only uses memory information gives an incorrect answer but adding the question information results in a correct answer?
— Details such as using Glove word embeddings are important and can affect the performance of models significantly. Therefore, they should be clearly mentioned in the main paper while comparing with other models which do not use them.
— The comparisons of the number of epochs required for training and of the training time are only meaningful with fixed batch sizes and CPU/GPU configurations. Is that the case here? These should be reported in this section.
— The authors claim that their model is robust to linguistic variations and diverse vocabulary, by which I am guessing they are referring to experiments on CLEVR-Humans dataset. What is there in the architecture of the proposed model which provides this ability? If it is the Glove vectors, it should be clearly mentioned since any other model using Glove vectors should have this ability.
— On page 6, second paragraph, the authors mention that there are cases which necessitate the model to ignore current memories. Can the authors show some qualitative examples for such cases?
— In the intro, the authors claim that their proposed cell encourages transparency. But the design of their cell doesn’t seem to do so, nor is it justified in the paper.
Overall: The performance reported in the paper is impressive and outperforms previous state-of-the-art, but without proper statistical significance analysis of performance, ablation studies, analysis of various attention maps, memory gates, etc. and qualitative results, I am not sure if this work would be directly useful for the research community. |
ICLR | Title
Compositional Attention Networks for Machine Reasoning
Abstract
We present Compositional Attention Networks, a novel fully differentiable neural network architecture, designed to facilitate explicit and expressive reasoning. While many types of neural networks are effective at learning and generalizing from massive quantities of data, this model moves away from monolithic blackbox architectures towards a design that provides a strong prior for iterative reasoning, allowing it to support explainable and structured learning, as well as generalization from a modest amount of data. The model builds on the great success of existing recurrent cells such as LSTMs: It sequences a single recurrent Memory, Attention, and Control (MAC) cell, and by careful design imposes structural constraints on the operation of each cell and the interactions between them, incorporating explicit control and soft attention mechanisms into their interfaces. We demonstrate the model’s strength and robustness on the challenging CLEVR dataset for visual reasoning, achieving a new state-of-the-art 98.9% accuracy, halving the error rate of the previous best model. More importantly, we show that the new model is more computationally efficient and data-efficient, requiring an order of magnitude less time and/or data to achieve good results.
1 INTRODUCTION
This paper considers how best to design neural networks to perform the iterative reasoning necessary for complex problem solving. Putting facts and observations together to arrive at conclusions is a central necessary ability as we work to move neural networks beyond their current great success with sensory perception tasks (LeCun et al., 1998; Krizhevsky et al., 2012) towards displaying Artificial General Intelligence.
Concretely, we develop a novel model that we apply to the CLEVR dataset (Johnson et al., 2016) for visual question answering (VQA). VQA (Antol et al., 2015; Gupta, 2017) is a challenging multimodal task that requires responding to natural language questions about images. However, Agrawal et al. (2016) show how the first generation of successful models on VQA tasks tend to acquire only superficial comprehension of both the image and the question, exploiting dataset biases rather than capturing a sound perception and reasoning process that would lead to the correct answer (Sturm, 2014). CLEVR was created to address this problem. As illustrated in figure 1, instances in the dataset consist of rendered images featuring 3D objects of several shapes, colors, materials and sizes, coupled with unbiased, compositional questions that require an array of challenging reasoning skills such as following transitive relations, counting objects and comparing their properties, without allowing any shortcuts around such reasoning. Notably, each instance in CLEVR is also accompanied by a tree-structured functional program that was both used to construct the question and reflects its reasoning procedure – a series of predefined operations – that can be composed together to answer it.
Most neural networks are essentially very large correlation engines that will hone in on any statistical, potentially spurious pattern that allows them to model the observed data more accurately. In contrast, we seek to create a model structure that requires combining sound inference steps to solve a problem instance. At the other extreme, some approaches adopt symbolic structures that resemble the expression trees of programming languages to perform reasoning (Andreas et al., 2016b; Hu et al., 2017). In particular, some approaches to CLEVR use the supplied functional programs for supervised or semi-supervised training (Andreas et al., 2016a; Johnson et al., 2017). Not only do we wish to avoid using such supervision in our work, but we in general suspect that the rigidity of these structures and the use of an inventory of operation-specific neural modules undermines robustness and generalization, and at any rate requires more complex reinforcement learning methods.
To address these weaknesses, while still seeking to use a sound and transparent underlying reasoning process, we propose Compositional Attention Networks, a novel, fully differentiable, non-modular architecture for reasoning tasks. Our model is a straightforward recurrent neural network with attention; the novelty lies in the use of a new Memory, Attention and Composition (MAC) cell. The constrained and deliberate design of the MAC cell was developed as a kind of strong structural prior that encourages the network to solve problems by stringing together a sequence of transparent reasoning steps. MAC cells are versatile but constrained neural units. They explicitly separate out memory from control, both represented recurrently. The unit contains three sub-units: The control unit updates the control representation based on outside instructions (for VQA, the question), learning to successively attend to different parts of the instructions; the read unit gets information out of a knowledge base (for VQA, the image) based on the control signal and the previous memory; the write unit updates the memory based on soft self-attention to previous memories, controlled by the retrieved information and the control signal. A universal MAC unit with a single set of parameters is used throughout the reasoning process, but its behavior can vary widely based on the context in which it is applied – the input to the control unit and the contents of the knowledge base. With attention, our MAC network has the capacity to represent arbitrarily complex acyclic reasoning graphs in a soft manner, while having physically sequential structure. The result is a continuous counterpart to module networks that can be trained end-to-end simply by backpropagation.
We test the behavior of our new network on CLEVR and its associated datasets. On the primary CLEVR reasoning task, we achieve an accuracy of 98.9%, halving the error rate compared to the previous state-of-the-art FiLM model (Perez et al., 2017). In particular, we show that our architecture yields better performance on questions involving counting and aggregation. In supplementary studies, we show that the MAC network learns more quickly (both in terms of number of training epochs and training time) and more effectively from limited amounts of training data. Moreover, it also achieves a new state-of-the-art performance of 82.5% on the more varied and difficult human-authored questions of the CLEVR-Humans dataset. The careful design of our cell encourages compositionality, versatility and transparency. We achieve these properties by defining attention-based interfaces that constrict the cell’s input and output spaces, and so constrain the interactions both between and inside cells in order to guide them towards simple reasoning behaviors. Although each cell’s functionality has only a limited range of possible continuous reasoning behaviors, when chained together in a MAC network, the whole system becomes expressive and powerful. In the future, we believe that the architecture will also prove beneficial for other multi-step reasoning and inference tasks, for instance in machine comprehension and textual question answering.
2 RELATED WORK
There have been several prominent models that address the CLEVR task. By and large they can be partitioned into two groups: module networks, which in practice have all used the strong supervision provided in the form of tree-structured functional programs that accompany each data instance, and large, relatively unstructured end-to-end differentiable networks that complement a fairly standard stack of CNNs with components that aid in performing reasoning tasks. In contrast to modular approaches (Andreas et al., 2016a;b; Hu et al., 2017; Johnson et al., 2017), our model does not require additional supervision and makes use of a single computational cell chained in sequence (like an LSTM) rather than a collection of custom modules deployed in a rigid tree structure. In contrast to augmented CNN approaches (Santoro et al., 2017; Perez et al., 2017), we suggest that our approach provides an ability for relational reasoning with better generalization capacity and higher
computational efficiency. These approaches and other related work are discussed and contrasted in more detail in the supplementary material in section C.
3 COMPOSITIONAL ATTENTION NETWORKS
Compositional Attention Networks is an end-to-end architecture for question-answering tasks that sequentially performs an explicit reasoning process by stringing together small building blocks, called MAC cells, each responsible for performing one reasoning step.
We now provide an overview of the model, and a detailed discussion of the MAC cell. The model is composed of three components: an Input unit, the core MAC network, and an output unit. A TensorFlow implementation of the network, along with pretrained models will be made publicly available.
In this paper we explore the model in the context of VQA. However, it should be noted that while the input and output units are naturally domain-specific and should be designed to fit the task at hand, the MAC network has been designed to be generic and more broadly applicable, and may prove useful in contexts beyond those explored in the paper, such as machine comprehension or question answering over knowledge bases, which in our belief is a promising avenue for future work.
3.1 THE INPUT UNIT
The input unit processes the raw inputs given to the system into distributed vector representations. It receives a text question (or in general, a query), and an image (or in general, a Knowledge Base (KB)) and processes each of them with a matching sub-unit, for the query and the KB, here a biLSTM and a CNN. More details can be found in the supplementary material, section A.
At the end of this stage, we get from the query sub-unit a series of biLSTM output states, which we refer to as contextual words, [cw1, ..., cwS ], where S is the length of the question. In addition, we get $q = [\overleftarrow{cw_1}, \overrightarrow{cw_S}]$, the concatenation of the hidden states from the backward and forward LSTM passes. We refer to q as the question representation. Furthermore, we get from the Knowledge-Base sub-unit a static representation of the knowledge base. For the case of VQA, it will be represented by a continuous matrix KBV of dimension H,W, d, where H = W = 14 are the height and width of the transformed image, corresponding to each of its regions.
3.2 THE MAC CELL
The MAC network, which is the heart of our model, chains a sequence of small building blocks, called MAC cells, each responsible for performing one reasoning step. The model is provided access to a Knowledge Base (KB), which is, for the specific case of VQA, the given image. Then, upon receiving a query, i.e. a question, the model iteratively focuses, in p steps, on the query’s various parts, each of which reflects in turn the current reasoning step, which we term the control. Consequently, guided by this control, it retrieves the relevant information from the KB, which is then passed to the next cell in a recurrent fashion.
Drawing inspiration from the Model-View-Controller paradigm used in software design and from the commonly exercised separation between control and data paths in computer architecture, the MAC cell is composed of three units: control unit, read unit and write unit. Each has a clearly defined role and an interface through which it interacts with the other units. See figure 2.
The careful design and imposed interfaces that constrain the interaction between the units inside the MAC cell, as described below, serve as a structural prior that limits the space of hypotheses it can learn, thereby guiding it towards acquiring the intended reasoning behaviors. As such, this prior facilitates the learning process and mitigates overfitting issues.
In particular, and similar in spirit to Perez et al. (2017), we allow the question to interact with the Knowledge Base – the image, for the case of VQA – only through indirect means: by guiding the cell to attend to different elements in the KB, as well as by controlling its operation through gating mechanisms. Thus, in both cases, the interaction between these mediums, visual and textual, or knowledge and query, is mediated through probability distributions, either in the form of attention maps, or as gates, further detailed below. This stands in stark contrast to many common approaches that fuse the
question and image together into the same vector space through linear combinations, multiplication, or concatenation. Rather, our controlled interaction distills the influence that the query should have in processing the Knowledge Base, casting it onto discrete probability distributions instead.
The MAC cell has been designed to replace the discrete and predefined “modules” used in the modular approach (Andreas et al., 2016a;b; Hu et al., 2017; Johnson et al., 2017). Rather, we create one universal and versatile cell that is applied across all the reasoning steps, sharing both its architecture as well as its parameters, across all of its instantiations. In contrast to the discrete modules, each trained to specialize to some specific elementary reasoning task, the MAC cell is capable of demonstrating a continuous range of possible reasoning behaviors conditioned on the context in which it is applied – namely, the inputs it receives from the prior cell.
Each cell MACi maintains two dual states: control ci and memory mi, both continuous vectors of dimension d. The control ci represents the reasoning operation the MAC cell should accomplish in the current step – focusing only on some aspect of the whole question. This is represented by a weighted-average attention-based sum of the question words. The memory mi represents the current context information deemed relevant to respond to the query, or answer the question. This is represented practically by a weighted average over elements from the KB, or for the case of VQA, regions in the image. m0 and c0 are each initialized to a random vector parameter of dimension d. The memory and control states are passed from one cell to the next in a recurrent fashion, and used in a way reminiscent of Key-Value memory networks (Miller et al., 2016), as discussed below.
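The overall recurrence can be summarized by the following sketch (illustrative PyTorch-style code with hypothetical class names; the three sub-units are detailed in the following subsections):

```python
import torch
import torch.nn as nn

class MACNetwork(nn.Module):
    # Chains p MAC cells; control and memory states are passed recurrently.
    # ControlUnit, ReadUnit and WriteUnit are sketched in the next subsections.
    def __init__(self, d, p, control_unit, read_unit, write_unit):
        super().__init__()
        self.p = p
        self.control, self.read, self.write = control_unit, read_unit, write_unit
        # c_0 and m_0 are random vector parameters of dimension d.
        self.c0 = nn.Parameter(torch.randn(d))
        self.m0 = nn.Parameter(torch.randn(d))

    def forward(self, cw, q, kb):
        batch = q.size(0)
        c = self.c0.expand(batch, -1)
        m = self.m0.expand(batch, -1)
        for i in range(self.p):
            c, _ = self.control(cw, q, c, step=i)   # current reasoning operation
            m_new, _ = self.read(kb, m, c)          # retrieve relevant KB content
            m = self.write(m_new, m, c)             # update the memory state
        return m
```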
3.2.1 THE CONTROL UNIT
The control unit determines the reasoning operation that should be applied at this step. It receives the contextual words [cw1, ..., cwS ], the question representation q, and the control state from the previous MAC cell ci−1, all of which are vectors of dimension d.
We would like to allow our MAC cell to perform a continuously varied and adaptive range of behaviors, as demanded by the question. Therefore, we define the behavior of each cell to be a function of the contextual words [cw1, ..., cwS ], weighted-averaged according to the attention distribution that the control unit produces at each step. This allows the cell to adapt its behavior – the reasoning operation it performs – to the question it receives, instead of having a fixed set of predefined behaviors as is the case in competing approaches (Andreas et al., 2016a;b; Johnson et al., 2017).
The formal specification of the control unit is shown in figure 3. The question q is linearly transformed into a vector qi of the same dimension, which in turn is concatenated with the previous control state ci−1 and linearly transformed again to a d-dimensional vector cqi.
$q_i = W_i^{d,d} \cdot q + b_i^d$ (1)
$cq_i = W^{2d,d}\,[q_i, c_{i-1}] + b^d$ (2)
Note that in contrast to all other parameters of the cell, which are shared across its instantiations at the different steps i = 1, ..., p, the parameters $W_i^{d,d}$ and $b_i^d$ are different for each iteration. This
is done to allow each cell to attend more readily to different aspects (i.e. parts) of the questions, depending on the index of the current step – its relative stage in the context of the whole reasoning process.
cqi represents the current reasoning operation we would like to perform in a continuous way, taking into account both the overall meaning of the question qi, as well as the words the model attended to in the previous step, ci−1.
However, we would like to prevent the cell from diverging in the reasoning operations it tries to perform, and instead anchor it back in the question words, by using them to represent the reasoning operation of the current step. We can achieve that by computing an attention distribution cvi over the contextual words [cw1, ..., cwS ] based on their similarity to cqi. Then, summing the contextual words according to the attention distribution cvi will allow us to have a new control state, ci, which is represented again in terms of words from the question. Intuitively, it is the gist of the question that is relevant to the reasoning operation we would like to perform in the current step.
$cv_{i,s} = \mathrm{softmax}\!\left(W^{d,1}(cq_i \circ cw_s) + b^1\right)$ (3a)
$c_i = \sum_{s=1}^{S} cv_{i,s} \cdot cw_s$ (3b)
Finally, the control unit returns the current control state ci, along with an attention map cvi over the contextual words.
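A minimal sketch of the control unit, following equations (1)-(3) (illustrative code; names are ours):

```python
import torch
import torch.nn as nn

class ControlUnit(nn.Module):
    # A sketch of equations (1)-(3).
    def __init__(self, d, p):
        super().__init__()
        # Equation (1): the projection of q is NOT shared across steps i = 1..p.
        self.q_proj = nn.ModuleList([nn.Linear(d, d) for _ in range(p)])
        self.cq_proj = nn.Linear(2 * d, d)   # equation (2)
        self.attn = nn.Linear(d, 1)          # equation (3a)

    def forward(self, cw, q, c_prev, step):
        # cw: (batch, S, d) contextual words; q, c_prev: (batch, d)
        qi = self.q_proj[step](q)                                # (1)
        cqi = self.cq_proj(torch.cat([qi, c_prev], dim=-1))      # (2)
        logits = self.attn(cqi.unsqueeze(1) * cw).squeeze(-1)    # (3a)
        cv = torch.softmax(logits, dim=-1)                       # (batch, S)
        c = (cv.unsqueeze(-1) * cw).sum(dim=1)                   # (3b)
        return c, cv
```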
3.2.2 THE READ UNIT
The Read Unit is provided with access to the knowledge base KBV , along with the previous memory state mi−1 and the current control ci. It is responsible for retrieving relevant content from the Knowledge Base KBV for the reasoning task that the MAC cell should accomplish at this step, which is represented by the current control state ci, as explained above. Figure 4 shows a diagram.
The relevance of the new information is judged in two stages by the “relatedness” of each element in the KB (or for the case of VQA, each region in the image) to either the memory mi−1 that has accumulated relevant information from previous iterations, or to the current control ci, pointing towards the next piece of information that should be taken into account. Here, relatedness is measured by trained linear transformations comparing each element to the previous memory and the current control.
More formally, at the first stage, the interaction between each element $KB_{h,w}$, where h = 1, ..., H and w = 1, ..., W, and the previous memory $m_{i-1}$ is computed by:
$m'_{i-1} = W^{d,d} \cdot m_{i-1} + b^d$ (4)
$KB'_{h,w} = W^{d,d} \cdot KB_{h,w} + b^d$ (5a)
$(I_{m\text{-}KB})_{h,w} = m'_{i-1} \circ KB'_{h,w}$ (5b)
These memory-KB interactions measure the relatedness of each element in the KB to the memory accumulated so far, which holds information that has been deemed relevant to handle previous reasoning steps towards addressing the question. They allow the model to perform transitive inference, retrieving a new piece of information that now seems important in light of the recent memory retrieved in a prior iteration.
However, there are cases which necessitate the model to temporarily ignore current memories, when choosing the new information to retrieve. Logical OR is a classical example: when the model has to look at two different objects at the same time, and assuming it stored one of them at the first iteration, it should briefly ignore it, considering new information that is relevant to the question but is unrelated to the memory. In order to achieve such capability, the read unit concatenates the original KB elements to each corresponding memory-KB interaction, which are then projected back to d-dimensional space (equation 6a):
$(I'_{m\text{-}KB})_{h,w} = W^{2d,d}\,[(I_{m\text{-}KB})_{h,w}, KB_{h,w}] + b^d$ (6a)
$(I^{c}_{m\text{-}KB})_{h,w} = c_i \circ (I'_{m\text{-}KB})_{h,w}$ (6b)
At the second stage, the read unit compares the current ci with these memory-KB interactions, in order to focus on the information that is relevant to the current reasoning operation that the MAC cell seeks to accomplish. The result is then passed to a softmax layer yielding an attention map mvi over the KB, which is used in turn to retrieve the relevant information to perform the current reasoning step.
$mv_i = \mathrm{softmax}\!\left(W^{d,d} \cdot I^{c}_{m\text{-}KB} + b^d\right)$ (7a)
$m_{new} = \sum_{h,w=1,1}^{H,W} (mv_i)_{h,w} \cdot KB_{h,w}$ (7b)
Finally, the read unit returns the newly retrieved information mnew, along with an attention map mvi over the Knowledge Base.
To give an example of the read unit operation, assume a given question q such as “What object is located left to the blue ball?”, whose associated answer is “cube”. Initially, no cue is provided to the model to attend to that cube, since no direct information about it presents in the question. Instead, based on its comprehension of the question, the model may start by focusing on the blue ball at the first iteration, such that the memory state m1 will capture the blue ball. However, in the second iteration, the control unit, after re-examining the question, may realize it should now look left, storing the word “left” in c2. Then, when considering both m1 and c2, the read unit will realize it should perform a reasoning operation corresponding to the word “left” (stored in c2) given a memory representing the blue ball in m1, thereby allowing it to look left to the blue ball and find the cube.
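A minimal sketch of the read unit, following equations (4)-(7) (illustrative code; we flatten the KB to (batch, H·W, d) and, as an assumption, reduce the attention logits in (7a) with a d-to-1 map so that the softmax is taken over positions):

```python
import torch
import torch.nn as nn

class ReadUnit(nn.Module):
    # A sketch of equations (4)-(7); the KB is flattened to (batch, H * W, d).
    def __init__(self, d):
        super().__init__()
        self.m_proj = nn.Linear(d, d)           # equation (4)
        self.kb_proj = nn.Linear(d, d)          # equation (5a)
        self.concat_proj = nn.Linear(2 * d, d)  # equation (6a)
        self.attn = nn.Linear(d, 1)             # scores fed to the softmax in (7a)

    def forward(self, kb, m_prev, c):
        m = self.m_proj(m_prev).unsqueeze(1)              # (batch, 1, d)
        kb_p = self.kb_proj(kb)                           # (batch, HW, d)
        I = m * kb_p                                      # (5b): memory-KB interactions
        I = self.concat_proj(torch.cat([I, kb], dim=-1))  # (6a): re-attach raw KB
        I = c.unsqueeze(1) * I                            # (6b): condition on control
        mv = torch.softmax(self.attn(I).squeeze(-1), dim=-1)   # (7a)
        m_new = (mv.unsqueeze(-1) * kb).sum(dim=1)        # (7b)
        return m_new, mv
```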
3.2.3 THE WRITE UNIT
The Write Unit is responsible for creating the new memory state mi that will reflect all the information considered to be important to answer the question so far, i.e. up to the current iteration in the
reasoning process. It receives the last memory state mi−1 from the previous MAC cell, along with the newly retrieved information from the read unit in the current iteration, mnew. See figure 5 for a diagram.
In the main design we have explored, merging the new information with the previous memory state is done simply by a linear transformation.
$m'_i = W^{2d,d}\,[m_{new}, m_{i-1}] + b^d$ (8)
In addition, we have explored two variations of this design. The first, self-attention, allows considering any previous memories rather than just the last one mi−1, thus providing the network with the capacity to model non-sequential reasoning processes. The second variation is adding gating mechanisms to the writing unit. These may allow the model to dynamically adjust the practical length of the computation to the question complexity and stabilize the memory content throughout the sequential network (similarly to GRUs and LSTMs).
Self-Attention. The current architecture that we have presented allows the model to perform reasoning steps in a sequence, passing control and memory states from one cell to the following one. However, we would like to grant the system more flexibility. In particular, we would like to allow it to capture more complicated reasoning processes such as trees and graphs – Directed Acyclic Graphs (DAGs) in particular – where several branches of reasoning sub-processes are merged together in later stages. Indeed, the CLEVR dataset includes cases where the questions embody a tree-like reasoning process rather than just a sequence, which we would like to address correctly in our model.
We achieve that by adding self-attention connections between each MAC cell and all the prior cells. Since each cell can look on all the prior reasoning steps and their corresponding memories retrieved from the Knowledge Base, it can virtually capture any directed acyclic graph, while still having physically sequential layout.
More formally, the current MAC cell, of the i-th iteration, is granted access to c1, ..., ci−1 along with the corresponding m1, ..., mi−1 that have been computed by the prior MAC cells. It begins by computing the similarity between ci and c1, ..., ci−1, and uses it to derive an attention map over the prior MAC cells, sai,j for j = 1, ..., i − 1. This represents the relevance of the j-th prior reasoning step to the current one, i (equation 9a).
Then, we average the previous memories according to this resulting attention map saij. We obtain msa, representing the information from all the other reasoning steps that is relevant to the current one (equation 9b).
This resembles the approach of Key-Value networks (Miller et al., 2016). The similarity between control states, corresponding to the reasoning operations that are performed in each prior step, allows the model to select which memories should be taken into account, when creating the new memory – namely, which branches of the reasoning process should be merged together at this point.
$sa_{i,j} = \mathrm{softmax}\!\left(W^{d,1}(c_i \circ c_j) + b^1\right)$ (9a)
$(m_{sa})_i = \sum_{j=1}^{i-1} sa_{i,j} \cdot m_j$ (9b)
Finally, we use msa along with m′i to compute m′′i, the new memory content in this variation.
$m''_i = W^{2d,d}\,[(m_{sa})_i, m'_i] + b^d$ (10)
Memory Gate. The currently presented MAC network has some fixed number p of concatenated MAC cells, representing the length of the overall reasoning process we perform. However, not all questions require a reasoning sequence of the same length. Some questions are simpler, while others are more complex.
Motivated by this observation, we add a gate over the new memory computed at each step, that may selectively keep content of the previous memory mi−1 unchanged. Practically, the gate functions in a similar way to a highway network (Srivastava et al., 2015), where the gate value is conditioned on the current reasoning operation, ci.
$c'_i = W^{d,d} \cdot c_i + b^d$ (11a)
$m_i = \mathrm{sigmoid}(c'_i) \cdot m_{i-1} + (1 - \mathrm{sigmoid}(c'_i)) \cdot m''_i$ (11b)
The write unit returns the new memory state mi, that will be passed along with ci to the next MAC cell.
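A minimal sketch of the write unit with the memory gate, following equations (8) and (11a)-(11b) (illustrative code; the self-attention variant of equations (9)-(10) is omitted):

```python
import torch
import torch.nn as nn

class WriteUnit(nn.Module):
    # A sketch of equation (8) plus the memory gate of equations (11a)-(11b).
    def __init__(self, d, gate_bias=1.0):
        super().__init__()
        self.merge = nn.Linear(2 * d, d)   # equation (8)
        self.gate = nn.Linear(d, d)        # equation (11a)
        # The gate bias initialization studied in the ablations (-1, 0 or 1).
        nn.init.constant_(self.gate.bias, gate_bias)

    def forward(self, m_new, m_prev, c):
        m_cand = self.merge(torch.cat([m_new, m_prev], dim=-1))   # (8)
        g = torch.sigmoid(self.gate(c))                           # (11a)
        return g * m_prev + (1 - g) * m_cand                      # (11b)
```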
3.2.4 DISCUSSION
Overall, when designing the MAC cell, we have attempted to formulate the inner workings of an elementary, yet generic reasoning skill: the model decomposes the problem into steps, focusing on one at a time. At each such step, it takes into account:
• The control ci: Some aspect of the task – pointing to the future work that is left to be done.
• The previous memory or memories: The partial solution or evidence the cell has acquired so far – pointing to the past work that has already been achieved.
• The newly retrieved information mnew: that is retrieved from the knowledge base KB and may or may not be transitively related to that partial solution or evidence - the present, or current work.
Considering these three sources of information together, the cell finally adds the new information up into its working memory, mi, progressing one more step towards the final answer.
3.3 THE OUTPUT UNIT
The output unit receives the question representation q, along with the memory state passed from the last MAC cell mp, where p is the number of MAC cells in the network – representing the number of reasoning steps in the whole process. It inspects both and predicts an answer based on their concatenation. Intuitively, we would like our model to consider both the question as well as the relevant information that has been progressively retrieved from the KB, deemed the necessary information to answer it.
Note that considering both q and mp is critical to answer the question. While mp represents the information collected from KB, we still need to recall what has been asked about it to be able to answer accordingly. This is especially true in our case, when all other interactions between the question and the KB are mediated through attention distributions, rather than being transformed into a shared continuous vector space.
The prediction is built out of a standard 2-layer fully-connected softmax-based classifier with hidden dimension d and an output dimension that matches the number of possible answers in the dataset. The classifier receives [mp, q] as input and returns a probability distribution over the answers.
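A minimal sketch of the output unit (illustrative code; the use of ELU inside the classifier is our assumption, based on the implementation details in appendix B):

```python
import torch
import torch.nn as nn

class OutputUnit(nn.Module):
    # A 2-layer classifier over the concatenation of the final memory m_p and
    # the question representation q, as described above.
    def __init__(self, d, num_answers):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(2 * d, d),
            nn.ELU(),
            nn.Linear(d, num_answers),
        )

    def forward(self, m_p, q):
        logits = self.classifier(torch.cat([m_p, q], dim=-1))
        return torch.softmax(logits, dim=-1)   # distribution over answers
```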
4 EXPERIMENTS
We evaluate our model on the recent CLEVR dataset (Johnson et al., 2016). CLEVR is a synthetic dataset consisting of 700K tuples; each consists of a 3D-rendered image featuring objects of various shapes, colors, materials and sizes, coupled with compositional multi-step questions that measure performance on an array of challenging reasoning skills such as following transitive relations, counting objects and comparing their properties. In addition, each question is associated with a formal program, specifying the reasoning operations that should be performed to compute the answer, among 28 possibilities.
We first perform experiments on the original 700k CLEVR dataset (Johnson et al., 2016), comparing to prior work. As shown in table 1, our model matches or outperforms all existing models both in overall accuracy, as well as in each category, testing different reasoning skills. In particular, for the overall performance, we achieve 98.94% accuracy, more than halving the error rate of the prior best model, FiLM (Perez et al., 2017).
Counting and Numerical Comparison. Remarkably, our performance on questions testing counting and numerical comparisons is significantly higher than the competing models, which consistently struggle on this question type. Again, we nearly halve the corresponding error rate. These results demonstrate the aptitude of attention mechanisms to perform counting, reduction and aggregation, in contrast to alternative, CNN-based approaches.
Training Length and Computational Efficiency. We examine the learning curves of our and competing models. We have trained all models on the same architecture and used the authors' code for the other models. Aiming to have equal settings for comparison, we ran all models, including ours, with learned random word vectors. In order to make sure the results are statistically significant, we ran each model multiple (10) times, and plotted the averages and confidence intervals (figure 4). The results show that our model learns significantly faster than the other leading methods, FiLM (Perez et al., 2017) and PG+EE (Johnson et al., 2017). While we do not have learning curves for the Relational Network model, Santoro et al. (2017) report approximately 1.4 million iterations to achieve 95.5% accuracy, which is equivalent to approximately 125 epochs, whereas our model achieves a comparable accuracy after only 3 epochs, yielding a 40x reduction in the length of the training process.
Naturally, the smaller number of required training steps also translates to a comparably shorter training time. Perez et al. (2017) report a training time of 4 days, equivalent to 80 epochs, to reach an accuracy of 97.7%. In contrast, we achieve a higher accuracy in 6 epochs, taking 9.5 hours overall, leading to a 10x reduction in training time.
4.1 DATA EFFICIENCY
We have explored the performance of our and other leading approaches on smaller subsets of the CLEVR dataset, in order to study the ability of the models to generalize from a smaller amount of data. We sampled random subsets of CLEVR with 10%, 25% and 50% of its original 700k size, and used them to train our model and three other models proposed for the CLEVR task: FiLM (Perez et al., 2017), the strongly-supervised PG+EE (Johnson et al., 2017), and stacked attention networks (Johnson et al., 2017; Yang et al., 2016).
As shown in figure 4, our model outperforms the other models by a wide margin for all subsets of the CLEVR dataset. For 50% of the data, equivalent to 350k samples, other models obtain accuracies ranging between 70% and 92%, while our model achieves 97.9%. The gap becomes larger as the dataset size reduces: for 25% of the data, equivalent to 175k samples, performance of other models is between 50% and 77%, while our model maintains a high 95.4% accuracy.
Finally, for 10% of the data – 70k samples, still a sizeable amount – our model is the only one that manages to generalize, with a performance of 84.7% on average, whereas the other three models fail, achieving 47.6%–57.5%. Note that, as pointed out by Johnson et al. (2016), a simple baseline that predicts the most frequent answer for each of the question types already achieves 42.1%, suggesting that answering half of the questions correctly means that the competing models barely learn to generalize from the smaller dataset. These results demonstrate the robustness of our architecture and its key role as a structural prior guiding our network to learn the intended reasoning skills.
4.2 CLEVR HUMANS - NATURAL LANGUAGE QUESTIONS
We analyze our model performance on the CLEVR-Humans dataset (Johnson et al., 2017), consisting of natural language questions collected through crowdsourcing. As such, the dataset has diverse vocabulary and linguistic variations, and it also demands more varied reasoning skills.
Since the training set is relatively small, consisting of 18k samples, we use it to finetune a model pretrained on the standard CLEVR dataset. However, since most of the vocabulary in CLEVR-Humans is not covered by CLEVR, we do not train the word vectors during the pre-training stage, to prevent drift in their meaning compared to other uncovered words in CLEVR-Humans that may be semantically related.
As shown in table 2, our model achieves state-of-the-art performance on CLEVR-Humans both before and after fine-tuning. It surpasses the next-best FiLM model (Perez et al., 2017) by 6.6 percent, achieving 82.5%.
The results substantiate the model’s robustness against linguistic variations and noise, as well as its ability to adapt to diverse vocabulary and varied reasoning skills. Arguably, the soft attention performed over the question words allows the model to focus on the words that are most critical to answer the question and translate them to corresponding reasoning operations, giving less attention to irrelevant linguistic variations.
4.3 ABLATIONS
Based on the validation set, we have conducted an ablation study on our model to better understand the contribution of each of its components to the overall performance. We tested each setting on the standard 700K CLEVR dataset as well as on a 10% subset of the dataset. See table 3 for the numerical results. In addition, figure 4.3 presents the training curves for the different settings trained on the standard dataset. Overall, the results demonstrate the robustness of the model to hyperparameter variations such as network dimension and length, and also the impact of different aspects and components of MAC on its performance.
Network Length. We have tested the model performance as a function of the network’s length – the number of MAC cells that were sequenced together. The results show a positive correlation between the network length and its performance. We can see that for 1 cell the scores are relatively low – 75% – but adding at least one more cell leads to a significant increase in performance, above 95%. The performance keeps improving up to lengths 8–16, which achieve 98.9–99.1%. The results also teach us about the complexity of the dataset, by showing the relatively significant benefit of having at least 4 cells, each modeling a reasoning step.
Network Dimension. We have varied the state dimension to check the robustness of the model to hyperparameters. The results on the standard CLEVR dataset show the model is able to maintain high performance with a dimension of 128, albeit after a longer training process, achieving 97.6%, compared to 98.94% achieved with a dimension of 512. However, for 10% of CLEVR, the larger 512 dimension allows an accuracy increase of 7.5% over a dimension of 128.
Weight Sharing. We have tested the impact that sharing weights between cells has on the model performance, for a network of length p = 12. The results show that for the standard dataset there is only a small difference of 1% between these settings. However, for less data, we see a much more significant drop of 16.9% in the unshared-parameters setting compared to the shared one. Indeed, we observe that a model with fewer parameters is more data-efficient and has a lower tendency to overfit the data.
Control Unit. We have performed several ablations in the control unit to understand its contribution to the overall model performance. Based on the results, first, we can see that the question information is crucial for the model to handle the questions, as can be noted by the low performance of the model when no control signal is used whatsoever. Second, we have tested the model performance when using the continuous control state computed by equation (2) in section 3.2.1, without word-attention, in order to understand its relative contribution. Based on the results, we can indeed see that using word-attention is useful for accelerating the training process and achieving higher accuracies, both for the standard dataset as well as for the small subset, where using word-attention increases results by 21.4%. We also see that using the “contextual words” produced by the question-unit LSTM is useful in accelerating training, when compared to using the word vectors directly.
Reading Unit. We have conducted several ablations for the reading unit to better understand its behavior and contribution to the performance of the model. The standard MAC reading unit uses the control state – which averages the question words based on attention distributions computed for each reasoning step. In this ablation experiment, we have tested using the full question representation q instead, across all reasoning steps, to gain a better understanding of the contribution of word-attention to the model performance. Indeed, we can see that using q rather than the control state ci results in a significant drop in performance – 19.4% for the full CLEVR dataset and 19.5% for 10% of the data.
We have conducted an additional ablation experiment to better understand the contribution of using the KB features directly in the first-stage information retrieval process described in section 3.2.2, compared to using only the dot-products of the KB elements with the previous memory state mi−1. For the full CLEVR dataset, we can see that this component has only a small impact on the final performance – ultimately resulting in a 0.06% performance difference. However, for 10% of the data, we can see that the difference in performance when ablating this component is much larger – 11.2%.
Writing Unit Ablations. In our main MAC model variant, the memory unit merges the new information mnew with the previous memory state mi−1 by combining them through a linear transformation. In this experiment, we have explored other variations, such as assigning mnew to mi directly – ignoring previous memories – or doing a linear transformation based on mnew only. The results show that such a variant is in fact only slightly worse than our main variant – by 0.4%. We also conducted an experiment in which we merge the new information with the previous memory just by having a gate that takes a weighted average of them. The results show that this variant performs equivalently to our standard linear-transformation variant.
Writing Unit Additions. We have explored the impact of the writing unit variants described in section 3.2.3 – adding self-attention, gating mechanisms, or both – compared to our standard main model that uses a linear transformation to merge the newly retrieved information mnew with the previous memory mi. For the complete CLEVR dataset we can see that both of these variants are indeed very helpful in increasing the model performance. Compared to our standard MAC model that achieves 98.94% on the validation set, self-attention yields an accuracy of 99.23%, gating yields 99.36%, and adding both achieves 99.48%.
Output Unit. In our standard model, the final predictions made in the output unit are based on the final memory state mp as well as the question representation q (the concatenation of the final hidden states of the backward and forward LSTM passes). We have explored the contribution of basing the model prediction on the latter, by testing the model performance when the prediction is based on memory alone, for the complete and 10% datasets. We can see that in both settings basing the model’s predictions on the question representation allows faster training and higher accuracies. Notable is the gap in performance for 10% CLEVR: a 19.8% increase by using the question representation to make predictions. These results are intuitively very reasonable, since the model is structured such that the memory holds only information that was retrieved from the image. Thus, questions that ask, for instance, about different aspects (such as color or shape) of the same object in the image may result in the same memory content, which thus does not directly contain enough information to answer such questions.
Position. In our standard model, similarly to the practice of competing models (Santoro et al., 2017; Perez et al., 2017; Hu et al., 2017), we have concatenated positional information to each region of the image, in order to increase the model's capability to perform spatial reasoning. We have explored both simple linear maps over the range [−1, 1] as well as the more complex positional encoding suggested by Vaswani et al. (2017). However, the results for both the standard dataset and the 10% version show a negligible improvement at best when adding positional encoding information, demonstrating the capability of MAC to perform spatial reasoning without such positional augmentation.
Gate Bias Initialization. For our model variant with a gating mechanism (described in section 3.2.3) we have tested the effect of setting different values for the gate bias: −1, 0 and 1. For −1 the model is initialized to be biased towards keeping the previous memory value, whereas for 1 it is biased towards using the new memory instead. We can see that for the complete dataset setting the bias to 1 is optimal, apparently since the model has enough data to learn to apply each cell effectively. In contrast, for the small 10% CLEVR data, setting the bias to 0 shows better performance, biasing the model towards using fewer cells overall, which ultimately results in a simpler model that can fit less data more effectively.
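To make the effect of the bias concrete, the following minimal sketch (PyTorch, used purely for illustration; all names and dimensions are our assumptions, not the authors' released code) shows a gate whose bias sets its initial value. With the sign convention below, chosen to match the description above, a bias of −1 starts the gate near 0.27, so the cell initially favors keeping the previous memory, while a bias of 1 starts it near 0.73, favoring the new memory.

```python
import torch
import torch.nn as nn

class GatedWrite(nn.Module):
    # Minimal sketch (assumed names/dims): a sigmoid gate interpolates between
    # the previous memory and the new one; its bias initialization fixes the
    # initial mixing behavior tested in this ablation (-1, 0 or 1).
    def __init__(self, d=512, gate_bias=1.0):
        super().__init__()
        self.proj = nn.Linear(d, d)
        nn.init.constant_(self.proj.bias, gate_bias)

    def forward(self, c_i, m_prev, m_new):
        g = torch.sigmoid(self.proj(c_i))  # ~0.27 / 0.5 / ~0.73 at initialization
        # Sign convention matching the text above: negative bias favors keeping
        # the previous memory, positive bias favors the new memory.
        return (1.0 - g) * m_prev + g * m_new
```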
4.4 INTERPRETABILITY
We have looked into the attention maps over the image and question that the model produces during its computation and provide a few examples in figure 4.4. The first example shows us how the model parses the question in steps, first focusing on the main entity that the question is about, then on the relation of this entity to the “brown matte thing”, which is then located in the image. Finally, the model correctly focuses on the small brown cube and predicts the right answer – brown.
The second example shows a model with 4 cells instead of 6, which similarly parses the question in iterations and focuses on the relevant objects at each step, though we can see that the reasoning process looks somewhat different when the MAC network has fewer cells.
The last example shows how the model handles counting and OR operations. It starts by identifying the task of computing a number, and then attends to the red objects as well as the cylinder, one at a time, ultimately allowing it to respond correctly with the answer 2.
5 CONCLUSION
We have given a first demonstration of how a sequence of Memory, Attention and Control (MAC) cells combined into a Compositional Attention Network provides a very effective tool for neural reasoning. In future work, we wish to explore this promising architecture for other tasks and domains, including real-world VQA, machine comprehension and textual question answering.
A DETAILS OF INPUT UNIT
The input unit processes the raw inputs given to the system into distributed vector representations. It receives a text question (or in general, a query), and an image (or in general, a Knowledge Base (KB)) and processes each of them with a matching sub-unit. Here we provide details of the Query Unit and the Image Unit used in this work.
A.0.1 THE QUERY UNIT
We encode a query of S words into a continuous representation using a bidirectional LSTM (Hochreiter & Schmidhuber, 1997; Graves et al., 2013). Each word is associated with a word embedding ws, where s = 1, ..., S. In our case, we use GloVe word embeddings (Pennington et al., 2014). Then, these embeddings are processed by a bidirectional LSTM of dimension d that outputs:
• a matching sequence of d-dimensional output states, which we refer to as contextual words, $[cw_1, \ldots, cw_S]$
• a d-dimensional hidden state $q = [\overleftarrow{cw_1}, \overrightarrow{cw_S}]$, the concatenation of the hidden states from the backward and forward passes. We refer to q as the question representation.
Intuitively, each contextual word $cw_s$ represents the meaning of the s-th word in the context of the question, while the hidden state q represents the overall (compositional) meaning of the question.
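For concreteness, the following minimal PyTorch sketch shows one way to realize this sub-unit. It is illustrative only: the split of the dimension d between the two LSTM directions (d // 2 each, so that both the contextual words and q come out d-dimensional) and the concatenation order in q are our assumptions.

```python
import torch
import torch.nn as nn

class QueryUnit(nn.Module):
    # Sketch of the query unit: a biLSTM over word embeddings.
    def __init__(self, embed_dim=300, d=512):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, d // 2, bidirectional=True, batch_first=True)

    def forward(self, word_embeddings):            # (batch, S, embed_dim)
        cw, (h_n, _) = self.lstm(word_embeddings)
        # cw: contextual words cw_1..cw_S          # (batch, S, d)
        # q: final forward and backward hidden states, concatenated (batch, d);
        # the order of concatenation is immaterial for the downstream layers.
        q = torch.cat([h_n[0], h_n[1]], dim=-1)
        return cw, q
```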
A.0.2 THE IMAGE UNIT
Given an image, and following prior work on CLEVR (Santoro et al., 2017; Perez et al., 2017), we extract conv4 features from ResNet101 (He et al., 2016) pretrained on ImageNet (Krizhevsky et al., 2012), which we treat as a fixed initial representation of the image, x, of dimension H × W × C, where H = W = 14 are the height and width of the transformed image and C = 1024 is the number of channels. Each feature $x_{h,w}$ represents one region in the original image.
Similar to prior work (Hu et al., 2017; Santoro et al., 2017; Perez et al., 2017), we would like to allow our model to reason explicitly about spatial locations, as required by many of the questions in CLEVR, and therefore we concatenate to this representation a spatial map that represents each of the positions in the image. However, in contrast to prior work that uses a linear meshgrid feature map with 2 features h and w ranging from −1 to 1, and to allow better representation of the positions, we use the positional encoding scheme proposed by Vaswani et al. (2017):
$$p_{(h,2i)} = \sin\left(h / 10000^{2i/p_d}\right) \qquad p_{(h,2i+1)} = \cos\left(h / 10000^{2i/p_d}\right)$$
and similarly for w, where $p_d$ is a hyperparameter. Overall, the positional encoding of a feature at position $(h, w)$ is $[p_h, p_w]$, the concatenation of the positional encodings for h and w.
This positional encoding scheme allows a better correspondence between the distance of two positions $(x, y)$ and $(x', y')$ in the image and the vector similarity of their positional encodings, even when $p_d$ is larger than two.
We then concatenate the obtained spatial map with x, receiving a spatially-aware image representation xp. Then, we pass this representation through two CNN layers with d output channels and obtain a final representation of the image, which we refer to as our Visual Knowledge Base ($KB_V$), and which is used in further components of the model.
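The sketch below (PyTorch, illustrative only) puts the pieces of this unit together: the sinusoidal grid encoding, its concatenation with the ResNet features, and the two final convolutions. The value of pd and the use of kernel size 3 with padding (rather than the kernel size 2 mentioned in appendix B) are our assumptions, chosen to keep the sketch shape-preserving.

```python
import torch
import torch.nn as nn

def positional_encoding_2d(H, W, pd):
    # Sinusoidal scheme of Vaswani et al. (2017) applied per axis of the grid:
    # even channels get sin, odd channels get cos (pd is assumed even).
    def encode(n):
        pos = torch.arange(n, dtype=torch.float32).unsqueeze(1)      # (n, 1)
        i = torch.arange(pd // 2, dtype=torch.float32).unsqueeze(0)  # (1, pd/2)
        angles = pos / (10000.0 ** (2 * i / pd))
        return torch.stack([torch.sin(angles), torch.cos(angles)], dim=2).reshape(n, pd)
    ph = encode(H).view(H, 1, pd).expand(H, W, pd)
    pw = encode(W).view(1, W, pd).expand(H, W, pd)
    return torch.cat([ph, pw], dim=2)                                # (H, W, 2*pd)

# Usage sketch: concatenate the spatial map to the conv4 features, then apply
# two convolutions to obtain KB_V. pd = 32 is an assumed hyperparameter value.
H = W = 14; C = 1024; d = 512; pd = 32
x = torch.randn(1, C, H, W)                                          # ResNet conv4 features
pos = positional_encoding_2d(H, W, pd).permute(2, 0, 1).unsqueeze(0) # (1, 2*pd, H, W)
xp = torch.cat([x, pos.expand(x.size(0), -1, -1, -1)], dim=1)
convs = nn.Sequential(nn.Conv2d(C + 2 * pd, d, 3, padding=1), nn.ELU(),
                      nn.Conv2d(d, d, 3, padding=1), nn.ELU())
kb_v = convs(xp)                                                     # (1, d, H, W)
```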
B IMPLEMENTATION AND TRAINING DETAILS
For the question processing, we use GloVe (Pennington et al., 2014) word vectors with dimension 300. For the image processing, we extract conv4 features from ResNet101 (He et al., 2016) pretrained on ImageNet (Krizhevsky et al., 2012), with dimension H × W × C where H = W = 14 and C = 1024, followed by 2 CNN layers with kernel size 2. We use a MAC network with p = 12 cells, and train it using Adam (Kingma & Ba, 2014) with learning rate $10^{-4}$. We train our model for 10-20 epochs, with batch size 64, and use early stopping based on validation accuracies. During training, the moving averages of all weights of the model are maintained with an exponential decay rate of 0.999. At test time, the moving averages are used instead of the raw weights. We use dropout 0.85, and ELU (Clevert et al., 2015), which in our experience has shortened the training process compared to ReLU. The training process takes roughly 10-20 hours on a single Titan X GPU.
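The moving-average procedure mentioned above can be sketched as follows (illustrative PyTorch; the class and method names are our own, not the authors' implementation):

```python
import torch

class EMA:
    # Keep a shadow copy of all parameters, updated after every optimizer step
    # with exponential decay 0.999; evaluate with the shadow weights.
    def __init__(self, model, decay=0.999):
        self.decay = decay
        self.shadow = {n: p.detach().clone() for n, p in model.named_parameters()}

    @torch.no_grad()
    def update(self, model):
        for n, p in model.named_parameters():
            self.shadow[n].mul_(self.decay).add_(p.detach(), alpha=1 - self.decay)

    @torch.no_grad()
    def copy_to(self, model):
        # Call before evaluation to replace raw weights by their moving averages.
        for n, p in model.named_parameters():
            p.copy_(self.shadow[n])
```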
C FURTHER DISCUSSION OF RELATED WORK
In this section we provide detailed discussion of related work. Several models have been applied to the CLEVR task. These can be partitioned into two groups, module networks that use the strong supervision provided as a tree-structured functional program associated with each instance, and end-to-end, fully differentiable networks that combine a fairly standard stack of CNNs with components that aid them in performing reasoning tasks. We also discuss the relation of MAC to other approaches, such as memory networks and neural computers.
C.1 MODULE NETWORKS
The modular approach (Andreas et al., 2016a;b; Hu et al., 2017; Johnson et al., 2017) first translates the given question into a tree-structured action plan, aiming to imitate the ground-truth programs provided as a form of strong supervision. Then, it constructs a tailor-made network that executes the plan on the image in multiple steps. This network is composed of discrete units selected out of a collection of predefined modules, each responsible for an elementary reasoning operation, such as identifying an object's color, filtering objects for their shape, or comparing two amounts. Each module has its own set of learned parameters (Johnson et al., 2017), or even a hand-crafted design (Andreas et al., 2016a) to guide it towards its intended behavior.
Overall, this approach makes discrete choices at two levels: the identity of each module – the behavior it should learn among a fixed set of possible types of behaviors, and the network layout – the way in which these modules are wired together to compute the answer progressively. Hence, their differentiability is confined to the boundaries of a single module, disallowing end-to-end training.
Several key differences exist between our approaches. First, our model replaces the fixed modules collection with one versatile and universal cell that shares both its architecture and its parameters across all of its instantiations, and is applied across all the reasoning steps. Second, it replaces the dynamic recursive tree structures with a sequential topology, augmented by soft attention mechanisms, as done in Bahdanau et al. (2014). This confers our network with a virtual capacity to represent arbitrarily complex Directed Acyclic Graphs (DAGs) while still having an efficient and readily deployed physical sequential structure. Together, both of these relaxations allow us to effectively train our model end-to-end by backpropagation alone, whereas module networks demand a more involved training scheme that relies on the strongly-supervised programs at the first stage, and on various Reinforcement Learning (RL) techniques at the second. Furthermore, while our model can be trained without the strong supervisory programs, developing adaptive reasoning skills to address the task it is trained for, the modular approach's reliance on a structured and formal representation of questions hinders its applicability to real-world tasks.
C.2 AUGMENTED CONVOLUTIONAL NEURAL NETWORKS
Alternative approaches for the CLEVR task that do not rely on the provided programs as a strong supervision signal are Santoro et al. (2017) and Perez et al. (2017). Both complement standard multi-layer Convolutional Neural Networks (CNNs) with components that aid them in handling compositional and relational questions.
Relational Networks. Santoro et al. (2017) append a Relation Network (RN) layer to the CNN. This layer inspects all pairs of pixels in the image, thereby enhancing the network's capacity to reason over binary relations between objects. While this approach is very simple and elegant conceptually, it suffers from quadratic computational complexity, in contrast to our and other leading approaches. But beyond that, closer inspection reveals that this direct pairwise comparison might be unnecessary. Based on the analogy suggested by Santoro et al. (2017), according to which pixels are equivalent to objects and their pairwise interactions to relations, a RN layer attempts to grasp the induced graph between objects all at once in one shallow and broad layer. Conversely, our attention-based model proceeds in steps. It basically compares the image to its current memory and control for this step, aggregates the attended regions into the new memory, and repeats the process. By the same analogy, it traverses a narrow and deep path, progressively following transitive relations. Consequently, our model exhibits a relational capacity while circumventing the computational inefficiency.
FiLM. FiLM (Perez et al., 2017) is a recently proposed method that interleaves standard CNN layers that process the given image with linear layers, reminiscent of layer normalization techniques (Ba et al., 2016; Ioffe & Szegedy, 2015). Each of these layers, called FiLM, is conditioned on the question: the question words are processed by a GRU, and its output is linearly transformed into matching biases and variances for each of the CNN layers, tilting its activations to reflect the specifics of the given question and affect the computation done over the image.
Similarly to our model, this approach features distant modulation between the question and the image, where rather than being fused together into the same vector space, the question can affect the image processing only through constrained means, for the case of FiLM, linear transformations. However, since the same transformation is applied to all the activations homogeneously, agnostic to both their spatial location as well as the feature values, this approach does not allow the question to differentiate between regions in the image based on the objects or concepts they represent – on the content of the image. This stands in stark contrast to our attention-based model, which readily allows and actually encourages the question to inform the model about relevant regions to focus on. We speculate that this still distant, yet more direct interaction between the question and the data, or image, for the case of VQA, facilitates learning and increases generalizability. It may be more suitable to VQA tasks, and CLEVR in particular, where the questions demand the responder to focus on specific objects, and reason about their properties or relations, rather than respond based only on a holistic view of the image that may lead to sub-optimal results (Yang et al., 2016), as is the case of FiLM. Indeed, as demonstrated in section 4, there is significant evidence showing our model's better generalization capacity, allowing it to achieve high accuracies much faster, and from less data than FiLM and other competing methods.
C.3 MEMORY AND ATTENTION
Our architecture draws inspiration from recent research on memory and attention (Kumar et al., 2016; Xiong et al., 2016; Graves et al., 2014; 2016). Kumar et al. (2016); Xiong et al. (2016) propose the Dynamic Memory Network model that proceeds in an iterative process, applying soft attention to retrieve relevant information from a visual or textual KB, which is in turn accumulated into memory passed from one iteration to the next. However, in contrast to our model, it views the question as an atomic unit, whereas our model decomposes it into a multi-step action plan informing each cell in our sequential network about its current objective. Another key difference is the distant interaction between the question and the KB that characterizes our model. Conversely, DMN fuses their corresponding representations together into the same vector space.
Graves et al. (2016; 2014) complement a neural network with a memory array it can interact with through the means of soft attention. Analogously to our model, this approach partitions the model into a core neural network, called the controller, as well as reading and writing heads that interact with an external memory array. However, a main point distinguishing our model from this approach is the use of dynamic memory, as in Kumar et al. (2016), instead of a fixed-array memory. Each MAC cell is associated with a memory state, our reading unit inspects only the latest memory passed from the previous cell, and our writing unit creates a new memory state rather than writing to multiple slots in a fixed shared external memory. Notably, our approach is much more reminiscent of the widely successful RNN structure than of Graves et al. (2016; 2014).
Finally, our approach has potential ties to the VQA models of Hu et al. (2017); Lu et al. (2016), which also attend to both the question words and the image while progressively addressing the given question. However, both of these models have distinct specialized designs for each of their attention layers or modules, and have a discrete or fixed layout in which these are composed together. In contrast, our approach relaxes both of these limitations, having one universal cell design and one universal self-attending sequential network layout.
C.4 ATTENTION VS. CONVOLUTION
Compared to other leading methods, our model stands out by being heavily based on soft attention, whereas most competing approaches are CNN-based and, surprisingly, lack any attention mechanism. Since attention is commonly used in models designed for standard VQA (Antol et al., 2015; Gupta, 2017; Lu et al., 2016; Yang et al., 2016), it is reasonable to assume that it would be beneficial to incorporate such methods into visual reasoning systems for the CLEVR task as well. In fact, attention mechanisms should be especially useful for multi-step reasoning questions such as those present in CLEVR. Such questions refer to several relations between different objects in the image and feature a compositional structure that may be approached one step at a time. Thus, it should be beneficial for a cogent responder to have the capacity to selectively focus on one or some objects at each step, traversing the relevant relational links one after the other, both at the image level and at the question level.
Moreover, attention mechanisms enhance our model’s ability to perform reasoning skills that pertain to the aggregation of information across different regions, such as counting, finding a maximum value, or performing other reduction operations over information that is spread across the image. Indeed, as discussed in section 4, all existing models for visual reasoning, most of which lack any attention mechanism, struggle with the counting and numerical comparison questions present in CLEVR. Conversely, our model proves much more capable of performing these reasoning skills, outperforming the other approaches by a wide margin. Noticeably, incorporating soft attention into our model makes it much more adept at performing such aggregation reasoning skills, successfully addressing this type of question.
Finally, as pointed out by Lu et al. (2016); Yang et al. (2016), soft attention confers the model with robustness to noise introduced by irrelevant information present in the image, and a higher capacity for handling a larger and more diverse vocabulary, the latter being demonstrated in section 4. It allows the model to separate the wheat from the chaff, selectively attending to the relevant information only, and arguably being more resilient to both visual and linguistic variations. | 1. What is the main contribution of the paper on machine reasoning?
2. What are the strengths and weaknesses of the proposed model architecture?
3. How does the reviewer assess the significance of the results on various datasets?
4. What additional experiments or analyses would help better understand the improvements brought by the new model?
5. Are there any concerns regarding the comparison with other models, such as FiLM?
6. How could the authors provide evidence for the reasonableness of the control mechanism, self-attention, and gating mechanisms?
7. What are some suggestions for improving the presentation typography? | Review | Review
This paper describes a new model architecture for machine reasoning. In contrast
to previous approaches that explicitly predict a question-specific module
network layout, the current paper introduces a monolithic feedforward network
with iterated rounds of attention and memory. On a few variants of the CLEVR
dataset, it outperforms both discrete modular approaches, existing iterated
attention models, and the conditional-normalization-based FiLM model.
So many models are close to perfect accuracy on the standard CLEVR dataset that
I'm not sure how interesting these results are. In this respect I think the
current paper's results on CLEVR-Humans and smaller fractions of synthetic CLEVR
are much more exciting.
On the whole I think this is a strong paper. I have two main concerns. The
largest is that this paper offers very little in the way of analysis. The model
is structurally quite similar to a stacked attention network or a particular
fixed arrangement of attentive N2NMN modules, and it's not at all clear based on
the limited set of experimental results where the improvements are actually
coming from. It's also possible that many of the proposed changes are
complementary to NMN- or CBN-type models, and it would be nice to know if this
is the case.
Secondarily, the paper asserts that "our architecture can handle
datasets more diverse than CLEVR", but runs no experiments to validate this. It
seems like once all the pieces are in place it should be very easy to get
numbers on VQA or even a more interesting synthetic dataset like NLVR.
Based on a sibling comment, it seems that there may also be some problems with
the comparison to FiLM, and I would like to see this addressed.
On the whole, the results are probably strong enough on their own to justify
admitting this paper. But I will become much more enthusiastic about it if the
authors can provide results on other datasets (even if they're not
state-of-the-art!) as well as evidence for the following:
1. Does the control mechanism attend to reasonable parts of the sentence?
Here it's probably enough to generate a bunch of examples showing sentence
attentions evolving over time.
2. Do these induce reasonable attentions over regions of the image?
Again, examples are fine.
3. Do the self-attention and gating mechanisms recover the right structure?
In addition to examples, here I think there are some useful qualitative
measures. It should be possible to extract reasonable discretized "reasoning
maps" by running MST or just thesholding on the "edge weights" induced by
attention and gating. Having extracted these from a bunch of examples, you can
compare them to the structural properties of the ground-truth CLEVR network
layouts by plotting a comparison of sizes, branching factors, etc.
4. More on the left side of the dataset size / accuracy curve. What happens if
you only give the model 7000 examples? 700? 70?
Fussy typographical notes:
- This paper makes use of a lot of multi-letter names in mathmode. These are
currently written like $KB$, which looks bad, and should instead be
$\mathit{KB}$.
- Variables with both superscripts and subscripts have the superscripts pushed
off to the right; I think you're writing these like $b_5 ^d$ but they should
just be $b_5^d$ (no space).
- Number equations and then don't bother carrying subscripts like $W_3$, $W_4$
around across different parts of the model---this isn't helpful.
- The superscripts indicating the dimensions of parameter matrices and vectors
are quite helpful, but don't seem to be explained anywhere in the text. I
think the notation $W^{(d \times d)}$ is more standard than $W^{d, d}$.
- Put the cell diagrams right next to the body text that describes them (maybe even
inline, rather than in figures). It's annoying to flip back and forth. |
ICLR | Title
Compositional Attention Networks for Machine Reasoning
Abstract
We present Compositional Attention Networks, a novel fully differentiable neural network architecture, designed to facilitate explicit and expressive reasoning. While many types of neural networks are effective at learning and generalizing from massive quantities of data, this model moves away from monolithic black-box architectures towards a design that provides a strong prior for iterative reasoning, allowing it to support explainable and structured learning, as well as generalization from a modest amount of data. The model builds on the great success of existing recurrent cells such as LSTMs: It sequences a single recurrent Memory, Attention, and Control (MAC) cell, and by careful design imposes structural constraints on the operation of each cell and the interactions between them, incorporating explicit control and soft attention mechanisms into their interfaces. We demonstrate the model’s strength and robustness on the challenging CLEVR dataset for visual reasoning, achieving a new state-of-the-art 98.9% accuracy, halving the error rate of the previous best model. More importantly, we show that the new model is more computationally efficient and data-efficient, requiring an order of magnitude less time and/or data to achieve good results.
1 INTRODUCTION
This paper considers how best to design neural networks to perform the iterative reasoning necessary for complex problem solving. Putting facts and observations together to arrive at conclusions is a central necessary ability as we work to move neural networks beyond their current great success with sensory perception tasks (LeCun et al., 1998; Krizhevsky et al., 2012) towards displaying Artificial General Intelligence.
Concretely, we develop a novel model that we apply to the CLEVR dataset (Johnson et al., 2016) for visual question answering (VQA). VQA (Antol et al., 2015; Gupta, 2017) is a challenging multimodal task that requires responding to natural language questions about images. However, Agrawal et al. (2016) show how the first generation of successful models on VQA tasks tend to acquire only a superficial comprehension of both the image and the question, exploiting dataset biases rather than capturing a sound perception and reasoning process that would lead to the correct answer (Sturm, 2014). CLEVR was created to address this problem. As illustrated in figure 1, instances in the dataset consist of rendered images featuring 3D objects of several shapes, colors, materials and sizes, coupled with unbiased, compositional questions that require an array of challenging reasoning skills such as following transitive relations, counting objects and comparing their properties, without allowing any shortcuts around such reasoning. Notably, each instance in CLEVR is also accompanied by a tree-structured functional program that was used to construct the question and that reflects its reasoning procedure – a series of predefined operations that can be composed together to answer it.
Most neural networks are essentially very large correlation engines that will hone in on any statistical, potentially spurious pattern that allows them to model the observed data more accurately. In contrast, we seek to create a model structure that requires combining sound inference steps to solve a problem instance. At the other extreme, some approaches adopt symbolic structures that resemble the expression trees of programming languages to perform reasoning (Andreas et al., 2016b; Hu et al., 2017). In particular, some approaches to CLEVR use the supplied functional programs for supervised or semi-supervised training (Andreas et al., 2016a; Johnson et al., 2017). Not only do we wish to avoid using such supervision in our work, but we in general suspect that the rigidity of these structures and the use of an inventory of operation-specific neural modules undermines robustness and generalization, and at any rate requires more complex reinforcement learning methods.
To address these weaknesses, while still seeking to use a sound and transparent underlying reasoning process, we propose Compositional Attention Networks, a novel, fully differentiable, non-modular architecture for reasoning tasks. Our model is a straightforward recurrent neural network with attention; the novelty lies in the use of a new Memory, Attention and Control (MAC) cell. The constrained and deliberate design of the MAC cell was developed as a kind of strong structural prior that encourages the network to solve problems by stringing together a sequence of transparent reasoning steps. MAC cells are versatile but constrained neural units. They explicitly separate out memory from control, both represented recurrently. The unit contains three sub-units: The control unit updates the control representation based on outside instructions (for VQA, the question), learning to successively attend to different parts of the instructions; the read unit gets information out of a knowledge base (for VQA, the image) based on the control signal and the previous memory; the write unit updates the memory based on soft self-attention to previous memories, controlled by the retrieved information and the control signal. A universal MAC unit with a single set of parameters is used throughout the reasoning process, but its behavior can vary widely based on the context in which it is applied – the input to the control unit and the contents of the knowledge base. With attention, our MAC network has the capacity to represent arbitrarily complex acyclic reasoning graphs in a soft manner, while having physically sequential structure. The result is a continuous counterpart to module networks that can be trained end-to-end simply by backpropagation.
We test the behavior of our new network on CLEVR and its associated datasets. On the primary CLEVR reasoning task, we achieve an accuracy of 98.9%, halving the error rate compared to the previous state-of-the-art FiLM model (Perez et al., 2017). In particular, we show that our architecture yields better performance on questions involving counting and aggregation. In supplementary studies, we show that the MAC network learns more quickly (both in terms of number of training epochs and training time) and more effectively from limited amounts of training data. Moreover, it also achieves a new state-of-the-art performance of 82.5% on the more varied and difficult human-authored questions of the CLEVR-Humans dataset. The careful design of our cell encourages compositionality, versatility and transparency. We achieve these properties by defining attention-based interfaces that constrict the cell’s input and output spaces, and so constrain the interactions both between and inside cells in order to guide them towards simple reasoning behaviors. Although each cell’s functionality has only a limited range of possible continuous reasoning behaviors, when chained together in a MAC network, the whole system becomes expressive and powerful. In the future, we believe that the architecture will also prove beneficial for other multi-step reasoning and inference tasks, for instance in machine comprehension and textual question answering.
2 RELATED WORK
There have been several prominent models that address the CLEVR task. By and large they can be partitioned into two groups: module networks, which in practice have all used the strong supervision provided in the form of tree-structured functional programs that accompany each data instance, and large, relatively unstructured end-to-end differentiable networks that complement a fairly standard stack of CNNs with components that aid in performing reasoning tasks. In contrast to modular approaches (Andreas et al., 2016a;b; Hu et al., 2017; Johnson et al., 2017), our model does not require additional supervision and makes use of a single computational cell chained in sequence (like an LSTM) rather than a collection of custom modules deployed in a rigid tree structure. In contrast to augmented CNN approaches (Santoro et al., 2017; Perez et al., 2017), we suggest that our approach provides an ability for relational reasoning with better generalization capacity and higher
computational efficiency. These approaches and other related work are discussed and contrasted in more detail in the supplementary material in section C.
3 COMPOSITIONAL ATTENTION NETWORKS
Compositional Attention Networks is an end-to-end architecture for question-answering tasks that sequentially performs an explicit reasoning process by stringing together small building blocks, called MAC cells, each of which is responsible for performing one reasoning step.
We now provide an overview of the model, and a detailed discussion of the MAC cell. The model is composed of three components: an Input unit, the core MAC network, and an output unit. A TensorFlow implementation of the network, along with pretrained models will be made publicly available.
In this paper we explore the model in the context of VQA. However, it should be noted that while the input and output units are naturally domain-specific and should be designed to fit the task at hand, the MAC network has been designed to be generic and more broadly applicable, and may prove useful in contexts beyond those explored in the paper, such as machine comprehension or question answering over knowledge bases, which in our belief is a promising avenue for future work.
3.1 THE INPUT UNIT
The input unit processes the raw inputs given to the system into distributed vector representations. It receives a text question (or in general, a query), and an image (or in general, a Knowledge Base (KB)) and processes each of them with a matching sub-unit, for the query and the KB, here a biLSTM and a CNN. More details can be found in the supplementary material, section A.
At the end of this stage, we get from the query sub-unit a series of biLSTM output states, which we refer to as contextual words, $[cw_1, \ldots, cw_S]$, where S is the length of the question. In addition, we get $q = [\overleftarrow{cw_1}, \overrightarrow{cw_S}]$, the concatenation of the hidden states from the backward and forward LSTM passes. We refer to q as the question representation. Furthermore, we get from the Knowledge-Base sub-unit a static representation of the knowledge base. For the case of VQA, it will be represented by a continuous tensor $KB_V$ of dimension H × W × d, where H = W = 14 are the height and width of the transformed image, corresponding to each of its regions.
3.2 THE MAC CELL
The MAC network, which is the heart of our model, chains a sequence of small building blocks, called MAC cells, each responsible for performing one reasoning step. The model is provided access to a Knowledge Base (KB), which is, for the specific case of VQA, the given image. Then, upon receiving a query, i.e. a question, the model iteratively focuses, in p steps, on the query’s various parts, each of which reflects in turn the current reasoning step, which we term the control. Consequently, guided by this control, it retrieves the relevant information from the KB, which is then passed to the next cell in a recurrent fashion.
Drawing inspiration from the Model-View-Controller paradigm used in software design and from the commonly exercised separation between control and data paths in computer architecture, the MAC cell is composed of three units: control unit, read unit and write unit. Each has a clearly defined role and an interface through which it interacts with the other units. See figure 2.
The careful design and imposed interfaces that constrain the interaction between the units inside the MAC cell, as described below, serve as a structural prior that limits the space of hypotheses it can learn, thereby guiding it towards acquiring the intended reasoning behaviors. As such, this prior facilitates the learning process and mitigates overfitting issues.
In particular, and similar in spirit to Perez et al. (2017), we allow the question to interact with the Knowledge Base – the image for the case of VQA – only through indirect means: by guiding the cell to attend to different elements in the KB, as well as controlling its operation through gating mechanisms. Thus, in both cases, the interaction between these mediums, visual and textual, or knowledge and query, is mediated through probability distributions, either in the form of attention maps, or as gates, as further detailed below. This stands in stark contrast to many common approaches that fuse the
question and image together into the same vector space through linear combinations, multiplication, or concatenation. Rather, our controlled interaction distills the influence that the query should have in processing the Knowledge Base, casting it onto discrete probability distributions instead.
The MAC cell has been designed to replace the discrete and predefined “modules” used in the modular approach (Andreas et al., 2016a;b; Hu et al., 2017; Johnson et al., 2017). Rather, we create one universal and versatile cell that is applied across all the reasoning steps, sharing both its architecture as well as its parameters, across all of its instantiations. In contrast to the discrete modules, each trained to specialize to some specific elementary reasoning task, the MAC cell is capable of demonstrating a continuous range of possible reasoning behaviors conditioned on the context in which it is applied – namely, the inputs it receives from the prior cell.
Each cell MACi maintains two dual states: control ci and memory mi, both continuous vectors of dimension d. The control ci represents the reasoning operation the MAC cell should accomplish in the current step – focusing only on some aspect of the whole question. This is represented by a weighted-average attention-based sum of the question words. The memory mi represents the current context information deemed relevant to respond to the query, or answer the question. This is represented practically by a weighted average over elements from the KB, or for the case of VQA, regions in the image. m0 and c0 are each initialized to a random vector parameter of dimension d. The memory and control states are passed from one cell to the next in a recurrent fashion, and used in a way reminiscent of Key-Value memory networks (Miller et al., 2016), as discussed below.
3.2.1 THE CONTROL UNIT
The control unit determines the reasoning operation that should be applied at this step. It receives the contextual words [cw1, ..., cwS ], the question representation q, and the control state from the previous MAC cell ci−1, all of which are vectors of dimension d.
We would like to allow our MAC cell to perform a continuously varied and adaptive range of behaviors, as demanded by the question. Therefore, we define the behavior of each cell to be a function of the contextual words $[cw_1, \ldots, cw_S]$, weighted-averaged according to the attention distribution that the control unit produces at each step. This allows the cell to adapt its behavior – the reasoning operation it performs – to the question it receives, instead of having a fixed set of predefined behaviors, as is the case in competing approaches (Andreas et al., 2016a;b; Johnson et al., 2017).
The formal specification of the control unit is shown in figure 3. The question q is linearly transformed into a vector qi of the same dimension, which in turn is concatenated with the previous control state ci−1 and linearly transformed again to a d-dimensional vector cqi.
$$q_i = W_i^{d,d} \cdot q + b_i^d \quad (1)$$
$$cq_i = W^{2d,d}\,[q_i, c_{i-1}] + b^d \quad (2)$$
Note that in contrast to all other parameters of the cell, which are shared across its instantiations at the different steps i = 1, ..., p, the parameters $W_i^{d,d}$ and $b_i^d$ are different for each iteration. This
is done to allow each cell to attend more readily to different aspects (i.e. parts) of the questions, depending on the index of the current step – its relative stage in the context of the whole reasoning process.
cqi represents the current reasoning operation we would like to perform in a continuous way, taking into account both the overall meaning of the question qi, as well as the words the model attended to in the previous step, ci−1.
However, we would like to prevent the cell from diverging in the reasoning operations it tries to perform, and instead anchor it back in the question words, by using them to represent the reasoning operation of the current step. We can achieve that by computing an attention distribution cvi over the contextual words [cw1, ..., cwS ] based on their similarity to cqi. Then, summing the contextual words according to the attention distribution cvi will allow us to have a new control state, ci, which is represented again in terms of words from the question. Intuitively, it is the gist of the question that is relevant to the reasoning operation we would like to perform in the current step.
$$cv_{i,s} = \mathrm{softmax}\left(W^{d,1}(cq_i \circ cw_s) + b^1\right) \quad (3a)$$
$$c_i = \sum_{s=1}^{S} cv_{i,s} \cdot cw_s \quad (3b)$$
Finally, the control unit returns the current control state ci, along with an attention map cvi over the contextual words.
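As a concrete, necessarily simplified illustration of equations 1-3, the following PyTorch sketch implements the control unit for a batch of questions. It is not the authors' released code; shapes, names, and the per-step parameter layout are our assumptions.

```python
import torch
import torch.nn as nn

class ControlUnit(nn.Module):
    def __init__(self, d=512, num_steps=12):
        super().__init__()
        # Per-step projection W_i^{d,d}, b_i^d of eq. (1); all other parameters
        # are shared across reasoning steps.
        self.step_proj = nn.ModuleList(nn.Linear(d, d) for _ in range(num_steps))
        self.cq_proj = nn.Linear(2 * d, d)   # eq. (2)
        self.attn = nn.Linear(d, 1)          # eq. (3a)

    def forward(self, step, q, c_prev, cw):  # q, c_prev: (B, d); cw: (B, S, d)
        q_i = self.step_proj[step](q)                          # eq. (1)
        cq = self.cq_proj(torch.cat([q_i, c_prev], dim=-1))    # eq. (2)
        logits = self.attn(cq.unsqueeze(1) * cw).squeeze(-1)   # eq. (3a)
        cv = torch.softmax(logits, dim=-1)                     # attention over words
        c_i = (cv.unsqueeze(-1) * cw).sum(dim=1)               # eq. (3b)
        return c_i, cv
```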
3.2.2 THE READ UNIT
The Read Unit is provided with access to the knowledge base KBV , along with the previous memory state mi−1 and the current control ci. It is responsible for retrieving relevant content from the Knowledge Base KBV for the reasoning task that the MAC cell should accomplish at this step, which is represented by the current control state ci, as explained above. Figure 4 shows a diagram.
The relevance of the new information is judged in two stages by the “relatedness” of each element in the KB (or for the case of VQA, each region in the image) to either the memory mi−1 that has accumulated relevant information from previous iterations, or to the current control ci, pointing towards the next piece of information that should be taken into account. Here, relatedness is measured by trained linear transformations comparing each element to the previous memory and the current control.
More formally, at the first stage, the interaction between each element $KB_{h,w}$, where h = 1, ..., H and w = 1, ..., W, and the previous memory $m_{i-1}$ is computed by:
$$m'_{i-1} = W^{d,d} \cdot m_{i-1} + b^d \quad (4)$$
$$KB'_{h,w} = W^{d,d} \cdot KB_{h,w} + b^d \quad (5a) \qquad (I_{m-KB})_{h,w} = m'_{i-1} \circ KB'_{h,w} \quad (5b)$$
These memory-KB interactions measure the relatedness of each element in the KB to the memory accumulated so far, which holds information that has been deemed relevant to handle previous reasoning steps towards addressing the question. They allow the model to perform transitive inference, retrieving a new piece of information that now seems important in light of the recent memory retrieved in a prior iteration.
However, there are cases that require the model to temporarily ignore current memories when choosing the new information to retrieve. Logical OR is a classical example: when the model has to look at two different objects at the same time, and assuming it stored one of them at the first iteration, it should briefly ignore it, considering new information that is relevant to the question but unrelated to the memory. In order to achieve such a capability, the read unit concatenates the original KB elements to each corresponding memory-KB interaction, which are then projected back to d-dimensional space (equation 6a):
$$(I_{m-KB})'_{h,w} = W^{2d,d}\,[(I_{m-KB})_{h,w}, KB_{h,w}] + b^d \quad (6a)$$
$$(I_{cm-KB})_{h,w} = c_i \circ (I_{m-KB})'_{h,w} \quad (6b)$$
At the second stage, the read unit compares the current control ci with these memory-KB interactions, in order to focus on the information that is relevant to the current reasoning operation that the MAC cell seeks to accomplish. The result is then passed to a softmax layer, yielding an attention map mvi over the KB, which is used in turn to retrieve the relevant information to perform the current reasoning step.
$$mv_i = \mathrm{softmax}\left(W^{d,1} \cdot (I_{cm-KB})_{h,w} + b^1\right) \quad (7a)$$
$$m_{new} = \sum_{h,w=1,1}^{H,W} (mv_i)_{h,w} \cdot KB_{h,w} \quad (7b)$$
Finally, the read unit returns the newly retrieved information mnew, along with an attention map mvi over the Knowledge Base.
To give an example of the read unit operation, assume a given question q such as “What object is located left to the blue ball?”, whose associated answer is “cube”. Initially, no cue is provided to the model to attend to that cube, since no direct information about it is present in the question. Instead, based on its comprehension of the question, the model may start by focusing on the blue ball at the first iteration, such that the memory state m1 will capture the blue ball. However, in the second iteration, the control unit, after re-examining the question, may realize it should now look left, storing the word “left” in c2. Then, when considering both m1 and c2, the read unit will realize it should perform a reasoning operation corresponding to the word “left” (stored in c2) given a memory representing the blue ball in m1, thereby allowing it to look left of the blue ball and find the cube.
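A compact sketch of equations 4-7 follows (PyTorch, illustrative only), with the KB flattened from H × W regions to N = H·W rows; the scalar-per-region attention projection reflects our reading of eq. 7a, by analogy with eqs. 3a and 9a.

```python
import torch
import torch.nn as nn

class ReadUnit(nn.Module):
    def __init__(self, d=512):
        super().__init__()
        self.mem_proj = nn.Linear(d, d)          # eq. (4)
        self.kb_proj = nn.Linear(d, d)           # eq. (5a)
        self.concat_proj = nn.Linear(2 * d, d)   # eq. (6a)
        self.attn = nn.Linear(d, 1)              # eq. (7a)

    def forward(self, m_prev, c_i, kb):          # kb: (B, N, d) with N = H * W
        I = self.mem_proj(m_prev).unsqueeze(1) * self.kb_proj(kb)   # eq. (5b)
        I = self.concat_proj(torch.cat([I, kb], dim=-1))            # eq. (6a)
        I = c_i.unsqueeze(1) * I                                    # eq. (6b)
        mv = torch.softmax(self.attn(I).squeeze(-1), dim=-1)        # eq. (7a)
        m_new = (mv.unsqueeze(-1) * kb).sum(dim=1)                  # eq. (7b)
        return m_new, mv
```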
3.2.3 THE WRITE UNIT
The Write Unit is responsible for creating the new memory state mi that will reflect all the information considered to be important to answer the question so far, i.e. up to the current iteration in the
reasoning process. It receives the last memory state mi−1 from the previous MAC cell, along with the newly retrieved information from the read unit in the current iteration, mnew. See figure 5 for a diagram.
In the main design we have explored, merging the new information with the previous memory state is done simply by a linear transformation.
$$m'_i = W^{2d,d}\,[m_{new}, m_{i-1}] + b^d \quad (8)$$
In addition, we have explored two variations of this design. The first, self-attention, allows considering any previous memories rather than just the last one mi−1, thus providing the network with the capacity to model non-sequential reasoning processes. The second variation is adding gating mechanisms to the writing unit. These may allow the model to dynamically adjust the practical length of the computation to the question complexity and stabilize the memory content throughout the sequential network (similarly to GRUs and LSTMs).
Self-Attention. The current architecture that we have presented allows the model to perform reasoning steps in a sequence, passing control and memory states from one cell to the following. However, we would like to grant the system more flexibility. In particular, we would like to allow it to capture more complicated reasoning processes such as trees and graphs, Directed Acyclic Graphs (DAGs) in particular, where several branches of reasoning sub-processes are merged together in later stages. Indeed, the CLEVR dataset includes cases where the questions embody a tree-like reasoning process, rather than just sequences, which we would like to address correctly in our model.
We achieve this by adding self-attention connections between each MAC cell and all the prior cells. Since each cell can look at all the prior reasoning steps and their corresponding memories retrieved from the Knowledge Base, it can virtually capture any directed acyclic graph, while still having a physically sequential layout.
More formally, the current MAC cell, of the ith iteration, is granted access to c1, ..., ci−1 along with the corresponding m1, ...,mi−1, which have been computed by the prior MAC cells. It begins by computing the similarity between ci and c1, ..., ci−1, and uses it to derive an attention map over the prior MAC cells, sai,j for j = 1, ..., i − 1. This represents the relevance of the jth prior reasoning step to the current one i (equation 9a).
Then, we average the previous memories according to this resulting attention map saij. We obtain msa, representing the information from all the other reasoning steps that is relevant to the current one (equation 9b).
This resembles the approach of Key-Value networks (Miller et al., 2016). The similarity between control states, corresponding to the reasoning operations that are performed in each prior step, allows the model to select which memories should be taken into account, when creating the new memory – namely, which branches of the reasoning process should be merged together at this point.
$$sa_{ij} = \mathrm{softmax}\left(W^{d,1}(c_i \circ c_j) + b^1\right) \quad (9a)$$
$$(m_{sa})_i = \sum_{j=1}^{i-1} sa_{ij} \cdot m_j \quad (9b)$$
Finally, we use $(m_{sa})_i$ along with $m'_i$ to compute $m''_i$, the new memory content in this variation.
$$m''_i = W^{2d,d}\,[(m_{sa})_i, m'_i] + b^d \quad (10)$$
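Equations 8-10 can be sketched jointly as follows (PyTorch, illustrative only; we assume at least one prior reasoning step so that the self-attention term is well defined, and all names and shapes are our own):

```python
import torch
import torch.nn as nn

class WriteUnit(nn.Module):
    def __init__(self, d=512):
        super().__init__()
        self.merge = nn.Linear(2 * d, d)      # eq. (8)
        self.attn = nn.Linear(d, 1)           # eq. (9a)
        self.merge_sa = nn.Linear(2 * d, d)   # eq. (10)

    def forward(self, m_new, controls, memories):
        # controls = [c_1, ..., c_i]; memories = [m_1, ..., m_{i-1}]; each (B, d).
        m = self.merge(torch.cat([m_new, memories[-1]], dim=-1))    # eq. (8)
        c_i = controls[-1]
        prev_c = torch.stack(controls[:-1], dim=1)                  # (B, i-1, d)
        prev_m = torch.stack(memories, dim=1)                       # (B, i-1, d)
        logits = self.attn(c_i.unsqueeze(1) * prev_c).squeeze(-1)   # eq. (9a)
        sa = torch.softmax(logits, dim=-1)
        m_sa = (sa.unsqueeze(-1) * prev_m).sum(dim=1)               # eq. (9b)
        return self.merge_sa(torch.cat([m_sa, m], dim=-1))          # eq. (10)
```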
Memory Gate. The MAC network presented so far has some fixed number p of concatenated MAC cells, representing the length of the overall reasoning process we perform. However, not all questions require a reasoning sequence of the same length: some questions are simpler while others are more complex.
Motivated by this observation, we add a gate over the new memory computed at each step, which may selectively keep the content of the previous memory mi−1 unchanged. Practically, the gate functions in a similar way to a highway network (Srivastava et al., 2015), where the gate value is conditioned on the current reasoning operation, ci.
$$c'_i = W^{d,d} \cdot c_i + b^d \quad (11a)$$
$$m_i = \mathrm{sigmoid}(c'_i) \cdot m_{i-1} + (1 - \mathrm{sigmoid}(c'_i)) \cdot m''_i \quad (11b)$$
The write unit returns the new memory state mi, that will be passed along with ci to the next MAC cell.
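A sketch of the gate of equation 11 follows (PyTorch, illustrative only). Following eq. 11a as written, the gate here is d-dimensional and applied elementwise; its bias corresponds to the initialization studied in the ablations of section 4.3.

```python
import torch
import torch.nn as nn

class MemoryGate(nn.Module):
    def __init__(self, d=512, gate_bias=1.0):
        super().__init__()
        self.gate = nn.Linear(d, d)                  # eq. (11a)
        nn.init.constant_(self.gate.bias, gate_bias)

    def forward(self, c_i, m_prev, m_candidate):
        g = torch.sigmoid(self.gate(c_i))            # eq. (11a)
        # eq. (11b): interpolate between the previous memory and the new one.
        return g * m_prev + (1.0 - g) * m_candidate
```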
3.2.4 DISCUSSION
Overall, when designing the MAC cell, we have attempted to formulate the inner workings of an elementary, yet generic, reasoning skill: the model decomposes the problem into steps, focusing on one at a time. At each such step, it takes into account:
• The control ci: some aspect of the task, pointing to the future work that is left to be done.
• The previous memory or memories: The partial solution or evidence the cell has acquired so far – pointing to the past work that has already been achieved.
• The newly retrieved information mnew: retrieved from the knowledge base KB, which may or may not be transitively related to that partial solution or evidence - the present, or current, work.
Considering these three sources of information together, the cell finally adds the new information up into its working memory, mi, progressing one more step towards the final answer.
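Putting the three units together, one reasoning step and the overall p-step recurrence can be sketched as follows (illustrative; this composes the unit sketches given above and reflects our reading of the model's wiring, not the authors' released code):

```python
# Illustrative composition of the ControlUnit, ReadUnit and WriteUnit sketches
# above into the p-step MAC recurrence. c0, m0 are the learned initial vectors.
def mac_network(control_unit, read_unit, write_unit, q, cw, kb, c0, m0, p=12):
    c, m = c0, m0
    controls, memories = [c0], []
    for step in range(p):
        c, _ = control_unit(step, q, c, cw)        # what to reason about now
        m_new, _ = read_unit(m, c, kb)             # retrieve from the KB
        controls.append(c)
        memories.append(m)
        m = write_unit(m_new, controls, memories)  # integrate into memory
    return m                                       # m_p, fed to the output unit
```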
3.3 THE OUTPUT UNIT
The output unit receives the question representation q, along with the memory state passed from the last MAC cell mp, where p is the number of MAC cells in the network – representing the number of reasoning steps in the whole process. It inspects both and predicts an answer based on their concatenation. Intuitively, we would like our model to consider both the question as well as the relevant information that has been progressively retrieved from the KB, deemed the necessary information to answer it.
Note that considering both q and mp is critical to answer the question. While mp represents the information collected from KB, we still need to recall what has been asked about it to be able to answer accordingly. This is especially true in our case, when all other interactions between the question and the KB are mediated through attention distributions, rather than being transformed into a shared continuous vector space.
The prediction is made by a standard 2-layer fully-connected softmax-based classifier with hidden dimension d and an output dimension that matches the number of possible answers in the dataset. The classifier receives [mp, q] as input and returns a probability distribution over the answers.
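A sketch of this classifier follows (PyTorch, illustrative only; the hidden nonlinearity and the number of answer classes, 28 for CLEVR, are our assumptions):

```python
import torch
import torch.nn as nn

class OutputUnit(nn.Module):
    def __init__(self, d=512, num_answers=28):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(2 * d, d), nn.ELU(),   # hidden layer of dimension d
            nn.Linear(d, num_answers))

    def forward(self, m_p, q):
        # Predict from the concatenation [m_p, q] of final memory and question.
        logits = self.classifier(torch.cat([m_p, q], dim=-1))
        return torch.softmax(logits, dim=-1)
```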
4 EXPERIMENTS
We evaluate our model on the recent CLEVR dataset (Johnson et al., 2016). CLEVR is a synthetic dataset consisting of 700K tuples; each consists of a 3D-rendered image featuring objects of various shapes, colors, materials and sizes, coupled with compositional multi-step questions that measure performance on an array of challenging reasoning skills such as following transitive relations, counting objects and comparing their properties. In addition, each question is associated with a formal program, specifying the reasoning operations that should be performed to compute the answer, among 28 possibilities.
We first perform experiments on the original 700k CLEVR dataset (Johnson et al., 2016), comparing to prior work. As shown in table 1, our model matches or outperforms all existing models both in overall accuracy, as well as in each category, testing different reasoning skills. In particular, for the overall performance, we achieve 98.94% accuracy, more than halving the error rate of the prior best model, FiLM (Perez et al., 2017).
Counting and Numerical Comparison. Remarkably, our performance on questions testing counting and numerical comparisons is significantly higher than the competing models, which consistently struggle on this question type. Again, we nearly halve the corresponding error rate. These results demonstrate the aptitude of attention mechanisms to perform counting, reduction and aggregation, in contrast to alternative, CNN-based approaches.
Training Length and Computational Efficiency. We examine the learning curves of our model and of the competing models. We have trained all models under the same settings, using the authors' code for the other models. Aiming at equal settings for comparison, we ran all models, including ours, with learned word vectors initialized at random. In order to make sure the results are statistically significant, we ran each model multiple (10) times, and plotted the averages and confidence intervals (figure 4). The results show that our model learns significantly faster than the other leading methods, FiLM (Perez et al., 2017) and PG+EE (Johnson et al., 2017). While we do not have learning curves for the Relational Network model, Santoro et al. (2017) report approximately 1.4 million iterations to achieve 95.5% accuracy, which is equivalent to approximately 125 epochs, whereas our model achieves a comparable accuracy after only 3 epochs, yielding a 40x reduction in the length of the training process.
Naturally, the smaller number of required training steps also translates to a comparably shorter training time. Perez et al. (2017) report a training time of 4 days, equivalent to 80 epochs, to reach an accuracy of 97.7%. In contrast, we achieve higher accuracy in 6 epochs, taking 9.5 hours overall, leading to a 10x reduction in training time.
4.1 DATA EFFICIENCY
We have explored the performance of our and other leading approaches on smaller subsets of the CLEVR dataset, in order to study the ability of the models to generalize from a smaller amount of data. We sampled at random subsets of CLEVR with 10%, 25% and 50% of its original 700k size, and used them to train our model and 3 other models proposed for the CLEVR task: FiLM (Perez et al., 2017), the strongly-supervised PG+EE (Johnson et al., 2017), and stacked attention networks (Johnson et al., 2017; Yang et al., 2016).
As shown in figure 4, our model outperforms the other models by a wide margin for all subsets of the CLEVR dataset. For 50% of the data, equivalent to 350k samples, other models obtain accuracies ranging between 70% and 92%, while our model achieves 97.9%. The gap becomes larger as the dataset size reduces: for 25% of the data, equivalent to 175k samples, performance of other models is between 50% and 77%, while our model maintains a high 95.4% accuracy.
Finally, for 10% of the data – 70k samples, still a sizeable amount – our model is the only one that manages to generalize, with a performance of 84.7% on average, whereas the other three models fail, achieving 47.6%-57.5%. Note that, as pointed out by Johnson et al. (2016), a simple baseline that predicts the most frequent answer for each of the question types already achieves 42.1%, suggesting that answering half of the questions correctly means that the competing models barely learn to generalize from the smaller dataset. These results demonstrate the robustness of our architecture and its key role as a structural prior guiding our network to learn the intended reasoning skills.
4.2 CLEVR HUMANS - NATURAL LANGUAGE QUESTIONS
We analyze our model performance on the CLEVR-Humans dataset (Johnson et al., 2017), consisting of natural language questions collected through crowdsourcing. As such, the dataset has diverse vocabulary and linguistic variations, and it also demands more varied reasoning skills.
Since the training set is relatively small, consisting of 18k samples, we use it to finetune a model pretrained on the standard CLEVR dataset. However, since most of the vocabulary in CLEVR-Humans is not covered by CLEVR, we do not train the word vectors during the pre-training stage, so as to prevent drift in their meaning compared to other uncovered words in CLEVR-Humans that may be semantically related.
As shown in table 2, our model achieves state-of-the-art performance on CLEVR-Humans both before and after fine-tuning. It surpasses the next-best FiLM model (Perez et al., 2017) by 6.6%, achieving 82.5%.
The results substantiate the model’s robustness against linguistic variations and noise, as well as its ability to adapt to diverse vocabulary and varied reasoning skills. Arguably, the soft attention performed over the question words allows the model to focus on the words that are most critical to answer the question and translate them to corresponding reasoning operations, giving less attention to irrelevant linguistic variations.
4.3 ABLATIONS
Based on the validation set, we have conducted an ablation study on our model to better understand the contribution of each of its components to the overall performance. We tested each setting on the standard 700K CLEVR dataset as well as on a 10% subset of the dataset. See table 3 for the numerical results. In addition, figure 4.3 presents the training curves for the different settings trained on the standard dataset. Overall, the results demonstrate the robustness of the model to hyperparameter variations such as network dimension and length, and also show the impact of different aspects and components of MAC on its performance.
Network Length. We have tested the model performance as a function of the network’s length – the number of MAC cells that were sequenced together. The results show the positive correlation between the network length and its performance. We can see that for 1 cell the scores are relatively low – 75%, but adding at least one more cell leads to a significant increase in performance above 95%. The performance keeps improving up to lengths 8-16 that achieve 98.9-99.1%. The results also teach us about the complexity of the dataset, by showing the relatively significant benefits of having at least 4 cells, each modeling a reasoning step.
Network Dimension. We have varied the state dimension to check the robustness of the model to hyperparameters. The results on the standard CLEVR dataset show the model is able to maintain high performance with a dimension of 128, albeit after a longer training process, achieving 97.6%, compared to 98.94% achieved with a dimension of 512. However, for 10% of CLEVR, the larger 512-dimensional state yields an accuracy increase of 7.5% over the 128-dimensional one.
Weight Sharing. We have tested the impact that sharing weights between cells has on the model performance, for a network of length p = 12. The results show that for the standard dataset there is only a small difference of 1% between these settings. However, with less data, we see a much more significant drop of 16.9% in the unshared-parameters setting compared to the shared one. Indeed, we observe that a model with fewer parameters is more data-efficient and has a lower tendency to overfit the data.
Control Unit. We have performed several ablations in the control unit to understand its contribution to the overall model performance. Based on the results, first, we can see that the question information is crucial for the model to handle the questions, as indicated by the low performance of the model when no control signal is used whatsoever. Second, we have tested the model performance when using the continuous control state computed by equation (2) in section 3.2.1, without word-attention, in order to understand its relative contribution. Based on the results, we can indeed see that using word-attention is useful for accelerating the training process and achieving higher accuracies both for the standard dataset as well as for the small subset, where using word-attention increases results by 21.4%. We also see that using the “contextual words” produced by the question-unit LSTM is useful in accelerating training, when compared to using the word vectors directly.
Reading Unit. We have conducted several ablations for the reading unit to better understand its behavior and contribution to the performance of the model. The standard MAC reading unit uses the control state, which averages the question words based on attention distributions computed for each reasoning step. In this ablation experiment, we have tested using the full question representation q instead, across all reasoning steps, to gain a better understanding of the contribution of word-attention to the model performance. Indeed, we can see that using q rather than the control state ci results in significant drops in performance – 19.4% for the full CLEVR dataset and 19.5% for 10% of the data.
We have conducted an additional ablation experiment to better understand the contribution of using the KB features directly in the first-stage information retrieval process described in section 3.2.2, compared to using only the dot-products of the KB elements with the previous memory state mi−1. For the full CLEVR dataset, we can see that this component has only a small impact on the final performance, ultimately resulting in a 0.06% performance difference. However, for 10% of the data, the difference in performance when ablating this component is much larger: 11.2%.
Writing Unit Ablations. In our main MAC model variant, the memory unit merges the new information mnew with the previous memory state mi−1 by combining them through a linear transformation. In this experiment, we have explored other variations, such as assigning mnew to mi directly – ignoring previous memories – or applying a linear transformation to mnew only. The results show that such variants are in fact only slightly worse than our main variant – by 0.4%. We also conducted an experiment in which we merge the new information with the previous memory simply by having a gate that computes a weighted average of them; see the sketch below. The results show that this variant performs equivalently to our standard linear-transformation variant.
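The following toy sketch illustrates these writing-unit variants. It is a minimal illustration rather than the paper's exact implementation; in particular, the dimensions and the input to the gate network are assumptions made for brevity.

```python
# Toy PyTorch sketch of the writing-unit variants compared above:
# (a) the main linear-merge variant, (b) direct assignment of m_new,
# and (c) a scalar gate taking a weighted average of old and new memory.
import torch
import torch.nn as nn

d = 16
merge = nn.Linear(2 * d, d)   # (a) linear transformation over [m_new; m_prev]
gate = nn.Linear(d, 1)        # (c) gate network (illustrative input: m_new)

m_prev, m_new = torch.randn(1, d), torch.randn(1, d)

m_linear = merge(torch.cat([m_new, m_prev], dim=-1))  # main variant
m_assign = m_new                                      # ignore previous memory
g = torch.sigmoid(gate(m_new))                        # gate value in [0, 1]
m_gated = g * m_prev + (1 - g) * m_new                # weighted average
```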
Writing Unit Additions. We have explored the impact of the writing unit variants described in section 3.2.3 – adding self-attention, gating mechanisms, or both, compared to our standard main model that uses a linear transformation to merge the newly retrieved information mnew with the previous memory mi. For the complete CLEVR dataset we can see that indeed both these variants are very helpful in increasing the model performance. Compared to our standard MAC model that achieves 98.94% on the validation set, self-attention yields accuracy of 99.23%, gating yields 99.36% and adding both achieves 99.48%.
Output Unit. In our standard model, the final predictions made in the output unit are based on the final memory state mp as well as the question representation q (the concatenation of the final hidden states of the backward and forward passes of the LSTM). We have explored the contribution of basing the model prediction on the latter, by testing the model performance when the prediction is based on memory alone, for the complete and 10% datasets. We can see that in both settings basing the model’s predictions on the question representation allows faster training and higher accuracies. Notable is the gap in performance for the 10% CLEVR setting – a 19.8% increase from using the question representation to make predictions. These results are intuitively reasonable, since the model is structured such that the memory holds only information that was retrieved from the image. Thus, questions that ask, for instance, about different aspects (such as color or shape) of the same object in the image may result in the same memory content, which thus does not directly contain enough information to answer such questions.
Position. In our standard model, similarly to the practice of competing models (Santoro et al., 2017; Perez et al., 2017; Hu et al., 2017), we have concatenated positional information to each region of the image, in order to increase the model's capability to perform spatial reasoning. We have explored both simple linear maps over a constant [−1, 1] range as well as the more complex positional encoding suggested by Vaswani et al. (2017). However, the results for both the standard dataset and the 10% version show a negligible improvement at best when adding positional encoding information, demonstrating the capability of MAC to perform spatial reasoning without data augmentation.
Gate Bias Initialization. For our model variant with a gating mechanism (described in section 3.2.3), we have tested the effect of setting different values for the gate bias: −1, 0 and 1. For −1 the model is initialized to be biased toward keeping the previous memory value, whereas for 1 it is biased toward using the new memory instead. We can see that for the complete dataset setting the bias to 1 is optimal – apparently since the model has enough data to learn to apply each cell effectively. In contrast, for the small 10% CLEVR data, setting the bias to 0 shows better performance, biasing the model toward using fewer cells overall, which ultimately results in a simpler model that can fit less data more effectively.
4.4 INTERPRETABILITY
We have looked into the attention maps over the image and question that the model produces during its computation and provide a few examples in figure 4.4. The first example shows us how the model parses the question in steps, first focusing on the main entity that the question is about, then on the relation of this entity to the “brown matte thing”, which is then located in the image. Finally, the model correctly focuses on the small brown cube and predicts the right answer – brown.
The second example shows a model with 4 cells instead of 6, which similarly parses the question in iterations and focuses on the relevant objects at each step, though we can see that the reasoning process looks somewhat different when the MAC network has fewer cells.
The last example shows how the model handles counting and OR operations. It starts by identifying the task (computing a number), and then attends to the red objects as well as the cylinder, one at a time, ultimately allowing it to respond correctly with the answer 2.
5 CONCLUSION
We have given a first demonstration of how a sequence of Memory, Attention and Control (MAC) cells combined into a Compositional Attention Network provides a very effective tool for neural reasoning. In future work, we wish to explore this promising architecture for other tasks and domains, including real-world VQA, machine comprehension and textual question answering.
A DETAILS OF INPUT UNIT
The input unit processes the raw inputs given to the system into distributed vector representations. It receives a text question (or in general, a query), and an image (or in general, a Knowledge Base (KB)) and processes each of them with a matching sub-unit. Here we provide details of the Query Unit and the Image Unit used in this work.
A.0.1 THE QUERY UNIT
We encode a query of S words into a continuous representation using a bidirectional LSTM (Hochreiter & Schmidhuber, 1997; Graves et al., 2013). Each word is associated with a word embedding ws, where s = 1, ..., S. In our case, we use GloVe word embeddings (Pennington et al., 2014). These embeddings are then processed by a bidirectional LSTM of dimension d that outputs:
• a matching sequence of d-dimensional output states, which we refer to as contextual words, $[cw_1, ..., cw_S]$
• a d-dimensional hidden state $q = [\overleftarrow{cw_1}, \overrightarrow{cw_S}]$, the concatenation of the hidden states from the backward and forward passes. We refer to q as the question representation.
Intuitively, each contextual word $cw_s$ represents the meaning of the sth word in the context of the question, while the hidden state q represents the overall (compositional) meaning of the question.
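For concreteness, the following is a minimal PyTorch sketch of this unit. The vocabulary size and the use of a learned embedding table in place of the GloVe lookup are assumptions for illustration only.

```python
# Minimal sketch of the query unit: a bidirectional LSTM over word
# embeddings yields contextual words cw_1..cw_S and the question
# representation q (concatenated final backward/forward states).
import torch
import torch.nn as nn

d = 64                                    # hidden dimension (illustrative)
embed = nn.Embedding(1000, 300)           # stand-in for a GloVe lookup
lstm = nn.LSTM(300, d // 2, bidirectional=True, batch_first=True)

tokens = torch.randint(0, 1000, (1, 8))   # one question of S = 8 words
cw, (h_n, _) = lstm(embed(tokens))        # cw: (1, 8, d) contextual words
q = torch.cat([h_n[0], h_n[1]], dim=-1)   # q: (1, d) question representation
print(cw.shape, q.shape)
```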
A.0.2 THE IMAGE UNIT
Given an image, and following prior work on CLEVR (Santoro et al., 2017; Perez et al., 2017), we extract conv4 features from a ResNet101 (He et al., 2016) pretrained on ImageNet (Krizhevsky et al., 2012), which we treat as a fixed initial representation of the image, x, of dimension H,W,C, where H = W = 14 are the height and width of the transformed image and C = 1024 is the number of channels. Each feature $x_{h,w}$ represents one region in the original image.
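A minimal sketch of this feature extraction using torchvision is shown below; the input tensor and the 224x224 image size are illustrative stand-ins for a preprocessed CLEVR image.

```python
# Sketch of conv4 feature extraction: take ResNet101 up to layer3 (the
# conv4 stage), whose output for a 224x224 input is a 14x14x1024 map.
import torch
import torchvision

resnet = torchvision.models.resnet101(pretrained=True).eval()
# Keep conv1/bn1/relu/maxpool and layer1-layer3; drop layer4/avgpool/fc.
backbone = torch.nn.Sequential(*list(resnet.children())[:-3])
with torch.no_grad():
    feats = backbone(torch.randn(1, 3, 224, 224))
print(feats.shape)  # torch.Size([1, 1024, 14, 14])
```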
Similar to prior work (Hu et al., 2017; Santoro et al., 2017; Perez et al., 2017), we would like to allow our model to reason explicitly about spatial locations, as required by many of the questions in CLEVR, and therefore we concatenate to this representation a spatial map that represents each of the positions in the image. However, in contrast to prior work that uses a linear meshgrid feature map with 2 features h and w ranging from −1 to 1, and to allow better representation of the positions, we use the positional encoding scheme proposed by Vaswani et al. (2017):
$$p_{(h,2i)} = \sin\left(h / 10000^{2i/p_d}\right) \qquad p_{(h,2i+1)} = \cos\left(h / 10000^{2i/p_d}\right)$$
and similarly for w, where $p_d$ is a hyperparameter. Overall, the positional encoding of a feature at position (h,w) is $[p_h, p_w]$, the concatenation of the positional encodings for h and w.
This positional encoding scheme allows a better correspondence between the distance of two positions $(x, y)$ and $(x', y')$ in the image and the vector similarity of their positional encodings, even when $p_d$ is larger than two.
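The following NumPy sketch shows one way to compute this 2-D positional map; the value of $p_d$ and the use of raw pixel indices as positions are illustrative assumptions.

```python
# Toy sketch of the 2-D sinusoidal positional encoding described above:
# codes are computed for h and w separately and concatenated, giving a
# spatial map of shape (H, W, 2 * p_d).
import numpy as np

H = W = 14
p_d = 8                                   # encoding dim per axis (illustrative)
i = np.arange(p_d // 2)
freq = 1.0 / (10000 ** (2 * i / p_d))     # frequencies, shape (p_d / 2,)

def encode(pos):                          # pos: (N,) -> (N, p_d)
    ang = pos[:, None] * freq[None, :]
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)

ph = encode(np.arange(H))                 # per-row codes, (H, p_d)
pw = encode(np.arange(W))                 # per-column codes, (W, p_d)
pe = np.concatenate([np.repeat(ph[:, None], W, axis=1),
                     np.repeat(pw[None, :], H, axis=0)], axis=-1)
print(pe.shape)                           # (14, 14, 16)
```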
We then concatenate the obtained spatial map with x, receiving a spatially-aware image representation $x^p$. Then, we pass this representation through two CNN layers with d output channels and obtain a final representation of the image, which we refer to as our Visual Knowledge Base (KB$_V$), which is used in further components of the model.
B IMPLEMENTATION AND TRAINING DETAILS
For the question processing, we use GloVe (Pennington et al., 2014) word vectors of dimension 300. For the image processing, we extract conv4 features from a ResNet101 (He et al., 2016) pretrained on ImageNet (Krizhevsky et al., 2012), with dimensions H,W,C where H = W = 14 and C = 1024, followed by 2 CNN layers with kernel size 2. We use a MAC network with p = 12 cells, and train it using Adam (Kingma & Ba, 2014) with learning rate 10−4. We train our model for 10-20 epochs with batch size 64, and use early stopping based on validation accuracies. During training, moving averages of all weights of the model are maintained with an exponential decay rate of 0.999. At test time, the moving averages are used instead of the raw weights. We use dropout 0.85, and ELU (Clevert et al., 2015), which in our experience shortens the training process compared to ReLU. The training process takes roughly 10-20 hours on a single Titan X GPU.
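As a concrete illustration of this weight-averaging scheme, here is a minimal PyTorch sketch; the model and the placement of the update call are illustrative.

```python
# Sketch of the exponential moving average of weights described above:
# shadow copies of all parameters are updated with decay 0.999 after each
# optimizer step and swapped in at test time.
import torch

def update_ema(model, shadow, decay=0.999):
    with torch.no_grad():
        for name, p in model.named_parameters():
            shadow[name].mul_(decay).add_(p, alpha=1 - decay)

model = torch.nn.Linear(4, 2)
shadow = {n: p.detach().clone() for n, p in model.named_parameters()}
# ... inside the training loop, after each optimizer step:
update_ema(model, shadow)
```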
C FURTHER DISCUSSION OF RELATED WORK
In this section we provide detailed discussion of related work. Several models have been applied to the CLEVR task. These can be partitioned into two groups, module networks that use the strong supervision provided as a tree-structured functional program associated with each instance, and end-to-end, fully differentiable networks that combine a fairly standard stack of CNNs with components that aid them in performing reasoning tasks. We also discuss the relation of MAC to other approaches, such as memory networks and neural computers.
C.1 MODULE NETWORKS
The modular approach (Andreas et al., 2016a;b; Hu et al., 2017; Johnson et al., 2017) first translates the given question into a tree-structured action plan, aiming to imitate the ground-truth programs provided as a form of strong supervision. Then, it constructs a tailor-made network that executes the plan on the image in multiple steps. This network is composed of discrete units selected out of a collection of predefined modules, each responsible for an elementary reasoning operation, such as identifying an object's color, filtering objects by their shape, or comparing two amounts. Each module has its own set of learned parameters (Johnson et al., 2017), or even a hand-crafted design (Andreas et al., 2016a), to guide it towards its intended behavior.
Overall, this approach makes discrete choices at two levels: the identity of each module – the behavior it should learn among a fixed set of possible types of behaviors, and the network layout – the way in which these modules are wired together to compute the answer progressively. Hence, their differentiability is confined to the boundaries of a single module, disallowing end-to-end training.
Several key differences exist between our approaches. First, our model replaces the fixed collection of modules with one versatile and universal cell that shares both its architecture and parameters across all of its instantiations, and is applied across all the reasoning steps. Second, it replaces the dynamic recursive tree structures with a sequential topology, augmented by soft attention mechanisms, as done in Bahdanau et al. (2014). This confers on our network a virtual capacity to represent arbitrarily complex Directed Acyclic Graphs (DAGs) while still having an efficient and readily deployed physical sequential structure. Together, both of these relaxations allow us to effectively train our model end-to-end by backpropagation alone, whereas module networks demand a more involved training scheme that relies on the strongly-supervised programs in the first stage, and on various Reinforcement Learning (RL) techniques in the second. Furthermore, while our model can be trained without the strong supervisory programs, developing adaptive reasoning skills to address the task it is trained for, the modular approach's reliance on structured and formal question representations hinders its applicability to real-world tasks.
C.2 AUGMENTED CONVOLUTIONAL NEURAL NETWORKS
Alternative approaches for the CLEVR task that do not rely on the provided programs as a strong supervision signal are Santoro et al. (2017) and Perez et al. (2017). Both complement standard multi-layer Convolutional Neural Networks (CNNs) with components that aid them in handling compositional and relational questions.
Relational Networks. Santoro et al. (2017) append a Relation Network (RN) layer to the CNN. This layer inspects all pairs of pixels in the image, thereby enhancing the network's capacity to reason over binary relations between objects. While this approach is very simple and elegant conceptually, it suffers from quadratic computational complexity, in contrast to our and other leading approaches. But beyond that, closer inspection reveals that this direct pairwise comparison might be unnecessary. Based on the analogy suggested by Santoro et al. (2017), according to which pixels are equivalent to objects and their pairwise interactions to relations, an RN layer attempts to grasp the induced graph between objects all at once in one shallow and broad layer. Conversely, our attention-based model proceeds in steps. It compares the image to its current memory and control at each step, aggregates the attended regions into the new memory, and repeats the process. By the same analogy, it traverses a narrow and deep path, progressively following transitive relations. Consequently, our model exhibits a relational capacity while circumventing the computational inefficiency.
FiLM. FiLM (Perez et al., 2017) is a recently proposed method that interleaves standard CNN layers that process the given image with linear layers, reminiscent of layer normalization techniques (Ba et al., 2016; Ioffe & Szegedy, 2015). Each of these layers, called FiLM layers, is conditioned on the question: the question words are processed by a GRU, and its output is linearly transformed into matching biases and scaling coefficients for each of the CNN layers, tilting their activations to reflect the specifics of the given question and affect the computation done over the image.
Similarly to our model, this approach features distant modulation between the question and the image: rather than being fused together into the same vector space, the question can affect the image processing only through constrained means – in the case of FiLM, linear transformations. However, since the same transformation is applied to all the activations homogeneously, agnostic to both their spatial location and their feature values, this approach does not allow the question to differentiate between regions in the image based on the objects or concepts they represent – on the content of the image. This stands in stark contrast to our attention-based model, which readily allows, and actually encourages, the question to inform the model about relevant regions to focus on. We speculate that this still distant, yet more direct, interaction between the question and the data (the image, in the case of VQA) facilitates learning and increases generalizability. It may be more suitable to VQA tasks, and CLEVR in particular, where the questions demand the responder to focus on specific objects and reason about their properties or relations, rather than respond based only on a holistic view of the image, which may lead to sub-optimal results (Yang et al., 2016), as is the case for FiLM. Indeed, as demonstrated in section 4, there is significant evidence showing our model's better generalization capacity, allowing it to achieve high accuracies much faster, and from less data, than FiLM and other competing methods.
C.3 MEMORY AND ATTENTION
Our architecture draws inspiration from recent research on memory and attention (Kumar et al., 2016; Xiong et al., 2016; Graves et al., 2014; 2016). Kumar et al. (2016); Xiong et al. (2016) propose the Dynamic Memory Network model that proceeds in an iterative process, applying soft attention to retrieve relevant information from a visual or textual KB, which is in turn accumulated into memory passed from one iteration to the next. However, in contrast to our model, it views the question as an atomic unit, whereas our model decomposes it into a multi-step action plan informing each cell in our sequential network about its current objective. Another key difference is the distant interaction between the question and the KB that characterizes our model. Conversely, DMN fuses their corresponding representations together into the same vector space.
Graves et al. (2016; 2014) complement a neural network with a memory array it can interact with through the means of soft attention. Analogously to our model, the model is partitioned into a core neural network, called the controller, as well as reading and writing heads that interact with an external memory array. However, a main point distinguishing our model from this approach is the use of dynamic memory, as in Kumar et al. (2016), instead of a fixed-array memory. Each MAC cell is associated with a memory state, our reading unit inspects only the latest memory passed from the previous state, and our writing unit creates a new memory state rather than writing to multiple slots in a fixed shared external memory. Notably, our approach is much more reminiscent of the widely successful RNN structure than of Graves et al. (2016; 2014).
Finally, our approach has potential ties to the VQA models of Hu et al. (2017); Lu et al. (2016), which also attend both to the question words and the image while progressively addressing the given question. However, both of these models have distinct specialized designs for each of their attention layers or modules, and have a discrete or fixed layout in which these are composed together. In contrast, our approach relaxes both of these limitations, having one universal cell design and one universal self-attending sequential network layout.
C.4 ATTENTION VS. CONVOLUTION
Compared to other leading methods, our model stands out by being heavily based on soft attention, whereas most competing approaches are CNN-based and, surprisingly, lack any attention mechanism. Since attention is commonly used in models designed for standard VQA (Antol et al., 2015; Gupta, 2017; Lu et al., 2016; Yang et al., 2016), it is reasonable to assume that it would be beneficial to incorporate such methods into visual reasoning systems for the CLEVR task as well. In fact, attention mechanisms should be especially useful for multi-step reasoning questions such as those present in CLEVR. Such questions refer to several relations between different objects in the image and feature a compositional structure that may be approached one step at a time. Thus, it should be beneficial for a cogent responder to have the capacity to selectively focus on one or a few objects at each step, traversing the relevant relational links one after the other, both at the image level and at the question level.
Moreover, attention mechanisms enhance our model's ability to perform reasoning skills that pertain to the aggregation of information across different regions, such as counting, finding a maximum value, or performing other reduction operations over information that is spread across the image. Indeed, as discussed in section 4, all existing models for visual reasoning, most of which lack any attention mechanism, struggle with the counting and numerical comparison questions present in CLEVR. Conversely, our model proves much more capable of performing these reasoning skills, outperforming the other approaches by a wide margin. Noticeably, incorporating soft attention into our model makes it much more adept at such aggregation reasoning skills, successfully addressing this type of question.
Finally, as pointed out by Lu et al. (2016); Yang et al. (2016), soft attention confers on the model robustness to noise introduced by irrelevant information present in the image, and a higher capacity for handling a larger and more diverse vocabulary, the latter being demonstrated in section 4. It allows the model to separate the wheat from the chaff, selectively attending to the relevant information only, and, arguably, being more resilient to both visual and linguistic variations. | 1. What are the strengths and weaknesses of the proposed model in terms of its design and experimental results?
2. What are the concerns regarding the usage of external components, specifically pre-trained word vectors?
3. How can the authors justify the design choices made in the proposed recurrent unit, particularly the separation of different units, attention-based input processing, and memory updates?
4. What additional experiments or analyses can be conducted to provide a deeper understanding of the model's characteristics and performance?
5. Are there any contradictory statements or unclear descriptions in the paper that need further clarification? | Review | Review
This paper proposes a recurrent neural network for visual question answering. The recurrent neural network is equipped with a carefully designed recurrent unit called MAC (Memory, Attention and Control) cell, which encourages sequential reasoning by restraining interaction between inputs and its hidden states. The proposed model shows the state-of-the-art performance on CLEVR and CLEVR-Humans dataset, which are standard benchmarks for visual reasoning problem. Additional experiments with limited training data shows the data efficiency of the model, which supports its strong generalization ability.
The proposed model in this paper is designed with reasonable motivations and shows strong experimental results in terms of overall accuracy and data efficiency. However, an issue in the writing, the usage of an external component, and a lack of experimental justification of the design choices hinder a clear understanding of the proposed model.
An issue in the writing
Overall, the paper is well written and easy to understand, but Section 3.2.3 (The Write Unit) has contradictory statements about their implementation. Specifically, they proposed three different ways to update the memory (simple update, self attention and memory gate), but it is not clear which method is used in the end.
Usage of external component
The proposed model uses pretrained word vectors called GloVe, which boost the performance on visual question answering. This experimental setting makes fair comparison with the previous works difficult, as the pretrained word vectors are not used by the previous works. To isolate the strength of the proposed reasoning module, I ask the authors to provide experiments without pretrained word vectors.
Lack of experimental justification of the design choices
The proposed recurrent unit contains various design choices, such as the separation of three different units (control unit, read unit and memory unit), attention-based input processing, and different memory updates, each stemming from different motivations. However, these design choices are not justified well because there is neither an ablation study nor a visualization of internal states. An analysis or empirical study of these design choices is necessary to understand the characteristics of the model. Here, I suggest providing a few visualizations of attention weights and an ablation study that could support the indispensability of the design choices.
ICLR | Title
Heteroscedastic Temporal Variational Autoencoder For Irregularly Sampled Time Series
Abstract
Irregularly sampled time series commonly occur in several domains where they present a significant challenge to standard deep learning models. In this paper, we propose a new deep learning framework for probabilistic interpolation of irregularly sampled time series that we call the Heteroscedastic Temporal Variational Autoencoder (HeTVAE). HeTVAE includes a novel input layer to encode information about input observation sparsity, a temporal VAE architecture to propagate uncertainty due to input sparsity, and a heteroscedastic output layer to enable variable uncertainty in output interpolations. Our results show that the proposed architecture is better able to reflect variable uncertainty through time due to sparse and irregular sampling than a range of baseline and traditional models, as well as recent deep latent variable models that use homoscedastic output layers.1
1 INTRODUCTION
In this paper, we propose a novel deep learning framework for probabilistic interpolation of irregularly sampled time series. Irregularly sampled time series data occur in multiple scientific and industrial domains including finance (Manimaran et al., 2006), climate science (Schulz & Stattegger, 1997) and healthcare (Marlin et al., 2012; Yadav et al., 2018). In some domains including electronic health records and mobile health studies (Cheng et al., 2017), there can be significant variation in inter-observation intervals through time. This is due to the complexity of the underlying observation processes that can include “normal” variation in observation times combined with extended, block-structured periods of missingness. For example, in the case of ICU EHR data, this can occur due to patients being moved between different locations for procedures or tests, resulting in missing physiological sensor data for extended periods of time. In mobile health studies, the same problem can occur due to mobile sensor batteries running out, or participants forgetting to wear or carry devices.
In such situations, it is of critical importance for interpolation models to be able to correctly reflect the variable input uncertainty that results from variable observation sparsity so as not to provide overly confident inferences. However, modeling time series data subject to irregular sampling poses a significant challenge to machine learning models that assume fully-observed, fixed-size feature representations (Marlin et al., 2012; Yadav et al., 2018; Shukla & Marlin, 2021b). The main challenges in dealing with such data include the presence of variable time gaps between the observation time points, partially observed feature vectors caused by the lack of temporal alignment across different dimensions, as well as different data cases, and variable numbers of observations across dimensions and data cases. Significant recent work has focused on developing specialized models and architectures to address these challenges in modeling irregularly sampled multivariate time series (Li & Marlin, 2015; 2016; Lipton et al., 2016; Futoma et al., 2017; Che et al., 2018a; Shukla & Marlin, 2019; Rubanova et al., 2019; Horn et al., 2020; Li & Marlin, 2020; Shukla & Marlin, 2021a; De Brouwer et al., 2019; Tan et al., 2020; Kidger et al., 2020).
Recently, Shukla & Marlin (2021a) introduced the Multi-Time Attention Network (mTAN) model, a variational autoencoder (VAE) architecture for continuous-time interpolation of irregularly sampled
1Implementation available at https://github.com/reml-lab/hetvae
time series. This model was shown to provide state-of-the-art classification and deterministic interpolation performance. However, like many VAEs, the mTAN architecture produces a homoscedastic output distribution conditioned on the latent state. This means that the model can only reflect uncertainty due to variable input sparsity through variations in the VAE latent state. As we will show, this mechanism is insufficient to capture differences in uncertainty over time. On the other hand, Gaussian Process Regression-based (GPR) methods (Rasmussen & Williams, 2006) have the ability to reflect variable uncertainty through the posterior inference process. The main drawbacks of GPR-based methods are their significantly higher run times during both training and inference, and the added restriction to define positive definite covariance functions for multivariate time series.
In this work, we propose a novel encoder-decoder architecture for multivariate probabilistic time series interpolation that we refer to as the Heteroscedastic Temporal Variational Autoencoder or HeTVAE. HeTVAE aims to address the challenges described above by encoding information about input sparsity using an uncertainty-aware multi-time attention network (UnTAN), flexibly capturing relationships between dimensions and time points using both probabilistic and deterministic latent pathways, and directly representing variable output uncertainty via a heteroscedastic output layer.
The proposed UnTAN layer generalizes the previously introduced mTAN layer with an additional intensity network that can more directly encode information about input uncertainty due to variable sparsity. The proposed UnTAN layer uses an attention mechanism to produce a distributed latent representation of irregularly sampled time series at a set of reference time points. The UnTAN module thus provides an interface between input multivariate, sparse and irregularly sampled time series data and more traditional deep learning components that expect fixed-dimensional or regularly spaced inputs. We combat the presence of additional local optima that arises from the use of a heteroscedastic output layer by leveraging an augmented training objective where we combine the ELBO loss with an uncertainty agnostic loss component. The uncertainty agnostic component helps to prevent learning from converging to local optima where the structure in data is explained as noise.
We evaluate the proposed architecture on both synthetic and real data sets. Our approach outperforms a variety of baseline models and recent approaches in terms of log likelihood, which is our primary metric of interest in the case of probabilistic interpolation. Finally, we perform ablation testing of different components of the architecture to assess their impact on interpolation performance.
2 RELATED WORK
Keeping in mind the focus of this work, we concentrate our discussion of related work on deterministic and probabilistic approaches applicable to the interpolation and imputation tasks.
Deterministic Interpolation Methods: Deterministic interpolation methods can be divided into filtering and smoothing-based approaches. Filtering-based approaches infer the values at a given time by conditioning only on past observations. For example, Han-Gyu Kim et al. (2017) use a unidirectional RNN for missing data imputation that conditions only on data from the relative past of the missing observations. On the other hand, smoothing-based methods condition on all possible observations (past and future) to infer any unobserved value. For example, Yoon et al. (2018) and Cao et al. (2018) present missing data imputation approaches based on multi-directional and bi-directional RNNs. These models typically use the gated recurrent unit with decay (GRU-D) model (Che et al., 2018a) as a base architecture for dealing with irregular sampling. Interpolation-prediction networks take a different approach to interfacing with irregularly sampled data, based on the use of temporal kernel smoother-based layers (Shukla & Marlin, 2019). Shan & Oliva (2021) propose a hierarchical imputation strategy based on set-based architectures for imputation in irregularly sampled time series. Of course, the major disadvantage of deterministic interpolation approaches is that they do not express uncertainty over output interpolations and thus cannot be applied to the problem of probabilistic interpolation without modifications.
Probabilistic Interpolation Methods: The two primary building blocks for probabilistic interpolation and imputation of multivariate irregularly sampled time series are Gaussian process regression (GPR) (Rasmussen & Williams, 2006) and variational autoencoders (VAEs) (Rezende et al., 2014; Kingma & Welling, 2014). GPR models have the advantage of providing an analytically tractable full joint posterior distribution over interpolation outputs when conditioned on irregularly sampled input data. Commonly used covariance functions have the ability to translate variable input observation density into variable interpolation uncertainty. GPR-based models have been used as the core of several approaches for supervised learning and forecasting with irregularly sampled data (Ghassemi et al., 2015; Li & Marlin, 2015; 2016; Futoma et al., 2017). However, GPR-based models can become somewhat cumbersome in the multivariate setting due to the positive definiteness constraint on the covariance function (Rasmussen & Williams, 2006). The use of separable covariance functions is one common approach to the construction of GPR models over multiple dimensions (Bonilla et al., 2008), but this construction requires all dimensions to share the same temporal kernel parameters. A further drawback of GP-based methods is their significantly higher run times relative to deep learning-based models when applied to larger-scale data (Shukla & Marlin, 2019).
Variational autoencoders (VAEs) combine probabilistic latent states with deterministic encoder and decoder networks to define a flexible and computationally efficient class of probabilistic models that generalize classical factor analysis (Kingma & Welling, 2014; Rezende et al., 2014). Recent research has seen the proposal of several new VAE-based models for irregularly sampled time series. Chen et al. (2018) proposed a latent ordinary differential equation (ODE) model for continuous-time data using an RNN encoder and a neural ODE decoder. Building on the prior work of Chen et al. (2018), Rubanova et al. (2019) proposed a latent ODE model that replaces the RNN with an ODE-RNN model as the encoder. Li et al. (2020) replace the deterministic ODEs with stochastic differential equations (SDEs). Norcliffe et al. (2021) extend the prior work on neural ODEs by combining them with neural processes (Garnelo et al., 2018). Shukla & Marlin (2021a) proposed the Multi-Time Attention Network (mTAN) model, a VAE-based architecture that uses a multi-head temporal cross attention encoder and decoder module (the mTAND module) to provide the interface to multivariate irregularly sampled time series data. Fortuin et al. (2020) proposed a VAE-based approach for the task of smoothing in multivariate time series with a Gaussian process prior in the latent space to capture temporal dynamics. Garnelo et al. (2018); Kim et al. (2019) used heteroscedastic output layers to represent uncertainty in the case of fixed-dimensional inputs, but these approaches are not applicable to irregularly sampled time series.
Similar to the mTAN model, the Heteroscedastic Temporal Variational Autoencoder (HeTVAE) model proposed in this work is an attention-based VAE architecture. The primary differences are that mTAN uses a homoscedastic output distribution that assumes constant uncertainty and that the mTAN model’s cross attention operation normalizes away information about input sparsity. These limitations are problematic in cases where there is variable input density through time resulting in the need for encoding, propagating, and reflecting that uncertainty in the output distribution. As we describe in the next section, HeTVAE addresses these issues by combining a novel sparsity-sensitive encoder module with a heteroscedastic output distribution and parallel probabilistic and deterministic pathways for propagating information through the model. Another important difference relative to these previous methods is that HeTVAE uses an augmented learning objective to address the underfitting of predictive variance caused by the use of the heteroscedastic layer.
3 PROBABILISTIC INTERPOLATION WITH THE HETVAE
In this section, we present the proposed architecture for probabilistic interpolation of irregularly sampled time series, the Heteroscedastic Temporal Variational Autoencoder (HeTVAE). HeTVAE leverages a sparsity-aware layer as the encoder and decoder in order to represent input uncertainty and propagate it to output interpolations. We begin by introducing notation. We then describe the architecture of the encoder/decoder network followed by the complete HeTVAE architecture.
3.1 NOTATION
We let $\mathcal{D} = \{s_n \mid n = 1, ..., N\}$ represent a data set containing $N$ data cases. An individual data case consists of a $D$-dimensional, sparse and irregularly sampled multivariate time series $s_n$. Different dimensions $d$ of the multivariate time series can have observations at different times, as well as different total numbers of observations $L_{dn}$. We follow the series-based representation of irregularly sampled time series (Shukla & Marlin, 2021b) and represent time series $d$ for data case $n$ as a tuple $s_{dn} = (\mathbf{t}_{dn}, \mathbf{x}_{dn})$ where $\mathbf{t}_{dn} = [t_{1dn}, ..., t_{L_{dn}dn}]$ is the list of time points at which observations are defined and $\mathbf{x}_{dn} = [x_{1dn}, ..., x_{L_{dn}dn}]$ is the corresponding list of observed values. We drop the data case index $n$ for brevity when the context is clear.
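For illustration, the following small Python snippet shows this series-based representation for a toy data case with two dimensions (all values are made up):

```python
# Series-based representation: each dimension d of a data case stores its
# own (times, values) pair, with possibly different lengths L_d and
# non-aligned time points.
s = [
    ([0.1, 0.4, 0.9], [5.2, 4.8, 5.0]),   # s_1 = (t_1, x_1), L_1 = 3
    ([0.3, 0.7],      [0.2, 0.1]),        # s_2 = (t_2, x_2), L_2 = 2
]
for d, (t_d, x_d) in enumerate(s, start=1):
    print(f"dim {d}: L_{d} = {len(t_d)} observations at t = {t_d}")
```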
3.2 REPRESENTING INPUT SPARSITY
As noted in the previous section, the mTAN encoder module does not represent information about input sparsity due to the normalization of the attention weights. To address this issue, we propose an augmented module that we refer to as an Uncertainty Aware Multi-Time Attention Network (UnTAN). The UnTAN module is shown in Figure 1a. This module includes two encoding pathways that leverage a shared time embedding function and a shared attention function. The first encoding pathway (the intensity pathway, INT) focuses on representing information about the sparsity of observations while the second encoding pathway (the value pathway, VAL) focuses on representing information about values of observations. The outputs of these two pathways are concatenated and mixed via a linear layer to define the final output of the module. The mathematical description of the module is given in Equations 1 to 3 and is explained in detail below.
$$\text{int}_h(r_k, \mathbf{t}_d) = \frac{\text{pool}(\{\exp(\alpha_h(r_k, t_{id})) \mid t_{id} \in \mathbf{t}_d\})}{\text{pool}(\{\exp(\alpha_h(r_k, t_{i'u})) \mid t_{i'u} \in \mathbf{t}_u\})} \quad (1)$$

$$\text{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d) = \frac{\text{pool}(\{\exp(\alpha_h(r_k, t_{id})) \cdot x_{id} \mid t_{id} \in \mathbf{t}_d,\, x_{id} \in \mathbf{x}_d\})}{\text{pool}(\{\exp(\alpha_h(r_k, t_{i'd})) \mid t_{i'd} \in \mathbf{t}_d\})} \quad (2)$$

$$\alpha_h(t, t') = \frac{\phi_h(t)\, \mathbf{w} \mathbf{v}^T \phi_h(t')^T}{\sqrt{d_e}} \quad (3)$$
Time Embeddings and Attention Weights: Similar to the mTAN module, the UnTAN module uses time embedding functions $\phi_h(t)$ to project univariate time values into a higher dimensional space. Each time embedding function is a one-layer fully connected network with a sine function non-linearity $\phi_h(t) = \sin(\omega \cdot t + \beta)$. We learn $H$ time embeddings, each of dimension $d_e$. $\mathbf{w}$ and $\mathbf{v}$ are the parameters of the scaled dot product attention function $\alpha_h(t, t')$ shown in Equation 3. The scaling factor $1/\sqrt{d_e}$ is used to normalize the dot product to counteract the growth in the dot product magnitude with increase in the time embedding dimension $d_e$.
Intensity Encoding: The intensity encoding pathway is defined by the function $\text{int}_h(r_k, \mathbf{t}_d)$ shown in Equation 1. The inputs to the intensity function are a query time point $r_k$ and a vector $\mathbf{t}_d$ containing all the time points at which observations are available for dimension $d$. The numerator of the intensity function exponentiates the attention weights between $r_k$ and each time point in $\mathbf{t}_d$ to ensure positivity, then pools over the observed time points. The denominator of this computation is identical to the numerator, but the set of time points $\mathbf{t}_u$ that is pooled over is the union over all observed time points for dimension $d$ from all data cases.
Intuitively, if the largest attention weight between $r_k$ and any element of $\mathbf{t}_d$ is small relative to the attention weights between $r_k$ and the time points in $\mathbf{t}_u$, then the output of the intensity function will be low. Importantly, due to the use of the non-linear time embedding function, pairs of time points with high attention weights do not necessarily have to be close together in time, meaning the notion of intensity that the network expresses is significantly generalized.
We also note that different sets could be used for $\mathbf{t}_u$, including a regularly spaced set of reference time points. One advantage of using the union of all observed time points is that it fixes the maximum value of the intensity function at 1. The two pooling functions applicable in the computation of the intensity function are max and sum. If the time series is sparse, max works well because using sum in the sparse case can lead to very low output values. In a more densely observed time series, either sum or max can be used.
Value Encoding: The value encoding function $\text{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d)$ is presented in Equation 2 in a form that highlights the symmetry with the intensity encoding function. The primary differences are that $\text{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d)$ takes as input both the observed time points $\mathbf{t}_d$ and their corresponding values $\mathbf{x}_d$, and the denominator of the function pools over $\mathbf{t}_d$ itself. While different pooling options could be used for this function, in practice we use sum-based pooling. These choices lead to a function $\text{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d)$ that interpolates the observed values at the query time points using softmax weights derived from the attention function. The values of observed points with higher attention weights contribute more to the output value. This structure is equivalent to that used in the mTAN module when sum-based pooling is used. We can also clearly see that this function on its own cannot represent information about input sparsity due to the normalization over $\mathbf{t}_d$. Indeed, the function is completely invariant to an additive decrease in all of the attention weights, $\alpha'_h(r_k, t_{id}) = \alpha_h(r_k, t_{id}) - \delta$.
Module Output: The last stage of the UnTAN module concatenates the value and intensity pathway representations and then linearly weights them together to form the final $J$-dimensional representation that is output by the module. The parameters of this linear stage of the model are $U^{\text{int}}_{hdj}$ and $U^{\text{val}}_{hdj}$. The value of the $j$th dimension of the output at a query time point $r_k$ is given by Equation 4.

$$\text{UnTAN}(r_k, \mathbf{t}, \mathbf{x})[j] = \sum_{h=1}^{H} \sum_{d=1}^{D} \begin{bmatrix} \text{int}_h(r_k, \mathbf{t}_d) \\ \text{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d) \end{bmatrix}^T \begin{bmatrix} U^{\text{int}}_{hdj} \\ U^{\text{val}}_{hdj} \end{bmatrix} \quad (4)$$
Finally, we note that the UnTAN module defines a continuous function of $t$ given an input time series and hence cannot be directly incorporated into standard neural network architectures. We adapt the UnTAN module to produce fully observed, fixed-dimensional discrete sequences by materializing its output at a set of reference time points. Reference time points can be a fixed set of regularly spaced time points, or they may need to depend on the input time series. For a given set of reference time points $\mathbf{r} = [r_1, \cdots, r_K]$, the discretized UnTAN module $\text{UnTAND}(\mathbf{r}, \mathbf{t}, \mathbf{x})$ is defined as $\text{UnTAND}(\mathbf{r}, \mathbf{t}, \mathbf{x})[i] = \text{UnTAN}(r_i, \mathbf{t}, \mathbf{x})$. This module takes as input the time series $s = (\mathbf{t}, \mathbf{x})$ and the set of reference time points $\mathbf{r}$, and outputs a sequence of $K$ UnTAN embeddings, each of dimension $J$, corresponding to each reference point. As described in the next section, we use the UnTAND module to provide an interface between sparse and irregularly sampled data and fully connected MLP network structures.
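The following toy NumPy sketch illustrates the output mixing of Equation 4, materialized at K reference points as in the discretized UnTAND; all shapes and values are illustrative:

```python
# Toy sketch of the UnTAND output: per reference point r_k, the H*D
# intensity and value features are linearly mixed into a J-dim embedding.
import numpy as np

H, D, J, K = 4, 2, 8, 10                   # heads, dims, output size, ref points
rng = np.random.default_rng(0)
U_int = rng.normal(size=(H, D, J))         # U^int_{hdj}
U_val = rng.normal(size=(H, D, J))         # U^val_{hdj}

# int_feats[k, h, d] and val_feats[k, h, d] would come from Eqs. 1 and 2,
# evaluated at K regularly spaced reference time points r_1..r_K.
int_feats = rng.uniform(size=(K, H, D))
val_feats = rng.normal(size=(K, H, D))

# UnTAND(r, t, x): a K x J sequence of embeddings (Eq. 4 at each r_k).
out = np.einsum('khd,hdj->kj', int_feats, U_int) + \
      np.einsum('khd,hdj->kj', val_feats, U_val)
print(out.shape)  # (10, 8)
```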
3.3 THE HETVAE MODEL
In this section, we describe the overall architecture of the HeTVAE model, as shown in Figure 1b.
Model Architecture: The HeTVAE consists of parallel deterministic and probabilistic pathways for propagating input information to the output distribution, including information about input sparsity. We begin by mapping the input time series s = (t,x) through the UnTAND module along with a collection of K reference time points r. In the probabilistic path, we construct a distribution over latent variables at each reference time point using a diagonal Gaussian distribution q with mean and variance output by fully connected layers applied to the UnTAND output embeddings
$\mathbf{h}^{enc} = [\mathbf{h}^{enc}_1, \cdots, \mathbf{h}^{enc}_K]$ as shown in Equation 6. In the deterministic path, the UnTAND output embeddings $\mathbf{h}^{enc}$ are passed through a feed-forward network $g$ to produce a deterministic temporal representation (at each reference point) of the same dimension as the probabilistic latent state.
The decoder takes as input the representation from both pathways along with the reference time points and a set of query points $\mathbf{t}'$ (Eq 8). The UnTAND module produces a sequence of embeddings $\mathbf{h}^{dec} = [\mathbf{h}^{dec}_1, \cdots, \mathbf{h}^{dec}_{|t'|}]$ corresponding to each time point in $\mathbf{t}'$. The UnTAND embeddings are then independently decoded using a fully connected decoder $f^{dec}$ and the result is used to parameterize the output distribution. We use a diagonal covariance Gaussian distribution where both the mean $\boldsymbol{\mu} = [\boldsymbol{\mu}_1, \cdots, \boldsymbol{\mu}_{|t'|}]$, $\boldsymbol{\mu}_i \in \mathbb{R}^D$, and variance $\boldsymbol{\sigma}^2 = [\boldsymbol{\sigma}^2_1, \cdots, \boldsymbol{\sigma}^2_{|t'|}]$, $\boldsymbol{\sigma}^2_i \in \mathbb{R}^D$, are predicted for each time point by the final decoded representation, as shown in Eq 9. The generated time series is sampled from this distribution and is given by $\hat{s} = (\mathbf{t}', \mathbf{x}')$ with all data dimensions observed.
The complete model is described below. We define $q_\gamma(\mathbf{z} \mid \mathbf{r}, s)$ to be the distribution over the probabilistic latent variables $\mathbf{z} = [\mathbf{z}_1, \cdots, \mathbf{z}_K]$ induced by the input time series $s = (\mathbf{t}, \mathbf{x})$ at the reference time points $\mathbf{r}$. We define the prior $p(\mathbf{z}_i)$ over the latent states to be a standard multivariate normal distribution. We let $p^{het}_\theta(x'_{id} \mid \mathbf{z}^{cat}, t'_{id})$ define the final probability distribution over the value of time point $t'_{id}$ on dimension $d$ given the concatenated latent state $\mathbf{z}^{cat} = [\mathbf{z}^{cat}_1, \cdots, \mathbf{z}^{cat}_K]$. $\gamma$ and $\theta$ represent the parameters of all components of the encoder and decoder respectively.
$$\mathbf{h}^{enc} = \text{UnTAND}^{enc}(\mathbf{r}, \mathbf{t}, \mathbf{x}) \quad (5)$$
$$\mathbf{z}_k \sim q_\gamma(\mathbf{z}_k \mid \boldsymbol{\mu}_k, \boldsymbol{\sigma}^2_k), \qquad \boldsymbol{\mu}_k = f^{enc}_\mu(\mathbf{h}^{enc}_k), \qquad \boldsymbol{\sigma}^2_k = f^{enc}_\sigma(\mathbf{h}^{enc}_k) \quad (6)$$
$$\mathbf{z}^{cat}_k = \text{concat}(\mathbf{z}_k, g(\mathbf{h}^{enc}_k)) \quad (7)$$
$$\mathbf{h}^{dec} = \text{UnTAND}^{dec}(\mathbf{t}', \mathbf{r}, \mathbf{z}^{cat}) \quad (8)$$
$$p^{het}_\theta(x'_{id} \mid \mathbf{z}^{cat}, t'_{id}) = \mathcal{N}(x'_{id};\, \boldsymbol{\mu}_i[d],\, \boldsymbol{\sigma}^2_i[d]), \qquad \boldsymbol{\mu}_i = f^{dec}_\mu(\mathbf{h}^{dec}_i), \qquad \boldsymbol{\sigma}^2_i = f^{dec}_\sigma(\mathbf{h}^{dec}_i) \quad (9)$$
$$x'_{id} \sim p^{het}_\theta(x'_{id} \mid \mathbf{z}^{cat}, t'_{id}) \quad (10)$$
Compared to the constant output variance used to train the mTAN-based VAE model proposed in prior work (Shukla & Marlin, 2021a), our proposed model produces a heteroscedastic output distribution that we will show provides improved modeling for the probabilistic interpolation task. However, the increased complexity of the model's output representation results in an increased space of local optima. We address this issue using an augmented learning objective, as described in the next section. Finally, we note that we can easily obtain a simplified homoscedastic version of the model with constant output variance $\sigma^2_c$ using the alternate final output distribution $p^{c}_\theta(x'_{id} \mid \mathbf{z}, t'_{id}) = \mathcal{N}(x'_{id};\, \boldsymbol{\mu}_i[d],\, \sigma^2_c)$.
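As a concrete illustration of the heteroscedastic output layer in Equation 9, here is a minimal PyTorch sketch; the layer sizes and the use of a predicted log-variance are illustrative choices, not the paper's exact configuration:

```python
# Minimal sketch of the heteroscedastic output layer: separate heads
# predict a per-time-point, per-dimension mean and variance (Eq. 9).
import torch
import torch.nn as nn

class HetOutput(nn.Module):
    def __init__(self, hidden_dim, data_dim):
        super().__init__()
        self.f_mu = nn.Linear(hidden_dim, data_dim)      # f^dec_mu
        self.f_logvar = nn.Linear(hidden_dim, data_dim)  # f^dec_sigma

    def forward(self, h_dec):
        mu = self.f_mu(h_dec)
        # Predict log-variance for numerical stability, then exponentiate.
        var = self.f_logvar(h_dec).exp()
        return torch.distributions.Normal(mu, var.sqrt())

out = HetOutput(hidden_dim=32, data_dim=5)
h = torch.randn(7, 32)                 # decoder embeddings at 7 query times
dist = out(h)
x = dist.sample()                      # x'_{id} ~ p^het(x' | z^cat, t')
print(dist.log_prob(x).shape)          # torch.Size([7, 5])
```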
Augmented Learning Objective: To learn the parameters of the HeTVAE framework given a data set of sparse and irregularly sampled time series, we propose an augmented learning objective based on a normalized version of the evidence lower bound (ELBO) combined with an uncertainty agnostic scaled squared loss. We normalize the contribution from each data case by the total number of observations so that the effective weight of each data case in the objective function is independent of the total number of observed values. The augmented learning objective is defined below. $\boldsymbol{\mu}_n$ is the predicted mean over the test time points as defined in Equation 9. Also recall that the concatenated latent state $\mathbf{z}^{cat}$ depends directly on the probabilistic latent state $\mathbf{z}$.
$$\mathcal{L}_{NVAE}(\theta, \gamma) = \sum_{n=1}^{N} \frac{1}{\sum_d L_{dn}} \Big( \mathbb{E}_{q_\gamma(\mathbf{z}|\mathbf{r}, s_n)}\big[\log p^{het}_\theta(\mathbf{x}_n \mid \mathbf{z}^{cat}_n, \mathbf{t}_n)\big] - D_{KL}\big(q_\gamma(\mathbf{z}|\mathbf{r}, s_n) \,\|\, p(\mathbf{z})\big) - \lambda\, \mathbb{E}_{q_\gamma(\mathbf{z}|\mathbf{r}, s_n)}\big[\|\mathbf{x}_n - \boldsymbol{\mu}_n\|^2_2\big] \Big) \quad (11)$$

$$D_{KL}\big(q_\gamma(\mathbf{z}|\mathbf{r}, s_n) \,\|\, p(\mathbf{z})\big) = \sum_{i=1}^{K} D_{KL}\big(q_\gamma(\mathbf{z}_i|\mathbf{r}, s_n) \,\|\, p(\mathbf{z}_i)\big) \quad (12)$$

$$\log p^{het}_\theta(\mathbf{x}_n \mid \mathbf{z}^{cat}_n, \mathbf{t}_n) = \sum_{d=1}^{D} \sum_{j=1}^{L_{dn}} \log p^{het}_\theta(x_{jdn} \mid \mathbf{z}^{cat}_n, t_{jdn}) \quad (13)$$
We include the uncertainty agnostic scaled squared loss term to counteract the propensity of the heteroscedastic model to become stuck in poor local optima where the mean is essentially flat and
Figure 2: Example interpolations on the synthetic dataset. Each of the three columns corresponds to interpolation results with an increasing number of observed points: 3, 10 and 20 respectively. The first, second and third rows correspond to STGP, HeTVAE and HTVAE mTAN respectively. The shaded region corresponds to ± one standard deviation. STGP and HeTVAE exhibit variable output uncertainty in response to input sparsity while mTAN does not.
all of the structure in the data is explained as noise. This happens because the model has the ability to learn larger variances at the output, which allows the mean to underfit the data. The extra component (the scaled squared loss) helps to push the optimization process to find more informative parameters by introducing a fixed penalty for the mean deviating from the data. As we will show in the experiments, the use of this augmented training procedure has a strong positive impact on final model performance. Since we are focusing on the interpolation task, we train the HeTVAE by maximizing the augmented learning objective (Equation 11) on the interpolated time points (more details on training are provided in the experimental protocols in Section 4).
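A minimal sketch of the corresponding per-case loss (the negative of the term inside the sum in Equation 11, with a single-sample Monte Carlo estimate of the expectations) might look as follows; the function name and the example inputs are illustrative:

```python
# Sketch of the augmented objective for one data case: Gaussian NLL of the
# heteroscedastic output, plus the KL term of Eq. 12, plus the
# lambda-weighted uncertainty-agnostic squared loss, normalized by the
# observation count.
import torch

def augmented_loss(dist, x, kl, lam):
    nll = -dist.log_prob(x).sum()          # -log p^het(x | z^cat, t), Eq. 13
    sq = ((x - dist.mean) ** 2).sum()      # uncertainty-agnostic penalty
    return (nll + kl + lam * sq) / x.numel()

dist = torch.distributions.Normal(torch.zeros(4, 3), torch.ones(4, 3))
print(augmented_loss(dist, torch.randn(4, 3), kl=torch.tensor(0.5), lam=1.0))
```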
4 EXPERIMENTS
In this section, we present interpolation experiments using a range of models on three real-world data sets. PhysioNet Challenge 2012 (Silva et al., 2012) and MIMIC-III (Johnson et al., 2016) consist of multivariate, sparse and irregularly sampled time series data. We also perform experiments on the Climate dataset (Menne et al., 2016), consisting of multi-rate time series. We also show qualitative results on a synthetic dataset. Details of each dataset can be found in the Appendix A.6.1.
Experimental Protocols: We randomly divide the real data sets into a training set containing 80% of the instances, and a test set containing the remaining 20% of instances. We use 20% of the training data for validation. In the interpolation task, we condition on a subset of available points and produce distributions over the rest of the time points. On the real-world datasets, we perform interpolation experiments by conditioning on 50% of the available points. At test time, the values of observed points are conditioned on and each model is used to infer single time point marginal distributions over values at the rest of the available time points in the test instance. In the case of methods that do not produce probabilistic outputs, we make mean predictions. In the case of the synthetic dataset where we have access to all true values, we use the observed points to infer the values at the rest of the available points. We repeat each real data experiment five times using different random seeds to initialize the model parameters. We assess performance using the negative log likelihood, which is our primary metric of interest. We also report mean squared and mean absolute error. For all experiments, we select hyper-parameters on the held-out validation set using grid search and then apply the best trained model to the test set. The hyper-parameter ranges searched for each model and dataset are fully described in Appendix A.5.
Models: We compare our proposed model HeTVAE to several probabilistic and deterministic interpolation methods. We compare to two Gaussian processes regression (GPR) approaches. The most basic GP model for multivariate time series fits one GPR model per dimension. This approach is known as a single task GP model (STGP) (Rasmussen & Williams, 2006). A potentially better option is to model data using a Multi Task GP (MTGP) (Bonilla et al., 2008). This approach models the correlations both across different dimensions and across time by defining a kernel expressed as the Hadamard product of a temporal kernel (as used in the STGP) and a task kernel. We also compare to several VAE-based approaches. These approaches use a homoscedastic output distribution with different encoder and decoder architectures. HVAE RNN employs a gated recurrent unit network (Chung et al., 2014) as encoder and decoder, HVAE RNN-ODE (Chen et al., 2018) replaces the RNN decoder with a neural ODE, HVAE ODE-RNN-ODE (Rubanova et al., 2019) employs
an ODE-RNN encoder and a neural ODE decoder. Finally, we compare to HTVAE mTAN (Shukla & Marlin, 2021a), a temporal VAE model consisting of multi-time attention networks producing homoscedastic output. For VAE models with homoscedastic output, we treat the output variance term as a hyperparameter and select the variance using log likelihood on the validation set. Architecture details for these methods can be found in Appendix A.4. As baselines, we also consider deterministic mean and forward imputation-based methods. Forward imputation always predicts the last observed value on each dimension, while mean imputation predicts the mean of all the observations for each dimension.
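As a concrete illustration of the two deterministic baselines, the sketch below implements them for a dense array with NaNs marking unobserved entries; this data layout and the names are assumptions made for illustration.

```python
import numpy as np

def forward_impute(x):
    """Predict the last observed value on each dimension.
    Entries before the first observation remain NaN."""
    out = x.copy()
    for d in range(x.shape[1]):
        last = np.nan
        for t in range(x.shape[0]):
            if np.isnan(out[t, d]):
                out[t, d] = last
            else:
                last = out[t, d]
    return out

def mean_impute(x):
    """Predict the per-dimension mean of all observations."""
    out = x.copy()
    col_means = np.nanmean(x, axis=0)      # ignores the NaN entries
    idx = np.where(np.isnan(out))
    out[idx] = np.take(col_means, idx[1])  # fill each NaN with its column mean
    return out
```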
Synthetic Data Results: Figure 2 shows sample visualization output for the synthetic dataset. For this experiment, we compare HTVAE mTAN, the single task Gaussian process STGP, and the proposed HeTVAE model. We vary the number of observed points (3, 10, 20) and each model is used to infer the distribution over the remaining time points. We draw multiple samples from the VAE latent state for HeTVAE and HTVAE mTAN and visualize the distribution of the resulting mixture. As we can see, the interpolations produced by HTVAE mTAN have approximately constant uncertainty across time and this uncertainty level does not change even when the number of points conditioned on increases. On the other hand, both HeTVAE and STGP show variable uncertainty across time. Their uncertainty reduces in the vicinity of input observations and increases in gaps between observations. Even though the STGP has an advantage in this experiment (the synthetic data were generated with an RBF kernel smoother and the STGP uses an RBF kernel as its covariance function), the proposed HeTVAE model shows comparable interpolation performance. We show more qualitative results in Appendix A.3.
Real Data Results: Tables 1, 2 and 3 compare the interpolation performance of all the approaches on the PhysioNet, MIMIC-III and Climate datasets, respectively. HeTVAE outperforms the prior approaches with respect to the negative log likelihood score on all three datasets. The Gaussian process-based methods STGP and MTGP achieve the second- and third-best performance, respectively. We emphasize that while the MAE and MSE values for some of the prior approaches are close to those obtained by the HeTVAE model, the primary metric of interest for comparing probabilistic interpolation approaches is log likelihood, where the HeTVAE performs much better than the other methods.
We note that the MAE/MSE of the VAE-based models with homoscedastic output can be improved by using a small fixed variance during training. However, this produces even worse log likelihood values. Further, we note that the current implementation of MTGP is not scalable to the Climate dataset (270 dimensions). We provide experiments on an additional dataset in Appendix A.1.
Ablation Results: Table 4 shows the results of ablating several different components of the HeTVAE model and training procedure. The first row shows the results for the full proposed approach. The HeTVAE - ALO ablation shows the result of removing the augmented learning objective and training the model using only the ELBO. This results in an immediate drop in performance on PhysioNet. HeTVAE - DET removes the deterministic pathway from the model, resulting in a performance drop on both MIMIC-III and PhysioNet. HeTVAE - INT removes the intensity encoding pathway from the UnTAND module. It results in a large drop in performance on both datasets. HeTVAE - HET - ALO removes the heteroscedastic layer and the augmented learning objective (since the augmented learning objective is introduced to improve learning in the presence of the heteroscedastic layer), resulting in a highly significant drop on both datasets. These results show that all of the components included in the proposed model contribute to improved model performance. We provide more ablation results in Appendix A.2 and discuss hyperparameter selection in Appendix A.5.
5 DISCUSSION AND CONCLUSIONS
In this paper, we have proposed the Heteroscedastic Temporal Variational Autoencoder (HeTVAE) for probabilistic interpolation of irregularly sampled time series data. HeTVAE consists of an input sparsity-aware encoder, parallel deterministic and probabilistic pathways for propagating input uncertainty to the output, and a heteroscedastic output distribution to represent variable uncertainty in the output interpolations. Furthermore, we propose an augmented training objective to combat the presence of additional local optima that arise from the use of the heteroscedastic output structure. Our results show that the proposed model significantly improves uncertainty quantification in the output interpolations as evidenced by significantly improved log likelihood scores compared to several baselines and state-of-the-art methods. While the HeTVAE model can produce a probability distribution over an arbitrary collection of output time points, it is currently restricted to producing marginal distributions. As a result, sampling from the model does not necessarily produce smooth trajectories as would be the case with GPR-based models. Augmenting the HeTVAE model to account for residual correlations in the output layer is an interesting direction for future work.
6 REPRODUCIBILITY STATEMENT
The source code for reproducing the results in this paper is available at https://github.com/reml-lab/hetvae. It contains the instructions to reproduce the results in the paper, including the hyperparameters. The hyperparameter ranges searched for each model are fully described in Appendix A.5. The source code also includes the synthetic dataset generation process as well as one of the real-world datasets. The other datasets can be downloaded and prepared following the preprocessing steps noted in Appendix A.6.1.
ACKNOWLEDGEMENTS
Research reported in this paper was partially supported by the National Institutes of Health under award number 1P41EB028242.
A APPENDIX
A.1 ADDITIONAL RESULTS
We also perform experiments on the UCI electricity dataset (described in Appendix A.6.1). We follow the same experimental protocols described in Section 4. As we can see from Table 5, the proposed model HeTVAE outperforms the prior approaches across all three metrics.
A.2 ABLATION STUDY
Tables 6 and 7 show the complete results of ablating several different components of the HeTVAE model and training procedure with respect to all three evaluation metrics on PhysioNet and MIMIC-III, respectively. We denote the different components of the HeTVAE model as follows: HET (heteroscedastic output layer), ALO (augmented learning objective), INT (intensity encoding), DET (deterministic pathway). The results show selected individual and compound ablations of these components and indicate that all of these components contribute significantly to the model's performance in terms of the negative log likelihood score. We provide detailed comments below.
Effect of Heteroscedastic Layer: Since the augmented learning objective is introduced to improve learning in the presence of the heteroscedastic layer, we remove the augmented learning objective (ALO) along with the heteroscedastic layer (HET). This ablation corresponds to HeTVAE - HET - ALO. As we can see from both Tables 6 and 7, this results in a highly significant drop in log likelihood performance as compared to the full HeTVAE model on both datasets. However, it results in only a slight drop in performance with respect to MAE and MSE, which is sensible as the HET component only affects uncertainty sensitive performance metrics.
Effect of Intensity Encoding: HeTVAE - INT removes the intensity encoding pathway from the UnTAND module. It results in an immediate drop in performance on both datasets. We also compare the effect of intensity encoding after removing the deterministic pathway and the augmented learning objective. These ablations are shown in HeTVAE - DET - ALO and HeTVAE - INT - DET - ALO. The performance drop is less severe in this case because of the propensity of the heteroscedastic output layer to get stuck in poor local optima in the absence of the augmented learning objective (ALO).
Effect of Augmented Learning Objective: The HeTVAE - ALO ablation shows the result of removing the augmented learning objective and training the model using only the ELBO. This results in an immediate drop in performance on PhysioNet. The performance drop is less severe on MIMIC-III. We further perform this ablation without the DET component and observe severe drops in performance across all metrics on both datasets. These ablations correspond to HeTVAE - DET and HeTVAE - DET - ALO. This shows that, along with the ALO component, the DET component also helps prevent the model from getting stuck in local optima where all of the structure in the data is explained as noise. We show interpolations corresponding to these ablations in Appendix A.3.1.
Effect of Deterministic Pathway: HeTVAE - DET removes the deterministic pathway from the model, resulting in a performance drop on both MIMIC-III and PhysioNet across all metrics. We further compare the performance of both the probabilistic and deterministic pathways in isolation as shown by ablation HeTVAE - DET - ALO and HeTVAE - PROB - ALO. We observe that the
deterministic pathway HeTVAE - PROB - ALO outperforms the probabilistic pathway HeTVAE - DET - ALO in terms of log likelihood on MIMIC-III, while the opposite is true in the case of PhysioNet. However, on both datasets, using only the deterministic pathway (HeTVAE - PROB - ALO) achieves better MAE and MSE scores as compared to using only the probabilistic pathway (HeTVAE - DET - ALO).
A.3 VISUALIZATIONS
A.3.1 INTERPOLATIONS ON PHYSIONET
Figure 3 shows example interpolations on the PhysioNet dataset. Following the experimental setting described in Section 4, the models were trained using all dimensions and inference uses all dimensions. We only show interpolations corresponding to Heart Rate as an illustration. As we can see, the STGP and HeTVAE models exhibit a good fit and variable uncertainty at the edges where there are no observations. We can also see that mTAN trained with homoscedastic output is not able to produce as good a fit because of the fixed variance at the output (discussed in Section 4).
The most interesting observation is the performance of HeTVAE - DET - ALO, an ablation of the HeTVAE model that retains the heteroscedastic output but removes the deterministic pathway and the augmented learning objective. This ablation significantly underfits the data and performs similarly to mTAN. This is an example of the local optima that arise from the use of a heteroscedastic output layer, where the mean is excessively smooth and all of the structure in the data is explained as noise. We address this with the use of the augmented learning objective described in Section 3.3. As seen in Figure 3, adding the augmented learning objective (HeTVAE - DET) clearly improves performance.
A.3.2 SYNTHETIC DATA VISUALIZATIONS: SPARSITY
In this section, we show supplemental interpolation results on the synthetic dataset. The setting here is the same as in Section 4. Figure 4 compares HTVAE mTAN, the single task Gaussian process STGP, the proposed HeTVAE model and an ablation of the proposed model without intensity encoding, HeTVAE - INT. We vary the number of observed points (3, 10, 20) and each model is used to infer the distribution over the remaining time points. We draw multiple samples from the VAE latent state for HeTVAE, HeTVAE - INT and HTVAE mTAN, and visualize the distribution of the resulting mixture. Figure 4 illustrates the interpolation performance of each of the models. As we can see, the interpolations produced by HTVAE mTAN have approximately constant uncertainty across time and this uncertainty level does not change even when the number of points conditioned on increases. On the other hand, both HeTVAE and STGP show variable uncertainty across time. Their uncertainty reduces in the vicinity of input observations and increases in gaps between observations. The HeTVAE - INT model performs slightly better than the HTVAE mTAN model, but it does not show variable uncertainty due to input sparsity like HeTVAE.
A.3.3 SYNTHETIC DATA VISUALIZATIONS: INTER-OBSERVATION GAP
To demonstrate the effectiveness of the intensity encoder (INT), we perform another experiment on the synthetic dataset where we increase the maximum inter-observation gap between the observations.
We follow the same training protocol as described in Section 4. At test time, we condition on 10 observed points with increasing maximum inter-observation gap. We vary the maximum inter-observation gap from 20% to 80% of the length of the original time series. Each model is used to infer single time point marginal distributions over values at the rest of the available time points in the test instance.
Figure 5 shows the interpolations with increasing maximum inter-observation gap. STGP and HeTVAE show variable uncertainty with time and the uncertainty increases with increasing maximum inter-observation gap. On the other hand, HTVAE mTAN with homoscedastic output shows approximately constant uncertainty with time and also across different maximum inter-observation gaps. These results clearly show that HTVAE mTAN produces over-confident probabilistic interpolations over large gaps.
Furthermore, we show an ablation of the proposed model, HeTVAE - INT, where we remove the intensity encoder and perform the interpolations. As we see from the figure, this leads to approximately constant uncertainty across time as well as across different maximum inter-observation gaps. This shows that, without the intensity encoder, the HeTVAE model is not able to capture uncertainty due to input sparsity as effectively.
A.4 ARCHITECTURE DETAILS
HeTVAE: Learnable parameters in the UnTAND architecture shown in Figure 1a include the weights of the three linear layers and the parameters of the shared time embedding functions. Each time embedding function is a one-layer fully connected network with a sine function non-linearity. The two linear layers on top of the embedding functions are linear projections from the time embedding dimension $d_e$ to $d_e/H$, where $H$ is the number of time embeddings. Note that these linear layers do not share parameters. The third linear layer performs a linear projection from $2 \cdot D \cdot H$ to $J$. It takes as input the concatenation of the VAL encoder output and the INT encoder output and produces an output of dimension $J$. $d_e$, $H$ and $J$ are all hyperparameters of the architecture. The ranges considered are described in the next section.
The HeTVAE model shown in Figure 1b consists of three MLP blocks apart from the UnTAND modules. The MLP in the deterministic path is a one-layer fully connected layer that projects the UnTAND output to match the dimension of the latent state. The remaining MLP blocks are two-layer fully connected networks with matching width and ReLU activations. The MLP in the decoder takes the output of the UnTAND module and outputs the mean and variance of dimension $D$ and sequence length $t'$. We use a softplus transformation on the decoder output to get the variance: $\sigma_i = 0.01 + \mathrm{softplus}(f^{dec}_\sigma(\mathbf{h}^{dec}_i))$. Similarly, in the probabilistic path, we apply an exponential transformation to get the variance of the $q$ distribution: $\sigma^2_k = \exp(f^{enc}_\sigma(\mathbf{h}^{enc}_k))$. We use $K$ reference time points regularly spaced between 0 and 1. $K$ is considered to be a hyperparameter of the architecture. The ranges considered are described in the next section.
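The two variance transformations can be written compactly as in the sketch below; this is a PyTorch sketch with illustrative names, not an excerpt from the released implementation.

```python
import torch
import torch.nn.functional as F

def decoder_variance(raw):
    """raw = f_sigma^dec(h_i^dec). softplus keeps the variance positive;
    the 0.01 floor prevents the output variance from collapsing to zero."""
    return 0.01 + F.softplus(raw)

def encoder_variance(raw):
    """raw = f_sigma^enc(h_k^enc). Exponential transform for the
    variance of the q distribution over the probabilistic latent state."""
    return torch.exp(raw)
```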
Baselines: For HTVAE mTAN, we use an architecture similar to HeTVAE where we remove the deterministic path and the heteroscedastic output layer, and use the mTAND module instead of the UnTAND module (Shukla & Marlin, 2021a). We use the same architectures for the ODE and RNN-based VAEs as Rubanova et al. (2019).
A.5 HYPERPARAMETERS
HeTVAE: We fix the time embedding dimension to $d_e = 128$. The number of embeddings $H$ is searched over the range {1, 2, 4}. We search the number of reference points $K$ over the range {4, 8, 16, 32}, the latent dimension over the range {8, 16, 32, 64, 128}, the output dimension of UnTAND $J$ over the range {16, 32, 64, 128}, and the width of the two-layer fully connected layers over {128, 256, 512}. In the augmented learning objective, we search for $\lambda$ over the range {1.0, 5.0, 10.0}. We use the Adam optimizer for training the models. Experiments are run for 2,000 iterations with a learning rate of 0.0001 and a batch size of 128. The best hyperparameters are reported in the code. We use 100 samples from the probabilistic latent state to compute the evaluation metrics.
Ablations: We note that the ablations were not performed with a fixed architecture. For all the ablation models, we tuned the hyperparameters and reported the results with the best hyperparameter setting. We also ensured that the hyperparameter ranges for ablated models with only a deterministic or probabilistic path were wide enough that the optimal ablated models did not saturate the ends of the ranges for architectural hyperparameter values, including the dimensionality of the latent representations.
VAE Baselines: For VAE models with homoscedastic output, we treat the output variance term as a hyperparameter and select the variance over the range {0.01, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0}. For HTVAE mTAN, we search the corresponding hyperparameters over the same range as HeTVAE. For ODE and RNN based VAEs, we search for GRU hidden units, latent dimension, the number of hidden units in the fully connected network for the ODE function in the encoder and decoder over the range {20, 32, 64, 128, 256}. For ODEs, we also search the number of layers in fully connected network in the range {1, 2, 3}. We use a batch size of 50 and a learning rate of 0.001. We use 100 samples from the latent state to compute the evaluation metrics.
Gaussian Processes: For the single task GP, we use a squared exponential kernel. In the case of the multi-task GP, we experimented with the Matern kernel with different smoothness parameters, and with the squared exponential kernel. We found that the Matern kernel performs better. We use maximum marginal likelihood to train the GP hyperparameters. We search for the learning rate over the range {0.1, 0.01, 0.001} and run for 100 iterations. We search for the smoothness parameter over the range {0.5, 1.5, 2.5}. We search for the batch size over the range {32, 64, 128, 256}.
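For illustration, a per-dimension STGP baseline of this kind can be sketched with scikit-learn, which fits the kernel hyperparameters by maximizing the log marginal likelihood; the implementation used in the experiments may differ in details such as the optimizer and noise handling, and the names below are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_stgp_per_dim(series, query_times):
    """series: list over dimensions of (times, values) arrays.
    Returns per-dimension predictive means and variances at query_times."""
    preds = []
    for t_d, x_d in series:
        # RBF is the squared exponential kernel; WhiteKernel models noise.
        kernel = RBF(length_scale=0.1) + WhiteKernel(noise_level=0.01)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(t_d.reshape(-1, 1), x_d)   # maximizes log marginal likelihood
        mean, std = gp.predict(query_times.reshape(-1, 1), return_std=True)
        preds.append((mean, std ** 2))
    return preds
```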
A.6 TRAINING DETAILS
A.6.1 DATA GENERATION AND PREPROCESSING
Synthetic Data Generation: We generate a synthetic dataset consisting of 2000 trajectories, each observed at 50 time points between 0 and 1. We fix 10 reference time points and draw values for each from a standard normal distribution. We then use an RBF kernel smoother with a fixed bandwidth of $\alpha = 120.0$ to construct local interpolations over the 50 time points. The data generating process is shown below:
$$z_k \sim \mathcal{N}(0, 1), \quad k \in \{1, \dots, 10\}, \qquad r_k = 0.1\, k, \qquad t_i = 0.02\, i, \quad i \in \{1, \dots, 50\}$$

$$x_i = \frac{\sum_k \exp(-\alpha (t_i - r_k)^2) \cdot z_k}{\sum_{k'} \exp(-\alpha (t_i - r_{k'})^2)} + \epsilon_i, \qquad \epsilon_i \sim \mathcal{N}(0, 0.1^2)$$
We randomly sample 3 to 10 observations from each trajectory to simulate a sparse and irregularly sampled univariate time series.
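The generative process above translates directly into code; the sketch below follows the equations, with only the variable names and the random seed added.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 120.0

def sample_trajectory():
    z = rng.normal(0.0, 1.0, size=10)              # z_k ~ N(0, 1)
    r = 0.1 * np.arange(1, 11)                     # 10 reference time points
    t = 0.02 * np.arange(1, 51)                    # 50 regularly spaced times
    w = np.exp(-alpha * (t[:, None] - r[None, :]) ** 2)
    x = (w @ z) / w.sum(axis=1) + rng.normal(0.0, 0.1, size=50)
    # keep 3-10 random observations to simulate sparse, irregular sampling
    keep = np.sort(rng.choice(50, size=rng.integers(3, 11), replace=False))
    return t[keep], x[keep]

dataset = [sample_trajectory() for _ in range(2000)]
```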
PhysioNet: The PhysioNet Challenge 2012 dataset (Silva et al., 2012) consists of multivariate time series data with 37 physiological variables from intensive care unit (ICU) records. Each record contains measurements from the first 48 hours after admission. We use the protocols described in Rubanova et al. (2019) and round the observation times to the nearest minute, resulting in 2880 possible measurement times per time series. The data set includes 8000 instances that can be used for interpolation experiments. PhysioNet is freely available for research use and can be downloaded from https://physionet.org/content/challenge-2012/.
MIMIC-III: The MIMIC-III data set (Johnson et al., 2016) is a multivariate time series dataset containing sparse and irregularly sampled physiological signals collected at Beth Israel Deaconess Medical Center. We use the procedures proposed by Shukla & Marlin (2019) to process the data set. This results in 53,211 records, each containing 12 physiological variables. We use all 53,211 instances to perform interpolation experiments. MIMIC-III is available through a permissive data use agreement which can be requested at https://mimic.mit.edu/iii/gettingstarted/. Once the request is approved, the dataset can be downloaded from https://mimic.mit.edu/iii/gettingstarted/dbsetup/. The instructions and code to extract the MIMIC-III dataset are given at https://github.com/mlds-lab/interp-net.
Climate Dataset: The U.S. Historical Climatology Network Monthly (USHCN) dataset (Menne et al., 2016) is a publicly available dataset consisting of daily measurements of 5 climate variables: daily maximum temperature, daily minimum temperature, whether it was a snowy day or not, total daily precipitation, and daily snow precipitation. It contains data from the last 150 years for 1,218 meteorological stations scattered over the United States. Following the preprocessing steps of Che et al. (2018b), we extract daily climate data for 100 consecutive years, from 1910 to 2009, from 54 stations in California. To get multi-rate time series data, we split the stations into 3 groups with sampling rates of 2 days, 1 week, and 1 month, respectively. We divide the data into smaller time series consisting of yearly data and end up with a dataset of 100 examples, each consisting of 270 features. We perform the interpolation task on this dataset where we compute the feature values every day using the multi-rate time series data. The dataset is available for download at https://cdiac.ess-dive.lbl.gov/ftp/ushcn_daily/.
Electricity Dataset: The UCI household electricity dataset contains measurements of seven different quantities related to electricity consumption in a household. The data are recorded every minute for 47 months between December 2006 and November 2010, yielding over 2 million observations. To simulate irregular sampling, we keep observations only at durations sampled from
an exponential distribution with λ = 20. Following the preprocessing step of Binkowski et al. (2018), we also do random feature sampling where we choose one out of seven features at each time step. We divide the data into smaller time series consisting of monthly data and end up with a dataset of 1431 examples, each consisting of 7 features. We perform interpolation experiments on this dataset where we compute feature values every minute using the irregularly sampled data. The dataset is available for download at https://archive.ics.uci.edu/ml/datasets/individual+household+electric+power+consumption.
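One possible reading of this sampling scheme is sketched below, treating λ = 20 as the mean inter-observation gap in minutes; whether λ denotes the rate or the scale of the exponential, and the exact feature-sampling order, are assumptions here and may differ from the original preprocessing of Binkowski et al. (2018).

```python
import numpy as np

rng = np.random.default_rng(0)

def irregularly_sample(values, mean_gap=20.0):
    """values: (T, 7) array of minute-level measurements for one household."""
    T = values.shape[0]
    times, t = [], rng.exponential(mean_gap)
    while t < T:
        times.append(int(t))
        t += rng.exponential(mean_gap)           # exponential inter-arrival gaps
    times = np.array(times, dtype=int)
    feats = rng.integers(0, 7, size=len(times))  # keep one of seven features per step
    return times, feats, values[times, feats]
```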
Dataset Preprocessing: We rescale time to be in [0, 1] for all datasets. We also re-scale all dimensions. In the case of PhysioNet and MIMIC-III, for each dimension we first remove outliers in the outer 0.1 percentile region. We then compute the mean and standard deviation of all observations on that dimension. The outlier detection step is used to prevent rare large values in the data set from affecting the normalization statistics. Finally, we z-transform all of the available data (including the points identified as outliers). No data points are discarded from the data sets during the normalization process.
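A sketch of this outlier-robust z-transform is shown below; splitting the outer 0.1% region evenly across the two tails is an assumption made for illustration, and the names are hypothetical.

```python
import numpy as np

def robust_z_transform(x_d):
    """x_d: 1-D array of all observed values on one dimension."""
    lo, hi = np.percentile(x_d, [0.05, 99.95])   # outer 0.1% region, split per tail
    core = x_d[(x_d >= lo) & (x_d <= hi)]        # stats exclude the outliers...
    mu, sd = core.mean(), core.std()
    return (x_d - mu) / sd                       # ...but all points are transformed
```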
A.6.2 SOURCE CODE
The source code for reproducing the results in this paper is available at https://github.com/ reml-lab/hetvae.
A.6.3 COMPUTING INFRASTRUCTURE
All experiments were run on Nvidia Titan X and 1080 Ti GPUs. The time required to run all the experiments in this paper, including hyperparameter tuning, was approximately eight days using eight GPUs.
Summary Of The Paper
The paper introduces a novel Variational Autoencoder model that deals with irregularly sampled time series via a probabilistic approach to time series interpolation.
The main contribution is the architecture itself, its components, and the training process. The model was evaluated on real-world data sets from the medical and climate domains as well as on synthetic data.
Review
+ Novel method
+ Well written
+ Notation is clear and organized
+ Well evaluated on multiple datasets
+ Extensive ablation tests and discussion about the model components
+ Code available at review time
- Substantial novelty overlap with [1]
[1] Satya Narayan Shukla and Benjamin Marlin. Multi-time attention networks for irregularly sampled time series. In International Conference on Learning Representations, 2021.
Heteroscedastic Temporal Variational Autoencoder For Irregularly Sampled Time Series
Abstract
Irregularly sampled time series commonly occur in several domains where they present a significant challenge to standard deep learning models. In this paper, we propose a new deep learning framework for probabilistic interpolation of irregularly sampled time series that we call the Heteroscedastic Temporal Variational Autoencoder (HeTVAE). HeTVAE includes a novel input layer to encode information about input observation sparsity, a temporal VAE architecture to propagate uncertainty due to input sparsity, and a heteroscedastic output layer to enable variable uncertainty in output interpolations. Our results show that the proposed architecture is better able to reflect variable uncertainty through time due to sparse and irregular sampling than a range of baseline and traditional models, as well as recent deep latent variable models that use homoscedastic output layers. (Implementation available at https://github.com/reml-lab/hetvae.)
1 INTRODUCTION
In this paper, we propose a novel deep learning framework for probabilistic interpolation of irregularly sampled time series. Irregularly sampled time series data occur in multiple scientific and industrial domains including finance (Manimaran et al., 2006), climate science (Schulz & Stattegger, 1997) and healthcare (Marlin et al., 2012; Yadav et al., 2018). In some domains including electronic health records and mobile health studies (Cheng et al., 2017), there can be significant variation in inter-observation intervals through time. This is due to the complexity of the underlying observation processes that can include “normal” variation in observation times combined with extended, block-structured periods of missingness. For example, in the case of ICU EHR data, this can occur due to patients being moved between different locations for procedures or tests, resulting in missing physiological sensor data for extended periods of time. In mobile health studies, the same problem can occur due to mobile sensor batteries running out, or participants forgetting to wear or carry devices.
In such situations, it is of critical importance for interpolation models to be able to correctly reflect the variable input uncertainty that results from variable observation sparsity so as not to provide overly confident inferences. However, modeling time series data subject to irregular sampling poses a significant challenge to machine learning models that assume fully-observed, fixed-size feature representations (Marlin et al., 2012; Yadav et al., 2018; Shukla & Marlin, 2021b). The main challenges in dealing with such data include the presence of variable time gaps between the observation time points, partially observed feature vectors caused by the lack of temporal alignment across different dimensions, as well as different data cases, and variable numbers of observations across dimensions and data cases. Significant recent work has focused on developing specialized models and architectures to address these challenges in modeling irregularly sampled multivariate time series (Li & Marlin, 2015; 2016; Lipton et al., 2016; Futoma et al., 2017; Che et al., 2018a; Shukla & Marlin, 2019; Rubanova et al., 2019; Horn et al., 2020; Li & Marlin, 2020; Shukla & Marlin, 2021a; De Brouwer et al., 2019; Tan et al., 2020; Kidger et al., 2020).
Recently, Shukla & Marlin (2021a) introduced the Multi-Time Attention Network (mTAN) model, a variational autoencoder (VAE) architecture for continuous-time interpolation of irregularly sampled
time series. This model was shown to provide state-of-the-art classification and deterministic interpolation performance. However, like many VAEs, the mTAN architecture produces a homoscedastic output distribution conditioned on the latent state. This means that the model can only reflect uncertainty due to variable input sparsity through variations in the VAE latent state. As we will show, this mechanism is insufficient to capture differences in uncertainty over time. On the other hand, Gaussian Process Regression-based (GPR) methods (Rasmussen & Williams, 2006) have the ability to reflect variable uncertainty through the posterior inference process. The main drawbacks of GPR-based methods are their significantly higher run times during both training and inference, and the added restriction to define positive definite covariance functions for multivariate time series.
In this work, we propose a novel encoder-decoder architecture for multivariate probabilistic time series interpolation that we refer to as the Heteroscedastic Temporal Variational Autoencoder or HeTVAE. HeTVAE aims to address the challenges described above by encoding information about input sparsity using an uncertainty-aware multi-time attention network (UnTAN), flexibly capturing relationships between dimensions and time points using both probabilistic and deterministic latent pathways, and directly representing variable output uncertainty via a heteroscedastic output layer.
The proposed UnTAN layer generalizes the previously introduced mTAN layer with an additional intensity network that can more directly encode information about input uncertainty due to variable sparsity. The proposed UnTAN layer uses an attention mechanism to produce a distributed latent representation of irregularly sampled time series at a set of reference time points. The UnTAN module thus provides an interface between input multivariate, sparse and irregularly sampled time series data and more traditional deep learning components that expect fixed-dimensional or regularly spaced inputs. We combat the presence of additional local optima that arises from the use of a heteroscedastic output layer by leveraging an augmented training objective where we combine the ELBO loss with an uncertainty agnostic loss component. The uncertainty agnostic component helps to prevent learning from converging to local optima where the structure in data is explained as noise.
We evaluate the proposed architecture on both synthetic and real data sets. Our approach outperforms a variety of baseline models and recent approaches in terms of log likelihood, which is our primary metric of interest in the case of probabilistic interpolation. Finally, we perform ablation testing of different components of the architecture to assess their impact on interpolation performance.
2 RELATED WORK
Keeping in mind the focus of this work, we concentrate our discussion of related work on deterministic and probabilistic approaches applicable to the interpolation and imputation tasks.
Deterministic Interpolation Methods: Deterministic interpolation methods can be divided into filtering and smoothing-based approaches. Filtering-based approaches infer the values at a given time by conditioning only on past observations. For example, Han-Gyu Kim et al. (2017) use a unidirectional RNN for missing data imputation that conditions only on data from the relative past of the missing observations. On the other hand, smoothing-based methods condition on all possible observations (past and future) to infer any unobserved value. For example, Yoon et al. (2018) and Cao et al. (2018) present missing data imputation approaches based on multi-directional and bi-directional RNNs. These models typically use the gated recurrent unit with decay (GRU-D) model (Che et al., 2018a) as a base architecture for dealing with irregular sampling. Interpolation-prediction networks take a different approach to interfacing with irregularly sampled data that is based on the use of temporal kernel smoother-based layers (Shukla & Marlin, 2019). Shan & Oliva (2021) propose a hierarchical imputation strategy based on set-based architectures for imputation in irregularly sampled time series. Of course, the major disadvantage of deterministic interpolation approaches is that they do not express uncertainty over output interpolations and thus cannot be applied to the problem of probabilistic interpolation without modifications.
Probabilistic Interpolation Methods: The two primary building blocks for probabilistic interpolation and imputation of multivariate irregularly sampled time series are Gaussian process regression (GPR) (Rasmussen & Williams, 2006) and variational autoencoders (VAEs) (Rezende et al., 2014; Kingma & Welling, 2014). GPR models have the advantage of providing an analytically tractable full joint posterior distribution over interpolation outputs when conditioned on irregularly sampled input data. Commonly used covariance functions have the ability to translate variable input
observation density into variable interpolation uncertainty. GPR-based models have been used as the core of several approaches for supervised learning and forecasting with irregularly sampled data (Ghassemi et al., 2015; Li & Marlin, 2015; 2016; Futoma et al., 2017). However, GPR-based models can become somewhat cumbersome in the multivariate setting due to the positive definiteness constraint on the covariance function (Rasmussen & Williams, 2006). The use of separable covariance functions is one common approach to the construction of GPR models over multiple dimensions (Bonilla et al., 2008), but this construction requires all dimensions to share the same temporal kernel parameters. A further drawback of GP-based methods is their significantly higher run times relative to deep learning-based models when applied to larger-scale data (Shukla & Marlin, 2019).
Variational autoencoders (VAEs) combine probabilistic latent states with deterministic encoder and decoder networks to define a flexible and computationally efficient class of probabilistic models that generalize classical factor analysis (Kingma & Welling, 2014; Rezende et al., 2014). Recent research has seen the proposal of several new VAE-based models for irregularly sampled time series. Chen et al. (2018) proposed a latent ordinary differential equation (ODE) model for continuous-time data using an RNN encoder and a neural ODE decoder. Building on the prior work of Chen et al. (2018), Rubanova et al. (2019) proposed a latent ODE model that replaces the RNN with an ODE-RNN model as the encoder. Li et al. (2020) replace the deterministic ODEs with stochastic differential equations (SDEs). Norcliffe et al. (2021) extend the prior work on neural ODEs by combining them with neural processes (Garnelo et al., 2018). Shukla & Marlin (2021a) proposed the Multi-Time Attention Network (mTAN) model, a VAE-based architecture that uses a multi-head temporal cross attention encoder and decoder module (the mTAND module) to provide the interface to multivariate irregularly sampled time series data. Fortuin et al. (2020) proposed a VAE-based approach for the task of smoothing in multivariate time series with a Gaussian process prior in the latent space to capture temporal dynamics. Garnelo et al. (2018); Kim et al. (2019) used heteroscedastic output layers to represent uncertainty in the case of fixed dimensional inputs, but these approaches are not applicable to irregularly sampled time series.
Similar to the mTAN model, the Heteroscedastic Temporal Variational Autoencoder (HeTVAE) model proposed in this work is an attention-based VAE architecture. The primary differences are that mTAN uses a homoscedastic output distribution that assumes constant uncertainty and that the mTAN model’s cross attention operation normalizes away information about input sparsity. These limitations are problematic in cases where there is variable input density through time resulting in the need for encoding, propagating, and reflecting that uncertainty in the output distribution. As we describe in the next section, HeTVAE addresses these issues by combining a novel sparsity-sensitive encoder module with a heteroscedastic output distribution and parallel probabilistic and deterministic pathways for propagating information through the model. Another important difference relative to these previous methods is that HeTVAE uses an augmented learning objective to address the underfitting of predictive variance caused by the use of the heteroscedastic layer.
3 PROBABILISTIC INTERPOLATION WITH THE HETVAE
In this section, we present the proposed architecture for probabilistic interpolation of irregularly sampled time series, the Heteroscedastic Temporal Variational Autoencoder (HeTVAE). HeTVAE leverages a sparsity-aware layer as the encoder and decoder in order to represent input uncertainty and propagate it to output interpolations. We begin by introducing notation. We then describe the architecture of the encoder/decoder network followed by the complete HeTVAE architecture.
3.1 NOTATION
We let $\mathcal{D} = \{s_n \mid n = 1, \dots, N\}$ represent a data set containing $N$ data cases. An individual data case consists of a $D$-dimensional, sparse and irregularly sampled multivariate time series $s_n$. Different dimensions $d$ of the multivariate time series can have observations at different times, as well as different total numbers of observations $L_{dn}$. We follow the series-based representation of irregularly sampled time series (Shukla & Marlin, 2021b) and represent time series $d$ for data case $n$ as a tuple $s_{dn} = (\mathbf{t}_{dn}, \mathbf{x}_{dn})$, where $\mathbf{t}_{dn} = [t_{1dn}, \dots, t_{L_{dn}dn}]$ is the list of time points at which observations are defined and $\mathbf{x}_{dn} = [x_{1dn}, \dots, x_{L_{dn}dn}]$ is the corresponding list of observed values. We drop the data case index $n$ for brevity when the context is clear.
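In code, this series-based representation amounts to storing, per dimension, a pair of aligned arrays of observation times and values; the sketch below is illustrative and uses hypothetical names.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class IrregularSeries:
    times: List[np.ndarray]   # times[d]  = [t_1d, ..., t_{L_d}d]
    values: List[np.ndarray]  # values[d] = [x_1d, ..., x_{L_d}d]

# Dimensions may have different numbers of observations at different times:
s = IrregularSeries(
    times=[np.array([0.1, 0.4, 0.9]), np.array([0.2, 0.7])],
    values=[np.array([1.3, 0.2, -0.5]), np.array([0.0, 2.1])],
)
```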
3.2 REPRESENTING INPUT SPARSITY
As noted in the previous section, the mTAN encoder module does not represent information about input sparsity due to the normalization of the attention weights. To address this issue, we propose an augmented module that we refer to as an Uncertainty Aware Multi-Time Attention Network (UnTAN). The UnTAN module is shown in Figure 1a. This module includes two encoding pathways that leverage a shared time embedding function and a shared attention function. The first encoding pathway (the intensity pathway, INT) focuses on representing information about the sparsity of observations while the second encoding pathway (the value pathway, VAL) focuses on representing information about values of observations. The outputs of these two pathways are concatenated and mixed via a linear layer to define the final output of the module. The mathematical description of the module is given in Equations 1 to 3 and is explained in detail below.
$$\mathrm{int}_h(r_k, \mathbf{t}_d) = \frac{\mathrm{pool}(\{\exp(\alpha_h(r_k, t_{id})) \mid t_{id} \in \mathbf{t}_d\})}{\mathrm{pool}(\{\exp(\alpha_h(r_k, t_{i'u})) \mid t_{i'u} \in \mathbf{t}_u\})} \tag{1}$$

$$\mathrm{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d) = \frac{\mathrm{pool}(\{\exp(\alpha_h(r_k, t_{id})) \cdot x_{id} \mid t_{id} \in \mathbf{t}_d,\, x_{id} \in \mathbf{x}_d\})}{\mathrm{pool}(\{\exp(\alpha_h(r_k, t_{i'd})) \mid t_{i'd} \in \mathbf{t}_d\})} \tag{2}$$

$$\alpha_h(t, t') = \frac{\phi_h(t)\, \mathbf{w} \mathbf{v}^T \phi_h(t')^T}{\sqrt{d_e}} \tag{3}$$
Time Embeddings and Attention Weights: Similar to the mTAN module, the UnTAN module uses time embedding functions $\phi_h(t)$ to project univariate time values into a higher dimensional space. Each time embedding function is a one-layer fully connected network with a sine function non-linearity $\phi_h(t) = \sin(\omega \cdot t + \beta)$. We learn $H$ time embeddings, each of dimension $d_e$. $\mathbf{w}$ and $\mathbf{v}$ are the parameters of the scaled dot product attention function $\alpha_h(t, t')$ shown in Equation 3. The scaling factor $1/\sqrt{d_e}$ is used to normalize the dot product to counteract the growth in the dot product magnitude with increase in the time embedding dimension $d_e$.
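Each time embedding can be implemented as a linear layer followed by a sine non-linearity, as in the minimal sketch below; the names are illustrative and the released code may organize this differently.

```python
import torch
import torch.nn as nn

class TimeEmbedding(nn.Module):
    def __init__(self, d_e=128):
        super().__init__()
        self.lin = nn.Linear(1, d_e)       # holds omega (weight) and beta (bias)

    def forward(self, t):                  # t: (..., 1) tensor of time values
        return torch.sin(self.lin(t))      # phi_h(t) = sin(omega * t + beta)

emb = TimeEmbedding()
print(emb(torch.tensor([[0.25]])).shape)   # torch.Size([1, 128])
```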
Intensity Encoding: The intensity encoding pathway is defined by the function $\mathrm{int}_h(r_k, \mathbf{t}_d)$ shown in Equation 1. The inputs to the intensity function are a query time point $r_k$ and a vector $\mathbf{t}_d$ containing all the time points at which observations are available for dimension $d$. The numerator of the intensity function exponentiates the attention weights between $r_k$ and each time point in $\mathbf{t}_d$ to ensure positivity, then pools over the observed time points. The denominator of this computation is identical to the numerator, but the set of time points $\mathbf{t}_u$ that is pooled over is the union over all observed time points for dimension $d$ from all data cases.

Intuitively, if the largest attention weight between $r_k$ and any element of $\mathbf{t}_d$ is small relative to the attention weights between $r_k$ and the time points in $\mathbf{t}_u$, then the output of the intensity function will be low. Importantly, due to the use of the non-linear time embedding function, pairs of time points with high attention weights do not necessarily have to be close together in time, meaning the notion of intensity that the network expresses is significantly generalized.

We also note that different sets could be used for $\mathbf{t}_u$, including a regularly spaced set of reference time points. One advantage of using the union of all observed time points is that it fixes the maximum value of the intensity function at 1. The two pooling functions applicable in the computation of the intensity function are max and sum. If the time series is sparse, max works well because using sum in the sparse case can lead to very low output values. In a more densely observed time series, either sum or max can be used.
Value Encoding: The value encoding function $\mathrm{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d)$ is presented in Equation 2 in a form that highlights the symmetry with the intensity encoding function. The primary differences are that $\mathrm{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d)$ takes as input both observed time points $\mathbf{t}_d$ and their corresponding values $\mathbf{x}_d$, and the denominator of the function pools over $\mathbf{t}_d$ itself. While different pooling options could be used for this function, in practice we use sum-based pooling. These choices lead to a function $\mathrm{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d)$ that interpolates the observed values at the query time points using softmax weights derived from the attention function. The values of observed points with higher attention weights contribute more to the output value. This structure is equivalent to that used in the mTAN module when sum-based pooling is used. We can also clearly see that this function on its own cannot represent information about input sparsity due to the normalization over $\mathbf{t}_d$. Indeed, the function is completely invariant to an additive decrease in all of the attention weights, $\alpha'_h(r_k, t_{id}) = \alpha_h(r_k, t_{id}) - \delta$.
Module Output: The last stage of the UnTAN module concatenates the value and intensity pathway representations and then linearly weights them together to form the final $J$-dimensional representation that is output by the module. The parameters of this linear stage of the model are $U^{int}_{hdj}$ and $U^{val}_{hdj}$. The value of the $j$-th dimension of the output at a query time point $r_k$ is given by Equation 4.
$$\mathrm{UnTAN}(r_k, \mathbf{t}, \mathbf{x})[j] = \sum_{h=1}^{H} \sum_{d=1}^{D} \begin{bmatrix} \mathrm{int}_h(r_k, \mathbf{t}_d) \\ \mathrm{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d) \end{bmatrix}^T \begin{bmatrix} U^{int}_{hdj} \\ U^{val}_{hdj} \end{bmatrix} \tag{4}$$
Finally, we note that the UnTAN module defines a continuous function of $t$ given an input time series and hence cannot be directly incorporated into standard neural network architectures. We adapt the UnTAN module to produce fully observed, fixed-dimensional discrete sequences by materializing its output at a set of reference time points. Reference time points can be a fixed set of regularly spaced time points or may need to depend on the input time series. For a given set of reference time points $\mathbf{r} = [r_1, \dots, r_K]$, the discretized UnTAN module $\mathrm{UnTAND}(\mathbf{r}, \mathbf{t}, \mathbf{x})$ is defined as $\mathrm{UnTAND}(\mathbf{r}, \mathbf{t}, \mathbf{x})[i] = \mathrm{UnTAN}(r_i, \mathbf{t}, \mathbf{x})$. This module takes as input the time series $s = (\mathbf{t}, \mathbf{x})$ and the set of reference time points $\mathbf{r}$ and outputs a sequence of $K$ UnTAN embeddings, each of dimension $J$, corresponding to each reference point. As described in the next section, we use the UnTAND module to provide an interface between sparse and irregularly sampled data and fully connected MLP network structures.
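The sketch below materializes Equations 1 and 2 for a single head and a single dimension at a set of reference points, assuming the attention weights have already been computed from the time embeddings; it uses max pooling for the intensity pathway and sum pooling for the value pathway, and all names are illustrative.

```python
import numpy as np

def untan_encode(alpha_obs, alpha_union, x_d):
    """alpha_obs:   (K, L) attention weights between the K reference points
                    and the observed times t_d of this dimension.
       alpha_union: (K, U) attention weights against the union t_u of all
                    observed times across data cases (t_d is a subset of t_u).
       x_d:         (L,) observed values."""
    num_int = np.exp(alpha_obs).max(axis=1)     # max-pool over t_d
    den_int = np.exp(alpha_union).max(axis=1)   # max-pool over t_u
    intensity = num_int / den_int               # Eq. 1; in [0, 1] since t_d ⊆ t_u

    w = np.exp(alpha_obs)                       # sum-pooled value pathway
    value = (w * x_d[None, :]).sum(axis=1) / w.sum(axis=1)  # Eq. 2: softmax interp.
    return intensity, value
```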
3.3 THE HETVAE MODEL
In this section, we describe the overall architecture of the HeTVAE model, as shown in Figure 1b.
Model Architecture: The HeTVAE consists of parallel deterministic and probabilistic pathways for propagating input information to the output distribution, including information about input sparsity. We begin by mapping the input time series s = (t,x) through the UnTAND module along with a collection of K reference time points r. In the probabilistic path, we construct a distribution over latent variables at each reference time point using a diagonal Gaussian distribution q with mean and variance output by fully connected layers applied to the UnTAND output embeddings
$\mathbf{h}^{enc} = [\mathbf{h}^{enc}_1, \dots, \mathbf{h}^{enc}_K]$, as shown in Equation 6. In the deterministic path, the UnTAND output embeddings $\mathbf{h}^{enc}$ are passed through a feed-forward network $g$ to produce a deterministic temporal representation (at each reference point) of the same dimension as the probabilistic latent state.
The decoder takes as input the representations from both pathways along with the reference time points and a set of query points $\mathbf{t}'$ (Eq. 8). The UnTAND module produces a sequence of embeddings $\mathbf{h}^{dec} = [\mathbf{h}^{dec}_1, \dots, \mathbf{h}^{dec}_{|\mathbf{t}'|}]$ corresponding to each time point in $\mathbf{t}'$. The UnTAND embeddings are then independently decoded using a fully connected decoder $f^{dec}$ and the result is used to parameterize the output distribution. We use a diagonal covariance Gaussian distribution where both the mean $\boldsymbol{\mu} = [\boldsymbol{\mu}_1, \dots, \boldsymbol{\mu}_{|\mathbf{t}'|}]$, $\boldsymbol{\mu}_i \in \mathbb{R}^D$, and variance $\boldsymbol{\sigma}^2 = [\boldsymbol{\sigma}^2_1, \dots, \boldsymbol{\sigma}^2_{|\mathbf{t}'|}]$, $\boldsymbol{\sigma}^2_i \in \mathbb{R}^D$, are predicted for each time point by the final decoded representation, as shown in Eq. 9. The generated time series is sampled from this distribution and is given by $\hat{s} = (\mathbf{t}', \mathbf{x}')$ with all data dimensions observed.
The complete model is described below. We define $q_\gamma(\mathbf{z} \mid \mathbf{r}, s)$ to be the distribution over the probabilistic latent variables $\mathbf{z} = [\mathbf{z}_1, \dots, \mathbf{z}_K]$ induced by the input time series $s = (\mathbf{t}, \mathbf{x})$ at the reference time points $\mathbf{r}$. We define the prior $p(\mathbf{z}_i)$ over the latent states to be a standard multivariate normal distribution. We let $p^{het}_\theta(x'_{id} \mid \mathbf{z}^{cat}, t'_{id})$ define the final probability distribution over the value of time point $t'_{id}$ on dimension $d$ given the concatenated latent state $\mathbf{z}^{cat} = [\mathbf{z}^{cat}_1, \dots, \mathbf{z}^{cat}_K]$. $\gamma$ and $\theta$ represent the parameters of all components of the encoder and decoder, respectively.
$$\mathbf{h}^{enc} = \mathrm{UnTAND}^{enc}(\mathbf{r}, \mathbf{t}, \mathbf{x}) \tag{5}$$
$$\mathbf{z}_k \sim q_\gamma(\mathbf{z}_k \mid \boldsymbol{\mu}_k, \boldsymbol{\sigma}^2_k), \quad \boldsymbol{\mu}_k = f^{enc}_\mu(\mathbf{h}^{enc}_k), \quad \boldsymbol{\sigma}^2_k = f^{enc}_\sigma(\mathbf{h}^{enc}_k) \tag{6}$$
$$\mathbf{z}^{cat}_k = \mathrm{concat}(\mathbf{z}_k, g(\mathbf{h}^{enc}_k)) \tag{7}$$
$$\mathbf{h}^{dec} = \mathrm{UnTAND}^{dec}(\mathbf{t}', \mathbf{r}, \mathbf{z}^{cat}) \tag{8}$$
$$p^{het}_\theta(x'_{id} \mid \mathbf{z}^{cat}, t'_{id}) = \mathcal{N}(x'_{id};\, \boldsymbol{\mu}_i[d],\, \boldsymbol{\sigma}^2_i[d]), \quad \boldsymbol{\mu}_i = f^{dec}_\mu(\mathbf{h}^{dec}_i), \quad \boldsymbol{\sigma}^2_i = f^{dec}_\sigma(\mathbf{h}^{dec}_i) \tag{9}$$
$$x'_{id} \sim p^{het}_\theta(x'_{id} \mid \mathbf{z}^{cat}, t'_{id}) \tag{10}$$

Compared to the constant output variance used to train the mTAN-based VAE model proposed in prior work (Shukla & Marlin, 2021a), our proposed model produces a heteroscedastic output distribution that we will show provides improved modeling for the probabilistic interpolation task. However, the increased complexity of the model's output representation results in an increased space of local optima. We address this issue using an augmented learning objective, as described in the next section. Finally, we note that we can easily obtain a simplified homoscedastic version of the model with constant output variance $\sigma^2_c$ using the alternate final output distribution $p^{c}_\theta(x'_{id} \mid \mathbf{z}, t'_{id}) = \mathcal{N}(x'_{id};\, \boldsymbol{\mu}_i[d],\, \sigma^2_c)$.
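A minimal sketch of the probabilistic latent path (Equation 6) and its concatenation with the deterministic path (Equation 7) at a single reference point is shown below, using the standard reparameterization trick; the layer shapes and names are assumptions made for illustration.

```python
import torch
import torch.nn as nn

J, latent_dim = 64, 32
f_mu = nn.Linear(J, latent_dim)                         # f_mu^enc
f_sigma = nn.Linear(J, latent_dim)                      # f_sigma^enc (log-variance)
g = nn.Sequential(nn.Linear(J, latent_dim), nn.ReLU())  # deterministic path

h_enc_k = torch.randn(1, J)                    # one UnTAND output embedding
mu = f_mu(h_enc_k)
log_var = f_sigma(h_enc_k)                     # sigma^2_k = exp(f_sigma^enc(h))
z_k = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterized sample
z_cat_k = torch.cat([z_k, g(h_enc_k)], dim=-1)  # Eq. 7: concatenate both pathways
```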
Augmented Learning Objective: To learn the parameters of the HeTVAE framework given a data set of sparse and irregularly sampled time series, we propose an augmented learning objective based on a normalized version of the evidence lower bound (ELBO) combined with an uncertainty agnostic scaled squared loss. We normalize the contribution from each data case by the total number of observations so that the effective weight of each data case in the objective function is independent of the total number of observed values. The augmented learning objective is defined below. $\boldsymbol{\mu}_n$ is the predicted mean over the test time points as defined in Equation 9. Also recall that the concatenated latent state $\mathbf{z}^{cat}$ depends directly on the probabilistic latent state $\mathbf{z}$.
$$\mathcal{L}_{NVAE}(\theta, \gamma) = \sum_{n=1}^{N} \frac{1}{\sum_d L_{dn}} \Big( \mathbb{E}_{q_\gamma(\mathbf{z} \mid \mathbf{r}, s_n)}\big[\log p^{het}_\theta(\mathbf{x}_n \mid \mathbf{z}^{cat}_n, \mathbf{t}_n)\big] - D_{KL}\big(q_\gamma(\mathbf{z} \mid \mathbf{r}, s_n)\,\|\,p(\mathbf{z})\big) - \lambda\, \mathbb{E}_{q_\gamma(\mathbf{z} \mid \mathbf{r}, s_n)}\|\mathbf{x}_n - \boldsymbol{\mu}_n\|^2_2 \Big) \tag{11}$$

$$D_{KL}\big(q_\gamma(\mathbf{z} \mid \mathbf{r}, s_n)\,\|\,p(\mathbf{z})\big) = \sum_{i=1}^{K} D_{KL}\big(q_\gamma(\mathbf{z}_i \mid \mathbf{r}, s_n)\,\|\,p(\mathbf{z}_i)\big) \tag{12}$$

$$\log p^{het}_\theta(\mathbf{x}_n \mid \mathbf{z}^{cat}_n, \mathbf{t}_n) = \sum_{d=1}^{D} \sum_{j=1}^{L_{dn}} \log p^{het}_\theta(x_{jdn} \mid \mathbf{z}^{cat}_n, t_{jdn}) \tag{13}$$
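A minimal sketch of this objective for a single data case and a single latent sample is shown below; the Gaussian log likelihood follows Equation 9, the default λ is one value from the range searched in Appendix A.5, and all names are illustrative.

```python
import math
import torch

def augmented_loss(x, mean, var, kl, lam=5.0):
    """x, mean, var: tensors at the observed points of one data case;
       kl: scalar KL(q || p) summed over the K latent states (Eq. 12)."""
    log_lik = -0.5 * (torch.log(var) + math.log(2 * math.pi)
                      + (x - mean) ** 2 / var)      # per-point Gaussian terms
    n_obs = x.numel()                               # sum_d L_dn normalizer
    elbo_n = (log_lik.sum() - kl) / n_obs
    penalty = ((x - mean) ** 2).sum() / n_obs       # uncertainty agnostic term
    return -(elbo_n - lam * penalty)                # minimize the negative objective
```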
We include the uncertainty agnostic scaled squared loss term to counteract the propensity of the heteroscedastic model to become stuck in poor local optima where the mean is essentially flat and all of the structure in the data is explained as noise. This happens because the model has the ability to learn larger variances at the output, which allows the mean to underfit the data. The extra component (scaled squared loss) helps to push the optimization process to find more informative parameters by introducing a fixed penalty for the mean deviating from the data. As we will show in the experiments, the use of this augmented training procedure has a strong positive impact on final model performance. Since we focus on the interpolation task, we train the HeTVAE by maximizing the augmented learning objective (Equation 11) on the interpolated time points (more details on training are provided in the experimental protocols in Section 4).

Figure 2: We show example interpolations on the synthetic dataset. The three columns correspond to interpolation results with increasing numbers of observed points: 3, 10 and 20, respectively. The first, second and third rows correspond to STGP, HeTVAE and HTVAE mTAN, respectively. The shaded region corresponds to ± one standard deviation. STGP and HeTVAE exhibit variable output uncertainty in response to input sparsity while mTAN does not.
all of the structure in the data is explained as noise. This happens because the model has the ability to learn larger variances at the output, which allows the mean to underfit the data. The extra component (scaled squared loss) helps to push the optimization process to find more informative parameters by introducing a fixed penalty for the mean deviating from the data. As we will show in the experiments, the use of this augmented training procedure has a strong positive impact on final model performance. Since, we are focusing on the interpolation task, we train the HeTVAE by maximizing the augmented learning objective (Equation 11) on the interpolated time points (more details on training has been provided in the experimental protocols in Section 4).
4 EXPERIMENTS
In this section, we present interpolation experiments using a range of models on three real-world data sets. PhysioNet Challenge 2012 (Silva et al., 2012) and MIMIC-III (Johnson et al., 2016) consist of multivariate, sparse and irregularly sampled time series data. We also perform experiments on the Climate dataset (Menne et al., 2016), consisting of multi-rate time series. We also show qualitative results on a synthetic dataset. Details of each dataset can be found in the Appendix A.6.1.
Experimental Protocols: We randomly divide the real data sets into a training set containing 80% of the instances, and a test set containing the remaining 20% of instances. We use 20% of the training data for validation. In the interpolation task, we condition on a subset of available points and produce distributions over the rest of the time points. On the real-world datasets, we perform interpolation experiments by conditioning on 50% of the available points. At test time, the values of observed points are conditioned on and each model is used to infer single time point marginal distributions over values at the rest of the available time points in the test instance. In the case of methods that do not produce probabilistic outputs, we make mean predictions. In the case of the synthetic dataset where we have access to all true values, we use the observed points to infer the values at the rest of the available points. We repeat each real data experiment five times using different random seeds to initialize the model parameters. We assess performance using the negative log likelihood, which is our primary metric of interest. We also report mean squared and mean absolute error. For all experiments, we select hyper-parameters on the held-out validation set using grid search and then apply the best trained model to the test set. The hyper-parameter ranges searched for each model and dataset are fully described in Appendix A.5.
Models: We compare our proposed model HeTVAE to several probabilistic and deterministic interpolation methods. We compare to two Gaussian processes regression (GPR) approaches. The most basic GP model for multivariate time series fits one GPR model per dimension. This approach is known as a single task GP model (STGP) (Rasmussen & Williams, 2006). A potentially better option is to model data using a Multi Task GP (MTGP) (Bonilla et al., 2008). This approach models the correlations both across different dimensions and across time by defining a kernel expressed as the Hadamard product of a temporal kernel (as used in the STGP) and a task kernel. We also compare to several VAE-based approaches. These approaches use a homoscedastic output distribution with different encoder and decoder architectures. HVAE RNN employs a gated recurrent unit network (Chung et al., 2014) as encoder and decoder, HVAE RNN-ODE (Chen et al., 2018) replaces the RNN decoder with a neural ODE, HVAE ODE-RNN-ODE (Rubanova et al., 2019) employs
a ODE-RNN encoder and neural ODE decoder. Finally, we compare to HTVAE mTAN (Shukla & Marlin, 2021a), a temporal VAE model consisting of multi-time attention networks producing homoscedastic output. For VAE models with homoscedastic output, we treat the output variance term as a hyperparameter and select the variance using log likelihood on the validation set. Architecture details for these methods can be found in Appendix A.4. As baselines, we also consider deterministic mean and forward imputation-based methods. Forward imputation always predicts the last observed value on each dimension, while mean imputation predicts the mean of all the observations for each dimension.
Synthetic Data Results: Figure 2 shows sample visualization output for the synthetic dataset. For this experiment, we compare HTVAE mTAN, the single task Gaussian process STGP, and the proposed HeTVAE model. We vary the number of observed points (3, 10, 20) and each model is used to infer the distribution over the remaining time points. We draw multiple samples from the VAE latent state for HeTVAE and HTVAE mTAN and visualize the distribution of the resulting mixture. As we can see, the interpolations produced by HTVAE mTAN have approximately constant uncertainty across time and this uncertainty level does not change even when the number of points conditioned on increases. On the other hand, both HeTVAE and STGP show variable uncertainty across time. Their uncertainty reduces in the vicinity of input observations and increases in gaps between observations. Even though the STGP has an advantage in this experiment (the synthetic data were generated with an RBF kernel smoother and STGP uses RBF kernel as the covariance function), the proposed model HeTVAE shows comparable interpolation performance. We show more qualitative results in Appendix A.3.
Real Data Results: Tables 1, 2 and 3 compare the interpolation performance of all the approaches on PhysioNet, MIMIC-III and Climate dataset respectively. HeTVAE outperforms the prior approaches with respect to the negative log likelihood score on all three datasets. Gaussian Process based methods − STGP and MTGP achieve second and third best performance respectively. We emphasize that while the MAE and MSE values for some of the prior approaches are close to those obtained by the HeTVAE model, the primary metric of interest for comparing probabilistic interpolation approaches is log likelihood, where the HeTVAE performs much better than the other methods.
We note that the MAE/MSE of the VAE-based models with homoscedastic output can be improved by using a small fixed variance during training. However, this produces even worse log likelihood values. Further, we note that the current implementation of MTGP is not scalable to the Climate dataset (270 dimensions). We provide experiments on an additional dataset in Appendix A.1.
Ablation Results: Table 4 shows the results of ablating several different components of the HeTVAE model and training procedure. The first row shows the results for the full proposed approach. The HeTVAE - ALO ablation shows the result of removing the augmented learning objective and training the model only using the ELBO. This results in an immediate drop in performance on PhysioNet. HeTVAE - DET removes the deterministic pathway from the model, resulting in a performance drop on both MIMIC-III and PhysioNet. HeTVAE - INT removes the intensity encoding pathway from the UnTAND module. It results in a large drop in performance on both datasets. HeTVAE - HET- ALO removes the heteroscedastic layer and the augmented learning objective (since the augmented learning objective is introduced to improve the learning in the presence of heteroscedastic layer), resulting in a highly significant drop on both datasets. These results show that all of the components included in the proposed model contribute to improved model performance. We provide more ablation results in Appendix A.2 and discuss hyperparameter selection in Appendix A.5.
5 DISCUSSION AND CONCLUSIONS
In this paper, we have proposed the Heteroscedastic Temporal Variational Autoencoder (HeTVAE) for probabilistic interpolation of irregularly sampled time series data. HeTVAE consists of an input sparsity-aware encoder, parallel deterministic and probabilistic pathways for propagating input uncertainty to the output, and a heteroscedastic output distribution to represent variable uncertainty in the output interpolations. Furthermore, we propose an augmented training objective to combat the presence of additional local optima that arise from the use of the heteroscedastic output structure. Our results show that the proposed model significantly improves uncertainty quantification in the output interpolations as evidenced by significantly improved log likelihood scores compared to several baselines and state-of-the-art methods. While the HeTVAE model can produce a probability distribution over an arbitrary collection of output time points, it is currently restricted to producing marginal distributions. As a result, sampling from the model does not necessarily produce smooth trajectories as would be the case with GPR-based models. Augmenting the HeTVAE model to account for residual correlations in the output layer is an interesting direction for future work.
6 REPRODUCIBILITY STATEMENT
The source code for reproducing the results in this paper is available at https://github.com/reml-lab/hetvae. It contains the instructions to reproduce the results in the paper, including the hyperparameters. The hyperparameter ranges searched for each model are fully described in Appendix A.5. The source code also includes the synthetic dataset generation process as well as one of the real-world datasets. The other datasets can be downloaded and prepared following the preprocessing steps noted in Appendix A.6.1.
ACKNOWLEDGEMENTS
Research reported in this paper was partially supported by the National Institutes of Health under award number 1P41EB028242.
A APPENDIX
A.1 ADDITIONAL RESULTS
We also perform experiments on the UCI electricity dataset (described in Appendix A.6.1). We follow the same experiment protocols described in Section 4. As we can see from Table 5, the proposed model HeTVAE outperforms the prior approaches across all three metrics.
A.2 ABLATION STUDY
Tables 6 and 7 show the complete results of ablating several different components of the HeTVAE model and training procedure with respect to all three evaluation metrics on PhysioNet and MIMIC-III, respectively. We denote the different components of the HeTVAE model as follows: HET: heteroscedastic output layer; ALO: augmented learning objective; INT: intensity encoding; DET: deterministic pathway. The results show selected individual and compound ablations of these components and indicate that all of these components contribute significantly to the model's performance in terms of the negative log likelihood score. We provide detailed comments below.
Effect of Heteroscedastic Layer: Since the augmented learning objective is introduced to improve the learning in the presence of the heteroscedastic layer, we remove the augmented learning objective (ALO) along with the heteroscedastic layer (HET). This ablation corresponds to HeTVAE - HET - ALO. As we can see from both Tables 6 and 7, this results in a highly significant drop in the log likelihood performance as compared to the full HeTVAE model on both datasets. However, it results in only a slight drop in performance with respect to MAE and MSE, which is sensible as the HET component only affects uncertainty sensitive performance metrics.
Effect of Intensity Encoding: HeTVAE - INT removes the intensity encoding pathway from the UnTAND module. It results in an immediate drop in performance on both datasets. We also compare the effect of intensity encoding after removing the deterministic pathway and the augmented learning objective. These ablations are shown in HeTVAE - DET - ALO and HeTVAE - INT - DET - ALO. The performance drop is less severe in this case because of the propensity of the heteroscedastic output layer to get stuck in poor local optima in the absence of the augmented learning objective (ALO).
Effect of Augmented Learning Objective: The HeTVAE - ALO ablation shows the result of removing the augmented learning objective and training the model using only the ELBO. This results in an immediate drop in performance on PhysioNet. The performance drop is less severe on MIMIC-III. We further perform this ablation without the DET component and observe severe drops in performance across all metrics on both datasets. These ablations correspond to HeTVAE - DET and HeTVAE - DET - ALO. This shows that along with the ALO component, the DET component also constrains the model from getting stuck in local optima where all of the structure in the data is explained as noise. We show interpolations corresponding to these ablations in Appendix A.3.1.
Effect of Deterministic Pathway: HeTVAE - DET removes the deterministic pathway from the model, resulting in a performance drop on both MIMIC-III and PhysioNet across all metrics. We further compare the performance of both the probabilistic and deterministic pathways in isolation as shown by ablation HeTVAE - DET - ALO and HeTVAE - PROB - ALO. We observe that the
deterministic pathway HeTVAE - PROB - ALO outperforms the probabilistic pathway HeTVAE - DET - ALO in terms of log likelihood on MIMIC-III, while the opposite is true in the case of PhysioNet. However, on both datasets, using only the deterministic pathway (HeTVAE - PROB - ALO) achieves better MAE and MSE scores as compared to using only the probabilistic pathway (HeTVAE - DET - ALO).
A.3 VISUALIZATIONS
A.3.1 INTERPOLATIONS ON PHYSIONET
Figure 3 shows example interpolations on the PhysioNet dataset. Following the experimental setting mentioned in Section 4, the models were trained using all dimensions and inference uses all dimensions. We only show interpolations corresponding to Heart Rate as an illustration. As we can see, the STGP and HeTVAE models exhibit a good fit and variable uncertainty on the edges where there are no observations. We can also see that mTAN trained with homoscedastic output is not able to produce as good a fit because of the fixed variance at the output (discussed in Section 4).
The most interesting observation is the performance of HeTVAE - DET - ALO, an ablation of the HeTVAE model that retains the heteroscedastic output, but removes the deterministic pathway and the augmented learning objective. This ablation significantly underfits the data and performs similarly to mTAN. This is an example of local optima that arises from the use of a heteroscedastic output layer where the mean is excessively smooth and all of the structure in the data is explained as noise. We address this with the use of the augmented learning objective described in Section 3.3. As seen in Figure 3, adding the augmented learning objective (HeTVAE - DET) clearly improves performance.
A.3.2 SYNTHETIC DATA VISUALIZATIONS: SPARSITY
In this section, we show supplemental interpolation results on the synthetic dataset. The setting here is the same as in Section 4. Figure 4 compares HTVAE mTAN, the single task Gaussian process STGP, the proposed HeTVAE model and an ablation of the proposed model without intensity encoding, HeTVAE - INT. We vary the number of observed points (3, 10, 20) and each model is used to infer the distribution over the remaining time points. We draw multiple samples from the VAE latent state for HeTVAE, HeTVAE - INT and HTVAE mTAN, and visualize the distribution of the resulting mixture. Figure 4 illustrates the interpolation performance of each of the models. As we can see, the interpolations produced by HTVAE mTAN have approximately constant uncertainty across time and this uncertainty level does not change even when the number of points conditioned on increases. On the other hand, both HeTVAE and STGP show variable uncertainty across time. Their uncertainty reduces in the vicinity of input observations and increases in gaps between observations. The HeTVAE - INT model performs slightly better than the HTVAE mTAN model but it does not show variable uncertainty due to input sparsity like HeTVAE.
A.3.3 SYNTHETIC DATA VISUALIZATIONS: INTER-OBSERVATION GAP
To demonstrate the effectiveness of the intensity encoder (INT), we perform another experiment on the synthetic dataset where we increase the maximum inter-observation gap between the observations.
We follow the same training protocol as described in Section 4. At test time, we condition on 10 observed points with an increasing maximum inter-observation gap. We vary the maximum inter-observation gap from 20% to 80% of the length of the original time series. Each model is used to infer single time point marginal distributions over values at the rest of the available time points in the test instance.
Figure 5 shows the interpolations with increasing maximum inter-observation gap. STGP and HeTVAE show variable uncertainty with time and the uncertainty increases with increasing maximum inter-observation gap. On the other hand, HTVAE mTAN with homoscedastic output shows approximately constant uncertainty with time and also across different maximum inter-observation gaps. These results clearly show that HTVAE mTAN produces over-confident probabilistic interpolations over large gaps.
Furthermore, we show an ablation of the proposed model HeTVAE - INT, where we remove the intensity encoder and perform the interpolations. As we see from the figure, this leads to approximately constant uncertainty across time as well as different maximum inter-observation gaps. This shows that the HeTVAE model is not able to capture uncertainty due to input sparsity as effectively without the intensity encoder.
A.4 ARCHITECTURE DETAILS
HeTVAE: Learnable parameters in the UnTAND architecture shown in Figure 1a include the weights of the three linear layers and the parameters of the shared time embedding functions. Each time embedding function is a one-layer fully connected network with a sine function non-linearity. The two linear layers on top of the embedding functions are linear projections from the time embedding dimension $d_e$ to $d_e/H$, where $H$ is the number of time embeddings. Note that these linear layers do not share parameters. The third linear layer performs a linear projection from $2 \cdot D \cdot H$ to $J$. It takes as input the concatenation of the VAL encoder output and INT encoder output and produces an output of dimension $J$. $d_e$, $H$ and $J$ are all hyperparameters of the architecture. The ranges considered are described in the next section.
The HeTVAE model shown in Figure 1b consists of three MLP blocks apart from the UnTAND modules. The MLP in the deterministic path is a one-layer fully connected layer that projects the UnTAND output to match the dimension of the latent state. The remaining MLP blocks are two-layer fully connected networks with matching width and ReLU activations. The MLP in the decoder takes the output of the UnTAND module and outputs the mean and variance of dimension $D$ at each of the $|\mathbf{t}'|$ query time points. We use a softplus transformation on the decoder output to get the variance: $\sigma_i = 0.01 + \mathrm{softplus}(f^{dec}_\sigma(\mathbf{h}^{dec}_i))$. Similarly, in the probabilistic path, we apply an exponential transformation to get the variance of the $q$ distribution: $\sigma^2_k = \exp(f^{enc}_\sigma(\mathbf{h}^{enc}_k))$. We use $K$ reference time points regularly spaced between 0 and 1. $K$ is considered to be a hyperparameter of the architecture. The ranges considered are described in the next section.
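To make the output parameterization concrete, the following is a minimal PyTorch sketch of a heteroscedastic decoder head implementing the variance transformation above; the module and variable names are illustrative and are not taken from the released implementation.

```python
import torch.nn as nn
import torch.nn.functional as F

class HetOutputHead(nn.Module):
    """Illustrative heteroscedastic output head: maps a decoder embedding
    h_dec to a per-time-point mean and strictly positive variance."""
    def __init__(self, embed_dim, data_dim):
        super().__init__()
        self.f_mu = nn.Linear(embed_dim, data_dim)
        self.f_sigma = nn.Linear(embed_dim, data_dim)

    def forward(self, h_dec):
        mu = self.f_mu(h_dec)
        # softplus keeps the variance positive; the 0.01 floor avoids
        # degenerate near-zero variances, matching the transformation above
        var = 0.01 + F.softplus(self.f_sigma(h_dec))
        return mu, var
```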
Baselines: For the HTVAE mTAN, we use a similar architecture to HeTVAE where we remove the deterministic path and the heteroscedastic output layer, and use the mTAND module instead of the UnTAND module (Shukla & Marlin, 2021a). We use the same architectures for the ODE and RNN-based VAEs as Rubanova et al. (2019).
A.5 HYPERPARAMETERS
HeTVAE: We fix the time embedding dimension to $d_e = 128$. The number of embeddings $H$ is searched over the range {1, 2, 4}. We search the number of reference points $K$ over the range {4, 8, 16, 32}, the latent dimension over the range {8, 16, 32, 64, 128}, the output dimension of UnTAND $J$ over the range {16, 32, 64, 128}, and the width of the two-layer fully connected layers over {128, 256, 512}. In the augmented learning objective, we search for $\lambda$ over the range {1.0, 5.0, 10.0}. We use the Adam optimizer for training the models. Experiments are run for 2,000 iterations with a learning rate of 0.0001 and a batch size of 128. The best hyperparameters are reported in the code. We use 100 samples from the probabilistic latent state to compute the evaluation metrics.
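For reference, the search space described above can be written down as a simple grid; the dictionary below is an illustrative summary of the listed ranges, not the authors' tuning code.

```python
# Hypothetical hyperparameter grid mirroring the ranges listed above
hetvae_grid = {
    "time_embed_dim_de": [128],          # fixed
    "num_embeddings_H": [1, 2, 4],
    "num_ref_points_K": [4, 8, 16, 32],
    "latent_dim": [8, 16, 32, 64, 128],
    "untand_out_dim_J": [16, 32, 64, 128],
    "mlp_width": [128, 256, 512],
    "lambda_aug": [1.0, 5.0, 10.0],
    "learning_rate": [1e-4],             # fixed
    "batch_size": [128],                 # fixed
}
```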
Ablations: We note that the ablations were not performed with a fixed architecture. For all the ablation models, we tuned the hyperparameters and reported the results with the best hyperparameter setting. We also made sure that the hyperparameter ranges for ablated models with just the deterministic/probabilistic path were wide enough that the optimal ablated models did not saturate the ends of the ranges for architectural hyperparameter values, including the dimensionality of the latent representations.
VAE Baselines: For VAE models with homoscedastic output, we treat the output variance term as a hyperparameter and select the variance over the range {0.01, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0}. For HTVAE mTAN, we search the corresponding hyperparameters over the same range as HeTVAE. For ODE- and RNN-based VAEs, we search for the GRU hidden units, the latent dimension, and the number of hidden units in the fully connected network for the ODE function in the encoder and decoder over the range {20, 32, 64, 128, 256}. For ODEs, we also search the number of layers in the fully connected network in the range {1, 2, 3}. We use a batch size of 50 and a learning rate of 0.001. We use 100 samples from the latent state to compute the evaluation metrics.
Gaussian Processes: For the single task GP, we use a squared exponential kernel. In the case of the multi-task GP, we experimented with the Matérn kernel with different smoothness parameters and the squared exponential kernel. We found that the Matérn kernel performs better. We use maximum marginal likelihood to train the GP hyperparameters. We search for the learning rate over the range {0.1, 0.01, 0.001} and run for 100 iterations. We search for the smoothness parameter over the range {0.5, 1.5, 2.5}. We search for the batch size over the range {32, 64, 128, 256}.
A.6 TRAINING DETAILS
A.6.1 DATA GENERATION AND PREPROCESSING
Synthetic Data Generation: We generate a synthetic dataset consisting of 2000 trajectories each consisting of 50 time points with values between 0 and 1. We fix 10 reference time points and draw values for each from a standard normal distribution. We then use an RBF kernel smoother with a fixed bandwidth of α = 120.0 to construct local interpolations over the 50 time points. The data generating process is shown below:
$$z_k \sim \mathcal{N}(0, 1), \quad k \in \{1, \cdots, 10\}, \qquad r_k = 0.1\,k, \qquad t_i = 0.02\,i, \quad i \in \{1, \cdots, 50\}$$

$$x_i = \frac{\sum_k \exp(-\alpha (t_i - r_k)^2) \cdot z_k}{\sum_{k'} \exp(-\alpha (t_i - r_{k'})^2)} + \epsilon_i, \qquad \epsilon_i \sim \mathcal{N}(0, 0.1^2)$$
We randomly sample 3 to 10 observations from each trajectory to simulate a sparse and irregularly sampled univariate time series.
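The generative process above is simple enough to restate in code. Below is a small NumPy sketch (function names are ours) that produces one trajectory and a sparse, irregularly sampled subsample of it.

```python
import numpy as np

def generate_trajectory(alpha=120.0, rng=None):
    """Sketch of the synthetic data generating process described above."""
    if rng is None:
        rng = np.random.default_rng()
    z = rng.standard_normal(10)              # z_k ~ N(0, 1)
    r = 0.1 * np.arange(1, 11)               # r_k = 0.1 * k
    t = 0.02 * np.arange(1, 51)              # t_i = 0.02 * i
    # RBF kernel smoother over the 10 reference values, plus N(0, 0.1^2) noise
    w = np.exp(-alpha * (t[:, None] - r[None, :]) ** 2)
    x = (w @ z) / w.sum(axis=1) + rng.normal(0.0, 0.1, size=t.shape)
    return t, x

def subsample(t, x, rng=None):
    """Keep 3 to 10 random observations to simulate sparse irregular sampling."""
    if rng is None:
        rng = np.random.default_rng()
    idx = np.sort(rng.choice(len(t), size=rng.integers(3, 11), replace=False))
    return t[idx], x[idx]
```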
PhysioNet: The PhysioNet Challenge 2012 dataset (Silva et al., 2012) consists of multivariate time series data with 37 physiological variables from intensive care unit (ICU) records. Each record contains measurements from the first 48 hours after admission. We use the protocols described in Rubanova et al. (2019) and round the observation times to the nearest minute, resulting in 2880 possible measurement times per time series. The data set includes 8000 instances that can be used for interpolation experiments. PhysioNet is freely available for research use and can be downloaded from https://physionet.org/content/challenge-2012/.
MIMIC-III: The MIMIC-III data set (Johnson et al., 2016) is a multivariate time series dataset containing sparse and irregularly sampled physiological signals collected at Beth Israel Deaconess Medical Center. We use the procedures proposed by Shukla & Marlin (2019) to process the data set. This results in 53,211 records, each containing 12 physiological variables. We use all 53,211 instances to perform interpolation experiments. MIMIC-III is available through a permissive data use agreement which can be requested at https://mimic.mit.edu/iii/gettingstarted/. Once the request is approved, the dataset can be downloaded from https://mimic.mit.edu/iii/gettingstarted/dbsetup/. The instructions and code to extract the MIMIC-III dataset are given at https://github.com/mlds-lab/interp-net.
Climate Dataset: The U.S. Historical Climatology Network Monthly (USHCN) dataset (Menne et al., 2016) is a publicly available dataset consisting of daily measurements of 5 climate variables: daily maximum temperature, daily minimum temperature, whether it was a snowy day or not, total daily precipitation, and daily snow precipitation. It contains data from the last 150 years for 1,218 meteorological stations scattered over the United States. Following the preprocessing steps of Che et al. (2018b), we extract daily climate data for 100 consecutive years starting from 1910 to 2009 from 54 stations in California. To get multi-rate time series data, we split the stations into 3 groups with sampling rates of 2 days, 1 week, and 1 month respectively. We divide the data into smaller time series consisting of yearly data and end up with a dataset of 100 examples each consisting of 270 features. We perform the interpolation task on this dataset where we compute the feature values every day using the multi-rate time series data. The dataset is available for download at https://cdiac.ess-dive.lbl.gov/ftp/ushcn_daily/.
Electricity Dataset: The UCI household electricity dataset contains measurements of seven different quantities related to electricity consumption in a household. The data are recorded every minute for 47 months between December 2006 and November 2010, yielding over 2 million observations. To simulate irregular sampling, we keep observations only at durations sampled from
an exponential distribution with λ = 20. Following the preprocessing step of Binkowski et al. (2018), we also do random feature sampling where we choose one out of seven features at each time step. We divide the data into smaller time series consisting of monthly data and end up with a dataset of 1431 examples each consisting of 7 features. We perform interpolation experiments on this dataset where we compute feature values every minute using the irregularly sampled data. The dataset is available for download at https://archive.ics.uci.edu/ml/datasets/individual+household+electric+power+consumption.
Dataset Preprocessing: We rescale time to be in [0, 1] for all datasets. We also re-scale all dimensions. In the case of PhysioNet and MIMIC-III, for each dimension we first remove outliers in the outer 0.1% percentile region. We then compute the mean and standard deviation of all observations on that dimension. The outlier detection step is used to prevent rare large values in the data set from affecting the normalization statistics. Finally, we z-transform all of the available data (including the points identified as outliers). No data points are discarded from the data sets during the normalization process.
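A minimal sketch of this per-dimension normalization is given below; the exact percentile cutoffs are our interpretation of the "outer 0.1% percentile region" and may differ from the released code.

```python
import numpy as np

def normalize_dimension(values, pct=0.1):
    """Compute mean/std with the outer pct% percentile regions excluded,
    then z-transform all values (outliers included, nothing discarded)."""
    lo, hi = np.percentile(values, [pct, 100.0 - pct])
    inliers = values[(values >= lo) & (values <= hi)]
    return (values - inliers.mean()) / inliers.std()
```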
A.6.2 SOURCE CODE
The source code for reproducing the results in this paper is available at https://github.com/reml-lab/hetvae.
A.6.3 COMPUTING INFRASTRUCTURE
All experiments were run on Nvidia Titan X and 1080 Ti GPUs. The time required to run all the experiments in this paper, including hyperparameter tuning, was approximately eight days using eight GPUs. | 1. What is the main contribution of the paper regarding probabilistic interpolation of time series?
2. What are the strengths of the proposed model HeTVAE, particularly in its architectural improvements and novel contributions?
3. What are the limitations of the current version of the paper, such as significance of heteroscedasticity and lack of specification or explication of modeling choices?
4. How does the reviewer assess the performance of HeTVAE experimentally, and what suggestions do they have for additional ablation studies?
5. What other remarks and questions does the reviewer have regarding scalability, implementation, and reproducibility? | Summary Of The Paper
Review | Summary Of The Paper
This paper introduces a novel model, HeTVAE, for probabilistic interpolation of time series that are irregularly sampled. HeTVAE builds on prior work by complementing it with a learned time-dependent output variance in the VAE and architectural improvements. The latter include a new branch accounting for the distribution of the sampled timestamps in the series and the addition of a deterministic branch bypassing the stochastic variational latent variable. The performance of HeTVAE is evaluated on multiple datasets against various baselines and via ablation studies.
Review
Contribution
Advantages
The tackled problem of probabilistic interpolation of time series is relevant and especially valuable as it has received less attention than the forecasting task, at least in the neural networks community. This is especially the case in the considered setting where observations are not synchronized between dimensions.
The proposed model is for the most part interesting and well motivated, justifying the additions that are made on top of the prior mTAN. These contributions are, to the best of my knowledge, novel. The so-called heteroscedasticity of the output variance answers a crucial issue of standard VAEs with constant output variance in this context of application - even though this has already been considered elsewhere. The intensity branch of the introduced network clearly addresses the shortcomings of prior state-of-the-art methods. Accordingly, the paper is mostly well written and easy to read.
Experimentally, the performance of HeTVAE is state-of-the-art by a large margin, showing the benefits of the approach. Ablation studies show the individual contribution of each model component; I suggest that the authors include the full ablation of the appendix in the main paper. The experiments are sufficiently well designed and diverse on real-world applications in order to correctly assess this performance. Furthermore, qualitative experiments with examples of interpolation are appealing and highlight the impact of the model and its different components.
For all these reasons, I think that this paper is interesting and might be ready for publication at ICLR after the revision during the rebuttal. Indeed, I would express reservations, that I detail below.
Limitations and Potential Improvements
A first limitation of the current version of the paper deals with the significance of the introduced heteroscedasticity. While very few models that are able to tackle the same task as HeTVAE seem to leverage temporally varying output variance, it is unclear whether this is an inherent advantage of the described model, or if it is orthogonal to the other architectural considerations. In other words, could heteroscedasticity be reasonably applied to the considered RNN/ODE-based baselines? If it can, I recommend that the authors include these augmented baselines in their experiments to better contextualize the performance of HeTVAE. In any case, a more detailed discussion of the related work is necessary in this regard. In particular, I would advise the authors to consider other works which, to my understanding, could be included in the related work and considered baselines with heteroscedasticity [1, 2].
A second limitation is a partial lack of specification or explication of the modeling choices. In particular, the motivation and intuition behind the deterministic path is missing from the paper, to my understanding; I am wondering about the impact of its introduction in parallel with the usual variational latent variable, since the deterministic variable is unconstrained whereas the stochastic one is constrained by the KL term in the loss. An additional ablation showing the performance of HeTVAE without the stochastic branch would thus be interesting. Moreover, the role of the reference timestamps r is unclear and they are only specified in the appendix; could the authors discuss their necessity in the model and the relevance of their choice? Further discussion about how the baselines are adapted for the considered task should also be included. Finally, it is unclear until Section 4 that the learned VAE also learns to predict interpolations besides reconstructing its inputs: this information should be clearly stated earlier, probably in the description of the ELBO.
Other Remarks and Questions
Scalability of the Intensity Pathway
Could the authors comment on the scalability of the implementation of the intensity pathway as described in Equation (1)? To my understanding, the numerator pools over all elements of the dataset and it would seem that a large-scale dataset would prevent an efficient computation of this branch.
Nature of the Augmented Learning Objective
There seems to be a typo in Equation (11): the augmented learning objective should be minimized, so it should be negatively added to the ELBO.
Code and Supplementary Material
Based on my limited review, the experiments of this paper seem to be reproducible. The provided code is appreciated. I recommend the authors to remove the hidden folders in the archive, especially the .git which is irrelevant and makes the archive heavier than needed.
References
[1] X. Li et al. Scalable Gradients for Stochastic Differential Equations. AISTATS 2020.
[2] A. Norcliffe et al. Neural ODE Processes. ICLR 2021.
ICLR | Title
Heteroscedastic Temporal Variational Autoencoder For Irregularly Sampled Time Series
Abstract
Irregularly sampled time series commonly occur in several domains where they present a significant challenge to standard deep learning models. In this paper, we propose a new deep learning framework for probabilistic interpolation of irregularly sampled time series that we call the Heteroscedastic Temporal Variational Autoencoder (HeTVAE). HeTVAE includes a novel input layer to encode information about input observation sparsity, a temporal VAE architecture to propagate uncertainty due to input sparsity, and a heteroscedastic output layer to enable variable uncertainty in output interpolations. Our results show that the proposed architecture is better able to reflect variable uncertainty through time due to sparse and irregular sampling than a range of baseline and traditional models, as well as recent deep latent variable models that use homoscedastic output layers.1
1 INTRODUCTION
In this paper, we propose a novel deep learning framework for probabilistic interpolation of irregularly sampled time series. Irregularly sampled time series data occur in multiple scientific and industrial domains including finance (Manimaran et al., 2006), climate science (Schulz & Stattegger, 1997) and healthcare (Marlin et al., 2012; Yadav et al., 2018). In some domains including electronic health records and mobile health studies (Cheng et al., 2017), there can be significant variation in inter-observation intervals through time. This is due to the complexity of the underlying observation processes that can include “normal” variation in observation times combined with extended, block-structured periods of missingness. For example, in the case of ICU EHR data, this can occur due to patients being moved between different locations for procedures or tests, resulting in missing physiological sensor data for extended periods of time. In mobile health studies, the same problem can occur due to mobile sensor batteries running out, or participants forgetting to wear or carry devices.
In such situations, it is of critical importance for interpolation models to be able to correctly reflect the variable input uncertainty that results from variable observation sparsity so as not to provide overly confident inferences. However, modeling time series data subject to irregular sampling poses a significant challenge to machine learning models that assume fully-observed, fixed-size feature representations (Marlin et al., 2012; Yadav et al., 2018; Shukla & Marlin, 2021b). The main challenges in dealing with such data include the presence of variable time gaps between the observation time points, partially observed feature vectors caused by the lack of temporal alignment across different dimensions, as well as different data cases, and variable numbers of observations across dimensions and data cases. Significant recent work has focused on developing specialized models and architectures to address these challenges in modeling irregularly sampled multivariate time series (Li & Marlin, 2015; 2016; Lipton et al., 2016; Futoma et al., 2017; Che et al., 2018a; Shukla & Marlin, 2019; Rubanova et al., 2019; Horn et al., 2020; Li & Marlin, 2020; Shukla & Marlin, 2021a; De Brouwer et al., 2019; Tan et al., 2020; Kidger et al., 2020).
Recently, Shukla & Marlin (2021a) introduced the Multi-Time Attention Network (mTAN) model, a variational autoencoder (VAE) architecture for continuous-time interpolation of irregularly sampled
1 Implementation available at https://github.com/reml-lab/hetvae
time series. This model was shown to provide state-of-the-art classification and deterministic interpolation performance. However, like many VAEs, the mTAN architecture produces a homoscedastic output distribution conditioned on the latent state. This means that the model can only reflect uncertainty due to variable input sparsity through variations in the VAE latent state. As we will show, this mechanism is insufficient to capture differences in uncertainty over time. On the other hand, Gaussian Process Regression-based (GPR) methods (Rasmussen & Williams, 2006) have the ability to reflect variable uncertainty through the posterior inference process. The main drawbacks of GPR-based methods are their significantly higher run times during both training and inference, and the added restriction to define positive definite covariance functions for multivariate time series.
In this work, we propose a novel encoder-decoder architecture for multivariate probabilistic time series interpolation that we refer to as the Heteroscedastic Temporal Variational Autoencoder or HeTVAE. HeTVAE aims to address the challenges described above by encoding information about input sparsity using an uncertainty-aware multi-time attention network (UnTAN), flexibly capturing relationships between dimensions and time points using both probabilistic and deterministic latent pathways, and directly representing variable output uncertainty via a heteroscedastic output layer.
The proposed UnTAN layer generalizes the previously introduced mTAN layer with an additional intensity network that can more directly encode information about input uncertainty due to variable sparsity. The proposed UnTAN layer uses an attention mechanism to produce a distributed latent representation of irregularly sampled time series at a set of reference time points. The UnTAN module thus provides an interface between input multivariate, sparse and irregularly sampled time series data and more traditional deep learning components that expect fixed-dimensional or regularly spaced inputs. We combat the presence of additional local optima that arise from the use of a heteroscedastic output layer by leveraging an augmented training objective where we combine the ELBO loss with an uncertainty agnostic loss component. The uncertainty agnostic component helps to prevent learning from converging to local optima where the structure in data is explained as noise.
We evaluate the proposed architecture on both synthetic and real data sets. Our approach outperforms a variety of baseline models and recent approaches in terms of log likelihood, which is our primary metric of interest in the case of probabilistic interpolation. Finally, we perform ablation testing of different components of the architecture to assess their impact on interpolation performance.
2 RELATED WORK
Keeping in mind the focus of this work, we concentrate our discussion of related work on deterministic and probabilistic approaches applicable to the interpolation and imputation tasks.
Deterministic Interpolation Methods: Deterministic interpolation methods can be divided into filtering and smoothing-based approaches. Filtering-based approaches infer the values at a given time by conditioning only on past observations. For example, Han-Gyu Kim et al. (2017) use a unidirectional RNN for missing data imputation that conditions only on data from the relative past of the missing observations. On the other hand, smoothing-based methods condition on all possible observations (past and future) to infer any unobserved value. For example, Yoon et al. (2018) and Cao et al. (2018) present missing data imputation approaches based on multi-directional and bi-directional RNNs. These models typically use the gated recurrent unit with decay (GRU-D) model (Che et al., 2018a) as a base architecture for dealing with irregular sampling. Interpolation-prediction networks take a different approach to interfacing with irregularly sampled data that is based on the use of temporal kernel smoother-based layers (Shukla & Marlin, 2019). Shan & Oliva (2021) propose a hierarchical imputation strategy based on set-based architectures for imputation in irregularly sampled time series. Of course, the major disadvantage of deterministic interpolation approaches is that they do not express uncertainty over output interpolations and thus cannot be applied to the problem of probabilistic interpolation without modifications.
Probabilistic Interpolation Methods: The two primary building blocks for probabilistic interpolation and imputation of multivariate irregularly sampled time series are Gaussian process regression (GPR) (Rasmussen & Williams, 2006) and variational autoencoders (VAEs) (Rezende et al., 2014; Kingma & Welling, 2014). GPR models have the advantage of providing an analytically tractable full joint posterior distribution over interpolation outputs when conditioned on irregularly sampled input data. Commonly used covariance functions have the ability to translate variable input observation
density into variable interpolation uncertainty. GPR-based models have been used as the core of several approaches for supervised learning and forecasting with irregularly sampled data (Ghassemi et al., 2015; Li & Marlin, 2015; 2016; Futoma et al., 2017). However, GPR-based models can become somewhat cumbersome in the multivariate setting due to the positive definiteness constraint on the covariance function (Rasmussen & Williams, 2006). The use of separable covariance functions is one common approach to the construction of GPR models over multiple dimensions (Bonilla et al., 2008), but this construction requires all dimensions to share the same temporal kernel parameters. A further drawback of GP-based methods is their significantly higher run times relative to deep learning-based models when applied to larger-scale data (Shukla & Marlin, 2019).
Variational autoencoders (VAEs) combine probabilistic latent states with deterministic encoder and decoder networks to define a flexible and computationally efficient class of probabilistic models that generalize classical factor analysis (Kingma & Welling, 2014; Rezende et al., 2014). Recent research has seen the proposal of several new VAE-based models for irregularly sampled time series. Chen et al. (2018) proposed a latent ordinary differential equation (ODE) model for continuous-time data using an RNN encoder and a neural ODE decoder. Building on the prior work of Chen et al. (2018), Rubanova et al. (2019) proposed a latent ODE model that replaces the RNN with an ODE-RNN model as the encoder. Li et al. (2020) replace the deterministic ODEs with stochastic differential equations (SDEs). Norcliffe et al. (2021) extend the prior work on neural ODEs by combining them with neural processes (Garnelo et al., 2018). Shukla & Marlin (2021a) proposed the Multi-Time Attention Network (mTAN) model, a VAE-based architecture that uses a multi-head temporal cross attention encoder and decoder module (the mTAND module) to provide the interface to multivariate irregularly sampled time series data. Fortuin et al. (2020) proposed a VAE-based approach for the task of smoothing in multivariate time series with a Gaussian process prior in the latent space to capture temporal dynamics. Garnelo et al. (2018); Kim et al. (2019) used heteroscedastic output layers to represent uncertainty in the case of fixed dimensional inputs, but these approaches are not applicable to irregularly sampled time series.
Similar to the mTAN model, the Heteroscedastic Temporal Variational Autoencoder (HeTVAE) model proposed in this work is an attention-based VAE architecture. The primary differences are that mTAN uses a homoscedastic output distribution that assumes constant uncertainty and that the mTAN model’s cross attention operation normalizes away information about input sparsity. These limitations are problematic in cases where there is variable input density through time resulting in the need for encoding, propagating, and reflecting that uncertainty in the output distribution. As we describe in the next section, HeTVAE addresses these issues by combining a novel sparsity-sensitive encoder module with a heteroscedastic output distribution and parallel probabilistic and deterministic pathways for propagating information through the model. Another important difference relative to these previous methods is that HeTVAE uses an augmented learning objective to address the underfitting of predictive variance caused by the use of the heteroscedastic layer.
3 PROBABILISTIC INTERPOLATION WITH THE HETVAE
In this section, we present the proposed architecture for probabilistic interpolation of irregularly sampled time series, the Heteroscedastic Temporal Variational Autoencoder (HeTVAE). HeTVAE leverages a sparsity-aware layer as the encoder and decoder in order to represent input uncertainty and propagate it to output interpolations. We begin by introducing notation. We then describe the architecture of the encoder/decoder network followed by the complete HeTVAE architecture.
3.1 NOTATION
We let $\mathcal{D} = \{s_n \mid n = 1, ..., N\}$ represent a data set containing $N$ data cases. An individual data case consists of a $D$-dimensional, sparse and irregularly sampled multivariate time series $s_n$. Different dimensions $d$ of the multivariate time series can have observations at different times, as well as different total numbers of observations $L_{dn}$. We follow the series-based representation of irregularly sampled time series (Shukla & Marlin, 2021b) and represent time series $d$ for data case $n$ as a tuple $s_{dn} = (\mathbf{t}_{dn}, \mathbf{x}_{dn})$, where $\mathbf{t}_{dn} = [t_{1dn}, ..., t_{L_{dn}dn}]$ is the list of time points at which observations are defined and $\mathbf{x}_{dn} = [x_{1dn}, ..., x_{L_{dn}dn}]$ is the corresponding list of observed values. We drop the data case index $n$ for brevity when the context is clear.
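As a concrete illustration, a single data case in this series-based representation can be stored as a list of per-dimension (times, values) pairs; the snippet below is purely illustrative and not taken from the released code.

```python
# One data case s_n with D = 2 dimensions in the series-based representation.
# Each dimension d keeps its own observation times t_dn and values x_dn,
# which may differ in length (L_dn) and alignment across dimensions.
s_n = [
    ([0.10, 0.35, 0.80], [1.2, 0.7, -0.3]),  # dimension 1: L_1n = 3
    ([0.25, 0.60], [5.1, 4.8]),              # dimension 2: L_2n = 2
]
```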
3.2 REPRESENTING INPUT SPARSITY
As noted in the previous section, the mTAN encoder module does not represent information about input sparsity due to the normalization of the attention weights. To address this issue, we propose an augmented module that we refer to as an Uncertainty Aware Multi-Time Attention Network (UnTAN). The UnTAN module is shown in Figure 1a. This module includes two encoding pathways that leverage a shared time embedding function and a shared attention function. The first encoding pathway (the intensity pathway, INT) focuses on representing information about the sparsity of observations while the second encoding pathway (the value pathway, VAL) focuses on representing information about values of observations. The outputs of these two pathways are concatenated and mixed via a linear layer to define the final output of the module. The mathematical description of the module is given in Equations 1 to 3 and is explained in detail below.
$$\mathrm{int}_h(r_k, \mathbf{t}_d) = \frac{\mathrm{pool}(\{\exp(\alpha_h(r_k, t_{id})) \mid t_{id} \in \mathbf{t}_d\})}{\mathrm{pool}(\{\exp(\alpha_h(r_k, t_{i'u})) \mid t_{i'u} \in \mathbf{t}_u\})} \qquad (1)$$

$$\mathrm{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d) = \frac{\mathrm{pool}(\{\exp(\alpha_h(r_k, t_{id})) \cdot x_{id} \mid t_{id} \in \mathbf{t}_d, \, x_{id} \in \mathbf{x}_d\})}{\mathrm{pool}(\{\exp(\alpha_h(r_k, t_{i'd})) \mid t_{i'd} \in \mathbf{t}_d\})} \qquad (2)$$

$$\alpha_h(t, t') = \frac{\phi_h(t)\, \mathbf{w} \mathbf{v}^T \phi_h(t')^T}{\sqrt{d_e}} \qquad (3)$$
Time Embeddings and Attention Weights: Similar to the mTAN module, the UnTAN module uses time embedding functions $\phi_h(t)$ to project univariate time values into a higher dimensional space. Each time embedding function is a one-layer fully connected network with a sine function non-linearity: $\phi_h(t) = \sin(\omega \cdot t + \beta)$. We learn $H$ time embeddings, each of dimension $d_e$. $\mathbf{w}$ and $\mathbf{v}$ are the parameters of the scaled dot product attention function $\alpha_h(t, t')$ shown in Equation 3. The scaling factor $1/\sqrt{d_e}$ is used to normalize the dot product to counteract the growth in the dot product magnitude with the increase in the time embedding dimension $d_e$.
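The following PyTorch sketch shows one time embedding and the attention score of Equation 3; the projection shapes follow the description in Appendix A.4, and all names are illustrative rather than taken from the released implementation.

```python
import math
import torch
import torch.nn as nn

class TimeEmbedding(nn.Module):
    """One learnable time embedding phi_h(t) = sin(omega * t + beta)."""
    def __init__(self, embed_dim):
        super().__init__()
        self.linear = nn.Linear(1, embed_dim)  # holds omega and beta

    def forward(self, t):                      # t: (..., 1) time values
        return torch.sin(self.linear(t))

def attention_scores(phi_q, phi_k, w, v):
    """Scaled dot-product scores alpha_h(t, t') of Equation 3.
    phi_q: (..., K, d_e) embedded query times; phi_k: (..., L, d_e) embedded
    key times; w, v: (d_e, d_e / H) per-head projection matrices."""
    d_e = phi_q.shape[-1]
    return (phi_q @ w) @ (phi_k @ v).transpose(-1, -2) / math.sqrt(d_e)
```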
Intensity Encoding: The intensity encoding pathway is defined by the function $\mathrm{int}_h(r_k, \mathbf{t}_d)$ shown in Equation 1. The inputs to the intensity function are a query time point $r_k$ and a vector $\mathbf{t}_d$ containing all the time points at which observations are available for dimension $d$. The numerator of the intensity function exponentiates the attention weights between $r_k$ and each time point in $\mathbf{t}_d$ to ensure positivity, then pools over the observed time points. The denominator of this computation is identical to the numerator, but the set of time points $\mathbf{t}_u$ that is pooled over is the union over all observed time points for dimension $d$ from all data cases.
Intuitively, if the largest attention weight between $r_k$ and any element of $\mathbf{t}_d$ is small relative to the attention weights between $r_k$ and the time points in $\mathbf{t}_u$, then the output of the intensity function will be low. Importantly, due to the use of the non-linear time embedding function, pairs of time points with high attention weights do not necessarily have to be close together in time, meaning the notion of intensity that the network expresses is significantly generalized.
We also note that different sets could be used for $\mathbf{t}_u$, including a regularly spaced set of reference time points. One advantage of using the union of all observed time points is that it fixes the maximum value of the intensity function at 1. The two pooling functions applicable in the computation of the intensity function are max and sum. If the time series is sparse, max works well because using sum in the sparse case can lead to very low output values. In a more densely observed time series, either sum or max can be used.
Value Encoding: The value encoding function $\mathrm{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d)$ is presented in Equation 2 in a form that highlights the symmetry with the intensity encoding function. The primary differences are that $\mathrm{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d)$ takes as input both observed time points $\mathbf{t}_d$ and their corresponding values $\mathbf{x}_d$, and the denominator of the function pools over $\mathbf{t}_d$ itself. While different pooling options could be used for this function, in practice we use sum-based pooling. These choices lead to a function $\mathrm{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d)$ that interpolates the observed values at the query time points using softmax weights derived from the attention function. The values of observed points with higher attention weights contribute more to the output value. This structure is equivalent to that used in the mTAN module when sum-based pooling is used. We can also clearly see that this function on its own cannot represent
information about input sparsity due to the normalization over $\mathbf{t}_d$. Indeed, the function is completely invariant to an additive decrease in all of the attention weights: $\alpha'_h(r_k, t_{id}) = \alpha_h(r_k, t_{id}) - \delta$.
Module Output: The last stage of the UnTAN module concatenates the value and intensity pathway representations and then linearly weights them together to form the final $J$-dimensional representation that is output by the module. The parameters of this linear stage of the model are $U^{int}_{hdj}$ and $U^{val}_{hdj}$. The value of the $j$-th dimension of the output at a query time point $r_k$ is given by Equation 4.
$$\mathrm{UnTAN}(r_k, \mathbf{t}, \mathbf{x})[j] = \sum_{h=1}^{H} \sum_{d=1}^{D} \begin{bmatrix} \mathrm{int}_h(r_k, \mathbf{t}_d) \\ \mathrm{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d) \end{bmatrix}^T \begin{bmatrix} U^{int}_{hdj} \\ U^{val}_{hdj} \end{bmatrix} \qquad (4)$$
Finally, we note that the UnTAN module defines a continuous function of $t$ given an input time series and hence cannot be directly incorporated into standard neural network architectures. We adapt the UnTAN module to produce fully observed fixed-dimensional discrete sequences by materializing its output at a set of reference time points. Reference time points can be a fixed set of regularly spaced time points or may need to depend on the input time series. For a given set of reference time points $\mathbf{r} = [r_1, \cdots, r_K]$, the discretized UnTAN module $\mathrm{UnTAND}(\mathbf{r}, \mathbf{t}, \mathbf{x})$ is defined as $\mathrm{UnTAND}(\mathbf{r}, \mathbf{t}, \mathbf{x})[i] = \mathrm{UnTAN}(r_i, \mathbf{t}, \mathbf{x})$. This module takes as input the time series $s = (\mathbf{t}, \mathbf{x})$ and the set of reference time points $\mathbf{r}$ and outputs a sequence of $K$ UnTAN embeddings, each of dimension $J$, corresponding to each reference point. As described in the next section, we use the UnTAND module to provide an interface between sparse and irregularly sampled data and fully connected MLP network structures.
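To summarize the two pathways, the sketch below computes the intensity (Eq. 1) and value (Eq. 2) encodings for a single dimension and attention head given precomputed attention scores; it is a simplified illustration rather than the released implementation.

```python
import torch

def untan_encode(scores_obs, scores_all, x_obs, pool="max"):
    """scores_obs: (K, L_d) scores between K reference points and the observed
    times t_d of this dimension; scores_all: (K, L_u) scores against the union
    of observed times t_u; x_obs: (L_d,) observed values on this dimension."""
    e_obs, e_all = scores_obs.exp(), scores_all.exp()
    if pool == "max":  # max pooling suits sparse series (see the text above)
        intensity = e_obs.max(dim=1).values / e_all.max(dim=1).values
    else:              # sum pooling for more densely observed series
        intensity = e_obs.sum(dim=1) / e_all.sum(dim=1)
    # value pathway: softmax-weighted interpolation of the observed values
    value = (e_obs * x_obs).sum(dim=1) / e_obs.sum(dim=1)
    return intensity, value
```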
3.3 THE HETVAE MODEL
In this section, we describe the overall architecture of the HeTVAE model, as shown in Figure 1b.
Model Architecture: The HeTVAE consists of parallel deterministic and probabilistic pathways for propagating input information to the output distribution, including information about input sparsity. We begin by mapping the input time series s = (t,x) through the UnTAND module along with a collection of K reference time points r. In the probabilistic path, we construct a distribution over latent variables at each reference time point using a diagonal Gaussian distribution q with mean and variance output by fully connected layers applied to the UnTAND output embeddings
$\mathbf{h}^{enc} = [\mathbf{h}^{enc}_1, \cdots, \mathbf{h}^{enc}_K]$ as shown in Equation 6. In the deterministic path, the UnTAND output embeddings $\mathbf{h}^{enc}$ are passed through a feed-forward network $g$ to produce a deterministic temporal representation (at each reference point) of the same dimension as the probabilistic latent state.
The decoder takes as input the representation from both pathways along with the reference time points and a set of query points $\mathbf{t}'$ (Eq. 8). The UnTAND module produces a sequence of embeddings $\mathbf{h}^{dec} = [\mathbf{h}^{dec}_1, \cdots, \mathbf{h}^{dec}_{|\mathbf{t}'|}]$ corresponding to each time point in $\mathbf{t}'$. The UnTAND embeddings are then independently decoded using a fully connected decoder $f^{dec}$ and the result is used to parameterize the output distribution. We use a diagonal covariance Gaussian distribution where both the mean $\boldsymbol{\mu} = [\boldsymbol{\mu}_1, \cdots, \boldsymbol{\mu}_{|\mathbf{t}'|}]$, $\boldsymbol{\mu}_i \in \mathbb{R}^D$, and variance $\boldsymbol{\sigma}^2 = [\boldsymbol{\sigma}^2_1, \cdots, \boldsymbol{\sigma}^2_{|\mathbf{t}'|}]$, $\boldsymbol{\sigma}^2_i \in \mathbb{R}^D$, are predicted for each time point by the final decoded representation as shown in Eq. 9. The generated time series is sampled from this distribution and is given by $\hat{s} = (\mathbf{t}', \mathbf{x}')$ with all data dimensions observed.
The complete model is described below. We define $q_\gamma(\mathbf{z} \mid \mathbf{r}, s)$ to be the distribution over the probabilistic latent variables $\mathbf{z} = [\mathbf{z}_1, \cdots, \mathbf{z}_K]$ induced by the input time series $s = (\mathbf{t}, \mathbf{x})$ at the reference time points $\mathbf{r}$. We define the prior $p(\mathbf{z}_i)$ over the latent states to be a standard multivariate normal distribution. We let $p^{het}_\theta(x'_{id} \mid \mathbf{z}^{cat}, t'_{id})$ define the final probability distribution over the value of time point $t'_{id}$ on dimension $d$ given the concatenated latent state $\mathbf{z}^{cat} = [\mathbf{z}^{cat}_1, \cdots, \mathbf{z}^{cat}_K]$. $\gamma$ and $\theta$ represent the parameters of all components of the encoder and decoder respectively.
$$\mathbf{h}^{enc} = \mathrm{UnTAND}^{enc}(\mathbf{r}, \mathbf{t}, \mathbf{x}) \qquad (5)$$
$$\mathbf{z}_k \sim q_\gamma(\mathbf{z}_k \mid \boldsymbol{\mu}_k, \boldsymbol{\sigma}^2_k), \quad \boldsymbol{\mu}_k = f^{enc}_\mu(\mathbf{h}^{enc}_k), \quad \boldsymbol{\sigma}^2_k = f^{enc}_\sigma(\mathbf{h}^{enc}_k) \qquad (6)$$
$$\mathbf{z}^{cat}_k = \mathrm{concat}(\mathbf{z}_k, g(\mathbf{h}^{enc}_k)) \qquad (7)$$
$$\mathbf{h}^{dec} = \mathrm{UnTAND}^{dec}(\mathbf{t}', \mathbf{r}, \mathbf{z}^{cat}) \qquad (8)$$
$$p^{het}_\theta(x'_{id} \mid \mathbf{z}^{cat}, t'_{id}) = \mathcal{N}(x'_{id};\, \boldsymbol{\mu}_i[d], \boldsymbol{\sigma}^2_i[d]), \quad \boldsymbol{\mu}_i = f^{dec}_\mu(\mathbf{h}^{dec}_i), \quad \boldsymbol{\sigma}^2_i = f^{dec}_\sigma(\mathbf{h}^{dec}_i) \qquad (9)$$
$$x'_{id} \sim p^{het}_\theta(x'_{id} \mid \mathbf{z}^{cat}, t'_{id}) \qquad (10)$$

Compared to the constant output variance used to train the mTAN-based VAE model proposed in prior work (Shukla & Marlin, 2021a), our proposed model produces a heteroscedastic output distribution that we will show provides improved modeling for the probabilistic interpolation task. However, the increased complexity of the model's output representation results in an increased space of local optima. We address this issue using an augmented learning objective, as described in the next section. Finally, we note that we can easily obtain a simplified homoscedastic version of the model with constant output variance $\sigma^2_c$ using the alternate final output distribution $p^{c}_\theta(x'_{id} \mid \mathbf{z}, t'_{id}) = \mathcal{N}(x'_{id};\, \boldsymbol{\mu}_i[d], \sigma^2_c)$.
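A compact sketch of the probabilistic and deterministic pathways of Equations 6 and 7 is given below, using the reparameterization trick to sample the latent state; the function arguments are illustrative and not taken from the released code.

```python
import torch

def latent_pathways(h_enc, f_mu, f_sigma, g):
    """h_enc: (K, J) UnTAND embeddings at the K reference points; f_mu,
    f_sigma, g: fully connected networks as described in the text."""
    mu = f_mu(h_enc)
    var = torch.exp(f_sigma(h_enc))             # sigma^2_k = exp(f_sigma(h_enc_k))
    z = mu + var.sqrt() * torch.randn_like(mu)  # z_k ~ q(z_k | mu_k, sigma^2_k)
    return torch.cat([z, g(h_enc)], dim=-1)     # z_cat_k = concat(z_k, g(h_enc_k))
```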
Augmented Learning Objective: To learn the parameters of the HeTVAE framework given a data set of sparse and irregularly sampled time series, we propose an augmented learning objective based on a normalized version of the evidence lower bound (ELBO) combined with an uncertainty agnostic scaled squared loss. We normalize the contribution from each data case by the total number of observations so that the effective weight of each data case in the objective function is independent of the total number of observed values. The augmented learning objective is defined below. $\boldsymbol{\mu}_n$ is the predicted mean over the test time points as defined in Equation 9. Also recall that the concatenated latent state $\mathbf{z}^{cat}$ depends directly on the probabilistic latent state $\mathbf{z}$.
$$\mathcal{L}_{NVAE}(\theta, \gamma) = \sum_{n=1}^{N} \frac{1}{\sum_d L_{dn}} \Big( \mathbb{E}_{q_\gamma(\mathbf{z}|\mathbf{r}, s_n)}\big[\log p^{het}_\theta(\mathbf{x}_n \mid \mathbf{z}^{cat}_n, \mathbf{t}_n)\big] - D_{KL}(q_\gamma(\mathbf{z}|\mathbf{r}, s_n) \,\|\, p(\mathbf{z})) - \lambda\, \mathbb{E}_{q_\gamma(\mathbf{z}|\mathbf{r}, s_n)}\big[\|\mathbf{x}_n - \boldsymbol{\mu}_n\|^2_2\big] \Big) \qquad (11)$$

$$D_{KL}(q_\gamma(\mathbf{z}|\mathbf{r}, s_n) \,\|\, p(\mathbf{z})) = \sum_{i=1}^{K} D_{KL}(q_\gamma(\mathbf{z}_i|\mathbf{r}, s_n) \,\|\, p(\mathbf{z}_i)) \qquad (12)$$

$$\log p^{het}_\theta(\mathbf{x}_n \mid \mathbf{z}^{cat}_n, \mathbf{t}_n) = \sum_{d=1}^{D} \sum_{j=1}^{L_{dn}} \log p^{het}_\theta(x_{jdn} \mid \mathbf{z}^{cat}_n, t_{jdn}) \qquad (13)$$
Figure 2: We show example interpolations on the synthetic dataset. The set of 3 columns correspond to interpolation results with increasing numbers of observed points: 3, 10 and 20 respectively. The first, second and third rows correspond to STGP, HeTVAE and HTVAE mTAN respectively. The shaded region corresponds to ± one standard deviation. STGP and HeTVAE exhibit variable output uncertainty in response to input sparsity while mTAN does not.

We include the uncertainty agnostic scaled squared loss term to counteract the propensity of the heteroscedastic model to become stuck in poor local optima where the mean is essentially flat and all of the structure in the data is explained as noise. This happens because the model has the ability to learn larger variances at the output, which allows the mean to underfit the data. The extra component (scaled squared loss) helps to push the optimization process to find more informative parameters by introducing a fixed penalty for the mean deviating from the data. As we will show in the experiments, the use of this augmented training procedure has a strong positive impact on final model performance. Since we are focusing on the interpolation task, we train the HeTVAE by maximizing the augmented learning objective (Equation 11) on the interpolated time points (more details on training are provided in the experimental protocols in Section 4).
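The augmented objective of Equation 11 reduces to a few lines per data case. The sketch below (negated for use as a minimization loss, with illustrative argument names) combines the masked Gaussian log likelihood, the KL term, and the λ-scaled squared error, normalized by the number of observed values.

```python
import torch

def augmented_loss(x, mask, mu, var, kl, lam):
    """x, mu, var: (T, D) targets and predicted Gaussian parameters; mask
    marks the observed entries of x; kl: precomputed KL divergence (Eq. 12)."""
    dist = torch.distributions.Normal(mu, var.sqrt())
    log_lik = (dist.log_prob(x) * mask).sum()
    sq_err = (((x - mu) ** 2) * mask).sum()
    # negative of Eq. 11 for one data case, normalized by observation count
    return -(log_lik - kl - lam * sq_err) / mask.sum()
```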
4 EXPERIMENTS
In this section, we present interpolation experiments using a range of models on three real-world data sets. PhysioNet Challenge 2012 (Silva et al., 2012) and MIMIC-III (Johnson et al., 2016) consist of multivariate, sparse and irregularly sampled time series data. We also perform experiments on the Climate dataset (Menne et al., 2016), consisting of multi-rate time series. In addition, we show qualitative results on a synthetic dataset. Details of each dataset can be found in Appendix A.6.1.
Experimental Protocols: We randomly divide the real data sets into a training set containing 80% of the instances, and a test set containing the remaining 20% of instances. We use 20% of the training data for validation. In the interpolation task, we condition on a subset of available points and produce distributions over the rest of the time points. On the real-world datasets, we perform interpolation experiments by conditioning on 50% of the available points. At test time, the values of observed points are conditioned on and each model is used to infer single time point marginal distributions over values at the rest of the available time points in the test instance. In the case of methods that do not produce probabilistic outputs, we make mean predictions. In the case of the synthetic dataset where we have access to all true values, we use the observed points to infer the values at the rest of the available points. We repeat each real data experiment five times using different random seeds to initialize the model parameters. We assess performance using the negative log likelihood, which is our primary metric of interest. We also report mean squared and mean absolute error. For all experiments, we select hyper-parameters on the held-out validation set using grid search and then apply the best trained model to the test set. The hyper-parameter ranges searched for each model and dataset are fully described in Appendix A.5.
Models: We compare our proposed model HeTVAE to several probabilistic and deterministic interpolation methods. We compare to two Gaussian processes regression (GPR) approaches. The most basic GP model for multivariate time series fits one GPR model per dimension. This approach is known as a single task GP model (STGP) (Rasmussen & Williams, 2006). A potentially better option is to model data using a Multi Task GP (MTGP) (Bonilla et al., 2008). This approach models the correlations both across different dimensions and across time by defining a kernel expressed as the Hadamard product of a temporal kernel (as used in the STGP) and a task kernel. We also compare to several VAE-based approaches. These approaches use a homoscedastic output distribution with different encoder and decoder architectures. HVAE RNN employs a gated recurrent unit network (Chung et al., 2014) as encoder and decoder, HVAE RNN-ODE (Chen et al., 2018) replaces the RNN decoder with a neural ODE, HVAE ODE-RNN-ODE (Rubanova et al., 2019) employs
an ODE-RNN encoder and neural ODE decoder. Finally, we compare to HTVAE mTAN (Shukla & Marlin, 2021a), a temporal VAE model consisting of multi-time attention networks producing homoscedastic output. For VAE models with homoscedastic output, we treat the output variance term as a hyperparameter and select the variance using log likelihood on the validation set. Architecture details for these methods can be found in Appendix A.4. As baselines, we also consider deterministic mean and forward imputation-based methods. Forward imputation always predicts the last observed value on each dimension, while mean imputation predicts the mean of all the observations for each dimension.
Synthetic Data Results: Figure 2 shows sample visualization output for the synthetic dataset. For this experiment, we compare HTVAE mTAN, the single task Gaussian process STGP, and the proposed HeTVAE model. We vary the number of observed points (3, 10, 20) and each model is used to infer the distribution over the remaining time points. We draw multiple samples from the VAE latent state for HeTVAE and HTVAE mTAN and visualize the distribution of the resulting mixture. As we can see, the interpolations produced by HTVAE mTAN have approximately constant uncertainty across time and this uncertainty level does not change even when the number of points conditioned on increases. On the other hand, both HeTVAE and STGP show variable uncertainty across time. Their uncertainty reduces in the vicinity of input observations and increases in gaps between observations. Even though the STGP has an advantage in this experiment (the synthetic data were generated with an RBF kernel smoother and STGP uses an RBF kernel as the covariance function), the proposed model HeTVAE shows comparable interpolation performance. We show more qualitative results in Appendix A.3.
Real Data Results: Tables 1, 2 and 3 compare the interpolation performance of all the approaches on the PhysioNet, MIMIC-III and Climate datasets, respectively. HeTVAE outperforms the prior approaches with respect to the negative log likelihood score on all three datasets. The Gaussian process-based methods STGP and MTGP achieve the second and third best performance, respectively. We emphasize that while the MAE and MSE values for some of the prior approaches are close to those obtained by the HeTVAE model, the primary metric of interest for comparing probabilistic interpolation approaches is log likelihood, where the HeTVAE performs much better than the other methods.
We note that the MAE/MSE of the VAE-based models with homoscedastic output can be improved by using a small fixed variance during training. However, this produces even worse log likelihood values. Further, we note that the current implementation of MTGP is not scalable to the Climate dataset (270 dimensions). We provide experiments on an additional dataset in Appendix A.1.
Ablation Results: Table 4 shows the results of ablating several different components of the HeTVAE model and training procedure. The first row shows the results for the full proposed approach. The HeTVAE - ALO ablation shows the result of removing the augmented learning objective and training the model using only the ELBO. This results in an immediate drop in performance on PhysioNet. HeTVAE - DET removes the deterministic pathway from the model, resulting in a performance drop on both MIMIC-III and PhysioNet. HeTVAE - INT removes the intensity encoding pathway from the UnTAND module. It results in a large drop in performance on both datasets. HeTVAE - HET - ALO removes the heteroscedastic layer and the augmented learning objective (since the augmented learning objective is introduced to improve learning in the presence of the heteroscedastic layer), resulting in a highly significant drop on both datasets. These results show that all of the components included in the proposed model contribute to improved model performance. We provide more ablation results in Appendix A.2 and discuss hyperparameter selection in Appendix A.5.
5 DISCUSSION AND CONCLUSIONS
In this paper, we have proposed the Heteroscedastic Temporal Variational Autoencoder (HeTVAE) for probabilistic interpolation of irregularly sampled time series data. HeTVAE consists of an input sparsity-aware encoder, parallel deterministic and probabilistic pathways for propagating input uncertainty to the output, and a heteroscedastic output distribution to represent variable uncertainty in the output interpolations. Furthermore, we propose an augmented training objective to combat the presence of additional local optima that arise from the use of the heteroscedastic output structure. Our results show that the proposed model significantly improves uncertainty quantification in the output interpolations, as evidenced by substantially improved log likelihood scores compared to several baselines and state-of-the-art methods. While the HeTVAE model can produce a probability distribution over an arbitrary collection of output time points, it is currently restricted to producing marginal distributions. As a result, sampling from the model does not necessarily produce smooth trajectories as would be the case with GPR-based models. Augmenting the HeTVAE model to account for residual correlations in the output layer is an interesting direction for future work.
6 REPRODUCIBILITY STATEMENT
The source code for reproducing the results in this paper is available at https://github.com/reml-lab/hetvae. It contains the instructions to reproduce the results in the paper, including the hyperparameters. The hyperparameter ranges searched for each model are fully described in Appendix A.5. The source code also includes the synthetic dataset generation process as well as one of the real-world datasets. The other datasets can be downloaded and prepared following the preprocessing steps noted in Appendix A.6.1.
ACKNOWLEDGEMENTS
Research reported in this paper was partially supported by the National Institutes of Health under award number 1P41EB028242.
A APPENDIX
A.1 ADDITIONAL RESULTS
We also perform experiments on the UCI electricity dataset (described in Appendix A.6.1). We follow the same experiment protocols described in Section 4. As we can see from Table 5, the proposed model HeTVAE outperforms the prior approaches across all three metrics.
A.2 ABLATION STUDY
Tables 6 and 7 show the complete results of ablating several different components of the HeTVAE model and training procedure with respect to all three evaluation metrics on PhysioNet and MIMIC-III respectively. We denote the different components of the HeTVAE model as HET (heteroscedastic output layer), ALO (augmented learning objective), INT (intensity encoding), and DET (deterministic pathway). The results show selected individual and compound ablations of these components and indicate that all of these components contribute significantly to the model's performance in terms of the negative log likelihood score. We provide detailed comments below.
Effect of Heteroscedastic Layer: Since the augmented learning objective is introduced to improve learning in the presence of the heteroscedastic layer, we remove the augmented learning objective (ALO) along with the heteroscedastic layer (HET). This ablation corresponds to HeTVAE - HET - ALO. As we can see from both Tables 6 and 7, this results in a highly significant drop in the log likelihood performance as compared to the full HeTVAE model on both datasets. However, it results in only a slight drop in performance with respect to MAE and MSE, which is sensible as the HET component only affects uncertainty sensitive performance metrics.
Effect of Intensity Encoding: HeTVAE - INT removes the intensity encoding pathway from the UnTAND module. It results in an immediate drop in performance on both datasets. We also compare the effect of intensity encoding after removing the deterministic pathway and the augmented learning objective. These ablations are shown in HeTVAE - DET - ALO and HeTVAE - INT - DET - ALO. The performance drop is less severe in this case because of the propensity of the heteroscedastic output layer to get stuck in poor local optima in the absence of the augmented learning objective (ALO).
Effect of Augmented Learning Objective: The HeTVAE - ALO ablation shows the result of removing the augmented learning objective and training the model using only the ELBO. This results in an immediate drop in performance on PhysioNet. The performance drop is less severe on MIMIC-III. We further perform this ablation without the DET component and observe severe drops in performance across all metrics on both datasets. These ablations correspond to HeTVAE - DET and HeTVAE - DET - ALO. This shows that along with the ALO component, the DET component also constrains the model from getting stuck in local optima where all of the structure in the data is explained as noise. We show interpolations corresponding to these ablations in Appendix A.3.1.
Effect of Deterministic Pathway: HeTVAE - DET removes the deterministic pathway from the model, resulting in a performance drop on both MIMIC-III and PhysioNet across all metrics. We further compare the performance of both the probabilistic and deterministic pathways in isolation as shown by ablation HeTVAE - DET - ALO and HeTVAE - PROB - ALO. We observe that the
deterministic pathway HeTVAE - PROB - ALO outperforms the probabilistic pathway HeTVAE - DET - ALO in terms of log likelihood on MIMIC-III while the opposite is true in case of PhysioNet. However, on both datasets using only the deterministic pathway (HeTVAE - PROB - ALO) achieves better MAE and MSE scores as compared to using only the probabilistic pathway (HeTVAE - DET - ALO).
A.3 VISUALIZATIONS
A.3.1 INTERPOLATIONS ON PHYSIONET
Figure 3 shows example interpolations on the PhysioNet dataset. Following the experimental setting mentioned in Section 4, the models were trained using all dimensions and inference is performed using all dimensions. We only show interpolations corresponding to Heart Rate as an illustration. As we can see, the STGP and HeTVAE models exhibit a good fit and variable uncertainty on the edges where there are no observations. We can also see that mTAN trained with homoscedastic output is not able to produce as good a fit because of the fixed variance at the output (discussed in Section 4).
The most interesting observation is the performance of HeTVAE - DET - ALO, an ablation of the HeTVAE model that retains the heteroscedastic output, but removes the deterministic pathways and the augmented learning objective. This ablation significantly underfits the data and performs similarly to mTAN. This is an example of the local optima that arise from the use of a heteroscedastic output layer, where the mean is excessively smooth and all of the structure in the data is explained as noise. We address this with the use of the augmented learning objective described in Section 3.3. As seen in Figure 3, adding the augmented learning objective (HeTVAE - DET) clearly improves performance.
A.3.2 SYNTHETIC DATA VISUALIZATIONS: SPARSITY
In this section, we show supplemental interpolation results on the synthetic dataset. The setting here is the same as in Section 4. Figure 4 compares HTVAE mTAN, the single task Gaussian process STGP, the proposed HeTVAE model and an ablation of the proposed model without intensity encoding (HeTVAE - INT). We vary the number of observed points (3, 10, 20) and each model is used to infer the distribution over the remaining time points. We draw multiple samples from the VAE latent state for HeTVAE, HeTVAE - INT and HTVAE mTAN, and visualize the distribution of the resulting mixture. Figure 4 illustrates the interpolation performance of each of the models. As we can see, the interpolations produced by HTVAE mTAN have approximately constant uncertainty across time and this uncertainty level does not change even when the number of points conditioned on increases. On the other hand, both HeTVAE and STGP show variable uncertainty across time. Their uncertainty reduces in the vicinity of input observations and increases in gaps between observations. The HeTVAE - INT model performs slightly better than the HTVAE mTAN model but it does not show variable uncertainty due to input sparsity like HeTVAE.
A.3.3 SYNTHETIC DATA VISUALIZATIONS: INTER-OBSERVATION GAP
To demonstrate the effectiveness of the intensity encoder (INT), we perform another experiment on the synthetic dataset where we increase the maximum inter-observation gap between the observations.
We follow the same training protocol as described in Section 4. At test time, we condition on 10 observed points with increasing maximum inter-observation gap. We vary the maximum inter-observation gap from 20% to 80% of the length of the original time series. Each model is used to infer single time point marginal distributions over values at the rest of the available time points in the test instance.
Figure 5 shows the interpolations with increasing maximum inter-observation gap. STGP and HeTVAE show variable uncertainty with time and the uncertainty increases with increasing maximum inter-observation gap. On the other hand, HTVAE mTAN with homoscedastic output shows approximately constant uncertainty with time and also across different maximum inter-observation gaps. These results clearly show that HTVAE mTAN produces over-confident probabilistic interpolations over large gaps.
Furthermore, we show an ablation of the proposed model HeTVAE - INT, where we remove the intensity encoder and perform the interpolations. As we see from the figure, this leads to approximately constant uncertainty across time as well as different maximum inter-observation gaps. This shows that the HeTVAE model is not able to capture uncertainty due to input sparsity as effectively without the intensity encoder.
A.4 ARCHITECTURE DETAILS
HeTVAE: Learnable parameters in the UnTAND architecture shown in Figure 1a include the weights of the three linear layers and the parameters of the shared time embedding functions. Each time embedding function is a one layer fully connected network with a sine function non-linearity. The two linear layers on top of the embedding functions are linear projections from the time embedding dimension $d_e$ to $d_e/H$ where $H$ is the number of time embeddings. Note that these linear layers do not share parameters. The third linear layer performs a linear projection from $2 \cdot D \cdot H$ to $J$. It takes as input the concatenation of the VAL encoder output and INT encoder output and produces an output of dimension $J$. $d_e$, $H$ and $J$ are all hyperparameters of the architecture. The ranges considered are described in the next section.
The HeTVAE model shown in Figure 1b consists of three MLP blocks apart from the UnTAND modules. The MLP in the deterministic path is a one layer fully connected layer that projects the UnTAND output to match the dimension of the latent state. The remaining MLP blocks are two-layer fully connected networks with matching width and ReLU activations. The MLP in the decoder takes the output of the UnTAND module and outputs the mean and variance of dimension $D$ and sequence length $t'$. We use a softplus transformation on the decoder output to get the variance $\sigma_i = 0.01 + \mathrm{softplus}(f^{dec}_\sigma(h^{dec}_i))$. Similarly, in the probabilistic path, we apply an exponential transformation to get the variance of the $q$ distribution $\sigma^2_k = \exp(f^{enc}_\sigma(h^{enc}_k))$. We use $K$ reference time points regularly spaced between 0 and 1. $K$ is considered to be a hyperparameter of the architecture. The ranges considered are described in the next section.
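As an illustration, the two variance transformations above can be sketched as follows (PyTorch; function names are ours):

import torch
import torch.nn.functional as F

def decoder_variance(raw):
    # sigma_i = 0.01 + softplus(f_sigma^dec(h_i^dec)): keeps the decoder
    # output variance positive and bounded away from zero.
    return 0.01 + F.softplus(raw)

def encoder_variance(raw):
    # sigma_k^2 = exp(f_sigma^enc(h_k^enc)): positive variance for the
    # q distribution in the probabilistic path.
    return torch.exp(raw)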
Baselines: For the HTVAE mTAN, we use a similar architecture to HeTVAE where we remove the deterministic path and the heteroscedastic output layer, and use the mTAND module instead of the UnTAND module (Shukla & Marlin, 2021a). We use the same architectures for the ODE and RNN-based VAEs as Rubanova et al. (2019).
A.5 HYPERPARAMETERS
HeTVAE: We fix the time embedding dimension to $d_e = 128$. The number of embeddings $H$ is searched over the range {1, 2, 4}. We search the number of reference points $K$ over the range {4, 8, 16, 32}, the latent dimension over the range {8, 16, 32, 64, 128}, the output dimension of UnTAND $J$ over the range {16, 32, 64, 128}, and the width of the two-layer fully connected layers over {128, 256, 512}. In the augmented learning objective, we search for $\lambda$ over the range {1.0, 5.0, 10.0}. We use the Adam Optimizer for training the models. Experiments are run for 2,000 iterations with a learning rate of 0.0001 and a batch size of 128. The best hyperparameters are reported in the code. We use 100 samples from the probabilistic latent state to compute the evaluation metrics.
Ablations: We note that the ablations were not performed with a fixed architecture. For all the ablation models, we tuned the hyperparameters and reported the results with the best hyperparameter setting. We also made sure that the hyperparameter ranges for ablated models with just deterministic/probabilistic path were wide enough that the optimal ablated models did not saturate the end of the ranges for architectural hyper-parameter values including the dimensionality of the latent representations.
VAE Baselines: For VAE models with homoscedastic output, we treat the output variance term as a hyperparameter and select the variance over the range {0.01, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0}. For HTVAE mTAN, we search the corresponding hyperparameters over the same range as HeTVAE. For ODE and RNN based VAEs, we search for GRU hidden units, latent dimension, the number of hidden units in the fully connected network for the ODE function in the encoder and decoder over the range {20, 32, 64, 128, 256}. For ODEs, we also search the number of layers in fully connected network in the range {1, 2, 3}. We use a batch size of 50 and a learning rate of 0.001. We use 100 samples from the latent state to compute the evaluation metrics.
Gaussian Processes: For the single task GP, we use a squared exponential kernel. In the case of the multi-task GP, we experimented with the Matérn kernel with different smoothness parameters, and the
squared exponential kernel. We found that the Matérn kernel performs better. We use maximum marginal likelihood to train the GP hyperparameters. We search for the learning rate over the range {0.1, 0.01, 0.001} and run for 100 iterations. We search for the smoothness parameter over the range {0.5, 1.5, 2.5}. We search for the batch size over the range {32, 64, 128, 256}.
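For reference, the per-dimension GP posterior underlying the STGP baseline can be sketched as follows (NumPy; the squared exponential kernel matches the text, but the hyperparameter values and function names here are illustrative assumptions):

import numpy as np

def gp_posterior(t_obs, y_obs, t_query, lengthscale=0.1, noise=0.01):
    # Standard GP regression with a squared exponential kernel, fit
    # independently per dimension (Rasmussen & Williams, 2006).
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale ** 2)
    K = k(t_obs, t_obs) + noise * np.eye(len(t_obs))
    Ks = k(t_query, t_obs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_obs))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = 1.0 - np.sum(v * v, axis=0) + noise   # prior variance k(t, t) = 1
    return mean, var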
A.6 TRAINING DETAILS
A.6.1 DATA GENERATION AND PREPROCESSING
Synthetic Data Generation: We generate a synthetic dataset consisting of 2000 trajectories, each with 50 time points between 0 and 1. We fix 10 reference time points and draw values for each from a standard normal distribution. We then use an RBF kernel smoother with a fixed bandwidth of $\alpha = 120.0$ to construct local interpolations over the 50 time points. The data generating process is shown below:
$$z_k \sim \mathcal{N}(0, 1), \quad k \in [1, \cdots, 10]$$
$$r_k = 0.1 \cdot k, \qquad t_i = 0.02 \cdot i, \quad i \in [1, \cdots, 50]$$
$$x_i = \frac{\sum_k \exp(-\alpha (t_i - r_k)^2) \cdot z_k}{\sum_{k'} \exp(-\alpha (t_i - r_{k'})^2)} + \mathcal{N}(0, 0.1^2)$$
We randomly sample 3-10 observations from each trajectory to simulate a sparse and irregularly sampled univariate time series.
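A minimal NumPy sketch of this generative and sampling process (function and parameter names are ours, not from the released code) is:

import numpy as np

def generate_synthetic_dataset(n_traj=2000, n_ref=10, n_points=50,
                               alpha=120.0, noise_std=0.1, seed=0):
    rng = np.random.default_rng(seed)
    r = 0.1 * np.arange(1, n_ref + 1)            # reference points r_k
    t = 0.02 * np.arange(1, n_points + 1)        # time points t_i
    w = np.exp(-alpha * (t[:, None] - r[None, :]) ** 2)
    w = w / w.sum(axis=1, keepdims=True)         # RBF kernel smoother weights
    z = rng.standard_normal((n_traj, n_ref))     # z_k ~ N(0, 1)
    x = z @ w.T + noise_std * rng.standard_normal((n_traj, n_points))
    # Simulate sparse, irregular sampling: keep 3-10 random points per trajectory.
    mask = np.zeros((n_traj, n_points), dtype=bool)
    for n in range(n_traj):
        k = rng.integers(3, 11)
        mask[n, rng.choice(n_points, size=k, replace=False)] = True
    return t, x, mask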
PhysioNet: The PhysioNet Challenge 2012 dataset (Silva et al., 2012) consists of multivariate time series data with 37 physiological variables from intensive care unit (ICU) records. Each record contains measurements from the first 48 hours after admission. We use the protocols described in Rubanova et al. (2019) and round the observation times to the nearest minute, resulting in 2880 possible measurement times per time series. The data set includes 8000 instances that can be used for interpolation experiments. PhysioNet is freely available for research use and can be downloaded from https://physionet.org/content/challenge-2012/.
MIMIC-III: The MIMIC-III data set (Johnson et al., 2016) is a multivariate time series dataset containing sparse and irregularly sampled physiological signals collected at Beth Israel Deaconess Medical Center. We use the procedures proposed by Shukla & Marlin (2019) to process the data set. This results in 53,211 records each containing 12 physiological variables. We use all 53,211 instances to perform interpolation experiments. MIMIC-III is available through a permissive data use agreement which can be requested at https://mimic.mit.edu/iii/gettingstarted/. Once the request is approved, the dataset can be downloaded from https://mimic.mit.edu/iii/gettingstarted/dbsetup/. The instructions and code to extract the MIMIC-III dataset are given at https://github.com/mlds-lab/interp-net.
Climate Dataset: The U.S. Historical Climatology Network Monthly (USHCN) dataset (Menne et al., 2016) is a publicly available dataset consisting of daily measurements of 5 climate variables: daily maximum temperature, daily minimum temperature, whether it was a snowy day or not, total daily precipitation, and daily snow precipitation. It contains data from the last 150 years for 1,218 meteorological stations scattered over the United States. Following the preprocessing steps of Che et al. (2018b), we extract daily climate data for 100 consecutive years starting from 1910 to 2009 from 54 stations in California. To get multi-rate time series data, we split the stations into 3 groups with sampling rates of 2 days, 1 week, and 1 month respectively. We divide the data into smaller time series consisting of yearly data and end up with a dataset of 100 examples each consisting of 270 features. We perform the interpolation task on this dataset where we compute the feature values every day using the multi-rate time series data. The dataset is available for download at https://cdiac.ess-dive.lbl.gov/ftp/ushcn_daily/.
Electricity Dataset: The UCI household electricity dataset contains measurements of seven different quantities related to electricity consumption in a household. The data are recorded every minute for 47 months between December 2006 and November 2010, yielding over 2 million observations. To simulate irregular sampling, we keep observations only at durations sampled from
an exponential distribution with $\lambda = 20$. Following the preprocessing step of Binkowski et al. (2018), we also do random feature sampling where we choose one out of seven features at each time step. We divide the data into smaller time series consisting of monthly data and end up with a dataset of 1431 examples each consisting of 7 features. We perform interpolation experiments on this dataset where we compute feature values every minute using the irregularly sampled data. The dataset is available for download at https://archive.ics.uci.edu/ml/datasets/individual+household+electric+power+consumption.
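A sketch of this irregular sampling simulation is shown below (NumPy; whether $\lambda = 20$ denotes the rate or the mean of the exponential distribution is not stated above, so the parameterization here is an assumption):

import numpy as np

def simulate_irregular_sampling(x, lam=20.0, seed=0):
    # x: (T, 7) regularly sampled series. Keep observations at times whose
    # gaps are drawn from an exponential distribution; NumPy's exponential
    # takes the scale (mean) parameter.
    rng = np.random.default_rng(seed)
    T, D = x.shape
    obs_mask = np.zeros((T, D), dtype=bool)
    t = rng.exponential(lam)
    while t < T:
        obs_mask[int(t), rng.integers(D)] = True  # random feature sampling
        t += rng.exponential(lam)
    return obs_mask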
Dataset Preprocessing: We rescale time to be in [0, 1] for all datasets. We also re-scale all dimensions. In the case of PhysioNet and MIMIC-III, for each dimension we first remove outliers in the outer 0.1% percentile region. We then compute the mean and standard deviation of all observations on that dimension. The outlier detection step is used to mitigate the effect of rare large values in the data set from affecting the normalization statistics. Finally, we z-transform all of the available data (including the points identified as outliers). No data points are discarded from the data sets during the normalization process.
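One reading of these normalization steps can be sketched per dimension as follows (NumPy; the exact width of the "outer 0.1% percentile region" is an assumption here):

import numpy as np

def normalize_dimension(values, pct=0.1):
    # Compute normalization statistics after excluding the outer percentile
    # region, then z-transform all points (the outliers are kept).
    lo, hi = np.percentile(values, [pct, 100.0 - pct])
    inliers = values[(values >= lo) & (values <= hi)]
    return (values - inliers.mean()) / inliers.std()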
A.6.2 SOURCE CODE
The source code for reproducing the results in this paper is available at https://github.com/reml-lab/hetvae.
A.6.3 COMPUTING INFRASTRUCTURE
All experiments were run on Nvidia Titan X and 1080 Ti GPUs. The time required to run all the experiments in this paper including hyperparameter tuning was approximately eight days using eight GPUs. | 1. What is the focus and contribution of the paper on interpolating irregularly sampled time series?
2. What are the strengths of the proposed HeTVAE model, particularly in its ability to capture uncertainty estimates?
3. What are the weaknesses of the paper, especially regarding the theoretical exposition and the choice of reference points?
4. Do you have any concerns about the robustness of the model's choice of scaling factor lambda across different datasets?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper introduces a VAE-based model for interpolation of irregularly sampled time series. The temporal input data is mapped to a latent representation over fixed reference points with an attention mechanism, using an intensity network that makes it possible to encode data sparsity information. This latent representation can then be used to interpolate points at new time steps. Thanks to the intensity network and the heteroscedastic output layer, the proposed HeTVAE model can capture uncertainty estimates over the interpolated points. The model is tested on a number of datasets containing irregularly sampled points, and outperforms competing methods in the interpolation task.
Review
STRENGTHS
A great deal of modern machine learning literature for time series data often assumes regularly sampled time series, with no missing data and fixed size outputs. This is however rarely the case in many real world applications, which introduces many technical challenges that are not straightforward to overcome with more standard architectures. The model presented in this paper is a possible way to tackle these challenges, and as such, I found this paper a very interesting read.
Despite building heavily on the mTAN model, the new ideas introduced in the paper are novel and well motivated.
Empirically, the HeTVAE outperforms competing methods by a large margin, and seems to be able to correctly capture uncertainty over time (as seen for example in figures 2 and 3)
The paper has extensive ablation studies that justify all the new components of the model
WEAKNESSES
I found the theoretical exposition in section 3 somewhat confusing in its current form; I could only follow it after reading details in the appendix and reading section 3 once again.
Reference points play a key role in the HeTVAE, but from section 3 it is not clear what their role is as well as how they are chosen. I was only able to really grasp their role after reading the appendix and looking at the code, which should not be the case. Only in appendix A.4 could I understand that they are regularly spaced in [0,1], and only in A.6.1 that the time is scaled between 0 and 1 in all datasets (after which the choice of reference points makes sense)
related to the above, "reference points" are mentioned in the "intensity encoding" and "model output" paragraphs. But for a reader not familiar with the mTAN it is not obvious why we are interested in them.
The prior over z is not defined in the paper
The "augmented learning objective" seems quite hacky to me, and I wonder if there are better ways to achieve the same (e.g. better initializations, KL annealing, ..).
Learning output variances is normally not a problem in VAEs; why is it a problem in this case?
How robust is the choice of the scaling factor lambda across different datasets?
Choosing equally spaced reference points means that if one uses as a test point a sample temporally close to the input data (or even a sample from the input data), the imputation of the model might be unnecessarily poor. Could this be improved?
As stated by the authors, the model is only able to provide a marginal distribution at each time step, which practically means that the sampled trajectories might look inconsistent (non-smooth). Exploring the usage of sequential latent variable models for this would be an interesting future research direction.
The baselines for HVAE RNN and HVAE RNN-ODE are much worse than forward imputation, which makes me question the implementation of the models. |
ICLR | Title
Heteroscedastic Temporal Variational Autoencoder For Irregularly Sampled Time Series
Abstract
Irregularly sampled time series commonly occur in several domains where they present a significant challenge to standard deep learning models. In this paper, we propose a new deep learning framework for probabilistic interpolation of irregularly sampled time series that we call the Heteroscedastic Temporal Variational Autoencoder (HeTVAE). HeTVAE includes a novel input layer to encode information about input observation sparsity, a temporal VAE architecture to propagate uncertainty due to input sparsity, and a heteroscedastic output layer to enable variable uncertainty in output interpolations. Our results show that the proposed architecture is better able to reflect variable uncertainty through time due to sparse and irregular sampling than a range of baseline and traditional models, as well as recent deep latent variable models that use homoscedastic output layers.¹
1 INTRODUCTION
In this paper, we propose a novel deep learning framework for probabilistic interpolation of irregularly sampled time series. Irregularly sampled time series data occur in multiple scientific and industrial domains including finance (Manimaran et al., 2006), climate science (Schulz & Stattegger, 1997) and healthcare (Marlin et al., 2012; Yadav et al., 2018). In some domains including electronic health records and mobile health studies (Cheng et al., 2017), there can be significant variation in inter-observation intervals through time. This is due to the complexity of the underlying observation processes that can include “normal” variation in observation times combined with extended, block-structured periods of missingness. For example, in the case of ICU EHR data, this can occur due to patients being moved between different locations for procedures or tests, resulting in missing physiological sensor data for extended periods of time. In mobile health studies, the same problem can occur due to mobile sensor batteries running out, or participants forgetting to wear or carry devices.
In such situations, it is of critical importance for interpolation models to be able to correctly reflect the variable input uncertainty that results from variable observation sparsity so as not to provide overly confident inferences. However, modeling time series data subject to irregular sampling poses a significant challenge to machine learning models that assume fully-observed, fixed-size feature representations (Marlin et al., 2012; Yadav et al., 2018; Shukla & Marlin, 2021b). The main challenges in dealing with such data include the presence of variable time gaps between the observation time points, partially observed feature vectors caused by the lack of temporal alignment across different dimensions, as well as different data cases, and variable numbers of observations across dimensions and data cases. Significant recent work has focused on developing specialized models and architectures to address these challenges in modeling irregularly sampled multivariate time series (Li & Marlin, 2015; 2016; Lipton et al., 2016; Futoma et al., 2017; Che et al., 2018a; Shukla & Marlin, 2019; Rubanova et al., 2019; Horn et al., 2020; Li & Marlin, 2020; Shukla & Marlin, 2021a; De Brouwer et al., 2019; Tan et al., 2020; Kidger et al., 2020).
Recently, Shukla & Marlin (2021a) introduced the Multi-Time Attention Network (mTAN) model, a variational autoencoder (VAE) architecture for continuous-time interpolation of irregularly sampled
¹ Implementation available at https://github.com/reml-lab/hetvae
time series. This model was shown to provide state-of-the-art classification and deterministic interpolation performance. However, like many VAEs, the mTAN architecture produces a homoscedastic output distribution conditioned on the latent state. This means that the model can only reflect uncertainty due to variable input sparsity through variations in the VAE latent state. As we will show, this mechanism is insufficient to capture differences in uncertainty over time. On the other hand, Gaussian Process Regression-based (GPR) methods (Rasmussen & Williams, 2006) have the ability to reflect variable uncertainty through the posterior inference process. The main drawbacks of GPR-based methods are their significantly higher run times during both training and inference, and the added restriction to define positive definite covariance functions for multivariate time series.
In this work, we propose a novel encoder-decoder architecture for multivariate probabilistic time series interpolation that we refer to as the Heteroscedastic Temporal Variational Autoencoder or HeTVAE. HeTVAE aims to address the challenges described above by encoding information about input sparsity using an uncertainty-aware multi-time attention network (UnTAN), flexibly capturing relationships between dimensions and time points using both probabilistic and deterministic latent pathways, and directly representing variable output uncertainty via a heteroscedastic output layer.
The proposed UnTAN layer generalizes the previously introduced mTAN layer with an additional intensity network that can more directly encode information about input uncertainty due to variable sparsity. The proposed UnTAN layer uses an attention mechanism to produce a distributed latent representation of irregularly sampled time series at a set of reference time points. The UnTAN module thus provides an interface between input multivariate, sparse and irregularly sampled time series data and more traditional deep learning components that expect fixed-dimensional or regularly spaced inputs. We combat the presence of additional local optima that arise from the use of a heteroscedastic output layer by leveraging an augmented training objective where we combine the ELBO loss with an uncertainty agnostic loss component. The uncertainty agnostic component helps to prevent learning from converging to local optima where the structure in data is explained as noise.
We evaluate the proposed architecture on both synthetic and real data sets. Our approach outperforms a variety of baseline models and recent approaches in terms of log likelihood, which is our primary metric of interest in the case of probabilistic interpolation. Finally, we perform ablation testing of different components of the architecture to assess their impact on interpolation performance.
2 RELATED WORK
Keeping in mind the focus of this work, we concentrate our discussion of related work on deterministic and probabilistic approaches applicable to the interpolation and imputation tasks.
Deterministic Interpolation Methods: Deterministic interpolation methods can be divided into filtering and smoothing-based approaches. Filtering-based approaches infer the values at a given time by conditioning only on past observations. For example, Han-Gyu Kim et al. (2017) use a unidirectional RNN for missing data imputation that conditions only on data from the relative past of the missing observations. On the other hand, smoothing-based methods condition on all possible observations (past and future) to infer any unobserved value. For example, Yoon et al. (2018) and Cao et al. (2018) present missing data imputation approaches based on multi-directional and bi-directional RNNs. These models typically use the gated recurrent unit with decay (GRU-D) model (Che et al., 2018a) as a base architecture for dealing with irregular sampling. Interpolation-prediction networks take a different approach to interfacing with irregularly sampled data that is based on the use of temporal kernel smoother-based layers (Shukla & Marlin, 2019). Shan & Oliva (2021) propose a hierarchical imputation strategy based on set-based architectures for imputation in irregularly sampled time series. Of course, the major disadvantage of deterministic interpolation approaches is that they do not express uncertainty over output interpolations and thus can not be applied to the problem of probabilistic interpolation without modifications.
Probabilistic Interpolation Methods: The two primary building blocks for probabilistic interpolation and imputation of multivariate irregularly sampled time series are Gaussian processes regression (GPR) (Rasmussen & Williams, 2006) and variational autoencoders (VAEs) (Rezende et al., 2014; Kingma & Welling, 2014). GPR models have the advantage of providing an analytically tractable full joint posterior distribution over interpolation outputs when conditioned on irregularly sampled input data. Commonly used covariance functions have the ability to translate variable input observation density into variable interpolation uncertainty. GPR-based models have been used as the core of several approaches for supervised learning and forecasting with irregularly sampled data (Ghassemi et al., 2015; Li & Marlin, 2015; 2016; Futoma et al., 2017). However, GPR-based models can become somewhat cumbersome in the multivariate setting due to the positive definiteness constraint on the covariance function (Rasmussen & Williams, 2006). The use of separable covariance functions is one common approach to the construction of GPR models over multiple dimensions (Bonilla et al., 2008), but this construction requires all dimensions to share the same temporal kernel parameters. A further drawback of GP-based methods is their significantly higher run times relative to deep learning-based models when applied to larger-scale data (Shukla & Marlin, 2019).
Variational autoencoders (VAEs) combine probabilistic latent states with deterministic encoder and decoder networks to define a flexible and computationally efficient class of probabilistic models that generalize classical factor analysis (Kingma & Welling, 2014; Rezende et al., 2014). Recent research has seen the proposal of several new VAE-based models for irregularly sampled time series. Chen et al. (2018) proposed a latent ordinary differential equation (ODE) model for continuous-time data using an RNN encoder and a neural ODE decoder. Building on the prior work of Chen et al. (2018), Rubanova et al. (2019) proposed a latent ODE model that replaces the RNN with an ODE-RNN model as the encoder. Li et al. (2020) replace the deterministic ODEs with stochastic differential equations (SDEs). Norcliffe et al. (2021) extend the prior work on neural ODEs by combining them with neural processes (Garnelo et al., 2018). Shukla & Marlin (2021a) proposed the Multi-Time Attention Network (mTAN) model, a VAE-based architecture that uses a multi-head temporal cross attention encoder and decoder module (the mTAND module) to provide the interface to multivariate irregularly sampled time series data. Fortuin et al. (2020) proposed a VAE-based approach for the task of smoothing in multivariate time series with a Gaussian process prior in the latent space to capture temporal dynamics. Garnelo et al. (2018); Kim et al. (2019) used heteroscedastic output layers to represent uncertainty in the case of fixed dimensional inputs but these approaches are not applicable to irregularly sampled time series.
Similar to the mTAN model, the Heteroscedastic Temporal Variational Autoencoder (HeTVAE) model proposed in this work is an attention-based VAE architecture. The primary differences are that mTAN uses a homoscedastic output distribution that assumes constant uncertainty and that the mTAN model’s cross attention operation normalizes away information about input sparsity. These limitations are problematic in cases where there is variable input density through time resulting in the need for encoding, propagating, and reflecting that uncertainty in the output distribution. As we describe in the next section, HeTVAE addresses these issues by combining a novel sparsity-sensitive encoder module with a heteroscedastic output distribution and parallel probabilistic and deterministic pathways for propagating information through the model. Another important difference relative to these previous methods is that HeTVAE uses an augmented learning objective to address the underfitting of predictive variance caused by the use of the heteroscedastic layer.
3 PROBABILISTIC INTERPOLATION WITH THE HETVAE
In this section, we present the proposed architecture for probabilistic interpolation of irregularly sampled time series, the Heteroscedastic Temporal Variational Autoencoder (HeTVAE). HeTVAE leverages a sparsity-aware layer as the encoder and decoder in order to represent input uncertainty and propagate it to output interpolations. We begin by introducing notation. We then describe the architecture of the encoder/decoder network followed by the complete HeTVAE architecture.
3.1 NOTATION
We let $\mathcal{D} = \{s_n \mid n = 1, ..., N\}$ represent a data set containing $N$ data cases. An individual data case consists of a $D$-dimensional, sparse and irregularly sampled multivariate time series $s_n$. Different dimensions $d$ of the multivariate time series can have observations at different times, as well as different total numbers of observations $L_{dn}$. We follow the series-based representation of irregularly sampled time series (Shukla & Marlin, 2021b) and represent time series $d$ for data case $n$ as a tuple $s_{dn} = (\mathbf{t}_{dn}, \mathbf{x}_{dn})$ where $\mathbf{t}_{dn} = [t_{1dn}, ..., t_{L_{dn}dn}]$ is the list of time points at which observations are defined and $\mathbf{x}_{dn} = [x_{1dn}, ..., x_{L_{dn}dn}]$ is the corresponding list of observed values. We drop the data case index $n$ for brevity when the context is clear.
3.2 REPRESENTING INPUT SPARSITY
As noted in the previous section, the mTAN encoder module does not represent information about input sparsity due to the normalization of the attention weights. To address this issue, we propose an augmented module that we refer to as an Uncertainty Aware Multi-Time Attention Network (UnTAN). The UnTAN module is shown in Figure 1a. This module includes two encoding pathways that leverage a shared time embedding function and a shared attention function. The first encoding pathway (the intensity pathway, INT) focuses on representing information about the sparsity of observations while the second encoding pathway (the value pathway, VAL) focuses on representing information about values of observations. The outputs of these two pathways are concatenated and mixed via a linear layer to define the final output of the module. The mathematical description of the module is given in Equations 1 to 3 and is explained in detail below.
$$\mathrm{int}_h(r_k, \mathbf{t}_d) = \frac{\mathrm{pool}(\{\exp(\alpha_h(r_k, t_{id})) \mid t_{id} \in \mathbf{t}_d\})}{\mathrm{pool}(\{\exp(\alpha_h(r_k, t_{i'u})) \mid t_{i'u} \in \mathbf{t}_u\})} \qquad (1)$$

$$\mathrm{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d) = \frac{\mathrm{pool}(\{\exp(\alpha_h(r_k, t_{id})) \cdot x_{id} \mid t_{id} \in \mathbf{t}_d, \, x_{id} \in \mathbf{x}_d\})}{\mathrm{pool}(\{\exp(\alpha_h(r_k, t_{i'd})) \mid t_{i'd} \in \mathbf{t}_d\})} \qquad (2)$$

$$\alpha_h(t, t') = \frac{\varphi_h(t) \, \mathbf{w} \mathbf{v}^T \varphi_h(t')^T}{\sqrt{d_e}} \qquad (3)$$
Time Embeddings and Attention Weights: Similar to the mTAN module, the UnTAN module uses time embedding functions $\varphi_h(t)$ to project univariate time values into a higher dimensional space. Each time embedding function is a one-layer fully connected network with a sine function non-linearity $\varphi_h(t) = \sin(\omega \cdot t + \beta)$. We learn $H$ time embeddings, each of dimension $d_e$. $\mathbf{w}$ and $\mathbf{v}$ are the parameters of the scaled dot product attention function $\alpha_h(t, t')$ shown in Equation 3. The scaling factor $1/\sqrt{d_e}$ is used to normalize the dot product to counteract the growth in the dot product magnitude with increase in the time embedding dimension $d_e$.
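A minimal PyTorch sketch of the time embedding and the scaled dot product attention weights of Equation 3 (the projection shapes are our assumption) is:

import math
import torch
import torch.nn as nn

class TimeEmbedding(nn.Module):
    # phi_h(t) = sin(omega * t + beta): a one-layer fully connected network
    # with a sine non-linearity, mapping scalar times to d_e dimensions.
    def __init__(self, d_e):
        super().__init__()
        self.linear = nn.Linear(1, d_e)

    def forward(self, t):                     # t: (..., L)
        return torch.sin(self.linear(t.unsqueeze(-1)))   # (..., L, d_e)

def attention_weights(phi_q, phi_k, w, v):
    # alpha_h(t, t') of Eq. 3: scaled dot product between projected time
    # embeddings; w, v: (d_e, d_e // H) projection matrices.
    d_e = w.shape[0]
    return (phi_q @ w) @ (phi_k @ v).transpose(-1, -2) / math.sqrt(d_e)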
Intensity Encoding: The intensity encoding pathway is defined by the function $\mathrm{int}_h(r_k, \mathbf{t}_d)$ shown in Equation 1. The inputs to the intensity function are a query time point $r_k$ and a vector $\mathbf{t}_d$ containing all the time points at which observations are available for dimension $d$. The numerator of the intensity function exponentiates the attention weights between $r_k$ and each time point in $\mathbf{t}_d$ to ensure positivity, then pools over the observed time points. The denominator of this computation is identical to the numerator, but the set of time points $\mathbf{t}_u$ that is pooled over is the union over all observed time points for dimension $d$ from all data cases.
Intuitively, if the largest attention weight between $r_k$ and any element of $\mathbf{t}_d$ is small relative to the attention weights between $r_k$ and the time points in $\mathbf{t}_u$, then the output of the intensity function will be low. Importantly, due to the use of the non-linear time embedding function, pairs of time points with high attention weights do not necessarily have to be close together in time, meaning the notion of intensity that the network expresses is significantly generalized.
We also note that different sets could be used for $\mathbf{t}_u$, including a regularly spaced set of reference time points. One advantage of using the union of all observed time points is that it fixes the maximum value of the intensity function at 1. The two pooling functions applicable in the computation of the intensity function are max and sum. If the time series is sparse, max works well because using sum in the sparse case can lead to very low output values. In a more densely observed time series, either sum or max can be used.
Value Encoding: The value encoding function $\mathrm{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d)$ is presented in Equation 2 in a form that highlights the symmetry with the intensity encoding function. The primary differences are that $\mathrm{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d)$ takes as input both the observed time points $\mathbf{t}_d$ and their corresponding values $\mathbf{x}_d$, and the denominator of the function pools over $\mathbf{t}_d$ itself. While different pooling options could be used for this function, in practice we use sum-based pooling. These choices lead to a function $\mathrm{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d)$ that interpolates the observed values at the query time points using softmax weights derived from the attention function. The values of observed points with higher attention weights contribute more to the output value. This structure is equivalent to that used in the mTAN module when sum-based pooling is used. We can also clearly see that this function on its own cannot represent
information about input sparsity due to the normalization over $\mathbf{t}_d$. Indeed, the function is completely invariant to an additive decrease in all of the attention weights, $\alpha'_h(r_k, t_{id}) = \alpha_h(r_k, t_{id}) - \delta$.
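To make Equations 1 and 2 concrete, a minimal PyTorch sketch for a single head, dimension and query point (function names are ours) is:

import torch

def intensity_encoding(alpha_d, alpha_u, pool="max"):
    # Eq. 1: alpha_d holds attention weights against the observed times t_d,
    # alpha_u against the union set t_u; max pooling suits sparse series.
    agg = torch.amax if pool == "max" else torch.sum
    return agg(torch.exp(alpha_d), dim=-1) / agg(torch.exp(alpha_u), dim=-1)

def value_encoding(alpha_d, x_d):
    # Eq. 2 with sum pooling: reduces to a softmax-weighted interpolation
    # of the observed values x_d.
    return (torch.softmax(alpha_d, dim=-1) * x_d).sum(dim=-1)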
Module Output: The last stage of the UnTAN module concatenates the value and intensity pathway representations and then linearly weights them together to form the final $J$-dimensional representation that is output by the module. The parameters of this linear stage of the model are $U^{\mathrm{int}}_{hdj}$ and $U^{\mathrm{val}}_{hdj}$. The value of the $j$-th dimension of the output at a query time point $r_k$ is given by Equation 4.
$$\mathrm{UnTAN}(r_k, \mathbf{t}, \mathbf{x})[j] = \sum_{h=1}^{H} \sum_{d=1}^{D} \begin{bmatrix} \mathrm{int}_h(r_k, \mathbf{t}_d) \\ \mathrm{val}_h(r_k, \mathbf{t}_d, \mathbf{x}_d) \end{bmatrix}^T \begin{bmatrix} U^{\mathrm{int}}_{hdj} \\ U^{\mathrm{val}}_{hdj} \end{bmatrix} \qquad (4)$$
Finally, we note that the UnTAN module defines a continuous function of $t$ given an input time series and hence cannot be directly incorporated into standard neural network architectures. We adapt the UnTAN module to produce fully observed, fixed-dimensional discrete sequences by materializing its output at a set of reference time points. Reference time points can be a fixed set of regularly spaced time points or may need to depend on the input time series. For a given set of reference time points $\mathbf{r} = [r_1, \cdots, r_K]$, the discretized UnTAN module $\mathrm{UnTAND}(\mathbf{r}, \mathbf{t}, \mathbf{x})$ is defined as $\mathrm{UnTAND}(\mathbf{r}, \mathbf{t}, \mathbf{x})[i] = \mathrm{UnTAN}(r_i, \mathbf{t}, \mathbf{x})$. This module takes as input the time series $s = (\mathbf{t}, \mathbf{x})$ and the set of reference time points $\mathbf{r}$, and outputs a sequence of $K$ UnTAN embeddings, each of dimension $J$, corresponding to each reference point. As described in the next section, we use the UnTAND module to provide an interface between sparse and irregularly sampled data and fully connected MLP network structures.
3.3 THE HETVAE MODEL
In this section, we describe the overall architecture of the HeTVAE model, as shown in Figure 1b.
Model Architecture: The HeTVAE consists of parallel deterministic and probabilistic pathways for propagating input information to the output distribution, including information about input sparsity. We begin by mapping the input time series $s = (\mathbf{t}, \mathbf{x})$ through the UnTAND module along with a collection of $K$ reference time points $\mathbf{r}$. In the probabilistic path, we construct a distribution over latent variables at each reference time point using a diagonal Gaussian distribution $q$ with mean and variance output by fully connected layers applied to the UnTAND output embeddings
$\mathbf{h}^{enc} = [h^{enc}_1, \cdots, h^{enc}_K]$ as shown in Equation 6. In the deterministic path, the UnTAND output embeddings $\mathbf{h}^{enc}$ are passed through a feed-forward network $g$ to produce a deterministic temporal representation (at each reference point) of the same dimension as the probabilistic latent state.
The decoder takes as input the representation from both pathways along with the reference time points and a set of query points $\mathbf{t}'$ (Eq 8). The UnTAND module produces a sequence of embeddings $\mathbf{h}^{dec} = [h^{dec}_1, \cdots, h^{dec}_{|\mathbf{t}'|}]$ corresponding to each time point in $\mathbf{t}'$. The UnTAND embeddings are then independently decoded using a fully connected decoder $f^{dec}$ and the result is used to parameterize the output distribution. We use a diagonal covariance Gaussian distribution where both the mean $\boldsymbol{\mu} = [\boldsymbol{\mu}_1, \cdots, \boldsymbol{\mu}_{|\mathbf{t}'|}], \boldsymbol{\mu}_i \in \mathbb{R}^D$ and variance $\boldsymbol{\sigma}^2 = [\boldsymbol{\sigma}^2_1, \cdots, \boldsymbol{\sigma}^2_{|\mathbf{t}'|}], \boldsymbol{\sigma}^2_i \in \mathbb{R}^D$ are predicted for each time point by the final decoded representation as shown in Eq 9. The generated time series is sampled from this distribution and is given by $\hat{s} = (\mathbf{t}', \mathbf{x}')$ with all data dimensions observed.
The complete model is described below. We define $q_\gamma(z \mid \mathbf{r}, s)$ to be the distribution over the probabilistic latent variables $z = [z_1, \cdots, z_K]$ induced by the input time series $s = (\mathbf{t}, \mathbf{x})$ at the reference time points $\mathbf{r}$. We define the prior $p(z_i)$ over the latent states to be a standard multivariate normal distribution. We let $p^{het}_\theta(x'_{id} \mid z^{cat}, t'_{id})$ define the final probability distribution over the value of time point $t'_{id}$ on dimension $d$ given the concatenated latent state $z^{cat} = [z^{cat}_1, \cdots, z^{cat}_K]$. $\gamma$ and $\theta$ represent the parameters of all components of the encoder and decoder respectively.
$$\mathbf{h}^{enc} = \mathrm{UnTAND}^{enc}(\mathbf{r}, \mathbf{t}, \mathbf{x}) \qquad (5)$$
$$z_k \sim q_\gamma(z_k \mid \boldsymbol{\mu}_k, \boldsymbol{\sigma}^2_k), \qquad \boldsymbol{\mu}_k = f^{enc}_\mu(h^{enc}_k), \qquad \boldsymbol{\sigma}^2_k = f^{enc}_\sigma(h^{enc}_k) \qquad (6)$$
$$z^{cat}_k = \mathrm{concat}(z_k, g(h^{enc}_k)) \qquad (7)$$
$$\mathbf{h}^{dec} = \mathrm{UnTAND}^{dec}(\mathbf{t}', \mathbf{r}, z^{cat}) \qquad (8)$$
$$p^{het}_\theta(x'_{id} \mid z^{cat}, t'_{id}) = \mathcal{N}(x'_{id};\, \boldsymbol{\mu}_i[d], \boldsymbol{\sigma}^2_i[d]), \qquad \boldsymbol{\mu}_i = f^{dec}_\mu(h^{dec}_i), \qquad \boldsymbol{\sigma}^2_i = f^{dec}_\sigma(h^{dec}_i) \qquad (9)$$
$$x'_{id} \sim p^{het}_\theta(x'_{id} \mid z^{cat}, t'_{id}) \qquad (10)$$

Compared to the constant output variance used to train the mTAN-based VAE model proposed in prior work (Shukla & Marlin, 2021a), our proposed model produces a heteroscedastic output distribution that we will show provides improved modeling for the probabilistic interpolation task. However, the increased complexity of the model's output representation results in an increased space of local optima. We address this issue using an augmented learning objective, as described in the next section. Finally, we note that we can easily obtain a simplified homoscedastic version of the model with constant output variance $\sigma^2_c$ using the alternate final output distribution $p^{c}_\theta(x'_{id} \mid z, t'_{id}) = \mathcal{N}(x'_{id};\, \boldsymbol{\mu}_i[d], \sigma^2_c)$.
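As an illustration of the heteroscedastic output layer in Equation 9, a minimal PyTorch sketch (layer shapes and names are ours; the softplus offset follows Appendix A.4) is:

import torch
import torch.nn as nn
import torch.nn.functional as F

class HeteroscedasticHead(nn.Module):
    # Maps each decoded embedding h_i^dec to the mean and variance of a
    # diagonal Gaussian over the D data dimensions.
    def __init__(self, embed_dim, data_dim):
        super().__init__()
        self.f_mu = nn.Linear(embed_dim, data_dim)
        self.f_sigma = nn.Linear(embed_dim, data_dim)

    def forward(self, h_dec):                 # (..., T, embed_dim)
        mu = self.f_mu(h_dec)
        var = 0.01 + F.softplus(self.f_sigma(h_dec))
        return mu, var                        # each (..., T, data_dim)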
Augmented Learning Objective: To learn the parameters of the HeTVAE framework given a data set of sparse and irregularly sampled time series, we propose an augmented learning objective based on a normalized version of the evidence lower bound (ELBO) combined with an uncertainty agnostic scaled squared loss. We normalize the contribution from each data case by the total number of observations so that the effective weight of each data case in the objective function is independent of the total number of observed values. The augmented learning objective is defined below. $\boldsymbol{\mu}_n$ is the predicted mean over the test time points as defined in Equation 9. Also recall that the concatenated latent state $z^{cat}$ depends directly on the probabilistic latent state $z$.
$$\mathcal{L}_{NVAE}(\theta, \gamma) = \sum_{n=1}^{N} \frac{1}{\sum_d L_{dn}} \Big( \mathbb{E}_{q_\gamma(z \mid \mathbf{r}, s_n)}\big[\log p^{het}_\theta(\mathbf{x}_n \mid z^{cat}_n, \mathbf{t}_n)\big] - D_{KL}\big(q_\gamma(z \mid \mathbf{r}, s_n) \,\|\, p(z)\big) - \lambda \, \mathbb{E}_{q_\gamma(z \mid \mathbf{r}, s_n)}\|\mathbf{x}_n - \boldsymbol{\mu}_n\|^2_2 \Big) \qquad (11)$$

$$D_{KL}\big(q_\gamma(z \mid \mathbf{r}, s_n) \,\|\, p(z)\big) = \sum_{i=1}^{K} D_{KL}\big(q_\gamma(z_i \mid \mathbf{r}, s_n) \,\|\, p(z_i)\big) \qquad (12)$$

$$\log p^{het}_\theta(\mathbf{x}_n \mid z^{cat}_n, \mathbf{t}_n) = \sum_{d=1}^{D} \sum_{j=1}^{L_{dn}} \log p^{het}_\theta(x_{jdn} \mid z^{cat}_n, t_{jdn}) \qquad (13)$$
We include the uncertainty agnostic scaled squared loss term to counteract the propensity of the heteroscedastic model to become stuck in poor local optima where the mean is essentially flat and
Figure 2: We show example interpolations on the synthetic dataset. The set of 3 columns correspond to interpolation results with increasing numbers of observed points: 3, 10 and 20 respectively. The first, second and third rows correspond to STGP, HeTVAE and HTVAE mTAN respectively. The shaded region corresponds to ± one standard deviation. STGP and HetVAE exhibit variable output uncertainty in response to input sparsity while mTAN does not.
all of the structure in the data is explained as noise. This happens because the model has the ability to learn larger variances at the output, which allows the mean to underfit the data. The extra component (scaled squared loss) helps to push the optimization process to find more informative parameters by introducing a fixed penalty for the mean deviating from the data. As we will show in the experiments, the use of this augmented training procedure has a strong positive impact on final model performance. Since we are focusing on the interpolation task, we train the HeTVAE by maximizing the augmented learning objective (Equation 11) on the interpolated time points (more details on training are provided in the experimental protocols in Section 4).
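A minimal PyTorch sketch of the augmented objective in Equation 11 for a single data case and a single latent sample (tensor names are ours) is:

import torch

def augmented_loss(x, mask, mu, var, kl, lam=5.0):
    # Masked heteroscedastic Gaussian log likelihood minus the KL term and
    # the uncertainty agnostic scaled squared error, normalized by the
    # number of observed values; lam corresponds to lambda in Eq. 11.
    dist = torch.distributions.Normal(mu, var.sqrt())
    log_lik = (dist.log_prob(x) * mask).sum()
    sq_err = (((x - mu) ** 2) * mask).sum()
    return -(log_lik - kl - lam * sq_err) / mask.sum()   # negated for minimization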
4 EXPERIMENTS
In this section, we present interpolation experiments using a range of models on three real-world data sets. PhysioNet Challenge 2012 (Silva et al., 2012) and MIMIC-III (Johnson et al., 2016) consist of multivariate, sparse and irregularly sampled time series data. We also perform experiments on the Climate dataset (Menne et al., 2016), consisting of multi-rate time series. We additionally show qualitative results on a synthetic dataset. Details of each dataset can be found in Appendix A.6.1.
Experimental Protocols: We randomly divide the real data sets into a training set containing 80% of the instances, and a test set containing the remaining 20% of instances. We use 20% of the training data for validation. In the interpolation task, we condition on a subset of available points and produce distributions over the rest of the time points. On the real-world datasets, we perform interpolation experiments by conditioning on 50% of the available points. At test time, the values of observed points are conditioned on and each model is used to infer single time point marginal distributions over values at the rest of the available time points in the test instance. In the case of methods that do not produce probabilistic outputs, we make mean predictions. In the case of the synthetic dataset where we have access to all true values, we use the observed points to infer the values at the rest of the available points. We repeat each real data experiment five times using different random seeds to initialize the model parameters. We assess performance using the negative log likelihood, which is our primary metric of interest. We also report mean squared and mean absolute error. For all experiments, we select hyper-parameters on the held-out validation set using grid search and then apply the best trained model to the test set. The hyper-parameter ranges searched for each model and dataset are fully described in Appendix A.5.
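For concreteness, the primary evaluation metric can be sketched as follows (NumPy; how the released code aggregates over the 100 latent samples may differ):

import numpy as np

def masked_gaussian_nll(x, mask, mu, sigma):
    # Negative log likelihood of a diagonal Gaussian output, averaged over
    # the held-out observed points only.
    ll = -0.5 * np.log(2 * np.pi * sigma ** 2) - 0.5 * ((x - mu) / sigma) ** 2
    return -(ll * mask).sum() / mask.sum()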
Models: We compare our proposed model HeTVAE to several probabilistic and deterministic interpolation methods. We compare to two Gaussian processes regression (GPR) approaches. The most basic GP model for multivariate time series fits one GPR model per dimension. This approach is known as a single task GP model (STGP) (Rasmussen & Williams, 2006). A potentially better option is to model data using a Multi Task GP (MTGP) (Bonilla et al., 2008). This approach models the correlations both across different dimensions and across time by defining a kernel expressed as the Hadamard product of a temporal kernel (as used in the STGP) and a task kernel. We also compare to several VAE-based approaches. These approaches use a homoscedastic output distribution with different encoder and decoder architectures. HVAE RNN employs a gated recurrent unit network (Chung et al., 2014) as encoder and decoder, HVAE RNN-ODE (Chen et al., 2018) replaces the RNN decoder with a neural ODE, HVAE ODE-RNN-ODE (Rubanova et al., 2019) employs
an ODE-RNN encoder and a neural ODE decoder. Finally, we compare to HTVAE mTAN (Shukla & Marlin, 2021a), a temporal VAE model consisting of multi-time attention networks producing homoscedastic output. For VAE models with homoscedastic output, we treat the output variance term as a hyperparameter and select the variance using log likelihood on the validation set. Architecture details for these methods can be found in Appendix A.4. As baselines, we also consider deterministic mean and forward imputation-based methods. Forward imputation always predicts the last observed value on each dimension, while mean imputation predicts the mean of all the observations for each dimension.
Synthetic Data Results: Figure 2 shows sample visualization output for the synthetic dataset. For this experiment, we compare HTVAE mTAN, the single task Gaussian process STGP, and the proposed HeTVAE model. We vary the number of observed points (3, 10, 20) and each model is used to infer the distribution over the remaining time points. We draw multiple samples from the VAE latent state for HeTVAE and HTVAE mTAN and visualize the distribution of the resulting mixture. As we can see, the interpolations produced by HTVAE mTAN have approximately constant uncertainty across time and this uncertainty level does not change even when the number of points conditioned on increases. On the other hand, both HeTVAE and STGP show variable uncertainty across time. Their uncertainty reduces in the vicinity of input observations and increases in gaps between observations. Even though the STGP has an advantage in this experiment (the synthetic data were generated with an RBF kernel smoother and STGP uses an RBF kernel as its covariance function), the proposed model HeTVAE shows comparable interpolation performance. We show more qualitative results in Appendix A.3.
Real Data Results: Tables 1, 2 and 3 compare the interpolation performance of all the approaches on the PhysioNet, MIMIC-III and Climate datasets respectively. HeTVAE outperforms the prior approaches with respect to the negative log likelihood score on all three datasets. The Gaussian Process based methods (STGP and MTGP) achieve the second and third best performance respectively. We emphasize that while the MAE and MSE values for some of the prior approaches are close to those obtained by the HeTVAE model, the primary metric of interest for comparing probabilistic interpolation approaches is log likelihood, where the HeTVAE performs much better than the other methods.
We note that the MAE/MSE of the VAE-based models with homoscedastic output can be improved by using a small fixed variance during training. However, this produces even worse log likelihood values. Further, we note that the current implementation of MTGP is not scalable to the Climate dataset (270 dimensions). We provide experiments on an additional dataset in Appendix A.1.
Ablation Results: Table 4 shows the results of ablating several different components of the HeTVAE model and training procedure. The first row shows the results for the full proposed approach. The HeTVAE - ALO ablation shows the result of removing the augmented learning objective and training the model using only the ELBO. This results in an immediate drop in performance on PhysioNet. HeTVAE - DET removes the deterministic pathway from the model, resulting in a performance drop on both MIMIC-III and PhysioNet. HeTVAE - INT removes the intensity encoding pathway from the UnTAND module. It results in a large drop in performance on both datasets. HeTVAE - HET - ALO removes the heteroscedastic layer and the augmented learning objective (since the augmented learning objective is introduced to improve learning in the presence of the heteroscedastic layer), resulting in a highly significant drop on both datasets. These results show that all of the components included in the proposed model contribute to improved model performance. We provide more ablation results in Appendix A.2 and discuss hyperparameter selection in Appendix A.5.
5 DISCUSSION AND CONCLUSIONS
In this paper, we have proposed the Heteroscedastic Temporal Variational Autoencoder (HeTVAE) for probabilistic interpolation of irregularly sampled time series data. HeTVAE consists of an input sparsity-aware encoder, parallel deterministic and probabilistic pathways for propagating input uncertainty to the output, and a heteroscedastic output distribution to represent variable uncertainty in the output interpolations. Furthermore, we propose an augmented training objective to combat the presence of additional local optima that arise from the use of the heteroscedastic output structure. Our results show that the proposed model significantly improves uncertainty quantification in the output interpolations, as evidenced by substantially improved log likelihood scores compared to several baselines and state-of-the-art methods. While the HeTVAE model can produce a probability distribution over an arbitrary collection of output time points, it is currently restricted to producing marginal distributions. As a result, sampling from the model does not necessarily produce smooth trajectories as would be the case with GPR-based models. Augmenting the HeTVAE model to account for residual correlations in the output layer is an interesting direction for future work.
6 REPRODUCIBILITY STATEMENT
The source code for reproducing the results in this paper is available at https://github.com/reml-lab/hetvae. It contains the instructions to reproduce the results in the paper, including the hyperparameters. The hyperparameter ranges searched for each model are fully described in Appendix A.5. The source code also includes the synthetic dataset generation process as well as one of the real-world datasets. The other datasets can be downloaded and prepared following the preprocessing steps noted in Appendix A.6.1.
ACKNOWLEDGEMENTS
Research reported in this paper was partially supported by the National Institutes of Health under award number 1P41EB028242.
A APPENDIX
A.1 ADDITIONAL RESULTS
We also perform experiments on the UCI electricity dataset (described in Appendix A.6.1). We follow the same experiment protocols described in Section 4. As we can see from Table 5, the proposed model HeTVAE outperforms the prior approaches across all three metrics.
A.2 ABLATION STUDY
Tables 6 and 7 show the complete results of ablating several different components of the HeTVAE model and training procedure with respect to all three evaluation metrics on PhysioNet and MIMIC-III, respectively. We denote the different components of the HeTVAE model as follows. HET: heteroscedastic output layer, ALO: augmented learning objective, INT: intensity encoding, DET: deterministic pathway. The results show selected individual and compound ablations of these components and indicate that all of these components contribute significantly to the model’s performance in terms of the negative log likelihood score. We provide detailed comments below.
Effect of Heteroscedastic Layer: Since the augmented learning objective is introduced to improve learning in the presence of the heteroscedastic layer, we remove the augmented learning objective (ALO) along with the heteroscedastic layer (HET). This ablation corresponds to HeTVAE - HET - ALO. As we can see from both Tables 6 and 7, this results in a highly significant drop in log likelihood performance as compared to the full HeTVAE model on both datasets. However, it results in only a slight drop in performance with respect to MAE and MSE, which is sensible as the HET component only affects uncertainty-sensitive performance metrics.
Effect of Intensity Encoding: HeTVAE - INT removes the intensity encoding pathway from the UnTAND module. It results in an immediate drop in performance on both datasets. We also compare the effect of intensity encoding after removing the deterministic pathway and the augmented learning objective. These ablations are shown in HeTVAE - DET - ALO and HeTVAE - INT - DET - ALO. The performance drop is less severe in this case because of the propensity of the heteroscedastic output layer to get stuck in poor local optima in the absence of the augmented learning objective (ALO).
Effect of Augmented Learning Objective: The HeTVAE - ALO ablation shows the result of removing the augmented learning objective and training the model using only the ELBO. This results in an immediate drop in performance on PhysioNet. The performance drop is less severe on MIMIC-III. We further perform this ablation without the DET component and observe severe drops in performance across all metrics on both datasets. These ablations correspond to HeTVAE - DET and HeTVAE - DET - ALO. This shows that along with the ALO component, the DET component also constrains the model from getting stuck in local optima where all of the structure in the data is explained as noise. We show interpolations corresponding to these ablations in Appendix A.3.1.
Effect of Deterministic Pathway: HeTVAE - DET removes the deterministic pathway from the model, resulting in a performance drop on both MIMIC-III and PhysioNet across all metrics. We further compare the performance of both the probabilistic and deterministic pathways in isolation as shown by ablation HeTVAE - DET - ALO and HeTVAE - PROB - ALO. We observe that the
deterministic pathway HeTVAE - PROB - ALO outperforms the probabilistic pathway HeTVAE - DET - ALO in terms of log likelihood on MIMIC-III, while the opposite is true in the case of PhysioNet. However, on both datasets, using only the deterministic pathway (HeTVAE - PROB - ALO) achieves better MAE and MSE scores as compared to using only the probabilistic pathway (HeTVAE - DET - ALO).
A.3 VISUALIZATIONS
A.3.1 INTERPOLATIONS ON PHYSIONET
Figure 3 shows example interpolations on the PhysioNet dataset. Following the experimental setting mentioned in Section 4, the models were trained on all dimensions and inference is likewise performed using all dimensions. We only show interpolations corresponding to Heart Rate as an illustration. As we can see, the STGP and HeTVAE models exhibit good fit and variable uncertainty at the edges where there are no observations. We can also see that mTAN trained with homoscedastic output is not able to produce as good a fit because of the fixed variance at the output (discussed in Section 4).
The most interesting observation is the performance of HeTVAE - DET - ALO, an ablation of the HeTVAE model that retains the heteroscedastic output but removes the deterministic pathway and the augmented learning objective. This ablation significantly underfits the data and performs similarly to mTAN. This is an example of a local optimum that arises from the use of a heteroscedastic output layer, where the mean is excessively smooth and all of the structure in the data is explained as noise. We address this with the augmented learning objective described in Section 3.3. As seen in Figure 3, adding the augmented learning objective (HeTVAE - DET) clearly improves performance.
A.3.2 SYNTHETIC DATA VISUALIZATIONS: SPARSITY
In this section, we show supplemental interpolation results on the synthetic dataset. The setting here is the same as in Section 4. Figure 4 compares HTVAE mTAN, the single task Gaussian process STGP, the proposed HeTVAE model and an ablation of the proposed model without intensity encoding, HeTVAE - INT. We vary the number of observed points (3, 10, 20) and each model is used to infer the distribution over the remaining time points. We draw multiple samples from the VAE latent state for HeTVAE, HeTVAE - INT and HTVAE mTAN, and visualize the distribution of the resulting mixture. Figure 4 illustrates the interpolation performance of each of the models. As we can see, the interpolations produced by HTVAE mTAN have approximately constant uncertainty across time and this uncertainty level does not change even when the number of points conditioned on increases. On the other hand, both HeTVAE and STGP show variable uncertainty across time. Their uncertainty reduces in the vicinity of input observations and increases in gaps between observations. The HeTVAE - INT model performs slightly better than the HTVAE mTAN model but it does not show variable uncertainty due to input sparsity like HeTVAE.
A.3.3 SYNTHETIC DATA VISUALIZATIONS: INTER-OBSERVATION GAP
To demonstrate the effectiveness of the intensity encoder (INT), we perform another experiment on the synthetic dataset where we increase the maximum inter-observation gap between the observations.
We follow the same training protocol as described in Section 4. At test time, we condition on 10 observed points with increasing maximum inter-observation gap. We vary the maximum inter-observation gap from 20% to 80% of the length of the original time series. Each model is used to infer single time point marginal distributions over values at the rest of the available time points in the test instance.
Figure 5 shows the interpolations with increasing maximum inter-observation gap. STGP and HeTVAE show variable uncertainty with time and the uncertainty increases with increasing maximum inter-observation gap. On the other hand, HTVAE mTAN with homoscedastic output shows approximately constant uncertainty with time and also across different maximum inter-observation gaps. These results clearly show that HTVAE mTAN produces over-confident probabilistic interpolations over large gaps.
Furthermore, we show an ablation of the proposed model HeTVAE - INT, where we remove the intensity encoder and perform the interpolations. As we see from the figure, this leads to approximately constant uncertainty across time as well as different maximum inter-observation gaps. This shows that the HeTVAE model is not able to capture uncertainty due to input sparsity as effectively without the intensity encoder.
A.4 ARCHITECTURE DETAILS
HeTVAE: Learnable parameters in the UnTAND architecture shown in Figure 1a include the weights of the three linear layers and the parameters of the shared time embedding functions. Each time embedding function is a one-layer fully connected network with a sine function non-linearity. The two linear layers on top of the embedding functions are linear projections from the time embedding dimension d_e to d_e/H, where H is the number of time embeddings. Note that these linear layers do not share parameters. The third linear layer performs a linear projection from 2·D·H to J. It takes as input the concatenation of the VAL encoder output and INT encoder output and produces an output of dimension J. d_e, H and J are all hyperparameters of the architecture. The ranges considered are described in the next section.
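As an illustration, one such time embedding can be sketched in PyTorch as below; the module and variable names are our own and the exact parameterization in the released code may differ.

import torch
import torch.nn as nn

class TimeEmbedding(nn.Module):
    # Maps scalar time points to a d_e-dimensional embedding via a
    # one-layer fully connected network with a sine non-linearity.
    def __init__(self, d_e=128):
        super().__init__()
        self.linear = nn.Linear(1, d_e)

    def forward(self, t):
        # t: (..., 1) tensor of time points rescaled to [0, 1]
        return torch.sin(self.linear(t))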
The HeTVAE model shown in Figure 1b consists of three MLP blocks apart from the UnTAND modules. The MLP in the deterministic path is a one-layer fully connected layer that projects the UnTAND output to match the dimension of the latent state. The remaining MLP blocks are two-layer fully connected networks with matching width and ReLU activations. The MLP in the decoder takes the output of the UnTAND module and outputs the mean and variance of dimension D and sequence length t′. We use a softplus transformation on the decoder output to get the variance, $\sigma_i = 0.01 + \mathrm{softplus}(f^{\mathrm{dec}}_{\sigma}(h^{\mathrm{dec}}_i))$. Similarly, in the probabilistic path, we apply an exponential transformation to get the variance of the q distribution, $\sigma^2_k = \exp(f^{\mathrm{enc}}_{\sigma}(h^{\mathrm{enc}}_k))$. We use K reference time points regularly spaced between 0 and 1. K is considered to be a hyperparameter of the architecture. The ranges considered are described in the next section.
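A minimal sketch of the two variance transformations described above (the function and tensor names are illustrative assumptions):

import torch
import torch.nn.functional as F

def decoder_variance(raw):
    # Heteroscedastic decoder variance: softplus keeps it positive,
    # with a 0.01 floor for numerical stability.
    return 0.01 + F.softplus(raw)

def encoder_variance(raw):
    # Variance of the approximate posterior q: exponential transform.
    return torch.exp(raw)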
Baselines: For HTVAE mTAN, we use an architecture similar to HeTVAE where we remove the deterministic path and the heteroscedastic output layer and use the mTAND module instead of the UnTAND module (Shukla & Marlin, 2021a). We use the same architectures for the ODE- and RNN-based VAEs as Rubanova et al. (2019).
A.5 HYPERPARAMETERS
HeTVAE: We fix the time embedding dimension to d_e = 128. The number of embeddings H is searched over the range {1, 2, 4}. We search the number of reference points K over the range {4, 8, 16, 32}, the latent dimension over the range {8, 16, 32, 64, 128}, the output dimension of UnTAND J over the range {16, 32, 64, 128}, and the width of the two-layer fully connected layers over {128, 256, 512}. In the augmented learning objective, we search for λ over the range {1.0, 5.0, 10.0}. We use the Adam optimizer for training the models. Experiments are run for 2,000 iterations with a learning rate of 0.0001 and a batch size of 128. The best hyperparameters are reported in the code. We use 100 samples from the probabilistic latent state to compute the evaluation metrics.
Ablations: We note that the ablations were not performed with a fixed architecture. For all the ablation models, we tuned the hyperparameters and reported the results with the best hyperparameter setting. We also made sure that the hyperparameter ranges for ablated models with just deterministic/probabilistic path were wide enough that the optimal ablated models did not saturate the end of the ranges for architectural hyper-parameter values including the dimensionality of the latent representations.
VAE Baselines: For VAE models with homoscedastic output, we treat the output variance term as a hyperparameter and select the variance over the range {0.01, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0}. For HTVAE mTAN, we search the corresponding hyperparameters over the same range as HeTVAE. For ODE- and RNN-based VAEs, we search for GRU hidden units, latent dimension, and the number of hidden units in the fully connected network for the ODE function in the encoder and decoder over the range {20, 32, 64, 128, 256}. For ODEs, we also search the number of layers in the fully connected network in the range {1, 2, 3}. We use a batch size of 50 and a learning rate of 0.001. We use 100 samples from the latent state to compute the evaluation metrics.
Gaussian Processes: For the single task GP, we use a squared exponential kernel. In the case of the multi-task GP, we experimented with the Matérn kernel with different smoothness parameters, and the squared exponential kernel. We found that the Matérn kernel performs better. We use maximum marginal likelihood to train the GP hyperparameters. We search for the learning rate over the range {0.1, 0.01, 0.001} and run for 100 iterations. We search for the smoothness parameter over the range {0.5, 1.5, 2.5}. We search for the batch size over the range {32, 64, 128, 256}.
A.6 TRAINING DETAILS
A.6.1 DATA GENERATION AND PREPROCESSING
Synthetic Data Generation: We generate a synthetic dataset consisting of 2000 trajectories each consisting of 50 time points with values between 0 and 1. We fix 10 reference time points and draw values for each from a standard normal distribution. We then use an RBF kernel smoother with a fixed bandwidth of α = 120.0 to construct local interpolations over the 50 time points. The data generating process is shown below:
$z_k \sim \mathcal{N}(0, 1), \quad k \in \{1, \dots, 10\}, \qquad r_k = 0.1\,k, \qquad t_i = 0.02\,i, \quad i \in \{1, \dots, 50\}$

$x_i = \frac{\sum_k \exp(-\alpha (t_i - r_k)^2)\, z_k}{\sum_{k'} \exp(-\alpha (t_i - r_{k'})^2)} + \epsilon_i, \qquad \epsilon_i \sim \mathcal{N}(0, 0.1^2)$
We randomly sample 3–10 observations from each trajectory to simulate a sparse and irregularly sampled univariate time series.
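The generating process above translates directly into a short script. The sketch below (with our own function and variable names) mirrors the construction; consult the released code for the exact sampling details.

import numpy as np

def generate_trajectory(alpha=120.0, rng=np.random.default_rng()):
    z = rng.standard_normal(10)                  # z_k ~ N(0, 1)
    r = 0.1 * np.arange(1, 11)                   # reference points r_k
    t = 0.02 * np.arange(1, 51)                  # 50 regularly spaced time points
    w = np.exp(-alpha * (t[:, None] - r[None, :]) ** 2)
    x = (w @ z) / w.sum(axis=1) + rng.normal(0.0, 0.1, size=50)
    # Keep 3-10 random observations to simulate sparse, irregular sampling.
    n_obs = rng.integers(3, 11)
    obs_idx = np.sort(rng.choice(50, size=n_obs, replace=False))
    return t, x, obs_idx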
PhysioNet: The PhysioNet Challenge 2012 dataset (Silva et al., 2012) consists of multivariate time series data with 37 physiological variables from intensive care unit (ICU) records. Each record contains measurements from the first 48 hours after admission. We use the protocols described in Rubanova et al. (2019) and round the observation times to the nearest minute, resulting in 2880 possible measurement times per time series. The data set includes 8000 instances that can be used for interpolation experiments. PhysioNet is freely available for research use and can be downloaded from https://physionet.org/content/challenge-2012/.
MIMIC-III: The MIMIC-III data set (Johnson et al., 2016) is a multivariate time series dataset containing sparse and irregularly sampled physiological signals collected at Beth Israel Deaconess Medical Center. We use the procedures proposed by Shukla & Marlin (2019) to process the data set. This results in 53,211 records each containing 12 physiological variables. We use all 53,211 instances to perform interpolation experiments. MIMIC-III is available through a permissive data use agreement which can be requested at https://mimic.mit.edu/iii/gettingstarted/. Once the request is approved, the dataset can be downloaded from https://mimic.mit.edu/iii/gettingstarted/dbsetup/. The instructions and code to extract the MIMIC-III dataset are given at https://github.com/mlds-lab/interp-net.
Climate Dataset: The U.S. Historical Climatology Network Monthly (USHCN) dataset (Menne et al., 2016) is a publicly available dataset consisting of daily measurements of 5 climate variables: daily maximum temperature, daily minimum temperature, whether it was a snowy day or not, total daily precipitation, and daily snow precipitation. It contains data from the last 150 years for 1,218 meteorological stations scattered over the United States. Following the preprocessing steps of Che et al. (2018b), we extract daily climate data for 100 consecutive years, from 1910 to 2009, from 54 stations in California. To get multi-rate time series data, we split the stations into 3 groups with sampling rates of 2 days, 1 week, and 1 month respectively. We divide the data into smaller time series consisting of yearly data and end up with a dataset of 100 examples each consisting of 270 features. We perform the interpolation task on this dataset where we compute the feature values every day using the multi-rate time series data. The dataset is available for download at https://cdiac.ess-dive.lbl.gov/ftp/ushcn_daily/.
Electricity Dataset: The UCI household electricity dataset contains measurements of seven different quantities related to electricity consumption in a household. The data are recorded every minute for 47 months between December 2006 and November 2010, yielding over 2 million observations. To simulate irregular sampling, we keep observations only at durations sampled from
an exponential distribution with λ = 20. Following the preprocessing step of Binkowski et al. (2018), we also do random feature sampling where we choose one out of seven features at each time step. We divide the data into smaller time series consisting of monthly data and end up with a dataset of 1431 examples each consisting of 7 features. We perform interpolation experiments on this dataset where we compute feature values every minute using the irregularly sampled data. The dataset is available for download at https://archive.ics.uci.edu/ml/datasets/individual+household+electric+power+consumption.
Dataset Preprocessing: We rescale time to be in [0, 1] for all datasets. We also re-scale all dimensions. In the case of PhysioNet and MIMIC-III, for each dimension we first remove outliers in the outer 0.1% percentile region. We then compute the mean and standard deviation of the remaining observations on that dimension. The outlier detection step is used to mitigate the effect of rare large values in the data set on the normalization statistics. Finally, we z-transform all of the available data (including the points identified as outliers). No data points are discarded from the data sets during the normalization process.
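A minimal sketch of this normalization step, assuming observations are stacked in a 2-D array with NaN marking missing values (array and function names are illustrative):

import numpy as np

def normalize(x):
    # x: (N, D) array of observations with NaN at missing entries.
    out = x.astype(float).copy()
    for d in range(x.shape[1]):
        col = out[:, d]
        obs = col[~np.isnan(col)]
        lo, hi = np.percentile(obs, [0.1, 99.9])
        inlier = obs[(obs >= lo) & (obs <= hi)]  # outliers excluded from the statistics only
        mu, sd = inlier.mean(), inlier.std()
        out[:, d] = (col - mu) / sd              # z-transform everything, outliers included
    return out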
A.6.2 SOURCE CODE
The source code for reproducing the results in this paper is available at https://github.com/reml-lab/hetvae.
A.6.3 COMPUTING INFRASTRUCTURE
All experiments were run on Nvidia Titan X and 1080 Ti GPUs. The time required to run all the experiments in this paper, including hyperparameter tuning, was approximately eight days using eight GPUs. | 1. What are the key contributions and novel aspects introduced by the paper in improving the previous work mTAN for probabilistic interpolation?
2. What are the strengths and weaknesses of the proposed intensity encoding and heteroscedastic output layer compared to prior works?
3. How does the reviewer assess the clarity, quality, originality, and reproducibility of the paper's content?
4. What are the suggestions provided by the reviewer regarding comparisons with other works and improvements for future research? | Summary Of The Paper
Review | Summary Of The Paper
This paper introduces several improvements over the previous work mTAN to better support probabilistic interpolation. Specifically, intensity encoding is introduced to make the model be aware of information about input sparsity. Also, the homoscedastic output distribution used by previous work is replaced by a heteroscedastic distribution. Experiments results show that this improved model (HeTVAE) achieves both better likelihood estimation and mean prediction compared to previous works.
Review
Clarity: I am confused about the differences between heteroscedastic and homoscedastic after reading the paper. Does the heteroscedastic output mean that you learn the variance and the homoscedastic output mean that you fix the variance? What is the shape of w and v in equation 3? What is g in equation 7?
Originality: I think the intensity encoding is novel and makes sense to make the model aware of input sparsity. For the heteroscedastic output layer, based on my understanding (see Clarity above), it just allows the model to learn the variance rather than using a fixed variance. If my understanding is correct, I don't think this heteroscedastic output layer is novel because there are lots of works that learn a variance for maximum likelihood estimation. Combining a deterministic path and a probabilistic path is not novel, which has been proposed in [1, 2]. The augmented training objective in equation 11 that encourages the predicted mean to be close to samples x makes sense, but I think there should be some existing works that also use this trick.
Experiments: I am satisfied with the performance this model achieves. However, I think the authors should also compare to Neural Process [1] and Attentive Neural Process [2], which are mentioned in the related work. [1, 2] also use an attention mechanism and do probabilistic interpolation. This paper also misses a citation and comparison to a previous work NRTSI [3] that can also impute irregularly-sampled time series. I suggest the authors compare to NRTSI on the irregularly-sampled Billiard dataset introduced in NRTSI. Also, I am not clear about what the baseline HTVAE mTAN means. Does it mean "homoscedastic temporal VAE mTAN"? If my understanding is correct, I think the authors should also compare to HeTVAE mTAN ("heteroscedastic temporal VAE mTAN") that allows mTAN to learn a variance rather than using a fixed variance. For the ablation study model HeTVAE - DET, because the deterministic path is removed, the model capacity decreases, which makes the comparison unfair. For a fair comparison, I suggest increasing the capacity of the probabilistic path after removing the deterministic path.
[1] Garnelo, Marta, et al. "Neural processes." arXiv preprint arXiv:1807.01622 (2018).
[2] Kim, Hyunjik, et al. "Attentive neural processes." arXiv preprint arXiv:1901.05761 (2019).
[3] Shan, Siyuan, and Junier B. Oliva. "NRTSI: Non-Recurrent Time Series Imputation." arXiv preprint arXiv:2102.03340 (2021). |
ICLR | Title
Multi-Objective GFlowNets
Abstract
In many applications of machine learning, like drug discovery and material design, the goal is to generate candidates that simultaneously maximize a set of objectives. As these objectives are often conflicting, there is no single candidate that simultaneously maximizes all objectives, but rather a set of Pareto-optimal candidates where one objective cannot be improved without worsening another. Moreover, in practice, these objectives are often under-specified, making the diversity of candidates a key consideration. The existing multi-objective optimization methods focus predominantly on covering the Pareto front, failing to capture diversity in the space of candidates. Motivated by the success of GFlowNets for generation of diverse candidates in a single objective setting, in this paper we consider Multi-Objective GFlowNets (MOGFNs). MOGFNs consist of a novel Conditional GFlowNet which models a family of single-objective sub-problems derived by decomposing the multi-objective optimization problem. Our work is the first to empirically demonstrate conditional GFlowNets. Through a series of experiments on synthetic and benchmark tasks, we empirically demonstrate that MOGFNs outperform existing methods in terms of Hypervolume, R2-distance and candidate diversity. We also demonstrate the effectiveness of MOGFNs over existing methods in active learning settings. Finally, we supplement our empirical results with a careful analysis of each component of MOGFNs.
1 INTRODUCTION
Decision making in practical applications often involves reasoning about multiple, often conflicting, objectives (Keeney et al., 1993). For example, in drug discovery, the goal is to generate novel drug-like molecules that inhibit a target, are easy to synthesize and can safely be used by humans (Dara et al., 2021). Unfortunately, these objectives often conflict – molecules effective against a target might also have adverse effects on humans – so there is no single molecule which maximizes all the objectives simultaneously. Such problems fall under the umbrella of Multi-Objective Optimization (MOO; Ehrgott, 2005; Miettinen, 2012), wherein one is interested in identifying Pareto-optimal candidates. The set of Pareto-optimal candidates covers all the best tradeoffs among the objectives, i.e., the Pareto front, where each point on that front corresponds to a different set of weights associated with each of the objectives.
In-silico drug discovery and material design are typically driven by proxies trained with finite data, which only approximate the problem's true objectives, and therefore include intrinsic epistemic uncertainty associated with their predictions. In such problems, not only is it important to cover the Pareto front, but also to generate sets of diverse candidates at each solution of the front so as to increase the likelihood of success in downstream evaluations (Jain et al., 2022).
Generative Flow Networks (GFlowNets; Bengio et al., 2021a;b) are a recently proposed family of probabilistic models which tackle the problem of diverse candidate generation. Contrary to the reward maximization view of reinforcement learning (RL) and Bayesian optimization (BO), GFlowNets sample candidates with probability proportional to the reward. Sampling candidates, as opposed to greedily generating them, implicitly encourages diversity in the generated candidates. GFlowNets have shown promising results in single objective problems of molecule generation (Bengio et al., 2021a) and biological sequence design (Jain et al., 2022).
In this paper, we study Multi-Objective GFlowNets (MOGFNs), extensions of GFlowNets which tackle the multi-objective optimization problem. We consider two variants of MOGFNs
– (a) Preference-Conditional GFlowNets (MOGFN-PC) which combine Reward-Conditional GFlowNets (Bengio et al., 2021b) with Weighted Sum Scalarization (Ehrgott, 2005) and (b) MOGFN-AL, an extension of GFlowNet-AL (Jain et al., 2022) for multi-objective active learning settings. We empirically demonstrate the advantage of MOGFNs over existing approaches on a variety of high-dimensional multi-objective optimization tasks: the generation of small molecules, DNA aptamer sequences and fluorescent proteins. Our contributions are as follows:
C1 We demonstrate how two variants of GFlowNets – MOGFN-PC and MOGFN-AL – can be applied to multi-objective optimization. Our work is the first successful empirical validation of Reward-Conditional GFlowNets (Bengio et al., 2021b).
C2 Through a series of experiments on molecule generation and sequence generation we demonstrate that MOGFN-PC generates diverse Pareto-optimal candidates.
C3 In a challenging active learning task for designing fluorescent proteins, we show that MOGFN-AL results in significant improvements to sample-efficiency and diversity of generated candidates.
C4 We perform a thorough analysis of the main components of MOGFNs to provide insights into design choices that affect performance.
2 BACKGROUND
2.1 MULTI-OBJECTIVE OPTIMIZATION
Multi-objective optimization (MOO) involves finding a set of feasible candidates $x^\star \in \mathcal{X}$ which all simultaneously maximize a set of objectives:

$$\max_{x \in \mathcal{X}} \left( R_1(x), \dots, R_d(x) \right). \qquad (1)$$
In general, the objectives being optimized can be conflicting such that there is no single x⋆ which simultaneously maximizes all objectives. Consequently, the concept of Pareto optimality is adopted in MOO, giving rise to a set of solutions trading off the objectives in different ways.
Given x1, x2 ∈ X , x1 is said to dominate x2, written (x1 ≻ x2), iff Ri(x1) ≥ Ri(x2) ∀i ∈ {1, . . . , d} and ∃k ∈ {1, . . . , d} such that Rk(x1) > Rk(x2). A candidate x⋆ is Pareto-optimal if there exists no other solution x′ ∈ X which dominates x⋆. In other words, for a Pareto-optimal candidate it is impossible to improve one objective without sacrificing another. The Pareto set is the set of all Pareto-optimal candidates in X , and the Pareto front is defined as the image of the Pareto set in objective-space. It is important to note that since the objectives being optimized in general might not be injective, any point on the Pareto front can be the image of several candidates in the Pareto set. This introduces a notion of diversity in the candidate space, capturing all the candidates corresponding to a point on the Pareto front, that is critical for applications such as drug discovery.
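These definitions translate directly into code; below is a minimal sketch (with our own function names) of dominance checking and non-dominated filtering for a set of candidates scored on d objectives.

import numpy as np

def dominates(r1, r2):
    # r1 dominates r2 iff r1 >= r2 on every objective and > on at least one.
    return bool(np.all(r1 >= r2) and np.any(r1 > r2))

def pareto_set_indices(rewards):
    # rewards: (n, d) array of objective values; returns indices of
    # candidates not dominated by any other candidate.
    n = rewards.shape[0]
    return [i for i in range(n)
            if not any(dominates(rewards[j], rewards[i])
                       for j in range(n) if j != i)]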
While there are several paradigms for tackling the MOO problem (Ehrgott, 2005; Miettinen, 2012; Pardalos et al., 2017), we consider Scalarization, where the multi-objective problem is decomposed into simpler single-objective problems, as it is well suited for the GFlowNet formulation introduced in Section 3.1. A set of weights (preferences) $\omega_i$ are assigned to the objectives $R_i$, such that $\omega_i \geq 0$ and $\sum_{i=1}^{d} \omega_i = 1$. The MOO problem in Equation 1 is then decomposed into solving single-objective sub-problems of the form $\max_{x \in \mathcal{X}} R(x|\omega)$, where $R$ is a scalarization function.

Weighted Sum Scalarization, $R(x|\omega) = \sum_{i=1}^{d} \omega_i R_i(x)$, is a widely used scalarization function which results in Pareto-optimal candidates for problems with a convex Pareto front (Ehrgott, 2005). Weighted Tchebycheff, $R(x|\omega) = \min_{1 \leq i \leq d} \omega_i |R_i(x) - z^\star_i|$, where $z^\star_i$ denotes some ideal value for objective $R_i$, results in Pareto-optimal solutions even for problems with a non-convex Pareto front (Pardalos et al., 2017). See Appendix B for more discussion on scalarization. In summary, using scalarization, the MOO problem can be viewed as solving a family of single-objective optimization problems.
2.2 GFLOWNETS
Generative Flow Networks (Bengio et al., 2021a;b) are a family of probabilistic models which generate, through a sequence of steps, compositional objects $x \in \mathcal{X}$ with probability proportional to a given reward $R: \mathcal{X} \to \mathbb{R}^+$. The sequential construction of $x \in \mathcal{X}$ can be described as a trajectory $\tau \in \mathcal{T}$ in a weighted directed acyclic graph (DAG)¹ $\mathcal{G} = (\mathcal{S}, \mathcal{E})$, starting from an empty object $s_0$ and following actions $a \in \mathbb{A}$ as building blocks. The nodes $\mathcal{S}$ of this graph (states) correspond to the set of all possible objects that can be constructed using sequences of actions in $\mathbb{A}$. An edge $s \xrightarrow{a} s' \in \mathcal{E}$ indicates that action $a$ at state $s$ leads to state $s'$.

The forward policy $P_F(-|s)$ is a distribution over the children of state $s$. $x$ can be generated by starting at $s_0$ and sampling a sequence of actions iteratively from $P_F$. Similarly, the backward policy $P_B(-|s)$ is a distribution over the parents of state $s$ and can generate backward trajectories starting at any state $x$; e.g., iteratively sampling from $P_B$ starting at $x$ shows a way $x$ could have been constructed. Let $\pi(x)$ be the marginal likelihood of sampling trajectories terminating in $x$ following $P_F$, and the partition function $Z = \sum_{x \in \mathcal{X}} R(x)$. The learning problem solved by GFlowNets is to estimate $P_F$ such that $\pi(x) \propto R(x)$. This is achieved using learning objectives like trajectory balance (TB; Malkin et al., 2022) to learn $P_F(-|s; \theta)$, $P_B(-|s; \theta)$ and $Z_\theta$, which approximate the forward and backward policies and partition function, parameterized by $\theta$. We refer the reader to Bengio et al. (2021b); Malkin et al. (2022) for a more thorough introduction to GFlowNets.

¹If the object is constructed in a canonical order (say, a string constructed from left to right), $\mathcal{G}$ is a tree.
3 MULTI-OBJECTIVE GFLOWNETS
We broadly categorize Multi-Objective GFlowNets (MOGFNs) as GFlowNets which solve a family of sub-problems derived from a Multi-Objective Optimization (MOO) problem. We first consider solving a family of MOO sub-problems simultaneously with preference-conditional GFlowNets, followed by MOGFN-AL, which solves a sequence of MOO sub-problems.
3.1 PREFERENCE-CONDITIONAL GFLOWNETS
Whereas a GFlowNet learns how to sample according to a single reward function, reward-conditional GFlowNets (Bengio et al., 2021b) are a generalization of GFlowNets that simultaneously model a family of distributions associated with a corresponding family of reward functions. Let $\mathcal{C}$ denote a set of values $c$, with each $c \in \mathcal{C}$ inducing a unique reward function $R(x|c)$. We can define a family of weighted DAGs $\{\mathcal{G}_c = (\mathcal{S}_c, \mathcal{E}),\ c \in \mathcal{C}\}$ which describe the construction of $x \in \mathcal{X}$, with conditioning information $c$ available at all states in $\mathcal{S}_c$. We denote $P_F(-|s, c)$ and $P_B(-|s', c)$ as the conditional forward and backward policies, $Z(c) = \sum_{x \in \mathcal{X}} R(x|c)$ as the conditional partition function, and $\pi(x|c)$ as the marginal likelihood of sampling trajectories $\tau$ from $P_F$ terminating in $x$ given $c$. The learning objective in reward-conditional GFlowNets is thus estimating $P_F(-|s, c)$ such that $\pi(x|c) \propto R(x|c)$. We refer the reader to Bengio et al. (2021b) for a more formal discussion on conditional GFlowNets.
Recall from Section 2.1 that MOO problems can be decomposed into a family of single-objective problems each defined by a preference ω over the objectives. Thus, we can employ reward-conditional GFlowNets to model the family of reward functions by using as the conditioning set C the d-simplex ∆d spanned by the preferences ω over d objectives.
Preference-conditional GFlowNets (MOGFN-PC) are reward-conditional GFlowNets conditioned on the preferences ω ∈ ∆d over a set of objectives {R1(x), . . . , Rd(x)}. In other words, MOGFN-PC model the family of reward functions R(x|ω) where R(x|ω) itself corresponds to a scalarization of the MOO problem. We consider three scalarization techniques, which are discussed in Appendix B:
• Weighted-sum (WS) (Ehrgott, 2005): $R(x|\omega) = \sum_{i=1}^{d} \omega_i R_i(x)$
• Weighted-log-sum (WL): $R(x|\omega) = \prod_{i=1}^{d} R_i(x)^{\omega_i}$
• Weighted-Tchebycheff (WT) (Choo & Atkins, 1983): $R(x|\omega) = \min_{1 \leq i \leq d} \omega_i |R_i(x) - z^\star_i|$
MOGFN-PC is not constrained to any scalarization function, and can incorporate any user-defined scalarization scheme that fits the desired optimization needs.
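For concreteness, the three scalarizations can be implemented in a few lines. The sketch below is illustrative (the ideal point z_star is assumed to be supplied by the user) rather than the exact code used in our experiments.

import numpy as np

def weighted_sum(r, w):
    return float(np.dot(w, r))

def weighted_log_sum(r, w, eps=1e-8):
    # Product of rewards raised to the preference weights, in log space.
    return float(np.exp(np.dot(w, np.log(np.maximum(r, eps)))))

def weighted_tchebycheff(r, w, z_star):
    return float(np.min(w * np.abs(r - z_star)))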
Training MOGFN-PC The procedure to train MOGFN-PC, or any reward-conditional GFlowNet, closely follows that of a standard GFlowNet and is described in Algorithm 1. The objective is to learn
the parameters $\theta$ of the forward and backward conditional policies $P_F(-|s, \omega; \theta)$ and $P_B(-|s', \omega; \theta)$, and the log-partition function $\log Z_\theta(\omega)$. To this end, we consider an extension of the trajectory balance objective for reward-conditional GFlowNets:

$$\mathcal{L}(\tau, \omega; \theta) = \left( \log \frac{Z_\theta(\omega) \prod_{s \to s' \in \tau} P_F(s'|s, \omega; \theta)}{R(x|\omega) \prod_{s \to s' \in \tau} P_B(s|s', \omega; \theta)} \right)^2. \qquad (2)$$
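A minimal PyTorch sketch of this preference-conditional trajectory balance loss, assuming the caller supplies per-step log-probabilities from the two conditional policies (tensor names are illustrative):

import torch

def tb_loss(log_Z_w, log_pf_steps, log_pb_steps, log_reward):
    # log_Z_w:      scalar log Z_theta(omega) for the sampled preference
    # log_pf_steps: (T,) forward log-probs log P_F(s'|s, omega; theta) along tau
    # log_pb_steps: (T,) backward log-probs log P_B(s|s', omega; theta) along tau
    # log_reward:   scalar log R(x|omega) of the terminal object
    numerator = log_Z_w + log_pf_steps.sum()
    denominator = log_reward + log_pb_steps.sum()
    return (numerator - denominator) ** 2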
One important component is the distribution $p(\omega)$ used to sample preferences during training. $p(\omega)$ influences the regions of the Pareto front that are captured by MOGFN-PC. In our experiments, we use a Dirichlet($\alpha$) distribution to sample preferences $\omega$, which are encoded with thermometer encoding (Buckman et al., 2018) when input to the policy. Following prior work, we also use an exponent $\beta$ for the reward $R(x|\omega)$, i.e., $\pi(x|\omega) \propto R(x|\omega)^{\beta}$. This incentivizes the policy to focus on the modes of $R(x|\omega)$, which is critical for the generation of high-reward and diverse candidates.
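To illustrate these two training-time components, the sketch below samples a preference from Dirichlet(α) and thermometer-encodes each coordinate into a fixed number of bins; the bin count and function names are illustrative assumptions.

import numpy as np

def sample_preference(d, alpha=1.0, rng=np.random.default_rng()):
    # omega lies on the d-simplex: non-negative entries summing to 1.
    return rng.dirichlet(alpha * np.ones(d))

def thermometer(omega, n_bins=50):
    # Each omega_i in [0, 1] becomes a 0/1 vector whose leading entries
    # are 1 up to the bin containing omega_i (a "filled thermometer").
    bins = np.linspace(0.0, 1.0, n_bins, endpoint=False)
    return (omega[:, None] > bins[None, :]).astype(np.float32).reshape(-1)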
MOGFN-PC and MOReinforce MOGFN-PC is closely related to MOReinforce (Lin et al., 2021) in that both learn a preference-conditional policy to sample Pareto-optimal candidates. The key difference is the learning objective: MOReinforce uses a multi-objective version of REINFORCE (Williams, 1992), whereas MOGFN-PC uses a preference-conditional GFlowNet objective as in Equation (2). As discussed in Section 2.1, each point on the Pareto front (corresponding to a unique ω) can be the image of multiple candidates in the Pareto set. MOReinforce, given a preference ω, will converge to sampling a single candidate that maximizes R(x|ω). MOGFN-PC, on the other hand, samples from R(x|ω), which enables the generation of diverse candidates from the Pareto set for a given ω. This is a key feature of MOGFN-PC whose advantage we empirically demonstrate in Section 5.
3.2 MULTI-OBJECTIVE ACTIVE LEARNING WITH GFLOWNETS
In many practical scenarios, the objective functions of interest are computationally expensive. For instance, in the drug discovery scenario, evaluating objectives such as the binding energy to a target even in simulations can take several hours. Sample-efficiency, in terms of number of evaluations of the objective functions, and diversity of candidates, thus become critical in such scenarios. Black-box optimization approaches involving active learning (Zuluaga et al., 2013), particularly multi-objective Bayesian optimization (MOBO) methods (Shah & Ghahramani, 2016; Garnett, 2022) are powerful approaches in these settings.
MOBO uses a probabilistic model to approximate the objectives R = {R_1, ..., R_d} and leverages the epistemic uncertainty in the predictions of the model as a signal for prioritizing potentially useful candidates. The optimization is performed over M rounds, where each round i consists of generating a batch of candidates B given all the candidates D_i proposed in the previous rounds. The batch B is then evaluated using the true objective functions. The candidates are generated in each round by maximizing an acquisition function a, which combines the predictions with their epistemic uncertainty into a single scalar utility score. We note that each round effectively defines a scalarization of the MOO problem, so the MOO problem is decomposed into a sequence of per-round single-objective problems.
We broadly define MOGFN-AL as approaches which use GFlowNets to generate candidates in each round of an active learning loop for multi-objective optimization. MOGFN-AL tackles MOO through a sequence of single-objective sub-problems defined by acquisition function a. As such, MOGFN-AL can be viewed as a multi-objective extension of GFlowNet-AL (Jain et al., 2022). In this work, we consider an instantiation of MOGFN-AL for biological sequence design summarized in Algorithm 2 (Appendix A), building upon the framework proposed by Stanton et al. (2022).
We start with an initial dataset $\mathcal{D}_0 = \{(x_i, y_i)\}_{i=1}^{N}$ of candidates $x_i \in \mathcal{X}$ and their evaluations with the true objectives, $y_i = R(x_i)$. $\mathcal{D}_i$ is used to train a surrogate probabilistic model (proxy) of the true objectives, $\hat{f}: \mathcal{X} \to \mathbb{R}^d$, which we parameterize as a multi-task Gaussian process (Shah & Ghahramani, 2016) with a deep kernel (DKL GP; Maddox et al., 2021a;b). Using this proxy, the acquisition function defines the utility to be maximized, $a: \mathcal{X} \times \mathcal{F} \to \mathbb{R}$, where $\mathcal{F}$ denotes the space of functions represented by DKL GPs. In our work we use as acquisition function a noisy expected hypervolume improvement (NEHVI; Daulton et al., 2020).
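To illustrate the acquisition computation, below is a minimal Monte Carlo sketch of expected hypervolume improvement for two objectives, assuming the surrogate exposes a posterior mean and standard deviation per candidate. The helper names and the simple 2-D hypervolume routine are our own simplifications of the NEHVI estimator of Daulton et al. (2020), not the implementation used in our experiments.

import numpy as np

def hypervolume_2d(points, ref):
    # points: (n, 2) objective values (maximization); ref: reference point
    # dominated by every point. Sweeps points by decreasing first objective.
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(points.tolist(), reverse=True):
        hv += (x - ref[0]) * max(y - prev_y, 0.0)
        prev_y = max(prev_y, y)
    return hv

def ehvi_mc(mu, sigma, front, ref, n_samples=128, rng=np.random.default_rng()):
    # Monte Carlo EHVI: average hypervolume gain over posterior samples
    # of the candidate's (independent Gaussian) outcome.
    base = hypervolume_2d(front, ref)
    gains = [hypervolume_2d(np.vstack([front, rng.normal(mu, sigma)]), ref) - base
             for _ in range(n_samples)]
    return float(np.mean(gains))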
We use GFlowNets to propose candidates at each round i by generating mutations for candidates x ∈ P̂i where P̂i is the set of non-dominated candidates in Di. Given a sequence x, the GFlowNet
generates a set of mutations $m = \{(l_i, v_i)\}_{i=1}^{T}$, where $l_i \in \{1, \dots, |x|\}$ is the location to be replaced and $v_i \in \mathbb{A}$ is the token to replace $x[l_i]$, while $T$ is the number of mutations. This set is generated sequentially, such that each mutation is sampled from $P_F$ conditioned on $x$ and the mutations sampled so far. Let $x'_m$ be the sequence resulting from applying mutations $m$ to sequence $x$. The reward for a set of sampled mutations for $x$ is the value of the acquisition function on $x'_m$, i.e., $R(m, x) = a(x'_m | \hat{f})$. This approach of generating mutations to existing sequences provides a key advantage over generating sequences token-by-token as done in prior work (Jain et al., 2022) – better scaling for longer sequences. We show empirically in Section 5.3 that generating mutations with GFlowNets results in more diverse candidates and faster improvements to the Pareto front than LaMBO (Stanton et al., 2022).
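Applying a sampled mutation set to a sequence is straightforward; a minimal sketch with illustrative names:

def apply_mutations(x, mutations):
    # x: sequence as a list of tokens; mutations: list of (location, token) pairs.
    x_prime = list(x)
    for loc, token in mutations:
        x_prime[loc] = token  # replace the token at the sampled location
    return x_prime

# Example: apply_mutations(list("ACGTAC"), [(2, "T"), (5, "G")])
# returns ['A', 'C', 'T', 'T', 'A', 'G']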
4 RELATED WORK
Evolutionary Algorithms (EA) Traditionally, evolutionary algorithms such as NSGA-II have been widely used in various multi-objective optimization problems (Ehrgott, 2005; Konak et al., 2006; Blank & Deb, 2020). More recently, Miret et al. (2022) incorporated graph neural networks into evolutionary algorithms, enabling them to tackle large combinatorial spaces. Unlike MOGFNs, evolutionary algorithms do not leverage any type of data, including past experience, and therefore must solve each MOO instance from scratch rather than amortizing computation during training in order to quickly generate solutions at run-time. Evolutionary algorithms, however, can be augmented with MOGFNs for generating mutations to improve efficiency, as in Section 3.2.
Multi-Objective Reinforcement Learning MOO problems have also received significant interest in the reinforcement learning (RL) literature (Hayes et al., 2022). Traditional approaches broadly consist of learning sets of Pareto-dominant policies (Roijers et al., 2013; Van Moffaert & Nowé, 2014; Reymond et al., 2022). Recent work has focused on extending deep RL algorithms to multi-objective settings, such as Envelope-MOQ (Yang et al., 2019), MO-MPO (Abdolmaleki et al., 2020; 2021), and MOReinforce (Lin et al., 2021). A general shortcoming of RL-based approaches, which persists in the multi-objective setting, is that they discover only a single mode of the reward function and thus cannot generate diverse candidates. In contrast, MOGFNs sample candidates proportionally to the reward, implicitly resulting in diverse candidates.
Multi-Objective Bayesian Optimization (MOBO) Bayesian optimization (BO) has been used in the context of MOO when the objectives are expensive to evaluate and sample-efficiency is a key consideration. MOBO approaches consist of learning a surrogate model of the true objective functions, which is used to define an acquisition function such as expected hypervolume improvement (Emmerich et al., 2011; Daulton et al., 2020; 2021) and max-value entropy search (Belakaria et al., 2019), as well as scalarization-based approaches (Paria et al., 2020; Zhang & Golovin, 2020). Stanton et al. (2022) proposed LaMBO, which uses language models in conjunction with BO for multi-objective sequence design problems. The key drawbacks of MOBO approaches are that they do not consider the need for diversity in generated candidates and that they mainly consider continuous state spaces. As we discuss in Section 3.2, MOBO approaches can be augmented with GFlowNets for diverse candidate generation in discrete spaces.
Other Works Zhao et al. (2022) introduced LaMOO which tackles the MOO problem by iteratively splitting the candidate space into smaller regions, whereas Daulton et al. (2022) introduce MORBO, which performs BO in parallel on multiple local regions of the candidate space. Both these methods, however, are limited to continuous candidate spaces.
5 EMPIRICAL RESULTS
In this section, we present our empirical findings across a wide range of tasks, from sequence design to molecule generation. The experiments cover two distinct classes of problems in the context of GFlowNets: where G is a DAG and where it is a tree. Through our experiments, we aim to answer the following questions:
Q1 Can MOGFNs model the preference-conditional reward distribution?
Q2 Can MOGFNs sample Pareto-optimal candidates?
Q3 Are candidates sampled by MOGFNs diverse?
Q4 Do MOGFNs scale to high-dimensional problems relevant in practice?
Metrics: We rely on standard metrics such as the Hypervolume (HV) and R2 indicators, as well as the Generational Distance+ (GD+). To measure diversity we use the Top-K Diversity and Top-K Reward metrics of Bengio et al. (2021a). We detail all metrics in Appendix D. For all our empirical evaluations we follow the same protocol. First, we sample a set of preferences which are fixed for all the methods. For each preference we sample 128 candidates from which we pick the top 10, compute their scalarized reward and diversity, and report the averages over preferences. We then use these samples to compute the HV and R2 indicators. We pick the best hyperparameters for all methods based on the HV and report the mean and standard deviation over 3 seeds for all quantities.
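As an example of the diversity metric, Top-K Diversity can be computed as the mean pairwise distance among the top-K candidates. The sketch below uses edit distance, a reasonable choice for the sequence tasks; the function names and the choice of distance per domain are assumptions on our part.

import itertools

def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def topk_diversity(candidates, rewards, k=10):
    # Mean pairwise edit distance among the k highest-reward candidates.
    top = [c for _, c in sorted(zip(rewards, candidates), reverse=True)[:k]]
    pairs = list(itertools.combinations(top, 2))
    return sum(edit_distance(a, b) for a, b in pairs) / len(pairs)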
Baselines: We consider the closely related MOReinforce (Lin et al., 2021) as a baseline. We also study its variants MOSoftQL and MOA2C which use Soft Q-Learning (Haarnoja et al., 2017) and A2C (Mnih et al., 2016) in place of REINFORCE. We also compare against Envelope-MOQ (Yang et al., 2019), another popular multi-objective reinforcement learning method. For fragment-based molecule generation we consider an additional baseline MARS (Xie et al., 2021), a relevant MCMC approach for this task. To keep comparisons fair, we omit baselines like LaMOO (Zhao et al., 2022) and MORBO (Daulton et al., 2022) as they are designed for continuous spaces and rely on latent representations from pre-trained models for discrete tasks like molecule generation.
5.1 SYNTHETIC TASKS
5.1.1 HYPER-GRID
We first study the ability of MOGFN-PC to capture the preference-conditional reward distribution in a multi-objective version of the HyperGrid task from Bengio et al. (2021a). The goal here is to sample points within a HyperGrid with probability proportional to a reward. We consider the following objectives for our experiments: branin(x), currin(x), shubert(x)².
Since the state space is small, we can compute the distribution learned by MOGFN-PC in closed form. In Figure 1a, we visualize π(x|ω), the distribution learned by MOGFN-PC conditioned on a set of fixed preference vectors ω, and contrast it with the true distribution R(x|ω) in a 32 × 32 hypergrid with 3 objectives. We observe that π(−|ω) and R(−|ω) are very similar. To quantify this, we compute $\mathbb{E}_x\left[|\pi(x|\omega) - R(x|\omega)/Z(\omega)|\right]$ averaged over a set of 64 preferences, and find a difference of about $10^{-4}$. Note that MOGFN-PC is able to capture all the modes in the distribution, which suggests the candidates sampled from π would be diverse. Further, we compute the GD+ metric for the Pareto front of candidates generated with MOGFN-PC, which averages 0.42. For more details about the task and additional results, refer to Appendix E.1.
5.1.2 N-GRAMS TASK
We consider a version of the synthetic sequence design task from Stanton et al. (2022). The task consists of generating strings with the objectives given by occurrences of a set of d n-grams.
In the results summarized in Table 1, we consider 3 Bigrams (with common characters in the bigrams resulting in correlated objectives) and 3 Unigrams (conflicting objectives) as the objectives. MOGFN-PC outperforms the baselines in terms of the MOO objectives while generating diverse candidates.
²We present additional results with more objectives in Appendix E.1.
Since the objective counts occurrences of n-grams, diversity is limited by performance, i.e., high-scoring sequences will have lower diversity, which explains the higher diversity of MOSoftQL. We note that the MOReinforce and Envelope-MOQ baselines struggle in this task, potentially due to longer trajectories with sparse rewards. MOGFN-PC adequately models the trade-off between conflicting objectives in the 3 Monograms task, as illustrated by the Pareto front of generated candidates in Figure 1b. For the 3 Bigrams task with correlated objectives, Figure 1c demonstrates that MOGFN-PC generates candidates which can simultaneously maximize multiple objectives. We refer the reader to Appendix E.2 for more task details and additional results with different numbers of objectives and varying sequence length.
5.2 BENCHMARK TASKS
5.2.1 QM9
We first consider a small-molecule generation task based on the QM9 dataset (Ramakrishnan et al., 2014). We generate molecules atom-by-atom and bond-by-bond with up to 9 atoms and use 4 reward signals. The main reward is obtained via an MXMNet (Zhang et al., 2020) proxy trained on QM9 to predict the HOMO-LUMO gap. The other rewards are Synthetic Accessibility (SA), a molecular weight target, and a molecular logP target. Rewards are normalized to be between 0 and 1, but the gap proxy can exceed 1, and so is clipped at 2. We train the models with 1M molecules and present the results in Table 2, showing that MOGFN-PC outperforms all baselines in terms of Pareto performance and diverse candidate generation.
5.2.2 FRAGMENT-BASED MOLECULE GENERATION
We evaluate our method on the fragment-based (Kumar et al., 2012) molecular generation task of Bengio et al. (2021a), where the task is to generate molecules by linking fragments to form a junction tree (Jin et al., 2020). The main reward function is obtained via a pretrained proxy, available from Bengio et al. (2021a), trained on molecules docked with AutodockVina (Trott & Olson, 2010) for the sEH target. The other rewards are based on Synthetic Accessibility (SA), drug likeness (QED), and a molecular weight target. We detail the reward construction in Appendix E.4. Similarly to QM9, we train MOGFN-PC to generate 1M molecules and report the results in Table 3. We observe that MOGFN-PC is consistently outperforming baselines not only in terms of HV and R2, but also candidate diversity score. Note that we do not report reward and diversity scores for MARS, since the lack of preference conditioning would make it an unfair comparison.
5.2.3 DNA SEQUENCE GENERATION
As a practical domain where the GFlowNet graph is a tree, we consider the generation of DNA aptamers, single-stranded nucleotide sequences that are popular in biological polymer design due to their specificity and affinity as sensors in crowded biochemical environments (Zhou et al., 2017; Corey et al., 2022; Yesselman et al., 2019; Kilgour et al., 2021). We generate sequences by adding one nucleobase (A, C, T or G) at a time, with a maximum length of 60 bases. We consider three objectives:
the free energy of the secondary structure calculated with the software NUPACK (Zadeh et al., 2011), the number of base pairs, and the inverse of the sequence length to favour shorter sequences.
We report the results in Table 4. In this case, the best Pareto performance is obtained by the multi-objective RL algorithm MOReinforce (Lin et al., 2021). However, it achieves so by finding a quasi-trivial solution with the pattern GCGCGC... for most lengths, yielding very low diversity. In contrast, MOGFN-PC obtains much higher diversity and Top-K rewards but worse Pareto performance. An extended discussion, ablation study and further details are provided in Appendix E.5.
5.3 ACTIVE LEARNING
Finally, to evaluate MOGFN-AL, we consider the Proxy RFP task from Stanton et al. (2022), with the aim of discovering novel proteins with red fluorescence properties, optimizing for folding stability and solvent-accessible surface area. We adopt all the experimental details (described in Appendix E.6) from Stanton et al. (2022), using MOGFN-AL for candidate generation. In addition to LaMBO, we use a model-free (NSGA-II) and a model-based EA from Stanton et al. (2022) as baselines. We observe in Figure 2a that MOGFN-AL results in significant gains to the improvement in Hypervolume relative to the initial dataset, within a given budget of black-box evaluations. In fact, MOGFN-AL is able to match the performance of LaMBO within about half the number of black-box evaluations.
Figure 2b illustrates the Pareto frontier of candidates generated with MOGFN-AL, which dominates the Pareto frontier of the initial dataset. As the candidates are generated by mutating sequences in the existing Pareto front, we also highlight the sequences that are mutations of each sequence in the initial dataset with the same color. To quantify the diversity of the generated candidates we measure the average e-value from DIAMOND (Buchfink et al., 2021) between the initial Pareto front and the Pareto frontier of generated candidates. Table 2c shows that MOGFN-AL generates candidates that are more diverse than the baselines.
6 ANALYSIS
In this section, we isolate the important components of MOGFN-PC: the distribution p(ω) for sampling preferences during training, the reward exponent β, and the reward scalarization R(x|ω), to understand the impact of each component on Pareto performance and diversity. We consider the 3 Bigrams task discussed in Section 5.1.2 and the fragment-based molecule generation task from Section 5.2.2 for this analysis, and provide further results in the Appendix.
Impact of p(ω) To examine the effect of p(ω), which controls the coverage of the Pareto front, we set it to Dirichlet(α) and vary α ∈ {0.1, 1, 10}. This results in ω being sampled from different regions of ∆d. Specifically, α = 1 corresponds to a uniform distribution over ∆d, α > 1 is skewed towards the center of ∆d whereas α < 1 is skewed towards the corners of ∆d. In Table 5 and Table 6 we observe that α = 1 results in the best performance. Despite the skewed distribution with α = 0.1 and α = 10, we still achieve performance close to that of α = 1 indicating that MOGFN-PC is able to interpolate to preferences not sampled during training. Note that diversity is not affected significantly by p(ω).
Impact of β During training, β controls the concentration of the reward density around modes of the distribution. For large values of β the reward density around the modes becomes more peaked, and vice-versa. In Tables 5 and 6 we present the results obtained by varying β ∈ {16, 32, 48}. As β increases, MOGFN-PC is incentivized to generate samples closer to the modes of R(x|ω), resulting in better Pareto performance. However, with high β values, the reward density is concentrated close to the modes and there is a negative impact on the diversity of the candidates.
Choice of scalarization R(x|ω) Next, we analyse the effect of the scalarization defining R(x|ω) used for training. The set of R(x|ω) for different ω specifies the family of MOO sub-problems and thus has a critical impact on Pareto performance. Tables 5 and 6 include results for the Weighted Sum (WS), Weighted-log-sum (WL) and Weighted Tchebycheff (WT) scalarizations. Note that we do not compare the Top-K Reward, as different scalarizations cannot be compared directly. WS scalarization results in the best performance. WL scalarization, on the other hand, is not formally guaranteed to cover the Pareto front and consequently results in poor Pareto performance. We suspect the poor performance of WT and WL is in part also due to the harder reward landscapes they induce.
7 CONCLUSION
In this work, we have empirically demonstrated the generalization of GFlowNets to conditional GFlowNets for multi-objective optimization problems (MOGFN) to promote the generation of diverse optimal candidates. We presented two instantiations of MOGFN: MOGFN-PC, which leverages reward-conditional GFlowNets (Bengio et al., 2021b) to model a family of single-objective sub-problems, and MOGFN-AL, which sequentially solves a set of single-objective problems defined by multi-objective acquisition functions. Finally, we empirically demonstrated the efficacy of MOGFNs for generating diverse Pareto-optimal candidates on sequence and graph generation tasks.
As a limitation, we identify that in certain domains, such as DNA sequence generation, MOGFN generates diverse candidates but currently does not match RL algorithms in terms of Pareto performance. The analysis in Section 6 hints that the distribution of sampling preferences p(ω) affects Pareto performance. Since for certain practical applications only a specific region of the Pareto front is of interest, future work may explore gradient-based techniques to learn preferences for more structured exploration of the preference space. Within the context of MOGFN-AL, an interesting research avenue is the development of preference-conditional acquisition functions.
Reproducibility Statement We include the code necessary to replicate experiments with our submission and provide detailed description of experimental setups in the Appendix. All datasets and pretrained models used are publicly available or included in the supplementary materials.
Ethics Statement We acknowledge that as with all machine learning algorithms, there is potential for dual use of multi-objective GFlowNets by nefarious agents. This work was motivated by the application of machine learning to accelerate scientific discovery in areas that can benefit humanity. We explicitly discourage the use of multi-objective GFlowNets in applications that may be harmful to others.
A ALGORITHMS
We summarize the algorithms for MOGFN-PC and MOGFN-AL here.
Algorithm 1: Training preference-conditional GFlowNets
Input: p(ω): distribution for sampling preferences; β: reward exponent; δ: mixing coefficient for uniform actions in the sampling policy; N: number of training steps;
Initialize: (PF(s′|s, ω), PB(s|s′, ω), log Z(ω)): conditional GFlowNet with parameters θ;
for i = 1 to N do
    Sample preference ω ∼ p(ω);
    Sample trajectory τ following the policy π̂ = (1 − δ)PF + δ·Uniform;
    Compute the reward R(x|ω)^β for the generated samples and the corresponding loss L(τ, ω; θ) as in Equation 2;
    Update the parameters θ with gradients from the loss, ∇θL(τ, ω; θ);
end
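To make the loss concrete, here is a minimal, self-contained PyTorch sketch of the trajectory-balance computation in the inner loop; the tensor values are dummy stand-ins for the quantities produced by rolling out the conditional policy:

```python
import torch

# Dummy stand-ins for one sampled trajectory tau under preference omega.
log_Z_w   = torch.tensor(1.0, requires_grad=True)   # log Z(omega) head
logpf_sum = torch.tensor(-4.2, requires_grad=True)  # sum of log PF(s'|s, omega) over tau
logpb_sum = torch.tensor(-3.1)                      # sum of log PB(s|s', omega) over tau
reward, beta = torch.tensor(0.7), 32.0              # R(x|omega) and the reward exponent

# Trajectory-balance loss of Equation 2 with the exponentiated reward R(x|omega)^beta:
# (log Z(omega) + sum log PF - log R(x|omega)^beta - sum log PB)^2
loss = (log_Z_w + logpf_sum - beta * reward.log() - logpb_sum) ** 2
loss.backward()  # gradients flow into the policy and log-partition parameters
```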
Algorithm 2: Training MOGFN-AL
Input: R = {R1, . . . , Rd}: oracles to evaluate candidates x and return the true objectives (R1(x), . . . , Rd(x)); D0 = {(xi, yi)}: initial dataset with yi = R(xi); f̂: probabilistic surrogate model of the posterior over R given a dataset D; a(x|f̂): acquisition function computing a scalar utility for x given f̂; πθ: learnable GFlowNet policy; b: size of the candidate batch to be generated; N: number of active learning rounds;
Initialize: f̂, πθ;
for i = 1 to N do
    Fit f̂ on the dataset Di−1;
    Extract the set of non-dominated candidates P̂i−1 from Di−1;
    Train πθ to generate mutations for x ∈ P̂i−1, using a(−|f̂) as the reward;
    Generate the batch B = {x′_{1,m_1}, . . . , x′_{b,m_b}} by sampling x′_i from P̂i−1 and applying to it mutations m_i sampled from πθ;
    Evaluate the batch B with R to obtain D̂i = {(x1, R(x1)), . . . , (xb, R(xb))};
    Update the dataset: Di = D̂i ∪ Di−1;
end
Result: Approximate Pareto set P̂N
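The extraction of non-dominated candidates in Algorithm 2 can be implemented with a simple pairwise dominance check; a self-contained NumPy sketch (maximization in every objective; the helper name is ours):

```python
import numpy as np

def non_dominated(Y):
    """Indices of non-dominated rows of Y (higher is better in every column)."""
    keep = []
    for i in range(len(Y)):
        # row j dominates row i if it is >= everywhere and > somewhere
        dominated = np.all(Y >= Y[i], axis=1) & np.any(Y > Y[i], axis=1)
        if not dominated.any():
            keep.append(i)
    return keep

Y = np.array([[1.0, 0.2], [0.8, 0.9], [0.5, 0.5], [0.9, 0.9]])
print(non_dominated(Y))  # -> [0, 3]; (0.8, 0.9) is dominated by (0.9, 0.9)
```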
B SCALARIZATION
Scalarization is a popular approach for tackling multi-objective optimization problems, and MOGFN-PC can build upon any scalarization approach. We consider three choices. Weighted-sum (WS) scalarization has been widely used in the literature. WS finds candidates on the convex hull of the Pareto front (Ehrgott, 2005). Under the assumption that the Pareto front is convex, every Pareto-optimal solution is a solution to a weighted sum problem, and the solution to every weighted sum problem is Pareto optimal. Weighted Tchebycheff (WT), proposed by Choo & Atkins (1983), is an alternative designed for non-convex Pareto fronts. Any Pareto-optimal solution can be found by solving the weighted Tchebycheff problem with appropriate weights, and the solution for any weights corresponds to a weakly Pareto-optimal solution of the original problem (Pardalos et al., 2017). Lin et al. (2021) demonstrated through their empirical results that WT can be used with neural-network-based policies. The third scheme we consider, Weighted-log-sum (WL), has not been considered in prior work. We hypothesized that in some practical scenarios we might want to ensure that all objectives are optimized, since, for instance, in WS the scalarized reward can be dominated by a single reward. WL, which considers the weighted sum in log space, can potentially help with this drawback. However, as discussed in Section 6, in practice WL can be hard to optimize, leading to poor performance.
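For reference, all three scalarizations have direct one-line implementations; a minimal NumPy sketch (rewards assumed in [0, 1], preferences on the simplex, z* the utopian point):

```python
import numpy as np

def weighted_sum(R, w):                  # WS: sum_i w_i * R_i
    return float(np.dot(w, R))

def weighted_log_sum(R, w):              # WL: prod_i R_i^{w_i}
    return float(np.prod(np.power(R, w)))

def weighted_tchebycheff(R, w, z_star):  # WT: min_i w_i * |R_i - z*_i|
    return float(np.min(np.asarray(w) * np.abs(np.asarray(R) - z_star)))

R, w = np.array([0.9, 0.4]), np.array([0.5, 0.5])
print(weighted_sum(R, w), weighted_log_sum(R, w), weighted_tchebycheff(R, w, z_star=1.0))
```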
C ADDITIONAL ANALYSIS
Can MOGFN-PC match Single Objective GFNs? To evaluate how well MOGFN-PC models the family of rewards R(x|ω), we consider a comparison with single-objective GFlowNets. More specifically, we first sample a set of 10 preferences ω1, . . . , ω10, and train a standard single-objective GFlowNet using the weighted-sum scalar reward for each preference. We then generate N = 128 candidates from each GFlowNet throughout training and compute the mean reward of the top 10 candidates for each preference. We average this top-10 reward across {ω1, . . . , ω10} and call it Rso. We then train MOGFN-PC, apply the same procedure with the preferences {ω1, . . . , ω10}, and call the resulting mean of top-10 rewards Rmo. We plot the value of the ratio Rmo/Rso in Figure 3. We observe that the ratio stays close to 1, indicating that MOGFN-PC can indeed model the entire family of rewards simultaneously, at least as fast as a single-objective GFlowNet could.
Effect of Model Capacity and Architecture Finally, we look at the effect of model size in training MOGFN-PC. As MOGFN-PC models a conditional distribution, i.e., an entire family of functions as described before, we expect capacity to play a crucial role, since the amount of information to be learned is higher than for a single-objective GFN. We increase the model size in the 3 Bigrams task to study this effect, and see in Table 7 that larger models do help with performance, although the performance plateaus after a point. We suspect that in order to fully utilize the model capacity we might need better training objectives.
D METRICS
In this section we discuss the various metrics that we used to report the results in Section 5.
1. Generational Distance Plus (GD+) (Ishibuchi et al., 2015): This metric measures the Euclidean distance between the solutions of the Pareto approximation and the true Pareto front, taking the dominance relation into account. Calculating GD+ requires knowledge of the true Pareto front, and hence we only report this metric for the Hypergrid experiments (Section 5.1.1).
2. Hypervolume (HV) Indicator (Fonseca et al., 2006): This is a standard metric reported in MOO works, which measures the volume in the objective space, with respect to a reference point, spanned by a set of non-dominated solutions in a Pareto front approximation.
3. R2 Indicator (Hansen & Jaszkiewicz, 1994): R2 provides a monotonic metric comparing two Pareto front approximations using a set of uniform reference vectors and a utopian point z∗ representing the ideal solution of the MOO. Specifically, we define a set of uniform reference vectors λ ∈ Λ that cover the space of the MOO and then calculate
$$R_2(\Gamma, \Lambda, z^*) = \frac{1}{|\Lambda|} \sum_{\lambda \in \Lambda} \min_{\gamma \in \Gamma} \left\{ \max_{i \in 1, \dots, k} \lambda_i \, |z^*_i - \gamma_i| \right\},$$
where γ ∈ Γ corresponds to the set of solutions in a given Pareto front approximation and z∗ is the utopian point corresponding to the ideal solution of the MOO. Generally, R2 calculations are performed with z∗ equal to the origin and all objectives transformed to a minimization setting, which serves to preserve the monotonic nature of the metric. This holds for our experiments as well (see the NumPy sketch after this list).
4. Top-K Reward: This metric was originally used in Bengio et al. (2021a), and we extend it to our multi-objective setting. For MOGFN-PC, we sample N candidates per test preference, pick the top-k candidates (k < N) with the highest scalarized rewards, and calculate their mean. We repeat this for all test preferences enumerated from the simplex and report the average top-k reward score.
5. Top-K Diversity: This metric was also originally used in Bengio et al. (2021a), and we again extend it to our multi-objective setting to quantify the diversity of the generated candidates. Given a distance metric d(x, y) between candidates x and y, we count candidate pairs as diverse when d(x, y) exceeds a threshold ϵ. For MOGFN-PC, we sample N candidates per test preference, pick the top-k candidates based on their diversity scores, and take the mean (see the sketch after this list). We repeat this for all test preferences sampled from the simplex and report the average top-k diversity score. We use the edit distance for sequences, and 1 minus the Tanimoto similarity for molecules.
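As referenced above, a direct NumPy transcription of the R2 computation (minimization convention, with z∗ at the origin); the example front and reference vectors are illustrative:

```python
import numpy as np

def r2_indicator(front, ref_vectors, z_star):
    # mean over lambda of: min over gamma of max_i lambda_i * |z*_i - gamma_i|
    front, z_star = np.asarray(front), np.asarray(z_star)
    return float(np.mean([
        min(np.max(lam * np.abs(z_star - g)) for g in front)
        for lam in np.asarray(ref_vectors)
    ]))

front = [[0.2, 0.9], [0.7, 0.6], [0.9, 0.1]]                # Pareto approximation
lams = [[w, 1.0 - w] for w in np.linspace(0.05, 0.95, 10)]  # uniform reference vectors
print(r2_indicator(front, lams, z_star=[0.0, 0.0]))
```

And a simplified, self-contained sketch of Top-K diversity for sequences, scoring the k highest-reward candidates by their mean pairwise edit distance (the ϵ-thresholding variant described above follows the same pattern):

```python
from itertools import combinations

def edit_distance(a, b):
    # one-row dynamic-programming Levenshtein distance
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def top_k_diversity(candidates, rewards, k):
    top = [x for _, x in sorted(zip(rewards, candidates), reverse=True)[:k]]
    pairs = list(combinations(top, 2))
    return sum(edit_distance(a, b) for a, b in pairs) / len(pairs)

print(top_k_diversity(["ACGT", "AAAA", "ACGG", "TTTT"], [3, 1, 2, 1], k=3))
```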
E ADDITIONAL EXPERIMENTAL DETAILS
E.1 HYPER-GRID
Here we elaborate on the Hyper-Grid experimental setup discussed in Section 5.1.1. Consider an n-dimensional hypercube gridworld where each cell in the grid corresponds to a state. The agent starts at the top-left coordinate, marked as (0, 0, . . . ), and is allowed to move only towards the right, down, or to stop. When the agent performs the stop action, the trajectory terminates and the agent receives a non-zero reward. In this work, we consider the following reward functions: brannin(x), currin(x), sphere(x), shubert(x), beale(x). Figure 4 shows the heatmap of each reward function. Note that we normalize all the reward functions to lie between 0 and 1.
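A toy sketch of the environment dynamics described above; the class name and the stand-in reward are ours for illustration, not the exact benchmark code:

```python
import numpy as np

class HyperGrid:
    def __init__(self, horizon=32, ndim=2):
        self.horizon, self.ndim = horizon, ndim
        self.state = np.zeros(ndim, dtype=int)  # start at the (0, 0, ...) corner

    def step(self, action):  # action: a dimension index to increment, or "stop"
        if action == "stop":
            x = self.state / (self.horizon - 1)              # map cell to [0, 1]^ndim
            reward = float(np.exp(-np.sum((x - 0.5) ** 2)))  # stand-in objective
            return self.state, reward, True
        self.state[action] = min(self.state[action] + 1, self.horizon - 1)
        return self.state, 0.0, False

env = HyperGrid()
env.step(0); env.step(1)
print(env.step("stop"))  # terminal state, non-zero reward, done=True
```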
Additional Results To verify the efficacy of MOGFNs across different objective-set sizes, we perform additional experiments and measure the L1 loss and the GD+ metric. In Figure 5, we see that as the reward dimension increases, both the loss and GD+ increase. This is expected, because the number of rewards is indicative of the difficulty of the problem. We also present extended qualitative visualizations across more preferences in Figure 6.
Model Details and Hyperparameters For the MOGFN-PC policies we use an MLP with two hidden layers, each consisting of 64 units. We use LeakyReLU as our activation function, as in Bengio et al. (2021a). All models are trained with a learning rate of 0.01 using the Adam optimizer (Kingma & Ba, 2015) and a batch size of 128. We sample preferences ω from Dirichlet(α) with α = 1.5. We try two techniques for encoding the preferences: 1) vanilla encoding, where we use the raw values of the preference vectors, and 2) thermometer encoding (Buckman et al., 2018). In our experiments we have not observed a significant difference in performance between the two.
E.2 N-GRAMS TASK
Task Details The task is to generate sequences of some maximum length L, which we set to 36 for the experiments in Section 5.1.2. We consider a vocabulary (action space) of size 21, with 20 characters ["A", "R", "N", "D", "C", "E", "Q", "G", "H", "I", "L", "K", "M", "F", "P", "S", "T", "W", "Y", "V"] and a special token to indicate the end of the sequence. The rewards {R_i}_{i=1}^{d} are defined by the number of occurrences of a given set of n-grams in a sequence x. For instance, consider ["AB", "BA"] as the n-grams. The rewards for the sequence x = ABABC would be [2, 1]. We consider two choices of n-grams: (a) Unigrams: the number of occurrences of a set of unigrams induces conflicting objectives, since we cannot increase the number of occurrences of one unigram without replacing another in a string of a particular length; (b) Bigrams: given common characters within the bigrams, the occurrences of multiple bigrams can be increased simultaneously within a string of a fixed length. We also consider different sizes for the set of n-grams considered, i.e., different numbers of objectives. This allows us to evaluate the behaviour of MOGFN-PC on a variety of objective spaces. We summarize the specific objectives used in our experiments in Table 8. We normalize the rewards to [0, 1] in our experiments.
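Since the rewards are just (overlapping) substring counts, they admit a compact implementation; a sketch reproducing the example above:

```python
def ngram_rewards(seq, ngrams):
    # count overlapping occurrences of each n-gram in the sequence
    return [sum(seq[i:i + len(g)] == g for i in range(len(seq) - len(g) + 1))
            for g in ngrams]

print(ngram_rewards("ABABC", ["AB", "BA"]))  # -> [2, 1], as in the example above
```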
Model Details and Hyperparameters We build upon the implementation from Stanton et al. (2022) for the task: https://github.com/samuelstanton/lambo. For the string generation task, the backward policy PB is trivial (as there is only one parent for each node s ∈ S), so we only have to parameterize PF and logZ. As PF (−|s, ω) is a conditional policy, we use a Conditional Transformer encoder as the architecture. This consists of a Transformer encoder (Vaswani et al., 2017) with 3 hidden layers of dimension 64 and 8 attention heads to embed the current state (string generated so far) s. We have an MLP which embeds the preferences ω which are encoded using thermometer encoding with 50 bins. The embeddings of the state and preferences are concatenated and passed to a final MLP which generates a categorical distribution over the actions (vocabulary token). We use the same architecture for the baselines using a conditional policy – MOReinforce and MOSoftQL. For EnvelopeMOQ, which does not condition on the preferences, we use a standard Transformer-encoder with a similar architecture. We present the hyperparameters we used in Table 9. Each method is trained for 10,000 iterations with a minibatch size of 128. For the baselines we adopt the official implementations released by the authors for MOReinforce – https://github.com/Xi-L/PMOCO and EnvelopeMOQ – https://github.com/RunzheYang/MORL.
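A minimal PyTorch sketch of the conditional policy described above; the preference-encoding width of 150 assumes 50 thermometer bins per objective for d = 3, and all names are illustrative rather than the released code:

```python
import torch
import torch.nn as nn

class ConditionalPolicy(nn.Module):
    def __init__(self, vocab=21, d_model=64, n_heads=8, n_layers=3, pref_dim=150):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)        # state encoder
        self.pref_mlp = nn.Sequential(nn.Linear(pref_dim, d_model), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.ReLU(),
                                  nn.Linear(d_model, vocab))         # action logits

    def forward(self, tokens, pref_enc):  # tokens: (B, T); pref_enc: (B, pref_dim)
        h = self.encoder(self.embed(tokens)).mean(dim=1)  # pooled state embedding
        z = torch.cat([h, self.pref_mlp(pref_enc)], dim=-1)
        return self.head(z)  # categorical logits over the next action

logits = ConditionalPolicy()(torch.zeros(2, 5, dtype=torch.long), torch.rand(2, 150))
```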
Additional Results We present some additional results for the n-grams task. We consider different numbers of objectives, d ∈ {2, 4}, in Table 10 and Table 11 respectively. As in the experiments in Section 5.1.2, we observe that MOGFN-PC outperforms the baselines in Pareto performance while achieving high diversity scores. In Table 12, we consider the case of shorter sequences, L = 24.
MOGFN-PC continues to provide significant improvements over the baselines. There are two trends we can observe considering the N-grams task holistically:
1. As the sequence size increases, the advantage of MOGFN-PC becomes more significant.
2. The advantage of MOGFN-PC increases with the number of objectives.
E.3 QM9

Reward Details As mentioned in Section 5.2.1, we consider four reward functions for our experiments. The first reward function is the HOMO-LUMO gap, for which we rely on the predictions of a pretrained MXMNet (Zhang et al., 2020) model trained on the QM9 dataset (Ramakrishnan et al., 2014). The second reward is the standard Synthetic Accessibility (SA) score, which we calculate using the RDKit library (Landrum); to get the reward we compute (10 − SA)/9. The third reward function is a molecular weight target: we first calculate the molecular weight of a molecule using RDKit and then construct a reward of the form $e^{-(\mathrm{molWt} - 105)^2 / 150}$, which is maximized at 105. Our final reward function is a logP target, $e^{-(\log P - 2.5)^2 / 2}$, which is again calculated with RDKit and is maximized at 2.5.
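A hedged sketch of the two target-shaped rewards using RDKit descriptors (the MXMNet proxy and the SA score are omitted; the SA score requires RDKit's contrib sascorer):

```python
import math
from rdkit import Chem
from rdkit.Chem import Descriptors

def target_rewards(smiles):
    mol = Chem.MolFromSmiles(smiles)
    r_wt   = math.exp(-(Descriptors.MolWt(mol) - 105) ** 2 / 150)  # peaks at molWt = 105
    r_logp = math.exp(-(Descriptors.MolLogP(mol) - 2.5) ** 2 / 2)  # peaks at logP = 2.5
    return r_wt, r_logp

print(target_rewards("c1ccccc1"))  # benzene: molWt ~ 78, logP ~ 1.7
```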
Model Details and Hyperparameters We sample new preferences for every episode from a Dirichlet(α), and encode the desired sampling temperature using a thermometer encoding (Buckman et al., 2018). We use a graph neural network based on a graph transformer architecture (Yun et al., 2019). We transform this conditional encoding to an embedding using an MLP. The embedding is then fed to the GNN as a virtual node, as well as concatenated with the node embeddings in the graph. The model’s action space is to add a new node to the graph, a new bond, or set node or bond properties (like making a bond a double bond). It also has a stop action. For more details please refer to the code provided in the supplementary material. We summarize the hyperparameters used in Table 13.
E.4 FRAGMENTS
More Details As mentioned in Section 5.2.2, we consider four reward functions for our experiments. The first reward function is a proxy trained on molecules docked with AutodockVina (Trott & Olson, 2010) for the sEH target; we use the weights provided by Bengio et al. (2021a). We also use Synthetic Accessibility, as for QM9, and a weight target region (instead of the specific target weight used for QM9), ((300 − molwt)/700 + 1).clip(0, 1), which favors molecules with a weight under 300. Our final reward function is QED, which is again calculated with RDKit.
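The weight target region is a one-line transcription of the expression above:

```python
import numpy as np

def weight_region_reward(molwt):
    # 1 for molwt <= 300, then linear decay that reaches 0 at molwt = 1000
    return float(np.clip((300 - molwt) / 700 + 1, 0, 1))

print(weight_region_reward(250), weight_region_reward(650), weight_region_reward(1200))
# -> 1.0 0.5 0.0
```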
Model Details and Hyperparameters We again use a graph neural network based on a graph transformer architecture (Yun et al., 2019). The experimental protocol is similar to QM9 experiments discussed in Appendix E.3. We additionally sample from a lagged model whose parameters are updated as θ′ = τθ′ + (1− τ)θ. The model’s action space is to add a new node, by choosing from a
list of fragments and an attachment point on the current molecular graph. We list all hyperparameters used in Table 14.
Additional Results We also present in Figure 7 a view of the reward distribution produced by MOGFN-PC. Generally, the model is able to find good near-Pareto-optimal samples, but also spends a substantial amount of time exploring. The figure also shows that the model respects the preference conditioning and remains capable of generating a diverse distribution rather than a single point.
In the off-diagonal plots of Figure 7, we show pairwise scatter plots for each objective pair; the Pareto front is depicted with a red line; each point corresponds to a molecule generated by the model as it explores the state space; color indicates density (linear viridis palette). The diagonal plots overlay two pieces of information: a blue histogram for each objective, and an orange scatter plot showing the relationship between the preference conditioning and the generated molecules. The effect of this conditioning is particularly visible for seh (top left) and wt (bottom right). As the preference for the sEH binding reward gets closer to 1, the generated molecules' reward for sEH gets closer to 1 as well. Indeed, the expected shape for such a scatter plot is roughly triangular: when the preference ωi for reward Ri is close to 1, the model is expected to generate objects with a high reward for Ri; as ωi gets further away from 1, the model can generate anything, including objects with a high Ri, unless there is a trade-off between objectives, in which case it cannot. This is the case for the seh objective, but not for the wt objective, which has a more triangular shape.
E.5 DNA SEQUENCE DESIGN
Task Details The set of building blocks here consists of the bases ["A", "C", "T", "G"] in addition to a special end-of-sequence token. To compute the free energy and the number of base pairs with the software NUPACK (Zadeh et al., 2011), we used 310 K as the temperature. The inverse-length objective was calculated as 30/L, as 30 was the minimum length for sampled sequences. The rewards are normalized to [0, 1] for our experiments.
Model Details and Hyperparameters We use the same implementation as in the N-grams task, detailed in Appendix E.2, but here we consider a 4-layer Transformer architecture with 256 units per layer and 16 attention heads. We detail the most relevant hyperparameters in Table 15.
Discussion of Results Contrary to the other tasks on which we evaluated MOGFN-PC, for the generation of DNA aptamer sequences our proposed model did not match the best baseline, multi-objective reinforcement learning (Lin et al., 2021), in terms of Pareto performance. Nonetheless, it is worth delving into the details to better understand the different solutions found by the two methods. First, as indicated in Section 5, despite the better Pareto performance, the best sequences generated by the RL method have extremely low diversity (0.62), compared to MOGFN, which generates optimal sequences with a diversity of 19.6 or higher. As a matter of fact, MOReinforce mostly samples sequences with the well-known pattern GCGC... for all possible lengths. Sequences with this pattern indeed have low (negative) energy and a large number of pairs, but they offer little new insight and poor diversity if the model is not able to generate sequences with other distinct patterns. On the contrary, GFlowNets are able to generate sequences with patterns other than repetitions of the G and C pair. Interestingly, we observed that GFlowNets were able to generate sequences with even lower energy than the best sequences generated by MOReinforce, by inserting bases A and T into chains of GCGC.... Finally, we observed that one reason MOGFN does not match the Pareto performance of MOReinforce is that for short lengths (one of the objectives), the energy and number of pairs are not successfully optimised. Nonetheless, the optimisation of energy and number of pairs is very good for the longest sequences. Given these observations, we conjecture that there is room for improving the set of hyperparameters or certain aspects of the algorithm.
Additional Results In order to better understand the impact of the main hyperparameters of MOGFN-PC on the Pareto performance and the diversity of the optimal candidates, we train multiple instances by sweeping over several values of the hyperparameters, as indicated in Table 15. We present the results in Table 16. One key observation is that there seems to be a trade-off between the Pareto performance and the diversity of the top-K sequences. Nonetheless, even the models with the lowest diversity generate much more diverse sequences than MOReinforce. Furthermore, we also observe that α < 1 as the parameter of the Dirichlet distribution from which the weight preferences are sampled, as well as a higher β (reward exponent), yields better Pareto performance but slightly worse diversity. In the case of β, this observation is consistent with the results in the Bigrams task (Table 5); with Bigrams, however, the best performance was obtained with α = 1. This is indicative of a degree of dependence on the task and the nature of the objectives.
E.6 ACTIVE LEARNING
Task Details We consider the Proxy RFP task from Stanton et al. (2022), an in silico benchmark task designed to simulate searching for improved red fluorescent protein (RFP) variants (Dance et al., 2021). The objectives considered are stability (-dG or negative change in Gibbs free energy) and
solvent-accessible surface area (SASA) (Shrake & Rupley, 1973) in simulation, computed using the FoldX suite (Schymkowitz et al., 2005) and BioPython (Cock et al., 2009). We use the dataset introduced in Stanton et al. (2022) as the initial pool of candidates D0, with |D0| = 512.

Method Details and Hyperparameters Our implementation builds upon the publicly released code from Stanton et al. (2022): https://github.com/samuelstanton/lambo. We follow the exact experimental setup used in Stanton et al. (2022). The surrogate model f̂ consists of an encoder with 1D convolutions (masking positions corresponding to padding tokens). We use 3 standard pre-activation residual blocks with two convolution layers, layer norm, and swish activations, with a kernel size of 5, 64 intermediate channels and 16 latent channels. A multi-task GP with an ICM kernel is defined in the latent space of this encoder, which outputs the predictions for each objective. We also use the training tricks detailed in Stanton et al. (2022) for the surrogate model. The hyperparameters, taken from Stanton et al. (2022), are shown in Table 17. The acquisition function used is NEHVI (Daulton et al., 2021), defined as
$$\alpha(\{x_j\}_{j=1}^{i}) = \frac{1}{N}\sum_{t=1}^{N} \mathrm{HVI}\big(\{\tilde{f}_t(x_j)\}_{j=1}^{i-1} \mid P_t\big) + \frac{1}{N}\sum_{t=1}^{N} \mathrm{HVI}\big(\tilde{f}_t(x_i) \mid P_t \cup \{\tilde{f}_t(x_j)\}_{j=1}^{i-1}\big) \quad (3)$$

where $\tilde{f}_t$, $t = 1, \dots, N$ are independent draws from the surrogate model (which is a posterior over functions), and $P_t$ denotes the Pareto frontier of the current dataset $D$ under $\tilde{f}_t$.
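In practice, this acquisition can be instantiated with BoTorch's qNEHVI; a hedged sketch assuming a fitted multi-output GP `model`, baseline inputs `train_X`, and candidate encodings `cand_X` (argument names follow BoTorch's API at the time of writing and may differ across versions):

```python
from botorch.acquisition.multi_objective import qNoisyExpectedHypervolumeImprovement

# `model`, `train_X`, and `cand_X` are assumed to exist: a fitted multi-output GP
# posterior over the d objectives, its evaluated inputs, and candidate encodings.
acq = qNoisyExpectedHypervolumeImprovement(
    model=model,
    ref_point=[0.0, 0.0],   # hypervolume reference point (maximization convention)
    X_baseline=train_X,     # previously observed inputs, for the "noisy" baseline
)
utility = acq(cand_X.unsqueeze(-2))  # one scalar utility per (q = 1)-candidate batch
```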
We replace the LaMBO candidate generation with GFlowNets. We generate a set of mutations m = {(l_i, v_i)} for a sequence x from the current approximation of the Pareto front P̂_i.
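Applying a sampled mutation set is a simple positional substitution; a minimal sketch:

```python
def apply_mutations(x, mutations):
    # x: sequence string; mutations: list of (location, token) pairs
    chars = list(x)
    for loc, tok in mutations:
        chars[loc] = tok
    return "".join(chars)

print(apply_mutations("MKV", [(0, "A"), (2, "L")]))  # -> "AKL"
```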
Hyperparameter                 Values
Learning Rate (PF)             {0.01, 0.001, 0.0001}
Learning Rate (Z)              {0.01, 0.001}
Reward Exponent: β             {16, 24}
Uniform Policy Mix: δ          {0.01, 0.05}
Maximum number of mutations    {10, 15, 20}
δβ                             {0.5, 1, 2}

1. What is the focus of the paper regarding GFlowNets?
2. What are the strengths and weaknesses of the proposed approach?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or limitations regarding the extension of weighted sum scalarization to GFlowNets?
5. Is the understanding of multi-objective optimization in the paper sufficient?

Summary Of The Paper
This paper extends the simple linear scalarization method in multi-objective optimization to the GFlowNets setting. Two variants of GFlowNets are proposed, and extensive empirical experiments are provided which verify their effectiveness.
Strengths And Weaknesses
Strengths:
The idea of extending GFlowNets to the multi-objective setting is novel and interesting.
Multi-objective GFlowNets are useful in drug discovery.
Weaknesses:
The extension of weighted sum scalarization to GFlowNets seems incremental and lacks novelty.
This paper's understanding of multi-objective optimization is limited, and some of the assertions are overclaimed. For example, weighted sum scalarization is only one simple way to do multi-objective optimization (MOO); it has many drawbacks, especially when a non-convex Pareto front is encountered. But in Section 2.1, the authors claim that solving the weighted sum scalarization is equivalent to solving the MOO problem. More precise background knowledge on MOO should be incorporated.
The study of the active version of GFlowNets is not clearly motivated.
Clarity, Quality, Novelty And Reproducibility
Clarity: The presentation is good. But at the knowledge level, this paper does not study the background of MOO well.
Quality: Fairly good.
Novelty: The novelty is limited at the technical level, as weighted sum scalarization seems too simple in MOO. But it seems good at the application level, with respect to drug discovery.
Reproducibility: Good. |
Impact of p(ω) To examine the effect of p(ω), which controls the coverage of the Pareto front, we set it to Dirichlet(α) and vary α ∈ {0.1, 1, 10}. This results in ω being sampled from different regions of ∆d. Specifically, α = 1 corresponds to a uniform distribution over ∆d, α > 1 is skewed towards the center of ∆d whereas α < 1 is skewed towards the corners of ∆d. In Table 5 and Table 6 we observe that α = 1 results in the best performance. Despite the skewed distribution with α = 0.1 and α = 10, we still achieve performance close to that of α = 1 indicating that MOGFN-PC is able to interpolate to preferences not sampled during training. Note that diversity is not affected significantly by p(ω).
Impact of β During training β, controls the concentration of the reward density around modes of the distribution. For large values of β the reward density around the modes become more peaky and vice-versa. In Table 5 and Table 6 we present the results obtained by varying β ∈ {16, 32, 48}. As β increases, MOGFN-PC is incentivized to generate samples closer to the modes of R(x|ω), resulting in better Pareto performance. However, with high β values, the reward density is concentrated close to the modes and there is a negative impact on the diversity of the candidates.
Choice of scalarization R(x|ω) Next, we analyse the effect of the scalarization defining R(x|ω) used for training. The set of R(x|ω) for different ω specifies the family of MOO sub-problems and thus has a critical impact on the Pareto performance. Table 5 and Table 6 include results for the Weighted Sum (WS), Weighted-log-sum (WL) and Weighted Tchebycheff (WT) scalarizations. Note that we do not compare the Top-K Reward as different scalarizations cannot be compared directly. WS scalarization results in the best performance. WL scalarization on the other hand is not formally guaranteed to cover the Pareto front and consequently results in poor Pareto performance. We suspect the poor performance of WT and WL are in part also due to the harder reward landscapes they induce.
7 CONCLUSION
In this work, we have empirically demonstrated the generalization of GFlowNets to conditional GFlowNets for multi-objective optimization problems (MOGFN) to promote the generation of diverse optimal candidates. We presented two instantiations of MOGFN: MOGFN-PC, which leverages reward-conditional GFlowNets (Bengio et al., 2021b) to model a family of single-objective subproblems, and MOGFN-AL, which sequentially solves a set of single-objective problems defined by multi-objective acquisition functions. Finally, we empirically demonstrated the efficacy of MOGFNs for generating diverse Pareto-optimal candidates on sequence and graph generation tasks.
As a limitation, we identify that in certain domains, such as DNA sequence generation, MOGFN generates diverse candidates but currently does not match RL algorithms in terms of Pareto performance. The analysis in Section 6 hints that the distribution of sampling preferences p(ω) affects the Pareto performance. Since for certain practical applications only a specific region of the Pareto front is of interest, future work may explore gradient based techniques to learn preferences for more structured exploration of the preference space. Within the context of MOGFN-AL, an interesting research avenue is the development of preference-conditional acquisition functions.
Reproducibility Statement We include the code necessary to replicate experiments with our submission and provide detailed description of experimental setups in the Appendix. All datasets and pretrained models used are publicly available or included in the supplementary materials.
Ethics Statement We acknowledge that as with all machine learning algorithms, there is potential for dual use of multi-objective GFlowNets by nefarious agents. This work was motivated by the application of machine learning to accelerate scientific discovery in areas that can benefit humanity. We explicitly discourage the use of multi-objective GFlowNets in applications that may be harmful to others.
A ALGORITHMS
We summarize the algorithms for MOGFN-PC and MOGFN-AL here.
Algorithm 1: Training preference-conditional GFlowNets
Input: p(ω): distribution for sampling preferences; β: reward exponent; δ: mixing coefficient for uniform actions in the sampling policy; N: number of training steps;
Initialize: (PF(s′|s, ω), PB(s|s′, ω), logZ(ω)): conditional GFlowNet with parameters θ;
for i = 1 to N do
    Sample preference ω ∼ p(ω);
    Sample trajectory τ following the policy π̂ = (1 − δ)PF + δ·Uniform;
    Compute the reward R(x|ω)^β for the generated sample x and the corresponding loss L(τ, ω; θ) as in Equation 2;
    Update parameters θ with gradients from the loss, ∇θL(τ, ω; θ);
end
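A minimal PyTorch-style sketch of this loop is given below. The `model` interface (bundling PF, PB and logZ(ω)) and `reward_fn` are hypothetical placeholders standing in for a concrete environment, not the paper's code; the loss is the trajectory balance objective of Equation 2, computed in log space:

```python
import torch

def train_mogfn_pc(model, reward_fn, d, alpha=1.0, beta=32.0,
                   delta=0.01, n_steps=10_000, lr=1e-3):
    """Sketch of Algorithm 1.

    Assumed interfaces (ours, not from the paper's code):
      model.sample_trajectory(omega, eps) -> (sum_log_pf, sum_log_pb, x, log_z)
      reward_fn(x, omega) -> positive scalar tensor R(x|omega)
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    pref_dist = torch.distributions.Dirichlet(alpha * torch.ones(d))
    for _ in range(n_steps):
        omega = pref_dist.sample()  # omega ~ p(omega)
        # delta-mixture with a uniform policy for exploration.
        sum_log_pf, sum_log_pb, x, log_z = model.sample_trajectory(omega, eps=delta)
        log_reward = beta * torch.log(reward_fn(x, omega))  # log R(x|omega)^beta
        loss = (log_z + sum_log_pf - log_reward - sum_log_pb) ** 2  # Equation 2
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```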
Algorithm 2: Training MOGFN-AL
Input: R = {R1, . . . , Rd}: oracles that evaluate a candidate x and return the true objectives (R1(x), . . . , Rd(x)); D0 = {(xi, yi)}: initial dataset with yi = R(xi); f̂: probabilistic surrogate model of the posterior over R given a dataset D; a(x|f̂): acquisition function computing a scalar utility for x given f̂; πθ: learnable GFlowNet policy; b: size of the candidate batch to be generated; N: number of active learning rounds;
Initialize: f̂, πθ;
for i = 1 to N do
    Fit f̂ on the dataset Di−1;
    Extract the set of non-dominated candidates P̂i−1 from Di−1;
    Train πθ to generate mutations for x ∈ P̂i−1, using a(−|f̂) as the reward;
    Generate a batch B = {x′_{1,m_1}, . . . , x′_{b,m_b}} by sampling x′_j from P̂i−1 and applying to it mutations m_j sampled from πθ;
    Evaluate the batch B with R to obtain D̂i = {(x1, R(x1)), . . . , (xb, R(xb))};
    Update the dataset: Di = D̂i ∪ Di−1;
end
Result: Approximate Pareto set P̂N
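The outer loop of Algorithm 2 can be sketched as follows; every callable here (`fit_surrogate`, `make_acquisition`, `train_gfn`, `sample_mutants`) is an assumed interface standing in for the components described above, and `non_dominated` is a naive helper:

```python
def non_dominated(pairs):
    """Return the (x, y) pairs whose objective vectors y are not dominated."""
    front = []
    for i, (xi, yi) in enumerate(pairs):
        dominated = any(
            all(a >= b for a, b in zip(yj, yi)) and any(a > b for a, b in zip(yj, yi))
            for j, (_, yj) in enumerate(pairs) if j != i)
        if not dominated:
            front.append((xi, yi))
    return front

def mogfn_al(oracles, d0, fit_surrogate, make_acquisition, train_gfn,
             sample_mutants, batch_size=16, rounds=10):
    """Sketch of Algorithm 2; all callables are assumed interfaces."""
    data = list(d0)  # [(x, y)] with y = tuple of true objective values
    for _ in range(rounds):
        surrogate = fit_surrogate(data)          # fit f_hat on D_{i-1}
        front = non_dominated(data)              # extract P_{i-1}
        acq = make_acquisition(surrogate)        # a(-|f_hat), e.g. NEHVI
        policy = train_gfn(front, reward=acq)    # GFlowNet over mutations
        batch = sample_mutants(policy, front, batch_size)
        data += [(x, tuple(r(x) for r in oracles)) for x in batch]
    return non_dominated(data)                   # approximate Pareto set
```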
B SCALARIZATION
Scalarization is a popular approach for tackling multi-objective optimization problems. MOGFN-PC can build upon any scalarization approach. We consider three choices. Weighted-sum (WS) scalarization has been widely used in the literature. WS finds candidates on the convex hull of the Pareto front (Ehrgott, 2005). Under the assumption that the Pareto front is convex, every Pareto optimal solution is a solution to a weighted sum problem, and the solution to every weighted sum problem is Pareto optimal. Weighted Tchebycheff (WT), proposed by Choo & Atkins (1983), is an alternative designed for non-convex Pareto fronts. Any Pareto optimal solution can be found by solving the weighted Tchebycheff problem with appropriate weights, and the solutions for any weights correspond to a weakly Pareto optimal solution of the original problem (Pardalos et al., 2017). Lin et al. (2021) demonstrated empirically that WT can be used with neural-network-based policies. The third scheme we consider, Weighted-log-sum (WL), has not been considered in prior work. We hypothesized that in some practical scenarios, we might want to ensure that all objectives are optimized, since, for instance, in WS the scalarized reward can be dominated by a single reward. WL, which considers the weighted sum in log space, can potentially help with this drawback. However, as discussed in Section 6, in practice WL can be hard to optimize, and leads to poor performance.
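For concreteness, a NumPy sketch of the three scalarizations as used to define R(x|ω); the small `eps` guarding the logarithm in WL is our addition, not the paper's:

```python
import numpy as np

def weighted_sum(rewards, w):
    """WS: sum_i w_i * R_i(x)."""
    return float(np.dot(w, rewards))

def weighted_log_sum(rewards, w, eps=1e-12):
    """WL: prod_i R_i(x)^{w_i}, computed stably in log space."""
    return float(np.exp(np.dot(w, np.log(np.asarray(rewards) + eps))))

def weighted_tchebycheff(rewards, w, z_star):
    """WT: min_i w_i * |R_i(x) - z*_i| (z* is an ideal point)."""
    return float(np.min(w * np.abs(np.asarray(rewards) - z_star)))

r = np.array([0.8, 0.3, 0.6])      # per-objective rewards in [0, 1]
w = np.array([0.5, 0.25, 0.25])    # a preference on the simplex
print(weighted_sum(r, w),          # 0.625
      weighted_log_sum(r, w),
      weighted_tchebycheff(r, w, z_star=np.ones(3)))  # 0.1
```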
C ADDITIONAL ANALYSIS
Can MOGFN-PC match Single Objective GFNs? To evaluate how well MOGFN-PC models the family of rewards R(x|ω), we consider a comparison with single objective GFlowNets. More specifically, we first sample a set of 10 preferences ω1, . . . , ω10, and train a standard single objective GFlowNet using the weighted sum scalar reward for each preference. We then generate N = 128 candidates from each GFlowNet, throughout training, and compute the mean reward for the top 10 candidates for each preference. We average this top-10 reward across {ω1, . . . , ω10}, and call it Rso. We then train MOGFN-PC, apply the same procedure with the preferences {ω1, . . . , ω10}, and call the resulting mean of top-10 rewards Rmo. We plot the value of the ratio Rmo/Rso in Figure 3. We observe that the ratio stays close to 1, indicating that MOGFN-PC can indeed model the entire family of rewards simultaneously, at least as fast as a single objective GFlowNet could.
Effect of Model Capacity and Architecture Finally, we look at the effect of model size in training MOGFN-PC. As MOGFN-PC models a conditional distribution (an entire family of functions, as described before), we expect capacity to play a crucial role, since the amount of information to be learned is higher than for a single-objective GFN. We increase the model size in the 3 Bigrams task to see this effect, and observe in Table 7 that larger models do help with performance, although the gains plateau after a point. We suspect that in order to fully utilize the model capacity we might need better training objectives.
D METRICS
In this section we discuss the various metrics that we used to report the results in Section 5.
1. Generational Distance Plus (GD+) (Ishibuchi et al., 2015): This metric measures the Euclidean distance between the solutions of the Pareto approximation and the true Pareto front, taking the dominance relation into account. Calculating GD+ requires knowledge of the true Pareto front, and hence we only report this metric for the Hypergrid experiments (Section 5.1.1).
2. Hypervolume (HV) Indicator (Fonseca et al., 2006): This is a standard metric reported in MOO works which measures the volume in the objective space with respect to a reference point spanned by a set of non-dominated solutions in Pareto front approximation.
3. R2 Indicator (Hansen & Jaszkiewicz, 1994): R2 provides a monotonic metric comparing two Pareto front approximations using a set of uniform reference vectors and a utopian point z* representing the ideal solution of the MOO. Specifically, we define a set of uniform reference vectors λ ∈ Λ that cover the space of the MOO and then calculate

R2(Γ, Λ, z*) = (1/|Λ|) ∑_{λ∈Λ} min_{γ∈Γ} max_{i∈{1,...,k}} λ_i |z*_i − γ_i|,

where γ ∈ Γ corresponds to the set of solutions in a given Pareto front approximation and z* is the utopian point corresponding to the ideal solution of the MOO. Generally, R2 metric calculations are performed with z* equal to the origin and all objectives transformed to a minimization setting, which serves to preserve the monotonic nature of the metric. This holds true for our experiments as well.
4. Top-K Reward This metric was originally used by Bengio et al. (2021a), and we extend it to our multi-objective setting. For MOGFN-PC, we sample N candidates per test preference and then pick the top-k candidates (k < N) with the highest scalarized rewards and calculate their mean. We repeat this for all test preferences enumerated from the simplex and report the average Top-K reward score.
5. Top-K Diversity This metric was also originally used by Bengio et al. (2021a), and we again extend it to our multi-objective setting. We use this metric to quantify the diversity of the generated candidates. Given a distance metric d(x, y) between candidates x and y, we consider a pair of candidates diverse when d(x, y) is greater than a threshold ϵ. For MOGFN-PC, we sample N candidates per test preference, pick the top-k candidates based on their diversity scores, and take the mean. We repeat this for all test preferences sampled from the simplex and report the average Top-K diversity score. We use the edit distance for sequences, and 1 minus the Tanimoto similarity for molecules. (A sketch of the R2 and Top-K diversity computations is given after this list.)
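The sketch below gives one possible NumPy rendering of the R2 indicator and of a Top-K diversity score based on mean pairwise distance; the exact aggregation used in the paper (e.g., how the threshold ϵ enters) may differ, so treat this as illustrative:

```python
import numpy as np

def r2_indicator(front, ref_vectors, z_star):
    """R2(front, Lambda, z*): mean over reference vectors of the best
    (smallest) weighted Tchebycheff distance to the utopian point z*."""
    front, z_star = np.asarray(front), np.asarray(z_star)
    vals = []
    for lam in ref_vectors:
        tcheb = np.max(lam * np.abs(z_star - front), axis=1)  # per solution
        vals.append(tcheb.min())                              # best solution
    return float(np.mean(vals))

def top_k_diversity(candidates, dist, k=10):
    """Mean pairwise distance among the first k candidates
    (assumed already selected upstream)."""
    cands = list(candidates)[:k]
    pairs = [(i, j) for i in range(len(cands)) for j in range(i + 1, len(cands))]
    return float(np.mean([dist(cands[i], cands[j]) for i, j in pairs]))

# Toy usage with 2 objectives (minimization, utopian point at the origin):
front = [[0.2, 0.9], [0.5, 0.5], [0.9, 0.1]]
lams = [[w, 1 - w] for w in np.linspace(0, 1, 11)]
print(r2_indicator(front, lams, z_star=[0.0, 0.0]))
hamming = lambda a, b: sum(c != d for c, d in zip(a, b))  # edit-style distance
print(top_k_diversity(["ACGT", "AGGT", "TTTT"], dist=hamming, k=3))
```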
E ADDITIONAL EXPERIMENTAL DETAILS
E.1 HYPER-GRID
Here we elaborate on the Hyper-Grid experimental setup which we discussed in Section 5.1.1. Consider an n-dimensional hypercube gridworld where each cell in the grid corresponds to a state. The agent starts at the top left coordinate, marked as (0, 0, . . . ), and is allowed to move only towards the right, down, or stop. When the agent performs the stop action, the trajectory terminates and the agent receives a non-zero reward. In this work, we consider the following reward functions: branin(x), currin(x), sphere(x), shubert(x), beale(x). In Figure 4, we show the heatmap for each reward function. Note that we normalize all the reward functions between 0 and 1.
Additional Results To verify the efficacy of MOGFNs across different objective sizes, we perform some additional experiments and measure the L1 loss and the GD+ metric. In Figure 5, we can see that as the reward dimension increases, the loss and GD+ increase. This is expected, because the number of rewards is indicative of the difficulty of the problem. We also present extended qualitative visualizations across more preferences in Figure 6.
Model Details and Hyperparameters For MOGFN-PC policies we use an MLP with two hidden layers, each consisting of 64 units. We use LeakyReLU as our activation function, as in Bengio et al. (2021a). All models are trained with a learning rate of 0.01, the Adam optimizer (Kingma & Ba, 2015), and a batch size of 128. We sample preferences ω from Dirichlet(α) where α = 1.5. We try two techniques for encoding preferences: 1) vanilla encoding, where we use the raw values of the preference vectors, and 2) thermometer encoding (Buckman et al., 2018). In our experiments we have not observed a significant difference in performance between the two.
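One way to implement the thermometer encoding of a preference vector is sketched below; the exact binning conventions (strict vs. non-strict thresholds) in Buckman et al. (2018) may differ slightly, so this is an illustration rather than a reference implementation:

```python
import numpy as np

def thermometer_encode(omega, n_bins=50, lo=0.0, hi=1.0):
    """Thermometer-encode each preference weight in [lo, hi) into n_bins
    unary-style features: bin b is 1 if omega_i exceeds the b-th threshold."""
    omega = np.asarray(omega)
    thresholds = np.linspace(lo, hi, n_bins, endpoint=False)   # (n_bins,)
    return (omega[..., None] > thresholds).astype(np.float32)  # (..., n_bins)

print(thermometer_encode([0.25, 0.75], n_bins=4))
# [[1. 0. 0. 0.]]   thresholds are [0, 0.25, 0.5, 0.75]
#  [1. 1. 1. 0.]]
```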
E.2 N-GRAMS TASK
Task Details The task is to generate sequences of some maximum length L, which we set to 36 for the experiments in Section 5.1.2. We consider a vocabulary (actions) of size 21, with 20 characters ["A", "R", "N", "D", "C", "E", "Q", "G", "H", "I", "L", "K", "M", "F", "P", "S", "T", "W", "Y", "V"] and a special token to indicate the end of the sequence. The rewards {R_i}_{i=1}^{d} are defined by the number of occurrences of a given set of n-grams in a sequence x. For instance, consider ["AB", "BA"] as the n-grams. The rewards for a sequence x = ABABC would be [2, 1]. We consider two choices of n-grams: (a) Unigrams: the number of occurrences of a set of unigrams induces conflicting objectives, since we cannot increase the number of occurrences of one unigram without replacing another in a string of a particular length; (b) Bigrams: given common characters within the bigrams, the occurrences of multiple bigrams can be increased simultaneously within a string of a fixed length. We also consider different sizes for the set of n-grams considered, i.e. different numbers of objectives. This allows us to evaluate the behaviour of MOGFN-PC on a variety of objective spaces. We summarize the specific objectives used in our experiments in Table 8. We normalize the rewards to [0, 1] in our experiments.
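The reward computation can be written in a few lines; normalizing by the maximum possible count is our assumption about how the counts are mapped to [0, 1]:

```python
def ngram_rewards(seq, ngrams, max_len=36):
    """Count (possibly overlapping) occurrences of each n-gram in seq,
    normalized by the maximum possible count for a string of max_len."""
    rewards = []
    for g in ngrams:
        count = sum(seq[i:i + len(g)] == g for i in range(len(seq) - len(g) + 1))
        max_count = max_len - len(g) + 1
        rewards.append(count / max_count)
    return rewards

print(ngram_rewards("ABABC", ["AB", "BA"], max_len=5))  # [0.5, 0.25], i.e. counts [2, 1]
```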
Model Details and Hyperparameters We build upon the implementation from Stanton et al. (2022) for the task: https://github.com/samuelstanton/lambo. For the string generation task, the backward policy PB is trivial (as there is only one parent for each node s ∈ S), so we only have to parameterize PF and logZ. As PF(−|s, ω) is a conditional policy, we use a conditional Transformer encoder as the architecture. This consists of a Transformer encoder (Vaswani et al., 2017) with 3 hidden layers of dimension 64 and 8 attention heads to embed the current state (the string generated so far) s. An MLP embeds the preferences ω, which are encoded using thermometer encoding with 50 bins. The embeddings of the state and preferences are concatenated and passed to a final MLP which generates a categorical distribution over the actions (vocabulary tokens). We use the same architecture for the baselines using a conditional policy – MOReinforce and MOSoftQL. For EnvelopeMOQ, which does not condition on the preferences, we use a standard Transformer encoder with a similar architecture. We present the hyperparameters we used in Table 9. Each method is trained for 10,000 iterations with a minibatch size of 128. For the baselines we adopt the official implementations released by the authors for MOReinforce – https://github.com/Xi-L/PMOCO – and EnvelopeMOQ – https://github.com/RunzheYang/MORL.
Additional Results We present some additional results for the n-grams task. We consider different numbers of objectives d ∈ {2, 4} in Table 10 and Table 11, respectively. As with the experiments in Section 5.1.2, we observe that MOGFN-PC outperforms the baselines in Pareto performance while achieving high diversity scores. In Table 12, we consider the case of shorter sequences, L = 24.
MOGFN-PC continues to provide significant improvements over the baselines. There are two trends we can observe considering the N-grams task holistically:
1. As the sequence size increases, the advantage of MOGFN-PC becomes more significant.
2. The advantage of MOGFN-PC increases with the number of objectives.
E.3 QM9

Reward Details As mentioned in Section 5.2.1, we consider four reward functions for our experiments. The first reward function is the HOMO-LUMO gap, for which we rely on the predictions of a pretrained MXMNet (Zhang et al., 2020) model trained on the QM9 dataset (Ramakrishnan et al., 2014). The second reward is the standard Synthetic Accessibility score, which we calculate using the RDKit library (Landrum); to get the reward we compute (10 − SA)/9. The third reward function is a molecular weight target. Here we first calculate the molecular weight of a molecule using RDKit, and then construct a reward function of the form e^{−(molWt−105)²/150}, which is maximized at 105. Our final reward function is a logP target, e^{−(logP−2.5)²/2}, which is again calculated with RDKit and is maximized at 2.5.
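The two closed-form targets can be computed directly with RDKit, as sketched below (the HOMO-LUMO proxy and the SA score, which lives in RDKit's contrib area, are omitted here):

```python
import math
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen

def qm9_target_rewards(smiles):
    """Molecular-weight and logP target rewards from the QM9 task.
    Sketch only; the paper's exact normalization pipeline may differ."""
    mol = Chem.MolFromSmiles(smiles)
    mol_wt = Descriptors.MolWt(mol)
    log_p = Crippen.MolLogP(mol)
    r_wt = math.exp(-((mol_wt - 105.0) ** 2) / 150.0)  # peaks at molWt = 105
    r_logp = math.exp(-((log_p - 2.5) ** 2) / 2.0)     # peaks at logP = 2.5
    return r_wt, r_logp

print(qm9_target_rewards("CCO"))  # ethanol
```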
Model Details and Hyperparameters We sample new preferences for every episode from a Dirichlet(α), and encode the desired sampling temperature using a thermometer encoding (Buckman et al., 2018). We use a graph neural network based on a graph transformer architecture (Yun et al., 2019). We transform this conditional encoding to an embedding using an MLP. The embedding is then fed to the GNN as a virtual node, as well as concatenated with the node embeddings in the graph. The model's action space is to add a new node to the graph, add a new bond, or set node or bond properties (like making a bond a double bond). It also has a stop action. For more details please refer to the code provided in the supplementary material. We summarize the hyperparameters used in Table 13.
E.4 FRAGMENTS
More Details As mentioned in Section 5.2.2, we consider four reward functions for our experiments. The first reward function is a proxy trained on molecules docked with AutodockVina (Trott & Olson, 2010) for the sEH target; we use the weights provided by Bengio et al. (2021a). We also use synthetic accessibility, as for QM9, and a weight target region (instead of the specific target weight used for QM9), ((300 - molwt) / 700 + 1).clip(0, 1), which favors molecules with a weight under 300. Our final reward function is QED, which is again calculated with RDKit.
Model Details and Hyperparameters We again use a graph neural network based on a graph transformer architecture (Yun et al., 2019). The experimental protocol is similar to the QM9 experiments discussed in Appendix E.3. We additionally sample from a lagged model whose parameters are updated as θ′ = τθ′ + (1 − τ)θ. The model's action space is to add a new node by choosing from a list of fragments and an attachment point on the current molecular graph. We list all hyperparameters used in Table 14.
Additional Results We also present in Figure 7 a view of the reward distribution produced by MOGFN-PC. Generally, the model is able to find good near-Pareto-optimal samples, but is also able to spend a lot of time exploring. The figure also shows that the model is able to respect the preference conditioning, and remains capable of generating a diverse distribution rather than a single point.
In the off-diagonal plots of Figure 7, we show pairwise scatter plots for each objective pair; the Pareto front is depicted with a red line; each point corresponds to a molecule generated by the model as it explores the state space; color is density (linear viridis palette). The diagonal plots show two overlaid pieces of information: a blue histogram for each objective, and an orange scatter plot showing the relationship between preference conditioning and generated molecules. The effect of this conditioning is particularly visible for seh (top left) and wt (bottom right). As the preference for the sEH binding reward gets closer to 1, the generated molecules' reward for sEH gets closer to 1 as well. Indeed, the expected shape for such a scatter plot is roughly triangular: when the preference ωi for reward Ri is close to 1, the model is expected to generate objects with a high reward for Ri; as the preference ωi gets further away from 1, the model can generate anything, including objects with a high Ri, unless there is a trade-off between objectives, in which case it cannot. This is the case for the seh objective, but not for the wt objective, which has a more triangular shape.
E.5 DNA SEQUENCE DESIGN
Task Details The set of building blocks here consists of the bases ["A", "C", "T", "G"], in addition to a special end-of-sequence token. In order to compute the free energy and the number of base pairs with the software NUPACK (Zadeh et al., 2011), we used 310 K as the temperature. The inverse-of-length objective was calculated as 30/L, as 30 was the minimum length for sampled sequences. The rewards are normalized to [0, 1] for our experiments.
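Assuming NUPACK has already returned a minimum-free-energy value and a dot-bracket secondary structure for a sequence, the three raw objectives could be assembled as follows; the sign convention on the energy and any further [0, 1] normalization are our assumptions:

```python
def dna_objectives(seq, mfe_energy, dot_bracket, min_len=30):
    """Raw objectives for a DNA aptamer, given NUPACK outputs.

    mfe_energy:  minimum free energy (kcal/mol) of the secondary structure
    dot_bracket: structure string, e.g. "((..))"; each '(' opens one base pair
    """
    num_pairs = dot_bracket.count("(")         # number of base pairs
    inv_length = min_len / len(seq)            # favors shorter sequences
    return -mfe_energy, num_pairs, inv_length  # negate energy: lower is better

print(dna_objectives("GCGCAAGCGC", mfe_energy=-1.3, dot_bracket="((((..))))"))
# (1.3, 4, 3.0)
```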
Model Details and Hyperparameters We use the same implementation as for the N-grams task, detailed in Appendix E.2. Here we instead consider a 4-layer Transformer architecture, with 256 units per layer and 16 attention heads. We detail the most relevant hyperparameters in Table 15.
Discussion of Results Contrary to the other tasks on which we evaluated MOGFN-PC, for the generation of DNA aptamer sequences our proposed model did not match the best baseline, multi-objective reinforcement learning (Lin et al., 2021), in terms of Pareto performance. Nonetheless, it is worth delving into the details in order to better understand the different solutions found by the two methods. First, as indicated in Section 5, despite the better Pareto performance, the best sequences generated by the RL method have extremely low diversity (0.62), compared to MOGFN, which generates optimal sequences with a diversity of 19.6 or higher. As a matter of fact, MOReinforce mostly samples sequences with the well-known pattern GCGC... for all possible lengths. Sequences with this pattern indeed have low (negative) energy and a large number of pairs, but they offer little new insight and poor diversity if the model is not able to generate sequences with other distinct patterns. On the contrary, GFlowNets are able to generate sequences with patterns other than repetitions of the base pair G and C. Interestingly, we observed that GFlowNets were able to generate sequences with even lower energy than the best sequences generated by MOReinforce, by inserting bases A and T into chains of GCGC.... Finally, we observed that one reason why MOGFN does not match the Pareto performance of MOReinforce is that for short lengths (one of the objectives) the energy and number of pairs are not successfully optimised. Nonetheless, the optimisation of energy and number of pairs is very good for the longest sequences. Given these observations, we conjecture that there is room for improving the set of hyperparameters or certain aspects of the algorithm.
Additional Results In order to better understand the impact of the main hyperparameters of MOGFN-PC on the Pareto performance and the diversity of the optimal candidates, we train multiple instances by sweeping over several values of the hyperparameters, as indicated in Table 15. We present the results in Table 16. One key observation is that there seems to be a tradeoff between the Pareto performance and the diversity of the Top-K sequences. Nonetheless, even the models with the lowest diversity generate much more diverse sequences than MOReinforce. Furthermore, we also observe that α < 1 as the parameter of the Dirichlet distribution used to sample the weight preferences, as well as a higher β (reward exponent), both yield better Pareto performance metrics but slightly worse diversity. In the case of β, this observation is consistent with the results in the Bigrams task (Table 5); with Bigrams, however, the best performance was obtained with α = 1. This is indicative of a degree of dependence on the task and the nature of the objectives.
E.6 ACTIVE LEARNING
Task Details We consider the Proxy RFP task from Stanton et al. (2022), an in silico benchmark task designed to simulate searching for improved red fluorescent protein (RFP) variants (Dance et al., 2021). The objectives considered are stability (−dG, or negative change in Gibbs free energy) and solvent-accessible surface area (SASA) (Shrake & Rupley, 1973) in simulation, computed using the FoldX suite (Schymkowitz et al., 2005) and BioPython (Cock et al., 2009). We use the dataset introduced in Stanton et al. (2022) as the initial pool of candidates D0 with |D0| = 512.

Method Details and Hyperparameters Our implementation builds upon the publicly released code from Stanton et al. (2022): https://github.com/samuelstanton/lambo. We follow the exact experimental setup used in Stanton et al. (2022). The surrogate model f̂ consists of an encoder with 1D convolutions (masking positions corresponding to padding tokens). We used 3 standard pre-activation residual blocks with two convolution layers, layer norm, and swish activations, with a kernel size of 5, 64 intermediate channels and 16 latent channels. A multi-task GP with an ICM kernel is defined in the latent space of this encoder, which outputs the predictions for each objective. We also use the training tricks detailed in Stanton et al. (2022) for the surrogate model. The hyperparameters, taken from Stanton et al. (2022), are shown in Table 17. The acquisition function used is NEHVI (Daulton et al., 2021), defined as
α({x_j}_{j=1}^{i}) = (1/N) ∑_{t=1}^{N} HVI({f̃_t(x_j)}_{j=1}^{i−1} | P_t) + (1/N) ∑_{t=1}^{N} HVI(f̃_t(x_i) | P_t ∪ {f̃_t(x_j)}_{j=1}^{i−1})    (3)

where f̃_t, t = 1, . . . , N are independent draws from the surrogate model (which is a posterior over functions), and P_t denotes the Pareto frontier of the current dataset D under f̃_t.
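If one wanted to reproduce this acquisition with BoTorch, which provides the reference implementation of NEHVI from Daulton et al. (2021), a minimal sketch might look as follows. The paper uses a DKL GP rather than the plain `SingleTaskGP` stand-in used here, and all tensors are toy data:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition.multi_objective import qNoisyExpectedHypervolumeImprovement

train_X = torch.rand(16, 4, dtype=torch.double)   # 16 designs, 4 features
train_Y = torch.rand(16, 2, dtype=torch.double)   # 2 noisy objectives
model = SingleTaskGP(train_X, train_Y)            # stand-in surrogate

acq = qNoisyExpectedHypervolumeImprovement(
    model=model,
    ref_point=[0.0, 0.0],   # reference point in objective space
    X_baseline=train_X,     # baseline points defining the noisy Pareto front
)
X_cand = torch.rand(5, 1, 4, dtype=torch.double)  # 5 single-point candidates
with torch.no_grad():
    print(acq(X_cand))                            # one NEHVI value per candidate
```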
We replace the LaMBO candidate generation with GFlowNets. We generate a set of mutations m = {(l_i, v_i)} for a sequence x from the current approximation of the Pareto front P̂_i. The hyperparameter grid we searched over is listed below.

Hyperparameter                 Values
Learning Rate (P_F)            {0.01, 0.001, 0.0001}
Learning Rate (Z)              {0.01, 0.001}
Reward Exponent β              {16, 24}
Uniform Policy Mix δ           {0.01, 0.05}
Maximum number of mutations    {10, 15, 20}
δβ                             {0.5, 1, 2}

| 1. What is the focus and contribution of the paper regarding multi-objective optimization?
2. What are the strengths of the proposed approach, particularly in applying conditional GFlowNet?
3. What are the weaknesses of the paper, especially regarding its novelty and minor issues?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper leverages GFlowNets to solve the Multi-Objective Optimization problem. The authors derive two versions of the algorithm: a regular and an active learning variant. The authors empirically demonstrate that the proposed algorithms outperform existing methods on almost every considered benchmark and perform reasonably on the DNA sequence design task. The method, as expected, performs very well in terms of generating a diverse set of candidates.
Strengths And Weaknesses
Strengths
The authors successfully applied conditional GFlowNet (for the first time) to the Multi-Objective Optimization problem.
The authors generalize the GFlowNet active learning algorithm to the Multi-Objective Optimization problem case.
The results seem promising for both algorithms in terms of key metrics, and especially the diversity of the generated candidates.
The results and comparisons with other methods are well explained. (e.g. why in some cases other baselines have higher scores on some of the metrics)
Weaknesses
I believe the novelty is limited in terms of new ideas, as the main technical contribution of the paper is the adaptation of a previously derived method to the MOO setting.
Minor issues and questions
To make the paper self-contained, can you add how the \alpha function in the Active Learning section combines a reward with its epistemic uncertainty?
It is unclear to me from the paper how the \alpha function maps a set of rewards and their epistemic uncertainties to a single objective in each round of the Active Learning pipeline. Or is the \alpha function defined externally for each round, so that it can be considered a hyperparameter?
Do you retrain the GFlowNet from scratch for each round of Active Learning?
Clarity, Quality, Novelty And Reproducibility
The technical novelty of the paper is how to apply and generalize the previously derived method to the MOO problem. I believe this technical novelty is solid. However, the novelty in terms of new ideas seems incremental to me. The paper is well-written with some exceptions that I mentioned. I believe the results are reproducible. |
ICLR | Title
Multi-Objective GFlowNets
Abstract
In many applications of machine learning, like drug discovery and material design, the goal is to generate candidates that simultaneously maximize a set of objectives. As these objectives are often conflicting, there is no single candidate that simultaneously maximizes all objectives, but rather a set of Pareto-optimal candidates where one objective cannot be improved without worsening another. Moreover, in practice, these objectives are often under-specified, making the diversity of candidates a key consideration. The existing multi-objective optimization methods focus predominantly on covering the Pareto front, failing to capture diversity in the space of candidates. Motivated by the success of GFlowNets for generation of diverse candidates in a single objective setting, in this paper we consider Multi-Objective GFlowNets (MOGFNs). MOGFNs consist of a novel Conditional GFlowNet which models a family of single-objective sub-problems derived by decomposing the multi-objective optimization problem. Our work is the first to empirically demonstrate conditional GFlowNets. Through a series of experiments on synthetic and benchmark tasks, we empirically demonstrate that MOGFNs outperform existing methods in terms of Hypervolume, R2-distance and candidate diversity. We also demonstrate the effectiveness of MOGFNs over existing methods in active learning settings. Finally, we supplement our empirical results with a careful analysis of each component of MOGFNs.
1 INTRODUCTION
Decision making in practical applications often involves reasoning about multiple, often conflicting, objectives (Keeney et al., 1993). For example, in drug discovery, the goal is to generate novel drug-like molecules that inhibit a target, are easy to synthesize and can safely be used by humans (Dara et al., 2021). Unfortunately, these objectives often conflict – molecules effective against a target might also have adverse effects on humans – so there is no single molecule which maximizes all the objectives simultaneously. Such problems fall under the umbrella of Multi-Objective Optimization (MOO; Ehrgott, 2005; Miettinen, 2012), wherein one is interested in identifying Pareto-optimal candidates. The set of Pareto-optimal candidates covers all the best tradeoffs among the objectives, i.e., the Pareto front, where each point on that front corresponds to a different set of weights associated with each of the objectives.
In-silico drug discovery and material design are typically driven by proxies trained with finite data, which only approximate the problem's true objectives and therefore carry intrinsic epistemic uncertainty in their predictions. In such problems, it is important not only to cover the Pareto front, but also to generate sets of diverse candidates at each solution of the front, so as to increase the likelihood of success in downstream evaluations (Jain et al., 2022).
Generative Flow Networks (GFlowNets; Bengio et al., 2021a;b) are a recently proposed family of probabilistic models which tackle the problem of diverse candidate generation. Contrary to the reward maximization view of reinforcement learning (RL) and Bayesian optimization (BO), GFlowNets sample candidates with probability proportional to the reward. Sampling candidates, as opposed to greedily generating them, implicitly encourages diversity in the generated candidates. GFlowNets have shown promising results in single objective problems of molecule generation (Bengio et al., 2021a) and biological sequence design (Jain et al., 2022).
In this paper, we study Multi-Objective GFlowNets (MOGFNs), extensions of GFlowNets which tackle the multi-objective optimization problem. We consider two variants of MOGFNs
– (a) Preference-Conditional GFlowNets (MOGFN-PC), which combine Reward-Conditional GFlowNets (Bengio et al., 2021b) with Weighted Sum Scalarization (Ehrgott, 2005), and (b) MOGFN-AL, an extension of GFlowNet-AL (Jain et al., 2022) for multi-objective active learning settings. We empirically demonstrate the advantage of MOGFNs over existing approaches on a variety of high-dimensional multi-objective optimization tasks: the generation of small molecules, DNA aptamer sequences and fluorescent proteins. Our contributions are as follows:
C1 We demonstrate how two variants of GFlowNets – MOGFN-PC and MOGFN-AL – can be applied to multi-objective optimization. Our work is the first successful empirical validation of Reward-Conditional GFlowNets (Bengio et al., 2021b).
C2 Through a series of experiments on molecule generation and sequence generation we demonstrate that MOGFN-PC generates diverse Pareto-optimal candidates.
C3 In a challenging active learning task for designing fluorescent proteins, we show that MOGFN-AL results in significant improvements to sample-efficiency and diversity of generated candidates.
C4 We perform a thorough analysis of the main components of MOGFNs to provide insights into design choices that affect performance.
2 BACKGROUND
2.1 MULTI-OBJECTIVE OPTIMIZATION
Multi-objective optimization (MOO) involves finding a set of feasible candidates x⋆ ∈ X which all simultaneously maximize a set of objectives:
max_{x∈X} (R_1(x), . . . , R_d(x)).    (1)
In general, the objectives being optimized can be conflicting such that there is no single x⋆ which simultaneously maximizes all objectives. Consequently, the concept of Pareto optimality is adopted in MOO, giving rise to a set of solutions trading off the objectives in different ways.
Given x1, x2 ∈ X , x1 is said to dominate x2, written (x1 ≻ x2), iff Ri(x1) ≥ Ri(x2) ∀i ∈ {1, . . . , d} and ∃k ∈ {1, . . . , d} such that Rk(x1) > Rk(x2). A candidate x⋆ is Pareto-optimal if there exists no other solution x′ ∈ X which dominates x⋆. In other words, for a Pareto-optimal candidate it is impossible to improve one objective without sacrificing another. The Pareto set is the set of all Pareto-optimal candidates in X , and the Pareto front is defined as the image of the Pareto set in objective-space. It is important to note that since the objectives being optimized in general might not be injective, any point on the Pareto front can be the image of several candidates in the Pareto set. This introduces a notion of diversity in the candidate space, capturing all the candidates corresponding to a point on the Pareto front, that is critical for applications such as drug discovery.
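The dominance relation and Pareto set extraction translate directly into code; a naive O(n²) NumPy sketch (illustrative only):

```python
import numpy as np

def dominates(r1, r2):
    """True iff a candidate with rewards r1 Pareto-dominates one with r2."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    return bool(np.all(r1 >= r2) and np.any(r1 > r2))

def pareto_front(rewards):
    """Indices of non-dominated rows in an (n, d) array of objective values."""
    rewards = np.asarray(rewards)
    return [i for i, r in enumerate(rewards)
            if not any(dominates(q, r) for j, q in enumerate(rewards) if j != i)]

points = [[1.0, 0.2], [0.8, 0.9], [0.5, 0.5], [0.2, 1.0]]
print(pareto_front(points))  # [0, 1, 3]; [0.5, 0.5] is dominated by [0.8, 0.9]
```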
While there are several paradigms for tackling the MOO problem (Ehrgott, 2005; Miettinen, 2012; Pardalos et al., 2017), we consider Scalarization, where the multi-objective problem is decomposed into simpler single-objective problems, as it is well suited for the GFlowNet formulation introduced in Section 3.1. A set of weights (preferences) ω_i is assigned to the objectives R_i, such that ω_i ≥ 0 and ∑_{i=1}^{d} ω_i = 1. The MOO problem in Equation 1 is then decomposed into solving single-objective sub-problems of the form max_{x∈X} R(x|ω), where R is a scalarization function.
Weighted Sum Scalarization, R(x|ω) = ∑_{i=1}^{d} ω_i R_i(x), is a widely used scalarization function which results in Pareto optimal candidates for problems with a convex Pareto front (Ehrgott, 2005). Weighted Tchebycheff, R(x|ω) = min_{1≤i≤d} ω_i |R_i(x) − z*_i|, where z*_i denotes some ideal value for objective R_i, results in Pareto optimal solutions even for problems with a non-convex Pareto front (Pardalos et al., 2017). See Appendix B for more discussion on scalarization. In summary, using scalarization, the MOO problem can be viewed as solving a family of single-objective optimization problems.
2.2 GFLOWNETS
Generative Flow Networks (Bengio et al., 2021a;b) are a family of probabilistic models which generate, through a sequence of steps, compositional objects x ∈ X with probability proportional to a given reward R : X → R+. The sequential construction of x ∈ X can be described as a trajectory τ ∈ T in a weighted directed acyclic graph (DAG)¹ G = (S, E), starting from an empty object s0 and following actions a ∈ A as building blocks. The nodes S of this graph (states) correspond to the set of all possible objects that can be constructed using sequences of actions in A. An edge s −a→ s′ ∈ E indicates that taking action a at state s leads to state s′.
The forward policy PF(−|s) is a distribution over the children of state s. An object x can be generated by starting at s0 and iteratively sampling a sequence of actions from PF. Similarly, the backward policy PB(−|s) is a distribution over the parents of state s and can generate backward trajectories starting at any state x; e.g., iteratively sampling from PB starting at x shows a way x could have been constructed. Let π(x) be the marginal likelihood of sampling trajectories terminating in x following PF, and let the partition function be Z = ∑_{x∈X} R(x). The learning problem solved by GFlowNets is to estimate PF such that π(x) ∝ R(x). This is achieved using learning objectives like trajectory balance (TB; Malkin et al., 2022), to learn PF(−|s; θ), PB(−|s; θ), Zθ which approximate the forward and backward policies and partition function, parameterized by θ. We refer the reader to Bengio et al. (2021b); Malkin et al. (2022) for a more thorough introduction to GFlowNets.
3 MULTI-OBJECTIVE GFLOWNETS
We broadly categorize Multi-Objective GFlowNets (MOGFNs) as GFlowNets which solve a family of sub-problems derived from a Multi-Objective Optimization (MOO) problem. We first consider solving a family of MOO sub-problems simultaneously with preference-conditional GFlowNets, followed by MOGFN-AL, which solves a sequence of MOO sub-problems.
3.1 PREFERENCE-CONDITIONAL GFLOWNETS
Whereas a GFlowNet learns how to sample according to a single reward function, reward-conditional GFlowNets (Bengio et al., 2021b) are a generalization of GFlowNets that simultaneously model a family of distributions associated with a corresponding family of reward functions. Let C denote a set of values c, with each c ∈ C inducing a unique reward function R(x|c). We can define a family of weighted DAGs {Gc = (Sc, E), c ∈ C} which describe the construction of x ∈ X, with conditioning information c available at all states in Sc. We denote PF(−|s, c) and PB(−|s′, c) as the conditional forward and backward policies, Z(c) = ∑_{x∈X} R(x|c) as the conditional partition function, and π(x|c) as the marginal likelihood of sampling trajectories τ from PF terminating in x given c. The learning objective in reward-conditional GFlowNets is thus estimating PF(−|s, c) such that π(x|c) ∝ R(x|c). We refer the reader to Bengio et al. (2021b) for a more formal discussion of conditional GFlowNets.
Recall from Section 2.1 that MOO problems can be decomposed into a family of single-objective problems each defined by a preference ω over the objectives. Thus, we can employ reward-conditional GFlowNets to model the family of reward functions by using as the conditioning set C the d-simplex ∆d spanned by the preferences ω over d objectives.
Preference-conditional GFlowNets (MOGFN-PC) are reward-conditional GFlowNets conditioned on the preferences ω ∈ ∆d over a set of objectives {R1(x), . . . , Rd(x)}. In other words, MOGFN-PC model the family of reward functions R(x|ω) where R(x|ω) itself corresponds to a scalarization of the MOO problem. We consider three scalarization techniques, which are discussed in Appendix B:
• Weighted-sum (WS) (Ehrgott, 2005): R(x|ω) = ∑_{i=1}^{d} ω_i R_i(x)
• Weighted-log-sum (WL): R(x|ω) = ∏_{i=1}^{d} R_i(x)^{ω_i}
• Weighted-Tchebycheff (WT) (Choo & Atkins, 1983): R(x|ω) = min_{1≤i≤d} ω_i |R_i(x) − z*_i|
MOGFN-PC is not constrained to any scalarization function, and can incorporate any user-defined scalarization scheme that fits the desired optimization needs.
Training MOGFN-PC The procedure to train MOGFN-PC, or any reward-conditional GFlowNet, closely follows that of a standard GFlowNet and is described in Algorithm 1. The objective is to learn
1If the object is constructed in a canonical order (say a string constructed from left to right), G is a tree.
the parameters θ of the forward and backward conditional policies PF (−|s, ω; θ) and PB(−|s′, ω; θ), and the log-partition function logZθ(ω). To this end, we consider an extension of the trajectory balance objective for reward-conditional GFlowNets:
L(τ, ω; θ) = ( log [ Z_θ(ω) ∏_{s→s′∈τ} P_F(s′|s, ω; θ) / ( R(x|ω) ∏_{s→s′∈τ} P_B(s|s′, ω; θ) ) ] )².    (2)
One important component is the distribution p(ω) used to sample preferences during training. p(ω) influences the regions of the Pareto front that are captured by MOGFN-PC. In our experiments, we use a Dirichlet(α) to sample preferences ω, which are encoded with a thermometer encoding (Buckman et al., 2018) when input to the policy. Following prior work, we also use an exponent β for the reward R(x|ω), i.e. π(x|ω) ∝ R(x|ω)^β. This incentivizes the policy to focus on the modes of R(x|ω), which is critical for the generation of high-reward and diverse candidates.
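As a concrete, minimal rendering of Equation 2 (purely illustrative; tensor shapes and batching are simplified), the loss reduces to a squared log-ratio once the per-trajectory log-probabilities have been summed:

```python
import torch

def preference_conditional_tb_loss(log_z_omega, log_pf, log_pb, log_reward):
    """Trajectory balance loss of Equation 2, in log space.

    log_z_omega: logZ_theta(omega) for the sampled preference
    log_pf / log_pb: summed log-probs of forward/backward transitions along tau
    log_reward: log R(x|omega), already including the exponent beta
    """
    return (log_z_omega + log_pf - log_reward - log_pb) ** 2

# Toy check with made-up numbers: the loss is zero when balance holds exactly.
lz = torch.tensor(1.0)
lpf, lpb, lr_ = torch.tensor(-3.0), torch.tensor(-1.5), torch.tensor(-0.5)
print(preference_conditional_tb_loss(lz, lpf, lpb, lr_))  # tensor(0.)
```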
MOGFN-PC and MOReinforce MOGFN-PC is closely related to MOReinforce (Lin et al., 2021) in that both learn a preference-conditional policy to sample Pareto-optimal candidates. The key difference is the learning objective: MOReinforce uses a multi-objective version of REINFORCE (Williams, 1992), whereas MOGFN-PC uses a preference-conditional GFlowNet objective as in Equation (2). As discussed in Section 2.1, each point on the Pareto front (corresponding to a unique ω) can be the image of multiple candidates in the Pareto set. MOReinforce, given a preference ω will converge to sampling a single candidate that maximizes R(x|ω). MOGFN-PC, on the other hand, samples from R(x|ω), which enables generation of diverse candidates from the Pareto set for a given ω. This is a key feature of MOGFN-PC whose advantage we empirically demonstrate in Section 5.
3.2 MULTI-OBJECTIVE ACTIVE LEARNING WITH GFLOWNETS
In many practical scenarios, the objective functions of interest are computationally expensive. For instance, in the drug discovery scenario, evaluating objectives such as the binding energy to a target even in simulations can take several hours. Sample-efficiency, in terms of number of evaluations of the objective functions, and diversity of candidates, thus become critical in such scenarios. Black-box optimization approaches involving active learning (Zuluaga et al., 2013), particularly multi-objective Bayesian optimization (MOBO) methods (Shah & Ghahramani, 2016; Garnett, 2022) are powerful approaches in these settings.
MOBO uses a probabilistic model to approximate the objectives R = {R1 . . . Rd} and leverages the epistemic uncertainty in the predictions of the model as a signal for prioritizing potentially useful candidates. The optimization is performed over M rounds, where each round i consists of generating a batch of candidates B given all the candidates Di proposed in the previous rounds. The batch B is then evaluated using the true objective functions. The candidates are generated in each round by maximizing an acquisition function a which combines the predictions with their epistemic uncertainty into a single scalar utility score. We note that each round is effectively a scalarization of the MOO problem, and as such it may be decomposed into each round’s single objective problem.
We broadly define MOGFN-AL as approaches which use GFlowNets to generate candidates in each round of an active learning loop for multi-objective optimization. MOGFN-AL tackles MOO through a sequence of single-objective sub-problems defined by acquisition function a. As such, MOGFN-AL can be viewed as a multi-objective extension of GFlowNet-AL (Jain et al., 2022). In this work, we consider an instantiation of MOGFN-AL for biological sequence design summarized in Algorithm 2 (Appendix A), building upon the framework proposed by Stanton et al. (2022).
We start with an initial dataset D0 = {(x_i, y_i)}_{i=1}^{N} of candidates x_i ∈ X and their evaluations with the true objectives, y_i = R(x_i). Di is used to train a surrogate probabilistic model (proxy) of the true objectives f̂ : X → R^d, which we parameterize as a multi-task Gaussian process (Shah & Ghahramani, 2016) with a deep kernel (DKL GP; Maddox et al., 2021a;b). Using this proxy, the acquisition function defines the utility to be maximized, a : X × F → R, where F denotes the space of functions represented by DKL GPs. In our work we use as acquisition function the noisy expected hypervolume improvement (NEHVI; Daulton et al., 2020).
We use GFlowNets to propose candidates at each round i by generating mutations for candidates x ∈ P̂i where P̂i is the set of non-dominated candidates in Di. Given a sequence x, the GFlowNet
generates a set of mutations m = {(l_i, v_i)}_{i=1}^{T}, where l ∈ {1, . . . , |x|} is the location to be replaced and v ∈ A is the token to replace x[l], while T is the number of mutations. This set is generated sequentially, such that each mutation is sampled from PF conditioned on x and the mutations sampled so far {(l_i, v_i)}. Let x′_m be the sequence resulting from mutations m on sequence x. The reward for a set of sampled mutations for x is the value of the acquisition function on x′_m, R(m, x) = a(x′_m|f̂). This approach of generating mutations to existing sequences provides a key advantage over generating sequences token-by-token as done in prior work (Jain et al., 2022): better scaling to longer sequences. We show empirically in Section 5.3 that generating mutations with GFlowNets results in more diverse candidates and faster improvements to the Pareto front than LaMBO (Stanton et al., 2022).
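Applying a sampled mutation set to a sequence is straightforward; the sketch below uses 0-based positions, whereas the text indexes locations from 1:

```python
def apply_mutations(x, mutations):
    """Apply point mutations m = [(l, v), ...] to sequence x:
    l is a 0-based position in x and v the replacement token."""
    chars = list(x)
    for l, v in mutations:
        chars[l] = v
    return "".join(chars)

print(apply_mutations("ACGT", [(1, "T"), (3, "A")]))  # "ATGA"
```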
4 RELATED WORK
Evolutionary Algorithms (EA) Traditionally, evolutionary algorithms such as NSGA-II have been widely used in various multi-objective optimization problems (Ehrgott, 2005; Konak et al., 2006; Blank & Deb, 2020). More recently, Miret et al. (2022) incorporated graph neural networks into evolutionary algorithms enabling them to tackle large combinatorial spaces. Unlike MOGFNs, evolutionary algorithms do not leverage any type of data, including past experiences, and therefore are required to solve each instance of a MOO from scratch rather than by amortizing computation during training in order to quickly generate solutions at run-time. Evolutionary algorithms, however, can be augmented with MOGFNs for generating mutations to improve efficiency, as in Section 3.2.
Multi-Objective Reinforcement Learning MOO problems have also received significant interest in the reinforcement learning (RL) literature (Hayes et al., 2022). Traditional approaches broadly consist of learning sets of Pareto-dominant policies (Roijers et al., 2013; Van Moffaert & Nowé, 2014; Reymond et al., 2022). Recent work has focused on extending deep RL algorithms to multi-objective settings, such as Envelope-MOQ (Yang et al., 2019), MO-MPO (Abdolmaleki et al., 2020; 2021), and MOReinforce (Lin et al., 2021). A general shortcoming of RL-based approaches is that they only discover a single mode of the reward function, and thus cannot generate diverse candidates; this also persists in the multi-objective setting. In contrast, MOGFNs sample candidates proportionally to the reward, implicitly resulting in diverse candidates.
Multi-Objective Bayesian Optimization (MOBO) Bayesian optimization (BO) has been used in the context of MOO when the objectives are expensive to evaluate and sample-efficiency is a key consideration. MOBO approaches consist of learning a surrogate model of the true objective functions, which is used to define an acquisition function such as expected hypervolume improvement (Emmerich et al., 2011; Daulton et al., 2020; 2021) and max-value entropy search (Belakaria et al., 2019), as well as scalarization-based approaches (Paria et al., 2020; Zhang & Golovin, 2020). Stanton et al. (2022) proposed LaMBO, which uses language models in conjunction with BO for multi-objective sequence design problems. The key drawbacks of MOBO approaches are that they do not consider the need for diversity in generated candidates and that they mainly consider continuous state spaces. As we discuss in Section 3.2, MOBO approaches can be augmented with GFlowNets for diverse candidate generation in discrete spaces.
Other Works Zhao et al. (2022) introduced LaMOO which tackles the MOO problem by iteratively splitting the candidate space into smaller regions, whereas Daulton et al. (2022) introduce MORBO, which performs BO in parallel on multiple local regions of the candidate space. Both these methods, however, are limited to continuous candidate spaces.
5 EMPIRICAL RESULTS
In this section, we present our empirical findings across a wide range of tasks, ranging from sequence design to molecule generation. The experiments cover two distinct classes of problems in the context of GFlowNets: where G is a DAG and where it is a tree. Through our experiments, we aim to answer the following questions:
Q1 Can MOGFNs model the preference-conditional reward distribution?
Q2 Can MOGFNs sample Pareto-optimal candidates?
Q3 Are candidates sampled by MOGFNs diverse?
Q4 Do MOGFNs scale to high-dimensional problems relevant in practice?
Metrics: We rely on standard metrics such as the Hypervolume (HV) and R2 indicators, as well as the Generational Distance+ (GD+). To measure diversity we use the Top-K Diversity and Top-K Reward metrics of Bengio et al. (2021a). We detail all metrics in Appendix D. For all our empirical evaluations we follow the same protocol. First, we sample a set of preferences which are fixed for all the methods. For each preference we sample 128 candidates from which we pick the top 10, compute their scalarized reward and diversity, and report the averages over preferences. We then use these samples to compute the HV and R2 indicators. We pick the best hyperparameters for all methods based on the HV and report the mean and standard deviation over 3 seeds for all quantities.
Baselines: We consider the closely related MOReinforce (Lin et al., 2021) as a baseline. We also study its variants MOSoftQL and MOA2C which use Soft Q-Learning (Haarnoja et al., 2017) and A2C (Mnih et al., 2016) in place of REINFORCE. We also compare against Envelope-MOQ (Yang et al., 2019), another popular multi-objective reinforcement learning method. For fragment-based molecule generation we consider an additional baseline MARS (Xie et al., 2021), a relevant MCMC approach for this task. To keep comparisons fair, we omit baselines like LaMOO (Zhao et al., 2022) and MORBO (Daulton et al., 2022) as they are designed for continuous spaces and rely on latent representations from pre-trained models for discrete tasks like molecule generation.
5.1 SYNTHETIC TASKS
5.1.1 HYPER-GRID
We first study the ability of MOGFN-PC to capture the preference-conditional reward distribution in a multi-objective version of the HyperGrid task from Bengio et al. (2021a). The goal here is to sample grid cells with probability proportional to a reward. We consider the following objectives for our experiments: branin(x), currin(x), shubert(x).²
Since the state space is small, we can compute the distribution learned by MOGFN-PC in closed form. In Figure 1a, we visualize π(x|ω), the distribution learned by MOGFN-PC conditioned on a set of fixed preference vectors ω, and contrast it with the true distribution R(x|ω) in a 32 × 32 hypergrid with 3 objectives. We observe that π(−|ω) and R(−|ω) are very similar. To quantify this, we compute E_x[|π(x|ω) − R(x|ω)/Z(ω)|] averaged over a set of 64 preferences, and find a difference of about 10⁻⁴. Note that MOGFN-PC is able to capture all the modes in the distribution, which suggests the candidates sampled from π would be diverse. Further, we compute the GD+ metric for the Pareto front of candidates generated with MOGFN-PC, which comes to an average value of 0.42. For more details about the task and additional results, refer to Appendix E.1.
5.1.2 N-GRAMS TASK
We consider a version of the synthetic sequence design task from Stanton et al. (2022). The task consists of generating strings, with the objectives given by occurrences of a set of d n-grams.
In the results summarized in Table 1, we consider 3 Bigrams (with common characters in the bigrams, resulting in correlated objectives) and 3 Unigrams (conflicting objectives) as the objectives. MOGFN-PC outperforms the baselines in terms of the MOO objectives while generating diverse candidates.
2We present additional results with more objectives in Appendix E.1
Since the objective counts occurrences of n-grams, the diversity is limited by the performance, i.e. high-scoring sequences will have lower diversity, explaining the higher diversity of MOSoftQL. We note that the MOReinforce and Envelope-MOQ baselines struggle in this task, potentially due to longer trajectories with sparse rewards. MOGFN-PC adequately models the trade-off between conflicting objectives in the 3 Unigrams task, as illustrated by the Pareto front of generated candidates in Figure 1b. For the 3 Bigrams task with correlated objectives, Figure 1c demonstrates that MOGFN-PC generates candidates which can simultaneously maximize multiple objectives. We refer the reader to Appendix E.2 for more task details and additional results with different numbers of objectives and varying sequence lengths.
5.2 BENCHMARK TASKS
5.2.1 QM9
We first consider a small-molecule generation task based on the QM9 dataset (Ramakrishnan et al., 2014). We generate molecules atom-by-atom and bond-by-bond, with up to 9 atoms, and use 4 reward signals. The main reward is obtained via an MXMNet (Zhang et al., 2020) proxy trained on QM9 to predict the HOMO-LUMO gap. The other rewards are Synthetic Accessibility (SA), a molecular weight target, and a molecular logP target. Rewards are normalized to be between 0 and 1, but the gap proxy can exceed 1, and so is clipped at 2. We train the models with 1M molecules and present the results in Table 2, showing that MOGFN-PC outperforms all baselines in terms of Pareto performance and diverse candidate generation.
5.2.2 FRAGMENT-BASED MOLECULE GENERATION
We evaluate our method on the fragment-based (Kumar et al., 2012) molecular generation task of Bengio et al. (2021a), where the task is to generate molecules by linking fragments to form a junction tree (Jin et al., 2020). The main reward function is obtained via a pretrained proxy, available from Bengio et al. (2021a), trained on molecules docked with AutodockVina (Trott & Olson, 2010) for the sEH target. The other rewards are based on Synthetic Accessibility (SA), drug likeness (QED), and a molecular weight target. We detail the reward construction in Appendix E.4. Similarly to QM9, we train MOGFN-PC to generate 1M molecules and report the results in Table 3. We observe that MOGFN-PC is consistently outperforming baselines not only in terms of HV and R2, but also candidate diversity score. Note that we do not report reward and diversity scores for MARS, since the lack of preference conditioning would make it an unfair comparison.
5.2.3 DNA SEQUENCE GENERATION
As a practical domain where the GFlowNet graph is a tree, we consider the generation of DNA aptamers, single-stranded nucleotide sequences that are popular in biological polymer design due to their specificity and affinity as sensors in crowded biochemical environments (Zhou et al., 2017; Corey et al., 2022; Yesselman et al., 2019; Kilgour et al., 2021). We generate sequences by adding one nucleobase (A, C, T or G) at a time, with a maximum length of 60 bases. We consider three objectives:
the free energy of the secondary structure calculated with the software NUPACK (Zadeh et al., 2011), the number of base pairs, and the inverse of the sequence length to favour shorter sequences.
We report the results in Table 4. In this case, the best Pareto performance is obtained by the multi-objective RL algorithm MOReinforce (Lin et al., 2021). However, it does so by finding a quasi-trivial solution with the pattern GCGCGC... for most lengths, yielding very low diversity. In contrast, MOGFN-PC obtains much higher diversity and Top-K rewards but worse Pareto performance. An extended discussion, an ablation study and further details are provided in Appendix E.5.
5.3 ACTIVE LEARNING
Finally, to evaluate MOGFN-AL, we consider the Proxy RFP task from Stanton et al. (2022), with the aim of discovering novel proteins with red fluorescence properties, optimizing for folding stability and solvent-accessible surface area. We adopt all the experimental details (described in Appendix E.6) from Stanton et al. (2022), using MOGFN-AL for candidate generation. In addition to LaMBO, we use a model-free (NSGA-2) and model-based EA from Stanton et al. (2022) as baselines. We observe in Figure 2a that MOGFN-AL results in significant gains to the improvement in Hypervolume relative to the initial dataset, in a given budget of black-box evaluations. In fact, MOGFN-AL is able to match the performance of LaMBO within about half the number of black-box evaluations.
Figure 2b illustrates the Pareto frontier of candidates generated with MOGFN-AL, which dominates the Pareto frontier of the initial dataset. As the candidates are generated by mutating sequences on the existing Pareto front, we also highlight the sequences that are mutations of each sequence in the initial dataset with the same color. To quantify the diversity of the generated candidates, we measure the average e-value from DIAMOND (Buchfink et al., 2021) between the initial Pareto front and the Pareto frontier of generated candidates. Figure 2c shows that MOGFN-AL generates candidates that are more diverse than the baselines.
6 ANALYSIS
In this section, we isolate the important components of MOGFN-PC: the distribution p(ω) for sampling preferences during training, the reward exponent β, and the reward scalarization R(x|ω), to understand the impact of each component on Pareto performance and diversity. We consider the 3 Bigrams task discussed in Section 5.1.2 and the fragment-based molecule generation task from Section 5.2.2 for this analysis, and provide further results in the Appendix.
Impact of p(ω) To examine the effect of p(ω), which controls the coverage of the Pareto front, we set it to Dirichlet(α) and vary α ∈ {0.1, 1, 10}. This results in ω being sampled from different regions of ∆d. Specifically, α = 1 corresponds to a uniform distribution over ∆d, α > 1 is skewed towards the center of ∆d, whereas α < 1 is skewed towards the corners of ∆d. In Table 5 and Table 6 we observe that α = 1 results in the best performance. Despite the skewed distributions with α = 0.1 and α = 10, we still achieve performance close to that of α = 1, indicating that MOGFN-PC is able to interpolate to preferences not sampled during training. Note that diversity is not affected significantly by p(ω).
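To make the role of α concrete, the following minimal sketch (assuming only numpy; the dimension d = 3 and the sample size are illustrative choices, not values from our experiments) samples preferences for each α and reports how far they fall from the center of the simplex.

import numpy as np

rng = np.random.default_rng(0)
d = 3  # number of objectives (illustrative)
for alpha in (0.1, 1.0, 10.0):
    omega = rng.dirichlet(np.full(d, alpha), size=10_000)
    # Distance to the simplex center (1/d, ..., 1/d): small alpha pushes
    # samples toward the corners, large alpha toward the center.
    dist = np.linalg.norm(omega - 1.0 / d, axis=1).mean()
    print(f"alpha={alpha}: mean distance to simplex center = {dist:.3f}")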
Impact of β During training, β controls the concentration of the reward density around modes of the distribution. For large values of β, the reward density around the modes becomes more peaked, and vice versa. In Table 5 and Table 6 we present the results obtained by varying β ∈ {16, 32, 48}. As β increases, MOGFN-PC is incentivized to generate samples closer to the modes of R(x|ω), resulting in better Pareto performance. However, with high β values, the reward density is concentrated close to the modes and there is a negative impact on the diversity of the candidates.
Choice of scalarization R(x|ω) Next, we analyse the effect of the scalarization defining R(x|ω) used for training. The set of R(x|ω) for different ω specifies the family of MOO sub-problems and thus has a critical impact on the Pareto performance. Table 5 and Table 6 include results for the Weighted Sum (WS), Weighted-log-sum (WL) and Weighted Tchebycheff (WT) scalarizations. Note that we do not compare the Top-K Reward as different scalarizations cannot be compared directly. WS scalarization results in the best performance. WL scalarization, on the other hand, is not formally guaranteed to cover the Pareto front and consequently results in poor Pareto performance. We suspect the poor performance of WT and WL is in part also due to the harder reward landscapes they induce.
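For reference, a minimal sketch of the three scalarizations (numpy assumed; the negation in WT and the ϵ in WL are our own conventions so that larger values are always better, and the utopian point z* is an assumed input rather than part of the method description above):

import numpy as np

def weighted_sum(r, w):
    # WS: R(x|omega) = sum_i w_i * r_i
    return float(np.dot(w, r))

def weighted_log_sum(r, w, eps=1e-8):
    # WL: weighted sum taken in log space, sum_i w_i * log(r_i)
    return float(np.dot(w, np.log(np.asarray(r) + eps)))

def weighted_tchebycheff(r, w, z_star):
    # WT: negated max_i w_i * |z*_i - r_i| (smaller distance to utopia is better)
    return float(-np.max(np.asarray(w) * np.abs(np.asarray(z_star) - np.asarray(r))))

r, w = [0.8, 0.4, 0.9], [0.5, 0.3, 0.2]
print(weighted_sum(r, w), weighted_log_sum(r, w),
      weighted_tchebycheff(r, w, z_star=[1.0, 1.0, 1.0]))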
7 CONCLUSION
In this work, we have empirically demonstrated the generalization of GFlowNets to conditional GFlowNets for multi-objective optimization problems (MOGFN) to promote the generation of diverse optimal candidates. We presented two instantiations of MOGFN: MOGFN-PC, which leverages reward-conditional GFlowNets (Bengio et al., 2021b) to model a family of single-objective subproblems, and MOGFN-AL, which sequentially solves a set of single-objective problems defined by multi-objective acquisition functions. Finally, we empirically demonstrated the efficacy of MOGFNs for generating diverse Pareto-optimal candidates on sequence and graph generation tasks.
As a limitation, we identify that in certain domains, such as DNA sequence generation, MOGFN generates diverse candidates but currently does not match RL algorithms in terms of Pareto performance. The analysis in Section 6 hints that the distribution of sampling preferences p(ω) affects the Pareto performance. Since for certain practical applications only a specific region of the Pareto front is of interest, future work may explore gradient-based techniques to learn preferences for more structured exploration of the preference space. Within the context of MOGFN-AL, an interesting research avenue is the development of preference-conditional acquisition functions.
Reproducibility Statement We include the code necessary to replicate experiments with our submission and provide detailed description of experimental setups in the Appendix. All datasets and pretrained models used are publicly available or included in the supplementary materials.
Ethics Statement We acknowledge that as with all machine learning algorithms, there is potential for dual use of multi-objective GFlowNets by nefarious agents. This work was motivated by the application of machine learning to accelerate scientific discovery in areas that can benefit humanity. We explicitly discourage the use of multi-objective GFlowNets in applications that may be harmful to others.
A ALGORITHMS
We summarize the algorithms for MOGFN-PC and MOGFN-AL here.
Algorithm 1: Training preference-conditional GFlowNets
Input: p(ω): distribution for sampling preferences; β: reward exponent; δ: mixing coefficient for uniform actions in the sampling policy; N: number of training steps
Initialize: (PF(s′|s, ω), PB(s|s′, ω), logZ(ω)): conditional GFlowNet with parameters θ
for i = 1 to N do
    Sample preference ω ∼ p(ω)
    Sample trajectory τ following the policy π̂ = (1 − δ)PF + δ·Uniform
    Compute the reward R(x|ω)^β for the generated sample and the corresponding loss L(τ, ω; θ) as in Equation 2
    Update parameters θ with gradients from the loss, ∇θL(τ, ω)
end
Algorithm 2: Training MOGFN-AL
Input: R = {R1, . . . , Rd}: oracles that evaluate a candidate x and return the true objectives (R1(x), . . . , Rd(x)); D0 = {(xi, yi)}: initial dataset with yi = R(xi); f̂: probabilistic surrogate model for the posterior over R given a dataset D; a(x|f̂): acquisition function computing a scalar utility for x given f̂; πθ: learnable GFlowNet policy; b: size of the candidate batch to be generated; N: number of active learning rounds
Initialize: f̂, πθ
for i = 1 to N do
    Fit f̂ on dataset Di−1
    Extract the set of non-dominated candidates P̂i−1 from Di−1
    Train πθ to generate mutations for x ∈ P̂i−1, using a(−|f̂) as the reward
    Generate a batch B = {x′1, . . . , x′b} by sampling xi from P̂i−1 and applying to it mutations mi sampled from πθ
    Evaluate the batch B with R to obtain D̂i = {(x1, R(x1)), . . . , (xb, R(xb))}
    Update the dataset Di = D̂i ∪ Di−1
end
Result: Approximate Pareto set P̂N
B SCALARIZATION
Scalarization is a popular approach for tackling multi-objective optimization problems. MOGFN-PC can build upon any scalarization approach; we consider three choices. Weighted-sum (WS) scalarization has been widely used in the literature. WS finds candidates on the convex hull of the Pareto front (Ehrgott, 2005). Under the assumption that the Pareto front is convex, every Pareto optimal solution is a solution to a weighted sum problem and the solution to every weighted sum problem is Pareto optimal. Weighted Tchebycheff (WT), proposed by Choo & Atkins (1983), is an alternative designed for non-convex Pareto fronts. Any Pareto optimal solution can be found by solving the weighted Tchebycheff problem with appropriate weights, and the solutions for any weights correspond to a weakly Pareto optimal solution of the original problem (Pardalos et al., 2017). Lin et al. (2021) demonstrated through their empirical results that WT can be used with neural network based policies. The third scheme we consider, Weighted-log-sum (WL), has not been considered in prior work. We hypothesized that in some practical scenarios we might want to ensure that all objectives are optimized, since, for instance, in WS the scalarized reward can be dominated by a single reward. WL, which considers the weighted sum in log space, can potentially help with this drawback. However, as discussed in Section 6, in practice WL can be hard to optimize and can lead to poor performance.
C ADDITIONAL ANALYSIS
Can MOGFN-PC match Single Objective GFNs? To evaluate how well MOGFN-PC models the family of rewards R(x|ω), we consider a comparison with single objective GFlowNets. More specifically, we first sample a set of 10 preferences ω1, . . . , ω10, and train a standard single objective GFlowNet using the weighted sum scalar reward for each preference. We then generate N = 128 candidates from each GFlowNet, throughout training, and compute the mean reward for the top 10 candidates for each preference. We average this top 10 reward across {ω1, . . . , ω10}, and call it Rso. We then train MOGFN-PC, apply the same procedure with the preferences {ω1, . . . , ω10}, and call the resulting mean of top 10 rewards Rmo. We plot the value of the ratio Rmo/Rso in Figure 3. We observe that the ratio stays close to 1, indicating that MOGFN-PC can indeed model the entire family of rewards simultaneously, at least as fast as a single objective GFlowNet could.
Effect of Model Capacity and Architecture Finally, we look at the effect of model size in training MOGFN-PC. As MOGFN-PC models a conditional distribution, an entire family of functions as described before, we expect capacity to play a crucial role since the amount of information to be learned is higher than for a single-objective GFN. We increase the model size in the 3 Bigrams task to study that effect, and observe in Table 7 that larger models do help with performance, although the performance plateaus after a point. We suspect that in order to fully utilize the model capacity we might need better training objectives.
D METRICS
In this section we discuss the various metrics that we used to report the results in Section 5.
1. Generational Distance Plus (GD+) (Ishibuchi et al., 2015): This metric measures the Euclidean distance between the solutions of the Pareto approximation and the true Pareto front by taking the dominance relation into account. To calculate GD+ we require knowledge of the true Pareto front and hence we only report this metric for the Hypergrid experiments (Section 5.1.1).
2. Hypervolume (HV) Indicator (Fonseca et al., 2006): This is a standard metric reported in MOO works, which measures the volume in the objective space, with respect to a reference point, spanned by a set of non-dominated solutions in the Pareto front approximation.
3. R2 Indicator (Hansen & Jaszkiewicz, 1994): R2 provides a monotonic metric comparing two Pareto front approximations using a set of uniform reference vectors and a utopian point z* representing the ideal solution of the MOO. Specifically, we define a set of uniform reference vectors λ ∈ Λ that cover the space of the MOO and then calculate

R2(Γ, Λ, z*) = (1/|Λ|) Σ_{λ∈Λ} min_{γ∈Γ} { max_{i∈1,...,k} λ_i |z*_i − γ_i| },

where γ ∈ Γ corresponds to the set of solutions in a given Pareto front approximation and z* is the utopian point corresponding to the ideal solution of the MOO. Generally, R2 metric calculations are performed with z* equal to the origin and all objectives transformed to a minimization setting, which serves to preserve the monotonic nature of the metric. This holds true for our experiments as well.
4. Top-K Reward This metric was originally used in (Bengio et al., 2021a), which we extend for our multi-objective setting. For MOGFN-PC, we sample N candidates per test preference and then pick the top-k candidates (k < N ) with highest scalarized rewards and calculate the mean. We repeat this for all test preferences enumerated from the simplex and report the average top-k reward score.
5. Top-K Diversity This metric was also originally used in (Bengio et al., 2021a), which we again extend for our multi-objective setting. We use this metric to quantify the notion of diversity of the generated candidates. Given a distance metric d(x, y) between candidates x and y, we consider candidates diverse when their pairwise distance d(x, y) is greater than a threshold ϵ. For MOGFN-PC, we sample N candidates per test preference, pick the top-k candidates based on the diversity scores, and take the mean. We repeat this for all test preferences sampled from the simplex and report the average top-k diversity score. We use the edit distance for sequences, and 1 minus the Tanimoto similarity for molecules. Minimal sketches of the HV, R2 and diversity computations are given after this list.
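The following is a minimal sketch of three of these computations (plain Python/numpy; the 2-D hypervolume routine assumes a maximization convention with the reference point below the front, the diversity sketch reports the mean pairwise edit distance and omits the thresholding and top-k selection described above, and the toy inputs are illustrative):

import itertools
import numpy as np

def hypervolume_2d(front, ref):
    # Exact 2-D hypervolume (maximization) w.r.t. reference point `ref`:
    # sweep points by decreasing first objective, accumulating strips.
    pts = sorted(front, key=lambda p: p[0], reverse=True)
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y > prev_y:
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

def r2_indicator(front, ref_vectors, z_star):
    # R2 = (1/|L|) * sum_lam min_gamma max_i lam_i * |z*_i - gamma_i|
    front, z_star = np.asarray(front), np.asarray(z_star)
    vals = [np.min(np.max(lam * np.abs(z_star - front), axis=1))
            for lam in ref_vectors]
    return float(np.mean(vals))

def edit_distance(a, b):
    # One-row dynamic-programming Levenshtein distance.
    dp = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(b) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + (a[i - 1] != b[j - 1]))
            prev = cur
    return dp[-1]

def mean_pairwise_diversity(candidates):
    pairs = list(itertools.combinations(candidates, 2))
    return sum(edit_distance(a, b) for a, b in pairs) / len(pairs)

print(hypervolume_2d([(3, 1), (2, 2), (1, 3)], ref=(0, 0)))   # 6.0
print(mean_pairwise_diversity(["ACGT", "AGGT", "TTTT"]))      # mean edit distance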
E ADDITIONAL EXPERIMENTAL DETAILS
E.1 HYPER-GRID
Here we elaborate on the Hyper-Grid experimental setup which we discussed in Section 5.1.1. Consider an n-dimensional hypercube gridworld where each cell in the grid corresponds to a state. The agent starts at the top left coordinate marked as (0, 0, . . . ) and is allowed to move only towards the right, down, or stop. When the agent performs the stop action, the trajectory terminates and the agent receives a non-zero reward. In this work, we consider the following reward functions: branin(x), currin(x), sphere(x), shubert(x), beale(x). In Figure 4, we show the heatmap for each reward function. Note that we normalize all the reward functions between 0 and 1.
Additional Results To verify the efficacy of MOGFNs across different objective sizes, we perform some additional experiments and measure the L1 loss and the GD+ metric. In Figure 5, we can see that as the reward dimension increases, the loss and GD+ increase. This is expected because the number of rewards is indicative of the difficulty of the problem. We also present extended qualitative visualizations across more preferences in Figure 6.
Model Details and Hyperparameters For MOGFN-PC policies we use an MLP with two hidden layers, each consisting of 64 units. We use LeakyReLU as our activation function, as in Bengio et al. (2021a). All models are trained with learning rate 0.01 with the Adam optimizer (Kingma & Ba, 2015) and batch size 128. We sample preferences ω from Dirichlet(α) where α = 1.5. We try two techniques for encoding preferences: 1) vanilla encoding, where we just use the raw values of the preference vectors, and 2) thermometer encoding (Buckman et al., 2018). In our experiments we have not observed a significant difference in performance between the two.
E.2 N-GRAMS TASK
Task Details The task is to generate sequences of some maximum length L, which we set to 36 for the experiments in Section 5.1.2. We consider a vocabulary (actions) of size 21, with 20 characters ["A", "R", "N", "D", "C", "E", "Q", "G", "H", "I", "L", "K", "M", "F", "P", "S", "T", "W", "Y", "V"] and a special token to indicate the end of sequence. The rewards {Ri}_{i=1}^{d} are defined by the number of occurrences of a given set of n-grams in a sequence x. For instance, consider ["AB", "BA"] as the n-grams. The rewards for the sequence x = ABABC would be [2, 1]. We consider two choices of n-grams: (a) Unigrams: the number of occurrences of a set of unigrams induces conflicting objectives since we cannot increase the number of occurrences of one unigram without replacing another in a string of a particular length; (b) Bigrams: given common characters within the bigrams, the occurrences of multiple bigrams can be increased simultaneously within a string of a fixed length. We also consider different sizes for the set of n-grams considered, i.e. different numbers of objectives. This allows us to evaluate the behaviour of MOGFN-PC on a variety of objective spaces. We summarize the specific objectives used in our experiments in Table 8. We normalize the rewards to [0, 1] in our experiments.
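As a concrete illustration, a short sketch of this reward computation (plain Python; overlapping occurrences are counted, matching the ABABC example above):

def ngram_rewards(seq, ngrams):
    # Count (possibly overlapping) occurrences of each n-gram in `seq`.
    return [sum(seq[i:i + len(g)] == g for i in range(len(seq) - len(g) + 1))
            for g in ngrams]

print(ngram_rewards("ABABC", ["AB", "BA"]))  # [2, 1]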
Model Details and Hyperparameters We build upon the implementation from Stanton et al. (2022) for the task: https://github.com/samuelstanton/lambo. For the string generation task, the backward policy PB is trivial (as there is only one parent for each node s ∈ S), so we only have to parameterize PF and logZ. As PF (−|s, ω) is a conditional policy, we use a Conditional Transformer encoder as the architecture. This consists of a Transformer encoder (Vaswani et al., 2017) with 3 hidden layers of dimension 64 and 8 attention heads to embed the current state (string generated so far) s. We have an MLP which embeds the preferences ω which are encoded using thermometer encoding with 50 bins. The embeddings of the state and preferences are concatenated and passed to a final MLP which generates a categorical distribution over the actions (vocabulary token). We use the same architecture for the baselines using a conditional policy – MOReinforce and MOSoftQL. For EnvelopeMOQ, which does not condition on the preferences, we use a standard Transformer-encoder with a similar architecture. We present the hyperparameters we used in Table 9. Each method is trained for 10,000 iterations with a minibatch size of 128. For the baselines we adopt the official implementations released by the authors for MOReinforce – https://github.com/Xi-L/PMOCO and EnvelopeMOQ – https://github.com/RunzheYang/MORL.
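The following is a minimal, self-contained sketch of this conditioning scheme (PyTorch assumed; for brevity it feeds the raw preference vector to the MLP instead of the thermometer encoding, pools the encoder output by mean, and all dimensions are illustrative rather than the exact values above):

import torch
import torch.nn as nn

class PreferenceConditionedPolicy(nn.Module):
    def __init__(self, vocab_size=21, d_model=64, n_heads=8, n_layers=3,
                 n_obj=3, n_actions=21):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.pref_mlp = nn.Sequential(nn.Linear(n_obj, d_model), nn.ReLU(),
                                      nn.Linear(d_model, d_model))
        self.head = nn.Sequential(nn.Linear(2 * d_model, d_model), nn.ReLU(),
                                  nn.Linear(d_model, n_actions))

    def forward(self, tokens, omega):
        h = self.encoder(self.embed(tokens)).mean(dim=1)  # state embedding
        p = self.pref_mlp(omega)                          # preference embedding
        return self.head(torch.cat([h, p], dim=-1))       # action logits

policy = PreferenceConditionedPolicy()
tokens = torch.randint(0, 21, (4, 12))                    # batch of partial strings
omega = torch.rand(4, 3); omega /= omega.sum(-1, keepdim=True)
print(policy(tokens, omega).shape)                        # torch.Size([4, 21])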
Additional Results We present some additional results for the n-grams task. We consider different numbers of objectives, d ∈ {2, 4}, in Table 10 and Table 11, respectively. As with the experiments in Section 5.1.2, we observe that MOGFN-PC outperforms the baselines in Pareto performance while achieving high diversity scores. In Table 12, we consider the case of shorter sequences, L = 24.
MOGFN-PC continues to provide significant improvements over the baselines. There are two trends we can observe considering the N-grams task holistically:
1. As the sequence size increases, the advantage of MOGFN-PC becomes more significant.
2. The advantage of MOGFN-PC increases with the number of objectives.
E.3 QM9

Reward Details As mentioned in Section 5.2.1, we consider four reward functions for our experiments. The first reward function is the HOMO-LUMO gap, for which we rely on the predictions of a pretrained MXMNet (Zhang et al., 2020) model trained on the QM9 dataset (Ramakrishnan et al., 2014). The second reward is the standard Synthetic Accessibility score, which we calculate using the RDKit library (Landrum); to get the reward we compute (10 − SA)/9. The third reward function is a molecular weight target. Here we first calculate the molecular weight of a molecule using RDKit, and then construct a reward function of the form exp(−(molWt − 105)²/150), which is maximized at 105. Our final reward function is a logP target, exp(−(logP − 2.5)²/2), which is again calculated with RDKit and is maximized at 2.5.
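A small sketch of these reward shapes (plain Python; the SA, molecular weight and logP values themselves would come from RDKit, which we do not import here):

import math

def sa_reward(sa_score):
    # Synthetic Accessibility score in [1, 10] mapped to [0, 1].
    return (10 - sa_score) / 9

def molwt_reward(mol_wt):
    # exp(-(molWt - 105)^2 / 150), maximized at molWt = 105.
    return math.exp(-((mol_wt - 105) ** 2) / 150)

def logp_reward(logp):
    # exp(-(logP - 2.5)^2 / 2), maximized at logP = 2.5.
    return math.exp(-((logp - 2.5) ** 2) / 2)

print(molwt_reward(105), logp_reward(2.5), sa_reward(1))  # 1.0 1.0 1.0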
Model Details and Hyperparameters We sample new preferences for every episode from a Dirichlet(α), and encode the desired sampling temperature using a thermometer encoding (Buckman et al., 2018). We use a graph neural network based on a graph transformer architecture (Yun et al., 2019). We transform this conditional encoding to an embedding using an MLP. The embedding is then fed to the GNN as a virtual node, as well as concatenated with the node embeddings in the graph. The model’s action space is to add a new node to the graph, a new bond, or set node or bond properties (like making a bond a double bond). It also has a stop action. For more details please refer to the code provided in the supplementary material. We summarize the hyperparameters used in Table 13.
E.4 FRAGMENTS
More Details As mentioned in Section 5.2.2, we consider four reward functions for our experiments. The first reward function is a proxy trained on molecules docked with AutodockVina (Trott & Olson, 2010) for the sEH target; we use the weights provided by Bengio et al. (2021a). We also use synthetic accessibility, as for QM9, and a weight target region (instead of the specific target weight used for QM9), ((300 - molwt) / 700 + 1).clip(0, 1), which favors molecules with a weight under 300. Our final reward function is QED, which is again calculated with RDKit.
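Written out as a function, the weight-region reward is (a sketch in plain Python):

def weight_region_reward(mol_wt):
    # ((300 - molwt) / 700 + 1).clip(0, 1): equals 1 for molwt <= 300
    # and decays linearly to 0 at molwt = 1000.
    return min(max((300 - mol_wt) / 700 + 1, 0.0), 1.0)

print(weight_region_reward(250), weight_region_reward(650),
      weight_region_reward(1200))  # 1.0 0.5 0.0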
Model Details and Hyperparameters We again use a graph neural network based on a graph transformer architecture (Yun et al., 2019). The experimental protocol is similar to QM9 experiments discussed in Appendix E.3. We additionally sample from a lagged model whose parameters are updated as θ′ = τθ′ + (1− τ)θ. The model’s action space is to add a new node, by choosing from a
list of fragments and an attachment point on the current molecular graph. We list all hyperparameters used in Table 14.
Additional Results We also present in Figure 7 a view of the reward distribution produced by MOGFN-PC. Generally, the model is able to find good near-Pareto-optimal samples, but is also able to spend a lot of time exploring. The figure also shows that the model is able to respect the preference conditioning, and remains capable of generating a diverse distribution rather than a single point.
In the off-diagonal plots of Figure 7, we show pairwise scatter plots for each objective pair; the Pareto front is depicted with a red line; each point corresponds to a molecule generated by the model as it explores the state space; color is density (linear viridis palette). The diagonal plots show two overlaid pieces of information: a blue histogram for each objective, and an orange scatter plot showing the relationship between preference conditioning and generated molecules. The effect of this conditioning is particularly visible for seh (top left) and wt (bottom right). As the preference for the sEH binding reward gets closer to 1, the generated molecules' reward for sEH gets closer to 1 as well. Indeed, the expected shape for such a scatter plot is triangular-ish: when the preference ωi for reward Ri is close to 1, the model is expected to generate objects with a high reward for Ri; as the preference ωi gets further away from 1, the model can generate anything, including objects with a high Ri, unless there is a trade-off between objectives, in which case it cannot; this is the case for the seh objective, but not for the wt objective, which has a more triangular shape.
E.5 DNA SEQUENCE DESIGN
Task Details The set of building blocks here consists of the bases ["A", "C", "T", "G"] in addition to a special end-of-sequence token. In order to compute the free energy and the number of base pairs with the software NUPACK (Zadeh et al., 2011), we used 310 K as the temperature. The inverse of the length L objective was calculated as 30/L, as 30 was the minimum length for sampled sequences. The rewards are normalized to [0, 1] for our experiments.
Model Details and Hyperparameters We use the same implementation as the N-grams task, detailed in Appendix E.2. Here we instead consider a 4-layer Transformer architecture, with 256 units per layer and 16 attention heads. We detail the most relevant hyperparameters in Table 15.
Discussion of Results Contrary to the other tasks on which we evaluated MOGFN-PC, for the generation of DNA aptamer sequences our proposed model did not match the best baseline, multi-objective reinforcement learning (Lin et al., 2021), in terms of Pareto performance. Nonetheless, it is worth delving into the details in order to better understand the different solutions found by the two methods. First, as indicated in Section 5, despite the better Pareto performance, the best sequences generated by the RL method have extremely low diversity (0.62), compared to MOGFN, which generates optimal sequences with diversity of 19.6 or higher. As a matter of fact, MOReinforce mostly samples sequences with the well-known pattern GCGC... for all possible lengths. Sequences with this pattern have indeed low (negative) energy and a large number of base pairs, but they offer little new insight and poor diversity if the model is not able to generate sequences with other distinct patterns. On the contrary, GFlowNets are able to generate sequences with patterns other than repeating the pair of bases G and C. Interestingly, we observed that GFlowNets were able to generate sequences with even lower energy than the best sequences generated by MOReinforce by inserting bases A and T into chains of GCGC.... Finally, we observed that one reason why MOGFN does not match the Pareto performance of MOReinforce is that, for short lengths (one of the objectives), the energy and number of pairs are not successfully optimised. Nonetheless, the optimisation of energy and number of pairs is very good for the longest sequences. Given these observations, we conjecture that there is room for improving the set of hyperparameters or certain aspects of the algorithm.
Additional Results In order to better understand the impact of the main hyperparameters of MOGFN-PC on the Pareto performance and diversity of the optimal candidates, we train multiple instances by sweeping over several values of the hyperparameters, as indicated in Table 15. We present the results in Table 16. One key observation is that there seems to be a tradeoff between the Pareto performance and the diversity of the Top-K sequences. Nonetheless, even the models with the lowest diversity are able to generate much more diverse sequences than MOReinforce. Furthermore, we also observe that α < 1 as the parameter of the Dirichlet distribution used to sample the weight preferences, as well as a higher β (reward exponent), both yield better metrics of Pareto performance but slightly worse diversity. In the case of β, this observation is consistent with the results in the Bigrams task (Table 5), but with Bigrams the best performance was obtained with α = 1. This is indicative of a degree of dependence on the task and the nature of the objectives.
E.6 ACTIVE LEARNING
Task Details We consider the Proxy RFP task from Stanton et al. (2022), an in silico benchmark task designed to simulate searching for improved red fluorescent protein (RFP) variants (Dance et al., 2021). The objectives considered are stability (-dG or negative change in Gibbs free energy) and
solvent-accessible surface area (SASA) (Shrake & Rupley, 1973) in simulation, computed using the FoldX suite (Schymkowitz et al., 2005) and BioPython (Cock et al., 2009). We use the dataset introduced in Stanton et al. (2022) as the initial pool of candidates D0 with |D0| = 512.

Method Details and Hyperparameters Our implementation builds upon the publicly released code from Stanton et al. (2022): https://github.com/samuelstanton/lambo. We follow the exact experimental setup used in Stanton et al. (2022). The surrogate model f̂ consists of an encoder with 1D convolutions (masking positions corresponding to padding tokens). We used 3 standard pre-activation residual blocks with two convolution layers, layer norm, and swish activations, with a kernel size of 5, 64 intermediate channels and 16 latent channels. A multi-task GP with an ICM kernel is defined in the latent space of this encoder, which outputs the predictions for each objective. We also use the training tricks detailed in Stanton et al. (2022) for the surrogate model. The hyperparameters, taken from Stanton et al. (2022), are shown in Table 17. The acquisition function used is NEHVI (Daulton et al., 2021), defined as
α({x_j}_{j=1}^{i}) = (1/N) Σ_{t=1}^{N} HVI({f̃_t(x_j)}_{j=1}^{i−1} | P_t) + (1/N) Σ_{t=1}^{N} HVI(f̃_t(x_i) | P_t ∪ {f̃_t(x_j)}_{j=1}^{i−1})    (3)
where f̃_t, t = 1, . . . , N are independent draws from the surrogate model (which is a posterior over functions), and P_t denotes the Pareto frontier in the current dataset D under f̃_t.
We replace the LaMBO candidate generation with GFlowNets. We generate a set of mutations m = {(li, vi)} for a sequence x from the current approximation of the Pareto front P̂i.
Hyperparameter                  Values
Learning Rate (PF)              {0.01, 0.001, 0.0001}
Learning Rate (Z)               {0.01, 0.001}
Reward Exponent: β              {16, 24}
Uniform Policy Mix: δ           {0.01, 0.05}
Maximum number of mutations     {10, 15, 20}
δβ                              {0.5, 1, 2}

1. What is the focus and contribution of the paper on multi-objective optimization?
2. What are the strengths of the proposed approach, particularly in extending GFlowNets?
3. What are the weaknesses of the paper regarding its goals, assumptions, and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
This paper extends the GFlowNets proposed for single-objective optimization to multi-objective optimization, namely MOGFNs. Experiments are conducted on molecule generation and sequence generation to study the effectiveness of the proposed method.
Strengths And Weaknesses
Strengths
1. This paper extends the GFlowNets proposed for single-objective optimization to multi-objective optimization.
2. Analyses are conducted to study the main components of MOGFNs.
Weaknesses
1. The goal of the research in this paper is not clear. How is diversity defined if it is not to cover the Pareto front of a multi-objective optimization problem?
2. The proposed algorithm is mainly based on the assumption that a multi-objective optimization problem can be viewed as a family of single-objective problems defined by non-negative weight vectors. This might be inappropriate when the Pareto front is not convex.
3. Similar to this paper, decomposition-based multi-objective evolutionary algorithms such as MOEA/D solve multi-objective optimization problems by considering a set of single-objective sub-problems. The advantages of the proposed method over such methods need to be discussed and evaluated.
Clarity, Quality, Novelty And Reproducibility
This paper is not well-organized. The problem to be solved is unclear and the novelty of the proposed method needs more discussions. |
Title
An Analysis of Composite Neural Network Performance from Function Composition Perspective
Abstract
This work investigates the performance of a composite neural network, which is composed of pre-trained neural network models and non-instantiated neural network models, connected to form a rooted directed graph. A pre-trained neural network model is generally a well-trained neural network model targeted at a specific function. The advantages of adopting such a pre-trained model in a composite neural network are twofold. One is to benefit from others' intelligence and diligence, and the other is to save the effort of data preparation and the resources and time of training. However, the overall performance of a composite neural network is still not clear. In this work, we prove that a composite neural network, with high probability, performs better than any of its pre-trained components under certain assumptions. In addition, if an extra pre-trained component is added to a composite network, with high probability the overall performance will be improved. In the empirical evaluations, distinctively different applications support the above findings.
1 INTRODUCTION
Deep learning has been a great success in dealing with natural signals, e.g., images and voice, as well as artifact signals, e.g., natural language, while it is still at an early stage in handling sophisticated social and natural applications shaped by very diverse factors (e.g., stock market prediction) or resulting from complicated processes (e.g., pollution level prediction). One of the distinctive features of these complicated applications is that their applicable data sources are boundless. Consequently, their solutions need frequent revision. Although neural networks can approximate arbitrary functions arbitrarily closely (Hornik, 1991), the major reason no such competent neural network exists for these complicated applications is that the problems are hardly fully understood and the applicable data sources cannot be identified all at once. By far the most common practice is that developers pick a seemingly suitable neural network with the available data and hope for the best. The apparent drawbacks, besides the performance, are the lack of flexibility in accommodating newly emerging data sources, in decomposing the problem better, and in employing proven efforts from others. Alternatively, some developers adopt a composition of several neural network models, based on function composition using domain knowledge.
An emerging trend of deep learning solution development is to employ well-crafted pre-trained neural networks (i.e., neural network models with instantiated weights), especially as components in a composite neural network model. Most popular pre-trained neural network models are well fine-tuned with adequate training data and made available to the public, either free or as commercial products. During the training phase of the composite neural network, the weights of the pre-trained models are frozen to maintain their good quality and save training time, while the weights of their outgoing edges are trainable. In some cases, as in transfer learning, the weights of the pre-trained neural network are used as initial values in the training phase of the composite neural network. It is intuitive that a composite neural network should perform better than any of its components. Ensemble learning (Freund & Schapire, 1997; Zhou, 2012) and transfer learning (Galanti et al., 2016) have had great success and are popular when pre-trained models are considered. However, the following example shows some aspects missed by these two methods, and calls for more complicated composite functions.
Example 1. Assume there is a set of locations indexed as X = {(0, 0), (0, 1), (1, 0), (1, 1)} with the corresponding values Y = (0, 1, 1, 0). Obviously, the observed function is the XOR (Goodfellow et al., 2016). Now consider three models: f1(x1, x2) := x1, f2(x1, x2) := x2, and f3(x1, x2) := x1x2. Their corresponding output vectors are (0, 0, 1, 1), (0, 1, 0, 1), (0, 0, 0, 1), with bit-wise accuracy 50%, 50%, 25%, respectively. This means that the AdaBoosting algorithm will exclude f1 and f2 from the ensemble since their coefficients are (1/2) ln((1 − 50%)/50%) = 0. On the other hand, in transfer learning, f3 is fine-tuned by applying the gradient descent method with respect to the L2 loss on wf3 = wx1x2 to transfer the source task distribution to that of the target task. The result comes to w = 0, and f3 is excluded. Now consider g1(x1, x2) = α1f1 + α2f2 and apply the back-propagation method with respect to the L2 loss. The results are α1 = α2 = 1/3, with loss 4/3. If we further define g2(x1, x2) = w1g1 + w2f3, the back-propagation yields g2 = 3g1 − 2f3 = x1 + x2 − 2x1x2 with the output (0, 1, 1, 0). The final g2 computes Y with loss 0. This example shows the power of composite functions.
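A quick numerical check of this example (a sketch using numpy's least squares; the two-stage fit mirrors g1 and g2 above):

import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0.0, 1.0, 1.0, 0.0])
f1, f2, f3 = X[:, 0], X[:, 1], X[:, 0] * X[:, 1]

# Stage 1: g1 = a1*f1 + a2*f2  ->  a1 = a2 = 1/3, loss 4/3.
A1 = np.stack([f1, f2], axis=1)
a, *_ = np.linalg.lstsq(A1, y, rcond=None)
g1 = A1 @ a
print(a, np.sum((g1 - y) ** 2))          # [0.333 0.333], 1.333...

# Stage 2: g2 = w1*g1 + w2*f3  ->  w = (3, -2), loss 0.
A2 = np.stack([g1, f3], axis=1)
w, *_ = np.linalg.lstsq(A2, y, rcond=None)
print(w, np.sum((A2 @ w - y) ** 2))      # [ 3. -2.], ~0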
Composite Neural Network. In transfer learning, how to overcome negative transfer (a phenomenon where a pre-trained model has a negative impact on the target task) is an important issue (Seah et al., 2013). In ensemble learning, it is well known that when adding more pre-trained models, it is not always true that the accuracy of the ensemble improves (Zhou et al., 2002). Furthermore, Opitz & Maclin (1999) pointed out that an ensemble built by boosting having less accuracy than a single pre-trained model often happens for neural networks. In the unsupervised learning context, some experimental research concludes that although layer-wise pre-training can be significantly helpful, on average it is slightly harmful (Goodfellow et al., 2016). These empirical evidences suggest that, in spite of the success of ensemble learning and transfer learning, the conditions under which a composite neural network can perform better are unclear, especially in the deep neural network training process. The topology of a composite neural network can be represented as a rooted directed graph. For instance, an ensemble can be represented as a 1-level graph, while a composite neural network with several pre-trained models, each designed to solve a certain problem, corresponds to a more complicated graph. It is desired to discover a mathematical theory, in addition to employing domain knowledge, to construct a composite neural network with guaranteed overall performance. In this work, we investigate the mathematical theory to ensure that the overall performance of a composite neural network is better than that of any pre-trained component, regardless of the way of composition, to allow deep learning application developers great freedom in constructing a high-performance composite neural network.
Contributions. In this work, we proved that a composite neural network with high probability performs better than any of its pre-trained components under certain assumptions. In addition, if an extra pre-trained component is added into a composite network, with high probability the overall performance will be improved. In the empirical evaluations, distinctively different applications support the above findings.
2 PRELIMINARIES
In this Section, we introduce some notations and definitions about composite neural networks. Parameters N, K, d, dj, dj1, and dj2 are positive integers. Denote {1, ..., K} as [K] and [K] ∪ {0} as [K]+. Let σ : R → R be a differentiable activation function, such as the Logistic function σ(z) = 1/(1 + e^{−z}) and the hyperbolic tangent σ(z) = (e^z − e^{−z})/(e^z + e^{−z}). For simplicity of notation, we sometimes abuse σ as a vector-valued function. A typical one-hidden-layer neural network can be formally presented as w_{1,1} σ(Σ_{i=1}^{d} w_{0,i} x_i + w_{0,0}) + w_{1,0}. We abbreviate it as f_{σ,W}(x), where W is the matrix defined by w_{1,1}, w_{1,0}, ..., w_{0,1}, w_{0,0}. Recursively applying this representation can obtain a neural network with more hidden layers. If there is no ambiguity about the activation function, then it can be skipped as f_W(x). Now assume a set of neural networks {f_{Wj}(xj)}_{j=1}^{K} is given, where Wj is the real-number matrix defining the neural network f_{Wj} : R^{dj1×dj2} → R^{dj}, and xj ∈ R^{dj1×dj2} is the input matrix of the jth neural network. For different f_{Wj}, the corresponding dj, dj1 and dj2 can be different. For each j ∈ [K], let Dj = {(x_j^{(i)}, y_j^{(i)}) ∈ R^{(dj1×dj2)×dj}}_{i=1}^{N} be a set of labeled data (for the jth neural network). For each i ∈ [N], let x^{(i)} = (x_1^{(i)}, . . . , x_K^{(i)}), y^{(i)} = (y_1^{(i)}, . . . , y_K^{(i)}), and D = {(x^{(i)}, y^{(i)})}_{i=1}^{N}.
For a pre-trained model (component), we mean Wj is fixed after its training process, and then we denote f_{Wj} as fj for simplicity. On the other hand, a component f_{Wj} being non-instantiated means Wj is still free. A deep feedforward neural network is a hierarchical acyclic graph, i.e. a directed tree. In this viewpoint, a feedforward neural network can be presented as a series of function compositions. For given {f_{Wj}(xj)}_{j=1}^{K}, we assume θj ∈ R^{dj}, j ∈ [K], which makes the product θj f_{Wj}(xj) well-defined. Denote f0 as the constant function 1; then the linear combination with a bias is defined as Θ(f1, ..., fK) = Σ_{j∈[K]+} θj fj(xj). Hence, an L-layer neural network can be denoted as Θ(L) ◦ σ ◦ · · · ◦ Θ(0)(x). A composite neural network defined by components f_{Wj}(xj) can be designed as a directed tree. For instance, a composite neural network σ2(θ_{1,0} + θ_{1,1} f4(x4) + θ_{1,2} σ1(θ_{0,0} + θ_{0,1} f1(x1) + θ_{0,2} f_{W2}(x2) + θ_{0,3} f3(x3))) can be denoted as σ2 ◦ Θ1(f4, σ1 ◦ Θ0(f1, f_{W2}, f3)), where f1 and f3 are pre-trained and f_{W2} is non-instantiated. Note that in this work Dj is the default training data of component fj of the composite neural network, but Dj can be different from the training data deciding the frozen weights in the pre-trained fj.
Let 〈~a,~b〉 be the standard inner product of ~a and ~b, and || · || be the corresponding norm. For a composite neural network, the training algorithm is the gradient descent back-propagation algorithm and the loss function is the L2-norm of the difference vector. In particular, for a composite neural network g~θ the total loss on the data set D is
L~θ(x; g~θ) = 〈~g~θ(x) − ~y, ~g~θ(x) − ~y〉 = ||~g~θ(x) − ~y||²    (1)
This in fact is Σ_{i=1}^{N} (g(x^{(i)}) − y^{(i)})². By the definition of g~θ(·), this total loss in fact depends on the given data x, the components defined by {Θj}_{j=1}^{K}, the output activation σ, and the weight vector ~w. Similarly, let L(fj(xj)) be the loss function of a single component fj. Our goal is to find a feasible ~θ s.t. L~θ(x; g) < min_{j∈[K]} L(fj(xj)).
3 PROBLEM SETTINGS AND RESULTS OVERVIEW
The problems considered in this work are as follows:
P1. What are the conditions that the pre-trained components must satisfy so that they can strictly improve the accuracy of the whole composition?
P2. Will more pre-trained components improve the accuracy of the whole composition?
Let ~fj be the output vector of the jth pre-trained component, and B_K be the set of unit vectors in R^K.
A1. Linearly Independent Components (LIC) Assumption: ∀t ∈ [K], there is no {βj} ⊂ R s.t. ~ft = Σ_{j∈[K]\{t}} βj ~fj.
A2. No Perfect Component (NPC) Assumption: min_{j∈[K]} { Σ_{i∈[N]} (fj(x_j^{(i)}) − y^{(i)})² } > ε∗, where ε∗ > 0 is a constant.
Our results are as follows:
Theorem 1. Assume the set of components {fj(xj)}_{j=1}^{K} satisfies LIC. Let g be Θ(f1, ..., fK). With probability at least 1 − K/(πeN), there is a vector ~θ ∈ R^K \ B_K s.t. L~θ(x; g) < min_{j∈[K]} {L(fj(xj))}.
Theorem 2. Assume the set of pre-trained components {fj(xj)}_{j=1}^{K} satisfies both NPC and LIC, and let g be σ ◦ Θ(f1, ..., fK). Then with probability at least 1 − K/(πeN) there exists ~w s.t. L~w(x; g) < min_{j∈[K]} L(fj(xj)).
Theorem 3. Assume the set of components {fj(xj)}_{j=1}^{K} satisfies LIC. Let g_{K−1} = Θ(f1, ..., f_{K−1}) and g_K = Θ(f1, ..., fK). With probability at least 1 − K/(πeN), there is a vector ~w ∈ R^K \ B_{R^K} s.t. L~w(x; g_K) < L~w(x; g_{K−1}).
Theorems 1 and 2 together answer Problem P1, and Theorem 3 answers Problem P2.
4 RELATED WORK
Our framework is related to, but not the same as, models such as transfer learning (Erhan et al., 2010; Kandaswamy et al., 2014; Yao & Doretto, 2010) and ensemble learning (Zhou, 2012).
Transfer Learning. Typically, transfer learning deals with two data sets with different distributions, the source and target domains. A neural network, such as an auto-encoder, is trained with source domain data and the corresponding task, and then part of its weights are taken out and plugged into another neural network, which will be trained with the target domain data and task. The transplanted weights can be kept fixed during the subsequent steps or left trainable for fine-tuning purposes (Erhan et al., 2010). For multi-source transfer, boosting-based algorithms are studied in (Yao & Doretto, 2010). Kandaswamy et al. (2014) proposed a method of cascading several pre-trained layers to improve the performance. Transfer learning can be considered a special case of the composite neural network, in that the transferred knowledge can be viewed as a pre-trained component.
Ensemble (Bagging and Boosting). Since Bagging needs to group data by sampling and Boosting needs to tune the probability of data (Zhou et al., 2002), these frameworks are different from the composite neural network framework. However, there are fine research results revealing many properties for accuracy improvement (Džeroski & Ženko, 2004; Gashler et al., 2008; Zhou et al., 2002). For example, it is known that in the ensemble framework, low diversity between members can be harmful to the accuracy of their ensemble (Džeroski & Ženko, 2004; Gashler et al., 2008). In this work, we consider neural network training, but not data processing.
Ensemble (Stacking). Among the ensemble methods, stacking is closely related to our framework. The idea of stacked generalization (Wolpert, 1992), in Wolpert's terminology, is to combine two levels of generalizers. The original data are taken by several level 0 generalizers, then their outputs are concatenated as an input vector to the level 1 generalizer. According to the empirical study of Ting & Witten (1999), the probability of the outputs of level 0, instead of their values, is critical to accuracy. Besides, multi-linear regression is the best level 1 generalizer, and a non-negative weights restriction is necessary for regression problems while not for classification problems. Breiman (1996) restricts the combination weights to be non-negative to prevent poor generalization error and concludes that restricting the sum of weights to equal 1 is not necessary. Hashem (1997) showed that linear dependence of components could be, but is not always, harmful to ensemble accuracy, while our work allows a mix of pre-defined and undefined components as well as negative weights to provide flexibility in solution design.
Recently Proposed Frameworks. You et al. (2017) proposed a student-teacher framework where the outputs of pre-trained teachers are averaged as the knowledge for the student network. A test-time combination of multiple trained predictors was proposed by Kim et al. (2017), where the combination weights are decided during test time. In the above frameworks, the usage of pre-trained neural networks generally improves the accuracy of their combination.
5 THEORETICAL ANALYSIS
This section provides analyses of the loss function of a composite neural network with the introduction of pre-trained components. For the complete proofs, please refer to the Supplementary Material. Observe that for given pre-trained components {fj}_{j=1}^{K}, a composite neural network can be defined recursively by postorder subtree search. For instance, σ2 ◦ Θ1(f4, σ1 ◦ Θ0(f1, f2, f3)) can be presented as σ2 ◦ Θ1(f4, g1), with g1 = σ1 ◦ Θ0(f1, f2, f3). Without loss of generality, we assume dj = d = 1 for all j ∈ [K] in the following proofs. We denote by ~fj the vector (fj(x^{(1)}), · · · , fj(x^{(N)})), the sequence of fj during the training phase. Similarly, ~y := (y^{(1)}, · · · , y^{(N)}). Let ~ej be a unit vector in the standard basis of R^K for each j ∈ [K], i.e. ~e1 = (1, 0, · · · , 0) and ~e2 = (0, 1, 0, · · · , 0), etc. Let B_K be the set containing all these standard unit-length basis vectors of R^K.
Theorem 1. Assume the set of components {fj(xj)}_{j=1}^{K} satisfies LIC. Let g be Θ(f1, ..., fK). With probability at least 1 − K/(πeN), there is a vector ~θ ∈ R^K \ B_K s.t. L~θ(x; g) < min_{j∈[K]} {L(fj(xj))}.
Proof. (Proof Sketch) The whole proof is split into Lemmas 5.1, 5.2 and 5.3. Note that g(·) is the linear combination of ~θ and {fj(xj)}_{j=1}^{K}. It is well known (Friedman et al., 2001) that searching for the minimizer ~θ of L~θ, i.e. solving a least-squares-error problem, is equivalent to finding the inverse of a matrix defined by {fj(xj)}_{j=1}^{K}. Since {fj(xj)}_{j=1}^{K} satisfy LIC, the inverse matrix can be written down concretely, which proves the existence. Furthermore, if this solved minimizer ~θ∗ is not ~es for some s ∈ [K], then g~θ∗ has lower loss than fs. Lemma 5.3 argues that the probability of ~θ∗ = ~es is at most the probability of the event 〈~f − ~y, ~f〉 = 0, where ~f is taken uniformly from the set of vectors with the same length as ~f − ~y.
The statements of the Lemmas needed by the previous Theorem are as follows.
Lemma 5.1. There exists ~θ ∈ R^{K+1} s.t. L~θ(x; Θ(0)(f1, ..., fK)) ≤ min_{j∈[K]+} {L(fj(xj))}.
This Lemma deals with the existence of a solution to the inequality. But our goal is to find a solution such that the loss is strictly less than that of any pre-trained component.
Lemma 5.2. Denote by I_{L~θ} the indicator variable for the event that at least one of the ~ej ∈ B_{R^K} is the minimizer of L~θ. Then Pr{I_{L~θ} = 1} < K/(πeN), i.e. Pr{I_{L~θ} = 0} ≥ 1 − K/(πeN).
Lemma 5.3. Define F(~y, L(f)) = { ~f ∈ R^N : ||~f − ~y||² = L(f) } for given ~y and ~f. Then we have Pr_{~f∈F(~y,L(f))} { 〈~f − ~y, ~f〉 = 0 } < 1/(πeN).
The above Lemmas prove Theorem 1. The following corollary is the closed form of the optimal weights.
Corollary 5.1. The closed form of the minimizer is: [θt]_{t∈[K]+} = [〈~fs, ~ft〉]^{−1}_{s,t∈[K]+} × [〈~fs, ~y〉]_{s∈[K]+}.
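In code, this closed form is a single linear solve against the Gram matrix (a sketch with numpy; the toy components below are illustrative):

import numpy as np

def optimal_theta(F, y):
    # F: (N, K+1) matrix whose columns are f_0 = 1, f_1, ..., f_K evaluated
    # on the N training points; solves [<f_s, f_t>] theta = [<f_s, y>].
    G = F.T @ F              # Gram matrix of the component output vectors
    b = F.T @ y
    return np.linalg.solve(G, b)

# Toy check with K = 2 linearly independent components on N = 4 points.
N = 4
F = np.column_stack([np.ones(N), [0, 0, 1, 1], [0, 1, 0, 1]])
y = np.array([0.0, 1.0, 1.0, 0.0])
print(optimal_theta(F, y))   # least-squares weights theta_0, theta_1, theta_2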
In the following, we deal with σ ◦ Θ(f1, ..., fK) and Θ1 ◦ σ ◦ Θ(f1, ..., fK).
Theorem 2. Assume the set of pre-trained components {fj(xj)}_{j=1}^{K} satisfies both NPC and LIC, and let g be σ ◦ Θ(f1, ..., fK). Then with probability at least 1 − K/(πeN) there exists ~θ s.t. L~θ(x; g) < min_{j∈[K]} L(fj(xj)).
Proof. (Proof Sketch) The whole proof is split into Lemmas 5.4, 5.5, and 5.6. The idea is to find an interval in the domain of σ such that the output can approximate a linear function as well as possible. Then in this interval, the activation σ can approximate any given pre-trained component. However, under the assumptions LIC and NPC the gradient of the loss L is not zero with high probability. Since the training is based on the gradient descent algorithm, this non-zero gradient leads the direction of the updating process to obtain a lower loss.
Lemma 5.4. Let N, K and j ∈ [K] be fixed. For small enough ε, there exist ~θ ∈ Z_{F,1,ε} and 0 < α ∈ R s.t. |σ ◦ Θ(0)(f1, ..., fK) − fj(x)/α| < ε.
Lemma 5.5. Assume NPC holds with ε∗ > 0. If ~θ_{ε∗/3} satisfies |σ ◦ Θ(0)(f1, ..., fK)(x) − fj(x)| < ε∗/(3N) for any j ∈ [K]+, then ∇~θ L(~θ_{ε∗/3}) ≠ ~0.
Lemma 5.6. If ~θ_{ε∗/3} makes ∇~θ L(~θ_{ε∗/3}) ≠ ~0, then there exists ~θ s.t. L~θ(x; g) < min_{j∈[K]+} L(fj(xj)).
Now we consider the difference of the losses of σ ◦ Θ′(f1, ..., fK) and σ ◦ Θ(f1, ..., f_{K−1}).
Theorem 3. Assume the set of components {fj(xj)}_{j=1}^{K} satisfies LIC. Let g_{K−1} = Θ(f1, ..., f_{K−1}) and g_K = Θ(f1, ..., fK). With probability at least 1 − K/(πeN), there is a vector ~θ ∈ R^K \ B_{R^K} s.t. L~θ(x; g_K) < L~θ(x; g_{K−1}).
Proof. (Proof Sketch) The idea is to directly solve the inequality for the case of K = 2, and then generalize the result to larger K.
The following provides a generalization error bound for a composite neural network.
Theorem 4. Assume pre-trained components {fj}_{j=1}^{K} satisfy LIC and NPC. Let {GE(fj)}_{j=1}^{K} be the corresponding generalization errors of {fj}_{j=1}^{K}, and Θ(L) ◦ σ(L) ◦ · · · ◦ σ(1) ◦ Θ(0)(f1, ..., fK) be the composite neural network. Denote the generalization error, E{L(Θ(L) ◦ σ(L) ◦ · · · ◦ σ(1) ◦ Θ(0)(f1, ..., fK))}, of the composite neural network as E{L_{Θ,f1,...,fK}}. Then with high probability, there exists a setting of {Θ∗(L), ..., Θ∗(0)} such that E{L_{Θ,f1,...,fK}} ≤ Θ∗(L)(GE(f1), ..., GE(fK)).
Proof. (Proof Sketch) We apply an idea similar to Kawaguchi (2016): the expectation with non-linear activations is the same as the expectation with linear activations. The previous theorems provide that with high probability there exists a solution of Θ(i), ∀i ∈ [L]+, s.t. each Θ(i+1) ◦ σ ◦ Θ(i) approximates a degree-one polynomial A_{Θ(i+1)σΘ(i),1} as well as possible. If the weights obey the normal distribution, then E{L_{Θ,f1,...,fK}} ≤ Θ∗(L)(GE(f1), ..., GE(fK)).
6 EMPIRICAL STUDIES
This section numerically verifies the performance of the composite network on two distinctively different applications, image classification and PM2.5 prediction. For image classification, we examined two pre-trained components, ResNet50 (He et al., 2016) from Keras and the SIFT algorithm (Lowe, 1999) from OpenCV, running on the benchmark of the ImageNet competition (Russakovsky et al., 2015). For PM2.5 prediction, we implemented several models running on the open data of the local weather bureau and environmental protection agency to predict the PM2.5 level in the future hours.
6.1 IMAGENET CLASSIFICATION
We chose ResNet50 as the pre-trained baseline model and the SIFT model as an auxiliary model to form a composite neural network to validate the proposed theory. The experiments are conducted on the 1000-class single-label classification task of the ImageNet dataset, which has been a well-received benchmark for image classification applications. A reason to choose the SIFT (Scale-Invariant Feature Transform) algorithm is that its function is very different from ResNet, and it is interesting to see if the performance of ResNet50 can be improved as predicted by our theory.
We trained the SIFT model using the images of ImageNet, and directed the output to a CNN to extract useful features before merging with the ResNet50 output. In the composite model, the softmax functions of both ResNet50 and the SIFT model are removed so that the length-1000 outputs of both models are merged before the final softmax stage. During the training process of the composite network, the weights of ResNet50 and the SIFT model are fixed, and only the connecting weights and bias are trained.
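A minimal sketch of this merge-and-train stage (PyTorch assumed; the random tensors stand in for the frozen ResNet50 and SIFT branch outputs, and the scalar mixing weights are one simple instantiation of the trainable connection, not the exact parameterization used in our experiments):

import torch
import torch.nn as nn

class CompositeHead(nn.Module):
    def __init__(self, n_classes=1000):
        super().__init__()
        # Trainable combination weights and bias; the components stay frozen.
        self.theta = nn.Parameter(torch.tensor([0.5, 0.5]))
        self.bias = nn.Parameter(torch.zeros(n_classes))

    def forward(self, resnet_logits, sift_logits):
        z = self.theta[0] * resnet_logits + self.theta[1] * sift_logits + self.bias
        return torch.log_softmax(z, dim=-1)   # final softmax stage

head = CompositeHead()
r, s = torch.randn(2, 1000), torch.randn(2, 1000)  # frozen components' outputs
print(head(r, s).shape)                            # torch.Size([2, 1000])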
The ResNet50 was from He et al.; its Top-1 accuracy in our context was lower than reported in (He et al., 2016) since we did not do any fine-tuning or data preprocessing. Figure 1 shows that the composite network has higher accuracy than ResNet50 during almost the entire test run. Table 1 shows the same result: the composite network performs better, too. The experiment results support the claims of this work that a composite network performs better than any of its components, and more components work better than fewer components.
6.2 PM2.5 PREDICTION
The PM2.5 prediction problem is to forecast the particle density of fine atmospheric matter with a diameter of at most 2.5 µm (PM2.5) in the future hours, mainly the next 12, 24, 48, and 72 hours. The datasets used are open data provided by two sources, the Environmental Protection Administration (EPA)1 and the Central Weather Bureau (CWB)2. The EPA dataset contains 21 observed features, including the speed and direction of wind, temperature, relative humidity, PM2.5 and PM10 density, etc., from 18 monitoring stations, with one record per hour. The CWB has seventy monitoring stations, one record per 6 hours, containing 26 features, such as temperature, dew point, precipitation, wind speed and direction, etc. We partitioned the observed area into a grid of 1140 km² with 1 km × 1 km blocks and aligned both datasets to a one-hour period. We call the two datasets the air quality and weather condition datasets.
We selected ConvLSTM (Convolutional LSTM) and FNN (fully connected neural network) as the components used in this experiment. The reason to select ConvLSTM is that the dispersion of PM2.5 is both spatially and temporally dependent and ConvLSTM is considered capable of capturing this dependency; FNN is a fundamental neural network that acts as the auxiliary component in the experiment.
The prediction models were trained with the data of 2014 and 2015 years, then the 2016 data was used for testing. We considered two function compositions, the linear combination Θ and the Logistic function σ1 (as Theorem 2), to combine the two components to examine the applicability of the proposed theorems.
We trained and tested both ConvLSTM and FNN using the air quality dataset (Dataset A) and the weather condition dataset (Dataset B) separately as the baselines (denoted as f1, f2, f3 and f4), and their training and testing errors in MSE are listed in the first part of Table 2. Then we composited FNNs using Dataset A and Dataset B; each FNN can be pre-trained (denoted as ×) or non-instantiated (denoted as ◦). In addition, we used both linear and Sigmoid activation functions. As a result, we had eight combinations, as listed in part two. We treated ConvLSTM in the same way, and the outcomes are in part 3. Finally, we composited using one FNN and one ConvLSTM, each the best in its category, and the resulting composite network was a tree of depth 2. For instance, the ConvLSTM candidate of part 4 for 12-hour prediction was the 4th row (i.e., Θ(f◦3, f◦4)) of part 3. Their training and testing errors in MSE are listed in part 4.
The empirical study results show that mostly the proposed theorems are followed. While the composite networks with all pre-trained components may not perform better than others in their category (which is not a surprise), what we expect to see is that after adding a new component, the composite network improves over the previous one. For example, σ ◦ Θ(f×3, f×4) has strictly better accuracy than both f3 and f4 for all future predictions. Another example is that for NEXT 48 hr, σ ◦ Θ(C×, F×) also has strictly better accuracy than both C = σ ◦ Θ(f◦3, f◦4) and F = σ ◦ Θ(f◦1, f◦2).
1https://opendata.epa.gov.tw/Home 2http://opendata.cwb.gov.tw/index
7 CONCLUSION
In this work, we investigated the problem of composite neural networks with pre-trained components and showed that the overall performance of a composite neural network is better than that of any of its components, and more components perform better than fewer components. In addition, the developed theory considers all differentiable activation functions.
While the proposed theory ensures the overall performance improvement, it is still not clear how to decompose a complicated problem into components and how to construct them into a composite neural network in order to have an acceptable performance. Another problem worth some thought is when the performance improvement will diminish (by power law or exponential decay) even when adding more components. However, in real-world applications, the amount of data, the data distribution and the data quality will highly affect the performance.
8 SUPPLEMENTARY MATERIAL
To be self-contained, we list some common Taylor expansions in the following.
Logistic: S(z) := 1/(1 + e^{−z}) = 1/2 + (1/4)z − (1/48)z³ + (1/480)z⁵ − (17/80640)z⁷ + O(z⁹), ∀z ∈ R,
Hyperbolic tangent: tanh(z) = (e^z − e^{−z})/(e^z + e^{−z}) = z − (1/3)z³ + (2/15)z⁵ + O(z⁷), ∀|z| ≤ π/2,
Arctangent: arctan(z) = z − (1/3)z³ + (1/5)z⁵ + O(z⁷), ∀|z| ≤ 1.
Definition 1. Given an activation σ(z) and its Taylor expansion Tσ(z), let Aσ,D(z) be the truncated the monomials of degree at most D from Tσ(z). We define Aσ,D(z) as the D-degree Taylor approximation polynomial, and Rσ,D+1(z) as the remainder part such that Tσ(z) = Aσ,D(z) +Rσ,D+1(z).
For instance, if we set D = 3 then the Taylor expansion of Logistic function S(z) is separated as the approximation partAS(z),3(z) = 12 + 1 4z− 1 48z 3 and the remainder partRS(z),4(z) = 1480z 5 +O(z7).
Proposition 8.1. (Error Bound of The Remainder) Let S(z) be the Logistic function. Consider the approximation AS(z),≤D(z) and the remainder RS(z),D+1(z) defined as above. For given ∈ (0, 11000 ) and D ∈ N, if |z| <
1/(D+2), then |S(z)−AS(z),D(z)| = |RS(z),D+1(z)| < .
Proof. Note that if < 1 then for all D ∈ N, 1/(D+1) < 1. If |z| < 1/3 and D = 1, then
|RS(z),D+1(z)| ≤ ∣∣∣∣− 148z3 + 1480z5 − 1780640z7 +O(z9) ∣∣∣∣ < 124 < . The general case (D ≥ 2) can be proven by the same argument as above.
This Proposition means that for a suitable range of z, the Logistic function can be seen as a linear function with the error at most .
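A quick numerical check of Proposition 8.1 for $D = 1$ (the case used later) is below; it is a sanity check only, with the grid and $\epsilon$ chosen arbitrarily.

```python
import numpy as np

# Numerically check Proposition 8.1 for D = 1: on |z| < eps**(1/3), the
# logistic deviates from its degree-1 Taylor polynomial 1/2 + z/4 by < eps.
eps = 1e-3
z = np.linspace(-eps ** (1 / 3), eps ** (1 / 3), 10001)
S = 1.0 / (1.0 + np.exp(-z))
err = np.abs(S - (0.5 + z / 4.0))
print(err.max() < eps)   # True: the logistic is eps-close to linear here
```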
Definition 2. For the Logistic activation $\sigma(z) = S(z)$, $\epsilon > 0$ and a given polynomial degree $D$, we define $Z_{D,\epsilon} = \{z \in \mathbb{R} : |\sigma(z) - A_{\sigma,D}(z)| < \epsilon\}$. Furthermore, for given components $\{f_j : j \in [K]\} = F$, we consider the variable $z = \Theta(f_1, \dots, f_K)$ and define
$Z_{F,D,\epsilon} = \left\{ \vec\theta \in \mathbb{R}^{K+1} : z = \Theta(f_1, \dots, f_K),\ |\sigma(z) - A_{\sigma,D}(z)| < \epsilon \right\}.$
Observe that if the parameters $\epsilon$, $F$, and $|F| = K$ are fixed, then $Z_{F,D,\epsilon} \subset Z_{F,D+1,\epsilon} \subset \mathbb{R}^{K+1}$.
8.1 FUNCTION COMPOSITION BY LINEAR COMBINATION
Recall that for a set of pre-trained components $\{f_j(x_j) : j \in [K]\}$, $\Theta_{(0)}(f_1, \dots, f_K) = \sum_{j\in[K]^+} \theta_{0,j} f_j$, where $f_0 = 1$. For simplicity, we consider $\Theta_{(1)}(z) = \alpha z$. This means
$\Theta_{(1)} \circ \sigma \circ \Theta_{(0)}(f_1, \dots, f_K) = \theta_{1,1}\,\sigma\Big(\sum_{j\in[K]^+} \theta_{0,j} f_j\Big) + \theta_{1,0}.$
Theorem 1 is a consequence of the following lemmas:
Proof. (of Lemma 5.1) For simplicity of notation, let $g(x) = \Theta_{(0)}(f_1, \dots, f_K)$, hence $g(x) = \sum_{j\in[K]^+} \theta_j f_j(x_j)$. Also recall that $L_{\vec\theta}(x; g) = \sum_{i=1}^N \big(g(x^{(i)}) - y^{(i)}\big)^2$. To prove the existence of the minimizer, it is enough to solve the critical-point equations, since the objective function is quadratic. That is, to solve the set of equations
$\nabla_{\vec\theta} L(x; g) = \left( \frac{\partial L}{\partial \theta_0}, \dots, \frac{\partial L}{\partial \theta_K} \right)^T = (0, \dots, 0)^T,$
where for each $s, t \in [K]^+$,
$\frac{\partial L}{\partial \theta_s} = 2\sum_{i=1}^N \big(g(x^{(i)}) - y^{(i)}\big) f_s(x^{(i)}) = 2\sum_{i=1}^N \Big( \sum_{j\in[K]^+} \theta_j f_j(x_j^{(i)}) - y^{(i)} \Big) f_s(x^{(i)}) = 2\Big( \sum_{j\in[K]^+} \theta_j \langle \vec f_s, \vec f_j \rangle - \langle \vec f_s, \vec y \rangle \Big).$
Hence, solving $\nabla_{\vec\theta} L(x; g) = \vec 0$ is equivalent to solving $\big[\langle \vec f_s, \vec f_t \rangle\big]_{s,t\in[K]^+} \times [\theta_t]_{t\in[K]^+} = \big[\langle \vec f_s, \vec y \rangle\big]_{s\in[K]^+}$, where $\big[\langle \vec f_s, \vec f_t \rangle\big]_{s,t\in[K]^+}$ is a $(K+1) \times (K+1)$ matrix, and $[\theta_t]_{t\in[K]^+}$ and $\big[\langle \vec f_s, \vec y \rangle\big]_{s\in[K]^+}$ are both $(K+1)$-dimensional vectors.

Note that linear independence of $\{\vec f_j\}_{j\in[K]^+}$ makes $\big[\langle \vec f_s, \vec f_t \rangle\big]_{s,t\in[K]^+}$ a positive-definite Gram matrix (Horn & Johnson, 2012), which means the inverse $\big[\langle \vec f_s, \vec f_t \rangle\big]^{-1}_{s,t\in[K]^+}$ exists. Then $\vec\theta$ is solved:

$[\theta_t]_{t\in[K]^+} = \big[\langle \vec f_s, \vec f_t \rangle\big]^{-1}_{s,t\in[K]^+} \times \big[\langle \vec f_s, \vec y \rangle\big]_{s\in[K]^+}$ (2)

The above shows the existence of the critical points. On the other hand, since $L_{\vec\theta}(x; g)$ is a summation of square terms, i.e., a paraboloid, the critical points can only be minima.
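The closed form (2) is easy to exercise numerically. The following is a small numpy sketch (with synthetic, illustrative components) that builds the Gram system and confirms that the solved $\vec\theta$ achieves a loss no worse than any single component, as Lemma 5.1 asserts.

```python
import numpy as np

# Sketch of Eq. (2) / Corollary 5.1: the optimal linear-combination weights
# solve the Gram system built from component outputs (f_0 = 1 is the bias).
rng = np.random.default_rng(1)
N, K = 500, 3
F = np.column_stack([np.ones(N)] + [rng.normal(size=N) for _ in range(K)])  # f_0..f_K
y = 2.0 + F[:, 1] - 0.5 * F[:, 2] + rng.normal(0.0, 0.1, size=N)

G = F.T @ F                        # Gram matrix [<f_s, f_t>]
b = F.T @ y                        # right-hand side [<f_s, y>]
theta = np.linalg.solve(G, b)      # Eq. (2); LIC makes G positive definite

loss = lambda p: np.sum((p - y) ** 2)
print(theta)
print(loss(F @ theta) <= min(loss(F[:, j]) for j in range(1, K + 1)))  # True
```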
The meaning of the gradient on a function surface is the direction that increases the function value most efficiently. Hence, if the gradient is not the zero vector, then the corresponding point cannot be the minimizer of the function surface. Recall that for any $s \in [K]$,
$\left[\frac{\partial L}{\partial \theta_t}\right]_{t\in[K]^+} \Big|_{\vec\theta=\vec e_s} = 2\left[\langle \vec f_s - \vec y, \vec f_t \rangle\right]_{t\in[K]^+}.$

Before the proof of Lemma 5.2, we need an upper bound on the probability of certain events. Note that $\vec y$ is defined by the given training data, and for each $j \in [K]^+$ the length of $\vec f_j - \vec y$, i.e., $\|\vec f_j - \vec y\|$, is also given. The question is: for fixed $\vec y$, what is the probability that the selected $\vec f$ is perpendicular to $\vec f - \vec y$? A folklore approach is to assume that $\vec f = (f(x^{(1)}), \dots, f(x^{(N)}))$ obeys the normal distribution, setting the mean of $f(x^{(i)})$ to $y^{(i)}$ for each $i \in [N]$. In the following we propose another simple probability argument to obtain a loose upper bound.
Proof. (of Lemma 5.3) Observe that $\langle \vec f - \vec y, \vec f \rangle = 0 \Leftrightarrow (\vec f - \vec y) \perp \vec f$, which implies that the angle between them, $\angle_{(\vec f - \vec y), \vec f}$, is in the interval $\left[\frac{\pi-\epsilon}{2}, \frac{\pi+\epsilon}{2}\right]$ for small $\epsilon \in \mathbb{R}^+$, as shown in the left part of Figure 2. The red, orange, and blue vectors show three possible pairs of $\vec f$ and $\vec f - \vec y$. The length of $\vec f - \vec y$ is fixed since $\vec f$ and $\vec y$ are given, but the angle between $\vec f - \vec y$ and $\vec y$ decides whether $(\vec f - \vec y) \perp \vec f$. The gray circle collects all possible end-points of the vector $\vec f - \vec y$ emitted from the end-point of $\vec y$. Although on the whole circle there are exactly two specific angles³ satisfying $(\vec f - \vec y) \perp \vec f$, we give a loose small interval with respect to $\pi$. In particular, we set $0 < \epsilon < e^{-N}$. Then
$\Pr_{\vec f \in \mathcal{F}(\vec y, L_f)} \left\{ \angle_{(\vec f - \vec y), \vec f} = \frac{\pi}{2} \right\} \le \Pr_{\vec f \in \mathcal{F}(\vec y, L_f)} \left\{ \frac{\pi - \epsilon}{2} \le \angle_{(\vec f - \vec y), \vec f} \le \frac{\pi + \epsilon}{2} \right\} = \frac{\epsilon}{\pi} < \frac{1}{\pi e^N}.$

Now we are ready to prove Lemma 5.2.

³ That is, two points on the circumference, which is in fact of measure zero over all possible angles $[0, 2\pi)$.
Proof. (of Lemma 5.2) For convenience, we denote by $A$ the event that at least one of the $\vec e_j \in B_K$ is the minimizer of $L(\vec\theta)$. Then
$I_L(\vec\theta) = 1 \Leftrightarrow$ the event $A$ is true
$\Rightarrow \left[\frac{\partial L}{\partial \theta_t}\right]_{t\in[K]^+} \Big|_{\vec\theta=\vec e_s} = 2\left[\langle \vec f_s - \vec y, \vec f_t \rangle\right]_{t\in[K]^+} = \vec 0$ for some $s \in [K]^+$
$\Rightarrow \big(\langle \vec f_1 - \vec y, \vec f_1 \rangle = 0 \wedge \langle \vec f_1 - \vec y, \vec f_2 \rangle = 0 \wedge \dots \wedge \langle \vec f_1 - \vec y, \vec f_K \rangle = 0\big)$ or $\dots$ or $\big(\langle \vec f_K - \vec y, \vec f_1 \rangle = 0 \wedge \dots \wedge \langle \vec f_K - \vec y, \vec f_K \rangle = 0\big)$.
Hence, for given $\vec y$ and $L(f_j) = \|\vec f_j - \vec y\|^2$, $\forall j \in [K]^+$, we have
$\Pr\{I_L(\vec\theta) = 1\} \le \sum_{j\in[K]^+} \Pr\{\langle \vec f_j - \vec y, \vec f_1 \rangle = 0 \wedge \dots \wedge \langle \vec f_j - \vec y, \vec f_K \rangle = 0\} \le K \cdot \Pr\{\langle \vec f_1 - \vec y, \vec f_1 \rangle = 0\} < \frac{K}{\pi e^N},$
where the second inequality is based on the symmetry between $\vec f_s$ and $\vec f_t$ for any $s, t \in [K]^+$, and the last inequality is by Lemma 5.3.
Proof. (of Theorem 3) We start from a simple case.
Claim: $\exists \beta \in \mathbb{R}$ s.t. $\sum_{i\in[N]} (f_1(x_i) - y_i)^2 - \sum_{i\in[N]} (f_1(x_i) + \beta f_2(x_i) - y_i)^2 > 0$.
Proof of the Claim:
$\sum_{i\in[N]} (f_1(x_i) - y_i)^2 - \sum_{i\in[N]} (f_1(x_i) + \beta f_2(x_i) - y_i)^2 = \sum_{i\in[N]} \left[ (f_1(x_i) - y_i)^2 - (f_1(x_i) + \beta f_2(x_i) - y_i)^2 \right] = -\Big(\sum_{i\in[N]} f_2(x_i)^2\Big)\beta^2 + 2\Big(\sum_{i\in[N]} \big(f_2(x_i)y_i - f_2(x_i)f_1(x_i)\big)\Big)\beta.$
Observe that the above is a quadratic in $\beta$ with a negative leading coefficient. Hence, to obtain the maximum of the difference, we can set
$\beta = \frac{\sum_{i\in[N]} \big(f_2(x_i)y_i - f_2(x_i)f_1(x_i)\big)}{\sum_{i\in[N]} f_2(x_i)^2} = \frac{\langle \vec y - \vec f_1, \vec f_2 \rangle}{\langle \vec f_2, \vec f_2 \rangle}.$
Note that if $\langle \vec y - \vec f_1, \vec f_2 \rangle = 0$, then the last pre-trained component need not be added, and we aim to bound the probability of this case. Observe that $\langle \vec y - \vec f_1, \vec f_2 \rangle = 0 \Leftrightarrow (\vec y - \vec f_1) \perp \vec f_2$. This condition differs from the previous Lemma: here we have to upper-bound the probability of $(\vec y - \vec f_1) \perp \vec f_2$ for given $\vec f_1$ and $\vec y$. As shown in the left part of Figure 2, the angle between $\vec f_2$ and $\vec y$ must lie in a specific interval, say $\left[\frac{\pi-\epsilon}{2}, \frac{\pi+\epsilon}{2}\right]$ for small $\epsilon \in \mathbb{R}^+$. To be concrete, we set $0 < \epsilon < e^{-N}$. Then
$\Pr_{\vec f \in \mathcal{F}(\vec y, 1)} \left\{ (\vec y - \vec f_1) \perp \vec f_2 \right\} \le \frac{\epsilon}{\pi} < \frac{1}{\pi e^N}.$
The general case can be reduced to the above claim by considering $g_{K-1}$ as $f_1$ and $\theta_K f_K$ as $\beta f_2$. Furthermore, since there are $K$ possible choices for the last pre-trained component, the probability is upper-bounded by $\frac{K}{\pi e^N}$.
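The optimal $\beta$ above is directly checkable in a few lines; the sketch below uses synthetic stand-ins for $g_{K-1}$ (playing the role of $f_1$) and the newly added component $f_2$.

```python
import numpy as np

# Sketch of the claim in Theorem 3's proof: adding a K-th component with the
# optimal coefficient beta strictly lowers the loss whenever <y - f1, f2> != 0.
rng = np.random.default_rng(2)
N = 400
y = rng.normal(size=N)
f1 = y + rng.normal(0.0, 0.5, size=N)     # existing composite g_{K-1} (stand-in)
f2 = rng.normal(size=N)                   # newly added component

beta = np.dot(y - f1, f2) / np.dot(f2, f2)   # maximizer from the proof
loss = lambda p: np.sum((p - y) ** 2)
print(loss(f1 + beta * f2) < loss(f1))       # True whenever <y - f1, f2> != 0
```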
8.2 FUNCTION COMPOSITION BY NON-LINEAR ACTIVATION
Proof. (of Lemma 5.4) Although the lemma is an existence statement, we give a constructive proof here. By setting $D = 1$ in Proposition 8.1, we know that for the Logistic $S(z)$ and $0 < \epsilon < 1/1000$, the degree-one Taylor approximation is $A_{S,1}(z) = \frac{1}{2} + \frac{1}{4}z$ with remainder $|R_{S,2}(z)| < \epsilon$. Define $M := 10 \cdot \max_{j\in[K]^+, i\in[N]} \{|f_j(x_i)|\}$. Hence, by setting $z = \frac{f_j(x_j)}{M}$, we have $\left| S\left(\frac{f_j(x_j)}{M}\right) - \frac{1}{2} - \frac{f_j(x_j)}{4M} \right| < \epsilon$. This means that for the given $j \in [K]$ we take $\theta_j = \frac{1}{M}$ and $\theta_{j'} = 0$ for all $j' \neq j$, with $\theta_0$ absorbing the constant term $\frac{1}{2}$. Furthermore, $\alpha = 4M$.
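The scaling step in this proof is easy to verify numerically. The sketch below (with an arbitrary illustrative component) checks that after dividing by $M$ the logistic is within the degree-one Taylor remainder of a linear map, and that the affine rescaling with $\alpha = 4M$ recovers $f_j$ up to a small error.

```python
import numpy as np

# Check of the construction in Lemma 5.4's proof: with M = 10 * max |f_j|,
# the scaled input f_j / M lies in the near-linear region of the logistic,
# so S(f_j / M) ~ 1/2 + f_j / (4M) up to a tiny remainder. Data are stand-ins.
rng = np.random.default_rng(3)
fj = rng.uniform(-3.0, 3.0, size=1000)        # outputs of the chosen component
M = 10.0 * np.abs(fj).max()                   # ensures |f_j / M| <= 0.1

S = 1.0 / (1.0 + np.exp(-fj / M))
print(np.abs(S - (0.5 + fj / (4.0 * M))).max())     # ~2e-5, below eps = 1e-3
# Undoing the affine map (alpha = 4M, shifting out the 1/2) recovers f_j:
print(np.abs((4.0 * M) * (S - 0.5) - fj).max())     # small (~2.5e-3 for this M)
```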
This lemma implies that $S(z)$ can approximate a linear function arbitrarily well on an interval of non-zero length; hence, if scaling of $\vec\theta$ is allowed, then Theorem 1 can be applied.
Corollary 8.1. If the activation function is the Logistic $S(z)$ and $\{f_j(x_j)\}_{j=1}^K$ satisfies LIC, then with high probability there is a vector $\vec\theta$ s.t. $L_{\vec\theta}\big(x; \sigma \circ \Theta_{(0)}(f_1, \dots, f_K)\big) < \min_{j\in[K]^+} L(f_j(x_j))$.

Proof. Set the $\epsilon$ of the above Lemma to $\frac{\epsilon^*}{3N}$; then the previous Lemma shows that there exists $\vec\theta$ which maps $\{f_j\}$ into $Z_{\sigma\circ\Theta_{(0)},1,\epsilon^*/3N}$. Since the output of $\sigma \circ \Theta_{(0)}$ is then a linear function with error at most $\frac{\epsilon^*}{3N}$, we have the same conclusion.
Proof. (of Lemma 5.5) Let $g(x) = \sigma \circ \Theta_{(0)}(f_1, \dots, f_K)(x)$ for short. First observe that $|g(x) - f_j(x)| < \frac{\epsilon^*}{3N} \Rightarrow \forall i \in [N],\ g(x^{(i)}) - f_j(x^{(i)}) > -\frac{\epsilon^*}{3N}$. Then
$\sum_{i\in[N]} \big( g(x^{(i)}) - y^{(i)} \big) = \sum_{i\in[N]} \left\{ \big(g(x^{(i)}) - f_j(x^{(i)})\big) + \big(f_j(x^{(i)}) - y^{(i)}\big) \right\} > N \cdot \left(-\frac{\epsilon^*}{3N}\right) + \epsilon^* = \frac{2\epsilon^*}{3} > 0,$
where the last inequality uses the NPC assumption. On the other hand, it can be calculated that
$\nabla_{\vec\theta} L(x; g)\big|_{\vec\theta=\vec\theta_{\epsilon^*/3}} = \left( \frac{\partial L}{\partial w_0}, \frac{\partial L}{\partial \theta_1}, \dots, \frac{\partial L}{\partial \theta_K}, \frac{\partial L}{\partial b_0} \right)^T \Big|_{\vec\theta=\vec\theta_{\epsilon^*/3}},$
where $[a]^T$ is the transpose of the matrix $[a]$. Also note that $\frac{\partial L}{\partial w_0}\big|_{\vec\theta=\vec\theta_{\epsilon^*/3}} = 2 \sum_{i\in[N]} \big(g(x^{(i)}) - y^{(i)}\big) \cdot S(z)$ and $\frac{\partial L}{\partial b_0}\big|_{\vec\theta=\vec\theta_{\epsilon^*/3}} = 2 \sum_{i\in[N]} \big(g(x^{(i)}) - y^{(i)}\big) \cdot 1$. Since $\sum_{i\in[N]} \big(g(x^{(i)}) - y^{(i)}\big) > 0$, we can conclude $\nabla_{\vec\theta} L(x; g)\big|_{\vec\theta=\vec\theta_{\epsilon^*/3}} \neq \vec 0$.
Proof. (of Lemma 5.6) By the previous Lemma, it is valid to consider the best-performing component $f_{j^*}$, i.e., $L(f_{j^*}) = \min_{j\in[K]^+} L(f_j(x_j))$. Since $\nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3}) \neq \vec 0$, by the definition of the gradient, moving along the direction $-\nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3}) / \|\nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3})\|$ with a step size $\alpha > 0$ must strictly decrease the value of $L_{\vec\theta}(x; g)$. W.l.o.g., we can assume this $\alpha$ is optimal; that is, if $\alpha > r > 0$ then $L(\vec\theta_{\epsilon^*/3}) > L\big(\vec\theta_{\epsilon^*/3} - r \cdot \nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3}) / \|\nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3})\|\big)$, while if $r = \alpha + \delta$ for some $\delta > 0$ then $L(\vec\theta_{\epsilon^*/3}) \le L\big(\vec\theta_{\epsilon^*/3} - r \cdot \nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3}) / \|\nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3})\|\big)$. The issue is how to find a proper step size $r > 0$. We consider the line-search approach:
$r^* = \arg\min_{r\in\mathbb{R}} L\left( \vec\theta_0 - r \cdot \frac{\nabla_{\vec\theta} L(\vec\theta_0)}{\|\nabla_{\vec\theta} L(\vec\theta_0)\|} \right).$
This outputs $r^*$, and then we can make sure that $L(\vec\theta_0) > L\big(\vec\theta_0 - r^* \cdot \nabla_{\vec\theta} L(\vec\theta_0) / \|\nabla_{\vec\theta} L(\vec\theta_0)\|\big)$. Since the underlying $\vec\theta_0$ is $\vec\theta_{\epsilon^*/3}$, whose loss essentially ties that of the best component, the point $\vec\theta_0 - r^* \cdot \nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3}) / \|\nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3})\|$ fits our goal (beating the best one).
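A minimal sketch of this line-search step is below: we start at the weight vector that reproduces one component alone and take a single exact line search along the negative gradient. Data, components, and the grid-based search are all illustrative choices.

```python
import numpy as np

# Sketch of the line-search step in Lemma 5.6: start at the weight vector that
# collapses the composite to a single component, then take one exact line
# search along the negative gradient; the loss strictly decreases whenever
# the gradient is nonzero. All quantities here are illustrative.
rng = np.random.default_rng(4)
F = np.column_stack([np.ones(200), rng.normal(size=(200, 2))])   # f_0 = 1, f_1, f_2
y = F @ np.array([0.3, 1.0, -0.7]) + rng.normal(0.0, 0.2, size=200)

loss = lambda th: np.sum((F @ th - y) ** 2)
grad = lambda th: 2.0 * F.T @ (F @ th - y)

theta0 = np.array([0.0, 1.0, 0.0])          # the composite collapsed to f_1
d = -grad(theta0) / np.linalg.norm(grad(theta0))
rs = np.linspace(0.0, 5.0, 10001)           # crude stand-in for argmin over r
r_star = rs[np.argmin([loss(theta0 + r * d) for r in rs])]
print(loss(theta0 + r_star * d) < loss(theta0))                  # True
```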
Combining these lemmas yields the conclusion of Theorem 2.
Corollary 8.2. The process of Lemma 5.5 converges.
Proof. (of 8.2) It is known that if a monotonically decreasing sequence is bounded below, then the sequence converges. Repeating the process of Lemma 5.6, we can obtain a strictly decreasing sequence: $L(\vec\theta_0) > L(\vec\theta_1) > L(\vec\theta_2) > \dots$. Note that $\forall i, L(\vec\theta_i) \ge 0$. This means the sequence is monotonically decreasing and bounded below, so theoretically it converges by the monotone convergence theorem of mathematical analysis. Algorithmically, the gradient-descent-based sequence-finding process stops at some term with $\nabla_{\vec\theta} L(\vec\theta') = 0$, which is a (local) minimum or a saddle point.
Corollary 8.3. If the assumptions in Theorem 2 are satisfied, then with high probability there exists $\vec\theta'$ s.t. $L_{\vec\theta'}$ is a (local) minimum or a saddle point, while $L_{\vec\theta'}(x; g) < \min_{j\in[K]^+} L(f_j(x_j))$ still holds.

In other words, assume the pre-trained component set $\{f_j(x_j)\}_{j=1}^K$ satisfies both NPC and LIC; then there exists $\vec\theta$ s.t. $L_{\vec\theta}(x; g) < \min_{j\in[K]^+} L(f_j(x_j))$.
In the previous proofs, the critical properties of the activation are local linearity and differentiability. Hence, it is not hard to check that if we replace $\sigma(\cdot)$ in Eq. (1) with other common activations, the conclusion still holds. By local linearity we mean that, on an interval of non-zero length in its domain, the function can approximate the linear mapping arbitrarily well.
Corollary 8.4. Theorems 1 and 2 apply to any activation with local linearity and differentiability.

Based on Corollary 5.1 and Lemma 5.6, it is natural to obtain a process for finding $\vec\theta$: by gradient descent or by the closed form of Corollary 5.1. We can compute the optimal weights for the bottom Sigmoid block. On the other hand, after random initialization, the parameters of the un-trained components in the ReLU or Tanh blocks are assigned, which implies they can be treated as the all-pre-trained case in Theorem 1 or 2. In fact, given the outputs from the bottom-level block, Corollary 5.1 provides weights improving the accuracy. Then the procedure moves to the next level up until the top, which matches the forward steps of back-propagation (LeCun et al., 1988). Hence, with initialization for the un-trained components, Corollary 5.1 is essentially the same as back-propagation.
8.3 A MIX OF PRE-TRAINED AND UN-TRAINED COMPONENTS
Now we first consider the case where some of $\{f_{\Theta_j}(x_j)\}_{j=1}^K$ are pre-trained and some are un-trained, and then investigate the hierarchical combination of both kinds of components. In particular, Eq. (1) can be re-written as $g(x) = w_0 \cdot \sigma(\theta_1 f_1 + \theta_2 f_{\Theta_2}) + b_0$, where $f_1$ is a pre-trained component and $f_{\Theta_2}$ is un-trained. Since $\Theta_2$ is not fixed, it cannot be checked whether the LIC and NPC assumptions are satisfied. On the other hand, after initialization, $f_{\Theta_2}$ can be seen as a pre-trained component at any snapshot during the training phase.

Theorem 5. At the end of a weight-updating iteration, if the components $f_1$ and $f_{\Theta_2}$ satisfy the LIC and NPC assumptions, then with high probability the weights updated in the next iteration improve the loss.

Proof. Recall that the training algorithm is the back-propagation algorithm. Also note that, according to Eq. (1), the order of updating is $\vec\theta$ first and then $\Theta_2$. We denote the values of $\vec\theta$ and $\Theta_2$ at the end of iteration $i$ by $\vec\theta^{(iter=i)}$ and $\Theta_2^{(iter=i)}$, respectively. With randomized initialization, $\Theta_2$ is assigned as $\Theta_2^{(iter=0)}$ before the execution of iteration 1. Then, in each iteration $i \ge 1$, $g(x)$ is a combination of fixed-parameter components. Hence this reduces to the all-pre-trained case, and Theorems 1 and 2 apply.
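The reduction above suggests a simple alternating scheme, sketched below with toy numpy stand-ins: at each iteration the un-trained component is frozen at its current snapshot, the mixing weights are refit in closed form (Corollary 5.1), and only then does the component's own parameter take a gradient step.

```python
import numpy as np

# Sketch of Theorem 5's reduction: freeze the un-trained component at its
# current snapshot, refit the mixing weights theta in closed form
# (Corollary 5.1), then take a gradient step on the component's parameters
# Theta_2. Data and the tiny linear component are illustrative stand-ins.
rng = np.random.default_rng(5)
x = rng.normal(size=(300, 2))
y = np.sin(x[:, 0]) + 0.5 * x[:, 1]
f1 = np.sin(x[:, 0])                        # pre-trained component (frozen)
W = rng.normal(size=2) * 0.1                # parameters Theta_2 of f_{Theta_2}

for _ in range(200):
    f2 = x @ W                              # snapshot of the un-trained component
    F = np.column_stack([np.ones_like(y), f1, f2])
    theta = np.linalg.solve(F.T @ F, F.T @ y)       # refit mixing weights
    r = F @ theta - y                       # residual under current snapshot
    W -= 1e-3 * 2.0 * theta[2] * (x.T @ r)  # gradient step on Theta_2 only

print(np.mean((F @ theta - y) ** 2) < np.mean((f1 - y) ** 2))   # True
```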
Lemma 8.1. For a given data set $X$, let $\vec g := (g_1, \dots, g_N)$ and $\vec y := (y_1, \dots, y_N)$. If $\langle \vec g, \vec g - \vec y \rangle \neq 0$, then there exists $\alpha \in \mathbb{R}$ s.t.
$\sum_{i\in[N]} (\alpha g(x_i) - y_i)^2 < \sum_{i\in[N]} (g(x_i) - y_i)^2.$

Proof. It is equivalent to show that the inequality
$\sum_{i\in[N]} (\alpha g(x_i) - y_i)^2 - \sum_{i\in[N]} (g(x_i) - y_i)^2 < 0$
has a real-number solution. Expanding,
$\sum_{i\in[N]} \left[ (\alpha g(x_i) - y_i)^2 - (g(x_i) - y_i)^2 \right] = \Big(\sum_{i\in[N]} g(x_i)^2\Big)\alpha^2 + \Big(-2\sum_{i\in[N]} g(x_i)y_i\Big)\alpha + \Big(-\sum_{i\in[N]} g(x_i)^2 + 2\sum_{i\in[N]} g(x_i)y_i\Big) = \langle \vec g, \vec g \rangle \alpha^2 + (-2\langle \vec g, \vec y \rangle)\alpha + (-\langle \vec g, \vec g \rangle + 2\langle \vec g, \vec y \rangle).$
This is a quadratic inequality in $\alpha$; hence, if
$(-2\langle \vec g, \vec y \rangle)^2 - 4\langle \vec g, \vec g \rangle(-\langle \vec g, \vec g \rangle + 2\langle \vec g, \vec y \rangle) > 0,$
then there exists at least one real solution. Indeed, this discriminant simplifies to $4(\langle \vec g, \vec y \rangle - \langle \vec g, \vec g \rangle)^2 = 4\langle \vec g, \vec y - \vec g \rangle^2$, which is strictly positive by the hypothesis $\langle \vec g, \vec g - \vec y \rangle \neq 0$.
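The minimizing $\alpha$ has the closed form $\alpha^* = \langle \vec g, \vec y \rangle / \langle \vec g, \vec g \rangle$ (the vertex of the quadratic above); the following sketch, on illustrative data, confirms the strict improvement exactly when $\langle \vec g, \vec g - \vec y \rangle \neq 0$.

```python
import numpy as np

# Sketch of Lemma 8.1: the quadratic in alpha is minimized at
# alpha* = <g, y> / <g, g>, and the loss strictly improves iff <g, g - y> != 0.
rng = np.random.default_rng(6)
y = rng.normal(size=300)
g = 0.7 * y + rng.normal(0.0, 0.3, size=300)   # some current predictor

alpha = np.dot(g, y) / np.dot(g, g)            # vertex of the quadratic
loss = lambda p: np.sum((p - y) ** 2)
print(np.dot(g, g - y) != 0.0, loss(alpha * g) < loss(g))   # True True
```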
8.4 GENERALIZATION ERROR ANALYSIS
Theorem 4. Assume the pre-trained components $\{f_j\}_{j=1}^K$ satisfy LIC and NPC. Let $\{GE(f_j)\}_{j=1}^K$ be the corresponding generalization errors of $\{f_j\}_{j=1}^K$, and let $\Theta_{(L)} \circ \sigma_{(L)} \circ \dots \circ \sigma_{(1)} \circ \Theta_{(0)}(f_1, \dots, f_K)$ be the composite neural network. Denote the generalization error $E\{L(\Theta_{(L)} \circ \sigma_{(L)} \circ \dots \circ \sigma_{(1)} \circ \Theta_{(0)}(f_1, \dots, f_K))\}$ of the composite neural network by $E\{L_{\Theta,f_1,\dots,f_K}\}$. Suppose the learned weights obey the normal distribution. Then with high probability, there exists a setting of $\{\Theta^*_{(L)}, \dots, \Theta^*_{(0)}\}$ such that $E\{L_{\Theta,f_1,\dots,f_K}\} \le \Theta^*_{(L)}(GE(f_1), \dots, GE(f_K))$.

Proof. (of Theorem 4) (Proof Sketch) We apply an idea similar to Kawaguchi (2016): the expectation with non-linear activations is the same as the expectation with linear activations. The previous theorems provide that, with high probability, there exist solutions $\Theta_{(i)}, \forall i \in [L]^+$, s.t. each $\Theta_{(i+1)} \sigma \Theta_{(i)}$ approximates a degree-one polynomial $A_{\Theta_{(i+1)}\sigma\Theta_{(i)},1}$ arbitrarily well. If the weights obey the normal distribution, then $E\{L_{\Theta,f_1,\dots,f_K}\} \le \Theta^*_{(L)}(GE(f_1), \dots, GE(f_K))$. | 1. What is the focus of the paper in terms of composite networks and ensembles?
2. What is the main contribution of the paper, and how does it relate to previous works?
3. Are there any concerns regarding the assumptions made in the paper, particularly the linear independence assumption?
4. How does the reviewer assess the clarity and quality of the writing in the paper?
5. Does the reviewer think the paper adds any theoretical value to the field, and are the results of practical use? | Review | Review
The paper considers the problem of building a composite network from several pre-trained networks and whether it is possible to ensure that the final output has better accuracy than any of its components.
The analysis done in the paper is that of a simple linear mixture of the outputs produced by each component, and then by showing that if the outputs of the components are linearly independent then you can find an essentially better ensemble. This is a natural and straightforward statement with a straightforward proof. It is unclear to me what theoretical value the analysis of the paper adds. Further, the linear independence assumption in the paper seems too strong for the results to be of much value.
Further, the paper seems very hastily written, with inconsistent notation throughout, making it very hard to read. In particular, the superscript and the subscript on x have been jumbled up throughout the paper. I recommend rejection and encourage the authors to first clean up notation to make it readable.
ICLR | Title
An Analysis of Composite Neural Network Performance from Function Composition Perspective
Abstract
This work investigates the performance of a composite neural network, which is composed of pre-trained neural network models and non-instantiated neural network models, connected to form a rooted directed graph. A pre-trained neural network model is generally a well-trained neural network model targeted for a specific function. The advantages of adopting such a pre-trained model in a composite neural network are twofold: one is to benefit from others' intelligence and diligence, and the other is to save the effort of data preparation and the resources and time of training. However, the overall performance of a composite neural network is still not clear. In this work, we prove that a composite neural network, with high probability, performs better than any of its pre-trained components under certain assumptions. In addition, if an extra pre-trained component is added to a composite network, with high probability the overall performance will be improved. In the empirical evaluations, distinctively different applications support the above findings.
1 INTRODUCTION
Deep learning has been a great success in dealing with natural signals, e.g., images and voices, as well as artifact signals, e.g., natural language, while it is still at an early stage in handling sophisticated social and natural applications shaped by very diverse factors (e.g., stock market prediction) or resulting from complicated processes (e.g., pollution level prediction). One distinctive feature of these complicated applications is that their applicable data sources are boundless. Consequently, their solutions need frequent revisions. Although neural networks can approximate arbitrary functions as closely as desired (Hornik, 1991), the major reason no competent neural network exists for those complicated applications is that their problems are hardly fully understood and their applicable data sources cannot be identified all at once. By far the most common practice is that developers pick a seemingly suitable neural network, feed it the available data, and hope for the best. The apparent drawbacks, besides the performance, are the lack of flexibility when new data sources emerge, of better problem decomposition, and of the opportunity to employ proven efforts from others. On the other hand, some adopt a composition of several neural network models, based on function composition using domain knowledge.
An emerging trend in deep learning solution development is to employ well-crafted pre-trained neural networks (i.e., neural network models with instantiated weights), especially as components in a composite neural network model. The most popular pre-trained neural network models are well fine-tuned with adequate training data and made available to the public, either for free or as commercial products. During the training phase of a composite neural network, the weights of the pre-trained models are frozen to maintain their quality and save training time, while the weights of their outgoing edges are trainable. In some cases, as in transfer learning, the weights of the pre-trained neural network are used as initial values in the training phase of the composite neural network. It is intuitive that a composite neural network should perform better than any of its components. Ensemble learning (Freund & Schapire, 1997; Zhou, 2012) and transfer learning (Galanti et al., 2016) have had great success and are popular when pre-trained models are considered. However, the following example shows some aspects missed by these two methods, and calls for a more complicated composite function.
Example 1. Assume there is a set of locations indexed as $X = \{(0, 0), (0, 1), (1, 0), (1, 1)\}$ with the corresponding values $Y = (0, 1, 1, 0)$. Obviously, the observed function is the XOR (Goodfellow et al., 2016). Now consider three models: $f_1(x_1, x_2) := x_1$, $f_2(x_1, x_2) := x_2$, and $f_3(x_1, x_2) := x_1 x_2$. Their corresponding output vectors are $(0, 0, 1, 1)$, $(0, 1, 0, 1)$, $(0, 0, 0, 1)$, with bit-wise accuracy 50%, 50%, 25%, respectively. This means that the AdaBoost algorithm will exclude $f_1$ and $f_2$ from the ensemble, since their coefficients are $\frac{1}{2}\ln\frac{1-50\%}{50\%} = 0$. On the other hand, in transfer learning, $f_3$ is fine-tuned by applying the gradient descent method with respect to the $L_2$ loss on $w f_3 = w x_1 x_2$ to transfer the source task distribution to that of the target task. The result comes to $w = 0$, and $f_3$ is excluded. Now consider $g_1(x_1, x_2) = \alpha_1 f_1 + \alpha_2 f_2$ and apply the back-propagation method with respect to the $L_2$ loss. The results are $\alpha_1 = \alpha_2 = \frac{1}{3}$, with loss $\frac{4}{3}$. If we further define $g_2(x_1, x_2) = w_1 g_1 + w_2 f_3$, back-propagation yields $g_2 = 3g_1 - 2f_3 = x_1 + x_2 - 2x_1 x_2$ with the output $(0, 1, 1, 0)$. The final $g_2$ computes $Y$ with loss 0. This example shows the power of composite functions.
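The arithmetic in Example 1 can be reproduced in a few lines of numpy; the sketch below solves the two least-squares problems and recovers the stated weights and the exact XOR output.

```python
import numpy as np

# Verification of Example 1: least squares gives g1 = (x1 + x2)/3 with loss
# 4/3, and the second-level composition g2 = 3*g1 - 2*f3 recovers XOR exactly.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
f1, f2, f3 = X[:, 0], X[:, 1], X[:, 0] * X[:, 1]

A = np.column_stack([f1, f2])
alphas, *_ = np.linalg.lstsq(A, y, rcond=None)      # -> [1/3, 1/3]
g1 = A @ alphas
print(alphas, np.sum((g1 - y) ** 2))                # [0.333.. 0.333..] 1.333..

B = np.column_stack([g1, f3])
w, *_ = np.linalg.lstsq(B, y, rcond=None)           # -> [3, -2]
print(B @ w)                                        # [0. 1. 1. 0.]  (XOR)
```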
Composite Neural Network. In transfer learning, how to overcome negative transfer (a phenomenon in which a pre-trained model has a negative impact on the target task) is an important issue (Seah et al., 2013). In ensemble learning, it is well known that adding more pre-trained models does not always yield better accuracy of the ensemble (Zhou et al., 2002). Furthermore, Opitz & Maclin (1999) pointed out that a boosted ensemble having lower accuracy than a single pre-trained model often happens for neural networks. In the unsupervised learning context, some experimental research concludes that although layer-wise pre-training can be significantly helpful, on average it is slightly harmful (Goodfellow et al., 2016). These empirical observations suggest that, despite the success of ensemble learning and transfer learning, the conditions under which a composite neural network performs better are unclear, especially in the deep neural network training process. The topology of a composite neural network can be represented as a rooted directed graph. For instance, an ensemble can be represented as a 1-level graph, while a composite neural network with several pre-trained models, each designed to solve a certain problem, corresponds to a more complicated graph. It is desirable to discover a mathematical theory, in addition to employing domain knowledge, for constructing a composite neural network with guaranteed overall performance. In this work, we investigate the mathematical theory ensuring that the overall performance of a composite neural network is better than that of any pre-trained component, regardless of the manner of composition, so as to allow deep learning application developers great freedom in constructing a high-performance composite neural network.

Contributions. In this work, we prove that a composite neural network, with high probability, performs better than any of its pre-trained components under certain assumptions. In addition, if an extra pre-trained component is added to a composite network, with high probability the overall performance will be improved. In the empirical evaluations, distinctively different applications support the above findings.
2 PRELIMINARIES
In this section, we introduce some notation and definitions for composite neural networks. The parameters $N$, $K$, $d$, $d_j$, $d_{j_1}$, and $d_{j_2}$ are positive integers. Denote $\{1, \dots, K\}$ by $[K]$ and $[K] \cup \{0\}$ by $[K]^+$. Let $\sigma : \mathbb{R} \to \mathbb{R}$ be a differentiable activation function, such as the Logistic function $\sigma(z) = 1/(1+e^{-z})$ and the hyperbolic tangent $\sigma(z) = (e^z - e^{-z})/(e^z + e^{-z})$. For simplicity of notation, we sometimes abuse $\sigma$ as a vector-valued function. A typical one-hidden-layer neural network can be formally presented as $w_{1,1}\,\sigma\big(\sum_{i=1}^d w_{0,i} x_i + w_{0,0}\big) + w_{1,0}$. We abbreviate it as $f_{\sigma,\mathbf{W}}(x)$, where $\mathbf{W}$ is the matrix defined by $w_{1,1}, w_{1,0}, \dots, w_{0,1}, w_{0,0}$. Recursively applying this representation yields neural networks with more hidden layers. If there is no ambiguity about the activation function, it can be omitted, as in $f_{\mathbf{W}}(x)$. Now assume a set of neural networks $\{f_{\mathbf{W}_j}(x_j)\}_{j=1}^K$ is given, where $\mathbf{W}_j$ is the real matrix defining the neural network $f_{\mathbf{W}_j} : \mathbb{R}^{d_{j_1} \times d_{j_2}} \to \mathbb{R}^{d_j}$, and $x_j \in \mathbb{R}^{d_{j_1} \times d_{j_2}}$ is the input matrix of the $j$th neural network. For different $f_{\mathbf{W}_j}$, the corresponding $d_j$, $d_{j_1}$ and $d_{j_2}$ can be different. For each $j \in [K]$, let $D_j = \{(x_j^{(i)}, y_j^{(i)}) \in \mathbb{R}^{(d_{j_1} \times d_{j_2}) \times d_j}\}_{i=1}^N$ be a set of labeled data (for the $j$th neural network). For each $i \in [N]$, let $x^{(i)} = (x_1^{(i)}, \dots, x_K^{(i)})$, $y^{(i)} = (y_1^{(i)}, \dots, y_K^{(i)})$, and $D = \{(x^{(i)}, y^{(i)})\}_{i=1}^N$.

For a pre-trained model (component), we mean that $\mathbf{W}_j$ is fixed after its training process, and we then write $f_{\mathbf{W}_j}$ as $f_j$ for simplicity. On the other hand, a component $f_{\mathbf{W}_j}$ is non-instantiated if $\mathbf{W}_j$ is still free. A deep feedforward neural network is a hierarchical acyclic graph, i.e., a directed tree. From this viewpoint, a feedforward neural network can be presented as a series of function compositions. For given $\{f_{\mathbf{W}_j}(x_j)\}_{j=1}^K$, we assume $\theta_j \in \mathbb{R}^{d_j}$, $j \in [K]$, which makes the product $\theta_j f_{\mathbf{W}_j}(x_j)$ well-defined. Denote by $f_0$ the constant function 1; then the linear combination with a bias is defined as $\Theta(f_1, \dots, f_K) = \sum_{j\in[K]^+} \theta_j f_j(x_j)$. Hence, an $L$-layer neural network can be denoted as $\Theta_{(L)} \circ \sigma \circ \dots \circ \Theta_{(0)}(x)$. A composite neural network defined by components $f_{\mathbf{W}_j}(x_j)$ can be designed as a directed tree. For instance, the composite neural network $\sigma_2\big(\theta_{1,0} + \theta_{1,1} f_4(x_4) + \theta_{1,2}\,\sigma_1(\theta_{0,0} + \theta_{0,1} f_1(x_1) + \theta_{0,2} f_{\mathbf{W}_2}(x_2) + \theta_{0,3} f_3(x_3))\big)$ can be denoted as $\sigma_2 \circ \Theta_1(f_4, \sigma_1 \circ \Theta_0(f_1, f_{\mathbf{W}_2}, f_3))$, where $f_1$ and $f_3$ are pre-trained and $f_{\mathbf{W}_2}$ is non-instantiated. Note that in this work $D_j$ is the default training data of component $f_j$ of the composite neural network, but $D_j$ can differ from the training data that decided the frozen weights of the pre-trained $f_j$.
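As a reading aid, here is a minimal Python sketch of this tree notation; all component functions are illustrative stand-ins, with the frozen ones fixed and the non-instantiated one exposing a trainable parameter.

```python
import numpy as np

# Minimal sketch of the notation: the composite
# sigma_2 . Theta_1(f4, sigma_1 . Theta_0(f1, f_W2, f3)) is just nested calls,
# i.e., a rooted directed tree evaluated from the leaves up. All fs are toys.
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
f1 = lambda x: x[0] ** 2              # pre-trained components (frozen)
f3 = lambda x: np.sin(x[2])
f4 = lambda x: x[3]
fW2 = lambda x, W: W * x[1]           # non-instantiated: W is trainable

def Theta(theta, values):             # linear combination with bias theta[0]
    return theta[0] + np.dot(theta[1:], values)

def composite(x, W, th0, th1):
    g1 = sigmoid(Theta(th0, [f1(x), fW2(x, W), f3(x)]))   # sigma_1 . Theta_0
    return sigmoid(Theta(th1, [f4(x), g1]))               # sigma_2 . Theta_1

print(composite(np.array([1.0, 2.0, 0.5, -1.0]), W=0.3,
                th0=np.array([0.0, 1.0, 1.0, 1.0]),
                th1=np.array([0.0, 0.5, 2.0])))
```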
Let $\langle \vec a, \vec b \rangle$ be the standard inner product of $\vec a$ and $\vec b$, and let $\|\cdot\|$ be the corresponding norm. For a composite neural network, the training algorithm is the gradient-descent back-propagation algorithm, and the loss function is the squared $L_2$-norm of the difference vector. In particular, for a composite neural network $g_{\vec\theta}$, the total loss on the data set $D$ is

$L_{\vec\theta}(x; g_{\vec\theta}) = \langle \vec g_{\vec\theta}(x) - \vec y, \vec g_{\vec\theta}(x) - \vec y \rangle = \|\vec g_{\vec\theta}(x) - \vec y\|^2 \quad (1)$

This is in fact $\sum_{i=1}^N \big(g(x^{(i)}) - y^{(i)}\big)^2$. By the definition of $g_{\vec\theta}(\cdot)$, this total loss depends on the given data $x$, the components defined by $\{\Theta_j\}_{j=1}^K$, the output activation $\sigma$, and the weight vector $\vec\theta$. Similarly, let $L(f_j(x_j))$ be the loss function of a single component $f_j$. Our goal is to find a feasible $\vec\theta$ s.t. $L_{\vec\theta}(x; g) < \min_{j\in[K]} L(f_j(x_j))$.
3 PROBLEM SETTINGS AND RESULTS OVERVIEW
The problems considered in this work are as follows:
P1. What are the conditions that the pre-trained components must satisfy so that they can strictly improve the accuracy of the whole composition?
P2. Will more pre-trained components improve the accuracy of the whole composition?
Let $\vec f_j$ be the output vector of the $j$th pre-trained component, and let $B_K$ be the set of standard unit basis vectors in $\mathbb{R}^K$.

A1. Linearly Independent Components (LIC) Assumption: $\forall t \in [K], \nexists \{\beta_j\} \subset \mathbb{R}$ s.t. $\vec f_t = \sum_{j\in[K]\setminus\{t\}} \beta_j \vec f_j$.

A2. No Perfect Component (NPC) Assumption: $\min_{j\in[K]} \big\{ \sum_{i\in[N]} \big(f_j(x_j^{(i)}) - y^{(i)}\big) \big\} > \epsilon^*$, where $\epsilon^* > 0$ is a constant.
Our results are as follows:
Theorem 1. Assume the set of components $\{f_j(x_j)\}_{j=1}^K$ satisfies LIC. Let $g$ be $\Theta(f_1, \dots, f_K)$. With probability at least $1 - \frac{K}{\pi e^N}$, there is a vector $\vec\theta \in \mathbb{R}^K \setminus B_K$ s.t. $L_{\vec\theta}(x; g) < \min_{j\in[K]}\{L(f_j(x_j))\}$.

Theorem 2. Assume the set of pre-trained components $\{f_j(x_j)\}_{j=1}^K$ satisfies both NPC and LIC, and let $g$ be $\sigma \circ \Theta(f_1, \dots, f_K)$. Then with probability at least $1 - \frac{K}{\pi e^N}$ there exists $\vec\theta$ s.t. $L_{\vec\theta}(x; g) < \min_{j\in[K]} L(f_j(x_j))$.

Theorem 3. Assume the set of components $\{f_j(x_j)\}_{j=1}^K$ satisfies LIC. Let $g_{K-1} = \Theta(f_1, \dots, f_{K-1})$ and $g_K = \Theta(f_1, \dots, f_K)$. With probability at least $1 - \frac{K}{\pi e^N}$, there is a vector $\vec\theta \in \mathbb{R}^K \setminus B_K$ s.t. $L_{\vec\theta}(x; g_K) < L_{\vec\theta}(x; g_{K-1})$.
Theorems 1 and 2 together answer Problem P1, and Theorem 3 answers Problem P2.
4 RELATED WORK
Our framework is related to, but not the same as, models such as transfer learning (Erhan et al., 2010; Kandaswamy et al., 2014; Yao & Doretto, 2010) and ensemble learning (Zhou, 2012).

Transfer Learning. Typically, transfer learning deals with two data sets with different distributions, the source and target domains. A neural network, such as an auto-encoder, is trained with source-domain data and the corresponding task, and then part of its weights are taken out and plugged into another neural network, which is trained with the target-domain data and task. The transplanted weights can be kept fixed during the subsequent steps or left trainable for fine-tuning purposes (Erhan et al., 2010). For multi-source transfer, boosting-based algorithms are studied in (Yao & Doretto, 2010). Kandaswamy et al. (2014) proposed a method of cascading several pre-trained layers to improve the performance. Transfer learning can be considered a special case of composite neural networks in which the transferred knowledge is viewed as a pre-trained component.

Ensemble (Bagging and Boosting). Since Bagging needs to group data by sampling and Boosting needs to tune the probability of data (Zhou et al., 2002), these frameworks are different from composite neural networks. However, there are solid research results revealing many properties related to accuracy improvement (Džeroski & Ženko, 2004; Gashler et al., 2008; Zhou et al., 2002). For example, it is known that in the ensemble framework, low diversity between members can be harmful to the accuracy of their ensemble (Džeroski & Ženko, 2004; Gashler et al., 2008). In this work, we consider neural network training, but not data processing.

Ensemble (Stacking). Among the ensemble methods, stacking is closely related to our framework. The idea of stacked generalization (Wolpert, 1992), in Wolpert's terminology, is to combine two levels of generalizers: the original data are taken by several level-0 generalizers, and their outputs are concatenated as an input vector to the level-1 generalizer. According to the empirical study of Ting & Witten (1999), the probability of the outputs of level 0, rather than their values, is critical to accuracy. Besides, multi-linear regression is the best level-1 generalizer, and a non-negative weights restriction is necessary for regression problems but not for classification problems. Breiman (1996) restricts combination weights to be non-negative to prevent poor generalization error and concludes that restricting the weights to sum to 1 is not necessary. In (Hashem, 1997), Hashem showed that linear dependence of components could be, but is not always, harmful to ensemble accuracy. In contrast, our framework allows a mix of pre-trained and non-instantiated components as well as negative weights, providing flexibility in solution design.

Recently Proposed Frameworks. In You et al. (2017), Shan You et al. proposed a student-teacher framework where the outputs of pre-trained teachers are averaged as the knowledge for the student network. A test-time combination of multiple trained predictors was proposed by Kim, Tompkin, and Richardt in Kim et al. (2017), where the combination weights are decided at test time. In the above frameworks, the usage of pre-trained neural networks generally improves the accuracy of their combination.
5 THEORETICAL ANALYSIS
This section provides analyses of the loss function of a composite neural network with the introduction of pre-trained components. For the complete proofs, please refer to the Supplementary Material. Observe that for given pre-trained components $\{f_j\}_{j=1}^K$, a composite neural network can be defined recursively by a postorder subtree search. For instance, $\sigma_2 \circ \Theta_1(f_4, \sigma_1 \circ \Theta_0(f_1, f_2, f_3))$ can be presented as $\sigma_2 \circ \Theta_1(f_4, g_1)$, with $g_1 = \sigma_1 \circ \Theta_0(f_1, f_2, f_3)$. Without loss of generality, we assume $d_j = d = 1$ for all $j \in [K]$ in the following proofs. We denote by $\vec f_j$ the vector $(f_j(x^{(1)}), \dots, f_j(x^{(N)}))$, i.e., the outputs of $f_j$ over the training set. Similarly, $\vec y := (y^{(1)}, \dots, y^{(N)})$. Let $\vec e_j$ be a unit vector in the standard basis of $\mathbb{R}^K$ for each $j \in [K]$, i.e., $\vec e_1 = (1, 0, \dots, 0)$, $\vec e_2 = (0, 1, 0, \dots, 0)$, etc. Let $B_K$ be the set containing all these standard unit-length basis vectors of $\mathbb{R}^K$.
Theorem 1. Assume the set of components $\{f_j(x_j)\}_{j=1}^K$ satisfies LIC. Let $g$ be $\Theta(f_1, \dots, f_K)$. With probability at least $1 - \frac{K}{\pi e^N}$, there is a vector $\vec\theta \in \mathbb{R}^K \setminus B_K$ s.t. $L_{\vec\theta}(x; g) < \min_{j\in[K]}\{L(f_j(x_j))\}$.

Proof. (Proof Sketch) The whole proof is split into Lemmas 5.1, 5.2, and 5.3. Note that $g(\cdot)$ is the linear combination of $\vec\theta$ and $\{f_j(x_j)\}_{j=1}^K$. It is well known (Friedman et al., 2001) that searching for the minimizer $\vec\theta$ of $L_{\vec\theta}$, i.e., solving a least-squares problem, is equivalent to inverting a matrix defined by $\{f_j(x_j)\}_{j=1}^K$. Since $\{f_j(x_j)\}_{j=1}^K$ satisfies LIC, the inverse matrix can be written down concretely, which proves the existence. Furthermore, if this solved minimizer $\vec\theta^*$ is not $\vec e_s$ for some $s \in [K]$, then $g_{\vec\theta^*}$ has lower loss than $f_s$. Lemma 5.3 argues that the probability of $\vec\theta^* = \vec e_s$ is at most the probability of the event $\langle \vec f - \vec y, \vec f \rangle = 0$, where $\vec f$ is taken uniformly from the set of vectors at the same distance $\|\vec f - \vec y\|$ from $\vec y$.

The statements of the Lemmas needed by the previous Theorem are as follows.
Lemma 5.1. There exists $\vec\theta \in \mathbb{R}^{K+1}$ s.t. $L_{\vec\theta}\big(x; \Theta_{(0)}(f_1, \dots, f_K)\big) \le \min_{j\in[K]^+}\{L(f_j(x_j))\}$.

This Lemma deals with the existence of a solution of the non-strict inequality, but our goal is to find a solution whose loss is strictly less than that of every pre-trained component.

Lemma 5.2. Denote by $I_{L_{\vec\theta}}$ the indicator variable of the event that at least one of the $\vec e_j \in B_K$ is the minimizer of $L_{\vec\theta}$. Then $\Pr\{I_{L_{\vec\theta}} = 1\} < \frac{K}{\pi e^N}$, i.e., $\Pr\{I_{L_{\vec\theta}} = 0\} \ge 1 - \frac{K}{\pi e^N}$.

Lemma 5.3. Define $\mathcal{F}(\vec y, L(f)) = \big\{ \vec f \in \mathbb{R}^N : \|\vec f - \vec y\|^2 = L_f \big\}$ for given $\vec y$ and $\vec f$. Then we have $\Pr_{\vec f \in \mathcal{F}(\vec y, L(f))}\{\langle \vec f - \vec y, \vec f \rangle = 0\} < \frac{1}{\pi e^N}$.

The above Lemmas prove Theorem 1. The following corollary gives the closed form of the optimal weights.

Corollary 5.1. The closed form of the minimizer is $[\theta_t]_{t\in[K]^+} = \big[\langle \vec f_s, \vec f_t \rangle\big]^{-1}_{s,t\in[K]^+} \times \big[\langle \vec f_s, \vec y \rangle\big]_{s\in[K]^+}$.
In the following, we deal with $\sigma \circ \Theta(f_1, \dots, f_K)$ and $\Theta_1 \circ \sigma \circ \Theta(f_1, \dots, f_K)$.
Theorem 2. Assume the set of pre-trained components $\{f_j(x_j)\}_{j=1}^K$ satisfies both NPC and LIC, and let $g$ be $\sigma \circ \Theta(f_1, \dots, f_K)$. Then with probability at least $1 - \frac{K}{\pi e^N}$ there exists $\vec\theta$ s.t. $L_{\vec\theta}(x; g) < \min_{j\in[K]} L(f_j(x_j))$.

Proof. (Proof Sketch) The whole proof is split into Lemmas 5.4, 5.5, and 5.6. The idea is to find an interval in the domain of $\sigma$ on which the output can approximate a linear function arbitrarily well; on this interval, the activation $\sigma$ can approximate any given pre-trained component. However, under the LIC and NPC assumptions, the gradient of the loss $L$ is nonzero with high probability. Since the training is based on the gradient descent algorithm, this nonzero gradient leads the updating process toward a lower loss.

Lemma 5.4. Let $N$, $K$ and $j \in [K]$ be fixed. For small enough $\epsilon$, there exist $\vec\theta \in Z_{F,1,\epsilon}$ and $0 < \alpha \in \mathbb{R}$ s.t. $\big|\sigma \circ \Theta_{(0)}(f_1, \dots, f_K) - \frac{f_j(x)}{\alpha}\big| < \epsilon$.

Lemma 5.5. Assume NPC holds with $\epsilon^* > 0$. If $\vec\theta_{\epsilon^*/3}$ satisfies $|\sigma \circ \Theta_{(0)}(f_1, \dots, f_K)(x) - f_j(x)| < \frac{\epsilon^*}{3N}$ for any $j \in [K]^+$, then $\nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3}) \neq \vec 0$.

Lemma 5.6. If $\vec\theta_{\epsilon^*/3}$ makes $\nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3}) \neq \vec 0$, then there exists $\vec\theta$ s.t. $L_{\vec\theta}(x; g) < \min_{j\in[K]^+} L(f_j(x_j))$.

Now we consider the difference between the losses of $\sigma \circ \Theta'(f_1, \dots, f_K)$ and $\sigma \circ \Theta(f_1, \dots, f_{K-1})$.
Theorem 3. Assume the set of components $\{f_j(x_j)\}_{j=1}^K$ satisfies LIC. Let $g_{K-1} = \Theta(f_1, \dots, f_{K-1})$ and $g_K = \Theta(f_1, \dots, f_K)$. With probability at least $1 - \frac{K}{\pi e^N}$, there is a vector $\vec\theta \in \mathbb{R}^K \setminus B_K$ s.t. $L_{\vec\theta}(x; g_K) < L_{\vec\theta}(x; g_{K-1})$.
Proof. (Proof Sketch) The idea is to directly solve the inequality for the case $K = 2$ and then generalize the result to larger $K$.

The following provides a generalization error bound for a composite neural network.

Theorem 4. Assume the pre-trained components $\{f_j\}_{j=1}^K$ satisfy LIC and NPC. Let $\{GE(f_j)\}_{j=1}^K$ be the corresponding generalization errors of $\{f_j\}_{j=1}^K$, and let $\Theta_{(L)} \circ \sigma_{(L)} \circ \dots \circ \sigma_{(1)} \circ \Theta_{(0)}(f_1, \dots, f_K)$ be the composite neural network. Denote the generalization error $E\{L(\Theta_{(L)} \circ \sigma_{(L)} \circ \dots \circ \sigma_{(1)} \circ \Theta_{(0)}(f_1, \dots, f_K))\}$ of the composite neural network by $E\{L_{\Theta,f_1,\dots,f_K}\}$. Then with high probability, there exists a setting of $\{\Theta^*_{(L)}, \dots, \Theta^*_{(0)}\}$ such that $E\{L_{\Theta,f_1,\dots,f_K}\} \le \Theta^*_{(L)}(GE(f_1), \dots, GE(f_K))$.

Proof. (Proof Sketch) We apply an idea similar to Kawaguchi (2016): the expectation with non-linear activations is the same as the expectation with linear activations. The previous theorems provide that, with high probability, there exist solutions $\Theta_{(i)}, \forall i \in [L]^+$, s.t. each $\Theta_{(i+1)} \sigma \Theta_{(i)}$ approximates a degree-one polynomial $A_{\Theta_{(i+1)}\sigma\Theta_{(i)},1}$ arbitrarily well. If the weights obey the normal distribution, then $E\{L_{\Theta,f_1,\dots,f_K}\} \le \Theta^*_{(L)}(GE(f_1), \dots, GE(f_K))$.
6 EMPIRICAL STUDIES
This section numerically verifies the performance of composite networks on two distinctively different applications, image classification and PM2.5 prediction. For image classification, we examined two pre-trained components, ResNet50 (He et al., 2016) from Keras and the SIFT algorithm (Lowe, 1999) from OpenCV, on the benchmark of the ImageNet competition (Russakovsky et al., 2015). For PM2.5 prediction, we implemented several models on the open data of the local weather bureau and environmental protection agency to predict the PM2.5 level in the coming hours.
6.1 IMAGENET CLASSIFICATION
We chose ResNet50 as the pre-trained baseline model and the SIFT model as an auxiliary model to form a composite neural network that validates the proposed theory. The experiments are conducted on the 1000-class single-label classification task of the ImageNet dataset, which has been a well-received benchmark for image classification applications. A reason to choose the SIFT (Scale-Invariant Feature Transform) algorithm is that its function is very different from ResNet, and it is interesting to see whether the performance of ResNet50 can be improved as predicted by our theory.

We trained the SIFT model using the images of ImageNet and directed its output to a CNN to extract useful features before merging with the ResNet50 output. In the composite model, the softmax functions of both ResNet50 and the SIFT model are removed, so that the length-1000 outputs of both models are merged before the final softmax stage. During the training process of the composite network, the weights of ResNet50 and the SIFT model are fixed, and only the connecting weights and bias are trained.
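The freezing-and-merging structure described above can be sketched as follows. The paper's implementation used Keras; this PyTorch-style sketch only illustrates the structure, and `resnet_logits` and `sift_cnn` are hypothetical stand-ins for the actual pre-trained models.

```python
import torch
import torch.nn as nn

# A hedged sketch of the merge described above: two frozen components each
# emit 1000 pre-softmax scores; only the combining weights and bias are
# trained. The final softmax is applied inside the cross-entropy loss.
class Composite(nn.Module):
    def __init__(self, resnet_logits: nn.Module, sift_cnn: nn.Module):
        super().__init__()
        self.resnet = resnet_logits
        self.sift = sift_cnn
        for p in list(self.resnet.parameters()) + list(self.sift.parameters()):
            p.requires_grad = False                      # freeze both components
        self.merge = nn.Linear(2 * 1000, 1000)           # trainable weights + bias

    def forward(self, image, sift_feats):
        with torch.no_grad():                            # components stay fixed
            a = self.resnet(image)                       # (batch, 1000) scores
            b = self.sift(sift_feats)                    # (batch, 1000) scores
        return self.merge(torch.cat([a, b], dim=1))
```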
The ResNet50 was from He et al.; its Top-1 accuracy in our context was lower than reported in (He et al., 2016) since we did not do any fine-tuning or data preprocessing. Figure 1 shows that the composite network has higher accuracy than ResNet50 during almost the entire testing run. Table 1 shows the same result: the composite network performs better. The experimental results support the claims of this work that a composite network performs better than any of its components, and that more components work better than fewer components.
6.2 PM2.5 PREDICTION
The PM2.5 prediction problem is to forecast the particle density of fine atmospheric matter with diameter at most 2.5 µm (PM2.5) in the coming hours, mainly for the next 12, 24, 48, and 72 hours. The datasets used are open data provided by two sources: the Environmental Protection Administration (EPA)1 and the Central Weather Bureau (CWB)2. The EPA dataset contains 21 observed features, including the speed and direction of wind, temperature, relative humidity, PM2.5 and PM10 density, etc., from 18 monitoring stations, with one record per hour. The CWB dataset covers seventy monitoring stations, with one record per 6 hours and 26 features, such as temperature, dew point, precipitation, wind speed and direction, etc. We partitioned the observed area into a grid of 1140 km² with 1 km × 1 km blocks and aligned both datasets to a one-hour period. We refer to the two datasets as the air quality dataset and the weather condition dataset.

We selected ConvLSTM (Convolutional LSTM) and FNN (fully connected neural network) as the components used in this experiment. ConvLSTM was chosen because the dispersion of PM2.5 is both spatially and temporally dependent, and ConvLSTM is considered capable of capturing this dependency; FNN is a fundamental neural network that acts as the auxiliary component in the experiment.

The prediction models were trained with the data of the years 2014 and 2015, and the 2016 data was used for testing. We considered two function compositions, the linear combination $\Theta$ and the Logistic function $\sigma_1$ (as in Theorem 2), to combine the two components and examine the applicability of the proposed theorems.

We trained and tested both ConvLSTM and FNN on the air quality dataset (Dataset A) and the weather condition dataset (Dataset B) separately as the baselines (denoted as $f_1$, $f_2$, $f_3$ and $f_4$); their training and testing errors in MSE are listed in the first part of Table 2. Then we composited FNNs using Dataset A and Dataset B, where each FNN can be pre-trained (denoted as $\times$) or non-instantiated (denoted as $\circ$). In addition, we used both linear and Sigmoid activation functions. As a result, we had eight combinations, as listed in part two. We treated ConvLSTM in the same way, and the outcomes are in part three. Finally, we composited one FNN and one ConvLSTM, each the best in its category, and the resulting composite network was a tree of depth 2. For instance, the ConvLSTM candidate of part four for the 12-hour prediction was the 4th row (i.e., $\Theta(f_3^{\circ}, f_4^{\circ})$) of part three. The training and testing errors in MSE are listed in part four.

The empirical study results mostly follow the proposed theorems. While the composite networks with all pre-trained components may not perform better than others in their category (which is not a surprise), what we expect to see is that, after adding a new component, the composite network improves over the previous one. For example, $\sigma \circ \Theta(f_3^{\times}, f_4^{\times})$ has strictly better accuracy than both $f_3$ and $f_4$ for all future predictions. As another example, for the next 48 hours, $\sigma \circ \Theta(C^{\times}, F^{\times})$ also has strictly better accuracy than both $C = \sigma \circ \Theta(f_3^{\circ}, f_4^{\circ})$ and $F = \sigma \circ \Theta(f_1^{\circ}, f_2^{\circ})$.
1 https://opendata.epa.gov.tw/Home
2 http://opendata.cwb.gov.tw/index
7 CONCLUSION
In this work, we investigated the problem of composite neural networks with pre-trained components and showed that the overall performance of a composite neural network is better than that of any of its components, and that more components perform better than fewer components. In addition, the developed theory considers all differentiable activation functions.

While the proposed theory ensures the overall performance improvement, it is still not clear how to decompose a complicated problem into components and how to assemble them into a composite neural network so as to obtain acceptable performance. Another problem worth consideration is whether the performance improvement diminishes (by a power law or exponential decay) as more components are added. Moreover, in real-world applications, the amount of data, the data distribution, and the data quality will strongly affect the performance.
8 SUPPLEMENTARY MATERIAL
For self-containedness, we list some common Taylor expansions in the following.
Logistic: $S(z) := \frac{1}{1+e^{-z}} = \frac{1}{2} + \frac{1}{4}z - \frac{1}{48}z^3 + \frac{1}{480}z^5 - \frac{17}{80640}z^7 + O(z^9)$, $\forall z \in \mathbb{R}$;
Hyperbolic tangent: $\tanh(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}} = z - \frac{1}{3}z^3 + \frac{2}{15}z^5 + O(z^7)$, $\forall |z| \le \frac{\pi}{2}$;
Arctangent: $\arctan(z) = z - \frac{1}{3}z^3 + \frac{1}{5}z^5 + O(z^7)$, $\forall |z| \le 1$.

Definition 1. Given an activation $\sigma(z)$ and its Taylor expansion $T_\sigma(z)$, let $A_{\sigma,D}(z)$ be the truncation of $T_\sigma(z)$ to the monomials of degree at most $D$. We call $A_{\sigma,D}(z)$ the $D$-degree Taylor approximation polynomial and define $R_{\sigma,D+1}(z)$ as the remainder part, so that $T_\sigma(z) = A_{\sigma,D}(z) + R_{\sigma,D+1}(z)$.

For instance, if we set $D = 3$, then the Taylor expansion of the Logistic function $S(z)$ is separated into the approximation part $A_{S,3}(z) = \frac{1}{2} + \frac{1}{4}z - \frac{1}{48}z^3$ and the remainder part $R_{S,4}(z) = \frac{1}{480}z^5 + O(z^7)$.

Proposition 8.1. (Error Bound of the Remainder) Let $S(z)$ be the Logistic function, and consider the approximation $A_{S,D}(z)$ and the remainder $R_{S,D+1}(z)$ defined as above. For given $\epsilon \in (0, \frac{1}{1000})$ and $D \in \mathbb{N}$, if $|z| < \epsilon^{1/(D+2)}$, then $|S(z) - A_{S,D}(z)| = |R_{S,D+1}(z)| < \epsilon$.

Proof. Note that if $\epsilon < 1$ then for all $D \in \mathbb{N}$, $\epsilon^{1/(D+1)} < 1$. If $|z| < \epsilon^{1/3}$ and $D = 1$, then
$|R_{S,2}(z)| \le \left| -\tfrac{1}{48}z^3 + \tfrac{1}{480}z^5 - \tfrac{17}{80640}z^7 + O(z^9) \right| < \tfrac{\epsilon}{24} < \epsilon.$
The general case ($D \ge 2$) can be proven by the same argument.

This Proposition means that, on a suitable range of $z$, the Logistic function can be seen as a linear function with error at most $\epsilon$.

Definition 2. For the Logistic activation $\sigma(z) = S(z)$, $\epsilon > 0$ and a given polynomial degree $D$, we define $Z_{D,\epsilon} = \{z \in \mathbb{R} : |\sigma(z) - A_{\sigma,D}(z)| < \epsilon\}$. Furthermore, for given components $\{f_j : j \in [K]\} = F$, we consider the variable $z = \Theta(f_1, \dots, f_K)$ and define
$Z_{F,D,\epsilon} = \left\{ \vec\theta \in \mathbb{R}^{K+1} : z = \Theta(f_1, \dots, f_K),\ |\sigma(z) - A_{\sigma,D}(z)| < \epsilon \right\}.$
Observe that if the parameters $\epsilon$, $F$, and $|F| = K$ are fixed, then $Z_{F,D,\epsilon} \subset Z_{F,D+1,\epsilon} \subset \mathbb{R}^{K+1}$.
8.1 FUNCTION COMPOSITION BY LINEAR COMBINATION
Recall that for a set of pre-trained components $\{f_j(x_j) : j \in [K]\}$, $\Theta_{(0)}(f_1, \dots, f_K) = \sum_{j\in[K]^+} \theta_{0,j} f_j$, where $f_0 = 1$. For simplicity, we consider $\Theta_{(1)}(z) = \alpha z$. This means
$\Theta_{(1)} \circ \sigma \circ \Theta_{(0)}(f_1, \dots, f_K) = \theta_{1,1}\,\sigma\Big(\sum_{j\in[K]^+} \theta_{0,j} f_j\Big) + \theta_{1,0}.$
Theorem 1 is a consequence of the following lemmas:
Proof. (of Lemma 5.1) For simplicity of notation, let $g(x) = \Theta_{(0)}(f_1, \dots, f_K)$, hence $g(x) = \sum_{j\in[K]^+} \theta_j f_j(x_j)$. Also recall that $L_{\vec\theta}(x; g) = \sum_{i=1}^N \big(g(x^{(i)}) - y^{(i)}\big)^2$. To prove the existence of the minimizer, it is enough to solve the critical-point equations, since the objective function is quadratic. That is, to solve the set of equations
$\nabla_{\vec\theta} L(x; g) = \left( \frac{\partial L}{\partial \theta_0}, \dots, \frac{\partial L}{\partial \theta_K} \right)^T = (0, \dots, 0)^T,$
where for each $s, t \in [K]^+$,
$\frac{\partial L}{\partial \theta_s} = 2\sum_{i=1}^N \big(g(x^{(i)}) - y^{(i)}\big) f_s(x^{(i)}) = 2\sum_{i=1}^N \Big( \sum_{j\in[K]^+} \theta_j f_j(x_j^{(i)}) - y^{(i)} \Big) f_s(x^{(i)}) = 2\Big( \sum_{j\in[K]^+} \theta_j \langle \vec f_s, \vec f_j \rangle - \langle \vec f_s, \vec y \rangle \Big).$
Hence, solving $\nabla_{\vec\theta} L(x; g) = \vec 0$ is equivalent to solving $\big[\langle \vec f_s, \vec f_t \rangle\big]_{s,t\in[K]^+} \times [\theta_t]_{t\in[K]^+} = \big[\langle \vec f_s, \vec y \rangle\big]_{s\in[K]^+}$, where $\big[\langle \vec f_s, \vec f_t \rangle\big]_{s,t\in[K]^+}$ is a $(K+1) \times (K+1)$ matrix, and $[\theta_t]_{t\in[K]^+}$ and $\big[\langle \vec f_s, \vec y \rangle\big]_{s\in[K]^+}$ are both $(K+1)$-dimensional vectors.

Note that linear independence of $\{\vec f_j\}_{j\in[K]^+}$ makes $\big[\langle \vec f_s, \vec f_t \rangle\big]_{s,t\in[K]^+}$ a positive-definite Gram matrix (Horn & Johnson, 2012), which means the inverse $\big[\langle \vec f_s, \vec f_t \rangle\big]^{-1}_{s,t\in[K]^+}$ exists. Then $\vec\theta$ is solved:

$[\theta_t]_{t\in[K]^+} = \big[\langle \vec f_s, \vec f_t \rangle\big]^{-1}_{s,t\in[K]^+} \times \big[\langle \vec f_s, \vec y \rangle\big]_{s\in[K]^+}$ (2)

The above shows the existence of the critical points. On the other hand, since $L_{\vec\theta}(x; g)$ is a summation of square terms, i.e., a paraboloid, the critical points can only be minima.
The meaning of the gradient on a function surface is the direction that increases the function value most efficiently. Hence, if the gradient is not the zero vector, then the corresponding point cannot be the minimizer of the function surface. Recall that for any $s \in [K]$,
$\left[\frac{\partial L}{\partial \theta_t}\right]_{t\in[K]^+} \Big|_{\vec\theta=\vec e_s} = 2\left[\langle \vec f_s - \vec y, \vec f_t \rangle\right]_{t\in[K]^+}.$

Before the proof of Lemma 5.2, we need an upper bound on the probability of certain events. Note that $\vec y$ is defined by the given training data, and for each $j \in [K]^+$ the length of $\vec f_j - \vec y$, i.e., $\|\vec f_j - \vec y\|$, is also given. The question is: for fixed $\vec y$, what is the probability that the selected $\vec f$ is perpendicular to $\vec f - \vec y$? A folklore approach is to assume that $\vec f = (f(x^{(1)}), \dots, f(x^{(N)}))$ obeys the normal distribution, setting the mean of $f(x^{(i)})$ to $y^{(i)}$ for each $i \in [N]$. In the following we propose another simple probability argument to obtain a loose upper bound.
Proof. (of Lemma 5.3) Observe that $\langle \vec f - \vec y, \vec f \rangle = 0 \Leftrightarrow (\vec f - \vec y) \perp \vec f$, which implies that the angle between them, $\angle_{(\vec f - \vec y), \vec f}$, is in the interval $\left[\frac{\pi-\epsilon}{2}, \frac{\pi+\epsilon}{2}\right]$ for small $\epsilon \in \mathbb{R}^+$, as shown in the left part of Figure 2. The red, orange, and blue vectors show three possible pairs of $\vec f$ and $\vec f - \vec y$. The length of $\vec f - \vec y$ is fixed since $\vec f$ and $\vec y$ are given, but the angle between $\vec f - \vec y$ and $\vec y$ decides whether $(\vec f - \vec y) \perp \vec f$. The gray circle collects all possible end-points of the vector $\vec f - \vec y$ emitted from the end-point of $\vec y$. Although on the whole circle there are exactly two specific angles³ satisfying $(\vec f - \vec y) \perp \vec f$, we give a loose small interval with respect to $\pi$. In particular, we set $0 < \epsilon < e^{-N}$. Then
$\Pr_{\vec f \in \mathcal{F}(\vec y, L_f)} \left\{ \angle_{(\vec f - \vec y), \vec f} = \frac{\pi}{2} \right\} \le \Pr_{\vec f \in \mathcal{F}(\vec y, L_f)} \left\{ \frac{\pi - \epsilon}{2} \le \angle_{(\vec f - \vec y), \vec f} \le \frac{\pi + \epsilon}{2} \right\} = \frac{\epsilon}{\pi} < \frac{1}{\pi e^N}.$

Now we are ready to prove Lemma 5.2.

³ That is, two points on the circumference, which is in fact of measure zero over all possible angles $[0, 2\pi)$.
Proof. (of Lemma 5.2) For convenience, we denote by $A$ the event that at least one of the $\vec e_j \in B_K$ is the minimizer of $L(\vec\theta)$. Then
$I_L(\vec\theta) = 1 \Leftrightarrow$ the event $A$ is true
$\Rightarrow \left[\frac{\partial L}{\partial \theta_t}\right]_{t\in[K]^+} \Big|_{\vec\theta=\vec e_s} = 2\left[\langle \vec f_s - \vec y, \vec f_t \rangle\right]_{t\in[K]^+} = \vec 0$ for some $s \in [K]^+$
$\Rightarrow \big(\langle \vec f_1 - \vec y, \vec f_1 \rangle = 0 \wedge \langle \vec f_1 - \vec y, \vec f_2 \rangle = 0 \wedge \dots \wedge \langle \vec f_1 - \vec y, \vec f_K \rangle = 0\big)$ or $\dots$ or $\big(\langle \vec f_K - \vec y, \vec f_1 \rangle = 0 \wedge \dots \wedge \langle \vec f_K - \vec y, \vec f_K \rangle = 0\big)$.
Hence, for given $\vec y$ and $L(f_j) = \|\vec f_j - \vec y\|^2$, $\forall j \in [K]^+$, we have
$\Pr\{I_L(\vec\theta) = 1\} \le \sum_{j\in[K]^+} \Pr\{\langle \vec f_j - \vec y, \vec f_1 \rangle = 0 \wedge \dots \wedge \langle \vec f_j - \vec y, \vec f_K \rangle = 0\} \le K \cdot \Pr\{\langle \vec f_1 - \vec y, \vec f_1 \rangle = 0\} < \frac{K}{\pi e^N},$
where the second inequality is based on the symmetry between $\vec f_s$ and $\vec f_t$ for any $s, t \in [K]^+$, and the last inequality is by Lemma 5.3.
Proof. (of Theorem 3) We start from a simple case.
Claim: $\exists \beta \in \mathbb{R}$ s.t. $\sum_{i\in[N]} (f_1(x_i) - y_i)^2 - \sum_{i\in[N]} (f_1(x_i) + \beta f_2(x_i) - y_i)^2 > 0$.
Proof of the Claim:
$\sum_{i\in[N]} (f_1(x_i) - y_i)^2 - \sum_{i\in[N]} (f_1(x_i) + \beta f_2(x_i) - y_i)^2 = \sum_{i\in[N]} \left[ (f_1(x_i) - y_i)^2 - (f_1(x_i) + \beta f_2(x_i) - y_i)^2 \right] = -\Big(\sum_{i\in[N]} f_2(x_i)^2\Big)\beta^2 + 2\Big(\sum_{i\in[N]} \big(f_2(x_i)y_i - f_2(x_i)f_1(x_i)\big)\Big)\beta.$
Observe that the above is a quadratic in $\beta$ with a negative leading coefficient. Hence, to obtain the maximum of the difference, we can set
$\beta = \frac{\sum_{i\in[N]} \big(f_2(x_i)y_i - f_2(x_i)f_1(x_i)\big)}{\sum_{i\in[N]} f_2(x_i)^2} = \frac{\langle \vec y - \vec f_1, \vec f_2 \rangle}{\langle \vec f_2, \vec f_2 \rangle}.$
Note that if $\langle \vec y - \vec f_1, \vec f_2 \rangle = 0$, then the last pre-trained component need not be added, and we aim to bound the probability of this case. Observe that $\langle \vec y - \vec f_1, \vec f_2 \rangle = 0 \Leftrightarrow (\vec y - \vec f_1) \perp \vec f_2$. This condition differs from the previous Lemma: here we have to upper-bound the probability of $(\vec y - \vec f_1) \perp \vec f_2$ for given $\vec f_1$ and $\vec y$. As shown in the left part of Figure 2, the angle between $\vec f_2$ and $\vec y$ must lie in a specific interval, say $\left[\frac{\pi-\epsilon}{2}, \frac{\pi+\epsilon}{2}\right]$ for small $\epsilon \in \mathbb{R}^+$. To be concrete, we set $0 < \epsilon < e^{-N}$. Then
$\Pr_{\vec f \in \mathcal{F}(\vec y, 1)} \left\{ (\vec y - \vec f_1) \perp \vec f_2 \right\} \le \frac{\epsilon}{\pi} < \frac{1}{\pi e^N}.$
The general case can be reduced to the above claim by considering $g_{K-1}$ as $f_1$ and $\theta_K f_K$ as $\beta f_2$. Furthermore, since there are $K$ possible choices for the last pre-trained component, the probability is upper-bounded by $\frac{K}{\pi e^N}$.
8.2 FUNCTION COMPOSITION BY NON-LINEAR ACTIVATION
Proof. (of Lemma 5.4) Although the lemma is an existence statement, we give a constructive proof here. By setting $D = 1$ in Proposition 8.1, we know that for the Logistic $S(z)$ and $0 < \epsilon < 1/1000$, the degree-one Taylor approximation is $A_{S,1}(z) = \frac{1}{2} + \frac{1}{4}z$ with remainder $|R_{S,2}(z)| < \epsilon$. Define $M := 10 \cdot \max_{j\in[K]^+, i\in[N]} \{|f_j(x_i)|\}$. Hence, by setting $z = \frac{f_j(x_j)}{M}$, we have $\left| S\left(\frac{f_j(x_j)}{M}\right) - \frac{1}{2} - \frac{f_j(x_j)}{4M} \right| < \epsilon$. This means that for the given $j \in [K]$ we take $\theta_j = \frac{1}{M}$ and $\theta_{j'} = 0$ for all $j' \neq j$, with $\theta_0$ absorbing the constant term $\frac{1}{2}$. Furthermore, $\alpha = 4M$.

This lemma implies that $S(z)$ can approximate a linear function arbitrarily well on an interval of non-zero length; hence, if scaling of $\vec\theta$ is allowed, then Theorem 1 can be applied.
Corollary 8.1. If the activation function is the Logistic $S(z)$ and $\{f_j(x_j)\}_{j=1}^K$ satisfies LIC, then with high probability there is a vector $\vec\theta$ s.t. $L_{\vec\theta}\big(x; \sigma \circ \Theta_{(0)}(f_1, \dots, f_K)\big) < \min_{j\in[K]^+} L(f_j(x_j))$.

Proof. Set the $\epsilon$ of the above Lemma to $\frac{\epsilon^*}{3N}$; then the previous Lemma shows that there exists $\vec\theta$ which maps $\{f_j\}$ into $Z_{\sigma\circ\Theta_{(0)},1,\epsilon^*/3N}$. Since the output of $\sigma \circ \Theta_{(0)}$ is then a linear function with error at most $\frac{\epsilon^*}{3N}$, we have the same conclusion.
Proof. (of Lemma 5.5) Let $g(x) = \sigma \circ \Theta_{(0)}(f_1, \dots, f_K)(x)$ for short. First observe that $|g(x) - f_j(x)| < \frac{\epsilon^*}{3N} \Rightarrow \forall i \in [N],\ g(x^{(i)}) - f_j(x^{(i)}) > -\frac{\epsilon^*}{3N}$. Then
$\sum_{i\in[N]} \big( g(x^{(i)}) - y^{(i)} \big) = \sum_{i\in[N]} \left\{ \big(g(x^{(i)}) - f_j(x^{(i)})\big) + \big(f_j(x^{(i)}) - y^{(i)}\big) \right\} > N \cdot \left(-\frac{\epsilon^*}{3N}\right) + \epsilon^* = \frac{2\epsilon^*}{3} > 0,$
where the last inequality uses the NPC assumption. On the other hand, it can be calculated that
$\nabla_{\vec\theta} L(x; g)\big|_{\vec\theta=\vec\theta_{\epsilon^*/3}} = \left( \frac{\partial L}{\partial w_0}, \frac{\partial L}{\partial \theta_1}, \dots, \frac{\partial L}{\partial \theta_K}, \frac{\partial L}{\partial b_0} \right)^T \Big|_{\vec\theta=\vec\theta_{\epsilon^*/3}},$
where $[a]^T$ is the transpose of the matrix $[a]$. Also note that $\frac{\partial L}{\partial w_0}\big|_{\vec\theta=\vec\theta_{\epsilon^*/3}} = 2 \sum_{i\in[N]} \big(g(x^{(i)}) - y^{(i)}\big) \cdot S(z)$ and $\frac{\partial L}{\partial b_0}\big|_{\vec\theta=\vec\theta_{\epsilon^*/3}} = 2 \sum_{i\in[N]} \big(g(x^{(i)}) - y^{(i)}\big) \cdot 1$. Since $\sum_{i\in[N]} \big(g(x^{(i)}) - y^{(i)}\big) > 0$, we can conclude $\nabla_{\vec\theta} L(x; g)\big|_{\vec\theta=\vec\theta_{\epsilon^*/3}} \neq \vec 0$.
Proof. (of Lemma 5.6) By the previous Lemma, it is valid to consider the best-performing component $f_{j^*}$, i.e., $L(f_{j^*}) = \min_{j\in[K]^+} L(f_j(x_j))$. Since $\nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3}) \neq \vec 0$, by the definition of the gradient, moving along the direction $-\nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3}) / \|\nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3})\|$ with a step size $\alpha > 0$ must strictly decrease the value of $L_{\vec\theta}(x; g)$. W.l.o.g., we can assume this $\alpha$ is optimal; that is, if $\alpha > r > 0$ then $L(\vec\theta_{\epsilon^*/3}) > L\big(\vec\theta_{\epsilon^*/3} - r \cdot \nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3}) / \|\nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3})\|\big)$, while if $r = \alpha + \delta$ for some $\delta > 0$ then $L(\vec\theta_{\epsilon^*/3}) \le L\big(\vec\theta_{\epsilon^*/3} - r \cdot \nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3}) / \|\nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3})\|\big)$. The issue is how to find a proper step size $r > 0$. We consider the line-search approach:
$r^* = \arg\min_{r\in\mathbb{R}} L\left( \vec\theta_0 - r \cdot \frac{\nabla_{\vec\theta} L(\vec\theta_0)}{\|\nabla_{\vec\theta} L(\vec\theta_0)\|} \right).$
This outputs $r^*$, and then we can make sure that $L(\vec\theta_0) > L\big(\vec\theta_0 - r^* \cdot \nabla_{\vec\theta} L(\vec\theta_0) / \|\nabla_{\vec\theta} L(\vec\theta_0)\|\big)$. Since the underlying $\vec\theta_0$ is $\vec\theta_{\epsilon^*/3}$, whose loss essentially ties that of the best component, the point $\vec\theta_0 - r^* \cdot \nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3}) / \|\nabla_{\vec\theta} L(\vec\theta_{\epsilon^*/3})\|$ fits our goal (beating the best one).
Combining these lemmas yields the conclusion of Theorem 2.
Corollary 8.2. The process of Lemma 5.5 converges.
Proof. (of 8.2) It is known that if a monotonically decreasing sequence is bounded below, then the sequence converges. Repeating the process of Lemma 5.6, we can obtain a strictly decreasing sequence: $L(\vec\theta_0) > L(\vec\theta_1) > L(\vec\theta_2) > \dots$. Note that $\forall i, L(\vec\theta_i) \ge 0$. This means the sequence is monotonically decreasing and bounded below, so theoretically it converges by the monotone convergence theorem of mathematical analysis. Algorithmically, the gradient-descent-based sequence-finding process stops at some term with $\nabla_{\vec\theta} L(\vec\theta') = 0$, which is a (local) minimum or a saddle point.
Corollary 8.3. If the assumptions in Theorem 2 are satisfied, then with height probability there exists ~θ′ s.t. L~θ′ is a (local) minimum or a saddle point, while L~θ′ (x; g) < minj∈[K]+ |L(fj(xj))| still holds.
Assume the pre-trained component set {fj(xj)}Kj=1 satisfies both NPC and LIC, then there exists ~θ s.t. L~θ (x; g) < minj∈[K]+ L(fj(xj)).
In the previous proof, the critical properties of activations are local linearity and differentiability. Hence, it is not hard to check that if we replace σ(·) in Eq. (1) with other common activations, the conclusion still holds. By local linearity we mean that on a non-zero length interval in its domain, the function can approximate the linear mapping as well as possible.
Corollary 8.4. Theorems 1 and 2 apply to any activation with local linearity and differentiability.
Based on Corollary 5.1 and Lemma 5.6, it is natural to obtain the process of finding $\vec{\theta}$: by gradient descent or by the closed form of Corollary 5.1. We can compute the optimal weights for the bottom Sigmoid block. On the other hand, after random initialization, the parameters of un-trained components in the ReLU or Tanh blocks are assigned. This implies they can be treated as the all-pre-trained case in Theorem 1 or 2. In fact, given the outputs from the bottom-level block, Corollary 5.1 provides weights improving the accuracy. The process then moves to the next level up, block by block until the top, which mirrors the forward steps of back-propagation (LeCun et al., 1988). Hence, with initialization for the un-trained components, Corollary 5.1 is essentially the same as back-propagation.
8.3 A MIX OF PRE-TRAINED AND UN-TRAINED COMPONENTS
Now we first consider the case where some of $\{f_{\Theta_j}(x_j)\}_{j=1}^{K}$ are pre-trained and some are un-trained, and then investigate the hierarchical combination of both kinds of components. In particular, Eq. (1) can be re-written as $g(x) = w_0\cdot\sigma(\theta_1 f_1 + \theta_2 f_{\Theta_2}) + b_0$, where $f_1$ is a pre-trained component and $f_{\Theta_2}$ is un-trained. Since $\Theta_2$ is not fixed, it cannot be checked whether the LIC and NPC assumptions are satisfied. On the other hand, after initialization, $f_{\Theta_2}$ can be seen as a pre-trained component at any snapshot during the training phase.
Theorem 5. At the end of a weight-updating iteration, if the components $f_1$ and $f_{\Theta_2}$ satisfy the LIC and NPC assumptions, then with high probability $\vec{w}$ updated in the next iteration can improve the loss.
Proof. Recall that the training algorithm is the back-propagation algorithm. Also note that, according to Eq. (1), the order of updating is $\vec{\theta}$ first and then $\Theta_2$. We denote the values of $\vec{\theta}$ and $\Theta_2$ at the end of iteration $i$ as $\vec{\theta}^{(iter=i)}$ and $\Theta_2^{(iter=i)}$, respectively. With randomized initialization, $\Theta_2$ is assigned as $\Theta_2^{(iter=0)}$ before the execution of iteration 1. Then in each iteration $i \geq 1$, $g(x)$ is a combination of fixed-parameter components. Hence this reduces to the all-pre-trained case, and Theorems 1 and 2 apply.
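A rough sketch of the alternating view used in this proof (the toy components, data, and numerical gradients below are our assumptions, not the authors' code): within each iteration, the un-trained component is momentarily frozen while the combination weights take a step, reducing the update to the all-pre-trained case, and only then does the inner parameter move.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
x = rng.normal(size=N)
y = np.tanh(2 * x) + 0.05 * rng.normal(size=N)

f1 = np.tanh(x)                        # pre-trained component (frozen outputs)
def f2(a):                             # un-trained component with free parameter a
    return np.tanh(a * x)

def g(theta, a):
    w0, t1, t2, b0 = theta
    return w0 / (1 + np.exp(-(t1 * f1 + t2 * f2(a)))) + b0

def loss(theta, a):
    return np.sum((g(theta, a) - y) ** 2)

theta, a, lr, eps = np.array([1.0, 1.0, 1.0, 0.0]), 0.5, 1e-3, 1e-6
for it in range(200):
    # step 1: update combination weights with f2(a) frozen (the "all pre-trained" case)
    gt = np.array([(loss(theta + eps * e, a) - loss(theta - eps * e, a)) / (2 * eps)
                   for e in np.eye(4)])
    theta = theta - lr * gt
    # step 2: update the un-trained component's parameter
    ga = (loss(theta, a + eps) - loss(theta, a - eps)) / (2 * eps)
    a = a - lr * ga
print(loss(theta, a))
```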
Lemma 8.1. For a given data set $X$, let $\vec{g} := (g_1,\dots,g_N)$ and $\vec{y} := (y_1,\dots,y_N)$. If $\langle\vec{g},\vec{g}-\vec{y}\rangle \neq 0$, then there exists $\alpha\in\mathbb{R}$ s.t. $\sum_{i\in[N]}(\alpha g(x_i) - y_i)^2 < \sum_{i\in[N]}(g(x_i) - y_i)^2$.
Proof. It is equivalent to show that the inequality
$$\sum_{i\in[N]}(\alpha g(x_i) - y_i)^2 - \sum_{i\in[N]}(g(x_i) - y_i)^2 < 0$$
has a real-number solution. Expanding,
$$\sum_{i\in[N]}\left[(\alpha g(x_i) - y_i)^2 - (g(x_i) - y_i)^2\right] = \left(\sum_{i\in[N]} g(x_i)^2\right)\alpha^2 + \left(-2\sum_{i\in[N]} g(x_i)y_i\right)\alpha + \left(-\sum_{i\in[N]} g(x_i)^2 + 2g(x_i)y_i\right) = \langle\vec{g},\vec{g}\rangle\alpha^2 + (-2\langle\vec{g},\vec{y}\rangle)\alpha + (-\langle\vec{g},\vec{g}\rangle + 2\langle\vec{g},\vec{y}\rangle).$$
This is a quadratic inequality in $\alpha$; hence if
$$(-2\langle\vec{g},\vec{y}\rangle)^2 - 4(\langle\vec{g},\vec{g}\rangle)(-\langle\vec{g},\vec{g}\rangle + 2\langle\vec{g},\vec{y}\rangle) \geq 0,$$
then there exists at least one real solution. Indeed, this discriminant equals $4(\langle\vec{g},\vec{g}\rangle - \langle\vec{g},\vec{y}\rangle)^2 = 4\langle\vec{g},\vec{g}-\vec{y}\rangle^2$, which is strictly positive under the assumption, so the quadratic attains negative values.
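The minimizer of this quadratic is $\alpha = \langle\vec{g},\vec{y}\rangle/\langle\vec{g},\vec{g}\rangle$, and the Lemma's claim can be checked numerically; the snippet below is a small verification script of ours on random data.

```python
import numpy as np

rng = np.random.default_rng(2)
g = rng.normal(size=100)               # outputs of a fixed predictor g on the dataset
y = 0.8 * g + 0.2 * rng.normal(size=100)

if np.dot(g, g - y) != 0:              # condition of Lemma 8.1
    alpha = np.dot(g, y) / np.dot(g, g)   # minimizer of the quadratic in alpha
    assert np.sum((alpha * g - y) ** 2) < np.sum((g - y) ** 2)
    print(alpha)
```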
8.4 GENERALIZATION ERROR ANALYSIS
Theorem 4. Assume the pre-trained components $\{f_j\}_{j=1}^{K}$ satisfy LIC and NPC. Let $\{GE(f_j)\}_{j=1}^{K}$ be the corresponding generalization errors of $\{f_j\}_{j=1}^{K}$, and let $\Theta_{(L)}\circ\sigma_{(L)}\circ\cdots\circ\sigma_{(1)}\circ\Theta_{(0)}(f_1,\dots,f_K)$ be the composite neural network. Denote the generalization error $E\{L(\Theta_{(L)}\circ\sigma_{(L)}\circ\cdots\circ\sigma_{(1)}\circ\Theta_{(0)}(f_1,\dots,f_K))\}$ of the composite neural network as $E\{L_{\Theta,f_1,\dots,f_K}\}$. Suppose the learned weights obey the normal distribution. Then with high probability, there exists a setting of $\{\Theta^*_{(L)},\dots,\Theta^*_{(0)}\}$ such that $E\{L_{\Theta,f_1,\dots,f_K}\} \leq \Theta^*_{(L)}(GE(f_1),\dots,GE(f_K))$.
Proof. (of Theorem 4) (Proof Sketch) We apply an idea similar to Kawaguchi (2016): the expectation for non-linear activations is the same as the expectation for linear activations. The previous theorems provide that, with high probability, there exists a solution $\Theta_{(i)}$, $\forall i\in[L]^+$, s.t. each $\Theta_{(i+1)}\sigma\Theta_{(i)}$ approximates a degree-one polynomial $A_{\Theta_{(i+1)}\sigma\Theta_{(i)},1}$ arbitrarily well. If the weights obey the normal distribution, then $E\{L_{\Theta,f_1,\dots,f_K}\} \leq \Theta^*_{(L)}(GE(f_1),\dots,GE(f_K))$.
2. What are the key contributions of the paper, particularly in regards to function composition?
3. Are there any concerns regarding the novelty of the results presented in the paper?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any questions or areas of confusion regarding the paper's methodology or conclusions? | Review | Review
This paper studies composite neural network performance from a function composition perspective. In Theorems 1, 2 and 3, the authors essentially prove that as the number of basis functions (pre-trained components) increases (satisfying the LIC condition), more vectors/objects can be represented by the basis.
To me, this is a very straightforward result. As the basis increases while the LIC condition is satisfied, we can of course represent more objects (the new component is one of them). I don't see any novelty here. The result is straightforward, and this should be a clear rejection.
ICLR | Title
An Analysis of Composite Neural Network Performance from Function Composition Perspective
Abstract
This work investigates the performance of a composite neural network, which is composed of pre-trained neural network models and non-instantiated neural network models, connected to form a rooted directed graph. A pre-trained neural network model is generally a well trained neural network model targeted for a specific function. The advantages of adopting such a pre-trained model in a composite neural network are two folds. One is to benefit from other’s intelligence and diligence, and the other is saving the efforts in data preparation and resources and time in training. However, the overall performance of composite neural network is still not clear. In this work, we prove that a composite neural network, with high probability, performs better than any of its pre-trained components under certain assumptions. In addition, if an extra pre-trained component is added to a composite network, with high probability the overall performance will be improved. In the empirical evaluations, distinctively different applications support the above findings.
1 INTRODUCTION
Deep learning has been a great success in dealing with natural signals, e.g., images and voices, as well as artifact signals, e.g., natural language, while it is still in the early stage of handling sophisticated social and natural applications shaped by very diverse factors (e.g., stock market prediction) or resulting from complicated processes (e.g., pollution level prediction). One of the distinctive features of such complicated applications is that their applicable data sources are boundless. Consequently, their solutions need frequent revisions. Although neural networks can approximate arbitrary functions as closely as desired (Hornik, 1991), the major reason no such competent neural networks exist for those complicated applications is that the problems are hardly fully understood and their applicable data sources cannot be identified all at once. By far the most common practice is that developers pick a seemingly suitable neural network with available data and hope for the best. The apparent drawbacks, besides the performance, are the lack of flexibility in accommodating newly emerging data sources, better problem decomposition, and the opportunity of employing proven efforts from others. On the other hand, some adopt a composition of several neural network models, based on function composition using domain knowledge.
An emerging trend of deep learning solution development is to employ well-crafted pre-trained neural networks (i.e., neural network models with instantiated weights), especially as components in a composite neural network model. The most popular pre-trained neural network models are well fine-tuned with adequate training data and made available to the public, either for free or as commercial products. During the training phase of a composite neural network, the weights of pre-trained models are frozen to maintain their good quality and save training time, while the weights of their outgoing edges are trainable. In some cases, as in transfer learning, the weights of the pre-trained neural network are used as initial values in the training phase of the composite neural network. It is intuitive that a composite neural network should perform better than any of its components. Ensemble learning (Freund & Schapire, 1997; Zhou, 2012) and transfer learning (Galanti et al., 2016) have had great success and are popular when pre-trained models are considered. However, the following example shows some aspects missed by these two methods and calls for more complicated composite functions.
Example 1. Assume there is a set of locations indexed as $X = \{(0,0), (0,1), (1,0), (1,1)\}$ with the corresponding values $Y = (0,1,1,0)$. Obviously, the observed function is the XOR (Goodfellow et al., 2016). Now consider three models: $f_1(x_1,x_2) := x_1$, $f_2(x_1,x_2) := x_2$, and $f_3(x_1,x_2) := x_1x_2$. Their corresponding output vectors are $(0,0,1,1)$, $(0,1,0,1)$, $(0,0,0,1)$, with bit-wise accuracy 50%, 50%, 25%, respectively. This means that the AdaBoost algorithm will exclude $f_1$ and $f_2$ from the ensemble since their coefficients are $\frac{1}{2}\ln\frac{1-50\%}{50\%} = 0$. On the other hand, in transfer learning, $f_3$ is fine-tuned by applying the gradient descent method with respect to the $L_2$ loss on $w f_3 = w x_1 x_2$ to transfer the source task distribution to that of the target task. The result comes to $w = 0$, and $f_3$ is excluded. Now consider $g_1(x_1,x_2) = \alpha_1 f_1 + \alpha_2 f_2$ and apply the back-propagation method with respect to the $L_2$ loss. The results are $\alpha_1 = \alpha_2 = \frac{1}{3}$, with loss $\frac{4}{3}$. If we further define $g_2(x_1,x_2) = w_1 g_1 + w_2 f_3$, back-propagation yields $g_2 = 3g_1 - 2f_3 = x_1 + x_2 - 2x_1x_2$ with the output $(0,1,1,0)$. The final $g_2$ computes $Y$ with loss 0. This example shows the power of composite functions; the snippet below verifies the arithmetic.
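The following NumPy snippet (our own verification, not part of the original experiments) solves the two least-squares problems of Example 1 and reproduces $\alpha_1 = \alpha_2 = 1/3$ with loss $4/3$, and the exact fit of $g_2$.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])                       # XOR targets
f1, f2, f3 = X[:, 0], X[:, 1], X[:, 0] * X[:, 1]

# g1 = a1*f1 + a2*f2: least squares gives a1 = a2 = 1/3, loss 4/3
A = np.stack([f1, f2], axis=1)
a, *_ = np.linalg.lstsq(A, y, rcond=None)
print(a, np.sum((A @ a - y) ** 2))               # [0.333.. 0.333..] 1.333..

# g2 = w1*g1 + w2*f3: least squares gives w = (3, -2), i.e. x1 + x2 - 2*x1*x2
B = np.stack([A @ a, f3], axis=1)
w, *_ = np.linalg.lstsq(B, y, rcond=None)
print(w, B @ w)                                  # [ 3. -2.] [0. 1. 1. 0.]
```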
Composite Neural Network. In transfer learning, how to overcome negative transfer (a phenomenon in which a pre-trained model has a negative impact on the target task) is an important issue (Seah et al., 2013). In ensemble learning, it is well known that adding more pre-trained models does not always yield better accuracy for the ensemble (Zhou et al., 2002). Furthermore, Opitz & Maclin (1999) pointed out that an ensemble built by boosting often has lower accuracy than a single pre-trained model in the case of neural networks. In the unsupervised learning context, some experimental research concludes that although layer-wise pre-training can be significantly helpful, on average it is slightly harmful (Goodfellow et al., 2016). These empirical observations suggest that, despite the success of ensemble learning and transfer learning, the conditions under which a composite neural network performs better are unclear, especially in the deep neural network training process. The topology of a composite neural network can be represented as a rooted directed graph. For instance, an ensemble can be represented as a 1-level graph, while a composite neural network with several pre-trained models, each designed to solve a certain problem, corresponds to a more complicated graph. It is desirable to discover a mathematical theory, in addition to employing domain knowledge, to construct a composite neural network with guaranteed overall performance. In this work, we investigate the mathematical theory ensuring that the overall performance of a composite neural network is better than that of any of its pre-trained components, regardless of the way of composition, to allow deep learning application developers great freedom in constructing high-performance composite neural networks.
Contributions. In this work, we prove that a composite neural network, with high probability, performs better than any of its pre-trained components under certain assumptions. In addition, if an extra pre-trained component is added to a composite network, with high probability the overall performance will be improved. In the empirical evaluations, distinctively different applications support the above findings.
2 PRELIMINARIES
In this section, we introduce notation and definitions for composite neural networks. Parameters $N$, $K$, $d$, $d_j$, $d_{j_1}$, and $d_{j_2}$ are positive integers. Denote $\{1,\dots,K\}$ as $[K]$ and $[K]\cup\{0\}$ as $[K]^+$. Let $\sigma: \mathbb{R}\to\mathbb{R}$ be a differentiable activation function, such as the Logistic function $\sigma(z) = 1/(1+e^{-z})$ or the hyperbolic tangent $\sigma(z) = (e^z - e^{-z})/(e^z + e^{-z})$. For simplicity of notation, we sometimes abuse $\sigma$ as a vector-valued function. A typical one-hidden-layer neural network can be formally presented as $w_{1,1}\,\sigma\left(\sum_{i=1}^{d} w_{0,i}x_i + w_{0,0}\right) + w_{1,0}$. We abbreviate it as $f_{\sigma,W}(x)$, where $W$ is the matrix defined by $w_{1,1}, w_{1,0}, \dots, w_{0,1}, w_{0,0}$. Recursively applying this representation yields neural networks with more hidden layers. If there is no ambiguity about the activation function, it can be skipped, as in $f_W(x)$. Now assume a set of neural networks $\{f_{W_j}(x_j)\}_{j=1}^{K}$ is given, where $W_j$ is the real matrix defining the neural network $f_{W_j}: \mathbb{R}^{d_{j_1}\times d_{j_2}} \to \mathbb{R}^{d_j}$, and $x_j\in\mathbb{R}^{d_{j_1}\times d_{j_2}}$ is the input matrix of the $j$th neural network. For different $f_{W_j}$, the corresponding $d_j$, $d_{j_1}$ and $d_{j_2}$ can differ. For each $j\in[K]$, let $D_j = \{(x_j^{(i)}, y_j^{(i)}) \in \mathbb{R}^{(d_{j_1}\times d_{j_2})\times d_j}\}_{i=1}^{N}$ be a set of labeled data (for the $j$th neural network). For each $i\in[N]$, let $x^{(i)} = (x_1^{(i)},\dots,x_K^{(i)})$, $y^{(i)} = (y_1^{(i)},\dots,y_K^{(i)})$, and $D = \{(x^{(i)}, y^{(i)})\}_{i=1}^{N}$.
For a pre-trained model (component), we mean $W_j$ is fixed after its training process, and we then denote $f_{W_j}$ as $f_j$ for simplicity. On the other hand, a component $f_{W_j}$ is non-instantiated if $W_j$ is still free. A deep feedforward neural network is a hierarchical acyclic graph, i.e., a directed tree. In this viewpoint, a feedforward neural network can be presented as a series of function compositions. For given $\{f_{W_j}(x_j)\}_{j=1}^{K}$, we assume $\theta_j\in\mathbb{R}^{d_j}$, $j\in[K]$, which makes the product $\theta_j f_{W_j}(x_j)$ well-defined. Denote $f_0$ as the constant function 1; then the linear combination with a bias is defined as $\Theta(f_1,\dots,f_K) = \sum_{j\in[K]^+}\theta_j f_j(x_j)$. Hence, an $L$-layer neural network can be denoted as $\Theta_{(L)}\circ\sigma\circ\cdots\circ\Theta_{(0)}(x)$. A composite neural network defined by components $f_{W_j}(x_j)$ can be designed as a directed tree. For instance, the composite neural network $\sigma_2(\theta_{1,0} + \theta_{1,1}f_4(x_4) + \theta_{1,2}\,\sigma_1(\theta_{0,0} + \theta_{0,1}f_1(x_1) + \theta_{0,2}f_{W_2}(x_2) + \theta_{0,3}f_3(x_3)))$ can be denoted as $\sigma_2\circ\Theta_1(f_4, \sigma_1\circ\Theta_0(f_1, f_{W_2}, f_3))$, where $f_1$ and $f_3$ are pre-trained and $f_{W_2}$ is non-instantiated; a small code sketch of this example follows below. Note that in this work $D_j$ is the default training data of component $f_j$ of the composite neural network, but $D_j$ can be different from the training data that determined the frozen weights of the pre-trained $f_j$.
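To make the notation concrete, here is a minimal sketch (ours, with invented stand-in component functions) of the running example $\sigma_2\circ\Theta_1(f_4, \sigma_1\circ\Theta_0(f_1, f_{W_2}, f_3))$ written as nested Python callables.

```python
import numpy as np

sigmoid = lambda z: 1 / (1 + np.exp(-z))

# toy stand-ins for the components; f1, f3, f4 are pre-trained (frozen), fW2 is free
f1 = lambda x: np.tanh(x)
f3 = lambda x: x ** 2
f4 = lambda x: np.sin(x)
def make_fW2(W2):
    return lambda x: W2 * x

def Theta(theta, fs, x):
    # linear combination with bias: theta_0 + sum_j theta_j * f_j(x)
    return theta[0] + sum(t * f(x) for t, f in zip(theta[1:], fs))

def composite(x, theta0, theta1, W2):
    inner = sigmoid(Theta(theta0, [f1, make_fW2(W2), f3], x))          # sigma_1 ∘ Theta_0
    return sigmoid(theta1[0] + theta1[1] * f4(x) + theta1[2] * inner)  # sigma_2 ∘ Theta_1

print(composite(0.5, theta0=[0.0, 1.0, 1.0, 1.0], theta1=[0.0, 1.0, 1.0], W2=0.3))
```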
Let $\langle\vec{a},\vec{b}\rangle$ be the standard inner product of $\vec{a}$ and $\vec{b}$, and $\|\cdot\|$ the corresponding norm. For a composite neural network, the training algorithm is the gradient descent back-propagation algorithm and the loss function is the $L_2$-norm of the difference vector. In particular, for a composite neural network $g_{\vec{\theta}}$ the total loss on the data set $D$ is
$$L_{\vec{\theta}}\left(x; g_{\vec{\theta}}\right) = \langle\vec{g}_{\vec{\theta}}(x) - \vec{y},\, \vec{g}_{\vec{\theta}}(x) - \vec{y}\rangle = \|\vec{g}_{\vec{\theta}}(x) - \vec{y}\|^2 \quad (1)$$
This is in fact $\sum_{i=1}^{N}\left(g(x^{(i)}) - y^{(i)}\right)^2$. By the definition of $g_{\vec{\theta}}(\cdot)$, this total loss depends on the given data $x$, the components defined by $\{\Theta_j\}_{j=1}^{K}$, the output activation $\sigma$, and the weight vector $\vec{\theta}$. Similarly, let $L(f_j(x_j))$ be the loss function of a single component $f_j$. Our goal is to find a feasible $\vec{\theta}$ s.t. $L_{\vec{\theta}}(x; g) < \min_{j\in[K]} L(f_j(x_j))$.
3 PROBLEM SETTINGS AND RESULTS OVERVIEW
The problems considered in this work are as follows:
P1. What are the conditions that the pre-trained components must satisfy so that they can strictly improve the accuracy of the whole composition?
P2. Will more pre-trained components improve the accuracy of the whole composition?
Let $\vec{f}_j$ be the output vector of the $j$th pre-trained component, and $B_K$ be the set of standard unit vectors in $\mathbb{R}^K$.
A1. Linearly Independent Components (LIC) Assumption: $\forall t\in[K]$, $\nexists\{\beta_j\}\subset\mathbb{R}$ s.t. $\vec{f}_t = \sum_{j\in[K]\setminus\{t\}}\beta_j\vec{f}_j$.
A2. No Perfect Component (NPC) Assumption: $\min_{j\in[K]}\left\{\sum_{i\in[N]} f_j(x_j^{(i)}) - y^{(i)}\right\} > \epsilon^*$, where $\epsilon^* > 0$ is a constant.
Our results are as follows:
Theorem 1. Assume the set of components $\{f_j(x_j)\}_{j=1}^{K}$ satisfies LIC. Let $g$ be $\Theta(f_1,\dots,f_K)$. With probability at least $1 - \frac{K}{\pi e^N}$, there is a vector $\vec{\theta}\in\mathbb{R}^K\setminus B_K$ s.t. $L_{\vec{\theta}}(x;g) < \min_{j\in[K]}\{L(f_j(x_j))\}$.
Theorem 2. Assume the set of pre-trained components $\{f_j(x_j)\}_{j=1}^{K}$ satisfies both NPC and LIC, and let $g$ be $\sigma\circ\Theta(f_1,\dots,f_K)$. Then with probability at least $1 - \frac{K}{\pi e^N}$ there exists $\vec{\theta}$ s.t. $L_{\vec{\theta}}(x;g) < \min_{j\in[K]} L(f_j(x_j))$.
Theorem 3. Assume the set of components $\{f_j(x_j)\}_{j=1}^{K}$ satisfies LIC. Let $g_{K-1} = \Theta(f_1,\dots,f_{K-1})$ and $g_K = \Theta(f_1,\dots,f_K)$. With probability at least $1 - \frac{K}{\pi e^N}$, there is a vector $\vec{\theta}\in\mathbb{R}^K\setminus B_K$ s.t. $L_{\vec{\theta}}(x; g_K) < L_{\vec{\theta}}(x; g_{K-1})$.
Theorem 1, and 2 together answer Problem P1, and Theorem 3 answers Problem P2.
4 RELATED WORK
Our framework is related to, but not the same as, models such as transfer learning (Erhan et al., 2010; Kandaswamy et al., 2014; Yao & Doretto, 2010) and ensemble learning (Zhou, 2012).
Transfer Learning. Typically, transfer learning deals with two data sets with different distributions, the source and target domains. A neural network, such as an auto-encoder, is trained with source-domain data and the corresponding task, and then part of its weights are taken out and plugged into another neural network, which is trained with target-domain data and task. The transplanted weights can be kept fixed during the subsequent steps or left trainable for fine-tuning purposes (Erhan et al., 2010). For multi-source transfer, boosting-based algorithms are studied in (Yao & Doretto, 2010). Kandaswamy et al. (2014) proposed a method of cascading several pre-trained layers to improve performance. Transfer learning is considered a special case of the composite neural network, in which the transferred knowledge can be viewed as a pre-trained component.
Ensemble (Bagging and Boosting). Since Bagging needs to group data by sampling and Boosting needs to tune the probability of data (Zhou et al., 2002), these frameworks differ from the composite neural network setting. However, there are fine research results revealing many properties relevant to accuracy improvement (Džeroski & Ženko, 2004; Gashler et al., 2008; Zhou et al., 2002). For example, it is known that in the ensemble framework, low diversity between members can be harmful to the accuracy of their ensemble (Džeroski & Ženko, 2004; Gashler et al., 2008). In this work, we consider neural network training, not data processing.
Ensemble (Stacking). Among the ensemble methods, stacking is the most closely related to our framework. The idea of stacked generalization (Wolpert, 1992), in Wolpert's terminology, is to combine two levels of generalizers. The original data are taken by several level-0 generalizers, and then their outputs are concatenated as an input vector to the level-1 generalizer. According to the empirical study of Ting and Witten (1999), the probability of the outputs of level 0, instead of their values, is critical to accuracy. Besides, multi-linear regression is the best level-1 generalizer, and the non-negative weights restriction is necessary for regression problems but not for classification problems. Breiman (1996) restricts combination weights to be non-negative to prevent poor generalization error and concludes that the restriction that the sum of weights equals 1 is not necessary. In (Hashem, 1997), Hashem showed that linear dependence of components could be, but is not always, harmful to ensemble accuracy; in contrast, our work allows a mix of pre-defined and undefined components as well as negative weights to provide flexibility in solution design.
Recently Proposed Frameworks. In You et al. (2017), Shan You et al. proposed a student-teacher framework where the outputs of pre-trained teachers are averaged as the knowledge for the student network. A test-time combination of multiple trained predictors was proposed by Kim, Tompkin, and Richardt in Kim et al. (2017), where the combination weights are decided at test time. In the above frameworks, the usage of pre-trained neural networks generally improves the accuracy of their combination.
5 THEORETICAL ANALYSIS
This section provides analyses of the loss function of a composite neural network with the introduction of pre-trained components. For the complete proofs, please refer to the Supplementary Material. Observe that for given pre-trained components $\{f_j\}_{j=1}^{K}$, a composite neural network can be defined recursively by a postorder subtree search. For instance, $\sigma_2\circ\Theta_1(f_4, \sigma_1\circ\Theta_0(f_1,f_2,f_3))$ can be presented as $\sigma_2\circ\Theta_1(f_4, g_1)$ with $g_1 = \sigma_1\circ\Theta_0(f_1,f_2,f_3)$. Without loss of generality, we assume $d_j = d = 1$ for all $j\in[K]$ in the following proofs. We denote by $\vec{f}_j$ the vector $(f_j(x^{(1)}),\dots,f_j(x^{(N)}))$ of outputs of $f_j$ on the training data. Similarly, $\vec{y} := (y^{(1)},\dots,y^{(N)})$. Let $\vec{e}_j$ be a unit vector in the standard basis of $\mathbb{R}^K$ for each $j\in[K]$, i.e., $\vec{e}_1 = (1,0,\dots,0)$, $\vec{e}_2 = (0,1,0,\dots,0)$, etc. Let $B_K$ be the set containing all these standard unit-length basis vectors of $\mathbb{R}^K$.
Theorem 1. Assume the set of components $\{f_j(x_j)\}_{j=1}^{K}$ satisfies LIC. Let $g$ be $\Theta(f_1,\dots,f_K)$. With probability at least $1 - \frac{K}{\pi e^N}$, there is a vector $\vec{\theta}\in\mathbb{R}^K\setminus B_K$ s.t. $L_{\vec{\theta}}(x;g) < \min_{j\in[K]}\{L(f_j(x_j))\}$.
Proof. (Proof Sketch) The whole proof is split into Lemmas 5.1, 5.2, and 5.3. Note that $g(\cdot)$ is the linear combination of $\vec{\theta}$ and $\{f_j(x_j)\}_{j=1}^{K}$. It is well known (Friedman et al., 2001) that searching for the minimizer $\vec{\theta}$ of $L_{\vec{\theta}}$, i.e., solving a least-squares problem, is equivalent to finding the inverse of a matrix defined by $\{f_j(x_j)\}_{j=1}^{K}$. Since $\{f_j(x_j)\}_{j=1}^{K}$ satisfies LIC, the inverse matrix can be written down concretely, which proves the existence. Furthermore, if this minimizer $\vec{\theta}^*$ is not $\vec{e}_s$ for some $s\in[K]$, then $g_{\vec{\theta}^*}$ has lower loss than $f_s$. Lemma 5.3 argues that the probability of $\vec{\theta}^* = \vec{e}_s$ is at most the probability of the event $\langle\vec{f}-\vec{y},\vec{f}\rangle = 0$, where $\vec{f}$ is taken uniformly from the set of vectors of the same length as $\vec{f}-\vec{y}$.
The statements of the Lemmas needed by the previous Theorem are as follows. Lemma 5.1. There exists $\vec{\theta}\in\mathbb{R}^{K+1}$ s.t. $L_{\vec{\theta}}\left(x;\Theta_{(0)}(f_1,\dots,f_K)\right) \leq \min_{j\in[K]^+}\{L(f_j(x_j))\}$.
This Lemma establishes the existence of a solution of the inequality. But our goal is to find a solution such that the loss is strictly less than that of any pre-trained component.
Lemma 5.2. Denote by $I_{L_{\vec{\theta}}}$ the indicator variable of the event that at least one $\vec{e}_j\in B_K$ is the minimizer of $L_{\vec{\theta}}$. Then $\Pr\{I_{L_{\vec{\theta}}} = 1\} < \frac{K}{\pi e^N}$, i.e., $\Pr\{I_{L_{\vec{\theta}}} = 0\} \geq 1 - \frac{K}{\pi e^N}$.
Lemma 5.3. Define $F(\vec{y}, L(f)) = \left\{\vec{f}\in\mathbb{R}^N : \|\vec{f}-\vec{y}\|^2 = L_f\right\}$ for given $\vec{y}$ and $\vec{f}$. Then we have $\Pr_{\vec{f}\in F(\vec{y},L(f))}\left\{\langle\vec{f}-\vec{y},\vec{f}\rangle = 0\right\} < \frac{1}{\pi e^N}$.
The above Lemmas prove Theorem 1. The following corollary gives the closed form of the optimal weights.
Corollary 5.1. The closed form of the minimizer is $[\theta_t]_{t\in[K]^+} = \left[\langle\vec{f}_s,\vec{f}_t\rangle\right]^{-1}_{s,t\in[K]^+}\times\left[\langle\vec{f}_s,\vec{y}\rangle\right]_{s\in[K]^+}$.
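A direct transcription of this closed form (a sketch under our own conventions; in practice one solves the linear system rather than inverting the Gram matrix explicitly):

```python
import numpy as np

def optimal_weights(F, y):
    """F: (K+1, N) array whose rows are f_0 = 1, f_1, ..., f_K evaluated on the data."""
    gram = F @ F.T                     # [<f_s, f_t>]_{s,t in [K]+}
    rhs = F @ y                        # [<f_s, y>]_{s in [K]+}
    return np.linalg.solve(gram, rhs)  # theta = gram^{-1} rhs; LIC makes gram invertible

rng = np.random.default_rng(3)
N, K = 100, 3
F = np.vstack([np.ones(N), rng.normal(size=(K, N))])
y = 0.5 + 2.0 * F[1] - 1.0 * F[2] + 0.1 * rng.normal(size=N)
theta = optimal_weights(F, y)
print(theta)                           # close to [0.5, 2.0, -1.0, 0.0]
```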
In the following, we deal with $\sigma\circ\Theta(f_1,\dots,f_K)$ and $\Theta_1\circ\sigma\circ\Theta(f_1,\dots,f_K)$. Theorem 2. Assume the set of pre-trained components $\{f_j(x_j)\}_{j=1}^{K}$ satisfies both NPC and LIC, and let $g$ be $\sigma\circ\Theta(f_1,\dots,f_K)$. Then with probability at least $1 - \frac{K}{\pi e^N}$ there exists $\vec{\theta}$ s.t. $L_{\vec{\theta}}(x;g) < \min_{j\in[K]} L(f_j(x_j))$.
Proof. (Proof Sketch) The whole proof is split into Lemmas 5.4, 5.5, and 5.6. The idea is to find an interval in the domain of $\sigma$ on which the output approximates a linear function arbitrarily well. On this interval, the activation $\sigma$ can approximate any given pre-trained component. However, under the LIC and NPC assumptions, the gradient of the loss $L$ is non-zero with high probability. Since the training is based on the gradient descent algorithm, this non-zero gradient steers the update process toward a lower loss.
Lemma 5.4. Let $N$, $K$ and $j\in[K]$ be fixed. For small enough $\epsilon$, there exist $\vec{\theta}\in Z_{F,1,\epsilon}$ and $0 < \alpha\in\mathbb{R}$ s.t. $\left|\sigma\circ\Theta_{(0)}(f_1,\dots,f_K) - \frac{f_j(x)}{\alpha}\right| < \epsilon$.
Lemma 5.5. Assume NPC holds with $\epsilon^* > 0$. If $\vec{\theta}^{\epsilon^*/3}$ satisfies $|\sigma\circ\Theta_{(0)}(f_1,\dots,f_K)(x) - f_j(x)| < \frac{\epsilon^*}{3N}$ for any $j\in[K]^+$, then $\nabla_{\vec{\theta}}L(\vec{\theta}^{\epsilon^*/3}) \neq \vec{0}$.
Lemma 5.6. If $\vec{\theta}^{\epsilon^*/3}$ makes $\nabla_{\vec{\theta}}L(\vec{\theta}^{\epsilon^*/3}) \neq \vec{0}$, then there exists $\vec{\theta}$ s.t. $L_{\vec{\theta}}(x;g) < \min_{j\in[K]^+} L(f_j(x_j))$.
Now we consider the difference of the losses of $\sigma\circ\Theta'(f_1,\dots,f_K)$ and $\sigma\circ\Theta(f_1,\dots,f_{K-1})$. Theorem 3. Assume the set of components $\{f_j(x_j)\}_{j=1}^{K}$ satisfies LIC. Let $g_{K-1} = \Theta(f_1,\dots,f_{K-1})$ and $g_K = \Theta(f_1,\dots,f_K)$. With probability at least $1 - \frac{K}{\pi e^N}$, there is a vector $\vec{\theta}\in\mathbb{R}^K\setminus B_K$ s.t. $L_{\vec{\theta}}(x;g_K) < L_{\vec{\theta}}(x;g_{K-1})$.
Proof. (Proof Sketch) The idea is to directly solve the inequality for the case $K = 2$ and then generalize the result to larger $K$.
The following provides a generalization error bound for a composite neural network.
Theorem 4. Assume the pre-trained components $\{f_j\}_{j=1}^{K}$ satisfy LIC and NPC. Let $\{GE(f_j)\}_{j=1}^{K}$ be the corresponding generalization errors of $\{f_j\}_{j=1}^{K}$, and let $\Theta_{(L)}\circ\sigma_{(L)}\circ\cdots\circ\sigma_{(1)}\circ\Theta_{(0)}(f_1,\dots,f_K)$ be the composite neural network. Denote the generalization error $E\{L(\Theta_{(L)}\circ\sigma_{(L)}\circ\cdots\circ\sigma_{(1)}\circ\Theta_{(0)}(f_1,\dots,f_K))\}$ of the composite neural network as $E\{L_{\Theta,f_1,\dots,f_K}\}$. Suppose the learned weights obey the normal distribution. Then with high probability, there exists a setting of $\{\Theta^*_{(L)},\dots,\Theta^*_{(0)}\}$ such that $E\{L_{\Theta,f_1,\dots,f_K}\} \leq \Theta^*_{(L)}(GE(f_1),\dots,GE(f_K))$.
Proof. (Proof Sketch) We apply an idea similar to Kawaguchi (2016): the expectation for non-linear activations is the same as the expectation for linear activations. The previous theorems provide that, with high probability, there exists a solution $\Theta_{(i)}$, $\forall i\in[L]^+$, s.t. each $\Theta_{(i+1)}\sigma\Theta_{(i)}$ approximates a degree-one polynomial $A_{\Theta_{(i+1)}\sigma\Theta_{(i)},1}$ arbitrarily well. If the weights obey the normal distribution, then $E\{L_{\Theta,f_1,\dots,f_K}\} \leq \Theta^*_{(L)}(GE(f_1),\dots,GE(f_K))$.
6 EMPIRICAL STUDIES
This section numerically verifies the performance of composite networks for two distinctively different applications: image classification and PM2.5 prediction. For image classification, we examined two pre-trained components, ResNet50 (He et al., 2016) from Keras and the SIFT algorithm (Lowe, 1999) from OpenCV, on the benchmark of the ImageNet competition (Russakovsky et al., 2015). For PM2.5 prediction, we implemented several models running on the open data of the local weather bureau and environmental protection agency to predict the PM2.5 level in future hours.
6.1 IMAGENET CLASSIFICATION
We chose ResNet50 as the pre-trained baseline model and the SIFT model as an auxiliary model to form a composite neural network to validate the proposed theory. The experiments are conducted on the 1000-class single-label classification task of the ImageNet dataset, which is a well-received benchmark for image classification applications. A reason to choose the SIFT (Scale-Invariant Feature Transform) algorithm is that its function is very different from ResNet, and it is interesting to see whether the performance of ResNet50 can be improved as predicted by our theory.
We trained the SIFT model using the images of ImageNet and directed its output to a CNN to extract useful features before merging with the ResNet50 output. In the composite model, the softmax functions of both ResNet50 and the SIFT model are removed, so that the length-1000 outputs of both models are merged before the final softmax stage. During the training process of the composite network, the weights of ResNet50 and the SIFT model are fixed, and only the connecting weights and bias are trained.
The ResNet50 was from He et al.; its Top-1 accuracy in our context was lower than reported in (He et al., 2016) since we did not do any fine-tuning or data preprocessing. Figure 1 shows that the composite network has higher accuracy than ResNet50 during almost the complete testing run. Table 1 shows the same result: the composite network performs better, too. The experimental results support the claims of this work that a composite network performs better than any of its components, and that more components work better than fewer components.
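The composition described above can be outlined roughly as follows; this is a hypothetical Keras sketch, not the authors' code: the `sift_logits` branch is a trainable placeholder standing in for the frozen SIFT+CNN feature extractor, and the `classifier_activation=None` argument (available in recent TensorFlow versions) removes the final softmax of ResNet50.

```python
import tensorflow as tf

# frozen ResNet50 backbone with the final softmax removed
resnet = tf.keras.applications.ResNet50(weights="imagenet", classifier_activation=None)
resnet.trainable = False

inp = tf.keras.Input(shape=(224, 224, 3))
resnet_logits = resnet(inp)                     # length-1000 pre-softmax output

# placeholder for the SIFT-feature branch producing length-1000 logits;
# in the paper this branch is a frozen SIFT+CNN extractor, omitted here
sift_logits = tf.keras.layers.Dense(1000)(tf.keras.layers.Flatten()(inp))

# the merge weights over the concatenated logits are the trainable part
merged = tf.keras.layers.Concatenate()([resnet_logits, sift_logits])
out = tf.keras.layers.Dense(1000, activation="softmax")(merged)

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="categorical_crossentropy")
```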
6.2 PM2.5 PREDICTION
The PM2.5 prediction problem is to forecast the particle density of fine atmospheric matter with a diameter of at most 2.5 µm (PM2.5) in future hours, mainly the next 12, 24, 48, and 72 hours. The datasets used are open data provided by two sources: the Environmental Protection Administration (EPA)1 and the Central Weather Bureau (CWB)2. The EPA dataset contains 21 observed features, including wind speed and direction, temperature, relative humidity, PM2.5 and PM10 density, etc., from 18 monitoring stations, with one record per hour. The CWB has seventy monitoring stations, one record per 6 hours, containing 26 features, such as temperature, dew point, precipitation, wind speed and direction, etc. We partitioned the observed area into a grid of 1140 km² with 1 km × 1 km blocks and aligned both datasets to a one-hour period. We call the two datasets the air quality dataset and the weather condition dataset.
We selected ConvLSTM (convolutional LSTM) and FNN (fully connected neural network) as the components used in this experiment. The reason to select ConvLSTM is that the dispersion of PM2.5 is both spatially and temporally dependent, and ConvLSTM is considered capable of capturing this dependency; the FNN is a fundamental neural network that acts as the auxiliary component in the experiment.
The prediction models were trained with the data of the years 2014 and 2015, and the 2016 data was used for testing. We considered two function compositions, the linear combination $\Theta$ and the Logistic function $\sigma_1$ (as in Theorem 2), to combine the two components and examine the applicability of the proposed theorems.
We trained and tested both ConvLSTM and FNN using the air quality dataset (Dataset A) and the weather condition dataset (Dataset B) separately as baselines (denoted as $f_1$, $f_2$, $f_3$ and $f_4$); their training and testing errors in MSE are listed in the first part of Table 2. Then we composited FNNs using Dataset A and Dataset B, where each FNN can be pre-trained (denoted as ×) or non-instantiated (denoted as ◦). In addition, we used both linear and Sigmoid activation functions. As a result, we had eight combinations, as listed in part two. We treated ConvLSTM in the same way, with the outcomes in part 3. Finally, we composited one FNN and one ConvLSTM, each the best in its category, and the resulting composite network was a tree of depth 2. For instance, the ConvLSTM candidate of part 4 for the 12-hour prediction was the 4th row (i.e., $\Theta(f_3^\circ, f_4^\circ)$) of part 3. The training and testing errors in MSE are listed in part 4.
The empirical results mostly follow the proposed theorems. While composite networks with all pre-trained components may not perform best in their category (which is not a surprise), what we expect to see is that after adding a new component, the composite network improves over the previous one. For example, $\sigma\circ\Theta(f_3^\times, f_4^\times)$ has strictly better accuracy than both $f_3$ and $f_4$ for all future predictions. Another example: for the NEXT 48 hr prediction, $\sigma\circ\Theta(C^\times, F^\times)$ also has strictly better accuracy than both $C = \sigma\circ\Theta(f_3^\circ, f_4^\circ)$ and $F = \sigma\circ\Theta(f_3^\circ, f_4^\circ)$.
1https://opendata.epa.gov.tw/Home 2http://opendata.cwb.gov.tw/index
7 CONCLUSION
In this work, we investigated the composite neural network with pre-trained components problem and showed that the overall performance of a composite neural network is better than that of any of its components, and that more components perform better than fewer components. In addition, the developed theory considers all differentiable activation functions.
While the proposed theory ensures overall performance improvement, it is still not clear how to decompose a complicated problem into components and how to assemble them into a composite neural network so as to achieve acceptable performance. Another problem worth consideration is when the performance improvement will diminish (by a power law or exponential decay) even as more components are added. Moreover, in real-world applications, the amount of data, the data distribution, and the data quality will highly affect the performance.
8 SUPPLEMENTARY MATERIAL
For self-containedness, we list some common Taylor expansions in the following.
Logistic: $S(z) := \frac{1}{1+e^{-z}} = \frac{1}{2} + \frac{1}{4}z - \frac{1}{48}z^3 + \frac{1}{480}z^5 - \frac{17}{80640}z^7 + O(z^9)$, $\forall z\in\mathbb{R}$;
Hyperbolic tangent: $\tanh(z) = \frac{e^z - e^{-z}}{e^z + e^{-z}} = z - \frac{1}{3}z^3 + \frac{2}{15}z^5 + O(z^7)$, $\forall |z| \leq \frac{\pi}{2}$;
Arctangent: $\arctan(z) = z - \frac{1}{3}z^3 + \frac{1}{5}z^5 + O(z^7)$, $\forall |z| \leq 1$.
Definition 1. Given an activation $\sigma(z)$ and its Taylor expansion $T_\sigma(z)$, let $A_{\sigma,D}(z)$ consist of the monomials of degree at most $D$ of $T_\sigma(z)$. We call $A_{\sigma,D}(z)$ the $D$-degree Taylor approximation polynomial and $R_{\sigma,D+1}(z)$ the remainder, such that $T_\sigma(z) = A_{\sigma,D}(z) + R_{\sigma,D+1}(z)$.
For instance, if we set $D = 3$, the Taylor expansion of the Logistic function $S(z)$ is separated into the approximation part $A_{S(z),3}(z) = \frac{1}{2} + \frac{1}{4}z - \frac{1}{48}z^3$ and the remainder part $R_{S(z),4}(z) = \frac{1}{480}z^5 + O(z^7)$.
Proposition 8.1. (Error Bound of the Remainder) Let $S(z)$ be the Logistic function. Consider the approximation $A_{S(z),D}(z)$ and the remainder $R_{S(z),D+1}(z)$ defined as above. For given $\epsilon\in(0,\frac{1}{1000})$ and $D\in\mathbb{N}$, if $|z| < \epsilon^{1/(D+2)}$, then $|S(z) - A_{S(z),D}(z)| = |R_{S(z),D+1}(z)| < \epsilon$.
Proof. Note that if $\epsilon < 1$ then for all $D\in\mathbb{N}$, $\epsilon^{1/(D+2)} < 1$. If $|z| < \epsilon^{1/3}$ and $D = 1$, then
$$|R_{S(z),D+1}(z)| \leq \left|-\frac{1}{48}z^3 + \frac{1}{480}z^5 - \frac{17}{80640}z^7 + O(z^9)\right| < \frac{\epsilon}{24} < \epsilon.$$
The general case ($D \geq 2$) can be proven by the same argument as above.
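A quick numerical check of this bound (our own verification script, with $\epsilon = 10^{-4}$ and $D = 1$ chosen for illustration):

```python
import numpy as np

eps = 1e-4
D = 1
z = np.linspace(-eps ** (1 / (D + 2)), eps ** (1 / (D + 2)), 10001)
S = 1 / (1 + np.exp(-z))
A = 0.5 + z / 4                       # degree-1 Taylor approximation of the logistic
print(np.max(np.abs(S - A)), "<", eps)
assert np.max(np.abs(S - A)) < eps    # the remainder stays below eps on the interval
```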
This Proposition means that for a suitable range of $z$, the Logistic function can be seen as a linear function with error at most $\epsilon$.
Definition 2. For the Logistic activation $\sigma(z) = S(z)$, $\epsilon > 0$, and a given polynomial degree $D$, we define $Z_{D,\epsilon} = \{z\in\mathbb{R} : |\sigma(z) - A_{\sigma,D}(z)| < \epsilon\}$. Furthermore, for given components $\{f_j : j\in[K]\} = F$, we consider the variable $z = \Theta(f_1,\dots,f_K)$ and define
$$Z_{F,D,\epsilon} = \left\{\vec{\theta}\in\mathbb{R}^{K+1} : z = \Theta(f_1,\dots,f_K),\, |\sigma(z) - A_{\sigma,D}(z)| < \epsilon\right\}.$$
Observe that if the parameters $\epsilon$, $F$, and $|F| = K$ are fixed, then $Z_{F,D,\epsilon} \subset Z_{F,D+1,\epsilon} \subset \mathbb{R}^{K+1}$.
8.1 FUNCTION COMPOSITION BY LINEAR COMBINATION
Recall that for a set of pre-trained components $\{f_j(x_j) : j\in[K]\}$, $\Theta_{(0)}(f_1,\dots,f_K) = \sum_{j\in[K]^+}\theta_{0,j}f_j$, where $f_0 = 1$. For simplicity, we consider $\Theta_{(1)}(z) = \alpha z$. This means
$$\Theta_{(1)}\circ\sigma\circ\Theta_{(0)}(f_1,\dots,f_K) = \theta_{1,1}\,\sigma\left(\sum_{j\in[K]^+}\theta_{0,j}f_j\right) + \theta_{1,0}.$$
Theorem 1 is a consequence of the following lemmas:
Proof. (of Lemma 5.1) For simplicity of notation, let $g(x) = \Theta_{(0)}(f_1,\dots,f_K)$, hence $g(x) = \sum_{j\in[K]^+}\theta_j f_j(x_j)$. Also recall that $L_{\vec{\theta}}(x;g) = \sum_{i=1}^{N}\left(g(x^{(i)}) - y^{(i)}\right)^2$. To prove the existence of the minimizer, it is enough to solve the equations of the critical points, in the case of a quadratic objective function. That is, to solve the set of equations
$$\nabla_{\vec{\theta}}L(x;g) = \left(\frac{\partial L}{\partial\theta_0}, \dots, \frac{\partial L}{\partial\theta_K}\right)^T = (0,\dots,0)^T,$$
where for each $s,t\in[K]^+$,
$$\frac{\partial L}{\partial\theta_s} = 2\sum_{i=1}^{N}\left(g(x^{(i)}) - y^{(i)}\right)\cdot f_s(x^{(i)}) = 2\sum_{i=1}^{N}\left(\sum_{j\in[K]^+}\theta_j f_j(x_j^{(i)}) - y^{(i)}\right)\cdot f_s(x^{(i)}) = 2\left(\sum_{j\in[K]^+}\theta_j\langle\vec{f}_s,\vec{f}_j\rangle - \langle\vec{f}_s,\vec{y}\rangle\right).$$
Hence, solving $\nabla_{\vec{\theta}}L(x;g) = \vec{0}$ is equivalent to solving $\left[\langle\vec{f}_s,\vec{f}_t\rangle\right]_{s,t\in[K]^+}\times[\theta_t]_{t\in[K]^+} = \left[\langle\vec{f}_s,\vec{y}\rangle\right]_{s\in[K]^+}$, where $\left[\langle\vec{f}_s,\vec{f}_t\rangle\right]_{s,t\in[K]^+}$ is a $(K+1)\times(K+1)$ matrix, and $[\theta_t]_{t\in[K]^+}$ and $\left[\langle\vec{f}_s,\vec{y}\rangle\right]_{s\in[K]^+}$ are both $(K+1)\times 1$ column vectors.
Note that the linear independence of $\{\vec{f}_j\}_{j\in[K]^+}$ makes $\left[\langle\vec{f}_s,\vec{f}_t\rangle\right]_{s,t\in[K]^+}$ a positive-definite Gram matrix (Horn & Johnson, 2012), which means the inverse $\left[\langle\vec{f}_s,\vec{f}_t\rangle\right]^{-1}_{s,t\in[K]^+}$ exists. Then $\vec{\theta}$ is solved:
$$[\theta_t]_{t\in[K]^+} = \left[\langle\vec{f}_s,\vec{f}_t\rangle\right]^{-1}_{s,t\in[K]^+}\times\left[\langle\vec{f}_s,\vec{y}\rangle\right]_{s\in[K]^+} \quad (2)$$
The above shows the existence of the critical points. On the other hand, since $L_{\vec{\theta}}(x;g)$ is a summation of square terms, i.e., a paraboloid, the critical points can only be minima.
The meaning of the gradient on a function surface is the direction that increases the function value most efficiently. Hence, if the gradient is not the zero vector, then the corresponding point cannot be the minimizer of the function surface. Recall that for any $s\in[K]$,
$$\left[\frac{\partial L}{\partial\theta_t}\right]_{t\in[K]^+}\Big|_{\vec{\theta}=\vec{e}_s} = 2\left[\langle\vec{f}_s - \vec{y},\vec{f}_t\rangle\right]_{t\in[K]^+}.$$
Before the proof of Lemma 5.2, we need an upper bound on the probability of certain events. Note that $\vec{y}$ is defined by the given training data, and for each $j\in[K]^+$ the length of $\vec{f}_j - \vec{y}$, i.e., $\|\vec{f}_j - \vec{y}\|$, is also given. The question is: for fixed $\vec{y}$, what is the probability that a selected $\vec{f}$ is perpendicular to $\vec{f} - \vec{y}$? A folklore approach is to assume that $\vec{f} = (f(x^{(1)}),\dots,f(x^{(N)}))$ obeys a normal distribution, with the mean of $f(x^{(i)})$ set to $y^{(i)}$ for each $i\in[N]$. In the following we propose another simple probability argument to obtain a loose upper bound.
Proof. (of Lemma 5.3) Observe that $\langle\vec{f}-\vec{y},\vec{f}\rangle = 0 \Leftrightarrow (\vec{f}-\vec{y}) \perp \vec{f}$, which implies that the angle between them, $\angle_{(\vec{f}-\vec{y}),\vec{f}}$, lies in the interval $\left[\frac{\pi-\epsilon}{2}, \frac{\pi+\epsilon}{2}\right]$ for small $\epsilon\in\mathbb{R}^+$, as shown in the left part of Figure 2. The red, orange, and blue vectors show three possibilities for the pair $\vec{f}$ and $\vec{f}-\vec{y}$. The length of $\vec{f}-\vec{y}$ is fixed since $\vec{f}$ and $\vec{y}$ are given, but the angle between $\vec{f}-\vec{y}$ and $\vec{y}$ decides whether $(\vec{f}-\vec{y}) \perp \vec{f}$. The gray circle collects all possible end-points of the vector $\vec{f}-\vec{y}$ emitted from the end-point of $\vec{y}$. Although on the whole circle there are exactly two specific angles3 that satisfy $(\vec{f}-\vec{y}) \perp \vec{f}$, we allow a loose small interval with respect to $\pi$. In particular, we set $0 < \epsilon < e^{-N}$. Then
$$\Pr_{\vec{f}\in F(\vec{y},L_f)}\left\{\angle_{(\vec{f}-\vec{y}),\vec{f}} = \frac{\pi}{2}\right\} \leq \Pr_{\vec{f}\in F(\vec{y},L_f)}\left\{\frac{\pi-\epsilon}{2} \leq \angle_{(\vec{f}-\vec{y}),\vec{f}} \leq \frac{\pi+\epsilon}{2}\right\} = \frac{\epsilon}{\pi} < \frac{1}{\pi e^N}.$$
Now we are ready to prove Lemma 5.2.
3That is, two points on the circumference, which is in fact of measure zero on the set of all possible angles $[0, 2\pi)$.
Proof. (of Lemma 5.2) For convenience, denote by $A$ the event that at least one $\vec{e}_j\in B_K$ is the minimizer of $L(\vec{\theta})$. Then
$$I_{L(\vec{\theta})} = 1 \Leftrightarrow \text{the event } A \text{ is true} \Rightarrow \left[\frac{\partial L}{\partial\theta_t}\right]_{t\in[K]^+}\Big|_{\vec{\theta}=\vec{e}_s} = 2\left[\langle\vec{f}_s-\vec{y},\vec{f}_t\rangle\right]_{t\in[K]^+} = [0]_{K\times 1} \text{ for some } s\in[K]^+$$
$$\Rightarrow \langle\vec{f}_1-\vec{y},\vec{f}_1\rangle = 0 \wedge \langle\vec{f}_1-\vec{y},\vec{f}_2\rangle = 0 \wedge\cdots\wedge \langle\vec{f}_1-\vec{y},\vec{f}_K\rangle = 0$$
$$\text{or}\;\cdots\;\text{or}\quad \langle\vec{f}_K-\vec{y},\vec{f}_1\rangle = 0 \wedge \langle\vec{f}_K-\vec{y},\vec{f}_2\rangle = 0 \wedge\cdots\wedge \langle\vec{f}_K-\vec{y},\vec{f}_K\rangle = 0.$$
Hence, for given $\vec{y}$ and $L(f_j) = \|\vec{f}_j - \vec{y}\|^2$, $\forall j\in[K]^+$, we have
$$\Pr\{I_{L(\vec{\theta})} = 1\} \leq \sum_{j\in[K]^+}\Pr\left\{\langle\vec{f}_j-\vec{y},\vec{f}_1\rangle = 0 \wedge\cdots\wedge \langle\vec{f}_j-\vec{y},\vec{f}_K\rangle = 0\right\} \leq K\cdot\Pr\left\{\langle\vec{f}_1-\vec{y},\vec{f}_1\rangle = 0\right\} < \frac{K}{\pi e^N},$$
where the second inequality is based on the symmetry between $\vec{f}_s$ and $\vec{f}_t$ for any $s,t\in[K]^+$, and the last inequality is by Lemma 5.3.
Proof. (of Theorem 3) We start from a simple case.
Claim: $\exists\beta\in\mathbb{R}$ s.t. $\sum_{i\in[N]}(f_1(x_i) - y_i)^2 - \sum_{i\in[N]}(f_1(x_i) + \beta f_2(x_i) - y_i)^2 > 0$.
Proof. Expanding,
$$\sum_{i\in[N]}(f_1(x_i) - y_i)^2 - \sum_{i\in[N]}(f_1(x_i) + \beta f_2(x_i) - y_i)^2 = \sum_{i\in[N]}\left[(f_1(x_i) - y_i)^2 - (f_1(x_i) + \beta f_2(x_i) - y_i)^2\right]$$
$$= -\left(\sum_{i\in[N]} f_2(x_i)^2\right)\beta^2 + 2\left(\sum_{i\in[N]}\left(f_2(x_i)y_i - f_2(x_i)f_1(x_i)\right)\right)\beta.$$
Observe that the above is a quadratic in $\beta$ with negative leading coefficient. Hence, to obtain the maximum of the difference, we can set
$$\beta = \frac{\sum_{i\in[N]}(f_2(x_i)y_i - f_2(x_i)f_1(x_i))}{\sum_{i\in[N]} f_2(x_i)^2} = \frac{\langle\vec{y}-\vec{f}_1,\vec{f}_2\rangle}{\langle\vec{f}_2,\vec{f}_2\rangle}.$$
Note that if $\langle\vec{y}-\vec{f}_1,\vec{f}_2\rangle = 0$ then there is no need to add the last pre-trained component. We aim to bound the probability of this case. Observe that $\langle\vec{y}-\vec{f}_1,\vec{f}_2\rangle = 0 \Leftrightarrow (\vec{y}-\vec{f}_1) \perp \vec{f}_2$. This condition is different from the previous Lemma: here we have to find an upper bound on the probability of $(\vec{y}-\vec{f}_1) \perp \vec{f}_2$ for given $\vec{f}_1$ and $\vec{y}$. As shown in the left part of Figure 2, the angle between $\vec{f}_2$ and $\vec{y}$ must lie in a specific interval, say $\left[\frac{\pi-\epsilon}{2}, \frac{\pi+\epsilon}{2}\right]$ for small $\epsilon\in\mathbb{R}^+$. To be concrete, we set $0 < \epsilon < e^{-N}$. Then
$$\Pr_{\vec{f}\in F(\vec{y},1)}\left\{(\vec{y}-\vec{f}_1) \perp \vec{f}_2\right\} \leq \frac{\epsilon}{\pi} < \frac{1}{\pi e^N}.$$
The general case can be reduced to the above claim by considering $g_{K-1}$ as $f_1$ and $\theta_K f_K$ as $\beta f_2$. Furthermore, since there are $K$ possible components to be selected as the last pre-trained component, the probability is upper bounded by $\frac{K}{\pi e^N}$.
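The optimal coefficient $\beta = \langle\vec{y}-\vec{f}_1,\vec{f}_2\rangle/\langle\vec{f}_2,\vec{f}_2\rangle$ from the claim can be checked numerically; the snippet below (illustrative code of ours, on synthetic data) confirms that adding the new component with this $\beta$ strictly lowers the loss.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200
f1 = rng.normal(size=N)               # existing composite g_{K-1}
f2 = rng.normal(size=N)               # newly added component f_K
y = f1 + 0.5 * f2 + 0.1 * rng.normal(size=N)

beta = np.dot(y - f1, f2) / np.dot(f2, f2)   # optimal coefficient from the claim
old = np.sum((f1 - y) ** 2)
new = np.sum((f1 + beta * f2 - y) ** 2)
print(old, new)                        # adding the component strictly lowers the loss
assert new < old or np.isclose(np.dot(y - f1, f2), 0.0)
```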
8.2 FUNCTION COMPOSITION BY NON-LINEAR ACTIVATION
Proof. (of Lemma 5.4) Although the lemma is an existence statement, we give a constructive proof here. By setting $D = 1$ in Proposition 8.1, we know that for the Logistic $S(z)$ and $0 < \epsilon < \frac{1}{1000}$, the degree-one Taylor approximation is $A_{S(z),1} = \frac{1}{2} + \frac{1}{4}z$ with remainder $|R_{S(z),2}| < \epsilon$. Define $M := 10\cdot\max_{j\in[K]^+, i\in[N]}\{|f_j(x_i)|\}$. Hence, by setting $z = \frac{f_j(x_j)}{M}$, we have $\left|S\left(\frac{f_j(x_j)}{M}\right) - \frac{1}{2} - \frac{f_j(x_j)}{4M}\right| < \epsilon$. This means that for the given $j\in[K]$: $\theta_j = \frac{1}{M}$, $\theta_0 = 0$, and $\theta_{j'} = 0$ for all $j'\neq j$. Furthermore, $\alpha = 4M$.
This lemma implies that $S(z)$ can approximate a linear function arbitrarily well on an interval of non-zero length; hence, if scaling of $\vec{\theta}$ is allowed, Theorem 1 can be applied. Corollary 8.1. If the activation function is the Logistic $S(z)$ and $\{f_j(x_j)\}_{j=1}^{K}$ satisfies LIC, then with high probability there is a vector $\vec{\theta}$ s.t. $L_{\vec{\theta}}\left(x;\,\sigma\circ\Theta_{(0)}(f_1,\dots,f_K)\right) < \min_{j\in[K]^+} L(f_j(x_j))$.
Proof. Set the $\epsilon$ of the above Lemma to $\epsilon^*/3N$; then the previous Lemma shows that there exists $\vec{\theta}$ which maps $\{f_j\}$ into $Z_{\sigma\circ\Theta_{(0)},1,\epsilon^*/3N}$. Since the output of $\sigma\circ\Theta_{(0)}$ is then a linear function with error at most $\epsilon^*/3N$, we have the same conclusion as in Theorem 1.
Proof. (of Lemma 5.5) Let $g(x) = \sigma\circ\Theta_{(0)}(f_1,\dots,f_K)(x)$ for short. First observe that $|g(x) - f_j(x)| < \frac{\epsilon^*}{3N}$ implies $\forall i\in[N]$, $g(x^{(i)}) - f_j(x^{(i)}) > -\frac{\epsilon^*}{3N}$. Then
$$\sum_{i\in[N]}\left(g(x^{(i)}) - y^{(i)}\right) = \sum_{i\in[N]}\left\{\left(g(x^{(i)}) - f_j(x^{(i)})\right) + \left(f_j(x^{(i)}) - y^{(i)}\right)\right\} > N\cdot\left(-\frac{\epsilon^*}{3N}\right) + \epsilon^* = \frac{2\epsilon^*}{3} > 0.$$
On the other hand, it can be calculated that
$$\nabla_{\vec{\theta}}L(x;g)\big|_{\vec{\theta}=\vec{\theta}^{\epsilon^*/3}} = \left(\frac{\partial L}{\partial w_0}, \frac{\partial L}{\partial \theta_1}, \dots, \frac{\partial L}{\partial \theta_K}, \frac{\partial L}{\partial b_0}\right)^{T}\Big|_{\vec{\theta}=\vec{\theta}^{\epsilon^*/3}},$$
where $[a]^T$ is the transpose of the matrix $[a]$. Also note that $\frac{\partial L}{\partial w_0}\big|_{\vec{\theta}=\vec{\theta}^{\epsilon^*/3}} = 2\cdot\sum_{i\in[N]}(g(x^{(i)}) - y^{(i)})\cdot S(z)$ and $\frac{\partial L}{\partial b_0}\big|_{\vec{\theta}=\vec{\theta}^{\epsilon^*/3}} = 2\cdot\sum_{i\in[N]}(g(x^{(i)}) - y^{(i)})\cdot 1$. Since $\sum_{i\in[N]}(g(x^{(i)}) - y^{(i)}) > 0$, we can conclude $\nabla_{\vec{\theta}}L(x;g)\big|_{\vec{\theta}=\vec{\theta}^{\epsilon^*/3}} \neq \vec{0}$.
Proof. (of Lemma 5.6) By the previous Lemma, it is valid to consider the best-performing component $f_{j^*}$, i.e., $L(f_{j^*}) = \min_{j\in[K]^+} L(f_j(x_j))$. Since $\nabla_{\vec{\theta}}L(\vec{\theta}^{\epsilon^*/3}) \neq \vec{0}$, by the definition of the gradient, moving along the direction $-\nabla_{\vec{\theta}}L(\vec{\theta}^{\epsilon^*/3})/\|\nabla_{\vec{\theta}}L(\vec{\theta}^{\epsilon^*/3})\|$ with a step size $\alpha > 0$ must strictly decrease the value of $L_{\vec{\theta}}(x;g)$. W.l.o.g. we can assume this $\alpha$ is optimal; that is, if $\alpha > r > 0$ then $L(\vec{\theta}^{\epsilon^*/3}) > L\left(\vec{\theta}^{\epsilon^*/3} - r\cdot\nabla_{\vec{\theta}}L(\vec{\theta}^{\epsilon^*/3})/\|\nabla_{\vec{\theta}}L(\vec{\theta}^{\epsilon^*/3})\|\right)$, while if $r = \alpha + \delta$ for some $\delta > 0$ then $L(\vec{\theta}^{\epsilon^*/3}) \leq L\left(\vec{\theta}^{\epsilon^*/3} - r\cdot\nabla_{\vec{\theta}}L(\vec{\theta}^{\epsilon^*/3})/\|\nabla_{\vec{\theta}}L(\vec{\theta}^{\epsilon^*/3})\|\right)$. The issue is how to find a proper step size $r > 0$. We consider the line search approach:
$$r^* = \arg\min_{r\in\mathbb{R}} L\left(\vec{\theta}_0 - r\cdot\frac{\nabla_{\vec{\theta}}L(\vec{\theta}_0)}{\|\nabla_{\vec{\theta}}L(\vec{\theta}_0)\|}\right).$$
This outputs $r^*$, which guarantees that $L(\vec{\theta}_0) > L\left(\vec{\theta}_0 - r^*\cdot\nabla_{\vec{\theta}}L(\vec{\theta}_0)/\|\nabla_{\vec{\theta}}L(\vec{\theta}_0)\|\right)$. Since the underlying $\vec{\theta}_0$ is $\vec{\theta}^{\epsilon^*/3}$, whose loss matches that of the best component, the point $\vec{\theta}_0 - r^*\cdot\nabla_{\vec{\theta}}L(\vec{\theta}^{\epsilon^*/3})/\|\nabla_{\vec{\theta}}L(\vec{\theta}^{\epsilon^*/3})\|$ fits our goal (beating the best component).
Combining these lemmas yields the conclusion of Theorem 2.
Corollary 8.2. The process of Lemma 5.5 converges.
Proof. (of 8.2) It is known that if a monotone decreasing sequence is bounded below, then this sequence converges. Repeating the gradient update process described above, we can obtain a strictly decreasing sequence: $L(\vec{\theta}_0) > L(\vec{\theta}_1) > L(\vec{\theta}_2) > \dots$. Note that $\forall i$, $L(\vec{\theta}_i) \geq 0$. This means the sequence is monotone decreasing and bounded below, so theoretically it converges by the monotone convergence theorem of mathematical analysis. Algorithmically, the gradient-descent-based sequence-finding process stops at some term with $\nabla_{\vec{\theta}}L(\vec{\theta}') = \vec{0}$, which is a (local) minimum or a saddle point.
Corollary 8.3. If the assumptions in Theorem 2 are satisfied, then with high probability there exists $\vec{\theta}'$ s.t. $\vec{\theta}'$ is a (local) minimum or a saddle point of $L$, while $L_{\vec{\theta}'}(x;g) < \min_{j\in[K]^+} L(f_j(x_j))$ still holds.
That is, if the pre-trained component set $\{f_j(x_j)\}_{j=1}^{K}$ satisfies both NPC and LIC, then there exists $\vec{\theta}$ s.t. $L_{\vec{\theta}}(x;g) < \min_{j\in[K]^+} L(f_j(x_j))$.
In the previous proof, the critical properties of the activation are local linearity and differentiability. Hence, it is not hard to check that if we replace $\sigma(\cdot)$ in Eq. (1) with other common activations, the conclusion still holds. By local linearity we mean that on an interval of non-zero length in its domain, the function can approximate the linear mapping arbitrarily well.
Corollary 8.4. Theorems 1 and 2 apply to any activation with local linearity and differentiability.
Based on Corollary 5.1 and Lemma 5.6, it is natural to obtain the process of finding $\vec{\theta}$: by gradient descent or by the closed form of Corollary 5.1. We can compute the optimal weights for the bottom Sigmoid block. On the other hand, after random initialization, the parameters of un-trained components in the ReLU or Tanh blocks are assigned. This implies they can be treated as the all-pre-trained case in Theorem 1 or 2. In fact, given the outputs from the bottom-level block, Corollary 5.1 provides weights improving the accuracy. The process then moves to the next level up, block by block until the top, which mirrors the forward steps of back-propagation (LeCun et al., 1988). Hence, with initialization for the un-trained components, Corollary 5.1 is essentially the same as back-propagation.
8.3 A MIX OF PRE-TRAINED AND UN-TRAINED COMPONENTS
Now we first consider the case where some of $\{f_{\Theta_j}(x_j)\}_{j=1}^{K}$ are pre-trained and some are un-trained, and then investigate the hierarchical combination of both kinds of components. In particular, Eq. (1) can be re-written as $g(x) = w_0\cdot\sigma(\theta_1 f_1 + \theta_2 f_{\Theta_2}) + b_0$, where $f_1$ is a pre-trained component and $f_{\Theta_2}$ is un-trained. Since $\Theta_2$ is not fixed, it cannot be checked whether the LIC and NPC assumptions are satisfied. On the other hand, after initialization, $f_{\Theta_2}$ can be seen as a pre-trained component at any snapshot during the training phase.
Theorem 5. At the end of a weight-updating iteration, if the components $f_1$ and $f_{\Theta_2}$ satisfy the LIC and NPC assumptions, then with high probability $\vec{w}$ updated in the next iteration can improve the loss.
Proof. Recall that the training algorithm is the back-propagation algorithm. Also note that, according to Eq. (1), the order of updating is $\vec{\theta}$ first and then $\Theta_2$. We denote the values of $\vec{\theta}$ and $\Theta_2$ at the end of iteration $i$ as $\vec{\theta}^{(iter=i)}$ and $\Theta_2^{(iter=i)}$, respectively. With randomized initialization, $\Theta_2$ is assigned as $\Theta_2^{(iter=0)}$ before the execution of iteration 1. Then in each iteration $i \geq 1$, $g(x)$ is a combination of fixed-parameter components. Hence this reduces to the all-pre-trained case, and Theorems 1 and 2 apply.
Lemma 8.1. For a given data set $X$, let $\vec{g} := (g_1,\dots,g_N)$ and $\vec{y} := (y_1,\dots,y_N)$. If $\langle\vec{g},\vec{g}-\vec{y}\rangle \neq 0$, then there exists $\alpha\in\mathbb{R}$ s.t. $\sum_{i\in[N]}(\alpha g(x_i) - y_i)^2 < \sum_{i\in[N]}(g(x_i) - y_i)^2$.
Proof. It is equivalent to show that the inequality
$$\sum_{i\in[N]}(\alpha g(x_i) - y_i)^2 - \sum_{i\in[N]}(g(x_i) - y_i)^2 < 0$$
has a real-number solution. Expanding,
$$\sum_{i\in[N]}\left[(\alpha g(x_i) - y_i)^2 - (g(x_i) - y_i)^2\right] = \left(\sum_{i\in[N]} g(x_i)^2\right)\alpha^2 + \left(-2\sum_{i\in[N]} g(x_i)y_i\right)\alpha + \left(-\sum_{i\in[N]} g(x_i)^2 + 2g(x_i)y_i\right) = \langle\vec{g},\vec{g}\rangle\alpha^2 + (-2\langle\vec{g},\vec{y}\rangle)\alpha + (-\langle\vec{g},\vec{g}\rangle + 2\langle\vec{g},\vec{y}\rangle).$$
This is a quadratic inequality in $\alpha$; hence if
$$(-2\langle\vec{g},\vec{y}\rangle)^2 - 4(\langle\vec{g},\vec{g}\rangle)(-\langle\vec{g},\vec{g}\rangle + 2\langle\vec{g},\vec{y}\rangle) \geq 0,$$
then there exists at least one real solution. Indeed, this discriminant equals $4(\langle\vec{g},\vec{g}\rangle - \langle\vec{g},\vec{y}\rangle)^2 = 4\langle\vec{g},\vec{g}-\vec{y}\rangle^2$, which is strictly positive under the assumption, so the quadratic attains negative values.
8.4 GENERALIZATION ERROR ANALYSIS
Theorem 4. Assume the pre-trained components $\{f_j\}_{j=1}^{K}$ satisfy LIC and NPC. Let $\{GE(f_j)\}_{j=1}^{K}$ be the corresponding generalization errors of $\{f_j\}_{j=1}^{K}$, and let $\Theta_{(L)}\circ\sigma_{(L)}\circ\cdots\circ\sigma_{(1)}\circ\Theta_{(0)}(f_1,\dots,f_K)$ be the composite neural network. Denote the generalization error $E\{L(\Theta_{(L)}\circ\sigma_{(L)}\circ\cdots\circ\sigma_{(1)}\circ\Theta_{(0)}(f_1,\dots,f_K))\}$ of the composite neural network as $E\{L_{\Theta,f_1,\dots,f_K}\}$. Suppose the learned weights obey the normal distribution. Then with high probability, there exists a setting of $\{\Theta^*_{(L)},\dots,\Theta^*_{(0)}\}$ such that $E\{L_{\Theta,f_1,\dots,f_K}\} \leq \Theta^*_{(L)}(GE(f_1),\dots,GE(f_K))$.
Proof. (of Theorem 4) (Proof Sketch) We apply an idea similar to Kawaguchi (2016): the expectation for non-linear activations is the same as the expectation for linear activations. The previous theorems provide that, with high probability, there exists a solution $\Theta_{(i)}$, $\forall i\in[L]^+$, s.t. each $\Theta_{(i+1)}\sigma\Theta_{(i)}$ approximates a degree-one polynomial $A_{\Theta_{(i+1)}\sigma\Theta_{(i)},1}$ arbitrarily well. If the weights obey the normal distribution, then $E\{L_{\Theta,f_1,\dots,f_K}\} \leq \Theta^*_{(L)}(GE(f_1),\dots,GE(f_K))$.
2. What are the strengths and weaknesses of the paper regarding its writing, clarity, and scientific/mathematical ideas?
3. How do the main results of the paper (Theorem 1, 2, 3) relate to the idea of adding more features to a network, and what assumptions do they make?
4. Why does the reviewer question the specificity of the results to pre-trained components, and what is their concern regarding linear independence?
5. What is the connection between the motivating Example 1 and the use of pre-training, and how does it relate to the paper's main contributions?
6. Can you clarify the unclear statements from the intro, such as the distinction between applicable data sources being boundless versus identifiable at once?
7. Are there any typos or errors in the paper that need correction, such as the one mentioned regarding the XOR function? | Review | Review
The paper aims at justifying the performance gain that is acquired by the use of "composite" neural networks (e.g., composed of a pre-trained neural network and additional layers that will be trained for the new task).
I found the paper lacking in terms of writing and in terms of clarity in expressing scientific/mathematical ideas, especially for a theory paper.
Example from the Abstract:
"The advantages of adopting such a pre-trained model in a composite neural network are two folds. One is to benefit from other’s intelligence and diligence, and the other is saving the efforts in data preparation and resources
and time in training"
The main results of the paper (Theorem 1,2,3) are of the following nature: if you use more features (i.e., "components") in the input of a network then you have "more information", and this cannot be bad. Here are the corresponding claims in the Abstract:
"we prove that a composite neural network, with high probability, performs better than any of its pre-trained components under certain assumptions."
"if an extra pre-trained component is added to a composite network, with high probability the overall performance will be improved."
However, this argument seems to be just about expressiveness; adding more features can be statistically problematic.
Furthermore, why is it specific to pre-trained components? Essentially the theorems are about adding any features.
Finally, the assumption that the pre-trained components are linearly independent is invalid and the makes the whole analysis somewhat simplistic.
The motivating Example 1 just shows that the convex hull of a class of hypotheses can include more hypotheses than the class itself. I don't see any connection between this and the use of pre-training.
Other examples unclear statements from the intro:
"One of distinctive features of the complicated applications is their applicable data sources are boundless. Consequently, their solutions need frequent revisions."
"Although neural networks can approximate arbitrary functions as close as possible (Hornik, 1991), the major reason for not existing such competent neural networks for those complicated applications is their problems are hardly fully understood and their applicable data sources cannot be identified all at once."
There are many typos in the paper including this one about X for the XOR function:
"Assume there is a set of locations indexed as X = {(0; 0); (0; 1); (1; 0); (1; 0)} with the corresponding values Y = (0; 1; 1; 0). Obviously, the observed function is the XOR" |
ICLR | Title
Information Plane Analysis for Dropout Neural Networks
Abstract
The information-theoretic framework promises to explain the predictive power of neural networks. In particular, the information plane analysis, which measures mutual information (MI) between input and representation as well as representation and output, should give rich insights into the training process. This approach, however, was shown to strongly depend on the choice of estimator of the MI. The problem is amplified for deterministic networks if the MI between input and representation is infinite. Thus, the estimated values are defined by the different approaches for estimation, but do not adequately represent the training process from an information-theoretic perspective. In this work, we show that dropout with continuously distributed noise ensures that MI is finite. We demonstrate in a range of experiments1 that this enables a meaningful information plane analysis for a class of dropout neural networks that is widely used in practice.
1 INTRODUCTION
The information bottleneck hypothesis for deep learning conjectures two phases of training feedforward neural networks (Shwartz-Ziv and Tishby, 2017): the fitting phase and the compression phase. The former corresponds to extracting information from the input into the learned representations, and is characterized by an increase of mutual information (MI) between inputs and hidden representations. The latter corresponds to forgetting information that is not needed to predict the target, which is reflected in a decrease of the MI between learned representations and inputs, while MI between representations and targets stays the same or grows. The phases can be observed via an information plane (IP) analysis, i.e., by analyzing the development of MI between inputs and representations and between representations and targets during training (see Fig. 1 for an example). For an overview of information plane analysis we refer the reader to (Geiger, 2022).
While being elegant and plausible, the information bottleneck hypothesis is challenging to investigate empirically. As shown by Amjad and Geiger (2020, Th. 1), the MI between inputs and the representations learned by a deterministic neural network is infinite if the input distribution is continuous. The standard approach is therefore to assume the input distribution to be discrete (e.g., equivalent to the empirical distribution of the dataset S at hand) and to discretize the real-valued hidden representations by binning to allow for non-trivial measurements, i.e., to avoid that the MI always takes the maximum value of log(|S|) (Shwartz-Ziv and Tishby, 2017). In this discrete and deterministic setting, the MI theoretically becomes equivalent to the Shannon entropy of the representation. Considering the effect of binning, however, the decrease of MI is essentially equivalent to geometrical compression (Basirat et al., 2021). Moreover, the binning-based estimate highly depends on the chosen bin size (Ross, 2014). To instead work with continuous input distributions, Goldfeld
1Code for the experiments is public on https://github.com/link-er/IP_dropout.
et al. (2019) suggest to replace deterministic neural networks by stochastic ones via adding Gaussian noise to each of the hidden representations. This kind of stochastic networks is rarely used in practice, which limits the insights brought by the analysis.
In contrast, dropout, being a source of stochasticity, is heavily used in practice due to its effective regularizing properties. The core questions investigated in this work therefore are: i) Can we obtain accurate and meaningful MI estimates in neural networks with dropout noise? ii) And if so, do IPs built for dropout networks confirm the information bottleneck hypothesis? Our main contributions answer these questions and can be summarized as follows: We present a theoretical analysis showing that binary dropout does not prevent the MI from being infinite due to the discrete nature of the noise. In contrast, we prove that dropout noise with any continuous distribution not only results in finite MI, but also provides an elegant way to estimate it. This in particular holds for Gaussian dropout, which is known to benefit generalization even more than binary dropout (Srivastava et al., 2014), and for information dropout (Achille and Soatto, 2018). We empirically analyze the quality of the MI estimation in the setup with Gaussian and information dropout in a range of experiments on benchmark neural networks and datasets. While our results do not conclusively confirm or refute the information bottleneck hypothesis, they show that the IPs obtained using our estimator exhibit qualitatively different behavior than the IPs obtained using binning estimators and strongly indicate that a compression phase is indeed happening.
2 MUTUAL INFORMATION ESTIMATION FOR NEURAL NETWORKS
We use the following notation: Lower-case letters denote realizations of random variables (RVs), e.g., b denotes a realization of the RV B; H(A) denotes the Shannon entropy of a discrete RV A whose distribution is denoted pa; h(B) is the differential entropy of a continuous RV B whose distribution is described by the probability density function pb; I(A;B) is the MI between RVs A and B; X ∈ X ⊆ Rn and Y ∈ Y are the RVs describing inputs to a neural network and corresponding targets; f(X) is the result of the forward pass of the input through the network to the hidden layer of interest; Z is an N -dimensional RV describing the hidden representations.
The caveats of different approaches to measure the MI between input X and hidden representation Z of a neural network – e.g., the MI being infinite for deterministic neural networks and continuous input distributions, the dependence of the MI estimate on the parameterization of the estimator, etc. – were discussed widely in the literature (Saxe et al., 2019; Geiger, 2022) and are briefly reviewed in this section. These caveats do not appear for the MI measured between representations Z and targets Y , since the target is in most cases a discrete RV (class), for which MI is always finite.
One option for estimating I(X;Z) is to assume the input to be drawn from a discrete distribution. This view is supported by the finite precision of the computational resources used (Lorenzen et al., 2021) and makes it easy to use a finite dataset S to describe the distribution. In such a setup, the distribution of (X,Y) is assumed uniform on the dataset S, and the discretization of Z is performed at a fixed bin size (e.g., corresponding to the computer precision). The MI between X and the discretized Ẑ is computed as I(X; Ẑ) = H(Ẑ) − H(Ẑ|X) = H(Ẑ) − 0 = H(Ẑ), where H(Ẑ|X) = 0 since f(·) and the discretization of Z are deterministic. Thus, the estimated MI between input and representation corresponds to the entropy of the discretized representation, which for small bin sizes is equal to the entropy H(X) = log |S| of the empirical distribution on the dataset, unless f(·) maps different points from the dataset to the same point in latent space.

A different option, more aligned with the common description of real-world data, is to assume X to be drawn from a continuous distribution. If the network transformation f(·) results in a discrete distribution of the representations Z, one can use the decomposition I(X;Z) = H(Z) − H(Z|X) = H(Z) to estimate MI based on Shannon entropy, provided that the sample size is sufficiently large (note that the dimensionality N of Z may be large, and therefore the estimation of H(Z) may suffer from the curse of dimensionality). However, as shown in Theorem 1 of (Amjad and Geiger, 2020), for neural networks with commonly used activation functions the distribution of the latent representation is not discrete. In this case (i.e., f(·) is deterministic, X is continuous, and Z is not purely discrete) the MI between X and Z is infinite2. By binning, i.e., by quantizing Z to a discrete RV Ẑ, the MI I(X; Ẑ) = H(Ẑ) remains finite, but the qualitative behavior of this entropy will be determined by the properties of the activation functions and the selected bin size (Saxe et al., 2019).
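To make the binning approach concrete, the following minimal numpy sketch (our own illustration, not code from the paper) computes I(X; Ẑ) = H(Ẑ) for a given matrix of hidden representations; the bin size is the free parameter whose influence is discussed above.

import numpy as np

def binned_mi_input_repr(z, bin_size):
    # Binning estimate of I(X; Z-hat) = H(Z-hat) for a deterministic network.
    # z: (num_samples, dim) hidden representations, one row per dataset point;
    # assuming each input occurs once, the empirical input distribution is
    # uniform and H(Z-hat | X) = 0, so the MI reduces to the entropy below.
    z_hat = np.floor(z / bin_size).astype(np.int64)
    _, counts = np.unique(z_hat, axis=0, return_counts=True)
    p = counts / counts.sum()
    # Entropy in nats; saturates at log(num_samples) for small bin sizes.
    return float(-np.sum(p * np.log(p)))

For very small bin sizes every input lands in its own joint bin and the estimate saturates at log |S|, exactly the degenerate behavior described above.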
From the discussion above it follows that estimating I(X;Z) in deterministic neural networks is an ill-posed problem, and that the estimates reveal not an information-theoretic picture, but often rather a geometric one that is determined by the properties of the chosen estimators. As a solution to the aforementioned challenges, several authors have suggested to investigate the information planes of stochastic neural networks instead (Amjad and Geiger, 2020; Goldfeld et al., 2019). Goldfeld et al. (2019) proposed to add zero-mean Gaussian noise D to the representations during training. This transforms a deterministic neural network into a stochastic one that was shown to yield similar training results and predictive abilities. The addition of Gaussian noise in Z = f(X) + D guarantees a finite MI3 and therefore allows for estimating MI using Monte Carlo sampling with bounds on the estimation error. Furthermore, it links the information-theoretic perspective of the IP to geometric effects taking place in latent space. Indeed, when the MI between input and representation is decreasing, it means that the noise-induced Gaussians centered at the representations of different data points overlap more strongly. Thus, it becomes harder to distinguish between inputs of the same class based on their representations, which translates into lower MI between representation and input while leaving the MI between representation and target unchanged.
As discussed above, for continuous input distributions both the IPs of deterministic neural networks as well as of stochastic neural networks with additive noise show a geometric picture (and in the former case the geometric interpretation is the only valid one, since MI is infinite in this case). Therefore, in this work we study the estimation of MI in networks with dropout layers, i.e., in settings where the stochasticity is introduced by multiplicative, rather than additive noise. In what follows we will investigate the requirements on the multiplicative noise for MI to remain finite, and whether the resulting IPs confirm the information bottleneck hypothesis.
3 MUTUAL INFORMATION IN DROPOUT NETWORKS
As discussed in the previous section, the MI between inputs and hidden representations of deterministic networks is infinite if we assume the input distribution to be continuous. To overcome this problem, some form of stochasticity has to be introduced. While adding noise to activations (Goldfeld et al., 2019) indeed allows to compute the MI, this is not used in most contemporary neural networks. In contrast, neural networks with dropout are one of the most popular classes of neural networks used in practice and are stochastic in nature as well: Adding a dropout layer to a neural network corresponds to multiplying the hidden representation with some form of random noise. Formally, denoting the random noise by a RV D of the same dimension as f(X), the hidden representation becomes Z = f(X) ◦ D, where ◦ denotes element-wise multiplication. In the most basic form, D follows a Bernoulli distribution (Srivastava et al., 2014). Such binary dropout is widely used and can intuitively be understood as “turning off” a fraction of neurons during training. There is a variety of other dropout schemes, including multiplicative Gaussian noise, fast dropout (Wang and Manning, 2013), and variational dropout (Kingma et al., 2015). Information dropout (Achille and Soatto, 2018) is a variant that uses a closed-form expression of MI as a regularization term. In order to obtain such a closed form, dropout noise is sampled from a log-normal distribution, and the prior distribution on representations is chosen depending on the activation function (ReLU or softplus). We provide details on the derivation in Appendix A.1.
2There are multiple mathematical derivations explaining why MI is infinite; one, for example, is discussed in (Saxe et al., 2019, Appendix C).
3At least when px and f(·) are such that f(X) has finite variance; then the finiteness of MI follows from the result about the capacity of the additive Gaussian noise channel, cf. (Cover and Thomas, 1991, eq. (10.17)).
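To make the multiplicative-noise view concrete, here is a small numpy sketch (our own illustration; the keep rates and noise scales are placeholders) of the three dropout variants mentioned above, each applied element-wise to a hidden representation f(x):

import numpy as np

rng = np.random.default_rng(0)

def binary_dropout(fx, p=0.8):
    # D_i ~ Bernoulli(p): each neuron is kept with probability p.
    return fx * rng.binomial(1, p, size=fx.shape)

def gaussian_dropout(fx, sigma=0.5):
    # D_i ~ N(1, sigma^2): continuous multiplicative Gaussian noise
    # (Srivastava et al., 2014).
    return fx * rng.normal(1.0, sigma, size=fx.shape)

def information_dropout(fx, alpha):
    # D_i ~ logN(0, alpha(x)^2): log-normal noise whose scale alpha may
    # depend on the input; here alpha is an array of the same shape as fx
    # (Achille and Soatto, 2018).
    return fx * np.exp(rng.normal(0.0, alpha, size=fx.shape))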
In this section, we investigate whether neural networks with dropout indeed have finite MI between input X and representation Z. While we first show a negative result by proving that binary dropout still leads to I(X;Z) = ∞, our Theorem 3.3 shows that dropout with a continuously distributed noise keeps MI finite. This fact allows us to estimate MI for such dropout neural networks in Sections 4 and 5.
3.1 BINARY DROPOUT
We start by analyzing binary dropout, which forces individual neurons to be “turned off” with some probability. More formally, the output of each neuron is multiplied with an independent Bernoulli RV that is equal to 1 with a predefined probability p. The following theorem shows that this kind of (combinatorial) stochasticity is insufficient to prevent I(X;Z) from becoming infinite.

Theorem 3.1. In the setting of (Amjad and Geiger, 2020, Th. 1), let the output f(·) of a hidden layer be parameterized as a deterministic neural network with N̂ neurons, let B ∈ {0, 1}^N̂ be the vector of independent Bernoulli RVs characterizing the dropout pattern, and let Z = f_B(X) denote the output of the hidden layer after applying the random pattern B. Then it holds that I(X;Z) = ∞.
In the proof (provided in Appendix A.2) we use the fact that the dropout mask b = (1, 1, . . . , 1) leads to an infinite MI. While the Bernoulli distribution guarantees that b = (1, 1, . . . , 1) always has nonzero probability, other distributions over {0, 1}^N̂ might not have this property. Theorem 3.1 can however be generalized to arbitrary distributions over {0, 1}^N̂:

Theorem 3.2. In the setting of (Amjad and Geiger, 2020, Th. 1), let the output f(·) of a hidden layer be parameterized as a deterministic neural network with N̂ neurons, let B ∈ {0, 1}^N̂ be the binary random vector characterizing the dropout pattern, and let Z = f_B(X) denote the output of the hidden layer after applying the random pattern B. Then, it either holds that I(X;Z) = ∞, or that I(X;Z) = 0 if the dropout patterns almost surely disrupt the information flow through the network.
The proof for the theorem is provided in Appendix A.3.
Both Theorem 3.1 and Theorem 3.2 cover as a special case the setting where dropout is applied to only a subset of layers, by simply setting those elements of B to 1 that correspond to a neuron output without dropout. If dropout is applied to only a single layer, then f_B(X) = f(X) ◦ B′, where B′ is the dropout pattern of the considered layer and ◦ denotes the element-wise product. As a consequence of Theorem 3.2, for neural networks with binary dropout any finite estimate of MI is “infinitely wrong”, and the resulting IP does not permit an information-theoretic interpretation. Essentially, the stochasticity added by binary dropout is combinatorial, and hence cannot compensate for the “continuous” stochasticity available in the input X.
3.2 DROPOUT WITH CONTINUOUS NOISE
As proposed by Srivastava et al. (2014), dropout can also be implemented using continuous Gaussian noise with mean vector µ = 1 and diagonal covariance matrix σ²I with fixed variance σ². Achille and Soatto (2018), in contrast, proposed log-normally distributed dropout noise, the variance of which depends on the input sample x (this is termed information dropout). Generalizing both Gaussian and information dropout, in this section we consider continuously distributed multiplicative noise D. In contrast to binary noise sampled from a discrete distribution, continuously distributed noise renders the joint distribution of (Z,X) absolutely continuous with respect to the product of the marginals of Z and X, allowing for finite values of MI between the input X and the hidden representation Z. The following theorem states that the MI between the input and the hidden representation of the dropout layer is indeed finite, even if the variance of the noise depends on the input.

Theorem 3.3. Let X be bounded in all dimensions, f(·) be parameterized by a deterministic neural network with Lipschitz activation functions, and let Z = f(X) ◦ D(X), where the components of the noise D(X) = (D_1(X), . . . , D_N(X)) are conditionally independent given X and have essentially bounded differential entropy and second moments, i.e., E[D_i(X)²] ≤ M < ∞ X-almost surely, for some M and all i = 1, . . . , N. Then, if the conditional expectation E[log(|f(X)|) | |f(X)| > 0] is finite in each of its elements, we have I(X;Z) < ∞.
Theorem 3.3 (proof in Appendix A.4) can be instantiated for Gaussian dropout, where D_i(x) = D_i ∼ N(1, σ²), and for information dropout, where D_i(x) ∼ logN(0, α²(x)). Note that for information dropout we have to ensure that the (learned) variance α²(x) stays bounded from above and below; e.g., in the experiments of Achille and Soatto (2018), α²(x) is restricted to be below 0.7.
The requirement that the conditional expectation E[log(|f(X)|) | |f(X)| > 0] is finite in each of its elements is critical for the proof. Indeed, one can construct a synthetic (albeit unrealistic) example for which this condition is violated:

Example 3.4. Let X′ have the probability density function

p_{x′}(x′) = 2^{−n} if x′ ∈ [2^n, 2^n + 1), n = 1, 2, . . ., and p_{x′}(x′) = 0 otherwise.

Evidently, E[X′] = ∞. Then, X = e^{−X′} is bounded, since its alphabet is a subset of (0, e^{−2}]. Now consider a neural network with a single hidden layer with one neuron. Let the weight from X to the single neuron be 1, and assume that the neuron uses a ReLU activation function. Then,

E[log |f(X)|] = E[log |X|] = E[log |e^{−X′}|] = E[−X′] = −∞.
It can be shown that in this example the probability density function of X (as well as of f(X)) is not bounded. Under the assumption that the probability density function p_f of f(X) is bounded, the conditional expectation in the assertion of the theorem is finite: assuming that p_f ≤ C < ∞, by the law of the unconscious statistician we have

E_x[log(|f(X)_i|) | |f(X)_i| > 0] = ∫_0^{∥f(X)_i∥_∞} log(f) p_f(f) df = ∫_0^1 log(f) p_f(f) df + ∫_1^{∥f(X)_i∥_∞} log(f) p_f(f) df =: I_1 + I_2.

The term I_2 is positive and finite, since 0 ≤ log(f) ≤ log ∥f(X)_i∥_∞ on its integration domain. Due to the boundedness of p_f we also have I_1 ≥ C ∫_0^1 log(f) df = C [f(log(f) − 1)]_0^1 = −C > −∞.
However, the boundedness of p_f is hard to guarantee for an arbitrary neural network. In contrast, the boundedness of p_x is more realistic and easier to check. For bounded p_x we can prove (in Appendix A.5) the finiteness of the expectation E[log(|f(X)|) | |f(X)| > 0] for ReLU networks:

Proposition 3.5. Consider a deterministic neural network function f(·) constructed with finitely many layers, a finite number of neurons per layer, and ReLU activation functions. Let X be a continuously distributed RV with probability density function p_x that is bounded (p_x ≤ P < ∞) and has bounded support X. Then, the conditional expectation E[log(|f(X)|) | |f(X)| > 0] is finite in each of its elements.
Finally, note that Theorem 3.3 assumes that the network is deterministic up to the considered dropout layer. This does not come with a loss of generality for feed-forward networks (e.g., with no residual connections): indeed, one can apply Theorem 3.3 to the first hidden layer representation Z^{(1)} with dropout, where this assumption always holds. Then, for the ℓ-th hidden layer and irrespective of whether this layer also has dropout, the MI I(X;Z^{(ℓ)}) is finite due to the data processing inequality (Cover and Thomas, 1991, Th. 2.8.1). Therefore, Theorem 3.3 ensures that MI is finite for all hidden layers after the first continuous dropout layer.
4 ESTIMATION OF MI UNDER CONTINUOUS DROPOUT
We now consider estimating I(X;Z) in networks with continuously distributed dropout, starting with information dropout. As discussed by Achille and Soatto (2018), networks with information
dropout are trained with the cross-entropy loss ℓce (which is involved in the known variational lower bound I(Z;Y ) ≥ H(Y )− ℓce) and regularized using a variational upper bound on I(X;Z). Therefore, estimates of the quantities displayed in the information plane are directly used in the training loss and, thus, easy to track, at least for softplus activation functions4.
In the case of Gaussian dropout, to estimate I(X;Z) we approximate h(Z) and h(Z|X) separately (pseudocode is given in Algorithm 1 in Appendix A.6).
For estimating h(Z) we employ a Monte Carlo (MC) estimate, similar to the one proposed by Goldfeld et al. (2019). That is, we approximate the distribution of Z as a Gaussian mixture, where we draw samples f(x^{(j)}), j = 1, . . . , |S|, and place a Gaussian with diagonal covariance matrix with variances σ²|f(x^{(j)})_i|², i = 1, . . . , N, on each sample f(x^{(j)}). As a sanity check, we also compute an upper bound on h(Z) given by the entropy of a Gaussian with the same covariance matrix as Z. Note that the estimation of the upper bound requires a sufficiently large number of samples to guarantee that the sample covariance matrix is not singular and that the resulting entropy estimate is finite.
For each fixed x the conditional distribution p_{z|x} is a Gaussian distribution N(f(x), diag(σ²|f(x)_i|²)). Moreover, when the input is fixed, the components of Z|X = x are independent, since the components of the noise are independent. This allows us to compute h(Z|X) as a sum of the h(Z_i|X), where Z_i is the i-th component of the representation vector. The computation of h(Z_i|X) requires integration over the input space for computing the expectation E_x[h(Z_i|X = x)]. This can be approximated via MC sampling. That is, we approximate h(Z_i|X) by (1/|S|) Σ_{j=1}^{|S|} h(Z_i|X = x^{(j)}), where h(Z_i|X = x^{(j)}) = log(|f(x^{(j)})_i| σ √(2πe)).
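The following self-contained numpy sketch (our own code, mirroring Algorithm 1 in Appendix A.6; the function and variable names are ours) implements both estimates for Gaussian dropout — the closed-form h(Z|X) and the Gaussian-mixture MC estimate of h(Z):

import numpy as np
from scipy.special import logsumexp

def estimate_mi_gaussian_dropout(fx, sigma, n_noise=10, rng=None):
    # fx: (m, N) noiseless representations f(x^(j)); Z = f(X) * D, D_i ~ N(1, sigma^2).
    rng = np.random.default_rng() if rng is None else rng
    m, N = fx.shape
    eps = 1e-12  # numerical guard against exactly-zero activations

    # h(Z|X): closed form. Z_i | X=x is N(f(x)_i, sigma^2 f(x)_i^2), so
    # h(Z_i | X=x) = log(|f(x)_i| * sigma * sqrt(2*pi*e)); sum over i, average over j.
    h_z_given_x = np.mean(
        np.sum(np.log(np.abs(fx) * sigma * np.sqrt(2 * np.pi * np.e) + eps), axis=1))

    # h(Z): cross-entropy MC estimate. Approximate p_z by a mixture with one
    # Gaussian per data point (diagonal variances sigma^2 f(x^(j))_i^2) and
    # evaluate its log-density on freshly drawn noisy samples. For large
    # datasets this all-pairs evaluation should be batched.
    noise = rng.normal(1.0, sigma, size=(n_noise, m, N))
    z = (fx[None, :, :] * noise).reshape(-1, N)
    var = (sigma * np.abs(fx)) ** 2 + eps
    log_comp = -0.5 * np.sum(
        (z[:, None, :] - fx[None, :, :]) ** 2 / var[None] + np.log(2 * np.pi * var)[None],
        axis=2)
    h_z = -np.mean(logsumexp(log_comp, axis=1) - np.log(m))

    return h_z, h_z_given_x, h_z - h_z_given_x

# Small demo on random representations.
rng = np.random.default_rng(0)
fx = 2.0 * rng.standard_normal((200, 4)) + 0.5
h_z, h_zx, mi = estimate_mi_gaussian_dropout(fx, sigma=0.5, rng=rng)
print(f"h(Z)={h_z:.3f}  h(Z|X)={h_zx:.3f}  I(X;Z)~{mi:.3f} nats")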
We consider a simple toy problem for validating our approach to estimating MI: the input X is generated from an n-dimensional standard normal distribution, modified with a function f(X) = 2X + 0.5, and then subjected to Gaussian dropout distributed according to N(1, σ²). We investigate the convergence of our estimator for h(Z|X) for an increasing number of samples. For each input data point, we generate 10 noise masks, thus obtaining 10 samples of Z for each x^{(j)}. The results in Fig. 2 show that the estimate stabilizes with a larger number of samples for different dimensionalities of the data. We also compare the estimate to the upper bound for h(Z) in Fig. 3.
We finally compare our estimation of MI to binning, the EDGE estimator (Noshad et al., 2019), and the lower bounds analyzed by McAllester and Stratos (2020). The results are shown in Fig. 4. In the plot, doe stands for the difference-of-entropies (DoE) estimator and doe_l stands for DoE with logistic parametrization (McAllester and Stratos, 2020). The binning estimator underestimates the MI when the bin size is large and overestimates it when the bin size is small (Ross, 2014), which can be clearly seen in the plots, where bins are organized both by size (upper axis) and by number (lower axis). Moreover, with high-dimensional data, binning hits the maximal possible value of log(|S|) very fast, not being able to reach larger MI values. According to McAllester and Stratos (2020), lower bound-based MI estimators (e.g., MINE (Belghazi et al., 2018)) also need exponentially (in the true value of MI) many data points for a good approximation; otherwise they will always heavily underestimate the MI.
4Indeed, for softplus activation functions, the variational approximation of I(X;Z) is available in closed form, while for ReLU activation functions, the available expression is only useful for minimizing, rather than for computing, I(X;Z) (see Appendix A.1).
Further plots for different dropout variances and input dimensionalities are given in Appendix A.7.
5 INFORMATION PLANE ANALYSIS OF DROPOUT NETWORKS
We use the estimators described in the previous section for an IP analysis of networks with Gaussian and information dropout. We always consider only the representation corresponding to the first dropout layer5 and measure the MI in nats, i.e., we use the natural logarithm. For estimating I(Y;Z), we employ the EDGE estimator (Noshad et al., 2019) for Gaussian dropout and the variational estimate for information dropout. IPs created using the binning estimator use binning for both I(X;Z) and I(Y;Z).
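For the binning-based IPs, I(Y; Ẑ) can be obtained from the joint histogram of labels and binned representations; a minimal numpy sketch of this plug-in estimate (our own illustration — the EDGE and variational estimators used for our own IPs are not reproduced here) is:

import numpy as np

def binned_mi_label_repr(y, z, bin_size):
    # Plug-in estimate of I(Y; Z-hat) for discrete labels y (shape (m,))
    # and binned representations z (shape (m, N)). Note that the number of
    # joint bins grows quickly with N, so this requires many samples.
    z_hat = np.floor(z / bin_size).astype(np.int64)
    _, z_idx = np.unique(z_hat, axis=0, return_inverse=True)
    _, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((y_idx.max() + 1, z_idx.max() + 1))
    np.add.at(joint, (y_idx, z_idx), 1.0)
    p = joint / joint.sum()
    py, pz = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (py @ pz)[nz])))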
In the first set of experiments we investigate the difference between IPs obtained via our proposed estimator and via binning. The analysis on the MNIST dataset was performed for a LeNet network (LeCun et al., 1998) that achieves 99% accuracy and a simple fully-connected (FC) network with three hidden layers (28×28−512−128−32−10) and softplus activation functions achieving 97% accuracy. We analyze both information dropout and Gaussian dropout in the LeNet network and only Gaussian dropout in the FC network. In both cases dropout is applied to the penultimate layer. We compare IPs based on binning estimators to IPs based on our estimators in Fig. 1 and Fig. 5.
5This makes the MI estimation more efficient, since the preceding part of the network is deterministic, which allows for an analytical expression of h(Z|X = x). Note that the analysis could be extended to higher layers as well, since for those MI also remains finite; however, an estimator different from ours would have to be used for those layers.
We also analyze the IPs for a ResNet18 trained on CIFAR10 (see Fig. 6), where we added an additional bottleneck layer with 128 neurons and Gaussian dropout before the output layer, and which achieves an accuracy of 94%.
Interestingly, for all networks and datasets we observe significant compression for our estimator and a lack of compression for binning estimators (also for different bin sizes, see Appendix A.8). This indicates either that the MI compression measured in dropout networks is different from purely geometrical compression, or that the number of samples |S| is insufficient to reliably estimate I(X;Z) by binning.
In the second set of experiments, we analyze IPs in information dropout networks, with MI estimated as described before. To this end, we trained a fully convolutional neural network (fullCNN) on CIFAR10 using code provided by Achille and Soatto (2018). Training proceeded for 200 epochs using SGD with momentum and, different from the original setup, with only one dropout layer after the third convolutional layer. The batch size was set to 100, and the learning rate was initially set to 0.05 and reduced by a factor of 0.1 after epochs 40, 80, and 120. The network was trained with different values of the regularization weight β and different numbers of filters in the convolutional layers. That is, the full-size fullCNN has 3 layers with 96 filters succeeded by 4 layers with 192 filters, while the small network contains only 25% of these filters. Also different from the original setup, we allowed the noise variance to grow up to 0.95 in order to make the effect of limiting the information between representation and input more pronounced. Results are shown in Fig. 7. It can be seen that regularizing I(X;Z) is effective (i.e., larger values of β lead to smaller I(X;Z)), and that regularizing too strongly (β = 20) leads to worse performance: the test error is 5% higher and the train error is 10% higher. We can further see stronger compression for smaller β and almost no compression for larger β. We conjecture that compression can only become visible if sufficient information is permitted to flow through the network (which happens only for small β). Fig. 7 (c) and (d) show the IPs for the small fullCNN. It can be seen that the smaller network appears not to compress at all (see Fig. 7 (c)), but that I(X;Z) rather increases throughout training until it reaches the same level as in Fig. 7 (a). This indicates that β determines the point in the IP to which information compresses, and that the IP curve traversed during training depends on the overall capacity of the neural network.
Plots for the additional experiments can be found in Appendix A.8.
6 DISCUSSION
Whether or not information-theoretic compression is correlated with improved generalization is the main question connected to, and the most prominent justification for, information plane analysis of deep neural networks. Such a connection, however, can only be tested for neural networks for which MI is finite and therefore measurable. In our theoretical analysis, we investigate whether different variants of dropout noise allow for finite values of MI under the assumption of a continuous input distribution. We answer this question positively by showing that in networks with certain constraints on the induced distribution of the representations, continuous dropout noise with finite differential entropy prevents I(X;Z) from becoming infinite. We have further shown that these constraints on the distribution of the representation are satisfied in ReLU networks if the probability density function of the input is bounded.
Following this conclusion we propose an MC-based estimate of MI in Gaussian dropout networks and perform an IP analysis for different networks with Gaussian and information dropout on different datasets. The experiments show that the binning estimator behaves very differently from our estimator: While our estimator mostly exhibits compression in the IP, the binning estimator does not. Further, the values of I(X;Z) for our estimator are often orders of magnitude larger than the values of I(Y ;Z), especially when compared to the binning estimator. Assuming that the proposed estimators are reasonably accurate, this makes a connection between information-theoretic compression and generalization questionable. While these preliminary experiments do not conclusively answer the question if such a connection exists, they show a practically relevant setting in which this correlation can be studied.
The discrepancy between the binning estimator and our estimator further suggests that either the information-theoretic compression we observe using our estimator is not geometric, or that there are insufficient samples to obtain reliable estimates from the binning estimator. This is in contrast with the work of Goldfeld et al. (2019), which showed that information-theoretic and geometric compression were linked in their networks with additive noise. We thus believe that a closer investigation of whether multiplicative noise induces geometric compression, and whether the induced compression improves generalization performance, are interesting questions for future research.
ACKNOWLEDGEMENTS
The authors want to thank Michael Kamp, Simon Damm, Ziv Goldfeld, and Jihao Andreas Lin for valuable discussions about the work.
Asja Fischer acknowledges support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy - EXC 2092 CASA - 390781972.
The Know-Center is funded within the Austrian COMET Program - Competence Centers for Excellent Technologies - under the auspices of the Austrian Federal Ministry of Climate Action, Environment, Energy, Mobility, Innovation and Technology, the Austrian Federal Ministry of Digital and Economic Affairs, and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency FFG.
A APPENDIX
A.1 INFORMATION DROPOUT
One type of dropout with continuous noise is termed information dropout (Achille and Soatto, 2018). It is a technique that combines dropout noise sampled from a log-normal distribution ϵ ∼ p_ϵ = logN(0, α²_θ(x)), where α_θ(x) is a learnable parameter depending on the parameters θ of the network, with the introduction of a regularization term KL(p_{z|x} ∥ ∏_{i=1}^{|Z|} p_{z_i}). This regularization term is based on an information bottleneck objective for training neural networks: rewriting the information bottleneck Lagrangian and adding a disentanglement term (i.e., we want each element of the representation Z to be independent of the others) results in the aforementioned formula. Additionally, it is proposed to use as prior p_z a particular distribution, defined by the choice of activation function (ReLU or softplus), whose validity is empirically verified. Such priors and the selected dropout noise allow for deriving a closed form of the KL-divergence, which makes it easy to directly track IP values while training.
In the following, we provide the closed form for computation of I(X;Z) as proposed by Achille and Soatto (2018):
I(X;Z) = KL(p_{x,z} ∥ p_x p_z) = ∫ p_{x,z}(x, z) log( p_{x,z}(x, z) / (p_z(z) p_x(x)) ) dx dz
= ∫ p_x(x) p_{z|x}(z) log( p_x(x) p_{z|x}(z) / (p_z(z) p_x(x)) ) dx dz = ∫ p_x(x) KL(p_{z|x} ∥ p_z) dx
= E_x[KL(p_{z|x} ∥ p_z)].

Empirically we can approximate this as I(X;Z) ≈ (1/|S|) Σ_{j=1}^{|S|} KL(p_{z|x^{(j)}} ∥ p_z), where we average over the dataset of |S| samples of X. First, we discuss ReLU neural networks. The prior distribution p_z in this case is a mixture of two parts: an improper log-uniform distribution and a point mass at 0. Such a prior is empirically valid for ReLU activations. First we restrict the derivation to the case f(X) ≠ 0 (which in turn means that Z ≠ 0, since the noise ϵ is log-normal and cannot be 0). In the following we will omit the subscript of probability density functions when it is clear from the argument.
KL(p_{z|x^{(j)}} ∥ p_z) = KL(p_{log(z)|x^{(j)}} ∥ p_{log(z)})   (1)
= ∫ p(log(z)|x^{(j)}) log( p(log(z)|x^{(j)}) / p(log(z)) ) dz
= ∫ p(log(ϵ) + log(f(x^{(j)}))|x^{(j)}) log( p(log(ϵ) + log(f(x^{(j)}))|x^{(j)}) / c ) dϵ   (2)
= ∫ p(log(ϵ)) log(p(log(ϵ))) dϵ − ∫ p(log(ϵ)) log(c) dϵ   (3)
= ∫ p(log(ϵ)) log(p(log(ϵ))) dϵ − log(c) = −h(log(ϵ)) − log(c)   (4)
= −(log(α(x^{(j)})) + (1/2) log(2πe)) − log(c),   (5)

where equation 1 holds due to the invariance of the KL-divergence under parameter transformation with a strictly monotone function (log(·)); equation 2 holds since log(Z) = log(ϵ) + log(f(X)) and p_{log(z)} = c for the improper log-uniform distribution; equation 3 takes into account that p_{x+const} = p_x, that log(f(x))|x^{(j)} is constant, and that p_{log(ϵ)|x^{(j)}} = p_{log(ϵ)} because ϵ is independent of X; equation 4 uses that ∫ p_{log(ϵ)} dϵ = 1; finally, equation 5 holds because log(ϵ) is normally distributed and its entropy can be computed in closed form.
Now we set f(X) = 0, which gives Z = 0. Then p_{Z|X} = δ_0 (a point mass, or Dirac delta) and the KL term becomes

KL(p_{z|x^{(j)}} ∥ p_z) = ∫ p_{z|x}(z) log( p_{z|x}(z) / p_z(z) ) dz = ∫ δ_0 log( δ_0 / (q δ_0) ) dz = −log(q),   (6)

where q is the weight of the point mass in the prior p_z.
Combining equation 5 and equation 6 results in a computable I(X;Z). As can be seen, one has to correctly combine non-zero and zero values of f(X) and also know the parameters of the prior p_z: the constant c and the weight q. This makes it impractical for IP analysis.
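For concreteness, a numpy sketch of this combination (our own; the prior constant c and point-mass weight q are treated as known inputs, which is precisely what makes this route impractical in an IP analysis):

import numpy as np

def info_dropout_mi_relu(fx, alpha, c, q):
    # Combines equations 5 and 6: per dimension, the KL term is
    # -(log(alpha(x)) + 0.5*log(2*pi*e)) - log(c) where f(x) != 0, and
    # -log(q) where f(x) == 0; sum over dimensions, average over the dataset.
    active = fx != 0
    kl = np.where(
        active,
        -(np.log(np.where(active, alpha, 1.0)) + 0.5 * np.log(2 * np.pi * np.e)) - np.log(c),
        -np.log(q))
    return float(np.mean(np.sum(kl, axis=1)))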
If instead of ReLU the network has softplus activations, then the prior on the distribution of the representations is a standard log-normal instead of a log-uniform with a Dirac delta. In this case the computation is very simple, since the KL divergence between two log-normal distributions equals the KL divergence between the corresponding normal distributions:
KL(p_{z|x^{(j)}} ∥ p_z) = (1/(2σ²)) (α²(x^{(j)}) + µ²) − log(α(x^{(j)}))/σ − 1/2,   (7)
where σ² = 1 and µ = 0 are the known parameters of the prior. Thus, softplus activations (equation 7) allow for a direct computation of I(X;Z).
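In code, equation 7 reduces to a few lines (our own sketch, with σ² = 1 and µ = 0 plugged in as stated above):

import numpy as np

def info_dropout_mi_softplus(alpha):
    # Equation 7 with prior parameters sigma^2 = 1 and mu = 0:
    # KL = alpha(x)^2 / 2 - log(alpha(x)) - 1/2 per dimension;
    # alpha: (m, N) learned noise scales alpha(x^(j)).
    kl = 0.5 * alpha**2 - np.log(alpha) - 0.5
    return float(np.mean(np.sum(kl, axis=1)))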
A.2 PROOF OF THEOREM 3.1
Proof. Using the chain rule of MI, we have

I(X;Z) = I(X;Z,B) − I(B;X|Z) = I(X;Z|B) + I(B;X) − I(B;X|Z) ≥ I(X;Z|B) − H(B),

where the inequality follows from dropping I(B;X) (B and X are independent, so this term is zero) and from the fact that I(B;X|Z) ≤ H(B). Since B ∈ {0,1}^N̂ is a discrete RV, it immediately follows that H(B) ≤ N̂ log 2. Now note that

I(X;Z|B) = Σ_{b∈{0,1}^N̂} P(B = b) I(X;Z|B = b).

Since the Bernoulli RVs are independent, positive probability mass is assigned to b = (1, 1, . . . , 1), i.e., to the case where all neurons are active. Evidently, when b = (1, 1, . . . , 1) it follows that Z = f(X). Thus, with (Amjad and Geiger, 2020, Th. 1),

I(X;Z|B) ≥ P(B = (1, 1, . . . , 1)) I(X; f(X)) = ∞,

and hence I(X;Z) = ∞.
A.3 PROOF OF THEOREM 3.2
Proof. If the binary dropout is such that nonzero probability is assigned to the dropout mask b = (1, 1, . . . , 1), then the statement of the theorem follows as in the proof of Theorem 3.1.

Assume now that B is such that zero mass is assigned to b = (1, 1, . . . , 1). To treat this case, we suppose that the distribution of X has a portion with a continuous probability density function on a compact set and that the neural network has activation functions that are either bi-Lipschitz or continuously differentiable with a strictly positive derivative (following the requirements of Amjad and Geiger (2020, Th. 1)). Then, we obtain I(X; f(X)) = ∞ from (Amjad and Geiger, 2020, Th. 1) for almost all parameterizations of the neural network. Under this setting, f_B(X) is again a neural network with activation functions that are either bi-Lipschitz or continuously differentiable with a strictly positive derivative. Assuming that b is such that the input of the network is not completely disconnected from the considered layer, for this pattern we have I(X;Z|B = b) = ∞. Otherwise, we obviously have I(X;Z|B = b) = 0. The statement of the theorem follows from taking the expectation over all patterns b.
A.4 PROOF OF THEOREM 3.3
Proof. W.l.o.g. we first restrict our attention to the dimensions of the representation Z that are different from zero. Specifically, suppose that Z = (Z_1, . . . , Z_N) and that B = (B_1, . . . , B_N) with B_i = 0 if Z_i = 0 and B_i = 1 otherwise. Clearly, B is a function of Z, hence I(X;Z) = I(X;Z,B) = I(B;X) + I(Z;X|B). Since B is binary, we have that I(X;B) ≤ H(B) ≤ N log 2. Let Z_B = (Z_i | i: B_i = 1) denote the sub-vector of non-zero elements of Z; then

I(X;Z) ≤ N log 2 + Σ_b P(B = b) I(Z_b;X),

where, if B = b, I(Z_b;X) = I(Z;X|B = b) holds because constant (i.e., 0) RVs do not contribute to MI. Therefore, I(X;Z) is finite iff I(Z_b;X) = I(Z;X|B = b) is finite B-almost surely. We thus now fix an arbitrary B = b and continue the proof for Z = Z_b.

We decompose the MI into differential entropies as I(X;Z) = h(Z) − h(Z|X). The differential entropy of the representations h(Z) is upper-bounded by the entropy of a Gaussian RV with the same covariance matrix Σ as the distribution of Z = (Z_1, . . . , Z_N), i.e., by (N/2) log(2π) + (1/2) log(det(Σ)) + N/2. From Hadamard's inequality, and since Σ is positive semidefinite, it follows that det(Σ) ≤ ∏_{i=1}^{N} σ²_{ii}, where the σ²_{ii} are the diagonal elements of the covariance matrix, i.e., σ²_{ii} = Var[Z_i]. This variance can be bounded from above. Specifically, since X is bounded and f(·) is a composition of Lipschitz functions, f(X)_i is bounded as well. Recalling that E[D_i(x)²] ≤ M holds X-almost surely, this yields

Var[Z_i] ≤ E[f(X)_i² D_i(X)²] = E_x[f(X)_i² E_d[D_i(X)² | X]] ≤ M E_x[f(X)_i²] ≤ M ∥f(X)_i∥²_∞.

It remains to show that h(Z|X) > −∞. Due to the conditional independence of D_i and D_j given X, for all i ≠ j, the conditional differential entropy of Z factorizes into the sum of the conditional differential entropies of its components, i.e., h(Z|X) = Σ_{i=1}^{N} h(Z_i|X). We write this conditional entropy as an expectation over X and obtain, using (Cover and Thomas, 1991, Th. 9.6.4),

h(Z_i|X) = E_x[h(Z_i|X = x)] = E_x[h(D_i(x) f(x)_i | X = x)] = E_x[h(D_i(x)|X = x)] + E_x[log(|f(X)_i|)]

by the formula of change of variables for differential entropy. Both terms are finite as per the assertion of the theorem. The first term is finite since we assumed that the differential entropy of D_i(X) is essentially bounded, i.e., there exists a number C < ∞ such that h(D_i(x)) ≤ C X-almost surely. The second term is finite since we assumed that the conditional expectation E[log(|f(X)|) | |f(X)| > 0] is finite in each of its elements, and since Z_i ≠ 0 implies |f(X)_i| > 0. This completes the proof.
A.5 PROOF OF PROPOSITION 3.5
Proof. We assume w.l.o.g. that f(·) has a range of dimension D = 1, i.e., f : X → R, where X ⊆ R^n is the function domain. The proof can be straightforwardly extended to several output dimensions of f(·). Since f(·) is constructed using a finitely-sized neural network with ReLU activation functions, it is piecewise affinely linear on a finite partition of the function domain. The fact that E[log(|f(X)|) | |f(X)| > 0] < ∞ then follows immediately from the fact that X, and thus |f(X)|, is bounded. To investigate whether E[log(|f(X)|) | |f(X)| > 0] > −∞, we split the domain X into the following partitions:

1. X_0 = f^{−1}({0}) denotes the element of the partition on which f(X) vanishes;

2. {X^c_i}_{i=1,...,ℓ} denotes the elements of the partition of X on which f(X) = c_i, i.e., on which f(·) is constant;

3. X^a = ∪_{i=1}^{m} X^a_i denotes the union of all other sets {X^a_i}_{i=1,...,m} of the partition, on which f(·) is not constant.
For the last subset, define the function f̃ : X^a → R^n via f̃(x) = (|f(x)|, x_2, x_3, . . . , x_n). Note that f̃(·) is piecewise bijective, hence W̃ = f̃(X) has a probability density function that is obtained from the change of variables formula:

p_w̃(w̃) = Σ_{x ∈ f̃^{−1}(w̃)} p_x(x) / |det(J_f̃(x))|,

where J_f̃(x) = [∂f̃_i/∂x_j (x)] is the Jacobian matrix of f̃(·), with f̃_1(x) = |f(x)| and f̃_j(x) = x_j for all j ≥ 2. It follows that the Jacobian matrix is triangular and has determinant |∂f/∂x_1(x)|. The density p_{w|X^a} of the conditional random variable W = |f(X)| given X ∈ X^a can then be obtained by marginalization from p_w̃:
p_{w|X^a}(w) = ∫ p_w̃(w, x_2^n) dx_2^n = ∫ Σ_{x ∈ f̃^{−1}(w, x_2^n)} p_x(x) / |∂f/∂x_1(x)| dx_2^n,   (8)

where x_2^n = (x_2, . . . , x_n) and where we perform an (n − 1)-fold integral. Thus, by the Lebesgue decomposition, the distribution of W = |f(X)| can be split into an absolutely continuous component with probability density function p_{w|X^a} and a discrete component with finitely many mass points, for which we have P(W = c_i) = ∫_{X^c_i} p_x(x) dx =: p_x(X^c_i). By the law of the unconscious statistician, we then obtain
E[log(|f(X)|) | |f(X)| > 0] = E[log(W) | W > 0]
= Σ_{i=1}^{ℓ} p_x(X^c_i) log|c_i| + p_x(X^a) ∫_0^∞ log(w) p_{w|X^a}(w) dw
= Σ_{i=1}^{ℓ} p_x(X^c_i) log|c_i| + p_x(X^a) I_1 + p_x(X^a) I_2, with I_1 := ∫_0^ε log(w) p_{w|X^a}(w) dw and I_2 := ∫_ε^∞ log(w) p_{w|X^a}(w) dw,
where in the last line we split the integral at a fixed ε ≪ 1. Clearly, the first sum is finite since |c_i| > 0 for all i. For the remaining summands involving integrals, suppose for now that p_{w|X^a}(w) ≤ C < ∞. Then,
I_1 = ∫_0^ε p_{w|X^a}(w) log(w) dw ≥ ∫_0^ε C log(w) dw = C(ε log(ε) − ε) > −∞,
I_2 = ∫_ε^∞ p_{w|X^a}(w) log(w) dw ≥ ∫_ε^∞ p_{w|X^a}(w) (1 − 1/w) dw ≥ ∫_ε^∞ p_{w|X^a}(w) (1 − 1/ε) dw ≥ 1 − 1/ε > −∞.
It thus remains to show that p_{w|X^a}(w) ≤ C for w ∈ [0, ε]. To this end, we revisit equation 8 and note that the integral is finite if i) p_x is bounded, ii) the integration is over a bounded set, and iii) |∂f/∂x_1(x)| ≥ ε_1 > 0. Conditions i) and ii) are ensured by the assertion of the proposition. It remains to show that condition iii) holds.
Note that, in contrast to using f̃(x) = (|f(x)|, x_2, x_3, . . . , x_n), the same p_{w|X^a}(w) can also be obtained by using the piecewise bijective function f̃(x) = (x_1, |f(x)|, x_3, . . . , x_n), etc. Hence, p_{w|X^a}(w) ≤ C if the partial derivative of f is bounded from below for at least one dimension, i.e., if there exists an i such that |∂f/∂x_i(x)| ≥ ε_1. Since we have

∥∇_x f(x)∥_1 = Σ_{i=1}^{n} |∂f/∂x_i(x)|,

this is equivalent to requiring that the L1 norm of the gradient is bounded from below. Indeed, remember that f is piecewise affinely linear with finitely many pieces, and its restriction to X^a is non-constant. On its restriction to X^a we thus have ∇_x f(x) = g_i ≠ 0 for all x ∈ X^a_i and some i ∈ {1, . . . , m}. Hence, we can find an ε_1 such that min_i ∥g_i∥_1 ≥ n · ε_1 > 0, which implies that there exists an i for which |∂f/∂x_i(x)| ≥ ε_1 for all x ∈ X^a. This completes the proof.
A.6 ESTIMATION OF MI UNDER GAUSSIAN DROPOUT
In Algorithm 1 we describe how the estimation of I(X;Z), with Z being a representation under Gaussian dropout, can be performed. This is the way we estimated MI in our experiments, but any other estimator could be used in this setup.
Algorithm 1 Estimation of MI under Gaussian dropout
Require: GMM-MEANS, σ, nonoise-reprs ▷ number of Gaussians in the mixture; noise standard deviation; noiseless representations
reprs ← [] ▷ generate noisy samples with the corresponding variance
for all nr in nonoise-reprs do
    for i ← 1, n do
        ϵ ← sample from N(1, σ²) ▷ one multiplicative noise mask per repetition
        reprs ← reprs + [nr ∗ ϵ]
    end for
end for
points ← nonoise-reprs[: GMM-MEANS] ▷ build the GMM on a restricted number of points for faster computation
d ← []
for all p in points do
    d ← d + [Gaussian(p, σ ∗ |p|)]
end for
gmm ← MixtureModel(d)
lp ← [] ▷ log-probabilities of the noisy samples under the GMM
for all r in reprs do
    lp ← lp + [gmm.log_probability(r)]
end for
h(z) ← −mean(lp) ▷ cross-entropy MC estimate of the differential entropy
h(z|x) ← 0 ▷ conditional entropy via the closed-form formula
for i ← 1, dim(reprs[0]) do ▷ for each dimension of the representation
    h(z|x) ← h(z|x) + mean(log(√(2πe) σ |nonoise-reprs[:, i]|)) ▷ use the noiseless representations, each dimension separately
end for
I(x; z) ← h(z) − h(z|x) ▷ final MI estimate
A.7 EVALUATION OF ESTIMATOR
Fig. 8 shows the upper bound and the estimate of h(Z) with a higher noise level than in Fig. 3. Larger noise increases the gap between the Gaussian-entropy-based upper bound and the mixture-based estimate, as expected.
In Fig. 9 we see the convergence of the MC estimate for h(Z|X) under larger noise. As expected, a larger noise variance results in smaller MI values (Fig. 10), while the trend observed when changing the dimensionality stays the same.
A.8 INFORMATION PLANE ANALYSIS
Note that in the experiments we analyze IPs on the training samples and test samples separately. In order to obtain a valid sample of hidden representations for the MI estimation during inference, we apply MC-Dropout, as opposed to the usual way of performing inference with dropout turned off. According to Srivastava et al. (2014) this is the theoretically sound way to obtain predictions, while turning off dropout and re-scaling the weights results in an approximation that allows for faster computation.
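A sketch of this MC-Dropout evaluation in PyTorch (our own illustration; our Gaussian and information dropout layers are custom modules, so the isinstance check below — which targets the built-in binary dropout classes — would have to be adapted to the dropout class actually used):

import torch

def mc_dropout_representations(model, layer, x, n_samples=10):
    # Keep dropout stochastic at inference time while all other layers
    # (e.g., batch norm) stay in evaluation mode.
    model.eval()
    for m in model.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d)):
            m.train()
    captured = []
    handle = layer.register_forward_hook(
        lambda mod, inp, out: captured.append(out.detach()))
    with torch.no_grad():
        for _ in range(n_samples):  # one fresh dropout mask per forward pass
            model(x)
    handle.remove()
    return torch.stack(captured)  # (n_samples, batch, ...) hidden representations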
In Fig. 11, Fig. 12, and Fig. 13 we provide IPs built on the test sets of the corresponding datasets (MNIST, MNIST, and CIFAR10).
In Fig. 14 we provide additional IPs for the binning estimator with a varying number of bins used for MI estimation. We report the results for the fully-connected network trained on MNIST with Gaussian dropout variance 0.2.
In Fig. 15 we show the IPs obtained for the same fully-connected network trained on MNIST with the variance of the Gaussian dropout set to 0.4. | 1. What is the focus of the paper regarding neural network analysis?
2. What are the strengths and weaknesses of the proposed method for mutual information estimation?
3. How does the reviewer assess the novelty and significance of the paper's contributions?
4. What are the limitations of the paper, particularly in its comparisons with other works?
5. How does the reviewer evaluate the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper puts forward a method of mutual information estimation within neural networks where the stochasticity of observations is ensured by Gaussian dropout. The authors then use MC methods to estimate entropies and conditional entropies. The proposed approach is compared to MI estimation based on binning and the authors claim to observe the compression phase postulated by Tishby et al., which was not observed with the binning estimator.
Strengths And Weaknesses
STRENGTHS
The interpretation of neural networks with the information plane approach is a relevant problem which has garnered some attention since it was proposed in 2017.
The authors' idea of using dropout for MI estimation is valid, although not necessarily novel.
WEAKNESSES
The main weakness of the paper is its limited novelty. The negative results presented in Section 3 are straightforward and the only new results of this paper follow from the restriction to Gaussian dropout and estimating entropies with MC methods. The observed results concerning the compression phase in the last section are not convincing (the curves resemble straight lines). The authors also only compare this result to the binning estimator which is known to result in spurious effects (see e.g. the works of Saxe or Gabrie).
Clarity, Quality, Novelty And Reproducibility
The paper is relatively easy to read, but the originality is limited. |
ICLR | Title
Information Plane Analysis for Dropout Neural Networks
Abstract
The information-theoretic framework promises to explain the predictive power of neural networks. In particular, the information plane analysis, which measures mutual information (MI) between input and representation as well as representation and output, should give rich insights into the training process. This approach, however, was shown to strongly depend on the choice of estimator of the MI. The problem is amplified for deterministic networks if the MI between input and representation is infinite. Thus, the estimated values are defined by the different approaches for estimation, but do not adequately represent the training process from an information-theoretic perspective. In this work, we show that dropout with continuously distributed noise ensures that MI is finite. We demonstrate in a range of experiments1 that this enables a meaningful information plane analysis for a class of dropout neural networks that is widely used in practice.
The information bottleneck hypothesis for deep learning conjectures two phases of training feedforward neural networks (Shwartz-Ziv and Tishby, 2017): the fitting phase and the compression phase. The former corresponds to extracting information from the input into the learned representations, and is characterized by an increase of mutual information (MI) between inputs and hidden representations. The latter corresponds to forgetting information that is not needed to predict the target, which is reflected in a decrease of the MI between learned representations and inputs, while MI between representations and targets stays the same or grows. The phases can be observed via an information plane (IP) analysis, i.e., by analyzing the development of MI between inputs and representations and between representations and targets during training (see Fig. 1 for an example). For an overview of information plane analysis we refer the reader to (Geiger, 2022).
While being elegant and plausible, the information bottleneck hypothesis is challenging to investigate empirically. As shown by Amjad and Geiger (2020, Th. 1), the MI between inputs and the representations learned by a deterministic neural network is infinite if the input distribution is continuous. The standard approach is therefore to assume the input distribution to be discrete (e.g., equivalent to the empirical distribution of the dataset S at hand) and to discretize the real-valued hidden representations by binning to allow for non-trivial measurements, i.e., to avoid that the MI always takes the maximum value of log(|S|) (Shwartz-Ziv and Tishby, 2017). In this discrete and deterministic setting the MI theoretically gets equivalent to the Shannon entropy of the representation. Considering the effect of binning, however, the decrease of MI is essentially equivalent to geometrical compression (Basirat et al., 2021). Moreover, the binning-based estimate highly depends on the chosen bin size (Ross, 2014). To instead work with continuous input distributions, Goldfeld
1Code for the experiments is public on https://github.com/link-er/IP_dropout.
et al. (2019) suggest to replace deterministic neural networks by stochastic ones via adding Gaussian noise to each of the hidden representations. This kind of stochastic networks is rarely used in practice, which limits the insights brought by the analysis.
In contrast, dropout, being a source of stochasticity, is heavily used in practice due to its effective regularizing properties. The core questions investigated in this work therefore are: i) Can we obtain accurate and meaningful MI estimates in neural networks with dropout noise? ii) And if so, do IPs built for dropout networks confirm the information bottleneck hypothesis? Our main contributions answer these questions and can be summarized as follows: We present a theoretical analysis showing that binary dropout does not prevent the MI from being infinite due to the discrete nature of the noise. In contrast, we prove that dropout noise with any continuous distribution not only results in finite MI, but also provides an elegant way to estimate it. This in particular holds for Gaussian dropout, which is known to benefit generalization even more than binary dropout (Srivastava et al., 2014), and for information dropout (Achille and Soatto, 2018). We empirically analyze the quality of the MI estimation in the setup with Gaussian and information dropout in a range of experiments on benchmark neural networks and datasets. While our results do not conclusively confirm or refute the information bottleneck hypothesis, they show that the IPs obtained using our estimator exhibit qualitatively different behavior than the IPs obtained using binning estimators and strongly indicate that a compression phase is indeed happening.
2 MUTUAL INFORMATION ESTIMATION FOR NEURAL NETWORKS
We use the following notation: Lower-case letters denote realizations of random variables (RVs), e.g., b denotes a realization of the RV B; H(A) denotes the Shannon entropy of a discrete RV A whose distribution is denoted pa; h(B) is the differential entropy of a continuous RV B whose distribution is described by the probability density function pb; I(A;B) is the MI between RVs A and B; X ∈ X ⊆ Rn and Y ∈ Y are the RVs describing inputs to a neural network and corresponding targets; f(X) is the result of the forward pass of the input through the network to the hidden layer of interest; Z is an N -dimensional RV describing the hidden representations.
The caveats of different approaches to measure the MI between input X and hidden representation Z of a neural network – e.g., the MI being infinite for deterministic neural networks and continuous input distributions, the dependence of the MI estimate on the parameterization of the estimator, etc. – were discussed widely in the literature (Saxe et al., 2019; Geiger, 2022) and are briefly reviewed in this section. These caveats do not appear for the MI measured between representations Z and targets Y , since the target is in most cases a discrete RV (class), for which MI is always finite.
One option for estimating I(X;Z) is to assume the input to be drawn from a discrete distribution. This view is supported by the finiteness of the accuracy of the used computational resources (Lorenzen et al., 2021) and makes it easy to use a finite dataset S to describe the distribution. In such setup, the distribution of (X,Y ) is assumed uniform on the dataset S, and the discretization of Z is performed at a fixed bin size (e.g., corresponding to the computer precision). The MI between
X and the discretized Ẑ is computed as I(X; Ẑ) = H(Ẑ) − H(Ẑ|X) = H(Ẑ) − 0 = H(Ẑ), where H(Ẑ|X) = 0 since f(·) and the discretization of Z are deterministic. Thus, the estimated MI between input and representation corresponds to the entropy of the discretized representation, which for small bin sizes is equal to the entropy H(X) = log |S| of the empirical distribution on the dataset, unless f(·) maps different points from the dataset to the same point in latent space. A different option that is more aligned to the common description of real-world data is to assume X to be drawn from a continuous distribution. If the network transformation f(·) results in a discrete distribution of the representations Z, one can use the decomposition I(X,Z) = H(Z)−H(Z|X) = H(Z) to estimate MI based on Shannon entropy, provided that the sample size is sufficiently large (note that the dimensionality N of Z may be large, and therefore the estimation of H(Z) may suffer from the curse of dimensionality). However, as shown in Theorem 1 of (Amjad and Geiger, 2020) for neural networks with commonly used activation functions the distribution of the latent representation is not discrete. In this case (i.e., f(·) is deterministic, X is continuous, and Z is not purely discrete) the MI between X and Z is infinite2. By binning, i.e., by quantizing Z to a discrete RV Ẑ, the MI I(X; Ẑ) = H(Ẑ) remains finite, but the qualitative behavior of this entropy will be defined by properties of activation functions and selected bin size (Saxe et al., 2019).
From the discussion above it follows that estimating I(X;Z) in deterministic neural networks is an ill-posed problem, and that the estimates reveal not an information-theoretic picture, but often rather a geometric one that is determined by the properties of the chosen estimators. As a solution to the aforementioned challenges, several authors have suggested to investigate the information planes of stochastic neural networks instead (Amjad and Geiger, 2020; Goldfeld et al., 2019). Goldfeld et al. (2019) proposed to add zero-mean Gaussian noise D to the representations during training. This transforms a deterministic neural network into a stochastic one that was shown to yield similar training results and predictive abilities of the model. The addition of Gaussian noise in Z = f(X)+ D guarantees a finite MI3 and therefore allows for estimating MI using Monte Carlo sampling with bounds on the estimation error. Futhermore, it links the information-theoretic perspective of the IP to geometric effects taking place in latent space. Indeed, when the MI between input and representation is decreasing, it means that noise-induced Gaussians centered at the representations of different data points overlap more strongly. Thus, it is becoming harder to distinguish between inputs of the same class based on their representations, which translates into lower MI between representation and input while leaving MI between representation and target unchanged.
As discussed above, for continuous input distributions both the IPs of deterministic neural networks as well as of stochastic neural networks with additive noise show a geometric picture (and in the former case the geometric interpretation is the only valid one, since MI is infinite in this case). Therefore, in this work we study the estimation of MI in networks with dropout layers, i.e., in settings where the stochasticity is introduced by multiplicative, rather than additive noise. In what follows we will investigate the requirements on the multiplicative noise for MI to remain finite, and whether the resulting IPs confirm the information bottleneck hypothesis.
3 MUTUAL INFORMATION IN DROPOUT NETWORKS
As discussed in the previous section, the MI between inputs and hidden representations of deterministic networks is infinite, if we assume the input distribution to be continuous. To overcome this problem, some form of stochasticity has to be introduced. While adding noise to activations (Goldfeld et al., 2019) indeed allows to compute the MI, this is not used in most contemporary neural networks. In contrast, neural networks with dropout are one of the most popular classes of neural networks used in practice and are stochastic in nature as well: Adding a dropout layer to a neural network corresponds to multiplying the hidden representation with some form of random noise. Formally, denoting the random noise by a RV D of the same dimension as f(X), the hidden representation becomes Z = f(X) ◦D, where ◦ denotes element-wise multiplication. In the most basic form, D follows a Bernoulli distribution (Srivastava et al., 2014). Such binary dropout is widely used and can intuitively been understood as “turning off” a fraction of neurons during training. There is a
2There are multiple mathematical derivations explaining why MI is infinite, one for example is discussed in (Saxe et al., 2019, Appendix C).
3At least when the px and f(·) are such that f(X) has finite variance, then the finiteness of MI follows from the result about the capacity of the additive Gaussian noise channel, cf. (Cover and Thomas, 1991, eq. (10.17)).
variety of other dropout schemes, including multiplicative Gaussian noise, fast dropout (Wang and Manning, 2013), or variational dropout (Kingma et al., 2015). Information dropout (Achille and Soatto, 2018) is a variant that uses a closed-form expression of MI as regularization term. In order to obtain such closed form, dropout noise is sampled from a log-normal distribution, and the prior distribution on representations is chosen depending on the activation function (ReLU or Softplus). We provide details on the derivation in Appendix A.1.
In this section, we investigate whether neural networks with dropout have indeed finite MI between input X and representation Z. While we first show a negative result by proving that binary dropout still leads to I(X;Z) =∞, our Theorem 3.3 shows that dropout with continuous distribution keeps MI finite. This fact allows us to estimate MI for such dropout neural networks in Sections 4 and 5.
3.1 BINARY DROPOUT
We start by analyzing binary dropout, which forces individual neurons to be “turned off” with some probability. More formally, the output of each neuron is multiplied with an independent Bernoulli RV that is equal to 1 with a predefined probability p. The following theorem shows that this kind of (combinatorial) stochasticity is insufficient to prevent I(X;Z) from becoming infinite. Theorem 3.1. In the setting of (Amjad and Geiger, 2020, Th. 1), let the output f(·) of a hidden layer be parameterized as a deterministic neural network with N̂ neurons, let B ∈ {0, 1}N̂ be the set of independent Bernoulli RVs characterizing the dropout pattern, and let Z = fB(X) denote the output of the hidden layer after applying the random pattern B. Then it holds that I(X;Z) =∞.
In the proof (provided in Appendix A.2) we use the fact that dropout mask b = (1, 1, . . . , 1) leads to an infinite MI. While the Bernoulli distribution guarantees that b = (1, 1, . . . , 1) always has nonzero probability, other distributions over {0, 1}N̂ might not have this property. Theorem 3.1 can however be generalized to arbitrary distributions over {0, 1}N̂ : Theorem 3.2. In the setting of (Amjad and Geiger, 2020, Th. 1), let the output f(·) of a hidden layer be parameterized as a deterministic neural network with N̂ neurons, let B ∈ {0, 1}N̂ be the binary random vector characterizing the dropout pattern, and let Z = fB(X) denote the output of the hidden layer after applying the random pattern B. Then, it either holds that I(X;Z) = ∞ or that I(X;Z) = 0 if the dropout patterns almost surely disrupt information flow through the network.
The proof for the theorem is provided in Appendix A.3.
Both Theorem 3.1 and Theorem 3.2 cover as a special case the setting where dropout is applied to only a subset of layers, by simply setting those elements of B to 1 that correspond to a neuron output without dropout. If dropout is applied to only a single layer, then fB(X) = f(X) ◦ B′, where B′ is the dropout pattern of the considered layer and ◦ denotes the element-wise product. As a consequence of Theorem 3.2, for neural networks with binary dropout any finite estimate of MI is “infinitely wrong”, and the resulting IP does not permit an information-theoretic interpretation. Essentially, the stochasticity added by binary dropout is combinatorial, and hence cannot compensate for the “continuous” stochasticity available in the input X.
3.2 DROPOUT WITH CONTINUOUS NOISE
As proposed by Srivastava et al. (2014), dropout can also be implemented using continuous Gaussian noise with mean vector µ = 1 and diagonal covariance matrix Iσ² with fixed variance σ². Achille and Soatto (2018), in contrast, proposed log-normally distributed dropout noise, the variance of which depends on the input sample x (this is termed information dropout). Generalizing both Gaussian and information dropout, in this section we consider continuously distributed multiplicative noise D. In contrast to binary noise sampled from a discrete distribution, continuously distributed noise renders the joint distribution of (Z,X) absolutely continuous with respect to the product of the marginal distributions of Z and X, allowing for finite values of MI between the input X and the hidden representation Z. The following theorem states that the MI between the input and the hidden representation of the dropout layer is indeed finite even if the variance of the noise depends on the input. Theorem 3.3. Let X be bounded in all dimensions, f(·) be parameterized by a deterministic neural network with Lipschitz activation functions, and let Z = f(X) ◦ D(X), where the components of the noise D(X) = (D₁(X), . . . , D_N(X)) are conditionally independent given X and have essentially bounded differential entropy and second moments, i.e., E[D_i(X)²] ≤ M < ∞ X-almost surely, for some M and all i = 1, . . . , N. Then, if the conditional expectation E[log(|f(X)|) | |f(X)| > 0] is finite in each of its elements, we have I(X;Z) < ∞.
Theorem 3.3 (proof in Appendix A.4) can be instantiated for Gaussian dropout, where Di(x) = Di ∼ N (1, σ2), and for information dropout, where Di(x) ∼ logN (0, α2(x)). Note that for information dropout we have to ensure that the (learned) variance α2(x) stays bounded from above and below; e.g., in the experiments of Achille and Soatto (2018), α2(x) is restricted to be below 0.7.
The requirement that the conditional expectation E[log(|f(X)|) | |f(X)| > 0] is finite in each of its elements is critical for the proof. Indeed, one can construct a synthetic (albeit unrealistic) example for which this condition is violated: Example 3.4. Let X ′ have the following probability density function
$$p_{X'}(x') = \begin{cases} 2^{-n}, & \text{if } x' \in [2^n, 2^n + 1),\ n = 1, 2, \dots \\ 0, & \text{else.} \end{cases}$$
Evidently, E[X′] = ∞: indeed, $E[X'] = \sum_{n\ge 1} 2^{-n} E[X' \mid X' \in [2^n, 2^n+1)] \ge \sum_{n\ge 1} 2^{-n} \cdot 2^n = \infty$. Then, X = e^{−X′} is bounded, since its alphabet is a subset of (0, e^{−2}]. Now consider a neural network with a single hidden layer with one neuron. Let the weight from X to the single neuron be 1, and assume that the neuron uses a ReLU activation function. Then,
$$E[\log|f(X)|] = E[\log|X|] = E[\log|e^{-X'}|] = E[-X'] = -\infty\,.$$
It can be shown that in this example the probability density function of X (as well as of f(X)) is not bounded. Under the assumption that the probability density function p_f of f(X) is bounded, the conditional expectation in the assertion of the theorem is finite: assuming that p_f ≤ C < ∞, by the law of the unconscious statistician we have
$$E_x[\log(|f(X)_i|) \mid |f(X)_i| > 0] = \int_0^{\|f(X)_i\|_\infty} \log(f)\, p_f(f)\, df = \underbrace{\int_0^1 \log(f)\, p_f(f)\, df}_{I_1} + \underbrace{\int_1^{\|f(X)_i\|_\infty} \log(f)\, p_f(f)\, df}_{I_2}\,.$$
It is obvious that I₂ is positive and finite, since log(f) ≥ 0 on the integration range and the range is bounded. Due to the boundedness of p_f we also have $I_1 \ge C \int_0^1 \log(f)\, df = C\,\big[f(\log(f) - 1)\big]_0^1 = -C > -\infty$.
However, the boundedness of p_f is hard to guarantee for an arbitrary neural network. In contrast, the boundedness of p_x is more realistic and easier to check. For bounded p_x we can prove (in Appendix A.5) the finiteness of the expectation E[log(|f(X)|) | |f(X)| > 0] for ReLU networks: Proposition 3.5. Consider a deterministic neural network function f(·) constructed with finitely many layers, a finite number of neurons per layer, and ReLU activation functions. Let X be a continuously distributed RV with probability density function p_x that is bounded (p_x ≤ P < ∞) and has bounded support X. Then, the conditional expectation E[log(|f(X)|) | |f(X)| > 0] is finite in each of its elements.
Finally, note that Theorem 3.3 assumes that the network is deterministic up to the considered dropout layer. This does not come with a loss of generality for feed-forward networks (e.g., with no residual connections): Indeed, one can apply Theorem 3.3 to the first hidden layer representation Z(1) with dropout, where this assumption always holds. Then, for the ℓ-th hidden layer and irrespective of whether this layer also has dropout, the MI I(X;Z(ℓ)) is finite due to the data processing inequality (Cover and Thomas, 1991, Th. 2.8.1). Therefore, Theorem 3.3 ensures that MI is finite for all hidden layers after the first continuous dropout layer.
4 ESTIMATION OF MI UNDER CONTINUOUS DROPOUT
We now consider estimating I(X;Z) in networks with continuously distributed dropout, starting with information dropout. As discussed by Achille and Soatto (2018), networks with information
dropout are trained with the cross-entropy loss ℓce (which is involved in the known variational lower bound I(Z;Y ) ≥ H(Y )− ℓce) and regularized using a variational upper bound on I(X;Z). Therefore, estimates of the quantities displayed in the information plane are directly used in the training loss and, thus, easy to track, at least for softplus activation functions4.
In the case of Gaussian dropout, to estimate I(X;Z) we approximate h(Z) and h(Z|X) separately (pseudocode is given in Algorithm 1 in Appendix A.6).
For estimating h(Z) we employ a Monte Carlo (MC) estimate, similar to the one proposed by Goldfeld et al. (2019). That is, we approximate the distribution of Z as a Gaussian mixture, where we draw samples f(x⁽ʲ⁾), j = 1, . . . , |S|, and place a Gaussian with a diagonal covariance matrix with variances σ²|f(x⁽ʲ⁾)_i|², i = 1, . . . , N, on each sample f(x⁽ʲ⁾). For a sanity check, we also compute an upper bound of h(Z) given by the entropy of a Gaussian with the same covariance matrix as Z. Note that the estimation of the upper bound requires a sufficiently large number of samples to guarantee that the sample covariance matrix is not singular and that the resulting entropy estimate is finite.
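As an illustration, the Gaussian upper bound on h(Z) mentioned above can be computed from a sample covariance matrix as in the following minimal sketch (the helper name is ours, introduced only for illustration):

```python
import numpy as np

def gaussian_entropy_upper_bound(z_samples):
    """Entropy (in nats) of a Gaussian with the sample covariance of Z.

    By the maximum-entropy property of the Gaussian distribution, this
    upper-bounds h(Z). z_samples: array of shape (num_samples, N).
    """
    n = z_samples.shape[1]
    cov = np.atleast_2d(np.cov(z_samples, rowvar=False))
    sign, logdet = np.linalg.slogdet(cov)
    assert sign > 0, "sample covariance is singular; draw more samples"
    return 0.5 * n * np.log(2 * np.pi * np.e) + 0.5 * logdet
```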
For each fixed x the conditional distribution p_{z|x} is a Gaussian distribution N(f(x), diag(σ²|f(x)_i|²)). Moreover, when the input is fixed, the components of Z|X = x are independent, since the components of the noise are independent. This allows computing h(Z|X) as a sum of h(Z_i|X), where Z_i is the i-th component of the representation vector. The computation of h(Z_i|X) requires integration over the input space for computing the expectation E_x[h(Z_i|X = x)]. This can be approximated via MC sampling, i.e., we approximate h(Z_i|X) by $$\frac{1}{|S|}\sum_{j=1}^{|S|} h(Z_i \mid X = x^{(j)}),\quad\text{where}\quad h(Z_i \mid X = x^{(j)}) = \log\big(|f(x^{(j)})_i|\,\sigma\sqrt{2\pi e}\big)\,.$$
We consider a simple toy problem for validating our approach to estimating MI: the input X is generated from an n-dimensional standard normal distribution, transformed by the function f(X) = 2X + 0.5, and then subjected to Gaussian dropout noise distributed according to N(1, σ²); a sketch of this setup is given below. We investigate the convergence of our estimator for h(Z|X) for an increasing number of samples. For each input data point, we generate 10 noise masks, thus obtaining 10 samples of Z for each x⁽ʲ⁾. The results in Fig. 2 show that the estimation stabilizes with a larger number of samples for different dimensionalities of the data. We also compare the estimate to the upper bound for h(Z) in Fig. 3.
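A minimal sketch of the toy validation experiment, assuming the setup described above (the concrete sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_dim, n_samples, sigma = 4, 10_000, 0.5

x = rng.standard_normal((n_samples, n_dim))   # X ~ N(0, I_n)
fx = 2.0 * x + 0.5                            # deterministic part f(X)

# MC estimate of h(Z|X): average the closed-form per-sample entropies.
h_z_given_x = np.mean(
    np.sum(np.log(np.abs(fx) * sigma * np.sqrt(2 * np.pi * np.e)), axis=1)
)

# Noisy representations Z = f(X) ◦ D with D ~ N(1, sigma^2), 10 masks per input.
d = rng.normal(1.0, sigma, size=(10,) + fx.shape)
z = (fx[None, ...] * d).reshape(-1, n_dim)

print(f"h(Z|X) ≈ {h_z_given_x:.3f} nats, {z.shape[0]} noisy samples of Z")
```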
We finally compare our estimation of MI to binning, the EDGE estimator (Noshad et al., 2019), and the lower bounds analyzed by McAllester and Stratos (2020). The results are shown in Fig. 4. In the plot, doe stands for the difference-of-entropies (DoE) estimator and doe l stands for DoE with logistic parametrization (McAllester and Stratos, 2020). The binning estimator underestimates the
4Indeed, for softplus activation functions, the variational approximation of I(X;Z) is available in closed form, while for ReLU activation functions, the available expression is only useful for minimizing, rather than for computing, I(X;Z) (see Appendix A.1).
MI when the bin size is large and overestimates it when the bin size is small (Ross, 2014), which can be clearly seen in the plots, where bins are organized both by size (upper axis) and by number (lower axis). Moreover, for high-dimensional data, binning quickly saturates at the maximal possible value of log(|S|) and cannot reach larger MI values. According to McAllester and Stratos (2020), lower bound-based MI estimators (e.g., MINE (Belghazi et al., 2018)) also need exponentially (in the true value of MI) many data points for a good approximation; otherwise they will always heavily underestimate the MI.
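For reference, a minimal sketch of the binning (histogram) estimator of I(X;Z) whose bin-size sensitivity is discussed above (our own illustrative implementation, not the exact code used for the figures):

```python
import numpy as np

def binned_mi(x, z, n_bins=30):
    """Plug-in MI estimate I(X;Z) ≈ H(X̂) + H(Ẑ) − H(X̂,Ẑ) after binning.

    x, z: 1-D arrays of paired samples. The result depends strongly on n_bins.
    """
    joint, _, _ = np.histogram2d(x, z, bins=n_bins)
    p_xz = joint / joint.sum()
    p_x, p_z = p_xz.sum(axis=1), p_xz.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return entropy(p_x) + entropy(p_z) - entropy(p_xz.ravel())

rng = np.random.default_rng(0)
x = rng.standard_normal(5_000)
z = (2 * x + 0.5) * rng.normal(1.0, 0.5, size=x.shape)  # Z = f(X) ◦ D
for bins in (5, 30, 200):
    print(bins, binned_mi(x, z, bins))  # the estimate grows with the bin count
```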
Further plots for different dropout variances and input dimensionalities are given in Appendix A.6.
5 INFORMATION PLANE ANALYSIS OF DROPOUT NETWORKS
We use the estimators described in the previous section for an IP analysis of networks with Gaussian and information dropout. We always consider only the representation corresponding to the first dropout layer5 and measure the MI in nats, i.e., using the natural logarithm. For estimating I(Y;Z), we employ the EDGE estimator (Noshad et al., 2019) for Gaussian dropout and the variational estimate for information dropout. IPs created using the binning estimator use binning for both I(X;Z) and I(Y;Z).
In the first set of experiments we investigate the difference between IPs obtained via our proposed estimator
and via binning. The analysis on the MNIST dataset was performed for a LeNet network (LeCun et al., 1998) that achieves 99% accuracy and a simple fully-connected (FC) network with three hidden layers (28×28−512−128−32−10) and softplus activation functions achieving 97% accuracy. We analyze both information dropout and Gaussian dropout in the LeNet network and only Gaussian dropout in the FC network. In both cases dropout is applied to the penultimate layer. We compare IPs based on binning estimators to IPs based on our estimators in Fig. 1 and Fig. 5.
5This makes the MI estimation more efficient, since the part of the network preceding the dropout layer is deterministic, which allows for an analytical expression of h(Z|X = x). Note that the estimation could be extended to higher layers as well, since for those the MI also remains finite; however, an estimator different from ours would have to be used for those layers.
We also analyze the IPs for a ResNet18 trained on CIFAR10 (see Fig. 6), where we added an additional bottleneck layer with 128 neurons and Gaussian dropout before the output layer, and which achieves an accuracy of 94%.
Interestingly, for all networks and datasets we observe significant compression for our estimator and a lack of compression for the binning estimator (also for different bin sizes, see Appendix A.8). This indicates that either the MI compression measured in dropout networks is different from purely geometrical compression, or that the number of samples |S| is insufficient to reliably estimate I(X;Z) by binning.
In the second set of experiments, we analyze IPs in information dropout networks, with MI estimated as described before. To this end, we trained a fully convolutional neural network (fullCNN) on CIFAR10 using code provided by Achille and Soatto (2018). Training proceeded for 200 epochs using SGD with momentum and, different from the original setup, with only one dropout layer after the third convolutional layer. The batch size was set to 100; the learning rate was initially set to 0.05 and was reduced by multiplying it with 0.1 after epochs 40, 80, and 120. The network was trained with different values of the regularization weight β and different numbers of filters in the convolutional layers. That is, the full-size fullCNN has 3 layers with 96 filters followed by 4 layers with 192 filters, while the small network consists of only 25% of these filters. Also different from the original setup, we allowed the noise variance to grow up to 0.95 in order to make the effect of limiting the information between representation and input more pronounced. Results are shown in Fig. 7. It can be seen that regularizing I(X;Z) is effective (i.e., larger values of β lead to smaller I(X;Z)), and that regularizing too strongly (β = 20) leads to worse performance: the test error is 5% higher and the train error is 10% higher. We can further see stronger compression for smaller β and almost no compression for larger β. We conjecture that compression can only become visible if sufficient information is permitted to flow through the network (which happens only for small β). Fig. 7 (c) and (d) show the IPs for the small fullCNN. It can be seen that the smaller network appears not to compress at all (see Fig. 7 (c)), but that I(X;Z) rather increases throughout training until it is at the same level as in Fig. 7 (a). This indicates that β determines to which point in the IP information compresses, and that the IP curve that is traversed during training depends on the overall capacity of the neural network.
Plots for the additional experiments can be found in Appendix A.8.
6 DISCUSSION
Whether or not information-theoretic compression is correlated with improved generalization is the main question connected to, and the most prominent justification for, information plane analysis of deep neural networks. Such a connection, however, can only be tested for neural networks for which MI is finite and therefore measurable. In our theoretical analysis, we investigate whether different variants of dropout noise allow for finite values of MI under the assumption of a continuous input distribution. We answer this question positively by showing that in networks with certain constraints on the induced distribution of the representations, continuous dropout noise with finite differential entropy prevents I(X;Z) from becoming infinite. We have further shown that these constraints on the distribution of the representation are satisfied in ReLU networks if the probability density function of the input is bounded.
Following this conclusion we propose an MC-based estimate of MI in Gaussian dropout networks and perform an IP analysis for different networks with Gaussian and information dropout on different datasets. The experiments show that the binning estimator behaves very differently from our estimator: While our estimator mostly exhibits compression in the IP, the binning estimator does not. Further, the values of I(X;Z) for our estimator are often orders of magnitude larger than the values of I(Y ;Z), especially when compared to the binning estimator. Assuming that the proposed estimators are reasonably accurate, this makes a connection between information-theoretic compression and generalization questionable. While these preliminary experiments do not conclusively answer the question if such a connection exists, they show a practically relevant setting in which this correlation can be studied.
The discrepancy between the binning estimator and our estimator further suggests that either the information-theoretic compression we observe using our estimator is not geometric, or that there are insufficient samples to obtain reliable estimates from the binning estimator. This is in contrast with the work of Goldfeld et al. (2019), which showed that information-theoretic and geometric compression were linked in their networks with additive noise. We thus believe that a closer investigation of whether multiplicative noise induces geometric compression, and whether the induced compression improves generalization performance, are interesting questions for future research.
ACKNOWLEDGEMENTS
The authors want to thank Michael Kamp, Simon Damm, Ziv Goldfeld, and Jihao Andreas Lin for valuable discussions about the work.
Asja Fischer acknowledges support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy - EXC 2092 CASA - 390781972.
The Know-Center is funded within the Austrian COMET Program - Competence Centers for Excellent Technologies - under the auspices of the Austrian Federal Ministry of Climate Action, Environment, Energy, Mobility, Innovation and Technology, the Austrian Federal Ministry of Digital and Economic Affairs, and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency FFG.
A APPENDIX
A.1 INFORMATION DROPOUT
One type of dropout with continuous noise is termed information dropout (Achille and Soatto, 2018). It combines dropout noise sampled from a log-normal distribution ϵ ∼ p_ϵ = logN(0, α²_θ(x)), where α_θ(x) is a learnable parameter depending on the parameters θ of the network, with a regularization term $KL(p_{z|x} \,\|\, \prod_{i=1}^{|Z|} p_{z_i})$. This regularization term is based on an information bottleneck objective for training neural networks: rewriting the information bottleneck Lagrangian and adding a disentanglement term (i.e., requiring each element of the representation Z to be independent of the others) results in the aforementioned formula. Additionally, it is proposed to use as prior p_z a particular distribution, defined by the choice of activation function (ReLU or Softplus), whose validity is empirically verified. Such priors and the selected dropout noise allow for deriving a closed form of the KL divergence, which makes it easy to directly track IP values during training.
In the following, we provide the closed form for computation of I(X;Z) as proposed by Achille and Soatto (2018):
$$I(X;Z) = KL(p_{x,z} \,\|\, p_z p_x) = \int p_{x,z}(x,z) \log\left(\frac{p_{x,z}(x,z)}{p_z(z)\,p_x(x)}\right) dx\, dz = \int p_x(x)\, p_{z|x}(z) \log\left(\frac{p_x(x)\,p_{z|x}(z)}{p_z(z)\,p_x(x)}\right) dx\, dz = \int p_x(x)\, KL(p_{z|x} \,\|\, p_z)\, dx = E_x[KL(p_{z|x} \,\|\, p_z)]\,.$$

Empirically we can approximate this as $I(X;Z) \approx \frac{1}{|S|}\sum_{j=1}^{|S|} KL(p_{z|x^{(j)}} \,\|\, p_z)$, where we average over the dataset of size |S| of samples of X. First, we discuss ReLU neural networks. The prior distribution p_z in this case is a mixture of two parts: an improper log-uniform distribution and a point mass at 0. Such a prior is empirically valid for ReLU activations. First we restrict the derivation to the case f(X) ≠ 0 (which in turn means that Z ≠ 0, since the noise ϵ is log-normal and cannot be 0). In the following we will omit the subscript of probability density functions when it is clear from the argument.
$$\begin{aligned}
KL(p_{z|x^{(j)}} \,\|\, p_z) &= KL(p_{\log(z|x^{(j)})} \,\|\, p_{\log(z)}) &&(1)\\
&= \int p(\log(z|x^{(j)})) \log\left(\frac{p(\log(z|x^{(j)}))}{p(\log(z))}\right) dz &&\\
&= \int p(\log(\epsilon) + \log(f(x^{(j)})) \mid x^{(j)}) \log\left(\frac{p(\log(\epsilon) + \log(f(x^{(j)})) \mid x^{(j)})}{c}\right) d\epsilon &&(2)\\
&= \int p(\log(\epsilon)) \log(p(\log(\epsilon)))\, d\epsilon - \int p(\log(\epsilon)) \log(c)\, d\epsilon &&(3)\\
&= \int p(\log(\epsilon)) \log(p(\log(\epsilon)))\, d\epsilon - \log(c) = -h(\log(\epsilon)) - \log(c) &&(4)\\
&= -\left(\log(\alpha(x^{(j)})) + \tfrac{1}{2}\log(2\pi e)\right) - \log(c)\,, &&(5)
\end{aligned}$$
where equation 1 holds due to the invariance of the KL divergence under parameter transformations with a strictly monotone function (log(·)); equation 2 holds since log(Z) = log(ϵ) + log(f(X)) and p_{log(z)} = c for the improper log-uniform distribution; equation 3 takes into account that p_{x+const} = p_x, that log(f(x))|x⁽ʲ⁾ is constant, and that p_{log(ϵ)|x⁽ʲ⁾} = p_{log(ϵ)} because ϵ is independent of X; equation 4 uses that ∫ p_{log(ϵ)} dϵ = 1; finally, equation 5 holds because log(ϵ) is normally distributed and its entropy can be computed in closed form.
Now we consider f(X) = 0, which implies Z = 0. Then p_{Z|X} = δ₀ (a point mass, or Dirac delta) and the KL divergence becomes:

$$KL(p_{z|x^{(j)}} \,\|\, p_z) = \int p_{z|x}(z) \log\left(\frac{p_{z|x}(z)}{p_z(z)}\right) dz = \int \delta_0 \log\left(\frac{\delta_0}{q\,\delta_0}\right) dz = -\log(q)\,, \qquad (6)$$
where q is the weight of the point mass in the prior pz .
Combining equation 5 and equation 6 results in a computable I(X;Z). As can be seen, one has to correctly combine non-zero and zero values of f(X) and also know the parameters of the prior p_z: the constant c and the weight q. This makes it impractical for IP analysis.
If instead of ReLU the network has softplus activations, then the prior on the distribution of the representations is a standard log-normal instead of a log-uniform with Dirac delta. In this case the computation is very simple, since the KL divergence between two log-normal distributions equals the KL divergence between the corresponding normal distributions:

$$KL(p_{z|x^{(j)}} \,\|\, p_z) = \frac{\alpha^2(x^{(j)}) + \mu^2}{2\sigma^2} - \log\frac{\alpha(x^{(j)})}{\sigma} - \frac{1}{2}\,, \qquad (7)$$

where σ² = 1 and µ = 0 are the known parameters of the prior. Thus, softplus activations (equation 7) allow for direct computation of I(X;Z).
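As an illustration of equation 7, the per-sample KL terms (and hence the I(X;Z) estimate) can be computed directly from the learned noise scales; a minimal sketch assuming the standard log-normal prior (σ = 1, µ = 0) and an array of learned α(x⁽ʲ⁾) values:

```python
import numpy as np

def info_dropout_mi(alphas):
    """Estimate I(X;Z) = E_x[KL(p_{z|x} || p_z)] via equation (7).

    alphas: array of learned noise scales alpha(x^(j)), one per sample,
    assuming the standard log-normal prior (mu = 0, sigma = 1).
    """
    kl = 0.5 * alphas**2 - np.log(alphas) - 0.5  # eq. (7) with sigma=1, mu=0
    return np.mean(kl)

print(info_dropout_mi(np.array([0.1, 0.3, 0.5])))
```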
A.2 PROOF OF THEOREM 3.1
Proof. Using the chain rule of MI, we have
$$I(X;Z) = I(X;Z,B) - I(B;X|Z) = I(X;Z|B) + I(B;X) - I(B;X|Z) \ge I(X;Z|B) - H(B)$$
where the inequality follows from dropping I(B;X) since B and X are independent and the fact that I(B;X|Z) ≤ H(B). Having B ∈ {0, 1}N̂ as a discrete RV, it immediately follows that H(B) ≤ N̂ log 2. Now note that
$$I(X;Z|B) = \sum_{b \in \{0,1\}^{\hat N}} P(B = b)\, I(X;Z|B = b)\,.$$
Since the Bernoulli RVs are independent, positive probability mass is assigned to b = (1, 1, . . . , 1), i.e., to the case where all neurons are active. Evidently, when b = (1, 1, . . . , 1) it follows that Z = f(X). Thus, with (Amjad and Geiger, 2020, Th. 1)
$$I(X;Z|B) \ge P(B = (1, 1, \dots, 1))\, I(X; f(X)) = \infty$$
and I(X;Z) =∞.
A.3 PROOF OF THEOREM 3.2
Proof. If the binary dropout is such that nonzero probability is assigned to the dropout mask b = (1, 1, . . . , 1), then the statement of the theorem follows as in the proof of Theorem 3.1.
Assume now that B is such that zero mass is assigned to b = (1, 1, . . . , 1). To treat this case, we suppose that the distribution of X has a portion with a continuous probability density function on a compact set and that the neural network has activation functions that are either bi-Lipschitz or continuously differentiable with a strictly positive derivative (following the requirements of Amjad and Geiger (2020, Th. 1)). Then, we obtain I(X; f(X)) =∞ from (Amjad and Geiger, 2020, Th. 1) for almost all parameterizations of the neural network. Under this setting, fB(X) is again a neural network with activation functions that are either bi-Lipschitz or continuously differentiable with a strictly positive derivative. Assuming that b is such that the input of the network is not completely disconnected from the considered layer, for this pattern we have I(X;Z|B = b) = ∞. Otherwise, we obviously have I(X;Z|B = b) = 0. The statement of the theorem follows from taking the expectation over all patterns b.
A.4 PROOF OF THEOREM 3.3
Proof. W.l.o.g., we first restrict our attention to the dimensions of the representation Z that are different from zero. Specifically, suppose that Z = (Z₁, . . . , Z_N) and that B = (B₁, . . . , B_N) with B_i = 0 if Z_i = 0 and B_i = 1 otherwise. Clearly, B is a function of Z, hence I(X;Z) = I(X;Z,B) = I(B;X) + I(Z;X|B). Since B is binary, we have that I(X;B) ≤ H(B) ≤ N log 2. Let Z_B = (Z_i | i: B_i = 1) denote the sub-vector of non-zero elements of Z; then
$$I(X;Z) \le N \log 2 + \sum_b P(B = b)\, I(Z_b;X)$$
where, if B = b, I(Zb;X) = I(Z;X|B = b) holds because constant (i.e., 0) RVs do not contribute to MI. Therefore, I(X;Z) is finite iff I(Zb;X) = I(Z;X|B = b) is finite B-almost surely. We thus now fix an arbitrary B = b and continue the proof for Z = Zb.
We decompose MI into differential entropies as I(X;Z) = h(Z) − h(Z|X). The differential entropy of the representations h(Z) is upper-bounded by the entropy of a Gaussian RV with the same covariance matrix Σ as the distribution of Z = (Z₁, . . . , Z_N), i.e., by $\frac{N}{2}\log(2\pi) + \frac{1}{2}\log(\det(\Sigma)) + \frac{N}{2}$. From Hadamard's inequality and since Σ is positive semidefinite it follows that $\det(\Sigma) \le \prod_{i=1}^{N} \sigma_{ii}^2$, where σ²_{ii} are the diagonal elements of the covariance matrix, i.e., σ²_{ii} = Var[Z_i]. This variance can be bounded from above. Specifically, since X is bounded and f(·) is a composition of Lipschitz functions, f(X)_i is bounded as well. Recalling that E[D_i(x)²] ≤ M holds X-almost surely, this yields
$$\mathrm{Var}[Z_i] \le E[f(X)_i^2 D_i(X)^2] = E_x\big[f(X)_i^2\, E_d[D_i(X)^2 \mid X]\big] \le M\, E_x[f(X)_i^2] \le M\, \|f(X)_i\|_\infty^2\,.$$
It remains to show that h(Z|X) > −∞. Due to the conditional independence of D_i and D_j given X, for all i ≠ j, the conditional differential entropy of Z factorizes into the sum of the conditional differential entropies of its components, i.e., $h(Z|X) = \sum_{i=1}^{N} h(Z_i|X)$. We write this conditional entropy as an expectation over X and obtain, using (Cover and Thomas, 1991, Th. 9.6.4),
$$h(Z_i|X) = E_x[h(Z_i \mid X = x)] = E_x[h(D_i(x)\, f(x)_i \mid X = x)] = E_x[h(D_i(x) \mid X = x)] + E_x[\log(|f(X)_i|)]$$
by the formula of change of variables for differential entropy. Both terms are finite as per the assertion of the theorem. The first term is finite since we assumed that the differential entropy of Di(X) is essentially bounded, i.e., there exists a number C <∞ such that h(Di(x)) ≤ C X-almost surely. The second term is finite since we assumed that the conditional expectation E[log(|f(X)|) | |f(X)| > 0] is finite in each of its elements, and since Zi ̸= 0 implies |f(X)i| > 0. This completes the proof.
A.5 PROOF OF PROPOSITION 3.5
Proof. We assume w.l.o.g. that f(·) has a one-dimensional range, i.e., f: X → R, where X ⊆ Rⁿ is the function domain. The proof can be straightforwardly extended to several dimensions of f(·). Since f(·) is constructed using a finitely-sized neural network with ReLU activation functions, it is piecewise affinely linear on a finite partition of the function domain. The fact that E[log(|f(X)|) | |f(X)| > 0] < ∞ then follows immediately from the fact that X, and thus |f(X)|, is bounded. To investigate whether E[log(|f(X)|) | |f(X)| > 0] > −∞, split the domain X into the following partitions:
1. $\mathcal{X}^0 = f^{-1}(\{0\})$ denotes the element of the partition on which f(X) vanishes;

2. $\{\mathcal{X}^c_i\}_{i=1,\dots,\ell}$ denotes the elements of the partition of $\mathcal{X}$ on which f(X) = c_i, i.e., on which f(·) is constant;

3. $\mathcal{X}^a = \bigcup_{i=1}^{m} \mathcal{X}^a_i$ denotes the union of all other sets $\{\mathcal{X}^a_i\}_{i=1,\dots,m}$ of the partition, on which f(·) is not constant.
For the last subset, define the function f̃ : X a → Rn via f̃(x) = (|f(x)|, x2, x3, . . . , xn). Note that f̃(·) is piecewise bijective, hence W̃ = f̃(X) has a probability density function that is obtained
from the change of variables formula:
$$p_{\tilde w}(\tilde w) = \sum_{x \in \tilde f^{-1}(\tilde w)} \frac{p_x(x)}{|\det(J_{\tilde f}(x))|}$$
where $J_{\tilde f}(x) = \left[\frac{\partial \tilde f_i}{\partial x_j}(x)\right]$ is the Jacobian matrix of f̃(·), with f̃₁(x) = |f(x)| and f̃_j(x) = x_j for all j ≥ 2. Expanding the determinant along the first column shows that the Jacobian determinant equals $\left|\frac{\partial f}{\partial x_1}(x)\right|$. The density $p_{w|\mathcal{X}^a}$ of the conditional random variable W = |f(X)| given X ∈ $\mathcal{X}^a$ can then be obtained by marginalization from $p_{\tilde w}$:
$$p_{w|\mathcal{X}^a}(w) = \int p_{\tilde w}(w, x_2^n)\, dx_2^n = \int \sum_{x \in \tilde f^{-1}(w, x_2^n)} \frac{p_x(x)}{\left|\frac{\partial f}{\partial x_1}(x)\right|}\, dx_2^n \qquad (8)$$

where $x_2^n = (x_2, \dots, x_n)$ and where we perform an (n − 1)-fold integral. Thus, by the Lebesgue decomposition, the distribution of W = |f(X)| can be split into an absolutely continuous component with probability density function $p_{w|\mathcal{X}^a}$ and a discrete component with finitely many mass points, for which we have $P(W = c_i) = \int_{\mathcal{X}^c_i} p_x(x)\, dx =: p_x(\mathcal{X}^c_i)$. By the law of the unconscious statistician, we then obtain
$$\begin{aligned}
E[\log(|f(X)|) \mid |f(X)| > 0] &= E[\log(W) \mid W > 0]\\
&= \sum_{i=1}^{\ell} p_x(\mathcal{X}^c_i) \log|c_i| + p_x(\mathcal{X}^a) \int_0^\infty \log(w)\, p_{w|\mathcal{X}^a}(w)\, dw\\
&= \sum_{i=1}^{\ell} p_x(\mathcal{X}^c_i) \log|c_i| + p_x(\mathcal{X}^a) \underbrace{\int_0^\epsilon \log(w)\, p_{w|\mathcal{X}^a}(w)\, dw}_{I_1} + p_x(\mathcal{X}^a) \underbrace{\int_\epsilon^\infty \log(w)\, p_{w|\mathcal{X}^a}(w)\, dw}_{I_2}
\end{aligned}$$
where in the last line we split the integral at a fixed ε ≪ 1. Clearly, the first sum is finite since |c_i| > 0 for all i. For the remaining summands involving integrals, suppose for now that $p_{w|\mathcal{X}^a}(w) \le C < \infty$. Then,
$$I_1 = \int_0^\epsilon p_{w|\mathcal{X}^a}(w) \log(w)\, dw \ge \int_0^\epsilon C \log(w)\, dw = C(\epsilon \log(\epsilon) - \epsilon) > -\infty$$
$$I_2 = \int_\epsilon^\infty p_{w|\mathcal{X}^a}(w) \log(w)\, dw \ge \int_\epsilon^\infty p_{w|\mathcal{X}^a}(w) \left(1 - \frac{1}{w}\right) dw \ge \int_\epsilon^\infty p_{w|\mathcal{X}^a}(w) \left(1 - \frac{1}{\epsilon}\right) dw \ge 1 - \frac{1}{\epsilon} > -\infty\,.$$
It thus remains to show that $p_{w|\mathcal{X}^a}(w) \le C$ for w ∈ [0, ε]. To this end, we revisit equation 8 and note that the integral is finite if i) p_x is bounded, ii) the integration is over a bounded set, and iii) $\left|\frac{\partial f}{\partial x_1}(x)\right| \ge \epsilon_1 > 0$. Conditions i) and ii) are ensured by the assertion of the lemma. It remains to show that condition iii) holds.
Note that, in contrast to using f̃(x) = (|f(x)|, x₂, x₃, . . . , x_n), the same $p_{w|\mathcal{X}^a}(w)$ can also be obtained by using the piecewise bijective function f̃(x) = (x₁, |f(x)|, x₃, . . . , x_n), etc. Hence, $p_{w|\mathcal{X}^a}(w) \le C$ if the partial derivative of f is bounded from below for at least one dimension, i.e., if there exists an i such that $\left|\frac{\partial f}{\partial x_i}(x)\right| \ge \epsilon_1$. Since we have
$$\|\nabla_x f(x)\|_1 = \sum_{i=1}^{n} \left|\frac{\partial f}{\partial x_i}(x)\right|$$
this is equivalent to requiring that the L1 norm of the gradient is bounded from below. Indeed, remember that f is piecewise affinely linear with finitely many pieces, and its restriction to $\mathcal{X}^a$ is non-constant. On each piece $\mathcal{X}^a_i$ we thus have $\|\nabla_x f(x)\|_1 = g_i > 0$ for all x ∈ $\mathcal{X}^a_i$ and some i ∈ {1, . . . , m}. Hence, we can find an ε₁ such that $\min_i g_i \ge n\,\epsilon_1 > 0$, which implies that for every x ∈ $\mathcal{X}^a$ there exists an i for which $\left|\frac{\partial f}{\partial x_i}(x)\right| \ge \epsilon_1$. This completes the proof.
A.6 ESTIMATION OF MI UNDER GAUSSIAN DROPOUT
In Algorithm 1 we describe how the estimation of I(X;Z), with Z being a representation under Gaussian dropout, can be done. This is the way we estimated MI for our experiments, but any other estimator can be used in this setup.
Algorithm 1 Estimation of MI under Gaussian dropout
Require: GMM-MEANS, σ, nonoise-reprs ▷ number of Gaussians in the mixture; noise variance; noise-free representations
  reprs ← [] ▷ generate noisy samples with the corresponding variance
  for all nr in nonoise-reprs do
    for i ← 1, n do
      ϵ ← sample of noise ∼ N(1, σ²)
      reprs ← reprs + nr ∗ ϵ
    end for
  end for
  points ← nonoise-reprs[: GMM-MEANS] ▷ create a GMM on a restricted number of points for faster computation
  d ← []
  for all p in points do
    d ← d + Gaussian(p, σ ∗ |p|)
  end for
  gmm ← MixtureModel(d)
  lp ← [] ▷ get estimates of log-probabilities from the GMM for the noisy samples
  for all r in reprs do
    lp ← lp + gmm.log_probability(r)
  end for
  h(z) ← −mean(lp) ▷ h(Z) ≈ −E[log p_z(Z)]
  h(z|x) ← 0 ▷ compute the conditional entropy using the closed-form formula
  for i ← 1, dim(reprs[0]) do ▷ for each dimension of the representation
    h(z|x) ← h(z|x) + mean(ln(√(2πe) σ |nonoise-reprs[:, i]|)) ▷ use the noise-free representations here, each dimension separately
  end for
  I(x, z) ← h(z) − h(z|x) ▷ obtain the final estimate of the MI
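For concreteness, a runnable Python sketch of Algorithm 1 is given below (our own NumPy/SciPy implementation of the procedure above; function names and default parameters are illustrative, and activations are assumed to be nonzero):

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

def estimate_mi_gaussian_dropout(nonoise_reprs, sigma, gmm_means=500,
                                 masks_per_input=10, rng=None):
    """MC estimate of I(X;Z) for Z = f(X) ◦ D, D ~ N(1, sigma^2)."""
    rng = rng or np.random.default_rng(0)
    f = np.asarray(nonoise_reprs)                  # shape (num_inputs, N)
    n, dim = f.shape

    # Noisy samples Z = f(x) * eps with eps ~ N(1, sigma^2).
    eps = rng.normal(1.0, sigma, size=(masks_per_input, n, dim))
    z = (f[None] * eps).reshape(-1, dim)

    # Approximate p_Z by a Gaussian mixture centered on a subset of the f(x)'s,
    # with per-component diagonal variances sigma^2 * f(x)_i^2 (tiny jitter
    # added to keep the covariance non-singular).
    means = f[:gmm_means]
    log_comp = np.stack([
        multivariate_normal.logpdf(
            z, mean=m, cov=np.diag((sigma * np.abs(m))**2 + 1e-12))
        for m in means
    ])                                             # shape (K, num_z)
    log_pz = logsumexp(log_comp, axis=0) - np.log(len(means))
    h_z = -np.mean(log_pz)                         # h(Z) ≈ -E[log p_z(Z)]

    # Closed-form h(Z|X): per-dimension entropies averaged over inputs.
    h_z_given_x = np.sum(np.mean(
        np.log(np.sqrt(2 * np.pi * np.e) * sigma * np.abs(f)), axis=0))
    return h_z - h_z_given_x
```

Here the mixture size trades estimation accuracy for computation; the noise-free activations f(x⁽ʲ⁾) are collected from a forward pass up to (but excluding) the dropout layer.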
A.7 EVALUATION OF ESTIMATOR
Fig. 8 shows upper bounds and estimates of h(Z) with a higher noise level than in Fig. 3. Larger noise increases the gap between the Gaussian-entropy-based upper bound and the mixture-based estimate, as expected.
In Fig. 9 we see convergence of the MC estimate for h(Z|X) under larger noise. As expected, a larger noise variance results in smaller MI values (Fig. 10), while the trend observed when changing the dimensionality stays the same.
A.8 INFORMATION PLANE ANALYSIS
Note that in the experiments we analyze IPs on the training samples and test samples separately. In order to obtain a valid sample of hidden representations for the MI estimation during inference, we apply MC-Dropout, as opposed to the usual way of performing inference with dropout turned off. According to Srivastava et al. (2014) this is the theoretically sound way to obtain predictions, while turning off dropout and re-scaling the weights results in an approximation that allows for faster computation.
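A minimal PyTorch-style sketch of this MC-Dropout inference, assuming the model's stochastic layers (standard or Gaussian dropout) remain active in train mode (the helper name is ours):

```python
import torch

def mc_dropout_representations(model, x, num_masks=10):
    """Collect stochastic hidden representations by keeping dropout active.

    Calls model.train() so that dropout layers keep sampling noise masks,
    while gradients are disabled; assumes model(x) returns the hidden
    representation of interest.
    """
    model.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        return torch.stack([model(x) for _ in range(num_masks)])
```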
In Fig. 11, Fig. 12, and Fig. 13 we provide IPs built on the test sets of the corresponding datasets (MNIST, MNIST, and CIFAR10).
In Fig. 14 we provide additional IPs for the binning estimator with a varying number of bins used for MI estimation. We report the results for the fully-connected network trained on MNIST with Gaussian dropout variance 0.2.
In Fig. 15 we show the IPs obtained for the same fully-connected network trained on MNIST with the variance of the Gaussian dropout set to 0.4.

Review questions:
1. What is the focus of the paper regarding deep neural networks?
2. What are the strengths of the proposed approach, particularly in terms of technical and elegant aspects?
3. Do you have any concerns or questions about the significance of studying the information plane?
4. How do you assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
The paper analyses the computation of mutual information in DNNs with dropout. It shows that mutual information for discrete dropout is infinite, but for continuous dropout it is a well-defined finite quantity. The paper shows how this can be estimated using Monte Carlo techniques and empirically shows that this quantity converges. The paper finishes with an analysis of training a neural network with dropout in the information plane.
Strengths And Weaknesses
The paper resolves a long-standing problem with using information theory to study learning in deep neural networks, particularly computing mutual information. This is a problem that has received an enormous amount of interest. The proposed solution is both technically sophisticated (in terms of proofs) and elegant (in terms of implementation).
My very personal question as a naturally sceptical person is whether studying the information plane is worth all the effort put into it. I'm prepared to accept that there are enough people interested that this is worth publishing, but the results section did not convince me that there is a lot of understanding to be gained. I accept though that the authors have said that a full analysis will end up elsewhere, so I guess I need to wait. I also have a rather practical concern that these measures are intrinsically unstable, since the mutual information becomes unbounded in a deterministic network. This gives me an uncomfortable feeling, as it does not reflect the generalisation performance of networks with and without dropout (or with continuous versus discrete dropout). It seems to me that the true solution is to find a more meaningful picture than the information measure, where networks that perform similarly don't have such different behaviour. However, I realise this is just me being picky and I'm not really expecting the authors to address my scepticism.
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written. There are a few small grammatical errors, and if the authors have the energy they might consider rereading the text once more to eliminate these. This is a high-quality paper (the proofs, although in a way straightforward, require a sophisticated understanding of probability, which is rare). The work is novel, and the proofs and empirical results are reproducible.
As discussed above, for continuous input distributions both the IPs of deterministic neural networks as well as of stochastic neural networks with additive noise show a geometric picture (and in the former case the geometric interpretation is the only valid one, since MI is infinite in this case). Therefore, in this work we study the estimation of MI in networks with dropout layers, i.e., in settings where the stochasticity is introduced by multiplicative, rather than additive noise. In what follows we will investigate the requirements on the multiplicative noise for MI to remain finite, and whether the resulting IPs confirm the information bottleneck hypothesis.
3 MUTUAL INFORMATION IN DROPOUT NETWORKS
As discussed in the previous section, the MI between inputs and hidden representations of deterministic networks is infinite, if we assume the input distribution to be continuous. To overcome this problem, some form of stochasticity has to be introduced. While adding noise to activations (Goldfeld et al., 2019) indeed allows to compute the MI, this is not used in most contemporary neural networks. In contrast, neural networks with dropout are one of the most popular classes of neural networks used in practice and are stochastic in nature as well: Adding a dropout layer to a neural network corresponds to multiplying the hidden representation with some form of random noise. Formally, denoting the random noise by a RV D of the same dimension as f(X), the hidden representation becomes Z = f(X) ◦D, where ◦ denotes element-wise multiplication. In the most basic form, D follows a Bernoulli distribution (Srivastava et al., 2014). Such binary dropout is widely used and can intuitively been understood as “turning off” a fraction of neurons during training. There is a
2There are multiple mathematical derivations explaining why MI is infinite, one for example is discussed in (Saxe et al., 2019, Appendix C).
3At least when the px and f(·) are such that f(X) has finite variance, then the finiteness of MI follows from the result about the capacity of the additive Gaussian noise channel, cf. (Cover and Thomas, 1991, eq. (10.17)).
variety of other dropout schemes, including multiplicative Gaussian noise, fast dropout (Wang and Manning, 2013), or variational dropout (Kingma et al., 2015). Information dropout (Achille and Soatto, 2018) is a variant that uses a closed-form expression of MI as regularization term. In order to obtain such closed form, dropout noise is sampled from a log-normal distribution, and the prior distribution on representations is chosen depending on the activation function (ReLU or Softplus). We provide details on the derivation in Appendix A.1.
In this section, we investigate whether neural networks with dropout have indeed finite MI between input X and representation Z. While we first show a negative result by proving that binary dropout still leads to I(X;Z) =∞, our Theorem 3.3 shows that dropout with continuous distribution keeps MI finite. This fact allows us to estimate MI for such dropout neural networks in Sections 4 and 5.
3.1 BINARY DROPOUT
We start by analyzing binary dropout, which forces individual neurons to be “turned off” with some probability. More formally, the output of each neuron is multiplied with an independent Bernoulli RV that is equal to 1 with a predefined probability p. The following theorem shows that this kind of (combinatorial) stochasticity is insufficient to prevent I(X;Z) from becoming infinite. Theorem 3.1. In the setting of (Amjad and Geiger, 2020, Th. 1), let the output f(·) of a hidden layer be parameterized as a deterministic neural network with N̂ neurons, let B ∈ {0, 1}N̂ be the set of independent Bernoulli RVs characterizing the dropout pattern, and let Z = fB(X) denote the output of the hidden layer after applying the random pattern B. Then it holds that I(X;Z) =∞.
In the proof (provided in Appendix A.2) we use the fact that dropout mask b = (1, 1, . . . , 1) leads to an infinite MI. While the Bernoulli distribution guarantees that b = (1, 1, . . . , 1) always has nonzero probability, other distributions over {0, 1}N̂ might not have this property. Theorem 3.1 can however be generalized to arbitrary distributions over {0, 1}N̂ : Theorem 3.2. In the setting of (Amjad and Geiger, 2020, Th. 1), let the output f(·) of a hidden layer be parameterized as a deterministic neural network with N̂ neurons, let B ∈ {0, 1}N̂ be the binary random vector characterizing the dropout pattern, and let Z = fB(X) denote the output of the hidden layer after applying the random pattern B. Then, it either holds that I(X;Z) = ∞ or that I(X;Z) = 0 if the dropout patterns almost surely disrupt information flow through the network.
The proof for the theorem is provided in Appendix A.3.
Both Theorem 3.1 and Theorem 3.2 cover as a special case the setting where dropout is applied to only a subset of layers, by simply setting those elements of B to 1 that correspond to a neuron output without dropout. If dropout is applied to only a single layer, then fB(X) = f(X) ◦B′, where B′ is the dropout pattern of the considered layer and ◦ denotes the element-wise product. As a consequence of Theorem 3.2, for neural networks with binary dropout any finite estimate of MI is “infinitely wrong”, and the resulting IP does not permit an information-theoretic interpretation. Essentially, the stochasticity added by binary dropout is combinatorial, and hence cannot compensate the “continuous” stochasticity available in the input X .
3.2 DROPOUT WITH CONTINUOUS NOISE
As proposed by Srivastava et al. (2014), dropout can also be implemented using continuous Gaussian noise with mean vector µ = 1 and diagonal covariance matrix Iσ2 with fixed variance σ2. Achille and Soatto (2018), in contrast, proposed log-normally distributed dropout noise, the variance of which depends on the input sample x (this is termed information dropout). Generalizing both Gaussian and information dropout, in this section we consider continuously distributed multiplicative noise D. In contrast to binary noise sampled from a discrete distribution, continuously distributed noise turns the joint distribution of (Z,X) to be absolutely continuous with respect to the marginals of Z and X allowing for finite values of MI between the input X and the hidden representation Z. The following theorem states that the MI between input and the hidden representation of the dropout layer is indeed finite even if the variance of the noise depends on the input. Theorem 3.3. Let X be bounded in all dimensions, f(·) be parameterized by a deterministic neural network with Lipschitz activation functions, and let Z = f(X) ◦ D(X), where the components of
noise D(X) = (D1(X), . . . , DN (X)) are conditionally independent given X and have essentially bounded differential entropy and second moments, i.e., E[Di(X)2] ≤M <∞X-almost surely, for some M and all i = 1, . . . , N . Then, if the conditional expectation E[log(|f(X)|) | |f(X)| > 0] is finite in each of its elements, we have I(X;Z) <∞.
Theorem 3.3 (proof in Appendix A.4) can be instantiated for Gaussian dropout, where Di(x) = Di ∼ N (1, σ2), and for information dropout, where Di(x) ∼ logN (0, α2(x)). Note that for information dropout we have to ensure that the (learned) variance α2(x) stays bounded from above and below; e.g., in the experiments of Achille and Soatto (2018), α2(x) is restricted to be below 0.7.
The requirement that the conditional expectation E[log(|f(X)|) | |f(X)| > 0] is finite in each of its elements is critical for the proof. Indeed, one can construct a synthetic (albeit unrealistic) example for which this condition is violated: Example 3.4. Let X ′ have the following probability density function
px′(x ′) = { 2−n, if x′ ∈ [2n, 2n + 1), n = 1, 2, . . . 0, else
Evidently, E[X ′] =∞. Then, X = e−X′ is bounded, since its alphabet is a subset of (0, e−2]. Now consider a neural network with a single hidden layer with one neuron. Let the weight from X to the single neuron be 1, and assume that the neuron uses a ReLU activation function. Then,
E[log |f(X)|] = E[log |X|] = E[log |e−X ′ |] = E[−X ′] = −∞ .
It can be shown that in this example the probability density function of X (as well as of f(X)) is not bounded. Under the assumption that the probability density function pf of f(X) is bounded, the conditional expectation in the assertion of the theorem is finite: Assuming that pf ≤ C <∞, by the law of unconscious statistician we have
Ex[log(|f(X)i|) | |f(X)i| > 0] = ∫ ∥f(X)i∥∞ 0 log(f)pf (f)df
= ∫ 1 0
log(f)pf (f)df︸ ︷︷ ︸ I1 +
∫ ∥f(X)i∥∞ 1
log(f)pf (f)df︸ ︷︷ ︸ I2 .
It is obvious that I2 is positive and finite. Due to the boundedness of pf we also have I1 ≥ C ∫ 1 0 log(f)df = Cf(log(f)− 1)|10 = −C > −∞.
However, the boundedness of pf of is hard to guarantee for an arbitrary neural network. In contrast, the boundedness of px is more realistic and easier to check. For bounded px we can prove (in Appendix A.5) the finiteness of the expectation E[log(|f(X)|) | |f(X)| > 0] for ReLU networks: Proposition 3.5. Consider a deterministic neural network function f(·) constructed with finitely many layers, a finite number of neurons per layer, and ReLU activation functions. Let X be a continuously distributed RV with probability density function px that is bounded (px ≤ P < ∞) and has bounded support X . Then, the conditional expectation E[log(|f(X)|) | |f(X)| > 0] is finite in each of its elements.
Finally, note that Theorem 3.3 assumes that the network is deterministic up to the considered dropout layer. This does not come with a loss of generality for feed-forward networks (e.g., with no residual connections): Indeed, one can apply Theorem 3.3 to the first hidden layer representation Z(1) with dropout, where this assumption always holds. Then, for the ℓ-th hidden layer and irrespective of whether this layer also has dropout, the MI I(X;Z(ℓ)) is finite due to the data processing inequality (Cover and Thomas, 1991, Th. 2.8.1). Therefore, Theorem 3.3 ensures that MI is finite for all hidden layers after the first continuous dropout layer.
4 ESTIMATION OF MI UNDER CONTINUOUS DROPOUT
We now consider estimating I(X;Z) in networks with continuously distributed dropout, starting with information dropout. As discussed by Achille and Soatto (2018), networks with information
dropout are trained with the cross-entropy loss ℓce (which is involved in the known variational lower bound I(Z;Y ) ≥ H(Y )− ℓce) and regularized using a variational upper bound on I(X;Z). Therefore, estimates of the quantities displayed in the information plane are directly used in the training loss and, thus, easy to track, at least for softplus activation functions4.
In the case of Gaussian dropout, to estimate I(X;Z) we approximate h(Z) and h(Z|X) separately (pseudocode is given in Algorithm 1 in Appendix A.6).
For estimating h(Z) we employ a Monte Carlo (MC) estimate, similar to the one proposed by Goldfeld et al. (2019). That is, we approximate the distribution of Z as a Gaussian mixture, where we draw samples f(x(j)), j = 1, . . . , |S| and place Gaussians with a diagonal covariance matrix with variances σ2|f(x(j))i|2, i = 1, . . . , N on each samplef(x(j)). For a sanity check, we also compute an upper bound of h(Z) given by the entropy of a Gaussian with the same covariance matrix as Z. Note that the estimation of the
upper bound requires a sufficiently large number of samples to guarantee that the sample covariance matrix is not singular and that the resulting entropy estimate is finite.
For each fixed x the conditional distribution $p_{z|x}$ is a Gaussian $\mathcal{N}(f(x), \mathrm{diag}(\sigma^2|f(x)_1|^2, \dots, \sigma^2|f(x)_N|^2))$. Moreover, when the input is fixed, the components of Z|X = x are independent, since the components of the noise are independent. This allows computing h(Z|X) as the sum of the $h(Z_i|X)$, where $Z_i$ is the i-th component of the representation vector. The computation of $h(Z_i|X)$ requires integrating over the input space to compute the expectation $\mathbb{E}_x[h(Z_i|X = x)]$, which can be approximated via MC sampling. That is, we approximate $h(Z_i|X)$ by $\frac{1}{|S|}\sum_{j=1}^{|S|} h(Z_i|X = x^{(j)})$, where $h(Z_i|X = x^{(j)}) = \log(|f(x^{(j)})_i|\,\sigma\sqrt{2\pi e})$.
We consider a simple toy problem for validating our approach to estimating MI: the input X is generated from an n-dimensional standard normal distribution, transformed by the function f(X) = 2X + 0.5, and then subjected to Gaussian dropout distributed according to $\mathcal{N}(1, \sigma^2)$. We investigate the convergence of our estimator for h(Z|X) with an increasing number of samples. For each input data point, we generate 10 noise masks, thus obtaining 10 samples of Z for each $x^{(j)}$. The results in Fig. 2 show that the estimate stabilizes with a larger number of samples for different dimensionalities of the data. We also compare the estimate to the upper bound for h(Z) in Fig. 3.
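The toy setup can be reproduced in a few lines; the following sketch (our own minimal version, relying only on the closed-form expression above) estimates h(Z|X) and shows how it stabilizes with the sample size:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dim, sigma = 5, 0.5

def h_z_given_x(x_batch):
    """MC estimate of h(Z|X) for Z = f(X) * D, D ~ N(1, sigma^2),
    using the closed form h(Z_i|X=x) = log(|f(x)_i| * sigma * sqrt(2*pi*e))."""
    f_x = 2.0 * x_batch + 0.5                      # the toy network f(X) = 2X + 0.5
    per_dim = np.log(np.abs(f_x) * sigma * np.sqrt(2 * np.pi * np.e))
    return per_dim.sum(axis=1).mean()              # sum over dims, average over samples

for n_samples in [100, 1000, 10000]:
    x = rng.standard_normal((n_samples, n_dim))
    print(n_samples, h_z_given_x(x))
```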
We finally compare our estimate of MI to binning, to the EDGE estimator (Noshad et al., 2019), and to the lower bounds analyzed by McAllester and Stratos (2020). The results are shown in Fig. 4. In the plots, doe stands for the difference-of-entropies (DoE) estimator and doe_l for DoE with logistic parametrization (McAllester and Stratos, 2020). The binning estimator underestimates the
4Indeed, for softplus activation functions, the variational approximation of I(X;Z) is available in closed form, while for ReLU activation functions, the available expression is only useful for minimizing, rather than for computing, I(X;Z) (see Appendix A.1).
MI when the bin size is large and overestimates it when the bin size is small (Ross, 2014), which can be clearly seen in the plots, where bins are organized both by size (upper axis) and by number (lower axis). Moreover, with high-dimensional data, binning hits the maximal possible value of log(|S|) very fast and cannot reach larger MI values. According to McAllester and Stratos (2020), lower-bound-based MI estimators (e.g., MINE (Belghazi et al., 2018)) also need exponentially (in the true value of MI) many data points for a good approximation; otherwise they will always heavily underestimate the MI.
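For reference, the binning baseline essentially reduces to the entropy of quantized representations; a minimal sketch (ours, not the exact implementation used in the cited works) looks as follows:

```python
import numpy as np

def binned_mi_xz(z_samples, n_bins=30):
    """Binning estimate of I(X; Z_hat) = H(Z_hat): quantize each dimension
    of Z into n_bins equal-width bins, then compute the empirical entropy
    of the resulting discrete codes (in nats)."""
    lo, hi = z_samples.min(axis=0), z_samples.max(axis=0)
    codes = np.floor((z_samples - lo) / (hi - lo + 1e-12) * n_bins).astype(int)
    _, counts = np.unique(codes, axis=0, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum()   # capped at log(|S|) by construction
```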
Further plots for different dropout variances and input dimensionalities are given in Appendix A.7.
5 INFORMATION PLANE ANALYSIS OF DROPOUT NETWORKS
We use the estimators described in the previous section for an IP analysis of networks with Gaussian and information dropout. We always consider only the representation corresponding to the first dropout layer5 and measure the MI in nats, i.e., we use the natural logarithm. For estimating I(Y;Z), we employ the EDGE estimator (Noshad et al., 2019) for Gaussian dropout and the variational estimate for information dropout. IPs created using the binning estimator use binning for both I(X;Z) and I(Y;Z).
In the first set of experiments we investigate the difference between IPs obtained via our proposed estimator
and via binning. The analysis on the MNIST dataset was performed for a LeNet network (LeCun et al., 1998) that achieves 99% accuracy and for a simple fully-connected (FC) network with three hidden layers (28×28−512−128−32−10) and softplus activation functions achieving 97% accuracy. We analyze both information dropout and Gaussian dropout in the LeNet network and only Gaussian dropout in the FC network. In both cases dropout is applied to the penultimate layer. We compare IPs based on binning estimators to IPs based on our estimators in Fig. 1 and Fig. 5.
5This makes the MI estimation more efficient, since the previous part of the network is deterministic which allows for an analytical expression of h(Z|X = x). Note however, that the estimation could be extended to higher layers as well since for those MI also remains finite. However, an estimator different from ours should be used for those layers.
We also analyze the IPs for a ResNet18 trained on CIFAR10 (see Fig. 6), where we added an additional bottleneck layer with 128 neurons and Gaussian dropout before the output layer, and which achieves an accuracy of 94%.
Interestingly, for all networks and datasets we observe significant compression for our estimator and a lack of compression for binning estimators (also for different bin size, see Appendix A.8). This indicates that either the MI compression measured in dropout networks is different from purely geometrical compression, or
that the number of samples |S| is insufficient to reliably estimate I(X;Z) by binning.
In the second set of experiments, we analyze IPs in information dropout networks, with MI estimated as described before. To this end, we trained a fully convolutional neural network (fullCNN) on CIFAR10 using code provided by Achille and Soatto (2018). Training proceeded for 200 epochs using SGD with momentum and, different from the original setup, with only one dropout layer after the third convolutional layer. The batch size was set to 100; the learning rate was initially set to 0.05 and was reduced by a factor of 0.1 after the 40th, 80th, and 120th epochs. The network was trained with different values of the regularization weight β and different numbers of filters in the convolutional layers. That is, the full-size fullCNN has 3 layers with 96 filters succeeded by 4 layers with 192 filters, while the small network retains only 25% of these filters. Also different from the original setup, we allowed the noise variance to grow up to 0.95 in order to make the effect of limiting the information between representation and input more pronounced. Results are shown in Fig. 7. It can be seen that regularizing I(X;Z) is effective (i.e., larger values of β lead to smaller I(X;Z)), and that regularizing too strongly (β = 20) leads to worse performance: the test error is 5% higher and the train error is 10% higher. We further see stronger compression for smaller β and almost no compression for larger β. We conjecture that compression can only become visible if sufficient information is permitted to flow through the network (which happens only for small β). Fig. 7 (c) and (d) show the IPs for the small fullCNN. It can be seen that the smaller network appears not to compress at all (see Fig. 7 (c)); rather, I(X;Z) increases throughout training until it is at the same level as in Fig. 7 (a). This indicates that β determines to which point in the IP information compresses, and that the IP curve traversed during training depends on the overall capacity of the neural network.
Plots for the additional experiments can be found in Appendix A.8.
6 DISCUSSION
Whether or not information-theoretic compression is correlated with improved generalization is the main question connected to and the most prominent justification for information plane analysis of deep neural networks. Such a connection, however, can only be tested for neural networks for which MI is finite and therefore measurable. In our theoretical analysis, we investigate if different variants of dropout noise allow for finite values of MI under an assumption of a continuous input distribution. We answered this question positively by showing that in networks with certain constraints on the induced distribution of the representations, continuous dropout noise with finite differential entropy prevents I(X;Z) from becoming infinite. We have further shown that these constraints on the distribution of the representation are satisfied in ReLU networks if the probability density function of the input is bounded.
Following this conclusion we propose an MC-based estimate of MI in Gaussian dropout networks and perform an IP analysis for different networks with Gaussian and information dropout on different datasets. The experiments show that the binning estimator behaves very differently from our estimator: While our estimator mostly exhibits compression in the IP, the binning estimator does not. Further, the values of I(X;Z) for our estimator are often orders of magnitude larger than the values of I(Y ;Z), especially when compared to the binning estimator. Assuming that the proposed estimators are reasonably accurate, this makes a connection between information-theoretic compression and generalization questionable. While these preliminary experiments do not conclusively answer the question if such a connection exists, they show a practically relevant setting in which this correlation can be studied.
The discrepancy between the binning estimator and our estimator further suggests that either the information-theoretic compression we observe using our estimator is not geometric, or that there are insufficient samples to obtain reliable estimates from the binning estimator. This is in contrast with the work of Goldfeld et al. (2019), which showed that information-theoretic and geometric compression were linked in their networks with additive noise. We thus believe that a closer investigation of whether multiplicative noise induces geometric compression, and whether the induced compression improves generalization performance, are interesting questions for future research.
ACKNOWLEDGEMENTS
The authors want to thank Michael Kamp, Simon Damm, Ziv Goldfeld, and Jihao Andreas Lin for valuable discussions about the work.
Asja Fischer acknowledges support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy - EXC 2092 CASA - 390781972.
The Know-Center is funded within the Austrian COMET Program - Competence Centers for Excellent Technologies - under the auspices of the Austrian Federal Ministry of Climate Action, Environment, Energy, Mobility, Innovation and Technology, the Austrian Federal Ministry of Digital and Economic Affairs, and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency FFG.
A APPENDIX
A.1 INFORMATION DROPOUT
One type of dropout with continuous noise is termed information dropout (Achille and Soatto, 2018). It combines dropout noise sampled from a log-normal distribution, $\epsilon \sim p_\epsilon = \log\mathcal{N}(0, \alpha^2_\theta(x))$, where $\alpha_\theta(x)$ is a learnable parameter depending on the parameters θ of the network, with a regularization term $\mathrm{KL}(p_{z|x}\,\|\,\prod_{i=1}^{|Z|} p_{z_i})$. This regularization term is based on an information bottleneck objective for training neural networks: rewriting the information bottleneck Lagrangian and adding a disentanglement term (i.e., requiring each element of the representation Z to be independent of the others) results in the aforementioned formula. Additionally, it is proposed to use as prior $p_z$ a particular distribution, defined by the choice of activation function (ReLU or softplus), whose validity is empirically verified. Such priors and the selected dropout noise allow for deriving a closed form of the KL-divergence, which makes it easy to directly track IP values during training.
In the following, we provide the closed form for computation of I(X;Z) as proposed by Achille and Soatto (2018):
$$I(X;Z) = \mathrm{KL}(p_{x,z}\,\|\,p_x p_z) = \int p_{x,z}(x,z)\log\left(\frac{p_{x,z}(x,z)}{p_z(z)\,p_x(x)}\right)\mathrm{d}x\,\mathrm{d}z$$
$$= \int p_x(x)\,p_{z|x}(z)\log\left(\frac{p_x(x)\,p_{z|x}(z)}{p_z(z)\,p_x(x)}\right)\mathrm{d}x\,\mathrm{d}z = \int p_x(x)\,\mathrm{KL}(p_{z|x}\,\|\,p_z)\,\mathrm{d}x = \mathbb{E}_x[\mathrm{KL}(p_{z|x}\,\|\,p_z)]\,.$$
Empirically we can approximate this as $I(X;Z) \approx \frac{1}{|S|}\sum_{j=1}^{|S|} \mathrm{KL}(p_{z|x^{(j)}}\,\|\,p_z)$, where we average over the dataset of $|S|$ samples of X. First, we discuss ReLU neural networks. The prior distribution $p_z$ in this case is a mixture of two parts: an improper log-uniform distribution and a point mass at 0. This prior is empirically valid for ReLU activations. We first restrict the derivation to the case $f(X) \neq 0$ (which in turn means that $Z \neq 0$, since the noise ϵ is log-normal and cannot be 0). In the following we omit the subscript of probability density functions when it is clear from the argument.
$$\mathrm{KL}(p_{z|x^{(j)}}\,\|\,p_z) = \mathrm{KL}(p_{\log(z)|x^{(j)}}\,\|\,p_{\log(z)}) \quad (1)$$
$$= \int p(\log(z)\,|\,x^{(j)})\log\left(\frac{p(\log(z)\,|\,x^{(j)})}{p(\log(z))}\right)\mathrm{d}z$$
$$= \int p(\log(\epsilon)+\log(f(x^{(j)}))\,|\,x^{(j)})\log\left(\frac{p(\log(\epsilon)+\log(f(x^{(j)}))\,|\,x^{(j)})}{c}\right)\mathrm{d}\epsilon \quad (2)$$
$$= \int p(\log(\epsilon))\log(p(\log(\epsilon)))\,\mathrm{d}\epsilon - \int p(\log(\epsilon))\log(c)\,\mathrm{d}\epsilon \quad (3)$$
$$= \int p(\log(\epsilon))\log(p(\log(\epsilon)))\,\mathrm{d}\epsilon - \log(c) = -h(\log(\epsilon)) - \log(c) \quad (4)$$
$$= -\left(\log(\alpha(x^{(j)})) + \tfrac{1}{2}\log(2\pi e)\right) - \log(c)\,, \quad (5)$$
where equation 1 holds due to the invariance of the KL-divergence under transformation with a strictly monotone function (log(·)); equation 2 holds since log(Z) = log(ϵ) + log(f(X)) and $p_{\log(z)} = c$ for the improper log-uniform prior; equation 3 takes into account that the density is invariant under the constant shift $\log(f(x^{(j)}))$ and that $p_{\log(\epsilon)|x^{(j)}} = p_{\log(\epsilon)}$ because ϵ is independent of X; equation 4 uses that $\int p_{\log(\epsilon)}\,\mathrm{d}\epsilon = 1$; finally, equation 5 holds because log(ϵ) is normally distributed, so its entropy is available in closed form.
Now consider f(X) = 0, which implies Z = 0. Then $p_{Z|X} = \delta_0$ (a point mass, or Dirac delta) and the KL term becomes
$$\mathrm{KL}(p_{z|x^{(j)}}\,\|\,p_z) = \int p_{z|x}(z)\log\left(\frac{p_{z|x}(z)}{p_z(z)}\right)\mathrm{d}z = \int \delta_0\log\left(\frac{\delta_0}{q\,\delta_0}\right)\mathrm{d}z = -\log(q)\,, \quad (6)$$
where q is the weight of the point mass in the prior pz .
Combining equation 5 and equation 6 yields a computable I(X;Z). As can be seen, one has to correctly combine the non-zero and zero values of f(X) and also know the parameters of the prior $p_z$: the constant c and the weight q. This makes the approach impractical for IP analysis.
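Written out explicitly (this combined expression, per component of Z, is our own assembly of equations 5 and 6, not part of the original derivation):
$$I(X;Z) \approx \frac{1}{|S|}\sum_{j=1}^{|S|}\Big[\mathbb{1}\{f(x^{(j)})\neq 0\}\Big(-\log\alpha(x^{(j)}) - \tfrac{1}{2}\log(2\pi e) - \log c\Big) + \mathbb{1}\{f(x^{(j)})= 0\}\big(-\log q\big)\Big]\,,$$
which indeed requires knowing both $c$ and $q$.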
If instead of ReLU the network has softplus activations, then the prior on the representation distribution is a standard log-normal instead of the log-uniform with a Dirac delta. In this case the computation is very simple, since the KL divergence between two log-normal distributions equals the KL divergence between the corresponding normal distributions:
$$\mathrm{KL}(p_{z|x^{(j)}}\,\|\,p_z) = \frac{\alpha^2(x^{(j)}) + \mu^2}{2\sigma^2} - \log\left(\frac{\alpha(x^{(j)})}{\sigma}\right) - \frac{1}{2}\,, \quad (7)$$
where σ² = 1 and µ = 0 are the known parameters of the prior. Thus, softplus activations (equation 7) allow for direct computation of I(X;Z).
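Since equation 7 is in closed form, tracking I(X;Z) for softplus information dropout amounts to averaging it over the dataset; a minimal sketch (our own, assuming the prior parameters σ² = 1 and µ = 0 as above) is:

```python
import numpy as np

def info_dropout_mi_softplus(alphas):
    """I(X;Z) ~= average over samples of KL(p_{z|x} || p_z) from equation 7
    with sigma^2 = 1 and mu = 0, i.e., per component:
    KL = 0.5 * alpha(x)^2 - log(alpha(x)) - 0.5."""
    alphas = np.asarray(alphas)                    # shape (samples, components)
    kl = 0.5 * alphas**2 - np.log(alphas) - 0.5
    return kl.sum(axis=-1).mean()                  # sum components, average samples
```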
A.2 PROOF OF THEOREM 3.1
Proof. Using the chain rule of MI, we have
$$I(X;Z) = I(X;Z,B) - I(B;X|Z) = I(X;Z|B) + I(B;X) - I(B;X|Z) \ge I(X;Z|B) - H(B)\,,$$
where the inequality follows from dropping I(B;X), since B and X are independent, and from the fact that I(B;X|Z) ≤ H(B). Since $B \in \{0,1\}^{\hat N}$ is a discrete RV, it immediately follows that $H(B) \le \hat N \log 2$. Now note that
$$I(X;Z|B) = \sum_{b\in\{0,1\}^{\hat N}} P(B=b)\,I(X;Z|B=b)\,.$$
Since the Bernoulli RVs are independent, positive probability mass is assigned to b = (1, 1, . . . , 1), i.e., to the case where all neurons are active. Evidently, when b = (1, 1, . . . , 1) it follows that Z = f(X). Thus, with (Amjad and Geiger, 2020, Th. 1)
$$I(X;Z|B) \ge P(B=(1,1,\dots,1))\,I(X;f(X)) = \infty\,,$$
and hence I(X;Z) = ∞.
A.3 PROOF OF THEOREM 3.2
Proof. If the binary dropout is such that nonzero probability is assigned to the dropout mask b = (1, 1, . . . , 1), then the statement of the theorem follows as in the proof of Theorem 3.1.
Assume now that B is such that zero mass is assigned to b = (1, 1, . . . , 1). To treat this case, we suppose that the distribution of X has a portion with a continuous probability density function on a compact set and that the neural network has activation functions that are either bi-Lipschitz or continuously differentiable with a strictly positive derivative (following the requirements of Amjad and Geiger (2020, Th. 1)). Then, we obtain I(X; f(X)) =∞ from (Amjad and Geiger, 2020, Th. 1) for almost all parameterizations of the neural network. Under this setting, fB(X) is again a neural network with activation functions that are either bi-Lipschitz or continuously differentiable with a strictly positive derivative. Assuming that b is such that the input of the network is not completely disconnected from the considered layer, for this pattern we have I(X;Z|B = b) = ∞. Otherwise, we obviously have I(X;Z|B = b) = 0. The statement of the theorem follows from taking the expectation over all patterns b.
A.4 PROOF OF THEOREM 3.3
Proof. W.l.o.g. we first restrict our attention to the dimensions of the representation Z that are different from zero. Specifically, suppose that $Z = (Z_1, \dots, Z_N)$ and that $B = (B_1, \dots, B_N)$ with $B_i = 0$ if $Z_i = 0$ and $B_i = 1$ otherwise. Clearly, B is a function of Z, hence I(X;Z) = I(X;Z,B) = I(B;X) + I(Z;X|B). Since B is binary, we have $I(X;B) \le H(B) \le N \log 2$. Let $Z_b = (Z_i \,|\, i\colon B_i = 1)$ denote the sub-vector of non-zero elements of Z; then
$$I(X;Z) \le N\log 2 + \sum_b P(B=b)\,I(Z_b;X)\,,$$
where, given B = b, $I(Z_b;X) = I(Z;X|B=b)$ holds because constant (i.e., zero) RVs do not contribute to MI. Therefore, I(X;Z) is finite iff $I(Z_b;X) = I(Z;X|B=b)$ is finite B-almost surely. We thus fix an arbitrary B = b and continue the proof for $Z = Z_b$.
We decompose the MI into differential entropies as I(X;Z) = h(Z) − h(Z|X). The differential entropy of the representations, h(Z), is upper-bounded by the entropy of a Gaussian RV with the same covariance matrix Σ as the distribution of $Z = (Z_1, \dots, Z_N)$, i.e., by $\frac{N}{2}\log(2\pi) + \frac{1}{2}\log(\det(\Sigma)) + \frac{N}{2}$. From Hadamard's inequality and since Σ is positive semidefinite it follows that $\det(\Sigma) \le \prod_{i=1}^{N} \sigma^2_{ii}$, where the $\sigma^2_{ii} = \mathrm{Var}[Z_i]$ are the diagonal elements of the covariance matrix. This variance can be bounded from above: since X is bounded and f(·) is a composition of Lipschitz functions, $f(X)_i$ is bounded as well. Recalling that $\mathbb{E}[D_i(x)^2] \le M$ holds X-almost surely, this yields
$$\mathrm{Var}[Z_i] \le \mathbb{E}_x[f(X)_i^2 D_i(X)^2] = \mathbb{E}_x\big[f(X)_i^2\,\mathbb{E}_d[D_i(X)^2\,|\,X]\big] \le M\,\mathbb{E}_x[f(X)_i^2] \le M\,\|f(X)_i\|_\infty^2\,.$$
It remains to show that h(Z|X) > −∞. Due to the conditional independence of $D_i$ and $D_j$ given X for all i ≠ j, the conditional differential entropy of Z factorizes into the sum of the conditional differential entropies of its components, i.e., $h(Z|X) = \sum_{i=1}^{N} h(Z_i|X)$. We write this conditional entropy as an expectation over X and obtain, using (Cover and Thomas, 1991, Th. 9.6.4),
$$h(Z_i|X) = \mathbb{E}_x[h(Z_i|X=x)] = \mathbb{E}_x[h(D_i(x)f(x)_i\,|\,X=x)] = \mathbb{E}_x[h(D_i(x)\,|\,X=x)] + \mathbb{E}_x[\log(|f(X)_i|)]$$
by the formula of change of variables for differential entropy. Both terms are finite as per the assertion of the theorem. The first term is finite since we assumed that the differential entropy of Di(X) is essentially bounded, i.e., there exists a number C <∞ such that h(Di(x)) ≤ C X-almost surely. The second term is finite since we assumed that the conditional expectation E[log(|f(X)|) | |f(X)| > 0] is finite in each of its elements, and since Zi ̸= 0 implies |f(X)i| > 0. This completes the proof.
A.5 PROOF OF PROPOSITION 3.5
Proof. We assume w.l.o.g. that f(·) has a range of dimension D = 1, i.e., $f\colon \mathcal{X} \to \mathbb{R}$, where $\mathcal{X} \subseteq \mathbb{R}^n$ is the function domain. The proof extends straightforwardly to multiple output dimensions of f(·). Since f(·) is constructed from a finitely-sized neural network with ReLU activation functions, it is piecewise affinely linear on a finite partition of the function domain. The fact that E[log(|f(X)|) | |f(X)| > 0] < ∞ then follows immediately from the fact that X, and thus |f(X)|, is bounded. To investigate whether E[log(|f(X)|) | |f(X)| > 0] > −∞, split the domain X into the following parts:
1. X0 = f−1({0}) denotes the element of the partition on which f(X) vanishes;
2. {X ci }i=1,...,ℓ denotes elements of the partition of X on which f(X) = ci, i.e., on which f(·) is constant;
3. $\mathcal{X}^a = \bigcup_{i=1}^{m} \mathcal{X}^a_i$ denotes the union of all other sets $\{\mathcal{X}^a_i\}_{i=1,\dots,m}$ of the partition, on which f(·) is not constant.
For the last subset, define the function f̃ : X a → Rn via f̃(x) = (|f(x)|, x2, x3, . . . , xn). Note that f̃(·) is piecewise bijective, hence W̃ = f̃(X) has a probability density function that is obtained
from the change of variables formula:
$$p_{\tilde w}(\tilde w) = \sum_{x\in\tilde f^{-1}(\tilde w)} \frac{p_x(x)}{|\det(J_{\tilde f}(x))|}\,,$$
where $J_{\tilde f}(x) = \big[\frac{\partial \tilde f_i}{\partial x_j}(x)\big]$ is the Jacobian matrix of $\tilde f(\cdot)$, with $\tilde f_1(x) = |f(x)|$ and $\tilde f_j(x) = x_j$ for all j ≥ 2. Expanding the determinant along the first column shows that it equals $|\frac{\partial f}{\partial x_1}(x)|$. The density $p_{w|\mathcal{X}^a}$ of the conditional random variable $W = |f(X)|$ given $X \in \mathcal{X}^a$ can then be obtained from $p_{\tilde w}$ by marginalization:
$$p_{w|\mathcal{X}^a}(w) = \int p_{\tilde w}(w, x_2^n)\,\mathrm{d}x_2^n = \int \sum_{x\in\tilde f^{-1}(w, x_2^n)} \frac{p_x(x)}{|\partial f/\partial x_1(x)|}\,\mathrm{d}x_2^n\,, \quad (8)$$
where $x_2^n = (x_2, \dots, x_n)$ and where we perform an (n−1)-fold integral. Thus, by the Lebesgue decomposition, the distribution of W = |f(X)| can be split into an absolutely continuous component with probability density function $p_{w|\mathcal{X}^a}$ and a discrete component with finitely many mass points, for which we have $P(W = c_i) = \int_{\mathcal{X}^c_i} p_x(x)\,\mathrm{d}x =: p_x(\mathcal{X}^c_i)$. By the law of the unconscious statistician, we then obtain
$$\mathbb{E}[\log(|f(X)|)\,|\,|f(X)|>0] = \mathbb{E}[\log(W)\,|\,W>0]$$
$$= \sum_{i=1}^{\ell} p_x(\mathcal{X}^c_i)\log|c_i| + p_x(\mathcal{X}^a)\int_0^\infty \log(w)\,p_{w|\mathcal{X}^a}(w)\,\mathrm{d}w$$
$$= \sum_{i=1}^{\ell} p_x(\mathcal{X}^c_i)\log|c_i| + p_x(\mathcal{X}^a)\underbrace{\int_0^{\epsilon}\log(w)\,p_{w|\mathcal{X}^a}(w)\,\mathrm{d}w}_{I_1} + p_x(\mathcal{X}^a)\underbrace{\int_{\epsilon}^{\infty}\log(w)\,p_{w|\mathcal{X}^a}(w)\,\mathrm{d}w}_{I_2}\,,$$
where in the last line we split the integral at a fixed ϵ≪ 1. Clearly, the first sum is finite since ci > 0 for all i. For the remaining summands involving integrals, suppose for now that pw|Xa(w) ≤ C < ∞. Then,
$$I_1 = \int_0^{\epsilon} p_{w|\mathcal{X}^a}(w)\log(w)\,\mathrm{d}w \ge \int_0^{\epsilon} C\log(w)\,\mathrm{d}w = C(\epsilon\log(\epsilon)-\epsilon) > -\infty$$
$$I_2 = \int_{\epsilon}^{\infty} p_{w|\mathcal{X}^a}(w)\log(w)\,\mathrm{d}w \ge \int_{\epsilon}^{\infty} p_{w|\mathcal{X}^a}(w)\Big(1-\frac{1}{w}\Big)\mathrm{d}w \ge \int_{\epsilon}^{\infty} p_{w|\mathcal{X}^a}(w)\Big(1-\frac{1}{\epsilon}\Big)\mathrm{d}w \ge 1-\frac{1}{\epsilon} > -\infty\,.$$
It thus remains to show that $p_{w|\mathcal{X}^a}(w) \le C$ for $w \in [0, \epsilon]$. To this end, we revisit equation 8 and note that the integral is finite if i) $p_x$ is bounded, ii) the integration is over a bounded set, and iii) $|\partial f/\partial x_1(x)| \ge \epsilon_1 > 0$. Conditions i) and ii) are ensured by the assertion of the lemma. It remains to show that condition iii) holds.
Note that in contrast to using $\tilde f(x) = (|f(x)|, x_2, x_3, \dots, x_n)$, the same $p_{w|\mathcal{X}^a}(w)$ can also be obtained from the piecewise bijective function $\tilde f(x) = (x_1, |f(x)|, x_3, \dots, x_n)$, etc. Hence, $p_{w|\mathcal{X}^a}(w) \le C$ if the partial derivative of f is bounded from below in at least one dimension, i.e., if there exists an i such that $|\partial f/\partial x_i(x)| \ge \epsilon_1$. Since we have
$$\|\nabla_x f(x)\|_1 = \sum_{i=1}^{n} \left|\frac{\partial f}{\partial x_i}(x)\right|,$$
this is equivalent to requiring that the L1 norm of the gradient is bounded from below. Indeed, recall that f is piecewise affinely linear with finitely many pieces, and that its restriction to $\mathcal{X}^a$ is non-constant. On the restriction to $\mathcal{X}^a_i$ we thus have $\|\nabla_x f(x)\|_1 = g_i > 0$ for all $x \in \mathcal{X}^a_i$ and each $i \in \{1, \dots, m\}$. Hence, we can find an $\epsilon_1$ such that $\min_i g_i \ge n\cdot\epsilon_1 > 0$, which implies that for every $x \in \mathcal{X}^a$ there exists an i for which $|\partial f/\partial x_i(x)| \ge \epsilon_1$. This completes the proof.
A.6 ESTIMATION OF MI UNDER GAUSSIAN DROPOUT
Algorithm 1 describes how the estimation of I(X;Z), with Z being a representation under Gaussian dropout, can be carried out. This is the way we estimated MI in our experiments, but any other estimator can be used in this setup.
Algorithm 1 Estimation of MI under Gaussian dropout
Require: GMM-MEANS (number of Gaussians in the mixture), σ (dropout noise parameter, ϵ ∼ N(1, σ²)), nonoise-reprs (noise-free representations)
  reprs ← [ ]                                  ▷ Generate noisy samples with the corresponding variance
  for all nr in nonoise-reprs do
      for i ← 1, n do
          ϵ ← sample from N(1, σ²)
          reprs ← reprs + [nr ◦ ϵ]
      end for
  end for
  points ← nonoise-reprs[: GMM-MEANS]          ▷ Build the GMM on a restricted number of points for faster computation
  d ← [ ]
  for all p in points do
      d ← d + [Gaussian(mean = p, stds = σ|p|)]
  end for
  gmm ← MixtureModel(d)
  lp ← [ ]                                     ▷ Log-probabilities of the noisy samples under the GMM
  for all r in reprs do
      lp ← lp + [gmm.log_probability(r)]
  end for
  h(Z) ← −mean(lp)                             ▷ MC entropy estimate: cross-entropy of the samples under the mixture
  h(Z|X) ← 0                                   ▷ Conditional entropy via the closed-form formula
  for i ← 1, dim(reprs[0]) do                  ▷ For each dimension of the representation
      h(Z|X) ← h(Z|X) + mean(log(√(2πe) σ |nonoise-reprs[:, i]|))   ▷ Uses the noise-free representations
  end for
  I(X;Z) ← h(Z) − h(Z|X)                       ▷ Final MI estimate
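A runnable counterpart of Algorithm 1 might look as follows (a minimal numpy/scipy sketch; the function name, the variance floor, and the default arguments are our own choices, not taken from the released code):

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import multivariate_normal

def estimate_mi_gaussian_dropout(nonoise_reprs, sigma, gmm_means=200, n_noise=10, seed=0):
    """Estimate I(X;Z) for Z = f(X) * eps with elementwise eps ~ N(1, sigma^2):
    h(Z) via a Gaussian-mixture MC estimate, h(Z|X) via the closed form."""
    rng = np.random.default_rng(seed)
    m, d = nonoise_reprs.shape
    # Noisy samples: n_noise independent dropout masks per representation.
    eps = rng.normal(1.0, sigma, size=(n_noise, m, d))
    reprs = (nonoise_reprs[None, :, :] * eps).reshape(-1, d)
    # Equally weighted mixture of Gaussians centered at a subset of the
    # noise-free representations, with diagonal variances sigma^2 * f(x)_i^2
    # (floored to keep the covariance non-singular when f(x)_i is ~0).
    centers = nonoise_reprs[:gmm_means]
    comps = [multivariate_normal(mean=c,
                                 cov=np.diag(np.maximum((sigma * np.abs(c)) ** 2, 1e-12)))
             for c in centers]
    log_probs = np.stack([g.logpdf(reprs) for g in comps])   # (components, samples)
    log_p = logsumexp(log_probs, axis=0) - np.log(len(comps))
    h_z = -np.mean(log_p)                                    # MC estimate of h(Z)
    # Closed form: h(Z_i | X=x) = log(|f(x)_i| * sigma * sqrt(2*pi*e)).
    h_z_given_x = np.mean(np.sum(
        np.log(np.abs(nonoise_reprs) * sigma * np.sqrt(2 * np.pi * np.e)), axis=1))
    return h_z - h_z_given_x
```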
A.7 EVALUATION OF ESTIMATOR
Fig. 8 shows upper bounds and estimates of h(Z) with higher noise than in Fig. 3. As expected, larger noise increases the gap between the Gaussian-entropy-based upper bound and the mixture-based estimate.
In Fig. 9 we see the convergence of the MC estimate for h(Z|X) under larger noise. As expected, a larger noise variance results in smaller MI values (Fig. 10), while the trend observed when changing the dimensionality stays the same.
A.8 INFORMATION PLANE ANALYSIS
Note that in the experiments we analyze IPs on the training and test samples separately. In order to obtain a valid sample of hidden representations for the MI estimation during inference, we apply MC-Dropout, as opposed to the usual way of performing inference with dropout turned off. According to Srivastava et al. (2014), this is the theoretically sound way to obtain predictions, while turning off dropout and re-scaling the weights results in an approximation that allows for faster computation.
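For illustration, sampling representations with the dropout noise kept on at inference time can be done along the following lines (a PyTorch sketch of our own; f stands for the deterministic sub-network up to the dropout layer):

```python
import torch

def mc_gaussian_dropout(f, x, sigma, n_samples=10):
    """Draw n_samples of Z = f(x) * eps, eps ~ N(1, sigma^2), at inference
    time, i.e., without turning the multiplicative noise off."""
    with torch.no_grad():
        h = f(x)                                    # noise-free representation
        eps = 1.0 + sigma * torch.randn((n_samples,) + h.shape)
        return h.unsqueeze(0) * eps                 # (n_samples, batch, dim)
```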
In Fig. 11, Fig. 12, and Fig. 13 we provide IPs built on the test sets of the corresponding datasets (MNIST, MNIST, and CIFAR10).
In Fig. 14 we provide additional IPs for the binning estimator with a varying number of bins used for MI estimation. We report the results for the fully-connected network trained on MNIST with Gaussian dropout variance 0.2.
In Fig. 15 we show the IPs obtained for the same fully-connected network trained on MNIST with the variance of the Gaussian dropout set to 0.4.
Summary Of The Paper
The goal of the paper is to obtain a sufficiently accurate estimate of MI (between input and representation) for dropout neural networks and to use it to confirm the information bottleneck hypothesis for this NN model.
Contributions:
The authors propose a Monte Carlo-based estimate of MI in Gaussian dropout networks that is relatively accurate and well supported by the presented theoretical analysis.
The authors use their MI estimate to provide information plane analyses for several NN models with Gaussian and information dropout (LeNet and MLP on MNIST, ResNet on CIFAR10)
Strengths And Weaknesses
Plus
A solid contribution to the area of information-theoretic analysis of neural networks.
The idea of utilizing stochasticity induced by dropout layers to estimate MI (between input and internal model representation) is novel and interesting.
The authors provide a thorough theoretical analysis of MI estimation in NNs and its limitations for both binary dropout and dropout with continuous noise.
The proposed monte-carlo MI estimate seems promising (based on the experiments).
Minus
I see the main limitation of the paper in its relatively narrow area of application. The method is restricted to dropout networks with continuous noise. The authors provide a proof that the principle cannot be extended to (the widely used) binary dropout (or to NNs without dropout). The concurrent technique of Goldfeld et al. (2019) seems to offer wider possibilities of use despite its other disadvantages (namely the need to alter the internal representations of the model with noise).
Clarity, Quality, Novelty And Reproducibility
Novelty
Both the idea and realization of using stochasticity hidden in dropout layers to estimate MI are novel. The MI estimates seem to be accurate enough to provide a meaningful information plane analysis of the model.
Related work is cited and analyzed adequately.
Quality
The technical and experimental results seem to be well-executed to the best of my assessment. I appreciate namely the detailed theoretical analysis.
Clarity And Reproducibility
The paper is written comprehensibly and its structure is good.
Nevertheless, the description and computation of the proposed MI estimate are "hidden" in the paragraphs. I recommend that the authors summarize the computation in one comprehensible expression (or in a box with pseudocode). This will help readers to quickly understand and reproduce it.
The colors in Fig. 4 and Fig. 10 are not well chosen. It is impossible to distinguish the individual shades of orange/red, which makes the graphs incomprehensible. Please change the colors and improve legibility, e.g., with dashed lines.
Figures and their labels are too small (compared to the other text) and are therefore hardly comprehensible without high magnification on the screen (e.g., Figures 1, 5, 6, and 7).
It is not clear what "doe" and "doe_l" stand for in Fig. 4 and Fig. 10. Please add an explanation of this notation. |
In the following, we provide the closed form for computation of I(X;Z) as proposed by Achille and Soatto (2018):
I(X;Z) = KL(px,z||pzpx) = ∫ px,z(x, z) log ( px,z(x, z)
pz(z)px(x)
) dxdz
= ∫ px(x)pz|x(z) log ( px(x)pz|x(z)
pz(z)px(x)
) dxdz = ∫ px(x)KL(pz|x||pz)dx
= Ex[KL(pz|x||pz)] . Empirically we can approximate this as I(X;Z) = ∑|S|
j=1 KL(pz|x(j) ||pz), where we sum over the dataset of size |S| of samples of X . First, we discuss ReLU neural networks. The prior distribution pz in this case consists is a mixture of two parts: and improper log-uniform distribution and a point mass at 0. Such prior is empirically valid for ReLU activations. First we restrict the derivation to the case when f(X) ̸= 0 (which in turn means that Z ̸= 0, since noise ϵ is log-normal and cannot be 0). In the following we will omit the subscript of probability density functions, when it is clear from its argument.
KL(pz|x(j) ||pz) = KL(plog(z|x(j))||plog(z)) (1)
= ∫ p(log(z|x(j))) log ( p(log(z|x(j))) p(log(z)) ) dz
= ∫ p(log(ϵ) + log(f(x(j)))|x(j)) log ( p(log(ϵ) + log(f(x(j)))|x(j))
c
) dϵ (2)
= ∫ p(log(ϵ)) log(p(log(ϵ)))dϵ− ∫ p(log(ϵ)) log(c)dϵ (3)
= ∫ p(log(ϵ)) log(p(log(ϵ)))dϵ− log(c) = −h(log(ϵ))− log(c) (4)
= −(log(α(x(j))) + 1 2 log(2πe))− log(c) , (5)
where equation 1 holds due to the invariance of the KL-divergence under parameter transformation with a strictly monotone function (log(·)); equation 2 holds since log(Z) = log(ϵ) + log(f(X)) and plog(z) = c for the improper log-uniform distribution; equation 3 is taking into account that px+const = px, that log(f(x))|x(j) is constant, and that plog(ϵ)|x(j) = plog(ϵ) because ϵ is independent of X; equation 4 uses that ∫ plog(ϵ)dϵ = 1; finally equation 5 holds because log(ϵ) is normally distributed and its entropy can be computed in closed form.
Now we put f(X) = 0, and also get Z = 0. Then pZ|X = δ0 (point mass or Dirac delta) and MI becomes:
KL(pz|x(j) ||pz) = ∫ pz|x(z) log ( pz|x(z)
pz(z)
) dz = ∫ δ0 log ( δ0 qδ0 ) dz = − log(q) , (6)
where q is the weight of the point mass in the prior pz .
Combination of equation 5 and equation 6 results in a computable I(X;Z). As it can be seen, one has to correctly combine non-zero and zero values of f(X) and also know the parameters of the prior pz: constant c and weight q. This makes it not practical for IP analysis.
If instead of ReLU the network has softplus activations, then the prior on the representations distribution is standard log-normal instead of log-uniform with delta Dirac. In this case the computation is very simple, since KL divergence between two log-normal distributions is computed as KL divergence between corresponding normal distributions:
KL(pz|x(j) ||pz) = 1 2σ2 (α2(x(j)) + µ2)− log(α(x
(j)))
σ − 1 2 , (7)
where σ2 = 1 and µ = 0 are known parameters of the prior. Thus, softplus activations (equation 7) allows for direct computations of I(X;Z).
A.2 PROOF OF THEOREM 3.1
Proof. Using the chain rule of MI, we have
I(X;Z) = I(X;Z,B)− I(B;X|Z) = I(X;Z|B) + I(B;X)− I(B;X|Z) ≥ I(X;Z|B)−H(B)
where the inequality follows from dropping I(B;X) since B and X are independent and the fact that I(B;X|Z) ≤ H(B). Having B ∈ {0, 1}N̂ as a discrete RV, it immediately follows that H(B) ≤ N̂ log 2. Now note that
I(X;Z|B) = ∑
b∈{0,1}N̂ P(B = b)I(X;Z|B = b).
Since the Bernoulli RVs are independent, positive probability mass is assigned to b = (1, 1, . . . , 1), i.e., to the case where all neurons are active. Evidently, when b = (1, 1, . . . , 1) it follows that Z = f(X). Thus, with (Amjad and Geiger, 2020, Th. 1)
I(X;Z|B) ≥ P(b = (1, 1, . . . , 1))I(X; f(X)) =∞
and I(X;Z) =∞.
A.3 PROOF OF THEOREM 3.2
Proof. If the binary dropout is such that nonzero probability is assigned to the dropout mask b = (1, 1, . . . 1), then the statement of the theorem follows as in the proof of the theorem 3.1.
Assume now that B is such that zero mass is assigned to b = (1, 1, . . . , 1). To treat this case, we suppose that the distribution of X has a portion with a continuous probability density function on a compact set and that the neural network has activation functions that are either bi-Lipschitz or continuously differentiable with a strictly positive derivative (following the requirements of Amjad and Geiger (2020, Th. 1)). Then, we obtain I(X; f(X)) =∞ from (Amjad and Geiger, 2020, Th. 1) for almost all parameterizations of the neural network. Under this setting, fB(X) is again a neural network with activation functions that are either bi-Lipschitz or continuously differentiable with a strictly positive derivative. Assuming that b is such that the input of the network is not completely disconnected from the considered layer, for this pattern we have I(X;Z|B = b) = ∞. Otherwise, we obviously have I(X;Z|B = b) = 0. The statement of the theorem follows from taking the expectation over all patterns b.
A.4 PROOF OF THEOREM 3.3
Proof. W.l.o.g we first restrict our attention to the dimensions of representations Z that are different from zero. Specifically, suppose that Z = (Z1, . . . , ZN ) and that B = (B1, . . . , BN ) with Bi = 0 if Zi = 0 and Bi = 1 otherwise. Clearly, B is a function of Z, hence I(X;Z) = I(X;Z,B) =
I(B;X) + I(Z;X|B). Since B is binary, we have that I(X;B) ≤ H(B) ≤ n log 2. Let ZB = (Zi|i: Bi = 1) denote the sub-vector of non-zero elements of Z, then
I(X;Z) ≤ n log 2 + ∑ b P(B = b)I(Zb;X)
where, if B = b, I(Zb;X) = I(Z;X|B = b) holds because constant (i.e., 0) RVs do not contribute to MI. Therefore, I(X;Z) is finite iff I(Zb;X) = I(Z;X|B = b) is finite B-almost surely. We thus now fix an arbitrary B = b and continue the proof for Z = Zb.
We decompose MI into differential entropies as I(X;Z) = h(Z)−h(Z|X). The differential entropy of the representations h(Z) is upper-bounded by the entropy of a Gaussian RV with the same covariance matrix Σ as the distribution of Z = (Z1, . . . , ZN ), i.e., by N/2 log(2π) + 1/2 log(det(Σ)) + N/2. From Hadamard’s inequality and since Σ is positive semidefinite it follows that det(Σ) ≤∏n
i=1 σ 2 ii, where σ 2 ii are diagonal elements of the covariance matrix, i.e., σ 2 ii = V ar[Zi]. This variance can be bounded from above. Specifically, since Xi is bounded and f(·) is a composition of Lipschitz functions, f(X)i is bounded as well. Recalling that E[Di(x)2] ≤ M holds X-almost surely, this yields
V ar[Zi] ≤ Ex[f(X)2iDi(X)2] = Ex[f(X)2iEd[Di(X)2 | X]] ≤MEx[f(X)2i ] ≤M∥f(X)i∥2∞
It remains to show that the h(Z|X) > −∞. Due to the conditional independence of Di and Dj given X , for all i ̸= j, the conditional differential entropy of Z factorises in the sum of conditional differential entropy of its components, i.e., h(Z|X) = ∑N i=1 h(Zi|X). We write this conditional entropy as an expectation over X and obtain using (Cover and Thomas, 1991, Th. 9.6.4)
h(Zi|X) = Ex[h(Zi|X = x)] = Ex[h(Di(x)|f(x)i||X = x)] = Ex[h(Di(x)|X = x)] + Ex[log(|f(X)i|)]
by the formula of change of variables for differential entropy. Both terms are finite as per the assertion of the theorem. The first term is finite since we assumed that the differential entropy of Di(X) is essentially bounded, i.e., there exists a number C <∞ such that h(Di(x)) ≤ C X-almost surely. The second term is finite since we assumed that the conditional expectation E[log(|f(X)|) | |f(X)| > 0] is finite in each of its elements, and since Zi ̸= 0 implies |f(X)i| > 0. This completes the proof.
A.5 PROOF OF PROPOSITION 3.5
Proof. We assume w.l.o.g. that f(·) has a range with dimension D = 1, i.e., f : X → R, where X ⊆ Rn is the function domain. The proof can be straightforwardly extended to the several dimensions of f(·). Since f(·) is constructed using a finitely-sized neural network with ReLU activation functions, it is piecewise affinely linear on a finite partition of the function domain. The fact that E[log(|f(X)|) | |f(X)| > 0] <∞ follows then immediately from the fact that X , and thus |f(X )|, is bounded. To investigate whether E[log(|f(X)|) | |f(X)| > 0] > −∞, split domain X in the following partitions:
1. X0 = f−1({0}) denotes the element of the partition on which f(X) vanishes;
2. {X ci }i=1,...,ℓ denotes elements of the partition of X on which f(X) = ci, i.e., on which f(·) is constant;
3. X a = ⋃m
i=1 X ai denotes the union of the all other sets {X ai }i=1,...,m of the partition, where f(·) is not constant.
For the last subset, define the function f̃ : X a → Rn via f̃(x) = (|f(x)|, x2, x3, . . . , xn). Note that f̃(·) is piecewise bijective, hence W̃ = f̃(X) has a probability density function that is obtained
from the change of variables formula:
pw̃(w̃) = ∑
x∈f̃−1(w̃)
px(x)
|det(Jf̃ (x))|
where Jf̃ (x) = [ ∂f̃i ∂xj (x) ]
is the Jacobian matrix of f̃(·), with f̃1(x) = |f(x)| and f̃j(x) = xj for all j ≥ 2. It follows that Jacobian matrix is diagonal and has determinant | ∂f∂x1 (x)|. The density pw|Xa of the conditional random variable W = |f(X)| | X ∈ X a can be then obtained by marginalization from pw̃:
pw|Xa(w) = ∫ pw̃(w, x n 2 )dx n 2 = ∫ ∑ x∈f̃−1(w,xn2 ) px(x) | ∂f∂x1 (x)| dxn2 (8)
where xn2 = (x2, . . . , xn) and where we perform an (n− 1)-fold integral. Thus, by the Lebesgue decomposition, the distribution of W = |f(X)| can be split into an absolutely continuous component with a probability density function pw|Xa and a discrete component with finitely many mass points, for which we have P(W = ci) = ∫ X ci
px(x)dx =: px(X ci ). By the law of unconscious statistician, we then obtain
E[log(|f(X)|) | |f(X)| > 0] = E[log(W ) |W > 0]
= ℓ∑ i=1 px(X ci ) log |ci|+ px(X a) ∫ ∞ 0 log(w)pw|Xa(w)dw
= ℓ∑ i=1 px(X ci ) log |ci|+ px(X a) ∫ ϵ 0
log(w)pw|Xa(w)dw︸ ︷︷ ︸ I1
+px(X a) ∫ ∞ ϵ
log(w)pw|Xa(w)dw︸ ︷︷ ︸ I2
where in the last line we split the integral at a fixed ϵ≪ 1. Clearly, the first sum is finite since ci > 0 for all i. For the remaining summands involving integrals, suppose for now that pw|Xa(w) ≤ C < ∞. Then,
I1 = ∫ ϵ 0 pw log(w)dw ≥ ∫ ϵ 0 C log(w)dw = C(ϵ log(ϵ)− ϵ) > −∞
I2 = ∫ ∞ ϵ pw log(w)dw ≥ ∫ ∞ ϵ pw ( 1− 1 w ) dw ≥ ∫ ∞ ϵ pw ( 1− 1 ϵ ) dw ≥ 1− 1 ϵ > −∞.
We thus remain to show that pw|Xa(w) ≤ C for w ∈ [0, ϵ]. To this end, we revisit equation 8 and note that the integral is finite if i) px is bounded, ii) the integration is over a bounded set, and iii) | ∂f∂x1 (x)| ≥ ϵ1 > 0. Conditions i) and ii) are ensured by the assertion of the lemma. It remains to show that condition iii) holds.
Note that in contrast to using f̃(x) = (|f(x)|, x2, x3, . . . , xn), the same pw|Xa(w) can also be obtained by using the piecewise bijective function f̃(x) = (x1, |f(x)|, x3, . . . , xn), etc. Hence, pw|Xa(w) ≤ C if the partial derivative of f is bounded from below for at least one dimension, i.e., if there exists an i such that | ∂f∂x1 (x)| ≥ ϵ1. Since we have
∥∇xf(x)∥1 = n∑
i=1
∣∣∣∣ ∂f∂xi (x) ∣∣∣∣
this is equivalent to requiring that the L1 norm of the gradient is bounded from below. Indeed, remember that f is piecewise affinely linear with finitely many pieces, and its restriction to X a is non-constant. On its restriction to X a we thus have ∇xf(x) = gi > 0 for all x ∈ X a and some i ∈ {1, . . . ,m}. Hence, we can find an ϵ1 such that mini gi ≥ n · ϵ1 > 0, which implies that there exists an i for which | ∂f∂xi (x)| ≥ ϵ1 for all x ∈ X a. This completes the proof.
A.6 ESTIMATION OF MI UNDER GAUSSIAN DROPOUT
In the Algorithm 1 we describe how the estimation of I(X;Z) with Z being a representation under Gaussian dropout can be done. This is the way we estimated MI for our experiments, but any other estimator can be used in this setup.
Algorithm 1 Estimation of MI under Gaussian dropout Require: GMM-MEANS, σ, nonoise-reprs ▷ Amount of Gaussians in GM for approximation;
noise variance; no noise representations reprs← [] ▷ Generate noisy samples with corresponding variance for all nr in nonoise-reprs do
for i← 1, n do ϵ← noisep reprs← reprs+ nr ∗ ϵ
end for end for points← nonoise-reprs[: GMM-MEANS] ▷ Create a GMM on restricted amount of points for faster computation d← [] for all p in points do
d← d+ Gaussian(p, σ ∗ |p|) end for gmm← MixtureModel(d) lp← [] ▷ Get estimates of log-probabilities from GMM for noisy samples for all r in reprs do
lp← lp+ gmm.log probability(r) end for h(z)← mean(lp) h(z|x)← 0 ▷ Compute conditional entropy using closed form formula for i← 1, dim(reprs[0]) do ▷ For each dimension of the representation
h(z|x)← h(z|x) + mean(ln( √ 2πeσ|nonoise-reprs[:, i]|)) ▷ Use no noise representations here, each dimension separately end for I(x, z)← h(z)− h(z|x) ▷ Obtain final estimate for the MI
A.7 EVALUATION OF ESTIMATOR
Fig. 8 shows upper bounds and estimation of h(Z) with a higher noise than in the Fig. 3. Larger noise increases the gap between the Gaussian entropy based upper bound and the mixture based estimation as expected.
In Fig. 9 we see convergence of the MC estimate for h(Z|X) under larger noise. As expected larger noise variance results in smaller MI values (Fig. 10), while the trend observed when changing dimensionality stays the same.
A.8 INFORMATION PLANE ANALYSIS
Note, that in the experiments we analyze IPs on the training samples and test samples separately. In order to obtain a valid sample of hidden representations for the MI estimation during inference, we apply MC-Dropout, as opposed to the usual way of performing inference with dropout being turned off. According to Srivastava et al. (2014) this is the theoretically sound way to obtain predictions, while turning off dropout and re-scaling weights results in an approximationthat allows for faster computation.
In Fig. 11, Fig.12, and Fig. 13 we provide IPs built on the test set of the corresponding datasets (MNIST, MNIST, and CIFAR10).
In the Fig. 14 we provide additional IPs for the binning estimator with varying amount of bins used for MI estimation. We report the results for the fully-connected network trained on MNIST with Gaussian dropout variance 0.2.
In the Fig. 15 we show the IPs obtained for the same fully-connected network trained on MNIST with the variance of the Gaussian dropout set to 0.4. | 1. What is the focus and contribution of the paper regarding ill-posed problems in deterministic neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to other methods such as binning?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the methodology, such as the impact of dropout hyperparameters or the difference in estimated mutual information between dropout and binning methods? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper address the ill-posed problem of IB of deterministic neural networks. Motivated by the success of stochastic neural networks, the authors proposed using the dropout technique to define and estimate the mutual information in neural networks. The authors showed that MI is bounded when the dropout uses continuous noise. Then they conducted numerical experiments of MI and studied the IP of neural network models.
Strengths And Weaknesses
Strength
Compared to stochastic neural networks, using the dropout is more practical and realistic.
The estimation procedure of MI using the dropout seems easy and worked well compared to binning approaches.
Weakness
Compared to binning approaches, this method only applies to the dropout models with continuous noise. There are many networks that do not use dropout and continuous noise.
When using information dropout, its noise is learned simultaneously. This means the estimator of MI changes during the training. Thus, we use different MI estimators for different epochs when plotting the IP. So it is hard to understand the obtained IP curves or meaningless. Is my understanding correct?
I wonder how the dropout hyperparameters affect the estimation of MI and the IP. For example. The number of dropout layers or the magnitude of the noise should have a large impact on them, but there is no discussion and no numerical experiments about them.
Why I(Y; Z) are so different between dropout and binning methods, as shown in Figures 1, 5, 11, 12? As far as I understand, I(Y; Z) is estimated by subtracting H(Y) from the cross entropy as mentioned in Sec 4. This is the same procedure for the dropout and binning approaches.
Clarity, Quality, Novelty And Reproducibility
Clarity
This is a minor point. The colors of lines in Figure 4 are similar and hard to recognize.
Novelty
The idea of using dropout is interesting, but it is the straightforward extension of the idea of stochastic neural networks. As far as I understood the technique is not quite novel. |
ICLR | Title
MISSO: Minimization by Incremental Stochastic Surrogate Optimization for Large Scale Nonconvex and Nonsmooth Problems
Abstract
Many constrained, nonconvex and nonsmooth optimization problems can be tackled using the majorization-minimization (MM) method which alternates between constructing a surrogate function which upper bounds the objective function, and then minimizing this surrogate. For problems which minimize a finite sum of functions, a stochastic version of the MM method selects a batch of functions at random at each iteration and optimizes the accumulated surrogate. However, in many cases of interest such as variational inference for latent variable models, the surrogate functions are expressed as an expectation. In this contribution, we propose a doubly stochastic MM method based on Monte Carlo approximation of these stochastic surrogates. We establish asymptotic and non-asymptotic convergence of our scheme in a constrained, nonconvex, nonsmooth optimization setting. We apply our new framework for inference of logistic regression model with missing data and for variational inference of Bayesian variants of LeNet-5 and Resnet-18 on respectively the MNIST and CIFAR-10 datasets.
1 INTRODUCTION
We consider the constrained minimization problem of a finite sum of functions:
min θ∈Θ L(θ) := 1 n n∑ i=1 Li(θ) , (1)
where Θ is a convex, compact, and closed subset of Rp, and for any i ∈ J1, nK, the function Li : Rp → R is bounded from below and is (possibly) nonconvex and nonsmooth. To tackle the optimization problem (1), a popular approach is to apply the majorization-minimization (MM) method which iteratively minimizes a majorizing surrogate function. A large number of existing procedures fall into this general framework, for instance gradient-based or proximal methods or the Expectation-Maximization (EM) algorithm (McLachlan & Krishnan, 2008) and some variational Bayes inference techniques (Jordan et al., 1999); see for example (Razaviyayn et al., 2013) and (Lange, 2016) and the references therein. When the number of terms n in (1) is large, the vanilla MM method may be intractable because it requires to construct a surrogate function for all the n terms Li at each iteration. Here, a remedy is to apply the Minimization by Incremental Surrogate Optimization (MISO) method proposed by Mairal (2015), where the surrogate functions are updated incrementally. The MISO method can be interpreted as a combination of MM and ideas which have emerged for variance reduction in stochastic gradient methods (Schmidt et al., 2017). An extended analysis of MISO has been proposed in (Qian et al., 2019).
The success of the MISO method rests upon the efficient minimization of surrogates such as convex functions, see (Mairal, 2015, Section 2.3). A notable application of MISO-like algorithms is described in (Mensch et al., 2017) where the authors builds upon the stochastic majorizationminimization framework of Mairal (2015) to introduce a method for sparse matrix factorization. Yet, in many applications of interest, the natural surrogate functions are intractable, yet they are defined as expectation of tractable functions. For instance, this is the case for inference in latent variable models via maximum likelihood (McLachlan & Krishnan, 2008). Another application is
variational inference (Ghahramani, 2015), in which the goal is to approximate the posterior distribution of parameters given the observations; see for example (Neal, 2012; Blundell et al., 2015; Polson et al., 2017; Rezende et al., 2014; Li & Gal, 2017).
This paper fills the gap in the literature by proposing a method called Minimization by Incremental Stochastic Surrogate Optimization (MISSO), designed for the nonconvex and nonsmooth finite sum optimization, with a finite-time convergence guarantee. Our work aims at formulating a generic class of incremental stochastic surrogate methods for nonconvex optimization and building the theory to understand its behavior. In particular, we provide convergence guarantees for stochastic EM and Variational Inference-type methods, under mild conditions. In summary, our contributions are:
• we propose a unifying framework of analysis for incremental stochastic surrogate optimization when the surrogates are defined as expectations of tractable functions. The proposed MISSO method is built on the Monte Carlo integration of the intractable surrogate function, i.e., a doubly stochastic surrogate optimization scheme.
• we present an incremental update of the commonly used variational inference and Monte Carlo EM methods as special cases of our newly introduced framework. The analysis of those two algorithms is thus conducted under this unifying framework of analysis.
• we establish both asymptotic and non-asymptotic convergence for the MISSO method. In particular, the MISSO method converges almost surely to a stationary point and in O(n/ ) iterations to an -stationary point, see Theorem 1.
• in essence, we relax the class of surrogate functions used in MISO (Mairal, 2015) and allow for intractable surrogates that can only be evaluated by Monte-Carlo approximations. Working at the crossroads of Optimization and Sampling constitutes what we believe to be the novelty and the technicality of our framework and theoretical results.
In Section 2, we review the techniques for incremental minimization of finite sum functions based on the MM principle; specifically, we review the MISO method (Mairal, 2015), and present a class of surrogate functions expressed as an expectation over a latent space. The MISSO method is then introduced for the latter class of intractable surrogate functions requiring approximation. In Section 3, we provide the asymptotic and non-asymptotic convergence analysis for the MISSO method (and of the MISO (Mairal, 2015) one as a special case). Section 4 presents numerical applications including parameter inference for logistic regression with missing data and variational inference for two types of Bayesian neural networks. The proofs of theoretical results are reported as Supplement.
Notations. We denote J1, nK = {1, . . . , n}. Unless otherwise specified, ‖ · ‖ denotes the standard Euclidean norm and 〈· | ·〉 is the inner product in the Euclidean space. For any function f : Θ→ R, f ′(θ,d) is the directional derivative of f at θ along the direction d, i.e.,
f ′(θ,d) := lim t→0+ f(θ + td)− f(θ) t . (2)
The directional derivative is assumed to exist for the functions introduced throughout this paper.
2 INCREMENTAL MINIMIZATION OF FINITE SUM NONCONVEX FUNCTIONS
The objective function in (1) is composed of a finite sum of possibly nonsmooth and nonconvex functions. A popular approach here is to apply the MM method, which tackles (1) through alternating between two steps — (i) minimizing a surrogate function which upper bounds the original objective function; and (ii) updating the surrogate function to tighten the upper bound.
As mentioned in the introduction, the MISO method (Mairal, 2015) is developed as an iterative scheme that only updates the surrogate functions partially at each iteration. Formally, for any i ∈ J1, nK, we consider a surrogate function L̂i(θ;θ) which satisfies the assumptions (H1, H2): H1. For all i ∈ J1, nK and θ ∈ Θ, L̂i(θ;θ) is convex w.r.t. θ, and it holds
L̂i(θ;θ) ≥ Li(θ), ∀ θ ∈ Θ , (3)
where the equality holds when θ = θ.
H2. For any θi ∈ Θ, i ∈ J1, nK and some > 0, the difference function ê(θ; {θi}ni=1) := 1 n ∑n i=1 L̂i(θ;θi) − L(θ) is defined for all θ ∈ Θ and differentiable for all θ ∈ Θ, where Θ = {θ ∈ Rd, infθ′∈Θ ‖θ − θ′‖ < } is an -neighborhood set of Θ. Moreover, for some constant L, the gradient satisfies
‖∇ê(θ; {θi}ni=1)‖2 ≤ 2Lê(θ; {θi}ni=1), ∀ θ ∈ Θ . (4)
Algorithm 1 The MISO method (Mairal, 2015). 1: Input: initialization θ(0). 2: Initialize the surrogate function as A0i (θ) := L̂i(θ;θ(0)), i ∈ J1, nK.
3: for k = 0, 1, ...,Kmax do 4: Pick ik uniformly from J1, nK. 5: Update Ak+1i (θ) as:
Ak+1i (θ) = { L̂i(θ;θ(k)), if i = ik Aki (θ), otherwise.
6: Set θ(k+1) ∈ arg min θ∈Θ 1 n
∑n i=1A k+1 i (θ).
7: end for
We remark that H1 is a common assumption used for surrogate functions, see (Mairal, 2015, Section 2.3). H2 can be satisfied when the difference function ê(θ; {θi}ni=1) is L-smooth, i.e., ê is differentiable on Θ and its gradient ∇ê is LLipschitz, ∀θ ∈ Θ. H2 can be implied by applying (Razaviyayn et al., 2013, Proposition 1).
The inequality (3) implies L̂i(θ;θ) ≥ Li(θ) > −∞ for any θ ∈ Θ. The MISO method is an incremental version of the MM method, as summarized by Algorithm 1, which shows that the MISO method maintains an iteratively updated set of upper-bounding surrogate functions {Aki (θ)}ni=1 and updates the iterate via minimizing the average of the surrogate functions.
Particularly, only one out of the n surrogate functions is updated at each iteration [cf. Line 5] and the sum function 1n ∑n i=1A k+1 i (θ) is designed to be ‘easy to optimize’, which, for example, can be a sum of quadratic functions. As such, the MISO method is suitable for large-scale optimization as the computation cost per iteration is independent of n. Under H1, H2, it was shown that the MISO method converges almost surely to a stationary point of (1) (Mairal, 2015, Prop. 3.1).
We now consider the case when the surrogate functions L̂i(θ;θ) are intractable. Let Z be a measurable set, pi : Z × Θ → R+ a probability density function, ri : Θ × Θ × Z → R a measurable function and µi a σ-finite measure. We consider surrogate functions which satisfy H1, H2 and that can be expressed as an expectation, i.e.:
L̂i(θ;θ) := ∫ Z ri(θ;θ, zi)pi(zi;θ)µi(dzi) ∀ (θ,θ) ∈ Θ×Θ . (5)
Plugging (5) into the MISO method is not feasible since the update step in Step 6 involves a minimization of an expectation. Several motivating examples of (1) are given in Section 2.
In this paper, we propose the Minimization by Incremental Stochastic Surrogate Optimization (MISSO) method which replaces the expectation in (5) by Monte Carlo integration and then optimizes the objective function (1) in an incremental manner. Denote by M ∈ N the Monte Carlo batch size and let {zm ∈ Z}Mm=1 be a set of samples. These samples can be drawn (Case 1) i.i.d. from the distribution pi(·;θ) or (Case 2) from a Markov chain with stationary distribution pi(·;θ); see Section 3 for illustrations. To this end, we define the stochastic surrogate as follows:
L̃i(θ;θ, {zm}Mm=1) := 1
M M∑ m=1 ri(θ;θ, zm) , (6)
and we summarize the proposed MISSO method in Algorithm 2. Compared to the MISO method, there is a crucial difference in that the MISSO method involves two types of randomness. The first level of randomness comes from the selection of ik in Line 5. The second level of randomness stems from the set of Monte Carlo approximated functions Ãki (θ) used in lieu of Aki (θ) in Line 6 when optimizing for the next iterate θ(k). We now discuss two applications of the MISSO method.
Example 1: Maximum Likelihood Estimation for Latent Variable Model. Latent variable models (Bishop, 2006) are constructed by introducing unobserved (latent) variables which help explain the observed data. We consider n independent observations ((yi, zi), i ∈ JnK) where yi is observed and zi is latent. In this incomplete data framework, define {fi(zi,θ),θ ∈ Θ} to be the complete
Algorithm 2 The MISSO method. 1: Input: initialization θ(0); a sequence of non-negative numbers {M(k)}∞k=0. 2: For all i ∈ J1, nK, draw M(0) Monte Carlo samples with the stationary distribution pi(·;θ(0)). 3: Initialize the surrogate function as
Ã0i (θ) := L̃i(θ;θ(0), {z (0) i,m} M(0) m=1), i ∈ J1, nK .
4: for k = 0, 1, ...,Kmax do 5: Pick a function index ik uniformly on J1, nK. 6: Draw M(k) Monte Carlo samples with the stationary distribution pi(·;θ(k)). 7: Update the individual surrogate functions recursively as:
Ãk+1i (θ) =
{ L̃i(θ;θ(k), {z(k)i,m} M(k) m=1), if i = ik
Ãki (θ), otherwise.
8: Set θ(k+1) ∈ arg minθ∈Θ L̃(k+1)(θ) := 1n ∑n i=1 Ã k+1 i (θ). 9: end for
data likelihood models, i.e., the joint likelihood of the observations and latent variables. Let
gi(θ) := ∫ Z fi(zi,θ)µi(dzi), i ∈ J1, nK, θ ∈ Θ
denote the incomplete data likelihood, i.e., the marginal likelihood of the observations yi. For ease of notations, the dependence on the observations is made implicit. The maximum likelihood (ML) estimation problem sets the individual objective function Li(θ) to be the i-th negated incomplete data log-likelihood Li(θ) := − log gi(θ). Assume, without loss of generality, that gi(θ) 6= 0 for all θ ∈ Θ. We define by pi(zi,θ) := fi(zi,θ)/gi(θ) the conditional distribution of the latent variable zi given the observations yi. A surrogate function L̂i(θ;θ) satisfying H1 can be obtained through writing fi(zi,θ) = fi(zi,θ)pi(zi,θ)pi(zi,θ) and applying the Jensen inequality:
L̂i(θ;θ) = ∫ Z log ( pi(zi,θ)/fi(zi,θ) )︸ ︷︷ ︸ =ri(θ;θ,zi) pi(zi,θ)µi(dzi) . (7)
We note that H2 can also be verified for common distribution models. We can apply the MISSO method following the above specification of ri(θ;θ, zi) and pi(zi,θ).
Example 2: Variational Inference. Let ((xi, yi), i ∈ J1, nK) be i.i.d. input-output pairs and w ∈ W ⊆ Rd be a latent variable. When conditioned on the input data x = (xi, i ∈ J1, nK), the joint distribution of y = (yi, i ∈ J1, nK) and w is given by:
p(y, w|x) = π(w) ∏n i=1 p(yi|xi, w) . (8)
Our goal is to compute the posterior distribution p(w|y, x). In most cases, the posterior distribution p(w|y, x) is intractable and is approximated using a family of parametric distributions, {q(w,θ),θ ∈ Θ}. The variational inference (VI) problem (Blei et al., 2017) boils down to minimizing the Kullback-Leibler (KL) divergence between q(w,θ) and the posterior distribution p(w|y, x):
min θ∈Θ
L(θ) := KL (q(w;θ) ||p(w|y, x)) := Eq(w;θ) [ log ( q(w;θ)/p(w|y, x) )] . (9)
Using (8), we decompose L(θ) = n−1 ∑n i=1 Li(θ) + const. where:
Li(θ) := −Eq(w;θ) [ log p(yi|xi, w) ] + 1
n Eq(w;θ)
[ log q(w;θ)/π(w) ] := ri(θ) + d(θ) . (10)
Directly optimizing the finite sum objective function in (9) can be difficult. First, with n 1, evaluating the objective function L(θ) requires a full pass over the entire dataset. Second, for some
complex models, the expectations in (10) can be intractable even if we assume a simple parametric model for q(w;θ). Assume that Li is L-smooth. We apply the MISSO method with a quadratic surrogate function defined as:
L̂i(θ;θ) := Li(θ) + 〈 ∇θLi(θ) |θ − θ 〉 + L
2 ‖θ − θ‖2, (θ,θ) ∈ Θ2 . (11)
It is easily checked that the quadratic function L̂i(θ;θ) satisfies H1, H2. To compute the gradient ∇Li(θ), we apply the re-parametrization technique suggested in (Paisley et al., 2012; Kingma & Welling, 2014; Blundell et al., 2015). Let t : Rd×Θ 7→ Rd be a differentiable function w.r.t. θ ∈ Θ which is designed such that the law of w = t(z,θ) is q(·,θ), where z ∼ Nd(0, I). By (Blundell et al., 2015, Proposition 1), the gradient of −ri(·) in (10) is:
∇θEq(w;θ) [ log p(yi|xi, w) ] = Ez∼Nd(0,I) [ Jtθ(z,θ)∇w log p(yi|xi, w) ∣∣ w=t(z,θ) ] , (12)
where for each z ∈ Rd, Jtθ(z,θ) is the Jacobian of the function t(z, ·) with respect to θ evaluated at θ. In addition, for most cases, the term∇d(θ) can be evaluated in closed form as the gradient of the KL between the prior distribution π(·) and the variational candidate q(·,θ).
ri(θ;θ, z) := 〈 ∇θd(θ)− Jtθ(z,θ)∇w log p(yi|xi, w) ∣∣ w=t(z,θ) |θ − θ 〉 + L 2 ‖θ − θ‖2 . (13)
Finally, using (11) and (13), the surrogate function (6) is given by L̃i(θ;θ, {zm}Mm=1) := M−1 ∑M m=1 ri(θ;θ, zm) where {zm}Mm=1 are i.i.d samples drawn from N (0, I).
3 CONVERGENCE ANALYSIS
We now provide asymptotic and non-asymptotic convergence results of our method. Assume:
H3. For all i ∈ J1, nK, θ ∈ Θ, zi ∈ Z, ri(·;θ, zi) is convex on Θ and is lower bounded.
We are particularly interested in the constrained optimization setting where Θ is a bounded set. To this end, we control the supremum norm of the MC approximation, introduced in (6), as: H4. For the samples {zi,m}Mm=1, there exist finite constants Cr and Cgr such that
Cr := sup θ∈Θ sup M>0 1√ M Eθ [ sup θ∈Θ ∣∣∣∣∣ M∑ m=1 { ri(θ;θ, zi,m)− L̂i(θ;θ) }∣∣∣∣∣ ]
Cgr := sup θ∈Θ sup M>0
√ MEθ sup θ∈Θ ∣∣∣∣∣ 1M M∑ m=1 L̂′i(θ,θ − θ;θ)− r′i(θ,θ − θ;θ, zi,m) ‖θ − θ‖ ∣∣∣∣∣ 2
for all i ∈ J1, nK, and we denoted by Eθ[·] the expectation w.r.t. a Markov chain {zi,m}Mm=1 with initial distribution ξi(·;θ), transition kernel Πi,θ, and stationary distribution pi(·;θ).
Some intuitions behind the controlling terms: It is common in statistical and optimization problems, to deal with the manipulation and the control of random variables indexed by sets with an infinite number of elements. Here, the controlled random variable is an image of a continuous function defined as ri(θ;θ, zi,m) − L̂i(θ;θ) for all z ∈ Z and for fixed (θ,θ) ∈ Θ2. To characterize such control, we will have recourse to the notion of metric entropy (or bracketing number) as developed in (Van der Vaart, 2000; Vershynin, 2018; Wainwright, 2019). A collection of results from those references gives intuition behind our assumption H4, which is classical in empirical processes. In (Vershynin, 2018, Theorem 8.2.3), the authors recall the uniform law of large numbers:
E [ sup f∈F ∣∣∣∣∣ 1M M∑ i=1 f (zi,m)− E[f(zi)] ∣∣∣∣∣ ] ≤ CL√ M for all zi,m, i ∈ J1,MK ,
where F is a class of L-Lipschitz functions. Moreover, in (Vershynin, 2018, Theorem 8.1.3 ) and (Wainwright, 2019, Theorem 5.22), the application of the Dudley inequality yields:
E[sup f∈F |Xf −X0|] ≤ 1√ M ∫ 1 0 √ logN (F , ‖ · ‖∞, ε)dε ,
whereN (F , ‖ · ‖∞, ε) is the bracketing number and denotes the level of approximation (the bracketing number goes to infinity when → 0). Finally, in (Van der Vaart, 2000, p.271, Example), N (F , ‖ · ‖∞, ε) is bounded from above for a class of parametric functions F = fθ : θ ∈ Θ:
N (F , ‖ · ‖∞, ε) ≤ K ( diam Θ
ε
)d , for all 0 < ε < diam Θ .
The authors acknowledge that those bounds are a dramatic manifestation of the curse of dimensionality happening when sampling is needed. Nevertheless, the dependence on the dimension highly depends on the class of surrogate functions F used in our scheme, as smaller bounds on these controlling terms can be derived for simpler class of functions, such as quadratic functions.
Stationarity measure. As problem (1) is a constrained optimization task, we consider the following stationarity measure:
g(θ) := inf θ∈Θ L′(θ,θ − θ) ‖θ − θ‖ and g(θ) = g+(θ)− g−(θ) , (14)
where g+(θ) := max{0, g(θ)}, g−(θ) := −min{0, g(θ)} denote the positive and negative part of g(θ), respectively. Note that θ is a stationary point if and only if g−(θ) = 0 (Fletcher et al., 2002). Furthermore, suppose that the sequence {θ(k)}k≥0 has a limit point θ that is a stationary point, then one has limk→∞ g−(θ(k)) = 0. Thus, the sequence {θ(k)}k≥0 is said to satisfy an asymptotic stationary point condition. This is equivalent to (Mairal, 2015, Definition 2.4).
To facilitate our analysis, we define τki as the iteration index where the i-th function is last accessed in the MISSO method prior to iteration k, τk+1ik = k for instance. We define:
L̂(k)(θ) := 1n ∑n i=1L̂i(θ;θ (τki )), ê(k)(θ) := L̂(k)(θ)− L(θ), M (k) := Kmax−1∑ k=0 M −1/2 (k) . (15)
We first establish a non-asymptotic convergence rate for the MISSO method:
Theorem 1. Under H1-H4. For any Kmax ∈ N, let K be an independent discrete r.v. drawn uniformly from {0, ...,Kmax − 1} and define the following quantity:
∆(Kmax) := 2nLE[L̃ (0)(θ(0))− L̃(Kmax)(θ(Kmax))] + 4LCrM (k) .
Then we have following non-asymptotic bounds:
E [ ‖∇ê(K)(θ(K))‖2 ] ≤ ∆(Kmax)
Kmax and E[g−(θ(K))] ≤
√ ∆(Kmax)
Kmax + Cgr Kmax M (k) . (16)
Note that ∆(Kmax) is finite for any Kmax ∈ N. Iteration Complexity of MISSO. As expected, the MISSO method converges to a stationary point of (1) asymptotically and at a sublinear rate E[g(K)− ] ≤ O( √ ∆(Kmax)/Kmax). In other terms, MISSO requires O(nL/ ) iterations to reach an -stationary point when the suboptimality condition, that characterizes stationarity, is E [ ‖g−(θ(K))‖2 ] . Note that this stationarity criterion are similar to the
usual quantity used in stochastic nonconvex optimization, i.e., E [ ‖∇L(θ(K))‖2 ] . In fact, when the
optimization problem (1) is unconstrained, i.e., Θ = Rp, then E [ g(θ(K)) ] = E [ ∇L(θ(K)) ] .
Sample Complexity of MISSO. Regarding the sample complexity of our method, setting M(k) = k2/n2, as a non-decreasing sequence of integers satisfying ∑∞ k=0M −1/2 (k) < ∞, in order to keep
∆(Kmax) nL, then the MISSO method requires ∑nL/ k=0 k
2/n2 = nL3/ 3 samples to reach an -stationary point.
Furthermore,we remark that the MISO method can be analyzed in Theorem 1 as a special case of the MISSO method satisfying Cr = Cgr = 0. In this case, while the asymptotic convergence is well known from (Mairal, 2015) [cf. H4], Eq. (16) gives a non-asymptotic rate of E[g(K)− ] ≤
O( √ nL/Kmax) which is new to our best knowledge. Next, we show that under an additional assumption on the sequence of batch size M(k), the MISSO method converges almost surely to a stationary point:
Theorem 2. Under H1-H4. In addition, assume that {M(k)}k≥0 is a non-decreasing sequence of integers which satisfies ∑∞ k=0M −1/2 (k) <∞. Then:
1. the negative part of the stationarity measure converges a.s. to zero, i.e., lim k→∞ g−(θ (k))
a.s. = 0.
2. the objective value L(θ(k)) converges a.s. to a finite number L, i.e., limk→∞ L(θ(k)) a.s. = L.
In particular, the first result above shows that the sequence {θ(k)}k≥0 produced by the MISSO method satisfies an asymptotic stationary point condition.
4 NUMERICAL EXPERIMENTS
4.1 BINARY LOGISTIC REGRESSION WITH MISSING VALUES
This application follows Example 1 described in Section 2. We consider a binary regression setup, ((yi, zi), i ∈ JnK) where yi ∈ {0, 1} is a binary response and zi = (zi,j ∈ R, j ∈ JpK) is a covariate vector. The vector of covariates zi = [zi,mis, zi,obs] is not fully observed where we denote by zi,mis the missing values and zi,obs the observed covariate. It is assumed that (zi, i ∈ JnK) are i.i.d. and marginally distributed according toN (β,Ω) where β ∈ Rp and Ω is a positive definite p×pmatrix. We define the conditional distribution of the observations yi given zi = (zi,mis, zi,obs) as:
pi(yi|zi) = S(δ>z̄i)yi ( 1− S(δ>z̄i) )1−yi , (17)
where for u ∈ R, S(u) = 1/(1+e−u), δ = (δ0, · · · , δp) are the logistic parameters and z̄i = (1, zi). Here, θ = (δ,β,Ω) is the parameter to estimate. For i ∈ JnK, the complete log-likelihood reads: log fi(zi,mis,θ) ∝ yiδ>z̄i − log ( 1 + exp(δ>z̄i) ) − 1
2 log(|Ω|) + 1 2 Tr ( Ω−1(zi − β)(zi − β)> ) .
Fitting a logistic regression model on the TraumaBase dataset: We apply the MISSO method to fit a logistic regression model on the TraumaBase (http://traumabase.eu) dataset, which consists of data collected from 15 trauma centers in France, covering measurements on patients from the initial to last stage of trauma. This dataset includes information from the first stage of the trauma, namely initial observations on the patient’s accident site to the last stage being intense care at the hospital and counts more than 200 variables measured for more than 7 000 patients. Since the dataset considered is heterogeneous – coming from multiple sources with frequently missed entries – we apply the latent data model described in (17) to predict the risk of a severe hemorrhage which is one of the main cause of death after a major trauma.
Similar to (Jiang et al., 2018), we select p = 16 influential quantitative measurements, on n = 6384 patients. For the Monte Carlo sampling of zi,mis, required while running MISSO, we run a Metropolis-Hastings algorithm with the target distribution p(·|zi,obs, yi;θ(k)).
We compare in Figure 1 the convergence behavior of the estimated parameters δ and β using SAEM (Delyon et al., 1999) (with stepsize γk = 1/kα where α = 0.6 after tuning), MCEM (Wei
& Tanner, 1990) and the proposed MISSO method. For the MISSO method, we set the batch size to M(k) = 10 + k2 and we examine with selecting different number of functions in Line 5 in the method – the default settings with 1 (MISSO), 10% (MISSO10) and 50% (MISSO50) minibatches per iteration. From Figure 1, the MISSO method converges to a static value with less number of epochs than the MCEM, SAEM methods. It is worth noting that the difference among the MISSO runs for different number of selected functions demonstrates a variance-cost tradeoff. Though wall clock times are similar for all methods, they are reported in the appendix for completeness.
4.2 TRAINING BAYESIAN CNN USING MISSO
This application follows Example 2 described in Section 2. We use variational inference and the ELBO loss (10) to fit Bayesian Neural Networks on different datasets. At iteration k, minimizing the sum of stochastic surrogates defined as in (6) and (13) yields the following MISSO update — step (i) pick a function index ik uniformly on JnK; step (ii) sample a Monte Carlo batch {z(k)m } M(k) m=1 from N (0, I); and step (iii) update the parameters, with w̃ = t(θ(k−1), z(k)m ), as
µ (k) ` = µ̂ (τk) ` −
γ
n n∑ i=1 δ̂ (k) µ`,i ,
δ̂ (k) µ`,ik = − 1 M(k) M(k)∑ m=1 ∇w log p(yik |xik , w̃) +∇µ`d(θ(k−1)) ,
where µ̂(τ k)
` = 1 n ∑n i=1 µ (τki ) ` and d(θ) = n −1∑d `=1 ( − log(σ) + (σ2 + µ2`)/2− 1/2 ) .
Bayesian LeNet-5 on MNIST (LeCun et al., 1998): We apply the MISSO method to fit a Bayesian variant of LeNet-5 (LeCun et al., 1998). We train this network on the MNIST dataset (LeCun, 1998). The training set is composed of n = 55 000 handwritten digits, 28 × 28 images. Each image is labelled with its corresponding number (from zero to nine). Under the prior distribution π, see (8), the weights are assumed independent and identically distributed according to N (0, 1). We also assume that q(·;θ) ≡ N (µ, σ2I). The variational posterior parameters are thus θ = (µ, σ) where µ = (µ`, ` ∈ JdK) where d is the number of weights in the neural network. We use the re-parametrization as w = t(θ, z) = µ+ σz with z ∼ N (0, I). Bayesian ResNet-18 (He et al., 2016) on CIFAR-10 (Krizhevsky et al., 2012): We train here the Bayesian variant of the ResNet-18 neural network introduced in (He et al., 2016) on CIFAR-10. The latter dataset is composed of n = 60 000 handwritten digits, 32 × 32 colour images in 10 classes, with 6 000 images per class. As in the previous example, the weights are assumed independent and identically distributed according toN (0, I). Standard hyperparameters values found in the literature, such as the annealing constant or the number of MC samples, were used for the benchmark methods. For efficiency purpose and lower variance, the Flipout estimator (Wen et al., 2018) is used.
Experiment Results: We compare the convergence of the Monte Carlo variants of the following state of the art optimization algorithms — the ADAM (Kingma & Ba, 2015), the Momentum (Sutskever et al., 2013) and the SAG (Schmidt et al., 2017) methods versus the Bayes by Backprop (BBB) (Blundell et al., 2015) and our proposed MISSO method. For all these methods, the loss function (10) and its gradients were computed by Monte Carlo integration based on the reparametrization described above. The mini-batch of indices and MC samples are respectively set to 128 and M(k) = k. The learning rates are set to 10−3 for LeNet-5 and 10−4 for Resnet-18.
Figure 2(a) shows the convergence of the negated evidence lower bound against the number of passes over data (one pass represents an epoch). As observed, the proposed MISSO method outperforms Bayes by Backprop and Momentum, while similar convergence rates are observed with the MISSO, ADAM and SAG methods for our experiment on MNIST dataset using a Bayesian variant of LeNet5. On the other hand, the experiment conducted on CIFAR-10 (Figure 2(b)) using a much larger network, i.e., a Bayesian variant of ResNet-18 showcases the need of a well-tuned adaptive methods to reach lower training loss (and also faster). Our MISSO method is similar to the Monte Carlo variant of ADAM but slower than Adagrad optimizer. Recall that the purpose of this paper is to provide a common class of optimizers, such as VI, in order to study their convergence behaviors, and not to introduce a novel method outperforming the baselines methods. We report wall clock times for all methods in the appendix for completeness.
5 CONCLUSION
We present a unifying framework for minimizing a nonconvex and nonsmooth finite-sum objective function using incremental surrogates when the latter functions are expressed as an expectation and are intractable. Our approach covers a large class of nonconvex applications in machine learning such as logistic regression with missing values and variational inference. We provide both finitetime and asymptotic guarantees of our incremental stochastic surrogate optimization technique and illustrate our findings training a binary logistic regression with missing covariates to predict hemorrhagic shock and Bayesian variants of two Convolutional Neural Networks on benchmark datasets.
A PROOFS OF THE THEORETICAL RESULTS
A.1 PROOF OF THEOREM 1
Theorem. Under H1-H4. For anyKmax ∈ N, letK be an independent discrete r.v. drawn uniformly from {0, ...,Kmax − 1} and define the following quantity:
∆(Kmax) := 2nLE[L̃ (0)(θ(0))− L̃(Kmax)(θ(Kmax))] + 4LCrM (k) .
Then we have following non-asymptotic bounds:
E [ ‖∇ê(K)(θ(K))‖2 ] ≤ ∆(Kmax)
Kmax and E[g−(θ(K))] ≤
√ ∆(Kmax)
Kmax + Cgr Kmax M (k) .
Proof We begin by recalling the definition
L̃(k)(θ) := 1 n n∑ i=1 Ãki (θ) .
Notice that
L̃(k+1)(θ) = 1 n n∑ i=1 L̃i(θ;θ(τ k+1 i ), {z(τ k+1 i ) i,m } M (τ k+1 i ) m=1 )
= L̃(k)(θ) + 1 n
( L̃ik(θ;θ(k), {z (k) ik,m }M(k)m=1)− L̃ik(θ;θ (τkik ), {z (τkik ) ik,m } M (τk ik ) m=1 ) ) .
Furthermore, we recall that L̂(k)(θ) := 1n ∑n i=1L̂i(θ;θ (τki )), ê(k)(θ) := L̂(k)(θ)− L(θ) .
Due to H2, we have ‖∇ê(k)(θ(k))‖2 ≤ 2Lê(k)(θ(k)) . (18)
To prove the first bound in (16), using the optimality of θ(k+1), one has
L̃(k+1)(θ(k+1)) ≤ L̃(k+1)(θ(k))
= L̃(k)(θ(k)) + 1n ( L̃ik(θ(k);θ(k), {z (k) ik,m }M(k)m=1)− L̃ik(θ(k);θ (τkik ), {z (τkik ) ik,m } M (τk ik ) m=1 ) ) .
(19)
Let Fk be the filtration of random variables up to iteration k, i.e., {i`−1, {z(`−1)i`−1,m} M(`−1) m=1 ,θ
(`)}k`=1. We observe that the conditional expectation evaluates to
Eik [ E [ L̃ik(θ(k);θ(k), {z (k) ik,m }M(k)m=1)|Fk, ik ] |Fk ]
= L(θ(k)) + Eik [ E [ 1 M(k) M(k)∑ m=1 rik(θ (k);θ(k), z (k) ik,m )− L̂ik(θ(k);θ(k))|Fk, ik ] |Fk ] ≤ L(θ(k)) + Cr√ M(k) ,
where the last inequality is due to H4. Moreover,
E [ L̃ik(θ(k);θ (τkik ), {z (τkik ) ik,m } M (τk ik ) m=1 )|Fk ] = 1
n n∑ i=1 L̃i(θ(k);θ(τ k i ), {z(τ k i ) i,m } M (τk i ) m=1 ) = L̃(k)(θ(k)) .
Taking the conditional expectations on both sides of (19) and re-arranging terms give:
L̃(k)(θ(k))− L(θ(k)) ≤ nE [ L̃(k)(θ(k))− L̃(k+1)(θ(k+1))|Fk ] +
Cr√ M(k) . (20)
Proceeding from (20), we observe the following lower bound for the left hand side
L̃(k)(θ(k))− L(θ(k)) (a)= L̃(k)(θ(k))− L̂(k)(θ(k)) + ê(k)(θ(k)) (b)
≥ L̃(k)(θ(k))− L̂(k)(θ(k)) + 1 2L ‖∇ê(k)(θ(k))‖2
= 1
n n∑ i=1 { 1 M(τki ) M (τk i )∑ m=1 ri(θ (k);θ(τ k i ), z (τki ) i,m )− L̂i(θ (k);θ(τ k i )) }
︸ ︷︷ ︸ :=−δ(k)(θ(k))
+ 1
2L ‖∇ê(k)(θ(k))‖2 ,
where (a) is due to ê(k)(θ(k)) = 0 [cf. H1], (b) is due to (18) and we have defined the summation in the last equality as −δ(k)(θ(k)). Substituting the above into (20) yields
‖∇ê(k)(θ(k))‖2
2L ≤ nE
[ L̃(k)(θ(k))− L̃(k+1)(θ(k+1))|Fk ] +
Cr√ M(k) + δ(k)(θ(k)) . (21)
Observe the following upper bound on the total expectations:
E [ δ(k)(θ(k)) ] ≤ E [ 1 n n∑ i=1 Cr√ M(τki ) ] ,
which is due to H4. It yields
E [ ‖∇ê(k)(θ(k))‖2 ] ≤ 2nLE [ L̃(k)(θ(k))− L̃(k+1)(θ(k+1)) ] +
2LCr√ M(k) + 1 n n∑ i=1 E [ 2LCr√
M(τki )
] .
Finally, for anyKmax ∈ N, we letK be a discrete r.v. that is uniformly drawn from {0, 1, ...,Kmax− 1}. Using H4 and taking total expectations lead to
E [ ‖∇ê(K)(θ(K))‖2 ] = 1
Kmax Kmax−1∑ k=0 E[‖∇ê(k)(θ(k))‖2]
≤ 2nLE[L̃ (0)(θ(0))− L̃(Kmax)(θ(Kmax))]
Kmax + 2LCr Kmax Kmax−1∑ k=0 E [ 1√ M(k) + 1 n n∑ i=1 1√ M(τki ) ] . (22)
For all i ∈ J1, nK, the index i is selected with a probability equal to 1n when conditioned independently on the past. We observe:
E[M−1/2 (τki ) ] = k∑ j=1 1 n ( 1− 1 n )j−1 M −1/2 (k−j) (23)
Taking the sum yields: Kmax−1∑ k=0 E[M−1/2 (τki ) ] = Kmax−1∑ k=0 k∑ j=1 1 n ( 1− 1 n )j−1 M −1/2 (k−j) = Kmax−1∑ k=0 k−1∑ l=0 1 n ( 1− 1 n )k−(l+1) M −1/2 (l)
= Kmax−1∑ l=0 M −1/2 (l) Kmax−1∑ k=l+1 1 n ( 1− 1 n )k−(l+1) ≤ Kmax−1∑ l=0 M −1/2 (l) ,
(24)
where the last inequality is due to upper bounding the geometric series. Plugging this back into (22) yields
E [ ‖∇ê(K)(θ(K))‖2 ] = 1
Kmax Kmax−1∑ k=0 E[‖∇ê(k)(θ(k))‖2]
≤ 2nLE[L̃ (0)(θ(0))− L̃(Kmax)(θ(Kmax))]
Kmax +
1
Kmax Kmax−1∑ k=0 4LCr√ M(k) = ∆(Kmax) Kmax .
This concludes our proof for the first inequality in (16).
To prove the second inequality of (16), we define the shorthand notations g(k) := g(θ(k)), g(k)− := −min{0, g(k)}, g(k)+ := max{0, g(k)}. We observe that
g(k) = inf θ∈Θ L′(θ(k),θ − θ(k)) ‖θ(k) − θ‖
= inf θ∈Θ
{ 1 n ∑n i=1 L̂ ′
i(θ (k),θ − θ(k);θ(τki )) ‖θ(k) − θ‖
− 〈 ∇ê(k)(θ(k)) |θ − θ(k) 〉 ‖θ(k) − θ‖ } ≥ −‖∇ê(k)(θ(k))‖+ inf
θ∈Θ
1 n ∑n i=1 L̂ ′
i(θ (k),θ − θ(k);θ(τki )) ‖θ(k) − θ‖ ,
where the last inequality is due to the Cauchy-Schwarz inequality and we have defined L̂′i(θ,d;θ(τ k i )) as the directional derivative of L̂i(·;θ(τ k i )) at θ along the direction d. Moreover, for any θ ∈ Θ, 1
n n∑ i=1 L̂ ′ i(θ (k),θ − θ(k);θ(τ k i ))
= L̃(k) ′ (θ(k),θ − θ(k))︸ ︷︷ ︸
≥0
−L̃(k) ′ (θ(k),θ − θ(k)) + 1
n n∑ i=1 L̂ ′ i(θ (k),θ − θ(k);θ(τ k i ))
≥ 1 n n∑ i=1 { L̂ ′ i(θ (k),θ − θ(k);θ(τ k i ))− 1 M(τki ) M (τk i )∑ m=1 r′i(θ (k),θ − θ(k);θ(τ k i ), z (τki ) i,m ) } ,
where the inequality is due to the optimality of θ(k) and the convexity of L̃(k)(θ) [cf. H3]. Denoting a scaled version of the above term as:
(k)(θ) :=
1 n ∑n i=1 { 1
M (τk i )
∑M(τk i )
m=1 r ′ i(θ
(k),θ − θ(k);θ(τki ), z(τ k i ) i,m )− L̂ ′ i(θ (k),θ − θ(k);θ(τki )) } ‖θ(k) − θ‖ .
We have g(k) ≥ −‖∇ê(k)(θ(k))‖+ inf
θ∈Θ (− (k)(θ)) ≥ −‖∇ê(k)(θ(k))‖ − sup θ∈Θ | (k)(θ)| . (25)
Since g(k) = g(k)+ − g (k) − and g (k) + g (k) − = 0, this implies
g (k) − ≤ ‖∇ê(k)(θ(k))‖+ sup θ∈Θ | (k)(θ)| . (26)
Consider the above inequality when k = K, i.e., the random index, and taking total expectations on both sides gives
E[g(K)− ] ≤ E[‖∇ê(K)(θ(K))‖] + E[sup θ∈Θ (K)(θ)] .
We note that ( E[‖∇ê(K)(θ(K))‖] )2 ≤ E[‖∇ê(K)(θ(K))‖2] ≤ ∆(Kmax)
Kmax ,
where the first inequality is due to the convexity of (·)2 and the Jensen’s inequality, and
E[sup θ∈Θ
(K)(θ)] = 1
Kmax Kmax∑ k=0 E[sup θ∈Θ (k)(θ)] (a) ≤ Cgr Kmax Kmax−1∑ k=0 E [ 1 n n∑ i=1 M −1/2 (τki ) ] (b) ≤ Cgr Kmax Kmax−1∑ k=0 M −1/2 (k) ,
where (a) is due to H4 and (b) is due to (24). This implies
E[g(K)− ] ≤
√ ∆(Kmax)
Kmax + Cgr Kmax Kmax−1∑ k=0 M −1/2 (k) ,
and concludes the proof of the theorem.
A.2 PROOF OF THEOREM 2
Theorem. Under H1-H4. In addition, assume that {M(k)}k≥0 is a non-decreasing sequence of integers which satisfies ∑∞ k=0M −1/2 (k) <∞. Then:
1. the negative part of the stationarity measure converges a.s. to zero, i.e., lim k→∞ g−(θ (k))
a.s. = 0.
2. the objective value L(θ(k)) converges a.s. to a finite number L, i.e., limk→∞ L(θ(k)) a.s. = L.
Proof We apply the following auxiliary lemma which proof can be found in Appendix A.3 for the readability of the current proof:
Lemma 1. Let (Vk)k≥0 be a non negative sequence of random variables such that E[V0] < ∞. Let (Xk)k≥0 a non negative sequence of random variables and (Ek)k≥0 be a sequence of random variables such that ∑∞ k=0 E[|Ek|] <∞. If for any k ≥ 1:
Vk ≤ Vk−1 −Xk−1 + Ek−1 (27)
then:
(i) for all k ≥ 0, E[Vk] <∞ and the sequence (Vk)k≥0 converges a.s. to a finite limit V∞.
(ii) the sequence (E[Vk])k≥0 converges and lim k→∞ E[Vk] = E[V∞].
(iii) the series ∑∞ k=0Xk converges almost surely and ∑∞ k=0 E[Xk] <∞.
We proceed from (19) by re-arranging terms, observing that
$$\hat{\mathcal{L}}^{(k+1)}(\theta^{(k+1)}) \le \hat{\mathcal{L}}^{(k)}(\theta^{(k)}) - \tfrac{1}{n}\big( \hat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau^k_{i_k})}) - \hat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)}) \big) - \big( \tilde{\mathcal{L}}^{(k+1)}(\theta^{(k+1)}) - \hat{\mathcal{L}}^{(k+1)}(\theta^{(k+1)}) \big) + \big( \tilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \hat{\mathcal{L}}^{(k)}(\theta^{(k)}) \big) + \tfrac{1}{n}\big( \tilde{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)},\{z^{(k)}_{i_k,m}\}_{m=1}^{M_{(k)}}) - \hat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)}) \big) + \tfrac{1}{n}\big( \hat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau^k_{i_k})}) - \tilde{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau^k_{i_k})},\{z^{(\tau^k_{i_k})}_{i_k,m}\}_{m=1}^{M_{(\tau^k_{i_k})}}) \big)\,.$$
Our idea is to apply Lemma 1. Under H1, the finite sum of surrogate functions $\hat{\mathcal{L}}^{(k)}(\theta)$, defined in (15), is lower bounded by a constant $c_k > -\infty$ for any $\theta$. To this end, we observe that
$$V_k := \hat{\mathcal{L}}^{(k)}(\theta^{(k)}) - \inf_{k\ge 0} c_k \ge 0 \qquad (28)$$
is a non-negative random variable.
Secondly, under H1, the following random variable is non-negative:
$$X_k := \frac{1}{n}\big( \hat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau^k_{i_k})}) - \hat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)}) \big) \ge 0\,. \qquad (29)$$
Thirdly, we define
$$E_k := -\big( \tilde{\mathcal{L}}^{(k+1)}(\theta^{(k+1)}) - \hat{\mathcal{L}}^{(k+1)}(\theta^{(k+1)}) \big) + \big( \tilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \hat{\mathcal{L}}^{(k)}(\theta^{(k)}) \big) + \tfrac{1}{n}\big( \tilde{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)},\{z^{(k)}_{i_k,m}\}_{m=1}^{M_{(k)}}) - \hat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)}) \big) + \tfrac{1}{n}\big( \hat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau^k_{i_k})}) - \tilde{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau^k_{i_k})},\{z^{(\tau^k_{i_k})}_{i_k,m}\}_{m=1}^{M_{(\tau^k_{i_k})}}) \big)\,. \qquad (30)$$
Note that from the definitions (28), (29), (30), we have $V_{k+1} \le V_k - X_k + E_k$ for any $k \ge 1$. Under H4, we observe that
$$\mathbb{E}\big[ \big|\tilde{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)},\{z^{(k)}_{i_k,m}\}_{m=1}^{M_{(k)}}) - \hat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)})\big| \big] \le C_{\rm r}\, M_{(k)}^{-1/2}\,,$$
$$\mathbb{E}\big[ \big|\hat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau^k_{i_k})}) - \tilde{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau^k_{i_k})},\{z^{(\tau^k_{i_k})}_{i_k,m}\}_{m=1}^{M_{(\tau^k_{i_k})}})\big| \big] \le C_{\rm r}\, \mathbb{E}\big[ M_{(\tau^k_{i_k})}^{-1/2} \big]\,,$$
$$\mathbb{E}\big[ \big|\tilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \hat{\mathcal{L}}^{(k)}(\theta^{(k)})\big| \big] \le \frac{1}{n}\sum_{i=1}^n C_{\rm r}\, \mathbb{E}\big[ M_{(\tau^k_i)}^{-1/2} \big]\,.$$
Therefore,
$$\mathbb{E}\big[ |E_k| \big] \le \frac{C_{\rm r}}{n} \Big( M_{(k)}^{-1/2} + \mathbb{E}\Big[ M_{(\tau^k_{i_k})}^{-1/2} + \sum_{i=1}^n \big\{ M_{(\tau^k_i)}^{-1/2} + M_{(\tau^{k+1}_i)}^{-1/2} \big\} \Big] \Big)\,.$$
Using (24) and the assumption on the sequence $\{M_{(k)}\}_{k\ge 0}$, we obtain
$$\sum_{k=0}^{\infty} \mathbb{E}\big[ |E_k| \big] < \frac{C_{\rm r}}{n}(2+2n) \sum_{k=0}^{\infty} M_{(k)}^{-1/2} < \infty\,.$$
Therefore, the conclusions of Lemma 1 hold. Precisely, we have $\sum_{k=0}^{\infty} X_k < \infty$ almost surely and $\sum_{k=0}^{\infty} \mathbb{E}[X_k] < \infty$. Note that this implies
$$\infty > \sum_{k=0}^{\infty} \mathbb{E}[X_k] = \frac{1}{n} \sum_{k=0}^{\infty} \mathbb{E}\big[ \hat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau^k_{i_k})}) - \hat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)}) \big] = \frac{1}{n} \sum_{k=0}^{\infty} \mathbb{E}\big[ \hat{\mathcal{L}}^{(k)}(\theta^{(k)}) - \mathcal{L}(\theta^{(k)}) \big] = \frac{1}{n} \sum_{k=0}^{\infty} \mathbb{E}\big[ \hat e^{(k)}(\theta^{(k)}) \big]\,.$$
Since $\hat e^{(k)}(\theta^{(k)}) \ge 0$, the above implies
$$\lim_{k\to\infty} \hat e^{(k)}(\theta^{(k)}) = 0 \quad \text{a.s.} \qquad (31)$$
and, subsequently applying (18), we have $\lim_{k\to\infty} \|\nabla \hat e^{(k)}(\theta^{(k)})\| = 0$ almost surely. Finally, it follows from (18) and (26) that
$$\lim_{k\to\infty} g^{(k)}_- \le \lim_{k\to\infty} \sqrt{2L}\, \sqrt{\hat e^{(k)}(\theta^{(k)})} + \lim_{k\to\infty} \sup_{\theta\in\Theta} |\varepsilon^{(k)}(\theta)| = 0\,, \qquad (32)$$
where the last equality holds almost surely due to the fact that $\sum_{k=0}^{\infty} \mathbb{E}[\sup_{\theta\in\Theta} |\varepsilon^{(k)}(\theta)|] < \infty$. This concludes the asymptotic convergence of the MISSO method.
Finally, we prove that $\mathcal{L}(\theta^{(k)})$ converges almost surely. As a consequence of Lemma 1, it is clear that $\{V_k\}_{k\ge 0}$ converges almost surely and so does $\{\hat{\mathcal{L}}^{(k)}(\theta^{(k)})\}_{k\ge 0}$, i.e., we have $\lim_{k\to\infty} \hat{\mathcal{L}}^{(k)}(\theta^{(k)}) = \mathrm{L}$. Applying (31) implies that
$$\mathrm{L} = \lim_{k\to\infty} \hat{\mathcal{L}}^{(k)}(\theta^{(k)}) = \lim_{k\to\infty} \mathcal{L}(\theta^{(k)}) \quad \text{a.s.}$$
This shows that $\mathcal{L}(\theta^{(k)})$ converges almost surely to $\mathrm{L}$.
A.3 PROOF OF LEMMA 1
Lemma. Let $(V_k)_{k\ge 0}$ be a non-negative sequence of random variables such that $\mathbb{E}[V_0] < \infty$. Let $(X_k)_{k\ge 0}$ be a non-negative sequence of random variables and $(E_k)_{k\ge 0}$ be a sequence of random variables such that $\sum_{k=0}^{\infty} \mathbb{E}[|E_k|] < \infty$. If for any $k \ge 1$:
$$V_k \le V_{k-1} - X_{k-1} + E_{k-1}$$
then:
(i) for all $k \ge 0$, $\mathbb{E}[V_k] < \infty$ and the sequence $(V_k)_{k\ge 0}$ converges a.s. to a finite limit $V_\infty$;
(ii) the sequence $(\mathbb{E}[V_k])_{k\ge 0}$ converges and $\lim_{k\to\infty} \mathbb{E}[V_k] = \mathbb{E}[V_\infty]$;
(iii) the series $\sum_{k=0}^{\infty} X_k$ converges almost surely and $\sum_{k=0}^{\infty} \mathbb{E}[X_k] < \infty$.
Proof We first show that for all $k \ge 0$, $\mathbb{E}[V_k] < \infty$. Note indeed that
$$0 \le V_k \le V_0 - \sum_{j=1}^k X_j + \sum_{j=1}^k E_j \le V_0 + \sum_{j=1}^k E_j\,, \qquad (33)$$
showing that $\mathbb{E}[V_k] \le \mathbb{E}[V_0] + \mathbb{E}\big[\sum_{j=1}^k E_j\big] < \infty$. Since $0 \le X_k \le V_{k-1} - V_k + E_k$, we also obtain for all $k \ge 0$, $\mathbb{E}[X_k] < \infty$. Moreover, since $\mathbb{E}\big[\sum_{j=1}^{\infty} |E_j|\big] < \infty$, the series $\sum_{j=1}^{\infty} E_j$ converges a.s. We may therefore define
$$W_k = V_k + \sum_{j=k+1}^{\infty} E_j\,. \qquad (34)$$
Note that $\mathbb{E}[|W_k|] \le \mathbb{E}[V_k] + \mathbb{E}\big[\sum_{j=k+1}^{\infty} |E_j|\big] < \infty$. For all $k \ge 1$, we get
$$W_k \le V_{k-1} - X_k + \sum_{j=k}^{\infty} E_j \le W_{k-1} - X_k \le W_{k-1}\,, \qquad \mathbb{E}[W_k] \le \mathbb{E}[W_{k-1}] - \mathbb{E}[X_k]\,. \qquad (35)$$
Hence the sequences $(W_k)_{k\ge 0}$ and $(\mathbb{E}[W_k])_{k\ge 0}$ are non-increasing. Since for all $k \ge 0$, $W_k \ge -\sum_{j=1}^{\infty} |E_j| > -\infty$ and $\mathbb{E}[W_k] \ge -\sum_{j=1}^{\infty} \mathbb{E}[|E_j|] > -\infty$, the (random) sequence $(W_k)_{k\ge 0}$ converges a.s. to a limit $W_\infty$ and the (deterministic) sequence $(\mathbb{E}[W_k])_{k\ge 0}$ converges to a limit $w_\infty$. Since $|W_k| \le V_0 + \sum_{j=1}^{\infty} |E_j|$, the Fatou lemma implies that
$$\mathbb{E}\big[\liminf_{k\to\infty} |W_k|\big] = \mathbb{E}[|W_\infty|] \le \liminf_{k\to\infty} \mathbb{E}[|W_k|] \le \mathbb{E}[V_0] + \sum_{j=1}^{\infty} \mathbb{E}[|E_j|] < \infty\,, \qquad (36)$$
showing that the random variable W∞ is integrable.
In the sequel, set $U_k := W_0 - W_k$. By construction we have for all $k \ge 0$, $U_k \ge 0$, $U_k \le U_{k+1}$ and $\mathbb{E}[U_k] \le \mathbb{E}[|W_0|] + \mathbb{E}[|W_k|] < \infty$, and by the monotone convergence theorem we get
$$\lim_{k\to\infty} \mathbb{E}[U_k] = \mathbb{E}\big[\lim_{k\to\infty} U_k\big]\,. \qquad (37)$$
Finally, we have
$$\lim_{k\to\infty} \mathbb{E}[U_k] = \mathbb{E}[W_0] - w_\infty \qquad\text{and}\qquad \mathbb{E}\big[\lim_{k\to\infty} U_k\big] = \mathbb{E}[W_0] - \mathbb{E}[W_\infty]\,, \qquad (38)$$
showing that $\mathbb{E}[W_\infty] = w_\infty$ and concluding the proof of (ii). Moreover, using (35) we have that $W_k \le W_{k-1} - X_k$, which yields
$$\sum_{j=1}^{\infty} X_j \le W_0 - W_\infty < \infty\,, \qquad \sum_{j=1}^{\infty} \mathbb{E}[X_j] \le \mathbb{E}[W_0] - w_\infty < \infty\,, \qquad (39)$$
which concludes the proof of the lemma.
B PRACTICAL DETAILS FOR THE BINARY LOGISTIC REGRESSION ON THE TRAUMABASE
B.1 TRAUMABASE DATASET QUANTITATIVE VARIABLES
The list of the 16 quantitative variables we use in our experiments is as follows: age, weight, height, BMI (Body Mass Index), the Glasgow Coma Scale, the Glasgow Coma Scale motor component, the minimum systolic blood pressure, the minimum diastolic blood pressure, the maximum heart rate (pulse) per unit time (usually a minute), the systolic blood pressure at arrival of the ambulance, the diastolic blood pressure at arrival of the ambulance, the heart rate at arrival of the ambulance, the capillary hemoglobin concentration, the oxygen saturation, the fluid expansion colloids, the fluid expansion crystalloids, the pulse pressure for the minimum values of diastolic and systolic blood pressure, and the pulse pressure at arrival of the ambulance.
B.2 METROPOLIS-HASTINGS ALGORITHM
During the simulation step of the MISSO method, sampling from the target distribution $\pi(z_{i,\mathrm{mis}};\theta) := p(z_{i,\mathrm{mis}}|z_{i,\mathrm{obs}}, y_i;\theta)$ is performed using a Metropolis-Hastings (MH) algorithm (Meyn & Tweedie, 2012) with proposal distribution $q(z_{i,\mathrm{mis}};\delta) := p(z_{i,\mathrm{mis}}|z_{i,\mathrm{obs}};\delta)$, where $\theta = (\beta,\Omega)$ and, with a slight abuse of notation, $\delta = (\xi,\Sigma)$ denotes the parameters of the Gaussian proposal. The parameters of the Gaussian conditional distribution of $z_{i,\mathrm{mis}}|z_{i,\mathrm{obs}}$ read
$$\xi = \beta_{\mathrm{mis}} + \Omega_{\mathrm{mis,obs}}\, \Omega_{\mathrm{obs,obs}}^{-1} (z_{i,\mathrm{obs}} - \beta_{\mathrm{obs}})\,, \qquad \Sigma = \Omega_{\mathrm{mis,mis}} - \Omega_{\mathrm{mis,obs}}\, \Omega_{\mathrm{obs,obs}}^{-1}\, \Omega_{\mathrm{obs,mis}}\,,$$
where we have used the Schur complement of $\Omega_{\mathrm{obs,obs}}$ in $\Omega$ and denoted by $\beta_{\mathrm{mis}}$ (resp. $\beta_{\mathrm{obs}}$) the missing (resp. observed) elements of $\beta$. The MH algorithm is summarized in Algorithm 3.
Algorithm 3 MH algorithm
1: Input: initialization $z_{i,\mathrm{mis},0} \sim q(z_{i,\mathrm{mis}};\delta)$
2: for $m = 1, \cdots, M$ do
3:   Sample $z_{i,\mathrm{mis},m} \sim q(z_{i,\mathrm{mis}};\delta)$
4:   Sample $u \sim \mathcal{U}([0,1])$
5:   Compute the ratio $r = \dfrac{\pi(z_{i,\mathrm{mis},m};\theta)/q(z_{i,\mathrm{mis},m};\delta)}{\pi(z_{i,\mathrm{mis},m-1};\theta)/q(z_{i,\mathrm{mis},m-1};\delta)}$
6:   if $u < r$ then
7:     Accept $z_{i,\mathrm{mis},m}$
8:   else
9:     $z_{i,\mathrm{mis},m} \leftarrow z_{i,\mathrm{mis},m-1}$
10:  end if
11: end for
12: Output: $z_{i,\mathrm{mis},M}$
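For concreteness, the following Python sketch implements the Gaussian conditional proposal above together with the independence MH loop of Algorithm 3. It is our own illustrative rendering (the paper's experiments are implemented in R); `log_target` stands for an assumed user-supplied unnormalized log-density $\log \pi(\cdot;\theta)$, which is all the acceptance ratio requires.

```python
import numpy as np

def gaussian_conditional(beta, omega, z_obs, mis, obs):
    """Mean and covariance of z_mis | z_obs under N(beta, Omega), via the Schur complement."""
    o_oo = omega[np.ix_(obs, obs)]
    o_mo = omega[np.ix_(mis, obs)]
    xi = beta[mis] + o_mo @ np.linalg.solve(o_oo, z_obs - beta[obs])
    sigma = omega[np.ix_(mis, mis)] - o_mo @ np.linalg.solve(o_oo, o_mo.T)
    return xi, sigma

def independence_mh(log_target, xi, sigma, M, rng):
    """Algorithm 3: independence Metropolis-Hastings with a fixed Gaussian proposal N(xi, sigma)."""
    chol = np.linalg.cholesky(sigma)
    d = len(xi)
    def sample_q():
        return xi + chol @ rng.standard_normal(d)
    def log_q(z):
        v = np.linalg.solve(chol, z - xi)
        return -0.5 * v @ v  # up to a constant, which cancels in the ratio
    z = sample_q()
    for _ in range(M):
        z_new = sample_q()
        # log of the ratio r in Step 5 of Algorithm 3
        log_r = (log_target(z_new) - log_q(z_new)) - (log_target(z) - log_q(z))
        if np.log(rng.uniform()) < log_r:
            z = z_new
    return z
```

A typical call would pass `rng = np.random.default_rng()` and a `log_target` combining the logistic likelihood and the Gaussian prior on the completed covariates.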
B.3 MISSO UPDATE
Choice of surrogate function for MISO: We recall the MISO deterministic surrogate defined in (7):
$$\hat{\mathcal{L}}_i(\theta;\bar\theta) = \int_{\mathsf{Z}} \log\big( p_i(z_{i,\mathrm{mis}},\bar\theta)/f_i(z_{i,\mathrm{mis}},\theta) \big)\, p_i(z_{i,\mathrm{mis}},\bar\theta)\, \mu_i(\mathrm{d}z_i)\,,$$
where $\theta = (\delta,\beta,\Omega)$ and $\bar\theta = (\bar\delta,\bar\beta,\bar\Omega)$.
Surrogate function decomposition: We adapt this surrogate to our missing covariates problem and decompose the term depending on $\theta$, while $\bar\theta$ is held fixed, into the two following parts.
Up to an additive constant that does not depend on $\theta$, this gives
$$\hat{\mathcal{L}}_i(\theta;\bar\theta) = -\int_{\mathsf{Z}} \log f_i(z_{i,\mathrm{mis}}, z_{i,\mathrm{obs}},\theta)\, p_i(z_{i,\mathrm{mis}},\bar\theta)\, \mu_i(\mathrm{d}z_{i,\mathrm{mis}})$$
$$= -\int_{\mathsf{Z}} \log\big[ p_i(y_i|z_{i,\mathrm{mis}}, z_{i,\mathrm{obs}},\delta)\, p_i(z_{i,\mathrm{mis}},\beta,\Omega) \big]\, p_i(z_{i,\mathrm{mis}},\bar\theta)\, \mu_i(\mathrm{d}z_{i,\mathrm{mis}})$$
$$= \underbrace{-\int_{\mathsf{Z}} \log p_i(y_i|z_{i,\mathrm{mis}}, z_{i,\mathrm{obs}},\delta)\, p_i(z_{i,\mathrm{mis}},\bar\theta)\, \mu_i(\mathrm{d}z_{i,\mathrm{mis}})}_{=\hat{\mathcal{L}}^{(1)}_i(\delta,\bar\theta)} \;\underbrace{-\int_{\mathsf{Z}} \log p_i(z_{i,\mathrm{mis}},\beta,\Omega)\, p_i(z_{i,\mathrm{mis}},\bar\theta)\, \mu_i(\mathrm{d}z_{i,\mathrm{mis}})}_{=\hat{\mathcal{L}}^{(2)}_i(\beta,\Omega,\bar\theta)}\,. \qquad (40)$$
The mean $\beta$ and the covariance $\Omega$ of the latent structure can then be estimated in closed form by minimizing the sum of MISSO surrogates $\tilde{\mathcal{L}}^{(2)}_i(\beta,\Omega,\bar\theta,\{z_m\}_{m=1}^M)$, defined as the MC approximations of $\hat{\mathcal{L}}^{(2)}_i(\beta,\Omega,\bar\theta)$, for all $i \in \llbracket n \rrbracket$.
We thus keep the surrogate $\hat{\mathcal{L}}^{(2)}_i(\beta,\Omega,\bar\theta)$ as it is, and consider the following quadratic approximation of $\hat{\mathcal{L}}^{(1)}_i(\delta,\bar\theta)$ to estimate the vector of logistic parameters $\delta$:
$$\hat{\mathcal{L}}^{(1)}_i(\bar\delta,\bar\theta) - \Big( \int_{\mathsf{Z}} \nabla \log p_i(y_i|z_{i,\mathrm{mis}}, z_{i,\mathrm{obs}},\delta)\big|_{\delta=\bar\delta}\, p_i(z_{i,\mathrm{mis}},\bar\theta)\, \mu_i(\mathrm{d}z_{i,\mathrm{mis}}) \Big)^{\!\top} (\delta-\bar\delta) - \frac{1}{2}(\delta-\bar\delta)^{\top} \Big( \int_{\mathsf{Z}} \nabla^2 \log p_i(y_i|z_{i,\mathrm{mis}}, z_{i,\mathrm{obs}},\delta)\big|_{\delta=\bar\delta}\, p_i(z_{i,\mathrm{mis}},\bar\theta)\, \mu_i(\mathrm{d}z_{i,\mathrm{mis}}) \Big) (\delta-\bar\delta)\,.$$
Recall that
$$\nabla \log p_i(y_i|z_{i,\mathrm{mis}}, z_{i,\mathrm{obs}},\delta) = z_i\big( y_i - S(\delta^{\top} z_i) \big)\,, \qquad \nabla^2 \log p_i(y_i|z_{i,\mathrm{mis}}, z_{i,\mathrm{obs}},\delta) = -z_i z_i^{\top}\, \dot S(\delta^{\top} z_i)\,,$$
where $\dot S(u)$ is the derivative of $S(u)$. Note that $\dot S(u) \le 1/4$ and, since for all $i \in \llbracket n \rrbracket$ the $p \times p$ matrix $z_i z_i^{\top}$ is positive semi-definite, we can assume that:
L1. For all $i \in \llbracket n \rrbracket$ and $\epsilon > 0$, there exists, for all $z_i \in \mathsf{Z}$, a positive definite matrix $H_i(z_i) := \frac{1}{4}(z_i z_i^{\top} + \epsilon \mathrm{I}_d)$ such that for all $\delta \in \mathbb{R}^p$, $-\nabla^2 \log p_i(y_i|z_{i,\mathrm{mis}}, z_{i,\mathrm{obs}},\delta) = z_i z_i^{\top}\, \dot S(\delta^{\top} z_i) \preceq H_i(z_i)$.
Then, we use, for all $i \in \llbracket n \rrbracket$, the following surrogate function to estimate $\delta$:
$$\bar{\mathcal{L}}^{(1)}_i(\delta,\bar\theta) = \hat{\mathcal{L}}^{(1)}_i(\bar\delta,\bar\theta) - D_i^{\top}(\delta-\bar\delta) + \frac{1}{2}(\delta-\bar\delta)^{\top} H_i (\delta-\bar\delta)\,, \qquad (41)$$
where
$$D_i = \int_{\mathsf{Z}} \nabla \log p_i(y_i|z_{i,\mathrm{mis}}, z_{i,\mathrm{obs}},\delta)\big|_{\delta=\bar\delta}\, p_i(z_{i,\mathrm{mis}},\bar\theta)\, \mu_i(\mathrm{d}z_{i,\mathrm{mis}})\,, \qquad H_i = \int_{\mathsf{Z}} H_i(z_{i,\mathrm{mis}})\, p_i(z_{i,\mathrm{mis}},\bar\theta)\, \mu_i(\mathrm{d}z_{i,\mathrm{mis}})\,.$$
Finally, at iteration $k$, the total surrogate is
$$\tilde{\mathcal{L}}^{(k)}(\theta) = \frac{1}{n}\sum_{i=1}^n \tilde{\mathcal{L}}_i\big(\theta;\theta^{(\tau_i^k)}, \{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big) = \frac{1}{n}\sum_{i=1}^n \tilde{\mathcal{L}}^{(2)}_i\big(\beta,\Omega;\theta^{(\tau_i^k)}, \{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big) - \frac{1}{n}\sum_{i=1}^n \big(\tilde D_i^{(\tau_i^k)}\big)^{\top}\big(\delta - \delta^{(\tau_i^k)}\big) + \frac{1}{2n}\sum_{i=1}^n \big(\delta - \delta^{(\tau_i^k)}\big)^{\top}\, \tilde H_i^{(\tau_i^k)}\, \big(\delta - \delta^{(\tau_i^k)}\big)\,, \qquad (42)$$
where for all $i \in \llbracket n \rrbracket$:
$$\tilde D_i^{(\tau_i^k)} = \frac{1}{M_{(\tau_i^k)}} \sum_{m=1}^{M_{(\tau_i^k)}} z^{(\tau_i^k)}_{i,m}\Big( y_i - S\big( (\delta^{(\tau_i^k)})^{\top} z^{(\tau_i^k)}_{i,m} \big) \Big)\,, \qquad \tilde H_i^{(\tau_i^k)} = \frac{1}{4 M_{(\tau_i^k)}} \sum_{m=1}^{M_{(\tau_i^k)}} z^{(\tau_i^k)}_{i,m}\big( z^{(\tau_i^k)}_{i,m} \big)^{\top}\,.$$
Minimizing the total surrogate (42) boils down to performing a quasi-Newton step. It is perhaps sensible to apply some diagonal loading, which is perfectly compatible with the surrogate interpretation we just gave.
The logistic parameters are estimated as follows:
$$\delta^{(k)} = \operatorname*{arg\,min}_{\delta\in\Theta} \frac{1}{n}\sum_{i=1}^n \tilde{\mathcal{L}}^{(1)}_i\big(\delta;\theta^{(\tau_i^k)}, \{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big)\,,$$
where $\tilde{\mathcal{L}}^{(1)}_i(\delta;\theta^{(\tau_i^k)}, \{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}})$ is the MC approximation of the MISO surrogate defined in (41), which leads to the following quasi-Newton step:
$$\delta^{(k)} = \frac{1}{n}\sum_{i=1}^n \delta^{(\tau_i^k)} - \big(\tilde H^{(k)}\big)^{-1} \tilde D^{(k)}\,, \qquad \text{with}\quad \tilde D^{(k)} = \frac{1}{n}\sum_{i=1}^n \tilde D_i^{(\tau_i^k)} \quad\text{and}\quad \tilde H^{(k)} = \frac{1}{n}\sum_{i=1}^n \tilde H_i^{(\tau_i^k)}\,.$$
MISSO updates: At the $k$-th iteration, and after the initialization, for all $i \in \llbracket n \rrbracket$, of the latent variables $(z_i^{(0)})$, the MISSO algorithm consists in picking an index $i_k$ uniformly on $\llbracket n \rrbracket$, completing the observations by sampling a Monte Carlo batch $\{z^{(k)}_{i_k,\mathrm{mis},m}\}_{m=1}^{M_{(k)}}$ of missing values from the conditional distribution $p(z_{i_k,\mathrm{mis}}|z_{i_k,\mathrm{obs}}, y_{i_k};\theta^{(k-1)})$ using an MCMC sampler, and computing the estimated parameters as follows:
$$\beta^{(k)} = \operatorname*{arg\,min}_{\beta\in\Theta} \frac{1}{n}\sum_{i=1}^n \tilde{\mathcal{L}}^{(2)}_i\big(\beta,\Omega^{(k)};\theta^{(\tau_i^k)}, \{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big) = \frac{1}{n}\sum_{i=1}^n \frac{1}{M_{(\tau_i^k)}} \sum_{m=1}^{M_{(\tau_i^k)}} z^{(\tau_i^k)}_{i,m}\,,$$
$$\Omega^{(k)} = \operatorname*{arg\,min}_{\Omega\in\Theta} \frac{1}{n}\sum_{i=1}^n \tilde{\mathcal{L}}^{(2)}_i\big(\beta^{(k)},\Omega;\theta^{(\tau_i^k)}, \{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big) = \frac{1}{n}\sum_{i=1}^n \frac{1}{M_{(\tau_i^k)}} \sum_{m=1}^{M_{(\tau_i^k)}} w^{(\tau_i^k)}_{i,m}\,,$$
$$\delta^{(k)} = \frac{1}{n}\sum_{i=1}^n \delta^{(\tau_i^k)} - \big(\tilde H^{(k)}\big)^{-1} \tilde D^{(k)}\,, \qquad (43)$$
where $z^{(k)}_{i,m} = (z^{(k)}_{i,\mathrm{mis},m}, z_{i,\mathrm{obs}})$ is composed of a simulated and an observed part, $\tilde D^{(k)} = \frac{1}{n}\sum_{i=1}^n \tilde D_i^{(\tau_i^k)}$, $\tilde H^{(k)} = \frac{1}{n}\sum_{i=1}^n \tilde H_i^{(\tau_i^k)}$ and $w^{(k)}_{i,m} = z^{(k)}_{i,m}(z^{(k)}_{i,m})^{\top} - \beta^{(k)}(\beta^{(k)})^{\top}$. Besides, $\tilde{\mathcal{L}}^{(1)}_i(\delta,\bar\theta,\{z_m\}_{m=1}^M)$ and $\tilde{\mathcal{L}}^{(2)}_i(\beta,\Omega,\bar\theta,\{z_m\}_{m=1}^M)$ are defined as the MC approximations of $\hat{\mathcal{L}}^{(1)}_i(\delta,\bar\theta)$ and $\hat{\mathcal{L}}^{(2)}_i(\beta,\Omega,\bar\theta)$, for all $i \in \llbracket n \rrbracket$, as components of the surrogate decomposition (40).
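To fix ideas, the following Python sketch assembles one MISSO iteration (43) around the MH sampler of Algorithm 3. It is our own illustrative rendering (the paper's implementation is in R): we ignore the intercept component of $\bar z_i$ for brevity, `sample_missing` is an assumed user-supplied routine returning completed covariate batches, and the small `eps` term implements the diagonal-loading remark above.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def misso_logistic_step(i_k, state, y, sample_missing, M_k, n, eps=1e-6):
    """One MISSO iteration (43). `state` keeps, for every index i, the quantities
    frozen at iteration tau_i^k: Monte Carlo batches state["z"][i] (shape (M_i, p)),
    surrogate terms state["D"][i], state["H"][i], and stale parameters
    state["delta_old"][i]."""
    theta = (state["beta"], state["omega"], state["delta"])
    # refresh the Monte Carlo batch and surrogate terms of the selected index only
    z_i = sample_missing(i_k, theta, M_k)
    state["z"][i_k] = z_i
    state["D"][i_k] = z_i.T @ (y[i_k] - sigmoid(z_i @ state["delta"])) / M_k
    state["H"][i_k] = (z_i.T @ z_i) / (4.0 * M_k)
    state["delta_old"][i_k] = state["delta"].copy()
    # closed-form beta / Omega updates in (43), averaging over the stored batches
    means = np.stack([state["z"][i].mean(axis=0) for i in range(n)])
    beta = means.mean(axis=0)
    second = np.mean([state["z"][i].T @ state["z"][i] / len(state["z"][i]) for i in range(n)], axis=0)
    omega = second - np.outer(beta, beta)
    # quasi-Newton step for delta, with diagonal loading for numerical safety
    D_bar = np.mean([state["D"][i] for i in range(n)], axis=0)
    H_bar = np.mean([state["H"][i] for i in range(n)], axis=0) + eps * np.eye(len(beta))
    delta = np.mean([state["delta_old"][i] for i in range(n)], axis=0) - np.linalg.solve(H_bar, D_bar)
    state["beta"], state["omega"], state["delta"] = beta, omega, delta
    return state
```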
B.4 WALL CLOCK TIME
We provide in Table 1 the running time of each method plotted in Figure 1, used to train a logistic regression with missing values on the TraumaBase dataset ($p = 16$ influential quantitative measurements, on $n = 6384$ patients).
The running times are essentially the same, since the per-epoch computational complexity is similar for each method. We remark a slight delay when using the MISSO method with a batch size of 1, as our code, implemented in R, is not fully optimized and parallelized. Yet, when the batch size tends to 100%, we recover the duration of MCEM, which is consistent with the fact that MISSO with a full batch update boils down to the MCEM algorithm.
Figure 3 plots the estimated parameters for the logistic regression example against the elapsed wall-clock time (in seconds).
C PRACTICAL DETAILS FOR THE INCREMENTAL VARIATIONAL INFERENCE
C.1 NEURAL NETWORKS ARCHITECTURE
Bayesian LeNet-5 Architecture: Table 2 describes the architecture of the convolutional neural network introduced in (LeCun et al., 1998) that we train on MNIST.
Bayesian ResNet-18 Architecture: Table 3 describes the architecture of the ResNet-18 that we train on CIFAR-10.
C.2 ALGORITHMS UPDATES
First, we initialize the means $\mu_\ell^{(0)}$ for $\ell \in \llbracket d \rrbracket$ and the variance estimate $\sigma^{(0)}$. At iteration $k$, minimizing the sum of stochastic surrogates defined as in (6) and (13) yields the following MISSO update: step (i) pick a function index $i_k$ uniformly on $\llbracket n \rrbracket$; step (ii) sample a Monte Carlo batch $\{z_m^{(k)}\}_{m=1}^{M_{(k)}}$ from $\mathcal{N}(0,\mathrm{I})$; and step (iii) update the parameters as
$$\mu_\ell^{(k)} = \frac{1}{n}\sum_{i=1}^n \mu_\ell^{(\tau_i^k)} - \frac{\gamma}{n}\sum_{i=1}^n \hat\delta^{(k)}_{\mu_\ell,i} \qquad\text{and}\qquad \sigma^{(k)} = \frac{1}{n}\sum_{i=1}^n \sigma^{(\tau_i^k)} - \frac{\gamma}{n}\sum_{i=1}^n \hat\delta^{(k)}_{\sigma,i}\,, \qquad (44)$$
where we define the following gradient terms for all $i \in \llbracket 1, n \rrbracket$:
$$\hat\delta^{(k)}_{\mu_\ell,i} = -\frac{1}{M_{(k)}}\sum_{m=1}^{M_{(k)}} \nabla_w \log p(y_i|x_i, w)\Big|_{w = t(\theta^{(k-1)}, z_m^{(k)})} + \nabla_{\mu_\ell} d(\theta^{(k-1)})\,,$$
$$\hat\delta^{(k)}_{\sigma,i} = -\frac{1}{M_{(k)}}\sum_{m=1}^{M_{(k)}} z_m^{(k)}\, \nabla_w \log p(y_i|x_i, w)\Big|_{w = t(\theta^{(k-1)}, z_m^{(k)})} + \nabla_{\sigma} d(\theta^{(k-1)})\,. \qquad (45)$$
Note that our analysis in the main text requires the parameter to lie in a compact set. For the estimation problem considered here, this can be enforced in practice by restricting the parameters to a ball. In our simulations for the BNN examples, for illustrative purposes, we did not implement the algorithms so as to stick closely to the compactness requirement; however, we observe empirically that the parameters always remain bounded. The update rules can easily be modified to respect the requirement: for the considered VI problem, we recall that the surrogate functions (11) are quadratic, and indeed a simple projection step suffices to ensure boundedness of the iterates.
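The update (44)-(45) can be written compactly in code. The sketch below is ours: `grad_log_lik(i, w)` stands for an assumed routine returning $\nabla_w \log p(y_i|x_i, w)$ (obtained by back-propagation in practice), the KL gradient terms follow the $\mathcal{N}(\mu, \sigma^2 \mathrm{I})$ posterior / $\mathcal{N}(0, \mathrm{I})$ prior choice above, and the projection step just discussed is omitted for simplicity.

```python
import numpy as np

def misso_vi_step(i_k, state, grad_log_lik, M_k, gamma, n, rng):
    """One MISSO step for mean-field Gaussian VI, cf. (44)-(45).
    state["mu_tab"] (n, d) and state["sig_tab"] (n,) hold the stale parameter
    copies at tau_i^k; state["dmu"] (n, d) and state["dsig"] (n,) hold the
    stale stochastic gradients."""
    mu, sigma = state["mu"], state["sigma"]        # current iterate theta^{(k-1)}
    d = mu.size
    # step (ii): Monte Carlo batch from N(0, I) and reparametrized weights
    z = rng.standard_normal((M_k, d))
    w = mu + sigma * z                             # w = t(theta, z)
    g = np.stack([grad_log_lik(i_k, w[m]) for m in range(M_k)])
    # gradients of d(theta) for a N(mu, sigma^2 I) posterior and N(0, I) prior
    dmu_kl = mu / n
    dsig_kl = d * (sigma - 1.0 / sigma) / n
    # (45): refresh the stored quantities of the selected index i_k only
    state["dmu"][i_k] = -g.mean(axis=0) + dmu_kl
    state["dsig"][i_k] = -np.mean(np.sum(z * g, axis=1)) + dsig_kl   # z_m^T grad, scalar sigma
    state["mu_tab"][i_k] = mu.copy()
    state["sig_tab"][i_k] = sigma
    # (44): average the stale parameters and step along the averaged gradients
    state["mu"] = state["mu_tab"].mean(axis=0) - gamma * state["dmu"].mean(axis=0)
    state["sigma"] = state["sig_tab"].mean() - gamma * state["dsig"].mean()
    return state
```

In practice one would initialize the tables with $n$ identical copies of $(\mu^{(0)}, \sigma^{(0)})$ and the corresponding initial gradients, matching the initialization of the surrogates in Algorithm 2, and pass `rng = np.random.default_rng()`.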
For all benchmark algorithms, we pick, at iteration $k$, a function index $i_k$ uniformly on $\llbracket n \rrbracket$ and sample a Monte Carlo batch $\{z_m^{(k)}\}_{m=1}^{M_{(k)}}$ from the standard Gaussian distribution. The updates of the parameters $\mu_\ell$, for all $\ell \in \llbracket d \rrbracket$, and $\sigma$ break down as follows.
Monte Carlo SAG update: Set
$$\mu_\ell^{(k)} = \mu_\ell^{(k-1)} - \frac{\gamma}{n}\sum_{i=1}^n \hat\delta^{(k)}_{\mu_\ell,i} \qquad\text{and}\qquad \sigma^{(k)} = \sigma^{(k-1)} - \frac{\gamma}{n}\sum_{i=1}^n \hat\delta^{(k)}_{\sigma,i}\,,$$
where $\hat\delta^{(k)}_{\mu_\ell,i} = \hat\delta^{(k-1)}_{\mu_\ell,i}$ and $\hat\delta^{(k)}_{\sigma,i} = \hat\delta^{(k-1)}_{\sigma,i}$ for $i \neq i_k$, and are defined by (45) for $i = i_k$. The learning rate is set to $\gamma = 10^{-3}$.
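For contrast with the MISSO sketch above, here is a minimal rendering (ours) of the Monte Carlo SAG table update: only the gradient entry of the sampled index is recomputed, and, unlike (44), the step is taken from the current iterate rather than from an average of stale iterates.

```python
def mc_sag_step(i_k, mu, sigma, dmu_tab, dsig_tab, new_dmu, new_dsig, gamma=1e-3):
    """Monte Carlo SAG step: dmu_tab is a NumPy array (n, d), dsig_tab an array (n,);
    new_dmu / new_dsig are the fresh gradients of index i_k computed as in (45)."""
    dmu_tab[i_k] = new_dmu
    dsig_tab[i_k] = new_dsig
    mu = mu - gamma * dmu_tab.mean(axis=0)     # gamma/n * sum_i equals gamma * mean_i
    sigma = sigma - gamma * dsig_tab.mean()
    return mu, sigma
```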
Bayes By Backprop update: Set
$$\mu_\ell^{(k)} = \mu_\ell^{(k-1)} - \frac{\gamma}{n}\, \hat\delta^{(k)}_{\mu_\ell,i_k} \qquad\text{and}\qquad \sigma^{(k)} = \sigma^{(k-1)} - \frac{\gamma}{n}\, \hat\delta^{(k)}_{\sigma,i_k}\,,$$
where the learning rate is $\gamma = 10^{-3}$.
Monte Carlo Momentum update: Set
$$\mu_\ell^{(k)} = \mu_\ell^{(k-1)} + \hat v^{(k)}_{\mu_\ell} \qquad\text{and}\qquad \sigma^{(k)} = \sigma^{(k-1)} + \hat v^{(k)}_{\sigma}\,,$$
where
$$\hat v^{(k)}_{\mu_\ell} = \alpha\, \hat v^{(k-1)}_{\mu_\ell} - \frac{\gamma}{n}\, \hat\delta^{(k)}_{\mu_\ell,i_k} \qquad\text{and}\qquad \hat v^{(k)}_{\sigma} = \alpha\, \hat v^{(k-1)}_{\sigma} - \frac{\gamma}{n}\, \hat\delta^{(k)}_{\sigma,i_k}\,,$$
and where $\alpha$ and $\gamma$, respectively the momentum and the learning rate, are both set to $10^{-3}$.
Monte Carlo ADAM update: Set
$$\mu_\ell^{(k)} = \mu_\ell^{(k-1)} - \frac{\gamma}{n}\, \hat m^{(k)}_{\mu_\ell} \Big/ \Big( \sqrt{\hat m^{(k)}_{\mu_\ell}} + \epsilon \Big) \qquad\text{and}\qquad \sigma^{(k)} = \sigma^{(k-1)} - \frac{\gamma}{n}\, \hat m^{(k)}_{\sigma} \Big/ \Big( \sqrt{\hat m^{(k)}_{\sigma}} + \epsilon \Big)\,,$$
where $\hat m^{(k)}_{\mu_\ell}$ and $\hat m^{(k)}_{\sigma}$ are exponential moving averages of the stochastic gradients $\hat\delta^{(k)}_{\mu_\ell,i_k}$ and $\hat\delta^{(k)}_{\sigma,i_k}$.

1. What is the focus of the paper regarding minimization algorithms?
2. What are the weaknesses of the proposed approach, particularly in its computational expense?
3. How does the reviewer assess the convergence rate of the method?
4. Are there any concerns regarding the updates in the algorithm?
5. How does the reviewer evaluate the overall quality and novelty of the paper's content?

Review
This paper develops a stochastic MM-type algorithm to minimize a finite sum. Essentially, the stochastic method draws one sample at each iteration, finds a majorizing surrogate for the corresponding loss, and then finds the minimizer of the updated total loss.
Overall, I don't find the paper well developed, and it doesn't meet the bar of a top conference like ICLR, for the following major concerns:
The major flaw is that in each iteration, the algorithm requires us to find the minimizer of the updated total loss (Step 8 of Algorithm 2). This step is computationally as expensive as the update step in a batched MM algorithm. For a stochastic-type algorithm, I would expect the update to only find the minimizer of the stochastically picked individual surrogate function.
When minimizing a stochastically picked individual surrogate function, the convergence follows from the existing literature on stochastic proximal gradient methods, and Theorem 2 follows without much difficulty.
The convergence rate of the proposed method is not derived, although it shouldn't be too difficult to derive.
ICLR | Title
MISSO: Minimization by Incremental Stochastic Surrogate Optimization for Large Scale Nonconvex and Nonsmooth Problems
Abstract
Many constrained, nonconvex and nonsmooth optimization problems can be tackled using the majorization-minimization (MM) method which alternates between constructing a surrogate function which upper bounds the objective function, and then minimizing this surrogate. For problems which minimize a finite sum of functions, a stochastic version of the MM method selects a batch of functions at random at each iteration and optimizes the accumulated surrogate. However, in many cases of interest such as variational inference for latent variable models, the surrogate functions are expressed as an expectation. In this contribution, we propose a doubly stochastic MM method based on Monte Carlo approximation of these stochastic surrogates. We establish asymptotic and non-asymptotic convergence of our scheme in a constrained, nonconvex, nonsmooth optimization setting. We apply our new framework for inference of logistic regression model with missing data and for variational inference of Bayesian variants of LeNet-5 and Resnet-18 on respectively the MNIST and CIFAR-10 datasets.
1 INTRODUCTION
We consider the constrained minimization problem of a finite sum of functions:
min θ∈Θ L(θ) := 1 n n∑ i=1 Li(θ) , (1)
where Θ is a convex, compact, and closed subset of Rp, and for any i ∈ J1, nK, the function Li : Rp → R is bounded from below and is (possibly) nonconvex and nonsmooth. To tackle the optimization problem (1), a popular approach is to apply the majorization-minimization (MM) method which iteratively minimizes a majorizing surrogate function. A large number of existing procedures fall into this general framework, for instance gradient-based or proximal methods or the Expectation-Maximization (EM) algorithm (McLachlan & Krishnan, 2008) and some variational Bayes inference techniques (Jordan et al., 1999); see for example (Razaviyayn et al., 2013) and (Lange, 2016) and the references therein. When the number of terms n in (1) is large, the vanilla MM method may be intractable because it requires to construct a surrogate function for all the n terms Li at each iteration. Here, a remedy is to apply the Minimization by Incremental Surrogate Optimization (MISO) method proposed by Mairal (2015), where the surrogate functions are updated incrementally. The MISO method can be interpreted as a combination of MM and ideas which have emerged for variance reduction in stochastic gradient methods (Schmidt et al., 2017). An extended analysis of MISO has been proposed in (Qian et al., 2019).
The success of the MISO method rests upon the efficient minimization of surrogates such as convex functions, see (Mairal, 2015, Section 2.3). A notable application of MISO-like algorithms is described in (Mensch et al., 2017) where the authors builds upon the stochastic majorizationminimization framework of Mairal (2015) to introduce a method for sparse matrix factorization. Yet, in many applications of interest, the natural surrogate functions are intractable, yet they are defined as expectation of tractable functions. For instance, this is the case for inference in latent variable models via maximum likelihood (McLachlan & Krishnan, 2008). Another application is
variational inference (Ghahramani, 2015), in which the goal is to approximate the posterior distribution of parameters given the observations; see for example (Neal, 2012; Blundell et al., 2015; Polson et al., 2017; Rezende et al., 2014; Li & Gal, 2017).
This paper fills the gap in the literature by proposing a method called Minimization by Incremental Stochastic Surrogate Optimization (MISSO), designed for the nonconvex and nonsmooth finite sum optimization, with a finite-time convergence guarantee. Our work aims at formulating a generic class of incremental stochastic surrogate methods for nonconvex optimization and building the theory to understand its behavior. In particular, we provide convergence guarantees for stochastic EM and Variational Inference-type methods, under mild conditions. In summary, our contributions are:
• we propose a unifying framework of analysis for incremental stochastic surrogate optimization when the surrogates are defined as expectations of tractable functions. The proposed MISSO method is built on the Monte Carlo integration of the intractable surrogate function, i.e., a doubly stochastic surrogate optimization scheme.
• we present an incremental update of the commonly used variational inference and Monte Carlo EM methods as special cases of our newly introduced framework. The analysis of those two algorithms is thus conducted under this unifying framework of analysis.
• we establish both asymptotic and non-asymptotic convergence for the MISSO method. In particular, the MISSO method converges almost surely to a stationary point and in O(n/ ) iterations to an -stationary point, see Theorem 1.
• in essence, we relax the class of surrogate functions used in MISO (Mairal, 2015) and allow for intractable surrogates that can only be evaluated by Monte-Carlo approximations. Working at the crossroads of Optimization and Sampling constitutes what we believe to be the novelty and the technicality of our framework and theoretical results.
In Section 2, we review the techniques for incremental minimization of finite sum functions based on the MM principle; specifically, we review the MISO method (Mairal, 2015), and present a class of surrogate functions expressed as an expectation over a latent space. The MISSO method is then introduced for the latter class of intractable surrogate functions requiring approximation. In Section 3, we provide the asymptotic and non-asymptotic convergence analysis for the MISSO method (and of the MISO (Mairal, 2015) one as a special case). Section 4 presents numerical applications including parameter inference for logistic regression with missing data and variational inference for two types of Bayesian neural networks. The proofs of theoretical results are reported as Supplement.
Notations. We denote J1, nK = {1, . . . , n}. Unless otherwise specified, ‖ · ‖ denotes the standard Euclidean norm and 〈· | ·〉 is the inner product in the Euclidean space. For any function f : Θ→ R, f ′(θ,d) is the directional derivative of f at θ along the direction d, i.e.,
f ′(θ,d) := lim t→0+ f(θ + td)− f(θ) t . (2)
The directional derivative is assumed to exist for the functions introduced throughout this paper.
2 INCREMENTAL MINIMIZATION OF FINITE SUM NONCONVEX FUNCTIONS
The objective function in (1) is composed of a finite sum of possibly nonsmooth and nonconvex functions. A popular approach here is to apply the MM method, which tackles (1) through alternating between two steps — (i) minimizing a surrogate function which upper bounds the original objective function; and (ii) updating the surrogate function to tighten the upper bound.
As mentioned in the introduction, the MISO method (Mairal, 2015) is developed as an iterative scheme that only updates the surrogate functions partially at each iteration. Formally, for any i ∈ J1, nK, we consider a surrogate function L̂i(θ;θ) which satisfies the assumptions (H1, H2): H1. For all i ∈ J1, nK and θ ∈ Θ, L̂i(θ;θ) is convex w.r.t. θ, and it holds
L̂i(θ;θ) ≥ Li(θ), ∀ θ ∈ Θ , (3)
where the equality holds when θ = θ.
H2. For any θi ∈ Θ, i ∈ J1, nK and some > 0, the difference function ê(θ; {θi}ni=1) := 1 n ∑n i=1 L̂i(θ;θi) − L(θ) is defined for all θ ∈ Θ and differentiable for all θ ∈ Θ, where Θ = {θ ∈ Rd, infθ′∈Θ ‖θ − θ′‖ < } is an -neighborhood set of Θ. Moreover, for some constant L, the gradient satisfies
‖∇ê(θ; {θi}ni=1)‖2 ≤ 2Lê(θ; {θi}ni=1), ∀ θ ∈ Θ . (4)
Algorithm 1 The MISO method (Mairal, 2015). 1: Input: initialization θ(0). 2: Initialize the surrogate function as A0i (θ) := L̂i(θ;θ(0)), i ∈ J1, nK.
3: for k = 0, 1, ...,Kmax do 4: Pick ik uniformly from J1, nK. 5: Update Ak+1i (θ) as:
Ak+1i (θ) = { L̂i(θ;θ(k)), if i = ik Aki (θ), otherwise.
6: Set θ(k+1) ∈ arg min θ∈Θ 1 n
∑n i=1A k+1 i (θ).
7: end for
We remark that H1 is a common assumption used for surrogate functions, see (Mairal, 2015, Section 2.3). H2 can be satisfied when the difference function ê(θ; {θi}ni=1) is L-smooth, i.e., ê is differentiable on Θ and its gradient ∇ê is LLipschitz, ∀θ ∈ Θ. H2 can be implied by applying (Razaviyayn et al., 2013, Proposition 1).
The inequality (3) implies L̂i(θ;θ) ≥ Li(θ) > −∞ for any θ ∈ Θ. The MISO method is an incremental version of the MM method, as summarized by Algorithm 1, which shows that the MISO method maintains an iteratively updated set of upper-bounding surrogate functions {Aki (θ)}ni=1 and updates the iterate via minimizing the average of the surrogate functions.
Particularly, only one out of the n surrogate functions is updated at each iteration [cf. Line 5] and the sum function 1n ∑n i=1A k+1 i (θ) is designed to be ‘easy to optimize’, which, for example, can be a sum of quadratic functions. As such, the MISO method is suitable for large-scale optimization as the computation cost per iteration is independent of n. Under H1, H2, it was shown that the MISO method converges almost surely to a stationary point of (1) (Mairal, 2015, Prop. 3.1).
We now consider the case when the surrogate functions L̂i(θ;θ) are intractable. Let Z be a measurable set, pi : Z × Θ → R+ a probability density function, ri : Θ × Θ × Z → R a measurable function and µi a σ-finite measure. We consider surrogate functions which satisfy H1, H2 and that can be expressed as an expectation, i.e.:
L̂i(θ;θ) := ∫ Z ri(θ;θ, zi)pi(zi;θ)µi(dzi) ∀ (θ,θ) ∈ Θ×Θ . (5)
Plugging (5) into the MISO method is not feasible since the update step in Step 6 involves a minimization of an expectation. Several motivating examples of (1) are given in Section 2.
In this paper, we propose the Minimization by Incremental Stochastic Surrogate Optimization (MISSO) method which replaces the expectation in (5) by Monte Carlo integration and then optimizes the objective function (1) in an incremental manner. Denote by M ∈ N the Monte Carlo batch size and let {zm ∈ Z}Mm=1 be a set of samples. These samples can be drawn (Case 1) i.i.d. from the distribution pi(·;θ) or (Case 2) from a Markov chain with stationary distribution pi(·;θ); see Section 3 for illustrations. To this end, we define the stochastic surrogate as follows:
L̃i(θ;θ, {zm}Mm=1) := 1
M M∑ m=1 ri(θ;θ, zm) , (6)
and we summarize the proposed MISSO method in Algorithm 2. Compared to the MISO method, there is a crucial difference in that the MISSO method involves two types of randomness. The first level of randomness comes from the selection of ik in Line 5. The second level of randomness stems from the set of Monte Carlo approximated functions Ãki (θ) used in lieu of Aki (θ) in Line 6 when optimizing for the next iterate θ(k). We now discuss two applications of the MISSO method.
Example 1: Maximum Likelihood Estimation for Latent Variable Model. Latent variable models (Bishop, 2006) are constructed by introducing unobserved (latent) variables which help explain the observed data. We consider n independent observations ((yi, zi), i ∈ JnK) where yi is observed and zi is latent. In this incomplete data framework, define {fi(zi,θ),θ ∈ Θ} to be the complete
Algorithm 2 The MISSO method. 1: Input: initialization θ(0); a sequence of non-negative numbers {M(k)}∞k=0. 2: For all i ∈ J1, nK, draw M(0) Monte Carlo samples with the stationary distribution pi(·;θ(0)). 3: Initialize the surrogate function as
Ã0i (θ) := L̃i(θ;θ(0), {z (0) i,m} M(0) m=1), i ∈ J1, nK .
4: for k = 0, 1, ...,Kmax do 5: Pick a function index ik uniformly on J1, nK. 6: Draw M(k) Monte Carlo samples with the stationary distribution pi(·;θ(k)). 7: Update the individual surrogate functions recursively as:
Ãk+1i (θ) =
{ L̃i(θ;θ(k), {z(k)i,m} M(k) m=1), if i = ik
Ãki (θ), otherwise.
8: Set θ(k+1) ∈ arg minθ∈Θ L̃(k+1)(θ) := 1n ∑n i=1 Ã k+1 i (θ). 9: end for
data likelihood models, i.e., the joint likelihood of the observations and latent variables. Let
gi(θ) := ∫ Z fi(zi,θ)µi(dzi), i ∈ J1, nK, θ ∈ Θ
denote the incomplete data likelihood, i.e., the marginal likelihood of the observations yi. For ease of notations, the dependence on the observations is made implicit. The maximum likelihood (ML) estimation problem sets the individual objective function Li(θ) to be the i-th negated incomplete data log-likelihood Li(θ) := − log gi(θ). Assume, without loss of generality, that gi(θ) 6= 0 for all θ ∈ Θ. We define by pi(zi,θ) := fi(zi,θ)/gi(θ) the conditional distribution of the latent variable zi given the observations yi. A surrogate function L̂i(θ;θ) satisfying H1 can be obtained through writing fi(zi,θ) = fi(zi,θ)pi(zi,θ)pi(zi,θ) and applying the Jensen inequality:
L̂i(θ;θ) = ∫ Z log ( pi(zi,θ)/fi(zi,θ) )︸ ︷︷ ︸ =ri(θ;θ,zi) pi(zi,θ)µi(dzi) . (7)
We note that H2 can also be verified for common distribution models. We can apply the MISSO method following the above specification of ri(θ;θ, zi) and pi(zi,θ).
Example 2: Variational Inference. Let ((xi, yi), i ∈ J1, nK) be i.i.d. input-output pairs and w ∈ W ⊆ Rd be a latent variable. When conditioned on the input data x = (xi, i ∈ J1, nK), the joint distribution of y = (yi, i ∈ J1, nK) and w is given by:
p(y, w|x) = π(w) ∏n i=1 p(yi|xi, w) . (8)
Our goal is to compute the posterior distribution p(w|y, x). In most cases, the posterior distribution p(w|y, x) is intractable and is approximated using a family of parametric distributions, {q(w,θ),θ ∈ Θ}. The variational inference (VI) problem (Blei et al., 2017) boils down to minimizing the Kullback-Leibler (KL) divergence between q(w,θ) and the posterior distribution p(w|y, x):
min θ∈Θ
L(θ) := KL (q(w;θ) ||p(w|y, x)) := Eq(w;θ) [ log ( q(w;θ)/p(w|y, x) )] . (9)
Using (8), we decompose L(θ) = n−1 ∑n i=1 Li(θ) + const. where:
Li(θ) := −Eq(w;θ) [ log p(yi|xi, w) ] + 1
n Eq(w;θ)
[ log q(w;θ)/π(w) ] := ri(θ) + d(θ) . (10)
Directly optimizing the finite sum objective function in (9) can be difficult. First, with n 1, evaluating the objective function L(θ) requires a full pass over the entire dataset. Second, for some
complex models, the expectations in (10) can be intractable even if we assume a simple parametric model for q(w;θ). Assume that Li is L-smooth. We apply the MISSO method with a quadratic surrogate function defined as:
L̂i(θ;θ) := Li(θ) + 〈 ∇θLi(θ) |θ − θ 〉 + L
2 ‖θ − θ‖2, (θ,θ) ∈ Θ2 . (11)
It is easily checked that the quadratic function L̂i(θ;θ) satisfies H1, H2. To compute the gradient ∇Li(θ), we apply the re-parametrization technique suggested in (Paisley et al., 2012; Kingma & Welling, 2014; Blundell et al., 2015). Let t : Rd×Θ 7→ Rd be a differentiable function w.r.t. θ ∈ Θ which is designed such that the law of w = t(z,θ) is q(·,θ), where z ∼ Nd(0, I). By (Blundell et al., 2015, Proposition 1), the gradient of −ri(·) in (10) is:
∇θEq(w;θ) [ log p(yi|xi, w) ] = Ez∼Nd(0,I) [ Jtθ(z,θ)∇w log p(yi|xi, w) ∣∣ w=t(z,θ) ] , (12)
where for each z ∈ Rd, Jtθ(z,θ) is the Jacobian of the function t(z, ·) with respect to θ evaluated at θ. In addition, for most cases, the term∇d(θ) can be evaluated in closed form as the gradient of the KL between the prior distribution π(·) and the variational candidate q(·,θ).
ri(θ;θ, z) := 〈 ∇θd(θ)− Jtθ(z,θ)∇w log p(yi|xi, w) ∣∣ w=t(z,θ) |θ − θ 〉 + L 2 ‖θ − θ‖2 . (13)
Finally, using (11) and (13), the surrogate function (6) is given by L̃i(θ;θ, {zm}Mm=1) := M−1 ∑M m=1 ri(θ;θ, zm) where {zm}Mm=1 are i.i.d samples drawn from N (0, I).
3 CONVERGENCE ANALYSIS
We now provide asymptotic and non-asymptotic convergence results of our method. Assume:
H3. For all i ∈ J1, nK, θ ∈ Θ, zi ∈ Z, ri(·;θ, zi) is convex on Θ and is lower bounded.
We are particularly interested in the constrained optimization setting where Θ is a bounded set. To this end, we control the supremum norm of the MC approximation, introduced in (6), as: H4. For the samples {zi,m}Mm=1, there exist finite constants Cr and Cgr such that
Cr := sup θ∈Θ sup M>0 1√ M Eθ [ sup θ∈Θ ∣∣∣∣∣ M∑ m=1 { ri(θ;θ, zi,m)− L̂i(θ;θ) }∣∣∣∣∣ ]
Cgr := sup θ∈Θ sup M>0
√ MEθ sup θ∈Θ ∣∣∣∣∣ 1M M∑ m=1 L̂′i(θ,θ − θ;θ)− r′i(θ,θ − θ;θ, zi,m) ‖θ − θ‖ ∣∣∣∣∣ 2
for all i ∈ J1, nK, and we denoted by Eθ[·] the expectation w.r.t. a Markov chain {zi,m}Mm=1 with initial distribution ξi(·;θ), transition kernel Πi,θ, and stationary distribution pi(·;θ).
Some intuitions behind the controlling terms: It is common in statistical and optimization problems, to deal with the manipulation and the control of random variables indexed by sets with an infinite number of elements. Here, the controlled random variable is an image of a continuous function defined as ri(θ;θ, zi,m) − L̂i(θ;θ) for all z ∈ Z and for fixed (θ,θ) ∈ Θ2. To characterize such control, we will have recourse to the notion of metric entropy (or bracketing number) as developed in (Van der Vaart, 2000; Vershynin, 2018; Wainwright, 2019). A collection of results from those references gives intuition behind our assumption H4, which is classical in empirical processes. In (Vershynin, 2018, Theorem 8.2.3), the authors recall the uniform law of large numbers:
E [ sup f∈F ∣∣∣∣∣ 1M M∑ i=1 f (zi,m)− E[f(zi)] ∣∣∣∣∣ ] ≤ CL√ M for all zi,m, i ∈ J1,MK ,
where F is a class of L-Lipschitz functions. Moreover, in (Vershynin, 2018, Theorem 8.1.3 ) and (Wainwright, 2019, Theorem 5.22), the application of the Dudley inequality yields:
E[sup f∈F |Xf −X0|] ≤ 1√ M ∫ 1 0 √ logN (F , ‖ · ‖∞, ε)dε ,
whereN (F , ‖ · ‖∞, ε) is the bracketing number and denotes the level of approximation (the bracketing number goes to infinity when → 0). Finally, in (Van der Vaart, 2000, p.271, Example), N (F , ‖ · ‖∞, ε) is bounded from above for a class of parametric functions F = fθ : θ ∈ Θ:
N (F , ‖ · ‖∞, ε) ≤ K ( diam Θ
ε
)d , for all 0 < ε < diam Θ .
The authors acknowledge that those bounds are a dramatic manifestation of the curse of dimensionality happening when sampling is needed. Nevertheless, the dependence on the dimension highly depends on the class of surrogate functions F used in our scheme, as smaller bounds on these controlling terms can be derived for simpler class of functions, such as quadratic functions.
Stationarity measure. As problem (1) is a constrained optimization task, we consider the following stationarity measure:
g(θ) := inf θ∈Θ L′(θ,θ − θ) ‖θ − θ‖ and g(θ) = g+(θ)− g−(θ) , (14)
where g+(θ) := max{0, g(θ)}, g−(θ) := −min{0, g(θ)} denote the positive and negative part of g(θ), respectively. Note that θ is a stationary point if and only if g−(θ) = 0 (Fletcher et al., 2002). Furthermore, suppose that the sequence {θ(k)}k≥0 has a limit point θ that is a stationary point, then one has limk→∞ g−(θ(k)) = 0. Thus, the sequence {θ(k)}k≥0 is said to satisfy an asymptotic stationary point condition. This is equivalent to (Mairal, 2015, Definition 2.4).
To facilitate our analysis, we define τki as the iteration index where the i-th function is last accessed in the MISSO method prior to iteration k, τk+1ik = k for instance. We define:
L̂(k)(θ) := 1n ∑n i=1L̂i(θ;θ (τki )), ê(k)(θ) := L̂(k)(θ)− L(θ), M (k) := Kmax−1∑ k=0 M −1/2 (k) . (15)
We first establish a non-asymptotic convergence rate for the MISSO method:
Theorem 1. Under H1-H4. For any Kmax ∈ N, let K be an independent discrete r.v. drawn uniformly from {0, ...,Kmax − 1} and define the following quantity:
∆(Kmax) := 2nLE[L̃ (0)(θ(0))− L̃(Kmax)(θ(Kmax))] + 4LCrM (k) .
Then we have following non-asymptotic bounds:
E [ ‖∇ê(K)(θ(K))‖2 ] ≤ ∆(Kmax)
Kmax and E[g−(θ(K))] ≤
√ ∆(Kmax)
Kmax + Cgr Kmax M (k) . (16)
Note that ∆(Kmax) is finite for any Kmax ∈ N. Iteration Complexity of MISSO. As expected, the MISSO method converges to a stationary point of (1) asymptotically and at a sublinear rate E[g(K)− ] ≤ O( √ ∆(Kmax)/Kmax). In other terms, MISSO requires O(nL/ ) iterations to reach an -stationary point when the suboptimality condition, that characterizes stationarity, is E [ ‖g−(θ(K))‖2 ] . Note that this stationarity criterion are similar to the
usual quantity used in stochastic nonconvex optimization, i.e., E [ ‖∇L(θ(K))‖2 ] . In fact, when the
optimization problem (1) is unconstrained, i.e., Θ = Rp, then E [ g(θ(K)) ] = E [ ∇L(θ(K)) ] .
Sample Complexity of MISSO. Regarding the sample complexity of our method, setting M(k) = k2/n2, as a non-decreasing sequence of integers satisfying ∑∞ k=0M −1/2 (k) < ∞, in order to keep
∆(Kmax) nL, then the MISSO method requires ∑nL/ k=0 k
2/n2 = nL3/ 3 samples to reach an -stationary point.
Furthermore,we remark that the MISO method can be analyzed in Theorem 1 as a special case of the MISSO method satisfying Cr = Cgr = 0. In this case, while the asymptotic convergence is well known from (Mairal, 2015) [cf. H4], Eq. (16) gives a non-asymptotic rate of E[g(K)− ] ≤
O( √ nL/Kmax) which is new to our best knowledge. Next, we show that under an additional assumption on the sequence of batch size M(k), the MISSO method converges almost surely to a stationary point:
Theorem 2. Under H1-H4. In addition, assume that {M(k)}k≥0 is a non-decreasing sequence of integers which satisfies ∑∞ k=0M −1/2 (k) <∞. Then:
1. the negative part of the stationarity measure converges a.s. to zero, i.e., lim k→∞ g−(θ (k))
a.s. = 0.
2. the objective value L(θ(k)) converges a.s. to a finite number L, i.e., limk→∞ L(θ(k)) a.s. = L.
In particular, the first result above shows that the sequence {θ(k)}k≥0 produced by the MISSO method satisfies an asymptotic stationary point condition.
4 NUMERICAL EXPERIMENTS
4.1 BINARY LOGISTIC REGRESSION WITH MISSING VALUES
This application follows Example 1 described in Section 2. We consider a binary regression setup, ((yi, zi), i ∈ JnK) where yi ∈ {0, 1} is a binary response and zi = (zi,j ∈ R, j ∈ JpK) is a covariate vector. The vector of covariates zi = [zi,mis, zi,obs] is not fully observed where we denote by zi,mis the missing values and zi,obs the observed covariate. It is assumed that (zi, i ∈ JnK) are i.i.d. and marginally distributed according toN (β,Ω) where β ∈ Rp and Ω is a positive definite p×pmatrix. We define the conditional distribution of the observations yi given zi = (zi,mis, zi,obs) as:
pi(yi|zi) = S(δ>z̄i)yi ( 1− S(δ>z̄i) )1−yi , (17)
where for u ∈ R, S(u) = 1/(1+e−u), δ = (δ0, · · · , δp) are the logistic parameters and z̄i = (1, zi). Here, θ = (δ,β,Ω) is the parameter to estimate. For i ∈ JnK, the complete log-likelihood reads: log fi(zi,mis,θ) ∝ yiδ>z̄i − log ( 1 + exp(δ>z̄i) ) − 1
2 log(|Ω|) + 1 2 Tr ( Ω−1(zi − β)(zi − β)> ) .
Fitting a logistic regression model on the TraumaBase dataset: We apply the MISSO method to fit a logistic regression model on the TraumaBase (http://traumabase.eu) dataset, which consists of data collected from 15 trauma centers in France, covering measurements on patients from the initial to last stage of trauma. This dataset includes information from the first stage of the trauma, namely initial observations on the patient’s accident site to the last stage being intense care at the hospital and counts more than 200 variables measured for more than 7 000 patients. Since the dataset considered is heterogeneous – coming from multiple sources with frequently missed entries – we apply the latent data model described in (17) to predict the risk of a severe hemorrhage which is one of the main cause of death after a major trauma.
Similar to (Jiang et al., 2018), we select p = 16 influential quantitative measurements, on n = 6384 patients. For the Monte Carlo sampling of zi,mis, required while running MISSO, we run a Metropolis-Hastings algorithm with the target distribution p(·|zi,obs, yi;θ(k)).
We compare in Figure 1 the convergence behavior of the estimated parameters δ and β using SAEM (Delyon et al., 1999) (with stepsize γk = 1/kα where α = 0.6 after tuning), MCEM (Wei
& Tanner, 1990) and the proposed MISSO method. For the MISSO method, we set the batch size to M(k) = 10 + k2 and we examine with selecting different number of functions in Line 5 in the method – the default settings with 1 (MISSO), 10% (MISSO10) and 50% (MISSO50) minibatches per iteration. From Figure 1, the MISSO method converges to a static value with less number of epochs than the MCEM, SAEM methods. It is worth noting that the difference among the MISSO runs for different number of selected functions demonstrates a variance-cost tradeoff. Though wall clock times are similar for all methods, they are reported in the appendix for completeness.
4.2 TRAINING BAYESIAN CNN USING MISSO
This application follows Example 2 described in Section 2. We use variational inference and the ELBO loss (10) to fit Bayesian Neural Networks on different datasets. At iteration k, minimizing the sum of stochastic surrogates defined as in (6) and (13) yields the following MISSO update — step (i) pick a function index ik uniformly on JnK; step (ii) sample a Monte Carlo batch {z(k)m } M(k) m=1 from N (0, I); and step (iii) update the parameters, with w̃ = t(θ(k−1), z(k)m ), as
µ (k) ` = µ̂ (τk) ` −
γ
n n∑ i=1 δ̂ (k) µ`,i ,
δ̂ (k) µ`,ik = − 1 M(k) M(k)∑ m=1 ∇w log p(yik |xik , w̃) +∇µ`d(θ(k−1)) ,
where µ̂(τ k)
` = 1 n ∑n i=1 µ (τki ) ` and d(θ) = n −1∑d `=1 ( − log(σ) + (σ2 + µ2`)/2− 1/2 ) .
Bayesian LeNet-5 on MNIST (LeCun et al., 1998): We apply the MISSO method to fit a Bayesian variant of LeNet-5 (LeCun et al., 1998). We train this network on the MNIST dataset (LeCun, 1998). The training set is composed of n = 55 000 handwritten digits, 28 × 28 images. Each image is labelled with its corresponding number (from zero to nine). Under the prior distribution π, see (8), the weights are assumed independent and identically distributed according to N (0, 1). We also assume that q(·;θ) ≡ N (µ, σ2I). The variational posterior parameters are thus θ = (µ, σ) where µ = (µ`, ` ∈ JdK) where d is the number of weights in the neural network. We use the re-parametrization as w = t(θ, z) = µ+ σz with z ∼ N (0, I). Bayesian ResNet-18 (He et al., 2016) on CIFAR-10 (Krizhevsky et al., 2012): We train here the Bayesian variant of the ResNet-18 neural network introduced in (He et al., 2016) on CIFAR-10. The latter dataset is composed of n = 60 000 handwritten digits, 32 × 32 colour images in 10 classes, with 6 000 images per class. As in the previous example, the weights are assumed independent and identically distributed according toN (0, I). Standard hyperparameters values found in the literature, such as the annealing constant or the number of MC samples, were used for the benchmark methods. For efficiency purpose and lower variance, the Flipout estimator (Wen et al., 2018) is used.
Experiment Results: We compare the convergence of the Monte Carlo variants of the following state of the art optimization algorithms — the ADAM (Kingma & Ba, 2015), the Momentum (Sutskever et al., 2013) and the SAG (Schmidt et al., 2017) methods versus the Bayes by Backprop (BBB) (Blundell et al., 2015) and our proposed MISSO method. For all these methods, the loss function (10) and its gradients were computed by Monte Carlo integration based on the reparametrization described above. The mini-batch of indices and MC samples are respectively set to 128 and M(k) = k. The learning rates are set to 10−3 for LeNet-5 and 10−4 for Resnet-18.
Figure 2(a) shows the convergence of the negated evidence lower bound against the number of passes over data (one pass represents an epoch). As observed, the proposed MISSO method outperforms Bayes by Backprop and Momentum, while similar convergence rates are observed with the MISSO, ADAM and SAG methods for our experiment on MNIST dataset using a Bayesian variant of LeNet5. On the other hand, the experiment conducted on CIFAR-10 (Figure 2(b)) using a much larger network, i.e., a Bayesian variant of ResNet-18 showcases the need of a well-tuned adaptive methods to reach lower training loss (and also faster). Our MISSO method is similar to the Monte Carlo variant of ADAM but slower than Adagrad optimizer. Recall that the purpose of this paper is to provide a common class of optimizers, such as VI, in order to study their convergence behaviors, and not to introduce a novel method outperforming the baselines methods. We report wall clock times for all methods in the appendix for completeness.
5 CONCLUSION
We present a unifying framework for minimizing a nonconvex and nonsmooth finite-sum objective function using incremental surrogates when the latter functions are expressed as an expectation and are intractable. Our approach covers a large class of nonconvex applications in machine learning such as logistic regression with missing values and variational inference. We provide both finitetime and asymptotic guarantees of our incremental stochastic surrogate optimization technique and illustrate our findings training a binary logistic regression with missing covariates to predict hemorrhagic shock and Bayesian variants of two Convolutional Neural Networks on benchmark datasets.
A PROOFS OF THE THEORETICAL RESULTS
A.1 PROOF OF THEOREM 1
Theorem. Under H1-H4. For anyKmax ∈ N, letK be an independent discrete r.v. drawn uniformly from {0, ...,Kmax − 1} and define the following quantity:
∆(Kmax) := 2nLE[L̃ (0)(θ(0))− L̃(Kmax)(θ(Kmax))] + 4LCrM (k) .
Then we have following non-asymptotic bounds:
E [ ‖∇ê(K)(θ(K))‖2 ] ≤ ∆(Kmax)
Kmax and E[g−(θ(K))] ≤
√ ∆(Kmax)
Kmax + Cgr Kmax M (k) .
Proof We begin by recalling the definition
L̃(k)(θ) := 1 n n∑ i=1 Ãki (θ) .
Notice that
L̃(k+1)(θ) = 1 n n∑ i=1 L̃i(θ;θ(τ k+1 i ), {z(τ k+1 i ) i,m } M (τ k+1 i ) m=1 )
= L̃(k)(θ) + 1 n
( L̃ik(θ;θ(k), {z (k) ik,m }M(k)m=1)− L̃ik(θ;θ (τkik ), {z (τkik ) ik,m } M (τk ik ) m=1 ) ) .
Furthermore, we recall that L̂(k)(θ) := 1n ∑n i=1L̂i(θ;θ (τki )), ê(k)(θ) := L̂(k)(θ)− L(θ) .
Due to H2, we have ‖∇ê(k)(θ(k))‖2 ≤ 2Lê(k)(θ(k)) . (18)
To prove the first bound in (16), using the optimality of θ(k+1), one has
L̃(k+1)(θ(k+1)) ≤ L̃(k+1)(θ(k))
= L̃(k)(θ(k)) + 1n ( L̃ik(θ(k);θ(k), {z (k) ik,m }M(k)m=1)− L̃ik(θ(k);θ (τkik ), {z (τkik ) ik,m } M (τk ik ) m=1 ) ) .
(19)
Let Fk be the filtration of random variables up to iteration k, i.e., {i`−1, {z(`−1)i`−1,m} M(`−1) m=1 ,θ
(`)}k`=1. We observe that the conditional expectation evaluates to
Eik [ E [ L̃ik(θ(k);θ(k), {z (k) ik,m }M(k)m=1)|Fk, ik ] |Fk ]
= L(θ(k)) + Eik [ E [ 1 M(k) M(k)∑ m=1 rik(θ (k);θ(k), z (k) ik,m )− L̂ik(θ(k);θ(k))|Fk, ik ] |Fk ] ≤ L(θ(k)) + Cr√ M(k) ,
where the last inequality is due to H4. Moreover,
E [ L̃ik(θ(k);θ (τkik ), {z (τkik ) ik,m } M (τk ik ) m=1 )|Fk ] = 1
n n∑ i=1 L̃i(θ(k);θ(τ k i ), {z(τ k i ) i,m } M (τk i ) m=1 ) = L̃(k)(θ(k)) .
Taking the conditional expectations on both sides of (19) and re-arranging terms give:
L̃(k)(θ(k))− L(θ(k)) ≤ nE [ L̃(k)(θ(k))− L̃(k+1)(θ(k+1))|Fk ] +
Cr√ M(k) . (20)
Proceeding from (20), we observe the following lower bound for the left hand side
L̃(k)(θ(k))− L(θ(k)) (a)= L̃(k)(θ(k))− L̂(k)(θ(k)) + ê(k)(θ(k)) (b)
≥ L̃(k)(θ(k))− L̂(k)(θ(k)) + 1 2L ‖∇ê(k)(θ(k))‖2
= 1
n n∑ i=1 { 1 M(τki ) M (τk i )∑ m=1 ri(θ (k);θ(τ k i ), z (τki ) i,m )− L̂i(θ (k);θ(τ k i )) }
︸ ︷︷ ︸ :=−δ(k)(θ(k))
+ 1
2L ‖∇ê(k)(θ(k))‖2 ,
where (a) is due to ê(k)(θ(k)) = 0 [cf. H1], (b) is due to (18) and we have defined the summation in the last equality as −δ(k)(θ(k)). Substituting the above into (20) yields
‖∇ê(k)(θ(k))‖2
2L ≤ nE
[ L̃(k)(θ(k))− L̃(k+1)(θ(k+1))|Fk ] +
Cr√ M(k) + δ(k)(θ(k)) . (21)
Observe the following upper bound on the total expectations:
E [ δ(k)(θ(k)) ] ≤ E [ 1 n n∑ i=1 Cr√ M(τki ) ] ,
which is due to H4. It yields
E [ ‖∇ê(k)(θ(k))‖2 ] ≤ 2nLE [ L̃(k)(θ(k))− L̃(k+1)(θ(k+1)) ] +
2LCr√ M(k) + 1 n n∑ i=1 E [ 2LCr√
M(τki )
] .
Finally, for anyKmax ∈ N, we letK be a discrete r.v. that is uniformly drawn from {0, 1, ...,Kmax− 1}. Using H4 and taking total expectations lead to
E [ ‖∇ê(K)(θ(K))‖2 ] = 1
Kmax Kmax−1∑ k=0 E[‖∇ê(k)(θ(k))‖2]
≤ 2nLE[L̃ (0)(θ(0))− L̃(Kmax)(θ(Kmax))]
Kmax + 2LCr Kmax Kmax−1∑ k=0 E [ 1√ M(k) + 1 n n∑ i=1 1√ M(τki ) ] . (22)
For all i ∈ J1, nK, the index i is selected with a probability equal to 1n when conditioned independently on the past. We observe:
E[M−1/2 (τki ) ] = k∑ j=1 1 n ( 1− 1 n )j−1 M −1/2 (k−j) (23)
Taking the sum yields: Kmax−1∑ k=0 E[M−1/2 (τki ) ] = Kmax−1∑ k=0 k∑ j=1 1 n ( 1− 1 n )j−1 M −1/2 (k−j) = Kmax−1∑ k=0 k−1∑ l=0 1 n ( 1− 1 n )k−(l+1) M −1/2 (l)
= Kmax−1∑ l=0 M −1/2 (l) Kmax−1∑ k=l+1 1 n ( 1− 1 n )k−(l+1) ≤ Kmax−1∑ l=0 M −1/2 (l) ,
(24)
where the last inequality is due to upper bounding the geometric series. Plugging this back into (22) yields
E [ ‖∇ê(K)(θ(K))‖2 ] = 1
Kmax Kmax−1∑ k=0 E[‖∇ê(k)(θ(k))‖2]
≤ 2nLE[L̃ (0)(θ(0))− L̃(Kmax)(θ(Kmax))]
Kmax +
1
Kmax Kmax−1∑ k=0 4LCr√ M(k) = ∆(Kmax) Kmax .
This concludes our proof for the first inequality in (16).
To prove the second inequality of (16), we define the shorthand notations g(k) := g(θ(k)), g(k)− := −min{0, g(k)}, g(k)+ := max{0, g(k)}. We observe that
g(k) = inf θ∈Θ L′(θ(k),θ − θ(k)) ‖θ(k) − θ‖
= inf θ∈Θ
{ 1 n ∑n i=1 L̂ ′
i(θ (k),θ − θ(k);θ(τki )) ‖θ(k) − θ‖
− 〈 ∇ê(k)(θ(k)) |θ − θ(k) 〉 ‖θ(k) − θ‖ } ≥ −‖∇ê(k)(θ(k))‖+ inf
θ∈Θ
1 n ∑n i=1 L̂ ′
i(θ (k),θ − θ(k);θ(τki )) ‖θ(k) − θ‖ ,
where the last inequality is due to the Cauchy-Schwarz inequality and we have defined L̂′i(θ,d;θ(τ k i )) as the directional derivative of L̂i(·;θ(τ k i )) at θ along the direction d. Moreover, for any θ ∈ Θ, 1
n n∑ i=1 L̂ ′ i(θ (k),θ − θ(k);θ(τ k i ))
= L̃(k) ′ (θ(k),θ − θ(k))︸ ︷︷ ︸
≥0
−L̃(k) ′ (θ(k),θ − θ(k)) + 1
n n∑ i=1 L̂ ′ i(θ (k),θ − θ(k);θ(τ k i ))
≥ 1 n n∑ i=1 { L̂ ′ i(θ (k),θ − θ(k);θ(τ k i ))− 1 M(τki ) M (τk i )∑ m=1 r′i(θ (k),θ − θ(k);θ(τ k i ), z (τki ) i,m ) } ,
where the inequality is due to the optimality of θ(k) and the convexity of L̃(k)(θ) [cf. H3]. Denoting a scaled version of the above term as:
(k)(θ) :=
1 n ∑n i=1 { 1
M (τk i )
∑M(τk i )
m=1 r ′ i(θ
(k),θ − θ(k);θ(τki ), z(τ k i ) i,m )− L̂ ′ i(θ (k),θ − θ(k);θ(τki )) } ‖θ(k) − θ‖ .
We have g(k) ≥ −‖∇ê(k)(θ(k))‖+ inf
θ∈Θ (− (k)(θ)) ≥ −‖∇ê(k)(θ(k))‖ − sup θ∈Θ | (k)(θ)| . (25)
Since g(k) = g(k)+ − g (k) − and g (k) + g (k) − = 0, this implies
g (k) − ≤ ‖∇ê(k)(θ(k))‖+ sup θ∈Θ | (k)(θ)| . (26)
Consider the above inequality when k = K, i.e., the random index, and taking total expectations on both sides gives
E[g(K)− ] ≤ E[‖∇ê(K)(θ(K))‖] + E[sup θ∈Θ (K)(θ)] .
We note that ( E[‖∇ê(K)(θ(K))‖] )2 ≤ E[‖∇ê(K)(θ(K))‖2] ≤ ∆(Kmax)
Kmax ,
where the first inequality is due to the convexity of (·)2 and the Jensen’s inequality, and
E[sup θ∈Θ
(K)(θ)] = 1
Kmax Kmax∑ k=0 E[sup θ∈Θ (k)(θ)] (a) ≤ Cgr Kmax Kmax−1∑ k=0 E [ 1 n n∑ i=1 M −1/2 (τki ) ] (b) ≤ Cgr Kmax Kmax−1∑ k=0 M −1/2 (k) ,
where (a) is due to H4 and (b) is due to (24). This implies
E[g(K)− ] ≤
√ ∆(Kmax)
Kmax + Cgr Kmax Kmax−1∑ k=0 M −1/2 (k) ,
and concludes the proof of the theorem.
A.2 PROOF OF THEOREM 2
Theorem. Under H1-H4. In addition, assume that {M(k)}k≥0 is a non-decreasing sequence of integers which satisfies ∑∞ k=0M −1/2 (k) <∞. Then:
1. the negative part of the stationarity measure converges a.s. to zero, i.e., lim k→∞ g−(θ (k))
a.s. = 0.
2. the objective value L(θ(k)) converges a.s. to a finite number L, i.e., limk→∞ L(θ(k)) a.s. = L.
Proof We apply the following auxiliary lemma which proof can be found in Appendix A.3 for the readability of the current proof:
Lemma 1. Let (Vk)k≥0 be a non negative sequence of random variables such that E[V0] < ∞. Let (Xk)k≥0 a non negative sequence of random variables and (Ek)k≥0 be a sequence of random variables such that ∑∞ k=0 E[|Ek|] <∞. If for any k ≥ 1:
Vk ≤ Vk−1 −Xk−1 + Ek−1 (27)
then:
(i) for all k ≥ 0, E[Vk] <∞ and the sequence (Vk)k≥0 converges a.s. to a finite limit V∞.
(ii) the sequence (E[Vk])k≥0 converges and lim k→∞ E[Vk] = E[V∞].
(iii) the series ∑∞ k=0Xk converges almost surely and ∑∞ k=0 E[Xk] <∞.
We proceed from (19) by re-arranging terms and observing that L̂(k+1)(θ(k+1)) ≤ L̂(k)(θ(k))− 1n ( L̂ik(θ(k);θ (τkik ))− L̂ik(θ(k);θ(k)) ) − ( L̃(k+1)(θ(k+1))− L̂(k+1)(θ(k+1)) ) + ( L̃(k)(θ(k))− L̂(k)(θ(k))
) + 1n ( L̃ik(θ(k);θ(k), {z (k) ik,m }M(k)m=1)− L̂ik(θ(k);θ(k))
) + 1n ( L̂ik(θ(k);θ (τkik ))− L̃ik(θ(k);θ (τkik ), {z (τkik ) ik,m } M (τk ik ) m=1 ) ) .
Our idea is to apply Lemma 1. Under H1, the finite sum of surrogate functions L̂(k)(θ), defined in (15), is lower bounded by a constant ck > −∞ for any θ. To this end, we observe that
Vk := L̂(k)(θ(k))− inf k≥0 ck ≥ 0 (28)
is a non-negative random variable.
Secondly, under H1, the following random variable is non-negative
Xk := 1 n ( L̂ik(θ (τkik );θ(k))− L̂ik(θ(k);θ(k)) ) ≥ 0 . (29)
Thirdly, we define Ek = − ( L̃(k+1)(θ(k+1))− L̂(k+1)(θ(k+1)) ) + ( L̃(k)(θ(k))− L̂(k)(θ(k)) ) + 1n ( L̃ik(θ(k);θ(k), {z (k) ik,m }M(k)m=1)− L̂ik(θ(k);θ(k))
) + 1n ( L̂ik(θ(k);θ (τkik ))− L̃ik(θ(k);θ (τkik ), {z (τkik ) ik,m } M (τk ik ) m=1 ) ) .
(30)
Note that from the definitions (28), (29), (30), we have Vk+1 ≤ Vk −Xk + Ek for any k ≥ 1. Under H4, we observe that
E [ |L̃ik(θ(k);θ(k), {z (k) ik,m }M(k)m=1)− L̂ik(θ(k);θ(k))| ] ≤ CrM−1/2(k)
E [∣∣∣L̂ik(θ(k);θ(τkik ))− L̃ik(θ(k);θ(τkik ), {z(τkik )ik,m }M(τkik )m=1 )∣∣∣] ≤ CrE[M−1/2(τkik ) ]
E [ |L̃(k)(θ(k))− L̂(k)(θ(k))| ] ≤ 1n ∑n i=1CrE [ M −1/2 (τki ) ] Therefore,
E [ |Ek| ] ≤ Crn ( M −1/2 (k) + E [ M −1/2 (τkik ) + ∑n i=1 { M −1/2 (τki ) +M −1/2 (τk+1i ) }]) .
Using (24) and the assumption on the sequence {M(k)}k≥0, we obtain that ∞∑ k=0 E [ |Ek| ] < Cr n (2 + 2n) ∞∑ k=0 M −1/2 (k) <∞.
Therefore, the conclusions in Lemma 1 hold. Precisely, we have ∑∞ k=0Xk < ∞ and∑∞
k=0 E[Xk] <∞ almost surely. Note that this implies
∞ > ∞∑ k=0 E[Xk] = 1 n ∞∑ k=0 E [ L̂ik(θ(k);θ (τkik ))− L̂ik(θ(k);θ(k)) ] = 1
n ∞∑ k=0 E [ L̂(k)(θ(k))− L(θ(k)) ] = 1 n ∞∑ k=0 E [ ê(k)(θ(k)) ] .
Since ê(k)(θ(k)) ≥ 0, the above implies
lim k→∞
ê(k)(θ(k)) = 0 a.s. (31)
and subsequently applying (18), we have limk→∞ ‖ê(k)(θ(k))‖ = 0 almost surely. Finally, it follows from (18) and (26) that
lim k→∞
g (k) − ≤ lim
k→∞
√ 2L √ ê(k)(θ(k)) + lim
k→∞ sup θ∈Θ | (k)(θ)| = 0 , (32)
where the last equality holds almost surely due to the fact that ∑∞ k=0 E[supθ∈Θ | (k)(θ)|] < ∞. This concludes the asymptotic convergence of the MISSO method.
Finally, we prove thatL(θ(k)) converges almost surely. As a consequence of Lemma 1, it is clear that {Vk}k≥0 converges almost surely and so is {L̂(k)(θ(k))}k≥0, i.e., we have limk→∞ L̂(k)(θ(k)) = L. Applying (31) implies that
L = lim k→∞ L̂(k)(θ(k)) = lim k→∞ L(θ(k)) a.s.
This shows that L(θ(k)) converges almost surely to L.
A.3 PROOF OF LEMMA 1
Lemma. Let (Vk)k≥0 be a non negative sequence of random variables such that E[V0] < ∞. Let (Xk)k≥0 a non negative sequence of random variables and (Ek)k≥0 be a sequence of random variables such that ∑∞ k=0 E[|Ek|] <∞. If for any k ≥ 1:
Vk ≤ Vk−1 −Xk−1 + Ek−1
then:
(i) for all k ≥ 0, E[Vk] <∞ and the sequence (Vk)k≥0 converges a.s. to a finite limit V∞.
(ii) the sequence (E[Vk])k≥0 converges and lim k→∞ E[Vk] = E[V∞].
(iii) the series ∑∞ k=0Xk converges almost surely and ∑∞ k=0 E[Xk] <∞.
Proof We first show that for all k ≥ 0, E[Vk] <∞. Note indeed that:
0 ≤ Vk ≤ V0 − k∑ j=1 Xj + k∑ j=1 Ej ≤ V0 + k∑ j=1 Ej , (33)
showing that E[Vk] ≤ E[V0] + E [∑k j=1Ej ] <∞. Since 0 ≤ Xk ≤ Vk−1 − Vk + Ek we also obtain for all k ≥ 0, E[Xk] < ∞. Moreover, since E [∑∞ j=1 |Ej | ] <∞, the series ∑∞ j=1Ej converges a.s. We may therefore define:
Wk = Vk + ∞∑ j=k+1 Ej (34)
Note that E[|Wk|] ≤ E[Vk] + E [∑∞ j=k+1 |Ej | ] <∞. For all k ≥ 1, we get:
Wk ≤ Vk−1 −Xk + ∞∑ j=k Ej ≤Wk−1 −Xk ≤Wk−1
E[Wk] ≤ E[Wk−1]− E[Xk] .
(35)
Hence the sequences (Wk)k≥0 and (E[Wk])k≥0 are non increasing. Since for all k ≥ 0, Wk ≥ − ∑∞ j=1 |Ej | > −∞ and E[Wk] ≥ − ∑∞ j=1 E[|Ej |] > −∞, the (random) sequence (Wk)k≥0 converges a.s. to a limitW∞ and the (deterministic) sequence (E[Wk])k≥0 converges to a limit w∞. Since |Wk| ≤ V0 + ∑∞ j=1 |Ej |, the Fatou lemma implies that:
E[lim inf k→∞ |Wk|] = E[|W∞|] ≤ lim inf k→∞ E[|Wk|] ≤ E[V0] + ∞∑ j=1 E[|Ej |] <∞ , (36)
showing that the random variable W∞ is integrable.
In the sequel, set Uk ,W0 −Wk. By construction we have for all k ≥ 0, Uk ≥ 0, Uk ≤ Uk+1 and E[Uk] ≤ E[|W0|] + E[|Wk|] <∞ and by the monotone convergence theorem, we get:
lim k→∞ E[Uk] = E[ lim k→∞ Uk] . (37)
Finally, we have:
lim k→∞ E[Uk] = E[W0]− w∞ and E[ lim k→∞ Uk] = E[W0]− E[W∞] . (38)
showing that E[W∞] = w∞ and concluding the proof of (ii). Moreover, using (35) we have that Wk ≤Wk−1 −Xk which yields:
∞∑ j=1 Xj ≤W0 −W∞ <∞ ,
∞∑ j=1 E[Xj ] ≤ E[W0]− w∞ <∞ , (39)
an concludes the proof of the lemma.
B PRACTICAL DETAILS FOR THE BINARY LOGISTIC REGRESSION ON THE TRAUMABASE
B.1 TRAUMABASE DATASET QUANTITATIVE VARIABLES
The list of the 16 quantitative variables we use in our experiments are as follows — age, weight, height, BMI (Body Mass Index), the Glasgow Coma Scale, the Glasgow Coma Scale motor component, the minimum systolic blood pressure, the minimum diastolic blood pressure, the maximum
number of heart rate (or pulse) per unit time (usually a minute), the systolic blood pressure at arrival of ambulance, the diastolic blood pressure at arrival of ambulance, the heart rate at arrival of ambulance, the capillary Hemoglobin concentration, the oxygen saturation, the fluid expansion colloids, the fluid expansion cristalloids, the pulse pressure for the minimum value of diastolic and systolic blood pressure, the pulse pressure at arrival of ambulance.
B.2 METROPOLIS-HASTINGS ALGORITHM
During the simulation step of the MISSO method, the sampling from the target distribution π(zi,mis;θ) := p(zi,mis|zi,obs, yi;θ) is performed using a Metropolis-Hastings (MH) algorithm (Meyn & Tweedie, 2012) with proposal distribution q(zi,mis; δ) := p(zi,mis|zi,obs; δ) where θ = (β,Ω) and δ = (ξ,Σ). The parameters of the Gaussian conditional distribution of zi,mis|zi,obs read:
\xi = \beta_{\rm mis} + \Omega_{\rm mis,obs}\,\Omega_{\rm obs,obs}^{-1}(z_{i,\rm obs} - \beta_{\rm obs}) , \qquad \Sigma = \Omega_{\rm mis,mis} - \Omega_{\rm mis,obs}\,\Omega_{\rm obs,obs}^{-1}\,\Omega_{\rm obs,mis} ,
where we have used the Schur Complement of Ωobs,obs in Ω and noted βmis (resp. βobs) the missing (resp. observed) elements of β. The MH algorithm is summarized in Algorithm 3.
Algorithm 3 MH algorithm
1: Input: initialization $z_{i,\rm mis,0} \sim q(z_{i,\rm mis}; \delta)$
2: for $m = 1, \cdots, M$ do
3: Sample $z_{i,\rm mis,m} \sim q(z_{i,\rm mis}; \delta)$
4: Sample $u \sim \mathcal{U}([0, 1])$
5: Compute the ratio $r = \dfrac{\pi(z_{i,\rm mis,m};\theta)/q(z_{i,\rm mis,m};\delta)}{\pi(z_{i,\rm mis,m-1};\theta)/q(z_{i,\rm mis,m-1};\delta)}$
6: if $u < r$ then
7: Accept $z_{i,\rm mis,m}$
8: else
9: $z_{i,\rm mis,m} \leftarrow z_{i,\rm mis,m-1}$
10: end if
11: end for
12: Output: $z_{i,\rm mis,M}$
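For concreteness, the following is a minimal Python sketch of this independence sampler; the callables `log_pi`, `log_q` and `sample_q` (log-densities and sampler of the target and proposal) are assumptions of this illustration, and the acceptance ratio is computed in log-space for numerical stability.

```python
import numpy as np

def mh_independence_sampler(log_pi, log_q, sample_q, M, rng=None):
    """Independence Metropolis-Hastings: propose from q, target pi (Algorithm 3).

    log_pi, log_q: callables returning log-densities of target and proposal.
    sample_q: callable drawing one proposal z ~ q.
    Returns the M-th state of the chain.
    """
    rng = rng or np.random.default_rng()
    z = sample_q()                          # line 1: initialize from the proposal
    for _ in range(M):
        z_new = sample_q()                  # line 3: propose independently of z
        # line 5: log of the ratio r = [pi(z_new)/q(z_new)] / [pi(z)/q(z)]
        log_r = (log_pi(z_new) - log_q(z_new)) - (log_pi(z) - log_q(z))
        if np.log(rng.uniform()) < log_r:   # lines 4, 6-7: accept the proposal
            z = z_new
        # line 9: otherwise keep the previous state (implicit)
    return z
```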
B.3 MISSO UPDATE
Choice of surrogate function for MISO: We recall the MISO deterministic surrogate defined in (7):
\widehat{\mathcal{L}}_i(\theta;\bar\theta) = \int_Z \log\big( p_i(z_{i,\rm mis},\bar\theta) / f_i(z_{i,\rm mis},\theta) \big)\, p_i(z_{i,\rm mis},\bar\theta)\,\mu_i(dz_{i,\rm mis}) ,
where $\theta = (\delta, \beta, \Omega)$ and $\bar\theta = (\bar\delta, \bar\beta, \bar\Omega)$.
Surrogate function decomposition: We adapt this surrogate to our missing covariates problem and decompose the term depending on $\theta$, while $\bar\theta$ is held fixed, into the two following parts, leading to
\widehat{\mathcal{L}}_i(\theta;\bar\theta) = -\int_Z \log f_i(z_{i,\rm mis}, z_{i,\rm obs}, \theta)\, p_i(z_{i,\rm mis},\bar\theta)\,\mu_i(dz_{i,\rm mis})
= -\int_Z \log\big[ p_i(y_i|z_{i,\rm mis}, z_{i,\rm obs}, \delta)\, p_i(z_{i,\rm mis}, \beta, \Omega) \big]\, p_i(z_{i,\rm mis},\bar\theta)\,\mu_i(dz_{i,\rm mis})
= \underbrace{-\int_Z \log p_i(y_i|z_{i,\rm mis}, z_{i,\rm obs}, \delta)\, p_i(z_{i,\rm mis},\bar\theta)\,\mu_i(dz_{i,\rm mis})}_{=\widehat{\mathcal{L}}^{(1)}_i(\delta,\bar\theta)} \ \underbrace{-\int_Z \log p_i(z_{i,\rm mis}, \beta, \Omega)\, p_i(z_{i,\rm mis},\bar\theta)\,\mu_i(dz_{i,\rm mis})}_{=\widehat{\mathcal{L}}^{(2)}_i(\beta,\Omega,\bar\theta)} . \qquad (40)
The mean $\beta$ and the covariance $\Omega$ of the latent structure can be estimated in closed form by minimizing the sum of the MISSO surrogates $\widetilde{\mathcal{L}}^{(2)}_i(\beta,\Omega,\bar\theta, \{z_m\}_{m=1}^{M})$, defined as the MC approximations of $\widehat{\mathcal{L}}^{(2)}_i(\beta,\Omega,\bar\theta)$, for all i ∈ JnK.
We thus keep the surrogate L̂(2)i (β,Ω,θ) as it is, and consider the following quadratic approximation of L̂(1)i (δ,θ) to estimate the vector of logistic parameters δ:
\widehat{\mathcal{L}}^{(1)}_i(\bar\delta,\bar\theta) - \Big( \int_Z \nabla \log p_i(y_i|z_{i,\rm mis}, z_{i,\rm obs}, \delta)\big|_{\delta=\bar\delta}\, p_i(z_{i,\rm mis},\bar\theta)\,\mu_i(dz_{i,\rm mis}) \Big)^{\!\top} (\delta - \bar\delta)
- \frac{1}{2} (\delta - \bar\delta)^{\top} \Big( \int_Z \nabla^2 \log p_i(y_i|z_{i,\rm mis}, z_{i,\rm obs}, \bar\delta)\, p_i(z_{i,\rm mis},\bar\theta)\,\mu_i(dz_{i,\rm mis}) \Big) (\delta - \bar\delta) .
Recall that:
\nabla \log p_i(y_i|z_{i,\rm mis}, z_{i,\rm obs}, \delta) = z_i \big( y_i - S(\delta^\top z_i) \big) , \qquad \nabla^2 \log p_i(y_i|z_{i,\rm mis}, z_{i,\rm obs}, \delta) = -z_i z_i^\top \dot S(\delta^\top z_i) ,
where $\dot S(u)$ is the derivative of $S(u)$. Note that $\dot S(u) \le 1/4$ and, since for all i ∈ JnK the $p \times p$ matrix $z_i z_i^\top$ is positive semi-definite, we can assume that:
L1. For all i ∈ JnK and $\epsilon > 0$, there exists, for all $z_i \in Z$, a positive definite matrix $H_i(z_i) := \frac{1}{4}(z_i z_i^\top + \epsilon\, \mathrm{Id})$ such that for all $\delta \in \mathbb{R}^p$, $z_i z_i^\top \dot S(\delta^\top z_i) \preceq H_i(z_i)$.
Then, we use, for all i ∈ JnK, the following surrogate function to estimate δ:
\bar{\mathcal{L}}^{(1)}_i(\delta,\bar\theta) = \widehat{\mathcal{L}}^{(1)}_i(\bar\delta,\bar\theta) - D_i^\top(\delta - \bar\delta) + \frac{1}{2}(\delta - \bar\delta)^\top H_i (\delta - \bar\delta) , \qquad (41)
where:
D_i = \int_Z \nabla \log p_i(y_i|z_{i,\rm mis}, z_{i,\rm obs}, \delta)\big|_{\delta=\bar\delta}\, p_i(z_{i,\rm mis},\bar\theta)\,\mu_i(dz_{i,\rm mis}) , \qquad H_i = \int_Z H_i(z_{i,\rm mis})\, p_i(z_{i,\rm mis},\bar\theta)\,\mu_i(dz_{i,\rm mis}) .
Finally, at iteration k, the total surrogate is:
\widetilde{\mathcal{L}}^{(k)}(\theta) = \frac{1}{n}\sum_{i=1}^{n} \widetilde{\mathcal{L}}_i\big(\theta, \theta^{(\tau_i^k)}, \{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big)
= \frac{1}{n}\sum_{i=1}^{n} \widetilde{\mathcal{L}}^{(2)}_i\big(\beta,\Omega, \theta^{(\tau_i^k)}, \{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big) - \frac{1}{n}\sum_{i=1}^{n} \big(\widetilde{D}_i^{(\tau_i^k)}\big)^{\!\top} (\delta - \delta^{(\tau_i^k)}) + \frac{1}{2n}\sum_{i=1}^{n} (\delta - \delta^{(\tau_i^k)})^\top\, \widetilde{H}_i^{(\tau_i^k)}\, (\delta - \delta^{(\tau_i^k)}) , \qquad (42)
where for all i ∈ JnK:
\widetilde{D}_i^{(\tau_i^k)} = \frac{1}{M_{(\tau_i^k)}} \sum_{m=1}^{M_{(\tau_i^k)}} z_{i,m}^{(\tau_i^k)} \Big( y_i - S\big( (\delta^{(\tau_i^k)})^\top z_{i,m}^{(\tau_i^k)} \big) \Big) , \qquad \widetilde{H}_i^{(\tau_i^k)} = \frac{1}{4 M_{(\tau_i^k)}} \sum_{m=1}^{M_{(\tau_i^k)}} z_{i,m}^{(\tau_i^k)} \big( z_{i,m}^{(\tau_i^k)} \big)^{\!\top} .
Minimizing the total surrogate (42) boils down to performing a quasi-Newton step. It is perhaps sensible to apply some diagonal loading which is perfectly compatible with the surrogate interpretation we just gave.
The logistic parameters are estimated as follows:
\delta^{(k)} = \arg\min_{\delta\in\Theta} \frac{1}{n}\sum_{i=1}^{n} \widetilde{\mathcal{L}}^{(1)}_i\big(\delta, \theta^{(\tau_i^k)}, \{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big) ,
where $\widetilde{\mathcal{L}}^{(1)}_i(\delta, \theta^{(\tau_i^k)}, \{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}})$ is the MC approximation of the MISO surrogate defined in (41), which leads to the following quasi-Newton step:
\delta^{(k)} = \frac{1}{n}\sum_{i=1}^{n} \delta^{(\tau_i^k)} - \big(\widetilde{H}^{(k)}\big)^{-1} \widetilde{D}^{(k)} ,
with $\widetilde{D}^{(k)} = \frac{1}{n}\sum_{i=1}^{n} \widetilde{D}_i^{(\tau_i^k)}$ and $\widetilde{H}^{(k)} = \frac{1}{n}\sum_{i=1}^{n} \widetilde{H}_i^{(\tau_i^k)}$.
MISSO updates: At the $k$-th iteration, after initializing the latent variables $(z_i^{(0)})$ for all i ∈ JnK, the MISSO algorithm consists in picking an index $i_k$ uniformly on JnK, completing the observations by sampling a Monte Carlo batch $\{z_{i_k,\rm mis,m}^{(k)}\}_{m=1}^{M_{(k)}}$ of missing values from the conditional distribution $p(z_{i_k,\rm mis}|z_{i_k,\rm obs}, y_{i_k};\theta^{(k-1)})$ using an MCMC sampler, and computing the estimated parameters as follows:
\beta^{(k)} = \arg\min_{\beta\in\Theta} \frac{1}{n}\sum_{i=1}^{n} \widetilde{\mathcal{L}}^{(2)}_i\big(\beta,\Omega^{(k)}, \theta^{(\tau_i^k)}, \{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big) = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{M_{(\tau_i^k)}} \sum_{m=1}^{M_{(\tau_i^k)}} z_{i,m}^{(k)} ,
\Omega^{(k)} = \arg\min_{\Omega\in\Theta} \frac{1}{n}\sum_{i=1}^{n} \widetilde{\mathcal{L}}^{(2)}_i\big(\beta^{(k)},\Omega, \theta^{(\tau_i^k)}, \{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big) = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{M_{(\tau_i^k)}} \sum_{m=1}^{M_{(\tau_i^k)}} w_{i,m}^{(k)} ,
\delta^{(k)} = \frac{1}{n}\sum_{i=1}^{n} \delta^{(\tau_i^k)} - \big(\widetilde{H}^{(k)}\big)^{-1} \widetilde{D}^{(k)} , \qquad (43)
where $z_{i,m}^{(k)} = (z_{i,\rm mis,m}^{(k)}, z_{i,\rm obs})$ is composed of a simulated and an observed part, $\widetilde{D}^{(k)} = \frac{1}{n}\sum_{i=1}^{n} \widetilde{D}_i^{(\tau_i^k)}$, $\widetilde{H}^{(k)} = \frac{1}{n}\sum_{i=1}^{n} \widetilde{H}_i^{(\tau_i^k)}$ and $w_{i,m}^{(k)} = z_{i,m}^{(k)} (z_{i,m}^{(k)})^\top - \beta^{(k)} (\beta^{(k)})^\top$. Besides, $\widetilde{\mathcal{L}}^{(1)}_i(\delta,\bar\theta, \{z_m\}_{m=1}^{M})$ and $\widetilde{\mathcal{L}}^{(2)}_i(\beta,\Omega,\bar\theta, \{z_m\}_{m=1}^{M})$ are defined as the MC approximations of $\widehat{\mathcal{L}}^{(1)}_i(\delta,\bar\theta)$ and $\widehat{\mathcal{L}}^{(2)}_i(\beta,\Omega,\bar\theta)$, for all i ∈ JnK, as components of the surrogate function (40).
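Putting the pieces together, a minimal Python sketch of one MISSO iteration (43); it reuses the hypothetical `mc_surrogate_terms` helper above, and the per-index storage of statistics as well as the `sample_missing` MCMC routine are assumptions of this illustration.

```python
import numpy as np

def misso_step(i_k, state, y, sample_missing):
    """One MISSO update (43): refresh index i_k, then closed-form minimizers.

    state stores the current (delta, beta, omega) and, for every index i, the
    statistics computed when i was last drawn: z_mean (n, p), zz_mean (n, p, p),
    D (n, p), H (n, p, p) and delta_old (n, p).
    sample_missing(i, delta, beta, omega) returns an (M, p) completed MCMC batch.
    """
    delta, beta, omega = state["delta"], state["beta"], state["omega"]
    z = sample_missing(i_k, delta, beta, omega)        # Monte Carlo batch for i_k
    state["z_mean"][i_k] = z.mean(axis=0)
    state["zz_mean"][i_k] = (z.T @ z) / len(z)
    state["D"][i_k], state["H"][i_k] = mc_surrogate_terms(z, y[i_k], delta)
    state["delta_old"][i_k] = delta
    # Closed-form minimizers of the averaged surrogates.
    beta_new = np.mean(state["z_mean"], axis=0)
    omega_new = np.mean(state["zz_mean"], axis=0) - np.outer(beta_new, beta_new)
    # Quasi-Newton step of (43): mean of anchors minus H^{-1} D.
    delta_new = (np.mean(state["delta_old"], axis=0)
                 - np.linalg.solve(np.mean(state["H"], axis=0),
                                   np.mean(state["D"], axis=0)))
    state.update(delta=delta_new, beta=beta_new, omega=omega_new)
    return state
```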
B.4 WALL CLOCK TIME
We provide in Table 1 the running time of each method plotted in Figure 1, used to train a logistic regression with missing values on the TraumaBase dataset (p = 16 influential quantitative measurements on n = 6384 patients).
The running times are roughly the same, since the computational complexity per epoch is similar for each method. We note a slight overhead when using the MISSO method with a batch size of 1, as our code, implemented in R, is not fully optimized or parallelized. Yet, when the batch size tends to 100%, we recover the duration of MCEM, which is consistent with the fact that MISSO with a full batch update boils down to the MCEM algorithm.
Figure 3 plots the estimated parameters for the logistic regression example against the elapsed time (in seconds).
C PRACTICAL DETAILS FOR THE INCREMENTAL VARIATIONAL INFERENCE
C.1 NEURAL NETWORKS ARCHITECTURE
Bayesian LeNet-5 Architecture: We describe in Table 2 the architecture of the Convolutional Neural Network introduced in (LeCun et al., 1998) and trained on MNIST:
Bayesian ResNet-18 Architecture: We describe in Table 3 the architecture of the Resnet-18 we train on CIFAR-10:
C.2 ALGORITHMS UPDATES
First, we initialize the means µ(0)` for ` ∈ JdK and variance estimates σ(0). At iteration k, minimizing the sum of stochastic surrogates defined as in (6) and (13) yields the following MISSO update —
step (i) pick a function index ik uniformly on JnK; step (ii) sample a Monte Carlo batch {z(k)m } M(k) m=1 from N (0, I); and step (iii) update the parameters as
\mu_\ell^{(k)} = \frac{1}{n}\sum_{i=1}^{n} \mu_\ell^{(\tau_i^k)} - \frac{\gamma}{n}\sum_{i=1}^{n} \hat\delta^{(k)}_{\mu_\ell,i} \quad \text{and} \quad \sigma^{(k)} = \frac{1}{n}\sum_{i=1}^{n} \sigma^{(\tau_i^k)} - \frac{\gamma}{n}\sum_{i=1}^{n} \hat\delta^{(k)}_{\sigma,i} , \qquad (44)
where we define the following gradient terms for all i ∈ J1, nK:
\hat\delta^{(k)}_{\mu_\ell,i} = -\frac{1}{M_{(k)}} \sum_{m=1}^{M_{(k)}} \nabla_w \log p(y_i|x_i, w)\big|_{w=t(\theta^{(k-1)}, z_m^{(k)})} + \nabla_{\mu_\ell} d(\theta^{(k-1)}) ,
\hat\delta^{(k)}_{\sigma,i} = -\frac{1}{M_{(k)}} \sum_{m=1}^{M_{(k)}} z_m^{(k)}\, \nabla_w \log p(y_i|x_i, w)\big|_{w=t(\theta^{(k-1)}, z_m^{(k)})} + \nabla_{\sigma} d(\theta^{(k-1)}) . \qquad (45)
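A minimal PyTorch sketch of the two Monte Carlo gradient terms in (45) under the reparametrization $w = t(\theta, z) = \mu + \sigma z$; the `log_lik` callable (the log-likelihood of one observation as a function of the weights) is an assumption of this example.

```python
import torch

def mc_gradient_terms(mu, sigma, log_lik, M, n):
    """Monte Carlo estimates of the gradient terms (45), with w = mu + sigma * z.

    mu: (d,) tensor of variational means; sigma: scalar tensor (shared std).
    log_lik(w) returns the scalar log p(y_i | x_i, w); n is the dataset size.
    """
    d = mu.numel()
    grad_mu = torch.zeros_like(mu)
    grad_sigma = torch.zeros(())
    for _ in range(M):
        z = torch.randn_like(mu)                        # z_m ~ N(0, I)
        w = (mu + sigma * z).detach().requires_grad_(True)
        (grad_w,) = torch.autograd.grad(log_lik(w), w)
        grad_mu -= grad_w / M                           # -(1/M) sum_m grad_w log p
        grad_sigma -= (z * grad_w).sum() / M            # chain rule: dw/dsigma = z
    # Closed-form gradients of d(theta) = (1/n) sum_l (-log sigma + (sigma^2 + mu_l^2)/2 - 1/2).
    grad_mu += mu / n
    grad_sigma += d * (sigma - 1.0 / sigma) / n
    return grad_mu, grad_sigma
```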
Note that our analysis in the main text requires the parameter to be in a compact set. For the estimation problem considered here, this can be enforced in practice by restricting the parameters to a ball. In our simulations for the BNN examples, for simplicity of illustration, we did not implement the algorithms in a way that strictly enforces the compactness requirement; however, we observe empirically that the parameters always remain bounded. The update rules can easily be modified to respect the requirement: for the considered VI problem, the surrogate functions (11) are quadratic and a simple projection step suffices to ensure boundedness of the iterates.
For all benchmark algorithms, we pick, at iteration k, a function index ik uniformly on JnK and sample a Monte Carlo batch {z(k)m } M(k) m=1 from the standard Gaussian distribution. The updates of the parameters µ` for all ` ∈ JdK and σ break down as follows: Monte Carlo SAG update: Set
\mu_\ell^{(k)} = \mu_\ell^{(k-1)} - \frac{\gamma}{n}\sum_{i=1}^{n} \hat\delta^{(k)}_{\mu_\ell,i} \quad \text{and} \quad \sigma^{(k)} = \sigma^{(k-1)} - \frac{\gamma}{n}\sum_{i=1}^{n} \hat\delta^{(k)}_{\sigma,i} ,
where $\hat\delta^{(k)}_{\mu_\ell,i} = \hat\delta^{(k-1)}_{\mu_\ell,i}$ and $\hat\delta^{(k)}_{\sigma,i} = \hat\delta^{(k-1)}_{\sigma,i}$ for $i \neq i_k$, and are defined by (45) for $i = i_k$. The learning rate is set to $\gamma = 10^{-3}$.
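A short sketch of this Monte Carlo SAG update, reusing the hypothetical `mc_gradient_terms` helper above; storing one stale gradient per function index is the defining feature of SAG.

```python
import torch

def mc_sag_step(i_k, mu, sigma, table_mu, table_sigma, log_lik_i, M, n, gamma=1e-3):
    """Monte Carlo SAG: refresh only the stored gradient of index i_k, then
    descend along the average of all stored gradients.

    table_mu: (n, d) tensor of stored mu-gradients; table_sigma: (n,) tensor.
    log_lik_i(i, w) returns the scalar log p(y_i | x_i, w).
    """
    g_mu, g_sigma = mc_gradient_terms(mu, sigma, lambda w: log_lik_i(i_k, w), M, n)
    table_mu[i_k] = g_mu                   # overwrite the stale gradient of i_k
    table_sigma[i_k] = g_sigma
    mu = mu - gamma * table_mu.mean(0)     # (gamma/n) * sum_i stored gradients
    sigma = sigma - gamma * table_sigma.mean()
    return mu, sigma
```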
Bayes By Backprop update: Set
\mu_\ell^{(k)} = \mu_\ell^{(k-1)} - \frac{\gamma}{n}\, \hat\delta^{(k)}_{\mu_\ell,i_k} \quad \text{and} \quad \sigma^{(k)} = \sigma^{(k-1)} - \frac{\gamma}{n}\, \hat\delta^{(k)}_{\sigma,i_k} ,
where the learning rate is $\gamma = 10^{-3}$.
Monte Carlo Momentum update: Set
\mu_\ell^{(k)} = \mu_\ell^{(k-1)} + \hat v^{(k)}_{\mu_\ell} \quad \text{and} \quad \sigma^{(k)} = \sigma^{(k-1)} + \hat v^{(k)}_{\sigma} ,
where
\hat v^{(k)}_{\mu_\ell} = \alpha\, \hat v^{(k-1)}_{\mu_\ell} - \frac{\gamma}{n}\, \hat\delta^{(k)}_{\mu_\ell,i_k} \quad \text{and} \quad \hat v^{(k)}_{\sigma} = \alpha\, \hat v^{(k-1)}_{\sigma} - \frac{\gamma}{n}\, \hat\delta^{(k)}_{\sigma,i_k} ,
and where $\alpha$ and $\gamma$, respectively the momentum and the learning rate, are set to $10^{-3}$.
Monte Carlo ADAM update: Set
\mu_\ell^{(k)} = \mu_\ell^{(k-1)} - \frac{\gamma}{n}\, \hat m^{(k)}_{\mu_\ell} \big/ \big( \sqrt{\hat v^{(k)}_{\mu_\ell}} + \epsilon \big) \quad \text{and} \quad \sigma^{(k)} = \sigma^{(k-1)} - \frac{\gamma}{n}\, \hat m^{(k)}_{\sigma} \big/ \big( \sqrt{\hat v^{(k)}_{\sigma}} + \epsilon \big) ,
where $\hat m^{(k)}$ and $\hat v^{(k)}$ denote the usual bias-corrected ADAM estimates of the first and second moments of the stochastic gradients $\hat\delta^{(k)}_{\mu_\ell,i_k}$ and $\hat\delta^{(k)}_{\sigma,i_k}$. | 1. What is the main contribution of the paper regarding stochastic optimization?
2. What are the strengths of the theoretical analysis provided in the paper?
3. How does the reviewer assess the empirical study presented in the paper?
4. Are there any concerns or suggestions regarding the presentation and discussion of the results?
5. How does the reviewer evaluate the practical relevance and impact of the proposed method? | Review | Review
This manuscript contributes a stochastic optimization method for finite sums where the loss function is itself an intractable expectation. It builds upon stochastic majorization-minimization methods, in particular MISO, which it extends with Monte Carlo approximations of the loss.
I am happy to see some attention put on majorization-minimization methods, which have many interesting benefits. The paper contributes nice theoretical results, in particular non-asymptotic ones. However, I believe that these theoretical results are not enough to situate the contribution within the wider landscape of optimization methods for machine learning.
In this respect, the empirical study is crucial; however, it is not completely convincing. Expressing figures 1 and 2 as a function of the number of epochs, rather than as an estimate of runtime, is not meaningful: it discards the cost of running the inner loop, which varies from one approach to another. It would lead one to believe that MISSO50 is the best option, which is probably not the case.
Also, MC-ADAM seems to outperform MISSO for variational inference.
With regards to the broader contribution, it is very appreciable to have a wider theory of stochastic optimization with MM methods. It would have been good, however, to have a discussion of the link between the contributed method and the follow-up work by Mairal and colleagues, Stochastic Approximate MM (Mensch et al 2017).
Additional comments after the discussion
The authors have thoroughly replied to all the comments from the various reviewers.
After reading all the discussions (other reviews as well as replies from the authors), it appears to me that the practical relevance of this contribution is not completely clear. The computational cost of each iteration is large. The benchmarks do not show clear improvements in computational cost. |
ICLR | Title
MISSO: Minimization by Incremental Stochastic Surrogate Optimization for Large Scale Nonconvex and Nonsmooth Problems
Abstract
Many constrained, nonconvex and nonsmooth optimization problems can be tackled using the majorization-minimization (MM) method, which alternates between constructing a surrogate function which upper bounds the objective function, and then minimizing this surrogate. For problems which minimize a finite sum of functions, a stochastic version of the MM method selects a batch of functions at random at each iteration and optimizes the accumulated surrogate. However, in many cases of interest, such as variational inference for latent variable models, the surrogate functions are expressed as an expectation. In this contribution, we propose a doubly stochastic MM method based on Monte Carlo approximation of these stochastic surrogates. We establish asymptotic and non-asymptotic convergence of our scheme in a constrained, nonconvex, nonsmooth optimization setting. We apply our new framework to inference of a logistic regression model with missing data and to variational inference of Bayesian variants of LeNet-5 and ResNet-18 on the MNIST and CIFAR-10 datasets, respectively.
1 INTRODUCTION
We consider the constrained minimization problem of a finite sum of functions:
\min_{\theta\in\Theta} \mathcal{L}(\theta) := \frac{1}{n}\sum_{i=1}^{n} \mathcal{L}_i(\theta) , \qquad (1)
where Θ is a convex, compact, and closed subset of Rp, and for any i ∈ J1, nK, the function Li : Rp → R is bounded from below and is (possibly) nonconvex and nonsmooth. To tackle the optimization problem (1), a popular approach is to apply the majorization-minimization (MM) method which iteratively minimizes a majorizing surrogate function. A large number of existing procedures fall into this general framework, for instance gradient-based or proximal methods or the Expectation-Maximization (EM) algorithm (McLachlan & Krishnan, 2008) and some variational Bayes inference techniques (Jordan et al., 1999); see for example (Razaviyayn et al., 2013) and (Lange, 2016) and the references therein. When the number of terms n in (1) is large, the vanilla MM method may be intractable because it requires to construct a surrogate function for all the n terms Li at each iteration. Here, a remedy is to apply the Minimization by Incremental Surrogate Optimization (MISO) method proposed by Mairal (2015), where the surrogate functions are updated incrementally. The MISO method can be interpreted as a combination of MM and ideas which have emerged for variance reduction in stochastic gradient methods (Schmidt et al., 2017). An extended analysis of MISO has been proposed in (Qian et al., 2019).
The success of the MISO method rests upon the efficient minimization of surrogates such as convex functions, see (Mairal, 2015, Section 2.3). A notable application of MISO-like algorithms is described in (Mensch et al., 2017), where the authors build upon the stochastic majorization-minimization framework of Mairal (2015) to introduce a method for sparse matrix factorization. However, in many applications of interest, the natural surrogate functions are intractable, even though they are defined as expectations of tractable functions. For instance, this is the case for inference in latent variable models via maximum likelihood (McLachlan & Krishnan, 2008). Another application is
variational inference (Ghahramani, 2015), in which the goal is to approximate the posterior distribution of parameters given the observations; see for example (Neal, 2012; Blundell et al., 2015; Polson et al., 2017; Rezende et al., 2014; Li & Gal, 2017).
This paper fills the gap in the literature by proposing a method called Minimization by Incremental Stochastic Surrogate Optimization (MISSO), designed for the nonconvex and nonsmooth finite sum optimization, with a finite-time convergence guarantee. Our work aims at formulating a generic class of incremental stochastic surrogate methods for nonconvex optimization and building the theory to understand its behavior. In particular, we provide convergence guarantees for stochastic EM and Variational Inference-type methods, under mild conditions. In summary, our contributions are:
• we propose a unifying framework of analysis for incremental stochastic surrogate optimization when the surrogates are defined as expectations of tractable functions. The proposed MISSO method is built on the Monte Carlo integration of the intractable surrogate function, i.e., a doubly stochastic surrogate optimization scheme.
• we present an incremental update of the commonly used variational inference and Monte Carlo EM methods as special cases of our newly introduced framework. The analysis of those two algorithms is thus conducted under this unifying framework of analysis.
• we establish both asymptotic and non-asymptotic convergence for the MISSO method. In particular, the MISSO method converges almost surely to a stationary point and in $\mathcal{O}(n/\epsilon)$ iterations to an $\epsilon$-stationary point, see Theorem 1.
• in essence, we relax the class of surrogate functions used in MISO (Mairal, 2015) and allow for intractable surrogates that can only be evaluated by Monte-Carlo approximations. Working at the crossroads of Optimization and Sampling constitutes what we believe to be the novelty and the technicality of our framework and theoretical results.
In Section 2, we review the techniques for incremental minimization of finite sum functions based on the MM principle; specifically, we review the MISO method (Mairal, 2015), and present a class of surrogate functions expressed as an expectation over a latent space. The MISSO method is then introduced for the latter class of intractable surrogate functions requiring approximation. In Section 3, we provide the asymptotic and non-asymptotic convergence analysis for the MISSO method (and for the MISO method (Mairal, 2015) as a special case). Section 4 presents numerical applications including parameter inference for logistic regression with missing data and variational inference for two types of Bayesian neural networks. The proofs of the theoretical results are reported in the Supplement.
Notations. We denote J1, nK = {1, . . . , n}. Unless otherwise specified, ‖ · ‖ denotes the standard Euclidean norm and 〈· | ·〉 is the inner product in the Euclidean space. For any function f : Θ→ R, f ′(θ,d) is the directional derivative of f at θ along the direction d, i.e.,
f'(\theta, d) := \lim_{t\to 0^+} \frac{f(\theta + t d) - f(\theta)}{t} . \qquad (2)
The directional derivative is assumed to exist for the functions introduced throughout this paper.
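As a simple example, for the differentiable function $f(\theta) = \|\theta\|^2$ the definition (2) recovers the usual gradient pairing:

f'(\theta, d) = \lim_{t\to 0^+} \frac{\|\theta + t d\|^2 - \|\theta\|^2}{t} = 2\,\langle \theta \,|\, d\rangle = \langle \nabla f(\theta) \,|\, d\rangle .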
2 INCREMENTAL MINIMIZATION OF FINITE SUM NONCONVEX FUNCTIONS
The objective function in (1) is composed of a finite sum of possibly nonsmooth and nonconvex functions. A popular approach here is to apply the MM method, which tackles (1) through alternating between two steps — (i) minimizing a surrogate function which upper bounds the original objective function; and (ii) updating the surrogate function to tighten the upper bound.
As mentioned in the introduction, the MISO method (Mairal, 2015) is developed as an iterative scheme that only updates the surrogate functions partially at each iteration. Formally, for any i ∈ J1, nK, we consider a surrogate function $\widehat{\mathcal{L}}_i(\theta;\bar\theta)$ which satisfies the assumptions (H1, H2):
H1. For all i ∈ J1, nK and $\bar\theta \in \Theta$, $\widehat{\mathcal{L}}_i(\theta;\bar\theta)$ is convex w.r.t. $\theta$, and it holds
\widehat{\mathcal{L}}_i(\theta;\bar\theta) \ge \mathcal{L}_i(\theta), \quad \forall\, \theta \in \Theta , \qquad (3)
where the equality holds when $\theta = \bar\theta$.
H2. For any $\theta_i \in \Theta$, i ∈ J1, nK and some $\epsilon > 0$, the difference function $\hat e(\theta; \{\theta_i\}_{i=1}^{n}) := \frac{1}{n}\sum_{i=1}^{n} \widehat{\mathcal{L}}_i(\theta;\theta_i) - \mathcal{L}(\theta)$ is defined for all $\theta \in \Theta$ and differentiable for all $\theta \in \Theta_\epsilon$, where $\Theta_\epsilon = \{\theta \in \mathbb{R}^d,\ \inf_{\theta'\in\Theta} \|\theta - \theta'\| < \epsilon\}$ is an $\epsilon$-neighborhood set of $\Theta$. Moreover, for some constant L, the gradient satisfies
\|\nabla \hat e(\theta; \{\theta_i\}_{i=1}^{n})\|^2 \le 2 L\, \hat e(\theta; \{\theta_i\}_{i=1}^{n}), \quad \forall\, \theta \in \Theta . \qquad (4)
Algorithm 1 The MISO method (Mairal, 2015).
1: Input: initialization $\theta^{(0)}$.
2: Initialize the surrogate functions as $\mathcal{A}_i^{0}(\theta) := \widehat{\mathcal{L}}_i(\theta;\theta^{(0)})$, i ∈ J1, nK.
3: for $k = 0, 1, ..., K_{\max}$ do
4: Pick $i_k$ uniformly from J1, nK.
5: Update $\mathcal{A}_i^{k+1}(\theta)$ as: $\mathcal{A}_i^{k+1}(\theta) = \widehat{\mathcal{L}}_i(\theta;\theta^{(k)})$ if $i = i_k$, and $\mathcal{A}_i^{k+1}(\theta) = \mathcal{A}_i^{k}(\theta)$ otherwise.
6: Set $\theta^{(k+1)} \in \arg\min_{\theta\in\Theta} \frac{1}{n}\sum_{i=1}^{n} \mathcal{A}_i^{k+1}(\theta)$.
7: end for
We remark that H1 is a common assumption used for surrogate functions, see (Mairal, 2015, Section 2.3). H2 can be satisfied when the difference function $\hat e(\theta; \{\theta_i\}_{i=1}^{n})$ is L-smooth, i.e., $\hat e$ is differentiable on $\Theta$ and its gradient $\nabla\hat e$ is L-Lipschitz, $\forall\,\theta\in\Theta$. H2 can also be implied by applying (Razaviyayn et al., 2013, Proposition 1).
The inequality (3) implies $\widehat{\mathcal{L}}_i(\theta;\bar\theta) \ge \mathcal{L}_i(\theta) > -\infty$ for any $\theta \in \Theta$. The MISO method is an incremental version of the MM method, as summarized by Algorithm 1, which shows that the MISO method maintains an iteratively updated set of upper-bounding surrogate functions $\{\mathcal{A}_i^{k}(\theta)\}_{i=1}^{n}$ and updates the iterate via minimizing the average of the surrogate functions.
Particularly, only one out of the n surrogate functions is updated at each iteration [cf. Line 5] and the sum function 1n ∑n i=1A k+1 i (θ) is designed to be ‘easy to optimize’, which, for example, can be a sum of quadratic functions. As such, the MISO method is suitable for large-scale optimization as the computation cost per iteration is independent of n. Under H1, H2, it was shown that the MISO method converges almost surely to a stationary point of (1) (Mairal, 2015, Prop. 3.1).
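To make the incremental structure concrete, here is a minimal Python sketch of Algorithm 1 with quadratic surrogates $\widehat{\mathcal{L}}_i(\theta;\bar\theta) = \mathcal{L}_i(\bar\theta) + \langle\nabla\mathcal{L}_i(\bar\theta), \theta-\bar\theta\rangle + \frac{L}{2}\|\theta-\bar\theta\|^2$ (as later used in (11)); the unconstrained closed-form minimizer is an assumption of this illustration.

```python
import numpy as np

def miso(grads, theta0, L, n, K, rng=None):
    """MISO (Algorithm 1) with quadratic surrogates.

    grads(i, theta) returns the gradient of L_i at theta.
    The average of quadratic surrogates is minimized in closed form by the
    average of the anchor points minus 1/L times the average anchor gradient.
    """
    rng = rng or np.random.default_rng()
    anchors = np.tile(theta0, (n, 1))          # theta at which surrogate i was built
    anchor_grads = np.stack([grads(i, theta0) for i in range(n)])
    theta = theta0.copy()
    for _ in range(K):
        i_k = rng.integers(n)                  # line 4: pick i_k uniformly
        anchors[i_k] = theta                   # line 5: refresh surrogate i_k only
        anchor_grads[i_k] = grads(i_k, theta)
        # line 6: minimize (1/n) sum_i A_i^{k+1}(theta) in closed form
        theta = anchors.mean(0) - anchor_grads.mean(0) / L
    return theta
```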
We now consider the case when the surrogate functions L̂i(θ;θ) are intractable. Let Z be a measurable set, pi : Z × Θ → R+ a probability density function, ri : Θ × Θ × Z → R a measurable function and µi a σ-finite measure. We consider surrogate functions which satisfy H1, H2 and that can be expressed as an expectation, i.e.:
\widehat{\mathcal{L}}_i(\theta;\bar\theta) := \int_Z r_i(\theta;\bar\theta, z_i)\, p_i(z_i;\bar\theta)\,\mu_i(dz_i) \quad \forall\, (\theta,\bar\theta) \in \Theta\times\Theta . \qquad (5)
Plugging (5) into the MISO method is not feasible since the update step in Line 6 involves the minimization of an expectation. Several motivating examples of (1) are given below.
In this paper, we propose the Minimization by Incremental Stochastic Surrogate Optimization (MISSO) method which replaces the expectation in (5) by Monte Carlo integration and then optimizes the objective function (1) in an incremental manner. Denote by M ∈ N the Monte Carlo batch size and let {zm ∈ Z}Mm=1 be a set of samples. These samples can be drawn (Case 1) i.i.d. from the distribution pi(·;θ) or (Case 2) from a Markov chain with stationary distribution pi(·;θ); see Section 3 for illustrations. To this end, we define the stochastic surrogate as follows:
\widetilde{\mathcal{L}}_i(\theta;\bar\theta, \{z_m\}_{m=1}^{M}) := \frac{1}{M}\sum_{m=1}^{M} r_i(\theta;\bar\theta, z_m) , \qquad (6)
and we summarize the proposed MISSO method in Algorithm 2. Compared to the MISO method, there is a crucial difference in that the MISSO method involves two types of randomness. The first level of randomness comes from the selection of ik in Line 5. The second level of randomness stems from the set of Monte Carlo approximated functions Ãki (θ) used in lieu of Aki (θ) in Line 6 when optimizing for the next iterate θ(k). We now discuss two applications of the MISSO method.
Example 1: Maximum Likelihood Estimation for Latent Variable Model. Latent variable models (Bishop, 2006) are constructed by introducing unobserved (latent) variables which help explain the observed data. We consider n independent observations ((yi, zi), i ∈ JnK) where yi is observed and zi is latent. In this incomplete data framework, define $\{f_i(z_i,\theta), \theta \in \Theta\}$ to be the complete data likelihood models, i.e., the joint likelihood of the observations and latent variables.
Algorithm 2 The MISSO method.
1: Input: initialization $\theta^{(0)}$; a sequence of non-negative numbers $\{M_{(k)}\}_{k=0}^{\infty}$.
2: For all i ∈ J1, nK, draw $M_{(0)}$ Monte Carlo samples with the stationary distribution $p_i(\cdot;\theta^{(0)})$.
3: Initialize the surrogate functions as $\widetilde{\mathcal{A}}_i^{0}(\theta) := \widetilde{\mathcal{L}}_i(\theta;\theta^{(0)}, \{z_{i,m}^{(0)}\}_{m=1}^{M_{(0)}})$, i ∈ J1, nK.
4: for $k = 0, 1, ..., K_{\max}$ do
5: Pick a function index $i_k$ uniformly on J1, nK.
6: Draw $M_{(k)}$ Monte Carlo samples with the stationary distribution $p_{i_k}(\cdot;\theta^{(k)})$.
7: Update the individual surrogate functions recursively as: $\widetilde{\mathcal{A}}_i^{k+1}(\theta) = \widetilde{\mathcal{L}}_i(\theta;\theta^{(k)}, \{z_{i,m}^{(k)}\}_{m=1}^{M_{(k)}})$ if $i = i_k$, and $\widetilde{\mathcal{A}}_i^{k+1}(\theta) = \widetilde{\mathcal{A}}_i^{k}(\theta)$ otherwise.
8: Set $\theta^{(k+1)} \in \arg\min_{\theta\in\Theta} \widetilde{\mathcal{L}}^{(k+1)}(\theta) := \frac{1}{n}\sum_{i=1}^{n} \widetilde{\mathcal{A}}_i^{k+1}(\theta)$.
9: end for
Let
g_i(\theta) := \int_Z f_i(z_i,\theta)\,\mu_i(dz_i), \quad i ∈ J1, nK, \ \theta\in\Theta
denote the incomplete data likelihood, i.e., the marginal likelihood of the observations yi. For ease of notation, the dependence on the observations is made implicit. The maximum likelihood (ML) estimation problem sets the individual objective function $\mathcal{L}_i(\theta)$ to be the i-th negated incomplete data log-likelihood $\mathcal{L}_i(\theta) := -\log g_i(\theta)$. Assume, without loss of generality, that $g_i(\theta) \neq 0$ for all $\theta\in\Theta$. We define by $p_i(z_i,\theta) := f_i(z_i,\theta)/g_i(\theta)$ the conditional distribution of the latent variable $z_i$ given the observations $y_i$. A surrogate function $\widehat{\mathcal{L}}_i(\theta;\bar\theta)$ satisfying H1 can be obtained by writing $f_i(z_i,\theta) = \frac{f_i(z_i,\theta)}{p_i(z_i,\bar\theta)}\, p_i(z_i,\bar\theta)$ and applying the Jensen inequality:
\widehat{\mathcal{L}}_i(\theta;\bar\theta) = \int_Z \underbrace{\log\big( p_i(z_i,\bar\theta)/f_i(z_i,\theta) \big)}_{=r_i(\theta;\bar\theta, z_i)}\, p_i(z_i,\bar\theta)\,\mu_i(dz_i) . \qquad (7)
We note that H2 can also be verified for common distribution models. We can apply the MISSO method following the above specification of $r_i(\theta;\bar\theta, z_i)$ and $p_i(z_i,\bar\theta)$.
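Explicitly, the Jensen step behind (7) reads, for any fixed $\bar\theta \in \Theta$,

\mathcal{L}_i(\theta) = -\log g_i(\theta) = -\log \int_Z \frac{f_i(z_i,\theta)}{p_i(z_i,\bar\theta)}\, p_i(z_i,\bar\theta)\,\mu_i(dz_i) \le \int_Z \log\frac{p_i(z_i,\bar\theta)}{f_i(z_i,\theta)}\, p_i(z_i,\bar\theta)\,\mu_i(dz_i) = \widehat{\mathcal{L}}_i(\theta;\bar\theta) ,

with equality at $\theta = \bar\theta$, since the integrand then equals $-\log g_i(\bar\theta)$; this is exactly H1.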
Example 2: Variational Inference. Let ((xi, yi), i ∈ J1, nK) be i.i.d. input-output pairs and w ∈ W ⊆ Rd be a latent variable. When conditioned on the input data x = (xi, i ∈ J1, nK), the joint distribution of y = (yi, i ∈ J1, nK) and w is given by:
p(y, w|x) = π(w) ∏n i=1 p(yi|xi, w) . (8)
Our goal is to compute the posterior distribution p(w|y, x). In most cases, the posterior distribution p(w|y, x) is intractable and is approximated using a family of parametric distributions, {q(w,θ),θ ∈ Θ}. The variational inference (VI) problem (Blei et al., 2017) boils down to minimizing the Kullback-Leibler (KL) divergence between q(w,θ) and the posterior distribution p(w|y, x):
\min_{\theta\in\Theta} \mathcal{L}(\theta) := \mathrm{KL}\big( q(w;\theta)\, \|\, p(w|y,x) \big) := \mathbb{E}_{q(w;\theta)}\big[ \log\big( q(w;\theta)/p(w|y,x) \big) \big] . \qquad (9)
Using (8), we decompose L(θ) = n−1 ∑n i=1 Li(θ) + const. where:
\mathcal{L}_i(\theta) := -\mathbb{E}_{q(w;\theta)}\big[ \log p(y_i|x_i,w) \big] + \frac{1}{n}\, \mathbb{E}_{q(w;\theta)}\big[ \log q(w;\theta)/\pi(w) \big] := r_i(\theta) + d(\theta) . \qquad (10)
Directly optimizing the finite-sum objective function in (9) can be difficult. First, with $n \gg 1$, evaluating the objective function $\mathcal{L}(\theta)$ requires a full pass over the entire dataset. Second, for some complex models, the expectations in (10) can be intractable even if we assume a simple parametric model for $q(w;\theta)$. Assume that $\mathcal{L}_i$ is L-smooth. We apply the MISSO method with a quadratic surrogate function defined as:
\widehat{\mathcal{L}}_i(\theta;\bar\theta) := \mathcal{L}_i(\bar\theta) + \big\langle \nabla_\theta \mathcal{L}_i(\bar\theta) \,|\, \theta - \bar\theta \big\rangle + \frac{L}{2}\, \|\theta - \bar\theta\|^2 , \quad (\theta,\bar\theta)\in\Theta^2 . \qquad (11)
It is easily checked that the quadratic function L̂i(θ;θ) satisfies H1, H2. To compute the gradient ∇Li(θ), we apply the re-parametrization technique suggested in (Paisley et al., 2012; Kingma & Welling, 2014; Blundell et al., 2015). Let t : Rd×Θ 7→ Rd be a differentiable function w.r.t. θ ∈ Θ which is designed such that the law of w = t(z,θ) is q(·,θ), where z ∼ Nd(0, I). By (Blundell et al., 2015, Proposition 1), the gradient of −ri(·) in (10) is:
\nabla_\theta \mathbb{E}_{q(w;\theta)}\big[ \log p(y_i|x_i,w) \big] = \mathbb{E}_{z\sim\mathcal{N}_d(0,I)}\big[ J_\theta^t(z,\theta)\, \nabla_w \log p(y_i|x_i,w)\big|_{w=t(z,\theta)} \big] , \qquad (12)
where for each z ∈ Rd, Jtθ(z,θ) is the Jacobian of the function t(z, ·) with respect to θ evaluated at θ. In addition, for most cases, the term∇d(θ) can be evaluated in closed form as the gradient of the KL between the prior distribution π(·) and the variational candidate q(·,θ).
r_i(\theta;\bar\theta, z) := \big\langle \nabla_\theta d(\bar\theta) - J_\theta^t(z,\bar\theta)\, \nabla_w \log p(y_i|x_i,w)\big|_{w=t(z,\bar\theta)} \,|\, \theta - \bar\theta \big\rangle + \frac{L}{2}\, \|\theta - \bar\theta\|^2 . \qquad (13)
Finally, using (11) and (13), the surrogate function (6) is given by $\widetilde{\mathcal{L}}_i(\theta;\bar\theta, \{z_m\}_{m=1}^{M}) := M^{-1}\sum_{m=1}^{M} r_i(\theta;\bar\theta, z_m)$, where $\{z_m\}_{m=1}^{M}$ are i.i.d. samples drawn from $\mathcal{N}(0, I)$.
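To make (12)-(13) concrete, here is a short PyTorch sketch of the reparametrized Monte Carlo gradient that defines the linear part of the surrogate; the mean-field Gaussian $t(z,\theta) = \mu + \sigma \odot z$ with $\sigma = \mathrm{softplus}(\rho)$ and the `log_lik` callable are assumptions of this example.

```python
import torch

def surrogate_gradient(mu, rho, log_lik, M):
    """MC estimate of -nabla_theta E_q[log p(y_i|x_i, w)] via w = t(z, theta).

    Mean-field Gaussian q with theta = (mu, rho) and sigma = softplus(rho), so
    the Jacobian-vector products of (12) reduce to elementwise chain rules.
    log_lik(w) returns the scalar log p(y_i | x_i, w).
    """
    sigma = torch.nn.functional.softplus(rho)
    g_mu, g_rho = torch.zeros_like(mu), torch.zeros_like(rho)
    for _ in range(M):
        z = torch.randn_like(mu)                           # z ~ N(0, I)
        w = (mu + sigma * z).detach().requires_grad_(True)  # w = t(z, theta)
        (gw,) = torch.autograd.grad(log_lik(w), w)
        g_mu -= gw / M                                     # d w / d mu = 1
        g_rho -= (gw * z * torch.sigmoid(rho)) / M         # d sigma / d rho = sigmoid(rho)
    return g_mu, g_rho
```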
3 CONVERGENCE ANALYSIS
We now provide asymptotic and non-asymptotic convergence results of our method. Assume:
H3. For all i ∈ J1, nK, $\bar\theta \in \Theta$ and $z_i \in Z$, $r_i(\cdot;\bar\theta, z_i)$ is convex on $\Theta$ and is lower bounded.
We are particularly interested in the constrained optimization setting where Θ is a bounded set. To this end, we control the supremum norm of the MC approximation, introduced in (6), as: H4. For the samples {zi,m}Mm=1, there exist finite constants Cr and Cgr such that
C_r := \sup_{\bar\theta\in\Theta}\, \sup_{M>0}\, \frac{1}{\sqrt{M}}\, \mathbb{E}_{\bar\theta}\Big[ \sup_{\theta\in\Theta} \Big| \sum_{m=1}^{M} \big\{ r_i(\theta;\bar\theta, z_{i,m}) - \widehat{\mathcal{L}}_i(\theta;\bar\theta) \big\} \Big| \Big] ,
C_{gr} := \sup_{\bar\theta\in\Theta}\, \sup_{M>0}\, \sqrt{M}\, \mathbb{E}_{\bar\theta}\Big[ \sup_{\theta\in\Theta} \Big| \frac{1}{M} \sum_{m=1}^{M} \frac{\widehat{\mathcal{L}}_i'(\theta, \bar\theta - \theta;\bar\theta) - r_i'(\theta, \bar\theta - \theta;\bar\theta, z_{i,m})}{\|\theta - \bar\theta\|} \Big| \Big] ,
for all i ∈ J1, nK, and where we denoted by $\mathbb{E}_{\bar\theta}[\cdot]$ the expectation w.r.t. a Markov chain $\{z_{i,m}\}_{m=1}^{M}$ with initial distribution $\xi_i(\cdot;\bar\theta)$, transition kernel $\Pi_{i,\bar\theta}$, and stationary distribution $p_i(\cdot;\bar\theta)$.
Some intuitions behind the controlling terms: It is common in statistical and optimization problems, to deal with the manipulation and the control of random variables indexed by sets with an infinite number of elements. Here, the controlled random variable is an image of a continuous function defined as ri(θ;θ, zi,m) − L̂i(θ;θ) for all z ∈ Z and for fixed (θ,θ) ∈ Θ2. To characterize such control, we will have recourse to the notion of metric entropy (or bracketing number) as developed in (Van der Vaart, 2000; Vershynin, 2018; Wainwright, 2019). A collection of results from those references gives intuition behind our assumption H4, which is classical in empirical processes. In (Vershynin, 2018, Theorem 8.2.3), the authors recall the uniform law of large numbers:
\mathbb{E}\Big[ \sup_{f\in\mathcal{F}} \Big| \frac{1}{M}\sum_{m=1}^{M} f(z_{i,m}) - \mathbb{E}[f(z_i)] \Big| \Big] \le \frac{C L}{\sqrt{M}} \quad \text{for all } z_{i,m},\ m ∈ J1, MK ,
where F is a class of L-Lipschitz functions. Moreover, in (Vershynin, 2018, Theorem 8.1.3 ) and (Wainwright, 2019, Theorem 5.22), the application of the Dudley inequality yields:
\mathbb{E}\big[ \sup_{f\in\mathcal{F}} |X_f - X_0| \big] \le \frac{1}{\sqrt{M}} \int_0^1 \sqrt{ \log \mathcal{N}(\mathcal{F}, \|\cdot\|_\infty, \varepsilon) }\, d\varepsilon ,
where $\mathcal{N}(\mathcal{F}, \|\cdot\|_\infty, \varepsilon)$ is the bracketing number and $\varepsilon$ denotes the level of approximation (the bracketing number goes to infinity when $\varepsilon \to 0$). Finally, in (Van der Vaart, 2000, p.271, Example), $\mathcal{N}(\mathcal{F}, \|\cdot\|_\infty, \varepsilon)$ is bounded from above for a class of parametric functions $\mathcal{F} = \{f_\theta : \theta\in\Theta\}$:
\mathcal{N}(\mathcal{F}, \|\cdot\|_\infty, \varepsilon) \le K \Big( \frac{\mathrm{diam}\,\Theta}{\varepsilon} \Big)^{d} , \quad \text{for all } 0 < \varepsilon < \mathrm{diam}\,\Theta .
The authors acknowledge that those bounds are a dramatic manifestation of the curse of dimensionality happening when sampling is needed. Nevertheless, the dependence on the dimension highly depends on the class of surrogate functions F used in our scheme, as smaller bounds on these controlling terms can be derived for simpler class of functions, such as quadratic functions.
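For the parametric class above, plugging the covering bound into the Dudley integral makes this dependence explicit: up to constants depending on $K$ and $\mathrm{diam}\,\Theta$,

\frac{1}{\sqrt{M}} \int_0^1 \sqrt{ \log \mathcal{N}(\mathcal{F}, \|\cdot\|_\infty, \varepsilon) }\, d\varepsilon \le \frac{1}{\sqrt{M}} \int_0^1 \sqrt{ d\, \log\big( K^{1/d}\, \mathrm{diam}\,\Theta / \varepsilon \big) }\, d\varepsilon = \mathcal{O}\Big( \sqrt{\frac{d}{M}} \Big) ,

so the Monte Carlo error of the surrogates grows at most as the square root of the parameter dimension.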
Stationarity measure. As problem (1) is a constrained optimization task, we consider the following stationarity measure:
g(\bar\theta) := \inf_{\theta\in\Theta} \frac{\mathcal{L}'(\bar\theta, \theta - \bar\theta)}{\|\theta - \bar\theta\|} \quad \text{and} \quad g(\bar\theta) = g_+(\bar\theta) - g_-(\bar\theta) , \qquad (14)
where g+(θ) := max{0, g(θ)}, g−(θ) := −min{0, g(θ)} denote the positive and negative part of g(θ), respectively. Note that θ is a stationary point if and only if g−(θ) = 0 (Fletcher et al., 2002). Furthermore, suppose that the sequence {θ(k)}k≥0 has a limit point θ that is a stationary point, then one has limk→∞ g−(θ(k)) = 0. Thus, the sequence {θ(k)}k≥0 is said to satisfy an asymptotic stationary point condition. This is equivalent to (Mairal, 2015, Definition 2.4).
To facilitate our analysis, we define $\tau_i^k$ as the iteration index where the $i$-th function is last accessed in the MISSO method prior to iteration $k$ (for instance, $\tau_{i_k}^{k+1} = k$). We define:
\widehat{\mathcal{L}}^{(k)}(\theta) := \frac{1}{n}\sum_{i=1}^{n} \widehat{\mathcal{L}}_i(\theta;\theta^{(\tau_i^k)}) , \quad \hat e^{(k)}(\theta) := \widehat{\mathcal{L}}^{(k)}(\theta) - \mathcal{L}(\theta) , \quad \overline{M}^{(K_{\max})} := \sum_{k=0}^{K_{\max}-1} M_{(k)}^{-1/2} . \qquad (15)
We first establish a non-asymptotic convergence rate for the MISSO method:
Theorem 1. Under H1-H4. For any $K_{\max} \in \mathbb{N}$, let $K$ be an independent discrete r.v. drawn uniformly from $\{0, ..., K_{\max}-1\}$ and define the following quantity:
\Delta(K_{\max}) := 2 n L\, \mathbb{E}\big[ \widetilde{\mathcal{L}}^{(0)}(\theta^{(0)}) - \widetilde{\mathcal{L}}^{(K_{\max})}(\theta^{(K_{\max})}) \big] + 4 L C_r\, \overline{M}^{(K_{\max})} .
Then we have the following non-asymptotic bounds:
\mathbb{E}\big[ \|\nabla \hat e^{(K)}(\theta^{(K)})\|^2 \big] \le \frac{\Delta(K_{\max})}{K_{\max}} \quad \text{and} \quad \mathbb{E}[g_-(\theta^{(K)})] \le \sqrt{\frac{\Delta(K_{\max})}{K_{\max}}} + \frac{C_{gr}}{K_{\max}}\, \overline{M}^{(K_{\max})} . \qquad (16)
Note that $\Delta(K_{\max})$ is finite for any $K_{\max} \in \mathbb{N}$.
Iteration Complexity of MISSO. As expected, the MISSO method converges to a stationary point of (1) asymptotically and at a sublinear rate $\mathbb{E}[g_-^{(K)}] \le \mathcal{O}(\sqrt{\Delta(K_{\max})/K_{\max}})$. In other terms, MISSO requires $\mathcal{O}(nL/\epsilon)$ iterations to reach an $\epsilon$-stationary point, when stationarity is characterized by the suboptimality condition $\mathbb{E}\big[ g_-(\theta^{(K)})^2 \big] \le \epsilon$. Note that this stationarity criterion is similar to the usual quantity used in stochastic nonconvex optimization, i.e., $\mathbb{E}\big[ \|\nabla\mathcal{L}(\theta^{(K)})\|^2 \big]$. In fact, when the optimization problem (1) is unconstrained, i.e., $\Theta = \mathbb{R}^p$, then $g_-(\theta^{(K)}) = \|\nabla\mathcal{L}(\theta^{(K)})\|$.
Sample Complexity of MISSO. Regarding the sample complexity of our method, setting $M_{(k)} = k^2/n^2$, a non-decreasing sequence of integers satisfying $\sum_{k=0}^{\infty} M_{(k)}^{-1/2} < \infty$, in order to keep $\Delta(K_{\max}) = \mathcal{O}(nL)$, the MISSO method requires $\sum_{k=0}^{nL/\epsilon} k^2/n^2 = \mathcal{O}(nL^3/\epsilon^3)$ samples to reach an $\epsilon$-stationary point.
Furthermore, we remark that the MISO method can be analyzed via Theorem 1 as a special case of the MISSO method satisfying $C_r = C_{gr} = 0$. In this case, while the asymptotic convergence is well known from (Mairal, 2015) [cf. H4], Eq. (16) gives a non-asymptotic rate of $\mathbb{E}[g_-^{(K)}] \le \mathcal{O}(\sqrt{nL/K_{\max}})$ which is new to the best of our knowledge. Next, we show that under an additional assumption on the sequence of batch sizes $M_{(k)}$, the MISSO method converges almost surely to a stationary point:
Theorem 2. Under H1-H4. In addition, assume that {M(k)}k≥0 is a non-decreasing sequence of integers which satisfies ∑∞ k=0M −1/2 (k) <∞. Then:
1. the negative part of the stationarity measure converges a.s. to zero, i.e., $\lim_{k\to\infty} g_-(\theta^{(k)}) \stackrel{a.s.}{=} 0$.
2. the objective value $\mathcal{L}(\theta^{(k)})$ converges a.s. to a finite number $L$, i.e., $\lim_{k\to\infty} \mathcal{L}(\theta^{(k)}) \stackrel{a.s.}{=} L$.
In particular, the first result above shows that the sequence {θ(k)}k≥0 produced by the MISSO method satisfies an asymptotic stationary point condition.
4 NUMERICAL EXPERIMENTS
4.1 BINARY LOGISTIC REGRESSION WITH MISSING VALUES
This application follows Example 1 described in Section 2. We consider a binary regression setup, ((yi, zi), i ∈ JnK) where yi ∈ {0, 1} is a binary response and zi = (zi,j ∈ R, j ∈ JpK) is a covariate vector. The vector of covariates zi = [zi,mis, zi,obs] is not fully observed where we denote by zi,mis the missing values and zi,obs the observed covariate. It is assumed that (zi, i ∈ JnK) are i.i.d. and marginally distributed according toN (β,Ω) where β ∈ Rp and Ω is a positive definite p×pmatrix. We define the conditional distribution of the observations yi given zi = (zi,mis, zi,obs) as:
pi(yi|zi) = S(δ>z̄i)yi ( 1− S(δ>z̄i) )1−yi , (17)
where for $u \in \mathbb{R}$, $S(u) = 1/(1+e^{-u})$, $\delta = (\delta_0, \cdots, \delta_p)$ are the logistic parameters and $\bar z_i = (1, z_i)$. Here, $\theta = (\delta,\beta,\Omega)$ is the parameter to estimate. For i ∈ JnK, the complete log-likelihood reads:
\log f_i(z_{i,\rm mis},\theta) \propto y_i \delta^\top \bar z_i - \log\big( 1 + \exp(\delta^\top \bar z_i) \big) - \frac{1}{2}\log(|\Omega|) - \frac{1}{2}\operatorname{Tr}\big( \Omega^{-1}(z_i - \beta)(z_i - \beta)^\top \big) .
Fitting a logistic regression model on the TraumaBase dataset: We apply the MISSO method to fit a logistic regression model on the TraumaBase (http://traumabase.eu) dataset, which consists of data collected from 15 trauma centers in France, covering measurements on patients from the initial to the last stage of trauma. This dataset includes information from the first stage of the trauma, namely initial observations at the patient's accident site, to the last stage, being intensive care at the hospital, and counts more than 200 variables measured for more than 7 000 patients. Since the dataset considered is heterogeneous, coming from multiple sources with frequently missed entries, we apply the latent data model described in (17) to predict the risk of a severe hemorrhage, which is one of the main causes of death after a major trauma.
Similar to (Jiang et al., 2018), we select p = 16 influential quantitative measurements, on n = 6384 patients. For the Monte Carlo sampling of zi,mis, required while running MISSO, we run a Metropolis-Hastings algorithm with the target distribution p(·|zi,obs, yi;θ(k)).
We compare in Figure 1 the convergence behavior of the estimated parameters δ and β using SAEM (Delyon et al., 1999) (with stepsize $\gamma_k = 1/k^\alpha$ where α = 0.6 after tuning), MCEM (Wei & Tanner, 1990) and the proposed MISSO method. For the MISSO method, we set the batch size to $M_{(k)} = 10 + k^2$ and we examine the effect of selecting different numbers of functions in Line 5 of the method: the default setting with 1 (MISSO), and 10% (MISSO10) and 50% (MISSO50) minibatches per iteration. From Figure 1, the MISSO method converges to a stationary value in fewer epochs than the MCEM and SAEM methods. It is worth noting that the difference among the MISSO runs for different numbers of selected functions demonstrates a variance-cost tradeoff. Though wall clock times are similar for all methods, they are reported in the appendix for completeness.
4.2 TRAINING BAYESIAN CNN USING MISSO
This application follows Example 2 described in Section 2. We use variational inference and the ELBO loss (10) to fit Bayesian Neural Networks on different datasets. At iteration k, minimizing the sum of stochastic surrogates defined as in (6) and (13) yields the following MISSO update — step (i) pick a function index ik uniformly on JnK; step (ii) sample a Monte Carlo batch {z(k)m } M(k) m=1 from N (0, I); and step (iii) update the parameters, with w̃ = t(θ(k−1), z(k)m ), as
\mu_\ell^{(k)} = \hat\mu_\ell^{(\tau^k)} - \frac{\gamma}{n}\sum_{i=1}^{n} \hat\delta^{(k)}_{\mu_\ell,i} , \qquad \hat\delta^{(k)}_{\mu_\ell,i_k} = -\frac{1}{M_{(k)}} \sum_{m=1}^{M_{(k)}} \nabla_w \log p(y_{i_k}|x_{i_k}, \tilde w) + \nabla_{\mu_\ell} d(\theta^{(k-1)}) ,
where $\hat\mu_\ell^{(\tau^k)} = \frac{1}{n}\sum_{i=1}^{n} \mu_\ell^{(\tau_i^k)}$ and $d(\theta) = n^{-1}\sum_{\ell=1}^{d} \big( -\log(\sigma) + (\sigma^2 + \mu_\ell^2)/2 - 1/2 \big)$.
Bayesian LeNet-5 on MNIST (LeCun et al., 1998): We apply the MISSO method to fit a Bayesian variant of LeNet-5 (LeCun et al., 1998). We train this network on the MNIST dataset (LeCun, 1998). The training set is composed of n = 55 000 handwritten digits, 28 × 28 images. Each image is labelled with its corresponding number (from zero to nine). Under the prior distribution π, see (8), the weights are assumed independent and identically distributed according to N(0, 1). We also assume that q(·;θ) ≡ N(µ, σ²I). The variational posterior parameters are thus θ = (µ, σ) where µ = (µ_ℓ, ℓ ∈ JdK) and d is the number of weights in the neural network. We use the re-parametrization w = t(θ, z) = µ + σz with z ∼ N(0, I). Bayesian ResNet-18 (He et al., 2016) on CIFAR-10 (Krizhevsky et al., 2012): We train here the Bayesian variant of the ResNet-18 neural network introduced in (He et al., 2016) on CIFAR-10. The latter dataset is composed of n = 60 000 colour images of size 32 × 32 in 10 classes, with 6 000 images per class. As in the previous example, the weights are assumed independent and identically distributed according to N(0, I). Standard hyperparameter values found in the literature, such as the annealing constant or the number of MC samples, were used for the benchmark methods. For efficiency purposes and lower variance, the Flipout estimator (Wen et al., 2018) is used.
Experiment Results: We compare the convergence of the Monte Carlo variants of the following state of the art optimization algorithms — the ADAM (Kingma & Ba, 2015), the Momentum (Sutskever et al., 2013) and the SAG (Schmidt et al., 2017) methods versus the Bayes by Backprop (BBB) (Blundell et al., 2015) and our proposed MISSO method. For all these methods, the loss function (10) and its gradients were computed by Monte Carlo integration based on the reparametrization described above. The mini-batch of indices and MC samples are respectively set to 128 and M(k) = k. The learning rates are set to 10−3 for LeNet-5 and 10−4 for Resnet-18.
Figure 2(a) shows the convergence of the negated evidence lower bound against the number of passes over the data (one pass represents an epoch). As observed, the proposed MISSO method outperforms Bayes by Backprop and Momentum, while similar convergence rates are observed with the MISSO, ADAM and SAG methods for our experiment on the MNIST dataset using a Bayesian variant of LeNet-5. On the other hand, the experiment conducted on CIFAR-10 (Figure 2(b)) using a much larger network, i.e., a Bayesian variant of ResNet-18, showcases the need for well-tuned adaptive methods to reach lower training loss (and faster). Our MISSO method is similar to the Monte Carlo variant of ADAM but slower than the Adagrad optimizer. Recall that the purpose of this paper is to provide a common class of optimizers, such as VI, in order to study their convergence behavior, and not to introduce a novel method outperforming the baseline methods. We report wall clock times for all methods in the appendix for completeness.
5 CONCLUSION
We present a unifying framework for minimizing a nonconvex and nonsmooth finite-sum objective function using incremental surrogates when the latter functions are expressed as an expectation and are intractable. Our approach covers a large class of nonconvex applications in machine learning such as logistic regression with missing values and variational inference. We provide both finite-time and asymptotic guarantees of our incremental stochastic surrogate optimization technique, and illustrate our findings by training a binary logistic regression with missing covariates to predict hemorrhagic shock, and Bayesian variants of two Convolutional Neural Networks on benchmark datasets.
A PROOFS OF THE THEORETICAL RESULTS
A.1 PROOF OF THEOREM 1
Theorem. Under H1-H4. For any $K_{\max} \in \mathbb{N}$, let $K$ be an independent discrete r.v. drawn uniformly from $\{0, ..., K_{\max}-1\}$ and define the following quantity:
\Delta(K_{\max}) := 2 n L\, \mathbb{E}\big[ \widetilde{\mathcal{L}}^{(0)}(\theta^{(0)}) - \widetilde{\mathcal{L}}^{(K_{\max})}(\theta^{(K_{\max})}) \big] + 4 L C_r\, \overline{M}^{(K_{\max})} .
Then we have the following non-asymptotic bounds:
\mathbb{E}\big[ \|\nabla \hat e^{(K)}(\theta^{(K)})\|^2 \big] \le \frac{\Delta(K_{\max})}{K_{\max}} \quad \text{and} \quad \mathbb{E}[g_-(\theta^{(K)})] \le \sqrt{\frac{\Delta(K_{\max})}{K_{\max}}} + \frac{C_{gr}}{K_{\max}}\, \overline{M}^{(K_{\max})} .
Proof We begin by recalling the definition
\widetilde{\mathcal{L}}^{(k)}(\theta) := \frac{1}{n}\sum_{i=1}^{n} \widetilde{\mathcal{A}}_i^{k}(\theta) .
Notice that
\widetilde{\mathcal{L}}^{(k+1)}(\theta) = \frac{1}{n}\sum_{i=1}^{n} \widetilde{\mathcal{L}}_i\big(\theta;\theta^{(\tau_i^{k+1})}, \{z_{i,m}^{(\tau_i^{k+1})}\}_{m=1}^{M_{(\tau_i^{k+1})}}\big)
= \widetilde{\mathcal{L}}^{(k)}(\theta) + \frac{1}{n}\Big( \widetilde{\mathcal{L}}_{i_k}\big(\theta;\theta^{(k)}, \{z_{i_k,m}^{(k)}\}_{m=1}^{M_{(k)}}\big) - \widetilde{\mathcal{L}}_{i_k}\big(\theta;\theta^{(\tau_{i_k}^{k})}, \{z_{i_k,m}^{(\tau_{i_k}^{k})}\}_{m=1}^{M_{(\tau_{i_k}^{k})}}\big) \Big) .
Furthermore, we recall that $\widehat{\mathcal{L}}^{(k)}(\theta) := \frac{1}{n}\sum_{i=1}^{n} \widehat{\mathcal{L}}_i(\theta;\theta^{(\tau_i^k)})$ and $\hat e^{(k)}(\theta) := \widehat{\mathcal{L}}^{(k)}(\theta) - \mathcal{L}(\theta)$.
Due to H2, we have
\|\nabla \hat e^{(k)}(\theta^{(k)})\|^2 \le 2 L\, \hat e^{(k)}(\theta^{(k)}) . \qquad (18)
To prove the first bound in (16), using the optimality of $\theta^{(k+1)}$, one has
\widetilde{\mathcal{L}}^{(k+1)}(\theta^{(k+1)}) \le \widetilde{\mathcal{L}}^{(k+1)}(\theta^{(k)}) = \widetilde{\mathcal{L}}^{(k)}(\theta^{(k)}) + \frac{1}{n}\Big( \widetilde{\mathcal{L}}_{i_k}\big(\theta^{(k)};\theta^{(k)}, \{z_{i_k,m}^{(k)}\}_{m=1}^{M_{(k)}}\big) - \widetilde{\mathcal{L}}_{i_k}\big(\theta^{(k)};\theta^{(\tau_{i_k}^{k})}, \{z_{i_k,m}^{(\tau_{i_k}^{k})}\}_{m=1}^{M_{(\tau_{i_k}^{k})}}\big) \Big) . \qquad (19)
Let $\mathcal{F}_k$ be the filtration of random variables up to iteration $k$, i.e., $\{i_{\ell-1}, \{z_{i_{\ell-1},m}^{(\ell-1)}\}_{m=1}^{M_{(\ell-1)}}, \theta^{(\ell)}\}_{\ell=1}^{k}$. We observe that the conditional expectation evaluates to
(`)}k`=1. We observe that the conditional expectation evaluates to
\mathbb{E}_{i_k}\big[ \mathbb{E}\big[ \widetilde{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)}, \{z_{i_k,m}^{(k)}\}_{m=1}^{M_{(k)}}) \,|\, \mathcal{F}_k, i_k \big] \,|\, \mathcal{F}_k \big]
= \mathcal{L}(\theta^{(k)}) + \mathbb{E}_{i_k}\Big[ \mathbb{E}\Big[ \frac{1}{M_{(k)}} \sum_{m=1}^{M_{(k)}} r_{i_k}(\theta^{(k)};\theta^{(k)}, z_{i_k,m}^{(k)}) - \widehat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)}) \,|\, \mathcal{F}_k, i_k \Big] \,|\, \mathcal{F}_k \Big] \le \mathcal{L}(\theta^{(k)}) + \frac{C_r}{\sqrt{M_{(k)}}} ,
where the last inequality is due to H4. Moreover,
\mathbb{E}\big[ \widetilde{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau_{i_k}^{k})}, \{z_{i_k,m}^{(\tau_{i_k}^{k})}\}_{m=1}^{M_{(\tau_{i_k}^{k})}}) \,|\, \mathcal{F}_k \big] = \frac{1}{n}\sum_{i=1}^{n} \widetilde{\mathcal{L}}_i(\theta^{(k)};\theta^{(\tau_i^k)}, \{z_{i,m}^{(\tau_i^k)}\}_{m=1}^{M_{(\tau_i^k)}}) = \widetilde{\mathcal{L}}^{(k)}(\theta^{(k)}) .
Taking the conditional expectations on both sides of (19) and re-arranging terms gives:
\widetilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \mathcal{L}(\theta^{(k)}) \le n\, \mathbb{E}\big[ \widetilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \widetilde{\mathcal{L}}^{(k+1)}(\theta^{(k+1)}) \,|\, \mathcal{F}_k \big] + \frac{C_r}{\sqrt{M_{(k)}}} . \qquad (20)
Proceeding from (20), we observe the following lower bound for the left hand side:
\widetilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \mathcal{L}(\theta^{(k)}) \stackrel{(a)}{=} \widetilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \widehat{\mathcal{L}}^{(k)}(\theta^{(k)}) + \hat e^{(k)}(\theta^{(k)})
\stackrel{(b)}{\ge} \widetilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \widehat{\mathcal{L}}^{(k)}(\theta^{(k)}) + \frac{1}{2L}\, \|\nabla \hat e^{(k)}(\theta^{(k)})\|^2
= \underbrace{\frac{1}{n}\sum_{i=1}^{n} \Big\{ \frac{1}{M_{(\tau_i^k)}} \sum_{m=1}^{M_{(\tau_i^k)}} r_i(\theta^{(k)};\theta^{(\tau_i^k)}, z_{i,m}^{(\tau_i^k)}) - \widehat{\mathcal{L}}_i(\theta^{(k)};\theta^{(\tau_i^k)}) \Big\}}_{:= -\delta^{(k)}(\theta^{(k)})} + \frac{1}{2L}\, \|\nabla \hat e^{(k)}(\theta^{(k)})\|^2 ,
where (a) uses the definition of $\hat e^{(k)}(\theta^{(k)})$ [cf. H1], (b) is due to (18), and we have defined the summation in the last equality as $-\delta^{(k)}(\theta^{(k)})$. Substituting the above into (20) yields
\frac{\|\nabla \hat e^{(k)}(\theta^{(k)})\|^2}{2L} \le n\, \mathbb{E}\big[ \widetilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \widetilde{\mathcal{L}}^{(k+1)}(\theta^{(k+1)}) \,|\, \mathcal{F}_k \big] + \frac{C_r}{\sqrt{M_{(k)}}} + \delta^{(k)}(\theta^{(k)}) . \qquad (21)
Observe the following upper bound on the total expectations:
\mathbb{E}\big[ \delta^{(k)}(\theta^{(k)}) \big] \le \mathbb{E}\Big[ \frac{1}{n}\sum_{i=1}^{n} \frac{C_r}{\sqrt{M_{(\tau_i^k)}}} \Big] ,
which is due to H4. It yields
\mathbb{E}\big[ \|\nabla \hat e^{(k)}(\theta^{(k)})\|^2 \big] \le 2 n L\, \mathbb{E}\big[ \widetilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \widetilde{\mathcal{L}}^{(k+1)}(\theta^{(k+1)}) \big] + \frac{2 L C_r}{\sqrt{M_{(k)}}} + \frac{1}{n}\sum_{i=1}^{n} \mathbb{E}\Big[ \frac{2 L C_r}{\sqrt{M_{(\tau_i^k)}}} \Big] .
Finally, for any $K_{\max} \in \mathbb{N}$, we let $K$ be a discrete r.v. that is uniformly drawn from $\{0, 1, ..., K_{\max}-1\}$. Using H4 and taking total expectations leads to
\mathbb{E}\big[ \|\nabla \hat e^{(K)}(\theta^{(K)})\|^2 \big] = \frac{1}{K_{\max}} \sum_{k=0}^{K_{\max}-1} \mathbb{E}[\|\nabla \hat e^{(k)}(\theta^{(k)})\|^2]
\le \frac{2 n L\, \mathbb{E}[\widetilde{\mathcal{L}}^{(0)}(\theta^{(0)}) - \widetilde{\mathcal{L}}^{(K_{\max})}(\theta^{(K_{\max})})]}{K_{\max}} + \frac{2 L C_r}{K_{\max}} \sum_{k=0}^{K_{\max}-1} \mathbb{E}\Big[ \frac{1}{\sqrt{M_{(k)}}} + \frac{1}{n}\sum_{i=1}^{n} \frac{1}{\sqrt{M_{(\tau_i^k)}}} \Big] . \qquad (22)
For all i ∈ J1, nK, the index $i$ is selected with probability $\frac{1}{n}$ when conditioned independently on the past. We observe:
\mathbb{E}[M_{(\tau_i^k)}^{-1/2}] = \sum_{j=1}^{k} \frac{1}{n}\Big( 1 - \frac{1}{n} \Big)^{j-1} M_{(k-j)}^{-1/2} . \qquad (23)
Taking the sum yields:
\sum_{k=0}^{K_{\max}-1} \mathbb{E}[M_{(\tau_i^k)}^{-1/2}] = \sum_{k=0}^{K_{\max}-1} \sum_{j=1}^{k} \frac{1}{n}\Big( 1 - \frac{1}{n} \Big)^{j-1} M_{(k-j)}^{-1/2} = \sum_{k=0}^{K_{\max}-1} \sum_{l=0}^{k-1} \frac{1}{n}\Big( 1 - \frac{1}{n} \Big)^{k-(l+1)} M_{(l)}^{-1/2}
= \sum_{l=0}^{K_{\max}-1} M_{(l)}^{-1/2} \sum_{k=l+1}^{K_{\max}-1} \frac{1}{n}\Big( 1 - \frac{1}{n} \Big)^{k-(l+1)} \le \sum_{l=0}^{K_{\max}-1} M_{(l)}^{-1/2} , \qquad (24)
where the last inequality is due to upper bounding the geometric series. Plugging this back into (22) yields
\mathbb{E}\big[ \|\nabla \hat e^{(K)}(\theta^{(K)})\|^2 \big] = \frac{1}{K_{\max}} \sum_{k=0}^{K_{\max}-1} \mathbb{E}[\|\nabla \hat e^{(k)}(\theta^{(k)})\|^2] \le \frac{2 n L\, \mathbb{E}[\widetilde{\mathcal{L}}^{(0)}(\theta^{(0)}) - \widetilde{\mathcal{L}}^{(K_{\max})}(\theta^{(K_{\max})})]}{K_{\max}} + \frac{1}{K_{\max}} \sum_{k=0}^{K_{\max}-1} \frac{4 L C_r}{\sqrt{M_{(k)}}} = \frac{\Delta(K_{\max})}{K_{\max}} .
This concludes our proof for the first inequality in (16).
To prove the second inequality of (16), we define the shorthand notations $g^{(k)} := g(\theta^{(k)})$, $g^{(k)}_- := -\min\{0, g^{(k)}\}$ and $g^{(k)}_+ := \max\{0, g^{(k)}\}$. We observe that
g^{(k)} = \inf_{\theta\in\Theta} \frac{\mathcal{L}'(\theta^{(k)}, \theta - \theta^{(k)})}{\|\theta^{(k)} - \theta\|} = \inf_{\theta\in\Theta} \Big\{ \frac{\frac{1}{n}\sum_{i=1}^{n} \widehat{\mathcal{L}}_i'(\theta^{(k)}, \theta - \theta^{(k)};\theta^{(\tau_i^k)})}{\|\theta^{(k)} - \theta\|} - \frac{\langle \nabla \hat e^{(k)}(\theta^{(k)}) \,|\, \theta - \theta^{(k)} \rangle}{\|\theta^{(k)} - \theta\|} \Big\}
\ge -\|\nabla \hat e^{(k)}(\theta^{(k)})\| + \inf_{\theta\in\Theta} \frac{\frac{1}{n}\sum_{i=1}^{n} \widehat{\mathcal{L}}_i'(\theta^{(k)}, \theta - \theta^{(k)};\theta^{(\tau_i^k)})}{\|\theta^{(k)} - \theta\|} ,
where the last inequality is due to the Cauchy-Schwarz inequality and we have defined $\widehat{\mathcal{L}}_i'(\theta, d;\theta^{(\tau_i^k)})$ as the directional derivative of $\widehat{\mathcal{L}}_i(\cdot;\theta^{(\tau_i^k)})$ at $\theta$ along the direction $d$. Moreover, for any $\theta\in\Theta$,
\frac{1}{n}\sum_{i=1}^{n} \widehat{\mathcal{L}}_i'(\theta^{(k)}, \theta - \theta^{(k)};\theta^{(\tau_i^k)}) = \underbrace{\widetilde{\mathcal{L}}^{(k)'}(\theta^{(k)}, \theta - \theta^{(k)})}_{\ge 0} - \widetilde{\mathcal{L}}^{(k)'}(\theta^{(k)}, \theta - \theta^{(k)}) + \frac{1}{n}\sum_{i=1}^{n} \widehat{\mathcal{L}}_i'(\theta^{(k)}, \theta - \theta^{(k)};\theta^{(\tau_i^k)})
\ge \frac{1}{n}\sum_{i=1}^{n} \Big\{ \widehat{\mathcal{L}}_i'(\theta^{(k)}, \theta - \theta^{(k)};\theta^{(\tau_i^k)}) - \frac{1}{M_{(\tau_i^k)}} \sum_{m=1}^{M_{(\tau_i^k)}} r_i'(\theta^{(k)}, \theta - \theta^{(k)};\theta^{(\tau_i^k)}, z_{i,m}^{(\tau_i^k)}) \Big\} ,
where the inequality is due to the optimality of θ(k) and the convexity of L̃(k)(θ) [cf. H3]. Denoting a scaled version of the above term as:
\epsilon^{(k)}(\theta) := \frac{1}{\|\theta^{(k)} - \theta\|}\, \frac{1}{n}\sum_{i=1}^{n} \Big\{ \frac{1}{M_{(\tau_i^k)}} \sum_{m=1}^{M_{(\tau_i^k)}} r_i'(\theta^{(k)}, \theta - \theta^{(k)};\theta^{(\tau_i^k)}, z_{i,m}^{(\tau_i^k)}) - \widehat{\mathcal{L}}_i'(\theta^{(k)}, \theta - \theta^{(k)};\theta^{(\tau_i^k)}) \Big\} .
We have
g^{(k)} \ge -\|\nabla \hat e^{(k)}(\theta^{(k)})\| + \inf_{\theta\in\Theta} \big( -\epsilon^{(k)}(\theta) \big) \ge -\|\nabla \hat e^{(k)}(\theta^{(k)})\| - \sup_{\theta\in\Theta} |\epsilon^{(k)}(\theta)| . \qquad (25)
Since $g^{(k)} = g^{(k)}_+ - g^{(k)}_-$ and $g^{(k)}_+ g^{(k)}_- = 0$, this implies
g^{(k)}_- \le \|\nabla \hat e^{(k)}(\theta^{(k)})\| + \sup_{\theta\in\Theta} |\epsilon^{(k)}(\theta)| . \qquad (26)
Considering the above inequality for $k = K$, i.e., the random index, and taking total expectations on both sides gives
\mathbb{E}[g^{(K)}_-] \le \mathbb{E}[\|\nabla \hat e^{(K)}(\theta^{(K)})\|] + \mathbb{E}[\sup_{\theta\in\Theta} \epsilon^{(K)}(\theta)] .
We note that
\big( \mathbb{E}[\|\nabla \hat e^{(K)}(\theta^{(K)})\|] \big)^2 \le \mathbb{E}[\|\nabla \hat e^{(K)}(\theta^{(K)})\|^2] \le \frac{\Delta(K_{\max})}{K_{\max}} ,
where the first inequality is due to the convexity of $(\cdot)^2$ and Jensen's inequality, and
\mathbb{E}[\sup_{\theta\in\Theta} \epsilon^{(K)}(\theta)] = \frac{1}{K_{\max}} \sum_{k=0}^{K_{\max}-1} \mathbb{E}[\sup_{\theta\in\Theta} \epsilon^{(k)}(\theta)] \stackrel{(a)}{\le} \frac{C_{gr}}{K_{\max}} \sum_{k=0}^{K_{\max}-1} \mathbb{E}\Big[ \frac{1}{n}\sum_{i=1}^{n} M_{(\tau_i^k)}^{-1/2} \Big] \stackrel{(b)}{\le} \frac{C_{gr}}{K_{\max}} \sum_{k=0}^{K_{\max}-1} M_{(k)}^{-1/2} ,
where (a) is due to H4 and (b) is due to (24). This implies
\mathbb{E}[g^{(K)}_-] \le \sqrt{\frac{\Delta(K_{\max})}{K_{\max}}} + \frac{C_{gr}}{K_{\max}} \sum_{k=0}^{K_{\max}-1} M_{(k)}^{-1/2} ,
and concludes the proof of the theorem.
A.2 PROOF OF THEOREM 2
Theorem. Under H1-H4. In addition, assume that {M(k)}k≥0 is a non-decreasing sequence of integers which satisfies ∑∞ k=0M −1/2 (k) <∞. Then:
1. the negative part of the stationarity measure converges a.s. to zero, i.e., $\lim_{k\to\infty} g_-(\theta^{(k)}) \stackrel{a.s.}{=} 0$.
2. the objective value $\mathcal{L}(\theta^{(k)})$ converges a.s. to a finite number $L$, i.e., $\lim_{k\to\infty} \mathcal{L}(\theta^{(k)}) \stackrel{a.s.}{=} L$.
Proof We apply the following auxiliary lemma, whose proof can be found in Appendix A.3, to preserve the readability of the current proof:
Lemma 1. Let $(V_k)_{k\ge 0}$ be a non-negative sequence of random variables such that $\mathbb{E}[V_0] < \infty$. Let $(X_k)_{k\ge 0}$ be a non-negative sequence of random variables and $(E_k)_{k\ge 0}$ be a sequence of random variables such that $\sum_{k=0}^{\infty} \mathbb{E}[|E_k|] < \infty$. If for any $k \ge 1$:
Vk ≤ Vk−1 −Xk−1 + Ek−1 (27)
then:
(i) for all k ≥ 0, E[Vk] <∞ and the sequence (Vk)k≥0 converges a.s. to a finite limit V∞.
(ii) the sequence (E[Vk])k≥0 converges and lim k→∞ E[Vk] = E[V∞].
(iii) the series ∑∞ k=0Xk converges almost surely and ∑∞ k=0 E[Xk] <∞.
We proceed from (19) by re-arranging terms and observing that
\widehat{\mathcal{L}}^{(k+1)}(\theta^{(k+1)}) \le \widehat{\mathcal{L}}^{(k)}(\theta^{(k)}) - \frac{1}{n}\Big( \widehat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau_{i_k}^{k})}) - \widehat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)}) \Big) - \Big( \widetilde{\mathcal{L}}^{(k+1)}(\theta^{(k+1)}) - \widehat{\mathcal{L}}^{(k+1)}(\theta^{(k+1)}) \Big) + \Big( \widetilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \widehat{\mathcal{L}}^{(k)}(\theta^{(k)}) \Big)
+ \frac{1}{n}\Big( \widetilde{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)}, \{z_{i_k,m}^{(k)}\}_{m=1}^{M_{(k)}}) - \widehat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)}) \Big) + \frac{1}{n}\Big( \widehat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau_{i_k}^{k})}) - \widetilde{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau_{i_k}^{k})}, \{z_{i_k,m}^{(\tau_{i_k}^{k})}\}_{m=1}^{M_{(\tau_{i_k}^{k})}}) \Big) .
Our idea is to apply Lemma 1. Under H1, the finite sum of surrogate functions $\widehat{\mathcal{L}}^{(k)}(\theta)$, defined in (15), is lower bounded by a constant $c_k > -\infty$ for any $\theta$. To this end, we observe that
V_k := \widehat{\mathcal{L}}^{(k)}(\theta^{(k)}) - \inf_{k\ge 0} c_k \ge 0 \qquad (28)
is a non-negative random variable.
Secondly, under H1, the following random variable is non-negative:
X_k := \frac{1}{n}\Big( \widehat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau_{i_k}^{k})}) - \widehat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)}) \Big) \ge 0 . \qquad (29)
Thirdly, we define
E_k = -\Big( \widetilde{\mathcal{L}}^{(k+1)}(\theta^{(k+1)}) - \widehat{\mathcal{L}}^{(k+1)}(\theta^{(k+1)}) \Big) + \Big( \widetilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \widehat{\mathcal{L}}^{(k)}(\theta^{(k)}) \Big) + \frac{1}{n}\Big( \widetilde{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)}, \{z_{i_k,m}^{(k)}\}_{m=1}^{M_{(k)}}) - \widehat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)}) \Big) + \frac{1}{n}\Big( \widehat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau_{i_k}^{k})}) - \widetilde{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau_{i_k}^{k})}, \{z_{i_k,m}^{(\tau_{i_k}^{k})}\}_{m=1}^{M_{(\tau_{i_k}^{k})}}) \Big) . \qquad (30)
Note that from the definitions (28), (29), (30), we have $V_{k+1} \le V_k - X_k + E_k$ for any $k \ge 1$. Under H4, we observe that
\mathbb{E}\big[ |\widetilde{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)}, \{z_{i_k,m}^{(k)}\}_{m=1}^{M_{(k)}}) - \widehat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)})| \big] \le C_r M_{(k)}^{-1/2} ,
\mathbb{E}\big[ |\widehat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau_{i_k}^{k})}) - \widetilde{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau_{i_k}^{k})}, \{z_{i_k,m}^{(\tau_{i_k}^{k})}\}_{m=1}^{M_{(\tau_{i_k}^{k})}})| \big] \le C_r\, \mathbb{E}[M_{(\tau_{i_k}^{k})}^{-1/2}] ,
\mathbb{E}\big[ |\widetilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \widehat{\mathcal{L}}^{(k)}(\theta^{(k)})| \big] \le \frac{1}{n}\sum_{i=1}^{n} C_r\, \mathbb{E}\big[ M_{(\tau_i^k)}^{-1/2} \big] . Therefore,
\mathbb{E}\big[ |E_k| \big] \le \frac{C_r}{n}\Big( M_{(k)}^{-1/2} + \mathbb{E}\Big[ M_{(\tau_{i_k}^{k})}^{-1/2} + \sum_{i=1}^{n} \big\{ M_{(\tau_i^k)}^{-1/2} + M_{(\tau_i^{k+1})}^{-1/2} \big\} \Big] \Big) .
Using (24) and the assumption on the sequence $\{M_{(k)}\}_{k\ge 0}$, we obtain that
\sum_{k=0}^{\infty} \mathbb{E}\big[ |E_k| \big] < \frac{C_r}{n}(2 + 2n) \sum_{k=0}^{\infty} M_{(k)}^{-1/2} < \infty .
Therefore, the conclusions of Lemma 1 hold. Precisely, we have $\sum_{k=0}^{\infty} X_k < \infty$ and $\sum_{k=0}^{\infty} \mathbb{E}[X_k] < \infty$ almost surely. Note that this implies
\infty > \sum_{k=0}^{\infty} \mathbb{E}[X_k] = \frac{1}{n} \sum_{k=0}^{\infty} \mathbb{E}\big[ \widehat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau_{i_k}^{k})}) - \widehat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)}) \big] = \frac{1}{n} \sum_{k=0}^{\infty} \mathbb{E}\big[ \widehat{\mathcal{L}}^{(k)}(\theta^{(k)}) - \mathcal{L}(\theta^{(k)}) \big] = \frac{1}{n} \sum_{k=0}^{\infty} \mathbb{E}\big[ \hat e^{(k)}(\theta^{(k)}) \big] .
Since $\hat e^{(k)}(\theta^{(k)}) \ge 0$, the above implies
\lim_{k\to\infty} \hat e^{(k)}(\theta^{(k)}) = 0 \quad a.s. \qquad (31)
and subsequently applying (18), we have $\lim_{k\to\infty} \|\nabla\hat e^{(k)}(\theta^{(k)})\| = 0$ almost surely. Finally, it follows from (18) and (26) that
\lim_{k\to\infty} g^{(k)}_- \le \lim_{k\to\infty} \sqrt{2L}\,\sqrt{\hat e^{(k)}(\theta^{(k)})} + \lim_{k\to\infty} \sup_{\theta\in\Theta} |\epsilon^{(k)}(\theta)| = 0 , \qquad (32)
where the last equality holds almost surely due to the fact that $\sum_{k=0}^{\infty} \mathbb{E}[\sup_{\theta\in\Theta} |\epsilon^{(k)}(\theta)|] < \infty$. This concludes the asymptotic convergence of the MISSO method.
Finally, we prove that $\mathcal{L}(\theta^{(k)})$ converges almost surely. As a consequence of Lemma 1, it is clear that $\{V_k\}_{k\ge 0}$ converges almost surely and so does $\{\widehat{\mathcal{L}}^{(k)}(\theta^{(k)})\}_{k\ge 0}$, i.e., we have $\lim_{k\to\infty} \widehat{\mathcal{L}}^{(k)}(\theta^{(k)}) = L$. Applying (31) implies that
L = lim k→∞ L̂(k)(θ(k)) = lim k→∞ L(θ(k)) a.s.
This shows that L(θ(k)) converges almost surely to L.
A.3 PROOF OF LEMMA 1
Lemma. Let $(V_k)_{k\ge 0}$ be a non-negative sequence of random variables such that $\mathbb{E}[V_0] < \infty$. Let $(X_k)_{k\ge 0}$ be a non-negative sequence of random variables and $(E_k)_{k\ge 0}$ be a sequence of random variables such that $\sum_{k=0}^{\infty} \mathbb{E}[|E_k|] < \infty$. If for any $k \ge 1$:
Vk ≤ Vk−1 −Xk−1 + Ek−1
then:
(i) for all k ≥ 0, E[Vk] <∞ and the sequence (Vk)k≥0 converges a.s. to a finite limit V∞.
(ii) the sequence (E[Vk])k≥0 converges and lim k→∞ E[Vk] = E[V∞].
(iii) the series ∑∞ k=0Xk converges almost surely and ∑∞ k=0 E[Xk] <∞.
Proof We first show that for all k ≥ 0, E[Vk] <∞. Note indeed that:
0 \le V_k \le V_0 - \sum_{j=1}^{k} X_j + \sum_{j=1}^{k} E_j \le V_0 + \sum_{j=1}^{k} E_j , \qquad (33)
showing that $\mathbb{E}[V_k] \le \mathbb{E}[V_0] + \mathbb{E}\big[\sum_{j=1}^{k} E_j\big] < \infty$. Since $0 \le X_k \le V_{k-1} - V_k + E_k$, we also obtain for all $k \ge 0$, $\mathbb{E}[X_k] < \infty$. Moreover, since $\mathbb{E}\big[\sum_{j=1}^{\infty} |E_j|\big] < \infty$, the series $\sum_{j=1}^{\infty} E_j$ converges a.s. We may therefore define:
W_k = V_k + \sum_{j=k+1}^{\infty} E_j . \qquad (34)
Note that $\mathbb{E}[|W_k|] \le \mathbb{E}[V_k] + \mathbb{E}\big[\sum_{j=k+1}^{\infty} |E_j|\big] < \infty$. For all $k \ge 1$, we get:
W_k \le V_{k-1} - X_k + \sum_{j=k}^{\infty} E_j \le W_{k-1} - X_k \le W_{k-1} ,
\mathbb{E}[W_k] \le \mathbb{E}[W_{k-1}] - \mathbb{E}[X_k] . \qquad (35)
Hence the sequences $(W_k)_{k\ge 0}$ and $(\mathbb{E}[W_k])_{k\ge 0}$ are non-increasing. Since for all $k \ge 0$, $W_k \ge -\sum_{j=1}^{\infty} |E_j| > -\infty$ and $\mathbb{E}[W_k] \ge -\sum_{j=1}^{\infty} \mathbb{E}[|E_j|] > -\infty$, the (random) sequence $(W_k)_{k\ge 0}$ converges a.s. to a limit $W_\infty$ and the (deterministic) sequence $(\mathbb{E}[W_k])_{k\ge 0}$ converges to a limit $w_\infty$. Since $|W_k| \le V_0 + \sum_{j=1}^{\infty} |E_j|$, Fatou's lemma implies that:
\mathbb{E}\big[\liminf_{k\to\infty} |W_k|\big] = \mathbb{E}[|W_\infty|] \le \liminf_{k\to\infty} \mathbb{E}[|W_k|] \le \mathbb{E}[V_0] + \sum_{j=1}^{\infty} \mathbb{E}[|E_j|] < \infty , \qquad (36)
showing that the random variable W∞ is integrable.
In the sequel, set $U_k := W_0 - W_k$. By construction we have for all $k \ge 0$, $U_k \ge 0$, $U_k \le U_{k+1}$ and $\mathbb{E}[U_k] \le \mathbb{E}[|W_0|] + \mathbb{E}[|W_k|] < \infty$, and by the monotone convergence theorem, we get:
\lim_{k\to\infty} \mathbb{E}[U_k] = \mathbb{E}\big[\lim_{k\to\infty} U_k\big] . \qquad (37)
Finally, we have:
\lim_{k\to\infty} \mathbb{E}[U_k] = \mathbb{E}[W_0] - w_\infty \quad \text{and} \quad \mathbb{E}\big[\lim_{k\to\infty} U_k\big] = \mathbb{E}[W_0] - \mathbb{E}[W_\infty] , \qquad (38)
showing that $\mathbb{E}[W_\infty] = w_\infty$ and concluding the proof of (ii). Moreover, using (35) we have $W_k \le W_{k-1} - X_k$, which yields:
\sum_{j=1}^{\infty} X_j \le W_0 - W_\infty < \infty , \qquad \sum_{j=1}^{\infty} \mathbb{E}[X_j] \le \mathbb{E}[W_0] - w_\infty < \infty , \qquad (39)
which concludes the proof of the lemma.
B PRACTICAL DETAILS FOR THE BINARY LOGISTIC REGRESSION ON THE TRAUMABASE
B.1 TRAUMABASE DATASET QUANTITATIVE VARIABLES
The list of the 16 quantitative variables we use in our experiments is as follows: age, weight, height, BMI (Body Mass Index), the Glasgow Coma Scale, the Glasgow Coma Scale motor component, the minimum systolic blood pressure, the minimum diastolic blood pressure, the maximum heart rate (pulse) per unit time (usually a minute), the systolic blood pressure at arrival of the ambulance, the diastolic blood pressure at arrival of the ambulance, the heart rate at arrival of the ambulance, the capillary hemoglobin concentration, the oxygen saturation, the fluid expansion colloids, the fluid expansion crystalloids, the pulse pressure for the minimum value of diastolic and systolic blood pressure, and the pulse pressure at arrival of the ambulance.
B.2 METROPOLIS-HASTINGS ALGORITHM
During the simulation step of the MISSO method, the sampling from the target distribution π(zi,mis;θ) := p(zi,mis|zi,obs, yi;θ) is performed using a Metropolis-Hastings (MH) algorithm (Meyn & Tweedie, 2012) with proposal distribution q(zi,mis; δ) := p(zi,mis|zi,obs; δ) where θ = (β,Ω) and δ = (ξ,Σ). The parameters of the Gaussian conditional distribution of zi,mis|zi,obs read:
\xi = \beta_{\rm mis} + \Omega_{\rm mis,obs}\,\Omega_{\rm obs,obs}^{-1}(z_{i,\rm obs} - \beta_{\rm obs}) , \qquad \Sigma = \Omega_{\rm mis,mis} - \Omega_{\rm mis,obs}\,\Omega_{\rm obs,obs}^{-1}\,\Omega_{\rm obs,mis} ,
where we have used the Schur Complement of Ωobs,obs in Ω and noted βmis (resp. βobs) the missing (resp. observed) elements of β. The MH algorithm is summarized in Algorithm 3.
Algorithm 3 MH algorithm
1: Input: initialization $z_{i,\rm mis,0} \sim q(z_{i,\rm mis}; \delta)$
2: for $m = 1, \cdots, M$ do
3: Sample $z_{i,\rm mis,m} \sim q(z_{i,\rm mis}; \delta)$
4: Sample $u \sim \mathcal{U}([0, 1])$
5: Compute the ratio $r = \dfrac{\pi(z_{i,\rm mis,m};\theta)/q(z_{i,\rm mis,m};\delta)}{\pi(z_{i,\rm mis,m-1};\theta)/q(z_{i,\rm mis,m-1};\delta)}$
6: if $u < r$ then
7: Accept $z_{i,\rm mis,m}$
8: else
9: $z_{i,\rm mis,m} \leftarrow z_{i,\rm mis,m-1}$
10: end if
11: end for
12: Output: $z_{i,\rm mis,M}$
B.3 MISSO UPDATE
Choice of surrogate function for MISO: We recall the MISO deterministic surrogate defined in (7):
\widehat{\mathcal{L}}_i(\theta;\bar\theta) = \int_Z \log\big( p_i(z_{i,\rm mis},\bar\theta) / f_i(z_{i,\rm mis},\theta) \big)\, p_i(z_{i,\rm mis},\bar\theta)\,\mu_i(dz_{i,\rm mis}) ,
where $\theta = (\delta, \beta, \Omega)$ and $\bar\theta = (\bar\delta, \bar\beta, \bar\Omega)$.
Surrogate function decomposition: We adapt this surrogate to our missing covariates problem and decompose the term depending on $\theta$, while $\bar\theta$ is held fixed, into the two following parts, leading to
\widehat{\mathcal{L}}_i(\theta;\bar\theta) = -\int_Z \log f_i(z_{i,\rm mis}, z_{i,\rm obs}, \theta)\, p_i(z_{i,\rm mis},\bar\theta)\,\mu_i(dz_{i,\rm mis})
= -\int_Z \log\big[ p_i(y_i|z_{i,\rm mis}, z_{i,\rm obs}, \delta)\, p_i(z_{i,\rm mis}, \beta, \Omega) \big]\, p_i(z_{i,\rm mis},\bar\theta)\,\mu_i(dz_{i,\rm mis})
= \underbrace{-\int_Z \log p_i(y_i|z_{i,\rm mis}, z_{i,\rm obs}, \delta)\, p_i(z_{i,\rm mis},\bar\theta)\,\mu_i(dz_{i,\rm mis})}_{=\widehat{\mathcal{L}}^{(1)}_i(\delta,\bar\theta)} \ \underbrace{-\int_Z \log p_i(z_{i,\rm mis}, \beta, \Omega)\, p_i(z_{i,\rm mis},\bar\theta)\,\mu_i(dz_{i,\rm mis})}_{=\widehat{\mathcal{L}}^{(2)}_i(\beta,\Omega,\bar\theta)} . \qquad (40)
The mean $\beta$ and the covariance $\Omega$ of the latent structure can be estimated in closed form by minimizing the sum of the MISSO surrogates $\widetilde{\mathcal{L}}^{(2)}_i(\beta,\Omega,\bar\theta, \{z_m\}_{m=1}^{M})$, defined as the MC approximations of $\widehat{\mathcal{L}}^{(2)}_i(\beta,\Omega,\bar\theta)$, for all i ∈ JnK.
We thus keep the surrogate L̂(2)i (β,Ω,θ) as it is, and consider the following quadratic approximation of L̂(1)i (δ,θ) to estimate the vector of logistic parameters δ:
\widehat{\mathcal{L}}^{(1)}_i(\bar\delta,\bar\theta) - \Big( \int_Z \nabla \log p_i(y_i|z_{i,\rm mis}, z_{i,\rm obs}, \delta)\big|_{\delta=\bar\delta}\, p_i(z_{i,\rm mis},\bar\theta)\,\mu_i(dz_{i,\rm mis}) \Big)^{\!\top} (\delta - \bar\delta)
- \frac{1}{2} (\delta - \bar\delta)^{\top} \Big( \int_Z \nabla^2 \log p_i(y_i|z_{i,\rm mis}, z_{i,\rm obs}, \bar\delta)\, p_i(z_{i,\rm mis},\bar\theta)\,\mu_i(dz_{i,\rm mis}) \Big) (\delta - \bar\delta) .
Recall that:
\nabla \log p_i(y_i|z_{i,\rm mis}, z_{i,\rm obs}, \delta) = z_i \big( y_i - S(\delta^\top z_i) \big) , \qquad \nabla^2 \log p_i(y_i|z_{i,\rm mis}, z_{i,\rm obs}, \delta) = -z_i z_i^\top \dot S(\delta^\top z_i) ,
where $\dot S(u)$ is the derivative of $S(u)$. Note that $\dot S(u) \le 1/4$ and, since for all i ∈ JnK the $p \times p$ matrix $z_i z_i^\top$ is positive semi-definite, we can assume that:
L1. For all i ∈ JnK and $\epsilon > 0$, there exists, for all $z_i \in Z$, a positive definite matrix $H_i(z_i) := \frac{1}{4}(z_i z_i^\top + \epsilon\, \mathrm{Id})$ such that for all $\delta \in \mathbb{R}^p$, $z_i z_i^\top \dot S(\delta^\top z_i) \preceq H_i(z_i)$.
Then, we use, for all i ∈ JnK, the following surrogate function to estimate δ:
\bar{\mathcal{L}}^{(1)}_i(\delta,\bar\theta) = \widehat{\mathcal{L}}^{(1)}_i(\bar\delta,\bar\theta) - D_i^\top(\delta - \bar\delta) + \frac{1}{2}(\delta - \bar\delta)^\top H_i (\delta - \bar\delta) , \qquad (41)
where:
D_i = \int_Z \nabla \log p_i(y_i|z_{i,\rm mis}, z_{i,\rm obs}, \delta)\big|_{\delta=\bar\delta}\, p_i(z_{i,\rm mis},\bar\theta)\,\mu_i(dz_{i,\rm mis}) , \qquad H_i = \int_Z H_i(z_{i,\rm mis})\, p_i(z_{i,\rm mis},\bar\theta)\,\mu_i(dz_{i,\rm mis}) .
Finally, at iteration k, the total surrogate is:
\widetilde{\mathcal{L}}^{(k)}(\theta) = \frac{1}{n}\sum_{i=1}^{n} \widetilde{\mathcal{L}}_i\big(\theta, \theta^{(\tau_i^k)}, \{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big)
= \frac{1}{n}\sum_{i=1}^{n} \widetilde{\mathcal{L}}^{(2)}_i\big(\beta,\Omega, \theta^{(\tau_i^k)}, \{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big) - \frac{1}{n}\sum_{i=1}^{n} \big(\widetilde{D}_i^{(\tau_i^k)}\big)^{\!\top} (\delta - \delta^{(\tau_i^k)}) + \frac{1}{2n}\sum_{i=1}^{n} (\delta - \delta^{(\tau_i^k)})^\top\, \widetilde{H}_i^{(\tau_i^k)}\, (\delta - \delta^{(\tau_i^k)}) , \qquad (42)
where for all i ∈ JnK:
\widetilde{D}_i^{(\tau_i^k)} = \frac{1}{M_{(\tau_i^k)}} \sum_{m=1}^{M_{(\tau_i^k)}} z_{i,m}^{(\tau_i^k)} \Big( y_i - S\big( (\delta^{(\tau_i^k)})^\top z_{i,m}^{(\tau_i^k)} \big) \Big) , \qquad \widetilde{H}_i^{(\tau_i^k)} = \frac{1}{4 M_{(\tau_i^k)}} \sum_{m=1}^{M_{(\tau_i^k)}} z_{i,m}^{(\tau_i^k)} \big( z_{i,m}^{(\tau_i^k)} \big)^{\!\top} .
Minimizing the total surrogate (42) boils down to performing a quasi-Newton step. It is perhaps sensible to apply some diagonal loading which is perfectly compatible with the surrogate interpretation we just gave.
The logistic parameters are estimated as follows:
\delta^{(k)} = \arg\min_{\delta\in\Theta} \frac{1}{n}\sum_{i=1}^{n} \widetilde{\mathcal{L}}^{(1)}_i\big(\delta, \theta^{(\tau_i^k)}, \{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big) ,
where $\widetilde{\mathcal{L}}^{(1)}_i(\delta, \theta^{(\tau_i^k)}, \{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}})$ is the MC approximation of the MISO surrogate defined in (41), which leads to the following quasi-Newton step:
\delta^{(k)} = \frac{1}{n}\sum_{i=1}^{n} \delta^{(\tau_i^k)} - \big(\widetilde{H}^{(k)}\big)^{-1} \widetilde{D}^{(k)} ,
with $\widetilde{D}^{(k)} = \frac{1}{n}\sum_{i=1}^{n} \widetilde{D}_i^{(\tau_i^k)}$ and $\widetilde{H}^{(k)} = \frac{1}{n}\sum_{i=1}^{n} \widetilde{H}_i^{(\tau_i^k)}$.
MISSO updates: At the k-th iteration, after initializing the latent variables (z_i^{(0)}) for all i ∈ JnK, the MISSO algorithm consists of picking an index i_k uniformly on JnK, completing the observations by sampling a Monte Carlo batch \{z_{i_k,mis,m}^{(k)}\}_{m=1}^{M_{(k)}} of missing values from the conditional distribution p(z_{i_k,mis} \mid z_{i_k,obs}, y_{i_k}; \theta^{(k-1)}) using an MCMC sampler, and computing the estimated parameters as follows:
\beta^{(k)} = \arg\min_{\beta \in \Theta} \frac{1}{n} \sum_{i=1}^{n} \tilde{L}^{(2)}_i\big(\beta, \Omega^{(k)}; \theta^{(\tau_i^k)}, \{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{M_{(\tau_i^k)}} \sum_{m=1}^{M_{(\tau_i^k)}} z_{i,m}^{(k)} ,
\Omega^{(k)} = \arg\min_{\Omega \in \Theta} \frac{1}{n} \sum_{i=1}^{n} \tilde{L}^{(2)}_i\big(\beta^{(k)}, \Omega; \theta^{(\tau_i^k)}, \{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{M_{(\tau_i^k)}} \sum_{m=1}^{M_{(\tau_i^k)}} w_{i,m}^{(k)} ,
\delta^{(k)} = \frac{1}{n} \sum_{i=1}^{n} \delta^{(\tau_i^k)} - \big(\tilde{H}^{(k)}\big)^{-1} \tilde{D}^{(k)} ,   (43)
where z_{i,m}^{(k)} = (z_{i,mis,m}^{(k)}, z_{i,obs}) is composed of a simulated and an observed part, \tilde{D}^{(k)} = \frac{1}{n} \sum_{i=1}^{n} \tilde{D}_i^{(\tau_i^k)}, \tilde{H}^{(k)} = \frac{1}{n} \sum_{i=1}^{n} \tilde{H}_i^{(\tau_i^k)} and w_{i,m}^{(k)} = z_{i,m}^{(k)} (z_{i,m}^{(k)})^{\top} - \beta^{(k)} (\beta^{(k)})^{\top}. Besides, \tilde{L}^{(1)}_i(\delta; \theta, \{z_m\}_{m=1}^{M}) and \tilde{L}^{(2)}_i(\beta, \Omega; \theta, \{z_m\}_{m=1}^{M}) are defined as MC approximations of \hat{L}^{(1)}_i(\delta; \theta) and \hat{L}^{(2)}_i(\beta, \Omega; \theta), for all i ∈ JnK, as components of the surrogate function (40).
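To make the closed-form β and Ω updates in (43) concrete, here is a short sketch. It assumes `Z_completed` is a hypothetical list holding, for each observation i, the array of completed draws z^{(k)}_{i,m}; it is an illustration, not the authors' R implementation.

```python
import numpy as np

def update_beta_omega(Z_completed):
    """Z_completed: list of n arrays, each of shape (M_i, p), stacking the
    completed draws z^{(k)}_{i,m} = (simulated missing part, observed part)."""
    per_obs_means = np.stack([Z.mean(axis=0) for Z in Z_completed])     # (n, p)
    beta = per_obs_means.mean(axis=0)                                   # beta update in (43)
    second_moments = np.stack([(Z.T @ Z) / Z.shape[0] for Z in Z_completed])
    omega = second_moments.mean(axis=0) - np.outer(beta, beta)          # average of w_{i,m}
    return beta, omega
```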
B.4 WALL CLOCK TIME
Table 1 reports the running time of each method plotted in Figure 1, used to train a logistic regression with missing values on the TraumaBase dataset (p = 16 influential quantitative measurements on n = 6384 patients).

The running times are essentially the same, since the per-epoch computational complexity is similar for all methods. We observe a slight delay for the MISSO method with a batch size of 1, as our code, implemented in R, is not fully optimized or parallelized. Yet, as the batch size approaches 100%, the running time matches that of MCEM, which is consistent with the fact that MISSO with a full batch update boils down to the MCEM algorithm.

Figure 3 plots the updated parameters for the logistic regression example against the elapsed time (in seconds).
C PRACTICAL DETAILS FOR THE INCREMENTAL VARIATIONAL INFERENCE
C.1 NEURAL NETWORKS ARCHITECTURE
Bayesian LeNet-5 Architecture: We describe in Table 2 the architecture of the Convolutional Neural Network introduced in (LeCun et al., 1998) and trained on MNIST:
Bayesian ResNet-18 Architecture: We describe in Table 3 the architecture of the Resnet-18 we train on CIFAR-10:
C.2 ALGORITHMS UPDATES
First, we initialize the means \mu_\ell^{(0)} for ℓ ∈ JdK and the variance estimate \sigma^{(0)}. At iteration k, minimizing the sum of stochastic surrogates defined as in (6) and (13) yields the following MISSO update — step (i) pick a function index i_k uniformly on JnK; step (ii) sample a Monte Carlo batch \{z_m^{(k)}\}_{m=1}^{M_{(k)}} from N(0, I); and step (iii) update the parameters as
\mu_\ell^{(k)} = \frac{1}{n} \sum_{i=1}^{n} \mu_\ell^{(\tau_i^k)} - \frac{\gamma}{n} \sum_{i=1}^{n} \hat{\delta}_{\mu_\ell,i}^{(k)} \quad \text{and} \quad \sigma^{(k)} = \frac{1}{n} \sum_{i=1}^{n} \sigma^{(\tau_i^k)} - \frac{\gamma}{n} \sum_{i=1}^{n} \hat{\delta}_{\sigma,i}^{(k)} ,   (44)

where we define the following gradient terms for all i ∈ J1, nK:

\hat{\delta}_{\mu_\ell,i}^{(k)} = -\frac{1}{M_{(k)}} \sum_{m=1}^{M_{(k)}} \nabla_w \log p(y_i \mid x_i, w)\big|_{w = t(\theta^{(k-1)}, z_m^{(k)})} + \nabla_{\mu_\ell} d(\theta^{(k-1)}) ,
\hat{\delta}_{\sigma,i}^{(k)} = -\frac{1}{M_{(k)}} \sum_{m=1}^{M_{(k)}} z_m^{(k)} \nabla_w \log p(y_i \mid x_i, w)\big|_{w = t(\theta^{(k-1)}, z_m^{(k)})} + \nabla_\sigma d(\theta^{(k-1)}) .   (45)
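The sketch below illustrates the bookkeeping behind (44)–(45) for the means (the σ update is analogous): per-index tables store the iterates and Monte Carlo gradients frozen at τ_i^k, and only the picked index is refreshed. The table layout and argument names are assumptions of this illustration; computing the reparametrized gradients themselves is left to the surrounding training code.

```python
import numpy as np

def misso_vi_step(mu_table, grad_table, mu_current, new_grad_ik, i_k, gamma):
    """One MISSO mean update, eq. (44); sigma is handled the same way.
    mu_table:    (n, d) array, row i stores mu^{(tau_i^k)};
    grad_table:  (n, d) array, row i stores the MC gradient (45) frozen at tau_i^k;
    new_grad_ik: (d,) fresh MC gradient for the picked index i_k."""
    mu_table[i_k] = mu_current          # surrogate i_k is rebuilt at the current iterate
    grad_table[i_k] = new_grad_ik       # refresh only the i_k-th stored gradient
    # eq. (44): average stored iterates minus gamma times average stored gradients
    return mu_table.mean(axis=0) - gamma * grad_table.mean(axis=0)
```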
Note that our analysis in the main text requires the parameter to lie in a compact set. For the estimation problem considered here, this can be enforced in practice by restricting the parameters to a ball. For illustrative purposes, in our simulations for the BNN examples we did not implement the algorithms in a way that sticks closely to the compactness requirement; however, we observe empirically that the parameters always remain bounded. The update rules can easily be modified to respect the requirement: for the considered VI problem, the surrogate functions (11) are quadratic, and a simple projection step suffices to ensure boundedness of the iterates.
For all benchmark algorithms, we pick, at iteration k, a function index i_k uniformly on JnK and sample a Monte Carlo batch \{z_m^{(k)}\}_{m=1}^{M_{(k)}} from the standard Gaussian distribution. The updates of the parameters \mu_\ell for all ℓ ∈ JdK and σ break down as follows:

Monte Carlo SAG update: Set

\mu_\ell^{(k)} = \mu_\ell^{(k-1)} - \frac{\gamma}{n} \sum_{i=1}^{n} \hat{\delta}_{\mu_\ell,i}^{(k)} \quad \text{and} \quad \sigma^{(k)} = \sigma^{(k-1)} - \frac{\gamma}{n} \sum_{i=1}^{n} \hat{\delta}_{\sigma,i}^{(k)} ,

where \hat{\delta}_{\mu_\ell,i}^{(k)} = \hat{\delta}_{\mu_\ell,i}^{(k-1)} and \hat{\delta}_{\sigma,i}^{(k)} = \hat{\delta}_{\sigma,i}^{(k-1)} for i ≠ i_k, and both are defined by (45) for i = i_k. The learning rate is set to γ = 10^{-3}.
Bayes by Backprop update: Set

\mu_\ell^{(k)} = \mu_\ell^{(k-1)} - \frac{\gamma}{n} \hat{\delta}_{\mu_\ell,i_k}^{(k)} \quad \text{and} \quad \sigma^{(k)} = \sigma^{(k-1)} - \frac{\gamma}{n} \hat{\delta}_{\sigma,i_k}^{(k)} ,

where the learning rate γ = 10^{-3}.
Monte Carlo Momentum update: Set

\mu_\ell^{(k)} = \mu_\ell^{(k-1)} + \hat{v}_{\mu_\ell}^{(k)} \quad \text{and} \quad \sigma^{(k)} = \sigma^{(k-1)} + \hat{v}_\sigma^{(k)} ,

where

\hat{v}_{\mu_\ell}^{(k)} = \alpha \hat{v}_{\mu_\ell}^{(k-1)} - \frac{\gamma}{n} \hat{\delta}_{\mu_\ell,i_k}^{(k)} \quad \text{and} \quad \hat{v}_\sigma^{(k)} = \alpha \hat{v}_\sigma^{(k-1)} - \frac{\gamma}{n} \hat{\delta}_{\sigma,i_k}^{(k)} ,

and where α and γ, respectively the momentum and the learning rate, are both set to 10^{-3}.
Monte Carlo ADAM update: Set

\mu_\ell^{(k)} = \mu_\ell^{(k-1)} - \frac{\gamma}{n}\, \hat{m}_{\mu_\ell}^{(k)} \big/ \big( \sqrt{\hat{v}_{\mu_\ell}^{(k)}} + \epsilon \big) \quad \text{and} \quad \sigma^{(k)} = \sigma^{(k-1)} - \frac{\gamma}{n}\, \hat{m}_\sigma^{(k)} \big/ \big( \sqrt{\hat{v}_\sigma^{(k)}} + \epsilon \big) ,

where \hat{m}_{\mu_\ell}^{(k)}, \hat{v}_{\mu_\ell}^{(k)} (and likewise for σ) denote the bias-corrected first- and second-moment estimates of the Monte Carlo gradients \hat{\delta}_{\mu_\ell,i_k}^{(k)}, \hat{\delta}_{\sigma,i_k}^{(k)}, computed by the standard ADAM recursions.
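For completeness, a sketch of the standard ADAM moment recursions assumed above, applied to the Monte Carlo gradient of the picked index; the default values of β₁, β₂ and ε are the usual ones and are assumptions of this sketch.

```python
import numpy as np

def adam_moments(m, v, grad, k, beta1=0.9, beta2=0.999):
    """Standard ADAM moment recursions with bias correction (k >= 1);
    `grad` is the Monte Carlo gradient of the picked index i_k."""
    m = beta1 * m + (1.0 - beta1) * grad           # first-moment recursion
    v = beta2 * v + (1.0 - beta2) * grad ** 2      # second-moment recursion
    m_hat = m / (1.0 - beta1 ** k)                 # bias-corrected first moment
    v_hat = v / (1.0 - beta2 ** k)                 # bias-corrected second moment
    return m, v, m_hat, v_hat
```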
Review

This paper proposes a doubly stochastic MM method based on Monte Carlo approximation of stochastic surrogates for solving nonconvex and nonsmooth optimization problems. The proposed method selects a batch of functions at random at each iteration and minimizes the accumulated surrogate functions (which are expressed as an expectation). The authors establish asymptotic and non-asymptotic convergence of the proposed algorithm. They apply their method to inference of a logistic regression model and to variational inference of a Bayesian CNN on real-world datasets.

Weak Points. W1. The authors do not discuss connections with state-of-the-art second-order optimization algorithms such as K-FAC. W2. The proposed algorithm still falls within the MM framework and only a simple convex quadratic surrogate function is considered; the resulting convergence rate is as expected.

Strong Points. S1. The proposed method can be viewed as a combination of MM and a stochastic gradient method with variance reduction, which explains its good performance. S2. The paper contains sufficient details on the choice of the surrogate function and on all the compared methods in the experiments. S3. The authors establish asymptotic and non-asymptotic convergence of the proposed algorithm; I found the technical quality to be very high. S4. Extensive experiments on binary logistic regression with missing values and on Bayesian CNNs have been conducted.
Algorithm 3 MH aglorithm 1: Input: initialization zi,mis,0 ∼ q(zi,mis; δ) 2: for m = 1, · · · ,M do 3: Sample zi,mis,m ∼ q(zi,mis; δ) 4: Sample u ∼ U(J0, 1K) 5: Calculate the ratio r = π(zi,mis,m;θ)/q(zi,mis,m);δ)π(zi,mis,m−1;θ)/q(zi,mis,m−1);δ) 6: if u < r then 7: Accept zi,mis,m 8: else 9: zi,mis,m ← zi,mis,m−1 10: end if 11: end for 12: Output: zi,mis,M
B.3 MISSO UPDATE
Choice of surrogate function for MISO: We recall the MISO deterministic surrogate defined in (7): L̂i(θ;θ) = ∫ Z log ( pi(zi,mis,θ)/fi(zi,mis,θ) ) pi(zi,mis,θ)µi(dzi) .
where θ = (δ, β,Ω) and θ = (δ̄, β̄, Ω̄). We adapt it to our missing covariates problem and decompose the surrogate function defined above into an observed and a missing part.
Surrogate function decomposition We adapt it to our missing covariates problem and decompose the term depending on θ, while θ̄ is fixed, in two following parts leading to
L̂i(θ;θ) =− ∫ Z log fi(zi,mis, zi,obs,θ)pi(zi,mis,θ)µi(dzi,mis)
=− ∫ Z log [pi(yi|zi,mis, zi,obs, δ)pi(zi,mis, β,Ω)] pi(zi,θ)µi(dzi,mis)
=− ∫ Z
log pi(yi|zi,mis, zi,obs, δ)pi(zi,θ)µi(dzi,mis)︸ ︷︷ ︸ =L̂(1)i (δ,θ)
− ∫ Z
log pi(zi,mis, β,Ω)pi(zi,θ)µi(dzi,mis)︸ ︷︷ ︸ =L̂(2)i (β,Ω,θ) .
(40)
The mean β and the covariance Ω of the latent structure can be estimated minimizing the sum of MISSO surrogates L̃(2)i (β,Ω,θ, {zm}Mm=1), defined as MC approximation of L̂ (2) i (β,Ω,θ), for all i ∈ JnK, in closed-form expression.
We thus keep the surrogate L̂(2)i (β,Ω,θ) as it is, and consider the following quadratic approximation of L̂(1)i (δ,θ) to estimate the vector of logistic parameters δ:
L̂(1)i (δ̄,θ)− ∫ Z ∇ log pi(yi|zi,mis, zi,obs, δ) ∣∣ δ=δ̄ pi(zi,mis,θ)µi(dzi,mis)(δ − δ̄)
−(δ − δ̄)/2 ∫ Z ∇2 log pi(yi|zi,mis, zi,obs, δ)pi(zi,mis,θ)pi(zi,mis,θ)µi(dzi,mis)(δ − δ̄)>.
Recall that: ∇ log pi(yi|zi,mis, zi,obs, δ) = zi ( yi − S(δ>zi) ) ,
∇2 log pi(yi|zi,mis, zi,obs, δ) = −ziz>i Ṡ(δ>zi) ,
where Ṡ(u) is the derivative of S(u). Note that Ṡ(u) ≤ 1/4 and since, for all i ∈ JnK, the p × p matrix ziz>i is semi-definite positive we can assume that: L1. For all i ∈ JnK and > 0, there exist, for all zi ∈ Z, a positive definite matrix Hi(zi) := 1 4 (ziz > i + Id) such that for all δ ∈ Rp, −ziz>i Ṡ(δ>zi) ≤ Hi(zi).
Then, we use, for all i ∈ JnK, the following surrogate function to estimate δ:
L̄(1)i (δ,θ) = L̂ (1) i (δ̄,θ)−D > i (δ − δ̄) +
1 2 (δ − δ̄)Hi(δ − δ̄)> , (41)
where:
Di = ∫ Z ∇ log pi(yi|zi,mis, zi,obs, δ) ∣∣ δ=δ̄ pi(zi,mis,θ)µi(dzi,mis) ,
Hi = ∫ Z Hi(zi,mis)pi(zi,mis,θ)µi(dzi,mis) .
Finally, at iteration k, the total surrogate is:
L̃(k)(θ) = 1 n n∑ i=1 L̃i(θ, θ(τ k i ), {zi,m} M (τk i ) m=1 )
= 1
n n∑ i=1 L̃(2)i (β,Ω, θ (τki ), {zi,m} M (τk i ) m=1 )− 1 n n∑ i=1 D̃ (τki ) i (δ − δ (τki ))
+ 1
2n n∑ i=1 (δ − δ(τ k i )) { H̃ (τki ) i } (δ − δ(τ k i ))> ,
(42)
where for all i ∈ JnK:
D̃ (τki ) i = 1
M(τki )
M (τk i )∑
m=1
z (τki ) i,m ( yi − S( ( δ(τ k i ) )> zi,m(τ k i )) ) ,
H̃ (τki ) i =
1
4M(τki )
M (τk i )∑
m=1
z (τki ) i,m (z (τki ) i,m ) > .
Minimizing the total surrogate (42) boils down to performing a quasi-Newton step. It is perhaps sensible to apply some diagonal loading which is perfectly compatible with the surrogate interpretation we just gave.
The logistic parameters are estimated as follows:
δ(k) = arg min δ∈Θ
1
n n∑ i=1 L̃(1)i (δ, θ (τki ), {zi,m} M (τk i ) m=1 ) ,
where L̃(1)i (δ, θ(τ k i ), {zi,m}
M (τk i )
m=1 ) is the MC approximation of the MISO surrogate defined in (41) and which leads to the following quasi-Newton step:
δ(k) = 1
n n∑ i=1 δ(τ k i ) − (H̃(k))−1D̃(k) ,
with D̃(k) = 1n ∑n i=1 D̃ (τki ) i and H̃ (k) = 1n ∑n i=1 H̃ (τki ) i .
MISSO updates: At the k-th iteration, and after the initialization, for all i ∈ JnK, of the latent variables (z(0)i ), the MISSO algorithm consists in picking an index ik uniformly on JnK, completing the observations by sampling a Monte Carlo batch {z(k)ik,mis,m} M(k) m=1 of missing values from the conditional distribution p(zik,mis|zik,obs, yik ;θ(k−1)) using an MCMC sampler and computing the estimated parameters as follows:
β(k) = arg min β∈Θ
1
n n∑ i=1 L̃(2)i (β,Ω (k), θ(τ k i ), {zi,m} M (τk i ) m=1 ) = 1 n n∑ i=1 1 M(τki ) M (τk i )∑ m=1 z (k) i,m ,
Ω(k) = arg min Ω∈Θ
1
n n∑ i=1 L̃(2)i (β (k),Ω, θ(τ k i ), {zi,m} M (τk i ) m=1 ) = 1 n n∑ i=1 1 M(τki ) M (τk i )∑ m=1 w (k) i,m ,
δ(k) = 1
n n∑ i=1 δ(τ k i ) − (H̃(k))−1D̃(k) .
(43)
where z(k)i,m = (z (k) i,mis,m, zi,obs) is composed of a simulated and an observed part, D̃ (k) =
1 n ∑n i=1 D̃ (τki ) i , H̃ (k) = 1n ∑n i=1 H̃ (τki ) i and w (k) i,m = z (k) i,m(z (k) i,m)
> − β(k)(β(k))>. Besides, L̃(1)i (β,Ω,θ, {zm}Mm=1) and L̃ (2) i (β,Ω,θ, {zm}Mm=1) are defined as MC approximation of L̂(1)i (β,Ω,θ) and L̂ (2) i (β,Ω,θ), for all i ∈ JnK as components of the surrogate function (40).
B.4 WALL CLOCK TIME
We provide Table 1, the running time for each method, plotted in Figure 1, employed to train a logistic regression with missing values on the TraumaBase dataset (p = 16 influential quantitative measurements, on n = 6384 patients).
The running times are sensibly the same since for each method the computation complexity per epoch is similar. We remark a slight delay using the MISSO method with a batch size of 1, as our code implemented in R, is not totally optimized and parallelized. Yet, when the batch size tends to 100%, we retrieve the duration of MCEM, which is consistent with the fact that MISSO with a full batch update boils down to the MCEM algorithm.
We plot Figure 3, the updated parameters for the Logistic regression example against the time elapsed (in seconds).
C PRACTICAL DETAILS FOR THE INCREMENTAL VARIATIONAL INFERENCE
C.1 NEURAL NETWORKS ARCHITECTURE
Bayesian LeNet-5 Architecture: We describe in Table 2 the architecture of the Convolutional Neural Network introduced in (LeCun et al., 1998) and trained on MNIST:
Bayesian ResNet-18 Architecture: We describe in Table 3 the architecture of the Resnet-18 we train on CIFAR-10:
C.2 ALGORITHMS UPDATES
First, we initialize the means µ(0)` for ` ∈ JdK and variance estimates σ(0). At iteration k, minimizing the sum of stochastic surrogates defined as in (6) and (13) yields the following MISSO update —
step (i) pick a function index ik uniformly on JnK; step (ii) sample a Monte Carlo batch {z(k)m } M(k) m=1 from N (0, I); and step (iii) update the parameters as
µ (k) ` =
1
n n∑ i=1 µ (τki ) ` − γ n n∑ i=1 δ̂ (k) µ`,i and σ(k) = 1 n n∑ i=1 σ(τ k i ) − γ n n∑ i=1 δ̂ (k) σ,i , (44)
where we define the following gradient terms for all i ∈ J1, nK:
δ̂ (k) µ`,i = − 1 M(k) M(k)∑ m=1 ∇w log p(yi|xi, w) ∣∣∣ w=t(θ(k−1),z (k) m ) +∇µ`d(θ(k−1)) ,
δ̂ (k) σ,i = −
1
M(k) M(k)∑ m=1 z(k)m ∇w log p(yi|xi, w) ∣∣∣ w=t(θ(k−1),z (k) m ) +∇σd(θ(k−1)) .
(45)
Note that our analysis in the main text does require the parameter to be in a compact set. For the current estimation problem considered, this can be enforced in practice by restricting the parameters in a ball. In our simulation for the BNNs example, we did not implement the algorithms that stick closely to the compactness requirement for illustrative purposes. However, we observe empirically that the parameters are always bounded. The update rules can be easily modified to respect the requirement. For the considered VI problem, we recall the surrogate functions (11) are quadratic and indeed a simple projection step suffices to ensure boundedness of the iterates.
For all benchmark algorithms, we pick, at iteration k, a function index ik uniformly on JnK and sample a Monte Carlo batch {z(k)m } M(k) m=1 from the standard Gaussian distribution. The updates of the parameters µ` for all ` ∈ JdK and σ break down as follows: Monte Carlo SAG update: Set
µ (k) ` = µ (k−1) ` −
γ
n n∑ i=1 δ̂ (k) µ`,i and σ(k) = σ(k−1) − γ n n∑ i=1 δ̂ (k) σ,i ,
where δ̂(k)µ`,i = δ̂ (k−1) µ`,i and δ̂(k)σ,i = δ̂ (k−1) σ,i for i 6= ik and are defined by (45) for i = ik. The learning rate is set to γ = 10−3.
Bayes By Backprop update: Set
µ (k) ` = µ (k−1) ` −
γ n δ̂ (k) µ`,ik and σ(k) = σ(k−1) − γ n δ̂ (k) σ,ik ,
where the learning rate γ = 10−3.
Monte Carlo Momentum update: Set
$$\mu_\ell^{(k)} = \mu_\ell^{(k-1)} + \hat{v}_{\mu_\ell}^{(k)} \quad\text{and}\quad \sigma^{(k)} = \sigma^{(k-1)} + \hat{v}_{\sigma}^{(k)},$$
where
$$\hat{v}_{\mu_\ell}^{(k)} = \alpha\,\hat{v}_{\mu_\ell}^{(k-1)} - \frac{\gamma}{n}\,\hat{\delta}_{\mu_\ell,i_k}^{(k)} \quad\text{and}\quad \hat{v}_{\sigma}^{(k)} = \alpha\,\hat{v}_{\sigma}^{(k-1)} - \frac{\gamma}{n}\,\hat{\delta}_{\sigma,i_k}^{(k)},$$
where α and γ, respectively the momentum and the learning rate, are set to 10⁻³.
Monte Carlo ADAM update: Set
$$\mu_\ell^{(k)} = \mu_\ell^{(k-1)} - \frac{\gamma}{n}\,\hat{m}_{\mu_\ell}^{(k)}\Big/\Big(\sqrt{\hat{m}_{\mu_\ell}^{(k)}} + \epsilon\Big) \quad\text{and}\quad \sigma^{(k)} = \sigma^{(k-1)} - \frac{\gamma}{n}\,\hat{m}_{\sigma}^{(k)}\Big/\Big(\sqrt{\hat{m}_{\sigma}^{(k)}} + \epsilon\Big),$$
where m̂_{µ_ℓ}^{(k)} = m^{(k−1
| 1. What is the focus and contribution of the paper on MISSO?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison with other optimizers like MC-SAG and MC-ADAM?
3. Is there anything unclear or confusing about the paper's content, such as the significance of the work or its advantages over previous methods? | Review | Review
This paper proposes MISSO, an extension of MISO that handles surrogate functions expressed as an expectation. MISSO simply uses Monte Carlo samples from the distribution to construct objectives to minimize.
It seems to me that MISSO is just a straightforward extension of MISO; also, the empirical results seem to suggest that the proposed MISSO has no advantage over Monte Carlo variants of other optimizers, such as MC-SAG and MC-ADAM. Thus it is not clear to me what the significant aspect of this work is. |
ICLR | Title
MISSO: Minimization by Incremental Stochastic Surrogate Optimization for Large Scale Nonconvex and Nonsmooth Problems
Abstract
Many constrained, nonconvex and nonsmooth optimization problems can be tackled using the majorization-minimization (MM) method, which alternates between constructing a surrogate function that upper bounds the objective function and minimizing this surrogate. For problems which minimize a finite sum of functions, a stochastic version of the MM method selects a batch of functions at random at each iteration and optimizes the accumulated surrogate. However, in many cases of interest such as variational inference for latent variable models, the surrogate functions are expressed as an expectation. In this contribution, we propose a doubly stochastic MM method based on Monte Carlo approximation of these stochastic surrogates. We establish asymptotic and non-asymptotic convergence of our scheme in a constrained, nonconvex, nonsmooth optimization setting. We apply our new framework to the inference of a logistic regression model with missing data and to the variational inference of Bayesian variants of LeNet-5 and ResNet-18 on the MNIST and CIFAR-10 datasets, respectively.
1 INTRODUCTION
We consider the constrained minimization problem of a finite sum of functions:
$$\min_{\theta\in\Theta}\ \mathcal{L}(\theta) := \frac{1}{n}\sum_{i=1}^n \mathcal{L}_i(\theta), \tag{1}$$
where Θ is a convex, compact, and closed subset of R^p, and for any i ∈ ⟦1, n⟧, the function L_i : R^p → R is bounded from below and is (possibly) nonconvex and nonsmooth. To tackle the optimization problem (1), a popular approach is to apply the majorization-minimization (MM) method, which iteratively minimizes a majorizing surrogate function. A large number of existing procedures fall into this general framework, for instance gradient-based or proximal methods, the Expectation-Maximization (EM) algorithm (McLachlan & Krishnan, 2008), and some variational Bayes inference techniques (Jordan et al., 1999); see for example (Razaviyayn et al., 2013) and (Lange, 2016) and the references therein. When the number of terms n in (1) is large, the vanilla MM method may be intractable because it requires constructing a surrogate function for all the n terms L_i at each iteration. Here, a remedy is to apply the Minimization by Incremental Surrogate Optimization (MISO) method proposed by Mairal (2015), where the surrogate functions are updated incrementally. The MISO method can be interpreted as a combination of MM and ideas which have emerged for variance reduction in stochastic gradient methods (Schmidt et al., 2017). An extended analysis of MISO has been proposed in (Qian et al., 2019).
The success of the MISO method rests upon the efficient minimization of surrogates such as convex functions, see (Mairal, 2015, Section 2.3). A notable application of MISO-like algorithms is described in (Mensch et al., 2017), where the authors build upon the stochastic majorization-minimization framework of Mairal (2015) to introduce a method for sparse matrix factorization. However, in many applications of interest, the natural surrogate functions are intractable, although they are defined as expectations of tractable functions. For instance, this is the case for inference in latent variable models via maximum likelihood (McLachlan & Krishnan, 2008). Another application is
variational inference (Ghahramani, 2015), in which the goal is to approximate the posterior distribution of parameters given the observations; see for example (Neal, 2012; Blundell et al., 2015; Polson et al., 2017; Rezende et al., 2014; Li & Gal, 2017).
This paper fills the gap in the literature by proposing a method called Minimization by Incremental Stochastic Surrogate Optimization (MISSO), designed for the nonconvex and nonsmooth finite sum optimization, with a finite-time convergence guarantee. Our work aims at formulating a generic class of incremental stochastic surrogate methods for nonconvex optimization and building the theory to understand its behavior. In particular, we provide convergence guarantees for stochastic EM and Variational Inference-type methods, under mild conditions. In summary, our contributions are:
• we propose a unifying framework of analysis for incremental stochastic surrogate optimization when the surrogates are defined as expectations of tractable functions. The proposed MISSO method is built on the Monte Carlo integration of the intractable surrogate function, i.e., a doubly stochastic surrogate optimization scheme.
• we present an incremental update of the commonly used variational inference and Monte Carlo EM methods as special cases of our newly introduced framework. The analysis of those two algorithms is thus conducted under this unifying framework of analysis.
• we establish both asymptotic and non-asymptotic convergence for the MISSO method. In particular, the MISSO method converges almost surely to a stationary point and in O(n/ε) iterations to an ε-stationary point, see Theorem 1.
• in essence, we relax the class of surrogate functions used in MISO (Mairal, 2015) and allow for intractable surrogates that can only be evaluated by Monte-Carlo approximations. Working at the crossroads of Optimization and Sampling constitutes what we believe to be the novelty and the technicality of our framework and theoretical results.
In Section 2, we review the techniques for incremental minimization of finite sum functions based on the MM principle; specifically, we review the MISO method (Mairal, 2015), and present a class of surrogate functions expressed as an expectation over a latent space. The MISSO method is then introduced for the latter class of intractable surrogate functions requiring approximation. In Section 3, we provide the asymptotic and non-asymptotic convergence analysis for the MISSO method (and of the MISO (Mairal, 2015) one as a special case). Section 4 presents numerical applications including parameter inference for logistic regression with missing data and variational inference for two types of Bayesian neural networks. The proofs of theoretical results are reported as Supplement.
Notations. We denote ⟦1, n⟧ = {1, . . . , n}. Unless otherwise specified, ‖·‖ denotes the standard Euclidean norm and ⟨· | ·⟩ is the inner product in the Euclidean space. For any function f : Θ → R, f′(θ, d) is the directional derivative of f at θ along the direction d, i.e.,
$$f'(\theta, \mathbf{d}) := \lim_{t\to 0^+} \frac{f(\theta + t\mathbf{d}) - f(\theta)}{t}. \tag{2}$$
The directional derivative is assumed to exist for the functions introduced throughout this paper.
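As a quick numerical illustration of (2) (ours, not from the paper), a one-sided finite difference approximates the directional derivative even for a nonsmooth f:

```python
import numpy as np

def directional_derivative(f, theta, d, t=1e-6):
    """One-sided finite-difference approximation of f'(theta, d) from Eq. (2)."""
    return (f(theta + t * d) - f(theta)) / t

# Example: f(x) = ||x||_1 at theta = 0 along d = (1, -1): f'(0, d) = |1| + |-1| = 2.
f = lambda x: np.abs(x).sum()
print(directional_derivative(f, np.zeros(2), np.array([1.0, -1.0])))  # ~ 2.0
```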
2 INCREMENTAL MINIMIZATION OF FINITE SUM NONCONVEX FUNCTIONS
The objective function in (1) is composed of a finite sum of possibly nonsmooth and nonconvex functions. A popular approach here is to apply the MM method, which tackles (1) through alternating between two steps — (i) minimizing a surrogate function which upper bounds the original objective function; and (ii) updating the surrogate function to tighten the upper bound.
As mentioned in the introduction, the MISO method (Mairal, 2015) is developed as an iterative scheme that only updates the surrogate functions partially at each iteration. Formally, for any i ∈ ⟦1, n⟧, we consider a surrogate function L̂_i(θ; θ̄) which satisfies the assumptions (H1, H2):

H1. For all i ∈ ⟦1, n⟧ and θ̄ ∈ Θ, L̂_i(θ; θ̄) is convex w.r.t. θ, and it holds that
$$\hat{\mathcal{L}}_i(\theta;\bar{\theta}) \geq \mathcal{L}_i(\theta), \quad \forall\,\theta\in\Theta, \tag{3}$$
where the equality holds when θ = θ̄.

H2. For any θ_i ∈ Θ, i ∈ ⟦1, n⟧, and some ε > 0, the difference function $\hat{e}(\theta;\{\theta_i\}_{i=1}^n) := \frac{1}{n}\sum_{i=1}^n \hat{\mathcal{L}}_i(\theta;\theta_i) - \mathcal{L}(\theta)$ is defined for all θ ∈ Θ_ε and differentiable for all θ ∈ Θ, where Θ_ε = {θ ∈ R^d : inf_{θ′∈Θ} ‖θ − θ′‖ < ε} is an ε-neighborhood set of Θ. Moreover, for some constant L, the gradient satisfies
$$\|\nabla\hat{e}(\theta;\{\theta_i\}_{i=1}^n)\|^2 \leq 2L\,\hat{e}(\theta;\{\theta_i\}_{i=1}^n), \quad \forall\,\theta\in\Theta. \tag{4}$$
Algorithm 1 The MISO method (Mairal, 2015).
1: Input: initialization θ^{(0)}.
2: Initialize the surrogate functions as A_i^0(θ) := L̂_i(θ; θ^{(0)}), i ∈ ⟦1, n⟧.
3: for k = 0, 1, ..., K_max do
4:   Pick i_k uniformly from ⟦1, n⟧.
5:   Update A_i^{k+1}(θ) as: A_i^{k+1}(θ) = L̂_i(θ; θ^{(k)}) if i = i_k, and A_i^{k+1}(θ) = A_i^k(θ) otherwise.
6:   Set θ^{(k+1)} ∈ arg min_{θ∈Θ} (1/n) Σ_{i=1}^n A_i^{k+1}(θ).
7: end for
We remark that H1 is a common assumption used for surrogate functions, see (Mairal, 2015, Section 2.3). H2 is satisfied when the difference function ê(θ; {θ_i}_{i=1}^n) is L-smooth, i.e., ê is differentiable on Θ and its gradient ∇ê is L-Lipschitz for all θ ∈ Θ; H2 can also be implied by applying (Razaviyayn et al., 2013, Proposition 1).
The inequality (3) implies L̂_i(θ; θ̄) ≥ L_i(θ) > −∞ for any θ ∈ Θ. The MISO method is an incremental version of the MM method, as summarized by Algorithm 1: it maintains an iteratively updated set of upper-bounding surrogate functions {A_i^k(θ)}_{i=1}^n and updates the iterate by minimizing the average of these surrogates.
In particular, only one out of the n surrogate functions is updated at each iteration [cf. Line 5], and the sum function (1/n) Σ_{i=1}^n A_i^{k+1}(θ) is designed to be 'easy to optimize'; it can, for example, be a sum of quadratic functions. As such, the MISO method is suitable for large-scale optimization, as the computation cost per iteration is independent of n. Under H1, H2, it was shown that the MISO method converges almost surely to a stationary point of (1) (Mairal, 2015, Prop. 3.1).
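As an implementation aside (ours, not from the paper): for quadratic surrogates, the averaged surrogate need not be rebuilt at every iteration — only the i_k-th summand changes, so a running sum can be patched in O(d). A minimal unconstrained sketch, with illustrative names, assuming surrogates of the form A_i(θ) = ⟨g_i, θ − a_i⟩ + (L/2)‖θ − a_i‖²:

```python
import numpy as np

def miso_swap(anchor_sum, grad_sum, anchors, grads, i_k, new_anchor, new_grad, L, n):
    """Patch the running sums when only the i_k-th surrogate is refreshed.
    The minimizer of the averaged quadratic surrogate is mean(a) - mean(g)/L."""
    anchor_sum += new_anchor - anchors[i_k]   # replace the old contribution
    grad_sum += new_grad - grads[i_k]
    anchors[i_k], grads[i_k] = new_anchor, new_grad
    theta_next = anchor_sum / n - grad_sum / (n * L)  # argmin of the average
    return theta_next, anchor_sum, grad_sum
```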
We now consider the case when the surrogate functions L̂_i(θ; θ̄) are intractable. Let Z be a measurable set, p_i : Z × Θ → R₊ a probability density function, r_i : Θ × Θ × Z → R a measurable function, and µ_i a σ-finite measure. We consider surrogate functions which satisfy H1, H2 and which can be expressed as an expectation, i.e.:
$$\hat{\mathcal{L}}_i(\theta;\bar{\theta}) := \int_{\mathsf{Z}} r_i(\theta;\bar{\theta},z_i)\,p_i(z_i;\bar{\theta})\,\mu_i(\mathrm{d}z_i) \qquad \forall\,(\theta,\bar{\theta})\in\Theta\times\Theta. \tag{5}$$
Plugging (5) into the MISO method is not feasible, since the update step in Line 6 would involve the minimization of an expectation. Several motivating examples of (1) are given below.
In this paper, we propose the Minimization by Incremental Stochastic Surrogate Optimization (MISSO) method which replaces the expectation in (5) by Monte Carlo integration and then optimizes the objective function (1) in an incremental manner. Denote by M ∈ N the Monte Carlo batch size and let {zm ∈ Z}Mm=1 be a set of samples. These samples can be drawn (Case 1) i.i.d. from the distribution pi(·;θ) or (Case 2) from a Markov chain with stationary distribution pi(·;θ); see Section 3 for illustrations. To this end, we define the stochastic surrogate as follows:
$$\tilde{\mathcal{L}}_i(\theta;\bar{\theta},\{z_m\}_{m=1}^M) := \frac{1}{M}\sum_{m=1}^M r_i(\theta;\bar{\theta},z_m), \tag{6}$$
and we summarize the proposed MISSO method in Algorithm 2. Compared to the MISO method, there is a crucial difference in that the MISSO method involves two types of randomness. The first level of randomness comes from the selection of i_k in Line 5. The second level stems from the set of Monte Carlo approximated functions Ã_i^k(θ) used in lieu of A_i^k(θ) in Line 8 when optimizing for the next iterate θ^{(k+1)}. We now discuss two applications of the MISSO method.
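A minimal sketch of this doubly stochastic loop (mirroring Algorithm 2 below) may help fix ideas; `sample_latents`, `make_surrogate`, and `argmin_avg` are hypothetical callbacks standing in for the MC sampler of p_i(·; θ), the surrogate constructor of (6), and the inner minimization, respectively:

```python
import random

def misso(theta0, n, M, sample_latents, make_surrogate, argmin_avg, K_max):
    """Sketch of Algorithm 2: maintains one MC surrogate per component.
    `M` is a function k -> Monte Carlo batch size M_(k)."""
    theta = theta0
    surrogates = [make_surrogate(i, theta, sample_latents(i, theta, M(0)))
                  for i in range(n)]
    for k in range(K_max):
        i_k = random.randrange(n)                        # step (i): pick index
        z = sample_latents(i_k, theta, M(k))             # step (ii): MC batch
        surrogates[i_k] = make_surrogate(i_k, theta, z)  # step (iii): refresh
        theta = argmin_avg(surrogates)   # minimize (1/n) * sum of surrogates
    return theta
```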
Algorithm 2 The MISSO method.
1: Input: initialization θ^{(0)}; a sequence of non-negative numbers {M_{(k)}}_{k=0}^∞.
2: For all i ∈ ⟦1, n⟧, draw M_{(0)} Monte Carlo samples with the stationary distribution p_i(·; θ^{(0)}).
3: Initialize the surrogate functions as Ã_i^0(θ) := L̃_i(θ; θ^{(0)}, {z_{i,m}^{(0)}}_{m=1}^{M_{(0)}}), i ∈ ⟦1, n⟧.
4: for k = 0, 1, ..., K_max do
5:   Pick a function index i_k uniformly on ⟦1, n⟧.
6:   Draw M_{(k)} Monte Carlo samples with the stationary distribution p_{i_k}(·; θ^{(k)}).
7:   Update the individual surrogate functions recursively as: Ã_i^{k+1}(θ) = L̃_i(θ; θ^{(k)}, {z_{i,m}^{(k)}}_{m=1}^{M_{(k)}}) if i = i_k, and Ã_i^{k+1}(θ) = Ã_i^k(θ) otherwise.
8:   Set θ^{(k+1)} ∈ arg min_{θ∈Θ} L̃^{(k+1)}(θ) := (1/n) Σ_{i=1}^n Ã_i^{k+1}(θ).
9: end for

Example 1: Maximum Likelihood Estimation for Latent Variable Models. Latent variable models (Bishop, 2006) are constructed by introducing unobserved (latent) variables which help explain the observed data. We consider n independent observations ((y_i, z_i), i ∈ ⟦n⟧), where y_i is observed and z_i is latent. In this incomplete-data framework, define {f_i(z_i, θ), θ ∈ Θ} to be the complete-data likelihood models, i.e., the joint likelihood of the observations and latent variables. For i ∈ ⟦1, n⟧ and θ ∈ Θ, let
$$g_i(\theta) := \int_{\mathsf{Z}} f_i(z_i,\theta)\,\mu_i(\mathrm{d}z_i)$$
denote the incomplete-data likelihood, i.e., the marginal likelihood of the observations y_i. For ease of notation, the dependence on the observations is made implicit. The maximum likelihood (ML) estimation problem sets the individual objective function L_i(θ) to be the i-th negated incomplete-data log-likelihood, L_i(θ) := −log g_i(θ). Assume, without loss of generality, that g_i(θ) ≠ 0 for all θ ∈ Θ. We define by p_i(z_i, θ) := f_i(z_i, θ)/g_i(θ) the conditional distribution of the latent variable z_i given the observations y_i. A surrogate function L̂_i(θ; θ̄) satisfying H1 can be obtained by writing f_i(z_i, θ) = (f_i(z_i, θ)/p_i(z_i, θ̄)) · p_i(z_i, θ̄) and applying the Jensen inequality:
$$\hat{\mathcal{L}}_i(\theta;\bar{\theta}) = \int_{\mathsf{Z}} \underbrace{\log\!\big(p_i(z_i,\bar{\theta})/f_i(z_i,\theta)\big)}_{=\,r_i(\theta;\bar{\theta},z_i)}\, p_i(z_i,\bar{\theta})\,\mu_i(\mathrm{d}z_i). \tag{7}$$
We note that H2 can also be verified for common distribution models. We can apply the MISSO method following the above specification of r_i(θ; θ̄, z_i) and p_i(z_i, θ̄).
Example 2: Variational Inference. Let ((x_i, y_i), i ∈ ⟦1, n⟧) be i.i.d. input–output pairs and w ∈ W ⊆ R^d be a latent variable. Conditioned on the input data x = (x_i, i ∈ ⟦1, n⟧), the joint distribution of y = (y_i, i ∈ ⟦1, n⟧) and w is given by:
$$p(y, w|x) = \pi(w)\prod_{i=1}^n p(y_i|x_i, w). \tag{8}$$
Our goal is to compute the posterior distribution p(w|y, x). In most cases, the posterior distribution p(w|y, x) is intractable and is approximated using a family of parametric distributions {q(w; θ), θ ∈ Θ}. The variational inference (VI) problem (Blei et al., 2017) boils down to minimizing the Kullback–Leibler (KL) divergence between q(w; θ) and the posterior distribution p(w|y, x):
$$\min_{\theta\in\Theta}\ \mathcal{L}(\theta) := \mathrm{KL}\big(q(w;\theta)\,\|\,p(w|y,x)\big) := \mathbb{E}_{q(w;\theta)}\big[\log\big(q(w;\theta)/p(w|y,x)\big)\big]. \tag{9}$$
Using (8), we decompose L(θ) = n^{-1} Σ_{i=1}^n L_i(θ) + const., where:
$$\mathcal{L}_i(\theta) := -\mathbb{E}_{q(w;\theta)}\big[\log p(y_i|x_i,w)\big] + \frac{1}{n}\,\mathbb{E}_{q(w;\theta)}\big[\log q(w;\theta)/\pi(w)\big] := r_i(\theta) + d(\theta). \tag{10}$$
Directly optimizing the finite-sum objective function in (9) can be difficult. First, with n ≫ 1, evaluating the objective function L(θ) requires a full pass over the entire dataset. Second, for some complex models, the expectations in (10) can be intractable even if we assume a simple parametric model for q(w; θ). Assume that L_i is L-smooth. We apply the MISSO method with a quadratic surrogate function defined as:
$$\hat{\mathcal{L}}_i(\theta;\bar{\theta}) := \mathcal{L}_i(\bar{\theta}) + \big\langle\nabla_\theta\mathcal{L}_i(\bar{\theta})\,|\,\theta-\bar{\theta}\big\rangle + \frac{L}{2}\|\theta-\bar{\theta}\|^2, \quad (\theta,\bar{\theta})\in\Theta^2. \tag{11}$$
It is easily checked that the quadratic function L̂_i(θ; θ̄) satisfies H1, H2. To compute the gradient ∇L_i(θ̄), we apply the re-parametrization technique suggested in (Paisley et al., 2012; Kingma & Welling, 2014; Blundell et al., 2015). Let t : R^d × Θ → R^d be a function, differentiable w.r.t. θ ∈ Θ, designed such that the law of w = t(z, θ) is q(·, θ), where z ∼ N_d(0, I). By (Blundell et al., 2015, Proposition 1), the gradient of −r_i(·) in (10) is:
$$\nabla_\theta\,\mathbb{E}_{q(w;\theta)}\big[\log p(y_i|x_i,w)\big] = \mathbb{E}_{z\sim\mathcal{N}_d(0,\mathrm{I})}\Big[\mathrm{J}_\theta^t(z,\theta)\,\nabla_w\log p(y_i|x_i,w)\big|_{w=t(z,\theta)}\Big], \tag{12}$$
where, for each z ∈ R^d, J_θ^t(z, θ) is the Jacobian of the function t(z, ·) with respect to θ, evaluated at θ. In addition, for most cases, the term ∇d(θ) can be evaluated in closed form as the gradient of the KL divergence between the prior distribution π(·) and the variational candidate q(·, θ). This yields the integrand
$$r_i(\theta;\bar{\theta},z) := \Big\langle\nabla_\theta d(\bar{\theta}) - \mathrm{J}_\theta^t(z,\bar{\theta})\,\nabla_w\log p(y_i|x_i,w)\big|_{w=t(z,\bar{\theta})}\,\Big|\,\theta-\bar{\theta}\Big\rangle + \frac{L}{2}\|\theta-\bar{\theta}\|^2. \tag{13}$$
Finally, using (11) and (13), the surrogate function (6) is given by L̃_i(θ; θ̄, {z_m}_{m=1}^M) := M^{-1} Σ_{m=1}^M r_i(θ; θ̄, z_m), where {z_m}_{m=1}^M are i.i.d. samples drawn from N(0, I).
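A minimal sketch (ours) of the resulting Monte Carlo gradient estimate for the mean-field Gaussian case q(w; θ) = N(µ, σ²I), where t(z, θ) = µ + σz, so the Jacobian w.r.t. µ is the identity and w.r.t. the scalar σ is zᵀ; `grad_log_lik` is a hypothetical callback returning ∇_w log p(y_i|x_i, w):

```python
import numpy as np

def reparam_grad(mu, sigma, grad_log_lik, M):
    """MC estimate of (12) for q = N(mu, sigma^2 I), with w = mu + sigma * z."""
    g_mu = np.zeros_like(mu)
    g_sigma = 0.0
    for _ in range(M):
        z = np.random.randn(*mu.shape)
        g = grad_log_lik(mu + sigma * z)  # grad of log-lik at w = t(z, theta)
        g_mu += g / M                     # Jacobian wrt mu is the identity
        g_sigma += float(z @ g) / M       # Jacobian wrt scalar sigma is z^T
    return g_mu, g_sigma
```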
3 CONVERGENCE ANALYSIS
We now provide asymptotic and non-asymptotic convergence results of our method. Assume:
H3. For all i ∈ ⟦1, n⟧, θ̄ ∈ Θ, and z_i ∈ Z, the function r_i(·; θ̄, z_i) is convex on Θ and lower bounded.
We are particularly interested in the constrained optimization setting where Θ is a bounded set. To this end, we control the supremum norm of the MC approximation, introduced in (6), as:

H4. For the samples {z_{i,m}}_{m=1}^M, there exist finite constants C_r and C_gr such that
$$C_r := \sup_{\bar{\theta}\in\Theta}\ \sup_{M>0}\ \frac{1}{\sqrt{M}}\,\mathbb{E}_{\bar{\theta}}\Bigg[\sup_{\theta\in\Theta}\Bigg|\sum_{m=1}^M\Big\{r_i(\theta;\bar{\theta},z_{i,m}) - \hat{\mathcal{L}}_i(\theta;\bar{\theta})\Big\}\Bigg|\Bigg],$$
$$C_{gr} := \sup_{\bar{\theta}\in\Theta}\ \sup_{M>0}\ \sqrt{M}\,\mathbb{E}_{\bar{\theta}}\Bigg[\sup_{\theta\in\Theta}\Bigg|\frac{1}{M}\sum_{m=1}^M\frac{\hat{\mathcal{L}}_i'(\theta,\theta-\bar{\theta};\bar{\theta}) - r_i'(\theta,\theta-\bar{\theta};\bar{\theta},z_{i,m})}{\|\theta-\bar{\theta}\|}\Bigg|^2\Bigg]$$
for all i ∈ ⟦1, n⟧, where we denote by E_{θ̄}[·] the expectation w.r.t. a Markov chain {z_{i,m}}_{m=1}^M with initial distribution ξ_i(·; θ̄), transition kernel Π_{i,θ̄}, and stationary distribution p_i(·; θ̄).

Some intuition behind the controlling terms: it is common in statistical and optimization problems to deal with the manipulation and control of random variables indexed by sets with an infinite number of elements. Here, the controlled random variable is an image of a continuous function defined as r_i(θ; θ̄, z_{i,m}) − L̂_i(θ; θ̄) for all z ∈ Z and for fixed (θ, θ̄) ∈ Θ². To characterize such control, we have recourse to the notion of metric entropy (or bracketing number) as developed in (Van der Vaart, 2000; Vershynin, 2018; Wainwright, 2019). A collection of results from those references gives intuition behind our assumption H4, which is classical in empirical processes. In (Vershynin, 2018, Theorem 8.2.3), the authors recall the uniform law of large numbers:
$$\mathbb{E}\Bigg[\sup_{f\in\mathcal{F}}\Bigg|\frac{1}{M}\sum_{m=1}^M f(z_{i,m}) - \mathbb{E}[f(z_i)]\Bigg|\Bigg] \leq \frac{CL}{\sqrt{M}},$$
where F is a class of L-Lipschitz functions. Moreover, in (Vershynin, 2018, Theorem 8.1.3) and (Wainwright, 2019, Theorem 5.22), the application of the Dudley inequality yields:
$$\mathbb{E}\Big[\sup_{f\in\mathcal{F}}|X_f - X_0|\Big] \leq \frac{1}{\sqrt{M}}\int_0^1\sqrt{\log\mathcal{N}(\mathcal{F},\|\cdot\|_\infty,\varepsilon)}\,\mathrm{d}\varepsilon,$$
where N(F, ‖·‖_∞, ε) is the bracketing number and ε denotes the level of approximation (the bracketing number goes to infinity when ε → 0). Finally, in (Van der Vaart, 2000, p. 271), N(F, ‖·‖_∞, ε) is bounded from above for a class of parametric functions F = {f_θ : θ ∈ Θ}:
$$\mathcal{N}(\mathcal{F},\|\cdot\|_\infty,\varepsilon) \leq K\Big(\frac{\mathrm{diam}\,\Theta}{\varepsilon}\Big)^d, \quad\text{for all } 0 < \varepsilon < \mathrm{diam}\,\Theta.$$
The authors acknowledge that those bounds are a dramatic manifestation of the curse of dimensionality happening when sampling is needed. Nevertheless, the dependence on the dimension highly depends on the class of surrogate functions F used in our scheme, as smaller bounds on these controlling terms can be derived for simpler class of functions, such as quadratic functions.
Stationarity measure. As problem (1) is a constrained optimization task, we consider the following stationarity measure:
$$g(\theta) := \inf_{\bar{\theta}\in\Theta}\frac{\mathcal{L}'(\theta,\bar{\theta}-\theta)}{\|\bar{\theta}-\theta\|} \quad\text{and}\quad g(\theta) = g_+(\theta) - g_-(\theta), \tag{14}$$
where g_+(θ) := max{0, g(θ)} and g_−(θ) := −min{0, g(θ)} denote the positive and negative parts of g(θ), respectively. Note that θ is a stationary point if and only if g_−(θ) = 0 (Fletcher et al., 2002). Furthermore, suppose that the sequence {θ^{(k)}}_{k≥0} has a limit point θ̄ that is a stationary point; then one has lim_{k→∞} g_−(θ^{(k)}) = 0. Thus, the sequence {θ^{(k)}}_{k≥0} is said to satisfy an asymptotic stationary point condition. This is equivalent to (Mairal, 2015, Definition 2.4).

To facilitate our analysis, we define τ_i^k as the iteration index where the i-th function is last accessed in the MISSO method prior to iteration k (for instance, τ_{i_k}^{k+1} = k). We define:
$$\hat{\mathcal{L}}^{(k)}(\theta) := \frac{1}{n}\sum_{i=1}^n \hat{\mathcal{L}}_i\big(\theta;\theta^{(\tau_i^k)}\big), \qquad \hat{e}^{(k)}(\theta) := \hat{\mathcal{L}}^{(k)}(\theta) - \mathcal{L}(\theta), \qquad \overline{M} := \sum_{j=0}^{K_{\max}-1} M_{(j)}^{-1/2}. \tag{15}$$
We first establish a non-asymptotic convergence rate for the MISSO method:

Theorem 1. Under H1–H4, for any K_max ∈ N, let K be an independent discrete r.v. drawn uniformly from {0, ..., K_max − 1} and define the following quantity:
$$\Delta(K_{\max}) := 2nL\,\mathbb{E}\big[\tilde{\mathcal{L}}^{(0)}(\theta^{(0)}) - \tilde{\mathcal{L}}^{(K_{\max})}(\theta^{(K_{\max})})\big] + 4LC_r\overline{M}.$$
Then we have the following non-asymptotic bounds:
$$\mathbb{E}\big[\|\nabla\hat{e}^{(K)}(\theta^{(K)})\|^2\big] \leq \frac{\Delta(K_{\max})}{K_{\max}} \quad\text{and}\quad \mathbb{E}\big[g_-(\theta^{(K)})\big] \leq \sqrt{\frac{\Delta(K_{\max})}{K_{\max}}} + \frac{C_{gr}}{K_{\max}}\overline{M}. \tag{16}$$
Note that Δ(K_max) is finite for any K_max ∈ N.

Iteration Complexity of MISSO. As expected, the MISSO method converges to a stationary point of (1) asymptotically, at the sublinear rate E[g_−^{(K)}] ≤ O(√(Δ(K_max)/K_max)). In other terms, MISSO requires O(nL/ε) iterations to reach an ε-stationary point when stationarity is characterized by the suboptimality condition E[‖g_−(θ^{(K)})‖²] ≤ ε. Note that this stationarity criterion is similar to the usual quantity used in stochastic nonconvex optimization, i.e., E[‖∇L(θ^{(K)})‖²]. In fact, when the optimization problem (1) is unconstrained, i.e., Θ = R^p, then E[g(θ^{(K)})] = E[∇L(θ^{(K)})].
Sample Complexity of MISSO. Regarding the sample complexity of our method, setting M_{(k)} = k²/n² as a non-decreasing sequence of integers satisfying Σ_{k=0}^∞ M_{(k)}^{-1/2} < ∞, in order to keep Δ(K_max) ≍ nL, the MISSO method requires Σ_{k=0}^{nL/ε} k²/n² ≈ nL³/ε³ samples to reach an ε-stationary point.
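As a quick back-of-the-envelope check of this count (our arithmetic, not a statement from the paper), with K_max = nL/ε iterations:
$$\sum_{k=0}^{K_{\max}} M_{(k)} = \sum_{k=0}^{nL/\epsilon} \frac{k^2}{n^2} \approx \frac{1}{3n^2}\left(\frac{nL}{\epsilon}\right)^3 = \frac{nL^3}{3\epsilon^3} = O\!\left(\frac{nL^3}{\epsilon^3}\right).$$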
Furthermore, we remark that the MISO method can be analyzed through Theorem 1 as a special case of the MISSO method satisfying C_r = C_gr = 0. In this case, while the asymptotic convergence is well known from (Mairal, 2015) [cf. H4], Eq. (16) gives a non-asymptotic rate of E[g_−^{(K)}] ≤ O(√(nL/K_max)), which is, to the best of our knowledge, new. Next, we show that, under an additional assumption on the sequence of batch sizes M_{(k)}, the MISSO method converges almost surely to a stationary point:
Theorem 2. Under H1–H4, assume in addition that {M_{(k)}}_{k≥0} is a non-decreasing sequence of integers which satisfies Σ_{k=0}^∞ M_{(k)}^{-1/2} < ∞. Then:
1. the negative part of the stationarity measure converges a.s. to zero, i.e., lim_{k→∞} g_−(θ^{(k)}) = 0 a.s.;
2. the objective value L(θ^{(k)}) converges a.s. to a finite number L̄, i.e., lim_{k→∞} L(θ^{(k)}) = L̄ a.s.
In particular, the first result above shows that the sequence {θ(k)}k≥0 produced by the MISSO method satisfies an asymptotic stationary point condition.
4 NUMERICAL EXPERIMENTS
4.1 BINARY LOGISTIC REGRESSION WITH MISSING VALUES
This application follows Example 1 described in Section 2. We consider a binary regression setup ((y_i, z_i), i ∈ ⟦n⟧), where y_i ∈ {0, 1} is a binary response and z_i = (z_{i,j} ∈ R, j ∈ ⟦p⟧) is a covariate vector. The vector of covariates z_i = [z_{i,mis}, z_{i,obs}] is not fully observed, where we denote by z_{i,mis} the missing values and by z_{i,obs} the observed covariates. It is assumed that (z_i, i ∈ ⟦n⟧) are i.i.d. and marginally distributed according to N(β, Ω), where β ∈ R^p and Ω is a positive definite p × p matrix. We define the conditional distribution of the observations y_i given z_i = (z_{i,mis}, z_{i,obs}) as:
$$p_i(y_i|z_i) = S(\delta^\top\bar{z}_i)^{y_i}\big(1 - S(\delta^\top\bar{z}_i)\big)^{1-y_i}, \tag{17}$$
where, for u ∈ R, S(u) = 1/(1 + e^{−u}), δ = (δ_0, ..., δ_p) are the logistic parameters, and z̄_i = (1, z_i). Here, θ = (δ, β, Ω) is the parameter to estimate. For i ∈ ⟦n⟧, the complete log-likelihood reads:
$$\log f_i(z_{i,\mathrm{mis}},\theta) \propto y_i\,\delta^\top\bar{z}_i - \log\big(1+\exp(\delta^\top\bar{z}_i)\big) - \frac{1}{2}\log(|\Omega|) - \frac{1}{2}\mathrm{Tr}\big(\Omega^{-1}(z_i-\beta)(z_i-\beta)^\top\big).$$
Fitting a logistic regression model on the TraumaBase dataset: We apply the MISSO method to fit a logistic regression model on the TraumaBase (http://traumabase.eu) dataset, which consists of data collected from 15 trauma centers in France, covering measurements on patients from the initial to the last stage of trauma. This dataset includes information from the first stage of the trauma, namely initial observations at the patient's accident site, to the last stage, intensive care at the hospital, and contains more than 200 variables measured for more than 7 000 patients. Since the dataset considered is heterogeneous — coming from multiple sources with frequently missed entries — we apply the latent data model described in (17) to predict the risk of a severe hemorrhage, which is one of the main causes of death after a major trauma.
Similar to (Jiang et al., 2018), we select p = 16 influential quantitative measurements, on n = 6384 patients. For the Monte Carlo sampling of zi,mis, required while running MISSO, we run a Metropolis-Hastings algorithm with the target distribution p(·|zi,obs, yi;θ(k)).
We compare in Figure 1 the convergence behavior of the estimated parameters δ and β using SAEM (Delyon et al., 1999) (with stepsize γ_k = 1/k^α where α = 0.6 after tuning), MCEM (Wei & Tanner, 1990), and the proposed MISSO method. For the MISSO method, we set the batch size to M_{(k)} = 10 + k² and we examine the effect of selecting different numbers of functions in Line 5 of the method — the default setting with 1 (MISSO), and with 10% (MISSO10) and 50% (MISSO50) mini-batches per iteration. From Figure 1, the MISSO method converges to a static value in fewer epochs than the MCEM and SAEM methods. It is worth noting that the difference among the MISSO runs for different numbers of selected functions demonstrates a variance–cost tradeoff. Wall-clock times are similar for all methods and are reported in the appendix for completeness.
4.2 TRAINING BAYESIAN CNN USING MISSO
This application follows Example 2 described in Section 2. We use variational inference and the ELBO loss (10) to fit Bayesian neural networks on different datasets. At iteration k, minimizing the sum of stochastic surrogates defined as in (6) and (13) yields the following MISSO update — step (i) pick a function index i_k uniformly on ⟦n⟧; step (ii) sample a Monte Carlo batch {z_m^{(k)}}_{m=1}^{M_{(k)}} from N(0, I); and step (iii) update the parameters, with w̃ = t(θ^{(k−1)}, z_m^{(k)}), as
$$\mu_\ell^{(k)} = \hat{\mu}_\ell^{(\tau^k)} - \frac{\gamma}{n}\sum_{i=1}^n \hat{\delta}_{\mu_\ell,i}^{(k)}, \qquad \hat{\delta}_{\mu_\ell,i_k}^{(k)} = -\frac{1}{M_{(k)}}\sum_{m=1}^{M_{(k)}}\nabla_w\log p(y_{i_k}|x_{i_k},\tilde{w}) + \nabla_{\mu_\ell} d(\theta^{(k-1)}),$$
where $\hat{\mu}_\ell^{(\tau^k)} = \frac{1}{n}\sum_{i=1}^n \mu_\ell^{(\tau_i^k)}$ and $d(\theta) = n^{-1}\sum_{\ell=1}^d\big(-\log(\sigma) + (\sigma^2+\mu_\ell^2)/2 - 1/2\big)$.
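For reference, a small sketch (ours, not from the paper) of the closed-form term d(θ) above, i.e., KL(N(µ, σ²I) ‖ N(0, I)) scaled by 1/n:

```python
import numpy as np

def kl_term(mu, sigma, n):
    """d(theta) = (1/n) * sum_l ( -log(sigma) + (sigma^2 + mu_l^2)/2 - 1/2 ),
    the KL divergence between N(mu, sigma^2 I) and N(0, I), divided by n."""
    return float(np.sum(-np.log(sigma) + (sigma**2 + mu**2) / 2.0 - 0.5)) / n
```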
Bayesian LeNet-5 on MNIST (LeCun et al., 1998): We apply the MISSO method to fit a Bayesian variant of LeNet-5 (LeCun et al., 1998). We train this network on the MNIST dataset (LeCun, 1998). The training set is composed of n = 55 000 handwritten digits, as 28 × 28 images. Each image is labelled with its corresponding number (from zero to nine). Under the prior distribution π, see (8), the weights are assumed independent and identically distributed according to N(0, 1). We also assume that q(·; θ) ≡ N(µ, σ²I). The variational posterior parameters are thus θ = (µ, σ), where µ = (µ_ℓ, ℓ ∈ ⟦d⟧) and d is the number of weights in the neural network. We use the re-parametrization w = t(θ, z) = µ + σz with z ∼ N(0, I). Bayesian ResNet-18 (He et al., 2016) on CIFAR-10 (Krizhevsky et al., 2012): We also train the Bayesian variant of the ResNet-18 neural network introduced in (He et al., 2016) on CIFAR-10. The latter dataset is composed of n = 60 000 colour images of size 32 × 32 in 10 classes, with 6 000 images per class. As in the previous example, the weights are assumed independent and identically distributed according to N(0, I). Standard hyperparameter values found in the literature, such as the annealing constant or the number of MC samples, were used for the benchmark methods. For efficiency purposes and lower variance, the Flipout estimator (Wen et al., 2018) is used.
We compare the convergence of the Monte Carlo variants of the following state-of-the-art optimization algorithms — the ADAM (Kingma & Ba, 2015), Momentum (Sutskever et al., 2013), and SAG (Schmidt et al., 2017) methods — versus Bayes by Backprop (BBB) (Blundell et al., 2015) and our proposed MISSO method. For all these methods, the loss function (10) and its gradients were computed by Monte Carlo integration based on the reparametrization described above. The mini-batch of indices and the MC sample size are respectively set to 128 and M_{(k)} = k. The learning rates are set to 10⁻³ for LeNet-5 and 10⁻⁴ for ResNet-18.
Figure 2(a) shows the convergence of the negated evidence lower bound against the number of passes over the data (one pass represents an epoch). As observed, the proposed MISSO method outperforms Bayes by Backprop and Momentum, while similar convergence rates are observed for MISSO, ADAM, and SAG in our experiment on the MNIST dataset using a Bayesian variant of LeNet-5. On the other hand, the experiment conducted on CIFAR-10 (Figure 2(b)) using a much larger network, i.e., a Bayesian variant of ResNet-18, showcases the need for well-tuned adaptive methods to reach a lower training loss (and faster). Our MISSO method is similar to the Monte Carlo variant of ADAM but slower than the Adagrad optimizer. Recall that the purpose of this paper is to provide a common class of optimizers, such as VI, in order to study their convergence behavior, and not to introduce a novel method outperforming the baseline methods. We report wall-clock times for all methods in the appendix for completeness.
5 CONCLUSION
We present a unifying framework for minimizing a nonconvex and nonsmooth finite-sum objective function using incremental surrogates when the latter functions are expressed as an expectation and are intractable. Our approach covers a large class of nonconvex applications in machine learning, such as logistic regression with missing values and variational inference. We provide both finite-time and asymptotic guarantees of our incremental stochastic surrogate optimization technique, and illustrate our findings by training a binary logistic regression with missing covariates to predict hemorrhagic shock and Bayesian variants of two convolutional neural networks on benchmark datasets.
A PROOFS OF THE THEORETICAL RESULTS
A.1 PROOF OF THEOREM 1
Theorem. Under H1–H4, for any K_max ∈ N, let K be an independent discrete r.v. drawn uniformly from {0, ..., K_max − 1} and define the following quantity:
$$\Delta(K_{\max}) := 2nL\,\mathbb{E}\big[\tilde{\mathcal{L}}^{(0)}(\theta^{(0)}) - \tilde{\mathcal{L}}^{(K_{\max})}(\theta^{(K_{\max})})\big] + 4LC_r\sum_{k=0}^{K_{\max}-1} M_{(k)}^{-1/2}.$$
Then we have the following non-asymptotic bounds:
$$\mathbb{E}\big[\|\nabla\hat{e}^{(K)}(\theta^{(K)})\|^2\big] \leq \frac{\Delta(K_{\max})}{K_{\max}} \quad\text{and}\quad \mathbb{E}\big[g_-(\theta^{(K)})\big] \leq \sqrt{\frac{\Delta(K_{\max})}{K_{\max}}} + \frac{C_{gr}}{K_{\max}}\sum_{k=0}^{K_{\max}-1} M_{(k)}^{-1/2}.$$
Proof. We begin by recalling the definition
$$\tilde{\mathcal{L}}^{(k)}(\theta) := \frac{1}{n}\sum_{i=1}^n \tilde{A}_i^k(\theta).$$
Notice that
$$\tilde{\mathcal{L}}^{(k+1)}(\theta) = \frac{1}{n}\sum_{i=1}^n \tilde{\mathcal{L}}_i\big(\theta;\theta^{(\tau_i^{k+1})},\{z_{i,m}^{(\tau_i^{k+1})}\}_{m=1}^{M_{(\tau_i^{k+1})}}\big) = \tilde{\mathcal{L}}^{(k)}(\theta) + \frac{1}{n}\Big(\tilde{\mathcal{L}}_{i_k}\big(\theta;\theta^{(k)},\{z_{i_k,m}^{(k)}\}_{m=1}^{M_{(k)}}\big) - \tilde{\mathcal{L}}_{i_k}\big(\theta;\theta^{(\tau_{i_k}^k)},\{z_{i_k,m}^{(\tau_{i_k}^k)}\}_{m=1}^{M_{(\tau_{i_k}^k)}}\big)\Big).$$
Furthermore, we recall that $\hat{\mathcal{L}}^{(k)}(\theta) := \frac{1}{n}\sum_{i=1}^n \hat{\mathcal{L}}_i(\theta;\theta^{(\tau_i^k)})$ and $\hat{e}^{(k)}(\theta) := \hat{\mathcal{L}}^{(k)}(\theta) - \mathcal{L}(\theta)$. Due to H2, we have
$$\|\nabla\hat{e}^{(k)}(\theta^{(k)})\|^2 \leq 2L\,\hat{e}^{(k)}(\theta^{(k)}). \tag{18}$$
To prove the first bound in (16), using the optimality of θ^{(k+1)}, one has
$$\tilde{\mathcal{L}}^{(k+1)}(\theta^{(k+1)}) \leq \tilde{\mathcal{L}}^{(k+1)}(\theta^{(k)}) = \tilde{\mathcal{L}}^{(k)}(\theta^{(k)}) + \frac{1}{n}\Big(\tilde{\mathcal{L}}_{i_k}\big(\theta^{(k)};\theta^{(k)},\{z_{i_k,m}^{(k)}\}_{m=1}^{M_{(k)}}\big) - \tilde{\mathcal{L}}_{i_k}\big(\theta^{(k)};\theta^{(\tau_{i_k}^k)},\{z_{i_k,m}^{(\tau_{i_k}^k)}\}_{m=1}^{M_{(\tau_{i_k}^k)}}\big)\Big). \tag{19}$$
Let $\mathcal{F}_k$ be the filtration of random variables up to iteration k, i.e., $\{i_{\ell-1},\{z_{i_{\ell-1},m}^{(\ell-1)}\}_{m=1}^{M_{(\ell-1)}},\theta^{(\ell)}\}_{\ell=1}^k$. We observe that the conditional expectation evaluates to
$$\mathbb{E}_{i_k}\Big[\mathbb{E}\big[\tilde{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)},\{z_{i_k,m}^{(k)}\}_{m=1}^{M_{(k)}})\,\big|\,\mathcal{F}_k,i_k\big]\,\Big|\,\mathcal{F}_k\Big] = \mathcal{L}(\theta^{(k)}) + \mathbb{E}_{i_k}\Big[\mathbb{E}\Big[\frac{1}{M_{(k)}}\sum_{m=1}^{M_{(k)}} r_{i_k}(\theta^{(k)};\theta^{(k)},z_{i_k,m}^{(k)}) - \hat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)})\,\Big|\,\mathcal{F}_k,i_k\Big]\,\Big|\,\mathcal{F}_k\Big] \leq \mathcal{L}(\theta^{(k)}) + \frac{C_r}{\sqrt{M_{(k)}}},$$
where the last inequality is due to H4. Moreover,
$$\mathbb{E}\big[\tilde{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau_{i_k}^k)},\{z_{i_k,m}^{(\tau_{i_k}^k)}\}_{m=1}^{M_{(\tau_{i_k}^k)}})\,\big|\,\mathcal{F}_k\big] = \frac{1}{n}\sum_{i=1}^n \tilde{\mathcal{L}}_i\big(\theta^{(k)};\theta^{(\tau_i^k)},\{z_{i,m}^{(\tau_i^k)}\}_{m=1}^{M_{(\tau_i^k)}}\big) = \tilde{\mathcal{L}}^{(k)}(\theta^{(k)}).$$
Taking the conditional expectations on both sides of (19) and re-arranging terms gives:
$$\tilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \mathcal{L}(\theta^{(k)}) \leq n\,\mathbb{E}\big[\tilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \tilde{\mathcal{L}}^{(k+1)}(\theta^{(k+1)})\,\big|\,\mathcal{F}_k\big] + \frac{C_r}{\sqrt{M_{(k)}}}. \tag{20}$$
Proceeding from (20), we observe the following lower bound for the left-hand side:
$$\tilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \mathcal{L}(\theta^{(k)}) \overset{(a)}{=} \tilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \hat{\mathcal{L}}^{(k)}(\theta^{(k)}) + \hat{e}^{(k)}(\theta^{(k)}) \overset{(b)}{\geq} \tilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \hat{\mathcal{L}}^{(k)}(\theta^{(k)}) + \frac{1}{2L}\|\nabla\hat{e}^{(k)}(\theta^{(k)})\|^2$$
$$= \underbrace{\frac{1}{n}\sum_{i=1}^n\Big\{\frac{1}{M_{(\tau_i^k)}}\sum_{m=1}^{M_{(\tau_i^k)}} r_i\big(\theta^{(k)};\theta^{(\tau_i^k)},z_{i,m}^{(\tau_i^k)}\big) - \hat{\mathcal{L}}_i\big(\theta^{(k)};\theta^{(\tau_i^k)}\big)\Big\}}_{:=\,-\delta^{(k)}(\theta^{(k)})} + \frac{1}{2L}\|\nabla\hat{e}^{(k)}(\theta^{(k)})\|^2,$$
where (a) follows from the definition of ê^{(k)} in (15), (b) is due to (18), and we have defined the summation in the last equality as −δ^{(k)}(θ^{(k)}). Substituting the above into (20) yields
$$\frac{\|\nabla\hat{e}^{(k)}(\theta^{(k)})\|^2}{2L} \leq n\,\mathbb{E}\big[\tilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \tilde{\mathcal{L}}^{(k+1)}(\theta^{(k+1)})\,\big|\,\mathcal{F}_k\big] + \frac{C_r}{\sqrt{M_{(k)}}} + \delta^{(k)}(\theta^{(k)}). \tag{21}$$
Observe the following upper bound on the total expectation:
$$\mathbb{E}\big[\delta^{(k)}(\theta^{(k)})\big] \leq \mathbb{E}\Big[\frac{1}{n}\sum_{i=1}^n\frac{C_r}{\sqrt{M_{(\tau_i^k)}}}\Big],$$
which is due to H4. It yields
$$\mathbb{E}\big[\|\nabla\hat{e}^{(k)}(\theta^{(k)})\|^2\big] \leq 2nL\,\mathbb{E}\big[\tilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \tilde{\mathcal{L}}^{(k+1)}(\theta^{(k+1)})\big] + \frac{2LC_r}{\sqrt{M_{(k)}}} + \frac{1}{n}\sum_{i=1}^n\mathbb{E}\Big[\frac{2LC_r}{\sqrt{M_{(\tau_i^k)}}}\Big].$$
Finally, for any K_max ∈ N, we let K be a discrete r.v. that is uniformly drawn from {0, 1, ..., K_max − 1}. Using H4 and taking total expectations leads to
$$\mathbb{E}\big[\|\nabla\hat{e}^{(K)}(\theta^{(K)})\|^2\big] = \frac{1}{K_{\max}}\sum_{k=0}^{K_{\max}-1}\mathbb{E}\big[\|\nabla\hat{e}^{(k)}(\theta^{(k)})\|^2\big] \leq \frac{2nL\,\mathbb{E}\big[\tilde{\mathcal{L}}^{(0)}(\theta^{(0)}) - \tilde{\mathcal{L}}^{(K_{\max})}(\theta^{(K_{\max})})\big]}{K_{\max}} + \frac{2LC_r}{K_{\max}}\sum_{k=0}^{K_{\max}-1}\mathbb{E}\Big[\frac{1}{\sqrt{M_{(k)}}} + \frac{1}{n}\sum_{i=1}^n\frac{1}{\sqrt{M_{(\tau_i^k)}}}\Big]. \tag{22}$$
For all i ∈ ⟦1, n⟧, the index i is selected with probability 1/n at each iteration, conditioned independently on the past. We observe:
$$\mathbb{E}\big[M_{(\tau_i^k)}^{-1/2}\big] = \sum_{j=1}^k \frac{1}{n}\Big(1-\frac{1}{n}\Big)^{j-1} M_{(k-j)}^{-1/2}. \tag{23}$$
Taking the sum yields:
$$\sum_{k=0}^{K_{\max}-1}\mathbb{E}\big[M_{(\tau_i^k)}^{-1/2}\big] = \sum_{k=0}^{K_{\max}-1}\sum_{j=1}^k \frac{1}{n}\Big(1-\frac{1}{n}\Big)^{j-1}M_{(k-j)}^{-1/2} = \sum_{k=0}^{K_{\max}-1}\sum_{l=0}^{k-1}\frac{1}{n}\Big(1-\frac{1}{n}\Big)^{k-(l+1)}M_{(l)}^{-1/2} = \sum_{l=0}^{K_{\max}-1} M_{(l)}^{-1/2}\sum_{k=l+1}^{K_{\max}-1}\frac{1}{n}\Big(1-\frac{1}{n}\Big)^{k-(l+1)} \leq \sum_{l=0}^{K_{\max}-1} M_{(l)}^{-1/2}, \tag{24}$$
where the last inequality is due to upper bounding the geometric series. Plugging this back into (22) yields
$$\mathbb{E}\big[\|\nabla\hat{e}^{(K)}(\theta^{(K)})\|^2\big] = \frac{1}{K_{\max}}\sum_{k=0}^{K_{\max}-1}\mathbb{E}\big[\|\nabla\hat{e}^{(k)}(\theta^{(k)})\|^2\big] \leq \frac{2nL\,\mathbb{E}\big[\tilde{\mathcal{L}}^{(0)}(\theta^{(0)}) - \tilde{\mathcal{L}}^{(K_{\max})}(\theta^{(K_{\max})})\big]}{K_{\max}} + \frac{1}{K_{\max}}\sum_{k=0}^{K_{\max}-1}\frac{4LC_r}{\sqrt{M_{(k)}}} = \frac{\Delta(K_{\max})}{K_{\max}}.$$
This concludes our proof for the first inequality in (16).
To prove the second inequality of (16), we define the shorthand notations g^{(k)} := g(θ^{(k)}), g_−^{(k)} := −min{0, g^{(k)}}, g_+^{(k)} := max{0, g^{(k)}}. We observe that
$$g^{(k)} = \inf_{\theta\in\Theta}\frac{\mathcal{L}'(\theta^{(k)},\theta-\theta^{(k)})}{\|\theta^{(k)}-\theta\|} = \inf_{\theta\in\Theta}\Bigg\{\frac{\frac{1}{n}\sum_{i=1}^n\hat{\mathcal{L}}_i'(\theta^{(k)},\theta-\theta^{(k)};\theta^{(\tau_i^k)})}{\|\theta^{(k)}-\theta\|} - \frac{\big\langle\nabla\hat{e}^{(k)}(\theta^{(k)})\,|\,\theta-\theta^{(k)}\big\rangle}{\|\theta^{(k)}-\theta\|}\Bigg\} \geq -\|\nabla\hat{e}^{(k)}(\theta^{(k)})\| + \inf_{\theta\in\Theta}\frac{\frac{1}{n}\sum_{i=1}^n\hat{\mathcal{L}}_i'(\theta^{(k)},\theta-\theta^{(k)};\theta^{(\tau_i^k)})}{\|\theta^{(k)}-\theta\|},$$
where the last inequality is due to the Cauchy–Schwarz inequality and we have defined $\hat{\mathcal{L}}_i'(\theta,\mathbf{d};\theta^{(\tau_i^k)})$ as the directional derivative of $\hat{\mathcal{L}}_i(\cdot;\theta^{(\tau_i^k)})$ at θ along the direction d. Moreover, for any θ ∈ Θ,
$$\frac{1}{n}\sum_{i=1}^n\hat{\mathcal{L}}_i'(\theta^{(k)},\theta-\theta^{(k)};\theta^{(\tau_i^k)}) = \underbrace{\tilde{\mathcal{L}}^{(k)\prime}(\theta^{(k)},\theta-\theta^{(k)})}_{\geq 0} - \tilde{\mathcal{L}}^{(k)\prime}(\theta^{(k)},\theta-\theta^{(k)}) + \frac{1}{n}\sum_{i=1}^n\hat{\mathcal{L}}_i'(\theta^{(k)},\theta-\theta^{(k)};\theta^{(\tau_i^k)})$$
$$\geq \frac{1}{n}\sum_{i=1}^n\Big\{\hat{\mathcal{L}}_i'(\theta^{(k)},\theta-\theta^{(k)};\theta^{(\tau_i^k)}) - \frac{1}{M_{(\tau_i^k)}}\sum_{m=1}^{M_{(\tau_i^k)}} r_i'\big(\theta^{(k)},\theta-\theta^{(k)};\theta^{(\tau_i^k)},z_{i,m}^{(\tau_i^k)}\big)\Big\},$$
where the inequality is due to the optimality of θ^{(k)} and the convexity of L̃^{(k)}(θ) [cf. H3]. Denoting a scaled version of the above term as:
$$\epsilon^{(k)}(\theta) := \frac{\frac{1}{n}\sum_{i=1}^n\Big\{\frac{1}{M_{(\tau_i^k)}}\sum_{m=1}^{M_{(\tau_i^k)}} r_i'\big(\theta^{(k)},\theta-\theta^{(k)};\theta^{(\tau_i^k)},z_{i,m}^{(\tau_i^k)}\big) - \hat{\mathcal{L}}_i'\big(\theta^{(k)},\theta-\theta^{(k)};\theta^{(\tau_i^k)}\big)\Big\}}{\|\theta^{(k)}-\theta\|},$$
we have
$$g^{(k)} \geq -\|\nabla\hat{e}^{(k)}(\theta^{(k)})\| + \inf_{\theta\in\Theta}\big(-\epsilon^{(k)}(\theta)\big) \geq -\|\nabla\hat{e}^{(k)}(\theta^{(k)})\| - \sup_{\theta\in\Theta}|\epsilon^{(k)}(\theta)|. \tag{25}$$
Since g^{(k)} = g_+^{(k)} − g_−^{(k)} and g_+^{(k)} g_−^{(k)} = 0, this implies
$$g_-^{(k)} \leq \|\nabla\hat{e}^{(k)}(\theta^{(k)})\| + \sup_{\theta\in\Theta}|\epsilon^{(k)}(\theta)|. \tag{26}$$
Consider the above inequality with k = K, i.e., the random index; taking total expectations on both sides gives
$$\mathbb{E}\big[g_-^{(K)}\big] \leq \mathbb{E}\big[\|\nabla\hat{e}^{(K)}(\theta^{(K)})\|\big] + \mathbb{E}\Big[\sup_{\theta\in\Theta}\epsilon^{(K)}(\theta)\Big].$$
We note that
$$\Big(\mathbb{E}\big[\|\nabla\hat{e}^{(K)}(\theta^{(K)})\|\big]\Big)^2 \leq \mathbb{E}\big[\|\nabla\hat{e}^{(K)}(\theta^{(K)})\|^2\big] \leq \frac{\Delta(K_{\max})}{K_{\max}},$$
where the first inequality is due to the convexity of (·)² and the Jensen inequality, and
$$\mathbb{E}\Big[\sup_{\theta\in\Theta}\epsilon^{(K)}(\theta)\Big] = \frac{1}{K_{\max}}\sum_{k=0}^{K_{\max}-1}\mathbb{E}\Big[\sup_{\theta\in\Theta}\epsilon^{(k)}(\theta)\Big] \overset{(a)}{\leq} \frac{C_{gr}}{K_{\max}}\sum_{k=0}^{K_{\max}-1}\mathbb{E}\Big[\frac{1}{n}\sum_{i=1}^n M_{(\tau_i^k)}^{-1/2}\Big] \overset{(b)}{\leq} \frac{C_{gr}}{K_{\max}}\sum_{k=0}^{K_{\max}-1} M_{(k)}^{-1/2},$$
where (a) is due to H4 and (b) is due to (24). This implies
$$\mathbb{E}\big[g_-^{(K)}\big] \leq \sqrt{\frac{\Delta(K_{\max})}{K_{\max}}} + \frac{C_{gr}}{K_{\max}}\sum_{k=0}^{K_{\max}-1} M_{(k)}^{-1/2},$$
and concludes the proof of the theorem. ∎
A.2 PROOF OF THEOREM 2
Theorem. Under H1–H4, assume in addition that {M_{(k)}}_{k≥0} is a non-decreasing sequence of integers which satisfies Σ_{k=0}^∞ M_{(k)}^{-1/2} < ∞. Then:
1. the negative part of the stationarity measure converges a.s. to zero, i.e., lim_{k→∞} g_−(θ^{(k)}) = 0 a.s.;
2. the objective value L(θ^{(k)}) converges a.s. to a finite number L̄, i.e., lim_{k→∞} L(θ^{(k)}) = L̄ a.s.
Proof. For the readability of the current proof, we apply the following auxiliary lemma, whose proof can be found in Appendix A.3:
Lemma 1. Let (V_k)_{k≥0} be a non-negative sequence of random variables such that E[V_0] < ∞. Let (X_k)_{k≥0} be a non-negative sequence of random variables and (E_k)_{k≥0} be a sequence of random variables such that Σ_{k=0}^∞ E[|E_k|] < ∞. If for any k ≥ 1:
$$V_k \leq V_{k-1} - X_{k-1} + E_{k-1}, \tag{27}$$
then:
(i) for all k ≥ 0, E[V_k] < ∞, and the sequence (V_k)_{k≥0} converges a.s. to a finite limit V_∞;
(ii) the sequence (E[V_k])_{k≥0} converges and lim_{k→∞} E[V_k] = E[V_∞];
(iii) the series Σ_{k=0}^∞ X_k converges almost surely and Σ_{k=0}^∞ E[X_k] < ∞.
We proceed from (19) by re-arranging terms and observing that
$$\hat{\mathcal{L}}^{(k+1)}(\theta^{(k+1)}) \leq \hat{\mathcal{L}}^{(k)}(\theta^{(k)}) - \frac{1}{n}\Big(\hat{\mathcal{L}}_{i_k}\big(\theta^{(k)};\theta^{(\tau_{i_k}^k)}\big) - \hat{\mathcal{L}}_{i_k}\big(\theta^{(k)};\theta^{(k)}\big)\Big) - \Big(\tilde{\mathcal{L}}^{(k+1)}(\theta^{(k+1)}) - \hat{\mathcal{L}}^{(k+1)}(\theta^{(k+1)})\Big) + \Big(\tilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \hat{\mathcal{L}}^{(k)}(\theta^{(k)})\Big)$$
$$+ \frac{1}{n}\Big(\tilde{\mathcal{L}}_{i_k}\big(\theta^{(k)};\theta^{(k)},\{z_{i_k,m}^{(k)}\}_{m=1}^{M_{(k)}}\big) - \hat{\mathcal{L}}_{i_k}\big(\theta^{(k)};\theta^{(k)}\big)\Big) + \frac{1}{n}\Big(\hat{\mathcal{L}}_{i_k}\big(\theta^{(k)};\theta^{(\tau_{i_k}^k)}\big) - \tilde{\mathcal{L}}_{i_k}\big(\theta^{(k)};\theta^{(\tau_{i_k}^k)},\{z_{i_k,m}^{(\tau_{i_k}^k)}\}_{m=1}^{M_{(\tau_{i_k}^k)}}\big)\Big).$$
Our idea is to apply Lemma 1. Under H1, the finite sum of surrogate functions L̂^{(k)}(θ), defined in (15), is lower bounded by a constant c_k > −∞ for any θ. To this end, we observe that
$$V_k := \hat{\mathcal{L}}^{(k)}(\theta^{(k)}) - \inf_{k\geq 0} c_k \geq 0 \tag{28}$$
is a non-negative random variable.
Secondly, under H1, the following random variable is non-negative:
$$X_k := \frac{1}{n}\Big(\hat{\mathcal{L}}_{i_k}\big(\theta^{(k)};\theta^{(\tau_{i_k}^k)}\big) - \hat{\mathcal{L}}_{i_k}\big(\theta^{(k)};\theta^{(k)}\big)\Big) \geq 0. \tag{29}$$
Thirdly, we define
$$E_k := -\Big(\tilde{\mathcal{L}}^{(k+1)}(\theta^{(k+1)}) - \hat{\mathcal{L}}^{(k+1)}(\theta^{(k+1)})\Big) + \Big(\tilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \hat{\mathcal{L}}^{(k)}(\theta^{(k)})\Big) + \frac{1}{n}\Big(\tilde{\mathcal{L}}_{i_k}\big(\theta^{(k)};\theta^{(k)},\{z_{i_k,m}^{(k)}\}_{m=1}^{M_{(k)}}\big) - \hat{\mathcal{L}}_{i_k}\big(\theta^{(k)};\theta^{(k)}\big)\Big) + \frac{1}{n}\Big(\hat{\mathcal{L}}_{i_k}\big(\theta^{(k)};\theta^{(\tau_{i_k}^k)}\big) - \tilde{\mathcal{L}}_{i_k}\big(\theta^{(k)};\theta^{(\tau_{i_k}^k)},\{z_{i_k,m}^{(\tau_{i_k}^k)}\}_{m=1}^{M_{(\tau_{i_k}^k)}}\big)\Big). \tag{30}$$
Note that, from the definitions (28), (29), (30), we have V_{k+1} ≤ V_k − X_k + E_k for any k ≥ 1. Under H4, we observe that
$$\mathbb{E}\big[|\tilde{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)},\{z_{i_k,m}^{(k)}\}_{m=1}^{M_{(k)}}) - \hat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)})|\big] \leq C_r M_{(k)}^{-1/2},$$
$$\mathbb{E}\big[|\hat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau_{i_k}^k)}) - \tilde{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau_{i_k}^k)},\{z_{i_k,m}^{(\tau_{i_k}^k)}\}_{m=1}^{M_{(\tau_{i_k}^k)}})|\big] \leq C_r\,\mathbb{E}\big[M_{(\tau_{i_k}^k)}^{-1/2}\big],$$
$$\mathbb{E}\big[|\tilde{\mathcal{L}}^{(k)}(\theta^{(k)}) - \hat{\mathcal{L}}^{(k)}(\theta^{(k)})|\big] \leq \frac{1}{n}\sum_{i=1}^n C_r\,\mathbb{E}\big[M_{(\tau_i^k)}^{-1/2}\big].$$
Therefore,
$$\mathbb{E}\big[|E_k|\big] \leq \frac{C_r}{n}\Big(M_{(k)}^{-1/2} + \mathbb{E}\Big[M_{(\tau_{i_k}^k)}^{-1/2} + \sum_{i=1}^n\big\{M_{(\tau_i^k)}^{-1/2} + M_{(\tau_i^{k+1})}^{-1/2}\big\}\Big]\Big).$$
Using (24) and the assumption on the sequence {M_{(k)}}_{k≥0}, we obtain that
$$\sum_{k=0}^\infty \mathbb{E}\big[|E_k|\big] < \frac{C_r}{n}(2+2n)\sum_{k=0}^\infty M_{(k)}^{-1/2} < \infty.$$
Therefore, the conclusions of Lemma 1 hold. Precisely, we have Σ_{k=0}^∞ X_k < ∞ almost surely and Σ_{k=0}^∞ E[X_k] < ∞. Note that this implies
$$\infty > \sum_{k=0}^\infty\mathbb{E}[X_k] = \frac{1}{n}\sum_{k=0}^\infty\mathbb{E}\big[\hat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(\tau_{i_k}^k)}) - \hat{\mathcal{L}}_{i_k}(\theta^{(k)};\theta^{(k)})\big] = \frac{1}{n}\sum_{k=0}^\infty\mathbb{E}\big[\hat{\mathcal{L}}^{(k)}(\theta^{(k)}) - \mathcal{L}(\theta^{(k)})\big] = \frac{1}{n}\sum_{k=0}^\infty\mathbb{E}\big[\hat{e}^{(k)}(\theta^{(k)})\big].$$
Since ê^{(k)}(θ^{(k)}) ≥ 0, the above implies
$$\lim_{k\to\infty}\hat{e}^{(k)}(\theta^{(k)}) = 0 \quad\text{a.s.} \tag{31}$$
and subsequently, applying (18), we have lim_{k→∞} ‖∇ê^{(k)}(θ^{(k)})‖ = 0 almost surely. Finally, it follows from (18) and (26) that
$$\lim_{k\to\infty} g_-^{(k)} \leq \lim_{k\to\infty}\sqrt{2L}\sqrt{\hat{e}^{(k)}(\theta^{(k)})} + \lim_{k\to\infty}\sup_{\theta\in\Theta}|\epsilon^{(k)}(\theta)| = 0, \tag{32}$$
where the last equality holds almost surely due to the fact that Σ_{k=0}^∞ E[sup_{θ∈Θ}|ε^{(k)}(θ)|] < ∞. This concludes the asymptotic convergence of the MISSO method.
Finally, we prove that L(θ^{(k)}) converges almost surely. As a consequence of Lemma 1, it is clear that {V_k}_{k≥0} converges almost surely, and so does {L̂^{(k)}(θ^{(k)})}_{k≥0}, i.e., we have lim_{k→∞} L̂^{(k)}(θ^{(k)}) = L̄. Applying (31) implies that
$$\bar{L} = \lim_{k\to\infty}\hat{\mathcal{L}}^{(k)}(\theta^{(k)}) = \lim_{k\to\infty}\mathcal{L}(\theta^{(k)}) \quad\text{a.s.}$$
This shows that L(θ^{(k)}) converges almost surely to L̄. ∎
A.3 PROOF OF LEMMA 1
Lemma. Let (V_k)_{k≥0} be a non-negative sequence of random variables such that E[V_0] < ∞. Let (X_k)_{k≥0} be a non-negative sequence of random variables and (E_k)_{k≥0} be a sequence of random variables such that Σ_{k=0}^∞ E[|E_k|] < ∞. If for any k ≥ 1:
$$V_k \leq V_{k-1} - X_{k-1} + E_{k-1},$$
then:
(i) for all k ≥ 0, E[V_k] < ∞, and the sequence (V_k)_{k≥0} converges a.s. to a finite limit V_∞;
(ii) the sequence (E[V_k])_{k≥0} converges and lim_{k→∞} E[V_k] = E[V_∞];
(iii) the series Σ_{k=0}^∞ X_k converges almost surely and Σ_{k=0}^∞ E[X_k] < ∞.
Proof. We first show that for all k ≥ 0, E[V_k] < ∞. Note indeed that:
$$0 \leq V_k \leq V_0 - \sum_{j=1}^k X_j + \sum_{j=1}^k E_j \leq V_0 + \sum_{j=1}^k E_j, \tag{33}$$
showing that E[V_k] ≤ E[V_0] + E[Σ_{j=1}^k E_j] < ∞. Since 0 ≤ X_k ≤ V_{k−1} − V_k + E_k, we also obtain E[X_k] < ∞ for all k ≥ 0. Moreover, since E[Σ_{j=1}^∞ |E_j|] < ∞, the series Σ_{j=1}^∞ E_j converges a.s. We may therefore define:
$$W_k := V_k + \sum_{j=k+1}^\infty E_j. \tag{34}$$
Note that E[|W_k|] ≤ E[V_k] + E[Σ_{j=k+1}^∞ |E_j|] < ∞. For all k ≥ 1, we get:
$$W_k \leq V_{k-1} - X_k + \sum_{j=k}^\infty E_j \leq W_{k-1} - X_k \leq W_{k-1}, \qquad \mathbb{E}[W_k] \leq \mathbb{E}[W_{k-1}] - \mathbb{E}[X_k]. \tag{35}$$
Hence the sequences (W_k)_{k≥0} and (E[W_k])_{k≥0} are non-increasing. Since, for all k ≥ 0, W_k ≥ −Σ_{j=1}^∞ |E_j| > −∞ and E[W_k] ≥ −Σ_{j=1}^∞ E[|E_j|] > −∞, the (random) sequence (W_k)_{k≥0} converges a.s. to a limit W_∞ and the (deterministic) sequence (E[W_k])_{k≥0} converges to a limit w_∞. Since |W_k| ≤ V_0 + Σ_{j=1}^∞ |E_j|, the Fatou lemma implies that:
$$\mathbb{E}\big[\liminf_{k\to\infty}|W_k|\big] = \mathbb{E}[|W_\infty|] \leq \liminf_{k\to\infty}\mathbb{E}[|W_k|] \leq \mathbb{E}[V_0] + \sum_{j=1}^\infty\mathbb{E}[|E_j|] < \infty, \tag{36}$$
showing that the random variable W_∞ is integrable.
In the sequel, set U_k := W_0 − W_k. By construction we have, for all k ≥ 0, U_k ≥ 0, U_k ≤ U_{k+1}, and E[U_k] ≤ E[|W_0|] + E[|W_k|] < ∞; by the monotone convergence theorem, we get:
$$\lim_{k\to\infty}\mathbb{E}[U_k] = \mathbb{E}\big[\lim_{k\to\infty}U_k\big]. \tag{37}$$
Finally, we have:
$$\lim_{k\to\infty}\mathbb{E}[U_k] = \mathbb{E}[W_0] - w_\infty \quad\text{and}\quad \mathbb{E}\big[\lim_{k\to\infty}U_k\big] = \mathbb{E}[W_0] - \mathbb{E}[W_\infty], \tag{38}$$
showing that E[W_∞] = w_∞ and concluding the proof of (ii). Moreover, using (35), we have W_k ≤ W_{k−1} − X_k, which yields:
$$\sum_{j=1}^\infty X_j \leq W_0 - W_\infty < \infty \quad\text{and}\quad \sum_{j=1}^\infty\mathbb{E}[X_j] \leq \mathbb{E}[W_0] - w_\infty < \infty, \tag{39}$$
and concludes the proof of the lemma. ∎
B PRACTICAL DETAILS FOR THE BINARY LOGISTIC REGRESSION ON THE TRAUMABASE
B.1 TRAUMABASE DATASET QUANTITATIVE VARIABLES
The list of the 16 quantitative variables we use in our experiments is as follows: age, weight, height, BMI (body mass index), the Glasgow Coma Scale, the Glasgow Coma Scale motor component, the minimum systolic blood pressure, the minimum diastolic blood pressure, the maximum heart rate (pulse) per unit of time (usually a minute), the systolic blood pressure at arrival of the ambulance, the diastolic blood pressure at arrival of the ambulance, the heart rate at arrival of the ambulance, the capillary hemoglobin concentration, the oxygen saturation, the fluid expansion colloids, the fluid expansion cristalloids, the pulse pressure for the minimum values of diastolic and systolic blood pressure, and the pulse pressure at arrival of the ambulance.
B.2 METROPOLIS-HASTINGS ALGORITHM
During the simulation step of the MISSO method, sampling from the target distribution π(z_{i,mis}; θ) := p(z_{i,mis}|z_{i,obs}, y_i; θ) is performed using a Metropolis–Hastings (MH) algorithm (Meyn & Tweedie, 2012) with proposal distribution q(z_{i,mis}; δ) := p(z_{i,mis}|z_{i,obs}; δ), where θ = (β, Ω) and δ = (ξ, Σ). The parameters of the Gaussian conditional distribution of z_{i,mis}|z_{i,obs} read:
$$\xi = \beta_{\mathrm{mis}} + \Omega_{\mathrm{mis,obs}}\,\Omega_{\mathrm{obs,obs}}^{-1}\,(z_{i,\mathrm{obs}} - \beta_{\mathrm{obs}}), \qquad \Sigma = \Omega_{\mathrm{mis,mis}} - \Omega_{\mathrm{mis,obs}}\,\Omega_{\mathrm{obs,obs}}^{-1}\,\Omega_{\mathrm{obs,mis}},$$
where we have used the Schur complement of Ω_{obs,obs} in Ω and denoted by β_mis (resp. β_obs) the missing (resp. observed) elements of β. The MH algorithm is summarized in Algorithm 3.
Algorithm 3 MH algorithm
1: Input: initialization z_{i,mis,0} ∼ q(z_{i,mis}; δ)
2: for m = 1, ..., M do
3:   Sample z_{i,mis,m} ∼ q(z_{i,mis}; δ)
4:   Sample u ∼ U([0, 1])
5:   Compute the ratio r = [π(z_{i,mis,m}; θ)/q(z_{i,mis,m}; δ)] / [π(z_{i,mis,m−1}; θ)/q(z_{i,mis,m−1}; δ)]
6:   if u < r then accept z_{i,mis,m}; else set z_{i,mis,m} ← z_{i,mis,m−1}
7: end for
8: Output: z_{i,mis,M}
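A minimal NumPy sketch (ours) of this independence MH step, together with the Gaussian conditional parameters above; `log_target` is a hypothetical callback for the unnormalized log density of z_{i,mis} given (z_{i,obs}, y_i; θ):

```python
import numpy as np

def conditional_gaussian(beta, Omega, obs_idx, mis_idx, z_obs):
    """Parameters (xi, Sigma) of z_mis | z_obs for z ~ N(beta, Omega),
    via the Schur complement of Omega_obs,obs in Omega."""
    O_oo = Omega[np.ix_(obs_idx, obs_idx)]
    O_mo = Omega[np.ix_(mis_idx, obs_idx)]
    A = O_mo @ np.linalg.inv(O_oo)
    xi = beta[mis_idx] + A @ (z_obs - beta[obs_idx])
    Sigma = Omega[np.ix_(mis_idx, mis_idx)] - A @ O_mo.T
    return xi, Sigma

def mh_independence(log_target, xi, Sigma, M, rng=np.random.default_rng()):
    """Independence MH with proposal N(xi, Sigma); constants in the proposal
    log-density cancel in the acceptance ratio."""
    chol = np.linalg.cholesky(Sigma)
    propose = lambda: xi + chol @ rng.standard_normal(xi.size)
    log_q = lambda z: -0.5 * np.sum(np.linalg.solve(chol, z - xi) ** 2)
    z = propose()
    samples = []
    for _ in range(M):
        z_new = propose()
        log_r = (log_target(z_new) - log_q(z_new)) - (log_target(z) - log_q(z))
        if np.log(rng.uniform()) < log_r:
            z = z_new
        samples.append(z)
    return np.array(samples)
```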
B.3 MISSO UPDATE
Choice of surrogate function for MISO: We recall the MISO deterministic surrogate defined in (7):
$$\hat{\mathcal{L}}_i(\theta;\bar{\theta}) = \int_{\mathsf{Z}}\log\big(p_i(z_{i,\mathrm{mis}},\bar{\theta})/f_i(z_{i,\mathrm{mis}},\theta)\big)\,p_i(z_{i,\mathrm{mis}},\bar{\theta})\,\mu_i(\mathrm{d}z_{i,\mathrm{mis}}),$$
where θ = (δ, β, Ω) and θ̄ = (δ̄, β̄, Ω̄). We adapt it to our missing-covariates problem and decompose the surrogate into an observed and a missing part.
Surrogate function decomposition: Keeping θ̄ fixed, we decompose the term depending on θ into the following two parts:
$$\hat{\mathcal{L}}_i(\theta;\bar{\theta}) = -\int_{\mathsf{Z}}\log f_i(z_{i,\mathrm{mis}},z_{i,\mathrm{obs}},\theta)\,p_i(z_{i,\mathrm{mis}},\bar{\theta})\,\mu_i(\mathrm{d}z_{i,\mathrm{mis}}) = -\int_{\mathsf{Z}}\log\big[p_i(y_i|z_{i,\mathrm{mis}},z_{i,\mathrm{obs}},\delta)\,p_i(z_{i,\mathrm{mis}},\beta,\Omega)\big]\,p_i(z_{i,\mathrm{mis}},\bar{\theta})\,\mu_i(\mathrm{d}z_{i,\mathrm{mis}})$$
$$= \underbrace{-\int_{\mathsf{Z}}\log p_i(y_i|z_{i,\mathrm{mis}},z_{i,\mathrm{obs}},\delta)\,p_i(z_{i,\mathrm{mis}},\bar{\theta})\,\mu_i(\mathrm{d}z_{i,\mathrm{mis}})}_{=\hat{\mathcal{L}}_i^{(1)}(\delta,\bar{\theta})}\ \underbrace{-\int_{\mathsf{Z}}\log p_i(z_{i,\mathrm{mis}},\beta,\Omega)\,p_i(z_{i,\mathrm{mis}},\bar{\theta})\,\mu_i(\mathrm{d}z_{i,\mathrm{mis}})}_{=\hat{\mathcal{L}}_i^{(2)}(\beta,\Omega,\bar{\theta})}. \tag{40}$$
The mean β and the covariance Ω of the latent structure can be estimated by minimizing the sum of MISSO surrogates L̃_i^{(2)}(β, Ω, θ̄, {z_m}_{m=1}^M), defined as MC approximations of L̂_i^{(2)}(β, Ω, θ̄) for all i ∈ ⟦n⟧, in closed form.
We thus keep the surrogate L̂_i^{(2)}(β, Ω, θ̄) as it is, and consider the following quadratic approximation of L̂_i^{(1)}(δ, θ̄) to estimate the vector of logistic parameters δ:
$$\hat{\mathcal{L}}_i^{(1)}(\bar{\delta},\bar{\theta}) - \int_{\mathsf{Z}}\nabla\log p_i(y_i|z_{i,\mathrm{mis}},z_{i,\mathrm{obs}},\delta)\big|_{\delta=\bar{\delta}}\,p_i(z_{i,\mathrm{mis}},\bar{\theta})\,\mu_i(\mathrm{d}z_{i,\mathrm{mis}})\,(\delta-\bar{\delta}) - \frac{1}{2}(\delta-\bar{\delta})^\top\int_{\mathsf{Z}}\nabla^2\log p_i(y_i|z_{i,\mathrm{mis}},z_{i,\mathrm{obs}},\delta)\,p_i(z_{i,\mathrm{mis}},\bar{\theta})\,\mu_i(\mathrm{d}z_{i,\mathrm{mis}})\,(\delta-\bar{\delta}).$$
Recall that
$$\nabla\log p_i(y_i|z_{i,\mathrm{mis}},z_{i,\mathrm{obs}},\delta) = \bar{z}_i\big(y_i - S(\delta^\top\bar{z}_i)\big) \quad\text{and}\quad \nabla^2\log p_i(y_i|z_{i,\mathrm{mis}},z_{i,\mathrm{obs}},\delta) = -\bar{z}_i\bar{z}_i^\top\,\dot{S}(\delta^\top\bar{z}_i),$$
where Ṡ(u) is the derivative of S(u). Note that Ṡ(u) ≤ 1/4 and, since for all i ∈ ⟦n⟧ the matrix z̄_i z̄_i^⊤ is positive semi-definite, we can assume:
L1. For all i ∈ ⟦n⟧ and ε > 0, there exists, for all z_i ∈ Z, a positive definite matrix H_i(z_i) := (1/4)(z̄_i z̄_i^⊤ + εI) such that, for all δ, z̄_i z̄_i^⊤ Ṡ(δ^⊤ z̄_i) ⪯ H_i(z_i).
Then, we use, for all i ∈ ⟦n⟧, the following surrogate function to estimate δ:
$$\bar{\mathcal{L}}_i^{(1)}(\delta,\bar{\theta}) = \hat{\mathcal{L}}_i^{(1)}(\bar{\delta},\bar{\theta}) - D_i^\top(\delta-\bar{\delta}) + \frac{1}{2}(\delta-\bar{\delta})^\top H_i\,(\delta-\bar{\delta}), \tag{41}$$
where:
$$D_i = \int_{\mathsf{Z}}\nabla\log p_i(y_i|z_{i,\mathrm{mis}},z_{i,\mathrm{obs}},\delta)\big|_{\delta=\bar{\delta}}\,p_i(z_{i,\mathrm{mis}},\bar{\theta})\,\mu_i(\mathrm{d}z_{i,\mathrm{mis}}) \quad\text{and}\quad H_i = \int_{\mathsf{Z}} H_i(z_{i,\mathrm{mis}})\,p_i(z_{i,\mathrm{mis}},\bar{\theta})\,\mu_i(\mathrm{d}z_{i,\mathrm{mis}}).$$
Finally, at iteration k, the total surrogate is:
$$\tilde{\mathcal{L}}^{(k)}(\theta) = \frac{1}{n}\sum_{i=1}^n \tilde{\mathcal{L}}_i\big(\theta,\theta^{(\tau_i^k)},\{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big) = \frac{1}{n}\sum_{i=1}^n \tilde{\mathcal{L}}_i^{(2)}\big(\beta,\Omega,\theta^{(\tau_i^k)},\{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big) - \frac{1}{n}\sum_{i=1}^n\big(\tilde{D}_i^{(\tau_i^k)}\big)^\top\big(\delta-\delta^{(\tau_i^k)}\big) + \frac{1}{2n}\sum_{i=1}^n\big(\delta-\delta^{(\tau_i^k)}\big)^\top\tilde{H}_i^{(\tau_i^k)}\big(\delta-\delta^{(\tau_i^k)}\big), \tag{42}$$
where, for all i ∈ ⟦n⟧:
$$\tilde{D}_i^{(\tau_i^k)} = \frac{1}{M_{(\tau_i^k)}}\sum_{m=1}^{M_{(\tau_i^k)}}\bar{z}_{i,m}^{(\tau_i^k)}\Big(y_i - S\big((\delta^{(\tau_i^k)})^\top\bar{z}_{i,m}^{(\tau_i^k)}\big)\Big) \quad\text{and}\quad \tilde{H}_i^{(\tau_i^k)} = \frac{1}{4M_{(\tau_i^k)}}\sum_{m=1}^{M_{(\tau_i^k)}}\bar{z}_{i,m}^{(\tau_i^k)}\big(\bar{z}_{i,m}^{(\tau_i^k)}\big)^\top.$$
Minimizing the total surrogate (42) boils down to performing a quasi-Newton step. It is perhaps sensible to apply some diagonal loading, which is perfectly compatible with the surrogate interpretation we just gave.
The logistic parameters are estimated as follows:
$$\delta^{(k)} = \arg\min_{\delta\in\Theta}\ \frac{1}{n}\sum_{i=1}^n \tilde{\mathcal{L}}_i^{(1)}\big(\delta,\theta^{(\tau_i^k)},\{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big),$$
where L̃_i^{(1)}(δ, θ^{(τ_i^k)}, {z_{i,m}}_{m=1}^{M_{(τ_i^k)}}) is the MC approximation of the MISO surrogate defined in (41), which leads to the following quasi-Newton step:
$$\delta^{(k)} = \frac{1}{n}\sum_{i=1}^n \delta^{(\tau_i^k)} - \big(\tilde{H}^{(k)}\big)^{-1}\tilde{D}^{(k)}, \quad\text{with}\ \ \tilde{D}^{(k)} = \frac{1}{n}\sum_{i=1}^n \tilde{D}_i^{(\tau_i^k)}\ \text{and}\ \tilde{H}^{(k)} = \frac{1}{n}\sum_{i=1}^n \tilde{H}_i^{(\tau_i^k)}.$$
MISSO updates: At the k-th iteration, and after initialization of the latent variables (z_i^{(0)}, i ∈ ⟦n⟧), the MISSO algorithm consists in picking an index i_k uniformly on ⟦n⟧, completing the observations by sampling a Monte Carlo batch {z_{i_k,mis,m}^{(k)}}_{m=1}^{M_{(k)}} of missing values from the conditional distribution p(z_{i_k,mis}|z_{i_k,obs}, y_{i_k}; θ^{(k−1)}) using an MCMC sampler, and computing the estimated parameters as follows:
$$\beta^{(k)} = \arg\min_{\beta\in\Theta}\frac{1}{n}\sum_{i=1}^n\tilde{\mathcal{L}}_i^{(2)}\big(\beta,\Omega^{(k)},\theta^{(\tau_i^k)},\{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big) = \frac{1}{n}\sum_{i=1}^n\frac{1}{M_{(\tau_i^k)}}\sum_{m=1}^{M_{(\tau_i^k)}} z_{i,m}^{(k)},$$
$$\Omega^{(k)} = \arg\min_{\Omega\in\Theta}\frac{1}{n}\sum_{i=1}^n\tilde{\mathcal{L}}_i^{(2)}\big(\beta^{(k)},\Omega,\theta^{(\tau_i^k)},\{z_{i,m}\}_{m=1}^{M_{(\tau_i^k)}}\big) = \frac{1}{n}\sum_{i=1}^n\frac{1}{M_{(\tau_i^k)}}\sum_{m=1}^{M_{(\tau_i^k)}} w_{i,m}^{(k)},$$
$$\delta^{(k)} = \frac{1}{n}\sum_{i=1}^n \delta^{(\tau_i^k)} - \big(\tilde{H}^{(k)}\big)^{-1}\tilde{D}^{(k)}, \tag{43}$$
where z_{i,m}^{(k)} = (z_{i,mis,m}^{(k)}, z_{i,obs}) is composed of a simulated and an observed part, D̃^{(k)} = (1/n) Σ_{i=1}^n D̃_i^{(τ_i^k)}, H̃^{(k)} = (1/n) Σ_{i=1}^n H̃_i^{(τ_i^k)}, and w_{i,m}^{(k)} = z_{i,m}^{(k)}(z_{i,m}^{(k)})^⊤ − β^{(k)}(β^{(k)})^⊤. Besides, L̃_i^{(1)}(δ, θ̄, {z_m}_{m=1}^M) and L̃_i^{(2)}(β, Ω, θ̄, {z_m}_{m=1}^M) are defined as MC approximations of L̂_i^{(1)}(δ, θ̄) and L̂_i^{(2)}(β, Ω, θ̄), for all i ∈ ⟦n⟧, as components of the surrogate function (40).
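For concreteness, a compact NumPy sketch (ours) of one pass of (43), keeping per-index Monte Carlo moments and anchors in tables; all names and the table layout are illustrative, and signs follow (43) as written:

```python
import numpy as np

def misso_logreg_step(i_k, z_batch, y_ik, delta, T):
    """One MISSO step of (43). z_batch: (M, p) MC batch of completed covariates
    for index i_k; T is a dict of per-index tables ('z', 'zz', 'anchor', 'D', 'H')."""
    M = len(z_batch)
    zb = np.hstack([np.ones((M, 1)), z_batch])                # z_bar = (1, z)
    T['z'][i_k] = z_batch.mean(axis=0)                        # MC mean of z_i
    T['zz'][i_k] = np.einsum('mi,mj->ij', z_batch, z_batch) / M
    s = 1.0 / (1.0 + np.exp(-zb @ delta))                     # S(delta^T z_bar)
    T['D'][i_k] = (zb * (y_ik - s)[:, None]).mean(axis=0)     # D-tilde term
    T['H'][i_k] = np.einsum('mi,mj->ij', zb, zb) / (4.0 * M)  # H-tilde term
    T['anchor'][i_k] = delta                                  # delta^{(tau_i^k)}
    beta = T['z'].mean(axis=0)                                # beta^{(k)}
    Omega = T['zz'].mean(axis=0) - np.outer(beta, beta)       # Omega^{(k)}
    delta_new = T['anchor'].mean(axis=0) - np.linalg.solve(
        T['H'].mean(axis=0), T['D'].mean(axis=0))             # quasi-Newton step
    return delta_new, beta, Omega
```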
B.4 WALL CLOCK TIME
Table 1 reports the running time of each method plotted in Figure 1, used to train a logistic regression with missing values on the TraumaBase dataset (p = 16 influential quantitative measurements on n = 6384 patients).
The running times are essentially the same, since the per-epoch computational complexity is similar for all methods. We note a slight overhead when using the MISSO method with a batch size of 1, as our R implementation is not fully optimized or parallelized. Yet, as the batch size tends to 100%, we recover the runtime of MCEM, which is consistent with the fact that MISSO with a full-batch update boils down to the MCEM algorithm.
Figure 3 plots the updated parameters for the logistic regression example against the elapsed time (in seconds).
C PRACTICAL DETAILS FOR THE INCREMENTAL VARIATIONAL INFERENCE
C.1 NEURAL NETWORKS ARCHITECTURE
Bayesian LeNet-5 Architecture: We describe in Table 2 the architecture of the Convolutional Neural Network introduced in (LeCun et al., 1998) and trained on MNIST:
Bayesian ResNet-18 Architecture: We describe in Table 3 the architecture of the Resnet-18 we train on CIFAR-10:
C.2 ALGORITHMS UPDATES
First, we initialize the means µ_ℓ^{(0)} for ℓ ∈ ⟦d⟧ and the variance estimate σ^{(0)}. At iteration k, minimizing the sum of stochastic surrogates defined as in (6) and (13) yields the following MISSO update — step (i) pick a function index i_k uniformly on ⟦n⟧; step (ii) sample a Monte Carlo batch {z_m^{(k)}}_{m=1}^{M_{(k)}} from N(0, I); and step (iii) update the parameters as
$$\mu_\ell^{(k)} = \frac{1}{n}\sum_{i=1}^n \mu_\ell^{(\tau_i^k)} - \frac{\gamma}{n}\sum_{i=1}^n \hat{\delta}_{\mu_\ell,i}^{(k)} \quad\text{and}\quad \sigma^{(k)} = \frac{1}{n}\sum_{i=1}^n \sigma^{(\tau_i^k)} - \frac{\gamma}{n}\sum_{i=1}^n \hat{\delta}_{\sigma,i}^{(k)}, \tag{44}$$
where we define the following gradient terms for all i ∈ ⟦1, n⟧:
$$\hat{\delta}_{\mu_\ell,i}^{(k)} = -\frac{1}{M_{(k)}}\sum_{m=1}^{M_{(k)}}\nabla_w \log p(y_i|x_i,w)\Big|_{w=t(\theta^{(k-1)},z_m^{(k)})} + \nabla_{\mu_\ell} d(\theta^{(k-1)}),$$
$$\hat{\delta}_{\sigma,i}^{(k)} = -\frac{1}{M_{(k)}}\sum_{m=1}^{M_{(k)}} z_m^{(k)}\,\nabla_w \log p(y_i|x_i,w)\Big|_{w=t(\theta^{(k-1)},z_m^{(k)})} + \nabla_{\sigma} d(\theta^{(k-1)}). \tag{45}$$
Note that our analysis in the main text requires the parameter to lie in a compact set. For the estimation problem considered here, this can be enforced in practice by restricting the parameters to a ball. In our simulations for the BNN example, for illustrative purposes we did not implement the algorithms in a way that strictly enforces the compactness requirement; however, we observe empirically that the parameters always remain bounded. The update rules can easily be modified to respect the requirement: for the VI problem considered, the surrogate functions (11) are quadratic, and a simple projection step indeed suffices to ensure boundedness of the iterates.
For all benchmark algorithms, at iteration k we pick a function index i_k uniformly on ⟦n⟧ and sample a Monte Carlo batch {z_m^{(k)}}_{m=1}^{M_{(k)}} from the standard Gaussian distribution. The updates of the parameters µ_ℓ, for all ℓ ∈ ⟦d⟧, and σ break down as follows:

Monte Carlo SAG update: Set
$$\mu_\ell^{(k)} = \mu_\ell^{(k-1)} - \frac{\gamma}{n}\sum_{i=1}^n \hat{\delta}_{\mu_\ell,i}^{(k)} \quad\text{and}\quad \sigma^{(k)} = \sigma^{(k-1)} - \frac{\gamma}{n}\sum_{i=1}^n \hat{\delta}_{\sigma,i}^{(k)},$$
where δ̂_{µ_ℓ,i}^{(k)} = δ̂_{µ_ℓ,i}^{(k−1)} and δ̂_{σ,i}^{(k)} = δ̂_{σ,i}^{(k−1)} for i ≠ i_k, and both are defined by (45) for i = i_k. The learning rate is set to γ = 10⁻³.
Bayes by Backprop update: Set
$$\mu_\ell^{(k)} = \mu_\ell^{(k-1)} - \frac{\gamma}{n}\,\hat{\delta}_{\mu_\ell,i_k}^{(k)} \quad\text{and}\quad \sigma^{(k)} = \sigma^{(k-1)} - \frac{\gamma}{n}\,\hat{\delta}_{\sigma,i_k}^{(k)},$$
where the learning rate γ = 10⁻³.
Monte Carlo Momentum update: Set
$$\mu_\ell^{(k)} = \mu_\ell^{(k-1)} + \hat{v}_{\mu_\ell}^{(k)} \quad\text{and}\quad \sigma^{(k)} = \sigma^{(k-1)} + \hat{v}_{\sigma}^{(k)},$$
where
$$\hat{v}_{\mu_\ell}^{(k)} = \alpha\,\hat{v}_{\mu_\ell}^{(k-1)} - \frac{\gamma}{n}\,\hat{\delta}_{\mu_\ell,i_k}^{(k)} \quad\text{and}\quad \hat{v}_{\sigma}^{(k)} = \alpha\,\hat{v}_{\sigma}^{(k-1)} - \frac{\gamma}{n}\,\hat{\delta}_{\sigma,i_k}^{(k)},$$
where α and γ, respectively the momentum and the learning rate, are set to 10⁻³.
Monte Carlo ADAM update: Set
$$\mu_\ell^{(k)} = \mu_\ell^{(k-1)} - \frac{\gamma}{n}\,\hat{m}_{\mu_\ell}^{(k)}\Big/\Big(\sqrt{\hat{m}_{\mu_\ell}^{(k)}} + \epsilon\Big) \quad\text{and}\quad \sigma^{(k)} = \sigma^{(k-1)} - \frac{\gamma}{n}\,\hat{m}_{\sigma}^{(k)}\Big/\Big(\sqrt{\hat{m}_{\sigma}^{(k)}} + \epsilon\Big),$$
where m̂_{µ_ℓ}^{(k)} = m^{(k−1
| 1. What is the main contribution of the paper regarding optimization methods?
2. What are the strengths and weaknesses of the proposed algorithm compared to prior works like SAG?
3. How does the reviewer assess the convergence guarantee and sample complexity of the proposed method?
4. Are there any suggestions for improving the paper, particularly in discussing the relation and difference between MISSO and MISO and providing a more reasonable sample complexity analysis? | Review | Review
Summarize what the paper claims to do/contribute. Be positive and generous.
In this paper, the authors consider the minimization of a sum of a finite number of component functions. The proposed algorithm is based on a previous method called Minimization by Incremental Surrogate Optimization (MISO). MISO is a majorization-minimization algorithm that shares a similar update style with the SAG method. However, unlike SAG, whose convergence is not available for nonconvex optimization and is even very tricky in the convex case, MISO enjoys a global convergence guarantee due to its majorization property. Building on this existing method, for problems whose majorization surrogate is very hard to construct, e.g., variational inference of latent variable models, the authors of this paper propose a sample average approximation of the exact majorization surrogate function. The convergence of the proposed algorithm is also provided in this paper.
Clearly state your decision (accept or reject) with one or two key reasons for this choice. This paper is marginally below the acceptance threshold.
Provide supporting arguments for the reasons for the decision.
(i). (Weakness) For the hard cases where each component is an expectation itself, the strategy applied here is to do a simple sample average approximation. This requires the sample size in each iteration (M_k) to satisfy the condition that \sum_k M_k^{-1/2} < \infty. That is, in the k-th iteration, the sample size will be at least k^2. According to Theorem 1, the number of iterations should be K \geq nL/\epsilon^2. Consequently, the total sample complexity of this method seems to be \sum_{k=1}^{K} k^2 ~ n^3 L^3 \epsilon^{-6}. The n^3 L^3 dependence seems very bad. However, let us do a simple estimation of a naive method: 1. In each step, compute the \epsilon-accurate estimation of the gradient for each component; this needs O(n \epsilon^{-2}) samples per iteration. 2. Then, if the function is L-smooth (this paper can handle nonsmooth cases), the total number of iterations will be O(L \epsilon^{-2}). Then the total sample complexity seems only O(nL \epsilon^{-4}). This might need some clarification.
(ii). (Strength) This paper provides a non-asymptotic rate of convergence for the MISSO algorithm, which implies a non-asymptotic rate for the MISO method, whose non-asymptotic rate was not known before; this should be appreciated. Moreover, the numerical experiments in this paper are well presented.
Provide additional feedback with the aim to improve the paper. Make it clear that these points are here to help, and not necessarily part of your decision assessment.
(i). MISSO (and MISO) share a similar updating style with SAG; it would be better if the authors could add some discussion of their relation and differences. Or, if such a discussion exists in other literature, add a reference to it.
(ii). After Theorem 2, it may make sense to give the sample complexity of the result, namely, how many samples are needed to drive the optimality measure below \epsilon. Specifically, by the reviewer's rough estimation, the dependence on n and L is O(n^3L^3) (see my argument above), and this dependence is not reasonable. My question is: can the authors carefully balance the parameters and derive a more reasonable sample complexity? If O(n) and O(L) dependence can be achieved, the reviewer is willing to change to a higher score. |
ICLR | Title
Variational Autoencoders with Jointly Optimized Latent Dependency Structure
Abstract
We propose a method for learning the dependency structure between latent variables in deep latent variable models. Our general modeling and inference framework combines the complementary strengths of deep generative models and probabilistic graphical models. In particular, we express the latent variable space of a variational autoencoder (VAE) in terms of a Bayesian network with a learned, flexible dependency structure. The network parameters, variational parameters as well as the latent topology are optimized simultaneously with a single objective. Inference is formulated via a sampling procedure that produces expectations over latent variable structures and incorporates top-down and bottom-up reasoning over latent variable values. We validate our framework in extensive experiments on MNIST, Omniglot, and CIFAR-10. Comparisons to state-of-the-art structured variational autoencoder baselines show improvements in terms of the expressiveness of the learned model.
1 INTRODUCTION
Deep latent variable models offer an effective method for automatically learning structure from data. By explicitly modeling the data distribution using latent variables, these models are capable of learning compressed representations that are then relevant for downstream tasks. Such models have been applied across a wide array of domains, such as images (Gregor et al., 2014; Kingma & Welling, 2014; Rezende et al., 2014; Gregor et al., 2015), audio (Chung et al., 2015; Fraccaro et al., 2016), video (He et al., 2018; Yingzhen & Mandt, 2018), and text (Bowman et al., 2016; Krishnan et al., 2017). However, despite their success, latent variable models are often formulated with simple (e.g. Gaussian) distributions, making independence assumptions among the latent variables. That is, each latent variable is sampled independently. Ignoring the dependencies between latent variables limits the flexibility of these models, negatively impacting the model’s ability to fit the data.
In general, structural dependencies can be incorporated into all phases of a forward process, including inference, latent model, and output space: Normalizing flows (Rezende & Mohamed, 2015), for instance, accounts for dependencies during inference by learning a mapping from a simple distribution to a more complex distribution that contains these dependencies. Structured output networks (Lehrmann & Sigal, 2017), on the other hand, directly predict an expressive non-parametric output distribution. On the modeling side, one can add dependencies by constructing a hierarchical latent representation (Dayan et al., 1995). These structures consist of conditional (empirical) priors, in which one latent variable forms a prior on another latent variable. While this conditional
∗Equal Contribution.
distribution may take a simple form, marginalizing over the parent variable can result in an arbitrarily complex distribution. Models with these more flexible latent dependency structures have been shown to result in improved performance (Sønderby et al., 2016; Burda et al., 2016; Kingma et al., 2016). However, despite the benefits of including additional structure in these models, their dependency structures have so far been predefined, potentially limiting the performance of this approach.
In this work, we propose a method for learning dependency structures in latent variable models. Structure learning is a difficult task with a long history in the graphical models community (Koller & Friedman, 2009). Over the years, it has been tackled from several perspectives, including constraint-based approaches (Cheng et al., 2002; Lehmann & Romano, 2008), optimization of structure scores (Kass & Raftery, 1995; Heckerman et al., 1995; Barron et al., 1998), Bayesian model averaging (Heckerman et al., 1999; Koivisto & Sood, 2004), and many more. Unfortunately, the underlying objectives are often limited to graphs of a particular form (e.g., limited tree width), prohibitively expensive, or difficult to integrate with the gradient-based optimization techniques of modern neural networks. Here, we discuss an end-to-end approach for general graph structures introducing minimal complexity overhead. In particular, we introduce a set of binary global variables to gate the latent dependencies. The whole model (including its structure) is jointly optimized with a single stochastic variational inference objective. In our experimental validation, we show that the learned dependency structures result in models that more accurately model the data distribution, outperforming several common predefined latent dependency structures.
2 BACKGROUND
2.1 VARIATIONAL INFERENCE & VARIATIONAL AUTOENCODERS
A latent variable model, defined by the joint distribution, pθ(x,z) = pθ(x∣z)pθ(z), models each data example, x, using a local latent variable, z, and global parameters, θ. pθ(x∣z) denotes the conditional likelihood, and pθ(z) denotes the prior. Latent variable models are capable of capturing the structure present in data, with z forming a compressed representation of each data example. Unfortunately, inferring the posterior, pθ(z∣x), is typically computationally intractable, prompting the use of approximate inference techniques. Variational inference (Jordan et al., 1999) introduces an approximate posterior, qφ(z∣x), and optimizes variational parameters, φ, to minimize the KL-divergence to the true posterior, KL(qφ(z∣x)∣∣pθ(z∣x)). As this quantity cannot be evaluated directly, the following relation is used:
log pθ(x) = KL(qφ(z∣x)∥pθ(z∣x)) +L(x; θ, φ), (1)
where L(x; θ, φ) is the evidence lower bound (ELBO), defined as
L(x; θ, φ) = Eqφ(z∣x) [log pθ(x∣z)] −KL(qφ(z∣x)∣∣pθ(z)). (2)
In Eq. (1), log pθ(x) is independent of φ, so we can minimize the KL divergence term, i.e. perform approximate inference, by maximizing L(x; θ, φ) w.r.t. qφ(z∣x). Further, because KL divergence is non-negative, L(x; θ, φ) is a lower bound on log pθ(x), meaning we can then learn the model parameters by maximizing L(x; θ, φ) w.r.t. θ.
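To make Eq. (2) concrete, the following is a minimal single-sample PyTorch sketch of the ELBO for a Gaussian approximate posterior and a standard Gaussian prior; it is an illustration only, and the encoder/decoder interfaces (returning mean/log-variance and Bernoulli logits, respectively) are assumptions, not the models used later in this paper.

import torch
import torch.nn.functional as F

def elbo(x, encoder, decoder):
    # q_phi(z|x): assumed to return the mean and log-variance of a diagonal Gaussian.
    mu, logvar = encoder(x)
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    # E_q[log p_theta(x|z)], here a Bernoulli likelihood over binarized inputs.
    log_px_z = -F.binary_cross_entropy_with_logits(decoder(z), x, reduction='none').sum(-1)
    # KL(q_phi(z|x) || N(0, I)) in closed form for diagonal Gaussians.
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1)
    return (log_px_z - kl).mean()  # Eq. (2), averaged over the mini-batch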
Variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014) amortize inference optimization across data examples by parameterizing qφ(z∣x) as a separate inference model, then jointly optimizing the model parameters θ and φ. VAEs instantiate both the inference model and latent variable model with deep networks, allowing them to scale to high-dimensional data. However, VAEs are typically implemented with basic graphical structures and simple, unimodal distributions (e.g. Gaussians). For instance, the dimensions of the prior are often assumed to be independent, pθ(z) = ∏m pθ(zm), with a common assumption being a fixed standard Gaussian: pθ(z) = N (z;0, I). Similarly, approximate posteriors often make the mean field assumption, qφ(z∣x) = ∏m qφ(zm∣x). Independence assumptions such as these may be overly restrictive, thereby limiting modeling capabilities.
2.2 EMPIRICAL PRIORS THROUGH LATENT DEPENDENCY STRUCTURE
One technique for improving the expressive capacity of latent variable models is through incorporating dependency structure among the latent variables, forming a hierarchy (Dayan et al., 1995;
Rezende et al., 2014; Goyal et al., 2017; Vikram et al., 2018; Webb et al., 2018). These dependencies provide empirical priors, learned priors that are conditioned on other latent variables. With M latent dimensions, the full prior takes the following auto-regressive form:
pθ(z) = ∏_{m=1}^{M} pθ(zm∣zpa(m)), (3)
where zpa(m) denotes the vector of latent variables constituting the parents of zm. Each conditional distribution can be parameterized by deep networks that output the parameters of distributions, e.g. mean and variance of a Gaussian. While these conditional distributions may be relatively simple, the marginal empirical prior, pθ(zm) = ∫ pθ(zm∣zpa(m))pθ(zpa(m))dzpa(m), can be arbitrarily complex. By using more flexible priors, models with latent dependency structure are less restrictive in their latent representations, hopefully capturing more information and enabling a better fit to the data.
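As an illustration of the auto-regressive prior in Eq. (3), the following sketch samples nodes root-first, with each conditional parameterized as a diagonal Gaussian by a per-node network; the linear parameterization and all names are assumptions made for brevity.

import torch
import torch.nn as nn

class AutoRegressivePrior(nn.Module):
    # Samples the root node from N(0, I), then each remaining node conditioned
    # on the concatenation of all previously sampled (parent) nodes.
    def __init__(self, num_nodes, dim):
        super().__init__()
        self.num_nodes, self.dim = num_nodes, dim
        self.mlps = nn.ModuleList(
            nn.Linear(m * dim, 2 * dim) for m in range(1, num_nodes))

    def sample(self, batch):
        zs = [torch.randn(batch, self.dim)]  # root node ~ N(0, I)
        for mlp in self.mlps:
            mu, logvar = mlp(torch.cat(zs, dim=-1)).chunk(2, dim=-1)
            zs.append(mu + torch.exp(0.5 * logvar) * torch.randn_like(mu))
        return zs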
With the added latent dependencies in the model, the independence assumption in the approximate posterior is even less valid in this setting. While normalizing flows (Rezende & Mohamed, 2015) offers one technique for overcoming the mean field assumption, a separate line of work has investigated the use of structured approximate posteriors, particularly in the context of models with empirical priors (Johnson et al., 2016). This technique introduces dependencies between the dimensions of the approximate posterior, often mirroring the dependency structure of the latent variable model. An explanation for this was provided by Marino et al. (2018): optimizing the approximate posterior requires knowledge of the prior, which is especially relevant in models with empirical priors where the prior can vary with the data. Ladder VAE (Sønderby et al., 2016) incorporates these prior dependencies by using a structured approximate posterior of the form
qφ(z∣x) = ∏_{m=1}^{M} qφ(zm∣x,zpa(m)). (4)
Unlike the mean field approximate posterior, which conditions each dimension only on the data example, x, the distributions in Eq. (4) account for latent dependencies by conditioning on samples from the parent variables. Ladder VAE performs this conditioning by reusing the empirical prior during inference, forming the approximate posterior by combining a “bottom-up” recognition distribution and the “top-down” prior.
While Eqs. (3) and (4) permit separate latent dependencies for each individual latent dimension, the dimensions are typically partitioned into a set of nodes, with dimensions within each node sharing
the same parents. This improves computational efficiency by allowing priors and approximate posteriors within each node to be calculated in parallel. Using zn to denote latent node n of N and zpa(n) to denote the concatenation of its parent nodes, we can write the ELBO (Eq. (2)) as
L(x; θ, φ) = Eqφ(z∣x) [log pθ(x∣z)] − ∑_{n=1}^{N} Eqφ(z∣x) [log (qφ(zn∣x,zpa(n)) / pθ(zn∣zpa(n)))] . (5)
Note that the KL divergence term in the ELBO can no longer be evaluated analytically, now requiring a sampling-based estimate of the expectation (Kingma & Welling, 2014). While this can lead to higher variance in ELBO estimates and the resulting gradients, models with latent dependencies still tend to empirically outperform models with independence assumptions (Burda et al., 2016; Sønderby et al., 2016). However, by increasing the number of nodes, the burden of devising a suitable dependency structure falls upon the experimental practitioner. This is non-trivial, as the structure may depend on the data and other model hyperparameters, such as the number of layers in the deep networks, non-linearities, latent distributions, etc. Rather than relying on pre-defined fully-connected structures (Kingma et al., 2016) or chain structures (Sønderby et al., 2016), we seek to automatically learn the latent dependency structure as part of the variational optimization process. A comparison of these approaches is visualized in Fig. 1.
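Because the KL term in Eq. (5) has no closed form under the structured posterior, it is estimated from a posterior sample as a difference of log-densities. A sketch of this single-sample estimate, assuming per-node torch.distributions.Normal objects built from the same sampled parents:

import torch

def sampled_kl(q_dists, p_dists, z_samples):
    # q_dists[n] and p_dists[n]: Normal distributions for node n, both conditioned
    # on the same sampled parents; z_samples[n]: one sample z_n ~ q_n.
    kl = 0.0
    for q_n, p_n, z_n in zip(q_dists, p_dists, z_samples):
        kl = kl + (q_n.log_prob(z_n) - p_n.log_prob(z_n)).sum(-1)
    return kl  # single-sample Monte Carlo estimate of the KL term in Eq. (5)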
3 VARIATIONAL OPTIMIZATION OF LATENT STRUCTURES
Unlike the model parameters (φ, θ), which are optimized over a continuous domain, the latent dependency structure is discrete, without a clear ordering. The discrete nature of the latent space’s topological structure introduces discontinuities in the optimization landscape, complicating the learning process. Fortunately, unlike the related setting of neural architecture search (Zoph & Le, 2016), there is only a finite number of possible dependency structures over a fixed number of latent dimensions: In a directed graphical model, a fully-connected directed acyclic graph (DAG) models all possible dependencies. In this model, an ordering is induced over the latent nodes, and the parents of node n (of N ) are given as zpa(n) = {zn+1, . . . ,zN}.1 Thus, to learn an appropriate latent dependency structure, we can maintain all dependencies in a fully-connected DAG, modifying their presence or absence during training. This is accomplished by introducing a set of binary dependency gates (Section 3.1). We convert discrete optimization over dependency structures into a continuous optimization problem by parameterizing these gates as samples from Bernoulli distributions, then learning the distribution parameters (Section 3.3). These gating distributions induce an additional lower bound on L, which becomes tight when the distribution converges to a delta function, yielding a single, optimized dependency structure (Section 3.2). Indeed, we observe this process empirically, with the learned dependency structures outperforming their predefined counterparts (Section 4).
3.1 GATED DEPENDENCIES
To control the dependency structure of the model, we introduce a set of binary global variables, c = {ci,j}i,j , which gate the latent dependencies. The element ci,j denotes the gate variable from zi to zj (i > j), specifying the presence or absence of this latent dependency. Because each element of c takes values in {0,1}, dependencies can be preserved or removed simply through (broadcasted) element-wise multiplication with the corresponding parent nodes. Removing a dependency entails multiplying the corresponding input parent node by 0. Each possible latent dependency structure can now be expressed through its corresponding value of c.
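The gating operation itself reduces to a broadcasted multiply before concatenation; the following short sketch (with illustrative names) is all that is required:

import torch

def gate_parents(parent_samples, gates):
    # parent_samples: list of [batch, dim] tensors; gates: one 0/1 value per edge.
    # A gate of 0 removes the dependency; a gate of 1 preserves it.
    return torch.cat([z * c for z, c in zip(parent_samples, gates)], dim=-1)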
Each fixed latent dependency structure, c′, induces a separate latent variable model pθ(x,z,c) = pθ(x∣z,c)pθ(z∣c)δc,c′ , where δ⋅,⋅ is the Kronecker delta, which effectively selects a single structure. Similar to Eq. (3), the prior on the latent variables can now be expressed as
pθ(z∣c) = ∏_{n=1}^{N} pθ(zn∣zpa(n),cpa(n),n), (6)
where cpa(n),n denotes the gate variables associated with the dependencies between node zn and its parents, zpa(n). Note that zpa(n) denotes the set of all possible parents of node zn in the fully-connected DAG, i.e. zpa(n) = {zn+1, . . . ,zN}. To give a concrete example of the gating procedure,
1We follow convention (Dayan et al., 1995; Rezende et al., 2014; Sønderby et al., 2016), with parent nodes having a larger index than their children.
consider the case in which pθ(zn∣zpa(n),cpa(n),n) is given by a Gaussian density with parameters ψ̂n = (µ̂n, Σ̂n). We obtain these parameters recursively by multiplying samples of node zn’s parent variables zpa(n) with their corresponding gating variables cpa(n),n and input a concatenation of the results of this operation into a multi-layer perceptron MLP(TD)n predicting ψ̂n (see Appendix B for additional details on the MLP architecture). The top-down recursion starts at the root node zN ∼ p(zN) = N (0, I). An illustration of this process is shown in black in Fig. 2.
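Combining the prior sketch above with the gating operation, a gated top-down pass can be sketched as follows; the modules standing in for MLP(TD)n are hypothetical and are assumed to map gated, concatenated parents to (µ̂n, log σ̂n²):

import torch

def top_down_pass(td_mlps, gates, batch, dim):
    # The root node z_N is drawn from a standard Gaussian; each remaining node
    # conditions on gated samples of all previously generated (higher-index) nodes.
    zs = [torch.randn(batch, dim)]  # z_N ~ N(0, I)
    for n, mlp in enumerate(td_mlps):
        parents = torch.cat([z * g for z, g in zip(zs, gates[n])], dim=-1)
        mu, logvar = mlp(parents).chunk(2, dim=-1)  # psi_hat_n in the text
        zs.append(mu + torch.exp(0.5 * logvar) * torch.randn_like(mu))
    return zs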
The approximate posterior, qφ(z∣x,c), must approximate pθ(z∣x,c). We express the approximate posterior as
qφ(z∣x,c) = ∏_{n=1}^{N} qφ(zn∣x,zpa(n),cpa(n)). (7)
We note that the dependency structures of the generative model and its corresponding posterior are, in general, not independent: choosing a particular structure in the generative model induces a particular structure in the posterior (Webb et al., 2018). A simple way to guarantee enough capacity in the encoder to account for the dependencies implied by the decoder is thus to keep the encoder graph fully-connected and learn a decoder graph only. Instead, we share the gating variables c between approximate posterior and generative model (see Section 3.3), i.e., we assume that the encoder dependencies mirror those of the decoder. As a consequence, the posterior implied by the generative model could lie outside of the model class representable by the encoder. In practice, this is not an issue and we observe significant performance improvements over both traditional VAEs (where prior and posterior match but are limited in their expressiveness) and Graph VAEs with fully-connected encoder graph. See Section 4.3 for quantitative experiments and Section 5 for a discussion on the relationship between learned structures and fully-connected structures.
Parameter prediction for the local factors qφ(zn∣x,zpa(n),cpa(n)) consists of a precision-weighted fusion of the top-down prediction ψ̂n described above and a bottom-up prediction ψ̃n. The latter is obtained by encoding x into a generic feature that is used as an input to a node-specific multi-layer perceptron MLP(BU)n predicting ψ̃n. This is shown in blue in Fig. 2. Additional details on the fusion process can be found in Appendix B.3.
Algorithm 1 Optimizing VAEs with Latent Dependency Structure
Require: Data x, number of latent nodes N , number of dimensions per node N ′.
1: Initialize θ, φ, µ.
2: repeat
3: Sample c using Eq. (9) and determine zpa(n) for each zn based on the sampled structure.
4: For each node, compute qφ(zn∣x,zpa(n)) using Eq. (7).
5: Sample z from qφ(z∣x) using Eq. (6) and compute pθ(x∣z).
6: Update θ, φ, µ based on the gradients derived from Eq. (8).
7: until Convergence.
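A compact rendering of one inner step of Algorithm 1 in PyTorch might look as follows; model, sample_gates, and structure_elbo are hypothetical stand-ins for the procedures described above, not the authors' released code.

import torch

def training_step(model, opt, x, tau):
    c = model.sample_gates(tau)           # step 3: relaxed gates via Eq. (9)
    loss = -model.structure_elbo(x, c)    # steps 4-5: one-sample estimate of Eq. (8)
    opt.zero_grad()
    loss.backward()                       # step 6: gradients w.r.t. theta, phi, mu
    opt.step()
    return loss.item()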
3.2 LEARNING STRUCTURE BY INDUCING AN ADDITIONAL LOWER BOUND
The formulation in Section 3.1 provides the form of the model for a particular configuration of the latent dependency structure. Finding the optimal structure corresponds to a discrete optimization over all values of c, potentially optimizing the model parameters of each possible configuration. To avoid this intractable procedure, we place a distribution over c, then directly optimize the parameters of this distribution to arrive at a single, learned latent dependency structure. Specifically, we treat each ci,j as an independent random variable, sampled from a Bernoulli distribution with mean µi,j , i.e. ci,j ∼ p(ci,j) = B(µi,j). We denote the set of these Bernoulli means as µ. Introducing this distribution allows us to express the following additional lower bound on L, derived in Appendix A:
L ≥ L̃ = Ep(c) [Eqφ(z∣x) [log pθ(x∣z,c)] −KL(qφ(z∣x)∣∣pθ(z∣c))] = Ep(c) [Lc] , (8)
where Lc is the ELBO for a particular value of dependency gating variables. Thus, L̃ can be interpreted as the expected ELBO under the distribution of dependency structures induced by p(c), which we estimate by sampling c ∼ p(c) and evaluating Lc. We note that L̃ is not a proper variational bound, as it is not guaranteed to recover the marginal likelihood if the approximate posterior matches the true posterior. Rather, optimizing L̃ provides a method for learning the latent structure. For fixed parameters (φ, θ), the optimal L̃ w.r.t. µ is a δ-distribution at the MAP configuration, c∗, yielding L = L̃ = Lc∗ . This is because Lc∗ is always greater than or equal to the expected ELBO over all dependency gates, L̃. In practice, we jointly optimize φ, θ, and µ. While this non-convex optimization procedure may result in local optima, we find this empirically works well, with p(c) converging to a fixed distribution (Fig. 3). Thus, by the end of training, we are effectively optimizing the ELBO for a single dependency structure. The training procedure is outlined in Algorithm 1.
3.3 LEARNING THE DISCRETE GATING VARIABLE DISTRIBUTIONS
For a given latent dependency structure, gradients for the parameters θ and φ can be estimated using Monte Carlo samples and the reparameterization trick (Kingma & Welling, 2014; Rezende et al., 2014). To obtain gradients for the gate means, µ, we make use of recent advances in differentiating through discrete operations (Maddison et al., 2017; Jang et al., 2017), allowing us to differentiate through the sampling of the dependency gating variables, c. Specifically, we recast the gating variables using the Gumbel-Softmax estimator from Jang et al. (2017), re-expressing ci,j as:
ci,j = exp((log(µi,j) + ε1)/τ) / [exp((log(µi,j) + ε1)/τ) + exp((log(1 − µi,j) + ε2)/τ)] , (9)
where ε1 and ε2 are i.i.d. samples drawn from a Gumbel(0, 1) distribution and τ is a temperature parameter. The Gumbel-Softmax distribution is differentiable for τ > 0, allowing us to estimate the derivative ∂ci,j/∂µi,j . For large values of τ , we obtain smoothed versions of c, essentially interpolating between different dependency structures. As τ → 0, we recover binary values for c, yielding the desired discrete sampling of dependency structures at the cost of high-variance gradient estimates. Thus, we anneal τ during training to learn the dependency gate means, µ, eventually arriving at the discrete setting.
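A sketch of the binary relaxation in Eq. (9); the sigmoid of the scaled logit difference is algebraically identical to the two-way softmax and numerically more stable:

import torch

def sample_gate(mu, tau):
    # Binary Gumbel-Softmax (Concrete) relaxation of c ~ Bernoulli(mu), mu in (0, 1).
    g1 = -torch.log(-torch.log(torch.rand_like(mu)))  # Gumbel(0, 1) noise
    g2 = -torch.log(-torch.log(torch.rand_like(mu)))
    logit = (torch.log(mu) + g1) - (torch.log(1.0 - mu) + g2)
    return torch.sigmoid(logit / tau)  # approaches {0, 1} as tau -> 0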
4 EVALUATION
We evaluate the proposed latent dependency learning approach on three benchmark datasets: MNIST (Lecun et al., 1998; Larochelle & Murray, 2011), Omniglot (Lake et al., 2013), and CIFAR10 (Krizhevsky, 2009). After discussing the experimental setup in Section 4.1, we provide a set of qualitative experiments in Section 4.2 to gain insight into the learning process, hyper-parameter selection, and the nature of the inferred structures. In Section 4.3, we provide quantitative comparisons with common predefined latent dependency structures on benchmark datasets. Additional results on training robustness and latent space embeddings can be found in Appendix C.
4.1 EXPERIMENTAL SETUP
To provide a fair comparison, the encoders of all structured methods use the same MLP architecture with batch normalization (Ioffe & Szegedy, 2015) and ReLU non-linearities (Nair & Hinton, 2010) in all experiments.2 Decoder structures are the reverse of the encoders. Likewise, the number of latent dimensions is the same in all models and experiments (M = 80). As discussed in Section 3.1, all latent dependencies are modeled by non-linear MLPs as well.
For MNIST and Omniglot, we binarize the data and model pθ(x∣z) as a Bernoulli distribution, using a sigmoid non-linearity on the output layer to produce the mean of this distribution. For CIFAR-10, we model pθ(x∣z) with a Gaussian density, with mean and log-variance predicted by sigmoid and linear functions, respectively. Further implementation details, including the model architectures and training criteria, can be found in Appendix B.
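For reference, the two observation models correspond to the following log-likelihood sketches (flattened inputs assumed):

import math
import torch
import torch.nn.functional as F

def bernoulli_loglik(logits, x):
    # MNIST/Omniglot: sigmoid(logits) is the Bernoulli mean over binarized pixels.
    return -F.binary_cross_entropy_with_logits(logits, x, reduction='none').sum(-1)

def gaussian_loglik(mu, logvar, x):
    # CIFAR-10: diagonal Gaussian with predicted mean and log-variance.
    return -0.5 * (logvar + (x - mu).pow(2) / logvar.exp() + math.log(2 * math.pi)).sum(-1)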
4.2 QUALITATIVE ANALYSIS
We first explore the structure learning process. As described in Section 3.2 and Appendix A, optimizing L̃ w.r.t. the dependency gating means, µ, should push this lower bound toward L. Thus, µ should converge to either 0 or 1, yielding a fixed, final latent dependency structure. In Fig. 3, we visualize this process during training on MNIST. The model has N = 5 nodes and a total latent dimension of M = 80 (i.e., N′ = M/N = 16 dimensions per node). As shown in Fig. 3a, the gating means converge in practice, with 3 out of 10 edges removed and the rest retained. The resulting static dependency structure is visualized in Fig. 3b. We observed that the learned structure is stable across training runs with different seeds for parameter initialization and mini-batch sampling, supporting the hypothesis that the inferred structure indeed depends on the model parameterization and the dataset.
2We implement classic VAEs using a more complex encoder to match the number of parameters of the structured methods. All baselines use the same or more parameters than Graph VAE.
We next investigate the influence of the total latent dimension, M , and the trade-off between the number of nodes, N , and the node dimension, N ′ = M/N . Our results for various models trained on MNIST are shown in Fig. 4. Models with the same total latent dimension are shown in the same color. We observe that the performance improves with increasing total latent dimension, likely resulting from the additional flexibility of the higher-dimensional latent space. We also observe that, for a fixed number of latent dimensions, models with fewer node dimensions (and therefore more nodes with a more complex dependency structure) typically perform better. This highlights the importance of using an expressive dependency structure for obtaining a flexible model.
4.3 QUANTITATIVE COMPARISON
To quantitatively evaluate the improvements due to learning the latent dependency structure, we compare with a range of common, predefined baseline structures. These baselines include classic VAEs (Kingma & Welling, 2014; Rezende et al., 2014), which contain no dependencies in the prior, Ladder VAEs (Sønderby et al., 2016), which contain chain-like dependencies in the prior, and fully-connected VAEs (FC-VAEs) (cf. Kingma et al. (2016)), which contain all possible dependencies in the prior (corresponding to all gating variable parameters µi,j set to a fixed value of 1). We note that our approach is orthogonal and could be complemented by a number of other approaches attempting to overcome the limitations of classic VAEs. Similar to ladder VAEs, Zhao et al. (2017) use chain-structured latent dependencies to learn disentangled representations. Normalizing flows (Rezende & Mohamed, 2015), on the other hand, adds dependencies to the approximate posterior through a series of invertible transformations.
We evaluate the performance of all models using their test log-likelihood, log pθ(x), in 5 independent runs (Table 1). All values were estimated using 5,000 importance-weighted samples. Following standard practice, we report log pθ(x) in nats on MNIST/Omniglot and in bits/input dimension on CIFAR-10. The learned dependency structure in our proposed Graph VAE consistently outperforms models with both fewer (VAE, ladder VAE) and more (FC-VAE) latent dependencies. We discuss potential reasons in Section 5. To provide further insight into the training objective, Table 1 also reports DKL(qφ(z∣x)∣∣pθ(z)) and the ELBO for each model on the test set.
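The reported log-likelihoods are importance-weighted estimates. A sketch with K samples, where log_weight is a hypothetical callable that draws z ~ qφ(z∣x) and returns the [batch]-shaped quantity log pθ(x,z) − log qφ(z∣x):

import math
import torch

def iw_log_likelihood(log_weight, x, K=5000):
    log_w = torch.stack([log_weight(x) for _ in range(K)], dim=0)  # [K, batch]
    # log p(x) ~= logsumexp_k(log w_k) - log K (Burda et al., 2016)
    return torch.logsumexp(log_w, dim=0) - math.log(K)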
Encoder-Decoder Relationship. From a purely theoretical point of view, learning the structure of the generative model implies the need for a fully-connected graph in the approximate posterior (see Section 3.1). In practice, we share the gating variables c between encoder and decoder (see Eq. (8)), because we observed improved empirical performance when doing so: training a Graph VAE decoder with a predefined, fully-connected encoder graph results in a mean test log-likelihood of −83.40 nats on MNIST, which is worse than the performance of FC-VAE (−82.80 nats) and Graph VAE (−82.58 nats). We believe these are noteworthy empirical results, but further research is
required to understand this behavior at a theoretical level. A full visualization of the training process is provided in Appendix C.1.
5 DISCUSSION
Performance vs. Speed/Memory. As shown in Fig. 4, the number of latent nodes can significantly impact the performance of our model. While allowing more complex dependency structures through a low M/N ratio is typically beneficial, it also has an adverse effect on the training time and memory consumption. Fortunately, the ability to freely select this ratio allows a simple adaptation to the available processing power, hardware constraints, and application scenarios.
Optimization Order. It is worth noting that the learning process optimizes the model parameters (c, φ, θ) in a clear temporal order. While the latent structure, governed by c, converges during the first ≈ 200 epochs (Fig. 3a), it takes over 10× as long until the variational and generative parameters (φ, θ) converge. There is no external force enforcing this behaviour, indicating that the loss can initially most easily be decreased by limiting the latent structure to the complexity prescribed by the observed training data.
Performance Improvement over Fully-Connected VAEs. There is an intricate relationship between fully-connected graphs vs. learned graphs along one axis and between prior structures vs. posterior structures along a separate axis: FC-VAEs model all conditional latent dependencies and are thus potentially more expressive and flexible than other latent dependency structures. It is therefore somewhat surprising that the learned latent structures in Graph VAE consistently outperform the FC-VAE baseline. We speculate that this may be due to difficulties in optimization, which is a known problem in hierarchical latent variable models (Bowman et al., 2016; Burda et al., 2016). Graph VAE and FC-VAE both contain the same hypothesis space of possible models that can be learned. If all dependencies are needed, Graph VAE could set all dependency gate parameters to 1. Likewise, if a latent dependency was unnecessary, FC-VAE could set all of the model parameters in that dependency to 0. However, this would require many coordinated steps along multiple parameter dimensions. It is plausible that the benefit of learning the dependency structure may stem from the ability to alter the optimization landscape, using the dependency gates to move through the model parameter space more coarsely and rapidly. The resulting latent dependency structures may thus be less expressive, but easier to optimize. While only an intuition, this hypothesis is also in line with the observations and results in our experiments with fully-connected encoder graphs and learned decoder graphs, where the theoretically more flexible FC-encoder is outperformed by our parameter sharing approach. Follow-up work will be required to test this intuition.
6 CONCLUSION
We presented a novel method for structure learning in latent variable models, which uses dependency gating variables, together with a modified objective and discrete differentiation techniques, to effectively transform discrete structure learning into a continuous optimization problem. In our experiments, the learned latent dependency structures improve the performance of latent variable models over comparable baselines with predefined dependency structures. The approach presented here provides directions for further research in structure learning for other tasks, including undirected graphical models, time-series models, and discriminative models.
A LOWER BOUND DERIVATION
Introducing the distribution over dependency gating variables, p(c) = B(µ), modifies the evidence lower bound (ELBO) from Eq. (2), as we now have an additional set of random variables over which to marginalize. To see this, we can start by re-expressing Eq. (2) as
L = Eqφ(z∣x) [log pθ(x,z)] −Eqφ(z∣x) [log qφ(z∣x)] . (10)
pθ(x,z) can be expressed as a marginalization over the gating variables, effectively averaging over an ensemble of models with a distribution of dependency structures:
pθ(x,z) = ∫ pθ(x,z,c)dc = ∫ pθ(x,z∣c)p(c)dc = Ep(c) [pθ(x,z∣c)] . (11)
Plugging this into Eq. (10):
L = Eqφ(z∣x) [logEp(c) [pθ(x,z∣c)]] −Eqφ(z∣x) [log qφ(z∣x)] . (12)
Using Jensen’s inequality, we bring the log inside of the expectation, Ep(c) [⋅], yielding
L ≥ L̃ = Eqφ(z∣x) [Ep(c) [log pθ(x,z∣c)]] −Eqφ(z∣x) [log qφ(z∣x)] , (13)
where L̃ is a lower bound on L. Swapping the order of expectation, we rewrite L̃ as
L̃ = Ep(c) [Eqφ(z∣x) [log pθ(x,z∣c)]] −Eqφ(z∣x) [log qφ(z∣x)] , (14)
and because the second term is independent of p(c), we can include both terms inside of a single outer expectation:
L̃ = Ep(c) [Eqφ(z∣x) [log pθ(x,z∣c) − log qφ(z∣x)]]
= Ep(c) [Eqφ(z∣x) [log pθ(x∣z,c)] −KL(qφ(z∣x)∣∣pθ(z∣c))] = Ep(c) [Lc] ,
(15)
where we have defined Lc as the ELBO for a given dependency structure. Note that L̃ is not a proper variational bound, as it cannot recover the marginal log likelihood. Rather, L̃ allows us to optimize the distribution over gating variables and, thus, the model structure. When we arrive at a fixed structure, we will be optimizing the variational bound for that particular dependency structure.
To see this, note that the bound in Eq. (13) becomes tight when p(c) is a deterministic distribution, i.e., one in which the dependency gating means, µ, are all either 1 or 0. Plugging in a delta distribution for p(c) at a particular configuration, c′, i.e. p(c) = δc,c′ , we have
L = Eqφ(z∣x) [logEδc,c′ [pθ(x,z∣c)]] − Eqφ(z∣x) [log qφ(z∣x)]
= Eqφ(z∣x) [Eδc,c′ [log pθ(x,z∣c)]] − Eqφ(z∣x) [log qφ(z∣x)]
= L̃. (16)
Intuitively, this is simply the case of a latent variable model with predefined, fixed dependencies. By optimizing L̃ w.r.t. µ, we hope to collapse p(c) to a delta distribution at the optimal (MAP) configuration, c∗, because
Lc∗ = Eδc,c∗ [Lc] ≥ Ep(c) [Lc] = L̃. (17)
Effectively, we can search for the dependency structure with the highest ELBO by optimizing the distribution parameters of p(c). Although this optimization procedure may arrive at locally optimal dependency structures, our hope is that these learned structures will still perform better than an arbitrary, predefined dependency structure.
B IMPLEMENTATION DETAILS
B.1 NETWORK ARCHITECTURE
We document the network architectures used in our experiments. We use the same network architectures for all datasets. Input size (pixels per image) is the only difference across datasets. The input size is 28 × 28 for MNIST and Omniglot, and 3 × 32 × 32 for CIFAR-10.
Encoders:
fc(input size,512) → batch norm → ELU → fc(512,512) → batch norm → ELU → fc(512,256) → batch norm → ELU → fc(256,128)
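In PyTorch this encoder specification corresponds roughly to the following sketch (flattened inputs assumed):

import torch.nn as nn

def make_encoder(input_size):
    return nn.Sequential(
        nn.Linear(input_size, 512), nn.BatchNorm1d(512), nn.ELU(),
        nn.Linear(512, 512), nn.BatchNorm1d(512), nn.ELU(),
        nn.Linear(512, 256), nn.BatchNorm1d(256), nn.ELU(),
        nn.Linear(256, 128),
    )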
Latent Dependencies: Latent dependencies are modelled by non-linear MLPs. Note that the top-down architecture is shared between inference model and generative model, but the MLPs are optimized independently.
bottom-up: For each node with N ′ dimensions, the local potential is predicted by a mapping from the encoded feature:
fc(128,128) → batch norm → ELU → fc(128,N ′). The output feature is then mapped to µ and log var with two independent fc(N ′,N ′) layers, respectively.
top-down: For each node (N ′ dimensions) with a set of parent nodes, the top-down inference/generation is implemented as:
fc(sum of parent nodes’ dimension, 128) → batch norm → ELU → fc(128,N ′). The output feature is then mapped to µ and log var with two independent fc(N ′, N ′) layers, respectively.
Decoders:
fc(N ′,256) → batch norm → ELU → fc(256,512) → batch norm → ELU → fc(512,512)→ batch norm→ ELU→ fc(512,input size)→ output function()
The output function for MNIST and Omniglot is sigmoid(), which predicts the mean µ of the Bernoulli observations. For CIFAR-10, sigmoid() predicts µ and fc(input size, input size) predicts log var of the Gaussian observations.
B.2 TRAINING
All models were implemented with PyTorch (Paszke et al., 2017) and trained using the Adam (Kingma & Ba, 2015) optimizer with a mini-batch size of 64 and a learning rate of 1e−3. The learning rate is decreased by 0.25 every 200 epochs. The Gumbel-softmax temperature was initialized at 1 and decayed to 0.99^epoch at each epoch. MNIST and Omniglot took 2000 epochs to converge, and CIFAR-10 took 3000 epochs to converge.
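A sketch of these schedules; interpreting "decreased by 0.25" as multiplying the learning rate by 0.25 is an assumption, as is the per-epoch helper:

import torch

def fit(model, run_one_epoch, epochs=2000):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=200, gamma=0.25)
    for epoch in range(epochs):
        tau = 0.99 ** epoch             # Gumbel-softmax temperature annealing
        run_one_epoch(model, opt, tau)  # hypothetical helper, e.g. training_step per batch
        sched.step()                    # lr scaled by 0.25 every 200 epochs (assumed reading)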
B.3 INFERENCE MODULE DETAILS
Parameter prediction for the local factors qφ(zn∣x,zpa(n)) consists of a precision-weighted fusion of the top-down prediction ψ̂n and a bottom-up prediction ψ̃n. Specifically, for a latent variable zn, ψ̂n ∶= {µ̂n, σ̂n}, and ψ̃n ∶= {µ̃n, σ̃n}.
Bottom-up Inference. A high-dimensional input is first mapped to a feature vector hx by an encoder MLP. hx is then used to predict µ̃n and σ̃n with non-linear MLPBUn .
Top-down Inference. µ̂n and σ̂n are predicted by µ̂n = MLPTDn ([zpa(n) ⊙ cpa(n),n]) and σ̂n = MLPTDn ([zpa(n) ⊙ cpa(n),n]), respectively. [⋅, ⋅] denotes the concatenation operation, and ⊙ denotes element-wise multiplication.
Precision-weighted fusion. Having ψ̂n and ψ̃n, the parameters of the local conditional distribution are given by qφ(zn∣x,zpa(n),cpa(n),n) ∼ N (zn∣µn, σ²n), with
σn = 1 / (σ̂n⁻² + σ̃n⁻²), µn = (µ̂n σ̂n⁻² + µ̃n σ̃n⁻²) / (σ̂n⁻² + σ̃n⁻²). (18)
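A direct sketch of Eq. (18); in practice a small constant would be added to the denominators for numerical stability:

import torch

def precision_weighted_fusion(mu_td, sigma_td, mu_bu, sigma_bu):
    # Fuse top-down (hat) and bottom-up (tilde) Gaussian estimates, Eq. (18).
    prec_td, prec_bu = sigma_td ** -2, sigma_bu ** -2
    sigma = 1.0 / (prec_td + prec_bu)
    mu = (mu_td * prec_td + mu_bu * prec_bu) / (prec_td + prec_bu)
    return mu, sigma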
C ADDITIONAL RESULTS
C.1 ROBUSTNESS OF LOG-LIKELIHOODS
In Fig. 5, we report the averaged test log-likelihoods and associated standard deviations of Graph VAE and our baselines at different epochs. All calculations are based on 5 independent runs.
C.2 LATENT EMBEDDINGS
Our training objective optimizes intrinsic structure (which does not necessarily correlate with semantic meaning) and does not incentivize a disentanglement of latent factors. Interestingly, a t-SNE visualization of the data as well as latent embeddings of Graph VAE and VAE on MNIST (Fig. 6) shows that the latent embedding of Graph VAE exhibits a large (semantic) gap between different classes, even though the model is trained in an unsupervised fashion. We will further investigate this behavior in future work. | 1. What is the main contribution of the paper regarding deep generative models?
2. What are the strengths of the proposed approach, particularly in representing structure via a learned binary adjacency matrix?
3. Do you have any questions regarding the experimental setup and baseline comparisons?
4. How does the reviewer assess the clarity, quality, and novelty of the paper's content?
5. Are there any related works that the author could discuss to provide further context?
6. Is there a qualitative analysis of the data sampled from the model when perturbing the learned hierarchy?
7. Is there a relationship between the complexity of each conditional distribution and the learned latent structure? | Review | Review
Often in a deep generative model with multiple latent variables, the structure amongst the latent variables is pre-specified before parameter estimation. This work aims to learn the structure as part of the parameters. To do so, this work represents all possible dependencies amongst the latent random variables via a learned binary adjacency matrix, c, where a 1 denotes each parent-child relationship.
Each setting of c defines a latent variable as the root and subsequent parent-child relationships amongst the others. To be able to support up to N−1 parents, the paper proposes a neural architecture where the sample from each parent is multiplied by the corresponding value of c_ij (zeroed out if the edge does not exist in c), concatenated, and fed into an MLP that predicts a distribution over the child node. The inference network shares parameters with the generative model (as in Sønderby et al.). Given any setting of c, one can define the variational lower bound of the data. This work performs parameter estimation by sampling c and then performing gradient ascent on the resulting lower bound.
The model is evaluated on MNIST, Omniglot and CIFAR where it is found to outperform a VAE with a single latent variable (with the same number of latent dimensions as the proposed graphVAE), the LadderVAE and the FCVAE (VAE with a fully connected graph). An ablation study is conducted to study the effect of number of nodes and their dimensionality.
Overall, the paper is (a) well written, (b) proposes a new, interesting idea and (c) shows that the choice to parameterize structure via the use of auxillary random variables improves the quality of results on some standard benchmarks.
Comments and questions for the authors:
* Clarity
It might be instructive to describe in detail how the inference network is structured for different settings of c (for example, via a scenario with three latent variables) rather than via reference to Sønderby et al.
What prior distribution was used for c?
For the baseline comparing to the Ladder VAE, what dimensionalities were used for the latent variables in the Ladder VAE (which has a chain-structured dependence amongst its latent variables)? The experimental setup keeps the latent dimensionality fixed at 80; the original paper recommends a different dimensionality for each latent variable in the chain [https://arxiv.org/pdf/1602.02282.pdf, Table 2]. Was this tried? Did the Ladder VAE do better if each latent variable in the chain was allowed to have a different dimensionality?
* Related work
There is related work which leverages Bayesian non-parametric models to learn hierarchical priors for deep generative models. It is worth discussing for putting this line of work into context. For example:
http://openaccess.thecvf.com/content_ICCV_2017/papers/Goyal_Nonparametric_Variational_Auto-Encoders_ICCV_2017_paper.pdf
and more recently: https://arxiv.org/pdf/1810.06891.pdf
In the context of defining inference networks for generative models where the latent variables have structure, Webb et al. [https://arxiv.org/abs/1712.00287] describe how inference networks should be set up in order to invert the generative process.
* Qualitative study
Notable in its absence is a qualitative analysis of what happens to the data sampled from the model when the various nodes in the learned hierarchy are perturbed while their parents are held fixed. Have you attempted this experiment? Are the edge relationships sensible or interesting?
Is there a relationship between the complexity of each conditional distribution in the generative model and the learned latent structure? Specifically, have you experimented to see what happens to the learned structure amongst the latent variables if each conditional density is a linear function of its parents? |
ICLR | Title
Variational Autoencoders with Jointly Optimized Latent Dependency Structure
Abstract
We propose a method for learning the dependency structure between latent variables in deep latent variable models. Our general modeling and inference framework combines the complementary strengths of deep generative models and probabilistic graphical models. In particular, we express the latent variable space of a variational autoencoder (VAE) in terms of a Bayesian network with a learned, flexible dependency structure. The network parameters, variational parameters as well as the latent topology are optimized simultaneously with a single objective. Inference is formulated via a sampling procedure that produces expectations over latent variable structures and incorporates top-down and bottom-up reasoning over latent variable values. We validate our framework in extensive experiments on MNIST, Omniglot, and CIFAR-10. Comparisons to state-of-the-art structured variational autoencoder baselines show improvements in terms of the expressiveness of the learned model.
1 INTRODUCTION
Deep latent variable models offer an effective method for automatically learning structure from data. By explicitly modeling the data distribution using latent variables, these models are capable of learning compressed representations that are then relevant for downstream tasks. Such models have been applied across a wide array of domains, such as images (Gregor et al., 2014; Kingma & Welling, 2014; Rezende et al., 2014; Gregor et al., 2015), audio (Chung et al., 2015; Fraccaro et al., 2016), video (He et al., 2018; Yingzhen & Mandt, 2018), and text (Bowman et al., 2016; Krishnan et al., 2017). However, despite their success, latent variable models are often formulated with simple (e.g. Gaussian) distributions, making independence assumptions among the latent variables. That is, each latent variable is sampled independently. Ignoring the dependencies between latent variables limits the flexibility of these models, negatively impacting the model’s ability to fit the data.
In general, structural dependencies can be incorporated into all phases of a forward process, including inference, latent model, and output space: Normalizing flows (Rezende & Mohamed, 2015), for instance, accounts for dependencies during inference by learning a mapping from a simple distribution to a more complex distribution that contains these dependencies. Structured output networks (Lehrmann & Sigal, 2017), on the other hand, directly predict an expressive non-parametric output distribution. On the modeling side, one can add dependencies by constructing a hierarchical latent representation (Dayan et al., 1995). These structures consist of conditional (empirical) priors, in which one latent variable forms a prior on another latent variable. While this conditional
∗Equal Contribution.
distribution may take a simple form, marginalizing over the parent variable can result in an arbitrarily complex distribution. Models with these more flexible latent dependency structures have been shown to result in improved performance (Sønderby et al., 2016; Burda et al., 2016; Kingma et al., 2016). However, despite the benefits of including additional structure in these models, their dependency structures have so far been predefined, potentially limiting the performance of this approach.
In this work, we propose a method for learning dependency structures in latent variable models. Structure learning is a difficult task with a long history in the graphical models community (Koller & Friedman, 2009). Over the years, it has been tackled from several perspectives, including constraint-based approaches (Cheng et al., 2002; Lehmann & Romano, 2008), optimization of structure scores (Kass & Raftery, 1995; Heckerman et al., 1995; Barron et al., 1998), Bayesian model averaging (Heckerman et al., 1999; Koivisto & Sood, 2004), and many more. Unfortunately, the underlying objectives are often limited to graphs of a particular form (e.g., limited tree width), prohibitively expensive, or difficult to integrate with the gradient-based optimization techniques of modern neural networks. Here, we discuss an end-to-end approach for general graph structures introducing minimal complexity overhead. In particular, we introduce a set of binary global variables to gate the latent dependencies. The whole model (including its structure) is jointly optimized with a single stochastic variational inference objective. In our experimental validation, we show that the learned dependency structures result in models that more accurately model the data distribution, outperforming several common predefined latent dependency structures.
2 BACKGROUND
2.1 VARIATIONAL INFERENCE & VARIATIONAL AUTOENCODERS
A latent variable model, defined by the joint distribution, pθ(x,z) = pθ(x∣z)pθ(z), models each data example, x, using a local latent variable, z, and global parameters, θ. pθ(x∣z) denotes the conditional likelihood, and pθ(z) denotes the prior. Latent variable models are capable of capturing the structure present in data, with z forming a compressed representation of each data example. Unfortunately, inferring the posterior, pθ(z∣x), is typically computationally intractable, prompting the use of approximate inference techniques. Variational inference (Jordan et al., 1999) introduces an approximate posterior, qφ(z∣x), and optimizes variational parameters, φ, to minimize the KL-divergence to the true posterior, KL(qφ(z∣x)∣∣pθ(z∣x)). As this quantity cannot be evaluated directly, the following relation is used:
log pθ(x) = KL(qφ(z∣x)∥pθ(z∣x)) +L(x; θ, φ), (1)
where L(x; θ, φ) is the evidence lower bound (ELBO), defined as
L(x; θ, φ) = Eqφ(z∣x) [log pθ(x∣z)] −KL(qφ(z∣x)∣∣pθ(z)). (2)
In Eq. (1), log pθ(x) is independent of φ, so we can minimize the KL divergence term, i.e. perform approximate inference, by maximizing L(x; θ, φ) w.r.t. qφ(z∣x). Further, because KL divergence is non-negative, L(x; θ, φ) is a lower bound on log pθ(x), meaning we can then learn the model parameters by maximizing L(x; θ, φ) w.r.t. θ.
Variational autoencoders (VAEs) (Kingma & Welling, 2014; Rezende et al., 2014) amortize inference optimization across data examples by parameterizing qφ(z∣x) as a separate inference model, then jointly optimizing the model parameters θ and φ. VAEs instantiate both the inference model and latent variable model with deep networks, allowing them to scale to high-dimensional data. However, VAEs are typically implemented with basic graphical structures and simple, unimodal distributions (e.g. Gaussians). For instance, the dimensions of the prior are often assumed to be independent, pθ(z) = ∏m pθ(zm), with a common assumption being a fixed standard Gaussian: pθ(z) = N (z;0, I). Similarly, approximate posteriors often make the mean field assumption, qφ(z∣x) = ∏m qφ(zm∣x). Independence assumptions such as these may be overly restrictive, thereby limiting modeling capabilities.
2.2 EMPIRICAL PRIORS THROUGH LATENT DEPENDENCY STRUCTURE
One technique for improving the expressive capacity of latent variable models is through incorporating dependency structure among the latent variables, forming a hierarchy (Dayan et al., 1995;
Rezende et al., 2014; Goyal et al., 2017; Vikram et al., 2018; Webb et al., 2018). These dependencies provide empirical priors, learned priors that are conditioned on other latent variables. With M latent dimensions, the full prior takes the following auto-regressive form:
pθ(z) = M
∏ m=1
pθ(zm∣zpa(m)), (3)
where zpa(m) denotes the vector of latent variables constituting the parents of zm. Each conditional distribution can be parameterized by deep networks that output the parameters of distributions, e.g. mean and variance of a Gaussian. While these conditional distributions may be relatively simple, the marginal empirical prior, pθ(zm) = ∫ pθ(zm∣zpa(m))pθ(zpa(m))dzpa(m), can be arbitrarily complex. By using more flexible priors, models with latent dependency structure are less restrictive in their latent representations, hopefully capturing more information and enabling a better fit to the data.
With the added latent dependencies in the model, the independence assumption in the approximate posterior is even less valid in this setting. While normalizing flows (Rezende & Mohamed, 2015) offers one technique for overcoming the mean field assumption, a separate line of work has investigated the use of structured approximate posteriors, particularly in the context of models with empirical priors (Johnson et al., 2016). This technique introduces dependencies between the dimensions of the approximate posterior, often mirroring the dependency structure of the latent variable model. An explanation for this was provided by Marino et al. (2018): optimizing the approximate posterior requires knowledge of the prior, which is especially relevant in models with empirical priors where the prior can vary with the data. Ladder VAE (Sønderby et al., 2016) incorporates these prior dependencies by using a structured approximate posterior of the form
qφ(z∣x) = M
∏ m=1
qφ(zm∣x,zpa(m)). (4)
Unlike the mean field approximate posterior, which conditions each dimension only on the data example, x, the distributions in Eq. (4) account for latent dependencies by conditioning on samples from the parent variables. Ladder VAE performs this conditioning by reusing the empirical prior during inference, forming the approximate posterior by combining a “bottom-up” recognition distribution and the “top-down” prior.
While Eqs. (3) and (4) permit separate latent dependencies for each individual latent dimension, the dimensions are typically partitioned into a set of nodes, with dimensions within each node sharing
the same parents. This improves computational efficiency by allowing priors and approximate posteriors within each node to be calculated in parallel. Using zn to denote latent node n of N and zpa(n) to denote the concatenation of its parent nodes, we can write the ELBO (Eq. (2)) as
L(x; θ, φ) = Eqφ(z∣x) [log pθ(x∣z)] − N
∑ n=1
Eqφ(z∣x) [log qφ(zn∣x,zpa(n))
pθ(zn∣zpa(n)) ] . (5)
Note that the KL divergence term in the ELBO can no longer be evaluated analytically, now requiring a sampling-based estimate of the expectation (Kingma & Welling, 2014). While this can lead to higher variance in ELBO estimates and the resulting gradients, models with latent dependencies still tend to empirically outperform models with independence assumptions (Burda et al., 2016; Sønderby et al., 2016). However, by increasing the number of nodes, the burden of devising a suitable dependency structure falls upon the experimental practitioner. This is non-trivial, as the structure may depend on the data and other model hyperparameters, such as the number of layers in the deep networks, non-linearities, latent distributions, etc. Rather than relying on pre-defined fully-connected structures (Kingma et al., 2016) or chain structures (Sønderby et al., 2016), we seek to automatically learn the latent dependency structure as part of the variational optimization process. A comparison of these approaches is visualized in Fig. 1.
3 VARIATIONAL OPTIMIZATION OF LATENT STRUCTURES
Unlike the model parameters (φ, θ), which are optimized over a continuous domain, the latent dependency structure is discrete, without a clear ordering. The discrete nature of the latent space’s topological structure introduces discontinuities in the optimization landscape, complicating the learning process. Fortunately, unlike the related setting of neural architecture search (Zoph & Le, 2016), there is only a finite number of possible dependency structures over a fixed number of latent dimensions: In a directed graphical model, a fully-connected directed acyclic graph (DAG) models all possible dependencies. In this model, an ordering is induced over the latent nodes, and the parents of node n (of N ) are given as zpa(n) = {zn+1, . . . ,zN}.1 Thus, to learn an appropriate latent dependency structure, we can maintain all dependencies in a fully-connected DAG, modifying their presence or absence during training. This is accomplished by introducing a set of binary dependency gates (Section 3.1). We convert discrete optimization over dependency structures into a continuous optimization problem by parameterizing these gates as samples from Bernoulli distributions, then learning the distribution parameters (Section 3.3). These gating distributions induce an additional lower bound on L, which becomes tight when the distribution converges to a delta function, yielding a single, optimized dependency structure (Section 3.2). Indeed, we observe this process empirically, with the learned dependency structures outperforming their predefined counterparts (Section 4).
3.1 GATED DEPENDENCIES
To control the dependency structure of the model, we introduce a set of binary global variables, c = {ci,j}i,j , which gate the latent dependencies. The element ci,j denotes the gate variable from zi to zj (i > j), specifying the presence or absence of this latent dependency. Because each element of c takes values in {0,1}, dependencies can be preserved or removed simply through (broadcasted) element-wise multiplication with the corresponding parent nodes. Removing a dependency entails multiplying the corresponding input parent node by 0. Each possible latent dependency structure can now be expressed through its corresponding value of c.
Each fixed latent dependency structure, c′, induces a separate latent variable model pθ(x,z,c) = pθ(x∣z,c)pθ(z∣c)δc,c′ , where δ⋅,⋅ is the Kronecker delta, which effectively selects a single structure. Similar to Eq. (3), the prior on the latent variables can now be expressed as
pθ(z∣c) = N
∏ n=1
pθ(zn∣zpa(n),cpa(n),n), (6)
where cpa(n),n denotes the gate variables associated with the dependencies between node zn and its parents, zpa(n). Note that zpa(n) denotes the set of all possible parents of node zn in the fullyconnected DAG, i.e. zpa(n) = {zn+1, . . . ,zN}. To give a concrete example of the gating procedure,
1We follow convention (Dayan et al., 1995; Rezende et al., 2014; Sønderby et al., 2016), with parent nodes having a larger index than their children.
consider the case in which pθ(zn∣zpa(n),cpa(n),n) is given by a Gaussian density with parameters ψ̂n = (µ̂n, Σ̂n). We obtain these parameters recursively by multiplying samples of node zn’s parent variables zpa(n) with their corresponding gating variables cpa(n),n and input a concatenation of the results of this operation into a multi-layer perceptron MLP(TD)n predicting ψ̂n (see Appendix B for additional details on the MLP architecture). The top-down recursion starts at the root node zN ∼ p(zN) = N (0, I). An illustration of this process is shown in black in Fig. 2.
The approximate posterior, qφ(z∣x,c), must approximate pθ(z∣x,c). We express the approximate posterior as
qφ(z∣x,c) = N
∏ n=1
qφ(zn∣x,zpa(n),cpa(n)). (7)
We note that the dependency structures of the generative model and its corresponding posterior are, in general, not independent: choosing a particular structure in the generative model induces a particular structure in the posterior (Webb et al., 2018). A simple way to guarantee enough capacity in the encoder to account for the dependencies implied by the decoder is thus to keep the encoder graph fully-connected and learn a decoder graph only. Instead, we share the gating variables c between approximate posterior and generative model (see Section 3.3), i.e., we assume that the encoder dependencies mirror those of the decoder. As a consequence, the posterior implied by the generative model could lie outside of the model class representable by the encoder. In practice, this is not an issue and we observe significant performance improvements over both traditional VAEs (where prior and posterior match but are limited in their expressiveness) and Graph VAEs with fully-connected encoder graph. See Section 4.3 for quantitative experiments and Section 5 for a discussion on the relationship between learned structures and fully-connected structures.
Parameter prediction for the local factors qφ(zn∣x,zpa(n),cpa(n)) consists of a precision-weighted fusion of the top-down prediction ψ̂n described above and a bottom-up prediction ψ̃n. The latter is obtained by encoding x into a generic feature that is used as an input to a node-specific multi-layer perceptron MLP(BU)n predicting ψ̃n. This is shown in blue in Fig. 2. Additional details on the fusion process can be found in Appendix B.3.
Algorithm 1 Optimizing VAEs with Latent Dependency Structure Require: Data x, number of latent nodes N , number of dimensions per node N ′.
1: Initialize θ, φ, µ. 2: repeat 3: Sample c using Eq. (9) and determine zpa(n) for each zn based on the sampled structure. 4: For each node, compute qφ(zn∣x,zpa(n)) using Eq. (7). 5: Sample z from qφ(z∣x) using Eq. (6) and compute pθ(x∣z). 6: Update θ, φ,µ based on the gradients derived from Eq. (8). 7: until Convergence.
3.2 LEARNING STRUCTURE BY INDUCING AN ADDITIONAL LOWER BOUND
The formulation in Section 3.1 provides the form of the model for a particular configuration of the latent dependency structure. Finding the optimal structure corresponds to a discrete optimization over all values of c, potentially optimizing the model parameters of each possible configuration. To avoid this intractable procedure, we place a distribution over c, then directly optimize the parameters of this distribution to arrive at a single, learned latent dependency structure. Specifically, we treat each ci,j as an independent random variable, sampled from a Bernoulli distribution with mean µi,j , i.e. ci,j ∼ p(ci,j) = B(µi,j). We denote the set of these Bernoulli means as µ. Introducing this distribution allows us to express the following additional lower bound on L, derived in Appendix A:
L ≥ L̃ = Ep(c) [Eqφ(z∣x) [log pθ(x∣z,c)] −KL(qφ(z∣x)∣∣pθ(z∣c))] = Ep(c) [Lc] , (8)
where Lc is the ELBO for a particular value of dependency gating variables. Thus, L̃ can be interpreted as the expected ELBO under the distribution of dependency structures induced by p(c), which we estimate by sampling c ∼ p(c) and evaluating Lc. We note that L̃ is not a proper variational bound, as it is not guaranteed to recover the marginal likelihood if the approximate posterior matches the true posterior. Rather, optimizing L̃ provides a method for learning the latent structure. For fixed parameters (φ, θ), the optimal L̃ w.r.t. µ is a δ-distribution at the MAP configuration, c∗, yielding L = L̃ = Lc∗ . This is because Lc∗ is always greater than or equal to the expected ELBO over all dependency gates, L̃. In practice, we jointly optimize φ, θ, and µ. While this non-convex optimization procedure may result in local optima, we find this empirically works well, with p(c) converging to a fixed distribution (Fig. 3). Thus, by the end of training, we are effectively optimizing the ELBO for a single dependency structure. The training procedure is outlined in Algorithm 1.
3.3 LEARNING THE DISCRETE GATING VARIABLE DISTRIBUTIONS
For a given latent dependency structure, gradients for the parameters θ and φ can be estimated using Monte Carlo samples and the reparameterization trick (Kingma & Welling, 2014; Rezende et al., 2014). To obtain gradients for the gate means, µ, we make use of recent advances in differentiating through discrete operations (Maddison et al., 2017; Jang et al., 2017), allowing us to differentiate through the sampling of the dependency gating variables, c. Specifically, we recast the gating variables using the Gumbel-Softmax estimator from Jang et al. (2017), re-expressing ci,j as:
c_ij = exp((log(µ_ij) + ε1)/τ) / [ exp((log(µ_ij) + ε1)/τ) + exp((log(1 − µ_ij) + ε2)/τ) ] ,   (9)

where ε1 and ε2 are i.i.d. samples drawn from a Gumbel(0, 1) distribution and τ is a temperature parameter. The Gumbel-Softmax distribution is differentiable for τ > 0, allowing us to estimate the derivative ∂c_ij/∂µ_ij. For large values of τ, we obtain smoothed versions of c, essentially interpolating between different dependency structures. As τ → 0, we recover binary values for c, yielding the desired discrete sampling of dependency structures at the cost of high-variance gradient estimates. Thus, we anneal τ during training to learn the dependency gate means, µ, eventually arriving at the discrete setting.
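As a concrete illustration of this annealing behavior, the following sketch samples a single gate at several temperatures; it uses the mathematically equivalent sigmoid form of Eq. (9) for numerical stability (the specific values are illustrative):

```python
import torch

def gumbel_softmax_gate(mu, tau):
    # Eq. (9) in a numerically stable form: the binary softmax over the two
    # perturbed logits equals a sigmoid of their difference.
    e1, e2 = [-torch.log(-torch.log(torch.rand(()))) for _ in range(2)]
    return torch.sigmoid(((torch.log(mu) + e1) - (torch.log(1 - mu) + e2)) / tau)

mu = torch.tensor(0.7)
for tau in [5.0, 1.0, 0.1, 0.01]:
    c = torch.stack([gumbel_softmax_gate(mu, tau) for _ in range(5)])
    print(f"tau={tau:5.2f}  gate samples: {c.numpy().round(3)}")
# large tau: samples smoothed toward 0.5; small tau: samples near binary 0/1
```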
4 EVALUATION
We evaluate the proposed latent dependency learning approach on three benchmark datasets: MNIST (Lecun et al., 1998; Larochelle & Murray, 2011), Omniglot (Lake et al., 2013), and CIFAR10 (Krizhevsky, 2009). After discussing the experimental setup in Section 4.1, we provide a set of qualitative experiments in Section 4.2 to gain insight into the learning process, hyper-parameter selection, and the nature of the inferred structures. In Section 4.3, we provide quantitative comparisons with common predefined latent dependency structures on benchmark datasets. Additional results on training robustness and latent space embeddings can be found in Appendix C.
4.1 EXPERIMENTAL SETUP
To provide a fair comparison, the encoders of all structured methods use the same MLP architecture with batch normalization (Ioffe & Szegedy, 2015) and ReLU non-linearities (Nair & Hinton, 2010) in all experiments.2 Decoder structures are the reverse of the encoders. Likewise, the number of latent dimensions is the same in all models and experiments (M = 80). As discussed in Section 3.1, all latent dependencies are modeled by non-linear MLPs as well.
For MNIST and Omniglot, we binarize the data and model pθ(x∣z) as a Bernoulli distribution, using a sigmoid non-linearity on the output layer to produce the mean of this distribution. For CIFAR-10, we model pθ(x∣z) with a Gaussian density, with mean and log-variance predicted by sigmoid and linear functions, respectively. Further implementation details, including the model architectures and training criteria, can be found in Appendix B.
4.2 QUALITATIVE ANALYSIS
We first explore the structure learning process. As described in Section 3.2 and Appendix A, optimizing L̃ w.r.t. the dependency gating means, µ, should push this lower bound toward L. Thus, µ should converge to either 0 or 1, yielding a fixed, final latent dependency structure. In Fig. 3, we visualize this process during training on MNIST. The model has N = 5 nodes and a total latent dimension of M = 80 (i.e., N ′ =M/N = 16 dimensions per node). As shown in Fig. 3a, the gating means converge in practice, with 3 out of 10 edges removed and the rest retained. The resulting static dependency structure is visualized in Fig. 3b. We observed that the learned structure is stable across training runs with different seeds for parameter initialization and mini-batch sampling, supporting the hypothesis that the inferred structure indeed depends on the model parameterization and the dataset.
2We implement classic VAEs using a more complex encoder to match the number of parameters of the structured methods. All baselines use the same or more parameters than Graph VAE.
We next investigate the influence of the total latent dimension, M , and the trade-off between the number of nodes, N , and the node dimension, N ′ = M/N . Our results for various models trained on MNIST are shown in Fig. 4. Models with the same total latent dimension are shown in the same color. We observe that the performance improves with increasing total latent dimension, likely resulting from the additional flexibility of the higher-dimensional latent space. We also observe that, for a fixed number of latent dimensions, models with fewer node dimensions (and therefore more nodes with a more complex dependency structure) typically perform better. This highlights the importance of using an expressive dependency structure for obtaining a flexible model.
4.3 QUANTITATIVE COMPARISON
To quantitatively evaluate the improvements due to learning the latent dependency structure, we compare with a range of common, predefined baseline structures. These baselines include classic VAEs (Kingma & Welling, 2014; Rezende et al., 2014), which contain no dependencies in the prior, Ladder VAEs (Sønderby et al., 2016), which contain chain-like dependencies in the prior, and fully-connected VAEs (FC-VAEs) (cf. Kingma et al. (2016)), which contain all possible dependencies in the prior (corresponding to all gating variable parameters µi,j set to a fixed value of 1). We note that our approach is orthogonal and could be complemented by a number of other approaches attempting to overcome the limitations of classic VAEs. Similar to ladder VAEs, Zhao et al. (2017) use chain-structured latent dependencies to learn disentangled representations. Normalizing flows (Rezende & Mohamed, 2015), on the other hand, add dependencies to the approximate posterior through a series of invertible transformations.
We evaluate the performance of all models using their test log-likelihood, log pθ(x), in 5 independent runs (Table 1). All values were estimated using 5,000 importance-weighted samples. Following standard practice, we report log pθ(x) in nats on MNIST/Omniglot and in bits/input dimension on CIFAR-10. The learned dependency structure in our proposed Graph VAE consistently outperforms models with both fewer (VAE, ladder VAE) and more (FC-VAE) latent dependencies. We discuss potential reasons in Section 5. To provide further insight into the training objective, Table 1 also reports DKL(qφ(z∣x)∣∣pθ(z)) and the ELBO for each model on the test set.
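As a sketch of how such an importance-weighted estimate can be computed for a simple VAE-style model (the `encode`/`decode` callables and all sizes below are hypothetical stand-ins, assuming a standard Gaussian prior and a Bernoulli decoder):

```python
import torch
import torch.nn.functional as F

def iw_log_likelihood(x, encode, decode, k=5000, chunk=500):
    """log p(x) ~= logsumexp_i [log p(x|z_i) + log p(z_i) - log q(z_i|x)] - log k."""
    mean, logvar = encode(x)                      # parameters of q(z|x), shape (M,)
    std = (0.5 * logvar).exp()
    log_w = []
    with torch.no_grad():
        for _ in range(k // chunk):
            z = mean + std * torch.randn(chunk, mean.numel())
            log_q = torch.distributions.Normal(mean, std).log_prob(z).sum(-1)
            log_p_z = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(-1)
            logits = decode(z)                    # Bernoulli logits, shape (chunk, D)
            log_p_x = -F.binary_cross_entropy_with_logits(
                logits, x.expand_as(logits), reduction='none').sum(-1)
            log_w.append(log_p_x + log_p_z - log_q)
    log_w = torch.cat(log_w)
    return torch.logsumexp(log_w, 0) - torch.log(torch.tensor(float(k)))

# toy usage with stand-in encoder/decoder
D, M = 784, 80
x = (torch.rand(D) > 0.5).float()
encode = lambda x: (torch.zeros(M), torch.zeros(M))      # q(z|x) = N(0, I) here
decode = torch.nn.Linear(M, D)
print(iw_log_likelihood(x, encode, decode, k=1000, chunk=100))
```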
Encoder-Decoder Relationship. From a purely theoretical point of view, learning the structure of the generative model implies the need for a fully-connected graph in the approximate posterior (see Section 3.1). In practice, we share the gating variables c between encoder and decoder (see Eq. (8)), because we observed improved empirical performance when doing so: training a Graph VAE decoder with a predefined, fully-connected encoder graph results in a mean test log-likelihood of −83.40 nats on MNIST, which is worse than the performance of FC-VAE (−82.80 nats) and Graph VAE (−82.58 nats). We believe these are noteworthy empirical results, but further research is
required to understand this behavior at a theoretical level. A full visualization of the training process is provided in Appendix C.1.
5 DISCUSSION
Performance vs. Speed/Memory. As shown in Fig. 4, the number of latent nodes can significantly impact the performance of our model. While allowing more complex dependency structures through a low M/N ratio is typically beneficial, it also has an adverse effect on the training time and memory consumption. Fortunately, the ability to freely select this ratio allows a simple adaptation to the available processing power, hardware constraints, and application scenarios.
Optimization Order. It is worth noting that the learning process optimizes the model parameters (c, φ, θ) in a clear temporal order. While the latent structure, governed by c, converges during the first ≈ 200 epochs (Fig. 3a), it takes over 10× as long until the variational and generative parameters (φ, θ) converge. There is no external force enforcing this behavior, indicating that the loss can initially most easily be decreased by limiting the latent structure to the complexity prescribed by the observed training data.
Performance Improvement over Fully-Connected VAEs. There is an intricate relationship between fully-connected graphs vs. learned graphs along one axis and between prior structures vs. posterior structures along a separate axis: FC-VAEs model all conditional latent dependencies and are thus potentially more expressive and flexible than other latent dependency structures. It is therefore somewhat surprising that the learned latent structures in Graph VAE consistently outperform the FC-VAE baseline. We speculate that this may be due to difficulties in optimization, which is a known problem in hierarchical latent variable models (Bowman et al., 2016; Burda et al., 2016). Graph VAE and FC-VAE both contain the same hypothesis space of possible models that can be learned. If all dependencies are needed, Graph VAE could set all dependency gate parameters to 1. Likewise, if a latent dependency was unnecessary, FC-VAE could set all of the model parameters in that dependency to 0. However, this would require many coordinated steps along multiple parameter dimensions. It is plausible that the benefit of learning the dependency structure may stem from the ability to alter the optimization landscape, using the dependency gates to move through the model parameter space more coarsely and rapidly. The resulting latent dependency structures may thus be less expressive, but easier to optimize. While only an intuition, this hypothesis is also in line with the observations and results in our experiments with fully-connected encoder graphs and learned decoder graphs, where the theoretically more flexible FC-encoder is outperformed by our parameter sharing approach. Follow-up work will be required to test this intuition.
6 CONCLUSION
We presented a novel method for structure learning in latent variable models, which uses dependency gating variables, together with a modified objective and discrete differentiation techniques, to effectively transform discrete structure learning into a continuous optimization problem. In our experiments, the learned latent dependency structures improve the performance of latent variable models over comparable baselines with predefined dependency structures. The approach presented here provides directions for further research in structure learning for other tasks, including undirected graphical models, time-series models, and discriminative models.
A LOWER BOUND DERIVATION
Introducing the distribution over dependency gating variables, p(c) = B(µ), modifies the evidence lower bound (ELBO) from Eq. (2), as we now have an additional set of random variables over which to marginalize. To see this, we can start by re-expressing Eq. (2) as
L = Eqφ(z∣x) [log pθ(x,z)] −Eqφ(z∣x) [log qφ(z∣x)] . (10)
pθ(x,z) can be expressed as a marginalization over the gating variables, effectively averaging over an ensemble of models with a distribution of dependency structures:
pθ(x,z) = ∫ pθ(x,z,c)dc = ∫ pθ(x,z∣c)p(c)dc = Ep(c) [pθ(x,z∣c)] . (11)
Plugging this into Eq. (10):
L = Eqφ(z∣x) [logEp(c) [pθ(x,z∣c)]] −Eqφ(z∣x) [log qφ(z∣x)] . (12)
Using Jensen’s inequality, we bring the log inside of the expectation, Ep(c) [⋅], yielding
L ≥ L̃ = Eqφ(z∣x) [Ep(c) [log pθ(x,z∣c)]] −Eqφ(z∣x) [log qφ(z∣x)] , (13)
where L̃ is a lower bound on L. Swapping the order of expectation, we rewrite L̃ as
L̃ = Ep(c) [Eqφ(z∣x) [log pθ(x,z∣c)]] −Eqφ(z∣x) [log qφ(z∣x)] , (14)
and because the second term is independent of p(c), we can include both terms inside of a single outer expectation:
L̃ = Ep(c) [Eqφ(z∣x) [log pθ(x,z∣c) − log qφ(z∣x)]]
= Ep(c) [Eqφ(z∣x) [log pθ(x∣z,c)] − KL(qφ(z∣x)∣∣pθ(z∣c))] = Ep(c) [Lc] ,   (15)
where we have defined Lc as the ELBO for a given dependency structure. Note that L̃ is not a proper variational bound, as it cannot recover the marginal log likelihood. Rather, L̃ allows us to optimize the distribution over gating variables and, thus, the model structure. When we arrive at a fixed structure, we will be optimizing the variational bound for that particular dependency structure.
To see this, note that the bound in Eq. 13 becomes tight when p(c) is any fixed distribution, in which the dependency gating means, µ, are all either 1 or 0. Plugging in a delta distribution for p(c) at a particular configuration, c′, i.e. p(c) = δc,c′ , we have
L = Eqφ(z∣x) [logEδc,c′ [pθ(x,z∣c)]] −Eqφ(z∣x) [log qφ(z∣x)]
= Eqφ(z∣x) [Eδc,c′ [log pθ(x,z∣c)]] −Eqφ(z∣x) [log qφ(z∣x)]
= L̃.   (16)
Intuitively, this is simply the case of a latent variable model with predefined, fixed dependencies. By optimizing L̃ w.r.t. µ, we hope to collapse p(c) to a delta distribution at the optimal (MAP) configuration, c∗, because
Lc∗ = Eδc,c∗ [Lc] ≥ Ep(c) [Lc] = L̃.   (17)

Effectively, we can search for the dependency structure with the highest ELBO by optimizing the distribution parameters of p(c). Although this optimization procedure may arrive at locally optimal dependency structures, our hope is that these learned structures will still perform better than an arbitrary, predefined dependency structure.
B IMPLEMENTATION DETAILS
B.1 NETWORK ARCHITECTURE
We document the network architectures used in our experiments. We use the same network architectures for all datasets. Input size (pixels per image) is the only difference across datasets. The input size is 28 × 28 for MNIST and Omniglot, and 3 × 32 × 32 for CIFAR-10.
Encoders:
fc(input size,512) → batch norm → ELU → fc(512,512) → batch norm → ELU → fc(512,256) → batch norm → ELU → fc(256,128)
Latent Dependencies: Latent dependencies are modeled by non-linear MLPs. Note that the top-down architecture is shared between the inference model and the generative model, but the MLPs are optimized independently.
bottom-up: For each node with N ′ dimensions, the local potential is predicted by a mapping from the encoded feature:
fc(128,128) → batch norm → ELU → fc(128,N ′). The output feature is then mapped to µ and log var with two independent fc(N ′,N ′) layers, respectively.
top-down: For each node (N ′ dimensions) with a set of parent nodes, the top-down inference/generation is implemented as:
fc(sum of parent nodes’ dimension, 128) → batch norm → ELU → fc(128,N ′). The output feature is then mapped to µ and log var with two independent fc(N ′, N ′) layers, respectively.
Decoders:
fc(N′, 256) → batch norm → ELU → fc(256, 512) → batch norm → ELU → fc(512, 512) → batch norm → ELU → fc(512, input size) → output function()
The output function for MNIST and Omniglot is sigmoid(), which predicts the mean µ of the Bernoulli observations; for CIFAR-10, sigmoid() predicts µ and an additional fc(input size, input size) layer predicts log var of the Gaussian observations.
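For concreteness, one possible PyTorch transcription of these layer specifications might look as follows (layer sizes follow the text above; the helper names are ours):

```python
import torch.nn as nn

def mlp_block(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, n_out), nn.BatchNorm1d(n_out), nn.ELU())

input_size, n_prime = 28 * 28, 16        # MNIST/Omniglot input; N' = 16 dims per node

encoder = nn.Sequential(mlp_block(input_size, 512), mlp_block(512, 512),
                        mlp_block(512, 256), nn.Linear(256, 128))

decoder = nn.Sequential(mlp_block(n_prime, 256), mlp_block(256, 512),
                        mlp_block(512, 512), nn.Linear(512, input_size),
                        nn.Sigmoid())    # Bernoulli mean for MNIST/Omniglot

def dependency_mlp(parent_dims, n_prime):
    # top-down mapping from concatenated parent samples to a node-local feature,
    # followed (elsewhere) by two independent fc(N', N') heads for mu and log var
    return nn.Sequential(nn.Linear(parent_dims, 128), nn.BatchNorm1d(128),
                         nn.ELU(), nn.Linear(128, n_prime))
```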
B.2 TRAINING
All models were implemented with PyTorch (Paszke et al., 2017) and trained using the Adam (Kingma & Ba, 2015) optimizer with a mini-batch size of 64 and a learning rate of 1e−3. The learning rate is decreased by 0.25 every 200 epochs. The Gumbel-Softmax temperature was initialized at 1 and decreased to 0.99^epoch at each epoch. MNIST and Omniglot took 2,000 epochs to converge, and CIFAR-10 took 3,000 epochs.
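A minimal sketch of this schedule follows, assuming that "decreased by 0.25" means multiplication by a factor of 0.25 (an interpretation on our part):

```python
import torch

params = [torch.nn.Parameter(torch.zeros(1))]   # placeholder for all model parameters
opt = torch.optim.Adam(params, lr=1e-3)
# "decreased by 0.25 every 200 epochs" read here as multiplication by 0.25
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=200, gamma=0.25)

for epoch in range(2000):
    tau = 0.99 ** epoch      # Gumbel-Softmax temperature annealed from 1 toward 0
    # ... one pass over mini-batches of size 64, sampling gates at temperature tau ...
    opt.step()               # placeholder update
    sched.step()
```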
B.3 INFERENCE MODULE DETAILS
Parameter prediction for the local factors qφ(zn∣x,zpa(n)) consists of a precision-weighted fusion of the top-down prediction ψ̂n and a bottom-up prediction ψ̃n. Specifically, for a latent variable zn, ψ̂n ∶= {µ̂n, σ̂n}, and ψ̃n ∶= {µ̃n, σ̃n}.
Bottom-up Inference. A high-dimensional input is first mapped to a feature vector hx by an encoder MLP. hx is then used to predict µ̃n and σ̃n with a non-linear MLP^(BU)_n.

Top-down Inference. µ̂n and σ̂n are predicted by µ̂n = MLP^(TD)_n([zpa(n) ⊙ cpa(n),n]) and σ̂n = MLP^(TD)_n([zpa(n) ⊙ cpa(n),n]), respectively. [·, ·] denotes the concatenation operation, and ⊙ denotes element-wise multiplication.
Precision-weighted fusion. Having ψ̂n and ψ̃n, the parameters of the local conditional distribution are given by qφ(zn∣x, zpa(n), cpa(n),n) ∼ N(zn∣µn, σ²n), with

σn = 1 / (σ̂n^{−2} + σ̃n^{−2}),    µn = (µ̂n σ̂n^{−2} + µ̃n σ̃n^{−2}) / (σ̂n^{−2} + σ̃n^{−2}).   (18)
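A small sketch of this fusion step, written with log-variances as in the MLP heads above (treating the fused quantity exactly as Eq. (18) defines it):

```python
import torch

def precision_weighted_fusion(mu_td, logvar_td, mu_bu, logvar_bu):
    # Eq. (18): combine top-down (hat) and bottom-up (tilde) Gaussian parameters
    prec_td, prec_bu = (-logvar_td).exp(), (-logvar_bu).exp()    # sigma^{-2}
    sigma = 1.0 / (prec_td + prec_bu)
    mu = (mu_td * prec_td + mu_bu * prec_bu) * sigma
    return mu, sigma

# equal precisions -> simple average of the means
mu, sigma = precision_weighted_fusion(torch.tensor([0.0]), torch.tensor([0.0]),
                                      torch.tensor([2.0]), torch.tensor([0.0]))
print(mu.item(), sigma.item())   # 1.0 0.5
```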
C ADDITIONAL RESULTS
C.1 ROBUSTNESS OF LOG-LIKELIHOODS
In Fig. 5, we report the averaged test log-likelihoods and associated standard deviations of Graph VAE and our baselines at different epochs. All calculations are based on 5 independent runs.
C.2 LATENT EMBEDDINGS
Our training objective optimizes intrinsic structure (which does not necessarily correlate with semantic meaning) and does not incentivize a disentanglement of latent factors. Interestingly, a t-SNE visualization of the data as well as latent embeddings of Graph VAE and VAE on MNIST (Fig. 6) shows that the latent embedding of Graph VAE exhibits a large (semantic) gap between different classes, even though the model is trained in an unsupervised fashion. We will further investigate this behavior in future work.

1. What is the main contribution of the paper, and how does it build upon previous works in the field?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its ability to improve the expressiveness of the inference network and the latent prior?
3. How effective is the introduced gating mechanism in reducing the latent DAG from its fully-connected state, and what are the implications of this reduction for the model's performance?
4. What are the limitations of the reported experimental results, and how do they compare to previous works in the field?
5. How might the introduction of a regulatory mechanism regarding the gating variable improve the model's performance and convergence properties?

Review
The authors propose to augment the latent space of a Variational AutoEncoder [1] with an auto-regressive structure, to improve the expressiveness of both the inference network and the latent prior, making them into a general DAG of latent variables. This work goes further in the same direction as the Ladder VAE [2]. This paper introduces a mechanism for the latent model to directly learn its DAG structure by first considering the fully-connected DAG of latent variables, and adding Bernoulli variables controlling the presence or absence of each edge. The authors derive a new ELBO taking these variables into account, and use it to train the model. The gradients of the parameters of the Bernoulli variables are computed using the Gumbel-Softmax approach [3], annealing the temperature.
The authors observe in their experiments that the Bernoulli variables converge relatively quickly towards 0 or 1 during training, fixing the structure of the DAG for the rest of the training. They test their model against a VAE, a Ladder VAE and an alternative to their model where the DAG is fixed to remain fully-connected (FC-VAE), and observe improvements in terms of the ELBO values and log-likelihood estimations.
The main addition of this paper is the introduction of the gating mechanism to reduce the latent DAG from its fully-connected state. It is motivated by the tendency of latent models to fall into local optima.
However, it is not clear to me what this mechanism, as it currently stands, adds to the model:
- The reported results show the improvements of Graph-VAE over FC-VAE to be quite small, making their relevance dubious in the absence of variance measurements across different training runs. Additionally, the reported performances for Ladder VAE are inferior to what [2] reports. Actually the performance of Ladder-VAE reported in [2] is better than the one reported for Graph-VAE in this paper, both on the MNIST and Omniglot datasets.
- The authors observe that the Bernoulli variables have converged after around ~200 epochs. At this time, according to their reported experimental setup, the Gumbel-Softmax temperature is 0.999^200 ~= 0.82, which is still quite near 1.0, meaning the model is still pretty far from a real Bernoulli-like behavior. And actually, equation 9 is not a proper description of the Gumbel-Softmax as described by [3]: there should be only 2 samples from the Gumbel distribution, not 3. Given these two issues, I can't believe that the c_ij coefficients behave like Bernoulli variables in this experiment. As such, it seems to me that Graph-VAE is nothing more than a special reparametrization of FC-VAE that tends to favor saturating behavior for the c_ij variables.
- On figure 3b, the learned structure is very symmetrical (z2, z3, z4 play an identical role in the final DAG). In my opinion, this begs for the introduction of a regulatory mechanism regarding the gating variable to push the model towards sparsity. I was honestly surprised to see this gating mechanism introduced without anything guiding the convergence of the c_ij variables.
I like the idea of learning a latent structure DAG for VAEs, but this paper introduces a rather weak way to try to achieve this, and the experimental results are not convincing.
[1] https://arxiv.org/abs/1312.6114
[2] https://arxiv.org/abs/1602.02282
[3] https://arxiv.org/abs/1611.01144
ICLR

Title
Generalized Graph Embedding Models
Abstract
Many types of relations in physical, biological, social and information systems can be modeled as homogeneous or heterogeneous concept graphs. Hence, learning from and with graph embeddings has drawn a great deal of research interest recently, but only ad hoc solutions have been obtained thus far. In this paper, we conjecture that the one-shot supervised learning mechanism is a bottleneck in improving the performance of graph embedding learning algorithms, and propose to extend it by introducing a multi-shot unsupervised learning framework. Empirical results on several real-world data sets show that the proposed model consistently and significantly outperforms existing state-of-the-art approaches on knowledge base completion and graph based multi-label classification tasks.
1 INTRODUCTION
Recent studies have highlighted the importance of learning distributed representations for symbolic data in a wide variety of artificial intelligence tasks (Bengio et al., 2013). Research on word embeddings (Mikolov et al., 2013) has led to breakthroughs in many related areas, such as machine translation (Bahdanau et al., 2015), question answering (Xiong et al., 2016), and visual-semantic alignments (Karpathy & Fei-Fei, 2017). However, learning to predict for large-scale knowledge graphs (KGs) remains a challenging problem, largely due to the diversity of the ontologies and the semantic richness of the concepts, which makes it really hard to generate proper and universally applicable graph embeddings based simply on word-level embeddings (Cai et al., 2017).
Being able to generate reasonable and accurate distributed representations for large-scale knowledge graphs would be particularly valuable, in that it may help predict unobserved facts from limited concepts, uncover gaps in our knowledge, and suggest new downstream applications, which clearly reflects the central concerns of artificial intelligence (Nickel et al., 2016a; Henaff et al., 2017). Therefore, massive attention has been devoted to the potential of embedding entities and relationships of multi-relational data in low-dimensional vector spaces in recent years (Wang et al., 2017).
In this paper, we consider the problem of developing a simple and efficient model for learning neural representations of generalized knowledge graphs, including multi-relational heterogeneous graphs and more specifically defined homogeneous graphs (such as social and biological networks).
Following the pioneering work of Nickel et al. (2011) and Bordes et al. (2013), almost all of the state-of-the-art approaches model the graph embedding learning problem as a supervised binary classification problem, and their objective functions are usually one-shot (single purpose). We argue that prior research in this area might have been affected and biased by "established priors", which prevents the formulation of a methodology that is objective enough to cope with highly sparse knowledge graphs. We propose to handle the embedding learning problem of knowledge graphs with an unsupervised neural network model, called the Graph Embedding Network (GEN). The proposed model consists of three simple multi-layer perceptron (MLP) cells; each cell operates in response to a different "query" with regard to the input fact, and the cells are trained sequentially. The formulation of the model is inspired by the neural sequence-to-sequence (seq2seq) model (Sutskever et al., 2014), except that we attempt to use the MLP cells to mimic the sequence learning capability of the recurrent neural network (RNN), to model the semantic structure of the knowledge graphs.
The major contributions of this paper are: (1) we propose GEN, a novel and efficient multi-shot framework for embedding learning in generalized knowledge graphs; (2) we show how GEN is in accordance with established principles in cognitive science, providing flexibility in learning representations that work on graphs conforming to different domains.
2 RELATED WORKS
During the last few years, an increasing amount of research attention has been devoted to the challenge of representation learning on knowledge graphs, especially focused on the potential benefits for the knowledge base completion (KBC) tasks, including the link prediction problem and the relation prediction problem. Among which, the relation translating model TransE (Bordes et al., 2013), the tensor factorization based semantic matching model RESCAL (Nickel et al., 2011), and the neural network based semantic matching model ER-MLP (Dong et al., 2014; Nickel et al., 2016b), are probably the most heavily studied from the methodology perspective. For good surveys on such embedding learning algorithms, see Nickel et al. (2016a), Wang et al. (2017), and Cai et al. (2017).
Broadly speaking, related works can be divided into two categories: linear and non-linear, according to whether the output embedding has a reasonable linear interpretation. State-of-the-art linear models include the TransE, RESCAL, TranH (Wang et al., 2014), DistMult (Yang et al.), and ANALOGY (Liu et al., 2017), while the popular non-linear models include the ER-MLP, ComplEX1 (Trouillon et al., 2016), HoIE (Nickel et al., 2016b), ProjE (Shi & Weninger, 2017) and ConvE (Dettmers et al., 2017). The proposed GEN model is also a non-linear model.
The graph embedding learning model most closely related to this work is probably the ProjE model, which makes use of an embedding projection function defined as:

h(r, t) = g(w_0 · f(w_1^r r + w_1^t t + b_1) + b_0)

where h, r, t denote the embedding vectors, f(·) and g(·) are non-linear activation functions, w_0, w_1^r and w_1^t are learnable weight matrices, and b_0 and b_1 are bias vectors. The output ranking scores of entity h with regard to the given query (?, r, t) can be obtained through a softmax function:

Score(h_i, r, t) = softmax{h(r, t)}_i

However, as one can see from the above functions, the ProjE model is built upon the query (?, r, t) and hence is a one-shot solution, which is distinctly different from our GEN model. Still another difference lies in the definition of the objective loss function: the ProjE model uses a (selective) cross-entropy loss based on the open world assumption, while our model uses a simplified cross-entropy loss based on the closed world assumption. In order to save computation cost, the ProjE model introduced a negative sampling process, which carries the risk of introducing additional bias. Besides, its candidate sampling process is time consuming and hard to parallelize.
Another model closely related to the GEN model is the ER-MLP model, which can be interpreted as creating a representation for each element of a triple and deriving the triple's existence from this representation (Nickel et al., 2016a). The ER-MLP model can be defined as:

Score(h, r, t) = w^T g{ C^T (h ⊕ r ⊕ t) }

where the symbol ⊕ denotes the vector concatenation operator, the vector w and matrix C are global weights shared by all the entities and relations, and g(·) is an element-wise non-linear activation function. This model is built upon the fourth query as defined in Section 3; it is a supervised solution, which is quite different from ours. One well-known disadvantage of the ER-MLP is that, even when properly regularized, it is still easily prone to over-fitting on knowledge graph datasets (Nickel et al., 2016b); therefore we do not compare with it in this work, but instead with the ProjE model.
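To make the two scoring functions above concrete, a hedged sketch follows; the choices f = tanh and g = sigmoid (ProjE) and g = tanh (ER-MLP), as well as all sizes, are illustrative assumptions rather than the original implementations:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d, Ne = 200, 100                      # embedding dim / number of candidate entities

class ProjEScore(nn.Module):
    """Candidate scores for the query (?, r, t), following h(r, t) above."""
    def __init__(self):
        super().__init__()
        self.wr = nn.Parameter(torch.randn(d, d))     # w_1^r
        self.wt = nn.Parameter(torch.randn(d, d))     # w_1^t
        self.b1 = nn.Parameter(torch.zeros(d))
        self.w0 = nn.Parameter(torch.randn(Ne, d))    # projects onto all candidates
        self.b0 = nn.Parameter(torch.zeros(Ne))
    def forward(self, r, t):
        hidden = torch.tanh(r @ self.wr + t @ self.wt + self.b1)   # f = tanh (assumed)
        h_rt = torch.sigmoid(self.w0 @ hidden + self.b0)           # g = sigmoid (assumed)
        return F.softmax(h_rt, dim=-1)                             # ranking scores

def er_mlp_score(h, r, t, C, w):
    """Score(h, r, t) = w^T g(C^T (h concat r concat t)); g = tanh assumed."""
    return w @ torch.tanh(C.T @ torch.cat([h, r, t]))

r, t = torch.randn(d), torch.randn(d)
print(ProjEScore()(r, t).shape)                       # torch.Size([100])
print(er_mlp_score(torch.randn(d), r, t, torch.randn(3 * d, 64), torch.randn(64)))
```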
As mentioned before, the primary motivation of this study is to develop a graph embedding model that is universally applicable to a wide variety of situations. In order to verify the validity of our solution on homogeneous networks, we further test it on multi-label network classification tasks for social networks (BlogCatalog) and biological networks (Protein-Protein Interaction), and compare our results with two state-of-the-art techniques, namely DeepWalk (Perozzi et al., 2014) and node2vec (Grover & Leskovec, 2016). Both are derived directly from the word2vec model (Mikolov et al., 2013): they create node embeddings based on the skip-gram framework, and train the model with a corpus generated through random walks on the graph. However, it has been shown that random walk sampling can be insufficient for supervised learning tasks in sparse network environments (Liu et al., 2016). Our results support this conjecture: the experimental results on benchmark tests provide strong evidence that our model performs much better.
1The ComplEX model can be seen as an extension of the DistMult model in the complex space; although no nonlinear transformations are applied, we treat it as a non-linear model here.
3 APPROACH AND MODEL ARCHITECTURE
Most of the prevalent semantic knowledge databases are built upon the Resource Description Framework (RDF), in which facts are represented and stored in the form of SPO (Subject, Predicate, Object) triples. Following this convention, we will use the symbol (h, r, t) to represent a unit of fact, in which h, r and t denote the head entity, the relation, and the tail entity, respectively.
The primary motivation of this paper is to develop a representation learning method that is suitable and flexible enough for modeling different types of knowledge graphs from a universal perspective. To achieve this objective, the most important problems to be faced are how to define the optimization problem and how to solve it. As mentioned above, previous works only consider a one-shot mapping from the embedding space to the criterion space, which, we conjecture, is vulnerable to losing a considerable amount of the structured semantic information. For instance, given the fact (Elvis Presley, profession, singer), one could immediately learn the following queries:
• Q1: What is the profession of Elvis Presley? A1: singer.
• Q2: Can you name a person whose profession is singer? A2: Elvis Presley.
• Q3: What is the possible relationship in between Elvis Presley and singer? A3: profession.
• Q4: Is it true that Elvis Presley’s profession is singer? A4: Yes.
In fact, this is the actual way we humans learn the meaning of concepts expressed by a statement. These self-labeled queries reflect the following modeling philosophy: (1) (h, r) ⇒ t; (2) (t, r) ⇒ h; (3) (h, t) ⇒ r; (4) (h, r, t) ⇒ T/F. Each of these has been adopted in isolation by previous research; however, none have systematically investigated the effect of combining all of this information. In this section, we propose a novel multi-shot model to solve this problem. For a more detailed discussion of the motivation and intuition behind this model, see Appendix A.
3.1 OVERVIEW OF THE MULTI-SHOT LEARNING FRAMEWORK
The proposed model (GEN) is designed to process data in sequential form. As shown in Fig. 1, GEN consists of three components (cells), each corresponding to an individual query with regard to the given input triple. In this study, we propose to use a 2-layer MLP network to deal with the parameter estimation problem for each query individually; although it can be substituted by any other one-shot model, we only report test results on MLP cells for simplicity. In training mode, the training set is fed into the system sequentially, and each triple is decomposed into three self-labeled queries: (h, r, ?) ⇒ t, (?, r, t) ⇒ h, and (h, ?, t) ⇒ r. Each query is fed into the corresponding cell in order to update the parameters. Since for any given triple our model reads it from three different perspectives, we call it a "multi-shot model" to distinguish it from other related works. A self-contained sketch of one such multi-shot training step is given after this paragraph.
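In the sketch below, the cell definitions are condensed stand-ins for the E CELL / R CELL architectures detailed in Section 3.2, and all sizes are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

Ne, Nr, d = 1000, 50, 200                  # illustrative entity/relation counts

ent_emb, rel_emb = nn.Embedding(Ne, d), nn.Embedding(Nr, d)

def make_cell(n_out, k):
    # a 2-layer MLP cell over the concatenated pair of input embeddings
    return nn.Sequential(nn.Linear(2 * d, k), nn.ReLU(), nn.Linear(k, n_out))

e_cell, r_cell = make_cell(Ne, 2048), make_cell(Nr, 512)   # shared E CELL, one R CELL
opt = torch.optim.Adam([*ent_emb.parameters(), *rel_emb.parameters(),
                        *e_cell.parameters(), *r_cell.parameters()], lr=1e-3)

def multi_shot_step(h, r, t):
    """One step on a batch of facts (h, r, t): three self-labeled queries."""
    eh, er, et = ent_emb(h), rel_emb(r), ent_emb(t)
    loss = (F.cross_entropy(e_cell(torch.cat([eh, er], -1)), t)    # (h, r, ?) => t
          + F.cross_entropy(e_cell(torch.cat([et, er], -1)), h)    # (?, r, t) => h
          + F.cross_entropy(r_cell(torch.cat([eh, et], -1)), r))   # (h, ?, t) => r
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

h = torch.randint(Ne, (64,)); r = torch.randint(Nr, (64,)); t = torch.randint(Ne, (64,))
print(multi_shot_step(h, r, t))
```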
Parameters of the model can be logically divided into two parts. Firstly, the distributed representations of the entities and the relations are defined in the same d-dimensional space, which, as shown in Fig. 1, are organized together as a learnable dictionary of embeddings. Secondly, there exist two types of MLP cells in the model: one deals with the entity prediction tasks, and the other is responsible for the relation prediction tasks; they are marked as "E CELL" and "R CELL" respectively. Each individual cell has its own parameter set {W0, b0; W1, b1} representing a certain network structure. Please note that two E CELLs are introduced to learn from the labeled entities, based on the queries (h, r, ?) and (?, r, t). According to our modeling hypothesis, which claims that all of the relations should be treated conceptually instead of syntactically, we propose to share parameters between the E CELLs; the intuition behind this is to let them share their memory of each known fact from both sides of the relation, so that after training with enough knowledge, the E CELLs will eventually be able to learn how to correctly distinguish between valid and invalid entities for the given queries.
Another theoretical explanation of the GEN model is given below. We consider the proposed model as a variant of the RNN model, or more precisely, a neural seq2seq model, as illustrated in Fig. 2. When training with the graph (a "document" of triples), the GEN model is organized as a stacked RNN, which consists of two chains: the E CELL chain and the R CELL chain. For any given input (h, r, t), each of the cells works as an individual seq2seq model according to its respective query. For instance, the R CELL is responsible for the query (h, ?, t) ⇒ r: it takes the embeddings of h and t as input and r as its target label, and the parameters (memory) of the R CELL are updated through back-propagation according to the discrepancy between the prediction results (in this case the softmax vector) and the desired label r. Therefore, the proposed model is completely unsupervised, which is distinctly different from previous works. Also please note that, due to the lack of semantic connections between adjacent triples in the input sequence, we did not consider "long term memory" in this model, as is usually done in real RNN models. Therefore, there exists only one "global memory" in this model: the parameters of the two types of cells, which are responsible for "learning to remember" the rules of how the knowledge graph is constructed.
3.2 DEFINITION OF THE GEN CELLS
The network structures of the E CELLs and the R CELLs are quite similar; the only difference is that they have different numbers of neurons in the hidden layer and the output layer, which are defined as hyper-parameters as shown in Fig. 1. For simplicity, we only present the implementation details of the E CELLs here. In order to answer the query (h, r, ?) ⇒ t, the hidden layer of the E CELL takes input from the embedding dictionary according to the labels h and r; the hidden layer is defined as:
x1 = f(W^e_0 · x0 + b0)   (1)
where x0 = [h ⊕ r] denotes the concatenation of the embedding vectors; hence x0 is a 2d × 1 real-valued vector. W^e_0 is a k × 2d weight matrix, b0 is a k × 1 bias vector, k denotes the number of neurons in the hidden layer, and f(·) is a non-linear activation function; in this work, we use the
rectified linear unit (ReLU) function for all the experiments (Nair & Hinton, 2010). The output layer takes the hidden state vector x1 as input, mapping it to the target label space:
ŷ = g(W^e_1 · x1 + b1)   (2)
where W^e_1 is an Ne × k weight matrix, b1 is an Ne × 1 bias vector, Ne denotes the number of entities in the dictionary, and g(·) denotes the softmax function. Hence, ŷ is an Ne × 1 probability vector, which means that, when training the model with a given fact (h, r, t) to answer the query (h, r, ?), the prediction output by the model is a probability distribution over all of the possible candidate entities. The cross-entropy loss with regard to the prediction results is then defined as:
L(ŷ) = − Σ_{i=1}^{Ne} [ y[i] log(ŷ[i]) + (1 − y[i]) log(1 − ŷ[i]) ]   (3)
where y denotes the ground truth, which is a one-hot vector exclusively activated by t. To speed up the stochastic convex optimization process, we use a mini-batch setting, and rewrite the averaged cross-entropy loss over a batch of size N in the following simplified form:
L(y) = − (1/N) Σ_{i=1}^{N} log(ŷ_i[t_i])   (4)
where the subscript i denotes the i-th sample of the batch, and t_i represents the index of label t in the ground truth vector of that sample. Eq. 4 is computationally efficient; however, it tends to ignore the existing knowledge for the query (h, r, ?) other than the current fact (h, r, t), which has been proven useful for improving performance (Shi & Weninger, 2017). Our experimental results show, however, that the impact of this problem can be controlled by means of collaborative correction with related facts under our model framework, which further demonstrates the validity of our modeling assumptions. Hopefully, the lessons learned in designing reasonable and computationally efficient cost functions in this study can serve as exemplars for future work.
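Note that Eq. (4) coincides with the standard softmax cross-entropy over the target index, as the following small check illustrates:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)                 # a batch of 4 queries over Ne = 10 entities
targets = torch.tensor([3, 1, 7, 0])        # indices t_i of the true tail entities

y_hat = F.softmax(logits, dim=-1)                              # Eq. (2)
loss_eq4 = -y_hat[torch.arange(4), targets].log().mean()       # Eq. (4)

assert torch.allclose(loss_eq4, F.cross_entropy(logits, targets))
```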
4 EXPERIMENTAL RESULTS
We evaluate the proposed model on two distinctly different types of graph embedding learning tasks. Firstly, we evaluate our model on knowledge base completion tasks with the conventional datasets FB15K and WN182, and their upgraded versions FB15k-237 and WN18RR3. Secondly, we evaluate our model on graph based multi-label classification tasks with two benchmark datasets from the complex network research area: BlogCatalog and Protein-Protein Interaction (PPI)4. Background information on the datasets and the implementation details of our model are given in Appendix B.
4.1 EVALUATION ON KNOWLEDGE BASE COMPLETION TASKS
The aim of the first evaluation was to assess the performance of the proposed model in link prediction tasks, by comparing it with other state-of-the-art approaches. We report the filtered P@N scores following the protocols proposed by Bordes et al. (2013), which means that all of the known facts would be screened out from the ranking list before calculating the statistics of the hits. The numerical results are presented in Table 1, where the highest scores in each column are presented in bold.
We reproduced all of the results of the existing studies (mostly with the released code), some of which are below the reported records. For a fair comparison of the models, we cite those numbers from the original publications (marked with ⋆ symbols). Also, it seems that the results reported by Dettmers et al. (2017) only consider the tail entity prediction scenario (without averaging with the head entity prediction results), hence we report two versions of the test results of our model: the averaged version is named GEN(avg.), while the tail entity prediction results are reported with the model named GEN(tail). Besides, we found that our model tends to remember the reverse facts with regard to the triples processed during the training phase. We argue that this is an inherent characteristic of our modeling methodology, since it treats such reverse facts as conceptually correct. Therefore, we also report P@N scores after screening out such reverse facts; this model is named GEN(opt). We consider that under certain practical circumstances, it is reasonable to care about such results, because the reverse facts are direct reflections of the known facts, and in many scenarios, they are themselves useful and effective facts.

2Available online at: https://everest.hds.utc.fr/doku.php?id=en:transe
3Available online at: https://github.com/TimDettmers/ConvE
4Available online at: https://snap.stanford.edu/node2vec/
From Table 1 one can see that the performance of ComplEX seems much more competitive than other models on both of the WordNet subsets; however, according to our tests, TransE and HoIE perform (and generalize) more stably than others across all of the subtasks. Also please note that, after filtering out the reverse facts from the ranking list, we recorded a significant increase in the P@1 score on WN18, which was not observed in other models. Since most of the semantic relations defined in WordNet are reflexive (Miller, 1995), we believe that these results help verify the efficacy of our model framework. Further evidence can be found by looking at the evaluation results on FB15K and FB15K-237, in which our model consistently and significantly outperforms others in all settings.
The goal of the second evaluation was threefold: (1) to assess the relation prediction performance of our model; (2) to verify the validity of the multi-shot learning framework; (3) to evaluate the quality (representability) of different embedding schemes. To achieve this goal, we carried out the group of experiments depicted in Table 2, where the model name shown in parentheses indicates that the test is based on the embeddings generated by that model, but re-trained with our model for fair comparison. For example, before testing the GEN(TransE) model, we need to train a GEN model with TransE embeddings; the only difference is that the pre-trained embeddings are not updated during the training process, such that the quality of the different embedding schemes can be assessed more objectively. The results of GEN(HoIE) were obtained similarly from pre-trained HoIE embeddings. The pre-trained word2vec embedding5 and GloVe embedding6 are obtained from the publicly available dictionaries released by Google and the Stanford NLP Group for research purposes, which are also heavily studied by recent research. For entities and relations consisting of many words, we use the weighted sum of the word embeddings as their distributed representation for the test. The three models listed at the bottom of Table 2 demonstrate the one-shot learning capability of GEN; for instance, the results of GEN(h, r ⇒ t) were obtained by only considering the query (h, r, ?) during the training stage.
From these studies, the following conclusions can be obtained. (1) The performance of GEN on relation prediction tasks has been demonstrated. However, it seems that such strong performance mainly comes from our GEN framework, under which the predictive capability of a variety of embeddings can be enhanced. Considering the ratio of the number of facts to relations involved, this problem seems much easier than the link prediction problem. (2) The validity of the multi-shot framework has been verified, since each of the one-shot GEN models performs significantly worse than the multi-shot model on almost all the tests, except that on relation prediction tasks, GEN(h, t ⇒ r) performs comparably to GEN; this is probably because it was exclusively trained for that task, which is prone to overfitting the data. (3) Comparing with their performance on link prediction tasks, we argue that the embeddings generated by GEN are probably more representative and informative than other embedding schemes, for which we provide more empirical (visual) evidence in Appendix C.

5Available at: https://code.google.com/archive/p/word2vec; version: GoogleNews-vectors-negative300.
6Available at: https://nlp.stanford.edu/projects/glove/; file version: glove.42B.300d.
4.2 EVALUATION ON GRAPH BASED MULTI-LABEL CLASSIFICATION TASKS
In the previous section, the term "knowledge graph" was used to refer to a multi-relational database, in which the entities are engaged in one or more heterogeneous relations, meaning that the relations related to an entity may range over different domains. In this section, we consider the problem of embedding learning on another type of graph: the homogeneous graphs (networks), in which the entities are engaged in a single specific relationship. This is a natural structure people use to model the physical world, such as the various social networks and biological information systems. In this study, we consider it a generalized form of the knowledge graph, and attempt to come up with a general-purpose framework that can be used for embedding learning on different graphs.
To verify the validity of the proposed model, we evaluate GEN by comparing its performance on benchmark multi-label classification tasks with the state-of-the-art DeepWalk and Node2vec models. Besides, we also report results on TransE and HoIE embeddings for comparison purposes; the supervised models used for multi-label classification are identical to each other (they differ only in the embeddings). For fair comparison, all of the results with regard to DeepWalk (Perozzi et al., 2014) and Node2vec (Grover & Leskovec, 2016) are cited from their original sources.
Following the convention of previous authors, we randomly sample a portion of the labeled nodes as the training set (the rest are used for testing); we repeat this process 9 times (with the training ratio increased from 10% to 90%), and report two averaged measures (w.r.t. recall, precision, and F1-measure) on each test, namely macro-average and micro-average. The Macro-F1 weights all categories equally regardless of how many labels belong to them, while the Micro-F1 weights all labels equally, thus favoring performance on common categories.
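For reference, both measures can be computed with scikit-learn; the label count below (39, as in BlogCatalog) and the random arrays are purely illustrative:

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(500, 39))     # multi-label ground truth (39 labels)
y_pred = rng.integers(0, 2, size=(500, 39))     # binary predictions of a classifier

print("Macro-F1:", f1_score(y_true, y_pred, average="macro"))  # categories weighted equally
print("Micro-F1:", f1_score(y_true, y_pred, average="micro"))  # labels weighted equally
```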
Numerical results are presented in Tables 3 and 4, respectively; the highest scores in each column are presented in bold face. From Table 3 one can see that the performance of DeepWalk proves much more competitive than other models when labeled data is sparse, but GEN still consistently outperforms it when given 50% of the data, which demonstrates the validity of the proposed embedding learning framework for modeling author connections on social networks. Next, we investigate the performance of our model on even sparser graphs, i.e., the Protein-Protein Interaction network. Table 4 shows that GEN performs consistently and significantly better than the other baselines. In fact, when trained with only 20% of the labeled proteins, GEN performs significantly better than other approaches given 90% of the data. We argue that this strong performance not only indicates that our model is flexible enough for biological networks, but also provides new insights into their underlying biological mechanisms. Also please note that the Macro-F1 scores in Tables 3
and 4 demonstrate that, compared with other embedding schemes, GEN performs more stably (and better) in both common and rare categories, which indicates that the embeddings generated by GEN are probably more representative and informative than other solutions; thus the supervised model built on top of them is less vulnerable to global under-fitting and local over-fitting.
5 CONCLUSION AND FUTURE WORK
Representation learning of knowledge graphs is a key concern for artificial intelligence and cognitive science. Many types of relations in physical, biological, social and information systems can be modeled with concept (knowledge) graphs. In this paper, we present an efficient, scalable framework for learning conceptual embeddings of entities and relations in generalized knowledge graphs, including homogeneous and heterogeneous graphs. We give evidence that the proposed model learns good representations of all these graphs for knowledge inference and supervised learning. For future work, we plan to investigate more thoroughly the efficacy of the proposed modeling framework with respect to the decomposition of the semantic information conveyed by the linked concepts into elementary information, i.e., the four Q&A pairs. We also seek to improve the interpretability of graph embeddings in the context of semantic interoperability, since with existing algorithms there is usually no way to interpret the embedded information meaningfully and accurately enough to produce useful results.
ACKNOWLEDGMENTS
We are grateful to the anonymous reviewers for taking the time to read this paper and provide helpful comments.
APPENDIX A: MOTIVATION AND INTUITION
To get an intuitive understanding of the problem, consider the following examples taken from three typical KGs that have been heavily studied by the academic and industrial communities:
• (Elvis Presley, instance of, rock star): taken from WordNet7, one of the largest online lexical databases of English, in which distinct concepts (called synsets) are interlinked by means of rigidly defined (hence limited) conceptual-semantic or lexical relations.

• (Elvis Presley, /people/person/profession, singer): taken from Freebase8, which was once the largest collaboratively edited knowledge base (deprecated at this time and absorbed by the Wikidata project), in which named entities are interlinked by means of fine-grained relation types defined in the meta-schema. Due to the loosely-defined nature of the relation types, redundant or alternative facts are allowed to exist simultaneously, such as (Elvis Presley, profession, musician) and (Elvis Presley, profession, actor).

• (Elvis Presley, rdf:type, American rock singers): taken from YAGO9, one of the largest and most active semantic knowledge bases, developed at the Max Planck Institute for Computer Science in Saarbrücken, which combines the clean taxonomy (relation types) of WordNet with the richness of the Wikipedia category system (classes of entities).
As can be seen from the above examples, the use of different ontologies can lead to different (and incoherent) relations between the same pair of concepts; similarly, applying different ontologies can lead to diverse kinds of conceptualizations. Therefore, it is (arguably) impractical to rely on word-level embeddings to precisely represent the knowledge graphs under such diverse conditions, and it is necessary to develop a universal solution, applicable to all of the ontology infrastructures, for phrase-level embedding learning of the different concept representations.
As mentioned in Section 3, in order to develop a representation learning method that is flexible enough for modeling different types of knowledge graphs, the most important problems to be faced are how to define the optimization problem and how to solve it. According to our survey, most state-of-the-art models, including the translating models derived from TransE (Bordes et al., 2013; Lin et al., 2015), the latent semantic models derived from RESCAL (Nickel et al., 2011; 2016b), and the neural network models derived from the NTN (Socher et al., 2013), all define the graph embedding learning problem as a supervised binary classification problem, in which the optimization objective takes the form of a relation-specific cost function of the entity and/or relation embeddings, and is solved with a stochastic gradient descent (SGD) process. Typical criteria used to evaluate the cost functions include the logistic loss and the pairwise margin-based criterion, and the negative samples used for training the model are usually sampled from the complement of the knowledge graph based on the open world assumption (Drumond et al., 2012). However, we suspect that in many situations such modeling strategies have theoretical and practical disadvantages.
Firstly, we speculate that the reason why most previous studies did not consider the first and second queries simultaneously (see Section 3) is probably the difficulty of modeling the inverse semantic relatedness of the entities in a given fact. In other words, shall we use the embedding of r to represent its reverse r′? If we do so, it seems that it will inevitably lead to semantic paradoxes like: Presley's profession is Presley, since from the model's perspective, there is no difference between the entity Presley and other entities that may appear on either side of the relation profession. Considering the sparsity of the knowledge graph, models trained with limited facts would very likely tend to give higher scores to the entities that have been "seen in the right place".
In order to solve this problem, we propose to model the facts conceptually instead of concretely (or literally, syntactically), which means that we focus on the semantic meanings of the embeddings (of the entities and relations), rather than their syntactic features. Such a conceptual embedding scheme allows us to unify the representation of a relation (r) and its reverse counterpart (r′), and to accommodate the lexical variety in use by various knowledge bases.
7http://wordnet.princeton.edu 8https://developers.google.com/freebase 9https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago
The intuition behind this is that, for any given fact (h, r, t), one would instantly recognize the bidirectional semantic connection between h and t, without needing to translate it into (t, r′, h) explicitly in his/her mind. We believe this is crucial for efficient utilization of the structural information of the KGs for representation learning; empirical evidence is provided in Section 4 and Appendix C, respectively.
Secondly, we propose to use unsupervised learning techniques for graph embedding learning tasks, because: (1) presently, almost all of the large-scale knowledge graphs are extremely sparse, which unavoidably degrades the quality and reliability of supervised learning algorithms; further, considering the relation-specific solutions that dominate current research, the situation might get even worse. (2) Selecting negative examples for pair-wise training is tricky and expensive, since in practice it is very hard to generate a "proper and informative" negative sample responsive to each positive example. For example, when learning from the fact (Einstein, employer, IAS), the false fact (Einstein, employer, Stanford) would seem to be more reasonable and informative than (Einstein, employer, singer), at least if the objective is to further improve the predictive capability of the model to discriminate between similar objects.
To solve the data sparsity problem, we propose to model each fact as a short sentence, so that the entire KG can be regarded as a huge document and processed by unsupervised encoder-decoder neural models, which have been demonstrated to be efficient and useful for concept learning from large-scale and feature-sparse data (Sutskever et al., 2014). In order to avoid the sampling bias due to the selection of uninformative entities, we propose to use the softmax cross-entropy loss as a measure of the predictive discrepancy for model training, because its probabilistic interpretation is more objective than the squared or logistic errors conventionally used in this area, and it has been proven to be convex for the MLP we use in this paper (Bengio et al., 2005).
APPENDIX B: BACKGROUND INFORMATION AND IMPLEMENTATION DETAILS
B.1 DATASETS
WN18 is a subset of WordNet, which contains mostly the conceptual-semantic and lexical relations, and the entities are organized in a strictly hierarchical manner. FB15k is a subset of Freebase, which contains facts gathered from Wikipedia, mostly focused on the topic of movies and sports.
These datasets have been used as de facto benchmarks for comparative evaluation; however, recent research (Toutanova & Chen, 2015; Dettmers et al., 2017) shows that the test sets of WN18 and FB15k contain a lot of reversed triples already present in the training set, i.e., (h, r, t) versus (t, r, h), which, we acknowledge, would favor our model over the one-shot alternatives.
Therefore, we also provide results on FB15k-237, introduced by Toutanova & Chen (2015), which is a subset of FB15K with reversed relations removed, and we test on WN18RR, provided by Dettmers et al. (2017), which is a new, reverse-duplicate-free sample of WordNet.
The multi-relational data sampled from WordNet and Freebase can be seen as typical heterogeneous graphs; in order to verify the generality of the developed model, we also perform evaluation in the multi-label classification setting on some typical homogeneous graphs.
BlogCatalog is a social network sampled from the BlogCatalog website, which contains only one relationship: the social connection between blog authors, while the labels represent the topic categories of interest provided by the bloggers. Protein-Protein Interactions is a biological network sampled from the PPI network for Homo Sapiens, which also contains only one relationship: the existence of interactions between proteins, while the labels represent the biological states of the proteins. In the training sets of these graph corpora, every entity (node) is assigned one or more labels from a finite set, and the task is to predict the labels of the nodes in the test set.
The statistics of these data sets are summarized in Table 5.
B.2 EXPERIMENTAL SETUP
We optimized the hyper-parameters for all the datasets via extensive grid search and selected the model with the best filtered P@10 score on the validation set. Hyper-parameter ranges for the grid search were the following: embedding dimension d in {50, 100, 200, 300}, hidden layer dimension k in {256, 512, 1024, 2048}, MLP dropout rate p in {0.0, 0.1, 0.2, 0.3}, learning rate η in {0.001, 0.01, 0.1, 1, 5, 10}, learning rate decay λ in {0.7, 0.75, 0.8, 0.85, 0.9, 0.95}. In this study, we use the following combination of parameters for all of the graph embedding learning tasks:
• E CELLS: {d : 200, k : 2048, p : 0.2, η : 5, λ : 0.9}.
• R CELLS: {d : 200, k : 512, p : 0.2, η : 5, λ : 0.9}.
• Mini-batch settings: {batch size : 512, epochs : 50}.
For multi-label classification tasks, we implement a single layer perceptron model for multi-task learning with {k : 128, η : 0.1, λ : 0.9}, selected through grid search using the best averaged Macro-F1 score on a validation set randomly sampled from the labeled nodes.
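For concreteness, the grid described above can be enumerated as in the following sketch (the scoring stub stands in for a full training run; all names are our own shorthand, not the authors' code):

```python
from itertools import product

# The search grid reported above.
grid = {
    "d":   [50, 100, 200, 300],                # embedding dimension
    "k":   [256, 512, 1024, 2048],             # hidden layer dimension
    "p":   [0.0, 0.1, 0.2, 0.3],               # MLP dropout rate
    "eta": [0.001, 0.01, 0.1, 1, 5, 10],       # learning rate
    "lam": [0.7, 0.75, 0.8, 0.85, 0.9, 0.95],  # learning rate decay
}

def validation_p_at_10(config):
    # Stand-in for training the model under `config` and scoring
    # filtered P@10 on the validation set; returns a dummy value here.
    return 0.0

best_score, best_config = -1.0, None
for values in product(*grid.values()):
    config = dict(zip(grid.keys(), values))
    score = validation_p_at_10(config)
    if score > best_score:
        best_score, best_config = score, config
```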
APPENDIX C: INVESTIGATING AND VISUALIZING THE EMBEDDING SCHEMES
In this section, we provide a qualitative analysis of four typical embedding schemes (GEN, HoIE, TransE and word2vec), with the intention of better understanding the connections between existing graph embedding schemes, and of highlighting areas that remain poorly understood for further investigation. The reason we choose these models (word2vec excepted) is that, according to our tests, they have proven to be efficient and scalable to large-scale problems, and also exhibit good generalization ability on real data sets. We also consider the word2vec embeddings because we found that, with the help of our multi-shot learning model, they achieve state-of-the-art performance on most of the knowledge base completion tasks (see Section 4), which is interesting and worth some consideration (it probably indicates a promising potential for transfer learning).
The first experiment aims to show that the graph embeddings generated by GEN differ from those of the other solutions. We calculate the cosine similarities for each pair of the 1,345 relations in FB15K w.r.t. the four embedding schemes respectively, and compare their top-N ranking lists (each a set of relation pairs for the corresponding embedding scheme) through Venn diagrams, as illustrated in Fig.3.
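A sketch of this comparison protocol, with random matrices standing in for the learned embeddings (helper names are ours):

```python
import numpy as np

def top_pairs(emb, n):
    """The n unordered relation pairs with highest cosine similarity."""
    unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = unit @ unit.T
    iu = np.triu_indices(len(emb), k=1)        # each pair counted once
    order = np.argsort(sim[iu])[::-1][:n]
    return {(iu[0][i], iu[1][i]) for i in order}

rng = np.random.default_rng(0)
n_rel, d, top_n = 1345, 200, 300               # FB15K has 1,345 relations
emb_a = rng.normal(size=(n_rel, d))            # stand-ins for two schemes
emb_b = rng.normal(size=(n_rel, d))
shared = top_pairs(emb_a, top_n) & top_pairs(emb_b, top_n)
print(len(shared))                             # Venn-diagram intersection size
```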
Figure 3 reveals very different clustering patterns between GEN and the other alternatives in their corresponding embedding spaces. What is particularly interesting is that, although the TransE model is inspired by and heavily reliant on the spatial distribution of the word2vec embeddings (Bordes et al., 2013), the two are, in fact, not similar at all. On the contrary, the results of TransE and HoIE share a lot of similarities: in their top-300 and top-500 lists, almost half of the relation pairs are contained in the intersection. This probably indicates that the translating embedding hypothesis (Bordes et al., 2013) is theoretically similar in nature to the holographic embedding hypothesis (Nickel et al., 2016b) when used for graph modeling. This is not an easily testable hypothesis; we consider it an open question, which we hope to explore further in the future.
The goal of the second experiment is to verify the claim that the embeddings generated by GEN are more representative and informative than other embedding schemes. Here we provide a case study on a randomly selected relation from FB15K, namely “/location/location/time zones”. There are 137 triples related to this relation (#425) in the test set; all of the head entities are names of countries or regions, and the tail entities are the corresponding time zones. The heads are uniquely different from each other, while only 10 distinct time zones exist among the tails.
We plot all of the 137 triples in Fig.4, in which (Fig.4a and Fig.4b) the input multi-dimensional vectors are projected onto a 2-dimensional subspace spanned by x and y using principal component analysis (PCA); we choose the first two principal components as the principal axes. In Fig.4a, the input is the concatenation of the head and tail entity of each triple, i.e. (h ⊕ t), with the intention of investigating the patterns of such feature vectors for relation prediction tasks. Hence, we choose the names of the tails as legend labels. As can be seen from Fig.4a, the feature vectors of the 137 triples show clear clustering tendencies with regard to the categories of their tail entities. Based on this observation, we further plot the hidden layer of the R CELL (which is a 512-dimensional vector in this case) located before the output layer in our GEN model, as depicted in Fig.4b. From Fig.4b one can see that the distance between the data points is amplified, and the distinction becomes more prominent. We plot the cumulative softmax in Fig.4c, in which the X-axis represents the 1,345 types of relations in FB15K and the Y-axis denotes the cumulative softmax values. The curve is obtained by adding up all of the softmax vectors output by GEN with regard to the 137 triples. The single peak observed in Fig.4c clearly exhibits that GEN can make good use of these (concatenated) features to identify the corresponding relations correctly.
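The plotting pipeline can be sketched as follows (random arrays stand in for the learned embeddings and for GEN's outputs; this is an illustration, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
h_emb = rng.normal(size=(137, 200))            # stand-in head embeddings
t_emb = rng.normal(size=(137, 200))            # stand-in tail embeddings
features = np.concatenate([h_emb, t_emb], axis=1)  # (h ⊕ t), as in Fig.4a

# PCA projection onto the first two principal components.
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
xy = centered @ vt[:2].T                       # 2-D points to scatter-plot

# Cumulative softmax over the 1,345 relations, as in Fig.4c.
logits = rng.normal(size=(137, 1345))          # stand-in for GEN's outputs
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)
cumulative = p.sum(axis=0)                     # one value per relation type
```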
For comparison purposes, we also visualize the other three embedding schemes with the same protocol, as illustrated in Fig.5. Since the corresponding models do not use an MLP for relation prediction, we cannot plot their “hidden state” and “cumulative softmax” for the second and third subplots; hence we choose to visualize their predictive criterion vectors and output ranking lists instead. The processing is consistent with the protocols of the original literature. Specifically, for TransE, we plot (t − h) as the hidden state for relation prediction, and calculate the ℓ1-norm distance |r_i − (t − h)|_1 w.r.t. each relation r_i in FB15K; then we process the distance vector with the softmax function to calculate the cumulative softmax. For HoIE, we plot the circular correlation vector (h ⋆ t) as the hidden state, and calculate the cosine similarity (h ⋆ t) · r_i w.r.t. each relation r_i in FB15K; then we use the obtained (cosine) vector to calculate the cumulative softmax. For the word2vec embeddings, we use the same protocol as for TransE.
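For reference, a sketch of these two criterion vectors follows; the sign convention used to turn ℓ1 distances into a softmax is our assumption, as are all toy sizes:

```python
import numpy as np

def circular_correlation(h, t):
    """h ⋆ t, the HoIE composition, computed via the FFT identity."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(h)) * np.fft.fft(t)))

rng = np.random.default_rng(0)
h, t = rng.normal(size=200), rng.normal(size=200)
rels = rng.normal(size=(1345, 200))        # stand-in relation embeddings

# TransE: l1 distance of each relation to (t - h), then a softmax over
# the negated distances.
dist = np.abs(rels - (t - h)).sum(axis=1)
transe_p = np.exp(-dist - (-dist).max())
transe_p /= transe_p.sum()

# HoIE: cosine similarity of (h ⋆ t) with each relation, then a softmax.
c = circular_correlation(h, t)
cos = rels @ c / (np.linalg.norm(rels, axis=1) * np.linalg.norm(c))
hoie_p = np.exp(cos - cos.max())
hoie_p /= hoie_p.sum()
```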
From Fig.5 one can see that the concatenated embedding vectors of TransE and HoIE show a similar clustering pattern to the GEN case, which helps explain why, under our multi-shot learning framework, the embeddings generated by these models perform similarly in relation prediction tasks (see Table 2). It also provides evidence for our conjecture that these two embedding schemes could be inherently similar to each other. From their criterion vectors (the second subplot for each model), one can see that their clustering pattern is not as clear as in the GEN case, which helps explain their performance on relation prediction tasks (as shown in the third subplot)10. We consider this solid support for the validity of the proposed multi-shot learning framework.
More evidence can be found in our source code release, which will be made publicly available on GitHub to encourage reproducible research, after the anonymous review.
10 The alternative peaks appearing in subplots Fig.5c and Fig.5f are:
• #891: “/base/schemastaging/phone open times/time zone”, and
• #583: “/time/time zone/locations in this time zone”. | 1. How does the proposed model perform compared to state-of-the-art approaches?
2. What is the main contribution of the paper regarding knowledge graph tasks?
3. Do you have any questions about the proposed model's architecture or training procedure?
4. How does the reviewer assess the significance and impact of the paper's contributions?
5. Are there any suggestions or recommendations for future work related to the paper's topic? | Review | Review
The paper is well-written and provides sufficient background on the knowledge graph tasks. The current state-of-the-art models are mentioned and the approach is evaluated against them. The proposed model is rather simple, so it is really surprising that it performs on par with or even outperforms existing state-of-the-art approaches.
? The E_CELLs share the parameters. So, there is a forced symmetry on the relation, i.e., given input head h and relation r predicting x, and given input relation r and tail t predicting y, would result in the same entity embedding x=y when h=t?
? In Table 2, you report the results of the retrained models GEN(x). There, the weights for the MLPs are learned based on the existing embeddings which do not get changed. I am missing a comparison of the change in the prediction score. Was it always better than the original model? Did all models improve in a similar fashion?
? Did you try training the other models e.g. TransE with alternating objective functions for respectively predicting the head, tail or relation based on the information from the other two?
? Are the last 3 Gen(x,y -> z) rows in Table 2 simple MLPs for the three different tasks and not the parts from the overall joint learned GEN model?
? Why is a binary classifier for Q4 not part of the model?
? Is the code with the parameter settings online?
+ outperforms previous approaches
+ proposes a general use case framework
- no run-time evaluation although it is crucial when one deals with large-scale knowledge graphs
Further comments:
* p.4: “it will take the embedding of h and r as input, and take r as its target label” -> “it will take the embedding of h and t as input, and take r as its target label”
* “ComplEX” -> “ComplEx” |
ICLR | Title
Generalized Graph Embedding Models
Abstract
Many types of relations in physical, biological, social and information systems can be modeled as homogeneous or heterogeneous concept graphs. Hence, learning from and with graph embeddings has drawn a great deal of research interest recently, but only ad hoc solutions have been obtained thus far. In this paper, we conjecture that the one-shot supervised learning mechanism is a bottleneck in improving the performance of graph embedding learning algorithms, and propose to extend it by introducing a multi-shot unsupervised learning framework. Empirical results on several real-world data sets show that the proposed model consistently and significantly outperforms existing state-of-the-art approaches on knowledge base completion and graph-based multi-label classification tasks.
1 INTRODUCTION
Recent studies have highlighted the importance of learning distributed representations for symbolic data in a wide variety of artificial intelligence tasks (Bengio et al., 2013). Research on word embeddings (Mikolov et al., 2013) has led to breakthroughs in many related areas, such as machine translation (Bahdanau et al., 2015), question answering (Xiong et al., 2016), and visual-semantic alignments (Karpathy & Fei-Fei, 2017). However, learning to predict for large-scale knowledge graphs (KGs) remains a challenging problem, largely due to the diversity of the ontologies and the semantic richness of the concepts, which makes it hard to generate proper and universally applicable graph embeddings based simply on word-level embeddings (Cai et al., 2017).
Being able to generate reasonable and accurate distributed representations for large-scale knowledge graphs would be particularly valuable, in that it may help predict unobserved facts from limited concepts, uncover gaps in our knowledge, and suggest new downstream applications, which clearly reflects the central concerns of artificial intelligence (Nickel et al., 2016a; Henaff et al., 2017). Therefore, massive attention has been devoted in recent years to the potential of embedding the entities and relationships of multi-relational data in low-dimensional vector spaces (Wang et al., 2017).
In this paper, we consider the problem of developing a simple and efficient model for learning neural representations of generalized knowledge graphs, including multi-relational heterogeneous graphs and more specifically defined homogeneous graphs (such as social and biological networks).
Following the pioneering work of Nickel et al. (2011) and Bordes et al. (2013), almost all of the state-of-the-art approaches model the graph embedding learning problem as a supervised binary classification problem, and their objective functions are usually one-shot (single purpose). We argue that prior research in this area might have been affected and biased by “established priors”, which prevents the formulation of a methodology objective enough to cope with highly sparse knowledge graphs. We propose to handle the embedding learning problem of knowledge graphs with an unsupervised neural network model, called the Graph Embedding Network (GEN). The proposed model consists of three simple multi-layer perceptron (MLP) cells; each cell operates in response to a different “query” with regard to the input fact, and the cells are trained sequentially. The formulation of the model is inspired by the neural sequence-to-sequence (seq2seq) model (Sutskever et al., 2014), except that we use MLP cells to mimic the sequence learning capability of the recurrent neural network (RNN), in order to model the semantic structure of the knowledge graphs.
The major contributions of this paper are: (1) we propose GEN, a novel and efficient multi-shot framework for embedding learning in generalized knowledge graphs; (2) we show how GEN accords with established principles in cognitive science, providing flexibility in learning representations that work on graphs from different domains.
2 RELATED WORKS
During the last few years, an increasing amount of research attention has been devoted to the challenge of representation learning on knowledge graphs, especially focused on the potential benefits for knowledge base completion (KBC) tasks, including the link prediction problem and the relation prediction problem. Among these, the relation translating model TransE (Bordes et al., 2013), the tensor factorization based semantic matching model RESCAL (Nickel et al., 2011), and the neural network based semantic matching model ER-MLP (Dong et al., 2014; Nickel et al., 2016b) are probably the most heavily studied from a methodology perspective. For good surveys on such embedding learning algorithms, see Nickel et al. (2016a), Wang et al. (2017), and Cai et al. (2017).
Broadly speaking, related works can be divided into two categories, linear and non-linear, according to whether the output embedding has a reasonable linear interpretation. State-of-the-art linear models include TransE, RESCAL, TransH (Wang et al., 2014), DistMult (Yang et al.), and ANALOGY (Liu et al., 2017), while the popular non-linear models include ER-MLP, ComplEx1 (Trouillon et al., 2016), HoIE (Nickel et al., 2016b), ProjE (Shi & Weninger, 2017) and ConvE (Dettmers et al., 2017). The proposed GEN model is also a non-linear model.
The graph embedding learning model that is most closely related to this work is probably the ProjE model, which makes use of an embedding projection function defined as:
h(r, t) = g(w_0 · f(w_1^r r + w_1^t t + b_1) + b_0)

where h, r, t denote the embedding vectors, f(·) and g(·) are non-linear activation functions, w_0, w_1^r and w_1^t are learnable weight matrices, and b_0 and b_1 are bias vectors. The output ranking scores of entity h with regard to the given query (?, r, t) can be obtained through a softmax function:
Score(h_i, r, t) = softmax{h(r, t)}_i

However, as one can see from the above functions, the ProjE model is built upon the query (?, r, t) and hence is a one-shot solution, which is distinctly different from our GEN model. Another difference lies in the definition of the objective loss function: the ProjE model uses a (selective) cross-entropy loss based on the open world assumption, while our model uses a simplified cross-entropy loss based on the closed world assumption. In order to save computation cost, the ProjE model introduced a negative sampling process, which risks introducing additional bias. Besides, its candidate sampling process is time consuming and hard to parallelize.
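For illustration, a NumPy sketch of this scoring pipeline is shown below; the activation choices for f(·) and g(·) and all toy sizes are our assumptions, not taken from the ProjE release (see Shi & Weninger, 2017 for the exact choices):

```python
import numpy as np

def proje_scores(r, t, w1_r, w1_t, b1, W0, b0):
    """Sketch of ProjE's entity ranking for the query (?, r, t)."""
    hidden = np.tanh(w1_r @ r + w1_t @ t + b1)          # f(w1_r r + w1_t t + b1)
    h_vec = 1.0 / (1.0 + np.exp(-(W0 @ hidden + b0)))   # g(w0 · f(...) + b0)
    e = np.exp(h_vec - h_vec.max())
    return e / e.sum()                                  # softmax{h(r, t)}_i

rng = np.random.default_rng(0)
d, n_e = 200, 14951                                     # FB15K entity count
scores = proje_scores(rng.normal(size=d), rng.normal(size=d),
                      np.eye(d), np.eye(d), np.zeros(d),
                      rng.normal(size=(n_e, d)) * 0.01, np.zeros(n_e))
```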
Another model that is closely related to the GEN model is the ER-MLP model, which can be interpreted as creating a representation for each element of a triple and deriving its existence from this representation (Nickel et al., 2016a). The ER-MLP model can be defined as:
Score(h, r, t) = w^T g(C^T (h ⊕ r ⊕ t))

where the symbol ⊕ denotes the vector concatenation operator, the vector w and matrix C are global weights shared by all the entities and relations, and g(·) is an element-wise non-linear activation function. This model is built upon the fourth query as defined in Section 3; it is a supervised solution, which is quite different from ours. One well-known disadvantage of the ER-MLP is that, even properly regularized, it is still easily prone to over-fitting on knowledge graph datasets (Nickel et al., 2016b); therefore we do not compare against it in this work, but against the ProjE model instead.
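A minimal sketch of this scoring function, with tanh standing in for g(·) (an assumption) and toy dimensions:

```python
import numpy as np

def er_mlp_score(h, r, t, C, w):
    """Sketch of the ER-MLP score w^T g(C^T (h ⊕ r ⊕ t))."""
    x = np.concatenate([h, r, t])          # the concatenated triple
    return float(w @ np.tanh(C.T @ x))     # a single plausibility score

rng = np.random.default_rng(0)
d, k = 200, 512
score = er_mlp_score(rng.normal(size=d), rng.normal(size=d),
                     rng.normal(size=d),
                     rng.normal(size=(3 * d, k)) * 0.01,
                     rng.normal(size=k))
```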
As mentioned before, the primary motivation of this study is to develop a graph embedding model that is universally applicable to a wide variety of situations. In order to verify the validity of our solution on heterogeneous networks, we further test it on multi-label network classification tasks for social networks (BlogCatalog) and biological networks (Protein-Protein Interaction), and compare our results with two state-of-the-art techniques, namely DeepWalk (Perozzi et al., 2014) and node2vec (Grover & Leskovec, 2016). Both of them are derived directly from the word2vec model (Mikolov et al., 2013): they create node embeddings of the graphs based on the skip-gram framework, and train the model with a corpus generated through random walks on the graph. However, it has been shown that random walk sampling can be insufficient for supervised learning tasks in sparse network environments (Liu et al., 2016). Our results support this conjecture; the experimental results on benchmark tests provide strong evidence that our model performs much better.
1The ComplEx model can be seen as an extension of the DistMult model in the complex space; albeit no nonlinear transformation is applied, we treat it as a non-linear model here.
3 APPROACH AND MODEL ARCHITECTURE
Most of the prevalent semantic knowledge databases are built upon the Resource Description Framework (RDF), in which facts are represented and stored in the form of SPO (Subject, Predicate, Object) triples. Following the convention, we will use the symbol (h, r, t) to represent a unit of facts, in which h, r and t denote the head entity, the relation, and the tail entity, respectively.
The primary motivation of this paper is to develop a representation learning method that is suitable and flexible enough for modeling different types of knowledge graphs from a universal perspective. To achieve this objective, the most important problems to be faced are associated with how to define the optimization problem and how to solve it. As mentioned above, previous works only consider a one-shot mapping from the embedding space to the criterion space, which, we conjecture, is vulnerable to losing a considerable amount of the structured semantic information. For instance, given the fact (Elvis Presley, profession, singer), one could immediately learn the following queries:
• Q1: What is the profession of Elvis Presley? A1: singer.
• Q2: Can you name a person whose profession is singer? A2: Elvis Presley.
• Q3: What is the possible relationship in between Elvis Presley and singer? A3: profession.
• Q4: Is it true that Elvis Presley’s profession is singer? A4: Yes.
In fact, this is the actual way we humans learn the meaning of concepts expressed by a statement. These self-labeled queries reflect the following modeling philosophy: (1) (h, r) ⇒ t; (2) (t, r) ⇒ h; (3) (h, t) ⇒ r; (4) (h, r, t) ⇒ T/F. Each of these has been exclusively adopted by previous research; however, none of the prior works systematically investigated the effect of combining all of this information. In this section, we propose a novel multi-shot model to solve this problem. For a more detailed discussion of the motivation and intuition behind this model, see Appendix A.
3.1 OVERVIEW OF THE MULTI-SHOT LEARNING FRAMEWORK
The proposed model (GEN) is designed to process data in sequential form. As shown in Fig.1, GEN consists of three components (cells), each corresponding to an individual query with regard to the given input triple. In this study, we propose to use a 2-layer MLP network to deal with the parameter estimation problem for each query individually; although it can be substituted by any other one-shot model, we only report test results with MLP cells for simplicity. In training mode, the training set is fed into the system sequentially, and each triple is decomposed into three self-labeled queries: (h, r, ?) ⇒ t, (?, r, t) ⇒ h, and (h, ?, t) ⇒ r. Each query is fed into the corresponding cell in order to update the parameters. Since for any given triple our model reads it from three different perspectives, we call it a “multi-shot model” to distinguish it from related works.
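The decomposition can be sketched as follows (a toy illustration with our own names; the input ordering for the (?, r, t) query is our assumption):

```python
def multishot_examples(h, r, t):
    """Decompose one fact into the three self-labeled training queries."""
    return [
        ("E_CELL", (h, r), t),   # (h, r, ?) => t
        ("E_CELL", (t, r), h),   # (?, r, t) => h  (shares E CELL parameters)
        ("R_CELL", (h, t), r),   # (h, ?, t) => r
    ]

batch = [ex
         for fact in [("Presley", "profession", "singer")]
         for ex in multishot_examples(*fact)]
```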
Parameters of the model can be logically divided into two parts. Firstly, the distributed representations of the entities and the relations are defined in the same d-dimensional space, which, as shown in Fig.1, are organized together as a learnable dictionary of embeddings. Secondly, there exist two types of MLP cells in the model: one deals with the entity prediction tasks, and the other is responsible for the relation prediction tasks; they are marked as “E CELL” and “R CELL” respectively. Each individual cell has its own parameter set {W0, b0; W1, b1} representing a certain network structure. Please note that two E CELLs are introduced to learn from the labeled entities, based on the queries (h, r, ?) and (?, r, t). According to our modeling hypothesis, which claims that all of the relations should be treated conceptually instead of syntactically, we propose to share parameters between the E CELLs; the intuition is to let them share their memory of each known fact from both sides of the relation, so that after training with enough knowledge, the E CELLs will eventually be able to learn how to correctly distinguish between valid and invalid entities for the given queries.
Another theoretical explanation of the GEN model is given below. We consider the proposed model a variant of the RNN model, or more precisely, a neural seq2seq model, as illustrated in Fig.2. When training with the graph (a “document” of triples), the GEN model is organized as a stacked RNN, which consists of two chains: the E CELL chain and the R CELL chain. For any given input (h, r, t), each of the cells works as an individual seq2seq model according to its responsive query. For instance, the R CELL is responsible for the query (h, ?, t) ⇒ r: it takes the embeddings of h and t as input, takes r as its target label, and the parameters (memory) of the R CELL are updated through back-propagation according to the discrepancy between the prediction results (in this case the softmax vector) and the desired label r. Therefore, the proposed model is completely unsupervised, which is distinctly different from previous works. Also please note that, due to the lack of semantic connections between adjacent triples in the input sequence, we did not consider a “long term memory” in this model, as is usually done in real RNN models. Therefore, there exists only one “global memory” in this model: the parameters of the two types of cells, which are responsible for “learning to remember” the rules of how the knowledge graph is constructed.
3.2 DEFINITION OF THE GEN CELLS
The network structures of the E CELLs and the R CELLs are quite similar; the only difference is that they have different numbers of neurons in the hidden layer and the output layer, which are defined as hyper-parameters as shown in Fig.1. For simplicity, we only present the implementation details of the E CELLs here. In order to answer the query (h, r, ?) ⇒ t, the hidden layer of the E CELL takes input from the embedding dictionary according to the labels h and r; the hidden layer is defined as:
x_1 = f(W_0^e · x_0 + b_0) (1)
where x_0 = [h ⊕ r] denotes the concatenation of the embedding vectors; hence x_0 is a 2d × 1 real-valued vector. W_0^e is a k × 2d weight matrix, b_0 is a k × 1 bias vector, k denotes the number of neurons in the hidden layer, and f(·) is a non-linear activation function; in this work, we use the
rectified linear unit (ReLU) function for all the experiments (Nair & Hinton, 2010). The output layer takes the hidden state vector x_1 as input, mapping it to the target label space:
ŷ = g(W_1^e · x_1 + b_1) (2)
where W_1^e is an N_e × k weight matrix, b_1 is an N_e × 1 bias vector, N_e denotes the number of entities in the dictionary, and g(·) denotes the softmax function. Hence, ŷ is an N_e × 1 probability vector, which means that, when training the model with a given fact (h, r, t) to answer the query (h, r, ?), the prediction output by the model is a probability distribution over all of the possible candidate entities. The cross-entropy loss with regard to the prediction is then defined as:
L(ŷ) = − Σ_{i=1}^{N_e} ( y[i] log(ŷ[i]) + (1 − y[i]) log(1 − ŷ[i]) ) (3)
where y denotes the ground truth, which is a one-hot vector exclusively activated by t. To speed up the stochastic convex optimization process, we use a mini-batch setting and rewrite the averaged cross-entropy loss over a batch of N samples in the following simplified form:
L(y) = − (1/N) Σ_{i=1}^{N} log(ŷ_i[t_i]) (4)
where the subscript i denotes the i-th sample of the batch, and t_i represents the index of the label t in the ground truth vector of that sample. Eq.4 is computationally efficient; however, it tends to ignore the existing knowledge for the query (h, r, ?) other than the current fact (h, r, t), knowledge which has been proven useful for improving performance (Shi & Weninger, 2017). Our experimental results show that the impact of this problem can be controlled by means of collaborative correction with related facts under our model framework, which further demonstrates the validity of our modeling assumptions. Hopefully, the lessons learned for designing reasonable and computationally efficient cost functions in this study can serve as exemplars for future work.
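To make Eqs.(1)-(4) concrete, a minimal NumPy sketch of one E CELL forward pass and the simplified batch loss follows (toy sizes and random weights; not the authors' implementation):

```python
import numpy as np

def e_cell_forward(h_emb, r_emb, W0, b0, W1, b1):
    """One E CELL pass, Eqs.(1)-(2): answering (h, r, ?) => t."""
    x0 = np.concatenate([h_emb, r_emb], axis=-1)      # [h ⊕ r], 2d-dim
    x1 = np.maximum(0.0, x0 @ W0.T + b0)              # ReLU hidden layer, Eq.(1)
    logits = x1 @ W1.T + b1
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)          # softmax output, Eq.(2)

def batch_loss(y_hat, t_idx):
    """Simplified averaged cross-entropy of Eq.(4)."""
    return float(-np.log(y_hat[np.arange(len(t_idx)), t_idx]).mean())

rng = np.random.default_rng(0)
d, k, n_e, n = 32, 64, 1000, 4     # toy sizes; the paper uses d=200, k=2048
y_hat = e_cell_forward(rng.normal(size=(n, d)), rng.normal(size=(n, d)),
                       rng.normal(size=(k, 2 * d)) * 0.1, np.zeros(k),
                       rng.normal(size=(n_e, k)) * 0.1, np.zeros(n_e))
loss = batch_loss(y_hat, rng.integers(n_e, size=n))
```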
4 EXPERIMENTAL RESULTS
We evaluate the proposed model on two distinctly different types of graph embedding learning tasks. First, we evaluate our model on knowledge base completion tasks with the conventional datasets FB15K and WN182, and their upgraded versions FB15k-237 and WN18RR3. Second, we evaluate our model on graph-based multi-label classification tasks with two benchmark datasets from the complex network research area: BlogCatalog and Protein-Protein Interaction (PPI)4. Background information on the datasets and the implementation details of our model are given in Appendix B.
4.1 EVALUATION ON KNOWLEDGE BASE COMPLETION TASKS
The aim of the first evaluation was to assess the performance of the proposed model on link prediction tasks, by comparing it with other state-of-the-art approaches. We report the filtered P@N scores following the protocol proposed by Bordes et al. (2013), which means that all of the known facts are screened out from the ranking list before calculating the statistics of the hits. The numerical results are presented in Table 1, where the highest scores in each column are shown in bold.
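As a rough sketch of this filtered protocol (our own helper names, not the authors' evaluation code), a single query can be scored as follows:

```python
import numpy as np

def filtered_hit_at_n(scores, gold, known, n=10):
    """Filtered P@N for one query: screen known answers out, then rank.

    scores: model scores over all candidates; gold: the test answer's
    index; known: indices of every known correct answer for this query.
    """
    s = scores.astype(float).copy()
    s[list(set(known) - {gold})] = -np.inf     # remove other known facts
    rank = 1 + int(np.sum(s > s[gold]))        # rank of the gold answer
    return float(rank <= n)

rng = np.random.default_rng(0)
hit = filtered_hit_at_n(rng.normal(size=14951), gold=42, known=[7, 42, 99])
```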
We reproduced all of the results of the existing studies (mostly with the released code), although some of them fall below the reported records. For a fair comparison of the models, we cite those numbers from the original publications (marked with ⋆ symbols). Also, it seems that the results reported by Dettmers et al. (2017) only consider the tail entity prediction scenario (without averaging with the head entity prediction results); hence we report two versions of the test results of our model: the averaged version is named GEN(avg.), while the tail entity prediction results are reported with
2 Available online at: https://everest.hds.utc.fr/doku.php?id=en:transe
3 Available online at: https://github.com/TimDettmers/ConvE
4 Available online at: https://snap.stanford.edu/node2vec/
the model named GEN(tail). Besides, we found that our model tends to remember the reverse facts with regard to the triples that have been processed during the training phase. We argue that this is an inherent characteristic of our modeling methodology, since it treats such reverse facts as conceptually correct. Therefore, we also report P@N scores after screening such reverse facts out; this model is named GEN(opt). We consider that under certain practical circumstances it is reasonable to care about such results, because the reverse facts are direct reflections of the known facts, and in many scenarios they are themselves useful and effective facts.
From Table 1 one can see that the performance of ComplEx seems much more competitive than the other models on both of the WordNet subsets; however, according to our tests, TransE and HoIE perform (generalize) more stably than the others across all of the subtasks. Also please note that, after filtering the reverse facts out of the ranking list, we recorded a significant increase in the P@1 score on WN18, which was not observed for the other models. Since most of the semantic relations defined in WordNet are reflexive (Miller, 1995), we believe these results help verify the efficacy of our model framework. Further evidence can be found by looking at the evaluation results on FB15K and FB15K-237, on which our model consistently and significantly outperforms the others for all settings.
The goal of the second evaluation was threefold: (1) to assess the relation prediction performance of our model; (2) to verify the validity of the multi-shot learning framework; (3) to evaluate the quality (representability) of different embedding schemes. To achieve this goal, we carried out the group of experiments depicted in Table 2, where a model name shown in parentheses indicates that the test is based on the embeddings generated by that model, re-trained within our framework for fair comparison. For example, before testing the GEN(TransE) model, we need to train a GEN model with TransE embeddings; the only difference is that the pre-trained embeddings are not updated during the training process, so that the quality of the different embedding schemes can be assessed more objectively. The results of GEN(HoIE) were obtained similarly from the pre-trained HoIE embeddings. The pre-trained word2vec embedding5 and GloVe embedding6 are obtained from the publicly available dictionaries released by Google and the Stanford NLP Group respectively for research purposes, which have also been heavily studied by recent research. For entities and relations consisting of many words, we use the weighted sum of the word embeddings as their distributed representation in the test. The three models listed at the bottom of Table 2 demonstrate the one-shot learning capability of GEN; for instance, the results of GEN(h, r ⇒ t) were obtained by only considering the query (h, r, ?) during the training stage.
From these studies, the following conclusions can be drawn. (1) The performance of GEN on relation prediction tasks has been demonstrated. However, it seems that this strong performance mainly comes from our GEN framework, under which the predictive capability of a variety of
5 Available at: https://code.google.com/archive/p/word2vec; version: GoogleNews-vectors-negative300.
6 Available at: https://nlp.stanford.edu/projects/glove/; file version: glove.42B.300d.
embeddings can be enhanced. Considering the ratio of the number of facts to the number of relations involved, this problem seems much easier than the link prediction problem. (2) The validity of the multi-shot framework has been verified, since each of the one-shot GEN models performs significantly worse than the multi-shot model in almost all the tests, except that in relation prediction tasks GEN(h, t ⇒ r) performs comparably to GEN; this is probably because it was exclusively trained for that task, which is prone to overfitting the data. (3) Compared with their performance on link prediction tasks, we argue that the embeddings generated by GEN are probably more representative and informative than the other embedding schemes, for which we provide more empirical (visual) evidence in Appendix C.
4.2 EVALUATION ON GRAPH BASED MULTI-LABEL CLASSIFICATION TASKS
In the previous section, the term “knowledge graph” was used to refer to a multi-relational database, in which the entities are engaged in one or more heterogeneous relations, meaning that the relations related to an entity may range over different domains. In this section, we consider the problem of embedding learning on another type of graph: the homogeneous graphs (networks), in which the entities are engaged in a single specific relationship. This is a natural structure people use to model the physical world, such as the various social networks and biological information systems. In this study, we consider it a generalized form of the knowledge graph, and attempt to come up with a general-purpose framework that can be used for embedding learning on different graphs.
To verify the validity of the proposed model, we evaluate GEN by comparing its performance on some benchmark multi-label classification tasks with the state-of-the-art DeepWalk and node2vec models. Besides, we also report results on TransE and HoIE embeddings for comparison purposes; the supervised models used for multi-label classification are identical to each other (only the embeddings differ). For fair comparison, all of the results with regard to DeepWalk (Perozzi et al., 2014) and node2vec (Grover & Leskovec, 2016) are cited from their original sources.
Following the convention of previous authors, we randomly sample a portion of the labeled nodes as the training set (the rest are used for testing); we repeat this process 9 times (with the training ratio increased from 10% to 90%), and report two averaged measures (w.r.t. recall, precision, and F1-measure) for each test, namely macro-average and micro-average. The Macro-F1 weights all categories equally regardless of how many labels belong to them, while the Micro-F1 weights all labels equally, thus favouring the performance on common categories.
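For reference, a minimal sketch of these two measures (our own implementation, not the authors'; the label counts are illustrative):

```python
import numpy as np

def macro_micro_f1(y_true, y_pred):
    """Macro-F1 weights every category equally; Micro-F1 pools all labels."""
    f1s, tp_sum, fp_sum, fn_sum = [], 0, 0, 0
    for c in range(y_true.shape[1]):
        tp = int(np.sum((y_true[:, c] == 1) & (y_pred[:, c] == 1)))
        fp = int(np.sum((y_true[:, c] == 0) & (y_pred[:, c] == 1)))
        fn = int(np.sum((y_true[:, c] == 1) & (y_pred[:, c] == 0)))
        f1s.append(2 * tp / max(2 * tp + fp + fn, 1))
        tp_sum, fp_sum, fn_sum = tp_sum + tp, fp_sum + fp, fn_sum + fn
    macro = float(np.mean(f1s))
    micro = 2 * tp_sum / max(2 * tp_sum + fp_sum + fn_sum, 1)
    return macro, micro

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, 39))   # 39 labels, as in BlogCatalog
y_pred = rng.integers(0, 2, size=(100, 39))
print(macro_micro_f1(y_true, y_pred))
```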
Numerical results are presented in Tables 3 and 4 respectively; the highest scores in each column are presented in bold face. From Table 3 one can see that the performance of DeepWalk proves much more competitive than the other models when labeled data is sparse, but GEN still consistently outperforms it when given 50% of the data, which demonstrates the validity of the proposed embedding learning framework for modeling author connections on social networks. Next, we investigate the performance of our model on even sparser graphs, i.e. the Protein-Protein Interaction network. Table 4 shows that GEN performs consistently and significantly better than the other baselines. In fact, when trained with only 20% of the labeled proteins, GEN performs significantly better than the other approaches given 90% of the data. We argue that this strong performance not only indicates that our model is flexible enough for biological networks, but also provides new insights into their underlying biological mechanisms. Also please note that the Macro-F1 scores in Tables 3
and 4 demonstrate that, compared with other embedding schemes, GEN performs more stably (and better) in both common and rare categories, which indicates that the embeddings generated by GEN are probably more representative and informative than the other solutions; thus the supervised model built on top of them is less vulnerable to global under-fitting and local over-fitting.
5 CONCLUSION AND FUTURE WORK
Representation learning of knowledge graphs is a key concern for artificial intelligence and cognitive science. Many types of relations in physical, biological, social and information systems can be modeled with concept (knowledge) graphs. In this paper, we present an efficient, scalable framework for learning conceptual embeddings of entities and relations in generalized knowledge graphs, including homogeneous and heterogeneous graphs. We give evidence that the proposed model learns good representations of all these graphs for knowledge inference and supervised learning. For future work, we plan to investigate more thoroughly the efficacy of the proposed modeling framework with respect to the decomposition of the semantic information conveyed by linked concepts into elementary information, i.e. the four Q&A pairs. Also, we seek to enhance the quality of scientific investigations and theoretical conceptualizations of graph embedding learning in the context of semantic interoperability, since there is usually no possibility of interpreting the embedded information meaningfully and accurately so as to produce useful results as defined by existing algorithms.
ACKNOWLEDGMENTS
We are grateful to the anonymous reviewers for taking the time to read and provide helpful comments.
APPENDIX A: MOTIVATION AND INTUITION
To get an intuitive understanding of the problem, consider the following examples taken from three typical KGs that have been heavily studied by the academic and industrial communities:
• (Elvis Presley, instance of, rock star) : taken from WordNet7, one of the largest online lexical databases of English, in which each distinct concept (called a synset) is interlinked by means of rigidly defined (hence limited) conceptual-semantic or lexical relations.
• (Elvis Presley, /people/person/profession, singer) : taken from Freebase8, which was once the largest collaboratively edited knowledge base (deprecated at this time and absorbed by the Wikidata project), in which named entities are interlinked by means of fine-grained relation types defined in the meta-schema. Due to the loosely-defined nature of the relation types, redundant or alternate facts are allowed to exist simultaneously, such as (Elvis Presley, profession, musician) and (Elvis Presley, profession, actor).
• (Elvis Presley, rdf:type, American rock singers) : taken from YAGO9, one of the largest and most active semantic knowledge bases, developed at the Max Planck Institute for Computer Science in Saarbrücken, which combines the clean taxonomy (relation types) of WordNet with the richness of the Wikipedia category system (classes of entities).
As can be perceived from the above examples, the use of different ontologies can lead to different (and incoherent) relations between the same pair of concepts; similarly, applying different ontologies can lead to diverse kinds of conceptualizations. Therefore, it is (arguably) impractical to rely on word-level embeddings to precisely represent knowledge graphs under such diverse conditions, and it is necessary to develop a universal solution, applicable to all ontology infrastructures, for phrase-level embedding learning of the different concept representations.
As mentioned in Section 3, in order to develop a representation learning method that is flexible enough for modeling different types of knowledge graphs, the most important problems to be faced are associated with how to define the optimization problem and how to solve it. According to our survey, most state-of-the-art models, including the translating models derived from TransE (Bordes et al., 2013; Lin et al., 2015), the latent semantic models derived from RESCAL (Nickel et al., 2011; 2016b), and the neural network models derived from the NTN (Socher et al., 2013), all try to frame the graph embedding learning problem as a supervised binary classification problem, in which the optimization objectives are defined in the form of a relation-specific cost function of the entity and/or relation embeddings, which is then solved with a stochastic gradient descent (SGD) process. Typical criteria used to evaluate the cost functions include the logistic loss and the pairwise margin-based criterion, and the negative samples used for training the model are usually sampled from the complement of the knowledge graph based on the open world assumption (Drumond et al., 2012). However, we doubt that there are many situations where such modeling strategies would have theoretical and practical disadvantages.
Firstly, we speculate that the reason why most previous studies did not consider the first and second queries simultaneously (see Section 3) is probably the difficulty of modeling the inverse semantic relatedness of the entities from the given fact. In other words, shall we use the embedding of r to represent its reverse r′? If we do so, it seems that it will inevitably lead to semantic paradoxes like: Presley’s profession is Presley, since from the model’s perspective there is no difference between the entity Presley and other entities that may appear on both sides of the relation profession. Considering the sparsity of the knowledge graph, models trained with limited facts would very likely tend to give higher scores to the entities that have been “seen in the right place”.
In order to solve this problem, we propose to model the facts conceptually instead of concretely (or literally, syntactically), which means that we will focus on the semantic meanings of the embeddings (of the entities and relations), rather than their syntactic features. Such a conceptual embedding scheme allows us to unify the representation of a relation (r) and its reverse counterpart (r′), and to accommodate the lexical variety in use across various knowledge bases.
7 http://wordnet.princeton.edu
8 https://developers.google.com/freebase
9 https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago
The intuition is that, for any given fact (h, r, t), one would instantly recognize the bidirectional semantic connection between h and t, without needing to translate it into (t, r′, h) explicitly in his/her mind. We believe this is crucial for efficient utilization of the structural information of KGs for representation learning; empirical evidence is provided in Section 4 and Appendix C, respectively.
Secondly, we propose to use unsupervised learning techniques for graph embedding learning tasks, because: (1) Presently, almost all of the large-scale knowledge graphs are extremely sparse, which unavoidably degrades the quality and reliability of supervised learning algorithms; considering the relation-specific solutions that dominate current research, the situation might get even worse. (2) Selecting negative examples for pair-wise training is tricky and expensive, since in practice it is very hard to generate a “proper and informative” negative sample for each positive example. For example, when learning from the fact (Einstein, employer, IAS), the false fact (Einstein, employer, Stanford) would seem to be more reasonable and informative than (Einstein, employer, singer), if the objective is to further improve the predictive capability of the model to discriminate between similar objects.
To solve the data sparsity problem, we propose to model each fact as a short sentence, so that the entire KG can be regarded as a huge document and processed by unsupervised encoder-decoder neural models, which have been demonstrated to be efficient and useful for concept learning from large-scale, feature-sparse data (Sutskever et al., 2014). In order to avoid the sampling bias due to the selection of uninformative entities, we propose to use the softmax cross-entropy loss as the measure of predictive discrepancy for model training, because its probabilistic interpretation is more objective than the squared or logistic errors conventionally used in this area, and it has been proven to be convex for the MLP we use in this paper (Bengio et al., 2005).
APPENDIX B: BACKGROUND INFORMATION AND IMPLEMENTATION DETAILS
B.1 DATASETS
WN18 is a subset of WordNet, which contains mostly conceptual-semantic and lexical relations, and whose entities are organized in a strictly hierarchical manner. FB15k is a subset of Freebase, which contains facts gathered from Wikipedia, mostly focused on the topics of movies and sports.
These datasets have served as de facto benchmarks for comparative evaluation; however, recent research (Toutanova & Chen, 2015; Dettmers et al., 2017) shows that the test sets of WN18 and FB15k contain many reversed triples already present in the training set, i.e., (h, r, t) versus (t, r, h), which we consider would favor our model over the one-shot alternatives.
Therefore, we also provide results on FB15k-237, introduced by Toutanova & Chen (2015), a subset of FB15K in which reversing relations are removed, and we test on WN18RR, provided by Dettmers et al. (2017), a new sample of WordNet free of reverse duplicates.
The multi-relational data sampled from WordNet and Freebase can be seen as typical heterogeneous graphs; in order to verify the generality of the developed model, we also perform evaluation in the multi-label classification setting on some typical homogeneous graphs.
BlogCatalog is a social network sampled from the BlogCatalog website, which contains only one relationship: the social connection between blog authors, while the labels represent the topic categories of interest provided by the bloggers. Protein-Protein Interactions is a biological network sampled from the PPI network for Homo Sapiens, which also contains only one relationship: the existence of interactions between proteins, while the labels represent the biological states of the proteins. In the training sets of these graph corpora, every entity (node) is assigned one or more labels from a finite set, and the task is to predict the labels of the nodes in the test set.
The statistics of these data sets are summarized in Table 5.
B.2 EXPERIMENTAL SETUP
We optimized the hyper-parameters for all the datasets via extensive grid search and selected the model with the best filtered P@10 score on the validation set. Hyper-parameter ranges for the grid search were the following: embedding dimension d in {50, 100, 200, 300}, hidden layer dimension k in {256, 512, 1024, 2048}, MLP dropout rate p in {0.0, 0.1, 0.2, 0.3}, learning rate η in {0.001, 0.01, 0.1, 1, 5, 10}, learning rate decay λ in {0.7, 0.75, 0.8, 0.85, 0.9, 0.95}. In this study, we use the following combination of parameters for all of the graph embedding learning tasks:
• E CELLS: {d : 200, k : 2048, p : 0.2, η : 5, λ : 0.9}.
• R CELLS: {d : 200, k : 512, p : 0.2, η : 5, λ : 0.9}.
• Mini-batch settings: {batch size : 512, epochs : 50}.
For multi-label classification tasks, we implement a single layer perceptron model for multi-task learning with {k : 128, η : 0.1, λ : 0.9}, selected through grid search using the best averaged Macro-F1 score on a validation set randomly sampled from the labeled nodes.
APPENDIX C: INVESTIGATING AND VISUALIZING THE EMBEDDING SCHEMES
In this section, we provide a qualitative analysis of four typical embedding schemes (GEN, HoIE, TransE and word2vec), with the intention of better understanding the connections between existing graph embedding schemes, and of highlighting areas that remain poorly understood for further investigation. The reason we choose these models (word2vec excepted) is that, according to our tests, they have proven to be efficient and scalable to large-scale problems, and also exhibit good generalization ability on real data sets. We also consider the word2vec embeddings because we found that, with the help of our multi-shot learning model, they achieve state-of-the-art performance on most of the knowledge base completion tasks (see Section 4), which is interesting and worth some consideration (it probably indicates a promising potential for transfer learning).
The first experiment aims to show that the graph embeddings generated by GEN differ from those of the other solutions. We calculate the cosine similarities for each pair of the 1,345 relations in FB15K w.r.t. the four embedding schemes respectively, and compare their top-N ranking lists (each a set of relation pairs for the corresponding embedding scheme) through Venn diagrams, as illustrated in Fig.3.
Figure 3 reveals very different clustering patterns between GEN and the other alternatives in their corresponding embedding spaces. What is particularly interesting is that, although the TransE model is inspired by and heavily reliant on the spatial distribution of the word2vec embeddings (Bordes et al., 2013), the two are, in fact, not similar at all. On the contrary, the results of TransE and HoIE share a lot of similarities: in their top-300 and top-500 lists, almost half of the relation pairs are contained in the intersection. This probably indicates that the translating embedding hypothesis (Bordes et al., 2013) is theoretically similar in nature to the holographic embedding hypothesis (Nickel et al., 2016b) when used for graph modeling. This is not an easily testable hypothesis; we consider it an open question, which we hope to explore further in the future.
The goal of the second experiment is to verify the claim that the embeddings generated by GEN are more representative and informative than other embedding schemes. Here we provide a case study on a randomly selected relation from FB15K, namely “/location/location/time zones”. There are 137 triples related to this relation (#425) in the test set; all of the head entities are names of countries or regions, and the tail entities are the corresponding time zones. The heads are uniquely different from each other, while only 10 distinct time zones exist among the tails.
We plot all of the 137 triples in Fig.4, in which (Fig.4a and Fig.4b) the input multi-dimensional vectors are projected onto a 2-dimensional subspace spanned by x and y using principal component analysis (PCA); we choose the first two principal components as the principal axes. In Fig.4a, the input is the concatenation of the head and tail entity of each triple, i.e. (h ⊕ t), with the intention of investigating the patterns of such feature vectors for relation prediction tasks. Hence, we choose the names of the tails as legend labels. As can be seen from Fig.4a, the feature vectors of the 137 triples show clear clustering tendencies with regard to the categories of their tail entities. Based on this observation, we further plot the hidden layer of the R CELL (which is a 512-dimensional vector in this case) located before the output layer in our GEN model, as depicted in Fig.4b. From Fig.4b one can see that the distance between the data points is amplified, and the distinction becomes more prominent. We plot the cumulative softmax in Fig.4c, in which the X-axis represents the 1,345 types of relations in FB15K and the Y-axis denotes the cumulative softmax values. The curve is obtained by adding up all of the softmax vectors output by GEN with regard to the 137 triples. The single peak observed in Fig.4c clearly exhibits that GEN can make good use of these (concatenated) features to identify the corresponding relations correctly.
For comparison purposes, we also visualize the other three embedding schemes with the same protocol, as illustrated in Fig.5. Since the corresponding models do not use an MLP for relation prediction, we cannot plot their “hidden state” and “cumulative softmax” for the second and third subplots; hence we choose to visualize their predictive criterion vectors and output ranking lists instead. The processing is consistent with the protocols of the original literature. Specifically, for TransE, we plot (t − h) as the hidden state for relation prediction, and calculate the ℓ1-norm distance |r_i − (t − h)|_1 w.r.t. each relation r_i in FB15K; then we process the distance vector with the softmax function to calculate the cumulative softmax. For HoIE, we plot the circular correlation vector (h ⋆ t) as the hidden state, and calculate the cosine similarity (h ⋆ t) · r_i w.r.t. each relation r_i in FB15K; then we use the obtained (cosine) vector to calculate the cumulative softmax. For the word2vec embeddings, we use the same protocol as for TransE.
From Fig.5 one can see that the concatenated embedding vectors of TransE and HoIE show a similar clustering pattern to the GEN case, which helps explain why, under our multi-shot learning framework, the embeddings generated by these models perform similarly in relation prediction tasks (see Table 2). It also provides evidence for our conjecture that these two embedding schemes could be inherently similar to each other. From their criterion vectors (the second subplot for each model), one can see that their clustering pattern is not as clear as in the GEN case, which helps explain their performance on relation prediction tasks (as shown in the third subplot)10. We consider this solid support for the validity of the proposed multi-shot learning framework.
More evidence can be found in our source code release, which will be made publicly available on GitHub to encourage reproducible research, after the anonymous review.
10 The alternative peaks appearing in subplots Fig.5c and Fig.5f are:
• #891: “/base/schemastaging/phone open times/time zone”, and
• #583: “/time/time zone/locations in this time zone”. | 1. What is the main contribution of the paper on multi-relational graph embedding?
2. What are the strengths and weaknesses of the proposed approach, particularly in its sequential training mechanism and comparison to prior works?
3. Do you have any concerns regarding the method's ability to capture the relationships between entities and relations?
4. How does the reviewer assess the novelty and significance of the paper's contributions?
5. Are there any questions or imprecisions in the paper that need further clarification or improvement? | Review | Review
This paper tackles the task of learning embeddings of multi-relational graphs using a neural network. As in much previous work, the proposed architecture works on triples (h, r, t) with h, t entities and r the relation type.
Despite interesting experimental results, I find that the paper carries too many imprecisions as is.
* One of the main originalities of the approach is the ability, for a given input triple, to train by sequentially removing in turn the head h, then the tail t, and finally the relation r (called multi-shot in the paper). However, most (if not all) approaches learning embeddings of multi-relational graphs also create multiple examples given a triple, and that at least since "Learning Structured Embeddings of Knowledge Bases" by Bordes et al. 2011, which was predicting h and t (not r). The only difference is that here it is done sequentially while most methods sample one case each time. Not really meaningful, or at least not proved meaningful here.
* The sequential/RNN-like structure is unclear and it is hard to see how it relates to the data.
* Writing that the proposed method is "unsupervised, which is distinctly different from previous works" is not true or should be rephrased. The only difference comes from the prediction function (softmax and not ranking, for instance) and the loss used. But none of the methods compared in the experiments use more information than GEN (the original graph). GEN is not the only model using a softmax, by the way.
* The fact of predicting a fact and its reverse indistinctly seems rather worrying to me. Predicting that "John is_father_of Paul" or that "John is_child_of Paul" is not the same..! How is it assessed that a prediction is conceptually correct? Using types?
* The bottom part of Table 2 is surprising. How come for the task of predicting Head, the model trained only at predicting heads (GEN(t,r => h)) performs worse than the model trained only at predicting tails (GEN(h,r => t))? |
ICLR | Title
Generalized Graph Embedding Models
Abstract
Many types of relations in physical, biological, social and information systems can be modeled as homogeneous or heterogeneous concept graphs. Hence, learning from and with graph embeddings has drawn a great deal of research interest recently, but only ad hoc solutions have been obtained thus far. In this paper, we conjecture that the one-shot supervised learning mechanism is a bottleneck in improving the performance of graph embedding learning algorithms, and propose to extend it by introducing a multi-shot unsupervised learning framework. Empirical results on several real-world data sets show that the proposed model consistently and significantly outperforms existing state-of-the-art approaches on knowledge base completion and graph-based multi-label classification tasks.
1 INTRODUCTION
Recent studies have highlighted the importance of learning distributed representations for symbolic data in a wide variety of artificial intelligence tasks (Bengio et al., 2013). Research on word embeddings (Mikolov et al., 2013) has led to breakthroughs in many related areas, such as machine translation (Bahdanau et al., 2015), question answering (Xiong et al., 2016), and visual-semantic alignments (Karpathy & Fei-Fei, 2017). However, learning to predict for large-scale knowledge graphs (KGs) remains a challenging problem, largely due to the diversity of the ontologies and the semantic richness of the concepts, which makes it hard to generate proper and universally applicable graph embeddings based simply on word-level embeddings (Cai et al., 2017).
Being able to generate reasonable and accurate distributed representations for large-scale knowledge graphs would be particularly valuable, in that it may help predict unobserved facts from limited concepts, uncover gaps in our knowledge, and suggest new downstream applications, which clearly reflects the central concerns of artificial intelligence (Nickel et al., 2016a; Henaff et al., 2017). Therefore, massive attention has been devoted in recent years to the potential of embedding entities and relationships of multi-relational data in low-dimensional vector spaces (Wang et al., 2017).
In this paper, we consider the problem of developing a simple and efficient model for learning neural representations of generalized knowledge graphs, including multi-relational heterogeneous graphs and more specifically defined homogeneous graphs (such as social and biological networks).
Following the pioneering work of Nickel et al. (2011) and Bordes et al. (2013), almost all of the state-of-the-art approaches model the graph embedding learning problem as a supervised binary classification problem; their objective functions are usually one-shot (single purpose). We argue that prior research in this area might have been affected and biased by “established priors”, which prevents the formulation of a methodology objective enough to cope with highly sparse knowledge graphs. We propose to handle the embedding learning problem of knowledge graphs with an unsupervised neural network model, called the Graph Embedding Network (GEN). The proposed model consists of three simple multi-layer perceptron (MLP) cells; each cell operates in response to a different “query” with regard to the input fact, and the cells are trained sequentially. The formulation of the model is inspired by the neural sequence-to-sequence (seq2seq) model (Sutskever et al., 2014), except that we attempt to use the MLP cells to mimic the sequence learning capability of the recurrent neural network (RNN), to model the semantic structure of knowledge graphs.
The major contributions of this paper are: (1) we propose GEN, a novel and efficient multi-shot framework for embedding learning in generalized knowledge graphs; (2) we show how GEN is in accordance with established principles in cognitive science, providing flexibility in learning representations that works on graphs conforming to different domains.
2 RELATED WORKS
During the last few years, an increasing amount of research attention has been devoted to the challenge of representation learning on knowledge graphs, especially focused on the potential benefits for the knowledge base completion (KBC) tasks, including the link prediction problem and the relation prediction problem. Among which, the relation translating model TransE (Bordes et al., 2013), the tensor factorization based semantic matching model RESCAL (Nickel et al., 2011), and the neural network based semantic matching model ER-MLP (Dong et al., 2014; Nickel et al., 2016b), are probably the most heavily studied from the methodology perspective. For good surveys on such embedding learning algorithms, see Nickel et al. (2016a), Wang et al. (2017), and Cai et al. (2017).
Broadly speaking, related works can be divided into two categories, linear and non-linear, according to whether the output embedding has a reasonable linear interpretation. State-of-the-art linear models include TransE, RESCAL, TransH (Wang et al., 2014), DistMult (Yang et al.), and ANALOGY (Liu et al., 2017), while popular non-linear models include ER-MLP, ComplEX¹ (Trouillon et al., 2016), HoIE (Nickel et al., 2016b), ProjE (Shi & Weninger, 2017) and ConvE (Dettmers et al., 2017). The proposed GEN model is also a non-linear model.
The graph embedding learning model that is most closely related to this work is probably the ProjE model, which makes use of an embedding projection function defined as:
h(r, t) = g(w_0 · f(w_1^r r + w_1^t t + b_1) + b_0)

where h, r, t denote the embedding vectors, f(·) and g(·) are non-linear activation functions, w_0, w_1^r and w_1^t are learnable weight matrices, and b_0 and b_1 are bias vectors. The output ranking scores of entity h with regard to the given query (?, r, t) can be obtained through a softmax function:
Score(h_i, r, t) = softmax{h(r, t)}_i

However, as one can see from the above functions, the ProjE model is built upon the query (?, r, t) and hence is a one-shot solution, which is distinctly different from our GEN model. Another difference lies in the definition of the objective loss function: the ProjE model uses a (selective) cross-entropy loss based on the open-world assumption, while our model uses a simplified cross-entropy loss based on the closed-world assumption. In order to save computation cost, the ProjE model introduces a negative sampling process, which risks introducing additional bias. Besides, its candidate sampling process is time-consuming and hard to parallelize.
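For illustration, the following PyTorch sketch shows one plausible reading of such a one-shot projection scorer for the query (?, r, t). The class and parameter names are ours, and the diagonal combination weights and the use of the entity embedding matrix as the output projection are our assumptions, not the exact ProjE implementation.

```python
import torch
import torch.nn as nn

class OneShotProjectionScorer(nn.Module):
    """Sketch: combine r and t, apply a non-linearity, then rank all head
    candidates against the resulting hidden vector (single query at a time)."""
    def __init__(self, num_entities, dim):
        super().__init__()
        self.entities = nn.Embedding(num_entities, dim)
        self.w_r = nn.Parameter(torch.randn(dim))  # diagonal combination weights
        self.w_t = nn.Parameter(torch.randn(dim))
        self.b1 = nn.Parameter(torch.zeros(dim))

    def forward(self, r_emb, t_emb):
        hidden = torch.tanh(self.w_r * r_emb + self.w_t * t_emb + self.b1)
        scores = self.entities.weight @ hidden     # one score per candidate head
        return torch.softmax(scores, dim=-1)
```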
Another model that is closely related to the GEN model is the ER-MLP model, which can be interpreted as creating a representation for each element of a triple and deriving its existence from this representation (Nickel et al., 2016a). The ER-MLP model can be defined as:
Score(h, r, t) = w^T g{ C^T (h ⊕ r ⊕ t) }

where the symbol ⊕ denotes the vector concatenation operator, the vector w and matrix C are global weights shared by all the entities and relations, and g(·) is an element-wise non-linear activation function. This model is built upon the fourth query as defined in Section 3; it is a supervised solution, which is quite different from ours. One well-known disadvantage of the ER-MLP is that, even when properly regularized, it is still easily prone to over-fitting on knowledge graph datasets (Nickel et al., 2016b); therefore we do not compare with it in this work, but instead with the ProjE model.
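A minimal PyTorch sketch of this scoring function is given below; the hidden size and the choice of tanh for g(·) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ERMLP(nn.Module):
    """Sketch of the ER-MLP scorer: concatenate (h, r, t) embeddings, apply a
    shared hidden layer, and map to a single triple-existence score."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.C = nn.Linear(3 * dim, hidden, bias=False)  # shared weight matrix C
        self.w = nn.Linear(hidden, 1, bias=False)        # global weight vector w

    def forward(self, h, r, t):
        x = torch.cat([h, r, t], dim=-1)                 # h ⊕ r ⊕ t
        return self.w(torch.tanh(self.C(x))).squeeze(-1)
```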
As mentioned before, the primary motivation of this study is to develop a graph embedding model that is universally applicable to a wide variety of situations. In order to verify the validity of our solution on heterogeneous networks, we further test it on multi-label network classification tasks for social networks (BlogCatalog) and biological networks (Protein-Protein Interaction), and compare our results with two state-of-the-art techniques, namely DeepWalk (Perozzi et al., 2014) and node2vec (Grover & Leskovec, 2016). Both are derived directly from the word2vec model (Mikolov et al., 2013): they create node embeddings of the graphs based on the skip-gram framework and train the model with a corpus generated through random walks on the graph. However, it has been shown that random walk sampling can be insufficient for supervised learning tasks in sparse network environments (Liu et al., 2016). Our results support this conjecture; the experimental results on benchmark tests provide strong evidence that our model performs much better.
¹ The ComplEX model can be seen as an extension of the DistMult model in the complex space; although no nonlinear transformations are applied, we treat it as a non-linear model here.
3 APPROACH AND MODEL ARCHITECTURE
Most of the prevalent semantic knowledge databases are built upon the Resource Description Framework (RDF), in which facts are represented and stored in the form of SPO (Subject, Predicate, Object) triples. Following convention, we use the symbol (h, r, t) to represent a unit of facts, in which h, r and t denote the head entity, the relation, and the tail entity, respectively.
The primary motivation of this paper is to develop a representation learning method that is suitable and flexible enough for modeling different types of knowledge graphs from a universal perspective. To achieve this objective, the most important problems to be faced are how to define the optimization problem and how to solve it. As mentioned above, previous works only consider a one-shot mapping from the embedding space to the criterion space, which, we conjecture, is vulnerable to losing a considerable amount of the structured semantic information. For instance, given a fact (Elvis Presley, profession, singer), one could immediately learn the following queries:
• Q1: What is the profession of Elvis Presley? A1: singer.
• Q2: Can you name a person whose profession is singer? A2: Elvis Presley.
• Q3: What is the possible relationship in between Elvis Presley and singer? A3: profession.
• Q4: Is it true that Elvis Presley’s profession is singer? A4: Yes.
In fact, this is the actual way we humans learn the meaning of concepts expressed by a statement. These self-labeled queries reflect the following modeling philosophies: (1) (h, r) ⇒ t; (2) (t, r) ⇒ h; (3) (h, t) ⇒ r; (4) (h, r, t) ⇒ T/F; respectively. Each of these has been adopted in isolation by previous research; however, none have systematically investigated the effect of combining all of this information. In this section, we propose a novel multi-shot model to solve this problem. For a more detailed discussion of the motivation and intuition behind this model, see Appendix A.
3.1 OVERVIEW OF THE MULTI-SHOT LEARNING FRAMEWORK
The proposed model (GEN) is designed to process data in sequential form. As shown in Fig.1, GEN consists of three components (cells), each corresponding to an individual query with regard to the given input triple. In this study, we propose to use a 2-layer MLP network to deal with the parameter estimation problem for each query individually; although it can be substituted by any other one-shot model, we only report the test results on MLP cells for simplicity. In training mode, the training set is fed into the system sequentially, and each triple is decomposed into three self-labeled queries: (h, r, ?) ⇒ t, (?, r, t) ⇒ h, and (h, ?, t) ⇒ r. Each query is fed into the corresponding cell in order to update the parameters. Since, for any given triple, our model reads it from three different perspectives, we call it a “multi-shot model” to distinguish it from other related works.
Parameters of the model can be logically divided into two parts. Firstly, the distributed representations of the entities and the relations are defined in the same d-dimensional space, which, as shown in Fig.1, are organized together as a learnable dictionary of embeddings. Secondly, there exist two types of MLP cells in the model: one deals with the entity prediction tasks, the other is responsible for the relation prediction tasks; they are marked as “E CELL” and “R CELL” respectively. Each individual cell has its own parameter set {W0, b0; W1, b1} representing a certain network structure. Please note that two E CELLs are introduced to learn from the labeled entities, based on the queries (h, r, ?) and (?, r, t). According to our modeling hypothesis, which claims that all of the relations should be treated conceptually instead of syntactically, we propose to share parameters between the E CELLs. The intuition is to let them share their memory of each known fact from both sides of the relation, so that after training with enough knowledge, the E CELLs will eventually be able to learn how to correctly distinguish between valid and invalid entities for the given queries.
Another theoretical explanation of the GEN model is given below. We consider the proposed model a variant of the RNN model, or more precisely, a neural seq2seq model, as illustrated in Fig.2. When training with the graph (a “document” of triples), the GEN model is organized as a stacked RNN, which consists of two chains: the E CELL chain and the R CELL chain. For any given input (h, r, t), each of the cells works as an individual seq2seq model according to its respective query. For instance, the R CELL is responsible for the query (h, ?, t) ⇒ r: it takes the embeddings of h and t as input and r as its target label, and the parameters (memory) of the R CELL are updated through back-propagation according to the discrepancy between the prediction (in this case the softmax vector) and the desired label r. Therefore, the proposed model is completely unsupervised, which is distinctly different from previous works. Also, please note that due to the lack of semantic connections between adjacent triples in the input sequence, we do not consider “long term memory” in this model, as is usually done in real RNN models. Therefore, there exists only one “global memory” in this model: the parameters of the two types of cells, which are responsible for “learning to remember” the rules of how the knowledge graph is constructed.
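The following PyTorch sketch illustrates one multi-shot training step as we understand it. The input ordering for the reverse query, the shared optimizer, and all function names are our own assumptions.

```python
import torch
import torch.nn.functional as F

def multi_shot_step(h_idx, r_idx, t_idx, e_cell, r_cell, emb_e, emb_r, opt):
    """One GEN training step: decompose the fact (h, r, t) into three
    self-labeled queries and update the (shared) cells sequentially."""
    queries = [
        (e_cell, emb_e(h_idx), emb_r(r_idx), t_idx),  # (h, r, ?) => t
        (e_cell, emb_e(t_idx), emb_r(r_idx), h_idx),  # (?, r, t) => h, shared E CELL
        (r_cell, emb_e(h_idx), emb_e(t_idx), r_idx),  # (h, ?, t) => r
    ]
    for cell, a, b, target in queries:
        opt.zero_grad()
        logits = cell(torch.cat([a, b], dim=-1))      # cells output logits
        loss = F.cross_entropy(logits, target)        # softmax applied inside
        loss.backward()
        opt.step()
```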
3.2 DEFINITION OF THE GEN CELLS
The network structures of the E CELLs and the R CELLs are quite similar; the only difference is that they have different numbers of neurons in the hidden layer and the output layer, which are defined as hyper-parameters as shown in Fig.1. For simplicity, we only present the implementation details of the E CELLs here. In order to answer the query (h, ?, t) ⇒ r, the hidden layer of the E CELL takes input from the embedding dictionary according to the labels h and r; the hidden layer is defined as:
x_1 = f(W_o^e · x_0 + b_0)   (1)
where x_0 = [h ⊕ r] denotes the concatenation of the embedding vectors; hence x_0 is a 2d × 1 real-valued vector. W_o^e is a k × 2d weight matrix, b_0 is a k × 1 bias vector, k denotes the number of neurons in the hidden layer, and f(·) is a non-linear activation function; in this work, we use the
rectified linear unit (ReLU) function for all the experiments (Nair & Hinton, 2010). The output layer takes the hidden state vector x1 as input, mapping it to the target label space:
ŷ = g(W_1^e · x_1 + b_1)   (2)
where W_1^e is an N_e × k weight matrix, b_1 is an N_e × 1 bias vector, N_e denotes the number of entities in the dictionary, and g(·) denotes the softmax function. Hence, ŷ is an N_e × 1 probability vector, which means that, when training the model with a given fact (h, r, t) to answer the query (h, r, ?), the prediction output by the model is a probability distribution over all possible candidate entities. The cross-entropy loss with regard to the prediction is then defined as:
L(ŷ) = −∑_{i=1}^{N_e} y[i] log(ŷ[i]) + (1 − y[i]) log(1 − ŷ[i])   (3)
where y denotes the ground truth, which is a one-hot vector exclusively activated by t. To speed up the stochastic convex optimization process, we use a mini-batch setting, and rewrite the averaged cross-entropy loss over a batch of N samples in the following simplified form:
L(y) = −(1/N) ∑_{i=1}^{N} log(ŷ_i[t_i])   (4)
where the subscript i denotes the i-th sample of the batch and t_i represents the index of the label t in the ground-truth vector of that sample. Eq.4 is computationally efficient; however, it tends to ignore the existing knowledge for the query (h, r, ?) other than the current fact (h, r, t), which has been proven useful for improving performance (Shi & Weninger, 2017). Our experimental results show, though, that the impact of this problem can be controlled by means of collaborative correction with related facts under our model framework, which further demonstrates the validity of our modeling assumptions. Hopefully, the lessons learned for designing reasonable and computationally efficient cost functions in this study can serve as exemplars for future work.
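Below is a minimal PyTorch sketch of an E CELL implementing Eqs. 1–2, together with the simplified batch loss of Eq. 4; since Eq. 4 coincides with the standard averaged cross-entropy over logits, the built-in loss can be used directly. The hyper-parameter defaults are illustrative.

```python
import torch.nn as nn
import torch.nn.functional as F

class ECell(nn.Module):
    """Sketch of an E CELL (Eqs. 1-2): a 2-layer MLP over x_0 = [h ⊕ r]."""
    def __init__(self, dim, k, num_entities, dropout=0.2):
        super().__init__()
        self.fc0 = nn.Linear(2 * dim, k)       # W_o^e, b_0
        self.fc1 = nn.Linear(k, num_entities)  # W_1^e, b_1
        self.drop = nn.Dropout(dropout)

    def forward(self, x0):                     # x0 shape: (N, 2d)
        x1 = self.drop(F.relu(self.fc0(x0)))   # Eq. 1
        return self.fc1(x1)                    # logits; softmax in the loss

def batch_loss(logits, targets):
    """Simplified averaged cross-entropy of Eq. 4: mean of -log softmax
    probability at the index of the true tail entity."""
    return F.cross_entropy(logits, targets)
```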
4 EXPERIMENTAL RESULTS
We evaluate the proposed model on two distinctly different types of graph embedding learning tasks. Firstly, we evaluate our model on knowledge base completion tasks with the conventional datasets FB15K and WN18², and their upgraded versions FB15k-237 and WN18RR³. Secondly, we evaluate our model on graph-based multi-label classification tasks with two benchmark datasets from the complex network research area: BlogCatalog and Protein-Protein Interaction (PPI)⁴. Background information on the datasets and the implementation details of our model are given in Appendix B.
4.1 EVALUATION ON KNOWLEDGE BASE COMPLETION TASKS
The aim of the first evaluation was to assess the performance of the proposed model in link prediction tasks, by comparing it with other state-of-the-art approaches. We report the filtered P@N scores following the protocol proposed by Bordes et al. (2013), which means that all of the known facts are screened out from the ranking list before calculating the hit statistics. The numerical results are presented in Table 1, where the highest scores in each column are presented in bold.
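For clarity, the following NumPy sketch shows the filtered ranking protocol for a single query; the function name and the convention of masking competing known answers with −∞ are our own.

```python
import numpy as np

def filtered_hits_at_n(scores, target, known_targets, n=10):
    """Filtered P@N for one query: screen out all other known true answers
    before checking whether the target ranks within the top n."""
    scores = scores.copy()
    for idx in known_targets:
        if idx != target:
            scores[idx] = -np.inf            # remove competing known facts
    rank = 1 + np.sum(scores > scores[target])
    return rank <= n
```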
We reproduced all of the results of the existing studies (mostly with the released code), although some of them fell below the reported records. For a fair comparison of the models, we cite those numbers from the original publications (marked with ⋆ symbols). Also, it seems that the results reported by Dettmers et al. (2017) only consider the tail entity prediction scenario (without averaging with the head entity prediction results); hence we report two versions of the test results of our model: the averaged version is named GEN(avg.), while the tail entity prediction results are reported with
² Available online at: https://everest.hds.utc.fr/doku.php?id=en:transe
³ Available online at: https://github.com/TimDettmers/ConvE
⁴ Available online at: https://snap.stanford.edu/node2vec/
the model named GEN(tail). Besides, we found that our model tends to remember the reverse facts with regard to the triples processed during the training phase. We argue that this is an inherent characteristic of our modeling methodology, since it treats such reverse facts as conceptually correct. Therefore, we also report P@N scores after screening out such reverse facts; this model is named GEN(opt). We consider that under certain practical circumstances it is reasonable to care about such results, because the reverse facts are direct reflections of the known facts, and in many scenarios they are themselves useful and effective facts.
From Table 1 one can see that the performance of ComplEX seems much more competitive than other models on both of the WordNet subsets; however, according to our tests, TransE and HoIE generalize more stably than the others across all of the subtasks. Also, please note that after filtering out the reverse facts from the ranking list, we recorded a significant increase in the P@1 score on WN18, which was not observed in other models. Since most of the semantic relations defined in WordNet are reflexive (Miller, 1995), we believe that these results help verify the efficacy of our model framework. Further evidence can be found in the evaluation results on FB15K and FB15K-237, in which our model consistently and significantly outperforms the others in all settings.
The goal of the second evaluation was threefold: (1) to assess the relation prediction performance of our model; (2) to verify the validity of the multi-shot learning framework; (3) to evaluate the quality (representability) of different embedding schemes. To achieve this goal, we carried out the group of experiments depicted in Table 2, where a model name shown in parentheses indicates that the test is based on the embeddings generated by that model, re-trained with our model for fair comparison. For example, before testing the GEN(TransE) model, we need to train a GEN model with TransE embeddings; the only difference is that the pre-trained embeddings are not updated during the training process, so that the quality of the different embedding schemes can be assessed more objectively. The results of GEN(HoIE) were obtained similarly from the pre-trained HoIE embeddings. The pre-trained word2vec embedding⁵ and GloVe embedding⁶ are obtained from the publicly available dictionaries released respectively by Google and the Stanford NLP Group for research purposes, which have also been heavily studied by recent research. For entities and relations consisting of many words, we use the weighted sum of the word embeddings as their distributed representation for the test. The three models listed at the bottom of Table 2 demonstrate the one-shot learning capability of GEN; for instance, the results of GEN(h, r ⇒ t) were obtained by only considering the query (h, r, ?) during the training stage.
From these studies, the following conclusions can be drawn. (1) The performance of GEN on relation prediction tasks has been demonstrated. However, it seems that such strong performance mainly comes from our GEN framework, under which the predictive capability of a variety of embeddings can be enhanced.
⁵ Available at: https://code.google.com/archive/p/word2vec; version: GoogleNews-vectors-negative300.
⁶ Available at: https://nlp.stanford.edu/projects/glove/; file version: glove.42B.300d.
Considering the ratio of the number of facts to relations involved, this problem seems much easier than the link prediction problem. (2) The validity of the multi-shot framework has been verified, since each of the one-shot GEN models performs significantly worse than the multi-shot model in almost all the tests, except that in relation prediction tasks GEN(h, t ⇒ r) performs comparably to GEN; this is probably because it was exclusively trained for that task, which is prone to overfitting the data. (3) Compared with their performance on link prediction tasks, we argue that the embeddings generated by GEN are probably more representative and informative than other embedding schemes; we provide more empirical (visual) evidence in Appendix C.
4.2 EVALUATION ON GRAPH BASED MULTI-LABEL CLASSIFICATION TASKS
In the previous section, the term “knowledge graph” was used to refer to a multi-relational database, in which the entities are engaged in one or more heterogeneous relations, meaning that the relations related to an entity may range over different domains. In this section, we consider the problem of embedding learning on another type of graph: the homogeneous graphs (networks), in which the entities are engaged in a single specific relationship. Homogeneous graphs are a natural structure people use to model the physical world, such as various social networks and biological information systems. In this study, we consider them a generalized form of knowledge graphs, and attempt to come up with a general-purpose framework that can be used for embedding learning on different graphs.
To verify the validity of the proposed model, we evaluate GEN by comparing its performance on benchmark multi-label classification tasks with the state-of-the-art DeepWalk and node2vec models. Besides, we also report results on TransE and HoIE embeddings for comparison purposes; the supervised model used for multi-label classification is identical in each case (only the embeddings differ). For fair comparison, all of the results with regard to DeepWalk (Perozzi et al., 2014) and node2vec (Grover & Leskovec, 2016) are cited from their original sources.
Following the convention of previous authors, we randomly sample a portion of the labeled nodes as the training set (the rest are used for testing). We repeat this process 9 times (with the training ratio increased from 10% to 90%), and report two averaged measures (w.r.t. recall, precision, and F1-measure) for each test, namely macro-average and micro-average. The Macro-F1 weights all categories equally regardless of how many labels belong to them, while the Micro-F1 weights all labels equally, thus favouring performance on common categories.
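A minimal sketch of this evaluation protocol is shown below, using scikit-learn. Note that we substitute a one-vs-rest logistic regression for the single-layer perceptron actually used in the paper, so this illustrates the split-and-score procedure rather than the exact classifier; `labels` is assumed to be a binary indicator matrix.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import f1_score

def evaluate(embeddings, labels, train_ratio, seed=0):
    """Random split at the given training ratio; report Macro- and Micro-F1
    of a one-vs-rest classifier trained on the node embeddings."""
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(embeddings))
    cut = int(train_ratio * len(idx))
    tr, te = idx[:cut], idx[cut:]
    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
    clf.fit(embeddings[tr], labels[tr])
    pred = clf.predict(embeddings[te])
    return (f1_score(labels[te], pred, average="macro"),
            f1_score(labels[te], pred, average="micro"))
```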
Numerical results are presented in Tables 3 and 4 respectively; the highest scores in each column are presented in bold face. From Table 3 one can see that the performance of DeepWalk proves much more competitive than other models when labeled data is sparse, but GEN still consistently outperforms it when given 50% of the data, which demonstrates the validity of the proposed embedding learning framework for modeling author connections on social networks. Next, we investigate the performance of our model on even sparser graphs, i. e., the Protein-Protein Interaction network. Table 4 shows that GEN performs consistently and significantly better than the other baselines. In fact, when trained with only 20% of the labeled proteins, GEN performs significantly better than other approaches given 90% of the data. We argue that this strong performance not only indicates that our model is flexible enough for biological networks, but also provides new insights into their underlying biological mechanisms. Also, please note that the Macro-F1 scores in Tables 3
and 4 demonstrate that, compared with other embedding schemes, GEN performs more stably (and better) in both common and rare categories, which indicates that the embeddings generated by GEN are probably more representative and informative than other solutions; thus the supervised model built on top of them is less vulnerable to global under-fitting and local over-fitting.
5 CONCLUSION AND FUTURE WORK
Representation learning of knowledge graphs is a key concern for artificial intelligence and cognitive science. Many types of relations in physical, biological, social and information systems can be modeled with concept (knowledge) graphs. In this paper, we present an efficient, scalable framework for learning conceptual embeddings of entities and relations in generalized knowledge graphs, including homogeneous and heterogeneous graphs. We give evidence that the proposed model learns good representations of all these graphs for knowledge inference and supervised learning. For future work, we plan to investigate more thoroughly the efficacy of the proposed modeling framework with respect to the decomposition of the semantic information conveyed by linked concepts into elementary information, i. e., the four Q&A pairs. We also seek to improve the interpretability of the learned embeddings in the context of semantic interoperability, since with existing algorithms there is usually no way to interpret the embedded information meaningfully and accurately enough to produce useful results.
ACKNOWLEDGMENTS
We are grateful to the anonymous reviewers for taking time read and provide helpful comments.
APPENDIX A: MOTIVATION AND INTUITION
To get an intuitive understanding of the problem, consider the following examples taken from three typical KGs that have been heavily studied by the academic and industrial communities:
• (Elvis Presley, instance of, rock star): taken from WordNet⁷, one of the largest online lexical databases of English, in which distinct concepts (called synsets) are interlinked by means of rigidly defined (hence limited) conceptual-semantic or lexical relations.
• (Elvis Presley, /people/person/profession, singer): taken from Freebase⁸, which was once the largest collaboratively edited knowledge base (deprecated at this time and absorbed by the Wikidata project), in which named entities are interlinked by means of fine-grained relation types defined in the meta-schema. Due to the loosely-defined nature of the relation types, redundant or alternate facts are allowed to exist simultaneously, such as (Elvis Presley, profession, musician) and (Elvis Presley, profession, actor).
• (Elvis Presley, rdf:type, American rock singers): taken from YAGO⁹, one of the largest and most active semantic knowledge bases, developed at the Max Planck Institute for Computer Science in Saarbrücken, which combines the clean taxonomy (relation types) of WordNet with the richness of the Wikipedia category system (classes of entities).
As can be perceived from the above examples, the use of different ontologies can lead to different (and incoherent) relations between the same pair of concepts; similarly, applying different ontologies can lead to diverse kinds of conceptualizations. Therefore, it is (arguably) impractical to rely on word-level embeddings to precisely represent knowledge graphs under such diverse conditions, and it is necessary to develop a universal solution, applicable to all ontology infrastructures, for phrase-level embedding learning of the different concept representations.
As mentioned in Section 3, in order to develop a representation learning method that is flexible enough for modeling different types of knowledge graphs, the most important problems to be faced are how to define the optimization problem and how to solve it. According to our survey, most state-of-the-art models, including the translating models derived from TransE (Bordes et al., 2013; Lin et al., 2015), the latent semantic models derived from RESCAL (Nickel et al., 2011; 2016b), and the neural network models derived from the NTN (Socher et al., 2013), all define the graph embedding learning problem as a supervised binary classification problem, in which the optimization objective takes the form of a relation-specific cost function of the entity and/or relation embeddings, solved with a stochastic gradient descent (SGD) process. Typical criteria used to evaluate the cost functions include the logistic loss and the pairwise margin-based criterion, and the negative samples used for training the model are usually sampled from the complement of the knowledge graph based on the open-world assumption (Drumond et al., 2012). However, we believe that there are many situations where such modeling strategies have theoretical and practical disadvantages.
Firstly, we speculate that the reason why most previous studies did not consider the first and second queries simultaneously (see Section 3) is probably the difficulty of modeling the inverse semantic relatedness of the entities from the given fact. In other words, shall we use the embedding of r to represent its reverse r′? If we do so, it seems that it will inevitably lead to semantic paradoxes like: Presley’s profession is Presley, since from the model’s perspective there is no difference between the entity Presley and other entities that may appear on both sides of the relation profession. Considering the sparsity of the knowledge graph, models trained with limited facts would very likely tend to give higher scores to the entities that have been “seen in the right place”.
In order to solve this problem, we propose to model the facts conceptually instead of concretely (or literally, syntactically), which means that we focus on the semantic meanings of the embeddings (of the entities and relations) rather than their syntactic features. Such a conceptual embedding scheme allows us to unify the representation of a relation (r) and its reverse counterpart (r′), and to accommodate the lexical variety in use by various knowledge bases.
⁷ http://wordnet.princeton.edu
⁸ https://developers.google.com/freebase
⁹ https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/yago
The intuition behind this is that, for any given fact (h, r, t), one would instantly recognize the bidirectional semantic connection between h and t, without needing to translate it into (t, r′, h) explicitly in one’s mind. We believe this is crucial for efficiently utilizing the structure information of the KGs for representation learning; empirical evidence is provided in Section 4 and Appendix C, respectively.
Secondly, we propose to use unsupervised learning techniques for graph embedding learning tasks, because: (1) Presently, almost all of the large-scale knowledge graphs are extremely sparse, which would unavoidably degrade the quality and reliability of the supervised learning algorithms. Further, considering the relation-specific solution that dominates the current research, the situation might get even worse. (2) Selecting negative examples for pair-wise training would be tricky and expensive, since in practice, it is very hard to generate a “proper and informative” negative sample responsive to each of the positive examples. For example, when learning from the fact (Einstein, employer, IAS), the false fact (Einstein, employer, Stanford) would seem to be more reasonable and informative than (Einstein, employer, singer) — if the objective is to further improve the predictive capability of the model to discriminate between similar objects.
To solve the data sparsity problem, we propose to model each fact as a short sentence, so that the entire KG can be regarded as a huge document and processed by unsupervised encoder-decoder neural models, which have been demonstrated to be efficient and useful for concept learning from large-scale and feature-sparse data (Sutskever et al., 2014). In order to avoid the sampling bias due to the selection of uninformative entities, we propose to use the softmax cross-entropy loss as a measure of the predictive discrepancy for model training, because its probability interpretation is more objective than the squared or logistic errors conventionally used in this area, and it has been proven to be convex for the MLP we use in this paper (Bengio et al., 2005).
APPENDIX B: BACKGROUND INFORMATION AND IMPLEMENTATION DETAILS
B.1 DATASETS
WN18 is a subset of WordNet, which contains mostly conceptual-semantic and lexical relations, and whose entities are organized in a strictly hierarchical manner. FB15k is a subset of Freebase, which contains facts gathered from Wikipedia, mostly focused on the topics of movies and sports.
These datasets have been used as de facto benchmarks for comparative evaluation; however, recent research (Toutanova & Chen, 2015; Dettmers et al., 2017) shows that the test sets of WN18 and FB15k contain many reversed triples that are already present in the training set, i. e., (h, r, t) versus (t, r, h). This, we acknowledge, would favor our model over the one-shot alternatives.
Therefore, we also provide results on FB15k-237, introduced by Toutanova & Chen (2015), which is a subset of FB15K where reverse relations are removed, and we test on WN18RR, provided by Dettmers et al. (2017), which is a reverse-duplication-free new sample of WordNet.
The multi-relational data sampled from WordNet and Freebase can be seen as typical heterogeneous graphs; in order to verify the generality of the developed model, we also perform evaluation in the multi-label classification setting on some typical homogeneous graphs.
BlogCatalog is a social network sampled from the BlogCatalog website, which contains only one relationship, the social connection between blog authors, while the labels represent the interest topic categories provided by the bloggers. Protein-Protein Interactions is a biological network sampled from the PPI network for Homo Sapiens, which also contains only one relationship, the existence of interactions between proteins, while the labels represent the biological states of the proteins. In the training set of these graph corpora, every entity (node) is assigned one or more labels from a finite set; the task is to predict the labels for the nodes in the test set.
The statistics of these data sets are summarized in Table 5.
B.2 EXPERIMENTAL SETUP
We optimized the hyper-parameters for all the datasets via extensive grid search and selected the model with the best filtered P@10 score on the validation set. Hyper-parameter ranges for the grid search were the following: embedding dimension d in {50, 100, 200, 300}, hidden layer dimension k in {256, 512, 1024, 2048}, MLP dropout rate p in {0.0, 0.1, 0.2, 0.3}, learning rate η in {0.001, 0.01, 0.1, 1, 5, 10}, learning rate decay λ in {0.7, 0.75, 0.8, 0.85, 0.9, 0.95}. In this study, we use the following combination of parameters for all of the graph embedding learning tasks:
• E CELLs: {d : 200, k : 2048, p : 0.2, η : 5, λ : 0.9}.
• R CELLs: {d : 200, k : 512, p : 0.2, η : 5, λ : 0.9}.
• Mini-batch settings: {batch size : 512, epochs : 50}
For multi-label classification tasks, we implement a single-layer perceptron model for multi-task learning with {k : 128, η : 0.1, λ : 0.9}, selected through grid search with the best averaged Macro-F1 score on a validation set randomly sampled from the labeled nodes.
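For reference, a plain grid search over the ranges listed above could be sketched as follows; `train_and_validate` is a hypothetical helper that trains a model under the given configuration and returns its filtered P@10 on the validation set.

```python
from itertools import product

grid = {
    "d": [50, 100, 200, 300],
    "k": [256, 512, 1024, 2048],
    "p": [0.0, 0.1, 0.2, 0.3],
    "eta": [0.001, 0.01, 0.1, 1, 5, 10],
    "decay": [0.7, 0.75, 0.8, 0.85, 0.9, 0.95],
}

best, best_score = None, -1.0
for values in product(*grid.values()):
    config = dict(zip(grid.keys(), values))
    score = train_and_validate(config)  # hypothetical: filtered P@10 on validation
    if score > best_score:
        best, best_score = config, score
```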
APPENDIX C: INVESTIGATING AND VISUALIZING THE EMBEDDING SCHEMES
In this section, we provide a qualitative analysis of four typical embedding schemes (GEN, HoIE, TransE and word2vec), with the intention of better understanding the connections between the existing graph embedding schemes, and highlighting areas that remain poorly understood for further investigation. The reason we choose these models (except word2vec) is that, according to our tests, they have proven efficient and scalable to large-scale problems, and also exhibit good generalization ability on real data sets. We also consider the word2vec embeddings because we found that, with the help of our multi-shot learning model, they achieve state-of-the-art performance on most of the knowledge base completion tasks (see Section 4), which is interesting and worth consideration (it probably indicates promising potential for transfer learning).
The aim of the first case is to show that the graph embeddings generated by GEN differ from other solutions. We calculate the cosine similarity for each pair of the 1,345 relations in
FB15K w.r.t. the four embedding schemes respectively, and compare their top-N ranking lists (each a set of relation pairs per embedding scheme) through Venn diagrams, as illustrated in Fig.3.
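A minimal NumPy sketch of this comparison is given below; the function name is ours, and the relation-embedding matrices in the usage comment (e.g. transe_rels) are placeholders.

```python
import numpy as np

def top_n_pairs(rel_emb, n):
    """Return the n most cosine-similar relation pairs for one embedding scheme."""
    normed = rel_emb / (np.linalg.norm(rel_emb, axis=1, keepdims=True) + 1e-12)
    sim = normed @ normed.T
    iu = np.triu_indices(len(rel_emb), k=1)   # each unordered pair counted once
    order = np.argsort(-sim[iu])[:n]
    return {(iu[0][i], iu[1][i]) for i in order}

# Overlap feeding the Venn diagram, e.g. for two schemes:
# shared = top_n_pairs(transe_rels, 300) & top_n_pairs(hole_rels, 300)
```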
Figure 3 reveals very different clustering patterns between GEN and the other alternatives in their corresponding embedding spaces. What is particularly interesting is that, although the TransE model is inspired by and heavily reliant on the spatial distribution of the word2vec embeddings (Bordes et al., 2013), the two are in fact not similar at all. On the contrary, the results of TransE and HoIE share many similarities: it appears that in their top-300 and top-500 lists, almost half of the relation pairs are contained in the intersection. This probably indicates that the translating embedding hypothesis (Bordes et al., 2013) is theoretically similar in nature to the holographic embedding hypothesis (Nickel et al., 2016b) when used for graph modeling. This is not an easily testable hypothesis; we consider it an open question, which we hope to explore further in the future.
The goal of the second experiment is to verify the claim that the embeddings generated by GEN are more representative and informative than other embedding schemes. Here we provide a case study on a randomly selected relation from FB15K, namely “/location/location/time zones”. There are 137 triples related to this relation (#425) in the test set; all of the head entities are names of countries or regions, and the tail entities are the corresponding time zones. The heads are all distinct from each other, while there are only 10 different time zones among the tails.
We plot all of the 137 triples in Fig.4, in which (Fig.4a and Fig.4b) the input multi-dimensional vectors are projected onto a 2-dimensional subspace spanned by x and y using principal component analysis (PCA), with the first two principal components chosen as the principal axes. In Fig.4a, the input is the concatenation of the head and tail entity of each triple, i. e., (h ⊕ t), with the intention of investigating the patterns of such feature vectors for relation prediction tasks; hence we choose the names of the tails as legend labels. As can be seen from Fig.4a, the feature vectors of the 137 triples show clear clustering tendencies with regard to the categories of their tail entities. Based on this observation, we further plot the hidden layer of the R CELL (a 512-dimensional vector in this case) located before the output layer of our GEN model, as depicted in Fig.4b. From Fig.4b one can see that the distance between the data points is amplified and the distinction becomes more prominent. We plot the cumulative softmax in Fig.4c, in which the X-axis represents the 1,345 types of relations in FB15K and the Y-axis denotes the cumulative softmax values. The curve is obtained by adding all of the softmax vectors output by GEN with regard to the 137 triples. The single peak observed in Fig.4c clearly shows that GEN can make good use of these (concatenated) features to identify the corresponding relations correctly.
For comparison purposes, we also visualize the other three embedding schemes with the same protocol, as illustrated in Fig.5. Since the corresponding models do not use an MLP for relation prediction, we cannot plot their “hidden state” and “accumulated softmax” for the second and the third subplots; we therefore visualize their predictive criterion vectors and output ranking lists instead. The processing follows the protocol of the original literature. Specifically, for TransE, we plot (t − h) as the hidden state for relation prediction, compute the ℓ1-norm distance |ri − (t − h)|1 w.r.t. each relation ri in FB15K, and then pass the distance vector through the softmax function to obtain the accumulated softmax. For HoIE, we plot the circular correlation vector (h⋆t) as the hidden state, compute the cosine similarity (h⋆t)·ri w.r.t. each relation ri in FB15K, and then use the resulting (cosine) vector to compute the accumulated softmax. For the word2vec embeddings, we use the same protocol as for TransE.
From Fig.5 one can see that the concatenated embedding vectors of TransE and HoIE show a clustering pattern similar to the GEN case, which helps explain why, under our multi-shot learning framework, the embeddings generated by these models perform similarly on relation prediction tasks (see Table 2). It also provides evidence for our conjecture that these two embedding schemes may be inherently similar to each other. From their criterion vectors (the second subplot for each model), one can see that their clustering pattern is not as clear as in the case of GEN, which helps explain their performance on relation prediction tasks (as shown in the third subplot)10. We consider this solid support for the validity of the proposed multi-shot learning framework.
Further evidence can be found in our source code release, which will be made publicly available on GitHub after the anonymous review, to encourage reproducible research.
10 The alternative peaks appearing in subplots Fig.5c and Fig.5f are:
• #891: “/base/schemastaging/phone open times/time zone”, and
• #583: “/time/time zone/locations in this time zone”. | 1. What is the main contribution of the paper regarding multi-relational graph embedding?
2. How does the proposed method differ from prior methods in terms of its approach and motivation?
3. Are there any concerns or limitations regarding the experimental results and their interpretation?
4. Do you have any questions about the paper's content, such as specific statements or definitions?
5. Is there anything else that could be improved in the paper, such as the presentation order or the use of different terms? | Review | Review
The paper proposes a new method to compute embeddings of multi-relational graphs. In particular, it proposes so-called E-Cells and R-Cells to answer queries of the form (h,r,?), (?,r,t), and (h,?,t). The proposed method (GEN) is evaluated on standard datasets for link prediction as well as datasets for node classification.
The paper tackles an interesting problem, as learning from graphs via embedding methods has become increasingly important. The experimental results of the proposed model, especially for the node classification tasks, look promising. Unfortunately, the paper makes a number of claims which are not justified or seem to result from misconceptions about related methods. For instance, the abstract labels prior work as "ad hoc solutions" and claims to propose a principled approach. However, I do not see how the proposed method is more principled than previously proposed methods. For instance, methods such as RESCAL, TransE, HolE or ComplEx can be motivated as compositional models that reflect the compositional structure of relational data. Furthermore, RESCAL-like models can be linked to prior research in cognitive science on relational memory [3]. HolE explicitly motivates its modeling through its relation to models for associative memory.
Furthermore, due to their compositional nature, these models are all able to answer the queries considered in the paper (i.e., (h,r,?), (h,?,t), (?,r,t)) and are implicitly trained to do so. The HolE paper discusses this, for instance, when relating the model to associative memory. For RESCAL, [4] shows how even more complicated queries involving logical connectives and quantification can be answered. It is therefore not clear how the proposed method improves over these models.
With regard to the evaluation: It is nice that the authors provided an evaluation which compares to several SOTA methods. However, it is unclear under which settings these results were obtained. In particular, how were the hyperparameters for each model chosen, and which parameter ranges were considered in the grid search? Appendix B.2 in the supplementary seems to specify the parameter settings for GEN, but it is unclear whether the same parameters were chosen for the competing models and whether they were trained with similar methods (e.g., dropout, learning rate decay etc.). The big difference in performance of HolE and ComplEx is also surprising, as they are essentially the same model (e.g. see [1,2]). It is therefore not clear to me which conclusions we can draw from the reported numbers.
Further comments:
- p.3: The statement "This is the actual way we humans learn the meaning of concepts expressed by a statement" requires justification
- p.4: The authors state that the model is trained unsupervised, but eq. 10 clearly uses supervised information in the form of labels.
- p.4: In 3.1, E-cells are responsible for answering queries of the form (h,r,?) and (?, r, t), while Section 3.2 says E-Cells are used to answer (h, ?, t). I assume in the latter case, the task is actually to answer (h,r,?)?
- p.2: Making a closed-world assumption is quite problematic in this context, especially when taking a principled approach. Many graphs such as Freebase are very incomplete and make an explicit open-world assumption.
- The paper uses a unusual definition of one-shot/multi-shot learning, which makes it confusing to read at first. The authors might consider using different terms to improve readability.
- Paper would benefit if the model is presented earlier. GEN Cells are defined only in Section 3.2, but the model is discussed earlier. Reversing the order might improve presentation.
[1] K. Hayashi et al: "On the Equivalence of Holographic and Complex Embeddings for Link Prediction", 2017
[2] T.Trouillon et al: "Complex and holographic embeddings of knowledge graphs: a comparison", 2017
[3] G. Halford et al: "Processing capacity defined by relational complexity: Implications for comparative, developmental, and cognitive psychology", 1998.
[4] D. Krompaß et al: "Querying factorized probabilistic triple databases", 2014 |
ICLR | Title
Online Learning of Graph Neural Networks: When Can Data Be Permanently Deleted
Abstract
Online learning of graph neural networks (GNNs) faces the challenges of distribution shift and ever-growing and changing training data, when temporal graphs evolve over time. This makes it inefficient to train over the complete graph whenever new data arrives. Deleting old data at some point in time may be preferable to maintain a good performance and to account for distribution shift. We systematically analyze these issues by incrementally training and evaluating GNNs in a sliding window over temporal graphs. We experiment with three representative GNN architectures and two scalable GNN techniques, on three new datasets. In our experiments, the GNNs face the challenge that new vertices, edges, and even classes appear and disappear over time. Our results show that no more than 50% of the GNN’s receptive field is necessary to retain at least 95% accuracy compared to training over a full graph. In most cases, i. e., 14 out of 18 experiments, we even observe that a temporal window of size 1 is sufficient to retain at least 90%.
1 INTRODUCTION
Training of Graph Neural Networks (GNNs) on temporal graphs has become a hot topic. Recent works include combining GNNs with recurrent modules (Seo et al., 2018; Manessi et al., 2020; Sankar et al., 2020; Pareja et al., 2020) and vertex embeddings as a function of time to cope with continuous-time temporal graphs (da Xu et al., 2020; Rossi et al., 2020a). Concurrently, other approaches have been proposed to improve the scalability of GNNs. Those include sampling-based techniques (Chiang et al., 2019; Zeng et al., 2020) and shifting expensive neighborhood aggregation into pre-processing (Wu et al., 2019; Rossi et al., 2020b) or post-processing (Bojchevski et al., 2020).
However, there are further fundamental issues with temporal graphs that have not been properly answered yet. First, as new vertices and edges appear (and disappear) over time, so can new classes. This results in a distribution shift, which is particularly challenging in an online setting, as there is no finite, a-priori known set of classes that can be used for training, and it is not known when a new class appears. Second, scalable techniques for GNNs address the increased size of the graph, but always operate on the entire graph and thus on the entire temporal duration the graph spans. However, training on the entire history of a temporal graph (even in the context of scaling techniques like sampling (Chiang et al., 2019; Zeng et al., 2020)) may actually not be needed to perform tasks like vertex classification. Thus, it is important to investigate whether, at some point in time, one can actually “intentionally forget” old data and still retain the same predictive power for the given task. In fact, it has been observed in other tasks, such as stock-market prediction, that too much history can even be counterproductive (Ersan et al., 2020).
Proposed Solution and Research Questions While we do not propose an entirely new GNN architecture, we propose to adapt existing GNN architectures and scalable GNN techniques to the problem of distribution shift in temporal graphs. In essence, we propose a new evaluation procedure for online learning on the basis of the distribution of temporal differences, which assesses how vertices are connected in a temporal graph by enumerating the temporal differences of connected vertices along k-hop paths. This information is crucial for balancing between capturing the distribution shift and having sufficient vertices within the GNN’s receptive field.
In summary, the central question we aim to answer is whether we can intentionally forget old data without losing predictive power in an online learning scenario in the presence of distribution shift.
We simulate this scenario by applying temporal windows of different sizes over the temporal graph, as illustrated in Figure 1. The window size c determines how much history of the temporal graph is used for training, or in other words, which information we forget. In this example, data older than t − 2 is ignored. We evaluate the accuracy of representative GNN architectures and scalable GNN techniques trained on the temporal window against training on the entire timeline of the graph (full history). We evaluate the models by classifying the vertices at time step t, before we advance to the next time step.
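A minimal sketch of this evaluation loop is shown below; merge_graphs, make_model, fit, and evaluate are hypothetical placeholders for snapshot merging, model construction, incremental training, and accuracy computation.

```python
def sliding_window_evaluation(snapshots, make_model, window_size, warm_start=True):
    """Sketch of the incremental protocol: train on the last window_size
    snapshots, then classify the vertices of the next time step."""
    model, accuracies = None, []
    for t in range(window_size, len(snapshots)):
        window = merge_graphs(snapshots[t - window_size:t])  # hypothetical helper
        if model is None or not warm_start:
            model = make_model()                             # cold restart
        model.fit(window)                                    # incremental training
        accuracies.append(model.evaluate(snapshots[t]))      # classify step t
    return accuracies
```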
To answer the research question, we break it down into four specific questions Q1 to Q4, each answered in a separate experiment. For Q1: Distribution Shift under Static vs Incremental Training, we verify that incremental training is necessary to account for distribution shift, compared to using a once-trained, static model. Extending from Q1, we investigate in Q2: Training with Warm vs Cold Restarts whether it is preferable to reuse model parameters from the previous time step (warm start) or restart with newly initialized parameters at each time step (cold start). In Q3: Incremental Training on Different Window Sizes, we answer the question of what influence different choices of window size have, i. e., how far we need to look into the past such that a GNN trained on the window is still competitive with a model trained on the full graph. Question Q4 extends Q3 by considering Q4: Incremental Training with Scalable GNN Methods, i. e., how scalable GNN approaches compare to using the full history of the temporal graph and to what extent scaling techniques can be applied on top of the temporal window.
New Datasets To enable an analysis with a controlled extent of distribution shift, we contribute three newly compiled temporal graph datasets based on scientific publications: two citation graphs based on DBLP and one co-authorship graph based on Web of Science. To determine candidate window sizes, we contribute a new measure to compute the distribution of temporal differences within the k-hop neighborhood of each vertex, where k corresponds to the number of GNN layers. We select the 25th, 50th, and 75th percentiles of this distribution as candidate window sizes. This results in window sizes of 1, 3, and 6 time steps for the two DBLP datasets, and 1, 4, 8 for the Web of Science dataset.
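A sketch of this measure is given below, assuming a NetworkX graph whose vertices carry a "time" attribute; we approximate enumeration along k-hop paths by k-hop reachability, which is an assumption on our part.

```python
import numpy as np
import networkx as nx

def temporal_difference_percentiles(G, k, percentiles=(25, 50, 75)):
    """Distribution of |time(u) - time(v)| over all vertices v within the
    k-hop neighborhood of u; its percentiles suggest candidate window sizes."""
    diffs = []
    for u in G.nodes:
        reachable = nx.single_source_shortest_path_length(G, u, cutoff=k)
        for v, dist in reachable.items():
            if dist > 0:  # skip u itself
                diffs.append(abs(G.nodes[u]["time"] - G.nodes[v]["time"]))
    return np.percentile(diffs, percentiles)
```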
Results We select three representative GNN architectures: GraphSAGE-Mean (Hamilton et al., 2017), graph attention networks (Veličković et al., 2018) and jumping knowledge networks (Xu et al., 2018) along with graph-agnostic multi-layer perceptrons. As scalable GNN techniques, we consider GraphSAINT (Zeng et al., 2020) as well as simplified GCNs (Wu et al., 2019). The results of our experiments show that already with a small window size of 3 or 4 time steps, GNNs achieve at least 95% accuracy compared to using the full graph. With window sizes of 6 or 8, 99% accuracy can be retained. With a window size of 1, for almost all experiments, a relative accuracy of no less than 90% could be retained, compared to models trained on the full graph. Furthermore, our experiments confirm that incremental training is necessary to account for distribution shift in temporal graphs and we show that both reinitialization strategies are viable and differ only marginally, when the learning rates are tuned accordingly. Surprisingly, simplified GCNs perform notably well on the most challenging dataset DBLP-hard and are only outperformed by GraphSAGE-Mean.
We outline the related work below. We provide a problem formalization and selection of GNNs for our experiments in Section 3. We describe the experimental apparatus and datasets in Section 4. The results of our experiments are reported in Section 5 and discussed in Section 6, before we conclude.
2 RELATED WORK
In Rossi & Neville (2012), the authors distinguish between tasks where the predicted attribute is static or changing over time. The dynamic graph problem is set up in a way that vertex and edge features may change over time and that edges may appear and disappear. This is conceptually different as it assumes a fixed vertex set, whereas in our case, the vertex set is changing over time. Furthermore, the predicted attribute is static in our case because it will not change after the respective vertex has appeared. Several recent works follow this setup and assume a fixed vertex set (Trivedi et al., 2017; Seo et al., 2018; Kumar et al., 2018; Trivedi et al., 2019; Manessi et al., 2020; Sankar et al., 2020).
In Park et al. (2017), the authors use vertex features concatenated with the adjacency vector and apply 1D-convolution. The experiments comprise link prediction and user state prediction. 1D-convolution on the time axis can be regarded as a sliding window. However, the paper does not consider new classes during the evaluation time frame and does not analyze how much past training data would be required for up-training.
In Fish & Caceres (2017), the authors aim to find the optimal window size, given a dataset, a task, and a model. They treat the window size as a hyperparameter and propose an optimization algorithm which requires multiple runs of the model. This might be rather expensive. Furthermore, the study does not supply insights on how much predictive power can be preserved when selecting a near-optimal but much smaller, and thus more efficient, window size.
CTDNE (Nguyen et al., 2018) is an embedding method for continuous-time graphs introducing temporal random walks. This approach considers graphs with featureless vertices with the objective to learn a meaningful/useful vertex embedding. In a recent extension of CTDNE (Lee et al., 2020), the method is applied to edge streams via up-training of the embedding. Comparing this approach to our work, we find that we have another task (discrete-time online vertex classification vs continuous-time online vertex embedding), consider a different type of graph (attributed vs featureless), and face different challenges (adaption to new classes). Nevertheless, it would be an interesting direction of future work to apply our experimental procedure to (streaming) CTDNE.
For discrete-time dynamic graphs involving new vertices, Goyal et al. (2018) propose DynGEM, an autoencoder-like approach that jointly minimizes the reconstruction loss between t and t + 1 and the embedding distance between connected vertices. In Dyngraph2vec (Goyal et al., 2020), the authors extend this approach by additional variants such as recurrent decoders.
EvolveGCN (Pareja et al., 2020) and T-GAT (da Xu et al., 2020) are both inductive approaches designed for attributed temporal graphs. EvolveGCN predicts the parameters of a GCN with an RNN by tying the RNN output or hidden state to the GCN parameters. T-GAT introduces a self-attention mechanism on the time axis. These approaches can cope with newly appearing vertices and are able to predict different labels for the same node at different times. They both require a sequence of graph snapshots for training. When new classes appear, these sequence-based models would need to be retrained. In our setting with limited window sizes, the sequence of snapshots within a window, i. e., the data available for retraining, might become very short: down to only one snapshot in the extreme case. Furthermore, these approaches focus on predicting future edges or predicting a label for each vertex at each time step. Therefore, the models serve a different purpose compared to the setting that we face, in which the label of each vertex is fixed. For these two reasons, we have focused on adapting and evaluating more efficient, static architectures as well as scalable GNN techniques, while leaving the adaption of T-GAT and EvolveGCN as future work.
To summarize, most works on dynamic graphs assume a fixed vertex set, while considering dynamics within the vertex/edge features, and/or the edges themselves. Inductive approaches such as EvolveGCN and T-GAT do allow new nodes. CTDNE can deal with new nodes via up-training. Previous work on finding optimal window sizes proposes a hyperparameter tuning algorithm. However, none of these works specifically analyzes the problem of new classes appearing over time and how much past training data is necessary, or how few is enough, to maintain good predictive power.
3 PROBLEM FORMALIZATION AND SELECTED METHODS
Problem Formalization We consider a vertex-labeled temporal graph $G_t = (V_t, E_t)$ with vertices $V_t$ and edges $E_t$, provided by a sequence of snapshots ordered by $t \in \mathbb{N}$. Thus, $V_t$ is the (finite) set of vertices that are in the graph at time step $t$, and $E_t$ the corresponding set of edges at time step $t$. Furthermore, we define the set of all vertices $V := \bigcup_{i \in \mathbb{N}} V_i$ and all edges $E := \bigcup_{i \in \mathbb{N}} E_i$, i. e., $G = (V, E)$. Let $ts_{\min} : V \to \mathbb{N}$ be a function that returns for each vertex $v \in V$ the time step at which the vertex was first added to the graph, i. e., $ts_{\min} : v \mapsto \min\{i \in \mathbb{N} \mid v \in V_i\}$. Finally, for each vertex $v \in V$ we have a feature vector $X_v \in \mathbb{R}^D$, where $D$ is the number of vertex features, and a class label $y_v \in C$ with $C$ being the global set of classes $C := \bigcup_{i \in \mathbb{N}} C_i$.
In each time step t, previously unseen vertices and edges and even new classes may appear as illustrated in Figure 1. For these temporal graphs, we investigate training graph neural networks for the vertex classification task, i. e., assigning class labels y to previously unseen vertices based on vertex attributes X and connections to other vertices via edges. We denote the history of vertices and edges we take into account as the temporal window. The temporal window spans a range of multiple time steps, which we denote as the temporal window size c.
Selected Graph Neural Networks Several works have been proposed that combine GNNs with recurrent neural networks to capture temporal dynamics (Seo et al., 2018; Manessi et al., 2020; Sankar et al., 2020; Pareja et al., 2020). Other works focus on continuous-time temporal graphs (da Xu et al., 2020; Rossi et al., 2020a). Our work is orthogonal to those works as we focus on the distribution shift of temporal graphs and the question if and when old data can be deleted without sacrificing predictive power. In the following, we introduce and motivate our choice of representative GNN architectures as well as scalable GNN techniques for our experiments.
Dwivedi et al. (2020) have introduced a benchmarking framework to re-evaluate several recent GNN variants. Dwivedi et al. distinguish between isotropic and anisotropic GNN architectures. In isotropic GNNs, all edges are treated equally. Apart from graph convolutional networks (Kipf & Welling, 2017), examples of isotropic GNNs are GraphSAGE-mean (Hamilton et al., 2017), DiffPool (Ying et al., 2018), and GIN (Xu et al., 2019). In anisotropic GNNs, the weights for edges are computed dynamically. Instances of anisotropic GNNs include graph attention networks (Veličković et al., 2018), GatedGCN (Bresson & Laurent, 2017) and MoNet (Monti et al., 2017).
We select GraphSAGE-Mean (GS-Mean) (Hamilton et al., 2017) as a representative for isotropic GNNs because its special treatment of the vertices' self-information has been shown to be beneficial (Dwivedi et al., 2020). The representations from self-connections are concatenated to the averaged neighbors' representations before multiplying the parameters. In GS-Mean, the representation of vertex $i$ in layer $l+1$ is given by $\hat{h}_i^{l+1} = h_i^l \,\|\, \frac{1}{\deg_i} \sum_{j \in \mathcal{N}(i)} h_j^l$ and $h_i^{l+1} = \sigma(U^l \hat{h}_i^{l+1})$, where $\mathcal{N}(i)$ is the set of vertices adjacent to vertex $i$, $U^l$ are the parameters of layer $l$, $\sigma$ is a non-linear activation function, and $\cdot\|\cdot$ denotes concatenation. We select Graph Attention Networks (GATs) (Veličković et al., 2018) as representative for the class of anisotropic GNNs. In GATs, the representation of vertex $i$ in layer $l+1$ is computed as $\hat{h}_i^{l+1} = w_i^l h_i^l + \sum_{j \in \mathcal{N}(i)} w_{ij}^l h_j^l$ and $h_i^{l+1} = \sigma(U^l \hat{h}_i^{l+1})$, where the edge weights $w_{ij}$ and self-connection weights $w_i$ are computed by a self-attention mechanism based on the representations $h_i$ and $h_j$, i. e., the softmax over edges of $a(U^l h_i \,\|\, U^l h_j)$, where $a$ is a single-layer neural network with LeakyReLU activation.
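To make these aggregation rules concrete, the following is a minimal PyTorch sketch of a GS-Mean layer following the equations above. The module name and the dense-adjacency interface are illustrative choices of ours, not the DGL implementation used in the experiments; a GAT layer would replace the uniform $1/\deg_i$ weighting with learned attention weights.

```python
import torch
import torch.nn as nn


class GSMeanLayer(nn.Module):
    """Minimal GraphSAGE-Mean layer: concatenate the self representation with
    the mean of the neighbors' representations, then apply U^l and sigma."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(2 * in_dim, out_dim)  # U^l acts on [h_i || mean]

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (n, in_dim) vertex features; adj: dense (n, n) 0/1 adjacency
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # avoid division by zero
        neigh_mean = (adj @ h) / deg                     # (1/deg_i) * sum_j h_j
        h_hat = torch.cat([h, neigh_mean], dim=1)        # h_i || neighbor mean
        return torch.relu(self.linear(h_hat))            # sigma(U^l h_hat)
```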
Scaling Graph Neural Networks to Large Graphs Several approaches have been proposed to scale GNNs to large graphs. In general, these approaches fall into two categories: sampling either locally (Hamilton et al., 2017; Huang et al., 2018), or globally (Chiang et al., 2019; Zeng et al., 2020), and separating neighborhood aggregation from the neural network component (Wu et al., 2019; Rossi et al., 2020b; Bojchevski et al., 2020).
From both categories, we select one representative for our experiments. We use GraphSAINT (Zeng et al., 2020) as state-of-the-art sampling technique along with simplified GCNs (Wu et al., 2019) as a representative for shifting the neighborhood aggregation into a preprocessing step.
Simplified GCN (Wu et al., 2019) is a scalable variant of Graph Convolutional Networks (Kipf & Welling, 2017) that admits regular mini-batch sampling. Simplified GCN removes nonlinearities and collapses consecutive weight matrices into a single one. Thus, simplified GCN can be described by the equation $\hat{Y}_{SGC} = \mathrm{softmax}(S^K X \Theta)$, where the parameter $K$ has a similar effect as the number of layers in a regular GCN, $S$ is the normalized adjacency matrix, and $\Theta$ is the weight matrix. Instead of using multiple layers, the $k$-hop neighborhood is computed by $S^K$, which can be precomputed. This makes Simplified GCN efficient to compute, while not necessarily harming the performance.
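As a sketch of why this is cheap, the following illustrates the $S^K X$ precomputation, assuming a SciPy sparse adjacency matrix; the function and variable names are our own and this is not the reference implementation.

```python
import numpy as np
import scipy.sparse as sp


def sgc_precompute(adj: sp.spmatrix, X: np.ndarray, K: int = 2) -> np.ndarray:
    """Precompute S^K X for Simplified GCN, where S is the symmetrically
    normalized adjacency matrix with self-loops."""
    A_hat = adj + sp.eye(adj.shape[0])      # add self-loops
    d = np.asarray(A_hat.sum(axis=1)).ravel()
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt     # S = D^-1/2 (A + I) D^-1/2
    for _ in range(K):                      # apply S K times to obtain S^K X
        X = S @ X
    return X  # then train a plain mini-batch classifier: softmax(X @ Theta)
```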
In GraphSAINT (Zeng et al., 2020), entire subgraphs are sampled for training GNNs. Subgraph sampling introduces a bias which is counteracted by normalization coefficients for the loss function. The authors propose different sampling methods: vertex sampling, edge sampling, and random-walk sampling. We use the best-performing random-walk sampling for our experiments. The underlying GNN is exchangeable, yet the authors suggest to use Jumping Knowledge networks (JKNets) (Xu et al., 2018). JKNets introduce skip-connections to GNNs: each hidden layer has a direct connection to the output layer, in which the representations are aggregated, e. g., by concatenation. This enables the network to learn from representations corresponding to different levels of the local neighborhood. To isolate the effect of GraphSAINT sampling, we also include JKNets in our comparison.
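A minimal sketch of jumping-knowledge aggregation by concatenation is given below; it reuses the hypothetical GSMeanLayer from above as base layer and assumes every layer outputs hidden_dim features. This is illustrative, not the PyTorch-geometric implementation used in the experiments.

```python
import torch
import torch.nn as nn


class JKConcat(nn.Module):
    """Jumping-knowledge aggregation by concatenation: every hidden layer has
    a skip-connection to the output, where representations are concatenated."""

    def __init__(self, layers, hidden_dim: int, num_classes: int):
        super().__init__()
        self.layers = nn.ModuleList(layers)  # e.g., a stack of GSMeanLayer modules
        self.out = nn.Linear(len(layers) * hidden_dim, num_classes)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        reps = []
        for layer in self.layers:
            h = layer(h, adj)   # each layer is assumed to map to hidden_dim
            reps.append(h)      # keep the representation of every layer
        return self.out(torch.cat(reps, dim=1))
```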
4 EXPERIMENTAL APPARATUS
Procedure For each evaluation time step $t \in [t_{start}, t_{end}]$, we construct a subgraph $\tilde{G} = (\tilde{V}, \tilde{E})$ of $G$ induced on $\tilde{V} = \{v \in V \mid t - c \leq ts_{\min}(v) \leq t\}$ and $\tilde{E} = \{(u, v) \in E \mid u, v \in \tilde{V}\}$. The parameter $c$ denotes the window size, i. e., it determines the $c$ time steps that the temporal window spans. Then, we supply the competing models with the subgraph $\tilde{G}$, the corresponding vertex features, and labels for the vertices $\{u \in \tilde{V} \mid ts_{\min}(u) < t\}$, along with an epoch budget for updating their parameters. The task is to predict the labels for the vertices $\{u \in \tilde{V} \mid ts_{\min}(u) = t\}$. Finally, we evaluate the accuracy of the model before incrementing $t$. We provide an algorithmic view in Appendix A.1.
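A plain-Python sketch of this window-induced subgraph construction follows; the names are illustrative, and the actual experiments use graph-library routines.

```python
def window_subgraph(edges, ts_min, t, c):
    """Induce the subgraph for evaluation step t under window size c: keep
    vertices u with t - c <= ts_min[u] <= t and all edges among them.

    edges: iterable of (u, v) pairs; ts_min: dict vertex -> first appearance."""
    V_w = {u for u, s in ts_min.items() if t - c <= s <= t}
    E_w = [(u, v) for (u, v) in edges if u in V_w and v in V_w]
    train_nodes = [u for u in V_w if ts_min[u] < t]   # labels are supplied
    test_nodes = [u for u in V_w if ts_min[u] == t]   # labels to be predicted
    return V_w, E_w, train_nodes, test_nodes
```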
When advancing from one time step to the next, we consider two options of initializing the model. Using cold restarts corresponds to randomly re-initializing each model in each time step and training it from scratch. In contrast, when using warm restarts, we take the final weights of the previous time step as initialization for the next time step. In both cases, we initialize the additional parameters in the output layer randomly, when new classes appear.
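The warm-restart treatment of newly appearing classes can be sketched as follows, under the assumption that previously known classes occupy the first output indices; this is an illustration, not the exact implementation.

```python
import torch
import torch.nn as nn


def warm_restart_output(old_head: nn.Linear, num_classes: int) -> nn.Linear:
    """Warm restart of the output layer when new classes appear: copy the rows
    of previously known classes, leave the new rows randomly initialized."""
    new_head = nn.Linear(old_head.in_features, num_classes)
    k = old_head.out_features  # number of previously known classes
    with torch.no_grad():
        new_head.weight[:k] = old_head.weight  # reuse old output parameters
        new_head.bias[:k] = old_head.bias
    return new_head
```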
Novel Measure for Distribution of Temporal Differences In the following, we develop a novel dataset-agnostic measure for the distribution of temporal differences within the k-hop neighborhood of each vertex. When k graph convolution layers are used, the features within the k-hop neighborhood of each vertex are taken into account for its prediction. This k-hop neighborhood is referred to as the receptive field of a GNN (Chen et al., 2018). When we incrementally train GNNs on a sliding window through time, the window size determines which vertices are available for training and for inference. Ideally, the temporal window covers all vertices within the GNN's receptive field, such that GNNs have access to all relevant information.
How many vertices of the receptive field are contained in a temporal window of size c depends on the characteristics of the datasets. Therefore, we introduce a new measure for the distribution of temporal differences $\mathrm{tdiff}_k$ within the receptive field of a k-layer GNN. Let $\mathcal{N}^k(u)$ be the k-hop neighborhood of u, i. e., the set of vertices that are reachable from u by traversing at most k edges. Then, we define $\mathrm{tdiff}_k(G)$ to be the multiset of time differences to past vertices:

$$\mathrm{tdiff}_k(G) := \{\, ts_{\min}(u) - ts_{\min}(v) \mid u \in V \wedge v \in \mathcal{N}^k(u) \wedge ts_{\min}(u) \geq ts_{\min}(v) \,\} \qquad (1)$$

Please note that this is a measure to determine comparable window sizes over different datasets and different granularities. It needs to be computed only once per dataset, prior to any training iterations. When we consider a GNN with k graph convolution layers, the distribution $\mathrm{tdiff}_k$ enumerates the temporal differences within the receptive field of the GNN. In our experiments, we will use the 25th, 50th, and 75th percentiles of this distribution for analyzing the effect of the temporal window size. This choice corresponds to an average receptive field coverage of 25%, 50%, and 75%.
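A possible way to compute $\mathrm{tdiff}_k$ is a breadth-first search per vertex, sketched below under the assumption of an adjacency-list representation; the original computation may differ, and the trivial self-difference of 0 is omitted here.

```python
from collections import deque

import numpy as np


def tdiff_k(adjacency, ts_min, k):
    """Multiset (as a list) of temporal differences ts_min(u) - ts_min(v) for
    all v in the k-hop neighborhood of u with ts_min(v) <= ts_min(u), cf. Eq. (1).

    adjacency: dict vertex -> list of neighbors; ts_min: dict vertex -> time."""
    diffs = []
    for u in adjacency:
        # breadth-first search up to depth k collects N^k(u)
        seen, frontier = {u}, deque([(u, 0)])
        while frontier:
            v, depth = frontier.popleft()
            if depth == k:
                continue
            for w in adjacency[v]:
                if w not in seen:
                    seen.add(w)
                    frontier.append((w, depth + 1))
        for v in seen - {u}:
            if ts_min[u] >= ts_min[v]:
                diffs.append(ts_min[u] - ts_min[v])
    return diffs


# candidate window sizes: 25th/50th/75th percentiles of the distribution
# sizes = np.percentile(tdiff_k(adjacency, ts_min, k=2), [25, 50, 75])
```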
Newly Compiled Datasets Pre-compiled temporal graph datasets for our real-world scenario are surprisingly rare. Therefore, we contribute three new temporal graph datasets based on scientific publications: one temporal co-authorship graph dataset (PharmaBio) as well as two newly compiled temporal citation graph datasets based on DBLP (DBLP-easy and DBLP-hard). These new datasets enable us to simulate a real-world scenario, in which not only new vertices but also new classes (venues) appear over time. Table 1 summarizes the basic characteristics of the datasets and Figure 2 shows the distribution of temporal differences $\mathrm{tdiff}_k$ for different values of k. For details on the dataset creation procedure as well as degree and label distributions, we refer to Appendix A.2.
Evaluation Measures As our datasets have imbalanced classes, one could argue to use the Micro or Macro F1-score as evaluation measure. However, we are primarily interested in the relative performance between limited-window training and training on the full graph. Motivated by real-world scenarios, we chose the sample-based F1-score as our evaluation measure (equivalent to accuracy in single-label scenarios). When aggregating results over time, we use the unweighted average.
5 EXPERIMENTAL RESULTS
We report the results of our experiments along the research questions stated in the introduction.
Q1: Distribution Shift under Static vs Incremental Training In this experiment, we compare a once-trained static model against incrementally trained models. We train the static models for 400 epochs on the data before the first evaluation time step, which comprises 25% of the total vertices. We train incremental models for 200 epochs on temporal windows of 3 time steps (4 on the PharmaBio dataset) before evaluating each time step. All models have comparable capacity.
Figure 3 shows the results. We see that the accuracy of the static models decreases over time on DBLP-easy and DBLP-hard, where new classes appear over time. On PharmaBio, the accuracy of the static models plateaus, while the accuracy of incrementally trained models increases. This confirms our expectations, as PharmaBio does not have any new classes and incrementally trained models merely benefit from the increased amount of training data, while DBLP-easy and DBLP-hard do have new classes appearing during the evaluation time frame. In the following experiments, we only use incrementally trained models because they outperform static models in all cases.
Q2: Training with Warm vs Cold Restarts We compare reusing the parameters of the model from the previous time step (warm restart) against randomly re-initializing the model parameters for each temporal window (cold restart). In both cases, we impose a 200 epoch budget per time step. The window size is set to 4 for PharmaBio and 3 for the two DBLP datasets, corresponding to 50% coverage of the GNNs’ receptive field. All models have comparable capacity.
Figure 4 shows the results. We observe that the results obtained by GNNs using warm and cold restarts are close to each other. On DBLP-hard with 23 new classes appearing during the evaluation steps, GS-Mean seems to benefit from warm restarts, while GATs yield better scores when cold restarts are used. On PharmaBio with a fixed class set, both GNNs benefit from reusing parameters from previous iterations. For now, we conclude that both reinitialization strategies are viable and we proceed by running both variants for the next experiments Q3 and Q4.
Q3: Incremental Training on Different Window Sizes We compare models trained on windows of different sizes with a model trained on all available data, i. e., the full graph, which is our baseline. We select three window sizes per dataset based on the distribution of temporal differences $\mathrm{tdiff}_2$ (see Section 4). These window sizes correspond to quartiles, i. e., the windows cover 25%, 50%, and 75% of the GNNs' receptive field (RF) (see Table 1). Thus, we can compare window sizes across datasets with different characteristics, i. e., connectivity patterns through time and total number of time steps. The epoch budget is 200 and all models have comparable capacity.
Table 2 (top) shows the results. We observe that those GNN variants trained on the full timeline of the graph yield the highest scores on DBLP-easy and DBLP-hard. There, GNNs with window size 1 (25% RF) yield lower scores than training with larger window sizes (50% and 75% RF). On all datasets, the scores for training with limited window sizes larger than 1 are close to the ones of
full-graph training. In summary, with window sizes that cover 50% of the receptive field, GNNs and also MLPs achieve at least 95% classification accuracy compared to full-graph training. When 75% of the receptive field is covered by the temporal window, at least 99% accuracy could be retained on all datasets. We refer to Appendix A.4 for extended results including both reinitialization strategies.
Q4: Incremental Training with Scalable GNN Methods Similarly to Q3, we again compare different window sizes against training on the full graph. This time, we focus on using scalable GNN techniques and aim to learn how they perform in conjunction with temporal windows. We further alleviate the fixed-capacity constraint of previous experiments and tune the hidden size as an additional hyperparameter. We refer to Appendix A.3 for details on hyperparameter choices.
We compare Simplified GCN and GraphSAINT, while including JKNet to isolate the effect of GraphSAINT sampling. Table 2 (bottom) shows the results. We observe that, again, limiting the window size to cover 50% of the GNN’s receptive field leads to at least 95% relative accuracy, compared to full graph training. As expected, GraphSAINT sampling (with JKNets as a base model) yields slightly lower scores than full-batch JKNets. On DBLP-hard, simplified GCN outperforms the other, more complex models. In terms of relative performance, limiting the receptive field does not negatively impact GraphSAINT on DBLP-hard and PharmaBio.
6 DISCUSSION
We have created a new experimental procedure for temporal graphs with new classes appearing over time, for which we contribute three newly compiled datasets with controlled degrees of distribution shift. In this online learning setup, we have evaluated three representative GNN architectures as well as two GNN scaling techniques. With the goal of generalizable results, we have introduced a new measure for the distribution of temporal differences $\mathrm{tdiff}_k$, based on which we have selected the temporal window sizes. Our results show that past data can be permanently deleted very early without diminishing the performance of an online vertex classification model. This has direct consequences for online learning of GNNs on temporal graphs and, thus, impacts how GNNs can be employed for numerous real-world applications.
Our main result is that incremental training with limited window sizes is as good as incremental training over the full timeline of the graph (see Q3 and Q4). With window sizes of 3 or 4 (50% receptive field coverage), GNNs achieve at least 95% accuracy compared to using all available data for incremental training. With window sizes of 6 or 8 (75% receptive field coverage), at least 99% accuracy can be retained. This result holds not only for standard GNN architectures but also when scaling techniques such as subgraph sampling are applied on top of the temporal window. Finally, in almost all experiments, at least 90% of relative accuracy is reached with a window of size 1.
Furthermore, we have verified that incremental training helps to account for distribution shift compared to once-trained, static models (see Q1). We have further investigated on reusing parameters from previous iterations (Q2). Our results show that both strategies are viable, when learning rates are tuned accordingly. During hyperparameter optimization for Q4, in which we alleviated the fixed-capacity constraint, we further noticed that warm restarts are more suitable for higher capacity models with low learning rates, while using cold restarts admits using lower capacity models and higher learning rates (the details of hyperparameter optimization can be found in Appendix A.3).
Even though it was not our main objective to compare the absolute performance of the models, it is noteworthy that simplified GCNs perform surprisingly well on DBLP-hard. Despite the simplicity of the approach, the model yields higher scores than GraphSAINT, JKNets, and fixed-capacity GATs, and is only outperformed by GraphSAGE-Mean.
A limitation of the present work is that we assume that the true labels of each time step become available as training data for the next time step. In practice, however, only a small fraction of vertices might come with labels for training, while the larger part could be annotated by the model itself. Adapting our experimental procedure to use only a small fraction of true labels in each time step would be an interesting direction of future work.
One could further argue that deleting data that is not linked to the most recent data points would be a viable alternative to deletion based on a fixed time difference. However, this approach would only be feasible in retrospect because, in real-world scenarios, it is impossible to know whether future data will include a link to a past data point. Still, future work could involve employing other methods to determine which data to delete, such as the personalized PageRank score (Bojchevski et al., 2020).
7 CONCLUSION
Temporal graphs occur in many real-world scenarios such as citation graphs, transaction graphs, and social graphs. Practitioners face a trade-off between memory requirements, which are tied to the temporal window size, and expected accuracy of their models. Until now, it was not clear, how GNNs can be efficiently trained in those online scenarios, especially when distribution shift becomes an issue. We demonstrate that a high level of accuracy can be retained, when training only on a fraction of the temporal graph, determined by a temporal window. The results of this paper can serve as guidelines for training GNNs on temporal graphs, particularly regarding the intentional forgetting of data while retaining a certain percentage of predictive power. For researchers, we supply our newly compiled datasets along with an implementation of the experimental procedure.
We will make the code and data available to reviewers during the peer-reviewing process as suggested in the ICLR 2021 author’s guide.
A APPENDIX
A.1 ALGORITHM FOR OUR EXPERIMENTAL PROCEDURE
Algorithm 1 outlines our incremental training and evaluation procedure.
Data: Temporal graph G, features X, labels y, time steps t, temporal window size c, epoch budget n_epochs
Result: Predicted class labels for the vertices in each time step of the graph

known_classes ← ∅
θ ← initialize_parameters()
for t* ← t_start to t_end do
    G̃ ← subgraph of G induced on vertices u, where t* − c ≤ ts_min(u) ≤ t*
    ỹ_train ← ỹ_u, where ts_min(u) < t*
    if do_cold_restart then
        // Cold restart: re-initialize all parameters
        θ ← initialize_parameters()
    else
        // Warm restart: initialize new parameters, copy others
        tmp ← clone(θ)
        θ ← initialize_parameters()
        θ|known_classes ← tmp|known_classes
    end
    θ ← train(θ, G̃, X̃, ỹ_train) for n_epochs epochs
    ỹ_pred ← predict(θ, G̃, X̃) for vertices u, where ts_min(u) = t*
    known_classes ← known_classes ∪ set(ỹ_train)
end

Algorithm 1: Incremental training procedure of our experimental apparatus
A.2 DATASET DETAILS
In the following, we outline the dataset compilation procedure and supply further descriptive statistics of the resulting datasets.
PharmaBio To compile the PharmaBio dataset, we use the metadata of 543,853 papers by Pharma and Biotech companies from Web of Science (Melnychuk et al., 2019). After removing duplicates, our data cleaning procedure ensures that there is a certain amount of labels for each class per year and that each paper is connected to at least one other paper by a same-author edge. More specifically, we: (1) Keep only papers that are in a journal category with at least 20 papers per year; (2) Keep only papers where at least one of the authors has at least two papers per year; (3) Create vocabulary of words (regular expression: \w\w+) that appear in at least 20 papers globally and keep only papers with at least one of these words. We iterate steps 1–3 until no further paper has been removed in one pass. We end up with 68,068 papers from 23,689 authors working for 68 companies. These papers are distributed across 2,818 journals which are, in turn, categorized into seven journal categories. During preprocessing, each paper becomes a vertex in the graph. The class of the paper is the category of the journal in which it was published. We insert an edge between two vertices, if they share at least one common author (based on string comparison).
DBLP-easy To compile these datasets, we use the DBLP Citation Network dataset (version 10, https://aminer.org/citation) (Tang et al., 2008) as a basis. It comprises 3M citing documents and 25M citations to 2M distinct cited documents, spanning a range of years. We use venues (conferences or journals) as class labels and use citations as edges. First, we select the subset from 1990 until 2015. Then, we follow a similar procedure as above: (1) Keep only papers from venues that have at least $\tau_{venue}$ papers in each year they occur (may be only every second year). (2) Keep only papers that stand in at least one citation relation to another paper. (3) Remove papers from venues that occur only in a single year. (4) Keep only papers with at least one word from a vocabulary of words that are in at least $\tau_{words}$ papers. We iterate steps 1–4 until no further paper has been removed in one pass.
DBLP-hard The difference between DBLP-easy and DBLP-hard is that $\tau_{venue} := 100$ papers in the easy variant and $\tau_{venue} := 45$ papers in the hard variant. The minimum word occurrence threshold $\tau_{words}$ is set to 20 for DBLP-easy and 40 for DBLP-hard. Finally, we construct the graph with papers as vertices, citations as edges, and venues as classes.
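The iterate-until-fixpoint cleaning shared by all three datasets can be sketched generically as follows; the filter functions standing in for the numbered steps above are hypothetical.

```python
def filter_until_fixpoint(papers, filters):
    """Apply the cleaning steps repeatedly until a full pass removes nothing.

    papers: list of paper records; filters: list of functions, one per
    numbered cleaning step, each mapping a paper list to a filtered list."""
    while True:
        n_before = len(papers)
        for step in filters:
            papers = step(papers)
        if len(papers) == n_before:  # fixpoint: no paper removed in this pass
            return papers
```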
For all three datasets, we use L2-normalized tf-idf (Salton & Buckley, 1988) representations as vertex features, based on the corresponding papers' titles. We estimate the power law coefficient $\alpha$ via maximum likelihood (Newman, 2005): $\alpha = 1 + n \left( \sum_{u \in V} \ln \frac{\deg_u}{\deg_{\min}} \right)^{-1}$, where $\deg_{\min}$ is 1 (2 for PharmaBio).
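For illustration, the estimate can be computed directly from the degree sequence; this sketch follows the formula above and is not the original script.

```python
import numpy as np


def power_law_alpha(degrees, deg_min=1):
    """Maximum-likelihood power-law coefficient (Newman, 2005):
    alpha = 1 + n * (sum_u ln(deg_u / deg_min))^(-1)."""
    deg = np.asarray([d for d in degrees if d >= deg_min], dtype=float)
    return 1.0 + len(deg) / np.sum(np.log(deg / deg_min))
```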
In Figure 5, we visualize the degree distribution, label distribution, the distribution over years, as well as the distributions of temporal differences (as described in Section 4). All compiled datasets seem to follow a power law distribution, which is typical for citation and co-authorship graphs.
For each dataset, we chose the boundaries of our evaluation time steps $[t_{start}, t_{end}]$ such that 25% of the total number of vertices lie before $t_{start}$, and $t_{end}$ is the final time step. For PharmaBio (1985–2016), that is $t_{start} = 1999$, and for both DBLP variants (1990–2015), that is $t_{start} = 2004$. Data before $t_{start}$ may be used for training, depending on the window size. Regarding changes in the class set (distribution shift), DBLP-easy has 12 venues in total, including one bi-annual conference and four new venues appearing in 2005, 2006, 2007, and 2012. DBLP-hard has 73 venues, including one discontinued, nine bi-annual, six irregular venues, and 23 new venues.
A.3 IMPLEMENTATION DETAILS AND HYPERPARAMETERS
We tune the hyperparameters separately for each window size and each restart configuration. We tune the hyperparameters on DBLP-easy and use the same set of hyperparameters for DBLP-hard and PharmaBio.
For experiments Q1–Q3, we design the models to have comparable capacity: one hidden layer with 64 hidden units. We use ReLU activation on the hidden layer of MLP and GS-Mean. GS-Mean has one hidden layer, i. e., two graph convolutional layers, with 32 units for self-connections and 32 units for aggregated neighbor representations. GAT has one hidden layer composed of 8 attention heads with 8 hidden units per head, along with one attention head for the output layer. We initialize the model parameters according to Glorot and Bengio (Glorot & Bengio, 2010). For both GS-Mean and GAT, the output of the second layer corresponds to the number of classes. We use dropout probability 0.5 on the hidden units for all models in experiment Q3. We use Adam (Kingma & Ba, 2014) to optimize for cross-entropy. We tune the learning rates on DBLP-easy with a search space of $\{10^{-1}, 5 \cdot 10^{-2}, 10^{-2}, 5 \cdot 10^{-3}, 10^{-3}, 5 \cdot 10^{-4}, 10^{-4}\}$ and re-use these learning rates for the other datasets. The learning rates are tuned separately for each model, each parameter reinitialization strategy, and each window size. We do not use weight decay because it did not increase the performance (search space $\{0, 10^{-3}, 5 \cdot 10^{-4}, 10^{-4}, 5 \cdot 10^{-5}, 10^{-5}\}$). The optimal learning rates can be found in Figure 6 for Q1, Figure 7 for Q2, and Figure 8 for Q3. For the implementation of GraphSAGE-Mean and GATs, we use DeepGraphLibrary (Wang et al., 2019). All methods are trained transductively: for each new snapshot, the new vertices are inserted into the graph without their labels; then, the models are allowed to (up-)train before making predictions.
For experiment Q4, we use two hidden layers with 64 hidden units each to make use of jumping knowledge (Xu et al., 2018), as suggested as base architecture in GraphSAINT (Zeng et al., 2020). The learning rate is tuned in the space of {0.0001, 0.001, 0.01, 0.1}. Dropout probability is set to 0.2. We do not use weight decay. We also tune the batch size of GraphSAINT in the range of {256, 512, 2048, 4096}, as the subgraph size is an important hyperparameter. For simplified GCN, we tune the learning rate in the range of {0.0005, 0.001, 0.005, 0.01, 0.05} and set the neighborhood aggregation parameter K to 2, corresponding to two-layer aggregation. For the implementation of GraphSAINT and JKNet, we use PyTorch-geometric (Fey & Lenssen, 2019). The optimal hyperparameter values as well as the respective search spaces for experiment Q4 can be found in Table 3. JKNets and simplified GCNs are trained transductively, while GraphSAINT is trained inductively, as suggested by the original work (Zeng et al., 2020).
A.4 EXTENDED RESULTS
Table 4 shows the full results table with both warm and cold restarts for experiment Q3. Table 5 shows the full results table with both warm and cold restarts for experiment Q4. Figure 9 visualizes the results for each time step of experiment Q3.

1. What is the main research question addressed in the paper regarding online or incremental learning in temporal graphs?
2. What are the strengths and weaknesses of the proposed approach in comparison to prior works in the field?
3. How does the paper formulate the problem of forgetting older data, and what are some potential issues with this formulation?
4. Can you provide examples of applications where dynamic node classification and incremental learning in temporal graphs are essential?
5. What are some of the missing important related works that should be referenced and discussed in the paper to better position it in the existing literature?
6. How does the paper's contribution compare to previous works that have studied the impact of the temporal window and its size, as well as discarding past data and representing past data differently?
7. What are your thoughts on how the results and findings of the paper depend on the choice of time steps and granularity used by the authors?
8. Do you think the paper could benefit from including a more detailed discussion of the implications of their new dataset with controlled distribution shift?

Review
This work studies the problem of online or incremental learning in temporal graphs (dynamic networks), and more precisely, whether past data can be discarded/ignored without losing predictive accuracy under the assumption that there is the presence of a distribution shift. This question has been essentially investigated over the years in various contexts, e.g., relational learning and classification in dynamic or time-evolving networks. It is also completely obvious that forgetting older data, especially under the assumption of a distribution shift, makes sense and is the correct thing to do. This is exactly what has been done in time-series forecasting for decades. The problem formulation is unclear and can be more precisely defined and motivated appropriately. This needs to be fixed. Are the class labels of a node changing over time, so if a node has label A at time t, then at time t+1 it could have label B, etc. This doesn’t seem true, as it seems the class labels of the nodes are “static”, which is unrealistic in many cases. How are the graph snapshots created? How was the timespan selected? What does every time step represent (1 hour, 5 minutes, etc.)? Also, are the node features changing over time? This doesn’t seem true, but if this is the case, then it is unclear why this would be the case in practice (it would be great to provide some motivation for this, or an example application or problem where this may be true). There are many assumptions that make this problem unrealistic. Furthermore, there have even been works that study the dynamic node classification problem previously, see [1-2] below.
The contribution and novelty of this work is unclear. Many important related works are missing. There have been countless works that have studied the impact of the temporal window and its size, as well as discarding past data, and using different amounts, as well as the representation of that past data (exponentially weighting links). This work also studies the impact of ignoring past data on node classification. Furthermore, many of the standard papers on this topic are seemingly missing such as CTDNE [10] and JODIE [6]. There are many other important works on incremental/online learning in dynamic and streaming graphs that are missing in the paper, see [4]-[13], which need to be referenced and appropriately discussed, mentioning the differences, and so on. The real contribution seems to be a new dataset with a controlled distribution shift. But putting this work into perspective with the related work, and explicitly stating the differences would help clarify the contribution and better position this work with respect to the existing literature.
Pros
Paper is well-written for the most part and easy to understand
New dataset with controlled distribution shift
Cons
Limited technical novelty and contribution
Important related work is missing and should be discussed appropriately to better position the work
Problem formulation is unclear and can be more precisely defined, and motivated.
Previous work has studied essentially the same research question and findings are obvious
The results and findings are in terms of time steps, however, the notion of a time step is not the same for every graph, nor is it ever discussed how the time steps are actually derived. Does every time step represent 30 seconds, 5 minutes, 1 hour, 1 day, etc.? Furthermore, the results only make sense for the specific time step chosen for each graph. For instance, it is mentioned that "GNNs achieve 95% accuracy with a small window size of 3 or 4 time steps". However, if the time step is extremely large, then the results/findings change. And so all the findings in this paper and the discussion depend precisely on the data and the authors' choice of how to create the time steps, and what granularity to use, which isn't discussed. Minor comment: the labels in nearly all the figures are too small to read.
[1] Time-evolving relational classification and ensemble methods
[2] Deep dynamic relational classifiers: Exploiting dynamic neighborhoods in complex networks
[3] A task-driven approach to time scale detection in dynamic networks
[4] Dynamic Node Embeddings From Edge Streams
[5] Afraid: fraud detection via active inference in time-evolving social networks
[6] Learning Dynamic Embeddings from Temporal Interactions
[7] Node Embedding over Temporal Graphs
[8] Representation Learning in Continuous Entity-Set Associations
[9] Efficient representation learning using random walks for dynamic graphs
[10] Continuous-Time Dynamic Network Embeddings
[11] Dyn2Vec: Exploiting dynamic behavior using difference networks-based node embeddings for classification
[12] Real-Time Streaming Graph Embedding Through Local Actions
[13] Temporal Graph Offset Reconstruction: Towards Temporally Robust Graph Representation Learning
ICLR | Title
Online Learning of Graph Neural Networks: When Can Data Be Permanently Deleted
Abstract
Online learning of graph neural networks (GNNs) faces the challenges of distribution shift and ever growing and changing training data, when temporal graphs evolve over time. This makes it inefficient to train over the complete graph whenever new data arrives. Deleting old data at some point in time may be preferable to maintain a good performance and to account for distribution shift. We systematically analyze these issues by incrementally training and evaluating GNNs in a sliding window over temporal graphs. We experiment with three representative GNN architectures and two scalable GNN techniques, on three new datasets. In our experiments, the GNNs face the challenge that new vertices, edges, and even classes appear and disappear over time. Our results show that no more than 50% of the GNN's receptive field is necessary to retain at least 95% accuracy compared to training over a full graph. In most cases, i. e., 14 out of 18 experiments, we even observe that a temporal window of size 1 is sufficient to retain at least 90%.
1 INTRODUCTION
Training of Graph Neural Networks (GNNs) on temporal graphs has become a hot topic. Recent works include combining GNNs with recurrent modules (Seo et al., 2018; Manessi et al., 2020; Sankar et al., 2020; Pareja et al., 2020) and vertex embeddings as a function of time to cope with continuous-time temporal graphs (da Xu et al., 2020; Rossi et al., 2020a). Concurrently, other approaches have been proposed to improve the scalability of GNNs. Those include sampling-based techniques (Chiang et al., 2019; Zeng et al., 2020) and shifting expensive neighborhood aggregation into pre-processing (Wu et al., 2019; Rossi et al., 2020b) or post-processing (Bojchevski et al., 2020).
However, there are further fundamental issues with temporal graphs that are not properly answered yet. First, as new vertices and edges appear (and disappear) over time, so can new classes. This results in a distribution shift, which is particularly challenging in an online setting, as there is no finite, a-priori known set of classes that can be used for training and it is not known when a new class appears. Second, scalable techniques for GNNs address the increased size of the graph, but always operate on the entire graph and thus on the entire temporal duration the graph spans. However, training on the entire history of a temporal graph (even in the context of scaling techniques like sampling (Chiang et al., 2019; Zeng et al., 2020)) may actually not be needed to perform tasks like vertex classification. Thus, it is important to investigate if, at some point in time, one can actually "intentionally forget" old data and still retain the same predictive power for the given task. In fact, it has been observed in other tasks such as stock-market prediction that too much history can even be counterproductive (Ersan et al., 2020).
Proposed Solution and Research Questions While we do not suggest to use an entirely new GNN architecture, we propose to adapt existing GNN architectures and scalable GNN techniques to the problem of distribution shift in temporal graphs. In essence, we propose a new evaluation procedure for online learning on the basis of the distribution of temporal differences, which assesses the nature of how vertices are connected in a temporal graph by enumerating the temporal differences of connected vertices along k-hop paths. This information is crucial for balancing between capturing the distribution shift while having sufficient vertices within the GNN’s receptive field.
In summary, the central question we aim to answer is, whether we can intentionally forget old data without losing predictive power in an online learning scenario under presence of distribution shift.
We simulate this scenario by applying temporal windows of different sizes over the temporal graph, as illustrated in Figure 1. The window size c resembles how much history of the temporal graph is used for training, or with other words: which information we forget. In this example, data older than t − 2 is ignored. We evaluate the accuracy of representative GNN architectures and scalable GNN techniques trained on the temporal window, against training on the entire timeline of the graph (full history). We evaluate the models by classifying the vertices at time step t, before we advance to the next time step.
To answer the research question, we break it down into four specific questions Q1 to Q4, each answered in a separate experiment. For Q1: Distribution Shift under Static vs Incremental Training, we verify that incremental training is necessary to account for distribution shift, compared to using a once-trained, static model. Extending from Q1, we investigate in Q2: Training with Warm vs Cold Restarts whether it is preferable to reuse model parameters from the previous time step (warm start) or restart with newly initialized parameters at each time step (cold start). In Q3: Incremental Training on Different Window Sizes, we answer the question what influence different choices for the window sizes have, i. e., how far do we need to look into the past such that a GNN trained on the window is still competitive to a model trained on the full graph. Question Q4 extends Q3 by considering Q4: Incremental Training with Scalable GNN Methods, i. e., how scalable GNN approaches compare to using the full history of the temporal graph and to which extent scaling techniques can be applied on top of the temporal window.
New Datasets To enable an analysis with a controlled extent of distribution shift, we contribute three newly compiled temporal graph datasets based on scientific publications: two citation graphs based on DBLP and one co-authorship graph based on Web of Science. To determine candidate window sizes, we contribute a new measure to compute the distribution of temporal differences within the k-hop neighborhood of each vertex, where k corresponds to the number of GNN layers. We select the 25th, 50th, and 75th percentiles of this distribution as candidate window sizes. This results in window sizes of 1, 3, and 6 time steps for the two DBLP datasets, and 1, 4, 8 for the Web of Science dataset.
Results We select three representative GNN architectures: GraphSAGE-Mean (Hamilton et al., 2017), graph attention networks (Veličković et al., 2018) and jumping knowledge networks (Xu et al., 2018) along with graph-agnostic multi-layer perceptrons. As scalable GNN techniques, we consider GraphSAINT (Zeng et al., 2020) as well as simplified GCNs (Wu et al., 2019). The results of our experiments show that already with a small window size of 3 or 4 time steps, GNNs achieve at least 95% accuracy compared to using the full graph. With window sizes of 6 or 8, 99% accuracy can be retained. With a window size of 1, for almost all experiments, a relative accuracy of no less than 90% could be retained, compared to models trained on the full graph. Furthermore, our experiments confirm that incremental training is necessary to account for distribution shift in temporal graphs and we show that both reinitialization strategies are viable and differ only marginally, when the learning rates are tuned accordingly. Surprisingly, simplified GCNs perform notably well on the most challenging dataset DBLP-hard and are only outperformed by GraphSAGE-Mean.
We outline the related work below. We provide a problem formalization and selection of GNNs for our experiments in Section 3. We describe the experimental apparatus and datasets in Section 4. The results of our experiments are reported in Section 5 and discussed in Section 6, before we conclude.
2 RELATED WORK
In Rossi & Neville (2012), the authors distinguish between tasks where the predicted attribute is static or changing over time. The dynamic graph problem is set up in a way that vertex and edge features may change over time and that edges may appear and disappear. This is conceptually different as it assumes a fixed vertex set, whereas in our case, the vertex set is changing over time. Furthermore, the predicted attribute is static in our case because it will not change after the respective vertex has appeared. Several recent works follow this setup and assume a fixed vertex set (Trivedi et al., 2017; Seo et al., 2018; Kumar et al., 2018; Trivedi et al., 2019; Manessi et al., 2020; Sankar et al., 2020).
In Park et al. (2017), the authors use vertex features concatenated with the adjacency vector and apply 1D-convolution. The experiments comprise link prediction and user state prediction. 1Dconvolution on the time axis can be regarded as a sliding window. However, the paper does not consider new classes during the evaluation time frame and does not analyze how much past training data would be required for up-training.
In Fish & Caceres (2017), the authors aim to find the optimal window size, given a dataset, a task, and a model. They treat the window size as a hyperparameter and propose an optimization algorithm which requires multiple runs of the model. This might be rather expensive. Furthermore, the study does not supply insights on how much predictive power can be preserved when selecting a nearoptimal but much smaller, and thus more efficient, window size.
CTDNE (Nguyen et al., 2018) is an embedding method for continuous-time graphs introducing temporal random walks. This approach considers graphs with featureless vertices with the objective to learn a meaningful/useful vertex embedding. In a recent extension of CTDNE (Lee et al., 2020), the method is applied to edge streams via up-training of the embedding. Comparing this approach to our work, we find that we have another task (discrete-time online vertex classification vs continuoustime online vertex embedding), consider a different type of graph (attributed vs featureless), and face different challenges (adaption to new classes). Nevertheless, it would be an interesting direction of future work to apply our experimental procedure to (streaming) CTDNE.
For discrete-time dynamic graphs involving new vertices, Goyal et al. (2018) proposes DynGEM as an autoencoder-like approach that jointly minimize reconstruction loss between t and t + 1 and embedding distance between connected vertices. In Dyngraph2vec (Goyal et al., 2020), the authors extend this approach by additional variants such as recurrent decoders.
EvolveGCN (Pareja et al., 2020) and T-GAT (da Xu et al., 2020) are both inductive approaches designed for attributed temporal graphs. EvolveGCN predicts the parameters of a GCN with an RNN by tying the RNN output or hidden state to the GCN parameters. T-GAT introduces a selfattention mechanism on the time axis. These approaches can cope with newly appearing vertices and are able to predict different labels for the same node at different times. They both require a sequence of graph snapshots for training. When new classes appear, these sequence-based models would need to be retrained. In our setting with limited window sizes, the sequence of snapshots within a window, i.e. the data available for retraining, might become very short: down to only one snapshot in the extreme case. Furthermore, these approaches focus on predicting future edges or predicting a label for each vertex at each time step. Therefore, the models serve a different purpose compared to the setting that we face, in which the label of each vertex is fixed. For these two reasons, we have focused on adapting and evaluating more efficient, static architectures as well as scalable GNN techniques, while leaving the adaption of T-GAT and EvolveGCN as future work.
To summarize, most works on dynamic graphs assume a fixed vertex set, while considering dynamics within the vertex/edge features, and/or the edges themselves. Inductive approaches such as EvolveGCN and T-GAT do allow new nodes. CTDNE can deal with new nodes via up-training. Previous work on finding optimal window sizes proposes a hyperparameter tuning algorithm. However, none of these works specifically analyzes the problem of new classes appearing over time and how much past training data is necessary, or how few is enough, to maintain good predictive power.
3 PROBLEM FORMALIZATION AND SELECTED METHODS
Problem Formalization We consider a vertex-labeled temporal graph Gt = (Vt, Et) with vertices Vt and edges Et, provided by a sequence of snapshots ordered by t ∈ N. Thus, Vt is the (finite) set of vertices that are in the graph at time step t, and Et the corresponding set of edges at time step t. Furthermore, we define the set of all vertices V ::= ⋃ i∈N Vi and all edges E ::= ⋃ i∈NEi, i. e., G = (V,E). Let tsmin : V → N be a function that returns for each vertex v ∈ V the timestamp at which the vertex was first added to the graph, i. e., tsmin : v 7→ min{i ∈ N|v ∈ Vi}. Finally, for each vertex v ∈ V we have a feature vector Xv ∈ RD, where D is the number of vertex features, and a class label yv ∈ C with C being the global set of classes C ::= ⋃ i∈N Ci.
In each time step t, previously unseen vertices and edges and even new classes may appear as illustrated in Figure 1. For these temporal graphs, we investigate training graph neural networks for the vertex classification task, i. e., assigning class labels y to previously unseen vertices based on vertex attributes X and connections to other vertices via edges. We denote the history of vertices and edges we take into account as the temporal window. The temporal window spans a range of multiple time steps, which we denote as the temporal window size c.
Selected Graph Neural Networks Several works have been proposed that combine GNNs with recurrent neural networks to capture temporal dynamics (Seo et al., 2018; Manessi et al., 2020; Sankar et al., 2020; Pareja et al., 2020). Other works focus on continuous-time temporal graphs (da Xu et al., 2020; Rossi et al., 2020a). Our work is orthogonal to those works as we focus on the distribution shift of temporal graphs and the question if and when old data can be deleted without sacrificing predictive power. In the following, we introduce and motivate our choice of representative GNN architectures as well as scalable GNN techniques for our experiments.
Dwivedi et al. (2020) have introduced a benchmarking framework to re-evaluate several recent GNN variants. Dwivedi et al. distinguish between isotropic and anisotropic GNN architectures. In isotropic GNNs, all edges are treated equally. Apart from graph convolutional networks (Kipf & Welling, 2017), examples of isotropic GNNs are GraphSAGE-mean (Hamilton et al., 2017), DiffPool (Ying et al., 2018), and GIN (Xu et al., 2019). In anisotropic GNNs, the weights for edges are computed dynamically. Instances of anisotropic GNNs include graph attention networks (Veličković et al., 2018), GatedGCN (Bresson & Laurent, 2017) and MoNet (Monti et al., 2017).
We select GraphSAGE-Mean (GS-Mean) (Hamilton et al., 2017) as a representative for isotropic GNNs because its special treatment of the vertices’ self-information has shown to be beneficial (Dwivedi et al., 2020). The representations from self-connections are concatenated to averaged neighbors’ representations before multiplying the parameters. In GS-Mean, the procedure to obtain representations in layer l+ 1 for vertex i is given by the equations ĥl+1i = h l i|| 1degi ∑ j∈N (i) h l j and hl+1i = σ(U lĥl+1i ), where N (i) is the set of adjacent vertices to vertex i, U l are the parameters of layer l, σ is a non-linear activation function, and ·||· is the concatenation. We select Graph Attention Networks (GATs) by (Veličković et al., 2018) as representative for the class of anisotropic GNNs. In GATs, representations in layer l + 1 for vertex i are computed as follows: ĥl+1i = w l ih l i + ∑ j∈N (i) w l ijh l j and h l+1 i = σ(U
lĥl+1i ), where the edge weights wij and self-connection weightswi are computed by a self-attention mechanism based on the representations hi and hj , i. e., the softmax of a(U lhi||U lhj) over edges, where a is a single-layer neural network with LeakyReLU activation.
Scaling Graph Neural Networks to Large Graphs Several approaches have been proposed to scale GNNs to large graphs. In general, these approaches fall into two categories: sampling either locally (Hamilton et al., 2017; Huang et al., 2018), or globally (Chiang et al., 2019; Zeng et al., 2020), and separating neighborhood aggregation from the neural network component (Wu et al., 2019; Rossi et al., 2020b; Bojchevski et al., 2020).
From both categories, we select one representative for our experiments. We use GraphSAINT (Zeng et al., 2020) as state-of-the-art sampling technique along with simplified GCNs (Wu et al., 2019) as a representative for shifting the neighborhood aggregation into a preprocessing step.
Simplified GCN (Wu et al., 2019) is a scalable variant of Graph Convolutional Networks (Kipf & Welling, 2017) that admits regular mini-batch sampling. Simplified GCN removes nonlinearities and collapses consecutive weight matrices into a single one. Thus, simplified GCN can be described by the equation ŶSGC = softmax(SKXΘ), where the parameter K has a similar effect as the number of layers in a regular GCN, S is the normalized adjacency matrix and Θ is the weight matrix. Instead of using multiple layers, the k-hop neighbourhood is computed by SK , which can be precomputed. This makes Simplified GCN efficient to compute, while not necessarily harming the performance.
In GraphSAINT (Zeng et al., 2020), entire subgraphs are sampled for training GNNs. Subgraph sampling introduces a bias which is counteracted by normalization coefficients for the loss function. The authors propose different sampling methods: vertex sampling, edge sampling, and random-walk sampling. We use the best-performing random-walk sampling for our experiments. The underlying GNN is exchangeable, yet the authors suggest to use Jumping Knowledge networks (JKNets) (Xu et al., 2018). JKNets introduce skip-connection to GNNs: each hidden layer has a direct connection to the output layer, in which the representations are aggregated, e. g., by concatenation. This enables the network to learn from representations corresponding to different levels of the local neighborhood. To isolate the effect of GraphSAINT sampling, we also include JKNets in our comparison.
4 EXPERIMENTAL APPARATUS
Procedure For each evaluation time step $t \in [t_{\mathrm{start}}, t_{\mathrm{end}}]$, we construct a subgraph $\tilde{G} = (\tilde{V}, \tilde{E})$ of $G$ induced on $\tilde{V} = \{v \in V \mid t - c \leq ts_{\min}(v) \leq t\}$ and $\tilde{E} = \{(u, v) \in E \mid u, v \in \tilde{V}\}$. The parameter $c$ denotes the window size, i.e., it determines the $c$ time steps that the temporal window spans. Then, we supply the competing models with the subgraph $\tilde{G}$, the corresponding vertex features, and labels for vertices $\{u \in \tilde{V} \mid ts_{\min}(u) < t\}$, along with an epoch budget for updating their parameters. The task is to predict the labels for vertices $\{u \in \tilde{V} \mid ts_{\min}(u) = t\}$. Finally, we evaluate the accuracy of the model before incrementing $t$. We provide an algorithmic view in Appendix A.1.
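A minimal sketch of how such a windowed subgraph and the corresponding train/evaluation vertex sets could be derived; the data structures (vertex set, edge list, and a `ts_min` dictionary) are illustrative.

```python
def temporal_window_subgraph(vertices, edges, ts_min, t, c):
    """Induce the subgraph on vertices that first appeared within [t - c, t]."""
    v_window = {v for v in vertices if t - c <= ts_min[v] <= t}
    e_window = [(u, v) for (u, v) in edges if u in v_window and v in v_window]
    train_vertices = {u for u in v_window if ts_min[u] < t}   # labels are supplied
    eval_vertices = {u for u in v_window if ts_min[u] == t}   # labels to predict
    return v_window, e_window, train_vertices, eval_vertices
```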
When advancing from one time step to the next, we consider two options of initializing the model. Using cold restarts corresponds to randomly re-initializing each model in each time step and training it from scratch. In contrast, when using warm restarts, we take the final weights of the previous time step as initialization for the next time step. In both cases, we initialize the additional parameters in the output layer randomly, when new classes appear.
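For warm restarts with newly appearing classes, the output layer has to grow. The following sketch shows one way to do this, assuming classes are indexed contiguously with new classes appended at the end; the function name is hypothetical.

```python
import torch

def expand_output_layer(old_weight, old_bias, num_classes_new):
    """Warm restart when new classes appear: keep the output-layer rows of
    known classes, randomly (Glorot) initialize the rows of new classes."""
    num_old, hidden = old_weight.shape
    weight = torch.empty(num_classes_new, hidden)
    torch.nn.init.xavier_uniform_(weight)     # fresh init for all rows first
    weight[:num_old] = old_weight.detach()    # then copy parameters of known classes
    bias = torch.zeros(num_classes_new)
    bias[:num_old] = old_bias.detach()
    return weight, bias
```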
Novel Measure for Distribution of Temporal Differences In the following, we develop a novel dataset-agnostic measure for the distribution of temporal differences within the k-hop neighborhood of each vertex. When k graph convolution layers are used, the features within the k-hop neighborhood of each vertex are taken into account for its prediction. This k-hop neighborhood is referred to as the receptive field of a GNN (Chen et al., 2018). When we incrementally train GNNs on a sliding window through time, the window size determines which vertices are available for training and for inference. Ideally, the temporal window covers all vertices within the GNN's receptive field, such that GNNs have access to all relevant information.
How many vertices of the receptive field are contained in a temporal window of size c depends on the characteristics of the datasets. Therefore, we introduce a new measure for the distribution of temporal differences $\mathrm{tdiff}_k$ within the receptive field of a k-layer GNN. Let $\mathcal{N}^k(u)$ be the k-hop neighborhood of u, i.e., the set of vertices that are reachable from u by traversing at most k edges. Then, we define $\mathrm{tdiff}_k(G)$ to be the multiset of time differences to past vertices:
$$\mathrm{tdiff}_k(G) := \{\, ts_{\min}(u) - ts_{\min}(v) \mid u \in V \wedge v \in \mathcal{N}^k(u) \wedge ts_{\min}(u) \geq ts_{\min}(v) \,\} \qquad (1)$$
Please note that this is a measure to determine comparable window sizes over different datasets and different granularities. It needs to be computed only once per dataset, prior to any training iterations. When we consider a GNN with k graph convolution layers, the distribution $\mathrm{tdiff}_k$ enumerates the temporal differences within the receptive field of the GNN. In our experiments, we will use the 25th, 50th, and 75th percentiles of this distribution for analyzing the effect of the temporal window size. This choice corresponds to an average receptive field coverage of 25%, 50%, and 75%.
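Equation (1) can be computed directly with a depth-limited breadth-first search from every vertex. The sketch below assumes an adjacency-list dictionary `neighbors` and the `ts_min` mapping, and returns the multiset as a list so that percentiles can be taken afterwards.

```python
from collections import deque

def tdiff_k(vertices, neighbors, ts_min, k):
    """Multiset of temporal differences to past vertices within each k-hop
    neighborhood (Eq. 1), via depth-limited BFS from every vertex."""
    diffs = []
    for u in vertices:
        seen, frontier = {u}, deque([(u, 0)])
        while frontier:
            v, depth = frontier.popleft()
            if depth == k:
                continue                 # do not expand beyond k hops
            for w in neighbors[v]:
                if w not in seen:
                    seen.add(w)
                    frontier.append((w, depth + 1))
        for v in seen:                   # includes u itself (difference 0)
            if ts_min[u] >= ts_min[v]:
                diffs.append(ts_min[u] - ts_min[v])
    return diffs  # e.g., numpy.percentile(diffs, [25, 50, 75]) gives window sizes
```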
Newly Compiled Datasets Pre-compiled temporal graph datasets for our real-world scenario are surprisingly rare. Therefore we contribute three new temporal graph datasets based on scientific
publications: one temporal co-authorship graph dataset (PharmaBio) as well as two newly compiled temporal citation graph datasets based on DBLP (DBLP-easy and DBLP-hard). These new datasets enable us to simulate a real-world scenario, in which not only new vertices but also new classes (venues) appear over time. Table 1 summarizes the basic characteristics of the datasets and Figure 2 shows the distribution of temporal differences tdiffk for different values of k. For details on the dataset creation procedure as well as degree and label distributions, we refer to Appendix A.2.
Evaluation Measures As our datasets have imbalanced classes, one could argue to use Micro or Macro F1-score as an evaluation measure. However, we are primarily interested in the relative performance between limited-window training and training on the full graph. Motivated by real-world scenarios, we chose sample-based F1-score as our evaluation measure (equivalent to accuracy in single-label scenarios). When aggregating results over time, we use the unweighted average.
5 EXPERIMENTAL RESULTS
We report the results of our experiments along the research questions stated in the introduction.
Q1: Distribution Shift under Static vs Incremental Training In this experiment, we compare a once-trained static model against incrementally trained models. We train the static models for 400 epochs on the data before the first evaluation time step, which comprises 25% of the total vertices. We train incremental models for 200 epochs on temporal windows of 3 time steps (4 on the PharmaBio dataset) before evaluating each time step. All models have comparable capacity.
Figure 3 shows the results. We see that the accuracy of the static models decreases over time on DBLP-easy and DBLP-hard, where new classes appear over time. On PharmaBio, the accuracy of the static models plateaus, while the accuracy of incrementally trained models increases. That confirms our expectations: PharmaBio does not have any new classes, so incrementally trained models merely benefit from the increased amount of training data, while DBLP-easy and DBLP-hard do have new classes appearing during the evaluation time frame. In the following experiments, we only use incrementally trained models because they outperform static models in all cases.
Q2: Training with Warm vs Cold Restarts We compare reusing the parameters of the model from the previous time step (warm restart) against randomly re-initializing the model parameters for each temporal window (cold restart). In both cases, we impose a 200 epoch budget per time step. The window size is set to 4 for PharmaBio and 3 for the two DBLP datasets, corresponding to 50% coverage of the GNNs’ receptive field. All models have comparable capacity.
Figure 4 shows the results. We observe that the results obtained by GNNs using warm and cold restarts are close to each other. On DBLP-hard with 23 new classes appearing during the evaluation steps, GS-Mean seems to benefit from warm restarts, while GATs yield better scores when cold restarts are used. On PharmaBio with a fixed class set, both GNNs benefit from reusing parameters from previous iterations. For now, we conclude that both reinitialization strategies are viable and we proceed by running both variants for the next experiments Q3 and Q4.
Q3: Incremental Training on Different Window Sizes We compare models trained on windows of different sizes against a model trained on all available data, i.e., the full graph, which is our baseline. We select three window sizes per dataset based on the distribution of temporal differences $\mathrm{tdiff}_2$ (see Section 4). These window sizes correspond to quartiles, i.e., the windows cover 25%, 50%, and 75% of the GNNs' receptive field (RF) (see Table 1). Thus, we can compare window sizes across datasets with different characteristics, i.e., connectivity patterns through time and total number of time steps. The epoch budget is 200 and all models have comparable capacity.
Table 2 (top) shows the results. We observe that those GNN variants trained on the full timeline of the graph yield the highest scores on DBLP-easy and DBLP-hard. There, GNNs with window size 1 (25% RF) yield lower scores than training with larger window sizes (50% and 75% RF). On all datasets, the scores for training with limited window sizes larger than 1 are close to the ones of full-graph training. In summary, with window sizes that cover 50% of the receptive field, GNNs and also MLPs achieve at least 95% classification accuracy compared to full-graph training. When 75% of the receptive field is covered by the temporal window, at least 99% accuracy is retained on all datasets. We refer to Appendix A.4 for extended results including both reinitialization strategies.
Q4: Incremental Training with Scalable GNN Methods Similarly to Q3, we again compare different window sizes against training on the full graph. This time, we focus on using scalable GNN techniques and aim to learn how they perform in conjunction with temporal windows. We further alleviate the fixed-capacity constraint of previous experiments and tune the hidden size as an additional hyperparameter. We refer to Appendix A.3 for details on hyperparameter choices.
We compare Simplified GCN and GraphSAINT, while including JKNet to isolate the effect of GraphSAINT sampling. Table 2 (bottom) shows the results. We observe that, again, limiting the window size to cover 50% of the GNN’s receptive field leads to at least 95% relative accuracy, compared to full graph training. As expected, GraphSAINT sampling (with JKNets as a base model) yields slightly lower scores than full-batch JKNets. On DBLP-hard, simplified GCN outperforms the other, more complex models. In terms of relative performance, limiting the receptive field does not negatively impact GraphSAINT on DBLP-hard and PharmaBio.
6 DISCUSSION
We have created a new experimental procedure for temporal graphs with new classes appearing over time, for which we contribute three newly compiled datasets with controlled degrees of distribution shift. In this online learning setup, we have evaluated three representative GNN architectures as well as two GNN scaling techniques. With the goal of generalizable results, we have introduced a new measure for the distribution of temporal differences tdiffk, based on which we have selected the temporal window sizes. Our results show that past data can be permanently deleted very early without diminishing the performance of an online vertex classification model. This has direct consequences for online learning of GNNs on temporal graphs and, thus, impacts how GNNs can be employed for numerous real-world applications.
Our main result is that incremental training with limited window sizes is as good as incremental training over the full timeline of the graph (see Q3 and Q4). With window sizes of 3 or 4 (50% receptive field coverage), GNNs achieve at least 95% accuracy compared to using all available data for incremental training. With window sizes of 6 or 8 (75% receptive field coverage), at least 99% accuracy can be retained. This result holds not only for standard GNN architectures but also when scaling techniques such as subgraph sampling are applied on top of the temporal window. Finally, in almost all experiments, at least 90% of relative accuracy is reached with a window of size 1.
Furthermore, we have verified that incremental training helps to account for distribution shift compared to once-trained, static models (see Q1). We have further investigated reusing parameters from previous iterations (Q2). Our results show that both strategies are viable when learning rates are tuned accordingly. During hyperparameter optimization for Q4, in which we alleviated the fixed-capacity constraint, we further noticed that warm restarts are more suitable for higher-capacity models with low learning rates, while using cold restarts admits lower-capacity models and higher learning rates (the details of hyperparameter optimization can be found in Appendix A.3).
Even though it was not our main objective to compare the absolute performances of the models, it is noteworthy that simplified GCNs perform surprisingly well on DBLP-hard. Despite the simplicity of the approach, the model yields higher scores than GraphSAINT, JKNets, and fixed-capacity GATs, and is only outperformed by GraphSAGE-mean.
A limitation of the present work is that we assume that the true labels of each time step become available as training data for the next time step. In practice, however, only a small fraction of vertices might come with labels for training, while the larger part could be annotated by the model itself. Adapting our experimental procedure to use only a small fraction of true labels in each time step would be an interesting direction of future work.
One could further argue that deleting data that is not linked to the most recent data points would be a viable alternative to deletion based on a fixed time difference. However, this approach would only be feasible in retrospect because, in real-world scenarios, it is impossible to know whether future data will include a link to a past data point. Still, future work could involve employing other methods to determine what data to delete, such as the personalized PageRank score (Bojchevski et al., 2020).
7 CONCLUSION
Temporal graphs occur in many real-world scenarios such as citation graphs, transaction graphs, and social graphs. Practitioners face a trade-off between memory requirements, which are tied to the temporal window size, and the expected accuracy of their models. Until now, it was not clear how GNNs can be efficiently trained in those online scenarios, especially when distribution shift becomes an issue. We demonstrate that a high level of accuracy can be retained when training only on a fraction of the temporal graph, determined by a temporal window. The results of this paper can serve as guidelines for training GNNs on temporal graphs, particularly regarding the intentional forgetting of data while retaining a certain percentage of predictive power. For researchers, we supply our newly compiled datasets along with an implementation of the experimental procedure.
We will make the code and data available to reviewers during the peer-reviewing process as suggested in the ICLR 2021 author’s guide.
A APPENDIX
A.1 ALGORITHM FOR OUR EXPERIMENTAL PROCEDURE
Algorithm 1 outlines our incremental training and evaluation procedure.
Data: Temporal graph G, features X, labels y, time steps t, temporal window size c, epoch budget n_epochs
Result: Predicted class labels for vertices in each time step of the graph

    known_classes ← ∅
    θ ← initialize_parameters()
    for t* ← t_start to t_end do
        G̃ ← subgraph of G induced on vertices u, where t* − c ≤ ts_min(u) ≤ t*
        ỹ_train ← ỹ_u, where ts_min(u) < t*
        if do_cold_restart then
            // Cold restart: re-initialize all parameters
            θ ← initialize_parameters()
        else
            // Warm restart: initialize new parameters, copy others
            tmp ← clone(θ)
            θ ← initialize_parameters()
            θ|known_classes ← tmp|known_classes
        end
        θ ← train(θ, G̃, X̃, ỹ_train) for n_epochs epochs
        ỹ_pred ← predict(θ, G̃, X̃) for vertices u, where ts_min(u) = t*
        known_classes ← known_classes ∪ set(ỹ_train)
    end

Algorithm 1: Incremental training procedure of our experimental apparatus
A.2 DATASET DETAILS
In the following, we outline the dataset compilation procedure and supply further descriptive statistics of the resulting datasets.
PharmaBio To compile the PharmaBio dataset, we use the metadata of 543,853 papers by Pharma and Biotech companies from Web of Science (Melnychuk et al., 2019). After removing duplicates, our data cleaning procedure ensures that there is a certain amount of labels for each class per year and that each paper is connected to at least one other paper by a same-author edge. More specifically, we: (1) Keep only papers that are in a journal category with at least 20 papers per year; (2) Keep only papers where at least one of the authors has at least two papers per year; (3) Create vocabulary of words (regular expression: \w\w+) that appear in at least 20 papers globally and keep only papers with at least one of these words. We iterate steps 1–3 until no further paper has been removed in one pass. We end up with 68,068 papers from 23,689 authors working for 68 companies. These papers are distributed across 2,818 journals which are, in turn, categorized into seven journal categories. During preprocessing, each paper becomes a vertex in the graph. The class of the paper is the category of the journal in which it was published. We insert an edge between two vertices, if they share at least one common author (based on string comparison).
DBLP-easy To compile these datasets, we use the DBLP Citation Network dataset (version 10)1 (Tang et al., 2008) as a basis. It comprises 3M citing documents and 25M citations to 2M distinct cited documents, spanning a wide range of years. We use venues (conferences or journals) as class labels and use citations as edges. First, we select the subset from 1990 until 2015. Then, we follow a similar procedure as above: (1) Keep only papers from venues that have at least τvenue papers in each year they occur (may be only every second year). (2) Keep only papers that stand in at least
1https://aminer.org/citation
one citation relation to another paper. (3) Remove papers from venues that occur only in a single year. (4) Keep only papers with at least one word from a vocabulary of words that are in at least τwords papers. We iterate steps 1–4 until no further paper has been removed in one pass.
DBLP-hard The difference between DBLP-easy and DBLP-hard is that τvenue := 100 papers in the easy variant and τvenue := 45 papers in the hard variant. The minimum word occurrence threshold τwords is set to 20 for DBLP-easy and 40 for DBLP-hard. Finally, we construct the graph with papers as vertices, citations as edges, and venues as classes.
For all three datasets, we use L2-normalized tf-idf (Salton & Buckley, 1988) representations as vertex features based on the corresponding papers' titles. We estimate the power-law coefficient $\alpha$ via maximum likelihood (Newman, 2005): $\alpha = 1 + n \left( \sum_{u \in V} \ln \frac{\deg_u}{\deg_{\min}} \right)^{-1}$, where $\deg_{\min}$ is 1 (2 for PharmaBio).
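This estimator translates directly into a few lines of Python; the function name is ours and the snippet is purely illustrative.

```python
import math

def power_law_alpha(degrees, deg_min=1):
    """Maximum-likelihood estimate of the power-law exponent (Newman, 2005):
    alpha = 1 + n * (sum_u ln(deg_u / deg_min))^{-1}."""
    degs = [d for d in degrees if d >= deg_min]
    n = len(degs)
    return 1.0 + n / sum(math.log(d / deg_min) for d in degs)
```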
In Figure 5, we visualize the degree distribution, label distribution, the distribution over years, as well as the distributions of temporal differences (as described in Section 4). All compiled datasets seem to follow a power law distribution, which is typical for citation and co-authorship graphs.
For each dataset, we chose the boundaries for our evaluation time steps $[t_{\mathrm{start}}, t_{\mathrm{end}}]$ such that 25% of the total number of vertices lie before $t_{\mathrm{start}}$, and $t_{\mathrm{end}}$ is the final time step. For PharmaBio (1985–2016), that is $t_{\mathrm{start}} = 1999$, and for both DBLP variants (1990–2015), that is $t_{\mathrm{start}} = 2004$. Data before $t_{\mathrm{start}}$ may be used for training, depending on the window size. Regarding changes in the class set (distribution shift), DBLP-easy has 12 venues in total, including one bi-annual conference and four new venues appearing in 2005, 2006, 2007, and 2012. DBLP-hard has 73 venues, including one discontinued, nine bi-annual, and six irregular venues, as well as 23 new venues.
A.3 IMPLEMENTATION DETAILS AND HYPERPARAMETERS
We tune the hyperparameters separately for each window size and each restart configuration. We tune the hyperparameters on DBLP-easy and use the same set of hyperparameters for DBLP-hard and PharmaBio.
For experiments Q1–Q3, we design the models to have a comparable capacity: one hidden layer with 64 hidden units. We use ReLU activation on the hidden layer of MLP and GS-Mean. GS-Mean has one hidden layer, i.e., two graph convolutional layers, with 32 units for self-connections and 32 units for aggregated neighbor representations. GAT has one hidden layer composed of 8 attention heads and 8 hidden units per head, along with one attention head for the output layer. We initialize the model parameters according to Glorot and Bengio (Glorot & Bengio, 2010). For both GS-Mean and GAT, the output of the second layer corresponds to the number of classes. We use dropout probability 0.5 on the hidden units for all models in experiment Q3. We use Adam (Kingma & Ba, 2014) to optimize for cross-entropy. We tune the learning rates on DBLP-easy with a search space of $\{10^{-1}, 5 \cdot 10^{-2}, 10^{-2}, 5 \cdot 10^{-3}, 10^{-3}, 5 \cdot 10^{-4}, 10^{-4}\}$ and re-use these learning rates for the other datasets. The learning rates are tuned separately for each model, each parameter reinitialization strategy, and each window size. We do not use weight decay because it did not increase the performance (search space $\{0, 10^{-3}, 5 \cdot 10^{-4}, 10^{-4}, 5 \cdot 10^{-5}, 10^{-5}\}$). The optimal learning rates can be found in Figure 6 for Q1, Figure 7 for Q2, and Figure 8 for Q3. For implementation of GraphSAGE-mean and GATs, we use DeepGraphLibrary (Wang et al., 2019). All methods are trained transductively: for each new snapshot, the new vertices are inserted into the graph without their labels; then, the models are allowed to (up-)train before making predictions.
For experiment Q4, we use two hidden layers with 64 hidden units each to make use of jumping knowledge (Xu et al., 2018), as suggested as the base architecture in GraphSAINT (Zeng et al., 2020). The learning rate is tuned in the space of {0.0001, 0.001, 0.01, 0.1}. Dropout probability is set to 0.2. We do not use weight decay. We also tune the batch size of GraphSAINT in the range of {256, 512, 2048, 4096}, as the subgraph size is an important hyperparameter. For simplified GCN, we tune the learning rate in the range of {0.0005, 0.001, 0.005, 0.01, 0.05} and we set the neighborhood aggregation parameter K to 2, corresponding to two-layer aggregation. For implementation of GraphSAINT and JKNet, we use PyTorch-geometric (Fey & Lenssen, 2019). The optimal hyperparameter values as well as the respective search spaces for experiment Q4 can be found in Table 3. JKNets and simplified GCNs are trained transductively, while GraphSAINT is trained inductively, as suggested by the original work (Zeng et al., 2020).
A.4 EXTENDED RESULTS
Table 4 shows the full results table with both warm and cold restarts for experiment Q3. Table 5 shows the full results table with both warm and cold restarts for experiment Q4. Figure 9 visualizes the results for each time step of experiment Q3. | 1. What is the main contribution of the paper, and what are its strengths?
2. What are the weaknesses of the proposed method, particularly regarding memory usage, parameter setting, and performance consistency?
3. How does the reviewer assess the decoupling strategy and its usefulness in GNN training?
4. What are the concerns regarding the lazy-update technique, and how does it affect the training procedure and memory usage?
5. How does the reviewer evaluate the effectiveness of the proposed approach in reducing training time while maintaining performance?
6. Are there any questions or concerns about the experimental settings, such as the choice of epochs or validation loss, and how they affect the results? | Review | Review
Summary
This paper proposes a paradigm which speeds up the training time of GNNs while not compromising too much performance. The method adopts a layerwise training procedure. In particular, the authors inject a loss function at each layer while storing and fixing the feed-forward values of its previous layer. The training is then carried out across all layers in parallel, which allows the layer updates to be decoupled, something that is not possible in previous works. A further improvement (lazy-update), which avoids recomputing the stored feed-forward values of each layer in every epoch, is used to reduce the training time.
Reasons for Score
The paper discusses the important topic of layer updating and proposes a decoupling strategy that enables a layerwise parallel updating scheme. However, the lazy-update technique, which is argued to be another important contribution, is not fully justified with respect to its memory usage, the setting of its parameter, and the consistency of performance across different experiments.
Pros
The paper tackles the problem of layer decoupling in GNN training, which is an important problem when training large-scale networks. The decoupling training approach is useful if the memory can hold multiple feed-forward layers’ outputs, which is not the case with previous methods [1].
Cons
The paper proposes two strategies: the decoupling technique and lazy-update. While I think the first strategy is a good supplement to the previous work, I do feel there are some points that are not properly justified in the arguments and experiments concerning lazy-update.
The actual memory used in LU-DGL-GCN lazy-update. It is stated that in Algorithm 2, $\hat{H}^{(l)}$ is used for each layer, which means that at least $L \cdot N \cdot K$ extra space is needed for memorizing these values, as $\hat{H}^{(l)}$ cannot be computed on-the-fly due to its lazy-update nature (e.g., when $T_{lazy}$ is too large and we are at epoch $t$ updating $W^{(l)}$, $\hat{H}^{(l-1)}$ may refer to $F H^{(l-1)}$, where $H^{(l-1)}$ holds the value from multiple epochs ago). With this being said, it is hard to figure out why LU-DGL-GCN still has similar memory usage to L2GCN while being drastically different from the normal GCN.
The balance between the stability of LU-DGL-GCN and a large value of $T_{lazy}$ is hard to find. In Fig. 3, the authors show that the framework is sensitive to $T_{lazy}$ and can be highly unstable when it is set to a small value (e.g. 1 or 5). However, it is natural to see that $T_{lazy}$ should not be too large either, as it could slow the training procedure. In the extreme case, if $T_{lazy}$ is infinite, the previous layer's output $\hat{H}^{(l)}$ is never updated and the parameters of the whole GCN can therefore never be properly optimized. It would be meaningful if the authors could discuss how to set $T_{lazy}$ to balance stability and time cost, which is an important point the paper argues for the approach.
Exact training time comparison should be stated. It would be necessary to state clearly how the time is measured in Table 2, e.g., with a fixed number of epochs or until the same validation loss is reached; the latter is the more proper choice, as the authors wish to show that the framework is efficient at similar accuracy. With this stated, it is also strange to find that the accuracy of LGCN is higher than that of LU-DGL-GCN in Table 2, while this is not the case in Figure 3, where Sequential_test is larger than any lazy-update variant.
Clarifications
Please address and clarify the cons above.
[1] L2-GCN: Layer-Wise and Learned Efficient Training of Graph Convolutional Networks |
ICLR | Title
Online Learning of Graph Neural Networks: When Can Data Be Permanently Deleted
Abstract
Online learning of graph neural networks (GNNs) faces the challenges of distribution shift and ever-growing and changing training data, when temporal graphs evolve over time. This makes it inefficient to train over the complete graph whenever new data arrives. Deleting old data at some point in time may be preferable to maintain a good performance and to account for distribution shift. We systematically analyze these issues by incrementally training and evaluating GNNs in a sliding window over temporal graphs. We experiment with three representative GNN architectures and two scalable GNN techniques, on three new datasets. In our experiments, the GNNs face the challenge that new vertices, edges, and even classes appear and disappear over time. Our results show that no more than 50% of the GNN's receptive field is necessary to retain at least 95% accuracy compared to training over a full graph. In most cases, i.e., 14 out of 18 experiments, we even observe that a temporal window of size 1 is sufficient to retain at least 90%.
1 INTRODUCTION
Training of Graph Neural Networks (GNNs) on temporal graphs has become a hot topic. Recent works include combining GNNs with recurrent modules (Seo et al., 2018; Manessi et al., 2020; Sankar et al., 2020; Pareja et al., 2020) and vertex embeddings as a function of time to cope with continuous-time temporal graphs (da Xu et al., 2020; Rossi et al., 2020a). Concurrently, other approaches have been proposed to improve the scalability of GNNs. Those include sampling-based techniques (Chiang et al., 2019; Zeng et al., 2020) and shifting expensive neighborhood aggregation into pre-processing (Wu et al., 2019; Rossi et al., 2020b) or post-processing (Bojchevski et al., 2020).
However, there are further fundamental issues with temporal graphs that are not properly answered yet. First, as new vertices and edges appear (and disappear) over time, so can new classes. This results in a distribution shift, which is particularly challenging in an online setting, as there is no finite, a-priori known set of classes that can be used for training, and it is not known when a new class appears. Second, scalable techniques for GNNs address the increased size of the graph, but always operate on the entire graph and thus on the entire temporal duration the graph spans. However, training on the entire history of a temporal graph (even in the context of scaling techniques like sampling (Chiang et al., 2019; Zeng et al., 2020)) may actually not be needed to perform tasks like vertex classification. Thus, it is important to investigate if, at some point in time, one can actually "intentionally forget" old data and still retain the same predictive power for the given task. In fact, it has been observed in other tasks such as stock-market prediction that too much history can even be counterproductive (Ersan et al., 2020).
Proposed Solution and Research Questions While we do not suggest to use an entirely new GNN architecture, we propose to adapt existing GNN architectures and scalable GNN techniques to the problem of distribution shift in temporal graphs. In essence, we propose a new evaluation procedure for online learning on the basis of the distribution of temporal differences, which assesses the nature of how vertices are connected in a temporal graph by enumerating the temporal differences of connected vertices along k-hop paths. This information is crucial for balancing between capturing the distribution shift while having sufficient vertices within the GNN’s receptive field.
In summary, the central question we aim to answer is, whether we can intentionally forget old data without losing predictive power in an online learning scenario under presence of distribution shift.
We simulate this scenario by applying temporal windows of different sizes over the temporal graph, as illustrated in Figure 1. The window size c determines how much history of the temporal graph is used for training or, in other words, which information we forget. In this example, data older than t − 2 is ignored. We evaluate the accuracy of representative GNN architectures and scalable GNN techniques trained on the temporal window against training on the entire timeline of the graph (full history). We evaluate the models by classifying the vertices at time step t, before we advance to the next time step.
To answer the research question, we break it down into four specific questions Q1 to Q4, each answered in a separate experiment. For Q1: Distribution Shift under Static vs Incremental Training, we verify that incremental training is necessary to account for distribution shift, compared to using a once-trained, static model. Extending from Q1, we investigate in Q2: Training with Warm vs Cold Restarts whether it is preferable to reuse model parameters from the previous time step (warm start) or restart with newly initialized parameters at each time step (cold start). In Q3: Incremental Training on Different Window Sizes, we answer the question what influence different choices for the window sizes have, i. e., how far do we need to look into the past such that a GNN trained on the window is still competitive to a model trained on the full graph. Question Q4 extends Q3 by considering Q4: Incremental Training with Scalable GNN Methods, i. e., how scalable GNN approaches compare to using the full history of the temporal graph and to which extent scaling techniques can be applied on top of the temporal window.
New Datasets To enable an analysis with a controlled extent of distribution shift, we contribute three newly compiled temporal graph datasets based on scientific publications: two citation graphs based on DBLP and one co-authorship graph based on Web of Science. To determine candidate window sizes, we contribute a new measure to compute the distribution of temporal differences within the k-hop neighborhood of each vertex, where k corresponds to the number of GNN layers. We select the 25th, 50th, and 75th percentiles of this distribution as candidate window sizes. This results in window sizes of 1, 3, and 6 time steps for the two DBLP datasets, and 1, 4, 8 for the Web of Science dataset.
Results We select three representative GNN architectures: GraphSAGE-Mean (Hamilton et al., 2017), graph attention networks (Veličković et al., 2018) and jumping knowledge networks (Xu et al., 2018) along with graph-agnostic multi-layer perceptrons. As scalable GNN techniques, we consider GraphSAINT (Zeng et al., 2020) as well as simplified GCNs (Wu et al., 2019). The results of our experiments show that already with a small window size of 3 or 4 time steps, GNNs achieve at least 95% accuracy compared to using the full graph. With window sizes of 6 or 8, 99% accuracy can be retained. With a window size of 1, for almost all experiments, a relative accuracy of no less than 90% could be retained, compared to models trained on the full graph. Furthermore, our experiments confirm that incremental training is necessary to account for distribution shift in temporal graphs and we show that both reinitialization strategies are viable and differ only marginally, when the learning rates are tuned accordingly. Surprisingly, simplified GCNs perform notably well on the most challenging dataset DBLP-hard and are only outperformed by GraphSAGE-Mean.
We outline the related work below. We provide a problem formalization and selection of GNNs for our experiments in Section 3. We describe the experimental apparatus and datasets in Section 4. The results of our experiments are reported in Section 5 and discussed in Section 6, before we conclude.
2 RELATED WORK
In Rossi & Neville (2012), the authors distinguish between tasks where the predicted attribute is static or changing over time. The dynamic graph problem is set up in a way that vertex and edge features may change over time and that edges may appear and disappear. This is conceptually different as it assumes a fixed vertex set, whereas in our case, the vertex set is changing over time. Furthermore, the predicted attribute is static in our case because it will not change after the respective vertex has appeared. Several recent works follow this setup and assume a fixed vertex set (Trivedi et al., 2017; Seo et al., 2018; Kumar et al., 2018; Trivedi et al., 2019; Manessi et al., 2020; Sankar et al., 2020).
In Park et al. (2017), the authors use vertex features concatenated with the adjacency vector and apply 1D-convolution. The experiments comprise link prediction and user state prediction. 1D-convolution on the time axis can be regarded as a sliding window. However, the paper does not consider new classes during the evaluation time frame and does not analyze how much past training data would be required for up-training.
In Fish & Caceres (2017), the authors aim to find the optimal window size, given a dataset, a task, and a model. They treat the window size as a hyperparameter and propose an optimization algorithm which requires multiple runs of the model. This might be rather expensive. Furthermore, the study does not supply insights on how much predictive power can be preserved when selecting a near-optimal but much smaller, and thus more efficient, window size.
CTDNE (Nguyen et al., 2018) is an embedding method for continuous-time graphs introducing temporal random walks. This approach considers graphs with featureless vertices with the objective to learn a meaningful/useful vertex embedding. In a recent extension of CTDNE (Lee et al., 2020), the method is applied to edge streams via up-training of the embedding. Comparing this approach to our work, we find that we have another task (discrete-time online vertex classification vs continuous-time online vertex embedding), consider a different type of graph (attributed vs featureless), and face different challenges (adaption to new classes). Nevertheless, it would be an interesting direction of future work to apply our experimental procedure to (streaming) CTDNE.
For discrete-time dynamic graphs involving new vertices, Goyal et al. (2018) propose DynGEM, an autoencoder-like approach that jointly minimizes the reconstruction loss between t and t + 1 and the embedding distance between connected vertices. In Dyngraph2vec (Goyal et al., 2020), the authors extend this approach by additional variants such as recurrent decoders.
EvolveGCN (Pareja et al., 2020) and T-GAT (da Xu et al., 2020) are both inductive approaches designed for attributed temporal graphs. EvolveGCN predicts the parameters of a GCN with an RNN by tying the RNN output or hidden state to the GCN parameters. T-GAT introduces a self-attention mechanism on the time axis. These approaches can cope with newly appearing vertices and are able to predict different labels for the same node at different times. They both require a sequence of graph snapshots for training. When new classes appear, these sequence-based models would need to be retrained. In our setting with limited window sizes, the sequence of snapshots within a window, i.e., the data available for retraining, might become very short: down to only one snapshot in the extreme case. Furthermore, these approaches focus on predicting future edges or predicting a label for each vertex at each time step. Therefore, the models serve a different purpose compared to the setting that we face, in which the label of each vertex is fixed. For these two reasons, we have focused on adapting and evaluating more efficient, static architectures as well as scalable GNN techniques, while leaving the adaption of T-GAT and EvolveGCN as future work.
To summarize, most works on dynamic graphs assume a fixed vertex set, while considering dynamics within the vertex/edge features, and/or the edges themselves. Inductive approaches such as EvolveGCN and T-GAT do allow new nodes. CTDNE can deal with new nodes via up-training. Previous work on finding optimal window sizes proposes a hyperparameter tuning algorithm. However, none of these works specifically analyzes the problem of new classes appearing over time and how much past training data is necessary, or how few is enough, to maintain good predictive power.
3 PROBLEM FORMALIZATION AND SELECTED METHODS
Problem Formalization We consider a vertex-labeled temporal graph $G_t = (V_t, E_t)$ with vertices $V_t$ and edges $E_t$, provided by a sequence of snapshots ordered by $t \in \mathbb{N}$. Thus, $V_t$ is the (finite) set of vertices that are in the graph at time step $t$, and $E_t$ the corresponding set of edges at time step $t$. Furthermore, we define the set of all vertices $V := \bigcup_{i \in \mathbb{N}} V_i$ and all edges $E := \bigcup_{i \in \mathbb{N}} E_i$, i.e., $G = (V, E)$. Let $ts_{\min} : V \to \mathbb{N}$ be a function that returns for each vertex $v \in V$ the timestamp at which the vertex was first added to the graph, i.e., $ts_{\min} : v \mapsto \min\{i \in \mathbb{N} \mid v \in V_i\}$. Finally, for each vertex $v \in V$ we have a feature vector $X_v \in \mathbb{R}^D$, where $D$ is the number of vertex features, and a class label $y_v \in C$, with $C$ being the global set of classes $C := \bigcup_{i \in \mathbb{N}} C_i$.
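The function $ts_{\min}$ follows directly from the snapshot sequence; a small illustrative sketch (the function name is ours):

```python
def first_appearance(snapshots):
    """Compute ts_min(v) for every vertex from a time-ordered list of
    vertex sets V_t (one set per snapshot)."""
    ts_min = {}
    for t, v_t in enumerate(snapshots):
        for v in v_t:
            ts_min.setdefault(v, t)  # keep only the earliest time step
    return ts_min
```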
In each time step t, previously unseen vertices and edges and even new classes may appear as illustrated in Figure 1. For these temporal graphs, we investigate training graph neural networks for the vertex classification task, i. e., assigning class labels y to previously unseen vertices based on vertex attributes X and connections to other vertices via edges. We denote the history of vertices and edges we take into account as the temporal window. The temporal window spans a range of multiple time steps, which we denote as the temporal window size c.
Selected Graph Neural Networks Several works have been proposed that combine GNNs with recurrent neural networks to capture temporal dynamics (Seo et al., 2018; Manessi et al., 2020; Sankar et al., 2020; Pareja et al., 2020). Other works focus on continuous-time temporal graphs (da Xu et al., 2020; Rossi et al., 2020a). Our work is orthogonal to those works as we focus on the distribution shift of temporal graphs and the question if and when old data can be deleted without sacrificing predictive power. In the following, we introduce and motivate our choice of representative GNN architectures as well as scalable GNN techniques for our experiments.
Dwivedi et al. (2020) have introduced a benchmarking framework to re-evaluate several recent GNN variants. Dwivedi et al. distinguish between isotropic and anisotropic GNN architectures. In isotropic GNNs, all edges are treated equally. Apart from graph convolutional networks (Kipf & Welling, 2017), examples of isotropic GNNs are GraphSAGE-mean (Hamilton et al., 2017), DiffPool (Ying et al., 2018), and GIN (Xu et al., 2019). In anisotropic GNNs, the weights for edges are computed dynamically. Instances of anisotropic GNNs include graph attention networks (Veličković et al., 2018), GatedGCN (Bresson & Laurent, 2017) and MoNet (Monti et al., 2017).
We select GraphSAGE-Mean (GS-Mean) (Hamilton et al., 2017) as a representative for isotropic GNNs because its special treatment of the vertices’ self-information has shown to be beneficial (Dwivedi et al., 2020). The representations from self-connections are concatenated to averaged neighbors’ representations before multiplying the parameters. In GS-Mean, the procedure to obtain representations in layer l+ 1 for vertex i is given by the equations ĥl+1i = h l i|| 1degi ∑ j∈N (i) h l j and hl+1i = σ(U lĥl+1i ), where N (i) is the set of adjacent vertices to vertex i, U l are the parameters of layer l, σ is a non-linear activation function, and ·||· is the concatenation. We select Graph Attention Networks (GATs) by (Veličković et al., 2018) as representative for the class of anisotropic GNNs. In GATs, representations in layer l + 1 for vertex i are computed as follows: ĥl+1i = w l ih l i + ∑ j∈N (i) w l ijh l j and h l+1 i = σ(U
lĥl+1i ), where the edge weights wij and self-connection weightswi are computed by a self-attention mechanism based on the representations hi and hj , i. e., the softmax of a(U lhi||U lhj) over edges, where a is a single-layer neural network with LeakyReLU activation.
Scaling Graph Neural Networks to Large Graphs Several approaches have been proposed to scale GNNs to large graphs. In general, these approaches fall into two categories: sampling either locally (Hamilton et al., 2017; Huang et al., 2018), or globally (Chiang et al., 2019; Zeng et al., 2020), and separating neighborhood aggregation from the neural network component (Wu et al., 2019; Rossi et al., 2020b; Bojchevski et al., 2020).
From both categories, we select one representative for our experiments. We use GraphSAINT (Zeng et al., 2020) as state-of-the-art sampling technique along with simplified GCNs (Wu et al., 2019) as a representative for shifting the neighborhood aggregation into a preprocessing step.
Simplified GCN (Wu et al., 2019) is a scalable variant of Graph Convolutional Networks (Kipf & Welling, 2017) that admits regular mini-batch sampling. Simplified GCN removes nonlinearities and collapses consecutive weight matrices into a single one. Thus, simplified GCN can be described by the equation ŶSGC = softmax(SKXΘ), where the parameter K has a similar effect as the number of layers in a regular GCN, S is the normalized adjacency matrix and Θ is the weight matrix. Instead of using multiple layers, the k-hop neighbourhood is computed by SK , which can be precomputed. This makes Simplified GCN efficient to compute, while not necessarily harming the performance.
In GraphSAINT (Zeng et al., 2020), entire subgraphs are sampled for training GNNs. Subgraph sampling introduces a bias which is counteracted by normalization coefficients for the loss function. The authors propose different sampling methods: vertex sampling, edge sampling, and random-walk sampling. We use the best-performing random-walk sampling for our experiments. The underlying GNN is exchangeable, yet the authors suggest to use Jumping Knowledge networks (JKNets) (Xu et al., 2018). JKNets introduce skip-connection to GNNs: each hidden layer has a direct connection to the output layer, in which the representations are aggregated, e. g., by concatenation. This enables the network to learn from representations corresponding to different levels of the local neighborhood. To isolate the effect of GraphSAINT sampling, we also include JKNets in our comparison.
4 EXPERIMENTAL APPARATUS
Procedure For each evaluation time step t ∈ [tstart, tend], we construct a subgraph G̃ = (Ṽ , Ẽ) of G induced on Ṽ = {v ∈ V |t− c ≤ tsmin(v) ≤ t} and Ẽ = {(u, v) ∈ E | u, v ∈ Ṽ }. The parameter c denotes the window size, i. e., determines the c time steps that the temporal window spans. Then, we supply the competing models with the subgraph G̃, the corresponding vertex features, and labels for vertices {u ∈ Ṽ | tsmin(u) < t} along with an epoch budget for updating their parameters. The task is to predict the labels for vertices {u ∈ Ṽ | tsmin(u) = t}. Finally, we evaluate the accuracy of the model before incrementing t. We provide an algorithmic view in Appendix A.1.
When advancing from one time step to the next, we consider two options of initializing the model. Using cold restarts corresponds to randomly re-initializing each model in each time step and training it from scratch. In contrast, when using warm restarts, we take the final weights of the previous time step as initialization for the next time step. In both cases, we initialize the additional parameters in the output layer randomly, when new classes appear.
Novel Measure for Distribution of Temporal Differences In the following, we develop a novel dataset-agnostic measure for the distribution of temporal difference within the k-hop neighborhood of each vertex. When k graph convolution layers are used, the features within the k-hop neighborhood of each vertex are taken into account for its prediction. This k-hop neighborhood is referred to as the receptive field of a GNN (Chen et al., 2018). When we incrementally train GNNs on a sliding window through time, the window size determines which vertices are available for training and for inference. Ideally, the temporal window covers all vertices within the GNN’s receptive field, such that GNNs have access to all relevant information.
How many vertices of the receptive field are contained in a temporal window of size c depends on the characteristics of the datasets. Therefore, we introduce a new measure for the distribution of temporal differences tdiffk within the receptive field of a k-layer GNN. Let N k(u) be the k-hop neighborhood of u, i. e., the set of vertices that are reachable from u by traversing at most k edges. Then, we define tdiffk(G) to be the multiset of time differences to past vertices:
tdiffk(G) := {tsmin(u)− tsmin(v)|u ∈ V ∧ v ∈ N k(u) ∧ tsmin(u) ≥ tsmin(v)} (1) Please note that this is a measure to determine comparable window sizes over different datasets and different granularities. It needs to be computed only once per dataseoncet, prior to any training iterations. When we consider a GNN with k graph convolution layers, the distribution tdiffk enumerates the temporal differences within the receptive field of the GNN. In our experiments, we will use the 25th, 50th, and 75th percentiles of this distribution for analyzing the effect of the temporal window size. This choice corresponds to an average receptive field coverage of 25%, 50%, and 75%.
Newly Compiled Datasets Pre-compiled temporal graph datasets for our real-world scenario are surprisingly rare. Therefore we contribute three new temporal graph datasets based on scientific
publications: one temporal co-authorship graph dataset (PharmaBio) as well as two newly compiled temporal citation graph datasets based on DBLP (DBLP-easy and DBLP-hard). These new datasets enable us to simulate a real-world scenario, in which not only new vertices but also new classes (venues) appear over time. Table 1 summarizes the basic characteristics of the datasets and Figure 2 shows the distribution of temporal differences tdiffk for different values of k. For details on the dataset creation procedure as well as degree and label distributions, we refer to Appendix A.2.
Evaluation Measures As our datasets have imbalanced classes, one could argue to use Micro or Macro F1-score as evaluation measure. However, we are primarily interested in the relative performance between limited-window training and training on the full graph. Motivated by realworld scenarios, we chose sample-based F1-score as our evaluation measure (equivalent to accuracy in single-label scenarios). When aggregating results over time, we use the unweighted average.
5 EXPERIMENTAL RESULTS
We report the results of our experiments along the research questions stated in the introduction.
Q1: Distribution Shift under Static vs Incremental Training In this experiment, we compare a once-trained static model against incrementally trained models. We train the static models for 400 epochs on the data before the first evaluation time step, which comprises 25% of the total vertices. We train incremental models for 200 epochs on temporal windows of 3 time steps (4 on the PharmaBio dataset) before evaluating each time step. All models have comparable capacity.
Figure 3 shows the results. We see that the accuracy of the static models decreases over time on DBLP-easy and DBLP-hard, where new classes appear over time. On PharmaBio, the accuracy of the static models plateaus, while the accuracy of incrementally trained models increases. That confirms our expectations as PharmaBio does not have any new classes, and incrementally trained models merely benefit from the increased amount of training data, while DBLP-easy and DBLPhard do have new classes appearing during the evaluation time frame. In the following experiments, we only use incrementally trained models because they outperform static models in all cases.
Q2: Training with Warm vs Cold Restarts We compare reusing the parameters of the model from the previous time step (warm restart) against randomly re-initializing the model parameters for each temporal window (cold restart). In both cases, we impose a 200 epoch budget per time step. The window size is set to 4 for PharmaBio and 3 for the two DBLP datasets, corresponding to 50% coverage of the GNNs’ receptive field. All models have comparable capacity.
Figure 4 shows the results. We observe that the results obtained by GNNs using warm and cold restarts are close to each other. On DBLP-hard with 23 new classes appearing during the evaluation steps, GS-Mean seems to benefit from warm restarts, while GATs yield better scores when cold restarts are used. On PharmaBio with a fixed class set, both GNNs benefit from reusing parameters from previous iterations. For now, we conclude that both reinitialization strategies are viable and we proceed by running both variants for the next experiments Q3 and Q4.
Q3: Incremental Training on Different Window Sizes We compare the models trained on windows of different sizes and compare it with a model trained on all available data, i. e., the full graph, which is our baseline. We select three window sizes per dataset based on the distribution of temporal differences tdiff2 (see Section 4). These window sized correspond to quartiles, i. e., the windows cover 25%, 50%, and 75% of the GNNs’ receptive field (RF) (see Table 1). Thus, we can compare window sizes across datasets with different characteristics, i. e., connectivity patterns through time and total number of time steps. The epoch budget is 200 and all models have comparable capacity.
Table 2 (top) shows the results. We observe that those GNN variants trained on the full timeline of the graph yield the highest scores on DBLP-easy and DBLP-hard. There, GNNs with window size 1 (25% RF) yield lower scores than training with larger window sizes (50% and 75% RF). On all datasets, the scores for training with limited window sizes larger than 1 are close to the ones of
full-graph training. In summary, window sizes that cover 50% of the receptive field, GNNs and also MLPs achieve at least 95% classification accuracy compared to full-graph training. When 75% of the receptive field is covered by the temporal window, at least 99% accuracy could be retained in all datasets. We refer to Appendix A.4 for extended results including both reinitialization strategies.
Q4: Incremental Training with Scalable GNN Methods Similarly to Q3, we again compare different window sizes against training on the full graph. This time, we focus on using scalable GNN techniques and aim to learn how they perform in conjunction with temporal windows. We further alleviate the fixed-capacity constraint of previous experiments and tune the hidden size as an additional hyperparameter. We refer to Appendix A.3 for details on hyperparameter choices.
We compare Simplified GCN and GraphSAINT, while including JKNet to isolate the effect of GraphSAINT sampling. Table 2 (bottom) shows the results. We observe that, again, limiting the window size to cover 50% of the GNN’s receptive field leads to at least 95% relative accuracy, compared to full graph training. As expected, GraphSAINT sampling (with JKNets as a base model) yields slightly lower scores than full-batch JKNets. On DBLP-hard, simplified GCN outperforms the other, more complex models. In terms of relative performance, limiting the receptive field does not negatively impact GraphSAINT on DBLP-hard and PharmaBio.
6 DISCUSSION
We have created a new experimental procedure for temporal graphs with new classes appearing over time, for which we contribute three newly compiled datasets with controlled degrees of distribution shift. In this online learning setup, we have evaluated three representative GNN architectures as well as two GNN scaling techniques. With the goal of generalizable results, we have introduced a new measure for the distribution of temporal differences tdiffk, based on which we have selected the temporal window sizes. Our results show that past data can be permanently deleted very early without diminishing the performance of an online vertex classification model. This has direct consequences for online learning of GNNs on temporal graphs and, thus, impacts how GNNs can be employed for numerous real-world applications.
Our main result is that incremental training with limited window sizes is as good as incremental training over the full timeline of the graph (see Q3 and Q4). With window sizes of 3 or 4 (50% receptive field coverage), GNNs achieve at least 95% accuracy compared to using all available data for incremental training. With window sizes of 6 or 8 (75% receptive field coverage), at least 99% accuracy can be retained. This result holds not only for standard GNN architectures but also when scaling techniques such as subgraph sampling are applied on-top of the temporal window. Finally, in almost all experiments, at least 90% of relative accuracy is reached with a window of size 1.
Furthermore, we have verified that incremental training helps to account for distribution shift compared to once-trained, static models (see Q1). We have further investigated on reusing parameters from previous iterations (Q2). Our results show that both strategies are viable, when learning rates are tuned accordingly. During hyperparameter optimization for Q4, in which we alleviated the fixed-capacity constraint, we further noticed that warm restarts are more suitable for higher capacity models with low learning rates, while using cold restarts admits using lower capacity models and higher learning rates (the details of hyperparameter optimization can be found in Appendix A.3).
Even though it was not our main objective to compare the absolute performances of the models, it is noteworthy that simplified GCNs perform surprisingly well on DBLP-hard. Despite the simplicity of the approach, the model yields higher scores than GraphSAINT, JKNets and fixed-capacity GATs, and are only outperformed by GraphSAGE-mean.
A limitation of the present work is that we assume that the true labels of each time step become available as training data for the next time step. In practice, however, only a small fraction of vertices might come with labels for training, while the larger part could be annotated by the model itself. Adapting our experimental procedure to use only a small fraction of true labels in each time step would be an interesting direction of future work.
One could further argue that deleting data that is not linked to the most recent data points would be a viable alternative to deletion based on a fixed time difference. However, this approach would be only feasible in retrospect because, in real-world scenarios, it is impossible to know whether a future data will include a link to a past data point. Still, future work could involve employing other methods to determine what data to delete, such as the personalized PageRank score (Bojchevski et al., 2020).
7 CONCLUSION
Temporal graphs occur in many real-world scenarios such as citation graphs, transaction graphs, and social graphs. Practitioners face a trade-off between memory requirements, which are tied to the temporal window size, and the expected accuracy of their models. Until now, it was not clear how GNNs can be efficiently trained in such online scenarios, especially when distribution shift becomes an issue. We demonstrate that a high level of accuracy can be retained when training only on a fraction of the temporal graph, determined by a temporal window. The results of this paper can serve as guidelines for training GNNs on temporal graphs, particularly regarding the intentional forgetting of data while retaining a certain percentage of predictive power. For researchers, we supply our newly compiled datasets along with an implementation of the experimental procedure.
We will make the code and data available to reviewers during the peer-reviewing process as suggested in the ICLR 2021 author’s guide.
A APPENDIX
A.1 ALGORITHM FOR OUR EXPERIMENTAL PROCEDURE
Algorithm 1 outlines our incremental training and evaluation procedure.
Data: Temporal graph G, features X, labels y, time steps t, temporal window size c, epoch budget n_epochs
Result: Predicted class labels for vertices in each time step of the graph
known_classes ← ∅;
θ ← initialize_parameters();
for t* ← t_start to t_end do
    G̃ ← subgraph of G induced on vertices u, where t* − c ≤ ts_min(u) ≤ t*;
    ỹ_train ← ỹ_u, where ts_min(u) < t*;
    if do_cold_restart then
        // Cold restart: re-initialize all parameters
        θ ← initialize_parameters();
    else
        // Warm restart: initialize new parameters, copy others
        tmp ← clone(θ);
        θ ← initialize_parameters();
        θ|known_classes ← tmp|known_classes;
    end
    θ ← train(θ, G̃, X̃, ỹ_train) for n_epochs epochs;
    ỹ_pred ← predict(θ, G̃, X̃) for vertices u, where ts_min(u) = t*;
    known_classes ← known_classes ∪ set(ỹ_train);
end
Algorithm 1: Incremental training procedure of our experimental apparatus
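For illustration, the loop can be written in a few lines of Python. This is a minimal sketch of Algorithm 1, not the paper's released code: init_params, train, and predict are placeholders for the chosen GNN, and the graph object is assumed to offer a networkx-style nodes/subgraph interface.

```python
def incremental_training(G, ts_min, y, t_start, t_end, c, n_epochs,
                         init_params, train, predict, cold_restart=True):
    """Sliding-window incremental training and evaluation (Algorithm 1)."""
    known_classes = set()  # bookkeeping for the growing class set
    theta = init_params()
    predictions = {}
    for t in range(t_start, t_end + 1):
        # Temporal window: keep only vertices that first appeared in [t - c, t].
        window = [u for u in G.nodes if t - c <= ts_min[u] <= t]
        G_sub = G.subgraph(window)
        train_nodes = [u for u in window if ts_min[u] < t]   # labels available
        test_nodes = [u for u in window if ts_min[u] == t]   # to be classified
        if cold_restart:
            theta = init_params()  # re-initialize all parameters
        else:
            # Warm restart: keep learned weights; take fresh parameters only for
            # keys (e.g., output units of newly appeared classes) not seen before.
            fresh = init_params()
            theta = {k: theta.get(k, v) for k, v in fresh.items()}
        theta = train(theta, G_sub, {u: y[u] for u in train_nodes}, n_epochs)
        predictions[t] = predict(theta, G_sub, test_nodes)
        known_classes |= {y[u] for u in train_nodes}
    return predictions
```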
A.2 DATASET DETAILS
In the following, we outline the dataset compilation procedure and supply further descriptive statistics of the resulting datasets.
PharmaBio To compile the PharmaBio dataset, we use the metadata of 543,853 papers by Pharma and Biotech companies from Web of Science (Melnychuk et al., 2019). After removing duplicates, our data cleaning procedure ensures that there is a certain amount of labels for each class per year and that each paper is connected to at least one other paper by a same-author edge. More specifically, we: (1) Keep only papers that are in a journal category with at least 20 papers per year; (2) Keep only papers where at least one of the authors has at least two papers per year; (3) Create a vocabulary of words (regular expression: \w\w+) that appear in at least 20 papers globally and keep only papers with at least one of these words. We iterate steps 1–3 until no further paper has been removed in one pass. We end up with 68,068 papers from 23,689 authors working for 68 companies. These papers are distributed across 2,818 journals which are, in turn, categorized into seven journal categories. During preprocessing, each paper becomes a vertex in the graph. The class of the paper is the category of the journal in which it was published. We insert an edge between two vertices if they share at least one common author (based on string comparison).
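The iterative cleaning above is a simple fixpoint computation. A minimal sketch follows; the concrete filter functions are placeholders, since they are dataset-specific.

```python
def clean_until_fixpoint(papers, filters):
    """Apply all cleaning filters repeatedly until a full pass removes no paper."""
    while True:
        before = len(papers)
        for f in filters:  # e.g., min papers per category, author activity, vocabulary
            papers = f(papers)
        if len(papers) == before:  # fixpoint reached
            return papers
```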
DBLP-easy To compile these datasets, we use the DBLP Citation Network dataset (version 10)1 (Tang et al., 2008) as a basis. It comprises 3M citing documents and 25M citations to 2M distinct cited documents. We use venues (conferences or journals) as class labels and use citations as edges. First, we select the subset from 1990 until 2015. Then, we follow a similar procedure as above: (1) Keep only papers from venues that have at least τvenue papers in each year they occur (may be only every second year). (2) Keep only papers that stand in at least
1https://aminer.org/citation
one citation relation to another paper. (3) Remove papers from venues that occur only in a single year. (4) Keep only papers with at least one word from a vocabulary of words that are in at least τwords papers. We iterate steps 1–4 until no further paper has been removed in one pass.
DBLP-hard The difference between DBLP-easy and DBLP-hard is that τvenue := 100 papers in the easy variant and τvenue := 45 papers in the hard variant. The minimum word occurrence threshold τwords is set to 20 for DBLP-easy and 40 for DBLP-hard. Finally, we construct the graph with papers as vertices, citations as edges, and venues as classes.
For all three datasets, we use L2-normalized tf-idf (Salton & Buckley, 1988) representations as vertex features, based on the corresponding papers' titles. We estimate the power-law coefficient $\alpha$ via maximum likelihood (Newman, 2005): $\alpha = 1 + n \left( \sum_{u \in V} \ln \frac{\deg_u}{\deg_{\min}} \right)^{-1}$, where $\deg_{\min}$ is 1 (2 for PharmaBio).
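As a sanity check, the estimator takes only a few lines of NumPy; the snippet below is our own illustration, not the authors' code.

```python
import numpy as np

def power_law_alpha(degrees, deg_min=1):
    """MLE of the power-law coefficient alpha (Newman, 2005).
    Assumes at least one degree strictly greater than deg_min."""
    d = np.asarray([deg for deg in degrees if deg >= deg_min], dtype=float)
    return 1.0 + len(d) / np.log(d / deg_min).sum()

print(power_law_alpha([1, 1, 1, 2, 2, 3, 5, 8, 20, 60]))  # toy degree sequence
```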
In Figure 5, we visualize the degree distribution, label distribution, the distribution over years, as well as the distributions of temporal differences (as described in Section 4). All compiled datasets seem to follow a power law distribution, which is typical for citation and co-authorship graphs.
For each dataset, we chose the boundaries for our evaluation time steps [tstart, tend], such that 25% of the total number of vertices lie before tstart, and tend is the final time step. For PharmaBio (1985–2016), that is tstart = 1999, and for both DBLP variants (1990–2015), that is tstart = 2004. Data before tstart may be used for training, depending on the window size. Regarding changes in the class set (distribution shift), DBLP-easy has 12 venues in total, including one bi-annual conference and four new venues appearing in 2005, 2006, 2007, and 2012. DBLP-hard has 73 venues, including one discontinued, nine bi-annual, six irregular venues, and 23 new venues.
A.3 IMPLEMENTATION DETAILS AND HYPERPARAMETERS
We tune the hyperparameters separately for each window size and each restart configuration. We tune the hyperparameters on DBLP-easy and use the same set of hyperparameters for DBLP-hard and PharmaBio.
For experiments Q1-Q3, we design the models to have a comparable capacity: one hidden layer with 64 hidden units. We use ReLU activation on the hidden layer of MLP and GS-Mean. GS-Mean has one hidden layer, i. e. two graph convolutional layers, with 32 units for self-connections and 32 units for aggregated neighbor representations. GAT has one hidden layer composed of 8 attention heads and 8 hidden units per head, along with one attention head for the output layer. We initialize the model parameters according to Glorot and Bengio (Glorot & Bengio, 2010). For both GS-Mean and GAT, the output of the second layer corresponds to the number of classes. We use dropout probability 0.5 on the hidden units for all models in experiment Q3. We use Adam (Kingma & Ba, 2014) to optimize for cross-entropy. We tune the learning rates on DBLP-easy with a search space of {10^-1, 5·10^-2, 10^-2, 5·10^-3, 10^-3, 5·10^-4, 10^-4} and re-use these learning rates for the other datasets. The learning rates are tuned separately for each model, each parameter reinitialization strategy, and each window size. We do not use weight decay because it did not increase the performance (search space {0, 10^-3, 5·10^-4, 10^-4, 5·10^-5, 10^-5}). The optimal learning rates can be found in Figure 6 for Q1, Figure 7 for Q2, and Figure 8 for Q3. For implementation of GraphSAGE-mean and GATs, we use DeepGraphLibrary (Wang et al., 2019). All methods are trained transductively: for each new snapshot, the new vertices are inserted into the graph without their labels, then, the models are allowed to (up-)train before making predictions.
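The learning-rate tuning itself is a plain grid search; a sketch is given below, where run_incremental_training is a hypothetical stand-in for one full pass of Algorithm 1 on DBLP-easy that returns the mean accuracy.

```python
LEARNING_RATES = [1e-1, 5e-2, 1e-2, 5e-3, 1e-3, 5e-4, 1e-4]

def run_incremental_training(lr: float) -> float:
    """Placeholder: run one full incremental-training pass, return mean accuracy."""
    raise NotImplementedError

best_lr = max(LEARNING_RATES, key=run_incremental_training)
```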
For the experiment Q4, we use two hidden layers with 64 hidden units each to make use of jumping knowledge (Xu et al., 2018), as suggested as base architecture in GraphSAINT (Zeng et al., 2020). The learning rate is tuned in the space of {0.0001, 0.001, 0.01, 0.1}. Dropout probability is set to 0.2. We do not use weight decay. We also tune the batch size of GraphSAINT in the range of {256, 512, 2048, 4096}, as subgraph size is an important hyperparameter. For simplified GCN, we tune the learning rate in the range of {0.0005, 0.001, 0.005, 0.01, 0.05} and we set the neighborhood aggregation parameter K to 2, corresponding to two-layer aggregation. For implementation of GraphSAINT and JKNet, we use PyTorch-geometric (Fey & Lenssen, 2019). The optimal hyperparameter values as well as the respective search spaces for experiment Q4 can be found in Table 3. JKNets and simplified GCNs are trained transductively, while GraphSAINT is trained inductively as suggested by the original work (Zeng et al., 2020).
A.4 EXTENDED RESULTS
Table 4 shows the full results table with both warm and cold restarts for experiment Q3. Table 5 shows the full results table with both warm and cold restarts for experiment Q4. Figure 9 visualizes the results for each time step of experiment Q3. | 1. What is the focus of the reviewed paper?
2. What are the strengths of the paper regarding its contributions and findings?
3. Are there any weaknesses or limitations in the paper's approach or results?
4. How does the reviewer assess the significance and originality of the work?
5. Are there any questions or concerns raised by the reviewer regarding the paper's content or relevance? | Review | Review
This work empirically evaluates the sliding-window strategy for training GNNs on temporal graphs. The temporal nature of the graph data can be cast as an online setting, in which changes in the graph structure as well as in the set of classes cause distribution shift. The authors conduct a series of experiments to show that the sliding-window strategy is as effective as using the entire historical data for training.
Pluses:
For different temporal graphs, the duration of a time step and the number of time steps (window size) are often defined ad hoc and are not comparable. The authors introduce a measure of temporal difference that facilitates a more principled definition of the time step and the window size so that they are comparable across datasets.
The authors pose four important questions and arrive at clear answers based on experimentation. The findings are: (1) incremental training is necessary to account for distribution shift, compared to a once-trained, static model; (2) incremental training with warm start does not always yield better performance than cold start; (3) the window size needs to be large enough for incremental training to catch up with the performance of full-data training (e.g., covering at least 50% receptive field); and (4) these findings extend to several GNN models.
The authors compile three temporal graphs, which enrich the availability of benchmark datasets.
Minuses:
The empirical findings are very much expected, which means that they are not exciting. From the methodological point of view, using sliding windows to train temporal GNNs is a no-brainer choice if certain RNN modeling is involved. Since most of the presented results are naturally expected and the paper lacks a theoretical or methodological contribution, the reader is left unsure about the value of the paper.
A common pattern of the contributed datasets is that nodes and edges are inserted but never deleted. While the empirical findings are quite natural in this simple scenario, there will be a lot more uncertainty when the scenario becomes increasingly complex. For example, in social networks, accounts represented by nodes may be deleted and relationships represented by edges may dynamically change.
For another example, in communication networks where an edge denotes communication between two entities, the edges are instant and time-stamped. The challenge in this case is less about distribution shift and more about how to handle such edges and what the consequences are. The online learning of this kind of data necessarily goes beyond a simple GNN such as the ones experimented with in this paper, but the findings will be more valuable.
ICLR | Title
Online Learning of Graph Neural Networks: When Can Data Be Permanently Deleted
Abstract
Online learning of graph neural networks (GNNs) faces the challenges of distribution shift and ever-growing and changing training data, when temporal graphs evolve over time. This makes it inefficient to train over the complete graph whenever new data arrives. Deleting old data at some point in time may be preferable to maintain a good performance and to account for distribution shift. We systematically analyze these issues by incrementally training and evaluating GNNs in a sliding window over temporal graphs. We experiment with three representative GNN architectures and two scalable GNN techniques, on three new datasets. In our experiments, the GNNs face the challenge that new vertices, edges, and even classes appear and disappear over time. Our results show that no more than 50% of the GNN’s receptive field is necessary to retain at least 95% accuracy compared to training over a full graph. In most cases, i. e., 14 out of 18 experiments, we even observe that a temporal window of size 1 is sufficient to retain at least 90%.
1 INTRODUCTION
Training of Graph Neural Networks (GNNs) on temporal graphs has become a hot topic. Recent works include combining GNNs with recurrent modules (Seo et al., 2018; Manessi et al., 2020; Sankar et al., 2020; Pareja et al., 2020) and vertex embeddings as a function of time to cope with continuous-time temporal graphs (da Xu et al., 2020; Rossi et al., 2020a). Concurrently, other approaches have been proposed to improve the scalability of GNNs. Those include sampling-based techniques (Chiang et al., 2019; Zeng et al., 2020) and shifting expensive neighborhood aggregation into pre-processing (Wu et al., 2019; Rossi et al., 2020b) or post-processing (Bojchevski et al., 2020).
However, there are further fundamental issues with temporal graphs that are not properly answered yet. First, as new vertices and edges appear (and disappear) over time, so can new classes. This results in a distribution shift, which is particularly challenging in an online setting, as there is no finite, a-priori known set of classes that can be used for training and it is not known when a new class appears. Second, scalable techniques for GNNs address the increased size of the graph, but always operate on the entire graph and thus on the entire temporal duration the graph spans. However, training on the entire history of a temporal graph (even in the context of scaling techniques like sampling (Chiang et al., 2019; Zeng et al., 2020)) may actually not be needed to perform tasks like vertex classification. Thus, it is important to investigate if, at some point in time, one can actually “intentionally forget” old data and still retain the same predictive power for the given task. In fact, it has been observed in other tasks such as stock-market prediction that too much history can even be counterproductive (Ersan et al., 2020).
Proposed Solution and Research Questions While we do not suggest to use an entirely new GNN architecture, we propose to adapt existing GNN architectures and scalable GNN techniques to the problem of distribution shift in temporal graphs. In essence, we propose a new evaluation procedure for online learning on the basis of the distribution of temporal differences, which assesses the nature of how vertices are connected in a temporal graph by enumerating the temporal differences of connected vertices along k-hop paths. This information is crucial for balancing between capturing the distribution shift while having sufficient vertices within the GNN’s receptive field.
In summary, the central question we aim to answer is, whether we can intentionally forget old data without losing predictive power in an online learning scenario under presence of distribution shift.
We simulate this scenario by applying temporal windows of different sizes over the temporal graph, as illustrated in Figure 1. The window size c determines how much history of the temporal graph is used for training, or in other words: which information we forget. In this example, data older than t − 2 is ignored. We evaluate the accuracy of representative GNN architectures and scalable GNN techniques trained on the temporal window against training on the entire timeline of the graph (full history). We evaluate the models by classifying the vertices at time step t, before we advance to the next time step.
To answer the research question, we break it down into four specific questions Q1 to Q4, each answered in a separate experiment. For Q1: Distribution Shift under Static vs Incremental Training, we verify that incremental training is necessary to account for distribution shift, compared to using a once-trained, static model. Extending from Q1, we investigate in Q2: Training with Warm vs Cold Restarts whether it is preferable to reuse model parameters from the previous time step (warm start) or to restart with newly initialized parameters at each time step (cold start). In Q3: Incremental Training on Different Window Sizes, we answer the question of what influence different window sizes have, i. e., how far we need to look into the past such that a GNN trained on the window is still competitive with a model trained on the full graph. Question Q4 extends Q3 by considering Q4: Incremental Training with Scalable GNN Methods, i. e., how scalable GNN approaches compare to using the full history of the temporal graph and to what extent scaling techniques can be applied on top of the temporal window.
New Datasets To enable an analysis with a controlled extent of distribution shift, we contribute three newly compiled temporal graph datasets based on scientific publications: two citation graphs based on DBLP and one co-authorship graph based on Web of Science. To determine candidate window sizes, we contribute a new measure to compute the distribution of temporal differences within the k-hop neighborhood of each vertex, where k corresponds to the number of GNN layers. We select the 25th, 50th, and 75th percentiles of this distribution as candidate window sizes. This results in window sizes of 1, 3, and 6 time steps for the two DBLP datasets, and 1, 4, 8 for the Web of Science dataset.
Results We select three representative GNN architectures: GraphSAGE-Mean (Hamilton et al., 2017), graph attention networks (Veličković et al., 2018) and jumping knowledge networks (Xu et al., 2018) along with graph-agnostic multi-layer perceptrons. As scalable GNN techniques, we consider GraphSAINT (Zeng et al., 2020) as well as simplified GCNs (Wu et al., 2019). The results of our experiments show that already with a small window size of 3 or 4 time steps, GNNs achieve at least 95% accuracy compared to using the full graph. With window sizes of 6 or 8, 99% accuracy can be retained. With a window size of 1, for almost all experiments, a relative accuracy of no less than 90% could be retained, compared to models trained on the full graph. Furthermore, our experiments confirm that incremental training is necessary to account for distribution shift in temporal graphs and we show that both reinitialization strategies are viable and differ only marginally, when the learning rates are tuned accordingly. Surprisingly, simplified GCNs perform notably well on the most challenging dataset DBLP-hard and are only outperformed by GraphSAGE-Mean.
We outline the related work below. We provide a problem formalization and selection of GNNs for our experiments in Section 3. We describe the experimental apparatus and datasets in Section 4. The results of our experiments are reported in Section 5 and discussed in Section 6, before we conclude.
2 RELATED WORK
In Rossi & Neville (2012), the authors distinguish between tasks where the predicted attribute is static or changing over time. The dynamic graph problem is set up in a way that vertex and edge features may change over time and that edges may appear and disappear. This is conceptually different as it assumes a fixed vertex set, whereas in our case, the vertex set is changing over time. Furthermore, the predicted attribute is static in our case because it will not change after the respective vertex has appeared. Several recent works follow this setup and assume a fixed vertex set (Trivedi et al., 2017; Seo et al., 2018; Kumar et al., 2018; Trivedi et al., 2019; Manessi et al., 2020; Sankar et al., 2020).
In Park et al. (2017), the authors use vertex features concatenated with the adjacency vector and apply 1D-convolution. The experiments comprise link prediction and user state prediction. 1D-convolution on the time axis can be regarded as a sliding window. However, the paper does not consider new classes during the evaluation time frame and does not analyze how much past training data would be required for up-training.
In Fish & Caceres (2017), the authors aim to find the optimal window size, given a dataset, a task, and a model. They treat the window size as a hyperparameter and propose an optimization algorithm which requires multiple runs of the model. This might be rather expensive. Furthermore, the study does not supply insights on how much predictive power can be preserved when selecting a near-optimal but much smaller, and thus more efficient, window size.
CTDNE (Nguyen et al., 2018) is an embedding method for continuous-time graphs introducing temporal random walks. This approach considers graphs with featureless vertices with the objective to learn a meaningful/useful vertex embedding. In a recent extension of CTDNE (Lee et al., 2020), the method is applied to edge streams via up-training of the embedding. Comparing this approach to our work, we find that we have another task (discrete-time online vertex classification vs continuous-time online vertex embedding), consider a different type of graph (attributed vs featureless), and face different challenges (adaption to new classes). Nevertheless, it would be an interesting direction of future work to apply our experimental procedure to (streaming) CTDNE.
For discrete-time dynamic graphs involving new vertices, Goyal et al. (2018) propose DynGEM, an autoencoder-like approach that jointly minimizes the reconstruction loss between t and t + 1 and the embedding distance between connected vertices. In Dyngraph2vec (Goyal et al., 2020), the authors extend this approach by additional variants such as recurrent decoders.
EvolveGCN (Pareja et al., 2020) and T-GAT (da Xu et al., 2020) are both inductive approaches designed for attributed temporal graphs. EvolveGCN predicts the parameters of a GCN with an RNN by tying the RNN output or hidden state to the GCN parameters. T-GAT introduces a self-attention mechanism on the time axis. These approaches can cope with newly appearing vertices and are able to predict different labels for the same node at different times. They both require a sequence of graph snapshots for training. When new classes appear, these sequence-based models would need to be retrained. In our setting with limited window sizes, the sequence of snapshots within a window, i.e. the data available for retraining, might become very short: down to only one snapshot in the extreme case. Furthermore, these approaches focus on predicting future edges or predicting a label for each vertex at each time step. Therefore, the models serve a different purpose compared to the setting that we face, in which the label of each vertex is fixed. For these two reasons, we have focused on adapting and evaluating more efficient, static architectures as well as scalable GNN techniques, while leaving the adaption of T-GAT and EvolveGCN as future work.
To summarize, most works on dynamic graphs assume a fixed vertex set, while considering dynamics within the vertex/edge features, and/or the edges themselves. Inductive approaches such as EvolveGCN and T-GAT do allow new nodes. CTDNE can deal with new nodes via up-training. Previous work on finding optimal window sizes proposes a hyperparameter tuning algorithm. However, none of these works specifically analyzes the problem of new classes appearing over time and how much past training data is necessary, or how few is enough, to maintain good predictive power.
3 PROBLEM FORMALIZATION AND SELECTED METHODS
Problem Formalization We consider a vertex-labeled temporal graph $G_t = (V_t, E_t)$ with vertices $V_t$ and edges $E_t$, provided by a sequence of snapshots ordered by $t \in \mathbb{N}$. Thus, $V_t$ is the (finite) set of vertices that are in the graph at time step $t$, and $E_t$ the corresponding set of edges at time step $t$. Furthermore, we define the set of all vertices $V := \bigcup_{i \in \mathbb{N}} V_i$ and all edges $E := \bigcup_{i \in \mathbb{N}} E_i$, i. e., $G = (V, E)$. Let $ts_{\min}: V \to \mathbb{N}$ be a function that returns for each vertex $v \in V$ the timestamp at which the vertex was first added to the graph, i. e., $ts_{\min}: v \mapsto \min\{i \in \mathbb{N} \mid v \in V_i\}$. Finally, for each vertex $v \in V$ we have a feature vector $X_v \in \mathbb{R}^D$, where $D$ is the number of vertex features, and a class label $y_v \in C$, with $C := \bigcup_{i \in \mathbb{N}} C_i$ being the global set of classes.
In each time step t, previously unseen vertices and edges and even new classes may appear as illustrated in Figure 1. For these temporal graphs, we investigate training graph neural networks for the vertex classification task, i. e., assigning class labels y to previously unseen vertices based on vertex attributes X and connections to other vertices via edges. We denote the history of vertices and edges we take into account as the temporal window. The temporal window spans a range of multiple time steps, which we denote as the temporal window size c.
Selected Graph Neural Networks Several works have been proposed that combine GNNs with recurrent neural networks to capture temporal dynamics (Seo et al., 2018; Manessi et al., 2020; Sankar et al., 2020; Pareja et al., 2020). Other works focus on continuous-time temporal graphs (da Xu et al., 2020; Rossi et al., 2020a). Our work is orthogonal to those works as we focus on the distribution shift of temporal graphs and the question if and when old data can be deleted without sacrificing predictive power. In the following, we introduce and motivate our choice of representative GNN architectures as well as scalable GNN techniques for our experiments.
Dwivedi et al. (2020) have introduced a benchmarking framework to re-evaluate several recent GNN variants. Dwivedi et al. distinguish between isotropic and anisotropic GNN architectures. In isotropic GNNs, all edges are treated equally. Apart from graph convolutional networks (Kipf & Welling, 2017), examples of isotropic GNNs are GraphSAGE-mean (Hamilton et al., 2017), DiffPool (Ying et al., 2018), and GIN (Xu et al., 2019). In anisotropic GNNs, the weights for edges are computed dynamically. Instances of anisotropic GNNs include graph attention networks (Veličković et al., 2018), GatedGCN (Bresson & Laurent, 2017) and MoNet (Monti et al., 2017).
We select GraphSAGE-Mean (GS-Mean) (Hamilton et al., 2017) as a representative for isotropic GNNs because its special treatment of the vertices’ self-information has been shown to be beneficial (Dwivedi et al., 2020). The representations from self-connections are concatenated to averaged neighbors’ representations before multiplying the parameters. In GS-Mean, the procedure to obtain representations in layer $l+1$ for vertex $i$ is given by the equations $\hat{h}_i^{l+1} = h_i^l \,\|\, \frac{1}{\deg_i} \sum_{j \in N(i)} h_j^l$ and $h_i^{l+1} = \sigma(U^l \hat{h}_i^{l+1})$, where $N(i)$ is the set of adjacent vertices to vertex $i$, $U^l$ are the parameters of layer $l$, $\sigma$ is a non-linear activation function, and $\cdot\|\cdot$ is the concatenation. We select Graph Attention Networks (GATs) (Veličković et al., 2018) as representative for the class of anisotropic GNNs. In GATs, representations in layer $l+1$ for vertex $i$ are computed as $\hat{h}_i^{l+1} = w_i^l h_i^l + \sum_{j \in N(i)} w_{ij}^l h_j^l$ and $h_i^{l+1} = \sigma(U^l \hat{h}_i^{l+1})$, where the edge weights $w_{ij}$ and self-connection weights $w_i$ are computed by a self-attention mechanism based on the representations $h_i$ and $h_j$, i. e., the softmax of $a(U^l h_i \| U^l h_j)$ over edges, where $a$ is a single-layer neural network with LeakyReLU activation.
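To make the GS-Mean update concrete, here is a minimal dense PyTorch version of the two equations. This is our own sketch using a dense adjacency matrix for brevity; the experiments in this paper use DGL's message-passing implementation instead.

```python
import torch
import torch.nn as nn

class SAGEMeanLayer(nn.Module):
    """h_i^{l+1} = sigma(U^l [h_i^l || mean_{j in N(i)} h_j^l])"""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, h, adj):
        # adj: dense (N, N) 0/1 adjacency matrix; average the neighbor features
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h_neigh = (adj @ h) / deg
        return torch.relu(self.lin(torch.cat([h, h_neigh], dim=1)))
```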
Scaling Graph Neural Networks to Large Graphs Several approaches have been proposed to scale GNNs to large graphs. In general, these approaches fall into two categories: sampling either locally (Hamilton et al., 2017; Huang et al., 2018), or globally (Chiang et al., 2019; Zeng et al., 2020), and separating neighborhood aggregation from the neural network component (Wu et al., 2019; Rossi et al., 2020b; Bojchevski et al., 2020).
From both categories, we select one representative for our experiments. We use GraphSAINT (Zeng et al., 2020) as state-of-the-art sampling technique along with simplified GCNs (Wu et al., 2019) as a representative for shifting the neighborhood aggregation into a preprocessing step.
Simplified GCN (Wu et al., 2019) is a scalable variant of Graph Convolutional Networks (Kipf & Welling, 2017) that admits regular mini-batch sampling. Simplified GCN removes nonlinearities and collapses consecutive weight matrices into a single one. Thus, simplified GCN can be described by the equation $\hat{Y}_{\mathrm{SGC}} = \mathrm{softmax}(S^K X \Theta)$, where the parameter $K$ has a similar effect as the number of layers in a regular GCN, $S$ is the normalized adjacency matrix, and $\Theta$ is the weight matrix. Instead of using multiple layers, the $k$-hop neighbourhood is computed by $S^K$, which can be precomputed. This makes Simplified GCN efficient to compute, while not necessarily harming the performance.
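Since the nonlinearities are removed, the graph part of simplified GCN reduces to a one-off feature preprocessing step. The sketch below assumes the symmetric normalization with self-loops from the original SGC paper, which the text above leaves implicit.

```python
import numpy as np

def sgc_features(A, X, K=2):
    """Precompute S^K X with S = D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    S = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    for _ in range(K):
        X = S @ X  # K-hop aggregation, done once before training
    return X  # feed into a plain softmax classifier with weights Theta
```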
In GraphSAINT (Zeng et al., 2020), entire subgraphs are sampled for training GNNs. Subgraph sampling introduces a bias which is counteracted by normalization coefficients for the loss function. The authors propose different sampling methods: vertex sampling, edge sampling, and random-walk sampling. We use the best-performing random-walk sampling for our experiments. The underlying GNN is exchangeable, yet the authors suggest to use Jumping Knowledge networks (JKNets) (Xu et al., 2018). JKNets introduce skip-connection to GNNs: each hidden layer has a direct connection to the output layer, in which the representations are aggregated, e. g., by concatenation. This enables the network to learn from representations corresponding to different levels of the local neighborhood. To isolate the effect of GraphSAINT sampling, we also include JKNets in our comparison.
4 EXPERIMENTAL APPARATUS
Procedure For each evaluation time step $t \in [t_{start}, t_{end}]$, we construct a subgraph $\tilde{G} = (\tilde{V}, \tilde{E})$ of $G$ induced on $\tilde{V} = \{v \in V \mid t - c \le ts_{\min}(v) \le t\}$ and $\tilde{E} = \{(u, v) \in E \mid u, v \in \tilde{V}\}$. The parameter $c$ denotes the window size, i. e., determines the $c$ time steps that the temporal window spans. Then, we supply the competing models with the subgraph $\tilde{G}$, the corresponding vertex features, and labels for vertices $\{u \in \tilde{V} \mid ts_{\min}(u) < t\}$ along with an epoch budget for updating their parameters. The task is to predict the labels for vertices $\{u \in \tilde{V} \mid ts_{\min}(u) = t\}$. Finally, we evaluate the accuracy of the model before incrementing $t$. We provide an algorithmic view in Appendix A.1.
When advancing from one time step to the next, we consider two options of initializing the model. Using cold restarts corresponds to randomly re-initializing each model in each time step and training it from scratch. In contrast, when using warm restarts, we take the final weights of the previous time step as initialization for the next time step. In both cases, we initialize the additional parameters in the output layer randomly, when new classes appear.
Novel Measure for Distribution of Temporal Differences In the following, we develop a novel dataset-agnostic measure for the distribution of temporal difference within the k-hop neighborhood of each vertex. When k graph convolution layers are used, the features within the k-hop neighborhood of each vertex are taken into account for its prediction. This k-hop neighborhood is referred to as the receptive field of a GNN (Chen et al., 2018). When we incrementally train GNNs on a sliding window through time, the window size determines which vertices are available for training and for inference. Ideally, the temporal window covers all vertices within the GNN’s receptive field, such that GNNs have access to all relevant information.
How many vertices of the receptive field are contained in a temporal window of size c depends on the characteristics of the datasets. Therefore, we introduce a new measure for the distribution of temporal differences tdiffk within the receptive field of a k-layer GNN. Let N k(u) be the k-hop neighborhood of u, i. e., the set of vertices that are reachable from u by traversing at most k edges. Then, we define tdiffk(G) to be the multiset of time differences to past vertices:
$\mathrm{tdiff}_k(G) := \{ ts_{\min}(u) - ts_{\min}(v) \mid u \in V \wedge v \in N^k(u) \wedge ts_{\min}(u) \ge ts_{\min}(v) \}$ (1)
Please note that this is a measure to determine comparable window sizes over different datasets and different granularities. It needs to be computed only once per dataset, prior to any training iterations. When we consider a GNN with $k$ graph convolution layers, the distribution $\mathrm{tdiff}_k$ enumerates the temporal differences within the receptive field of the GNN. In our experiments, we will use the 25th, 50th, and 75th percentiles of this distribution for analyzing the effect of the temporal window size. This choice corresponds to an average receptive field coverage of 25%, 50%, and 75%.
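A direct way to compute this multiset is a depth-limited breadth-first search from every vertex. The sketch below is our own illustration on an adjacency-list representation; candidate window sizes are then, e.g., numpy.percentile(diffs, [25, 50, 75]).

```python
from collections import deque

def tdiff_k(neighbors, ts_min, k=2):
    """Multiset of temporal differences within each vertex's k-hop neighborhood (Eq. 1)."""
    diffs = []
    for u in neighbors:
        seen, frontier = {u}, deque([(u, 0)])
        while frontier:  # BFS collecting N^k(u); u itself is included (depth 0)
            v, depth = frontier.popleft()
            if depth == k:
                continue
            for w in neighbors[v]:
                if w not in seen:
                    seen.add(w)
                    frontier.append((w, depth + 1))
        diffs += [ts_min[u] - ts_min[v] for v in seen if ts_min[u] >= ts_min[v]]
    return diffs
```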
Newly Compiled Datasets Pre-compiled temporal graph datasets for our real-world scenario are surprisingly rare. Therefore we contribute three new temporal graph datasets based on scientific
publications: one temporal co-authorship graph dataset (PharmaBio) as well as two newly compiled temporal citation graph datasets based on DBLP (DBLP-easy and DBLP-hard). These new datasets enable us to simulate a real-world scenario, in which not only new vertices but also new classes (venues) appear over time. Table 1 summarizes the basic characteristics of the datasets and Figure 2 shows the distribution of temporal differences tdiffk for different values of k. For details on the dataset creation procedure as well as degree and label distributions, we refer to Appendix A.2.
Evaluation Measures As our datasets have imbalanced classes, one could argue to use Micro or Macro F1-score as evaluation measure. However, we are primarily interested in the relative performance between limited-window training and training on the full graph. Motivated by realworld scenarios, we chose sample-based F1-score as our evaluation measure (equivalent to accuracy in single-label scenarios). When aggregating results over time, we use the unweighted average.
5 EXPERIMENTAL RESULTS
We report the results of our experiments along the research questions stated in the introduction.
Q1: Distribution Shift under Static vs Incremental Training In this experiment, we compare a once-trained static model against incrementally trained models. We train the static models for 400 epochs on the data before the first evaluation time step, which comprises 25% of the total vertices. We train incremental models for 200 epochs on temporal windows of 3 time steps (4 on the PharmaBio dataset) before evaluating each time step. All models have comparable capacity.
Figure 3 shows the results. We see that the accuracy of the static models decreases over time on DBLP-easy and DBLP-hard, where new classes appear over time. On PharmaBio, the accuracy of the static models plateaus, while the accuracy of incrementally trained models increases. That confirms our expectations as PharmaBio does not have any new classes, and incrementally trained models merely benefit from the increased amount of training data, while DBLP-easy and DBLP-hard do have new classes appearing during the evaluation time frame. In the following experiments, we only use incrementally trained models because they outperform static models in all cases.
Q2: Training with Warm vs Cold Restarts We compare reusing the parameters of the model from the previous time step (warm restart) against randomly re-initializing the model parameters for each temporal window (cold restart). In both cases, we impose a 200 epoch budget per time step. The window size is set to 4 for PharmaBio and 3 for the two DBLP datasets, corresponding to 50% coverage of the GNNs’ receptive field. All models have comparable capacity.
Figure 4 shows the results. We observe that the results obtained by GNNs using warm and cold restarts are close to each other. On DBLP-hard with 23 new classes appearing during the evaluation steps, GS-Mean seems to benefit from warm restarts, while GATs yield better scores when cold restarts are used. On PharmaBio with a fixed class set, both GNNs benefit from reusing parameters from previous iterations. For now, we conclude that both reinitialization strategies are viable and we proceed by running both variants for the next experiments Q3 and Q4.
Q3: Incremental Training on Different Window Sizes We compare models trained on windows of different sizes with a model trained on all available data, i. e., the full graph, which is our baseline. We select three window sizes per dataset based on the distribution of temporal differences tdiff2 (see Section 4). These window sizes correspond to quartiles, i. e., the windows cover 25%, 50%, and 75% of the GNNs’ receptive field (RF) (see Table 1). Thus, we can compare window sizes across datasets with different characteristics, i. e., connectivity patterns through time and total number of time steps. The epoch budget is 200 and all models have comparable capacity.
Table 2 (top) shows the results. We observe that those GNN variants trained on the full timeline of the graph yield the highest scores on DBLP-easy and DBLP-hard. There, GNNs with window size 1 (25% RF) yield lower scores than training with larger window sizes (50% and 75% RF). On all datasets, the scores for training with limited window sizes larger than 1 are close to the ones of
full-graph training. In summary, with window sizes that cover 50% of the receptive field, GNNs and also MLPs achieve at least 95% classification accuracy compared to full-graph training. When 75% of the receptive field is covered by the temporal window, at least 99% accuracy is retained on all datasets. We refer to Appendix A.4 for extended results including both reinitialization strategies.
Q4: Incremental Training with Scalable GNN Methods Similarly to Q3, we again compare different window sizes against training on the full graph. This time, we focus on using scalable GNN techniques and aim to learn how they perform in conjunction with temporal windows. We further alleviate the fixed-capacity constraint of previous experiments and tune the hidden size as an additional hyperparameter. We refer to Appendix A.3 for details on hyperparameter choices.
We compare Simplified GCN and GraphSAINT, while including JKNet to isolate the effect of GraphSAINT sampling. Table 2 (bottom) shows the results. We observe that, again, limiting the window size to cover 50% of the GNN’s receptive field leads to at least 95% relative accuracy, compared to full graph training. As expected, GraphSAINT sampling (with JKNets as a base model) yields slightly lower scores than full-batch JKNets. On DBLP-hard, simplified GCN outperforms the other, more complex models. In terms of relative performance, limiting the receptive field does not negatively impact GraphSAINT on DBLP-hard and PharmaBio.
6 DISCUSSION
We have created a new experimental procedure for temporal graphs with new classes appearing over time, for which we contribute three newly compiled datasets with controlled degrees of distribution shift. In this online learning setup, we have evaluated three representative GNN architectures as well as two GNN scaling techniques. With the goal of generalizable results, we have introduced a new measure for the distribution of temporal differences tdiffk, based on which we have selected the temporal window sizes. Our results show that past data can be permanently deleted very early without diminishing the performance of an online vertex classification model. This has direct consequences for online learning of GNNs on temporal graphs and, thus, impacts how GNNs can be employed for numerous real-world applications.
Our main result is that incremental training with limited window sizes is as good as incremental training over the full timeline of the graph (see Q3 and Q4). With window sizes of 3 or 4 (50% receptive field coverage), GNNs achieve at least 95% accuracy compared to using all available data for incremental training. With window sizes of 6 or 8 (75% receptive field coverage), at least 99% accuracy can be retained. This result holds not only for standard GNN architectures but also when scaling techniques such as subgraph sampling are applied on top of the temporal window. Finally, in almost all experiments, at least 90% of relative accuracy is reached with a window of size 1.
Furthermore, we have verified that incremental training helps to account for distribution shift compared to once-trained, static models (see Q1). We have further investigated reusing parameters from previous iterations (Q2). Our results show that both strategies are viable when learning rates are tuned accordingly. During hyperparameter optimization for Q4, in which we alleviated the fixed-capacity constraint, we further noticed that warm restarts are more suitable for higher-capacity models with low learning rates, while using cold restarts admits using lower-capacity models and higher learning rates (the details of hyperparameter optimization can be found in Appendix A.3).
Even though it was not our main objective to compare the absolute performances of the models, it is noteworthy that simplified GCNs perform surprisingly well on DBLP-hard. Despite the simplicity of the approach, the model yields higher scores than GraphSAINT, JKNets, and fixed-capacity GATs, and is only outperformed by GraphSAGE-mean.
A limitation of the present work is that we assume that the true labels of each time step become available as training data for the next time step. In practice, however, only a small fraction of vertices might come with labels for training, while the larger part could be annotated by the model itself. Adapting our experimental procedure to use only a small fraction of true labels in each time step would be an interesting direction of future work.
One could further argue that deleting data that is not linked to the most recent data points would be a viable alternative to deletion based on a fixed time difference. However, this approach would only be feasible in retrospect because, in real-world scenarios, it is impossible to know whether future data will include a link to a past data point. Still, future work could involve employing other methods to determine what data to delete, such as the personalized PageRank score (Bojchevski et al., 2020).
7 CONCLUSION
Temporal graphs occur in many real-world scenarios such as citation graphs, transaction graphs, and social graphs. Practitioners face a trade-off between memory requirements, which are tied to the temporal window size, and the expected accuracy of their models. Until now, it was not clear how GNNs can be efficiently trained in such online scenarios, especially when distribution shift becomes an issue. We demonstrate that a high level of accuracy can be retained when training only on a fraction of the temporal graph, determined by a temporal window. The results of this paper can serve as guidelines for training GNNs on temporal graphs, particularly regarding the intentional forgetting of data while retaining a certain percentage of predictive power. For researchers, we supply our newly compiled datasets along with an implementation of the experimental procedure.
We will make the code and data available to reviewers during the peer-reviewing process as suggested in the ICLR 2021 author’s guide.
A APPENDIX
A.1 ALGORITHM FOR OUR EXPERIMENTAL PROCEDURE
Algorithm 1 outlines our incremental training and evaluation procedure.
Data: Temporal graph G, features X, labels y, time steps t, temporal window size c, epoch budget n_epochs
Result: Predicted class labels for vertices in each time step of the graph
known_classes ← ∅;
θ ← initialize_parameters();
for t* ← t_start to t_end do
    G̃ ← subgraph of G induced on vertices u, where t* − c ≤ ts_min(u) ≤ t*;
    ỹ_train ← ỹ_u, where ts_min(u) < t*;
    if do_cold_restart then
        // Cold restart: re-initialize all parameters
        θ ← initialize_parameters();
    else
        // Warm restart: initialize new parameters, copy others
        tmp ← clone(θ);
        θ ← initialize_parameters();
        θ|known_classes ← tmp|known_classes;
    end
    θ ← train(θ, G̃, X̃, ỹ_train) for n_epochs epochs;
    ỹ_pred ← predict(θ, G̃, X̃) for vertices u, where ts_min(u) = t*;
    known_classes ← known_classes ∪ set(ỹ_train);
end
Algorithm 1: Incremental training procedure of our experimental apparatus
A.2 DATASET DETAILS
In the following, we outline the dataset compilation procedure and supply further descriptive statistics of the resulting datasets.
PharmaBio To compile the PharmaBio dataset, we use the metadata of 543,853 papers by Pharma and Biotech companies from Web of Science (Melnychuk et al., 2019). After removing duplicates, our data cleaning procedure ensures that there is a certain amount of labels for each class per year and that each paper is connected to at least one other paper by a same-author edge. More specifically, we: (1) Keep only papers that are in a journal category with at least 20 papers per year; (2) Keep only papers where at least one of the authors has at least two papers per year; (3) Create a vocabulary of words (regular expression: \w\w+) that appear in at least 20 papers globally and keep only papers with at least one of these words. We iterate steps 1–3 until no further paper has been removed in one pass. We end up with 68,068 papers from 23,689 authors working for 68 companies. These papers are distributed across 2,818 journals which are, in turn, categorized into seven journal categories. During preprocessing, each paper becomes a vertex in the graph. The class of the paper is the category of the journal in which it was published. We insert an edge between two vertices if they share at least one common author (based on string comparison).
DBLP-easy To compile these datasets, we use the DBLP Citation Network dataset (version 10)1 (Tang et al., 2008) as a basis. It comprises 3M citing documents and 25M citations to 2M distinct cited documents. We use venues (conferences or journals) as class labels and use citations as edges. First, we select the subset from 1990 until 2015. Then, we follow a similar procedure as above: (1) Keep only papers from venues that have at least τvenue papers in each year they occur (may be only every second year). (2) Keep only papers that stand in at least
1https://aminer.org/citation
one citation relation to another paper. (3) Remove papers from venues that occur only in a single year. (4) Keep only papers with at least one word from a vocabulary of words that are in at least τwords papers. We iterate steps 1–4 until no further paper has been removed in one pass.
DBLP-hard The difference between DBLP-easy and DBLP-hard is that τvenue := 100 papers in the easy variant and τvenue := 45 papers in the hard variant. The minimum word occurrence threshold τwords is set to 20 for DBLP-easy and 40 for DBLP-hard. Finally, we construct the graph with papers as vertices, citations as edges, and venues as classes.
For all three datasets, we use L2-normalized tf-idf (Salton & Buckley, 1988) representations as vertex features, based on the corresponding papers' titles. We estimate the power-law coefficient $\alpha$ via maximum likelihood (Newman, 2005): $\alpha = 1 + n \left( \sum_{u \in V} \ln \frac{\deg_u}{\deg_{\min}} \right)^{-1}$, where $\deg_{\min}$ is 1 (2 for PharmaBio).
In Figure 5, we visualize the degree distribution, label distribution, the distribution over years, as well as the distributions of temporal differences (as described in Section 4). All compiled datasets seem to follow a power law distribution, which is typical for citation and co-authorship graphs.
For each dataset, we chose the boundaries for our evaluation time steps [tstart, tend], such that 25% of the total number of vertices lie before tstart, and tend is the final time step. For PharmaBio (1985–2016), that is tstart = 1999, and for both DBLP variants (1990–2015), that is tstart = 2004. Data before tstart may be used for training, depending on the window size. Regarding changes in the class set (distribution shift), DBLP-easy has 12 venues in total, including one bi-annual conference and four new venues appearing in 2005, 2006, 2007, and 2012. DBLP-hard has 73 venues, including one discontinued, nine bi-annual, six irregular venues, and 23 new venues.
A.3 IMPLEMENTATION DETAILS AND HYPERPARAMETERS
We tune the hyperparameters separately for each window size and each restart configuration. We tune the hyperparameters on DBLP-easy and use the same set of hyperparameters for DBLP-hard and PharmaBio.
For experiments Q1-Q3, we design the models to have a comparable capacity: one hidden layer with 64 hidden units. We use ReLU activation on the hidden layer of MLP and GS-Mean. GS-Mean has one hidden layer, i. e. two graph convolutional layers, with 32 units for self-connections and 32 units for aggregated neighbor representations. GAT has one hidden layer composed of 8 attention heads and 8 hidden units per head, along with one attention head for the output layer. We initialize the model parameters according to Glorot and Bengio (Glorot & Bengio, 2010). For both GS-Mean and GAT, the output of the second layer corresponds to the number of classes. We use dropout probability 0.5 on the hidden units for all models in experiment Q3. We use Adam (Kingma & Ba, 2014) to optimize for cross-entropy. We tune the learning rates on DBLP-easy with a search space of {10^-1, 5·10^-2, 10^-2, 5·10^-3, 10^-3, 5·10^-4, 10^-4} and re-use these learning rates for the other datasets. The learning rates are tuned separately for each model, each parameter reinitialization strategy, and each window size. We do not use weight decay because it did not increase the performance (search space {0, 10^-3, 5·10^-4, 10^-4, 5·10^-5, 10^-5}). The optimal learning rates can be found in Figure 6 for Q1, Figure 7 for Q2, and Figure 8 for Q3. For implementation of GraphSAGE-mean and GATs, we use DeepGraphLibrary (Wang et al., 2019). All methods are trained transductively: for each new snapshot, the new vertices are inserted into the graph without their labels, then, the models are allowed to (up-)train before making predictions.
For the experiment Q4, we use two hidden layers with 64 hidden units each to make use of jumping knowledge (Xu et al., 2018), as suggested as base architecture in GraphSAINT (Zeng et al., 2020). The learning rate is tuned in the space of {0.0001, 0.001, 0.01, 0.1}. Dropout probability is set to 0.2. We do not use weight decay. We also tune the batch size of GraphSAINT in the range of {256, 512, 2048, 4096}, as subgraph size is an important hyperparameter. For simplified GCN, we tune the learning rate in the range of {0.0005, 0.001, 0.005, 0.01, 0.05} and we set the neighborhood aggregation parameter K to 2, corresponding to two-layer aggregation. For implementation of GraphSAINT and JKNet, we use PyTorch-geometric (Fey & Lenssen, 2019). The optimal hyperparameter values as well as the respective search spaces for experiment Q4 can be found in Table 3. JKNets and simplified GCNs are trained transductively, while GraphSAINT is trained inductively as suggested by the original work (Zeng et al., 2020).
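For reference, a GraphSAINT random-walk loader can be set up in PyTorch-geometric roughly as follows. This is a sketch with toy data; note that in PyG >= 2.0 the sampler lives in torch_geometric.loader, while older versions ship it in torch_geometric.data.

```python
import torch
from torch_geometric.data import Data
from torch_geometric.loader import GraphSAINTRandomWalkSampler

# Toy stand-in for one temporal-window snapshot.
data = Data(x=torch.randn(1000, 64),
            edge_index=torch.randint(0, 1000, (2, 5000)),
            y=torch.randint(0, 12, (1000,)))

loader = GraphSAINTRandomWalkSampler(data, batch_size=2048, walk_length=2,
                                     num_steps=5, sample_coverage=100)

for subgraph in loader:  # each item is one sampled training subgraph
    pass  # forward/backward with the JKNet base model would go here
```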
A.4 EXTENDED RESULTS
Table 4 shows the full results table with both warm and cold restarts for experiment Q3. Table 5 shows the full results table with both warm and cold restarts for experiment Q4. Figure 9 visualizes the results for each time step of experiment Q3. | 1. What is the focus of the paper regarding graph neural networks?
2. What are the strengths of the proposed approach, particularly in its relevance and motivation?
3. What are the limitations of the paper, especially regarding its soundness and significance?
4. Do you have any concerns about the originality of the work compared to prior works in temporal GNNs? | Review | Review
Temporal graphs can naturally model many real-world networks, and many graph neural network (GNN)-based methods have been proposed recently. Existing temporal GNNs can handle vertices and edges appearing / disappearing over time, but not vertex classes. This paper precisely considers this problem, and
compiles three vertex classification datasets for future research,
proposes an experimental procedure for evaluating performance under this setting,
explores 5 existing GNNs, and concludes that incremental training for limited periods is as good as that over full timelines.
Pros
(Motivation) It is reasonable to assume that new classes can appear over time in real-world networks. It is also worth investigating whether the full temporal graph (seen so far) is actually required for GNN neighbourhood aggregation in the current timestep.
(Relevance) Learning representations on temporal graphs is a challenging, fast-growing topic, and relevant to the ICLR community.
Cons
(Soundness) Tables 2, 3, and 4 compare accuracies of different static GNNs with varying window sizes (proposed idea) and with full graph (existing idea) which is informative. However, to increase the impact of the paper, the proposed idea (with static GNNs) should also be compared against state-of-the-art temporal GNNs on full graphs (in all these tables). As already cited by the authors, recent temporal GNNs include (but are not limited to) (a) EvolveGCN: Evolving Graph Convolutional Networks for Dynamic Graphs, In AAAI'20, (b) Inductive Representation Learning on Temporal Graphs, In ICLR'20.
(Significance) The experiments in the paper are restricted to multi-class vertex classification with new classes appearing over time (in just one dataset domain based on scientific publications). The authors should clarify what challenges one would face for multi-label classification commonly seen with some datasets (e.g. social networks). It would be more convincing if experiments were also conducted on link prediction (e.g. social network link prediction with new classes i.e. communities appearing over time).
(Originality) Although the assumptions (classes appearing/disappearing over time), evaluation procedure, and datasets have not been considered / proposed before, the novelty of the paper is quite limited. As also acknowledged by the authors, the paper explores well-known existing static GNNs for temporal graphs. From this point of view, the paper is of limited originality since it explores well-known algorithms in an unexplored setting.
To summarise, the paper has strong arguments along the axis of motivation but the major weaknesses outweigh the strengths. |
ICLR | Title
LatentPoison -- Adversarial Attacks On The Latent Space
Abstract
Robustness and security of machine learning (ML) systems are intertwined, wherein a non-robust ML system (classifiers, regressors, etc.) can be subject to attacks using a wide variety of exploits. With the advent of scalable deep learning methodologies, a lot of emphasis has been put on the robustness of supervised, unsupervised and reinforcement learning algorithms. Here, we study the robustness of the latent space of a deep variational autoencoder (dVAE), an unsupervised generative framework, to show that it is indeed possible to perturb the latent space, flip the class predictions and keep the classification probability approximately equal before and after an attack. This means that an agent that looks at the outputs of a decoder would remain oblivious to an attack.
1 INTRODUCTION
The ability to encode data reliably is essential for many tasks including image compression, data retrieval and communication. As data is transmitted between communication channels, error detection and correction is often employed to deduce the presence of erroneous bits (Peterson & Weldon, 1972). The source of such errors can be imperfections in the transmitter, the channel, or the receiver. Oftentimes, such errors are deliberate: a man-in-the-middle attack (Desmedt, 2011; Conti et al., 2016) can result in deleterious erasure of information, yet to the receiver the message may appear untampered (Kos et al., 2017).
In deep learning, we are able to learn an encoding process using unsupervised learning such as in autoencoders (AE) (Kingma & Welling, 2013); however, we are less able to design methods for checking whether encodings have been tampered with. Therefore, there are two facets of this problem – the first, is to come up with methodologies of tampering with the models and second, is to detect the adversarial breach. In what follows, we will concentrate only on the first problem by presenting a method for tampering autoencoders. An autoencoder has two components: the encoder maps the input to a latent space, while the decoder maps the latent space to the requisite output. A vanilla autoencoder can, therefore, be used to compress the input to a lower dimensional latent (or feature) space. Other forms of autoencoder include the denoising AE (Vincent et al., 2010) that recovers an undistorted input from a partially corrupted input; the compressive AE (Theis et al., 2017) designed for image compression and the variational AE (Kingma & Welling, 2013) that assumes that the data is generated from a directed graphical model with the encoder operationalized to learn the posterior distribution of the latent space. Autoencoders have wide use in data analytics, computer vision, natural language processing, etc.
We propose an attack that targets the latent encodings of autoencoders, such that if an attack is successful the output of an autoencoder will have a different semantic meaning to the input. Formally, we consider an autoencoder consisting of an encoder and decoder model designed to reconstruct an input data sample such that the label information associated with the input data is maintained. For example, consider a dataset of images, x, with labels y ∈ {0, 1}, an encoder, E : x → z, and a decoder, D : z → x, where z is a latent encoding for x. If the encoder and decoder are operating normally, the label of the reconstructed data sample, ˆ̂y = class(D(E(x))), should be the same as the label of the input data sample, where class(·) is the soft output of a binary classifier. In this paper, we focus on learning an attack transformation, Tz, such that if z is the latent encoding for a data sample, x, with label 0, Tz is the latent encoding for a data sample with label 1. The
attack is designed to flip the label of the original input and change its content. Note that the same T is applied to each encoding and is not specific to either the input data sample or the encoding, it is only dependent on the label of the input data sample.
The success of an attack may be measured in three ways:
1. The number of elements in the latent encoding changed by the attack process should be small. If the encoding has a particular length, changing multiple elements may make the attack more detectable.
2. When a decoder is applied to tampered encodings, the decoded data samples should be indistinguishable from other decoded data samples that have not been tampered with.
3. Decoded tampered-encodings should be classified with opposite label to the original (untampered) data sample.
Our contribution lies in studying transforms with these properties. Experimentally, we find that optimizing for requirement (1) may implicitly encourage requirement (2). Crucially, in contrast to previous work (Goodfellow et al., 2014), our approach does not require knowledge of the model (here a VAE) parameters; we need access only to the encodings and the output of a classifier, making our approach more practical (Papernot et al., 2017). Finally, we owe the success of this attack method primarily to the near-linear structure of the VAE latent space (Kingma & Welling, 2013) – which our attack exploits.
2 COMPARISON TO PREVIOUS WORK
Security in deep learning algorithms is an emerging area of research. Much focus has gone into the construction of adversarial data examples, inputs that are modified such that they cause deep learning algorithms to fail. Previous work, designing adversarial images, has focused on perturbing input data samples such that a classifier misclassifies the adversarial examples (Goodfellow et al., 2014). The perturbation is intended to be so small that a human cannot detect the difference between an original data sample and its adversarial version. Goodfellow et al. (Goodfellow et al., 2014) propose adding a perturbation proportional to sign(∇ₓJ(θ, x, y)), where J is the cost function used to train the classifier being attacked, θ are the parameters of that classifier, and x and y are the data and label pair, respectively. This type of attack requires the attacker to have high-level access to the classifier's parameters and cost function. An alternative approach that does not require the adversary to have access to the model parameters is presented by Papernot et al. (Papernot et al., 2017), who propose a more practical approach, requiring only the classifier output and knowledge of the encoding size. Our adversary has similar, practical requirements.
Our approach is thus tangential to the previous work on adversarial images for classification. We focus on a man-in-the-middle form of attack (Diffie & Hellman, 1976): rather than launching an attack on data samples, we launch an attack on an intermediate encoding, such that the message sent by a sender is different from the message received by a receiver. Similar to previous work, we do not want the attack on the encoding to be detectable, but in contrast to previous work (Goodfellow et al., 2014; Papernot et al., 2017), we wish for the message – in this example the images – to be detectably changed, while still being consistent with other non-tampered messages.
Our work is more similar to that of Kos et al. (Kos et al., 2017), in the sense that they propose attacking variational autoencoders in a similar sender-receiver framework. Their goal is to perform an attack on inputs to an autoencoder such that the output of the autoencoder belongs to a different class to the input. For example, an image of the digit 8 is encoded, but following an attack, the decoded image is of the digit 7 (Kos et al., 2017). While the overall goal is very similar, their approach is very different, since they focus on perturbing images – while we perturb latent encodings. This difference is illustrated in Figure 1.
Finally, most previous work (Goodfellow et al., 2014; Papernot et al., 2017; Kos et al., 2017) requires the calculation of a different perturbation for each adversarial example. Rather, in our approach, we learn a single (additive) adversarial perturbation that may be applied to almost any encoding to launch a successful attack. This makes our approach more practical for larger scale attacks.
3 METHOD
In this section, we describe how we train a VAE and how we learn the adversarial transform that we apply to the latent encoding.
3.1 PROBLEM SETUP
Consider a dataset, D, consisting of labeled binary examples, {x_i, y_i}_{i=1}^N, with y_i ∈ {0, 1}. To perform the mappings between data samples, x, and corresponding latent samples, z, we learn an encoding process, q_φ(z|x), and a decoding process, p_θ(x|z), which correspond to an encoding function E_φ(·) and a decoding function D_θ(·), respectively; φ and θ parameterize the encoder and decoder. Our objective is to learn an adversarial transform, T̂, such that class(x) ≠ class(T̂x), where T̂ is constrained under an L_p norm. Here, class(·) is the soft output of a binary classifier. Rather than applying an adversarial transformation (Moosavi-Dezfooli et al., 2016) T̂ directly to the data, x, we propose performing the adversarial transform T on the latent representation, Tz. We learn a transform T, with z = E_φ(x), subject to class(D_θ(Tz)) ≠ class(D_θ(z)).¹
We consider three methods of attack, and compare two approaches for regularizing T . The three attack methods that we consider are as follows:
1. An Independent attack: We consider an attack on a pre-trained variational autoencoder (VAE). T is learned for the pre-trained VAE.
2. A Poisoning attack: We consider an attack during VAE training (poisoning). T is learned at the same time as the VAE.
3. A Poisoning+Class attack: We consider an attack during VAE training, where the VAE is trained not only to reconstruct samples but to produce reconstructions that have low classification error. This, in turn, encourages the VAE to have a discriminative internal representation, possibly making it more vulnerable to attack. We learn T at the same time.
¹ Note that in the case where class labels are binary, this is equivalent to learning a T such that class(D_θ(Tz)) = 1 − class(D_θ(z)).
ADDITIVE PERTURBATION (z + ∆z)
Here, we consider Tz = z + ∆z. There are several options for the form that ∆z may take. In the first case, ∆z may be a constant. We may learn a single transform to flip an image with label 0 to an image with label 1, and another for moving in the opposite direction. On the other hand, we may learn a single ∆z and apply −∆z to move in one direction and +∆z to move in the other. The advantage of using a constant ∆z is that at attack time the adversarial perturbation has already been pre-computed, making it easier to attack multiple times. There is a further advantage to using only a single ∆z because the attacker need only learn a single vector to tamper with (almost) all of the encodings. Alternatively, ∆z may be a function of any combination of the variables x, y, z; however, this may require the attacker to learn an attack online – rather than having a precomputed attack that may be deployed easily. In this paper, we are interested in exploring the case where we learn a single, constant ∆z.
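As a concrete illustration, the single-constant-∆z case could be learned as in the following PyTorch sketch. This is not the authors' code: the `encoder`, `decoder` and `classifier` objects (the latter assumed to emit a sigmoid output in [0, 1]), the latent size of 200 and all hyperparameter values are assumptions.

```python
import torch

def learn_delta_z(encoder, decoder, classifier, loader,
                  latent_dim=200, p=1, lam=1e-3, lr=1e-3, epochs=10):
    # One shared perturbation vector, reused for (almost) all encodings.
    delta_z = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([delta_z], lr=lr)
    bce = torch.nn.BCELoss()
    for _ in range(epochs):
        for x, y in loader:                       # y in {0, 1}
            with torch.no_grad():                 # the pre-trained VAE stays frozen
                z = encoder(x)
            sign = (1.0 - 2.0 * y.float()).view(-1, 1)   # +dz for y=0, -dz for y=1
            y_adv = classifier(decoder(z + sign * delta_z)).view(-1)
            # Flip labels: target (1 - y), plus an L_p penalty on delta_z.
            loss = bce(y_adv, 1.0 - y.float()) + lam * delta_z.norm(p=p)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return delta_z.detach()
```

Because the same `delta_z` is optimized over the whole dataset, the returned vector is exactly the kind of precomputed, reusable perturbation described above.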
We also consider a multiplicative perturbation. However, we reserve explanation of this for the Appendix (Section 7).
3.2 LOSS FUNCTIONS
Here, we consider the cost functions used to train a VAE and learn T. The VAE is trained to reconstruct an input, x, while also minimizing a Kullback-Leibler (KL) divergence between a chosen prior distribution, p(z), and the distribution of encoded data samples. The parameters of the VAE are learned by minimizing J_vae = BCE(x, x̂) + α KL[q_φ(z|x) || p(z)], where BCE is the binary cross-entropy and α is the regularization parameter. A classifier may be learned by minimizing J_class = BCE(y, ŷ). An additional cost function for training the VAE may be the classification loss on reconstructed data samples, BCE(y, ˆ̂y). This is similar to an approach used by Chen et al. (Chen et al., 2016) to synthesize class-specific data samples. Finally, to learn the attack transform, T, we minimize J_∆z = BCE((1 − y), y̌) + L_p(T); for the case above (Section 3.1) we have L_p(T) = ‖∆z‖_p. This allows us to learn a transform on a latent encoding that results in a label flip in the decoded image. Minimizing the L_p-norm for p ∈ {1, 2} encourages the transform to target a minimal number of units of z. Specifically, using p = 1 should encourage the perturbation vector to be sparse (Donoho, 2006). When ∆z is sparse, only a few elements of z may be changed. Such minimal perturbations reduce the likelihood that the attack is detected.
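A hedged sketch of these three loss terms, written with standard PyTorch calls, is given below; α = 0.1 matches the value reported in Section 7, while the penalty weight `lam` and all variable names are our assumptions.

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_hat, mu, logvar, alpha=0.1):
    # J_vae = BCE(x, x_hat) + alpha * KL[q_phi(z|x) || N(0, I)]
    bce = F.binary_cross_entropy(x_hat, x, reduction='sum')
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return bce + alpha * kl

def classifier_loss(y_hat, y):
    # J_class = BCE(y, y_hat)
    return F.binary_cross_entropy(y_hat, y)

def attack_loss(y_check, y, delta_z, p=1, lam=1e-3):
    # J_dz = BCE(1 - y, y_check) + L_p(T), with L_p(T) = ||delta_z||_p
    return F.binary_cross_entropy(y_check, 1.0 - y) + lam * delta_z.norm(p=p)
```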
3.3 EVALUATION METHOD
The goal for the attacker is to tamper with the encoding such that the label of the decoded sample is flipped. For example, if the label was 1 initially, following a successful attack, the label should be 0. Rather than assigning binary labels to samples, our classifier outputs values between [0, 1], where 0 or 1 suggests that the classifier is highly certain that a data sample belongs to either class 0 or class 1, while a classifier output of 0.5 means that the classifier is unsure which class the sample belongs to. When an attack is successful, we expect a classifier to predict the class of the reconstructed image with high certainty. Further, for an attack to be undetectable, we would expect a classifier to predict the label of a reconstructed, un-tampered data sample with almost the same certainty as a tampered one. Formally, we may evaluate the quality of an attack by measuring |ε| such that²:

class(x) = 1 − class(T̂x) + ε
class(D_θ(z)) = 1 − class(D_θ(Tz)) + ε

Based purely on the classification loss, in the case where ε = 0, the encodings that have been tampered with would be indistinguishable from those that had not. An attack may be considered undetectable if |ε| is small. Typically, |ε| may be related to the standard deviation in classification results.
To calculate ε we make two practical alterations. The first is that our classifier outputs values in [0, 1], which do not necessarily correspond to probabilities, but may in some respect capture the confidence of a single classification. Using the output of the classifier, we compute confidence scores, where 0 corresponds to low confidence and 1 to high confidence. For a sample whose true label is 1, the confidence is taken to be the output of the classifier. For a sample whose true label is 0, the confidence is taken to be (1 − class(·)), where class(·) is the output of the classifier. The second is that if the classifier is more confident when classifying one class compared to the other, it does not make sense to compare class(x) to class(T̂x). Rather, we compare:

class(x_{y=1}) = class(T̂x_{y=0}) + ε
class(D_θ(z_{y=1})) = class(D_θ(Tz_{y=0})) + ε

where x_{y=0} and x_{y=1} are data samples with true labels 0 and 1, respectively, and z_{y=0} and z_{y=1} are the encodings of x_{y=0} and x_{y=1}, respectively.

² We assume class(x) = class(x̂).
We measure the performance of all attacks using the same classifier, so that we may compare attack types more easily. As a consequence, we are also able to show that the attack is partially agnostic to the classifier, provided that the classifier is trained to perform a similar task.
We discuss an additional probabilistic evaluation method in Section 6.4 of the Appendix.
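For illustration, the confidence scores and the ε-gap described above could be computed as follows (NumPy; function and argument names are ours, and `class_out` is assumed to hold the classifier's soft outputs in [0, 1]):

```python
import numpy as np

def confidence(class_out, true_label):
    # Confidence is class(.) when the true label is 1, and 1 - class(.) when it is 0.
    return np.where(true_label == 1, class_out, 1.0 - class_out)

def epsilon_gap(conf_recon_smile, conf_attacked_to_smile):
    # epsilon from class(D(z_{y=1})) = class(D(T z_{y=0})) + eps;
    # a small |eps| means the attack is hard to detect.
    return float(np.mean(conf_recon_smile) - np.mean(conf_attacked_to_smile))
```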
4 EXPERIMENTS AND RESULTS
We compare 3 methods of attack using 2 different types of regularization on ∆z – totaling 6 experiments. The three methods of attack are listed in Section 3 and the two types of regularization are the L1-norm and the L2-norm. We show qualitative results for only two examples in the main text and reserve the rest for the appendix. We provide a quantitative analysis in the form of confidence scores (discussed in Section 3.3) for all 6 attack types.
4.1 DATASET
Experiments are performed on the CelebA dataset consisting of 200k colour images of faces, of which 100 are reserved for testing. The samples are of size 64 × 64, and we do not crop the images. Each image is assigned a binary label, 1 for smiling and 0 for not smiling.
4.2 USING (z + ∆z) WITH L2 REGULARIZATION
In this section, we focus on adversaries that have been trained using L2 regularization. Figure 4 shows the results of an adversarial attack, where the adversary is learned for a pre-trained VAE, which was trained without label information. We expected this to be a more challenging form of attack since the VAE would not have been trained with any discriminative label information – making it less likely to learn features specifically for “smile” and “not smile”. Visual examples of decoded tampered and non-tampered encodings are shown in Figure 4. Figure 4(a) shows reconstructed images of people smiling, while (b) shows similar faces, but without smiles (attacked). Similarly, Figure 4(c) shows reconstructed images of people that are not smiling, while (d) shows similar faces smiling (attacked). In most cases, the success of the attack is obvious.
Quantitative results in Table 1 show several important results. In all cases, the decoded tampered encodings are classified with high confidence. This confidence is higher than the classifier's confidence on either the original images or the reconstructed ones, which suggests that the adversarial attack is successful at tampering with the encoding. Evaluating the attacks by confidence alone, it appears that all adversaries perform similarly well for all attack types. However, it is important to consider the difference between the confidence on reconstructed samples and on samples whose encoding was tampered with. Since the attacker aims to directly optimize the classification score, it is no surprise that affected samples have a higher confidence score. It does, however, make the attack potentially more detectable. From this perspective, the more successful attacks would be those whose difference between confidence scores is small (see Section 3.3).
For this particular set of attacks, the most stealthy would be switching from “no smile” to “smile”, attacking a VAE trained using label information. We may expect a VAE trained with label information to be a particularly good target as it is already trained to learn discriminative features. We also notice that it is easier for the attacker to move in the direction from “no smile” to “smile” than the reverse. The reason for this may be related to the slight bias in the classification results. However, this may also stem from the subjective labelling problem: some of the faces in Figure 4(a) that belong to the “smile” class are not clearly smiling.
Both the qualitative results in Figure 4 and the quantitative results in Table 1 indicate successful attack strategies. Further, visual results are shown in the Appendix for the other attack methods, and images showing the pixel-wise difference between reconstructions and attacked samples are also shown (Figure 11) to highlight the effects of T .
4.3 USING (z + ∆z) WITH L1 REGULARIZATION
In this section, we look at results for attacks using L1 regularization on ∆z. L1 regularization is intended to encourage sparsity in ∆z, targeting only a few units of the encoding. In Figure 10 in the appendix, we show that L1 regularization does indeed lead to a more sparse ∆z being learned.
In Figure 5, we show visual results of an adversarial attack, with the original reconstructions on the left and the reconstructions for tampered encodings on the right. We show examples of all 3 types of attack, with L1 regularization in the appendix. The attack appears to be successful in all cases. We visualize the pixel-wise change between reconstructions of encodings and tampered encodings in Figure 11 of the appendix. Note that our results are not “cherry picked”, but simply chosen randomly.
Table 2 shows confidence values for each type of attack when using L1 regularization on ∆z. In all cases, the confidence values for the samples which were attacked are higher than for both reconstructed samples and original data samples. This is likely to be because the adversary is picking a perturbation that directly optimises the classification score. It is, however, important to remember that the classifier used to evaluate the attack is the same for all attacks and not the same one used for training the adversary.
As before, if there is a clear difference in confidence score between the reconstructed data samples and the decoded tampered encodings, it will be obvious that an attack has taken place. If we consider the difference between these scores, then the most stealthy attacks are those learning ∆z at the same time as learning the VAE, switching between “no smile” and “smile”. Similarly to the results obtained with L2 regularization on ∆z, the more successful attack – in terms of stealth – is to go from “no smile” to “smile” for all attack types.
5 DISCUSSION AND CONCLUSION
In this paper, we propose the idea of latent poisoning – an efficient methodology for an adversarial attack via structured modification of the latent space of a variational autoencoder. Both additive and multiplicative perturbations, with sparse and dense structure, show that it is indeed possible to flip the predicted class with minimum changes to the latent code.
Our experiments show that additive perturbations are easier to operationalize than multiplicative transformations of the latent space. It is likely that additive perturbations perform reasonably because of the near-linear structure of the latent space. It has been shown that, given two images and their corresponding points in latent space, it is possible to linearly interpolate between samples in latent space to synthesize intermediate images that transition smoothly between the two initial images (Kingma & Welling, 2013; Radford et al., 2015). If the two images were drawn from each of the binary classes, and a smooth interpolation existed between them, this would mean that additive perturbation in the latent space, along this vector, would allow movement of samples from one class to the other.
How can we counter such a poisoning of the latent space? It might be helpful to look into the predictive probability and its uncertainty on outputs from an autoencoder. If the uncertainty is above a threshold value, an attack may be detected. Detection via predictive probability and its uncertainty, as well as alternative methods, such as inspection of the latent encoding, become even more difficult when the attacker has altered the latent distribution minimally (under a norm).
Given the prevalence of machine learning algorithms, the robustness of such algorithms is increasingly becoming important (McDaniel et al., 2016; Abadi et al., 2017), possibly on par with reporting the test error of such systems.
6 APPENDIX
6.1 SAMPLES WITH AND WITHOUT LABEL SWITCH
In the main body of the text, we showed received images for the case where an attack has taken place for two types of attack. In this section, we show the remaining examples.
6.2 COMPARE USING ‖∆z‖₁ WITH ‖∆z‖₂
In this section, we present tables of values and figures comparing the 3 different attacks under the 2 different regularization methods.
6.3 ENTROPY OF PERTURBATION
We expect that using L1 regularization will give more sparse perturbations ∆z than using L2 regularization. In Figure 10, we show the effect of the regularization term for each attack type: (1) learning a ∆z for a pre-trained VAE, (2) learning a ∆z while training a VAE and (3) learning a ∆z while training a VAE and using class information to train the VAE. It is clear from Figure 10 that using L1 regularization does indeed result in a more sparse ∆z.
6.4 CAN WE USE KNOWLEDGE OF THE PRIOR TO DETECT AN ADVERSARIAL ATTACK?
Figure 10 provides information about the magnitude of the adversarial perturbations. Here, we consider how knowledge of the magnitude of the perturbations may allow us to understand the probability of an attack being detected. We consider an approach to individually test each element of a latent encoding to see if we can determine whether an attack has taken place. We refer to a single element of the perturbation ∆z as δz and consider whether we can detect a perturbation to a single element in isolation from the other elements in the encoding.
In a variational autoencoder, the distribution of encoded data samples is trained to match a chosen prior distribution – in this case a Gaussian. Assuming that the autoencoder is trained well, we may say that the distribution of encoded data samples is Gaussian. Further, we assume that each element in the encoding is drawn independently from the Gaussian distribution. From this, we know that c. 99.5% of individual encoding values lie between −2.807σ and 2.807σ, where σ is the standard deviation of the Gaussian distribution. This means that approximately 1/200 of the elements lie outside this interval.³ In our case σ = 1.
Any addition to samples from a Gaussian distribution results in a shift of the distribution. For an adversarial attack involving an additive perturbation δz applied to a single unit of z, we may calculate the probability that this element of the tampered encoding lies outside the range [−2.807, 2.807]. The formula for this is given by:
P_{99.5%}(δz) = 1 − (1/2)[1 + erf((2.807 − δz)/√2)] + (1/2)[1 + erf((−2.807 − δz)/√2)]
where erf(·) is the error function. Note that P99.5%(1) = 0.04, P99.5%(2) = 0.2 and P99.5%(5) = 0.98.
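These values can be checked numerically with the standard library alone; the sketch below reproduces them to within rounding (we obtain ≈0.04, ≈0.21 and ≈0.99 for δz = 1, 2, 5):

```python
import math

def p_outside(dz, t=2.807):
    # P(|e + dz| > t) for e ~ N(0, 1): the chance that a shifted encoding
    # element leaves the 99.5% interval [-t, t].
    phi = lambda u: 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))
    return 1.0 - (phi(t - dz) - phi(-t - dz))

for dz in (1, 2, 5):
    print(dz, round(p_outside(dz), 2))   # 0.04, 0.21, 0.99 -- close to the text
```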
We may use this to evaluate our attack processes; it may also be used to further regularize our models to ensure that the probability of being detected is less than a chosen threshold. Looking at Figure 10, we can see that only the attacks in (a) and (b) using L2 regularization are likely to be undetectable according to the criteria above, assuming that the encoded data samples follow a Gaussian distribution.
6.5 THE EPSILON GAP
Here, we compare the ε-gap (described in Section 3.3) for each type of attack, using each type of regularization. We expected that using L1 regularization would encourage the minimal change to the encoding needed to make a switch between labels, and therefore that it might influence the epsilon value. However, for a sparse ∆z to have the desired properties we also require the structure of the latent space to be sparse. Since we did not enforce any sparsity constraint on the latent encoding when training the VAE, sparsity of the latent samples is not guaranteed. Therefore, although it is useful to learn a sparse ∆z to facilitate the speed of the attack (a minimal number of changes to the encoding), it does not clearly affect the overall quality of the attack.
³ Our latent encoding is of size 200; however, the choice of 99.5% is fairly arbitrary and may be chosen more precisely depending on the application.
Table 3: Epsilon gap values

                                 z + ∆z           z(1 + ∆z)
Samples                        p=1    p=2       p=1    p=2
Learn ∆z & Independent         0.07   0.19      0.09   0.10
Learn ∆z & Poisoning jointly   0.20   0.10      0.00   0.09
Learn ∆z & Poisoning+Class     0.18   0.11      0.07   0.00
6.6 THE EFFECT OF ∆z ON x
In Figure 11 we show the difference between the reconstructed data samples and the decoded tampered encodings. These images highlight the effect of the adversarial perturbation – applied in the latent space – in the data space.
7 IMPLEMENTATION DETAILS
For the encoder, decoder and classifier we use an architecture similar to that used by Radford et al. (2015). We weight the KL-divergence in the VAE loss by α = 0.1 and we train the model using Adam with a learning rate of 2 × 10⁻⁴; however, training was not sensitive to this parameter – training with a learning rate of 1 × 10⁻³ also worked. Our code, both for training (with all parameter values) and evaluation, will be made available after the review process via Github.
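For concreteness, a DCGAN-style encoder for 64 × 64 inputs, in the spirit of Radford et al. (2015), might look as follows; the channel widths and the latent dimensionality of 200 are our assumptions rather than the authors' exact specification:

```python
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=200):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),                          # 64 -> 32
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),   # 32 -> 16
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),  # 16 -> 8
            nn.Conv2d(256, 512, 4, 2, 1), nn.BatchNorm2d(512), nn.LeakyReLU(0.2),  # 8 -> 4
        )
        self.mu = nn.Linear(512 * 4 * 4, latent_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(512 * 4 * 4, latent_dim)   # log-variance of q(z|x)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.mu(h), self.logvar(h)
```

The decoder would mirror this structure with transposed convolutions, and all modules would be trained with torch.optim.Adam at the stated learning rate of 2 × 10⁻⁴.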
MULTIPLICATIVE PERTURBATION z · (1 + ∆z)
To formulate a multiplicative perturbation, we require that the element(s) that encode smile or no smile have different signs for each class. We may then learn a multiplicative mask, where most of the values are ones, and one or a few values are negative. The values may not all be positive: if all the values are positive, then signs in the encoding cannot be switched and no label swap may take place. In this formulation, we cannot guarantee that the encoding will take the desired form. From preliminary experiments, we see that faces classified as “smiling” often appear to be smiling more intensely after the transform. This is likely to be because the autoencoder considered the image to be a person not smiling in the first place.
In our formulation, we use a single ∆z to which we apply L_p regularization. The transform is then z(1 + ∆z). Note that it does not make sense to have a formulation for each direction, i.e. z(1 − ∆z) for the other direction; if the encodings for opposite samples have opposite signs, a negative ∆z is sufficient to provide a transform in both directions.
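A minimal sketch of how this variant would be applied (names follow the earlier sketches; the comment states the sign-flip assumption):

```python
import torch

def apply_multiplicative(z: torch.Tensor, delta_z: torch.Tensor) -> torch.Tensor:
    # The learned mask (1 + delta_z) is ~1 almost everywhere; an entry with
    # delta_z_i < -1 makes the mask negative there, flipping the sign of that
    # latent unit -- the mechanism assumed to swap the label.
    return z * (1.0 + delta_z)
```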
For multiplicative transforms, the perturbations do not appear to perform as well as for the additive approach. This might be a reflection of the near-linear structure of the latent space learned by the autoencoder. An adversary applying an additive perturbation is able to target the near-linear structure, while an adversary applying a multiplicative perturbation implies much stronger assumptions on the structure of the latent space – which apparently do not hold for all variational autoencoders. | 1. What is the main contribution of the paper, and how does it differ from previous works?
2. How does the proposed scheme work, and in what situations might it be more effective than previous approaches?
3. Can you provide more details about the attack methods discussed in Section 3.1, particularly VAE and T?
4. Can you explain how the loss functions listed in Section 3.2 are combined and optimized in the proposed VAE?
5. Can you provide more experimental results to compare the performance of the proposed scheme with baseline methods, such as [Kos+17], and demonstrate the effectiveness of attacks on latent variables versus inputs? | Review | Review
The idea is clearly stated (but lacks some details) and I enjoyed reading the paper.
I understand the difference between [Kos+17] and the proposed scheme but I could not understand in which situation the proposed scheme works better. From the adversary's standpoint, it would be easier to manipulate inputs than latent variables. On the other hand, I agree that sample-independent perturbation is much more practical than sample-dependent perturbation.
In Section 3.1, the attack methods #2 and #3 should be detailed more. I could not imagine how the VAE and T are trained simultaneously.
In Section 3.2, the authors listed a couple of loss functions. How were these loss functions combined? The final optimization problem that is used for training the proposed VAE should be formally defined. Also, the specification of the VAE should be detailed.
From the figures in Figure 4 and Figure 5, I could see that the proposed scheme performs successfully in a qualitative manner; however, it is difficult to evaluate the proposed scheme quantitatively without comparisons with baselines. For example, can the proposed scheme be compared with [Kos+17] or some other sample-dependent attacks? Also, can you experimentally show that attacks on latent variables are more powerful than attacks on inputs?
ICLR | Title
LatentPoison -- Adversarial Attacks On The Latent Space
Abstract
Robustness and security of machine learning (ML) systems are intertwined, wherein a non-robust ML system (classifiers, regressors, etc.) can be subject to attacks using a wide variety of exploits. With the advent of scalable deep learning methodologies, a lot of emphasis has been put on the robustness of supervised, unsupervised and reinforcement learning algorithms. Here, we study the robustness of the latent space of a deep variational autoencoder (dVAE), an unsupervised generative framework, to show that it is indeed possible to perturb the latent space, flip the class predictions and keep the classification probability approximately equal before and after an attack. This means that an agent that looks at the outputs of a decoder would remain oblivious to an attack.
1 INTRODUCTION
The ability to encode data reliably is essential for many tasks including image compression, data retrieval and communication. As data is transmitted between communication channels, error detection and correction is often employed to deduce the presence of erroneous bits (Peterson & Weldon, 1972). The source of such errors can be a result of imperfection in the transmitter, channel or in the receiver. Often times, such errors can be deliberate where a man-in-middle attack (Desmedt, 2011; Conti et al., 2016) can result in deleterious erasure of information, yet to the receiver, it may end up as appearing untampered (Kos et al., 2017).
In deep learning, we are able to learn an encoding process using unsupervised learning such as in autoencoders (AE) (Kingma & Welling, 2013); however, we are less able to design methods for checking whether encodings have been tampered with. Therefore, there are two facets of this problem – the first, is to come up with methodologies of tampering with the models and second, is to detect the adversarial breach. In what follows, we will concentrate only on the first problem by presenting a method for tampering autoencoders. An autoencoder has two components: the encoder maps the input to a latent space, while the decoder maps the latent space to the requisite output. A vanilla autoencoder can, therefore, be used to compress the input to a lower dimensional latent (or feature) space. Other forms of autoencoder include the denoising AE (Vincent et al., 2010) that recovers an undistorted input from a partially corrupted input; the compressive AE (Theis et al., 2017) designed for image compression and the variational AE (Kingma & Welling, 2013) that assumes that the data is generated from a directed graphical model with the encoder operationalized to learn the posterior distribution of the latent space. Autoencoders have wide use in data analytics, computer vision, natural language processing, etc.
We propose an attack that targets the latent encodings of autoencoders, such that if an attack is successful the output of an autoencoder will have a different semantic meaning to the input. Formally, we consider an autoencoder consisting of an encoder and decoder model designed to reconstruct an input data sample such that the label information associated with the input data is maintained. For example, consider a dataset of images, x with the labels, y = {0, 1}, and an encoder, E : x ! z and a decoder, D : z ! x where z is a latent encoding for x. If the encoder and decoder are operating normally, the label of the reconstructed data sample, ˆ̂y = class(D(E(x))) should be the same as the label of the input data sample, where class(·) is the soft output of a binary classifier. In this paper, we focus on learning an attack transformation, T z, such that if z is the latent encoding for a data sample, x, with label 0, T z is the latent encoding for a data sample with label 1. The
attack is designed to flip the label of the original input and change its content. Note that the same T is applied to each encoding and is not specific to either the input data sample or the encoding, it is only dependent on the label of the input data sample.
The success of an attack may be measured in three ways:
1. The number of elements in the latent encoding, changed by the attack process should be small. If the encoding has a particular length, changing multiple elements may make the attack more detectable.
2. When a decoder is applied to tampered encodings, the decoded data samples should be indistinguishable from other decoded data samples that have not been tampered with.
3. Decoded tampered-encodings should be classified with opposite label to the original (untampered) data sample.
Our contribution lies in studying transforms with these properties. Experimentally, we find that optimizing for requirement (1) may implicitly encourage requirement (2). Crucially, in contrast to previous work (Goodfellow et al., 2014), our approach does not require knowledge of the model (here a VAE) parameters; we need access only to the encodings and the output of a classifier, making our approach more practical (Papernot et al., 2017). Finally, we owe the success of this attack method primarily to the near-linear structure of the VAE latent space (Kingma & Welling, 2013) – which our attack exploits.
2 COMPARISON TO PREVIOUS WORK
Security in deep learning algorithms is an emerging area of research. Much focus has gone into the construction of adversarial data examples, inputs that are modified such that they cause deep learning algorithms to fail. Previous work, designing adversarial images, has focused on perturbing input data samples such that a classifier miss classifies adversarial examples (Goodfellow et al., 2014). The perturbation is intended to be so small that a human cannot detect the difference between the original data samples, and its adversarial version. Goodfellow et al. (Goodfellow et al., 2014) propose adding a perturbation proportional to sign(r
x J(✓, x, y)) where J is the cost function used to train a classifier (that is being attacked), ✓ are the parameters of that classifier, and x and y the data and label pair, respectively. This type of attack requires the attacker to have high-level access to the classifiers’ parameters and cost function. An alternative approach that does not require the adversary to have access to the model parameters, is presented by Papernot et al. (Papernot et al., 2017) who propose a more practical approach, requiring only the classifier output and knowledge of the encoding size. Our adversary has similar, practical requirements.
Our approach, is thus tangential to the previous work on adversarial images for classification. We focus on a man-in-middle form of attack (Diffie & Hellman, 1976): rather than launching an attack on data samples, we launch an attack on an intermediate encoding such that a message being sent from a sender is different to the message received by a receiver. Similar to previous work, we do not want the attack on the encoding to be detectable, but in contrast to previous work (Goodfellow et al., 2014; Papernot et al., 2017), we wish for the message – in this example the images – to be detectably changed, while still being consistent with other non-tampered messages.
Our work is more similar to that of Kos et al. (Kos et al., 2017) – in the sense that they propose attacking variational autoencoders in a similar sender-receiver framework. Their goal is to perform an attack on inputs to an autoencoder such that output of the autoencoder belongs to a different class to the input. For example, an image of the digit 8 is encoded, but following an attack, the decoded image is of the digit 7 (Kos et al., 2017). While the overall goal is very similar, their approach is very different since they focus on perturbing images – while we perturb latent encodings. This difference is illustrated in Figure 1.
Finally, most previous work (Goodfellow et al., 2014; Papernot et al., 2017; Kos et al., 2017) requires the calculation of a different perturbation for each adversarial example. Rather, in our approach, we learn a single (additive) adversarial perturbation that may be applied to almost any encoding to launch a successful attack. This makes our approach more practical for larger scale attacks.
3 METHOD
In this section, we describe how we train a VAE and how we learn the adversarial transform that we apply to the latent encoding.
3.1 PROBLEM SETUP
Consider a dataset, D consisting of labeled binary examples, {x i , y i }N i=1, for yi 2 {0, 1}. To perform the mappings between data samples, x, and corresponding latent samples, z, we learn an encoding process, q (z|x), and a decoding process, p ✓
(x|z), which correspond to an encoding and decoding function E (·) and D ✓
(·) respectively. and ✓, parameterize the encoder and decoder, respectively. Our objective is to learn an adversarial transform, T̂ such that class(x) 6= class(T̂ x), where, T̂ , is constrained under an L
p norm. Here, class(·) is the soft output of a binary classifier. Rather than applying an adversarial transformation (Moosavi-Dezfooli et al., 2016), T̂ directly to the data, x, we propose performing the adversarial transform T on the latent representation, T z. We learn a transform, T with z = E (x) subject to class(D (T z)) 6= class(D (z))1.
We consider three methods of attack, and compare two approaches for regularizing T . The three attack methods that we consider are as follows:
1. An Independent attack: We consider an attack on a pre-trained variational autoencoder (VAE). T is learned for the pre-trained VAE.
2. A Poisoning attack: We consider an attack during VAE training (poisoning). T is learned at the same time as the VAE.
3. A Poisoning+Class attack: We consider an attack during VAE training, where the VAE is trained not only to reconstruct samples but to produce reconstructions that have low classification error. This, in turn, encourages the VAE to have a discriminative internal representation, possibly making it more vulnerable to attack. We learn T at the same time.
1Note than in the case where class labels are binary, this is equivalent to: learning a T such that class(D (T z)) = 1 class(D (z)).
ADDITIVE PERTURBATION (z + Z)
Here, we consider T z = z + z. There are several options for the form that z may take. In the first case, z may be a constant. We may learn a single transform to flip an image with label 0 to an image with label 1, and another for moving in the opposite direction. On the other hand, we may learn a single z and apply z to move in one direction and + z to move in the other. The advantage of using a constant z is that at the attack time the adversarial perturbation has already been pre-computed, making it easier to attack multiple times. There is a further advantage to using only a single z because the attacker need only learn a single vector to tamper with (almost) all of the encodings. Alternatively, z may be a function of any combination of variables x, y, z, however, this may require the attacker to learn an attack online – rather than having a precomputed attack that may be deployed easily. In this paper, we are interested in exploring the case where we learn a single, constant z.
We also consider a multiplicative perturbation. However, we reserve explanation of this for the Appendix (Section 7).
3.2 LOSS FUNCTIONS
Here, we consider the cost functions used to train a VAE and learn T . The VAE is trained to reconstruct an input, x, while also minimizing a Kullback-Leibler (KL)-divergence between a chosen prior distribution, p(z) and the distribution of encoded data samples. The parameters of the VAE are learned by minimizing, J
vae
= BCE(x, x̂) + ↵KL[q (z|x)||p(z)], where BCE is the binary cross-entropy and ↵ is the regularization parameter. A classifier may be learned by minimizing J
class = BCE(y, ŷ). An additional cost function for training the VAE may be the classification loss on reconstructed data samples, BCE(y, ˆ̂y). This is similar to an approach used by Chen et al. (Chen et al., 2016) to synthesize class specific data samples. Finally, to learn the attack transform, T we minimize, J
z = BCE((1 y), y̌) + L p (T ), for the case above (Section 3.1) we have L p (T ) = || z|| p
. This allows us to learn a transform on a latent encoding, that results in a label flip in the decoded image. Minimizing the L
p -norm for p = {1, 2}, encourages the transform to target a minimal number of units of z. Specifically, using p = 1 should encourage the perturbation vector to be sparse (Donoho, 2006). When z is sparse, this means that only a few elements of z may be changed. Such minimal perturbations reduce the likelihood that the attack is detected.
3.3 EVALUATION METHOD
The goal for the attacker is to tamper with the encoding such that the label of the decoded sample is flipped. For example, if the label was 1 initially, following a successful attack, the label should be 0. Rather than assigning binary labels to samples, our classifier outputs values between [0, 1] where 0 or 1 suggests that the classifier is highly certain that a data sample belongs to either class 0 or class 1, while a classifier output of 0.5 means that the classifier is unsure which class the sample belongs to. When an attack is successful, we expect a classifier to predict the class of the reconstructed image with high certainty. Further, for an attack to be undetectable, we would expect a classifier to predict the label of a reconstructed, un-tampered data sample with almost the same certainty as a tampered one. Formally, we may evaluate the quality of an attack by measuring |✏| such that 2:
class(x) = 1 class(T̂ x) + ✏ class(D
✓ (z)) = 1 class(D ✓
(T z)) + ✏
Based purely on the classification loss, in the case where ✏ = 0, the encodings that have been tampered with would be indistinguishable from those that had not. An attack may be considered undetectable if |✏| is small. Typically, |✏| may be related to the standard deviation in classification results.
To calculate epsilon we make two practical alterations. The first is that our classifier outputs values [0, 1], which do not necessarily correspond to probabilities, but may in some respect capture the confidence of a single classification. Using the output of the classifier, we compute confidence
2 We assume class(x) = class(x̂).
scores, where 0 corresponds to low confidence and 1 to high confidence. For a sample whose true label is 1, the confidence is taken to be the output of the classifier. For a sample whose true label is 0, the confidence is taken to be (1 class(·)), where class(·) is the output of the classifier. The second, is that if the classifier is more confident when classifying one class compared to the other, it does not make sense to compare class(x) to class(T̂ x). Rather, we compare:
class(x(y=1))) = class(T̂ x(y=0)) + ✏
class(D ✓ (z(y=1))) = class(D ✓ (T z(y=0))) + ✏ where xy=0 and xy=1 are a data samples with true labels 0 and 1 respectively. zy=0 and zy=1 are encodings of data samples xy=0 and xy=1, respectively.
We measure the performance of all attacks using the same classifier, so that we may compare attack types more easily. As a consequence, we are also able to show that the attack is partially agnostic to the classifier, provided that the classifier is trained to perform a similar task.
We discuss an additional probabilistic evaluation method in Section 6.4 of the Appendix.
4 EXPERIMENTS AND RESULTS
We compare 3 methods of attack using 2 different types of regularization on z – totaling 6 experiments. The three methods of attack are listed in Section 3 and the two types of regularization are the L1-norm and the L2-norm. We show qualitative results for only two examples in the main text and reserve the rest for the appendix. We provide a quantitative analysis in the form of confidence score (discussed in Section 3.3) for all 6 attack types.
4.1 DATASET
Experiments are performed on the CelebA dataset consisting of 200k colour images of faces, of which 100 are reserved for testing. The samples are of size 64⇥ 64, and we do not crop the images. Each image is assigned a binary label, 1 for smiling and 0 for not smiling.
4.2 USING (z + z) WITH L2 REGULARIZATION
In this section, we focus on adversaries that have been trained using L2 regularization. Figure 4 shows the results of an adversarial attack, where the adversary is learned for a pre-trained VAE, which was trained without label information. We expected this to be a more challenging form of attack since the VAE would not have been trained with any discriminative label information – making it less likely to learn features specifically for “smile” and “not smile”. Visual examples of decoded tampered and non-tampered encodings are shown in Figure 4. Figure 4(a) shows reconstructed images of people smiling, while (b) shows similar faces, but without smiles (attacked). Similarly, Figure 4(c) shows reconstructed images of people that are not smiling, while (d) shows similar faces smiling (attacked). In most cases, the success of the attack is obvious.
Quantitative results in Table 1 show several important results. In all cases, the decoded tamperedencodings are classified with high confidence. This is higher than the classifier on either the original image or the reconstructed ones. This suggests that the adversarial attack is successful as tampering with the encoding. By only evaluating the attacks by the confidence, it appears that all adversaries perform similarly well for all attack types. However, it is important to consider the difference between the confidence of reconstructed samples and the samples whose encoding was tampered with. Since the attacker aims to directly optimize the classification score, it is no surprise that affected samples have higher confidence score. It does, however, make the attack potentially more detectable. From this perspective, the more successful attacks would be those whose difference between confidence scores is small (see Section 3.3).
For this particular set of attacks, the most stealthy would be switching from “no smile” to “smile” attacking a VAE trained using label information. We may expect a VAE trained with label information to be a particularly good target as it is already trained to learn discriminative features. We also notice that it is easier for the attacker to move in the direction from “no smile” to “smile” than the reverse. The reason for this may be related to the slight bias in the classification results. However, this may also stem from the subjective labelling problem. Some of the faces in Figure (a) that belong to the “smile” class are not clearly smiling.
Both the qualitative results in Figure 4 and the quantitative results in Table 1 indicate successful attack strategies. Further, visual results are shown in the Appendix for the other attack methods, and images showing the pixel-wise difference between reconstructions and attacked samples are also shown (Figure 11) to highlight the effects of T .
4.3 USING (z + z) WITH L1 REGULARIZATION
In this section, we look at results for attacks using L1 regularization on the encoding. L1 regularization is intended to encourage sparsity in z, targeting only a few units of the encoding. In Figure 10 in the appendix, we show that L1 regularization does indeed lead to a more sparse z being learned.
In Figure 5, we show visual results of an adversarial attack, with the original reconstructions on the left and the reconstructions for tampered encodings on the right. We show examples of all 3 types of attack, with L1 regularization in the appendix. The attack appears to be successful in all cases. We visualize the pixel-wise change between reconstructions of encodings and tampered encodings in Figure 11 of the appendix. Note that our results are not “cherry picked”, but simply chosen randomly.
Table 2 shows confidence values for each type of attack when using L1 regularization on z. In all cases, the confidence values for the samples which were attacked is higher than both reconstructed samples and original data samples. This is likely to be because the adversary is picking a perturbation that directly optimises the classification score. It is, however, important to remember that the classifier used to evaluate the attack is the same for all attacks and not the same one used for training the adversary.
As before, if there is a clear difference in confidence score between the reconstructed data samples and the decoded tampered-encodings, it will be obvious that an attack has taken place. If we consider the difference between these scores, then the most stealthy attacks are those learning the z at the
same time as learning the VAE to switch between “no smile” and “smile”. Similarly, with the results obtained with L2 regularization on z, the more successful attack – in terms of stealth – is to go from “no smile” to “smile” for all attack types.
5 DISCUSSION AND CONCLUSION
In this paper, we propose the idea of latent poisoning – an efficient methodology for an adversarial attack i.e., by structured modification of the latent space of a variational autoencoder. Both additive and multiplicative perturbation, with sparse and dense structure, show that it is indeed possible to flip the predictive class with minimum changes to the latent code.
Our experiments show that additive perturbations are easier to operationalize than the multiplicative transformation of the latent space. It is likely that additive perturbations have reasonable performance because of the near-linear structure of the latent space. It has been shown that given two images and their corresponding points in latent space, it is possible to linearly interpolate between samples in latent space to synthesize intermediate images that transit smoothly between the two initial images (Kingma & Welling, 2013; Radford et al., 2015). If the two images were drawn from each of the binary classes, and a smooth interpolation existed between them, this would mean that additive perturbation in the latent space, along this vector, would allow movement of samples from one class to the other.
How can we counter such a poisoning of the latent space? It might be helpful to look into the predictive probability and its uncertainty on outputs from an autoencoder. If the uncertainty is above a threshold value, an attack may be detected. Detection via predictive probability and its uncertainty, as well as alternative methods, such as inspection of the latent encoding, become even more difficult when the attacker has altered the latent distribution minimally (under a norm).
Given the prevalence of machine learning algorithms, the robustness of such algorithms is increasingly becoming important (McDaniel et al., 2016; Abadi et al., 2017), possibly at par with reporting test error of such systems.
6 APPENDIX
6.1 SAMPLES WITH AND WITH OUT LABEL SWITCH
In the main body of the text, we showed received images for the case where an attack has taken place for two types of attack. In this section, we show the remaining examples.
6.2 COMPARE USING | z|1 WITH | z|2
In this section, we compose Tables of values and figures to compare the 3 different attacks for the 2 different regularization methods.
6.3 ENTROPY OF PERTURBATION
We expect that using L1 regularization will give more sparse perturbations, z than using L2 regularization. In Figure 10, we show the effect of the regularization term for each attack type: (1) learning a z for a pre-trained VAE, (2) learning a z while training a VAE and (3) learning a z while training a VAE and using class information to train the VAE. It is clear from Figure 10 that using L1 regularization does indeed result in a more sparse z.
6.4 CAN WE USE KNOWLEDGE OF THE PRIOR TO DETECT AN ADVERSARIAL ATTACK?
Figure 10 provides information about the magnitude of the adversarial perturbations. Here, we consider how knowledge of the magnitude of the perturbations, may allow us to understand the probability of an attack being detected. We consider an approach to individually test each element of a latent encoding to see if we can determine whether an attack has taken place. We refer to a single element of the perturbation z, as z and consider whether we can detect perturbation to a single element in isolation from the other elements in the encoding.
In a variational autoencoder, the distribution of encoded data samples is trained to belong to a chosen prior distribution – in this case a Gaussian. Assuming that the autoencoder is trained well, we may say that the distribution of encoded data samples is Gaussian. Further, we assume that each element in the encoding is drawn independently from the Gaussian distribution. From this, we know that c.99.5% each individual encoding value lies between 2.807 and 2.807 where sigma is the
standard deviation of the Gaussian distribution. This means that approximately 1/200 3 elements lie outside this interval. In our case = 1.
Any addition to samples from Gaussian distribution results in a shift of the distribution. For an adversarial attack involving the additive perturbation of z on a single unit of z, we may calculate the probability that a single element in a tampered encoding lies outside the range [ 2.807, 2.807]. The formula for this is given by:
P99.5%( z) = 1 1
2
1 + erf ✓ 2.807 zp
2
◆ + 1
2
1 + erf ✓ 2.807 zp
2
◆
where erf(·) is the error function. Note that P99.5%(1) = 0.04, P99.5%(2) = 0.2 and P99.5%(5) = 0.98.
We may use this to evaluate our attack processes and may also be used to further regularize our models to ensure that the probability of being detected is less than a chosen threshold. Looking at Figure 10 we can see that only attacks in (a) and (b) using L2 regularization are likely to be undetectable according to the criteria above, assuming that the encoded data samples follow a Gaussian distribution.
6.5 THE EPSILON GAP
Here, we compare the ✏-gap (described in Section 3.3) for each type of attack, using each type of regularization. We expected that using L1 regularization would encourage minimal change to the encoding needed to make a switch between labels. Therefore we might expect this to influence the epsilon value. However, for a sparse z to have the desired properties we also require the structure of the latent space to be sparse. Since we did not enforce any sparsity constraint on the latent encoding when training the VAE, sparsity on the latent samples is not guaranteed. Therefore,
3our latent encoding is of size 200, however the choice of a 99.5% is fairly arbitrary and may be chosen more precisely depending on application.
although it is useful to learn sparse encodings to facilitate the speed of the attack (minimal number of changes to the encoding), it does not clearly affect the overall quality of the attack.
Table 3: Epsilon gap values
Samples z + z p=1 p=2 p=1 p=2
Learn z & Independent 0.07 0.19 0.09 0.10 Learn z & Poisoning jointly 0.20 0.10 0.00 0.09 Learn z & Poisoning+Class 0.18 0.11 0.07 0.00
6.6 THE EFFECT OF z ON x
In Figure 11 we show the difference between the reconstructed data samples and decoded tamperedencodings. These images highlight the effect of the adversarial perturbation – applied to the latent space – in the data space.
7 IMPLEMENTATION DETAILS
For both the encoder, decoder and classifier we use an architecture similar to that used by Radford et al. Radford et al. (2015). We weight the KL-divergence in the VAE loss by ↵ = 0.1 and we train the model using Adam with a learning rate of 2e 4 however training was not sensitive to this parameter – training with a learning rate of 1e 3 also worked. Our code both for training (with all parameter values) and evaluation will be made available after the review process via Github.
MULTIPLICATIVE PERTURBATION z · (1 + z)
To formulate a multiplicative perturbation, we require that the element(s) that encode smile or no smile have different signs for each class. We may then learn a multiplicative mask, where most of the values are ones, and one or a few values are negative. The values may not be positive. If the values are positive then signs in the encoding cannot be switched and no label swap may take place. In this formulation, we cannot guarantee that the encoding will take the desired form. From preliminary experiments, we see that faces classified as “smiling” often appear to be smiling more intensely after the transform. This is likely to be because the autoencoder considered the image to be a person not smiling in the first place.
In our formulation, we use a single Δz to which we apply L_p regularization. The transform is then z(1 + Δz). Note that it does not make sense to have a formulation for each direction, i.e. z(1 − Δz) for the other direction; if the encodings of opposite samples have opposite signs, a negative Δz is sufficient to provide a transform in both directions.
For multiplicative transforms, the perturbations do not appear to perform as well as for the additive approach. This might be a reflection of the near-linear structure of the latent space learned by the autoencoder. An adversary applying an additive perturbation is able to target this near-linear structure, while an adversary applying a multiplicative perturbation makes much stronger assumptions about the structure of the latent space, which apparently do not hold for all variational autoencoders. | 1. What is the main purpose of using VAEs or GANs according to the reviewer?
2. What are the limitations of using VAEs for compression, according to the reviewer?
3. What is the proposed attack in the paper, and how does it relate to the use of VAEs?
4. How does the reviewer suggest modifying the approach to make the attack less feasible?
5. What are the challenges with implementing the other two attacks mentioned in the review? | Review | Review
This paper misses the point of what VAEs (or GANs, in general) are used for. The idea of using VAEs is not to encode and decode images (or, in general, any input), but to recover the generating process that created those images so that we have an unlimited source of samples. The use of these techniques for compression is still unclear, and their quality today is too low. So the attack that the authors are proposing does not make sense, and my take is that we should see significant changes before it can make sense.
But let’s assume that at some point they can be used as the authors propose, in which one person encodes an image and sends the latent variable to a friend, but a foe intercepts it on the way and tampers with it so that the receiver recovers the wrong image without knowing. Now, if the sender believes the sample can be tampered with, would encoding z with the sender's private key not make the attack useless? I think this would make the first attack useless.
The other two attacks require that the foe is inserted in the middle of the training of the VAE. This is even less doable, because the encoder and decoder are not trained remotely. They are trained on the same machine or cluster, in a controlled manner, by the person who will use the system. Once the system is trained, that person can give away the decoder and keep the encoder for sending information. |
ICLR | Title
LatentPoison -- Adversarial Attacks On The Latent Space
Abstract
Robustness and security of machine learning (ML) systems are intertwined, wherein a non-robust ML system (classifiers, regressors, etc.) can be subject to attacks using a wide variety of exploits. With the advent of scalable deep learning methodologies, a lot of emphasis has been put on the robustness of supervised, unsupervised and reinforcement learning algorithms. Here, we study the robustness of the latent space of a deep variational autoencoder (dVAE), an unsupervised generative framework, to show that it is indeed possible to perturb the latent space, flip the class predictions and keep the classification probability approximately equal before and after an attack. This means that an agent that looks at the outputs of a decoder would remain oblivious to an attack.
1 INTRODUCTION
The ability to encode data reliably is essential for many tasks including image compression, data retrieval and communication. As data is transmitted between communication channels, error detection and correction is often employed to deduce the presence of erroneous bits (Peterson & Weldon, 1972). The source of such errors can be a result of imperfection in the transmitter, channel or receiver. Oftentimes, such errors can be deliberate, where a man-in-the-middle attack (Desmedt, 2011; Conti et al., 2016) can result in deleterious erasure of information, yet to the receiver the data may appear untampered (Kos et al., 2017).
In deep learning, we are able to learn an encoding process using unsupervised learning such as in autoencoders (AE) (Kingma & Welling, 2013); however, we are less able to design methods for checking whether encodings have been tampered with. Therefore, there are two facets of this problem – the first, is to come up with methodologies of tampering with the models and second, is to detect the adversarial breach. In what follows, we will concentrate only on the first problem by presenting a method for tampering autoencoders. An autoencoder has two components: the encoder maps the input to a latent space, while the decoder maps the latent space to the requisite output. A vanilla autoencoder can, therefore, be used to compress the input to a lower dimensional latent (or feature) space. Other forms of autoencoder include the denoising AE (Vincent et al., 2010) that recovers an undistorted input from a partially corrupted input; the compressive AE (Theis et al., 2017) designed for image compression and the variational AE (Kingma & Welling, 2013) that assumes that the data is generated from a directed graphical model with the encoder operationalized to learn the posterior distribution of the latent space. Autoencoders have wide use in data analytics, computer vision, natural language processing, etc.
We propose an attack that targets the latent encodings of autoencoders, such that if an attack is successful the output of an autoencoder will have a different semantic meaning to the input. Formally, we consider an autoencoder consisting of an encoder and decoder model designed to reconstruct an input data sample such that the label information associated with the input data is maintained. For example, consider a dataset of images, x, with labels y = {0, 1}, an encoder E : x → z and a decoder D : z → x, where z is a latent encoding for x. If the encoder and decoder are operating normally, the label of the reconstructed data sample, ŷ̂ = class(D(E(x))), should be the same as the label of the input data sample, where class(·) is the soft output of a binary classifier. In this paper, we focus on learning an attack transformation, T_Δ, such that if z is the latent encoding for a data sample x with label 0, then T_Δ z is the latent encoding for a data sample with label 1. The
attack is designed to flip the label of the original input and change its content. Note that the same T_Δ is applied to each encoding and is not specific to either the input data sample or the encoding; it depends only on the label of the input data sample.
The success of an attack may be measured in three ways:
1. The number of elements in the latent encoding, changed by the attack process should be small. If the encoding has a particular length, changing multiple elements may make the attack more detectable.
2. When a decoder is applied to tampered encodings, the decoded data samples should be indistinguishable from other decoded data samples that have not been tampered with.
3. Decoded tampered-encodings should be classified with opposite label to the original (untampered) data sample.
Our contribution lies in studying transforms with these properties. Experimentally, we find that optimizing for requirement (1) may implicitly encourage requirement (2). Crucially, in contrast to previous work (Goodfellow et al., 2014), our approach does not require knowledge of the model (here a VAE) parameters; we need access only to the encodings and the output of a classifier, making our approach more practical (Papernot et al., 2017). Finally, we owe the success of this attack method primarily to the near-linear structure of the VAE latent space (Kingma & Welling, 2013) – which our attack exploits.
2 COMPARISON TO PREVIOUS WORK
Security in deep learning algorithms is an emerging area of research. Much focus has gone into the construction of adversarial data examples, inputs that are modified such that they cause deep learning algorithms to fail. Previous work on designing adversarial images has focused on perturbing input data samples such that a classifier misclassifies the adversarial examples (Goodfellow et al., 2014). The perturbation is intended to be so small that a human cannot detect the difference between the original data sample and its adversarial version. Goodfellow et al. (2014) propose adding a perturbation proportional to sign(∇_x J(θ, x, y)), where J is the cost function used to train the classifier that is being attacked, θ are the parameters of that classifier, and x and y are the data and label pair, respectively. This type of attack requires the attacker to have high-level access to the classifier's parameters and cost function. An alternative approach that does not require the adversary to have access to the model parameters is presented by Papernot et al. (2017), who propose a more practical approach requiring only the classifier output and knowledge of the encoding size. Our adversary has similar, practical requirements.
Our approach is thus tangential to the previous work on adversarial images for classification. We focus on a man-in-the-middle form of attack (Diffie & Hellman, 1976): rather than launching an attack on data samples, we launch an attack on an intermediate encoding such that the message sent by a sender is different from the message received by a receiver. Similar to previous work, we do not want the attack on the encoding to be detectable; but in contrast to previous work (Goodfellow et al., 2014; Papernot et al., 2017), we wish for the message, in this example the images, to be detectably changed, while still being consistent with other non-tampered messages.
Our work is more similar to that of Kos et al. (Kos et al., 2017) – in the sense that they propose attacking variational autoencoders in a similar sender-receiver framework. Their goal is to perform an attack on inputs to an autoencoder such that output of the autoencoder belongs to a different class to the input. For example, an image of the digit 8 is encoded, but following an attack, the decoded image is of the digit 7 (Kos et al., 2017). While the overall goal is very similar, their approach is very different since they focus on perturbing images – while we perturb latent encodings. This difference is illustrated in Figure 1.
Finally, most previous work (Goodfellow et al., 2014; Papernot et al., 2017; Kos et al., 2017) requires the calculation of a different perturbation for each adversarial example. Rather, in our approach, we learn a single (additive) adversarial perturbation that may be applied to almost any encoding to launch a successful attack. This makes our approach more practical for larger scale attacks.
3 METHOD
In this section, we describe how we train a VAE and how we learn the adversarial transform that we apply to the latent encoding.
3.1 PROBLEM SETUP
Consider a dataset D consisting of labeled binary examples {x_i, y_i}_{i=1}^{N}, with y_i ∈ {0, 1}. To perform the mappings between data samples x and corresponding latent samples z, we learn an encoding process, q_φ(z|x), and a decoding process, p_θ(x|z), which correspond to encoding and decoding functions E_φ(·) and D_θ(·), respectively; φ and θ parameterize the encoder and decoder, respectively. Our objective is to learn an adversarial transform T̂ such that class(x) ≠ class(T̂ x), where T̂ is constrained under an L_p norm. Here, class(·) is the soft output of a binary classifier. Rather than applying an adversarial transformation T̂ directly to the data x (Moosavi-Dezfooli et al., 2016), we propose performing the adversarial transform T_Δ on the latent representation, T_Δ z. We learn a transform T_Δ, with z = E_φ(x), subject to class(D_θ(T_Δ z)) ≠ class(D_θ(z)).¹
We consider three methods of attack, and compare two approaches for regularizing T_Δ. The three attack methods that we consider are as follows:

1. An Independent attack: we consider an attack on a pre-trained variational autoencoder (VAE). T_Δ is learned for the pre-trained VAE.

2. A Poisoning attack: we consider an attack during VAE training (poisoning). T_Δ is learned at the same time as the VAE.

3. A Poisoning+Class attack: we consider an attack during VAE training, where the VAE is trained not only to reconstruct samples but to produce reconstructions that have low classification error. This, in turn, encourages the VAE to have a discriminative internal representation, possibly making it more vulnerable to attack. We learn T_Δ at the same time.
¹Note that in the case where class labels are binary, this is equivalent to learning a T_Δ such that class(D_θ(T_Δ z)) = 1 − class(D_θ(z)).
ADDITIVE PERTURBATION (z + Δz)
Here, we consider T_Δ z = z + Δz. There are several options for the form that Δz may take. In the first case, Δz may be a constant. We may learn a single transform to flip an image with label 0 to an image with label 1, and another for moving in the opposite direction. On the other hand, we may learn a single Δz and apply +Δz to move in one direction and −Δz to move in the other. The advantage of using a constant Δz is that at attack time the adversarial perturbation has already been pre-computed, making it easier to attack multiple times. There is a further advantage to using only a single Δz, because the attacker need only learn a single vector to tamper with (almost) all of the encodings. Alternatively, Δz may be a function of any combination of the variables x, y and z; however, this may require the attacker to learn an attack online, rather than having a precomputed attack that may be deployed easily. In this paper, we are interested in exploring the case where we learn a single, constant Δz.
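At attack time, applying a single pre-computed Δz is then trivial; a minimal PyTorch sketch with illustrative names (not the authors' code):

```python
import torch

def attack(z, y, delta_z):
    """Additive latent attack using one learned delta_z: add +delta_z to
    label-0 encodings and -delta_z to label-1 encodings."""
    sign = (1.0 - 2.0 * y.float()).unsqueeze(-1)  # y=0 -> +1, y=1 -> -1
    return z + sign * delta_z
```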
We also consider a multiplicative perturbation. However, we reserve explanation of this for the Appendix (Section 7).
3.2 LOSS FUNCTIONS
Here, we consider the cost functions used to train a VAE and learn T_Δ. The VAE is trained to reconstruct an input x, while also minimizing a Kullback-Leibler (KL) divergence between a chosen prior distribution p(z) and the distribution of encoded data samples. The parameters of the VAE are learned by minimizing

J_vae = BCE(x, x̂) + α · KL[q_φ(z|x) ∥ p(z)],

where BCE is the binary cross-entropy and α is the regularization parameter. A classifier may be learned by minimizing J_class = BCE(y, ŷ). An additional cost function for training the VAE may be the classification loss on reconstructed data samples, BCE(y, ŷ̂). This is similar to an approach used by Chen et al. (2016) to synthesize class-specific data samples. Finally, to learn the attack transform T_Δ, we minimize

J_Δz = BCE((1 − y), y̌) + L_p(T_Δ),

where for the case above (Section 3.1) we have L_p(T_Δ) = ‖Δz‖_p. This allows us to learn a transform on a latent encoding that results in a label flip in the decoded image. Minimizing the L_p-norm for p ∈ {1, 2} encourages the transform to target a minimal number of units of z. Specifically, using p = 1 should encourage the perturbation vector to be sparse (Donoho, 2006). When Δz is sparse, only a few elements of z may be changed. Such minimal perturbations reduce the likelihood that the attack is detected.
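To make the optimization concrete, below is a minimal PyTorch sketch of the independent attack (type 1): the decoder, classifier, encoding_loader and latent_dim names are assumptions for illustration, not the authors' code, and both pre-trained modules are taken to be frozen:

```python
import torch
import torch.nn.functional as F

delta_z = torch.zeros(latent_dim, requires_grad=True)  # the single learned perturbation
opt = torch.optim.Adam([delta_z], lr=1e-3)
lam, p = 1.0, 1  # L_p regularization weight and norm, p in {1, 2}

for z, y in encoding_loader:                       # encodings and their true labels
    sign = (1.0 - 2.0 * y.float()).unsqueeze(-1)   # +delta_z for y=0, -delta_z for y=1
    y_check = classifier(decoder(z + sign * delta_z)).squeeze()
    # J_dz = BCE(1 - y, y_check) + lam * ||delta_z||_p
    loss = F.binary_cross_entropy(y_check, 1.0 - y.float()) + lam * delta_z.norm(p=p)
    opt.zero_grad()
    loss.backward()
    opt.step()
```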
3.3 EVALUATION METHOD
The goal of the attacker is to tamper with the encoding such that the label of the decoded sample is flipped. For example, if the label was 1 initially, following a successful attack the label should be 0. Rather than assigning binary labels to samples, our classifier outputs values in [0, 1], where 0 or 1 suggests that the classifier is highly certain that a data sample belongs to class 0 or class 1, while a classifier output of 0.5 means that the classifier is unsure which class the sample belongs to. When an attack is successful, we expect a classifier to predict the class of the reconstructed image with high certainty. Further, for an attack to be undetectable, we would expect a classifier to predict the label of a reconstructed, un-tampered data sample with almost the same certainty as a tampered one. Formally, we may evaluate the quality of an attack by measuring |ε| such that²:

class(x) = 1 − class(T̂ x) + ε,
class(D_θ(z)) = 1 − class(D_θ(T_Δ z)) + ε.

Based purely on the classification loss, in the case where ε = 0 the encodings that have been tampered with would be indistinguishable from those that had not. An attack may be considered undetectable if |ε| is small. Typically, |ε| may be related to the standard deviation in classification results.
To calculate epsilon we make two practical alterations. The first is that our classifier outputs values in [0, 1], which do not necessarily correspond to probabilities, but may in some respect capture the confidence of a single classification. Using the output of the classifier, we compute confidence scores, where 0 corresponds to low confidence and 1 to high confidence. For a sample whose true label is 1, the confidence is taken to be the output of the classifier. For a sample whose true label is 0, the confidence is taken to be (1 − class(·)), where class(·) is the output of the classifier. The second is that if the classifier is more confident when classifying one class compared to the other, it does not make sense to compare class(x) to class(T̂ x). Rather, we compare:

class(x^{(y=1)}) = class(T̂ x^{(y=0)}) + ε,
class(D_θ(z^{(y=1)})) = class(D_θ(T_Δ z^{(y=0)})) + ε,

where x^{(y=0)} and x^{(y=1)} are data samples with true labels 0 and 1, respectively, and z^{(y=0)} and z^{(y=1)} are the encodings of data samples x^{(y=0)} and x^{(y=1)}, respectively.

² We assume class(x) = class(x̂).
We measure the performance of all attacks using the same classifier, so that we may compare attack types more easily. As a consequence, we are also able to show that the attack is partially agnostic to the classifier, provided that the classifier is trained to perform a similar task.
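A small numpy sketch of the confidence score and ε computation described above; the array names are illustrative:

```python
import numpy as np

def confidence(class_out, y_true):
    """Map soft classifier outputs in [0, 1] to confidence scores in [0, 1]."""
    return np.where(y_true == 1, class_out, 1.0 - class_out)

# class_recon_y1:    classifier outputs on reconstructions of true label-1 samples
# class_attacked_y0: classifier outputs on decoded tampered encodings of label-0 samples
# epsilon as in the second comparison above: class(x^(y=1)) = class(T x^(y=0)) + eps
epsilon = np.mean(class_recon_y1) - np.mean(class_attacked_y0)
```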
We discuss an additional probabilistic evaluation method in Section 6.4 of the Appendix.
4 EXPERIMENTS AND RESULTS
We compare 3 methods of attack using 2 different types of regularization on Δz, totaling 6 experiments. The three methods of attack are listed in Section 3, and the two types of regularization are the L1-norm and the L2-norm. We show qualitative results for only two examples in the main text and reserve the rest for the appendix. We provide a quantitative analysis in the form of confidence scores (discussed in Section 3.3) for all 6 attack types.
4.1 DATASET
Experiments are performed on the CelebA dataset consisting of 200k colour images of faces, of which 100 are reserved for testing. The samples are of size 64 × 64, and we do not crop the images. Each image is assigned a binary label: 1 for smiling and 0 for not smiling.
4.2 USING (z + Δz) WITH L2 REGULARIZATION
In this section, we focus on adversaries that have been trained using L2 regularization. Figure 4 shows the results of an adversarial attack, where the adversary is learned for a pre-trained VAE, which was trained without label information. We expected this to be a more challenging form of attack since the VAE would not have been trained with any discriminative label information – making it less likely to learn features specifically for “smile” and “not smile”. Visual examples of decoded tampered and non-tampered encodings are shown in Figure 4. Figure 4(a) shows reconstructed images of people smiling, while (b) shows similar faces, but without smiles (attacked). Similarly, Figure 4(c) shows reconstructed images of people that are not smiling, while (d) shows similar faces smiling (attacked). In most cases, the success of the attack is obvious.
Quantitative results in Table 1 show several important results. In all cases, the decoded tampered encodings are classified with high confidence. This is higher than the classifier's confidence on either the original images or the reconstructed ones. This suggests that the adversarial attack is successful at tampering with the encoding. Evaluating the attacks by confidence alone, it appears that all adversaries perform similarly well for all attack types. However, it is important to consider the difference between the confidence on reconstructed samples and on samples whose encoding was tampered with. Since the attacker aims to directly optimize the classification score, it is no surprise that affected samples have a higher confidence score. It does, however, make the attack potentially more detectable. From this perspective, the more successful attacks would be those whose difference between confidence scores is small (see Section 3.3).
For this particular set of attacks, the most stealthy would be switching from “no smile” to “smile” attacking a VAE trained using label information. We may expect a VAE trained with label information to be a particularly good target as it is already trained to learn discriminative features. We also notice that it is easier for the attacker to move in the direction from “no smile” to “smile” than the reverse. The reason for this may be related to the slight bias in the classification results. However, this may also stem from the subjective labelling problem. Some of the faces in Figure (a) that belong to the “smile” class are not clearly smiling.
Both the qualitative results in Figure 4 and the quantitative results in Table 1 indicate successful attack strategies. Further, visual results are shown in the Appendix for the other attack methods, and images showing the pixel-wise difference between reconstructions and attacked samples are also shown (Figure 11) to highlight the effects of T .
4.3 USING (z + Δz) WITH L1 REGULARIZATION
In this section, we look at results for attacks using L1 regularization on the perturbation. L1 regularization is intended to encourage sparsity in Δz, targeting only a few units of the encoding. In Figure 10 in the appendix, we show that L1 regularization does indeed lead to a more sparse Δz being learned.
In Figure 5, we show visual results of an adversarial attack, with the original reconstructions on the left and the reconstructions for tampered encodings on the right. We show examples of all 3 types of attack, with L1 regularization in the appendix. The attack appears to be successful in all cases. We visualize the pixel-wise change between reconstructions of encodings and tampered encodings in Figure 11 of the appendix. Note that our results are not “cherry picked”, but simply chosen randomly.
Table 2 shows confidence values for each type of attack when using L1 regularization on Δz. In all cases, the confidence values for the samples that were attacked are higher than for both reconstructed samples and original data samples. This is likely because the adversary is picking a perturbation that directly optimizes the classification score. It is, however, important to remember that the classifier used to evaluate the attack is the same for all attacks and not the same one used for training the adversary.
As before, if there is a clear difference in confidence score between the reconstructed data samples and the decoded tampered encodings, it will be obvious that an attack has taken place. If we consider the difference between these scores, then the most stealthy attacks are those learning Δz at the same time as the VAE to switch between “no smile” and “smile”. Similarly to the results obtained with L2 regularization on Δz, the more successful attack, in terms of stealth, is to go from “no smile” to “smile” for all attack types.
5 DISCUSSION AND CONCLUSION
In this paper, we propose the idea of latent poisoning: an efficient methodology for an adversarial attack via structured modification of the latent space of a variational autoencoder. Both additive and multiplicative perturbations, with sparse and dense structure, show that it is indeed possible to flip the predicted class with minimal changes to the latent code.
Our experiments show that additive perturbations are easier to operationalize than the multiplicative transformation of the latent space. It is likely that additive perturbations have reasonable performance because of the near-linear structure of the latent space. It has been shown that given two images and their corresponding points in latent space, it is possible to linearly interpolate between samples in latent space to synthesize intermediate images that transit smoothly between the two initial images (Kingma & Welling, 2013; Radford et al., 2015). If the two images were drawn from each of the binary classes, and a smooth interpolation existed between them, this would mean that additive perturbation in the latent space, along this vector, would allow movement of samples from one class to the other.
How can we counter such a poisoning of the latent space? It might be helpful to look into the predictive probability and its uncertainty on outputs from an autoencoder. If the uncertainty is above a threshold value, an attack may be detected. Detection via predictive probability and its uncertainty, as well as alternative methods, such as inspection of the latent encoding, become even more difficult when the attacker has altered the latent distribution minimally (under a norm).
Given the prevalence of machine learning algorithms, the robustness of such algorithms is increasingly becoming important (McDaniel et al., 2016; Abadi et al., 2017), possibly at par with reporting test error of such systems.
6 APPENDIX
6.1 SAMPLES WITH AND WITHOUT LABEL SWITCH
In the main body of the text, we showed received images for the case where an attack has taken place for two types of attack. In this section, we show the remaining examples.
6.2 COMPARING ‖Δz‖₁ WITH ‖Δz‖₂

In this section, we present tables of values and figures to compare the 3 different attacks for the 2 different regularization methods.
6.3 ENTROPY OF PERTURBATION
We expect that using L1 regularization will give more sparse perturbations Δz than using L2 regularization. In Figure 10, we show the effect of the regularization term for each attack type: (1) learning Δz for a pre-trained VAE, (2) learning Δz while training a VAE, and (3) learning Δz while training a VAE and using class information to train the VAE. It is clear from Figure 10 that using L1 regularization does indeed result in a more sparse Δz.
6.4 CAN WE USE KNOWLEDGE OF THE PRIOR TO DETECT AN ADVERSARIAL ATTACK?
Figure 10 provides information about the magnitude of the adversarial perturbations. Here, we consider how knowledge of the magnitude of the perturbations may allow us to understand the probability of an attack being detected. We consider an approach that individually tests each element of a latent encoding to see if we can determine whether an attack has taken place. We refer to a single element of the perturbation vector Δz simply as Δz in what follows, and consider whether we can detect a perturbation to a single element in isolation from the other elements in the encoding.
In a variational autoencoder, the distribution of encoded data samples is trained to match a chosen prior distribution, in this case a Gaussian. Assuming that the autoencoder is trained well, we may say that the distribution of encoded data samples is Gaussian. Further, we assume that each element in the encoding is drawn independently from the Gaussian distribution. From this, we know that c. 99.5% of individual encoding values lie between −2.807σ and 2.807σ, where σ is the standard deviation of the Gaussian distribution. This means that approximately 1/200 of the elements lie outside this interval³. In our case σ = 1.
Any addition to samples from a Gaussian distribution results in a shift of the distribution. For an adversarial attack involving an additive perturbation Δz on a single unit of z, we may calculate the probability that a single element in a tampered encoding lies outside the range [−2.807, 2.807]. The formula for this is given by:
P_{99.5%}(Δz) = 1 − (1/2)[1 + erf((2.807 − Δz)/√2)] + (1/2)[1 + erf((−2.807 − Δz)/√2)],

where erf(·) is the error function. Note that P_{99.5%}(1) ≈ 0.04, P_{99.5%}(2) ≈ 0.2 and P_{99.5%}(5) ≈ 0.98.
We may use this to evaluate our attack processes; it may also be used to further regularize our models to ensure that the probability of being detected is less than a chosen threshold. Looking at Figure 10, we can see that only the attacks in (a) and (b) using L2 regularization are likely to be undetectable according to the criteria above, assuming that the encoded data samples follow a Gaussian distribution.
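As a sanity check, the probability above is straightforward to evaluate numerically; the following minimal Python sketch (assuming numpy and scipy are available) reproduces the quoted values for a unit-variance Gaussian prior, up to rounding:

```python
import numpy as np
from scipy.special import erf

def p_detect(dz, c=2.807):
    """Probability that a standard-normal encoding element, shifted by dz,
    falls outside the 99.5% interval [-c, c]."""
    p_below_upper = 0.5 * (1.0 + erf((c - dz) / np.sqrt(2.0)))   # P(z + dz <= c)
    p_below_lower = 0.5 * (1.0 + erf((-c - dz) / np.sqrt(2.0)))  # P(z + dz <= -c)
    return 1.0 - p_below_upper + p_below_lower

for dz in (1.0, 2.0, 5.0):
    print(f"P(detect | dz={dz}) = {p_detect(dz):.2f}")
# prints approximately 0.04, 0.21 and 0.99
```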
6.5 THE EPSILON GAP
Here, we compare the ε-gap (described in Section 3.3) for each type of attack, using each type of regularization. We expected that using L1 regularization would encourage a minimal change to the encoding needed to make a switch between labels, and might therefore expect this to influence the epsilon value. However, for a sparse Δz to have the desired properties, we also require the structure of the latent space to be sparse. Since we did not enforce any sparsity constraint on the latent encoding when training the VAE, sparsity of the latent samples is not guaranteed. Therefore,
³Our latent encoding is of size 200; however, the choice of 99.5% is fairly arbitrary and may be chosen more precisely depending on the application.
although it is useful to learn sparse encodings to facilitate the speed of the attack (a minimal number of changes to the encoding), it does not clearly affect the overall quality of the attack.
Table 3: Epsilon gap values

                                    Samples           z + Δz
                                  p=1     p=2       p=1     p=2
  Learn Δz & Independent          0.07    0.19      0.09    0.10
  Learn Δz & Poisoning jointly    0.20    0.10      0.00    0.09
  Learn Δz & Poisoning+Class      0.18    0.11      0.07    0.00
6.6 THE EFFECT OF Δz ON x
In Figure 11 we show the difference between the reconstructed data samples and the decoded tampered encodings. These images highlight the effect, in the data space, of the adversarial perturbation applied to the latent space.
7 IMPLEMENTATION DETAILS
For the encoder, decoder and classifier we use an architecture similar to that of Radford et al. (2015). We weight the KL-divergence in the VAE loss by α = 0.1, and we train the model using Adam with a learning rate of 2e-4; training was not sensitive to this parameter, and a learning rate of 1e-3 also worked. Our code, both for training (with all parameter values) and evaluation, will be made available after the review process via Github.
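For concreteness, a minimal PyTorch sketch of the weighted VAE objective described above; the encoder/decoder modules and the training loop are assumed to be defined elsewhere:

```python
import torch
import torch.nn.functional as F

ALPHA = 0.1  # weight on the KL term, as above

def vae_loss(x, x_hat, mu, log_var):
    # Reconstruction term: binary cross-entropy between input and reconstruction.
    bce = F.binary_cross_entropy(x_hat, x, reduction="sum")
    # KL divergence between q(z|x) = N(mu, diag(exp(log_var))) and the N(0, I) prior.
    kl = -0.5 * torch.sum(1.0 + log_var - mu.pow(2) - log_var.exp())
    return bce + ALPHA * kl

# optimizer = torch.optim.Adam(vae.parameters(), lr=2e-4)  # `vae` assumed defined
```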
MULTIPLICATIVE PERTURBATION z · (1 + Δz)
To formulate a multiplicative perturbation, we require that the element(s) that encode smile or no smile have different signs for each class. We may then learn a multiplicative mask where most of the values are ones, and one or a few values are negative. These values may not be positive: if they were positive, the signs in the encoding could not be switched and no label swap could take place. In this formulation, we cannot guarantee that the encoding will take the desired form. From preliminary experiments, we see that faces classified as “smiling” often appear to be smiling more intensely after the transform. This is likely because the autoencoder considered the image to be of a person not smiling in the first place.
In our formulation, we use a single Δz to which we apply L_p regularization. The transform is then z(1 + Δz). Note that it does not make sense to have a formulation for each direction, i.e. z(1 − Δz) for the other direction; if the encodings of opposite samples have opposite signs, a negative Δz is sufficient to provide a transform in both directions.
For multiplicative transforms, the perturbations do not appear to perform as well as for the additive approach. This might be a reflection of the near-linear structure of the latent space learned by the autoencoder. An adversary applying an additive perturbation is able to target this near-linear structure, while an adversary applying a multiplicative perturbation makes much stronger assumptions about the structure of the latent space, which apparently do not hold for all variational autoencoders. | 1. What is the main contribution of the paper in terms of security and machine learning?
2. What are the strengths and weaknesses of the proposed "attacking" method?
3. How realistic is the scenario presented in the paper regarding man-in-the-middle attacks?
4. What are the necessary and sufficient conditions for an attacking method to be undetectable?
5. Are there any references for the criteria given in Section 3.3?
6. Why is the correspondence between the classification confidence of transformed and original data required?
7. Would matching the distribution of the confidence be enough? | Review | Review
This paper is concerned with both security and machine learning.
Assuming that data is encoded, transmitted, and decoded using a VAE, the paper proposes a man-in-the-middle attack that alters the VAE encoding of the input data so that the decoded output will be misclassified.
The objectives are to: 1) fool the autoencoder, so that the classification output of the autoencoder is different from the actual class of the input; and 2) make a minimal change in the middle so that the attack is not detectable.
This paper is concerned with both security and machine learning, but there are no clear contributions to either field. From the machine learning perspective, the proposed "attacking" method is standard, without any technical novelty. From the security perspective, the scenarios are too simplistic. The encoding-decoding mechanism being attacked is too simple, without any security enhancement. This is an unrealistic scenario. For applications with security concerns, there should be methods to guard against man-in-the-middle attacks, and the paper should have at least considered some of them. Without considering state-of-the-art security defense mechanisms, it is difficult to judge the contribution of the paper to the security community.
I am not a security expert, but I doubt that the proposed methods are formulated based on well-founded security concepts and ideas. For example, what are the necessary and sufficient conditions for an attacking method to be undetectable? Are the criteria about the magnitude of epsilon given in Section 3.3 necessary and sufficient? Is there any reference for them? Why do we require the correspondence between the classification confidence of transformed and original data? Would it be enough to match the DISTRIBUTION of the confidence? |
ICLR | Title
A Reinforcement Learning Approach to Estimating Long-term Treatment Effects
Abstract
Randomized experiments (a.k.a. A/B tests) are a powerful tool for estimating treatment effects, to inform decision making in business, healthcare and other applications. In many problems, the treatment has a lasting effect that evolves over time. A limitation of randomized experiments is that they do not easily extend to measure long-term effects, since running long experiments is time-consuming and expensive. In this paper, we take a reinforcement learning (RL) approach that estimates the average reward in a Markov process. Motivated by real-world scenarios where the observed state transitions are nonstationary, we develop a new algorithm for a class of nonstationary problems, and demonstrate promising results on two synthetic datasets and one online store dataset.
1 INTRODUCTION
Randomized experiments (a.k.a. A/B tests) are a powerful tool for estimating treatment effects, to inform decision making in business, healthcare and other applications. In an experiment, units like customers or patients are randomly split into a treatment bucket and a control bucket. For example, in a rideshare app, drivers in the control and treatment buckets are matched to customers in different ways (e.g., with different spatial ranges or different ranking functions). After we expose customers to one of these options for a period of time, usually a few days or weeks, we can record the corresponding customer engagements, and run a statistical hypothesis test on the engagement data to detect whether there is a statistically significant difference in customer preference of treatment over control. The result will inform whether the app should launch the treatment or control.
While this method has been widely successful (e.g., in online applications (Kohavi et al., 2020)), it typically measures treatment effect during the short experiment window. However, in many problems, a treatment has a lasting effect that evolves over time. For example, a treatment that increases installation of a mobile app may result in a drop of short-term profit due to promotional benefits like discounts. But the installation allows the customer to benefit from the app, which will increase future engagements and profit in the long term. A limitation with standard randomized experiments is that they do not easily extend to measure long-term effects. We can run a long experiment for months or years to measure the long-term impacts, which however is time-consuming and expensive. We can also design proxy signals that are believed to correlate with long-term engagements (Kohavi et al., 2009), but finding a reliable proxy is challenging in practice. Another solution is the surrogacy method that estimates delayed treatment impacts from surrogate changes during the experiment (Athey et al., 2019). However, it does not estimate long-term impacts resulting from long-term treatment exposure, but rather from short-term exposure during the experiment.
Shi et al. (2022b) mitigate the limitation of standard randomized experiments by framing the long-term effect as a reinforcement learning (RL) problem. Their method is closely related to recent advances in infinite-horizon off-policy evaluation (OPE) (Liu et al., 2018; Nachum et al., 2019a; Xie et al., 2019; Kallus & Uehara, 2020; Uehara et al., 2020; Chandak et al., 2021). However, their solution relies on a stationary Markov assumption, which fails to capture real-world nonstationary dynamics. Motivated by real-world scenarios where the observed state transitions are nonstationary, we consider a class of nonstationary problems, where the observation consists of two additive terms: an endogenous term that follows a stationary Markov process, and an exogenous
term that is time-varying but independent of the policy. Based on this assumption, we develop a new algorithm to jointly estimate long-term reward and the exogenous variables.
Our contributions are threefold. First, ours is a novel application of RL to estimating long-term treatment effects, which is challenging for standard randomized experiments. Second, we develop an estimator for a class of nonstationary problems motivated by real-world scenarios, and give a preliminary theoretical analysis. Third, we demonstrate promising results on two synthetic datasets and one online store dataset.
2 BACKGROUND
2.1 LONG-TERM TREATMENT EFFECTS
Let π_0 and π_1 be the control and treatment policies, used to serve individuals in the respective buckets. In the rideshare example, a policy may decide how to match a driver to a nearby request. During the experiment, each individual (the driver) is randomly assigned to one of the policy groups, and we observe a sequence of behavior features of that individual under the influence of the assigned policy. We use the variable D ∈ {0, 1} to denote the random assignment of an individual to one of the policies. The observed features are denoted as a sequence of random variables in R^d,

O_0, O_1, . . . , O_t, . . . ,

where the subscript t indicates the time step in the sequence. A time step may be one day or one week, depending on the application. The feature O_t consists of information like the number of pickup orders. We are interested in estimating the difference in average long-term reward between the treatment and control policies:
∆ = E[ ∑_{t=0}^{∞} γ^t R_t | D = 1 ] − E[ ∑_{t=0}^{∞} γ^t R_t | D = 0 ],    (1)

where E averages over individuals and their stochastic sequences of engagements, R_t = r(O_t) is the reward signal (e.g., customer rating) at time step t, following a pre-defined reward function r : R^d → R, and γ ∈ (0, 1) is the discount factor. The discount factor γ is a hyper-parameter specified by the decision maker to indicate how much they value future reward over the present. The closer γ is to 1, the greater the weight future rewards carry in the discounted sum.
Suppose we have run a randomized experiment with the two policies for a short period of T steps. In the experiment, a set of n individuals are randomly split and exposed to one of the two policies π_0 and π_1. We denote by d_j ∈ {0, 1} the policy assignment of individual j, and by I_i the index set of individuals assigned to π_i, i.e., j ∈ I_i iff d_j = i. The in-experiment trajectory of individual j is

τ_j = {o_{j,0}, o_{j,1}, . . . , o_{j,T}}.

The in-experiment dataset is the collection of all individual data, D_n = {(τ_j, d_j)}_{j=1}^{n}. Our goal is to find an estimator ∆̂(D_n) ≈ ∆.
2.2 ESTIMATION UNDER STATIONARY MARKOVIAN DYNAMICS
Inspired by recent advances in off-policy evaluation (OPE) (e.g., Liu et al., 2018; Nachum et al., 2019b), the simplest assumption is a fully observed Markov process: the observation at each time step fully predicts the future distribution under a stationary dynamics kernel. In this paper, we assume the dynamics kernel and reward function are both linear, following the setting in Parr et al. (2008). Linear representations are popular in the RL literature (e.g., Shi et al., 2022b), and often preferable in industrial applications due to simplicity and greater model interpretability.

Assumption 2.1. (Linear Dynamics) There is a matrix M_i such that

E[O_{t+1} | O_t = o, D = i] = M_i o,  ∀t ∈ N, i ∈ {0, 1}.    (2)

Remark 2.2. Unlike standard RL, we do not have an explicit action for a policy. The difference between the control and treatment policies is revealed by the different transition matrices M_i.

Assumption 2.3. (Linear Reward) There is a coefficient vector θ_r ∈ R^d such that

r(O_t) = θ_r^⊤ O_t,  ∀t ∈ N.    (3)
Remark 2.4. The reward signal may be one of the observed features. For example, if we are interested in customer rating, and rating is one of the observed features, then θ_r is just a one-hot vector with 1 in the corresponding coordinate. When the reward is complex with unknown coefficients, we can use ordinary least squares to estimate θ_r.

Proposition 2.5. Under Assumptions 2.1 and 2.3, if the spectral norm of M_i is smaller than 1/γ, then the expected long-term reward of policy π_i, v(π_i) := E[ ∑_{t=0}^{∞} γ^t R_t | D = i ], can be obtained by:

v(π_i) = θ_r^⊤ (I − γM_i)^{−1} Ō_0^{(i)},  where Ō_0^{(i)} := E[O_0 | D = i].    (4)
The only remaining step is to estimate Ō_0^{(i)} and M_i. The former can be estimated directly by the Monte Carlo average of the experimental data: Ô_0^{(i)} = (1/n_i) ∑_{j∈I_i} o_{j,0}, where n_i = |I_i| is the number of individuals assigned to policy π_i. To estimate the latter, we may use ordinary least squares on the observed transitions:

M̂_i = ( ∑_{j∈I_i} ∑_{t=0}^{T−1} o_{j,t+1} o_{j,t}^⊤ ) ( ∑_{j∈I_i} ∑_{t=0}^{T−1} o_{j,t} o_{j,t}^⊤ )^{−1}.    (5)

The detailed derivation can be found in Parr et al. (2008). Once we obtain the estimated values v̂_i ≈ v(π_i), the long-term impact in Eq. (1) can be estimated as:
∆̂ = v̂_1 − v̂_0.

Remark 2.6. Although this is a model-based estimator, it is equivalent to other OPE estimators in general under the linear Markovian assumption (e.g., Nachum et al., 2019b; Duan et al., 2020; Miyaguchi, 2021), and it enjoys similar statistical guarantees to other OPE estimators.
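For concreteness, a minimal numpy sketch of this plug-in estimator, Eqs. (4)–(5); the names are illustrative, and obs is assumed to hold one group's in-experiment observations with shape (n_i, T+1, d):

```python
import numpy as np

def estimate_value(obs, theta_r, gamma):
    """Plug-in long-term value under the stationary linear model.
    obs: array of shape (n, T+1, d) of in-experiment observations."""
    d = obs.shape[-1]
    x = obs[:, :-1, :].reshape(-1, d)       # o_{j,t}
    x_next = obs[:, 1:, :].reshape(-1, d)   # o_{j,t+1}
    # Ordinary least squares for the transition matrix, Eq. (5).
    M = (x_next.T @ x) @ np.linalg.inv(x.T @ x)
    o0_bar = obs[:, 0, :].mean(axis=0)      # Monte Carlo estimate of E[O_0 | D=i]
    # Closed-form value, Eq. (4): theta_r^T (I - gamma*M)^{-1} o0_bar.
    return theta_r @ np.linalg.solve(np.eye(d) - gamma * M, o0_bar)

# delta_hat = estimate_value(obs_treatment, theta_r, gamma) \
#           - estimate_value(obs_control, theta_r, gamma)
```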
3 OUR METHOD
In Section 2.2, we assumed the observation Ot follows a stationary Markov process, and derived a model-based closed-form solution based on linear reward Assumption 2.3.
In reality, this model assumption has two major limitations. First, real-world environments are nonstationary. For example, in a hotel reservation system, seasonality heavily influences the prediction of the future booking count. Our stationary assumption does not capture those seasonal changes, resulting in poorly learned models and inaccurate predictions of long-term treatment effects. Second, in practice, we are unable to ensure that observed features fully capture the dynamics. OPE methods based on stationary and full observability assumptions are unlikely to work robustly in complex, real-life scenarios.
Figure 1 illustrates nonstationarity in data from an online store (see Section 5 for more details). The figure shows how the weekly average of a business metric changes over a span of 5 months, for two policies (C for control, and T4 for treatment). Such highly non-stationary data, especially during special seasons towards the right end of the plot, are common.
However, the difference between the two policy groups remains much more stable. This is expected, as both policies are affected by the same exogenous effects (seasonal variations in this example).
Figure 1 motivates a relaxed model assumption (Section 3.1), by introducing a non-stationary exogenous component on top of a stationary hidden state St. Our new assumption is that the observation Ot can be decomposed additively into two parts: an endogenous part still follows a stationary Markovian dynamic for each policy group (treatment or control); and an exogenous part which is time-varying and shared across all groups. Based on the new assumption we propose an alternating minimization algorithm that jointly estimates both transition dynamics and exogenous variables.
3.1 NONSTATIONARY MODEL RELAXATION
We assume there is an exogenous noise vector z_t for each time step t, to represent linear additive exogenous noise in the uncontrollable outside world, such as a seasonal effect, which applies uniformly to every individual in each treatment bucket. We relax Assumption 2.1 as follows:

Assumption 3.1. (Linear Additive Exogenous Noise) The observed feature O_t is the sum of an endogenous hidden feature and the time-varying exogenous noise z_t:

O_t = S_t + z_t,  ∀t ∈ N,

where z_t does not depend on the policy or on any individual in the experiments, and S_t follows the linear Markovian kernel with transition matrix M_i:

E[S_{t+1} | S_t = s, D = i] = M_i s,  ∀t ∈ N, i ∈ {0, 1}.    (6)

Remark 3.2 (Explanation of the Linear Additive Model). Our linear additive model is inspired by the parallel trend assumption in the Difference-in-Differences (DID) estimator (Lechner et al., 2011). In real-world environments, it is impossible to capture all the covariates that may affect the dynamics. The linear additive exogenous noise z_t can be seen as a drive from the outside that is both unobserved and uncontrolled. For example, in an intelligent agriculture system, the highly nonstationary weather condition can be seen as exogenous noise that we cannot control, while the amount of water and fertilizer that affects the growth of the plant can be seen as the hidden state controlled by a pre-defined stationary policy. We add up those two factors as the features (e.g., the condition of the crop) that we observe in the real world.
From Assumption 3.1 and the linear reward assumption 2.3, the closed form of v(π_i) can be rewritten as:

Proposition 3.3. Under Assumptions 3.1 and 2.3, suppose v(z_∞) := ∑_{t=0}^{∞} γ^t z_t < ∞, and suppose the spectral norm of M_i is smaller than 1/γ. Then the expected long-term reward can be obtained by:

v(π_i) = θ_r^⊤ (I − γM_i)^{−1} S̄_0^{(i)} + v(z_∞),  where S̄_0^{(i)} = E[S_0 | D = i].    (7)
The long-term reward in Eq. (7) contains v(z_∞), which depends on the unknown exogenous noise sequence outside of the experimental window and is thus unpredictable. However, the long-term treatment effect, ∆(π_1, π_0) = v(π_1) − v(π_0), cancels out the dependency on the exogenous term v(z_∞). For simplicity, we redefine v(π_i) = θ_r^⊤ (I − γM_i)^{−1} S̄_0^{(i)}, without the term v(z_∞). Therefore, the only quantities we need to estimate are S̄_0^{(i)} and M_i. Once we have access to z_0, we can estimate S̄_0^{(i)} similarly by a Monte Carlo average: Ŝ_0^{(i)} = (1/n_i) ∑_{j∈I_i} o_{j,0} − ẑ_0. The next question is how to estimate the in-experiment exogenous variables z_t and the underlying transition kernels.
3.2 OPTIMIZATION FRAMEWORK
We propose to optimize {z_t}_{0≤t≤T} and {M_0, M_1} jointly under a single loss function, in the same spirit of reducing the reconstruction loss of each transition pair as in the model-based approach.
For each individual j in treatment group i, Assumption 3.1 implies that at time step t + 1 the observation o_{j,t+1} can be written as:

o_{j,t+1} − z_{t+1} = M_i (o_{j,t} − z_t) + ε_{j,t},  ∀j ∈ I_i, 0 ≤ t ≤ T − 1,    (8)

where ε_{j,t} is a noise term with zero mean, so that M_i (o_{j,t} − z_t) = E[S_{t+1} | S_t = o_{j,t} − z_t, D = i]. Inspired by Eq. (8), given the observation history D_n, to minimize the empirical reconstruction risk over all transition pairs (o_{j,t}, o_{j,t+1}), we construct the following loss function:

L(M_0, M_1, {z_t}_{0≤t≤T}; D_n) = ∑_{i=0}^{1} ∑_{j∈I_i} ∑_{t=0}^{T−1} ‖o_{j,t+1} − z_{t+1} − M_i (o_{j,t} − z_t)‖_2^2.    (9)
To simplify notation, Eq. (9) can be rewritten in vectorized form:

L(M_0, M_1, z; D_n) = ∑_{i=0}^{1} ∑_{j∈I_i} ‖A_i (o_j − z)‖_2^2,    (10)
Algorithm 1 Estimating Long-Term Effect Under Non-stationary Dynamics
Input: in-experiment training data D_n = {(τ_j, d_j)}_{j=1}^{n}, where τ_j = (o_{j,0}, o_{j,1}, . . . , o_{j,T}) are the in-experiment observed features of individual j, and d_j ∈ {0, 1} indicates the policy group to which individual j is assigned.
Initialize the estimate of the exogenous noise: ẑ = 0.
Optimization:
while not converged do
    Update M̂_i as the ordinary least-squares solution given the current ẑ:
        M̂_i = ( ∑_{j∈I_i} ∑_{t=1}^{T−1} (o_{j,t+1} − ẑ_{t+1})(o_{j,t} − ẑ_t)^⊤ ) ( ∑_{j∈I_i} ∑_{t=1}^{T−1} (o_{j,t} − ẑ_t)(o_{j,t} − ẑ_t)^⊤ )^{−1}.
    Update ẑ according to Eq. (12):
        ẑ = (n_0 G_0 + n_1 G_1)^{−1} ( ∑_{i=0}^{1} ∑_{j∈I_i} G_i o_j ).
end while
Evaluation: compute v̂_i = θ_r^⊤ (I − γM̂_i)^{−1} ( Ô_0^{(i)} − ẑ_0 ), where Ô_0^{(i)} = (1/n_i) ∑_{j∈I_i} o_{j,0}.
Output the long-term impact estimate ∆̂ = v̂_1 − v̂_0.
where o_j = (o_{j,0}^⊤, o_{j,1}^⊤, . . . , o_{j,T}^⊤)^⊤ and z = (z_0^⊤, z_1^⊤, . . . , z_T^⊤)^⊤ are column vectors stacked over the experiment time horizon, and A_i is a dT × d(T+1) block matrix built from M_i:

        ⎡ −M_i    I                      ⎤
  A_i = ⎢        −M_i    I               ⎥   ∈ R^{dT × d(T+1)}.    (11)
        ⎢                ⋱      ⋱        ⎥
        ⎣                      −M_i    I ⎦
3.3 ALTERNATING MINIMIZATION
To reconstruct M_i and z, we apply alternating minimization to the loss function L(M_0, M_1, z; D_n) in Eq. (10). By looking at the zero-gradient point of the loss function, under a proper non-degeneracy assumption (see Appendix for details), we have:

Proposition 3.4. Suppose (n_0 G_0 + n_1 G_1) is nonsingular. The minimizer of z given M_i has the closed form

argmin_z L(M_0, M_1, z; D_n) = (n_0 G_0 + n_1 G_1)^{−1} ( ∑_{i=0}^{1} ∑_{j∈I_i} G_i o_j ),  where G_i = A_i^⊤ A_i.    (12)
The minimizer of M_i given z is similar to Eq. (5), except that we subtract the exogenous part z_t from the observations. For i ∈ {0, 1},

argmin_{M_i} L(M_0, M_1, z; D_n) = ( ∑_{j∈I_i} ∑_{t=1}^{T−1} (o_{j,t+1} − ẑ_{t+1})(o_{j,t} − ẑ_t)^⊤ ) ( ∑_{j∈I_i} ∑_{t=1}^{T−1} (o_{j,t} − ẑ_t)(o_{j,t} − ẑ_t)^⊤ )^{−1}.    (13)
The final optimization process is summarized in Algorithm 1.
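For illustration, a simplified numpy sketch of the two alternating updates (unregularized, using all transitions; the names are illustrative, and in practice the regularized variants of Section 3.5 should be preferred, since n_0·G_0 + n_1·G_1 can be nearly singular):

```python
import numpy as np

def block_A(M, T):
    """Build the dT x d(T+1) block matrix A_i of Eq. (11)."""
    d = M.shape[0]
    A = np.zeros((d * T, d * (T + 1)))
    for t in range(T):
        A[d * t:d * (t + 1), d * t:d * (t + 1)] = -M
        A[d * t:d * (t + 1), d * (t + 1):d * (t + 2)] = np.eye(d)
    return A

def alternating_minimization(obs, n_iters=50):
    """obs[i]: array of shape (n_i, T+1, d) for group i in {0, 1}."""
    n = [o.shape[0] for o in obs]
    T, d = obs[0].shape[1] - 1, obs[0].shape[2]
    flat = [o.reshape(o.shape[0], -1) for o in obs]  # rows are o_j stacked over time
    z = np.zeros(d * (T + 1))
    M = [np.eye(d), np.eye(d)]
    for _ in range(n_iters):
        # M-step, Eq. (13): OLS on de-noised transitions s_{j,t} = o_{j,t} - z_t.
        s = [o - z.reshape(T + 1, d) for o in obs]
        for i in (0, 1):
            x = s[i][:, :-1, :].reshape(-1, d)
            x_next = s[i][:, 1:, :].reshape(-1, d)
            M[i] = (x_next.T @ x) @ np.linalg.inv(x.T @ x)
        # z-step, Eq. (12): closed-form solution given M_0, M_1.
        G = [block_A(M[i], T).T @ block_A(M[i], T) for i in (0, 1)]
        lhs = n[0] * G[0] + n[1] * G[1]
        rhs = G[0] @ flat[0].sum(axis=0) + G[1] @ flat[1].sum(axis=0)
        z = np.linalg.solve(lhs, rhs)
    return M, z.reshape(T + 1, d)
```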
3.4 THEORETICAL ANALYSIS
We give a preliminary theoretical analysis in this section to give readers some insights on how good our estimator is once a partial oracle information is given. We will extend our analysis to quantify the error of the estimator at the convergence state of alternating minimization in future work.
To simplify our analysis, we first assume access to the true transition matrices M_i, and our goal is to quantify the error between v̂(π_i) and the true policy value v(π_i) for each policy π_i.

Proposition 3.5. Suppose we have bounded noise and matrices under Assumptions A.1 and A.2, and suppose the groups are equally divided, n_0 = n_1 = n/2. When we have access to the oracle transition matrices M_i = M_i^*, i ∈ {0, 1}, let ẑ = argmin_z L(M_0^*, M_1^*, z; D_n). If we plug ẑ into the estimate v̂(π_i), we will have

|v̂(π_i) − v(π_i)| = O(1/√n)

with probability at least 1 − δ.
In the second analysis we assume access to an accurate z. In this case, the estimation of M̂_i reduces to the stationary case of Assumption 2.1, where the hidden state variable s_t = o_t − z_t is fully recovered. We follow the analysis of linear MDPs (e.g., Duan et al., 2020; Miyaguchi, 2021) to characterize the error.

Proposition 3.6 (Proposition 11 in Miyaguchi (2021)). Suppose we have access to the oracle exogenous noise z during the experimental period, and let M̂_i = argmin_{M_i} L({M_i}, z^*; D_n) in Eq. (13). Under the assumptions of Proposition 11 in Miyaguchi (2021), the plug-in estimator v̂ with M̂_i satisfies

|v̂(π_i) − v(π_i)| = O(n^{−1/(2d+2)})

with probability at least 1 − δ.
3.5 PRACTICAL CONSIDERATIONS
Regularizing the Transition Matrices. A degenerate case may occur during alternating minimization when either 1) the spectral norm is too large, i.e. ‖M_i‖_2 ≥ 1/γ, so that the long-term operator (I − γM_i)^{−1} = ∑_{t=0}^{∞} γ^t M_i^t in Eq. (7) diverges, or 2) the matrix inversion in the update of M_i in Eq. (13) is not well-defined. To avoid these scenarios and stabilize the computation, we add a regularization term λ_i ‖M_i − I_d‖_2^2 in our experiments. The intuition is that the transition matrix should be close to the identity matrix, as in practice the treatment policy typically deviates from the control policy in an incremental manner.
After adding the regularization, the closed-form minimizer of M_i for the regularized loss becomes:

M_i = ( λ_i I_d + ∑_{j∈I_i} ∑_{t=1}^{T−1} (o_{j,t+1} − z_{t+1})(o_{j,t} − z_t)^⊤ ) ( λ_i I_d + ∑_{j∈I_i} ∑_{t=1}^{T−1} (o_{j,t} − z_t)(o_{j,t} − z_t)^⊤ )^{−1}.
Regularizing the Exogenous Variable. There is a challenge in deriving the closed-form z in Eq. (12): n_0 G_0 + n_1 G_1 can be degenerate or nearly degenerate. By definition, G_i is always singular. Moreover, if there is no control of the minimal eigenvalue of (n_0 G_0 + n_1 G_1), e.g. if it is close to zero, the update step on z is uncontrolled, and the noise variance can be magnified in the direction of the minimal eigenvector. It is therefore crucial to regularize z.
To tackle possibly degenerate circumstances, one natural idea is to include a regularization of the ℓ_2 norm of z, giving the regularized loss function

L_λ(z, M_0, M_1; D) = L(z, M_0, M_1; D) + λ_z ‖z‖_2^2.    (14)

Its corresponding minimizer ẑ can be written as:

ẑ = (λI + n_0 G_0 + n_1 G_1)^{−1} ( ∑_{i=0}^{1} ∑_{j∈I_i} G_i o_j ),
where I is the identity matrix of dimension d(T + 1) × d(T + 1). It is worth mentioning that as the regularization parameter λ increases to infinity, z goes to 0, and the solution reduces to the stationary case of Assumption 2.1.
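In code, the regularized updates of this section simply add ridge terms to the two linear systems; a minimal sketch with illustrative names (lam_M and lam_z stand for the hyper-parameters λ_i and λ_z):

```python
import numpy as np

def regularized_M_step(x, x_next, lam_M):
    """Ridge-regularized transition update: shrinks M toward the identity,
    argmin_M  sum ||x_next - M x||^2 + lam_M * ||M - I||_F^2."""
    d = x.shape[1]
    return (lam_M * np.eye(d) + x_next.T @ x) @ \
           np.linalg.inv(lam_M * np.eye(d) + x.T @ x)

def regularized_z_step(lhs, rhs, lam_z):
    """Ridge-regularized exogenous update, Eq. (14): guards against a
    degenerate or nearly degenerate lhs = n0*G0 + n1*G1."""
    return np.linalg.solve(lam_z * np.eye(lhs.shape[0]) + lhs, rhs)
```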
Extension to Multiple Treatment Policies. The optimization framework easily extends to the case of multiple treatment policies. Suppose we have k different treatment policies π_1, π_2, · · · , π_k, and let π_0 be the control policy. The closed-form solution for ẑ across the datasets of the different treatment groups is

ẑ_λ = ( λI + ∑_{i=0}^{k} n_i G_i )^{−1} ( ∑_{i=0}^{k} ∑_{j∈I_i} G_i o_j ).
The closed-form update for M_i stays the same. The final estimate of the treatment effect for policy π_i is ∆̂ = v̂_i − v̂_0.
4 RELATED WORK
Estimating long-term treatment effects. Our work is related to causal inference with temporal data. The surrogate index method (Athey et al., 2019; 2020) makes a different assumption: that the long-term effect is independent of the treatment conditioned on the surrogate index measured during the experiment. It then estimates long-term impacts resulting from short-term exposure during the experiment. In contrast, our work aims to estimate long-term impacts resulting from long-term exposure. Time series methods (e.g., Bojinov & Shephard, 2019) require probabilistic treatments, which allow an individual to be exposed to different treatments at different time periods during an experiment. They then estimate the temporal treatment effect, which is averaged over all the time steps and differs from the traditional treatment effect, which is averaged over randomized individuals.
Our method draws inspiration from off-policy evaluation (OPE) and related areas, whose goal is to estimate the long-term policy value, usually from an offline dataset collected under different policies. Most early work focuses on the family of inverse propensity score estimators, which are prone to high variance in long-horizon problems (e.g., Precup et al., 2000; Murphy et al., 2001; Jiang & Li, 2016). Recently, there has been growing interest in long- and even infinite-horizon settings (Liu et al., 2018; Nachum et al., 2019a; Xie et al., 2019; Tang et al., 2020; Uehara et al., 2020; Dai et al., 2020; Chandak et al., 2021). In particular, Shi et al. (2022b) consider a similar problem of estimating long-term impacts, which is comparable to our stationary baseline. However, these methods either rely on the stationarity assumption, which is violated in many applications, or consider the general nonstationary Markov decision process (Kallus & Uehara, 2020), which does not leverage domain-specific assumptions.
RL in nonstationary or confounded environments Our model is a special case of Partially Observable Markov Decision Process (POMDP) (Åström, 1965; Kaelbling et al., 1998). OPE in general POMDPs remains challenging, unless various assumptions are made (e.g., Tennenholtz et al., 2020; Bennett et al., 2021; Shi et al., 2022a). Most assumptions are on the causal relation of the logged data, such as relation between state, action and confounded variable. In contrast, we make an assumption motivated by real-world data, which allows our estimator to cancel out exogenous variables from observations.
Our assumption is also related to MDPs with Exogenous Variables (e.g., Dietterich et al., 2018; Chitnis & Lozano-Pérez, 2020), and the Dynamics Parameter MDP (DPMDP) or Hidden Parameter MDP (HiP-MDP) (Al-Shedivat et al., 2017; Xie et al., 2020). For exogenous variables, these works assume the observation features can be partitioned into two groups, where the exogenous group is not affected by the action and the endogenous group evolves as in a typical MDP; the major challenge is to infer the right partition. Several recent works (e.g., Misra et al., 2020; Du et al., 2019; Efroni et al., 2021) combine exogenous variables with rich observations in RL. This differs from our assumption that the observation is a sum of both parts, which is more natural in applications like e-commerce. DPMDP and HiP-MDP assume a meta task variable that is non-stationary and changes across time, but whose dynamics can be captured by a sequential model. Our assumption can be viewed as a linear special case, but our focus is not to better characterize the system; it is to remove the exogenous part for better predictions.
5 EXPERIMENTS
We evaluate our methods on three problems: a synthetic dataset, a dataset from the Type-1 Diabetes RL simulator (Xie, 2019), and a real-world dataset from an online store. The ground truth ∆ is computed either from a true simulator or from the average of real experimental data over a long time period. We compare the plug-in estimator of the stationary solution in Eq. (4), its non-stationary variant in Algorithm 1, and a Naive Average baseline. The baseline directly uses the short-term reward average as the estimate of the long-term effect.
5.1 SYNTHETIC SIMULATION
The synthetic environment generates 4 randomized matrices $M_i$ for policies $\{\pi_i\}_{i=0}^{3}$ and a trajectory of randomized exogenous noise $\{z_t\}_{t=0}^{T}$; see details of the synthetic dynamics in Appendix C. The randomized sequence follows the non-stationary dynamics with a parameter $\alpha$ controlling the scale of the exogenous noise: $o_{j,t} = s_{j,t} + \alpha z_t, \ \forall j, t$. We collect $n$ trajectories for each policy until $t = T$ (with varying $T$). We vary the parameters of the generating sequences: number $n$ of trajectories, horizon $T$, data dimension $d$, and scale $\alpha$ of the exogenous noise. We plot the logarithmic Mean Square Error (MSE) for each method in Figure 2. The results show that our estimator (the green line) clearly outperforms all baselines. Moreover, Figure 2(d) shows that increasing the scale of the exogenous noise does not affect the estimation accuracy of our method.
5.2 TYPE-1 DIABETES SIMULATOR
This environment is modified from an open-source implementation1 of the FDA-approved Type-1 Diabetes simulator (T1DMS) (Man et al., 2014). The environment simulates two days of an in-silico patient’s life. Consumption of a meal increases the blood-glucose level in the body. If the level is too high, the patient suffers from hyperglycemia; if the level is too low, the patient suffers from hypoglycemia. The goal is to control the blood glucose level by regulating the insulin dosage, to minimize the risk associated with both hyperglycemia and hypoglycemia.
We take the Basal-Bolus (BB) policy (Bastani, 2014) in the codebase as the control policy, and set two different glucose target levels and noise levels to form our two treatment policies. We collect data from the first 12 hours under all three policies, with 5000 randomized patients in each policy group, and use this data to predict the long-term effect. The observation feature is 2-dimensional: glucose level (CGM) and the amount of insulin injected. The non-stationarity comes from the time and amount of meal consumption, which is time-varying but otherwise shared by all patients. We average a 2-day simulation window over 250,000 random patients as the ground-truth treatment effect between policy groups.
1https://github.com/jxx123/simglucose
As in the synthetic simulator, we vary the number of patients and the experimental period. Figure 3 shows that the non-stationary method achieves better prediction accuracy than the stationary method for both CGM and the amount of insulin injected. Even though the simulator is non-linear, our simple linear additive exogenous noise assumption still captures the small local changes, which are approximately linear.
5.3 DATA FROM AN ONLINE STORE
We test our methods on 4 long-running experiments in an online store with a total of 7 different treatment policies (some experiments have more than 1 treatment). Each experiment has 1 control policy. We evaluate 4 business metrics related to customer purchases in the store (Metrics 1-4), and use d = 17 features. All the experiments lasted 12 weeks. We treat the first 5 weeks as the experiment window, and use data from those weeks to estimate long-term impacts on the 4 metrics. The trailing 7-week averages of the metrics are used as ground truth to evaluate the accuracy of the estimators. Table 1 reports the median of the Mean Absolute Percentage Error (MAPE) of the estimators; see full results in Appendix C.
Given the high cost of such long-running experiments, we cannot collect more data points for comparison or for computing statistical significance. That said, the reported numbers provide good evidence that our method produces better predictions of long-term treatment effects than Naive Average. Furthermore, our method improves on the stationary baseline, suggesting the practical relevance of our nonstationary assumption and the effectiveness of the proposed estimator.
6 CONCLUSIONS
In this paper we study how to estimate long-term treatment effects using only in-experiment data in a non-stationary environment. We propose a novel non-stationary RL model and an algorithm for making predictions. A major limitation is the linearity assumption in both the dynamics model and the additive exogenous part: when the real-world model includes highly non-linear components, the predicted value can be biased. Future directions include relaxing our model to the nonlinear case to better capture real-world environments.
A PROOF
In this section, we provide detailed proofs for the results in the main text. To keep the section self-contained, we briefly introduce the notation below, and adopt the regularized, multiple-policy-group setting throughout the appendix:
• $n$: total number of individuals.
• $I_i$: the index set for policy $\pi_i$; $n_i = |I_i|$ is the number of individuals under policy $\pi_i$.
• $k$: total number of policy groups.
• $\mathcal{D}_n$: dataset for the $n$ individuals in the experimental period.
In the appendix, we denote the ground-truth dynamics $M^*_i$ and the ground-truth exogenous noise $z^*$ with a star $*$ to distinguish them from the variables $M_i$ and $z$ used during the optimization process.
A.1 ASSUMPTIONS
The linear additive exogenous noise dynamics of Assumption 3.1 can be rewritten as the following equation:
$$M^*_i(o_{j,t} - z^*_t) = (o_{j,t+1} - z^*_{t+1}) + \varepsilon_{j,t}, \qquad \forall j \in I_i,\ 0 \le t \le T-1, \qquad (15)$$
where $\varepsilon_{j,t}$ is a zero-mean noise term. Let $\varepsilon_j = (\varepsilon_{j,0}^\top, \varepsilon_{j,1}^\top, \ldots, \varepsilon_{j,T-1}^\top)^\top \in \mathbb{R}^{dT}$; then $\{\varepsilon_j\}_{1 \le j \le n}$ forms a martingale difference sequence:
$$\mathbb{E}[\varepsilon_j \mid \mathcal{F}_{j-1}] = 0, \qquad (16)$$
where the filtration $\mathcal{F}_j = \{o_1, \ldots, o_{j-1}\}$ is the information up to the first $j-1$ individuals. For the proofs we make an additional boundedness assumption on the zero-mean noise term:
Assumption A.1 (Bounded Noise). Let $\varepsilon_{j,t} = M^*_i(o_{j,t} - z^*_t) - (o_{j,t+1} - z^*_{t+1})$, $j \in I_i$, be the residual of the transition under the true transition matrix $M^*_i$. We assume
$$\|\varepsilon_j\|_2 \le C_\varepsilon, \quad \forall j, \qquad (17)$$
where $C_\varepsilon$ is a uniform constant independent of the policy assignment.
We also assume that the empirical covariance matrices appearing in intermediate steps of the calculation are bounded:
Assumption A.2 (Bounded Norm for Matrices). We make the following assumptions on the matrices:
1. $\|M^*_i\| \le C_{M_i} < \frac{1}{\gamma}, \ \forall i$.
2. $\|(\Lambda^*_n / n)^{-1}\| \le C_\Lambda$.
A.2 LOSS FUNCTION AND ALTERNATING MINIMIZATION
Our loss function can be written as:
$$L(\{M_i\}_{1 \le i \le k}, z; \mathcal{D}_n) = \sum_{i=1}^{k} \sum_{j \in I_i} \|A_i(z - o_j)\|_2^2 + \lambda_z \|z\|_2^2 + \sum_{i=1}^{k} \lambda_i \|M_i - I_d\|_F^2. \qquad (18)$$
Lemma A.3. Fix $\{M_i\}$ and denote $G_i = A_i^\top A_i$, where $A_i$ is defined in Eq. (11). The minimizer $z = \arg\min_z L(\{M_i\}_{1 \le i \le k}, z; \mathcal{D}_n)$ is
$$z(\{M_i\}) = \left(\lambda_z I_{d(T+1)} + \sum_{i=1}^{k} n_i G_i\right)^{-1} \sum_{i=1}^{k} \sum_{j \in I_i} G_i o_j. \qquad (19)$$
Proof. Setting the gradient of the loss function to zero, we have:
$$0 = \nabla_z L(\{M_i\}_{1 \le i \le k}, z; \mathcal{D}_n) = 2 \sum_{i=1}^{k} \sum_{j \in I_i} G_i (z - o_j) + 2 \lambda_z z,$$
which implies
$$z(\{M_i\}) = \left(\lambda_z I_{d(T+1)} + \sum_{i=1}^{k} n_i G_i\right)^{-1} \sum_{i=1}^{k} \sum_{j \in I_i} G_i o_j.$$
Here $G_i = A_i^\top A_i$ is positive semi-definite, so for $\lambda_z > 0$ the matrix inverted on the right-hand side is positive definite and the inversion always exists.
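As a sanity check on Lemma A.3, the snippet below (illustrative only, with random data and `build_A` from the earlier sketch) verifies numerically that the closed-form $z$ has zero gradient of the regularized loss:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, n, lam_z = 3, 4, 5, 0.5
M = 0.9 * np.eye(d)                          # a single-group toy example
A = build_A(M, T)
G = A.T @ A
O = rng.normal(size=(n, d * (T + 1)))        # fake flattened trajectories
z_hat = np.linalg.solve(lam_z * np.eye(d * (T + 1)) + n * G, G @ O.sum(0))
grad = 2 * G @ (n * z_hat - O.sum(0)) + 2 * lam_z * z_hat
assert np.allclose(grad, 0, atol=1e-8)       # zero gradient at the minimizer
```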
Similarly, we can obtain the minimizer of $M_i$ with $z$ fixed.
Lemma A.4. Fixing $z$, the minimizer of $M_i$ can be written as
$$M_i(z) = \left(\lambda_i I_d + \sum_{j \in I_i} \sum_{t=1}^{T-1} (o_{j,t+1} - z_{t+1})(o_{j,t} - z_t)^\top\right) \left(\lambda_i I_d + \sum_{j \in I_i} \sum_{t=1}^{T-1} (o_{j,t} - z_t)(o_{j,t} - z_t)^\top\right)^{-1}. \qquad (20)$$
Proof. The proof follows similarly by setting the gradient with respect to $M_i$ to zero.
If we set $\lambda_i = 0$ and $z = 0$, the minimizer reduces to the estimator $\hat{M}$ in Eq. (5).
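This reduction is easy to confirm numerically; the following illustrative check (using `update_M` from the earlier sketch and random data) compares the $\lambda_i = 0$, $z = 0$ update against a direct implementation of Eq. (5):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(8, 6, 3))               # (n_i, T+1, d) fake observations
M_reg = update_M(X, np.zeros((6, 3)), lam=0.0)
# Direct ordinary least squares over all transition pairs, as in Eq. (5):
num = sum(np.outer(X[j, t + 1], X[j, t]) for j in range(8) for t in range(5))
den = sum(np.outer(X[j, t], X[j, t]) for j in range(8) for t in range(5))
assert np.allclose(M_reg, num @ np.linalg.inv(den))
```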
A.3 ERROR ANALYSIS
Lemma A.5. Let $M^*_i$ be the true dynamics of the underlying state. Then
$$z(\{M^*_i\}) - z^* = -\lambda_z (\Lambda^*_n)^{-1} z^* + (\Lambda^*_n)^{-1} \sum_{i=1}^{k} \sum_{j \in I_i} A_i^\top \varepsilon_j, \qquad (21)$$
where $\Lambda^*_n = \lambda_z I_{d(T+1)} + \sum_{i=1}^{k} n_i G^*_i$.
Proof. Expanding the definition of $z(\{M^*_i\})$, we have:
$$z(\{M^*_i\}) = (\Lambda^*_n)^{-1} \sum_{i=1}^{k} \sum_{j \in I_i} G^*_i o_j = (\Lambda^*_n)^{-1} \sum_{i=1}^{k} \sum_{j \in I_i} A_i^\top (A_i o_j) = (\Lambda^*_n)^{-1} \sum_{i=1}^{k} \sum_{j \in I_i} A_i^\top (A_i z^* + \varepsilon_j)$$
$$= (\Lambda^*_n)^{-1} \left(\Lambda^*_n z^* - \lambda_z z^* + \sum_{i=1}^{k} \sum_{j \in I_i} A_i^\top \varepsilon_j\right) = z^* - \lambda_z (\Lambda^*_n)^{-1} z^* + (\Lambda^*_n)^{-1} \sum_{i=1}^{k} \sum_{j \in I_i} A_i^\top \varepsilon_j.$$
Lemma A.6. Let $z^*$ be the true exogenous noise. Then
$$M_i(z^*) - M^*_i = \left(\lambda_i (I_d - M^*_i) + \sum_{j \in I_i} \sum_{t=1}^{T-1} \varepsilon_{j,t} (o_{j,t} - z^*_t)^\top\right) (\lambda_i I_d + \Sigma^*_n)^{-1}, \qquad (22)$$
where $\Sigma^*_n = \sum_{j \in I_i} \sum_{t=1}^{T-1} (o_{j,t} - z^*_t)(o_{j,t} - z^*_t)^\top$ is the empirical covariance matrix.
Proof. Expanding the definition of $M_i(z^*)$, we have:
$$M_i(z^*) = \left(\lambda_i I_d + \sum_{j \in I_i} \sum_{t=1}^{T-1} (o_{j,t+1} - z^*_{t+1})(o_{j,t} - z^*_t)^\top\right) (\lambda_i I_d + \Sigma^*_n)^{-1} \qquad (23)$$
$$= \left(\lambda_i I_d + \sum_{j \in I_i} \sum_{t=1}^{T-1} \left(\varepsilon_{j,t} + M^*_i (o_{j,t} - z^*_t)\right)(o_{j,t} - z^*_t)^\top\right) (\lambda_i I_d + \Sigma^*_n)^{-1} \qquad (24)$$
$$= M^*_i + \left(\lambda_i (I_d - M^*_i) + \sum_{j \in I_i} \sum_{t=1}^{T-1} \varepsilon_{j,t} (o_{j,t} - z^*_t)^\top\right) (\lambda_i I_d + \Sigma^*_n)^{-1}. \qquad (25)$$
A.4 PROOF OF PROPOSITION 2.5
Proof. By induction, it is straightforward to show that $\mathbb{E}[O_t \mid O_0 = o, D = i] = M_i^t o$.
Taking the expectation over $O_0$, we have $\mathbb{E}[O_t \mid D = i] = M_i^t \mathbb{E}[O_0]$.
By the definition of the long-term discounted reward, we have:
$$v(\pi_i) = \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^t R_t \,\Big|\, D = i\Big] = \sum_{t=0}^{\infty} \gamma^t \mathbb{E}[\theta_r^\top O_t \mid D = i] = \theta_r^\top \sum_{t=0}^{\infty} \gamma^t M_i^t \mathbb{E}[O_0] = \theta_r^\top (I - \gamma M_i)^{-1} \mathbb{E}[O_0],$$
where the last equality holds when $\|M_i\| < \frac{1}{\gamma}$.
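A quick numerical illustration of this identity (with a synthetic $M_i$ and $\theta_r$, purely for intuition) compares the closed form against a truncated discounted sum:

```python
import numpy as np

rng = np.random.default_rng(2)
d, gamma = 4, 0.9
M = rng.normal(size=(d, d))
M *= 0.95 / (gamma * np.linalg.norm(M, 2))   # enforce ||M|| < 1/gamma
theta_r, o0 = rng.normal(size=d), rng.normal(size=d)
closed = theta_r @ np.linalg.solve(np.eye(d) - gamma * M, o0)
series, x = 0.0, o0.copy()
for t in range(2000):                        # truncated Neumann series
    series += gamma**t * (theta_r @ x)
    x = M @ x
assert np.isclose(closed, series, atol=1e-6)
```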
A.5 PROOF OF PROPOSITION 3.5
Proof. From Lemma A.5, supposing $\lambda_z = 0$ and that $(\Lambda^*_n)^{-1}$ exists, we have:
$$z - z^* = (\Lambda^*_n)^{-1} \sum_{i=0}^{1} \sum_{j \in I_i} A_i^\top \varepsilon_j.$$
Consider $\hat{v}(\pi_0)$ obtained by plugging in $\hat{z}$ and the true dynamics $M^*_0$. The error between $\hat{v}$ and $v$ is
$$\hat{v}(\pi_0) - v(\pi_0) = \theta_r^\top (I - \gamma M^*_0)^{-1} (z_0 - z^*_0) := \beta_r^\top (z_0 - z^*_0) = (\beta_r^\top, 0, \ldots, 0)(z - z^*) = \tilde{\beta}_r^\top (z - z^*),$$
where $\beta_r = (I - \gamma M^*_0)^{-\top} \theta_r$, and $\tilde{\beta}_r$ is the extension of $\beta_r$ obtained by padding the entries at all other time steps with $0$.
Expanding the difference $(z - z^*)$, we have:
$$\hat{v}(\pi_0) - v(\pi_0) = \tilde{\beta}_r^\top (z - z^*) = \sum_{i=0}^{1} \tilde{\beta}_r^\top (\Lambda^*_n)^{-1} A_i^\top \Big(\sum_{j \in I_i} \varepsilon_j\Big) = \sum_{i=0}^{1} \tilde{\beta}_r^\top \Big(\frac{\Lambda^*_n}{n}\Big)^{-1} A_i^\top \Big(\frac{\sum_{j \in I_i} \varepsilon_j}{n}\Big)$$
$$\le \|\tilde{\beta}_r\| \sum_{i=0}^{1} \Big\|\Big(\frac{\Lambda^*_n}{n}\Big)^{-1} A_i^\top\Big\| \, \Big\|\frac{\sum_{j \in I_i} \varepsilon_j}{n}\Big\|.$$
By Assumption A.1 and Assumption A.2, the norm of $\tilde{\beta}_r$ equals that of $\beta_r$, which is bounded by $\|\beta_r\| \le \frac{1}{1 - \gamma C_{M_i}} \|\theta_r\|$. The matrix norm in the middle factor is bounded by Assumption A.2. Finally, by a vector concentration inequality, since $\varepsilon_j$ is norm-subGaussian (Jin et al., 2019), there exists a constant $c$ such that with probability at least $1 - \delta$:
$$\Big\|\frac{\sum_{j \in I_i} \varepsilon_j}{n}\Big\| \le c \sqrt{\frac{\log(2dT/\delta)}{n}}.$$
In sum, the error is bounded by $O(\frac{1}{\sqrt{n}})$ with probability at least $1 - \delta$, where the constant depends on $C_\varepsilon$, $C_{M_i}$, $C_\Lambda$, and $\|\theta_r\|$.
A.6 PROOF OF PROPOSITION 3.6
Proof. Since we have access to the ground truth $z^*$, we can recover the hidden state as $s_{j,t} = o_{j,t} - z^*_t$ and reduce the problem back to a standard MDP. For the detailed proof, see Proposition 11 in Miyaguchi (2021).
B REDUCE THE COMPUTATION COMPLEXITY WITH PRE-COMPUTATION
In this section, we explain how to reduce the computation complexity with pre-computation.
Pre-computation. Compute
$$M_i(0) = \left(\sum_{j \in I_i} \sum_{t=1}^{T-1} o_{j,t+1} o_{j,t}^\top\right) \left(\sum_{j \in I_i} \sum_{t=1}^{T-1} o_{j,t} o_{j,t}^\top\right)^{-1} \quad \text{and} \quad \bar{o}_t = \sum_{j \in I_i} o_{j,t}.$$
The pre-computation requires $O(nTd^2 + d^3)$ operations, where $d^2$ is the cost of each outer product and $d^3$ is the cost of the matrix inversion after summing the matrices.
In Each Iteration. The $z$-dependent sums defining $M_i(z)$ can be assembled from the precomputed statistics. For the numerator,
$$\sum_{j \in I_i} \sum_{t=1}^{T-1} (o_{j,t+1} - z_{t+1})(o_{j,t} - z_t)^\top = \sum_{j \in I_i} \sum_{t=1}^{T-1} o_{j,t+1} o_{j,t}^\top - \sum_{t=1}^{T-1} z_{t+1} \bar{o}_t^\top - \sum_{t=1}^{T-1} \bar{o}_{t+1} z_t^\top + n_i \sum_{t=1}^{T-1} z_{t+1} z_t^\top,$$
and the denominator expands analogously,
which requires $O(Td^2)$ operations. Similarly, the computation of
$$z(G) = \Big(\sum_{i=0}^{k} n_i G_i\Big)^{-1} \Big(\sum_{i=0}^{k} G_i \bar{o}^{(i)}\Big), \quad \text{where } \bar{o}^{(i)} = \sum_{j \in I_i} o_j \text{ stacks the precomputed sums } \bar{o}_t \text{ for group } i,$$
requires $O(T^2 d^2)$ operations. Both steps are computationally scalable, since neither depends on the number of individuals $n$ (which is often much larger than $T$ and $d$).
Overall Computation Complexity. Suppose we execute the alternating minimization for $k'$ iterations (we write $k'$ to avoid clashing with the number of policy groups $k$); then the total computation complexity is $O(nTd^2 + d^3 + k'T^2d^2)$. In practice, the number of individuals $n$ is far larger than the experiment horizon $T$ and the feature dimension $d$, so the computation complexity essentially scales linearly with $n$.
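The per-iteration expansion above translates directly into cached sufficient statistics; a minimal sketch (our own illustration, not the authors' code; for simplicity it sums over all transition pairs) is:

```python
import numpy as np

def precompute(X):
    """X: (n_i, T+1, d). Cache moments needed for the O(T d^2) updates."""
    return {
        "cross": np.einsum('jtd,jte->de', X[:, 1:], X[:, :-1]),  # sum o_{t+1} o_t^T
        "auto":  np.einsum('jtd,jte->de', X[:, :-1], X[:, :-1]), # sum o_t o_t^T
        "osum":  X.sum(axis=0),                                  # o-bar_t, (T+1, d)
        "n":     X.shape[0],
    }

def numerator_M(cache, z):
    """sum_{j,t} (o_{j,t+1} - z_{t+1})(o_{j,t} - z_t)^T in O(T d^2)."""
    ob, n = cache["osum"], cache["n"]
    return (cache["cross"]
            - np.einsum('td,te->de', z[1:], ob[:-1])
            - np.einsum('td,te->de', ob[1:], z[:-1])
            + n * np.einsum('td,te->de', z[1:], z[:-1]))
```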
C EXPERIMENT DETAILS
C.1 SYNTHETIC SIMULATION
The synthetic environment generates 4 randomized matrices $M_i$ for policies $\{\pi_i\}_{i=0}^{3}$, where each entry of $M_i$ is a positive number randomly sampled from a uniform distribution on $(0, 1)$. We normalize each row so that it sums to 1, and we set $\tilde{M}_i = 0.5 I + 0.5 M_i$ as our final transition matrix. The $0.5 I$ part ensures the matrices are not too far from each other.
We generate a set of i.i.d. random vectors $\eta_t \sim \mathcal{N}(0, 1.5 I)$ and set $z_{t+1} = z_t + \eta_t$ recursively. We then let $\tilde{z}_t = \alpha_t z_t$ be the final exogenous noise, where $\alpha_t = e^{\beta_t}$ and $\beta_t \sim \mathcal{N}(0, 0.5 I)$, i.i.d. All the parameters ($z_t$ and $M_i$) of the dynamics are fixed once generated, and we use the dynamics to generate the observations for each individual, following
$$s_{t+1} = M_i s_t + \varepsilon_t, \quad o_t = s_t + \alpha z_t, \ \forall t,$$
where $\varepsilon_t$ is drawn independently from a standard normal distribution, and $\alpha$ controls the level of exogenous noise.
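An illustrative generator for this synthetic environment could look as follows; the constants follow the text, while the initial-state distribution and seeding are our assumptions.

```python
import numpy as np

def make_dynamics(d, T, n_policies=4, seed=0):
    rng = np.random.default_rng(seed)
    Ms = []
    for _ in range(n_policies):
        M = rng.uniform(0.0, 1.0, size=(d, d))
        M /= M.sum(axis=1, keepdims=True)            # rows sum to 1
        Ms.append(0.5 * np.eye(d) + 0.5 * M)         # M-tilde = 0.5 I + 0.5 M
    eta = rng.normal(0.0, np.sqrt(1.5), size=(T + 1, d))
    z = np.cumsum(eta, axis=0)                       # z_{t+1} = z_t + eta_t
    z *= np.exp(rng.normal(0.0, np.sqrt(0.5), size=(T + 1, d)))  # alpha_t = e^beta_t
    return Ms, z

def rollout(M, z, n, alpha, rng):
    """n trajectories with s_{t+1} = M s_t + eps_t and o_t = s_t + alpha z_t."""
    T1, d = z.shape
    s = rng.normal(size=(n, d))                      # assumed initial states
    obs = np.empty((n, T1, d))
    for t in range(T1):
        obs[:, t] = s + alpha * z[t]
        s = s @ M.T + rng.normal(size=(n, d))
    return obs
```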
C.2 POLICY CONSTRUCTION IN THE TYPE-1 DIABETES SIMULATOR
The Basal-Bolus policy is a parametrized policy based on the amount of insulin that a person with diabetes is instructed to inject prior to eating a meal (Bastani, 2014):
$$\text{injection} = \frac{\text{current blood glucose} - \text{target blood glucose}}{CF} + \frac{\text{meal size}}{CR},$$
where $CF$ and $CR$ are parameters based on patient information such as body weight, which are already specified in the simulator.
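The dosing rule is a one-liner; for concreteness, a direct transcription (with $CF$ and $CR$ assumed to come from the simulator's patient parameters) is:

```python
def bb_injection(cgm, target, meal_size, CF, CR):
    """Basal-Bolus insulin dose before a meal (units are simulator-specific)."""
    return (cgm - target) / CF + meal_size / CR
```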
We set our two treatment policies with target blood glucose levels of 145 and 130 (compared to the control level of 140), and we increase the noise in the insulin pump simulator in both treatment policies.
C.3 RANDOM PATIENT GENERATION IN THE TYPE-1 DIABETES SIMULATOR
The Type-1 Diabetes simulator pre-stores the parameters of 30 patients. To randomly generate a new patient, we randomly pick two different patients A and B, draw a random mixing coefficient $\alpha \sim U(0, 0.2)$, and mix the parameters of the new patient as
$$\theta = (1 - \alpha) \theta_A + \alpha \theta_B,$$
where $\theta_A$ and $\theta_B$ are the parameters of patients A and B, respectively. Since patient A carries more weight in the mixture, the parameters in the Basal-Bolus policy, $CF$ and $CR$, follow patient A's parameters.
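For illustration, the mixing step can be written as the following small helper (parameter vectors and RNG handling are assumptions):

```python
import numpy as np

def mix_patients(theta_A, theta_B, rng):
    """Convex combination of two stored patients' parameter vectors."""
    alpha = rng.uniform(0.0, 0.2)        # patient A keeps at least 80% weight
    return (1.0 - alpha) * theta_A + alpha * theta_B
```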
C.4 FULL RESULTS FOR ALL THE ONLINE STORE EXPERIMENTS. | 1. What is the focus of the paper regarding reinforcement learning and long-term effects?
2. What are the strengths of the proposed algorithm, particularly its simplicity and relevance to practical problems?
3. Do you have any concerns or suggestions regarding the paper's assumptions and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a reinforcement learning based algorithm to estimate long-term effects for a class of nonstationary problems. Empirical results on both synthetic and real datasets show the potential of the proposed algorithm.
Strengths And Weaknesses
The paper studies a practical and important problem: estimating long-term effects under nonstationary dynamics. The proposed algorithm is natural and simple.
My main concern is the linearity assumptions. Is it possible to generalize the results to generalized linear models? Would the prediction value be heavily biased for generalized linear models? Another comment is that there are other papers that use a reinforcement learning approach to estimate long-term effects, for example, [1] and the literature on dynamic treatment regimes, and I think these papers need to be cited for comparison.
[1] Chengchun Shi, Xiaoyu Wang, Shikai Luo, Hongtu Zhu, Jieping Ye, and Rui Song, Dynamic Causal Effects Evaluation in A/B Testing with a Reinforcement Learning Framework, 2020.
Clarity, Quality, Novelty And Reproducibility
The overall presentation is good. It is not hard to understand the paper. The proposed algorithm and analysis are pretty natural. It seems that the code is not provided, so it is hard to judge the reproducibility. |
1. What are the strengths and weaknesses of the paper regarding its contributions, algorithm, and theoretical justifications?
2. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
3. What are the missing literature and prior works that should be included and contrasted with the proposed approach?
4. How can the author(s) improve the paper by relaxing the linear MDP assumption, justifying the use of reinforcement learning, formulating the problem using hypothesis testing, providing rigorous uncertainty quantification, and conducting more detailed theoretical analysis?
5. Is there any concern about the simulation setting used in the paper, and how could it be improved?
6. Are there any questions or concerns regarding the reproducibility of the numerical experiments, and what could be done to address them? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper adopts a reinforcement learning framework to estimate long-term treatment effects in nonstationary environments. The main contribution lies in the development of a practical algorithm to estimate causal effects under nonstationarity. The algorithm is justified via theoretical results, synthetic environments, and a real-world online store dataset.
Strengths And Weaknesses
Strengths:
Nonstationarity is commonly seen in real-world applications, and most existing work on policy evaluation does not take it into consideration. The paper takes the issue of nonstationarity seriously and borrows ideas from the economics literature to deal with nonstationary environments.
A practical algorithm is developed for long-term treatment effect evaluation under nonstationarity. The algorithm is also easy to implement. Various practical considerations are discussed and several extensions are outlined.
Theoretical justifications of the proposed algorithm are provided. In addition, an online-store dataset is employed to evaluate the proposed algorithm in a real application.
Weaknesses:
Missing literature on A/B testing & causal inference. There is a huge literature on A/B testing. In addition, there is a growing literature on estimating long-term treatment effects in causal inference, see e.g., https://scholar.harvard.edu/files/shephard/files/cause20170718.pdf and the papers that cite it. These works are not discussed in the paper, but shall be included and potentially contrasted (see also point #4 below). More importantly, there are prior works that proposed to use reinforcement learning for long-term treatment effect estimation in A/B testing, see e.g., https://www.tandfonline.com/doi/full/10.1080/01621459.2022.2027776, which also adopt ideas from the off-policy evaluation literature. The author(s) might want to discuss in detail the differences from these papers.
The contributions are somewhat overstated given the prior work on applying reinforcement learning to long-term treatment effect estimation. I suggest the author(s) focus on the issue of nonstationarity and revise the contribution statement, the introduction section, and the summary accordingly. You might also want to include "nonstationarity/nonstationary environments" in the title to reflect the contributions of the paper more accurately.
The linear MDP assumption is strong and shall be relaxed if possible.
The use of the reinforcement learning framework is not well justified. In particular, under the current experimental design, each subject receives one static treatment all the time, so existing A/B testing methods are also applicable for causal effect estimation. The paper would benefit from a detailed discussion of the advantage of employing reinforcement learning over standard A/B testing methods.
Uncertainty quantification is not studied in the paper. In addition to the point estimator, decision makers in A/B testing are equally interested in understanding whether a new product is significantly better than an old one. The author(s) might want to formulate the problem using hypothesis testing and develop a rigorous procedure to test these hypotheses.
Some of the descriptions are not very accurate and some details are missing. For instance, in my opinion, off-policy evaluation might not be very related to the problem the author(s) studied. In particular, off-policy evaluation considers the scenario where the behavior policy differs from the target policy. However, in the current setting, each subject receives one of the two target policies all the time. This is essentially an "on-policy" (as opposed to off-policy) setting.
In Propositions 3.5 and 3.6, an asymptotic rate of convergence is provided to quantify the difference between the proposed estimator and the ground truth. It would be better to develop nonasymptotic error bounds, not only as a function of the sample size but of the other relevant parameters in the problem as well.
The Type-1 diabetes setting is not very realistic. In practice, it would be impossible to get data for over 10 thousand patients. It might be better to use another environment if your method requires a large number of trajectories.
I might miss something, but I did not find the link for the code. So cannot check the reproducibility of the numerical experiments.
Clarity, Quality, Novelty And Reproducibility
The presentation is clear in general.
The quality is good. But the paper would benefit from a substantial revision to better highlight the contribution, relax the linearity assumption, justify the use of reinforcement learning, formulate the problem based on hypothesis testing, provide rigorous uncertainty quantification, conduct more detailed theoretical analysis, use a different simulation setting and include the link for the code.
The main novelty includes the development of a practical algorithm for long-term treatment effect evaluation under nonstationarity. The associated theoretical analysis is novel as well. Nonetheless, some of the claimed contributions have been developed and employed in the existing literature. |
Reviewer Questions

1. What is the focus of the paper regarding non-stationary dynamics?
2. What are the strengths and weaknesses of the proposed method for treatment effect estimation?
3. Do you have any concerns or questions regarding the assumptions made in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any recent works in causal inference that the paper could refer to?

Summary Of The Paper
The paper leverages a variant of the OPE estimator to estimate average treatment effects under non-stationary dynamics. The problem is interesting and challenging; however, the method proposed in the paper might be of limited practical use.
Strengths And Weaknesses
Strengths: The paper proposes a treatment effect estimator for non-stationary dynamics. The authors justify the estimator both theoretically and empirically.
Weaknesses:
The authors claim to estimate the average long-term reward. However, the quantity defined in equation (1) is a discounted sum of rewards, which differs from the average-reward criterion. From this perspective, the long-term average reward in this paper is not the typical setting in the RL literature (Liao et al., 2020; 2021). This might mislead readers.
The paper makes linear assumptions on the transition kernel and reward, i.e., Assumptions 2.1 and 2.3, which can be regarded as an analogue of the linear MDP assumption in the RL literature. However, among the OPE methods referred to in the paper, the visitation-based approaches do not require such linear assumptions.
Assumption 3.1: the linear additive decomposition of the observation O_t is restrictive. In a real-world environment, endogenous and exogenous noise can hardly be separated in such a simple linear way. Otherwise, the authors should provide empirical evidence that the decomposition is indeed linear.
Does the Monte Carlo estimator Ŝ_0 have a large variance when there is a large distribution shift in the off-policy data? If so, how can the variance of the estimator be controlled?
A computational complexity analysis should be provided. The alternating optimization over M_0, M_1, and z seems to lead to a non-trivial optimization problem. In addition, could the authors show, at least heuristically, that the algorithm converges?
Proposition 3.5: a finite-sample bound should be provided explicitly, not just an informal rate of convergence. Moreover, the Markov process is non-stationary and the data are dependent; I cannot find where the authors address these issues when deriving the theoretical results.
References:
Liao, P., Klasnja, P., and Murphy, S. (2021), "Off-policy estimation of long-term average outcomes with applications to mobile health," Journal of the American Statistical Association, 116, 382–391.
Liao, P., Qi, Z., Klasnja, P., and Murphy, S. (2020), "Batch policy learning in average reward Markov decision processes," arXiv preprint arXiv:2007.11771.
Clarity, Quality, Novelty And Reproducibility
The definitions of important quantities are missing. For example, what are the formal definitions of R_t, π_0, and ∆̂? Many other terms also need to be formally defined in the paper.
The closed-form result in (5) is standard in the existing literature, and it would be better for the authors to clarify the contribution of this part of the current work.
What does the estimator ẑ_0 represent?
In the related work, it would be better to cite more recent works in causal inference.
ICLR | Title
A Reinforcement Learning Approach to Estimating Long-term Treatment Effects
Abstract
Randomized experiments (a.k.a. A/B tests) are a powerful tool for estimating treatment effects, to inform decisions making in business, healthcare and other applications. In many problems, the treatment has a lasting effect that evolves over time. A limitation with randomized experiments is that they do not easily extend to measure long-term effects, since running long experiments is time-consuming and expensive. In this paper, we take a reinforcement learning (RL) approach that estimates the average reward in a Markov process. Motivated by real-world scenarios where the observed state transition is nonstationary, we develop a new algorithm for a class of nonstationary problems, and demonstrate promising results in two synthetic datasets and one online store dataset.
1 INTRODUCTION
Randomized experiments (a.k.a. A/B tests) are a powerful tool for estimating treatment effects, to inform decisions making in business, healthcare and other applications. In an experiment, units like customers or patients are randomly split into a treatment bucket and a control bucket. For example, in a rideshare app, drivers in the control and treatment buckets are matched to customers in different ways (e.g., with different spatial ranges or different ranking functions). After we expose customers to one of these options for a period of time, usually a few days or weeks, we can record the corresponding customer engagements, and run a statistical hypothesis test on the engagement data to detect if there is a statistically significant difference in customer preference of treatment over control. The result will inform whether the app should launch the treatment or control.
While this method has been widely successful (e.g., in online applications (Kohavi et al., 2020)), it typically measures treatment effect during the short experiment window. However, in many problems, a treatment has a lasting effect that evolves over time. For example, a treatment that increases installation of a mobile app may result in a drop of short-term profit due to promotional benefits like discounts. But the installation allows the customer to benefit from the app, which will increase future engagements and profit in the long term. A limitation with standard randomized experiments is that they do not easily extend to measure long-term effects. We can run a long experiment for months or years to measure the long-term impacts, which however is time-consuming and expensive. We can also design proxy signals that are believed to correlate with long-term engagements (Kohavi et al., 2009), but finding a reliable proxy is challenging in practice. Another solution is the surrogacy method that estimates delayed treatment impacts from surrogate changes during the experiment (Athey et al., 2019). However, it does not estimate long-term impacts resulting from long-term treatment exposure, but rather from short-term exposure during the experiment.
Shi et al. (2022b) mitigates the limitation of standard randomized experiment by framing the longterm effect as a reinforcement learning (RL) problem. Their method is closely related to recent advances in infinite-horizon off-policy evaluation (OPE) (Liu et al., 2018; Nachum et al., 2019a; Xie et al., 2019; Kallus & Uehara, 2020; Uehara et al., 2020; Chandak et al., 2021). However, their solution relies on stationary Markov assumption, which fails to capture the real-world nonstationary dynamics. Motivated by real-world scenarios where the observed state transitions are nonstationary, we consider a class of nonstationary problems, where the observation consists of two additive terms: an endogenous term that follows a stationary Markov process, and an exogenous
term that is time-varying but independent of the policy. Based on this assumption, we develop a new algorithm to jointly estimate long-term reward and the exogenous variables.
Our contributions are threefold. First, it is a novel application of RL to estimate long-term treatment effects, which is challenging for standard randomized experiments. Second, we develop an estimator for a class of nonstationary problems that are motivated by real-world scenarios, and give a preliminary theoretical analysis. Third, we demonstrate promising results in two synthetic datasets and one online store dataset.
2 BACKGROUND
2.1 LONG-TERM TREATMENT EFFECTS
Let π0 and π1 be the control and treatment policies, used to serve individual in respective buckets. In the rideshare example, a policy may decide how to match a driver to a nearby request. During the experiment, each individual (the driver) is randomly assigned to one of the policy groups, and we observe a sequence of behavior features of that individual under the influence of the assigned policy. We use variable D ∈ {0, 1} to denote the random assignment of an individual to one of the policies. The observed features are denoted as a sequence of random variable in Rd
O0, O1, . . . , Ot, . . . ,
where the subscript t indicates time step in the sequence. A time step may be one day or one week, depending on the application. Feature Ot consists of information like number of pickup orders. We are interested in estimating the difference in average long-term reward between treatment and control policies:
∆ = E[ ∞∑ t=0 γtRt|D = 1]− E[ ∞∑ t=0 γtRt|D = 0], (1)
where E averages over individuals and their stochastic sequence of engagements, Rt = r(Ot) is the reward signal (e.g., customer rating) at time step t, following a pre-defined reward function r : Rd → R, and γ ∈ (0, 1) is the discounted factor. The discounted factor γ is a hyper-parameter specified by the decision maker to indicate how much they value future reward over the present. The closer γ is to 1, the greater weight future rewards carry in the discounted sum.
Suppose we have run a randomized experiment with the two policies for a short period of T steps. In the experiment, a set of n individuals are randomly split and exposed to one of the two policies π0 and π1. We denote by dj ∈ {0, 1} the policy assignment of individual j, and Ii the index set of individuals assigned to πi, i.e., j ∈ Ii iff dj = i. The in-experiment trajectory of individual j is:
τj = {oj,0, oj,1, . . . , oj,T }. The in-experiment dataset is the collection of all individual data as Dn = {(τj , dj)}nj=1. Our goal is to find an estimator ∆̂(Dn) ≈ ∆.
2.2 ESTIMATION UNDER STATIONARY MARKOVIAN DYNAMICS
Inspired by recent advances in off-policy evaluation (OPE) (e.g. Liu et al., 2018; Nachum et al., 2019b), the simplest assumption is a fully observed Markov Process that the observation in each time step can fully predict the future distribution under a stationary dynamic kernel. In this paper, we assume the dynamics kernel and reward function are both linear, following the setting in Parr et al. (2008). Linear representations are popular in the RL literature (e.g., Shi et al., 2022b) , and often preferable in industrial applications due to simplicity and greater model interpretability. Assumption 2.1. (Linear Dynamics) there is a matrix Mi such that
E[Ot+1|Ot = o,D = i] = Mio, ∀t ∈ N, i ∈ {0, 1}. (2) Remark 2.2. Unlike standard RL, we don’t have an explicit action for a policy. The difference between the control and treatment policy is revealed by different transition matrix M . Assumption 2.3. (Linear Reward) There is a coefficient vector θr ∈ Rd such that
r(Ot) = θ ⊤ r Ot, ∀t ∈ N. (3)
Remark 2.4. The reward signal may be one of the observed features. For example, if we are interested in customer rating, and rating is one of the observe features, then θr is just a one-hot vector with 1 in the corresponding coordinate. When the reward is complex with unknown coefficient, we can use ordinary least-squares to estimate the coefficient θr. Proposition 2.5. Under Assumption 2.1 and 2.3, if the spectral norm of Mi is smaller than 1γ , then the expected long-term reward of policy πi, v(πi) := E[ ∑∞ t=0 γ
tRt|D = i], can be obtained by: v(πi) = θ ⊤ r (I − γMi)−1Ō(i)0 , where Ō (i) 0 := E[O0|D = i]. (4)
The only remaining step is to estimate Ō(i)0 and Mi. The former can be directly estimated from the Monte Carlo average of the experimental data: Ô(i)0 = 1 ni ∑ j∈Ii o0,j , where ni = |Ii| is the number of individuals assigned to policy πi. To estimate the latter, we may use ordinary least-squares on observed transitions:
M̂i = ∑ j∈Ii T−1∑ t=0 oj,t+1o ⊤ j,t ∑ j∈Ii T−1∑ t=0 oj,to ⊤ j,t −1 . (5) The detailed derivation can be found in (Parr et al., 2008). Once we get the estimated value of v̂i ≈ v(πi), the long term impact in Eq. (1) can be estimated as:
∆̂ = v̂1 − v̂0. Remark 2.6. Although this a model-based estimator, it is equivalent to other OPE estimator in general under linear Markovian assumption (e.g., Nachum et al., 2019b; Duan et al., 2020; Miyaguchi, 2021) and it enjoys similar statistical guarantees as other OPE estimators.
3 OUR METHOD
In Section 2.2, we assumed the observation Ot follows a stationary Markov process, and derived a model-based closed-form solution based on linear reward Assumption 2.3.
In reality, this model assumption has two major limitations. First, real-world environments are nonstationary. For example, in a hotel reservation system, seasonality heavily influences the prediction of the future booking count. Our stationary assumption does not capture those seasonal changes, resulting in poorly learned models and inaccurate predictions of long-term treatment effects. Second, in practice, we are unable to ensure that observed features fully capture the dynamics. OPE methods based on stationary and full observability assumptions are unlikely to work robustly in complex, real-life scenarios.
Figure 1 illustrates nonstationarity in data from an online store (see Section 5 for more details). The figure shows how the weekly average of a business metric changes in a span of 5 months, for two policies (C for control, and T4 for treatment). Such highly non-statioanary data, especially during special seasons towards the right end of the plot, are common.
However, the difference of the two policy groups remains much more stable. This is expected as both policies are affected by the same exogenous affects (seasonal variations in this example).
Figure 1 motivates a relaxed model assumption (Section 3.1), by introducing a non-stationary exogenous component on top of a stationary hidden state St. Our new assumption is that the observation Ot can be decomposed additively into two parts: an endogenous part still follows a stationary Markovian dynamic for each policy group (treatment or control); and an exogenous part which is time-varying and shared across all groups. Based on the new assumption we propose an alternating minimization algorithm that jointly estimates both transition dynamics and exogenous variables.
3.1 NONSTATIONARY MODEL RELAXATION
We assume there is an exogenous noise vector z_t for each time step t, representing linear additive exogenous noise from the uncontrollable outside world (such as seasonal effects), which applies uniformly to every individual in each treatment bucket. We relax Assumption 2.1 as follows:

Assumption 3.1 (Linear Additive Exogenous Noise). The observational feature O_t is the sum of the endogenous hidden features and the time-varying exogenous noise z_t:

O_t = S_t + z_t, ∀t ∈ N,

where z_t does not depend on the policy or any individual in the experiments, and S_t follows the linear Markovian kernel with transition matrix M_i:

E[S_{t+1} | S_t = s, D = i] = M_i s, ∀t ∈ N, i ∈ {0, 1}. (6)

Remark 3.2 (Explanation of the Linear Additive Model). Our linear additive model is inspired by the parallel trend assumption in the Difference-in-Differences (DID) estimator (Lechner et al., 2011). In real-world environments, it is impossible to capture all the covariates that may affect the dynamics. The linear additive exogenous noise z_t can be seen as a drive from the outside that is both unobserved and uncontrolled. For example, in an intelligent agriculture system, the highly nonstationary weather condition can be seen as exogenous noise that we cannot control, while the amount of water and fertilizer that affects the growth of the plant can be seen as the hidden state controlled by a pre-defined stationary policy. We add up those two factors as the features (e.g., the condition of the crop) we observe in the real world.
From Assumption 3.1 and the linear reward assumption in 2.3, the closed form of v(π_i) can be rewritten as:

Proposition 3.3. Under Assumptions 3.1 and 2.3, suppose the series Σ_{t=0}^∞ γ^t z_t converges, define v(z_∞) := θ_r^⊤ Σ_{t=0}^∞ γ^t z_t, and suppose the spectral norm of M_i is smaller than 1/γ. Then the expected long-term reward can be obtained by:

v(π_i) = θ_r^⊤ (I − γM_i)^{-1} S̄_0^{(i)} + v(z_∞), where S̄_0^{(i)} = E[S_0 | D = i]. (7)
The long-term reward in Eq. (7) contains v(z_∞), which depends on the unknown exogenous noise sequence outside the experimental window and is thus unpredictable. However, the long-term treatment effect, ∆(π_1, π_0) = v(π_1) − v(π_0), cancels out the dependency on the exogenous term v(z_∞). For simplicity, we redefine v(π_i) = θ_r^⊤ (I − γM_i)^{-1} S̄_0^{(i)} without the term v(z_∞). Therefore, we only need to estimate S̄_0^{(i)} and M_i. Once we have access to ẑ_0, we can estimate S̄_0^{(i)} by a Monte Carlo average: Ŝ_0^{(i)} = (1/n_i) Σ_{j∈I_i} o_{0,j} − ẑ_0. The next question is how to estimate the in-experiment exogenous variables z_t and the underlying transition kernels.
3.2 OPTIMIZATION FRAMEWORK
We propose to optimize {z_t}_{0≤t≤T} and {M_0, M_1} jointly under a single loss function, in the same spirit as the model-based approach of minimizing the reconstruction loss of each transition pair.
For each individual j in treatment group i, Assumption 3.1 implies that at time step t + 1 the observation o_{j,t+1} satisfies:

o_{j,t+1} − z_{t+1} = M_i(o_{j,t} − z_t) + ε_{j,t}, ∀j ∈ I_i, 0 ≤ t ≤ T − 1, (8)

where ε_{j,t} is a zero-mean noise term, so that M_i(o_{j,t} − z_t) = E[S_{t+1} | S_t = o_{j,t} − z_t, D = i]. Inspired by Eq. (8), given the observation history D_n, in order to minimize the empirical reconstruction risk over the transition pairs (o_{j,t}, o_{j,t+1}), we construct the following loss function:

L(M_0, M_1, {z_t}_{0≤t≤T}; D_n) = Σ_{i=0}^{1} Σ_{j∈I_i} Σ_{t=0}^{T−1} ∥o_{j,t+1} − z_{t+1} − M_i(o_{j,t} − z_t)∥_2^2. (9)
To simplify notation, Eq. (9) can be rewritten in vectorized form:

L(M_0, M_1, z; D_n) = Σ_{i=0}^{1} Σ_{j∈I_i} ∥A_i(o_j − z)∥_2^2, (10)
Algorithm 1 Estimating Long-Term Effect Under Non-stationary Dynamics

Input: in-experiment training data D_n = {(τ_j, d_j)}_{j=1}^n, where τ_j = (o_{j,0}, o_{j,1}, ..., o_{j,T}) are the in-experiment observation features of individual j and d_j ∈ {0, 1} indicates the policy group to which individual j is assigned.
Initialize the exogenous noise estimate ẑ = 0.
Optimization:
while not converged do
  Update M̂_i as the ordinary least-squares solution given the current ẑ:
    M̂_i = (Σ_{j∈I_i} Σ_{t=1}^{T−1} (o_{j,t+1} − ẑ_{t+1})(o_{j,t} − ẑ_t)^⊤)(Σ_{j∈I_i} Σ_{t=1}^{T−1} (o_{j,t} − ẑ_t)(o_{j,t} − ẑ_t)^⊤)^{-1}.
  Update ẑ according to Eq. (12):
    ẑ = (n_0 G_0 + n_1 G_1)^{-1} (Σ_{i=0}^{1} Σ_{j∈I_i} G_i o_j).
end while
Evaluation: compute v̂_i = θ_r^⊤ (I − γM̂_i)^{-1} (Ô_0^{(i)} − ẑ_0), where Ô_0^{(i)} = (1/n_i) Σ_{j∈I_i} o_{0,j}.
Output the long-term impact estimate ∆̂ = v̂_1 − v̂_0.
where o_j = [o_{j,0}; o_{j,1}; ...; o_{j,T}] and z = [z_0; z_1; ...; z_T] are column vectors stacked over the experiment horizon, and A_i is a dT × d(T + 1) matrix constructed from blocks of M_i:

A_i = \begin{bmatrix} -M_i & I & & & 0 \\ & -M_i & I & & \\ & & \ddots & \ddots & \\ 0 & & & -M_i & I \end{bmatrix}_{dT × d(T+1)}. (11)
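To make the structure of Eq. (11) concrete, the sketch below (ours, not the authors' code) materializes A_i and checks that A_i(o_j − z) stacks the transition residuals; Appendix B describes how to avoid forming A_i explicitly for large T.

```python
import numpy as np

def build_A(M, T):
    """The dT x d(T+1) block matrix A_i of Eq. (11)."""
    d = M.shape[0]
    A = np.zeros((d * T, d * (T + 1)))
    for t in range(T):
        A[d*t:d*(t+1), d*t:d*(t+1)] = -M             # -M_i on the diagonal block
        A[d*t:d*(t+1), d*(t+1):d*(t+2)] = np.eye(d)  # I on the superdiagonal block
    return A

# Sanity check: A_i (o_j - z) stacks (o_{t+1} - z_{t+1}) - M_i (o_t - z_t)
d, T = 3, 4
rng = np.random.default_rng(0)
M = rng.normal(size=(d, d))
o = rng.normal(size=(T + 1, d))
z = rng.normal(size=(T + 1, d))
res = build_A(M, T) @ (o - z).reshape(-1)
expected = (o[1:] - z[1:]) - (o[:-1] - z[:-1]) @ M.T
assert np.allclose(res, expected.reshape(-1))
```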
3.3 ALTERNATING MINIMIZATION
To reconstruct M_i and z, we apply alternating minimization to the loss function L(M_0, M_1, z; D_n) in Eq. (10). By examining the zero-gradient point of the loss function, under a proper non-degeneracy assumption (see Appendix for details), we have:

Proposition 3.4. Suppose (n_0 G_0 + n_1 G_1) is nonsingular. Then the minimizer of z given M_i has the closed form:

argmin_z L(M_0, M_1, z; D_n) = (n_0 G_0 + n_1 G_1)^{-1} (Σ_{i=0}^{1} Σ_{j∈I_i} G_i o_j), where G_i = A_i^⊤ A_i. (12)
The minimizer of M_i given z is similar to Eq. (5), except that we subtract the exogenous part z_t from the observations:

M̂_i = argmin_{M_i} L(M_0, M_1, z; D_n) = (Σ_{j∈I_i} Σ_{t=1}^{T−1} (o_{j,t+1} − z_{t+1})(o_{j,t} − z_t)^⊤)(Σ_{j∈I_i} Σ_{t=1}^{T−1} (o_{j,t} − z_t)(o_{j,t} − z_t)^⊤)^{-1}. (13)
The final optimization process is summarized in Algorithm 1.
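A minimal NumPy sketch of Algorithm 1 is given below. It is an illustration under assumptions: the ridge terms `lam_M` and `lam_z` anticipate the regularization of Section 3.5, a fixed iteration count replaces the convergence test, and all names are ours. The helper `block_G` computes G_i = A_i^⊤ A_i as in the previous sketch.

```python
import numpy as np

def block_G(M, T):
    """G_i = A_i^T A_i for the block matrix A_i of Eq. (11)."""
    d = M.shape[0]
    A = np.zeros((d * T, d * (T + 1)))
    for t in range(T):
        A[d*t:d*(t+1), d*t:d*(t+1)] = -M
        A[d*t:d*(t+1), d*(t+1):d*(t+2)] = np.eye(d)
    return A.T @ A

def alternating_minimization(obs_by_group, n_iters=50, lam_z=1e-3, lam_M=1e-3):
    """Sketch of Algorithm 1; obs_by_group[i] has shape [n_i, T+1, d]."""
    T = obs_by_group[0].shape[1] - 1
    d = obs_by_group[0].shape[2]
    z = np.zeros(d * (T + 1))                  # stacked exogenous noise z_0..z_T
    Ms = [np.eye(d) for _ in obs_by_group]
    for _ in range(n_iters):
        z_mat = z.reshape(T + 1, d)
        # M-step: regularized OLS on exogenous-corrected transitions (Eq. 13)
        for i, obs in enumerate(obs_by_group):
            s = obs - z_mat[None]              # s_{j,t} = o_{j,t} - z_t
            s_t = s[:, :-1].reshape(-1, d)
            s_next = s[:, 1:].reshape(-1, d)
            Ms[i] = (lam_M * np.eye(d) + s_next.T @ s_t) @ np.linalg.inv(
                lam_M * np.eye(d) + s_t.T @ s_t)
        # z-step: closed-form ridge solution (Eqs. 12 and 14)
        lhs = lam_z * np.eye(d * (T + 1))
        rhs = np.zeros(d * (T + 1))
        for i, obs in enumerate(obs_by_group):
            G = block_G(Ms[i], T)
            lhs += obs.shape[0] * G
            rhs += G @ obs.reshape(obs.shape[0], -1).sum(axis=0)
        z = np.linalg.solve(lhs, rhs)
    return Ms, z
```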
3.4 THEORETICAL ANALYSIS
We give a preliminary theoretical analysis in this section to provide insight into the quality of our estimator when partial oracle information is given. We leave quantifying the error of the estimator at the convergence point of the alternating minimization to future work.
To simplify the analysis, we first assume access to the true transition matrices M_i, and quantify the error between v̂(π_i) and the true policy value v(π_i) for each policy π_i.

Proposition 3.5. Suppose the noise and matrices are bounded as in Assumptions A.1 and A.2, and suppose the groups are equally divided, n_0 = n_1 = n/2. When we have access to the oracle transition matrices M_i = M_i^*, i ∈ {0, 1}, let ẑ = argmin_z L(M_0^*, M_1^*, z; D_n). Plugging ẑ into the estimate v̂(π_i), we have

|v̂(π_i) − v(π_i)| = O(1/√n),

with probability at least 1 − δ.
In the second analysis, we assume access to an accurate z. In this case, the estimation of M̂ reduces to the stationary case of Assumption 2.1, where the hidden state variable s_t = o_t − z_t is fully recovered. We follow the linear-MDP analyses (e.g., Duan et al., 2020; Miyaguchi, 2021) to characterize the error.

Proposition 3.6 (Proposition 11 in Miyaguchi (2021)). Suppose we have access to the oracle exogenous noise z^* during the experimental period, and let M̂_i = argmin_{M_i} L({M_i}, z^*; D_n) in Eq. (13). Under the assumptions of Proposition 11 in Miyaguchi (2021), the plug-in estimator v̂ with M̂_i satisfies:

|v̂(π_i) − v(π_i)| = O(n^{−1/(2d+2)}),

with probability at least 1 − δ.
3.5 PRACTICAL CONSIDERATIONS
Regularize the Transition Dynamic Matrices. Degenerate cases may occur during the alternating minimization when either (1) the spectral norm is too large, i.e., ∥M_i∥_2 ≥ 1/γ, so that the long-term operator (I − γM_i)^{-1} = Σ_{t=0}^∞ γ^t M_i^t in Eq. (7) diverges, or (2) the matrix inversion in the update of M_i in Eq. (13) is not well defined. To avoid these scenarios and stabilize the computation, we add a regularization term λ_i∥M_i − I_d∥_F^2 in our experiments. The intuition is that the transition matrix should be close to the identity matrix, as in practice the treatment policy typically deviates from the control policy in an incremental manner.

After adding the regularization, the closed-form minimizer of M_i for the regularized loss function becomes:

M̂_i = (λ_i I_d + Σ_{j∈I_i} Σ_{t=1}^{T−1} (o_{j,t+1} − z_{t+1})(o_{j,t} − z_t)^⊤)(λ_i I_d + Σ_{j∈I_i} Σ_{t=1}^{T−1} (o_{j,t} − z_t)(o_{j,t} − z_t)^⊤)^{-1}.
Regularize the Exogenous Variable. There is a challenge in deriving the closed-form z in Eq. (12): n_0 G_0 + n_1 G_1 can be singular or nearly singular. By definition, each G_i is singular. Moreover, if the minimal eigenvalue of (n_0 G_0 + n_1 G_1) is not controlled, e.g., is close to zero, the update step on z is ill-conditioned and noise can be magnified along the direction of the minimal eigenvector. It is therefore crucial to regularize z.
To tackle these possibly degenerate circumstances, a natural idea is to regularize the ℓ2 norm of z, yielding the regularized loss function:

L_λ(z, M_0, M_1; D) = L(z, M_0, M_1; D) + λ_z∥z∥_2^2. (14)

Its corresponding minimizer ẑ can be written as:

ẑ = (λ_z I + n_0 G_0 + n_1 G_1)^{-1} (Σ_{i=0}^{1} Σ_{j∈I_i} G_i o_j),

where I is the identity matrix of dimension d(T + 1). It is worth mentioning that as the regularization parameter λ_z increases to infinity, z goes to 0, and the solution reduces to the stationary case of Assumption 2.1.
Extend to Multiple Treatment Policies. The optimization framework easily extends to the case of multiple treatment policies. Suppose we have k different treatment policies π_1, π_2, ..., π_k and let π_0 be the control policy. The closed-form solution for ẑ over the datasets of the different treatment groups is

ẑ_λ = (λI + Σ_{i=0}^{k} n_i G_i)^{-1} (Σ_{i=0}^{k} Σ_{j∈I_i} G_i o_j),

and the closed-form update for M_i stays the same. The final estimate of the treatment effect for policy π_i is ∆̂_i = v̂_i − v̂_0.
4 RELATED WORK
Estimating long-term treatment effects. Our work is related to causal inference with temporal data. The surrogate index method (Athey et al., 2019; 2020) makes a different assumption: that the long-term effect is independent of the treatment conditioned on the surrogate index measured during the experiment. It then estimates long-term impacts resulting from short-term exposure during the experiment. In contrast, our work aims to estimate long-term impacts resulting from long-term exposure. Time series methods (e.g., Bojinov & Shephard, 2019) require probabilistic treatments, which allow an individual to be exposed to different treatments at different time periods during an experiment. They then estimate the temporal treatment effect, averaged over all time steps, which differs from the traditional treatment effect averaged over randomized individuals.
Our method draws inspiration from off-policy evaluation (OPE) and related areas, whose goal is to estimate the long-term policy value, usually from an offline dataset collected under different policies. Most early work focuses on the family of inverse propensity score estimators, which are prone to high variance in long-horizon problems (e.g., Precup et al., 2000; Murphy et al., 2001; Jiang & Li, 2016). Recently, there has been growing interest in long- and even infinite-horizon settings (Liu et al., 2018; Nachum et al., 2019a; Xie et al., 2019; Tang et al., 2020; Uehara et al., 2020; Dai et al., 2020; Chandak et al., 2021). In particular, Shi et al. (2022b) consider a similar problem of estimating long-term impacts, which is comparable to our stationary baseline. However, these methods either rely on the stationarity assumption, which is violated in many applications, or consider the general nonstationary Markov decision process (Kallus & Uehara, 2020), which does not leverage domain-specific assumptions.
RL in nonstationary or confounded environments. Our model is a special case of the Partially Observable Markov Decision Process (POMDP) (Åström, 1965; Kaelbling et al., 1998). OPE in general POMDPs remains challenging unless various assumptions are made (e.g., Tennenholtz et al., 2020; Bennett et al., 2021; Shi et al., 2022a). Most assumptions concern the causal structure of the logged data, such as the relation between state, action and confounding variables. In contrast, we make an assumption motivated by real-world data, which allows our estimator to cancel exogenous variables out of the observations.
Our assumption is also related to MDPs with Exogenous Variables (e.g., Dietterich et al., 2018; Chitnis & Lozano-Pérez, 2020), and the Dynamics Parameter MDP (DPMDP) or Hidden Parameter MDP (HiP-MDP) (Al-Shedivat et al., 2017; Xie et al., 2020). For exogenous variables, these works assume the observation features can be partitioned into two groups, where the exogenous group is not affected by the action and the endogenous group evolves as in a typical MDP; the major challenge is to infer the right partition. Several recent works (e.g., Misra et al., 2020; Du et al., 2019; Efroni et al., 2021) combine exogenous variables with rich observations in RL. This differs from our assumption that the observation is a sum of both parts, which is more natural in applications like e-commerce. DPMDP and HiP-MDP assume a meta task variable that is non-stationary and changes over time, whose dynamics can be captured by a sequential model. Our assumption can be viewed as a linear special case, but our focus is not to better characterize the system; rather, it is to remove the exogenous part for better predictions.
5 EXPERIMENTS
We evaluate our methods on three problems: a synthetic dataset, a dataset from the Type-1 Diabetes RL simulator (Xie, 2019), and a real-world dataset from an online store. The ground truth ∆ is computed either from a true simulator or using the average of the real experimental data over a long time period. We compare the plug-in estimator of the stationary solution in Eq. (4), its non-stationary variant in Algorithm 1, and a Naive Average baseline. The baseline directly uses the short-term reward average as the estimate of the long-term effect.
5.1 SYNTHETIC SIMULATION
The synthetic environment generates 4 randomized matrices M_i for policies {π_i}_{i=0}^{3} and a trajectory of randomized exogenous noise {z_t}_{t=0}^{T}. See details of the synthetic dynamics in Appendix C. The randomized sequences follow the non-stationary dynamics with a parameter α controlling the scale of the exogenous noise: o_{j,t} = s_{j,t} + αz_t, ∀j, t. We collect n trajectories for each policy until t = T (with varying T). We vary the parameters of the generating sequences: the number n of trajectories, the horizon T, the data dimension d, and the scale α of the exogenous noise. We plot the logarithmic mean squared error (MSE) for each method in Figure 2. The results show that our method (the green line) clearly outperforms all other baselines. Moreover, Figure 2(d) shows that increasing the scale of the exogenous noise does not affect the estimation accuracy of our method.
5.2 TYPE-1 DIABETES SIMULATOR
This environment is modified from an open-source implementation1 of the FDA-approved Type-1 Diabetes simulator (T1DMS) (Man et al., 2014). The environment simulates two days of an in-silico patient's life. Consumption of a meal increases the blood glucose level in the body. If the level is too high, the patient suffers from hyperglycemia; if the level is too low, the patient suffers from hypoglycemia. The goal is to control the blood glucose level by regulating the insulin dosage to minimize the risk associated with both hyperglycemia and hypoglycemia.
We modify the Basal and Bolus (BB) policy (Bastani, 2014) (control policy) in the codebase, and set two glucose target levels and different noise levels as our two treatment policies. We collect information from the first 12 hours of all three policies with 5000 randomized patients in each policy group, and use this information to predict the long-term effect. The observation feature is 2-dimensional: glucose level (CGM) and the amount of insulin injection. The non-stationarity comes from the time and amount of meal consumption, which is time-varying but otherwise shared by all patients. We average a 2-day simulation window over 250,000 random patients as the ground-truth treatment effect between policy groups.
1https://github.com/jxx123/simglucose
Similar to the synthetic simulator, we vary the number of patients and the experimental period. Figure 3 shows that the non-stationary method achieves better prediction accuracy than the stationary method in both the CGM and insulin-injection predictions. Even though the simulator is non-linear, our simple linear additive exogenous noise assumption still captures the small local changes well, which are approximately linear.
5.3 DATA FROM AN ONLINE STORE
We test our methods on 4 long-running experiments in an online store with a total of 7 different treatment policies (some experiments have more than 1 treatment). Each experiment has 1 control policy. We evaluate 4 business metrics related to customer purchases in the store (Metrics 1-4), and use d = 17 features. All the experiments lasted for 12 weeks. We treat the first 5 weeks as the experiment window, and use data from those weeks to estimate long-term impacts on the 4 metrics. The trailing 7-week averages of the metrics are used as ground truth to evaluate the accuracy of the estimators. Table 1 reports the median of the Mean Absolute Percentage Error (MAPE) of the estimators; see full results in Appendix C.
Given the high cost of such long-running experiments, we cannot collect more data points for comparison or for computing statistical significance. That said, the reported numbers give good evidence that our method produces better predictions of long-term treatment effects than Naive Average. Furthermore, our method improves on the stationary baseline, suggesting the practical relevance of our nonstationary assumption and the effectiveness of the proposed estimator.
6 CONCLUSIONS
In this paper we study how to estimate the long-term treatment effect using only in-experiment data in a non-stationary environment. We propose a novel non-stationary RL model and an algorithm to make predictions. A major limitation is the linearity assumption in both the dynamics model and the additive exogenous part. If the real-world model includes a highly non-linear part, the predicted value can be biased. Future directions include relaxing our model to the non-linear case to better capture real-world environments.
A PROOF
In this section, we provide detailed proofs of the theorems in the main text. To keep the section self-contained, we briefly introduce the notation below, and adopt the regularized, multiple-policy-group setting throughout the appendix:

• n: total number of individuals.
• I_i: the index set for policy π_i; n_i = |I_i| is the number of individuals under policy π_i.
• k: total number of policy groups.
• D_n: the dataset of n individuals over the experimental period.
In the appendix, we denote the ground-truth dynamics M_i^* and the ground-truth exogenous noise z^* with a star ∗, to distinguish them from the variables M_i and z appearing in the optimization process.
A.1 ASSUMPTIONS
The linear additive exogenous noise assumption (Assumption 3.1) can be rewritten as the following equation:

M_i^*(o_{j,t} − z_t^*) = (o_{j,t+1} − z_{t+1}^*) + ε_{j,t}, ∀j ∈ I_i, 0 ≤ t ≤ T − 1, (15)

where ε_{j,t} is a zero-mean noise. Let ε_j = [ε_{j,0}; ε_{j,1}; ...; ε_{j,T−1}] ∈ R^{dT} be the stacked noise vector; then {ε_j}_{1≤j≤n} forms a martingale difference sequence:

E[ε_j | F_{j−1}] = 0, (16)

where the filtration F_j = {o_1, ..., o_{j−1}} is the information up to the first j − 1 individuals. We make an additional boundedness assumption on the zero-mean noise term for the proof:

Assumption A.1 (Bounded Noise). Let ε_j, j ∈ I_i, be the stacked residual of the transitions under the true transition matrix M_i^*, with components ε_{j,t} = M_i^*(o_{j,t} − z_t^*) − (o_{j,t+1} − z_{t+1}^*). Then

∥ε_j∥_2 ≤ C_ε, ∀j, (17)

where C_ε is a uniform constant independent of the policy assignment.
For the empirical covariance matrices appearing in the intermediate steps of the calculation, we assume boundedness:

Assumption A.2 (Bounded Norms for Matrices). We make the following assumptions on the matrices:

1. ∥M_i^*∥ ≤ C_{M_i} < 1/γ, ∀i.
2. ∥(Λ_n^*/n)^{-1}∥ ≤ C_Λ.
A.2 LOSS FUNCTION AND ALTERNATING MINIMIZATION
Our loss function can be written as:

L({M_i}_{1≤i≤k}, z; D_n) = Σ_{i=1}^{k} Σ_{j∈I_i} ∥A_i(z − o_j)∥_2^2 + λ_z∥z∥_2^2 + Σ_{i=1}^{k} λ_i∥M_i − I_d∥_F^2. (18)
Lemma A.3. Fix {M_i} and denote G_i = A_i^⊤ A_i, where A_i is defined in Eq. (11). The minimizer z = argmin_z L({M_i}_{1≤i≤k}, z; D_n) is

z({M_i}) = (λ_z I_{d(T+1)} + Σ_{i=1}^{k} n_i G_i)^{-1} Σ_{i=1}^{k} Σ_{j∈I_i} G_i o_j. (19)
Proof. Setting the gradient of the loss function to zero, we have:

0 = ∇_z L({M_i}_{1≤i≤k}, z; D_n) = 2 Σ_{i=1}^{k} Σ_{j∈I_i} G_i(z − o_j) + 2λ_z z,

which implies

z({M_i}) = (λ_z I_{d(T+1)} + Σ_{i=1}^{k} n_i G_i)^{-1} Σ_{i=1}^{k} Σ_{j∈I_i} G_i o_j.

Here G_i = A_i^⊤ A_i is positive semi-definite, so the matrix inverse on the right-hand side always exists.
Similarly, we can obtain the minimizer of M_i with z fixed.

Lemma A.4. Fixing z, the minimizer of M_i can be written as

M_i(z) = (λ_i I_d + Σ_{j∈I_i} Σ_{t=1}^{T−1} (o_{j,t+1} − z_{t+1})(o_{j,t} − z_t)^⊤)(λ_i I_d + Σ_{j∈I_i} Σ_{t=1}^{T−1} (o_{j,t} − z_t)(o_{j,t} − z_t)^⊤)^{-1}. (20)
Proof. The proof follows similarly by setting the gradient with respect to M_i to zero.

If we set λ_i = 0 and z = 0, the minimization reduces to the estimation of M̂ in Eq. (5).
A.3 ERROR ANALYSIS
Lemma A.5. Let M_i^* be the true dynamics of the underlying state. Then:

z({M_i^*}) − z^* = −λ_z(Λ_n^*)^{-1} z^* + (Λ_n^*)^{-1} Σ_{i=1}^{k} Σ_{j∈I_i} A_i^⊤ ε_j, (21)

where Λ_n^* = λ_z I_{d(T+1)} + Σ_{i=1}^{k} n_i G_i^*.
Proof. Expanding the definition of z({M_i^*}), we have:

z({M_i^*}) = (Λ_n^*)^{-1} Σ_{i=1}^{k} Σ_{j∈I_i} G_i o_j
= (Λ_n^*)^{-1} Σ_{i=1}^{k} Σ_{j∈I_i} A_i^⊤ (A_i o_j)
= (Λ_n^*)^{-1} Σ_{i=1}^{k} Σ_{j∈I_i} A_i^⊤ (A_i z^* + ε_j)
= (Λ_n^*)^{-1} (Λ_n^* z^* − λ_z z^* + Σ_{i=1}^{k} Σ_{j∈I_i} A_i^⊤ ε_j)
= z^* − λ_z(Λ_n^*)^{-1} z^* + (Λ_n^*)^{-1} Σ_{i=1}^{k} Σ_{j∈I_i} A_i^⊤ ε_j.
Lemma A.6. Let z^* be the true exogenous noise. Then:

M_i(z^*) − M_i^* = (λ_i(I_d − M_i^*) + Σ_{j∈I_i} Σ_{t=1}^{T−1} ε_{j,t}(o_{j,t} − z_t^*)^⊤)(λ_i I_d + Σ_n^*)^{-1}, (22)

where Σ_n^* = Σ_{j∈I_i} Σ_{t=1}^{T−1} (o_{j,t} − z_t^*)(o_{j,t} − z_t^*)^⊤ is the empirical covariance matrix.
Proof. Expanding the definition of M_i(z^*), we have:

M_i(z^*) = (λ_i I_d + Σ_{j∈I_i} Σ_{t=1}^{T−1} (o_{j,t+1} − z_{t+1}^*)(o_{j,t} − z_t^*)^⊤)(λ_i I_d + Σ_n^*)^{-1} (23)
= (λ_i I_d + Σ_{j∈I_i} Σ_{t=1}^{T−1} (ε_{j,t} + M_i^*(o_{j,t} − z_t^*))(o_{j,t} − z_t^*)^⊤)(λ_i I_d + Σ_n^*)^{-1} (24)
= M_i^* + (λ_i(I_d − M_i^*) + Σ_{j∈I_i} Σ_{t=1}^{T−1} ε_{j,t}(o_{j,t} − z_t^*)^⊤)(λ_i I_d + Σ_n^*)^{-1}. (25)
A.4 PROOF OF PROPOSITION 2.5
Proof. By induction, it is straightforward to show that E[O_t | O_0 = o, D = i] = M_i^t o.

Taking the expectation over O_0, we have: E[O_t | D = i] = M_i^t E[O_0 | D = i].

By the definition of the long-term discounted reward, we have:

v(π_i) = E[Σ_{t=0}^∞ γ^t R_t | D = i]
= Σ_{t=0}^∞ γ^t E[θ_r^⊤ O_t | D = i]
= θ_r^⊤ Σ_{t=0}^∞ γ^t M_i^t E[O_0 | D = i]
= θ_r^⊤ (I − γM_i)^{-1} E[O_0 | D = i],

where the last equality holds when ∥M_i∥ < 1/γ.
A.5 PROOF OF PROPOSITION 3.5
Proof. From Lemma A.5, with λ_z = 0 and assuming (Λ_n^*)^{-1} exists, we have:

z − z^* = (Λ_n^*)^{-1} Σ_{i=0}^{1} Σ_{j∈I_i} A_i^⊤ ε_j.

Consider v̂(π_0) when we plug in ẑ and the true dynamics M_0^*. The error between v̂ and v is

v̂(π_0) − v(π_0) = θ_r^⊤ (I − γM_0^*)^{-1}(z_0 − z_0^*) := β_r^⊤(z_0 − z_0^*) = (β_r^⊤, 0, ..., 0)(z − z^*) = β̃_r^⊤(z − z^*),

where β_r = (I − γM_0^*)^{-⊤} θ_r, and β̃_r is the extension of β_r obtained by filling the entries at all other time steps with 0.

Expanding the difference z − z^*, we have:

v̂(π_0) − v(π_0) = β̃_r^⊤(z − z^*)
= Σ_{i=0}^{1} β̃_r^⊤ (Λ_n^*)^{-1} A_i^⊤ (Σ_{j∈I_i} ε_j)
= Σ_{i=0}^{1} β̃_r^⊤ (Λ_n^*/n)^{-1} A_i^⊤ (Σ_{j∈I_i} ε_j / n)
≤ ∥β̃_r∥ Σ_{i=0}^{1} ∥(Λ_n^*/n)^{-1} A_i^⊤∥ ∥Σ_{j∈I_i} ε_j / n∥.

By Assumptions A.1 and A.2, the norm of β̃_r equals that of β_r, which is bounded by ∥β_r∥ ≤ ∥θ_r∥/(1 − γC_{M_i}). The matrix norm in the middle factor is bounded by Assumption A.2. Finally, by a vector concentration inequality, since ε_j is norm-subGaussian (Jin et al., 2019), there exists a constant c such that with probability at least 1 − δ:

∥Σ_{j∈I_i} ε_j / n∥ ≤ c √(log(2dT/δ)/n).

In sum, the error is bounded by O(1/√n) with probability at least 1 − δ, with a constant depending on C_ε, C_{M_i}, C_Λ, and ∥θ_r∥.
A.6 PROOF OF PROPOSITION 3.6
Proof. Since we have access to the ground-truth z^*, setting the state s_{j,t} = o_{j,t} − z_t^* reduces the problem to a standard MDP. For the detailed proof, see Proposition 11 in Miyaguchi (2021).
B REDUCE THE COMPUTATION COMPLEXITY WITH PRE-COMPUTATION
In this section, we explain how to reduce the computation complexity with pre-computation.
Pre-computation. Compute

M_i(0) = (Σ_{j∈I_i} Σ_{t=1}^{T−1} o_{j,t+1} o_{j,t}^⊤)(Σ_{j∈I_i} Σ_{t=1}^{T−1} o_{j,t} o_{j,t}^⊤)^{-1}

and ō_t = Σ_{j∈I_i} o_{j,t}.
The pre-computation requires O(nTd^2 + d^3) time, where d^2 is the cost of each outer product and d^3 is the cost of the matrix inversion after summing up the matrices.
In Each Iteration. The computation of M_i can be rewritten as

M_i(z) = (Σ_{j∈I_i} Σ_{t=1}^{T−1} (o_{j,t+1} − z_{t+1})(o_{j,t} − z_t)^⊤)(Σ_{j∈I_i} Σ_{t=1}^{T−1} (o_{j,t} − z_t)(o_{j,t} − z_t)^⊤)^{-1},

where the numerator sum can be obtained from the pre-computed statistics via

Σ_{j∈I_i} Σ_{t=1}^{T−1} (o_{j,t+1} − z_{t+1})(o_{j,t} − z_t)^⊤ = Σ_{j∈I_i} Σ_{t=1}^{T−1} o_{j,t+1} o_{j,t}^⊤ − Σ_{t=1}^{T−1} z_{t+1} ō_t^⊤ − Σ_{t=1}^{T−1} ō_{t+1} z_t^⊤ + n_i Σ_{t=1}^{T−1} z_{t+1} z_t^⊤,

and the denominator sum analogously; this requires O(Td^2) time. Similarly, the computation of

ẑ = (Σ_{i=0}^{k} n_i G_i)^{-1} (Σ_{i=0}^{k} G_i Σ_{j∈I_i} o_j)

requires O(T^2 d^2) time. Both steps are computationally scalable, since they do not depend on the number of individuals n (which is often much larger than T and d).
Overall Computation Complexity. If we run the alternating minimization for k iterations, the total computation complexity is O(nTd^2 + d^3 + kT^2 d^2). In practice, the number of individuals n is far larger than the experiment horizon T and the feature dimension d, so the computation complexity essentially scales linearly with n.
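As an illustration of the pre-computation above (function names and data layout are our assumptions; for simplicity the sketch sums over all T transitions rather than t = 1, ..., T − 1):

```python
import numpy as np

def precompute(obs):
    """One pass over a group's data, O(n T d^2).
    Returns uncorrected cross/auto covariance sums and per-step sums o_bar."""
    n, _, d = obs.shape
    C_next = obs[:, 1:, :].reshape(-1, d).T @ obs[:, :-1, :].reshape(-1, d)
    C_self = obs[:, :-1, :].reshape(-1, d).T @ obs[:, :-1, :].reshape(-1, d)
    o_bar = obs.sum(axis=0)                    # shape [T+1, d], per-step sums
    return C_next, C_self, o_bar, n

def M_update(stats, z):
    """Per-iteration M update in O(T d^2), never touching the raw data.
    z: current exogenous estimate, shape [T+1, d]."""
    C_next, C_self, o_bar, n = stats
    z_t, z_next = z[:-1], z[1:]
    A = C_next - z_next.T @ o_bar[:-1] - o_bar[1:].T @ z_t + n * (z_next.T @ z_t)
    B = C_self - z_t.T @ o_bar[:-1] - o_bar[:-1].T @ z_t + n * (z_t.T @ z_t)
    return A @ np.linalg.inv(B)
```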
C EXPERIMENTS DETAILS
C.1 SYNTHETIC SIMULATION
The synthetic environment generates 4 randomized matrices M_i for policies {π_i}_{i=0}^{3}, where each entry of M_i is a positive number sampled uniformly from (0, 1). We normalize each row so that it sums to 1, and we set M̃_i = 0.5I + 0.5M_i as our final transition matrix. The 0.5I component ensures the matrices are not too far from one another.
We generate i.i.d. random vectors η_t ∼ N(0, 1.5I) and set z_{t+1} = z_t + η_t recursively. We then let z̃_t = α_t z_t (elementwise) be the final exogenous noise, where α_t = e^{β_t} and β_t ∼ N(0, 0.5I) i.i.d. All parameters (z_t and M_i) of the dynamics are fixed once generated, and we use the dynamics to generate the observations for each individual, following

s_{t+1} = M_i s_t + ε_t, and o_t = s_t + αz_t, ∀t,

where ε_t is drawn independently from a standard normal distribution and α controls the level of exogenous noise.
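A sketch of this synthetic generator (the constants follow the text; the seed and dimensions are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, n_policies = 8, 20, 4

# Transition matrices: row-normalized positive matrices mixed with 0.5 I
Ms = []
for _ in range(n_policies):
    M = rng.uniform(0.0, 1.0, size=(d, d))
    M /= M.sum(axis=1, keepdims=True)
    Ms.append(0.5 * np.eye(d) + 0.5 * M)

# Exogenous noise: random walk z_t rescaled by log-normal factors alpha_t
z = np.zeros((T + 1, d))
for t in range(T):
    z[t + 1] = z[t] + rng.normal(0.0, np.sqrt(1.5), size=d)
z_tilde = np.exp(rng.normal(0.0, np.sqrt(0.5), size=(T + 1, d))) * z

def rollout(M, n, alpha=1.0):
    """o_{j,t} = s_{j,t} + alpha * z_t, with s_{t+1} = M s_t + eps_t."""
    s = rng.normal(size=(n, d))
    obs = [s + alpha * z_tilde[0]]
    for t in range(T):
        s = s @ M.T + rng.normal(size=(n, d))
        obs.append(s + alpha * z_tilde[t + 1])
    return np.stack(obs, axis=1)               # shape [n, T+1, d]
```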
C.2 POLICY CONSTRUCTION IN THE TYPE-1 DIABETES SIMULATOR
The Basal and Bolus policy is a parametrized policy based on the amount of insulin that a person with diabetes is instructed to inject prior to eating a meal (Bastani, 2014):

injection = (current blood glucose − target blood glucose)/CF + (meal size)/CR,

where CF and CR are parameters based on patient information such as body weight, which is already specified in the simulator.
We set the target blood glucose levels of our two treatment policies to 145 and 130 (compared to 140 for control), and we increase the noise in the insulin pump simulator in both treatment policies.
C.3 RANDOM PATIENT GENERATION IN THE TYPE-1 DIABETES SIMULATOR

The Type-1 Diabetes simulator pre-stores the parameters of 30 patients. To randomly generate a new patient, we randomly pick two different patients A and B, draw a random mixing coefficient α ∼ U(0, 0.2), and mix the parameters of the new patient as

θ = (1 − α)θ_A + αθ_B,

where θ_A and θ_B are the parameters of patients A and B, respectively. Since patient A carries more weight, the parameters CF and CR in the Basal and Bolus policy follow patient A's parameters.
C.4 FULL RESULTS FOR ALL THE ONLINE STORE EXPERIMENTS

1. What is the focus of the paper regarding non-stationary RL frameworks?
2. What are the strengths of the proposed algorithm, particularly in dealing with real-world problems?
3. What are the weaknesses of the paper regarding its claims and experiments?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns regarding the linearity assumption and its potential impact on the proposed method's applicability to real-world problems?
Summary Of The Paper
This work proposes a non-stationary RL framework to make long-term predictions of treatment effects. The proposed algorithm shows better results on two synthetic datasets and one online store dataset.
Strengths And Weaknesses
Strengths:
The problem statement is well written.
Introducing complexity into RL to deal with a real-world problem is good.
Weaknesses:
This work can be regarded as an intermediate step toward a milestone work. The datasets and data utilized to validate effectiveness cannot "prove" that the proposed algorithm is "the algorithm" to address the issue.
Long term, but how long is long term? How can the proposed method be used to predict financial markets, for instance? Would the linearity assumption be an issue for many real-world problems?
Clarity, Quality, Novelty And Reproducibility
The paper is well written, and all points are clearly presented. The extension over the traditional RL work is not substantial, as we all know one can relax some model assumptions to better serve a real-world problem. The question is how much more data and computation are required to justify the benefits. This is not clearly articulated by the authors.
ICLR

Title
Sparse Transformer: Concentrated Attention Through Explicit Selection
Abstract
Self-attention-based Transformer has demonstrated the state-of-the-art performances in a number of natural language processing tasks. Self-attention is able to model long-term dependencies, but it may suffer from the extraction of irrelevant information in the context. To tackle the problem, we propose a novel model called Explicit Sparse Transformer. Explicit Sparse Transformer is able to improve the concentration of attention on the global context through an explicit selection of the most relevant segments. Extensive experimental results on a series of natural language processing and computer vision tasks, including neural machine translation, image captioning, and language modeling, all demonstrate the advantages of Explicit Sparse Transformer in model performance. We also show that our proposed sparse attention method achieves comparable or better results than the previous sparse attention method, but significantly reduces training and testing time. For example, the inference speed is twice that of sparsemax in Transformer model.
1 INTRODUCTION
Understanding natural language requires the ability to pay attention to the most relevant information. For example, people tend to focus on the most relevant segments to search for the answers to their questions during reading. However, retrieval problems may occur if irrelevant segments have a negative impact on reading comprehension. Such distraction hinders the understanding process, which calls for effective attention.
This principle is also applicable to computational systems for natural language. Attention has been a vital component of models for natural language understanding and natural language generation. Recently, Vaswani et al. (2017) proposed Transformer, a model based on the attention mechanism for Neural Machine Translation (NMT). Transformer has shown outstanding performance in natural language generation tasks. More recently, the success of BERT (Devlin et al., 2018) in natural language processing shows the great usefulness of both the attention mechanism and the framework of Transformer.
However, the attention in vanilla Transformer has an obvious drawback: it assigns credit to all components of the context, which causes a lack of focus. As illustrated in Figure 1, the attention in vanilla Transformer assigns high credit to many irrelevant words, while in Explicit Sparse Transformer it concentrates on the most relevant k words. For the word "tim", the most related words should be "heart" and the immediately surrounding words. Yet the attention in vanilla Transformer does not focus on them but gives credit to irrelevant words such as "him".
Recent works have studied applying sparse attention in the Transformer model. However, they either add local attention constraints (Child et al., 2019), which break long-term dependency, or hurt time efficiency (Martins & Astudillo, 2016). Inspired by Ke et al. (2018), which introduces sparse credit assignment to the LSTM model, we propose a novel model called Explicit Sparse Transformer, equipped with our sparse attention mechanism. We implement an explicit selection method based on top-k selection. Unlike vanilla Transformer, Explicit Sparse Transformer only pays attention to the k most contributive states. Thus Explicit Sparse Transformer can perform more concentrated attention than vanilla Transformer.
We first validate our method on three tasks. For further investigation, we compare our method with previous sparse attention methods and experimentally answer how to choose k in a series of qualitative analyses. We are surprised to find that the proposed sparse attention method can also help training, acting as a regularization method. Visual analysis shows that Explicit Sparse Transformer exhibits a higher potential in performing high-quality alignment. The contributions of this paper are presented below:
• We propose a novel model called Explicit Sparse Transformer, which enhances the concentration of the Transformer’s attention through explicit selection.
• We conducted extensive experiments on three natural language processing tasks, including Neural Machine Translation, Image Captioning and Language Modeling. Compared with vanilla Transformer, Explicit Sparse Transformer demonstrates better performance in all three tasks. Specifically, our model reaches the state-of-the-art performance on IWSLT 2015 English-to-Vietnamese translation.
• Compared to previous sparse attention methods for transformers, our method is much faster in training and testing, and achieves better results.
2 PRELIMINARIES

A review of the attention mechanism and the attention-based framework of Transformer can be found in Appendix A.1.
3 EXPLICIT SPARSE TRANSFORMER
Lack of concentration in the attention can lead to the failure of relevant information extraction. To this end, we propose a novel model, Explicit Sparse Transformer, which enables the focus on only a few elements through explicit selection. Compared with the conventional attention, no credit will be assigned to the value that is not highly correlated to the query. We provide a comparison between the attention of vanilla Transformer and that of Explicit Sparse Transformer in Figure 2.
[Figure 2: Illustration of the attention computation in Explicit Sparse Transformer. The attention scores P computed from the query Q and key K undergo top-k selection: entries below the row-wise threshold t are replaced with −∞, and softmax normalization then yields the sparse attention weights A, which are zero outside the selected positions.]
Explicit Sparse Transformer is still based on the Transformer framework. The difference lies in the implementation of self-attention. The attention degenerates to sparse attention through top-k selection. In this way, the most contributive components for attention are preserved and the irrelevant information is removed. This selective method is effective in preserving important information and removing noise. The attention can be much more concentrated on the most contributive elements of the value.
In unihead self-attention, the key components, the query Q[l_Q, d], key K[l_K, d] and value V[l_V, d], are linear transformations of the source context, namely the input of each layer, where Q = W_Q x, K = W_K x and V = W_V x. Explicit Sparse Transformer first generates the attention scores P as demonstrated below:
P = QK^⊤ / √d. (1)
Then the model evaluates the scores P under the hypothesis that larger scores indicate higher relevance. The sparse attention masking operation M(·) is applied to P in order to select the top-k contributive elements. Specifically, we select the k largest elements of each row in P and record their positions in the position matrix (i, j), where k is a hyperparameter. To be specific, let t_i be the k-th largest value of row i; if the value of the j-th component is no smaller than t_i, the position (i, j) is recorded. We concatenate the threshold values of all rows into a vector t = [t_1, t_2, ..., t_{l_Q}]. The masking function M(·, ·) is illustrated as follows:

M(P, k)_{ij} =
  P_{ij}  if P_{ij} ≥ t_i (k-th largest value of row i)
  −∞      if P_{ij} < t_i (2)
With top-k selection, the high attention scores are selected explicitly. This differs from dropout, which randomly discards scores. Such explicit selection not only guarantees the preservation of important components, but also simplifies the model, since k is usually a small number such as 8; a detailed analysis can be found in Section 5.2. The next step after top-k selection is normalization:

A = softmax(M(P, k)), (3)

where A refers to the normalized scores. As the scores smaller than the top-k largest are assigned negative infinity by the masking function M(·, ·), their normalized scores, namely the probabilities, approximate 0. We show the back-propagation process of top-k selection in Appendix A.3. The output representation C of self-attention can be computed as below:
C = AV (4)
The output is the expectation of the value following the sparsified distribution A. Following the distribution of the selected components, the attention in the Explicit Sparse Transformer model can obtain more focused attention. Such sparse attention also extends to context attention. Resembling but different from the self-attention mechanism, Q is no longer a linear transformation of the source context but of the decoding states s. In the implementation, we replace Q with W_Q s, where W_Q is still a learnable matrix.
In brief, the attention in our proposed Explicit Sparse Transformer sparsifies the attention weights. The attention then becomes focused on the most contributive elements, and it is compatible with both self-attention and context attention. A simple implementation of this method is given in Appendix A.4.
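As a tiny numerical illustration of Eqs. (1)-(3) (the scores below are made up), consider a single row of P with k = 2: only the two largest scores survive the mask, and the softmax renormalizes over them.

```python
import numpy as np

p = np.array([2.0, 0.5, 1.5, -1.0])      # one row of P = QK^T / sqrt(d)
k = 2
t = np.sort(p)[-k]                        # k-th largest value of the row
masked = np.where(p >= t, p, -np.inf)     # Eq. (2)
a = np.exp(masked - masked.max())
a /= a.sum()                              # Eq. (3): softmax over survivors
print(a)   # approx [0.622, 0., 0.378, 0.] -- credit only on the top-2 scores
```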
4 RESULTS
We conducted a series of experiments on three natural language processing tasks, including neural machine translation, image captioning and language modeling. Detailed experimental settings are in Appendix A.2.
4.1 NEURAL MACHINE TRANSLATION
Dataset To evaluate the performance of Explicit Sparse Transformer in NMT, we conducted experiments on three NMT tasks, English-to-German translation (En-De) with a large dataset, English-to-Vietnamese (En-Vi) translation and German-to-English translation (De-En) with two datasets of medium size. For En-De, we trained Explicit Sparse Transformer on the standard dataset for WMT 2014 En-De translation. The dataset consists of around 4.5 million sentence pairs. The source and target languages share a vocabulary of 32K sub-word units. We used the newstest 2013 for validation and the newstest 2014 as our test set. We report the results on the test set.
For En-Vi, we trained our model on the dataset in IWSLT 2015 (Cettolo et al., 2014). The dataset consists of around 133K sentence pairs from translated TED talks. The vocabulary size for source language is around 17,200 and that for target language is around 7,800. We used tst2012 for validation, and tst2013 for testing and report the testing results. For De-En, we used the dataset in IWSLT 2014. The training set contains 160K sentence pairs and the validation set contains 7K sentences.
Following Edunov et al. (2018), we used the same test set with around 7K sentences. The data were preprocessed with byte-pair encoding (Sennrich et al., 2016). The vocabulary size is 14,000.
Result Table 1 presents the results of the baselines and our Explicit Sparse Transformer on the three datasets. For En-De, Transformer-based models outperform the previous methods. Compared with the result of Transformer (Vaswani et al., 2017), Explicit Sparse Transformer reaches 29.4 in BLEU score evaluation, outperforming vanilla Transformer by 0.3 BLEU score. For En-Vi, vanilla Transformer1 reaches 30.2, outperforming the state-of-the-art method (Huang et al., 2017). Our model, Explicit Sparse Transformer, achieves a new state-of-the-art performance, 31.1, by a margin of 0.5 over vanilla Transformer. For De-En, we demonstrate that Transformer-based models outperform the other baselines. Compared with Transformer, our Explicit Sparse Transformer reaches a better performance, 35.6, an advantage of +0.3. To the best of our knowledge, Explicit Sparse Transformer reaches top-line performance on the dataset.
4.2 IMAGE CAPTIONING
Dataset We evaluated our approach on the image captioning task. Image captioning is a task that combines image understanding and language generation. We conducted experiments on the Microsoft COCO 2014 dataset (Chen et al., 2015a). It contains 123,287 images, each of which is paired with 5 descriptive sentences. We report the results of the image captioning model on the MSCOCO 2014 test set. We used the publicly-available splits provided by Karpathy & Li (2015). The validation set and test set both contain 5,000 images.
Result Table 2 shows the results of the baseline models and Explicit Sparse Transformer on the COCO Karpathy test split. Transformer outperforms the mentioned baseline models. Explicit Sparse Transformer outperforms the implemented Transformer by +0.4 in terms of BLEU-4, +0.3 in terms of METEOR, and +0.7 in terms of CIDEr, which consistently demonstrates its effectiveness in image captioning.
4.3 LANGUAGE MODELING
Dataset Enwiki82 is a large-scale dataset for character-level language modeling. It contains 100M bytes of unprocessed Wikipedia text. The inputs include Latin alphabets, non-Latin alphabets, XML markups and special characters. The vocabulary size is 205 tokens, including one for unknown characters. We used the same preprocessing method following Chung et al. (2015). The training set contains 90M bytes of data, and the validation set and the test set contain 5M bytes each.
Result Table 3 shows the results of the baseline models and Explicit Sparse Transformer-XL on the test set of enwiki8. Compared with the other strong baselines, Transformer-XL reaches a better performance, and Explicit Sparse Transformer-XL outperforms Transformer-XL.
1While we did not find the results of Transformer on En-Vi, we reimplemented our vanilla Transformer with the same setting.
2http://mattmahoney.net/dc/text.html
5 DISCUSSION
In this section, we perform several analyses for further discussion of Explicit Sparse Transformer. First, we compare the proposed method of top-k selection before softmax with previous sparse attention methods, including various variants of sparsemax (Martins & Astudillo, 2016; Correia et al., 2019; Peters et al., 2019). Second, we discuss the selection of the value of k. Third, we demonstrate that the top-k sparse attention method helps training. Finally, we conduct a series of qualitative analyses to visualize the proposed sparse attention in Transformer.
5.1 COMPARISON WITH OTHER SPARSE ATTENTION METHODS
We compare the performance and speed of our method with the previous sparse attention methods3 on top of a strong Transformer baseline implementation. Training and inference speed are reported on the PyTorch platform and the IWSLT 2014 De-En translation dataset; the batch size for inference is set to 128 sentences, and half-precision (FP16) training is applied.
As we can see from Table 4, the proposed sparse attention method achieves comparable results to previous sparse attention methods, but its training and testing speed is 2x faster than sparsemax and 10x faster than Entmax-alpha during inference. This is because our method does not introduce much extra computation for calculating the sparse attention scores.
The other group of sparse attention methods, which add local attention constraints into attention (Child et al., 2019; Sukhbaatar et al., 2019), do not report performance on neural machine translation, so we do not compare with them in Table 4.
3We borrow the implementation of Entmax1.5 in Tensorflow from https://github.com/ deep-spin/entmax, and the implementation of Sparsemax, Entmax-1.5, Entmax-alpha in Pytorch from https://gist.github.com/justheuristic/60167e77a95221586be315ae527c3cbd. We have not found a reliable Tensorflow implementation of sparsemax and entmax-alpha in the transformer (we tried to apply the official implementation of sparsemax in Tensorflow to tensor2tensor, but it reports loss of NaN.)
5.2 HOW TO SELECT A PROPER K?
The natural question of how to choose the optimal k comes with the proposed method. We compare the effect of the value of k at exponential scales. We perform experiments on En-Vi and De-En from 3 different initializations for each value of k, and report the mean BLEU scores on the validation set. Figure 3 shows that, except at k = 16 on the En-Vi dataset, model performance generally rises first and then falls as k increases. Among k ∈ {4, 8, 16, 32}, setting k to 8 achieves consistent improvements over the vanilla Transformer.
5.3 DOES THE PROPOSED SPARSE ATTENTION METHOD HELP TRAINING?
We are surprised to find that adding the sparsification only in the training phase can also bring an improvement in performance. We test this idea on IWSLT En-Vi and report the results on the validation set in Table 5. The improvement of 0.3 BLEU score suggests that vanilla Transformer may be overparameterized and that the sparsification encourages simplification of the model.
5.4 DOES EXPLICIT SPARSE TRANSFORMER ATTEND BETTER?
To perform a thorough evaluation of our Explicit Sparse Transformer, we conducted a case study and visualize the attention distributions of our model and the baseline for further comparison. Specifically, we conducted the analysis on the test set of En-Vi, and randomly selected a sample pair of attention visualization of both models.
The visualization of the context attention of the decoder's bottom layer is shown in Figure 4(a). The attention distribution in the left figure is fairly dispersed. On the contrary, the right figure shows that the sparse attention can focus on only a few positions, so that the model is forced to stay focused. For example, when vanilla Transformer generates the phrase "for thinking about my heart" (word-by-word translation from Vietnamese), the generated words cannot be aligned to the corresponding source words. As for Explicit Sparse Transformer, when generating the phrase "with all my heart", the attention focuses on the corresponding positions with strong confidence.
The visualization of the decoder's top layer is shown in Figure 4(b). From the figure, the context attention at the top layer of the vanilla Transformer decoder suffers from focusing on the last source token. This is a common behavior of the attention in vanilla Transformer. Such attention with wrong alignment cannot extract sufficient relevant source-side information for generation. In contrast, Explicit Sparse Transformer, with a simple modification of the vanilla version, does not suffer from this problem but instead focuses on the relevant sections of the source context. The figure on the right, showing the attention distribution of Explicit Sparse Transformer, demonstrates that the proposed attention is able to perform accurate alignment.
6 RELATED WORK
The attention mechanism has demonstrated outstanding performance in a number of neural-network-based methods, and it has been a focus of NLP studies (Bahdanau et al., 2014). A number of studies have been proposed to enhance the effects of the attention mechanism (Luong et al., 2015; Vaswani et al., 2017; Ke et al., 2018). Luong et al. (2015) propose local attention, and Yang et al. (2018) propose local attention for self-attention. Xu et al. (2015) propose hard attention that pays discrete attention in image captioning. Chandar et al. (2016) propose combining soft attention with hard attention to construct a hierarchical memory network. Lin et al. (2018) propose a temperature mechanism to change the softness of the attention distribution. Shen et al. (2018) propose an attention which can select a small proportion of elements for focusing; it is trained by reinforcement learning algorithms (Williams, 1992). In terms of memory networks, Rae et al. (2016) propose sparse access memory.

Child et al. (2019) recently propose to use local attention and block attention to sparsify the transformer. Our approach differs from them in that our method does not need to block sentences and still captures long-distance dependencies. Besides, we demonstrate the importance of Explicit Sparse Transformer in sequence-to-sequence learning. Although the variants of sparsemax (Martins & Astudillo, 2016; Correia et al., 2019; Peters et al., 2019) improve machine translation, we empirically demonstrate in Section 5.1 that our method introduces less computation into the standard transformer and is much faster than those sparse attention methods on GPUs.
7 CONCLUSION
In this paper, we propose a novel model called Explicit Sparse Transformer. Explicit Sparse Transformer is able to make the attention in vanilla Transformer more concentrated on the most contributive components. Extensive experiments show that Explicit Sparse Transformer outperforms vanilla Transformer in three different NLP tasks. We conducted a series of qualitative analyses to investigate the reasons why Explicit Sparse Transformer outperforms the vanilla Transformer. Furthermore, we find
an obvious problem of the attention at the top layer of the vanilla Transformer, and Explicit Sparse Transformer can alleviate this problem effectively with improved alignment effects.
A APPENDIX
A.1 BACKGROUND
A.1.1 ATTENTION MECHANISM
Bahdanau et al. (2014) first introduced the attention mechanism to learn the alignment between the target-side context and the source-side context, and Luong et al. (2015) formulated several versions for local and global attention. In general, the attention mechanism maps a query and a key-value pair to an output. The attention score function and softmax normalization can turn the query Q and the key K into a distribution α. Following the distribution α, the attention mechanism computes the expectation of the value V and finally generates the output C.
Take the original attention mechanism in NMT as an example. Both key K ∈ Rn×d and value V ∈ Rn×d are the sequence of output states from the encoder. Query Q ∈ Rm×d is the sequence of output states from the decoder, where m is the length of Q, n is the length of K and V , and d is the dimension of the states. Thus, the attention mechanism is formulated as:
C = softmax(f(Q,K))V (5)
where f refers to the attention score computation.
A.1.2 TRANSFORMER
Transformer (Vaswani et al., 2017), which is fully based on the attention mechanism, demonstrates state-of-the-art performance in a series of natural language generation tasks. Specifically, we focus on self-attention and multi-head attention.

The idea of self-attention is, as the name implies, attention over the context itself. In the implementation, the query Q, key K and value V are linear transformations of the input x, so that Q = W_Q x, K = W_K x and V = W_V x, where W_Q, W_K and W_V are learnable parameters. Therefore, the computation can be formulated as below:
C = softmax(QK^⊤ / √d) V, (6)
where d refers to the dimension of the states.
The aforementioned mechanism can be regarded as unihead attention. As for multi-head attention, the attention computation is separated into g heads (namely 8 for the base model and 16 for the big model in common practice). Thus multiple parts of the input can be computed individually. For the i-th head, the output can be computed as in the following formula:
C^{(i)} = softmax(Q^{(i)} K^{(i)⊤} / √d_k) V^{(i)}, (7)

where C^{(i)} refers to the output of the head, Q^{(i)}, K^{(i)} and V^{(i)} are the query, key and value of the head, and d_k refers to the size of each head (d_k = d/g). Finally, the outputs of the heads are concatenated:

C = [C^{(1)}, ..., C^{(i)}, ..., C^{(g)}]. (8)

In common practice, C is sent through a linear transformation with weight matrix W_c for the final output of multi-head attention.
However, soft attention can assign weights to many more words that are less relevant to the query. Therefore, in order to improve concentration in attention for effective information extraction, we study the problem of sparse attention in Transformer and propose our model Explicit Sparse Transformer.
A.2 EXPERIMENTAL DETAILS
We use the default setting in Vaswani et al. (2017) for the implementation of our proposed Explicit Sparse Transformer. The hyperparameters, including beam size and training steps, are tuned on the validation set.
Neural Machine Translation Training For En-Vi translation, we use the default scripts and hyperparameter settings of tensor2tensor4 v1.11.0 to preprocess, train and evaluate our model. We use the default scripts of fairseq5 v0.6.1 to preprocess the De-En and En-De datasets. We train the model on the En-Vi dataset for 35K steps with a batch size of 4K. For the IWSLT 2014 De-En dataset, the batch size is also set to 4K; we update the model every 4 steps and train the model for 90 epochs. For the WMT 2014 En-De dataset, we train the model for 72 epochs on 4 GPUs with an update frequency of 32 and a batch size of 3584. We train all models on a single RTX 2080 Ti for the two small IWSLT datasets and on a single machine with 4 RTX TITANs for WMT14 En-De. In order to reduce the impact of random initialization, we perform experiments with three different initializations for all models and report the highest result for the small datasets.
Evaluation We use case-sensitive tokenized BLEU score (Papineni et al., 2002) for the evaluation of WMT14 En-De, and case-insensitive BLEU for IWSLT 2015 En-Vi and IWSLT 2014 De-En, following Lin et al. (2018). As in Vaswani et al. (2017), compound splitting is used for WMT 14 En-De. For WMT 14 En-De and IWSLT 2014 De-En, we save checkpoints every epoch and average the last 10 checkpoints every 5 epochs. We select the averaged checkpoint with the best validation BLEU and report its BLEU score on the test set. For IWSLT 2015 En-Vi, we save checkpoints every 600 seconds and average the last 20 checkpoints.
Image Captioning We still use the default setting of Transformer for training our proposed Explicit Sparse Transformer. We report the standard automatic evaluation metrics with the help of the COCO captioning evaluation toolkit6 (Chen et al., 2015b), which includes the commonly-used evaluation metrics, BLEU-4 Papineni et al. (2002), METEOR Denkowski & Lavie (2014), and CIDEr Vedantam et al. (2015).
Language Models We follow Dai et al. (2019) and use their implementation for our Explicit Sparse Transformer. Following previous work (Chung et al., 2015; Dai et al., 2019), we use BPC (E[−log2 P(x_{t+1}|h_t)]), standing for the average number of Bits-Per-Character, for evaluation. Lower BPC indicates better performance. As to the model implementation, we implement Explicit Sparse Transformer-XL, which is based on the base version of Transformer-XL.7 Transformer-XL is a model based on Transformer with a better capability of representing long sequences.
A.3 THE BACK-PROPAGATION PROCESS OF TOP-K SELECTION
The masking function M(·, ·) is defined as follows:

M(P, k)_{ij} =
  P_{ij}  if P_{ij} ≥ t_i (k-th largest value of row i)
  −∞      if P_{ij} < t_i (9)

Denote M = M(P, k). We regard t_i as constants. When back-propagating,

∂M_{ij}/∂P_{kl} = 0 (i ≠ k or j ≠ l), (10)

∂M_{ij}/∂P_{ij} =
  1  if P_{ij} ≥ t_i (k-th largest value of row i)
  0  if P_{ij} < t_i (11)
4 https://github.com/tensorflow/tensor2tensor
5 https://github.com/pytorch/fairseq
6 https://github.com/tylin/coco-caption
7 Due to our limited resources (TPU), we did not implement the big version of Explicit Sparse Transformer-XL.
The next step after top-k selection is normalization:
$$A = \mathrm{softmax}(\mathcal{M}(P, k)) \tag{12}$$
where A refers to the normalized scores. When backpropagating,
$$\frac{\partial A_{ij}}{\partial P_{kl}} = \sum_{m=1}^{l_Q} \sum_{n=1}^{l_K} \frac{\partial A_{ij}}{\partial M_{mn}} \frac{\partial M_{mn}}{\partial P_{kl}} \tag{13}$$
$$= \frac{\partial A_{ij}}{\partial M_{kl}} \frac{\partial M_{kl}}{\partial P_{kl}} \tag{14}$$
$$= \begin{cases} \dfrac{\partial A_{ij}}{\partial M_{kl}} & \text{if } P_{kl} \ge t_k \\ 0 & \text{if } P_{kl} < t_k \end{cases} \tag{15}$$
The softmax function is differentiable; therefore, we have computed all the gradients involved in top-k selection.
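Since the thresholds $t_i$ are treated as constants, automatic differentiation reproduces exactly these gradients. A small PyTorch check (a sketch with arbitrary shapes) confirms that entries below the threshold receive zero gradient:

```python
import torch

P = torch.randn(4, 6, requires_grad=True)  # attention scores
V = torch.randn(6, 3)                      # values
k = 2

# Row-wise k-th largest value t_i; detach() treats it as a constant, as in the text
t = P.detach().topk(k, dim=-1).values[..., -1:]
M = P.masked_fill(P < t, float("-inf"))    # Eq. (9)
A = torch.softmax(M, dim=-1)               # Eq. (12)
loss = (A @ V).pow(2).sum()                # any downstream loss
loss.backward()

# Masked entries receive exactly zero gradient, matching Eq. (15)
assert torch.all(P.grad[P.detach() < t] == 0)
```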
A.4 IMPLEMENTATION
Figure 5 shows the code for the idea in the case of single-head self-attention; the proposed method is easy to implement and to plug into the Transformer model.
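The figure itself is not reproduced here; the following is our sketch of what such a single-head implementation might look like in PyTorch, not the authors' exact code:

```python
import math
import torch

def sparse_self_attention(x, w_q, w_k, w_v, k):
    """Single-head self-attention with explicit top-k selection (a sketch).

    x: [seq_len, d] input states; w_q, w_k, w_v: [d, d] projection matrices.
    """
    q, key, v = x @ w_q, x @ w_k, x @ w_v
    p = q @ key.transpose(-2, -1) / math.sqrt(q.size(-1))  # scaled dot-product scores
    t = p.topk(k, dim=-1).values[..., -1:]                 # row-wise k-th largest value
    m = p.masked_fill(p < t, float("-inf"))                # top-k masking, Eq. (9)
    a = torch.softmax(m, dim=-1)                           # normalization, Eq. (12)
    return a @ v                                           # output C = AV

# Usage with hypothetical sizes
d, seq_len = 64, 10
x = torch.randn(seq_len, d)
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
out = sparse_self_attention(x, w_q, w_k, w_v, k=8)  # [seq_len, d]
```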
1. What are the contributions of the paper, particularly in modifying the Transformer?
2. How does the Sparse Transformer improve upon the standard Transformer in terms of attention focusing?
3. What are the strengths and weaknesses of the paper regarding its proposal, experimental metrics, and comparison to other models?
4. How does the reviewer assess the novelty and originality of the proposed model?
5. Are there any concerns regarding the relation between the proposed model and previous proposals for sparse attention?
6. How does the reviewer evaluate the experimental results, particularly in terms of sensitivity to choosing the correct value for k?
7. What are some suggestions for improving the paper, such as providing empirical results concerning the sensitivity of the reported successes to choosing the correct value for k?

Review
CONTRIBUTIONS:
C1. Sparse Transformer: A modification of the Transformer, limiting attention to the top-k locations. (That is a complete statement of the proposed model.)
C2. Experiments showing that, quantitatively, the Sparse Transformer out-performs the standard Transformer on translation, language modeling, and image captioning.
C3. Experiments showing that, qualitatively, in translation, when generating a target word, the Sparse Transformer better focuses attention on the aligned source word.
RATING: Reject
REASONS FOR RATING (SUMMARY). The innovativeness seems low given the several previous proposals for sparse attention, the results are not dramatic enough to compensate for the lack of originality, and the comparison to other models is wanting.
REVIEW
Strengths: The paper is clearly written. The question of whether the Transformer’s attention is too diffuse is of interest. The proposal is admirably simple. The quantitative metrics include comparison against many alternative models.
Weaknesses: A primary area of deficiency concerns the relation of the proposed model to other proposals for sparse attention: the authors cite 5 of them (and 2 more are cited in the comment by Cui). The paper should clearly identify the differences between the proposed model and earlier models: it does not discuss this at all. The deficiencies in these previous models should be clearly stated and demonstrated: they are only described as “either restricted range of attention or training difficulty” (Sec 6). A rationale for why the proposal can be expected to remedy these deficiencies should be stated clearly: it is not stated at all. Experimental demonstration that the proposed innovation actually remedies the identified deficiencies should be provided, but is not.
A proposal to use a top-k filter immediately raises the question of the value of k. This is not discussed at all. In particular, no empirical results are given concerning the sensitivity of the reported successes to choosing the correct value for k. We are only told that “k is usually a small number such as 5 or 10” (Sec 3). The experimental details in the appendix do not even state the value of k used in the models reported.
It is an interesting discovery that in the translation task, attention at the top layer of the standard Transformer is strongly focused on the end of the input. This is described as an “obvious problem” (Sec 7). But it can’t obviously be a problem because the performance of the standard Transformer is only very slightly lower than that of the Sparse Transformer: if anything is obvious, it is that processing in the standard Transformer packs a lot of information into its final encoding of the end of the input string, which functions rather like an encoding of the entire sentence.
Presumably, the experimental results reported are those from a single model, since we are not told otherwise. There should be multiple tests of the models with different random initializations, with the means and variances of measures reported. It is possible, however, that limitations of computational resources made that infeasible, although the Appendix seems to indicate that no hyperparameter tuning was done, which greatly reduces computational cost.
COMMENTS FOR IMPROVEMENT, NOT RELEVANT TO RATING DECISION
Although the tiny sample of visualized attention weights provided is useful, a large-scale quantitative assessment of a main claim concerning translation might well be possible: that attention is in fact concentrated on the aligned word might be testable using an aligned bilingual corpus or perhaps an existing forced aligner could be used.
Much space could be saved: it is not necessary to review the standard Transformer, and the modification proposed is so simple that it can be precisely stated in one sentence (see C1 above): the entire page taken up by Sec. 3 is unnecessary, as it adds only implementation details.
Errors that took more than a moment to mentally correct, all on p. 12:
The definition of the BPC should be E[log P(x(t+1) | h(t))]: all parentheses are missing
“regrad” should be “regard”
“derivative” should be “differentiable” in the final sentence
Title
Sparse Transformer: Concentrated Attention Through Explicit Selection
Abstract
Self-attention-based Transformer has demonstrated state-of-the-art performance in a number of natural language processing tasks. Self-attention is able to model long-term dependencies, but it may suffer from the extraction of irrelevant information in the context. To tackle the problem, we propose a novel model called Explicit Sparse Transformer, which is able to improve the concentration of attention on the global context through an explicit selection of the most relevant segments. Extensive experimental results on a series of natural language processing and computer vision tasks, including neural machine translation, image captioning, and language modeling, all demonstrate the advantages of Explicit Sparse Transformer in model performance. We also show that our proposed sparse attention method achieves comparable or better results than previous sparse attention methods, but significantly reduces training and testing time: for example, the inference speed is twice that of sparsemax in the Transformer model.
1 INTRODUCTION
Understanding natural language requires the ability to pay attention to the most relevant information. For example, people tend to focus on the most relevant segments when searching for answers to the questions in their mind during reading. However, retrieval problems occur when irrelevant segments impose negative impacts on reading comprehension. Such distraction hinders the understanding process, which calls for effective attention.
This principle is also applicable to computational systems for natural language. Attention has been a vital component of the models for natural language understanding and natural language generation. Recently, Vaswani et al. (2017) proposed Transformer, a model based on the attention mechanism for Neural Machine Translation (NMT). Transformer has shown outstanding performance in natural language generation tasks. More recently, the success of BERT (Devlin et al., 2018) in natural language processing shows the great usefulness of both the attention mechanism and the framework of Transformer.
However, the attention in the vanilla Transformer has an obvious drawback, as the Transformer assigns credit to all components of the context. This causes a lack of focus. As illustrated in Figure 1, the attention in the vanilla Transformer assigns high credit to many irrelevant words, while in Explicit Sparse Transformer, it concentrates on the most relevant k words. For the word “tim”, the most related words should be “heart” and the immediately adjacent words. Yet the attention in the vanilla Transformer does not focus on them but gives credit to some irrelevant words such as “him”.
Recent works have studied applying sparse attention in the Transformer model. However, they either add local attention constraints (Child et al., 2019), which break long-term dependencies, or hurt time efficiency (Martins & Astudillo, 2016). Inspired by Ke et al. (2018), which introduces sparse credit assignment to the LSTM model, we propose a novel model called Explicit Sparse Transformer, which is equipped with our sparse attention mechanism. We implement an explicit selection method based on top-k selection. Unlike the vanilla Transformer, Explicit Sparse Transformer only pays attention to the k most contributive states. Thus Explicit Sparse Transformer can perform more concentrated attention than the vanilla Transformer.
We first validate our methods on three tasks. For further investigation, we compare our methods with previous sparse attention methods and experimentally answer how to choose k in a series of qualitative analyses. We are surprised to find that the proposed sparse attention method can also help with training as a regularization method. Visual analysis shows that Explicit Sparse Transformer exhibits a higher potential in performing a high-quality alignment. The contributions of this paper are presented below:
• We propose a novel model called Explicit Sparse Transformer, which enhances the concentration of the Transformer’s attention through explicit selection.
• We conducted extensive experiments on three natural language processing tasks, including Neural Machine Translation, Image Captioning and Language Modeling. Compared with the vanilla Transformer, Explicit Sparse Transformer demonstrates better performance on all three tasks. Specifically, our model reaches state-of-the-art performance on the IWSLT 2015 English-to-Vietnamese translation.
• Compared to previous sparse attention methods for Transformers, our method is much faster in training and testing, and achieves better results.
2 PRELIMINARIES
A review of the attention mechanism and the attention-based framework of the Transformer can be found in Appendix A.1.
3 EXPLICIT SPARSE TRANSFORMER
Lack of concentration in the attention can lead to the failure of relevant information extraction. To this end, we propose a novel model, Explicit Sparse Transformer, which enables the focus on only a few elements through explicit selection. Compared with conventional attention, no credit is assigned to values that are not highly correlated with the query. We provide a comparison between the attention of the vanilla Transformer and that of Explicit Sparse Transformer in Figure 2.
[Figure 2: Comparison between the attention of the vanilla Transformer and that of Explicit Sparse Transformer. The attention scores P (computed from Q and K) pass through top-k selection, which masks all values below the row-wise threshold t to −∞, followed by softmax normalization to produce the sparse attention weights A.]
Explicit Sparse Transformer is still based on the Transformer framework. The difference is in the implementation of self-attention. The attention is degenerated to sparse attention through top-k selection. In this way, the most contributive components for attention are preserved and the other irrelevant information is removed. This selective method is effective in preserving important information and removing noise. The attention can be much more concentrated on the most contributive elements of the value. In the following, we first introduce the sparsification in self-attention and then extend it to context attention.
In unihead self-attention, the key components, the query $Q \in \mathbb{R}^{l_Q \times d}$, key $K \in \mathbb{R}^{l_K \times d}$ and value $V \in \mathbb{R}^{l_V \times d}$, are linear transformations of the source context, namely the input of each layer, where $Q = W_Q x$, $K = W_K x$ and $V = W_V x$. Explicit Sparse Transformer first generates the attention scores P as demonstrated below:
$$P = \frac{QK^T}{\sqrt{d}} \tag{1}$$
Then the model evaluates the values of the scores P based on the hypothesis that scores with larger values indicate higher relevance. The sparse attention masking operation $\mathcal{M}(\cdot, \cdot)$ is applied to P in order to select the top-k contributive elements. Specifically, we select the k largest elements of each row in P and record their positions in the position matrix (i, j), where k is a hyperparameter. To be specific, let the k-th largest value of row i be $t_i$; if the value of the j-th component is at least $t_i$, the position (i, j) is recorded. We concatenate the threshold values of all rows to form a vector $t = [t_1, t_2, \cdots, t_{l_Q}]$. The masking function $\mathcal{M}(\cdot, \cdot)$ is defined as follows:
$$\mathcal{M}(P, k)_{ij} = \begin{cases} P_{ij} & \text{if } P_{ij} \ge t_i \text{ (the $k$-th largest value of row $i$)} \\ -\infty & \text{otherwise} \end{cases} \tag{2}$$
With top-k selection, the high attention scores are selected in an explicit way. This is different from dropout, which randomly abandons scores. Such explicit selection can not only guarantee the preservation of important components, but also simplify the model, since k is usually a small number such as 8; a detailed analysis can be found in Section 5.2. The next step after top-k selection is normalization:
$$A = \mathrm{softmax}(\mathcal{M}(P, k)) \tag{3}$$
where A refers to the normalized scores. As the scores smaller than the top-k largest scores are assigned negative infinity by the masking function $\mathcal{M}(\cdot, \cdot)$, their normalized scores, namely the probabilities, are approximately 0. We show the back-propagation process of top-k selection in Appendix A.3. The output representation of self-attention C can be computed as below:
$$C = AV \tag{4}$$
The output is the expectation of the value under the sparsified distribution A. Following the distribution of the selected components, the attention in the Explicit Sparse Transformer model becomes more focused. Also, such sparse attention extends to context attention. Resembling but different from the self-attention mechanism, Q is no longer a linear transformation of the source context but of the decoding states s. In the implementation, we replace Q with $W_Q s$, where $W_Q$ is still a learnable matrix.
In brief, the attention in our proposed Explicit Sparse Transformer sparsifies the attention weights. The attention can then become focused on the most contributive elements, and it is compatible with both self-attention and context attention. A simple implementation of this method is given in Appendix A.4; a sketch of the context-attention variant follows.
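The only change relative to self-attention is that the query is projected from the decoder states while keys and values come from the encoder outputs. The shapes and names below are our assumptions:

```python
import math
import torch

def sparse_context_attention(s, enc, w_q, w_k, w_v, k):
    """Context attention with top-k selection (a sketch).

    s: [tgt_len, d] decoder states; enc: [src_len, d] encoder outputs.
    """
    q = s @ w_q                                             # Q = W_Q s
    key, v = enc @ w_k, enc @ w_v
    p = q @ key.transpose(-2, -1) / math.sqrt(q.size(-1))   # Eq. (1)
    t = p.topk(k, dim=-1).values[..., -1:]                  # per-target-position threshold
    a = torch.softmax(p.masked_fill(p < t, float("-inf")), dim=-1)  # Eqs. (2)-(3)
    return a @ v                                            # Eq. (4)
```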
4 RESULTS
We conducted a series of experiments on three natural language processing tasks, including neural machine translation, image captioning and language modeling. Detailed experimental settings are in Appendix A.2.
4.1 NEURAL MACHINE TRANSLATION
Dataset To evaluate the performance of Explicit Sparse Transformer in NMT, we conducted experiments on three NMT tasks, English-to-German translation (En-De) with a large dataset, English-to-Vietnamese (En-Vi) translation and German-to-English translation (De-En) with two datasets of medium size. For En-De, we trained Explicit Sparse Transformer on the standard dataset for WMT 2014 En-De translation. The dataset consists of around 4.5 million sentence pairs. The source and target languages share a vocabulary of 32K sub-word units. We used the newstest 2013 for validation and the newstest 2014 as our test set. We report the results on the test set.
For En-Vi, we trained our model on the dataset in IWSLT 2015 (Cettolo et al., 2014). The dataset consists of around 133K sentence pairs from translated TED talks. The vocabulary size for source language is around 17,200 and that for target language is around 7,800. We used tst2012 for validation, and tst2013 for testing and report the testing results. For De-En, we used the dataset in IWSLT 2014. The training set contains 160K sentence pairs and the validation set contains 7K sentences.
[Table 2: Image captioning results (BLEU-4, METEOR, CIDEr) on the COCO Karpathy test split.]
Following Edunov et al. (2018), we used the same test set with around 7K sentences. The data were preprocessed with byte-pair encoding (Sennrich et al., 2016). The vocabulary size is 14,000.
Result Table 1 presents the results of the baselines and our Explicit Sparse Transformer on the three datasets. For En-De, Transformer-based models outperform the previous methods. Compared with the result of Transformer (Vaswani et al., 2017), Explicit Sparse Transformer reaches 29.4 BLEU, outperforming the vanilla Transformer by 0.3 BLEU. For En-Vi, the vanilla Transformer1 reaches 30.2, outperforming the previous state-of-the-art method (Huang et al., 2017). Our model, Explicit Sparse Transformer, achieves a new state-of-the-art performance of 31.1, a margin of 0.5 over the vanilla Transformer. For De-En, we demonstrate that Transformer-based models outperform the other baselines. Compared with the Transformer, our Explicit Sparse Transformer reaches a better performance of 35.6, an advantage of +0.3. To the best of our knowledge, Explicit Sparse Transformer achieves top-line performance on this dataset.
4.2 IMAGE CAPTIONING
Dataset We evaluated our approach on the image captioning task. Image captioning is a task that combines image understanding and language generation. We conducted experiments on the Microsoft COCO 2014 dataset (Chen et al., 2015a). It contains 123,287 images, each of which is paired with 5 descriptive sentences. We report the results and evaluate the image captioning model on the MSCOCO 2014 test set. We used the publicly available splits provided by Karpathy & Li (2015). The validation set and test set both contain 5,000 images.
Result Table 2 shows the results of the baseline models and Explicit Sparse Transformer on the COCO Karpathy test split. Transformer outperforms the mentioned baseline models. Explicit Sparse Transformer outperforms the implemented Transformer by +0.4 in terms of BLEU-4, +0.3 in terms of METEOR, and +0.7 in terms of CIDEr, which consistently demonstrates its effectiveness in image captioning.
4.3 LANGUAGE MODELING
Dataset Enwik82 is a large-scale dataset for character-level language modeling. It contains 100M bytes of unprocessed Wikipedia text. The inputs include Latin alphabets, non-Latin alphabets, XML markup and special characters. The vocabulary size is 205 tokens, including one for unknown characters. We used the same preprocessing method following Chung et al. (2015). The training set contains 90M bytes of data, and the validation set and the test set contain 5M bytes each.
Result Table 3 shows the results of the baseline models and Explicit Sparse Transformer-XL on the test set of enwik8. Compared with the other strong baselines, Transformer-XL reaches a better performance, and Explicit Sparse Transformer further outperforms Transformer-XL.
1 While we did not find published Transformer results on En-Vi, we reimplemented the vanilla Transformer with the same setting.
2 http://mattmahoney.net/dc/text.html
[Table 3: Parameter counts and BPC of language models on the enwik8 test set.]
5 DISCUSSION
In this section, we perform several analyses for further discussion of Explicit Sparse Transformer. First, we compare the proposed method of top-k selection before the softmax with previous sparse attention methods, including various variants of sparsemax (Martins & Astudillo, 2016; Correia et al., 2019; Peters et al., 2019). Second, we discuss the selection of the value of k. Third, we demonstrate that the top-k sparse attention method helps training. Finally, we conduct a series of qualitative analyses to visualize the proposed sparse attention in the Transformer.
5.1 COMPARISON WITH OTHER SPARSE ATTENTION METHODS
We compare the performance and speed of our method with previous sparse attention methods3 on the basis of a strong Transformer baseline implementation. Training and inference speeds are reported on PyTorch with the IWSLT 2014 De-En translation dataset; the batch size for inference is set to 128 sentences, and half-precision training (FP16) is applied.
As we can see from Table 4, the proposed sparse attention method achieves results comparable to previous sparse attention methods, but it is 2x faster than sparsemax in training and testing and 10x faster than entmax-alpha during inference. This is because our method introduces little extra computation for calculating the sparse attention scores.
The other group of sparse attention methods, which add local attention constraints (Child et al., 2019; Sukhbaatar et al., 2019), does not report performance on neural machine translation, so we do not include them in Table 4.
3 We borrow the implementation of Entmax-1.5 in TensorFlow from https://github.com/deep-spin/entmax, and the implementation of Sparsemax, Entmax-1.5 and Entmax-alpha in PyTorch from https://gist.github.com/justheuristic/60167e77a95221586be315ae527c3cbd. We have not found a reliable TensorFlow implementation of sparsemax and entmax-alpha in the Transformer (we tried to apply the official implementation of sparsemax in TensorFlow to tensor2tensor, but it reports a loss of NaN).
[Table 5: Results on the IWSLT En-Vi validation set: baseline (Base) vs. sparsification at training only (T) vs. at both training and prediction (T&P).]
5.2 HOW TO SELECT A PROPER K?
The natural question of how to choose the optimal k comes with the proposed method. We compare the effect of the value of k at exponential scales. We run experiments on En-Vi and De-En with 3 different initializations for each value of k, and report the mean BLEU scores on the validation set. Figure 3 shows that, with the exception of k = 16 on the En-Vi dataset, model performance generally rises first and then falls as k increases. Under the setting k ∈ {4, 8, 16, 32}, setting k to 8 achieves consistent improvements over the other values.
5.3 DOES THE PROPOSED SPARSE ATTENTION METHOD HELP TRAINING?
We are surprised to find that adding the sparsification only in the training phase can also bring an improvement in performance. We test this idea on IWSLT En-Vi and report the results on the validation set in Table 5. The improvement of 0.3 BLEU suggests that the vanilla Transformer may be overparameterized and that the sparsification encourages simplification of the model.
5.4 DOES EXPLICIT SPARSE TRANSFORMER ATTEND BETTER?
To perform a thorough evaluation of our Explicit Sparse Transformer, we conducted a case study and visualized the attention distributions of our model and the baseline for further comparison. Specifically, we conducted the analysis on the test set of En-Vi and randomly selected a sample pair of attention visualizations from both models.
The visualization of the context attention of the decoder's bottom layer is shown in Figure 4(a). The attention distribution in the left figure is fairly dispersed. On the contrary, the right figure shows that the sparse attention can choose to focus on only several positions, so that the model is forced to stay focused. For example, when generating the phrase “for thinking about my heart” (word-for-word translation from Vietnamese), the generated words cannot be aligned to the corresponding source words. As for Explicit Sparse Transformer, when generating the phrase “with all my heart”, the attention focuses on the corresponding positions with strong confidence.
The visualization of the decoder's top layer is shown in Figure 4(b). From the figure, the context attention at the top layer of the vanilla Transformer decoder suffers from focusing on the last source token. This is a common behavior of the attention in the vanilla Transformer. Such attention with wrong alignment cannot extract enough relevant source-side information for the generation. In contrast, Explicit Sparse Transformer, with a simple modification of the vanilla version, does not suffer from this problem but instead focuses on the relevant sections of the source context. The right figure, showing the attention distribution of Explicit Sparse Transformer, demonstrates that the proposed attention is able to perform accurate alignment.
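Visualizations like Figure 4 can be produced by rendering the attention matrix as a heatmap; a minimal matplotlib sketch, with hypothetical token lists:

```python
import matplotlib.pyplot as plt

def plot_attention(attn, src_tokens, tgt_tokens):
    """attn: [tgt_len, src_len] attention weights of one head/layer (e.g., a NumPy array)."""
    fig, ax = plt.subplots()
    ax.imshow(attn, cmap="viridis", aspect="auto")
    ax.set_xticks(range(len(src_tokens)))
    ax.set_xticklabels(src_tokens, rotation=90)
    ax.set_yticks(range(len(tgt_tokens)))
    ax.set_yticklabels(tgt_tokens)
    ax.set_xlabel("source")
    ax.set_ylabel("target")
    fig.tight_layout()
    plt.show()
```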
6 RELATED WORK
The attention mechanism has demonstrated outstanding performance in a number of neural-network-based methods, and it has been a focus of NLP studies (Bahdanau et al., 2014). A number of studies have been proposed to enhance the effects of the attention mechanism (Luong et al., 2015; Vaswani et al., 2017; Ke et al., 2018). Luong et al. (2015) propose local attention, and Yang et al. (2018) propose local attention for self-attention. Xu et al. (2015) propose hard attention that pays discrete attention in image captioning. Chandar et al. (2016) propose a combination of soft attention with hard attention to construct a hierarchical memory network. Lin et al. (2018) propose a temperature mechanism to change the softness of the attention distribution. Shen et al. (2018) propose an attention which can select a small proportion for focusing; it is trained by reinforcement learning algorithms (Williams, 1992). In terms of memory networks, Rae et al. (2016) propose sparse access memory.
Child et al. (2019) recently propose to use local attention and block attention to sparsify the Transformer. Our approach differs from theirs in that our method does not need to block sentences and still captures long-distance dependencies. Besides, we demonstrate the importance of Explicit Sparse Transformer in sequence-to-sequence learning. Although the variants of sparsemax (Martins & Astudillo, 2016; Correia et al., 2019; Peters et al., 2019) improve performance in machine translation tasks, we empirically demonstrate in Section 5.1 that our method introduces less computation into the standard Transformer and is much faster than those sparse attention methods on GPUs.
7 CONCLUSION
In this paper, we propose a novel model called Explicit Sparse Transformer. Explicit Sparse Transformer is able to make the attention in the vanilla Transformer more concentrated on the most contributive components. Extensive experiments show that Explicit Sparse Transformer outperforms the vanilla Transformer on three different NLP tasks. We conducted a series of qualitative analyses to investigate the reasons why Explicit Sparse Transformer outperforms the vanilla Transformer. Furthermore, we find an obvious problem with the attention at the top layer of the vanilla Transformer, and Explicit Sparse Transformer alleviates this problem effectively, with improved alignment.
A APPENDIX
A.1 BACKGROUND
A.1.1 ATTENTION MECHANISM
Bahdanau et al. (2014) first introduced the attention mechanism to learn the alignment between the target-side context and the source-side context, and Luong et al. (2015) formulated several versions for local and global attention. In general, the attention mechanism maps a query and a key-value pair to an output. The attention score function and softmax normalization can turn the query Q and the key K into a distribution α. Following the distribution α, the attention mechanism computes the expectation of the value V and finally generates the output C.
Take the original attention mechanism in NMT as an example. Both key K ∈ Rn×d and value V ∈ Rn×d are the sequence of output states from the encoder. Query Q ∈ Rm×d is the sequence of output states from the decoder, where m is the length of Q, n is the length of K and V , and d is the dimension of the states. Thus, the attention mechanism is formulated as:
$$C = \mathrm{softmax}(f(Q, K))\, V \tag{5}$$
where f refers to the attention score computation.
A.1.2 TRANSFORMER
Transformer (Vaswani et al., 2017), which is fully based on the attention mechanism, demonstrates the state-of-the-art performances in a series of natural language generation tasks. Specifically, we focus on self-attention and multi-head attention.
The idea of self-attention is, as the name implies, attention over the context itself. In the implementation, the query Q, key K and value V are linear transformations of the input x, so that $Q = W_Q x$, $K = W_K x$ and $V = W_V x$, where $W_Q$, $W_K$ and $W_V$ are learnable parameters. Therefore, the computation can be formulated as below:
$$C = \mathrm{softmax}\!\left(\frac{QK^T}{\sqrt{d}}\right) V \tag{6}$$
where d refers to the dimension of the states.
The aforementioned mechanism can be regarded as unihead attention. As for multi-head attention, the attention computation is separated into g heads (namely 8 for the base model and 16 for the large model in common practice). Thus multiple parts of the inputs can be computed individually. For the i-th head, the output can be computed as in the following formula:
$$C^{(i)} = \mathrm{softmax}\!\left(\frac{Q^{(i)} K^{(i)T}}{\sqrt{d_k}}\right) V^{(i)} \tag{7}$$
where $C^{(i)}$ refers to the output of the head, $Q^{(i)}$, $K^{(i)}$ and $V^{(i)}$ are the query, key and value of the head, and $d_k$ refers to the size of each head ($d_k = d/g$). Finally, the outputs of all heads are concatenated to form the output:
$$C = [C^{(1)}, \cdots, C^{(i)}, \cdots, C^{(g)}] \tag{8}$$
In common practice, C is sent through a linear transformation with weight matrix $W_c$ for the final output of multi-head attention.
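A compact sketch of the head split-and-concatenate computation in Eqs. (7) and (8), omitting the projection matrices for brevity (the naming is ours):

```python
import math
import torch

def multi_head_attention(q, key, v, g):
    """Split d into g heads, attend per head (Eq. 7), then concatenate (Eq. 8)."""
    m, d = q.shape
    n = key.shape[0]
    dk = d // g
    # Reshape to [g, len, dk] so each head attends independently
    qh = q.view(m, g, dk).transpose(0, 1)
    kh = key.view(n, g, dk).transpose(0, 1)
    vh = v.view(n, g, dk).transpose(0, 1)
    a = torch.softmax(qh @ kh.transpose(-2, -1) / math.sqrt(dk), dim=-1)
    c = a @ vh                              # [g, m, dk]
    return c.transpose(0, 1).reshape(m, d)  # concatenated heads: [m, d]
```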
1. What is the focus of the paper, and what problem does it aim to solve?
2. Is the approach well-motivated and placed in the literature?
3. Do the results support the claims, and are they scientifically rigorous?
4. Are there any concerns or questions regarding the choice of k for the top-k operation, and how it affects the model performance?
5. Are there any issues with the gradients propagation through the top-k operation, particularly in the initial stages of training?
6. Would open-sourcing the code and providing more ablation experiments improve the paper?

Review
1. What is the specific question/problem tackled by the paper?
The authors tackle the problem of sparse attention for various generative modeling tasks such as machine translation and image captioning. The main motivation behind studying this problem is the premise that sparse varieties of attention might generalize better than full attention. The authors propose a sparse attention mechanism based on the top-k selection where all attention values in a row are dropped if they are not higher than the k^{th} largest item in the row. Since this is a non-differentiable operation the authors propose to train this model by setting the gradients of the non-selected items to 0. The authors report results on machine translation, language modeling and image captioning.
2. Is the approach well motivated, including being well-placed in the literature?
In my view the main reasons to study sparse variants of attention are either 1) scale to sequences longer than are possible with full attention (this is e.g., the motivation behind [1]) or 2) generalize better than full attention. The motivation of this work seems to be the latter as the authors claim improvements in terms of performance over full attention. The authors cite prior work on sparse attention mechanisms.
3. Does the paper support the claims? This includes determining if results, whether theoretical or empirical, are correct and if they are scientifically rigorous.
The authors report good results on machine translation, showing that their sparse attention method improves performance on En-De to 29.4 BLEU, on De-En to 35.6 BLEU and on En-Vi to 31.1 BLEU, improving on full attention baselines. However, the authors have not submitted code for reproducing their results. The authors also do not report what choice of k is used for the top-k operation or how they chose the optimal k. The paper would be well served by more ablation experiments demonstrating the impact the choice of k has on model performance. For example, I would expect to be able to reproduce the original Transformer results using k = maximum sequence length.
I am also not fully clear about how gradients are propagated through the top-k operation. It seems that if an index is not selected (i.e., its attention value is smaller than the top-k threshold), its gradient is set to 0. However, this seems problematic: for example, in the initial stages an important item might have a low attention value due to random initialization and might not make it into the top-k. Because of the way gradients are propagated, it will not receive any gradient and therefore will not be incentivized to increase its value. This doesn't seem like a good solution to me.
Since the paper is mainly an empirical work, it would be improved by open-sourcing anonymized code so that its results and claims may be verified. It would also be improved by more ablation experiments or explanations of what the optimal choice of k should be for the top-k operation and how that affects the results.
[1] Generating Long Sequences with Sparse Transformers by Child et al. (https://arxiv.org/abs/1904.10509)
ICLR | Title
Sparse Transformer: Concentrated Attention Through Explicit Selection
Abstract
Self-attention-based Transformer has demonstrated the state-of-the-art performances in a number of natural language processing tasks. Self-attention is able to model long-term dependencies, but it may suffer from the extraction of irrelevant information in the context. To tackle the problem, we propose a novel model called Explicit Sparse Transformer. Explicit Sparse Transformer is able to improve the concentration of attention on the global context through an explicit selection of the most relevant segments. Extensive experimental results on a series of natural language processing and computer vision tasks, including neural machine translation, image captioning, and language modeling, all demonstrate the advantages of Explicit Sparse Transformer in model performance. We also show that our proposed sparse attention method achieves comparable or better results than the previous sparse attention method, but significantly reduces training and testing time. For example, the inference speed is twice that of sparsemax in Transformer model.
N/A
Self-attention-based Transformer has demonstrated the state-of-the-art performances in a number of natural language processing tasks. Self-attention is able to model long-term dependencies, but it may suffer from the extraction of irrelevant information in the context. To tackle the problem, we propose a novel model called Explicit Sparse Transformer. Explicit Sparse Transformer is able to improve the concentration of attention on the global context through an explicit selection of the most relevant segments. Extensive experimental results on a series of natural language processing and computer vision tasks, including neural machine translation, image captioning, and language modeling, all demonstrate the advantages of Explicit Sparse Transformer in model performance. We also show that our proposed sparse attention method achieves comparable or better results than the previous sparse attention method, but significantly reduces training and testing time. For example, the inference speed is twice that of sparsemax in Transformer model.
1 INTRODUCTION
Understanding natural language requires the ability to pay attention to the most relevant information. For example, people tend to focus on the most relevant segments to search for the answers to their questions in mind during reading. However, retrieving problems may occur if irrelevant segments impose negative impacts on reading comprehension. Such distraction hinders the understanding process, which calls for an effective attention.
This principle is also applicable to the computation systems for natural language. Attention has been a vital component of the models for natural language understanding and natural language generation. Recently, Vaswani et al. (2017) proposed Transformer, a model based on the attention mechanism for Neural Machine Translation(NMT). Transformer has shown outstanding performance in natural language generation tasks. More recently, the success of BERT (Devlin et al., 2018) in natural language processing shows the great usefulness of both the attention mechanism and the framework of Transformer.
However, the attention in vanilla Transformer has a obvious drawback, as the Transformer assigns credits to all components of the context. This causes a lack of focus. As illustrated in Figure 1, the attention in vanilla Transformer assigns high credits to many irrelevant words, while in Explicit Sparse Transformer, it concentrates on the most relevant k words. For the word “tim”, the most related words should be ”heart” and the immediate words. Yet the attention in vanilla Transformer does not focus on them but gives credits to some irrelevant words such as “him”.
Recent works have studied applying sparse attention in Transformer model. However, they either add local attention constraints (Child et al., 2019) which break long term dependency or hurt the time efficiency (Martins & Astudillo, 2016). Inspired by Ke et al. (2018) which introduce sparse credit assignment to the LSTM model, we propose a novel model called Explicit Sparse Transformer which is equipped with our sparse attention mechanism. We implement an explicit selection method based on top-k selection. Unlike vanilla Transformer, Explicit Sparse Transformer only pays attention to the k most contributive states. Thus Explicit Sparse Transformer can perform more concentrated attention than vanilla Transformer.
We first validate our methods on three tasks. For further investigation, we compare our methods with previous sparse attention methods and experimentally answer how to choose k in a series of qualitative analyses. We are surprised to find that the proposed sparse attention method can also help with training as a regularization method. Visual analysis shows that Explicit Sparse Transformer exhibits a higher potential in performing a high-quality alignment. The contributions of this paper are presented below:
• We propose a novel model called Explicit Sparse Transformer, which enhances the concentration of the Transformer’s attention through explicit selection.
• We conducted extensive experiments on three natural language processing tasks, including Neural Machine Translation, Image Captioning and Language Modeling. Compared with vanilla Transformer, Explicit Sparse Transformer demonstrates better performances in the above three tasks. Specifically, our model reaches the state-of-the-art performances in the IWSLT 2015 English-to-Vietnamese translation.
• Compared to previous sparse attention methods for transformers, our methods are much faster in training and testing, and achieves better results.
2 PREMIERS
The review to the attention mechanism and the attention-based framework of Transformer can be found in Appendix A.1.
3 EXPLICIT SPARSE TRANSFORMER
Lack of concentration in the attention can lead to the failure of relevant information extraction. To this end, we propose a novel model, Explicit Sparse Transformer, which enables the focus on only a few elements through explicit selection. Compared with the conventional attention, no credit will be assigned to the value that is not highly correlated to the query. We provide a comparison between the attention of vanilla Transformer and that of Explicit Sparse Transformer in Figure 2.
𝒒𝟏
𝒒𝟐 ...
𝒒𝒍𝒒
𝒌𝟏 𝒌𝟐 … 𝒌𝒍𝒌
𝒑𝟏𝟏 𝒑𝟏2 … 𝒑𝟏𝑙𝑘 𝒑𝟐𝟏 𝒑𝟐𝟐 … 𝒑2𝑙𝑘 ... ... ... ...
𝒑𝒍𝒒𝟏 𝒑𝒍𝒒𝟐 … 𝒑𝒍𝒒𝑙𝑘
𝑄
𝒕𝟏
𝒕𝟐 ...
𝒕𝒍𝒒
𝑡
𝟏 𝟎 … 𝟎
𝟎 𝟏 … 𝟏 ... ... ... ... 𝟏 𝟎 … 𝟎
-
sign
𝕄
+ 1 −𝕄
−∞
x
𝒑𝟏𝟏 −∞ … −∞
−∞ 𝒑𝟐𝟐 … 𝒑𝟐𝒍𝒌 ... ... ... ...
𝒑𝒍𝒒𝟏 −∞ … −∞
𝝈
𝜶𝟏𝟏 𝟎 … 𝟎
𝟎 𝜶𝟐𝟐 … 𝜶𝟐𝒍𝒌 ... ... ... ...
𝒂𝒍𝒒𝟏 𝟎 … 𝟎
𝐴
Softmax normalization
𝑃
Top-k selection
Explicit Sparse Transformer is still based on the Transformer framework. The difference is in the implementation of self-attention. The attention is degenerated to the sparse attention through top-k selection. In this way, the most contributive components for attention are reserved and the other irrelevant information are removed. This selective method is effective in preserving important information and removing noise. The attention can be much more concentrated on the most contributive elements of value. In the following, we first introduce the sparsification in self-attention and then extend it to context attention.
In the unihead self-attention, the key components, the queryQ[lQ, d], keyK[lK , d] and value V [lV , d], are the linear transformation of the source context, namely the input of each layer, where Q =WQx, K = WKx and V = WV x. Explicit Sparse Transformer first generates the attention scores P as demonstrated below:
P = QKT√ d
(1)
Then the model evaluates the values of the scores P based on the hypothesis that scores with larger values demonstrate higher relevance. The sparse attention masking operationM(·) is implemented upon P in order to select the top-k contributive elements. Specifically, we select the k largest element of each row in P and record their positions in the position matrix (i, j), where k is a hyperparameter. To be specific, say the k-th largest value of row i is ti, if the value of the j-th component is larger than ti, the position (i, j) is recorded. We concatenate the threshold value of each row to form a vector t = [t1, t2, · · · , tlQ ]. The masking functionsM(·, ·) is illustrated as follows:
M(P, k)ij = { Pij if Pij ≥ ti (k-th largest value of row i) −∞ if Pij < ti (k-th largest value of row i)
(2)
With the top-k selection, the high attention scores are selected through an explicit way. This is different from dropout which randomly abandons the scores. Such explicit selection can not only guarantee the preservation of important components, but also simplify the model since k is usually a small number such as 8, detailed analysis can be found in 5.2. The next step after top-k selection is normalization:
A = softmax(M(P, k)) (3) where A refers to the normalized scores. As the scores that are smaller than the top k largest scores are assigned with negative infinity by the masking functionM(·, ·), their normalized scores, namely the probabilities, approximate 0. We show the back-propagation process of Top-k selection in A.3. The output representation of self-attention C can be computed as below:
C = AV (4)
The output is the expectation of the value following the sparsified distribution A. Following the distribution of the selected components, the attention in the Explicit Sparse Transformer model can obtain more focused attention. Also, such sparse attention can extend to context attention. Resembling but different from the self-attention mechanism, the Q is no longer the linear transformation of the source context but the decoding states s. In the implementation, we replace Q with WQs, where WQ is still learnable matrix.
In brief, the attention in our proposed Explicit Sparse Transformer sparsifies the attention weights. The attention can then become focused on the most contributive elements, and it is compatible to both self-attention and context attention. The simple implementation of this method is in the Appendix A.4.
4 RESULTS
We conducted a series of experiments on three natural language processing tasks, including neural machine translation, image captioning and language modeling. Detailed experimental settings are in Appendix A.2.
4.1 NEURAL MACHINE TRANSLATION
Dataset To evaluate the performance of Explicit Sparse Transformer in NMT, we conducted experiments on three NMT tasks, English-to-German translation (En-De) with a large dataset, English-to-Vietnamese (En-Vi) translation and German-to-English translation (De-En) with two datasets of medium size. For En-De, we trained Explicit Sparse Transformer on the standard dataset for WMT 2014 En-De translation. The dataset consists of around 4.5 million sentence pairs. The source and target languages share a vocabulary of 32K sub-word units. We used the newstest 2013 for validation and the newstest 2014 as our test set. We report the results on the test set.
For En-Vi, we trained our model on the dataset in IWSLT 2015 (Cettolo et al., 2014). The dataset consists of around 133K sentence pairs from translated TED talks. The vocabulary size for source language is around 17,200 and that for target language is around 7,800. We used tst2012 for validation, and tst2013 for testing and report the testing results. For De-En, we used the dataset in IWSLT 2014. The training set contains 160K sentence pairs and the validation set contains 7K sentences.
Model BLEU-4 METEOR CIDEr
Following Edunov et al. (2018), we used the same test set with around 7K sentences. The data were preprocessed with byte-pair encoding (Sennrich et al., 2016). The vocabulary size is 14,000.
Result Table 1 presents the results of the baselines and our Explicit Sparse Transformer on the three datasets. For En-De, Transformer-based models outperform the previous methods. Compared with the result of Transformer (Vaswani et al., 2017), Explicit Sparse Transformer reaches 29.4 in BLEU score evaluation, outperforming vanilla Transformer by 0.3 BLEU score. For En-Vi, vanilla Transformer1 reaches 30.2, outperforming the state-of-the-art method (Huang et al., 2017). Our model, Explicit Sparse Transformer, achieves a new state-of-the-art performance, 31.1, by a margin of 0.5 over vanilla Transformer. For De-En, we demonstrate that Transformer-based models outperform the other baselines. Compared with Transformer, our Explicit Sparse Transformer reaches a better performance, 35.6. Its advantage is +0.3. To the best of our knowledge, Explicit Sparse Transformer reaches a top line performance on the dataset.
4.2 IMAGE CAPTIONING
Dataset We evaluated our approach on the image captioning task. Image captioning is a task that combines image understanding and language generation. We conducted experiments on the Microsoft COCO 2014 dataset (Chen et al., 2015a). It contains 123,287 images, each of which is paired 5 with descriptive sentences. We report the results and evaluate the image captioning model on the MSCOCO 2014 test set for image captioning. We used the publicly-available splits provided by Karpathy & Li (2015). The validation set and test set both contain 5,000 images.
Result Table 2 shows the results of the baseline models and Explicit Sparse Transformer on the COCO Karpathy test split. Transformer outperforms the mentioned baseline models. Explicit Sparse Transformer outperforms the implemented Transformer by +0.4 in terms of BLEU-4, +0.3 in terms of METEOR, +0.7 in terms of CIDEr. , which consistently proves its effectiveness in Image Captioning.
4.3 LANGUAGE MODELING
Dataset Enwiki82 is large-scale dataset for character-level language modeling. It contains 100M bytes of unprocessed Wikipedia texts. The inputs include Latin alphabets, non-Latin alphabets, XML markups and special characters. The vocabulary size 205 tokens, including one for unknown characters. We used the same preprocessing method following Chung et al. (2015). The training set contains 90M bytes of data, and the validation set and the test set contains 5M respectively.
Result Table 3 shows the results of the baseline models and Explicit Sparse Transformer-XL on the test set of enwiki8. Compared with the other strong baselines, Transformer-XL can reach a better performance, and Explicit Sparse Transformer outperforms Transformer-XL with an advantage.
1While we did not find the results of Transformer on En-Vi, we reimplemented our vanilla Transformer with the same setting.
2http://mattmahoney.net/dc/text.html
Model Params BPC
5 DISCUSSION
In this section, we performed several analyses for further discussion of Explicit Sparse Transformer. First, we compare the proposed method of topk selection before softmax with previous sparse attention method including various variants of sparsemax (Martins & Astudillo, 2016; Correia et al., 2019; Peters et al., 2019). Second, we discuss about the selection of the value of k. Third, we demonstrate that the top-k sparse attention method helps training. In the end, we conducted a series of qualitative analyses to visualize proposed sparse attention in Transformer.
5.1 COMPARISON WITH OTHER SPARSE ATTENTION METHODS
We compare the performance and speed of our method with the previous sparse attention methods3 on the basis of strong implemented transformer baseline. The training and inference speed are reported on the platform of Pytorch and IWSLT 2014 De-En translation dataset, the batch size for inference is set to 128 in terms of sentence and half precision training(FP-16) is applied.
As we can see from Table 4, the proposed sparse attention method achieve the comparable results as previous sparse attention methods, but the training and testing speed is 2x faster than sparsemax and 10x faster than Entmax-alpha during the inference. This is due to the fact that our method does not introduce too much computation for calculating sparse attention scores.
The other group of sparse attention methods of adding local attention constraints into attention (Child et al., 2019; Sukhbaatar et al., 2019), do not show performance on neural machine translation, so we do not compare them in Table 4.
3We borrow the implementation of Entmax1.5 in Tensorflow from https://github.com/ deep-spin/entmax, and the implementation of Sparsemax, Entmax-1.5, Entmax-alpha in Pytorch from https://gist.github.com/justheuristic/60167e77a95221586be315ae527c3cbd. We have not found a reliable Tensorflow implementation of sparsemax and entmax-alpha in the transformer (we tried to apply the official implementation of sparsemax in Tensorflow to tensor2tensor, but it reports loss of NaN.)
Task Base T T&P
5.2 HOW TO SELECT A PROPER K?
The natural question of how to choose the optimal k comes with the proposed method. We compare the effect of the value of k at exponential scales. We perform experiments on En-Vi and De-En from 3 different initializations for each value of K, and report the mean BLEU scores on the valid set. The figure 3 shows that regardless of the value of 16 on the En-Vi dataset, the model performance generally rises first and then falls as k increases. Under the setting of the k ∈ {4, 8, 16, 32}, setting the value of k to 8 achieves consistent improvements over the
5.3 DO THE PROPOSED SPARSE ATTENTION METHOD HELPS TRAINING?
We are surprised to find that only adding the sparsification in the training phase can also bring an improvement in the performance. We experiment this idea on IWSLT En-Vi and report the results on the valid set in Table 5, . The improvement of 0.3 BLEU scores shows that vanilla Transformer may be overparameterized and the sparsification encourages the simplification of the model.
5.4 DO THE EXPLICIT SPARSE TRANSFORMER ATTEND BETTER?
To perform a thorough evaluation of our Explicit Sparse Transformer, we conducted a case study and visualize the attention distributions of our model and the baseline for further comparison. Specifically, we conducted the analysis on the test set of En-Vi, and randomly selected a sample pair of attention visualization of both models.
The visualization of the context attention of the decoder’s bottom layer in Figure 4(a). The attention distribution of the left figure is fairly disperse. On the contrary, the right figure shows that the sparse attention can choose to focus only on several positions so that the model can be forced to stay focused. For example, when generating the phrase “for thinking about my heart”(Word-to-word translation
from Vietnamese), the generated word cannot be aligned to the corresponding words. As to Explicit Sparse Transformer, when generating the phrase ”with all my heart”, the attention can focus on the corresponding positions with strong confidence.
The visualization of the decoder’s top layer is shown in Figure 4(b). As the figure shows, the context attention at the top layer of the vanilla Transformer decoder suffers from focusing on the last source token, a common behavior of attention in the vanilla Transformer. Such misaligned attention cannot extract enough relevant source-side information for generation. In contrast, Explicit Sparse Transformer, with a simple modification to the vanilla version, does not suffer from this problem and instead focuses on the relevant sections of the source context. The right figure, showing the attention distribution of Explicit Sparse Transformer, demonstrates that our proposed attention performs accurate alignment.
6 RELATED WORK
The attention mechanism has demonstrated outstanding performance in a number of neural-network-based methods and has been a focus of NLP studies (Bahdanau et al., 2014). A number of studies have been proposed to enhance the attention mechanism (Luong et al., 2015; Vaswani et al., 2017; Ke et al., 2018). Luong et al. (2015) propose local attention, and Yang et al. (2018) propose local attention for self-attention. Xu et al. (2015) propose hard attention that pays discrete attention in image captioning. Chandar et al. (2016) propose combining soft attention with hard attention to construct a hierarchical memory network. Lin et al. (2018) propose a temperature mechanism to change the softness of the attention distribution. Shen et al. (2018) propose an attention that selects a small proportion of tokens to focus on, trained by reinforcement learning algorithms (Williams, 1992). In terms of memory networks, Rae et al. (2016) propose sparse access memory.
Child et al. (2019) recently propose using local attention and block attention to sparsify the Transformer. Our approach differs from theirs in that our method does not need to split sentences into blocks and can still capture long-distance dependencies. Besides, we demonstrate the importance of Explicit Sparse Transformer in sequence-to-sequence learning. Although the variants of sparsemax (Martins & Astudillo, 2016; Correia et al., 2019; Peters et al., 2019) improve machine translation tasks, we empirically demonstrate in Section 5.1 that our method introduces less computation into the standard Transformer and is much faster than those sparse attention methods on GPUs.
7 CONCLUSION
In this paper, we propose a novel model called Explicit Sparse Transformer, which makes the attention in the vanilla Transformer more concentrated on the most contributive components. Extensive experiments show that Explicit Sparse Transformer outperforms the vanilla Transformer on three different NLP tasks, and we conducted a series of qualitative analyses to investigate why. Furthermore, we identify an obvious problem with the attention at the top layer of the vanilla Transformer, and Explicit Sparse Transformer alleviates this problem effectively with improved alignment.
A APPENDIX
A.1 BACKGROUND
A.1.1 ATTENTION MECHANISM
Bahdanau et al. (2014) first introduced the attention mechanism to learn the alignment between the target-side context and the source-side context, and Luong et al. (2015) formulated several versions of local and global attention. In general, the attention mechanism maps a query and key-value pairs to an output. The attention score function and softmax normalization turn the query Q and the key K into a distribution α; following the distribution α, the attention mechanism computes the expectation of the value V and finally generates the output C.
Take the original attention mechanism in NMT as an example. Both the key K ∈ R^{n×d} and the value V ∈ R^{n×d} are sequences of output states from the encoder, and the query Q ∈ R^{m×d} is the sequence of output states from the decoder, where m is the length of Q, n is the length of K and V, and d is the dimension of the states. Thus, the attention mechanism is formulated as:
C = softmax(f(Q,K))V (5)
where f refers to the attention score computation.
A.1.2 TRANSFORMER
Transformer (Vaswani et al., 2017), which is fully based on the attention mechanism, demonstrates the state-of-the-art performances in a series of natural language generation tasks. Specifically, we focus on self-attention and multi-head attention.
The ideology of self-attention is, as the name implies, the attention over the context itself. In the implementation, the query Q, key K and value V are the linear transformation of the input x, so that Q = WQx, K = WKx and V = WV x where WQ, WK and WV are learnable parameters. Therefore, the computation can be formulated as below:
C = softmax(QK^T / √d) V    (6)
where d refers to the dimension of the states.
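As an illustration, here is a minimal PyTorch sketch of this computation; the function name, tensor names, and shapes are ours, not the authors' code.

```python
import math
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention as in Eq. (6).

    x: (n, d) input states; w_q, w_k, w_v: (d, d) learnable projections.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v                       # Q, K, V projections of x
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # (n, n) logits QK^T / sqrt(d)
    return F.softmax(scores, dim=-1) @ v                      # expectation over the values
```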
The aforementioned mechanism can be regarded as the unihead attention. As to the multi-head attention, the attention computation is separated into g heads (namely 8 for basic model and 16 for large model in the common practice). Thus multiple parts of the inputs can be computed individually. For the i-th head, the output can be computed as in the following formula:
C^(i) = softmax( Q^(i) K^(i)T / √d_k ) V^(i)    (7)
where C^(i) refers to the output of the head, Q^(i), K^(i) and V^(i) are the query, key and value of the head, and d_k refers to the size of each head (d_k = d/g). Finally, the outputs of all heads are concatenated for the output:

C = [C^(1), · · · , C^(i), · · · , C^(g)]    (8)

In common practice, C is then sent through a linear transformation with weight matrix W_c for the final output of multi-head attention.
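Continuing the sketch above (same imports), the head-splitting of Eqs. (7)-(8) can be written roughly as follows; this is an illustrative sketch, not the reference implementation, and the final W_c projection is omitted.

```python
def multi_head_attention(q, k, v, num_heads):
    """Multi-head attention (Eqs. 7-8): split d dimensions into g heads of size d_k = d/g."""
    n, d = q.shape
    d_k = d // num_heads
    # reshape to (num_heads, n, d_k) so every head attends independently
    qs, ks, vs = (t.view(n, num_heads, d_k).transpose(0, 1) for t in (q, k, v))
    heads = F.softmax(qs @ ks.transpose(-2, -1) / math.sqrt(d_k), dim=-1) @ vs
    # concatenate the per-head outputs back to shape (n, d)
    return heads.transpose(0, 1).reshape(n, d)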
However, soft attention can assign weights to many words that are less relevant to the query. Therefore, in order to improve concentration in attention for effective information extraction, we study the problem of sparse attention in the Transformer and propose our model, Explicit Sparse Transformer.
A.2 EXPERIMENTAL DETAILS
We use the default setting of Vaswani et al. (2017) for the implementation of our proposed Explicit Sparse Transformer. Hyperparameters, including the beam size and the number of training steps, are tuned on the valid set.
Neural Machine Translation Training For En-Vi translation, we use the default scripts and hyperparameter settings of tensor2tensor4 v1.11.0 to preprocess, train, and evaluate our model. We use the default scripts of fairseq5 v0.6.1 to preprocess the De-En and En-De datasets. We train the model on the En-Vi dataset for 35K steps with a batch size of 4K. For the IWSLT 2014 De-En dataset, the batch size is also set to 4K; we update the model every 4 steps and train for 90 epochs. For the WMT 2014 En-De dataset, we train the model for 72 epochs on 4 GPUs with an update frequency of 32 and a batch size of 3584. We train all models on a single RTX 2080 Ti for the two small IWSLT datasets and on a single machine with 4 RTX TITANs for WMT14 En-De. To reduce the impact of random initialization, we perform experiments with three different initializations for all models and report the highest score for the small datasets.
Evaluation We use case-sensitive tokenized BLEU (Papineni et al., 2002) for the evaluation of WMT14 En-De, and case-insensitive BLEU for IWSLT 2015 En-Vi and IWSLT 2014 De-En, following Lin et al. (2018). As in Vaswani et al. (2017), compound splitting is used for WMT14 En-De. For WMT14 En-De and IWSLT 2014 De-En, we save checkpoints every epoch and average the last 10 checkpoints every 5 epochs; we select the averaged checkpoint with the best valid BLEU and report its BLEU score on the test set. For IWSLT 2015 En-Vi, we save checkpoints every 600 seconds and average the last 20 checkpoints.
Image Captioning We again use the default Transformer setting for training our proposed Explicit Sparse Transformer. We report the standard automatic evaluation metrics computed with the COCO captioning evaluation toolkit6 (Chen et al., 2015b), which includes the commonly-used metrics BLEU-4 (Papineni et al., 2002), METEOR (Denkowski & Lavie, 2014), and CIDEr (Vedantam et al., 2015).
Language Models We follow Dai et al. (2019) and use their implementation for our Explicit Sparse Transformer. Following previous work (Chung et al., 2015; Dai et al., 2019), we use BPC (−E[log₂ P(x_{t+1} | h_t)]), the average number of Bits-Per-Character, for evaluation; lower BPC indicates better performance. As to the model implementation, we implement Explicit Sparse Transformer-XL, which is based on the base version of Transformer-XL.7 Transformer-XL is a Transformer-based model with better capability of representing long sequences.
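As a small illustration (the function name and interface are ours), BPC is simply the negative log-likelihood converted from nats to bits and averaged over characters:

```python
import math

def bits_per_character(total_nll_nats, num_chars):
    """Convert a summed negative log-likelihood (in nats) into bits-per-character."""
    return total_nll_nats / num_chars / math.log(2)
```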
A.3 THE BACK-PROPAGATION PROCESS OF TOP-K SELECTION
The masking function M(·, ·) is defined as follows:
M(P, k)_ij = { P_ij   if P_ij ≥ t_i (the k-th largest value of row i)
            { −∞     if P_ij < t_i    (9)
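A minimal PyTorch sketch of this masking function (the helper name `topk_mask` is ours):

```python
import torch

def topk_mask(p, k):
    """Masking function M(P, k) of Eq. (9): keep the k largest scores in each row."""
    t = p.topk(k, dim=-1).values[..., -1:]        # t_i: the k-th largest value of row i
    return p.masked_fill(p < t, float('-inf'))    # entries below t_i are set to -inf
```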
Denote M = M(P, k). We regard t_i as constants. When back-propagating,
∂M_ij / ∂P_kl = 0   (i ≠ k or j ≠ l)    (10)
∂M_ij / ∂P_ij = { 1   if P_ij ≥ t_i (the k-th largest value of row i)
               { 0   if P_ij < t_i    (11)
4https://github.com/tensorflow/tensor2tensor
5https://github.com/pytorch/fairseq
6https://github.com/tylin/coco-caption
7Due to our limited resources (TPU), we did not implement the big version of Explicit Sparse Transformer-XL.
The next step after top-k selection is normalization:
A = softmax(M(P, k)) (12)
where A refers to the normalized scores. When backpropagating,
∂A_ij / ∂P_kl = Σ_{m=1}^{l_Q} Σ_{n=1}^{l_K} (∂A_ij / ∂M_mn)(∂M_mn / ∂P_kl)    (13)
             = (∂A_ij / ∂M_kl)(∂M_kl / ∂P_kl)    (14)
             = { ∂A_ij / ∂M_kl   if P_kl ≥ t_k (the k-th largest value of row k)
               { 0               if P_kl < t_k    (15)
The softmax function is evidently differentiable; therefore, we have now computed all the gradients involved in top-k selection.
A.4 IMPLEMENTATION
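As a minimal PyTorch sketch of the idea for single-head self-attention (reusing the `topk_mask` helper from A.3; the names and the default k are illustrative, not the authors' exact code):

```python
import math
import torch.nn.functional as F

def sparse_self_attention(x, w_q, w_k, w_v, k=8):
    """Single-head self-attention with explicit top-k sparsification."""
    q, key, v = x @ w_q, x @ w_k, x @ w_v
    p = q @ key.transpose(-2, -1) / math.sqrt(q.size(-1))  # attention logits P
    a = F.softmax(topk_mask(p, k), dim=-1)                 # Eq. (9) then Eq. (12)
    return a @ v
```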
Figure 5 shows the code for the idea in the case of single-head self-attention; the proposed method is easy to implement and to plug into the Transformer model. | 1. What is the main contribution of the paper, and how does it improve upon previous transformer models?
2. What are the strengths and weaknesses of the proposed method, particularly in its simplicity and computational efficiency?
3. Are there any concerns regarding the experimental results, such as missing baselines or unsubstantiated claims?
4. How does the reviewer assess the clarity and completeness of the paper's content, including the appropriateness of figures and tables?
5. Are there any suggestions for future research directions or potential applications of the proposed technique? | Review | Review
The paper proposes "sparse self-attention", where only top K activations are kept in the softmax. The resulting transformer model is applied to NMT, image caption generation and language modeling, where it outperformed a vanilla Transformer model.
In general, the idea is quite simple and easy to implement. It doesn't add any computational or memory cost. The paper is well written and easy to read. The diverse experimental results show that it brings an improvement. And I think this can be combined with other improvements of Transformer.
However, there are quite many baselines are missing from the tables. The sota on De-En is actually 35.7 by Fonollosa et.al. On enwik8, Transformer XL is not the best medium sized model as the authors claimed. See below:
NTM En-De:
- Wu et.al. Pay Less Attention with Lightweight and Dynamic Convolutions, 2019
- Ott et.al. Scaling Neural Machine Translation, 2018
NTM En-Vi:
- Wang et.al. SwitchOut: an Efficient Data Augmentation Algorithm for Neural Machine Translation, 2018
NTM De-En:
- Wu et.al. Pay Less Attention with Lightweight and Dynamic Convolutions, 2019
- Fonollosa et.al. Joint Source-Target Self Attention with Locality Constraints, 2019
- He et.al. Layer-Wise Coordination between Encoder and Decoder for Neural Machine Translation, 2018
LM Enwik8:
- Sukhbaatar et.al, Adaptive Attention Span in Transformers, 2019
Other comments:
- More experimental details are needed. What is the value of K? How do different K values affect performance? What is the number of parameters of the NMT models?
- The claim "top layer of the vanilla Transformer focuses on the end position of the text" can't be true in general; it is probably only true for certain tasks.
- Where do the numbers in Figure 1 come from? Is it a single attention head or the average of all heads?
- Page 4, "the high are ..." is probably a typo?
- The related work is missing "Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes" by Rae et al., which also uses sparse attention.
ICLR | Title
GPT-Critic: Offline Reinforcement Learning for End-to-End Task-Oriented Dialogue Systems
Abstract
Training a task-oriented dialogue agent can be naturally formulated as offline reinforcement learning (RL) problem, where the agent aims to learn a conversational strategy to achieve user goals, only from a dialogue corpus. It is very challenging in terms of RL since the natural language action space is astronomical, while feasible (syntactically and semantically correct) actions are very sparse. Thus, standard RL methods easily fail and generate responses diverging from human language, even when fine-tuning a powerful pre-trained language model. In this paper, we introduce GPT-Critic, an offline RL method for task-oriented dialogue. GPT-Critic is built upon GPT-2, fine-tuning the language model through behavior cloning of the critic-guided self-generated sentences. GPT-Critic is essentially free from the issue of diverging from human language since it learns from the sentences sampled from the pre-trained language model. In the experiments, we demonstrate that our algorithm outperforms the state-of-the-art in the task-oriented dialogue benchmarks including MultiWOZ 2.0 and ConvLab.
1 INTRODUCTION
Building an end-to-end task-oriented dialogue agent is one of the promising applications of natural language processing (NLP), yet it is challenging due to the large language action space and the limited availability of human-annotated data. Recently, large-scale pre-trained language models (LMs) have achieved remarkable success in various NLP tasks with prohibitively large vocabularies (Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020; Raffel et al., 2019). The current best-performing end-to-end conversational agents for task-oriented dialogue combine pre-training on a large-scale corpus with fine-tuning on downstream tasks (Ham et al., 2020; Yang et al., 2021; Lin et al., 2020; Peng et al., 2021). This combination of pre-training and fine-tuning significantly improves overall performance in task-oriented dialogues. However, supervised fine-tuning (i.e. imitation learning of the dialogue corpus) alone may not be sufficient to learn an optimal dialogue strategy, since the corpus often contains suboptimal dialogues collected from human participants of diverse expertise levels. Thus, in order to optimize the task performance of the conversational agent, goal-oriented training (i.e. reinforcement learning) is an essential and promising direction to pursue.
Training a task-oriented conversational agent from a dialogue corpus can be naturally formulated as an offline reinforcement learning (RL) problem (Levine et al., 2020; Fujimoto et al., 2019; Jaques et al., 2020), which offers the prospect of optimizing the policy solely from a fixed dataset without online environment interaction. Most existing offline RL methods are built on the off-policy Actor-Critic framework, which performs iterative optimization of the policy (i.e. actor) and the action-value function (i.e. critic) (Fujimoto et al., 2019; Janner et al., 2019; Kumar et al., 2020). Yet a naive application of these offline RL methods generally results in poor dialogue strategies that generate responses in no way similar to human language (Lewis et al., 2017; Zhao et al., 2019; Jang et al., 2020).
Weighted behavior cloning (BC) (Wang et al., 2020) is one of the representative offline RL algorithms, which is free from the issue of diverging from human language. Weighted BC amounts
to filtering out bad actions and imitating good actions. In the context of task-oriented dialogues, that would be equivalent to simply dropping the unsuccessful dialogues from the corpus. However, dropping whole dialogues from training would be wasteful, since they may still contain task-specific information that is useful for properly responding to user requests at the intermediate steps.
In this paper, we present an offline RL algorithm for task-oriented dialogue, which can be adopted for any generative pre-trained language model. Our algorithm, GPT-Critic, aims to revise unsuccessful dialogues into successful ones, rather than removing them as done in weighted BC. It starts with fine-tuning the GPT-2 model and learning the action-value function (critic) using the dialogue corpus. Then, GPT-Critic generates a strategically promising action that is selected based on the value estimated by the critic. GPT-Critic updates the policy through behavior cloning of the critic-guided self-generated responses. This is in contrast to the previous methods that perform weighted behavior cloning on the dialogue corpus, where the action choice is restricted to the support in the dataset (Wang et al., 2020). Compared to traditional actor-critic methods, since GPT-Critic does not rely on policy gradient and updates the policy within the support of generated actions from the GPT-2, it thus inherits GPT-2’s ability to generate human-like responses. In the experiments, we demonstrate that GPT-Critic outperforms the state-of-the-art end-to-end dialogue agent in the task-oriented dialogue benchmarks including MultiWOZ 2.0 (Budzianowski et al., 2018) and ConvLab (Zhu et al., 2020).
2 BACKGROUND
2.1 OFFLINE REINFORCEMENT LEARNING FOR TASK-ORIENTED DIALOGUES
We consider the task-oriented dialogue system that can be modeled as a partially observable Markov decision process (POMDP) (Williams & Young, 2007) defined by the tuple 〈S, A, O, T, Z, R, γ〉, where S is the set of environment states s = 〈g, h〉 (the underlying state, consisting of the user goal g and the dialogue history h), A is the set of actions a (a sequence of tokens representing the dialogue act and system response), O is the set of observations o (user utterances), T(s′|s, a) = Pr(s_{t+1} = s′ | s_t = s, a_t = a) is the transition function, Z(o|s′, a) = Pr(o_{t+1} = o | s_{t+1} = s′, a_t = a) is the observation probability, R(g, h, a) is the reward function indicating the utility of executing action a given history h and user goal g, and γ ∈ (0, 1) is a discount factor. The history at time step t, h_t = {o_0, a_0, . . . , o_{t−1}, a_{t−1}, o_t}, is the sequence of all previous observations and actions. Since the underlying state s (e.g. the user goal) is not directly observable, the agent makes decisions based on the entire observation-action history. The policy π(a_t | h_t) is a mapping from the history h_t to a probability distribution over A. The goal is to find an optimal policy π* that maximizes the expected cumulative reward, i.e. π* = arg max_π E_π[ Σ_{t=0}^∞ γ^t R(g, h_t, a_t) ]. The action-value function of policy π is defined as Q^π(h, a) := E_π[ Σ_{t=0}^∞ γ^t R(g, h_t, a_t) | h_0 = h, a_0 = a ], where Q^π is the unique solution of the Bellman equation Q^π(h, a) = E_g[R(g, h, a)] + γ E_π[Q^π(h′, a′)].
Using offline RL for dialogue policy optimization, the agent optimizes the policy from the pre-collected dataset D = {{(g^j, h^j_t, a^j_t, r^j_t, h^j_{t+1})}_{t=0}^T}_{j=1}^N without online environment interaction during the intermediate stages of training. Prior offline RL algorithms (Fujimoto et al., 2019; Janner et al., 2019; Kumar et al., 2020) rely on the off-policy actor-critic method, where the critic network is trained by minimizing the temporal difference error with respect to the target policy π:
arg min_φ E_{(h_t, a_t, r_t, h_{t+1}) ∼ D} [ ( r_t + γ E_{a_{t+1} ∼ π(h_{t+1})}[ Q_φ̄(h_{t+1}, a_{t+1}) ] − Q_φ(h_t, a_t) )² ]    (1)
where φ̄ denotes the parameters of the target network. As discussed in prior work (Fujimoto et al., 2019; Kumar et al., 2020), optimizing this loss can be challenging in the offline RL setting due to overestimation in the bootstrapping process, which takes out-of-distribution (OOD) actions to evaluate the value of the next state.
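Schematically, Eq. (1) corresponds to a loss of the following form, where `critic`, `target_critic`, and `policy` are hypothetical stand-ins for the corresponding networks; this is a sketch, not the exact implementation.

```python
import torch

def off_policy_td_loss(critic, target_critic, policy, batch, gamma=0.99):
    """Eq. (1): the bootstrap action is sampled from the *target policy*,
    so it can be out-of-distribution for the dataset (overestimation risk)."""
    h, a, r, h_next = batch
    a_next = policy.sample(h_next)                    # possibly an OOD action
    with torch.no_grad():
        target = r + gamma * target_critic(h_next, a_next)
    return ((critic(h, a) - target) ** 2).mean()
```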
2.2 END-TO-END TASK-ORIENTED DIALOGUE SYSTEM
We focus on the MultiWOZ 2.0 dataset (Budzianowski et al., 2018), a representative benchmark for task-oriented dialogue. The MultiWOZ dataset is a fully-annotated corpus of human-human task-oriented conversations, collected via the Wizard-of-Oz setting (Kelley, 1984). The traditional approach to building a task-oriented dialogue system adopts a modular pipeline consisting of the following four modules: 1) a natural language understanding (NLU) module (Kim et al., 2017; Zhu et al., 2020) that identifies the user’s intent and extracts the information of slots and their values; 2) a dialogue state tracking (DST) module (Williams et al., 2013) that infers the belief state; 3) a dialogue policy (POL) module that decides the system action; and 4) a natural language generation (NLG) module (Wen et al., 2015) that generates the system response corresponding to the system action. Recently, end-to-end task-oriented dialogue methods leveraging pre-trained language models have been proposed (Yang et al., 2021; Ham et al., 2020; Lin et al., 2020; Peng et al., 2021; Hosseini-Asl et al., 2020) and significantly improve overall performance in task-oriented dialogues. In this paper, our algorithm is built upon UBAR (Yang et al., 2021), which is based on GPT-2 (Radford et al., 2019) and is currently the state-of-the-art end-to-end dialogue agent for the MultiWOZ domain.
3 OFFLINE REINFORCEMENT LEARNING FOR END-TO-END TASK-ORIENTED DIALOGUE SYSTEMS
The corpus collected from human-human conversations inevitably contains dialogues that are unsuccessful in terms of task completion; approximately 20% of the dialogues in the MultiWOZ dataset fail to meet the user goal. Therefore, naive behavior cloning of the whole dataset would limit the performance of the conversational agent, since an agent that imitates failures will inevitably be suboptimal. Yet dropping the unsuccessful dialogues from the corpus, as done in weighted BC, is also undesirable, since they may contain task-specific information that is useful for properly responding to user requests. We thus aim to revise unsuccessful dialogues into successful ones, preventing repetition of past failures while improving task performance.
In this section, we present GPT-Critic, an offline RL algorithm for task-oriented dialogue. GPT-Critic is analogous to the Actor-Critic method: the GPT (actor) decides which action to take, while the critic estimates how good the action is and provides a signal for policy improvement. Still, GPT-Critic is distinct from Actor-Critic methods in that it does not rely on policy gradients, which are generally known to cause the issue of diverging from human language (Lewis et al., 2017; Zhao et al., 2019). Instead, we sample a set of action candidates using GPT-2 and pick the best one using the critic, which constitutes a revised dialogue corpus. Then, we perform supervised fine-tuning of GPT-2 on the revised dialogue corpus. This learning procedure does not hurt the agent’s capability to generate human-like sentences, given that the generated action candidates are all natural-looking sentences thanks to the power of the large pre-trained LM. Our algorithm is built upon GPT-2 but can be adopted for any generative pre-trained language model.
3.1 POLICY EVALUATION
Our GPT-Critic starts by training the action-value function (i.e. the critic), which evaluates candidate responses. The architecture of the critic network basically follows GPT-2, with a different last layer to compute the Q-value. The critic network Q_φ is parameterized to share the Transformer (Vaswani et al., 2017) layers of GPT-2, whose parameters are updated only during the policy improvement step. The critic network is trained by minimizing the temporal difference error with respect to the dataset D:
arg min_φ E_{(h_t, a_t, r_t, h_{t+1}, a_{t+1}) ∼ D} [ ( r_t + γ Q_φ̄(h_{t+1}, a_{t+1}) − Q_φ(h_t, a_t) )² ]    (2)
where φ̄ denotes the parameters of the target network. Note that Eq. (2) is an on-policy evaluation on the dataset D, which can be optimized very stably since every a_{t+1} is an in-distribution sample from D. This is in contrast to Eq. (1), which requires evaluating out-of-distribution actions sampled from the target policy π; the OOD action-value estimate can be very unreliable if the target policy deviates much from the dataset.
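Under the same hypothetical interfaces as the sketch of Eq. (1) above, Eq. (2) differs only in that the bootstrap action comes from the dataset itself:

```python
def on_policy_td_loss(critic, target_critic, batch, gamma=0.99):
    """Eq. (2): a_{t+1} is the dataset's own next action (SARSA-style target),
    so no out-of-distribution action is ever evaluated."""
    h, a, r, h_next, a_next = batch               # a_next comes from the corpus
    with torch.no_grad():
        target = r + gamma * target_critic(h_next, a_next)
    return ((critic(h, a) - target) ** 2).mean()
```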
This kind of on-policy evaluation has been explored in the offline RL context for stable policy optimization (Brandfonbrener et al., 2021; Goo & Niekum, 2021), but those approaches are limited to one-step policy improvement: once the policy π is improved by the initial on-policy Q-function (i.e. π(s) = arg max_a Q(s, a)), the new policy deviates from the dataset policy, so further policy iteration requires off-policy evaluation. In contrast, our GPT-Critic performs policy improvement by generating an improved dataset based on the learned critic, on which we can again perform on-policy evaluation. As a consequence, GPT-Critic enjoys stable multi-step policy iteration by alternating between on-policy evaluation and policy improvement via dataset revision, as discussed in the following section.
3.2 POLICY IMPROVEMENT VIA DATASET REVISION
In task-oriented dialogues, the reward is given by an external program provided as part of the dataset, which checks whether the user goal is satisfied by examining the dialogue history. To generate the improved dataset, we adopt the common automatic evaluation protocol for dialogue systems, where the agent generates the dialogue act and system response at every system turn with fixed user utterances. More formally, GPT-Critic generates a new dataset containing revised responses by:
D_{i+1} = { (g, h_t, a*_t, r*_t, h*_{t+1}) | a*_t = arg max_{a ∈ {a^k}_N} Q_φ(h_t, a), {a^k}_N ∼ π^i_θ(h_t), h_t ∈ D_i }    (3)
where {a^k}_N is a set of N response candidates generated from the policy π (i.e. the fine-tuned GPT-2), and D_i is the dataset at the i-th iteration. In task-oriented dialogues, a reward function R(g, h, a) is provided that computes a reward given a user goal, dialogue history, and system action. The revised reward r*_t = R(g, h_t, a*_t) is computed from the given user goal, the dialogue history, and the revised system action a*_t. Since the dialogue history is the sequence of all previous observations and actions, the revised history h*_{t+1} = {o_0, a_0, . . . , o_t, a*_t, o_{t+1}} is obtained by replacing the original action a_t in h_{t+1} with the revised action a*_t. Examples of revised responses can be found in Appendix B.
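A sketch of this revision step for a single turn; `policy.sample_candidates` and `reward_fn` are hypothetical stand-ins for the fine-tuned GPT-2 sampler and the goal-checking program:

```python
def revise_turn(policy, critic, reward_fn, goal, history, n=5):
    """Eq. (3) for one turn: sample N candidates, keep the one with highest Q."""
    candidates = policy.sample_candidates(history, n)       # N responses from GPT-2
    a_star = max(candidates, key=lambda a: critic(history, a))
    r_star = reward_fn(goal, history, a_star)               # recompute the reward
    return a_star, r_star
```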
In order to address the prohibitively large language action space, we explicitly consider a set of response candidates generated from the fine-tuned GPT-2.
Algorithm 1 GPT-Critic
Input: training dataset D_0 = {{(g^j, h^j_t, a^j_t, r^j_t, h^j_{t+1})}_{t=0}^T}_{j=1}^N, policy network (GPT) π_θ, critic network Q_φ
Fine-tune the initial policy represented by the GPT-2 model (e.g. UBAR)
for each iteration i do
    Update the critic by minimizing the temporal difference error until convergence:
        arg min_φ E_{(g, h_t, a_t, r_t, h_{t+1}, a_{t+1}) ∼ D_i} [ ( r_t + γ Q_φ̄(h_{t+1}, a_{t+1}) − Q_φ(h_t, a_t) )² ]
    Update the dataset by critic-guided self-generation:
        D_{i+1} = { (g, h_t, a*_t, r*_t, h*_{t+1}) | a*_t = arg max_{a ∈ {a^k}_N} Q_φ(h_t, a), {a^k}_N ∼ π^i_θ(h_t), h_t ∈ D_i }
    Update the policy by behavior cloning of the critic-guided self-generated dataset (early stopping according to the loss on the validation set):
        arg min_θ E_{(h_t, a_t) ∼ D_{i+1}} [ − log π_θ(a_t | h_t) ]
end for
GPT-Critic selects the most promising response by computing the Q-values over the response candidates. It then performs behavior cloning of the critic-guided self-generated dialogues:
arg min_θ E_{(h_t, a_t) ∼ D_{i+1}} [ − log π_θ(a_t | h_t) ]    (4)
where θ denotes the parameters of GPT-2. Because the policy improvement of GPT-Critic is performed by behavior cloning of dialogues generated by GPT-2, GPT-Critic inherits GPT-2’s ability to generate human-like responses.
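Schematically, Eq. (4) is ordinary language-model fine-tuning on the revised responses (again with hypothetical interfaces):

```python
def behavior_cloning_step(policy, optimizer, batch):
    """Eq. (4): maximize the log-likelihood of the critic-selected responses."""
    h, a_star = batch
    loss = -policy.log_prob(a_star, context=h).mean()   # -log pi_theta(a* | h)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```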
We can theoretically show that the policy produced by the above policy improvement step has a higher value than the old policy, and furthermore that a policy updated with a larger number of candidate actions has a higher value than one updated with a smaller number. We formalize this in Theorem 1.

Theorem 1. (Policy Improvement) Given a policy π and a number of sampled actions N ≥ 1, if we update the new policy π^new_N by

∀s, π^new_N(·|s) = arg max_{a ∈ {a^k}_N} Q^π(s, a), where {a^k}_N ∼ π(s),

then Q^{π^new_N}(s, a) ≥ Q^π(s, a) holds for all s, a. Furthermore, for any N, M such that N ≥ M ≥ 1, Q^{π^new_N}(s, a) ≥ Q^{π^new_M}(s, a) holds for all s, a. (Proof in Appendix A.)
We describe our algorithm, GPT-Critic, in Algorithm 1; it alternates between policy evaluation and policy improvement via dataset revision until the policy performance converges.
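Putting the pieces together, the alternation of Algorithm 1 can be sketched as follows, where `train_critic` and `fine_tune` are hypothetical wrappers around Eqs. (2) and (4) and per-turn bookkeeping of the revised histories h* is omitted for brevity:

```python
def gpt_critic(policy, critic, reward_fn, dataset, iterations=3):
    """Algorithm 1: alternate on-policy evaluation and dataset revision."""
    for _ in range(iterations):
        train_critic(critic, dataset)                        # Eq. (2), to convergence
        dataset = [(g, h) + revise_turn(policy, critic, reward_fn, g, h)
                   for (g, h, *_rest) in dataset]            # Eq. (3): (g, h, a*, r*)
        fine_tune(policy, dataset)                           # Eq. (4), early stopping
    return policy
```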
4 RELATED WORK
End-to-End Task-Oriented Dialogue Systems. The traditional approach to building a task-oriented dialogue system adopts a modular pipeline consisting of natural language understanding, dialogue state tracking, dialogue policy, and natural language generation. Recently, pre-trained LM-based end-to-end task-oriented dialogue agents, in which all sub-tasks are recast as a single sequence prediction problem, have been proposed (Ham et al., 2020; Hosseini-Asl et al., 2020) and have significantly improved overall performance in task-oriented dialogues. There are a number of variants of GPT-2-based end-to-end task-oriented dialogue agents. Yang et al. (2021) leverage the entire dialogue session at every dialogue turn. Peng et al. (2021) adopt transfer learning and machine teaching for training a GPT-2-based dialogue agent. Lin et al. (2020) present efficient dialogue state tracking with a minimal generation length and leverage pre-trained language models for task-oriented dialogues.
Reinforcement Learning for Task-Oriented Dialogue Systems. Applying standard RL methods straightforwardly to optimize a task-oriented dialogue agent causes the issue of diverging from human language. To address this problem, interleaving reinforcement learning with supervised learning has been proposed, but it is still not free from the issue (Lewis et al., 2017). Recently, latent representation models for language actions have been introduced to address this problem (Zhao et al., 2019; Yarats & Lewis, 2018). They disentangle the semantics of the utterance from natural language generation and perform goal-based training in the space of latent variables instead of directly optimizing utterances. However, they cannot be directly applied to large-scale pre-trained language models, which are not designed to work inherently with discrete latent variables. Jaques et al. (2020) use KL-control to keep the policy close to its prior policy, but it still suffers from divergence from human language even with carefully chosen hyperparameters. Furthermore, Jang et al. (2020) adopt Bayes-adaptive Monte-Carlo planning for negotiation dialogue and use it as a policy improvement operator. This approach can prevent the issue of diverging from human language through policy improvement based on behavior cloning of self-generated dialogues. However, it assumes a user model that is difficult enough to be considered a separate problem.
Offline Reinforcement Learning. There have been extensive studies on offline RL (Fujimoto et al., 2019; Levine et al., 2020; Kumar et al., 2020; Wang et al., 2020). Most prior works are built on the off-policy actor-critic framework and focus on the overestimation issue caused by taking OOD actions (Kumar et al., 2019; Lee et al., 2020; Fujimoto et al., 2019; Jaques et al., 2020; Kumar et al., 2020). However, a naive application of these offline RL methods suffers from the issue of diverging from human language in task-oriented dialogues (Lewis et al., 2017; Zhao et al., 2019; Jang et al., 2020). On the other hand, there are a number of recent works on weighted behavior cloning, where a policy is trained by a variant of the supervised learning loss (Wang et al., 2020; Peng et al., 2019; Siegel et al., 2020). Weighted behavior cloning approaches filter out bad actions and then perform behavior cloning on high-quality data. However, in task-oriented dialogues, simply dropping the unsuccessful dialogues from the corpus is undesirable, since they may contain task-specific information that is useful to properly respond to user requests. Our GPT-Critic aims to revise unsuccessful dialogues into successful ones, in contrast to weighted behavior cloning on a fixed training dataset, where the action choice is restricted to the support of the dataset (Wang et al., 2020; Peng et al., 2019; Siegel et al., 2020). More recently, Chen et al. (2021) introduced Decision Transformer, a Transformer-based architecture that casts RL as conditional sequence modeling. These behavior-cloning-based offline RL methods can be applied directly to task-oriented dialogues without the aforementioned issue, but their results are similar to those of plain behavior cloning.
5 EXPERIMENTS
In this section, we report the experimental results of GPT-Critic under both automatic and human evaluation. First, we evaluate the performance of GPT-Critic on MultiWOZ 2.0 (Budzianowski et al., 2018) as a dataset-based automatic evaluation, compared with baseline methods including offline RL algorithms. Second, for a more realistic evaluation, we conduct a simulator-based evaluation on the ConvLab framework (Zhu et al., 2020). Third, we conduct a human evaluation to assess the quality of the generated responses. Finally, we give a qualitative analysis of our method using generated dialogue examples on the MultiWOZ 2.0 training dataset, showing how GPT-Critic improves performance through behavior cloning of self-generated dialogues; the qualitative analysis with generated dialogue examples can be found in Appendix B.
5.1 EXPERIMENTAL SETUP
We implement GPT-Critic based on the HuggingFace Transformers library (Wolf et al., 2019) and the codebase of UBAR (Yang et al., 2021), a GPT-2-based end-to-end task-oriented dialogue agent that is the current state of the art on the MultiWOZ 2.0 dataset. For the generative pre-trained language model, we use DistilGPT2 (Sanh et al., 2019), a distilled version of GPT-2. Figure 2 shows the architecture of our policy and critic networks based on GPT-2. The critic network is parameterized to share the Transformer layers of GPT-2, whose parameters are updated only during the policy improvement step. For the hyperparameters of fine-tuning the GPT-2 model, we follow the setting in the public code of UBAR (Yang et al., 2021). We use N = 5 candidate actions {a^k}_N, and the set of candidates is constructed by vanilla softmax sampling from the policy, rather than beam search, in order to collect diverse actions. In each behavior cloning iteration, all models are fine-tuned from the pre-trained GPT-2 on the training dataset and early-stopped according to the loss on the validation set.
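With the HuggingFace Transformers API, vanilla softmax sampling of the N = 5 candidates can be done roughly as follows; `model`, `tokenizer`, `context_ids`, and the generation budget are illustrative, not the exact configuration used.

```python
# context_ids: the tokenized dialogue context for the current system turn
candidate_ids = model.generate(
    context_ids,
    do_sample=True,                          # vanilla softmax sampling, not beam search
    num_return_sequences=5,                  # N = 5 candidate actions
    max_length=context_ids.size(-1) + 60,    # illustrative generation budget
    pad_token_id=tokenizer.eos_token_id,
)
```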
5.2 EVALUATION ON THE MULTIWOZ DATASET
We evaluate our algorithm on the MultiWOZ 2.0 dataset, one of the representative task-oriented dialogue benchmarks. MultiWOZ 2.0 is a large-scale multi-domain Wizard-of-Oz dataset, where a tourist (i.e. user) converses with a clerk (i.e. system) at the information center in a touristic city. It consists of 8438/1000/1000 dialogues for training/validation/testing. For end-to-end evaluation on MultiWOZ 2.0, we use the following automatic evaluation metrics: 1) Inform: whether the system provides an appropriate entity; 2) Success: whether the system answers all the requested information; 3) BLEU: the fluency of the generated response (Papineni et al., 2002). We also report the Combined Score as an overall quality measure (Combined = (Inform + Success) × 0.5 + BLEU). We compare the performance of GPT-Critic with the following algorithms: 1) SFN+RL (Mehri et al., 2019), a seq2seq network that incorporates several pre-trained dialogue modules into a neural dialogue model; 2) DAMD (Zhang et al., 2020), a domain-aware multi-decoder network with a multi-action data augmentation method; 3) SimpleTOD (Hosseini-Asl et al., 2020), a GPT-2-based end-to-end dialogue agent in which all sub-tasks are recast as a single sequence prediction problem; 4) SOLOIST (Peng et al., 2021), a GPT-2-based end-to-end dialogue agent with transfer learning and machine teaching; 5) MinTL (Lin et al., 2020), an efficient dialogue state tracking method with a minimal generation length, predicting the difference between old and new states; 6) UBAR (Yang et al., 2021), a GPT-2-based end-to-end dialogue agent that leverages the entire dialogue session at every dialogue turn. We implement our algorithm in the codebase of UBAR (Yang et al., 2021), and the result of UBAR is reproduced by adapting its code to the same evaluation settings as the other papers1. Moreover, we also compare with a data augmentation method, DATA AUGMENTATION, which naively fine-tunes the GPT-2 model on additional data generated by vanilla softmax sampling from the trained policy.
In addition, we compare with recent offline RL algorithms that are free from the issue of diverging from human language: 1) CRR (Wang et al., 2020), a value-filtered regression method that performs weighted behavior cloning of the offline dataset, and 2) Decision Transformer (Chen et al., 2021), a Transformer-based architecture that casts RL as conditional sequence modeling. For a fair comparison, we use the same pre-trained GPT-2 model as the policy network to train CRR and Decision Transformer. Moreover, to show that policy-gradient-based standard RL algorithms suffer from diverging from human language, we provide examples of responses generated by a policy-gradient-based standard RL algorithm in Appendix C.
1The score reported in the UBAR paper is the result of using the true dialogue state for DB search. In order to compare under the same conditions with other algorithms, we record the result of UBAR to use the predicted dialogue state for DB search.
Table 1 and Table 2 show the results of policy iteration. Table 1 shows the performance of the training dataset and of the critic-guided self-generated dialogues used in each policy improvement step. Table 2 reports the intermediate performance of behavior cloning on the training dataset and on the critic-guided self-generated dialogues at each policy iteration. As shown in Table 1 and Table 2, the performance of the critic-guided self-generated dialogues improves gradually, and the performance of GPT-Critic is consistently improved through behavior cloning of the improved dataset.
Table 3 summarizes the overall performance of GPT-Critic and the baseline algorithms in the end-to-end response generation setting, where the generated dialogue state and dialogue act are used for the DB search and response generation. The results show that GPT-Critic achieves the best performance in terms of inform rate, success rate, and combined score. Moreover, the BLEU score of GPT-Critic matches those of the other pre-trained LM-based methods, since GPT-Critic inherits GPT-2’s ability to generate human-like responses through behavior cloning of responses generated by GPT-2. These results show that GPT-Critic improves the task performance of the agent without diverging from human language. In addition, as shown in Table 3, naive data augmentation is not effective, since in principle it does not change GPT-2’s sampling distribution.
For the offline RL baselines, CRR and Decision Transformer produce results that do not diverge from human language, since their policies are also trained by behavior cloning. However, both algorithms show limited performance because they perform behavior cloning on a fixed dataset. CRR has achieved remarkable success in continuous control tasks by performing weighted behavior cloning of the training dataset filtered by the critic, but it is not effective in task-oriented dialogues because of data scarcity. Furthermore, to evaluate Decision Transformer, we adopt a delayed return in which the agent receives the cumulative reward at the end of the dialogue, since the agent cannot observe the user goal. Without observing the user goal at test time, Decision Transformer reduces to behavior cloning of successful dialogues.
5.3 EVALUATION ON CONVLAB EVALUATOR
In order to evaluate the performance of dialogue agents in an end-to-end fashion, we conduct a simulator-based evaluation on ConvLab (Zhu et al., 2020). ConvLab is an open-source toolkit for building task-oriented dialogue systems and performing end-to-end evaluation. Simulator-based evaluation is more reliable than dataset-based automatic evaluation because it measures performance while interacting with a user simulator. To interact with dialogue systems, ConvLab provides an agenda-based user simulator (Schatzmann et al., 2007) consisting of a BERT (Devlin et al., 2019) NLU, a rule-based policy, and a template-based NLG. We compare the performance of GPT-Critic with the baseline algorithms interacting with the same user simulator and user goals. We report results with the following metrics: 1) Complete: whether the system completes the goal; 2) Success: whether all the user requests have been informed and the booked entities satisfy the constraints; 3) Book: how many booked entities satisfy the user constraints; 4) Inform (Precision / Recall / F1): how many user requests have been informed; 5) Turn (success / all): the average number of turns for successful/all dialogues.
We report the performance of GPT-Critic and the baselines in Table 7. Each algorithm is tested for 1000 runs with randomly sampled user goals. The results show that GPT-Critic achieves the best performance in all metrics related to task accomplishment. However, they also show that GPT-Critic takes more dialogue turns to accomplish the task, because it is trained to maximize the success rate without considering the number of dialogue turns.
5.4 HUMAN EVALUATION
We also conduct a human evaluation on Amazon Mechanical Turk (AMT) to assess the quality of the responses generated by GPT-Critic and the baseline algorithms, using the evaluation protocol of Yang et al. (2021); Lin et al. (2020); Zhang et al. (2020). Specifically, human workers on AMT were asked to read the context and the responses generated via interactive simulation on ConvLab, and then score the following two metrics on a Likert scale (1-5): 1) Appropriateness: whether the generated responses are appropriate for the given context; 2) Fluency: whether the generated responses are comprehensible and human-like. We compare GPT-Critic with the same baselines as in the ConvLab evaluation. Figure 3 summarizes the overall results of the human evaluation, where 60 workers evaluated the quality of 30 randomly selected dialogues for each algorithm. The results show that GPT-Critic significantly outperforms the baseline algorithms in appropriateness, which is related to task accomplishment. Moreover, the fluency results show that GPT-Critic does not hurt the agent’s capability to generate human-like sentences.
6 CONCLUSION
We presented GPT-Critic, an offline RL algorithm for task-oriented dialogue systems that can be adopted for any generative pre-trained language model. GPT-Critic learns an end-to-end task-oriented dialogue agent without the issue of diverging from human language. It starts by fine-tuning the GPT-2 model and learning the critic from the dialogue corpus; it then updates the policy through behavior cloning of the critic-guided self-generated responses, and is therefore essentially free from the issue of diverging from human language. In the experiments, we demonstrated that GPT-Critic outperforms state-of-the-art algorithms on task-oriented dialogue benchmarks including MultiWOZ 2.0 and ConvLab.
ACKNOWLEDGMENTS
This work was supported by the National Research Foundation (NRF) of Korea (NRF-2019R1A2C1087634, NRF-2021M3I1A1097938) and by Institute of Information & communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT) (No.2019-0-00075, No.2020-0-00940, No.2021-0-02068).
A POLICY IMPROVEMENT THEOREM
Theorem 1. (Policy Improvement) Given a policy π and a number of sampled actions N ≥ 1, if we update the new policy π^new_N by

∀s, π^new_N(·|s) = arg max_{a ∈ {a^k}_N} Q^π(s, a), where {a^k}_N ∼ π(s),

then Q^{π^new_N}(s, a) ≥ Q^π(s, a) holds for all s, a. Furthermore, for any N, M such that N ≥ M ≥ 1, Q^{π^new_N}(s, a) ≥ Q^{π^new_M}(s, a) holds for all s, a.
Proof. For any s, a and N ≥ M:

Q^{π^new_M}(s, a)
= E_P[ R(s_t, a_t) + γ E_{a_{t+1} ∼ π^new_M(s_{t+1})}[ Q^{π^new_M}(s_{t+1}, a_{t+1}) ] | s_t = s, a_t = a ]
= E_P[ R(s_t, a_t) + γ E_{{a_i}_M ∼ π(s_{t+1})}[ max_{a′ ∈ {a_i}_M} Q^π(s_{t+1}, a′) ] | s_t = s, a_t = a ]
≤ E_P[ R(s_t, a_t) + γ E_{{a_i}_N ∼ π(s_{t+1})}[ max_{a′ ∈ {a_i}_N} Q^π(s_{t+1}, a′) ] | s_t = s, a_t = a ]
= E_P[ R(s_t, a_t) + γ E_{a^new_{t+1} ∼ π^new_N(s_{t+1})}[ Q^π(s_{t+1}, a^new_{t+1}) ] | s_t = s, a_t = a ]
= E_P[ R(s_t, a_t) + γ ( E_{a^new_{t+1} ∼ π^new_N(s_{t+1})}[ R(s_{t+1}, a^new_{t+1}) ] + γ E_{a_{t+2} ∼ π(s_{t+2})}[ Q^π(s_{t+2}, a_{t+2}) ] ) | s_t = s, a_t = a ]
= E_P[ Σ_{τ=t}^{t+1} E_{a_{τ+1} ∼ π^new_N(s_{τ+1})}[ γ^{τ−t} R(s_τ, a_τ) ] + γ² E_{a_{t+2} ∼ π(s_{t+2})}[ Q^π(s_{t+2}, a_{t+2}) ] | s_t = s, a_t = a ]
≤ E_P[ Σ_{τ=t}^{t+1} E_{a_{τ+1} ∼ π^new_N(s_{τ+1})}[ γ^{τ−t} R(s_τ, a_τ) ] + γ² E_{a^new_{t+2} ∼ π^new_N(s_{t+2})}[ Q^π(s_{t+2}, a^new_{t+2}) ] | s_t = s, a_t = a ]
= E_P[ Σ_{τ=t}^{t+2} E_{a_{τ+1} ∼ π^new_N(s_{τ+1})}[ γ^{τ−t} R(s_τ, a_τ) ] + γ³ E_{a_{t+3} ∼ π(s_{t+3})}[ Q^π(s_{t+3}, a_{t+3}) ] | s_t = s, a_t = a ]
...
≤ E_P[ Σ_{τ=t}^{∞} E_{a_{τ+1} ∼ π^new_N(s_{τ+1})}[ γ^{τ−t} R(s_τ, a_τ) ] | s_t = s, a_t = a ]
= Q^{π^new_N}(s, a)

For the case N = 1, note that π^new_1 simply reduces to π, which concludes the proof:

Q^{π^new_N}(s, a) ≥ Q^{π^new_M}(s, a) ≥ Q^{π^new_1}(s, a) = Q^π(s, a) for all s, a and N ≥ M ≥ 1.
B QUALITATIVE ANALYSIS OF SELF-GENERATED DIALOGUES
In this section, we provide a qualitative analysis of the critic-guided self-generated responses of GPT-Critic. Table 5 shows critic-guided self-generated dialogue examples that illustrate how GPT-Critic improves performance through behavior cloning of self-generated dialogues, compared with unsuccessful dialogues in the MultiWOZ training dataset. Each example shows the critic-guided self-generated dialogue act and delexicalized system response, compared with the original dialogue. The generated responses cover all of the user’s requests with abundant information, whereas the original responses of the unsuccessful dialogues do not contain all the requested information. GPT-Critic improves performance through behavior cloning of these revised responses. Moreover, Table 5 shows that the generated dialogues do not diverge from human language. Since GPT-Critic updates the policy through behavior cloning of the self-generated human-like responses, it is essentially free from the issue of diverging from human language.
C QUALITATIVE EXAMPLES OF STANDARD REINFORCEMENT LEARNING ALGORITHM
In this section, we provide examples of responses generated by a standard RL algorithm (REINFORCE) to show that policy-gradient-based standard RL algorithms suffer from diverging from human language. As shown in Table 6, the policy-gradient-based RL algorithm generates responses that diverge from human language.
D EVALUATION FOR THE QUALITY OF GENERATED DIALOGUE STATES AND DIALOGUE ACTS
We additionally conducted experiments with UBAR and GPT-Critic to explicitly evaluate the generated dialogue states and dialogue acts (rather than only the final system responses). The table below shows the performance for the predicted dialogue state (joint accuracy / slot accuracy) and the predicted dialogue act (dialogue act F1), where the mean performance and the standard error are reported. As the table shows, GPT-Critic outperforms UBAR on dialogue act F1, which measures the performance of dialogue policy prediction. However, for dialogue state tracking, there is no significant performance gap between GPT-Critic and UBAR, since GPT-Critic revises only the dialogue act and system response (which are considered the action in GPT-Critic) but not the dialogue state in the dataset.
2. What are the claimed contributions of the proposed method in improving dialogue policies and responses?
3. Do you have any concerns or suggestions regarding the experimental setup and comparisons with other works?
4. How does the reviewer assess the novelty, reasonableness, and effectiveness of the proposed approach?
5. Are there any parts of the paper that need clarification or further explanation? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes an offline RL method applied to an end-to-end task-oriented dialogue model, where the proposed GPT-Critic is built on GPT-2 and fine-tuned on the self-generated sentences for policy updating. The paper claims that it is free from the issue of diverging from human language (a common issue in standard RL), because it learns from the sentences directly sampled from the pre-trained language model. The conducted experiments show that the proposed model achieves better performance compared to other task-oriented end-to-end dialogue models in both offline and online settings (MultiWOZ and ConvLab respectively).
Review
This paper focuses on improving the dialogue policy together with the responses by utilizing a pre-trained language model and offline RL. The claimed contributions include:
The proposed method is free from the common issue of diverging from human language, because it learns from the sentences sampled from the pre-trained LM.
The proposed method outperforms other SOTA models in offline and interactive online settings, MultiWOZ and ConvLab respectively.
The proposed method is reasonable and moderately novel. The experimental results are promising for both settings. However, there are unclear parts to be addressed or clarified.
Because the policy learning procedure utilizes additionally generated dialogue acts and corresponding responses, it is natural to ask whether naively fine-tuning the GPT-2 model on the additional generated data would also improve the dialogue model in terms of its policy and responses (similar to a data augmentation method). Did the authors try this as a baseline for comparison? This method should be included in the experiments in order to justify that the proposed RL approach is necessary.
The paper mentions that standard RL methods easily fail and generate responses diverging from human language, even when fine-tuning a pre-trained LM. Hence, it would be better to also include results of other standard RL algorithms to support this claim. (The current experiments only include models that are free from this issue.)
The experiments on MultiWOZ only evaluate the response generation results. However, evaluating the dialogue policy is also important to justify that the learned policy is suitable; it is unclear why the authors only show response generation results.
The experiments contain two setups: offline response evaluation on MultiWOZ and interactive simulation via ConvLab. Both settings show better performance for the proposed method. The paper would be stronger with real-user interactions, because prior results (DSTC in ConvLab) have reported that performance may differ between the simulation environment and real-user interactions. Conducting real-human interactions would better justify the effectiveness of the proposed RL method in practical scenarios.
In sum, the proposed method is relatively novel and the idea is reasonable. The performance seems promising in both settings. However, the paper does not include detailed descriptions of the proposed method, making it hard for readers to follow, and some additional experiments are needed to better justify its claims.
ICLR | Title
GPT-Critic: Offline Reinforcement Learning for End-to-End Task-Oriented Dialogue Systems
Abstract
Training a task-oriented dialogue agent can be naturally formulated as offline reinforcement learning (RL) problem, where the agent aims to learn a conversational strategy to achieve user goals, only from a dialogue corpus. It is very challenging in terms of RL since the natural language action space is astronomical, while feasible (syntactically and semantically correct) actions are very sparse. Thus, standard RL methods easily fail and generate responses diverging from human language, even when fine-tuning a powerful pre-trained language model. In this paper, we introduce GPT-Critic, an offline RL method for task-oriented dialogue. GPT-Critic is built upon GPT-2, fine-tuning the language model through behavior cloning of the critic-guided self-generated sentences. GPT-Critic is essentially free from the issue of diverging from human language since it learns from the sentences sampled from the pre-trained language model. In the experiments, we demonstrate that our algorithm outperforms the state-of-the-art in the task-oriented dialogue benchmarks including MultiWOZ 2.0 and ConvLab.
1 INTRODUCTION
Building an end-to-end task-oriented dialogue agent is one of the promising applications of natural language processing (NLP) tasks, yet challenging due to large language action spaces and limited availability of human-annotated data. Recently, large-scale pre-trained language models (LM) have achieved remarkable successes in various NLP tasks with prohibitively large vocabulary (Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020; Raffel et al., 2019). The current best performing end-to-end conversational agents for a task-oriented dialogue system utilize a pre-training on largescale corpus and fine-tuning on downstream tasks (Ham et al., 2020; Yang et al., 2021; Lin et al., 2020; Peng et al., 2021). This combination of pre-training and fine-tuning significantly improves overall performance in the task-oriented dialogues. However, supervised fine-tuning (i.e. imitation learning of the dialogue corpus) alone may not be sufficient to learn an optimal dialogue strategy since the corpus often contains suboptimal dialogues collected from human participants of diverse expertise levels. Thus, in order to optimize the task performance of the conversational agent, goaloriented training (i.e. reinforcement learning) is an essential and promising direction to pursue.
Training a task-oriented conversational agent from a dialogue corpus can be naturally formulated as offline reinforcement learning (RL) problem (Levine et al., 2020; Fujimoto et al., 2019; Jaques et al., 2020), which offers the prospect to optimize the policy solely from the fixed dataset without online environment interaction. Most of the existing offline RL methods are built on the off-policy ActorCritic framework, which performs iterative optimization of the policy (i.e. actor) and the actionvalue function (i.e. critic) (Fujimoto et al., 2019; Janner et al., 2019; Kumar et al., 2020). Yet, a naive application of these offline RL methods generally results in poor dialogue strategies which generate responses in no way similar to human language (Lewis et al., 2017; Zhao et al., 2019; Jang et al., 2020).
Weighted behavior cloning (BC) (Wang et al., 2020) is one of the representative offline RL algorithms, which is free from the issue of diverging from human language. Weighted BC amounts
to filtering out bad actions and imitating good actions. In the context of task-oriented dialogues, that would be equivalent to simply dropping the unsuccessful dialogues from the corpus. However, dropping a whole dialogue from training would be wasteful, since they may still contain some taskspecific information that is useful to properly respond to user requests in the intermediate steps.
In this paper, we present an offline RL algorithm for task-oriented dialogue, which can be adopted for any generative pre-trained language model. Our algorithm, GPT-Critic, aims to revise unsuccessful dialogues into successful ones, rather than removing them as done in weighted BC. It starts with fine-tuning the GPT-2 model and learning the action-value function (critic) using the dialogue corpus. Then, GPT-Critic generates a strategically promising action that is selected based on the value estimated by the critic. GPT-Critic updates the policy through behavior cloning of the critic-guided self-generated responses. This is in contrast to the previous methods that perform weighted behavior cloning on the dialogue corpus, where the action choice is restricted to the support in the dataset (Wang et al., 2020). Compared to traditional actor-critic methods, since GPT-Critic does not rely on policy gradient and updates the policy within the support of generated actions from the GPT-2, it thus inherits GPT-2’s ability to generate human-like responses. In the experiments, we demonstrate that GPT-Critic outperforms the state-of-the-art end-to-end dialogue agent in the task-oriented dialogue benchmarks including MultiWOZ 2.0 (Budzianowski et al., 2018) and ConvLab (Zhu et al., 2020).
2 BACKGROUND
2.1 OFFLINE REINFORCEMENT LEARNING FOR TASK-ORIENTED DIALOGUES
We consider a task-oriented dialogue system that can be modeled as a partially observable Markov decision process (POMDP) (Williams & Young, 2007) defined by a tuple $\langle S, A, O, T, Z, R, \gamma \rangle$, where $S$ is the set of environment states $s = \langle g, h \rangle$ (an underlying state consisting of the user goal $g$ and dialogue history $h$), $A$ is the set of actions $a$ (a sequence of tokens representing a dialogue act and system response), $O$ is the set of observations $o$ (user utterances), $T(s'|s,a) = \Pr(s_{t+1} = s' \mid s_t = s, a_t = a)$ is the transition function, $Z(o|s',a) = \Pr(o_{t+1} = o \mid s_{t+1} = s', a_t = a)$ is the observation probability, $R(g, h, a)$ is the reward function indicating the utility of executing action $a$ given history $h$ and user goal $g$, and $\gamma \in (0, 1)$ is a discount factor. The history at time step $t$, $h_t = \{o_0, a_0, \ldots, o_{t-1}, a_{t-1}, o_t\}$, is the sequence of all previous observations and actions. Since the underlying state $s$ (e.g., the user goal) is not directly observable, the agent makes decisions based on the entire observation-action history. The policy $\pi(a_t|h_t)$ is a mapping from the history $h_t$ to a probability distribution over $A$. The goal is to find an optimal policy $\pi^*$ that maximizes the expected cumulative reward, i.e., $\pi^* = \arg\max_\pi \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t R(g, h_t, a_t)\right]$. The action-value function of policy $\pi$ is defined as $Q^\pi(h, a) := \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t R(g, h_t, a_t) \mid h_0 = h, a_0 = a\right]$, where $Q^\pi$ is the unique solution of the Bellman equation: $Q^\pi(h, a) = \mathbb{E}_g[R(g, h, a)] + \gamma \mathbb{E}_\pi[Q^\pi(h', a')]$.
Using offline RL for dialogue policy optimization, the agent optimizes the policy from a pre-collected dataset $\mathcal{D} = \{(g^j, h_t^j, a_t^j, r_t^j, h_{t+1}^j)_{t=0}^{T}\}_{j=1}^{N}$ without online environment interaction during the intermediate stages of training. Prior offline RL algorithms (Fujimoto et al., 2019; Janner et al., 2019; Kumar et al., 2020) rely on the off-policy actor-critic method, where the critic network is trained by minimizing the temporal difference error with respect to the target policy $\pi$:

$$\arg\min_\phi \ \mathbb{E}_{(h_t, a_t, r_t, h_{t+1}) \sim \mathcal{D}}\left[\left(r_t + \gamma \mathbb{E}_{a_{t+1} \sim \pi(h_{t+1})}\left[Q_{\bar\phi}(h_{t+1}, a_{t+1})\right] - Q_\phi(h_t, a_t)\right)^2\right] \quad (1)$$
where $\bar\phi$ denotes the parameters of the target network. As discussed in prior work (Fujimoto et al., 2019; Kumar et al., 2020), optimizing this loss can be challenging in the offline RL setting due to the overestimation that arises in the bootstrapping process from evaluating the value of the next state at out-of-distribution (OOD) actions.
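To make the OOD issue concrete, the following minimal PyTorch sketch implements the TD objective of Eq. (1) with a frozen target network; the toy linear Q-networks, random encodings, and batch are illustrative placeholders, not the architecture of any of the cited methods:

```python
import torch
import torch.nn as nn

# Toy illustration of the off-policy TD loss in Eq. (1).
q_net = nn.Linear(8, 1)                 # Q_phi(h, a) on a joint (h, a) encoding
target_q_net = nn.Linear(8, 1)          # Q_phi_bar: a frozen copy of q_net
target_q_net.load_state_dict(q_net.state_dict())

gamma = 0.99
enc_ht_at = torch.randn(32, 8)          # encodings of logged (h_t, a_t) pairs
rewards = torch.randn(32, 1)
# a_{t+1} ~ pi(h_{t+1}): actions from the *target policy*, which may be
# out-of-distribution w.r.t. the dataset -- the source of overestimation.
enc_ht1_pi = torch.randn(32, 8)

with torch.no_grad():
    td_target = rewards + gamma * target_q_net(enc_ht1_pi)
loss = ((td_target - q_net(enc_ht_at)) ** 2).mean()
loss.backward()                          # gradient step on phi only
```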
2.2 END-TO-END TASK-ORIENTED DIALOGUE SYSTEM
We focus on the MultiWOZ 2.0 dataset (Budzianowski et al., 2018), a representative benchmark for task-oriented dialogue. The MultiWOZ dataset is a fully-annotated corpus of human-human task-oriented conversations, collected via the Wizard-of-Oz setting (Kelley, 1984). The traditional approach to building a task-oriented dialogue system adopts a modular pipeline consisting of four modules: 1) a natural language understanding (NLU) module (Kim et al., 2017; Zhu et al., 2020) identifies the user's intent and extracts the information of slots and their values; 2) a dialogue state tracking (DST) module (Williams et al., 2013) infers the belief state; 3) a dialogue policy (POL) module decides the system action; 4) a natural language generation (NLG) module (Wen et al., 2015) generates the system response corresponding to the system action. Recently, end-to-end task-oriented dialogue methods leveraging pre-trained language models have been proposed (Yang et al., 2021; Ham et al., 2020; Lin et al., 2020; Peng et al., 2021; Hosseini-Asl et al., 2020) and significantly improve overall performance in task-oriented dialogues. In this paper, our algorithm is built upon UBAR (Yang et al., 2021), which is based on GPT-2 (Radford et al., 2019) and is currently the state-of-the-art end-to-end dialogue agent for the MultiWOZ domain.
3 OFFLINE REINFORCEMENT LEARNING FOR END-TO-END TASK-ORIENTED DIALOGUE SYSTEMS
The corpus collected from human-human conversations inevitably contains dialogues that are unsuccessful in terms of task completion. For example, approximately 20% of the dialogues in the MultiWOZ dataset fail to meet the user goal. Therefore, naive behavior cloning of the whole dataset would limit the performance of the conversational agent, since the dataset includes many unsuccessful dialogues: an agent that imitates failure would inevitably be suboptimal. Yet, dropping the unsuccessful dialogues from the corpus, as done in weighted BC, is also undesirable, since they may contain task-specific information that is useful for properly responding to user requests. We thus aim to revise unsuccessful dialogues into successful ones in order to avoid repeating past failures while improving task performance.
In this section, we present GPT-Critic, an offline RL algorithm for task-oriented dialogue. GPT-Critic is analogous to the Actor-Critic method: GPT (the actor) decides which action to take, while the critic estimates how good the action was and provides a signal for policy improvement. Still, GPT-Critic is distinct from Actor-Critic methods in that it does not rely on policy gradients, which are generally known to cause the issue of diverging from human language (Lewis et al., 2017; Zhao et al., 2019). Instead, we sample a set of action candidates using GPT-2 and pick the best one using the critic, which constitutes a revised dialogue corpus. Then, we perform supervised fine-tuning of GPT-2 on the revised dialogue corpus. This learning procedure does not hurt the agent's capability to generate human-like sentences, given that the generated action candidates are all natural-looking sentences thanks to the power of the large pre-trained LM. Our algorithm is built upon GPT-2, but it can be adopted for any generative pre-trained language model.
3.1 POLICY EVALUATION
Our GPT-Critic starts by training the action-value function (i.e., critic), which evaluates candidate responses. The architecture of the critic network follows GPT-2, with a different last layer to compute the Q-value. The critic network $Q_\phi$ is parameterized to share the parameters of the Transformer (Vaswani et al., 2017) layers of GPT-2, and the parameters of the Transformer layers are updated only during the policy improvement step. The critic network is trained by minimizing the temporal difference error with respect to the dataset $\mathcal{D}$:
$$\arg\min_\phi \ \mathbb{E}_{(h_t, a_t, r_t, h_{t+1}, a_{t+1}) \sim \mathcal{D}}\left[\left(r_t + \gamma Q_{\bar\phi}(h_{t+1}, a_{t+1}) - Q_\phi(h_t, a_t)\right)^2\right] \quad (2)$$
where $\bar\phi$ denotes the parameters of the target network. Note that Eq. (2) is an on-policy evaluation on the dataset $\mathcal{D}$, which can be optimized very stably since every $a_{t+1}$ is always an in-distribution sample of $\mathcal{D}$. This is in contrast to Eq. (1), which requires evaluating out-of-distribution actions sampled from the target policy $\pi$. The OOD action-value estimates can be very unreliable if the target policy deviates substantially from the dataset.
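As an illustration, here is a minimal sketch of a critic head sharing GPT-2's Transformer layers, together with the on-policy TD loss of Eq. (2); the last-token pooling, the bracketed prompt strings, and the omission of a separate frozen target copy are our simplifying assumptions, not the paper's exact implementation:

```python
import torch
import torch.nn as nn
from transformers import GPT2Model, GPT2Tokenizer

backbone = GPT2Model.from_pretrained("distilgpt2")   # shared Transformer layers
q_head = nn.Linear(backbone.config.n_embd, 1)        # critic-specific last layer
tok = GPT2Tokenizer.from_pretrained("distilgpt2")

def q_value(text_h_a: str) -> torch.Tensor:
    """Q_phi(h, a): score a history + candidate-action string (last-token pooling)."""
    ids = tok(text_h_a, return_tensors="pt").input_ids
    hidden = backbone(ids).last_hidden_state          # (1, T, n_embd)
    return q_head(hidden[:, -1])                      # scalar Q-value

# On-policy TD loss of Eq. (2): a_{t+1} is the *logged* next action from D,
# so no out-of-distribution action is ever evaluated. In practice a frozen
# target copy Q_phi_bar would supply the bootstrapped value.
gamma, r_t = 0.99, torch.tensor([[1.0]])              # placeholder reward
q_sa = q_value("[history h_t] [action a_t]")
with torch.no_grad():
    q_next = q_value("[history h_t+1] [action a_t+1]")  # both from the dataset
td_loss = (r_t + gamma * q_next - q_sa).pow(2).mean()
```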
This kind of on-policy evaluation has been explored in the offline RL literature for stable policy optimization (Brandfonbrener et al., 2021; Goo & Niekum, 2021), but those methods are limited to one-step policy improvement: once the policy $\pi$ is improved by the initial on-policy Q-function (i.e., $\pi(s) = \arg\max_a Q(s, a)$), the new policy deviates from the dataset policy, so further policy iteration would require off-policy evaluation. In contrast, GPT-Critic performs policy improvement by generating an improved dataset based on the learned critic, on which we can perform on-policy evaluation again. As a consequence, GPT-Critic enjoys stable multi-step policy iteration by alternating between on-policy evaluation and policy improvement via dataset revision, which is discussed in the following section.
3.2 POLICY IMPROVEMENT VIA DATASET REVISION
In task-oriented dialogues, the reward is given by an external program provided as part of the dataset, which checks whether the user goal is satisfied by examining the dialogue history. To generate the improved dataset, we adopt the common automatic evaluation setup for dialogue systems, where the agent generates a dialogue act and system response at every system turn with fixed user utterances. More formally, GPT-Critic generates a new dataset containing revised responses by:
$$\mathcal{D}_{i+1} = \left\{(g, h_t, a_t^*, r_t^*, h_{t+1}^*) \ \middle|\ a_t^* = \arg\max_{a \in \{a^k\}_N,\ \{a^k\}_N \sim \pi_\theta^i(h_t)} Q_\phi(h_t, a),\ h_t \in \mathcal{D}_i\right\} \quad (3)$$
where $\{a^k\}_N$ is a set of $N$ response candidates generated from the policy $\pi_\theta^i$ (i.e., the fine-tuned GPT-2), and $\mathcal{D}_i$ is the dataset at the $i$-th iteration. In task-oriented dialogues, a reward function $R(g, h, a)$ is provided that computes a reward given a user goal, dialogue history, and system action. The revised reward $r_t^* = R(g, h_t, a_t^*)$ is computed from the given user goal, dialogue history, and revised system action $a_t^*$. The dialogue history is the sequence of all previous observations and actions, so the revised history $h_{t+1}^* = \{o_0, a_0, \ldots, o_t, a_t^*, o_{t+1}\}$ is defined by replacing the original action $a_t$ of $h_{t+1}$ with the revised action $a_t^*$. Examples of revised responses can be found in Appendix B.
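A minimal sketch of this revision step for a single turn is given below; `sample_candidates`, `q_value`, and `reward_fn` are hypothetical interfaces standing in for the fine-tuned GPT-2 sampler, the learned critic, and the dataset's external reward program:

```python
def revise_turn(goal, history, user_obs_next,
                sample_candidates, q_value, reward_fn, n=5):
    """One dataset-revision step from Eq. (3), for a single system turn."""
    candidates = sample_candidates(history, n)        # {a^k}_N ~ pi_theta^i(h_t)
    best = max(candidates, key=lambda a: q_value(history, a))  # argmax_a Q_phi
    r_star = reward_fn(goal, history, best)           # r*_t = R(g, h_t, a*_t)
    h_star_next = history + [best, user_obs_next]     # splice a*_t into h*_{t+1}
    return best, r_star, h_star_next
```

Applying `revise_turn` to every system turn of every dialogue in $\mathcal{D}_i$ yields the revised dataset $\mathcal{D}_{i+1}$.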
In order to address the prohibitively large language action space, we explicitly consider the set of response candidates generated from the fine-tuned GPT-2. GPT-Critic selects the most promising response by calculating the Q-values over these candidates.
Algorithm 1 GPT-Critic

Input: training dataset $\mathcal{D}_0 = \{(g^j, h_t^j, a_t^j, r_t^j, h_{t+1}^j)_{t=0}^{T}\}_{j=1}^{N}$, policy network (GPT) $\pi_\theta$, critic network $Q_\phi$.
Fine-tune the initial policy represented by the GPT-2 model (e.g., UBAR).
for each iteration $i$ do
    Update the critic by minimizing the temporal difference error until convergence:
        $\arg\min_\phi \mathbb{E}_{(g, h_t, a_t, r_t, h_{t+1}, a_{t+1}) \sim \mathcal{D}_i}\left[\left(r_t + \gamma Q_{\bar\phi}(h_{t+1}, a_{t+1}) - Q_\phi(h_t, a_t)\right)^2\right]$
    Update the dataset by critic-guided self-generation:
        $\mathcal{D}_{i+1} = \{(g, h_t, a_t^*, r_t^*, h_{t+1}^*) \mid a_t^* = \arg\max_{a \in \{a^k\}_N,\ \{a^k\}_N \sim \pi_\theta^i(h_t)} Q_\phi(h_t, a),\ h_t \in \mathcal{D}_i\}$
    Update the policy by behavior cloning of the critic-guided self-generated dataset (early stopping according to the loss on the validation set):
        $\arg\min_\theta \mathbb{E}_{(h_t, a_t) \sim \mathcal{D}_{i+1}}\left[-\log \pi_\theta(a_t \mid h_t)\right]$
end for
GPT-Critic then performs behavior cloning of the critic-guided self-generated dialogues:
$$\arg\min_\theta \ \mathbb{E}_{(h_t, a_t) \sim \mathcal{D}_{i+1}}\left[-\log \pi_\theta(a_t \mid h_t)\right] \quad (4)$$
where $\theta$ denotes the parameters of GPT-2. The policy improvement of GPT-Critic is performed by behavior cloning of dialogues generated by GPT-2 itself, so GPT-Critic inherits GPT-2's ability to generate human-like responses.
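For concreteness, the behavior-cloning objective of Eq. (4) reduces to the standard causal-LM loss on the revised action tokens. The sketch below assumes a HuggingFace GPT-2 LM head and masks the history tokens with -100 so that only $a_t^*$ contributes to the loss; the masking scheme and bracketed strings are our assumptions about the setup:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("distilgpt2")
tok = GPT2Tokenizer.from_pretrained("distilgpt2")

history, revised_action = "[history h_t]", " [revised action a*_t]"
ids = tok(history + revised_action, return_tensors="pt").input_ids
labels = ids.clone()
# Approximate token boundary between h_t and a*_t (BPE boundaries can shift
# slightly at concatenation; this is a sketch, not a production tokenizer).
labels[:, : len(tok(history).input_ids)] = -100
loss = model(ids, labels=labels).loss    # -log pi_theta(a*_t | h_t)
loss.backward()
```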
We can show theoretically that the policy updated by the above policy improvement step attains a higher value than the old policy. Furthermore, the policy updated with a larger number of candidate actions attains a higher value than the policy updated with a smaller number of candidate actions. We formalize this result in Theorem 1.

Theorem 1 (Policy Improvement). Given a policy $\pi$ and a number of sampled actions $N \geq 1$, if we update the new policy $\pi_N^{new}$ by
$$\forall s, \quad \pi_N^{new}(\cdot \mid s) = \arg\max_{a \in \{a^k\}_N,\ \{a^k\}_N \sim \pi(s)} Q^\pi(s, a),$$
then $Q^{\pi_N^{new}}(s, a) \geq Q^\pi(s, a)$ for all $s, a$. Furthermore, for any $N, M$ such that $N \geq M \geq 1$, $Q^{\pi_N^{new}}(s, a) \geq Q^{\pi_M^{new}}(s, a)$ for all $s, a$. (Proof in Appendix A.)
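The monotonicity in $N$ can also be checked numerically: the expected maximum of $N$ sampled Q-values is nondecreasing in $N$. The Gaussian Q-values in the sketch below are purely illustrative and not tied to any trained critic:

```python
import numpy as np

# Expected best-of-N value under i.i.d. sampling is nondecreasing in N.
rng = np.random.default_rng(0)
q_samples = rng.normal(size=(100_000, 10))   # 10 candidate Q-values per state
for n in (1, 2, 5, 10):
    print(n, q_samples[:, :n].max(axis=1).mean())
# prints an increasing sequence, roughly 0.00, 0.56, 1.16, 1.54
```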
We summarize GPT-Critic in Algorithm 1; it alternates between policy evaluation and policy improvement via dataset revision until the policy performance converges.
4 RELATED WORK
End-to-End Task-Oriented Dialogue Systems. The traditional approach to building a task-oriented dialogue system adopts a modular pipeline, which consists of natural language understanding, dialogue state tracking, dialogue policy, and natural language generation. Recently, pre-trained LM-based end-to-end task-oriented dialogue agents that recast all sub-tasks as a single sequence prediction problem have been proposed (Ham et al., 2020; Hosseini-Asl et al., 2020) and have significantly improved overall performance in task-oriented dialogues. There are a number of variants of GPT-2-based end-to-end task-oriented dialogue agents. Yang et al. (2021) leverage the entire dialogue session of every dialogue turn. Peng et al. (2021) adopt transfer learning and machine teaching for training a GPT-2-based dialogue agent. Lin et al. (2020) present efficient dialogue state tracking with a minimal generation length and then leverage pre-trained language models for task-oriented dialogues.
Reinforcement Learning for Task-Oriented Dialogue Systems. Applying standard RL methods straightforwardly to optimize a task-oriented dialogue agent causes the issue of diverging from human language. To address this problem, interleaving reinforcement learning with supervised learning has been proposed, but it is still not free from the issue of diverging from human language (Lewis et al., 2017). Recently, latent representation models for language actions have been introduced to address this problem (Zhao et al., 2019; Yarats & Lewis, 2018). They disentangle the semantics of the utterance from natural language generation, and then perform goal-based training in the space of latent variables instead of directly optimizing utterances. However, they cannot be directly applied to large-scale pre-trained language models, which are not designed to work inherently with discrete latent variables. Jaques et al. (2020) use KL-control to restrict the policy to stay close to its prior policy, but it still suffers from divergence from human language even with carefully chosen hyper-parameters. Furthermore, Jang et al. (2020) apply Bayes-adaptive Monte-Carlo planning to negotiation dialogue and use it as a policy improvement operator. This approach can prevent the issue of diverging from human language through policy improvement based on behavior cloning of self-generated dialogues. However, it assumes access to a user model, which is itself a problem difficult enough to be considered separately.
Offline Reinforcement Learning. There have been extensive studies on offline RL (Fujimoto et al., 2019; Levine et al., 2020; Kumar et al., 2020; Wang et al., 2020). Most prior works are built on the off-policy actor-critic framework and focus on the overestimation issue caused by taking OOD actions (Kumar et al., 2019; Lee et al., 2020; Fujimoto et al., 2019; Jaques et al., 2020; Kumar et al., 2020). However, a naive application of these offline RL methods suffers from the issue of diverging from human language in task-oriented dialogues (Lewis et al., 2017; Zhao et al., 2019; Jang et al., 2020). On the other hand, there are a number of recent works on weighted behavior cloning, where a policy is trained by a variant of the supervised learning loss (Wang et al., 2020; Peng et al., 2019; Siegel et al., 2020). The weighted behavior cloning approaches filter out bad actions and then perform behavior cloning on high-quality data. However, in task-oriented dialogues, simply dropping the unsuccessful dialogues from the corpus is undesirable, since they may contain task-specific information that is useful for properly responding to user requests. Our GPT-Critic aims to revise unsuccessful dialogues into successful ones, in contrast to weighted behavior cloning on a fixed training dataset, where the action choice is restricted to the support of the dataset (Wang et al., 2020; Peng et al., 2019; Siegel et al., 2020). More recently, Chen et al. (2021) introduced Decision Transformer, a Transformer-based architecture that casts the problem of RL as conditional sequence modeling. These behavior-cloning-based offline RL methods can be directly applied to task-oriented dialogues without the aforementioned issue, but their results are similar to those of plain behavior cloning in task-oriented dialogues.
5 EXPERIMENTS
In this section, we present experimental results for GPT-Critic on both automatic and human evaluation. First, we evaluate the performance of GPT-Critic on MultiWOZ 2.0 (Budzianowski et al., 2018) as a dataset-based automatic evaluation, compared with baseline methods including offline RL algorithms. Second, for a more realistic evaluation, we conduct a simulator-based evaluation on the ConvLab framework (Zhu et al., 2020). Third, we conduct a human evaluation to assess the quality of the generated responses. Finally, we give a qualitative analysis of our method using dialogue examples generated on the MultiWOZ 2.0 training dataset, which shows how GPT-Critic improves performance through behavior cloning of self-generated dialogues. The qualitative analysis with generated dialogue examples can be found in Appendix B.
5.1 EXPERIMENTAL SETUP
We implement GPT-Critic based on the HuggingFace Transformers library (Wolf et al., 2019) and the codebase of UBAR (Yang et al., 2021), the current state-of-the-art GPT-2-based end-to-end task-oriented dialogue agent for the MultiWOZ 2.0 dataset. For the generative pre-trained language model, we use DistilGPT2 (Sanh et al., 2019), a distilled version of GPT-2. Figure 2 shows the architecture of our policy and critic networks based on GPT-2. We design the parameterization of the critic network to share the parameters of the Transformer layers of GPT-2, where the parameters of the Transformer layers are updated only during the policy improvement step. For the hyperparameters of fine-tuning the GPT-2 model, we follow the settings in the public code of UBAR (Yang et al., 2021). We use $N = 5$ for the number of candidate actions $\{a^k\}_N$, and the set of candidate actions is constructed by vanilla softmax sampling from the policy, rather than beam search, in order to collect diverse actions. In each behavior cloning iteration, all models are fine-tuned on the training dataset from the pre-trained GPT-2 and early-stopped according to the loss on the validation set.
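The candidate-generation step can be sketched with the HuggingFace sampling API as follows; the prompt string and decoding length are placeholders, not the exact context serialization used by UBAR:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("distilgpt2")
tok = GPT2Tokenizer.from_pretrained("distilgpt2")

ids = tok("[dialogue context h_t]", return_tensors="pt").input_ids
outs = model.generate(
    ids,
    do_sample=True,            # vanilla softmax sampling keeps candidates diverse
    top_k=0,                   # disable top-k truncation (pure softmax)
    num_return_sequences=5,    # the N = 5 candidate actions {a^k}_N
    max_new_tokens=60,
    pad_token_id=tok.eos_token_id,
)
candidates = [tok.decode(o[ids.shape[1]:], skip_special_tokens=True) for o in outs]
```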
5.2 EVALUATION ON THE MULTIWOZ DATASET
We evaluate our algorithm on the MultiWOZ 2.0 dataset, one of the representative task-oriented dialogue benchmarks. MultiWOZ 2.0 is a large-scale multi-domain Wizard-of-Oz dataset in which a tourist (i.e., user) converses with a clerk (i.e., system) at the information center of a touristic city. It consists of 8438/1000/1000 dialogues for training/validation/testing. For end-to-end evaluation on the MultiWOZ 2.0 dataset, we use the following automatic evaluation metrics: 1) Inform: evaluates whether the system provides an appropriate entity; 2) Success: evaluates whether the system answers all the requested information; 3) BLEU: measures the fluency of the generated response (Papineni et al., 2002). We also report the Combined Score as an overall quality measure: Combined = (Inform + Success) × 0.5 + BLEU. We compare the performance of GPT-Critic with the following algorithms: 1) SFN+RL (Mehri et al., 2019), a seq2seq network that incorporates several pre-trained dialogue modules into a neural dialogue model; 2) DAMD (Zhang et al., 2020), a domain-aware multi-decoder network with a multi-action data augmentation method; 3) SimpleTOD (Hosseini-Asl et al., 2020), a GPT-2-based end-to-end dialogue agent that recasts all sub-tasks as a single sequence prediction problem; 4) SOLOIST (Peng et al., 2021), a GPT-2-based end-to-end dialogue agent with transfer learning and machine teaching; 5) MinTL (Lin et al., 2020), an efficient dialogue state tracking method with a minimal generation length that predicts the difference between old and new states; 6) UBAR (Yang et al., 2021), a GPT-2-based end-to-end dialogue agent that leverages the entire dialogue session of every dialogue turn. We implement our algorithm on the codebase of UBAR (Yang et al., 2021), and the result of UBAR is reproduced by adapting its code to the same evaluation settings as the other papers.¹ Moreover, we also compare against a data augmentation baseline, DATA AUGMENTATION, which naively fine-tunes the GPT-2 model on additional data generated by vanilla softmax sampling from the trained policy.
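For reference, the Combined Score is a one-line computation over the three metrics defined above:

```python
def combined_score(inform: float, success: float, bleu: float) -> float:
    """Combined = (Inform + Success) * 0.5 + BLEU, the overall MultiWOZ measure."""
    return (inform + success) * 0.5 + bleu

# e.g., combined_score(90.0, 80.0, 17.0) == 102.0
```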
In addition, we compare with recent offline RL algorithms that are free from the issue of diverging from human language: 1) CRR (Wang et al., 2020), a value-filtered regression method that performs weighted behavior cloning of the offline dataset; 2) Decision Transformer (Chen et al., 2021), a Transformer-based architecture that casts the problem of RL as conditional sequence modeling. For a fair comparison, we use the same pre-trained GPT-2 model as the policy network when training CRR and Decision Transformer. Moreover, to show that policy-gradient-based standard RL algorithms suffer from diverging from human language, we also provide examples of responses generated by a policy-gradient-based standard RL algorithm in Appendix C.
¹The score reported in the UBAR paper is obtained using the true dialogue state for the DB search. In order to compare under the same conditions as the other algorithms, we report the result of UBAR using the predicted dialogue state for the DB search.
Tables 1 and 2 show the results of policy iteration. Table 1 shows the performance of the training dataset and of the critic-guided self-generated dialogues used at each policy improvement step. Table 2 reports the intermediate performance of behavior cloning of the training dataset and the critic-guided self-generated dialogues at each policy iteration. As shown in Tables 1 and 2, the performance of the critic-guided self-generated dialogues improves gradually, and the performance of GPT-Critic is consistently improved through behavior cloning of the improved dataset.
Table 3 summarizes the overall performance of GPT-Critic and the baseline algorithms in the end-to-end response generation setting, where the generated dialogue state and generated dialogue act are used for the DB search and response generation. The results show that GPT-Critic achieves the best performance in terms of inform rate, success rate, and combined score. Moreover, the BLEU score of GPT-Critic matches those of the other pre-trained LM-based methods, since GPT-Critic inherits GPT-2's ability to generate human-like responses through behavior cloning of responses generated by GPT-2. The results show that GPT-Critic improves the task performance of the agent without the issue of diverging from human language. In addition, as shown in Table 3, the naive data augmentation is not effective, since in principle it does not change GPT-2's sampling distribution.
Among the offline RL baselines, CRR and Decision Transformer produce results that do not diverge from human language, since their policies are also trained by behavior cloning. However, both algorithms show limited performance because they perform behavior cloning on a fixed dataset. CRR has achieved remarkable success in continuous control tasks by performing weighted behavior cloning of the training dataset filtered by a critic, but it does not perform effectively in task-oriented dialogues because of data scarcity. Furthermore, to evaluate Decision Transformer, we adopt a delayed return where the agent receives the cumulative reward at the end of the dialogue, since the agent cannot observe the user goal. Therefore, without observing the user goal at test time, Decision Transformer reduces to behavior cloning of the successful dialogues.
5.3 EVALUATION ON CONVLAB EVALUATOR
In order to evaluate the performance of dialogue agents in an end-to-end fashion, we conduct a simulator-based evaluation on ConvLab (Zhu et al., 2020). ConvLab is an open-source toolkit that enables building task-oriented dialogue systems and performing end-to-end evaluation. The simulator-based evaluation is more reliable than dataset-based automatic evaluation because it measures performance while interacting with a user simulator. To interact with dialogue systems, ConvLab provides an agenda-based user simulator (Schatzmann et al., 2007) that consists of BERT (Devlin et al., 2019) for NLU, a rule-based policy, and a template-based NLG. We compare the performance of GPT-Critic with the baseline algorithms interacting with the same user simulator and user goals. We report results with the following metrics: 1) Complete: evaluates whether the system completes the goal; 2) Success: evaluates whether all the user requests have been informed and the booked entities satisfy the constraints; 3) Book: evaluates how many booked entities satisfy the user constraints; 4) Inform (Precision / Recall / F1): evaluates how many user requests have been informed; 5) Turn (success / all): the average number of turns for successful/all dialogues.
We report the performance of GPT-Critic and the baselines in Table 7. Each algorithm is tested for 1000 runs with randomly sampled user goals. The results show that GPT-Critic achieves the best performance on all metrics related to task accomplishment. However, they also show that GPT-Critic takes more dialogue turns to accomplish the task, because GPT-Critic is trained to maximize the success rate without considering the number of dialogue turns.
5.4 HUMAN EVALUATION
We also conduct a human evaluation on Amazon Mechanical Turk (AMT) to assess the quality of the responses generated by GPT-Critic and the baseline algorithms, following the evaluation protocol of Yang et al. (2021), Lin et al. (2020), and Zhang et al. (2020). Specifically, human workers on AMT were asked to read the context and the response generated by interactive simulation via ConvLab, and then score the following two evaluation metrics on a Likert scale (1-5): 1) Appropriateness: whether the generated responses are appropriate for the given context; 2) Fluency: whether the generated responses are comprehensible and human-like. We compare the performance of GPT-Critic with the same baselines as in the ConvLab evaluation. Figure 3 summarizes the overall results of the human evaluation, where 60 workers evaluated the quality of 30 randomly selected dialogues for each algorithm. The results show that GPT-Critic significantly outperforms the baseline algorithms in appropriateness, which is related to task accomplishment. Moreover, the fluency results show that GPT-Critic does not hurt the agent's capability to generate human-like sentences.
6 CONCLUSION
We presented GPT-Critic, an offline RL algorithm for task-oriented dialogue systems, which can be adopted for any generative pre-trained language model. GPT-Critic aims to learn an end-to-end task-oriented dialogue agent without the issue of diverging from human language. GPT-Critic starts with fine-tuning the GPT-2 model and learning the critic using the dialogue corpus. Then, GPT-Critic updates the policy through behavior cloning of the critic-guided self-generated responses, and it is thus essentially free from the issue of diverging from human language. In the experiments, we demonstrated that GPT-Critic outperforms the state-of-the-art algorithms on task-oriented dialogue benchmarks including MultiWOZ 2.0 and ConvLab.
ACKNOWLEDGMENTS
This work was supported by the National Research Foundation (NRF) of Korea (NRF-2019R1A2C1087634, NRF-2021M3I1A1097938) and the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-00075, No. 2020-0-00940, No. 2021-0-02068).
A POLICY IMPROVEMENT THEOREM
Theorem 1 (Policy Improvement). Given a policy $\pi$ and a number of sampled actions $N \geq 1$, if we update the new policy $\pi_N^{new}$ by
$$\forall s, \quad \pi_N^{new}(\cdot \mid s) = \arg\max_{a \in \{a^k\}_N,\ \{a^k\}_N \sim \pi(s)} Q^\pi(s, a),$$
then $Q^{\pi_N^{new}}(s, a) \geq Q^\pi(s, a)$ for all $s, a$. Furthermore, for any $N, M$ such that $N \geq M \geq 1$, $Q^{\pi_N^{new}}(s, a) \geq Q^{\pi_M^{new}}(s, a)$ for all $s, a$.

Proof. For any $s, a$ and $N \geq M$,
$$\begin{aligned}
Q^{\pi_M^{new}}(s, a) &= \mathbb{E}_P\left[R(s_t, a_t) + \gamma \mathbb{E}_{a_{t+1} \sim \pi_M^{new}(s_{t+1})}\left[Q^{\pi_M^{new}}(s_{t+1}, a_{t+1})\right] \,\middle|\, s_t = s, a_t = a\right] \\
&= \mathbb{E}_P\left[R(s_t, a_t) + \gamma \mathbb{E}_{\{a^i\}_M \sim \pi(s_{t+1})}\left[\max_{a' \in \{a^i\}_M} Q^\pi(s_{t+1}, a')\right] \,\middle|\, s_t = s, a_t = a\right] \\
&\leq \mathbb{E}_P\left[R(s_t, a_t) + \gamma \mathbb{E}_{\{a^i\}_N \sim \pi(s_{t+1})}\left[\max_{a' \in \{a^i\}_N} Q^\pi(s_{t+1}, a')\right] \,\middle|\, s_t = s, a_t = a\right] \\
&= \mathbb{E}_P\left[R(s_t, a_t) + \gamma \mathbb{E}_{a_{t+1}^{new} \sim \pi_N^{new}(s_{t+1})}\left[Q^\pi(s_{t+1}, a_{t+1}^{new})\right] \,\middle|\, s_t = s, a_t = a\right] \\
&= \mathbb{E}_P\left[R(s_t, a_t) + \gamma \left(\mathbb{E}_{a_{t+1}^{new} \sim \pi_N^{new}(s_{t+1})}\left[R(s_{t+1}, a_{t+1}^{new})\right] + \gamma \mathbb{E}_{a_{t+2} \sim \pi(s_{t+2})}\left[Q^\pi(s_{t+2}, a_{t+2})\right]\right) \,\middle|\, s_t = s, a_t = a\right] \\
&= \mathbb{E}_P\left[\sum_{\tau=t}^{t+1} \mathbb{E}_{a_{\tau+1} \sim \pi_N^{new}(s_{\tau+1})}\left[\gamma^{\tau-t} R(s_\tau, a_\tau)\right] + \gamma^2 \mathbb{E}_{a_{t+2} \sim \pi(s_{t+2})}\left[Q^\pi(s_{t+2}, a_{t+2})\right] \,\middle|\, s_t = s, a_t = a\right] \\
&\leq \mathbb{E}_P\left[\sum_{\tau=t}^{t+1} \mathbb{E}_{a_{\tau+1} \sim \pi_N^{new}(s_{\tau+1})}\left[\gamma^{\tau-t} R(s_\tau, a_\tau)\right] + \gamma^2 \mathbb{E}_{a_{t+2}^{new} \sim \pi_N^{new}(s_{t+2})}\left[Q^\pi(s_{t+2}, a_{t+2}^{new})\right] \,\middle|\, s_t = s, a_t = a\right] \\
&= \mathbb{E}_P\left[\sum_{\tau=t}^{t+2} \mathbb{E}_{a_{\tau+1} \sim \pi_N^{new}(s_{\tau+1})}\left[\gamma^{\tau-t} R(s_\tau, a_\tau)\right] + \gamma^3 \mathbb{E}_{a_{t+3} \sim \pi(s_{t+3})}\left[Q^\pi(s_{t+3}, a_{t+3})\right] \,\middle|\, s_t = s, a_t = a\right] \\
&\;\;\vdots \\
&\leq \mathbb{E}_P\left[\sum_{\tau=t}^{\infty} \mathbb{E}_{a_{\tau+1} \sim \pi_N^{new}(s_{\tau+1})}\left[\gamma^{\tau-t} R(s_\tau, a_\tau)\right] \,\middle|\, s_t = s, a_t = a\right] = Q^{\pi_N^{new}}(s, a).
\end{aligned}$$

For the case of $N = 1$, note that $\pi_1^{new}$ simply reduces to $\pi$, which concludes the proof:
$$Q^{\pi_N^{new}}(s, a) \geq Q^{\pi_M^{new}}(s, a) \geq Q^{\pi_1^{new}}(s, a) = Q^\pi(s, a) \quad \text{for all } s, a \text{ and } N \geq M \geq 1. \qquad \square$$
B QUALITATIVE ANALYSIS OF SELF-GENERATED DIALOGUES
In this section, we provide a qualitative analysis of the critic-guided self-generated responses in GPT-Critic. We show the critic-guided self-generated dialogue examples in Table 5, which illustrate how GPT-Critic improves performance through behavior cloning of self-generated dialogues, compared with unsuccessful dialogues in the MultiWOZ training dataset. Each example shows the critic-guided self-generated dialogue act and delexicalized system response alongside the original dialogue. The generated responses cover all of the user's requests with abundant information, whereas the original responses of the unsuccessful dialogues do not contain all the requested information. GPT-Critic improves performance through behavior cloning of these
revised responses. Moreover, Table 5 shows that the generated dialogues do not diverge from human language. Since GPT-Critic updates the policy through behavior cloning of the self-generated human-like responses, GPT-Critic is essentially free from the issue of diverging from human language.
C QUALITATIVE EXAMPLES OF STANDARD REINFORCEMENT LEARNING ALGORITHM
In this section, we provide examples of responses generated by a standard RL algorithm (REINFORCE) to show that policy-gradient-based standard RL algorithms suffer from diverging from human language. As shown in Table 6, the policy-gradient-based RL algorithm generates responses that diverge from human language.
D EVALUATION FOR THE QUALITY OF GENERATED DIALOGUE STATES AND DIALOGUE ACTS
We additionally conducted experiments with UBAR and GPT-Critic to explicitly evaluate the generated dialogue states and dialogue acts (rather than only the final system response). The table below shows the performance for the predicted dialogue state (joint accuracy / slot accuracy) and the predicted dialogue act (dialogue act F1), where the mean performance and the standard error are reported. As the table shows, GPT-Critic outperforms UBAR on dialogue act F1, which measures the performance of dialogue policy prediction. However, for dialogue state tracking, there is no significant performance gap between GPT-Critic and UBAR, since GPT-Critic revises only the dialogue act and system response (which are considered the action in GPT-Critic) but not the dialogue state in the dataset.

1. What is the focus and contribution of the paper regarding reinforcement learning for dialogue agents?
2. What are the strengths and weaknesses of the proposed approach, particularly in its simplicity and experimental results?
3. Do you have any concerns or questions about the methodology, such as action space, candidate actions, reward computation, and overfitting?
4. How does the reviewer assess the novelty and comparisons with other works in the field?
5. Are there any minor comments or suggestions for improving the paper's clarity and readability?

Summary Of The Paper
This paper presents a reinforcement learning-based approach to building a task-oriented dialogue agent. Given a dialogue dataset annotated with rewards, the state-action value function is first trained by minimizing temporal differences. Then, a new training dataset is created by using the best actions selected among the candidates generated by the current policy (i.e., the language model). The policy is then updated by behavior cloning using the created dataset, and the whole process is repeated. The authors have conducted experiments using MultiWOZ and ConvLab and shown that this iterative process improves the performance of the agent.
Review
The proposed approach is very simple and should be easy to implement. The experimental results seem promising. However, I do have a couple of concerns.
Firstly, I think that some important details are missing in the paper. For example, what is the action space of the agent? Do you treat the conjunction of a dialogue act and a system response as a single action? If so, how exactly are the candidate actions generated? Is some kind of beam search employed? I think some actual examples of actions (and rewards) would help the reader understand the proposed method more clearly.
The paper does not really describe how the rewards are computed, either. In particular, I am wondering how the reward for the newly selected action is computed. Is it given to the agent by an external program? Then, it seems to me that the whole training procedure is more like (a somewhat restricted version of) actor-critic-based reinforcement learning than offline reinforcement learning (in which the agent cannot interact with the environment). If that is the case, what is the novelty of the proposed method?
The authors claim in Table 3 that their proposed approach gives much better results than UBAR, but the original paper of UBAR (Yang et al., 2021) reports much better results (e.g., Inform score of 95.4). Why is there such a big difference?
Algorithm 1 states that the policy is updated by behavior cloning until "convergence". I am wondering if it causes any overfitting problem. Is overfitting the reason why the whole training process is stopped at the fourth iteration (Table 2)?
Minor comments:
p. 1: outperforms the state-of-the-art -> outperforms the state of the art; p. 1: not trained to for -> not trained for? p. 2: fine-turning the GPT-2 -> fine-turning GPT-2? fine-turning the GPT-2 model? p. 2: generates strategically -> generates a strategically; p. 2: Pr(O_{t+1} ... ) -> should not be italic? p. 3: by training action-value -> by training the action-value; p. 3: of critic network -> of the critic network; p. 4: for i-th -> for the i-th; p. 4: in task-oriented -> in the task-oriented? p. 4: generated system response -> generated system responses? p. 4: from (Zhao et al., 2019) -> from Zhao et al. (2019); p. 4: prohibitory -> prohibitively? p. 4: over response candidates -> over the response candidates? p. 4: updated policy by above -> the updated policy by the above? p. 5: on MultiWOZ domain -> on the MultiWOZ domain; p. 5: ConvLab framework -> the ConvLab framework; p. 5: HuggingFace Transforms library -> the HuggingFace …; p. 7: prices -> priced? p. 8: user goal -> the user goal? p. 8: with following -> with the following; p. 8: the all -> all the; p. 8: whereas original -> whereas the original? p. 9: straightforward -> straightforwardly? p. 8: large scale -> large-scale?
Building an end-to-end task-oriented dialogue agent is one of the promising applications of natural language processing (NLP) tasks, yet challenging due to large language action spaces and limited availability of human-annotated data. Recently, large-scale pre-trained language models (LM) have achieved remarkable successes in various NLP tasks with prohibitively large vocabulary (Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020; Raffel et al., 2019). The current best performing end-to-end conversational agents for a task-oriented dialogue system utilize a pre-training on largescale corpus and fine-tuning on downstream tasks (Ham et al., 2020; Yang et al., 2021; Lin et al., 2020; Peng et al., 2021). This combination of pre-training and fine-tuning significantly improves overall performance in the task-oriented dialogues. However, supervised fine-tuning (i.e. imitation learning of the dialogue corpus) alone may not be sufficient to learn an optimal dialogue strategy since the corpus often contains suboptimal dialogues collected from human participants of diverse expertise levels. Thus, in order to optimize the task performance of the conversational agent, goaloriented training (i.e. reinforcement learning) is an essential and promising direction to pursue.
Training a task-oriented conversational agent from a dialogue corpus can be naturally formulated as offline reinforcement learning (RL) problem (Levine et al., 2020; Fujimoto et al., 2019; Jaques et al., 2020), which offers the prospect to optimize the policy solely from the fixed dataset without online environment interaction. Most of the existing offline RL methods are built on the off-policy ActorCritic framework, which performs iterative optimization of the policy (i.e. actor) and the actionvalue function (i.e. critic) (Fujimoto et al., 2019; Janner et al., 2019; Kumar et al., 2020). Yet, a naive application of these offline RL methods generally results in poor dialogue strategies which generate responses in no way similar to human language (Lewis et al., 2017; Zhao et al., 2019; Jang et al., 2020).
Weighted behavior cloning (BC) (Wang et al., 2020) is one of the representative offline RL algorithms, which is free from the issue of diverging from human language. Weighted BC amounts
to filtering out bad actions and imitating good actions. In the context of task-oriented dialogues, that would be equivalent to simply dropping the unsuccessful dialogues from the corpus. However, dropping a whole dialogue from training would be wasteful, since they may still contain some taskspecific information that is useful to properly respond to user requests in the intermediate steps.
In this paper, we present an offline RL algorithm for task-oriented dialogue, which can be adopted for any generative pre-trained language model. Our algorithm, GPT-Critic, aims to revise unsuccessful dialogues into successful ones, rather than removing them as done in weighted BC. It starts with fine-tuning the GPT-2 model and learning the action-value function (critic) using the dialogue corpus. Then, GPT-Critic generates a strategically promising action that is selected based on the value estimated by the critic. GPT-Critic updates the policy through behavior cloning of the critic-guided self-generated responses. This is in contrast to the previous methods that perform weighted behavior cloning on the dialogue corpus, where the action choice is restricted to the support in the dataset (Wang et al., 2020). Compared to traditional actor-critic methods, since GPT-Critic does not rely on policy gradient and updates the policy within the support of generated actions from the GPT-2, it thus inherits GPT-2’s ability to generate human-like responses. In the experiments, we demonstrate that GPT-Critic outperforms the state-of-the-art end-to-end dialogue agent in the task-oriented dialogue benchmarks including MultiWOZ 2.0 (Budzianowski et al., 2018) and ConvLab (Zhu et al., 2020).
2 BACKGROUND
2.1 OFFLINE REINFORCEMENT LEARNING FOR TASK-ORIENTED DIALOGUES
We consider the task-oriented dialogue system that can be modeled as a partially observable Markov decision process (POMDP) (Williams & Young, 2007) defined by tuple 〈S,A,O, T, Z,R, γ〉 where S is the set of environment states s = 〈g, h〉 (underlying state that consists of the user goal g and dialogue history h),A is the set of actions a (a sequence of tokens which represents dialogue act and system response), O is the set of observations o (user utterance), T (s′|s, a) = Pr(st+1 = s′|st = s, at = a) is the transition function, Z(o|s′, a) = Pr(ot+1 = o|st+1 = s′, at = a) is the observation probability, R(g, h, a) is the reward function indicating the utility of executing action a in history h and the user goal g, and γ ∈ (0, 1) is a discount factor. The history at time step t, ht = {o0, a0, . . . ot−1, at−1, ot}, is a sequence of all previous observations and actions. Since the underlying state s (e.g. user goal) is not directly observable, the agent makes decisions based on the entire observation-action history. The policy π(at|ht) is mapping from history ht to a probability distribution overA. The goal is to find an optimal policy π∗ that maximizes the expected cumulative rewards, i.e. π∗ = arg maxπ Eπ [ ∑∞ t=0 γ
tR(g, ht, at)]. The action-value function of policy π is defined as Qπ(h, a) := Eπ [ ∑∞ t=0 γ
tR(g, ht, at)|h0 = h, a0 = a], where Qπ is a unique solution of the Bellman equation: Qπ(h, a) = Eg[R(g, h, a)] + γEπ [Qπ(h′, a′)].
Using offline RL for dialogue policy optimization, the agent optimizes the policy from the precollected dataset D = {{(gj , hjt , a j t , r j t , h j t+1) T t=0}Nj=1} without online environment interaction during the intermediate stages of training. Prior offline RL algorithms (Fujimoto et al., 2019; Janner et al., 2019; Kumar et al., 2020) rely on off-policy actor-critic method, where the critic network is trained by minimizing the temporal differnce error with respect to the target policy π:
arg min φ
E(ht,at,rt,ht+1)∼D [( rt + γEat+1∼π(ht+1) [ Qφ̄(ht+1, at+1) ] −Qφ(ht, at) )2] (1)
where φ̄ is the parameters of the target network. As discussed in the prior work (Fujimoto et al., 2019; Kumar et al., 2020), optimizing this loss can be challenging in the offline RL setting due to the overestimation issue in the bootstrapping process by taking out-of-distribution (OOD) actions to evaluate the value of the next state.
2.2 END-TO-END TASK-ORIENTED DIALOGUE SYSTEM
We focus on the MultiWOZ 2.0 dataset (Budzianowski et al., 2018), which is a representative benchmark for task-oriented dialogue. The MultiWOZ dataset is a fully-annotated corpus of human-human task-oriented conversations, which is collected via the Wizard-of-Oz setting (Kelley, 1984). The traditional approach to building a task-oriented dialogue system adopts a modular pipeline, which consists of the following four modules: 1) A natural language understanding (NLU) module (Kim et al., 2017; Zhu et al., 2020) identifies the user’s intent and extracts the information of slots and their values, 2) A Dialogue state tracking (DST) module (Williams et al., 2013) infers the belief state, 3) A dialogue policy (POL) module decides the system action, 4) A natural language generation (NLG) module (Wen et al., 2015) generates the system response corresponding to the system action. Recently, end-to-end task-oriented dialogue methods leveraging the pre-trained language model have been proposed (Yang et al., 2021; Ham et al., 2020; Lin et al., 2020; Peng et al., 2021; Hosseini-Asl et al., 2020), and significantly improves overall performance in the task-oriented dialogues. In this paper, our algorithm is built upon UBAR (Yang et al., 2021), which is based on GPT-2 (Radford et al., 2019) and currently the state-of-the-art end-to-end dialogue agent for the MultiWOZ domain.
3 OFFLINE REINFORCEMENT LEARNING FOR END-TO-END TASK-ORIENTED DIALOGUE SYSTEMS
The corpus collected from human-human conversations inevitably contains unsuccessful dialogues in terms of task completion. For example, approximately 20% dialogues of the MultiWOZ dataset fail to meet the user goal. Therefore, a naive behavior cloning of the whole dataset would limit the performance of the conversational agent since the dataset includes a lot of unsuccessful dialogues: an agent that imitates failure would be inevitably suboptimal. Yet, dropping the unsuccessful dialogues from the corpus as done in weighted BC is also undesirable, since they may contain some task-specific information that is useful to properly respond to user requests. We thus aim to revise unsuccessful dialogues into successful ones in order to prevent repeating the past failure while improving the task performance.
In this section, we present GPT-Critic, an offline RL algorithm for task-oriented dialogue. Our GPTCritic is analogous to Actor-Critic method: GPT (Actor) decides which action to take while the Critic informs how good the action was and provides a signal for policy improvement. Still, GPTCritic is distinct from the Actor-Critic methods in that it does not rely on the policy gradients, which are generally known to cause the issue of diverging from human language (Lewis et al., 2017; Zhao et al., 2019). Instead, we sample a set of action candidates using GPT-2 and pick the best one using the critic, which constitutes a revised dialogue corpus. Then, we perform supervised fine-tuning of the GPT-2 on the revised dialogue corpus. This learning procedure of our GPT-Critic does not hurt the agent’s capability to generate human-like sentences, given that the generated action candidates were all natural-looking sentences due to the power of large pre-trained LM. Our algorithm is built upon the GPT-2 but it can be adopted for any generative pre-trained language model.
3.1 POLICY EVALUATION
Our GPT-Critic starts by training the action-value function (i.e. critic), which can evaluate the candidates for the response. The architecture of the critic network basically follows GPT-2 with employing different last layers to compute the Q-value. The parameterization of the critic network Qφ is designed to share the parameters of the Transformer (Vaswani et al., 2017) layers of GPT-2, where the parameters of the Transformer layers are only updated during the policy improvement step. The critic network is trained by minimizing the temporal difference error with respect to the dataset D:
arg min φ
E(ht,at,rt,ht+1,at+1)∼D [( rt + γQφ̄(ht+1, at+1)−Qφ(ht, at) )2] (2)
where φ̄ is the parameters of the target network. Note that Eq. (2) is an on-policy evaluation on the dataset D, which can be optimized very stably since every at+1 is always an in-distribution sample of D. This is in contrast to Eq. (1), which requires evaluation of out-of-distribution actions sampled from the target policy π. The OOD action-value estimation can be very unreliable if the target policy deviates much from the dataset.
This kind of on-policy evaluation has been explored in the offline RL context for stable policy optimization (Brandfonbrener et al., 2021; Goo & Niekum, 2021), but they are limited to only one-step policy improvement: once the policy π is improved by the initial on-policy Q-function (i.e. π(s) = arg maxaQ(s, a)), the new policy deviates from the dataset policy, thus it requires off-policy evaluation for further policy iteration. In contrast, our GPT-Critic performs policy improvement by generating an improved dataset based on the learned critic, where we can perform on-policy evaluation on the new dataset again. As a consequence, GPT-Critic can enjoy the stable multi-step policy iteration through alternation between on-policy evaluation and policy improvement via revising dataset, which will be discussed in the following section.
3.2 POLICY IMPROVEMENT VIA DATASET REVISION
In the task-oriented dialogues, the reward is given by the external program provided as a part of the dataset, which checks whether the user goal is satisfied by examining the dialogue history. To generate the improved dataset, we adopt the common automatic evaluation of dialogue systems, where the agent generates dialogue act and system response on every system turn with fixed user utterances. More formally, the GPT-Critic generates a new dataset containing revised responses by:
Di+1 = {(g, ht, a∗t , r∗t , h∗t+1) | a∗t = arg max a∈{ak}N
{ak}N∼πiθ(ht)
Qφ(ht, a) where ht ∈ Di} (3)
where {ak}N is a set of N response candidates generated from the policy π (i.e. fine-tuned GPT-2), and Di is the dataset at i-th iteration. In the task-oriented dialogues, a reward function R(g, h, a) is provided that can compute a reward given a user goal, dialogue history, and system action. The revised reward r∗t = R(g, ht, a ∗ t ) is computed by given user goal, dialogue history, and revised system action a∗t . The dialogue history is a sequence of all previous observations and actions, thus the revised history h∗t+1 = {o0, a0, . . . , ot, a∗t , ot+1} is defined by replacing the original action at of ht+1 with the revised action a∗t . The examples of revised responses can be found in Appendix B.
In order to address the prohibitively large language action spaces, we explicitly consider the set of response candidates that are generated from the fine-tuned GPT-2. The GPT-Critic selects the
Algorithm 1 GPT-Critic Input: Training dataset D0 = {{(gj , hjt , a j t , r j t , h j t+1) T t=0}Nj=1}, policy network (GPT) πθ , critic network Qφ Fine-tune the initial policy represented by GPT-2 model (e.g. UBAR) for each iteration i do
Update critic by minimizing the temporal difference error until convergence:
argmin φ E(g,ht,at,rt,ht+1,at+1)∼Di
[( rt + γQφ̄(ht+1, at+1)−Qφ(ht, at) )2] Update dataset by critic-guided self-generation:
Di+1 = {(g, ht, a∗t , r∗t , h∗t+1) | a∗t = argmax a∈{ai}N
{ai}N∼πiθ(ht)
Qφ(ht, a) where ht ∈ Di}
Update policy by behavior cloning of critic-guided self-generated dataset: (Early stop according to the loss on the validation set)
argmin θ E(ht,at)∼Di+1 [− log πθ(at|ht)]
end for
most promising response by calculating the Q-values over the response candidates. GPT-Critic then performs behavior cloning of critic-guided self-generated dialogues:
arg min θ E(ht,at)∼Di+1 [− log πθ(at|ht)] (4)
where θ is the parameters of GPT-2. The policy improvement of GPT-Critic is performed by behavior cloning of generated dialogues from the GPT-2, thus GPT-Critic inherits GPT-2’s ability to generate human-like responses.
We can theoretically show that the updated policy by the above policy improvement step has a higher value than the old policy. Furthermore, we can also theoretically show that updated policy by the higher number of candidate actions has a higher value than the policy updated by the lower number of candidate actions. We formalize this result in Theorem 1. Theorem 1. (Policy Improvement) Given a policy π and the number of sampling actions N ≥ 1, If we update the new policy πnewN by
∀s, πnewN (·|s) = arg max a∈{ak}N
{ak}N∼π(s)
Qπ(s, a)
then Qπ new N (s, a) ≥ Qπ(s, a) ∀s, a always holds. Furthermore, for any N,M such that N ≥ M ≥ 1, Qπ new N (s, a) ≥ QπnewM (s, a) ∀s, a always holds. (Proof in Appendix A.)
We describe our algorithm, GPT-Critic, in Algorithm 1, that alternates between policy evaluation and policy improvement via revising the dataset until the policy performance converges.
4 RELATED WORK
End-to-End Task-Oriented Dialogue Systems. The traditional approach to building a task-oriented dialogue system adopts a modular pipeline, which consists of natural language understanding, dialogue state tracking, dialogue policy, and natural language generation. Recently, pre-trained LMbased end-to-end task-oriented dialogue agents that all sub-tasks recast as a single sequence prediction problem have been proposed (Ham et al., 2020; Hosseini-Asl et al., 2020), and significantly improved overall performance in the task-oriented dialogues. There are a number of variants of GPT-2-based end-to-end task-oriented dialogue agents. Yang et al. (2021) leverage the entire dialogue session of every dialogue turn. Peng et al. (2021) adopt transfer learning and machine teaching for training a GPT-2-based dialogue agent. Lin et al. (2020) present efficient dialogue state tracking with a minimal generation length, then leverage pre-trained language models for task-oriented dialogues.
Reinforcement Learning for Task-Oriented Dialogue Systems. Applying the standard RL methods straightforwardly to optimize a task-oriented dialogue agent causes the issue of diverging from human language. To address this problem, interleaving reinforcement learning with supervised learning has been proposed but it is still not free from the issue of diverging from human language (Lewis et al., 2017). Recently, the latent representation models for language actions have been introduced to address the aforementioned problem (Zhao et al., 2019; Yarats & Lewis, 2018). They disentangle the semantics of the utterance and the natural language generation, and then perform goal-based training in the space of the latent variables instead of directly optimizing utterances. However, they cannot be directly applied to large-scale pre-trained language models that are not designed in a way that works inherently with discrete latent variables. Jaques et al. (2020) use KLcontrol to restrict the policy to stay close to its prior policy, but it still suffers from divergence from human language even with carefully chosen hyper-parameters. Furthermore, Jang et al. (2020) adopt Bayes-adaptive Monte-Carlo planning to negotiation dialogue then use it as a policy improvement operator. This approach can prevent the issue of diverging from human language through the policy improvement based on behavior cloning of self-generated dialogues. However, they assume a user model that is difficult enough to be considered another problem.
Offline Reinforcement Learning. There have been extensive studies on offline RL (Fujimoto et al., 2019; Levine et al., 2020; Kumar et al., 2020; Wang et al., 2020). Most of prior works are built on the off-policy actor-critic framework, and they focus on the overestimation issue by taking the OOD actions (Kumar et al., 2019; Lee et al., 2020; Fujimoto et al., 2019; Jaques et al., 2020; Kumar et al., 2020). However, a naive application of these offline RL methods suffer from the issue of diverging from human language in the task-oriented dialogues (Lewis et al., 2017; Zhao et al., 2019; Jang et al., 2020). On the other hand, there are a number of recent works on weighted behavior cloning, where a policy is trained by a variant of supervised learning loss (Wang et al., 2020; Peng et al., 2019; Siegel et al., 2020). The weighted behavior cloning approaches filter out bad actions, then perform behavior cloning on high-quality data. However, in the task-oriented dialogues, simply dropping the unsuccessful dialogues from the corpus is undesirable, since they may contain some task-specific information that is useful to properly respond to user requests. Our GPT-Critic aims to revise unsuccessful dialogues into successful ones, which is in contrast to the weighted behavior cloning on the fixed training dataset, where the action choice is restricted to the support in the dataset (Wang et al., 2020; Peng et al., 2019; Siegel et al., 2020). More recently, Chen et al. (2021) introduce Decision Transformer, a Transformer-based architecture that casts the problem of RL as conditional sequence modeling. These offline RL methods based on behavior cloning are directly applied to the task-oriented dialogues without aforementioned issue, but their results are similar to that of behavior cloning in the task-oriented dialogues.
5 EXPERIMENTS
In this section, we show the experimental results of GPT-critic on both automatic evaluation and human evaluation. First, we evaluate the performances of GPT-Critic on the MultiWOZ 2.0 (Budzianowski et al., 2018) as dataset-based automatic evaluation, compared with baseline methods including offline RL algorithms. Second, for more realistic evaluation, we conduct a simulator-based evaluation on the ConvLab framework (Zhu et al., 2020). Third, we also conduct the human evaluation to evaluate the quality of generated responses. Finally, we give a qualitative analysis of our method using generated dialogue examples on the training dataset of MultiWOZ 2.0, which shows how GPT-Critic improves the performance through the behavior cloning of self-generated dialogues. The qualitative analysis with generated dialogue examples can be found in Appendix B.
5.1 EXPERIMENTAL SETUP
We implement GPT-Critic based on the HuggingFace Transformers library (Wolf et al., 2019) and the codebase of UBAR (Yang et al., 2021), the current state-of-the-art GPT-2-based end-to-end task-oriented dialogue agent for the MultiWOZ 2.0 dataset. For the generative pre-trained language model, we use DistilGPT2 (Sanh et al., 2019), a distilled version of GPT-2. Figure 2 shows the architecture of our policy and critic network based on GPT-2. We design the parameterization of the critic network to share the parameters of the Transformer layers of GPT-2, where the parameters of the Transformer layers are only updated during the policy improvement step. For the hyperparameters of fine-tuning the GPT-2 model, we follow the setting in the public code of UBAR (Yang et al., 2021). We use N = 5 for the number of candidate actions {a_k}_N, and the set of candidate actions is constructed by vanilla softmax sampling from the policy, rather than beam search, to collect diverse actions. For each behavior cloning iteration, all models are fine-tuned on the training dataset from the pre-trained GPT-2 and early-stopped according to the loss on the validation set.
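To make the sampling step concrete, here is a minimal sketch of how such a candidate set could be drawn with HuggingFace Transformers. The model name, prompt handling, and `max_new_tokens` value are illustrative assumptions, not the paper's exact configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumed setup: DistilGPT2 stands in for the fine-tuned policy network.
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
policy = AutoModelForCausalLM.from_pretrained("distilgpt2")

def sample_candidates(history_text: str, n_candidates: int = 5, max_new_tokens: int = 60):
    """Draw N diverse action candidates via vanilla softmax sampling (no beam search)."""
    inputs = tokenizer(history_text, return_tensors="pt")
    with torch.no_grad():
        outputs = policy.generate(
            **inputs,
            do_sample=True,                     # softmax sampling -> diverse candidates
            num_return_sequences=n_candidates,  # N = 5 in the paper's setting
            max_new_tokens=max_new_tokens,
            pad_token_id=tokenizer.eos_token_id,
        )
    prompt_len = inputs["input_ids"].shape[1]
    # Keep only the newly generated tokens, i.e. the candidate action a_k.
    return [tokenizer.decode(seq[prompt_len:], skip_special_tokens=True) for seq in outputs]
```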
5.2 EVALUATION ON THE MULTIWOZ DATASET
We evaluate our algorithm on the MultiWOZ 2.0 dataset, which is one of the representative task-oriented dialogue benchmarks. MultiWOZ 2.0 is a large-scale multi-domain Wizard-of-Oz dataset, where a tourist (i.e. user) converses with a clerk (i.e. system) at the information center in a touristic city. It consists of 8438/1000/1000 dialogues for training/validation/testing. For end-to-end evaluation on the MultiWOZ 2.0 dataset, we use the following automatic evaluation metrics: 1) Inform: evaluates whether the system provides an appropriate entity, 2) Success: evaluates whether the system answers all the requested information, 3) BLEU: measures the fluency of the generated response (Papineni et al., 2002). We also report the Combined Score as an overall quality measure (Combined = (Inform + Success) × 0.5 + BLEU). We compare the performance of GPT-Critic with the following algorithms: 1) SFN+RL (Mehri et al., 2019), a seq2seq network that incorporates several pre-trained dialogue modules into a neural dialogue model, 2) DAMD (Zhang et al., 2020), a domain-aware multi-decoder network with a multi-action data augmentation method, 3) SimpleTOD (Hosseini-Asl et al., 2020), a GPT-2-based end-to-end dialogue agent that recasts all sub-tasks as a single sequence prediction problem, 4) SOLOIST (Peng et al., 2021), a GPT-2-based end-to-end dialogue agent with transfer learning and machine teaching, 5) MinTL (Lin et al., 2020), an efficient dialogue state tracking method with a minimal generation length obtained by predicting the difference between old and new states, 6) UBAR (Yang et al., 2021), a GPT-2-based end-to-end dialogue agent that leverages the entire dialogue session of every dialogue turn. We implement our algorithm in the codebase of UBAR (Yang et al., 2021), and the result of UBAR is reproduced by adapting its code to the same evaluation settings as other papers [1]. Moreover, we also compare against a data augmentation method, DATA AUGMENTATION, which naively fine-tunes the GPT-2 model with additional data generated by vanilla softmax sampling from the trained policy.
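For reference, the Combined Score defined above is a one-line computation; the numbers in the usage comment are made up for illustration.

```python
def combined_score(inform: float, success: float, bleu: float) -> float:
    """Overall MultiWOZ quality measure: Combined = (Inform + Success) * 0.5 + BLEU."""
    return (inform + success) * 0.5 + bleu

# Illustrative usage (values are not from the paper's tables):
# combined_score(90.0, 80.0, 17.0) == 102.0
```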
In addition, we also compare with recent offline RL algorithms that are free from the issue of diverging from human language: 1) CRR (Wang et al., 2020), a value-filtered regression method that performs weighted behavior cloning of an offline dataset, 2) Decision Transformer (Chen et al., 2021), a Transformer-based architecture that casts the problem of RL as conditional sequence modeling. For a fair comparison, we use the same pre-trained GPT-2 model as the policy network to train CRR and Decision Transformer. Moreover, to show that policy-gradient-based standard RL algorithms suffer from diverging from human language, we also provide examples of responses generated by a policy-gradient-based standard RL algorithm in Appendix C.
[1] The score reported in the UBAR paper is the result of using the true dialogue state for the DB search. In order to compare under the same conditions as other algorithms, we report the result of UBAR using the predicted dialogue state for the DB search.
Table 1 and Table 2 show the results of policy iteration. Table 1 shows the performance of the training dataset and of the critic-guided self-generated dialogues used for each policy improvement step. Table 2 reports the intermediate performance of behavior cloning of the training dataset and of the critic-guided self-generated dialogues in each policy iteration. As shown in Table 1 and Table 2, the performance of the critic-guided self-generated dialogues improves gradually; the performance of GPT-Critic is also consistently improved through the behavior cloning of the improved dataset.
Table 3 summarizes the overall performance of GPT-Critic and the baseline algorithms in the end-to-end response generation setting, where the generated dialogue state and generated dialogue act are used for the DB search and response generation. The results show that GPT-Critic achieves the best performance in terms of inform rate, success rate, and combined score. Moreover, the performance of GPT-Critic on the BLEU score matches those of other pre-trained LM-based methods, since GPT-Critic inherits GPT-2's ability to generate human-like responses through the behavior cloning of responses generated by GPT-2. The results show that GPT-Critic improves the task performance of the agent without the issue of diverging from human language. In addition, as shown in Table 3, naive data augmentation is not effective, since it does not change GPT-2's sampling distribution in principle.
As for the offline RL baselines, CRR and Decision Transformer produce results that do not diverge from human language, since their policies are also trained by behavior cloning. However, both algorithms show limited performance because they perform behavior cloning on a fixed dataset. CRR has achieved remarkable success in continuous control tasks by performing weighted behavior cloning of a training dataset filtered by the critic, but it does not perform effectively in task-oriented dialogues because of data scarcity. Furthermore, to evaluate Decision Transformer, we adopt a delayed return where the agent receives the cumulative reward at the end of the dialogue, since the agent cannot observe the user goal. Therefore, without observing the user goal at test time, Decision Transformer reduces to behavior cloning of the successful dialogues.
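A sketch of the delayed-return construction used here for Decision Transformer, under the assumption that intermediate rewards are zero and the cumulative reward is revealed only at the end of the dialogue; since the return-to-go is then constant across turns, conditioning on it reduces to cloning the successful dialogues.

```python
def delayed_returns_to_go(turn_rewards):
    """Return-to-go targets when the reward arrives only at the dialogue's end.

    With zero intermediate rewards, every suffix sum equals the episode return,
    so each turn conditions on the same scalar (a sketch, not official DT code).
    """
    episode_return = sum(turn_rewards)
    return [episode_return] * len(turn_rewards)

# e.g. turn_rewards = [0, 0, 0, 1] -> targets [1, 1, 1, 1]
```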
5.3 EVALUATION ON CONVLAB EVALUATOR
In order to evaluate the performance of dialogue agents in an end-to-end fashion, we conduct a simulator-based evaluation on ConvLab (Zhu et al., 2020). ConvLab is an open-source toolkit that enables building task-oriented dialogue systems and performing end-to-end evaluation. The simulator-based evaluation is more reliable than dataset-based automatic evaluation because it measures performance while interacting with a user simulator. To interact with dialogue systems, ConvLab provides an agenda-based user simulator (Schatzmann et al., 2007) that consists of a BERT model (Devlin et al., 2019) for NLU, a rule-based policy, and a template-based NLG. We compare the performance of GPT-Critic with the baseline algorithms interacting with the same user simulator and user goals. We report the results with the following metrics: 1) Complete: evaluates whether the system completes the goal, 2) Success: evaluates whether all the user requests have been informed and the booked entities satisfy the constraints, 3) Book: evaluates how many booked entities satisfy the user constraints, 4) Inform (Precision / Recall / F1): evaluates how many user requests have been informed, 5) Turn (success / all): the average number of turns for successful/all dialogues.
We describe the performance of GPT-Critic and the baselines in Table 7. Each algorithm is tested for 1000 runs with randomly sampled user goals. The results show that GPT-Critic achieves the best performance in all metrics related to task accomplishment. However, they also show that GPT-Critic takes more dialogue turns to accomplish the task, because GPT-Critic is trained by maximizing the success rate without considering the number of dialogue turns.
5.4 HUMAN EVALUATION
We also conduct a human evaluation on Amazon Mechanical Turk (AMT) to assess the quality of the responses generated by GPT-Critic and the baseline algorithms, following the evaluation protocol of (Yang et al., 2021; Lin et al., 2020; Zhang et al., 2020). Specifically, human workers on AMT were asked to read the context and the responses generated by interactive simulation via ConvLab, and then score the following two evaluation metrics on a Likert scale (1-5): 1) Appropriateness: evaluates whether the generated responses are appropriate for the given context, 2) Fluency: evaluates whether the generated responses are comprehensible and human-like. We compare the performance of GPT-Critic with the same baselines as in the ConvLab evaluation. Figure 3 summarizes the overall results of the human evaluation, where 60 workers evaluated the quality of 30 randomly selected dialogues for each algorithm. The results show that GPT-Critic significantly outperforms the baseline algorithms in appropriateness, which is related to task accomplishment. Moreover, the fluency results show that GPT-Critic does not hurt the agent's capability to generate human-like sentences.
6 CONCLUSION
We presented GPT-Critic, an offline RL algorithm for task-oriented dialogue systems, which can be adopted for any generative pre-trained language model. GPT-Critic aims to learn an end-to-end task-oriented dialogue agent without the issue of diverging from human language. GPT-Critic starts by fine-tuning the GPT-2 model and learning the critic using the dialogue corpus. Then, GPT-Critic updates the policy through behavior cloning of the critic-guided self-generated responses; it is thus essentially free from the issue of diverging from human language. In the experiments, we demonstrated that GPT-Critic outperforms the state-of-the-art algorithms on the task-oriented dialogue benchmarks including MultiWOZ 2.0 and ConvLab.
ACKNOWLEDGMENTS
This work was supported by the National Research Foundation (NRF) of Korea (NRF-2019R1A2C1087634, NRF-2021M3I1A1097938) and by Institute of Information & communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT) (No.2019-0-00075, No.2020-0-00940, No.2021-0-02068).
A POLICY IMPROVEMENT THEOREM
Theorem 1. (Policy Improvement) Given a policy $\pi$ and a number of sampled actions $N \geq 1$, if we update the new policy $\pi^{new}_N$ by
$$\forall s, \quad \pi^{new}_N(\cdot \mid s) = \arg\max_{a \in \{a_k\}_N,\ \{a_k\}_N \sim \pi(s)} Q^{\pi}(s, a),$$
then $Q^{\pi^{new}_N}(s, a) \geq Q^{\pi}(s, a)$ holds for all $s, a$. Furthermore, for any $N, M$ such that $N \geq M \geq 1$, $Q^{\pi^{new}_N}(s, a) \geq Q^{\pi^{new}_M}(s, a)$ holds for all $s, a$.

Proof. For any $s, a$ and $N \geq M$,
$$
\begin{aligned}
Q^{\pi^{new}_M}(s, a)
&= \mathbb{E}_P\Big[ R(s_t, a_t) + \gamma\, \mathbb{E}_{a_{t+1} \sim \pi^{new}_M(s_{t+1})}\big[ Q^{\pi^{new}_M}(s_{t+1}, a_{t+1}) \big] \,\Big|\, s_t = s, a_t = a \Big] \\
&= \mathbb{E}_P\Big[ R(s_t, a_t) + \gamma\, \mathbb{E}_{\{a_i\}_M \sim \pi(s_{t+1})}\big[ \max_{a' \in \{a_i\}_M} Q^{\pi}(s_{t+1}, a') \big] \,\Big|\, s_t = s, a_t = a \Big] \\
&\leq \mathbb{E}_P\Big[ R(s_t, a_t) + \gamma\, \mathbb{E}_{\{a_i\}_N \sim \pi(s_{t+1})}\big[ \max_{a' \in \{a_i\}_N} Q^{\pi}(s_{t+1}, a') \big] \,\Big|\, s_t = s, a_t = a \Big] \\
&= \mathbb{E}_P\Big[ R(s_t, a_t) + \gamma\, \mathbb{E}_{a^{new}_{t+1} \sim \pi^{new}_N(s_{t+1})}\big[ Q^{\pi}(s_{t+1}, a^{new}_{t+1}) \big] \,\Big|\, s_t = s, a_t = a \Big] \\
&= \mathbb{E}_P\Big[ R(s_t, a_t) + \gamma \Big( \mathbb{E}_{a^{new}_{t+1} \sim \pi^{new}_N(s_{t+1})}\big[ R(s_{t+1}, a^{new}_{t+1}) \big] + \gamma\, \mathbb{E}_{a_{t+2} \sim \pi(s_{t+2})}\big[ Q^{\pi}(s_{t+2}, a_{t+2}) \big] \Big) \,\Big|\, s_t = s, a_t = a \Big] \\
&= \mathbb{E}_P\Big[ \sum_{\tau=t}^{t+1} \mathbb{E}_{a_{\tau+1} \sim \pi^{new}_N(s_{\tau+1})}\big[ \gamma^{\tau-t} R(s_\tau, a_\tau) \big] + \gamma^2\, \mathbb{E}_{a_{t+2} \sim \pi(s_{t+2})}\big[ Q^{\pi}(s_{t+2}, a_{t+2}) \big] \,\Big|\, s_t = s, a_t = a \Big] \\
&\leq \mathbb{E}_P\Big[ \sum_{\tau=t}^{t+1} \mathbb{E}_{a_{\tau+1} \sim \pi^{new}_N(s_{\tau+1})}\big[ \gamma^{\tau-t} R(s_\tau, a_\tau) \big] + \gamma^2\, \mathbb{E}_{a^{new}_{t+2} \sim \pi^{new}_N(s_{t+2})}\big[ Q^{\pi}(s_{t+2}, a^{new}_{t+2}) \big] \,\Big|\, s_t = s, a_t = a \Big] \\
&= \mathbb{E}_P\Big[ \sum_{\tau=t}^{t+2} \mathbb{E}_{a_{\tau+1} \sim \pi^{new}_N(s_{\tau+1})}\big[ \gamma^{\tau-t} R(s_\tau, a_\tau) \big] + \gamma^3\, \mathbb{E}_{a_{t+3} \sim \pi(s_{t+3})}\big[ Q^{\pi}(s_{t+3}, a_{t+3}) \big] \,\Big|\, s_t = s, a_t = a \Big] \\
&\;\;\vdots \\
&\leq \mathbb{E}_P\Big[ \sum_{\tau=t}^{\infty} \mathbb{E}_{a_{\tau+1} \sim \pi^{new}_N(s_{\tau+1})}\big[ \gamma^{\tau-t} R(s_\tau, a_\tau) \big] \,\Big|\, s_t = s, a_t = a \Big] = Q^{\pi^{new}_N}(s, a).
\end{aligned}
$$
For the case of $N = 1$, note that $\pi^{new}_1$ simply reduces to $\pi$, which concludes the proof:
$$Q^{\pi^{new}_N}(s, a) \;\geq\; Q^{\pi^{new}_M}(s, a) \;\geq\; Q^{\pi^{new}_1}(s, a) \;=\; Q^{\pi}(s, a) \qquad \text{for all } s, a \text{ and } N \geq M \geq 1.$$
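The monotonicity in N can also be checked numerically; the following toy experiment (with an assumed finite action set and uniform sampling policy) estimates the expected value of $\max_{a \in \{a_k\}_N} Q(s, a)$ for increasing N.

```python
import numpy as np

rng = np.random.default_rng(0)
q_values = rng.normal(size=1000)   # hypothetical Q(s, a) over 1000 actions
pi = np.full(1000, 1 / 1000)       # toy behavior policy: uniform over actions

for n in [1, 2, 5, 10]:
    # {a_k}_N ~ pi, then take the critic-maximal candidate, as in Theorem 1.
    samples = rng.choice(q_values, size=(100_000, n), p=pi)
    print(n, samples.max(axis=1).mean())  # non-decreasing in n
```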
B QUALITATIVE ANALYSIS OF SELF-GENERATED DIALOGUES
In this section, we provide a qualitative analysis of the critic-guided self-generated responses in GPT-Critic. We show critic-guided self-generated dialogue examples in Table 5, which illustrate how GPT-Critic improves performance through the behavior cloning of self-generated dialogues, compared with unsuccessful dialogues from the training dataset of MultiWOZ. Each example shows the critic-guided self-generated dialogue act and delexicalized system response, compared with the original dialogue. The generated responses contain all the user's requests with abundant information, whereas the original responses of the unsuccessful dialogues do not contain all the requested information. GPT-Critic improves performance through the behavior cloning of these revised responses. Moreover, Table 5 shows that the generated dialogues do not diverge from human language. Since GPT-Critic updates the policy through behavior cloning of the self-generated human-like responses, GPT-Critic is essentially free from the issue of diverging from human language.
C QUALITATIVE EXAMPLES OF STANDARD REINFORCEMENT LEARNING ALGORITHM
In this section, we provide examples of responses generated by a standard RL algorithm (REINFORCE) to show that policy-gradient-based standard RL algorithms suffer from diverging from human language. As shown in Table 6, the policy-gradient-based RL algorithm generates responses that diverge from human language.
D EVALUATION FOR THE QUALITY OF GENERATED DIALOGUE STATES AND DIALOGUE ACTS
We additionally conducted experiments with UBAR and GPT-Critic to explicitly evaluate the generated dialogue states and dialogue acts (rather than evaluating only the final system response). The table below shows the performance for the predicted dialogue state (joint accuracy / slot accuracy) and the predicted dialogue act (Dialogue Act F1), where the mean performance and the standard error are reported. As the table presents, GPT-Critic outperforms UBAR on Dialogue Act F1, which measures the performance of dialogue policy prediction. However, in the case of dialogue state tracking, there is no significant performance gap between GPT-Critic and UBAR, since our GPT-Critic revises only the dialogue act and system response (which are considered the action in GPT-Critic) but not the dialogue state in the dataset. | 1. What is the primary concern of the paper regarding response generation in task-oriented dialogue agents?
2. What is the proposed solution to address the issue, and how does it build upon previous works?
3. What are the strengths of the proposed approach, particularly in its ability to guide response generation?
4. What are the weaknesses of the paper, especially regarding its novelty and potential limitations?
5. How does the reviewer assess the effectiveness of the proposed method, and what additional evaluations or improvements would they suggest? | Summary Of The Paper
Review | Summary Of The Paper
To tackle the issue of response generation diverging from human language, this paper introduces a critic on top of a pretrained GPT-2 task-oriented dialogue agent, and demonstrates promising empirical results on two benchmarks (MultiWOZ and ConvLab).
Review
Strengths: This paper introduces a critic value function on top of the pretrained GPT-2 task-oriented dialogue agent to guide the response generation.
Weakness:
The novelty may be limited by only adding a critic value function on top of the existing work.
Is it possible to add human evaluation for at least one of the datasets? (fixed in the rebuttal) |
ICLR | Title
GPT-Critic: Offline Reinforcement Learning for End-to-End Task-Oriented Dialogue Systems
Abstract
Training a task-oriented dialogue agent can be naturally formulated as offline reinforcement learning (RL) problem, where the agent aims to learn a conversational strategy to achieve user goals, only from a dialogue corpus. It is very challenging in terms of RL since the natural language action space is astronomical, while feasible (syntactically and semantically correct) actions are very sparse. Thus, standard RL methods easily fail and generate responses diverging from human language, even when fine-tuning a powerful pre-trained language model. In this paper, we introduce GPT-Critic, an offline RL method for task-oriented dialogue. GPT-Critic is built upon GPT-2, fine-tuning the language model through behavior cloning of the critic-guided self-generated sentences. GPT-Critic is essentially free from the issue of diverging from human language since it learns from the sentences sampled from the pre-trained language model. In the experiments, we demonstrate that our algorithm outperforms the state-of-the-art in the task-oriented dialogue benchmarks including MultiWOZ 2.0 and ConvLab.
1 INTRODUCTION
Building an end-to-end task-oriented dialogue agent is one of the promising applications of natural language processing (NLP), yet challenging due to the large language action space and the limited availability of human-annotated data. Recently, large-scale pre-trained language models (LMs) have achieved remarkable successes in various NLP tasks with prohibitively large vocabularies (Devlin et al., 2019; Radford et al., 2019; Brown et al., 2020; Raffel et al., 2019). The current best performing end-to-end conversational agents for task-oriented dialogue systems utilize pre-training on a large-scale corpus and fine-tuning on downstream tasks (Ham et al., 2020; Yang et al., 2021; Lin et al., 2020; Peng et al., 2021). This combination of pre-training and fine-tuning significantly improves overall performance in task-oriented dialogues. However, supervised fine-tuning (i.e. imitation learning of the dialogue corpus) alone may not be sufficient to learn an optimal dialogue strategy, since the corpus often contains suboptimal dialogues collected from human participants of diverse expertise levels. Thus, in order to optimize the task performance of the conversational agent, goal-oriented training (i.e. reinforcement learning) is an essential and promising direction to pursue.
Training a task-oriented conversational agent from a dialogue corpus can be naturally formulated as an offline reinforcement learning (RL) problem (Levine et al., 2020; Fujimoto et al., 2019; Jaques et al., 2020), which offers the prospect of optimizing the policy solely from a fixed dataset without online environment interaction. Most of the existing offline RL methods are built on the off-policy Actor-Critic framework, which performs iterative optimization of the policy (i.e. actor) and the action-value function (i.e. critic) (Fujimoto et al., 2019; Janner et al., 2019; Kumar et al., 2020). Yet, a naive application of these offline RL methods generally results in poor dialogue strategies, which generate responses in no way similar to human language (Lewis et al., 2017; Zhao et al., 2019; Jang et al., 2020).
Weighted behavior cloning (BC) (Wang et al., 2020) is one of the representative offline RL algorithms that is free from the issue of diverging from human language. Weighted BC amounts to filtering out bad actions and imitating good actions. In the context of task-oriented dialogues, that would be equivalent to simply dropping the unsuccessful dialogues from the corpus. However, dropping whole dialogues from training would be wasteful, since they may still contain task-specific information that is useful to properly respond to user requests in the intermediate steps.
In this paper, we present an offline RL algorithm for task-oriented dialogue, which can be adopted for any generative pre-trained language model. Our algorithm, GPT-Critic, aims to revise unsuccessful dialogues into successful ones, rather than removing them as done in weighted BC. It starts by fine-tuning the GPT-2 model and learning the action-value function (critic) using the dialogue corpus. Then, GPT-Critic generates a strategically promising action that is selected based on the value estimated by the critic. GPT-Critic updates the policy through behavior cloning of the critic-guided self-generated responses. This is in contrast to the previous methods that perform weighted behavior cloning on the dialogue corpus, where the action choice is restricted to the support of the dataset (Wang et al., 2020). Compared to traditional actor-critic methods, GPT-Critic does not rely on the policy gradient and updates the policy within the support of actions generated from GPT-2; it thus inherits GPT-2's ability to generate human-like responses. In the experiments, we demonstrate that GPT-Critic outperforms the state-of-the-art end-to-end dialogue agent on the task-oriented dialogue benchmarks including MultiWOZ 2.0 (Budzianowski et al., 2018) and ConvLab (Zhu et al., 2020).
2 BACKGROUND
2.1 OFFLINE REINFORCEMENT LEARNING FOR TASK-ORIENTED DIALOGUES
We consider a task-oriented dialogue system that can be modeled as a partially observable Markov decision process (POMDP) (Williams & Young, 2007) defined by the tuple $\langle S, A, O, T, Z, R, \gamma \rangle$, where $S$ is the set of environment states $s = \langle g, h \rangle$ (the underlying state consisting of the user goal $g$ and the dialogue history $h$), $A$ is the set of actions $a$ (a sequence of tokens representing the dialogue act and system response), $O$ is the set of observations $o$ (user utterances), $T(s'|s,a) = \Pr(s_{t+1} = s' \mid s_t = s, a_t = a)$ is the transition function, $Z(o|s',a) = \Pr(o_{t+1} = o \mid s_{t+1} = s', a_t = a)$ is the observation probability, $R(g, h, a)$ is the reward function indicating the utility of executing action $a$ in history $h$ under the user goal $g$, and $\gamma \in (0, 1)$ is a discount factor. The history at time step $t$, $h_t = \{o_0, a_0, \ldots, o_{t-1}, a_{t-1}, o_t\}$, is the sequence of all previous observations and actions. Since the underlying state $s$ (e.g. the user goal) is not directly observable, the agent makes decisions based on the entire observation-action history. The policy $\pi(a_t|h_t)$ is a mapping from the history $h_t$ to a probability distribution over $A$. The goal is to find an optimal policy $\pi^*$ that maximizes the expected cumulative reward, i.e. $\pi^* = \arg\max_\pi \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t R(g, h_t, a_t)\right]$. The action-value function of policy $\pi$ is defined as $Q^\pi(h, a) := \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t R(g, h_t, a_t) \mid h_0 = h, a_0 = a\right]$, where $Q^\pi$ is the unique solution of the Bellman equation: $Q^\pi(h, a) = \mathbb{E}_g[R(g, h, a)] + \gamma\, \mathbb{E}_\pi[Q^\pi(h', a')]$.
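As a quick illustration of the objective above, the discounted return of a single dialogue can be accumulated backwards; the discount value below is an arbitrary placeholder.

```python
def discounted_return(rewards, gamma=0.99):
    """Compute sum_t gamma^t * R(g, h_t, a_t) for one dialogue trajectory."""
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

# e.g. discounted_return([0.0, 0.0, 1.0], gamma=0.9) == 0.81
```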
Using offline RL for dialogue policy optimization, the agent optimizes the policy from a pre-collected dataset $D = \{\{(g^j, h^j_t, a^j_t, r^j_t, h^j_{t+1})\}_{t=0}^{T}\}_{j=1}^{N}$ without online environment interaction during the intermediate stages of training. Prior offline RL algorithms (Fujimoto et al., 2019; Janner et al., 2019; Kumar et al., 2020) rely on the off-policy actor-critic method, where the critic network is trained by minimizing the temporal difference error with respect to the target policy $\pi$:
$$\arg\min_{\phi} \; \mathbb{E}_{(h_t, a_t, r_t, h_{t+1}) \sim D}\left[\left(r_t + \gamma\, \mathbb{E}_{a_{t+1} \sim \pi(h_{t+1})}\left[Q_{\bar{\phi}}(h_{t+1}, a_{t+1})\right] - Q_{\phi}(h_t, a_t)\right)^2\right] \tag{1}$$
where $\bar{\phi}$ denotes the parameters of the target network. As discussed in prior work (Fujimoto et al., 2019; Kumar et al., 2020), optimizing this loss can be challenging in the offline RL setting due to the overestimation issue in the bootstrapping process, which takes out-of-distribution (OOD) actions to evaluate the value of the next state.
2.2 END-TO-END TASK-ORIENTED DIALOGUE SYSTEM
We focus on the MultiWOZ 2.0 dataset (Budzianowski et al., 2018), which is a representative benchmark for task-oriented dialogue. The MultiWOZ dataset is a fully-annotated corpus of human-human task-oriented conversations, collected via the Wizard-of-Oz setting (Kelley, 1984). The traditional approach to building a task-oriented dialogue system adopts a modular pipeline, which consists of the following four modules: 1) a natural language understanding (NLU) module (Kim et al., 2017; Zhu et al., 2020) identifies the user's intent and extracts the information of slots and their values, 2) a dialogue state tracking (DST) module (Williams et al., 2013) infers the belief state, 3) a dialogue policy (POL) module decides the system action, 4) a natural language generation (NLG) module (Wen et al., 2015) generates the system response corresponding to the system action. Recently, end-to-end task-oriented dialogue methods leveraging pre-trained language models have been proposed (Yang et al., 2021; Ham et al., 2020; Lin et al., 2020; Peng et al., 2021; Hosseini-Asl et al., 2020) and have significantly improved overall performance in task-oriented dialogues. In this paper, our algorithm is built upon UBAR (Yang et al., 2021), which is based on GPT-2 (Radford et al., 2019) and is currently the state-of-the-art end-to-end dialogue agent for the MultiWOZ domain.
3 OFFLINE REINFORCEMENT LEARNING FOR END-TO-END TASK-ORIENTED DIALOGUE SYSTEMS
The corpus collected from human-human conversations inevitably contains unsuccessful dialogues in terms of task completion. For example, approximately 20% of the dialogues in the MultiWOZ dataset fail to meet the user goal. Therefore, naive behavior cloning of the whole dataset would limit the performance of the conversational agent, since the dataset includes many unsuccessful dialogues: an agent that imitates failure would inevitably be suboptimal. Yet, dropping the unsuccessful dialogues from the corpus as done in weighted BC is also undesirable, since they may contain task-specific information that is useful to properly respond to user requests. We thus aim to revise unsuccessful dialogues into successful ones in order to prevent repeating past failures while improving task performance.
In this section, we present GPT-Critic, an offline RL algorithm for task-oriented dialogue. Our GPT-Critic is analogous to the Actor-Critic method: the GPT (actor) decides which action to take, while the critic informs how good the action was and provides a signal for policy improvement. Still, GPT-Critic is distinct from Actor-Critic methods in that it does not rely on policy gradients, which are generally known to cause the issue of diverging from human language (Lewis et al., 2017; Zhao et al., 2019). Instead, we sample a set of action candidates using GPT-2 and pick the best one using the critic, which constitutes a revised dialogue corpus. Then, we perform supervised fine-tuning of GPT-2 on the revised dialogue corpus. This learning procedure of GPT-Critic does not hurt the agent's capability to generate human-like sentences, given that the generated action candidates are all natural-looking sentences thanks to the power of large pre-trained LMs. Our algorithm is built upon GPT-2, but it can be adopted for any generative pre-trained language model.
3.1 POLICY EVALUATION
Our GPT-Critic starts by training the action-value function (i.e. critic), which can evaluate the candidates for the response. The architecture of the critic network basically follows GPT-2, employing different last layers to compute the Q-value. The parameterization of the critic network $Q_\phi$ is designed to share the parameters of the Transformer (Vaswani et al., 2017) layers of GPT-2, where the parameters of the Transformer layers are only updated during the policy improvement step. The critic network is trained by minimizing the temporal difference error with respect to the dataset $D$:
$$\arg\min_{\phi} \; \mathbb{E}_{(h_t, a_t, r_t, h_{t+1}, a_{t+1}) \sim D}\left[\left(r_t + \gamma\, Q_{\bar{\phi}}(h_{t+1}, a_{t+1}) - Q_{\phi}(h_t, a_t)\right)^2\right] \tag{2}$$
where $\bar{\phi}$ denotes the parameters of the target network. Note that Eq. (2) is an on-policy evaluation on the dataset $D$, which can be optimized very stably since every $a_{t+1}$ is always an in-distribution sample of $D$. This is in contrast to Eq. (1), which requires the evaluation of out-of-distribution actions sampled from the target policy $\pi$. The OOD action-value estimation can be very unreliable if the target policy deviates much from the dataset policy.
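A minimal PyTorch sketch of the temporal-difference objective in Eq. (2). Here `critic` and `target_critic` are assumed callables mapping a batch of (history, action) pairs to scalar Q-values, and the batch fields (including a terminal-state mask) mirror the tuples stored in D.

```python
import torch
import torch.nn.functional as F

def critic_td_loss(critic, target_critic, batch, gamma=0.99):
    # Eq. (2): a_{t+1} is taken from the dataset itself, so no out-of-distribution
    # action is ever evaluated (Eq. (1) would instead sample a_{t+1} from pi).
    q = critic(batch["h_t"], batch["a_t"])
    with torch.no_grad():
        q_next = target_critic(batch["h_next"], batch["a_next"])
        target = batch["r_t"] + gamma * (1.0 - batch["done"]) * q_next
    return F.mse_loss(q, target)
```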
This kind of on-policy evaluation has been explored in the offline RL context for stable policy optimization (Brandfonbrener et al., 2021; Goo & Niekum, 2021), but these approaches are limited to only one-step policy improvement: once the policy $\pi$ is improved by the initial on-policy Q-function (i.e. $\pi(s) = \arg\max_a Q(s, a)$), the new policy deviates from the dataset policy, so further policy iteration requires off-policy evaluation. In contrast, our GPT-Critic performs policy improvement by generating an improved dataset based on the learned critic, on which we can perform on-policy evaluation again. As a consequence, GPT-Critic enjoys stable multi-step policy iteration by alternating between on-policy evaluation and policy improvement via revising the dataset, which is discussed in the following section.
3.2 POLICY IMPROVEMENT VIA DATASET REVISION
In task-oriented dialogues, the reward is given by an external program provided as part of the dataset, which checks whether the user goal is satisfied by examining the dialogue history. To generate the improved dataset, we adopt the common automatic evaluation of dialogue systems, where the agent generates the dialogue act and system response at every system turn with fixed user utterances. More formally, GPT-Critic generates a new dataset containing revised responses by:
$$D_{i+1} = \left\{(g, h_t, a^*_t, r^*_t, h^*_{t+1}) \;\middle|\; a^*_t = \arg\max_{a \in \{a_k\}_N,\ \{a_k\}_N \sim \pi^{i}_{\theta}(h_t)} Q_{\phi}(h_t, a), \; h_t \in D_i\right\} \tag{3}$$
where $\{a_k\}_N$ is a set of $N$ response candidates generated from the policy $\pi$ (i.e. the fine-tuned GPT-2), and $D_i$ is the dataset at the $i$-th iteration. In task-oriented dialogues, a reward function $R(g, h, a)$ is provided that can compute a reward given a user goal, dialogue history, and system action. The revised reward $r^*_t = R(g, h_t, a^*_t)$ is computed from the given user goal, dialogue history, and revised system action $a^*_t$. The dialogue history is the sequence of all previous observations and actions, so the revised history $h^*_{t+1} = \{o_0, a_0, \ldots, o_t, a^*_t, o_{t+1}\}$ is defined by replacing the original action $a_t$ of $h_{t+1}$ with the revised action $a^*_t$. Examples of revised responses can be found in Appendix B.
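Putting Eq. (3) into pseudocode-like Python, one dataset-revision step could look as follows; `policy_sample`, `critic_q`, and `reward_fn` are assumed interfaces standing in for the fine-tuned GPT-2, the critic $Q_\phi$, and the reward program shipped with the dataset.

```python
def revise_turn(user_goal, history, next_user_utterance,
                policy_sample, critic_q, reward_fn, n=5):
    """One revision step from Eq. (3): sample N candidates, keep the critic's
    argmax, and recompute the reward and history for the revised action."""
    candidates = [policy_sample(history) for _ in range(n)]   # {a_k}_N ~ pi(h_t)
    best = max(candidates, key=lambda a: critic_q(history, a))
    revised_reward = reward_fn(user_goal, history, best)      # r*_t = R(g, h_t, a*_t)
    revised_history = history + [best, next_user_utterance]   # h*_{t+1}
    return best, revised_reward, revised_history
```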
In order to address the prohibitively large language action space, we explicitly consider the set of response candidates that are generated from the fine-tuned GPT-2. GPT-Critic selects the most promising response by calculating the Q-values over the response candidates. GPT-Critic then performs behavior cloning of the critic-guided self-generated dialogues:
$$\arg\min_{\theta} \; \mathbb{E}_{(h_t, a_t) \sim D_{i+1}}\left[-\log \pi_{\theta}(a_t \mid h_t)\right] \tag{4}$$
where $\theta$ denotes the parameters of GPT-2. The policy improvement of GPT-Critic is performed by behavior cloning of dialogues generated from GPT-2; GPT-Critic thus inherits GPT-2's ability to generate human-like responses.

Algorithm 1 GPT-Critic
Input: training dataset $D_0 = \{\{(g^j, h^j_t, a^j_t, r^j_t, h^j_{t+1})\}_{t=0}^{T}\}_{j=1}^{N}$, policy network (GPT) $\pi_\theta$, critic network $Q_\phi$
Fine-tune the initial policy represented by the GPT-2 model (e.g. UBAR)
for each iteration $i$ do
    Update the critic by minimizing the temporal difference error until convergence:
    $$\arg\min_{\phi} \; \mathbb{E}_{(g, h_t, a_t, r_t, h_{t+1}, a_{t+1}) \sim D_i}\left[\left(r_t + \gamma\, Q_{\bar{\phi}}(h_{t+1}, a_{t+1}) - Q_{\phi}(h_t, a_t)\right)^2\right]$$
    Update the dataset by critic-guided self-generation:
    $$D_{i+1} = \left\{(g, h_t, a^*_t, r^*_t, h^*_{t+1}) \;\middle|\; a^*_t = \arg\max_{a \in \{a_k\}_N,\ \{a_k\}_N \sim \pi^{i}_{\theta}(h_t)} Q_{\phi}(h_t, a), \; h_t \in D_i\right\}$$
    Update the policy by behavior cloning of the critic-guided self-generated dataset (early stopping according to the loss on the validation set):
    $$\arg\min_{\theta} \; \mathbb{E}_{(h_t, a_t) \sim D_{i+1}}\left[-\log \pi_{\theta}(a_t \mid h_t)\right]$$
end for
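The behavior-cloning update of Eq. (4) is simply the language-modeling loss restricted to the action tokens; here is a hedged HuggingFace-style sketch, where masking the history tokens with the ignore index is our assumption about how the objective would be implemented.

```python
def behavior_cloning_loss(policy, tokenizer, history_text, action_text):
    """-log pi_theta(a_t | h_t) for one (h_t, a_t) pair from D_{i+1}."""
    full = tokenizer(history_text + action_text, return_tensors="pt")
    labels = full["input_ids"].clone()
    prompt_len = tokenizer(history_text, return_tensors="pt")["input_ids"].shape[1]
    labels[:, :prompt_len] = -100   # ignore history tokens; clone only the action
    return policy(**full, labels=labels).loss
```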
We can theoretically show that the policy updated by the above policy improvement step has a higher value than the old policy. Furthermore, we can also show theoretically that a policy updated with a larger number of candidate actions has a higher value than a policy updated with a smaller number of candidate actions. We formalize this result in Theorem 1.

Theorem 1. (Policy Improvement) Given a policy $\pi$ and a number of sampled actions $N \geq 1$, if we update the new policy $\pi^{new}_N$ by
$$\forall s, \quad \pi^{new}_N(\cdot \mid s) = \arg\max_{a \in \{a_k\}_N,\ \{a_k\}_N \sim \pi(s)} Q^{\pi}(s, a),$$
then $Q^{\pi^{new}_N}(s, a) \geq Q^{\pi}(s, a)$ holds for all $s, a$. Furthermore, for any $N, M$ such that $N \geq M \geq 1$, $Q^{\pi^{new}_N}(s, a) \geq Q^{\pi^{new}_M}(s, a)$ holds for all $s, a$. (Proof in Appendix A.)
We describe our algorithm, GPT-Critic, in Algorithm 1, which alternates between policy evaluation and policy improvement via revising the dataset until the policy performance converges.
4 RELATED WORK
End-to-End Task-Oriented Dialogue Systems. The traditional approach to building a task-oriented dialogue system adopts a modular pipeline, which consists of natural language understanding, dialogue state tracking, dialogue policy, and natural language generation. Recently, pre-trained LM-based end-to-end task-oriented dialogue agents that recast all sub-tasks as a single sequence prediction problem have been proposed (Ham et al., 2020; Hosseini-Asl et al., 2020), significantly improving overall performance in task-oriented dialogues. There are a number of variants of GPT-2-based end-to-end task-oriented dialogue agents. Yang et al. (2021) leverage the entire dialogue session of every dialogue turn. Peng et al. (2021) adopt transfer learning and machine teaching for training a GPT-2-based dialogue agent. Lin et al. (2020) present efficient dialogue state tracking with a minimal generation length, and leverage pre-trained language models for task-oriented dialogues.
Reinforcement Learning for Task-Oriented Dialogue Systems. Applying standard RL methods straightforwardly to optimize a task-oriented dialogue agent causes the issue of diverging from human language. To address this problem, interleaving reinforcement learning with supervised learning has been proposed, but it is still not free from the issue of diverging from human language (Lewis et al., 2017). Recently, latent representation models for language actions have been introduced to address the aforementioned problem (Zhao et al., 2019; Yarats & Lewis, 2018). They disentangle the semantics of the utterance from the natural language generation, and then perform goal-based training in the space of the latent variables instead of directly optimizing utterances. However, they cannot be directly applied to large-scale pre-trained language models, which are not designed in a way that works inherently with discrete latent variables. Jaques et al. (2020) use KL-control to restrict the policy to stay close to its prior policy, but it still suffers from divergence from human language even with carefully chosen hyper-parameters. Furthermore, Jang et al. (2020) adopt Bayes-adaptive Monte-Carlo planning for negotiation dialogue and use it as a policy improvement operator. This approach can prevent the issue of diverging from human language through policy improvement based on behavior cloning of self-generated dialogues. However, it assumes a user model whose construction is difficult enough to be considered a separate problem.
Offline Reinforcement Learning. There have been extensive studies on offline RL (Fujimoto et al., 2019; Levine et al., 2020; Kumar et al., 2020; Wang et al., 2020). Most prior works are built on the off-policy actor-critic framework, and they focus on the overestimation issue caused by taking OOD actions (Kumar et al., 2019; Lee et al., 2020; Fujimoto et al., 2019; Jaques et al., 2020; Kumar et al., 2020). However, a naive application of these offline RL methods suffers from the issue of diverging from human language in task-oriented dialogues (Lewis et al., 2017; Zhao et al., 2019; Jang et al., 2020). On the other hand, there are a number of recent works on weighted behavior cloning, where a policy is trained by a variant of the supervised learning loss (Wang et al., 2020; Peng et al., 2019; Siegel et al., 2020). The weighted behavior cloning approaches filter out bad actions and then perform behavior cloning on high-quality data. However, in task-oriented dialogues, simply dropping the unsuccessful dialogues from the corpus is undesirable, since they may contain task-specific information that is useful to properly respond to user requests. Our GPT-Critic aims to revise unsuccessful dialogues into successful ones, which is in contrast to weighted behavior cloning on a fixed training dataset, where the action choice is restricted to the support of the dataset (Wang et al., 2020; Peng et al., 2019; Siegel et al., 2020). More recently, Chen et al. (2021) introduced Decision Transformer, a Transformer-based architecture that casts the problem of RL as conditional sequence modeling. These offline RL methods based on behavior cloning can be directly applied to task-oriented dialogues without the aforementioned issue, but their results are similar to those of plain behavior cloning in task-oriented dialogues.
5 EXPERIMENTS
In this section, we show the experimental results of GPT-Critic on both automatic and human evaluation. First, we evaluate the performance of GPT-Critic on MultiWOZ 2.0 (Budzianowski et al., 2018) as a dataset-based automatic evaluation, compared with baseline methods including offline RL algorithms. Second, for a more realistic evaluation, we conduct a simulator-based evaluation on the ConvLab framework (Zhu et al., 2020). Third, we also conduct a human evaluation to assess the quality of the generated responses. Finally, we give a qualitative analysis of our method using dialogue examples generated on the training dataset of MultiWOZ 2.0, which shows how GPT-Critic improves performance through the behavior cloning of self-generated dialogues. The qualitative analysis with generated dialogue examples can be found in Appendix B.
5.1 EXPERIMENTAL SETUP
We implement GPT-Critic based on the HuggingFace Transformers library (Wolf et al., 2019) and the codebase of UBAR (Yang et al., 2021), the current state-of-the-art GPT-2-based end-to-end task-oriented dialogue agent for the MultiWOZ 2.0 dataset. For the generative pre-trained language model, we use DistilGPT2 (Sanh et al., 2019), a distilled version of GPT-2. Figure 2 shows the architecture of our policy and critic network based on GPT-2. We design the parameterization of the critic network to share the parameters of the Transformer layers of GPT-2, where the parameters of the Transformer layers are only updated during the policy improvement step. For the hyperparameters of fine-tuning the GPT-2 model, we follow the setting in the public code of UBAR (Yang et al., 2021). We use N = 5 for the number of candidate actions {a_k}_N, and the set of candidate actions is constructed by vanilla softmax sampling from the policy, rather than beam search, to collect diverse actions. For each behavior cloning iteration, all models are fine-tuned on the training dataset from the pre-trained GPT-2 and early-stopped according to the loss on the validation set.
5.2 EVALUATION ON THE MULTIWOZ DATASET
We evaluate our algorithm on the MultiWOZ 2.0 dataset, which is one of the representative task-oriented dialogue benchmarks. MultiWOZ 2.0 is a large-scale multi-domain Wizard-of-Oz dataset, where a tourist (i.e. user) converses with a clerk (i.e. system) at the information center in a touristic city. It consists of 8438/1000/1000 dialogues for training/validation/testing. For end-to-end evaluation on the MultiWOZ 2.0 dataset, we use the following automatic evaluation metrics: 1) Inform: evaluates whether the system provides an appropriate entity, 2) Success: evaluates whether the system answers all the requested information, 3) BLEU: measures the fluency of the generated response (Papineni et al., 2002). We also report the Combined Score as an overall quality measure (Combined = (Inform + Success) × 0.5 + BLEU). We compare the performance of GPT-Critic with the following algorithms: 1) SFN+RL (Mehri et al., 2019), a seq2seq network that incorporates several pre-trained dialogue modules into a neural dialogue model, 2) DAMD (Zhang et al., 2020), a domain-aware multi-decoder network with a multi-action data augmentation method, 3) SimpleTOD (Hosseini-Asl et al., 2020), a GPT-2-based end-to-end dialogue agent that recasts all sub-tasks as a single sequence prediction problem, 4) SOLOIST (Peng et al., 2021), a GPT-2-based end-to-end dialogue agent with transfer learning and machine teaching, 5) MinTL (Lin et al., 2020), an efficient dialogue state tracking method with a minimal generation length obtained by predicting the difference between old and new states, 6) UBAR (Yang et al., 2021), a GPT-2-based end-to-end dialogue agent that leverages the entire dialogue session of every dialogue turn. We implement our algorithm in the codebase of UBAR (Yang et al., 2021), and the result of UBAR is reproduced by adapting its code to the same evaluation settings as other papers [1]. Moreover, we also compare against a data augmentation method, DATA AUGMENTATION, which naively fine-tunes the GPT-2 model with additional data generated by vanilla softmax sampling from the trained policy.
In addition, we also compare with recent offline RL algorithms that are free from the issue of diverging from human language: 1) CRR (Wang et al., 2020), a value-filtered regression method that performs weighted behavior cloning of an offline dataset, 2) Decision Transformer (Chen et al., 2021), a Transformer-based architecture that casts the problem of RL as conditional sequence modeling. For a fair comparison, we use the same pre-trained GPT-2 model as the policy network to train CRR and Decision Transformer. Moreover, to show that policy-gradient-based standard RL algorithms suffer from diverging from human language, we also provide examples of responses generated by a policy-gradient-based standard RL algorithm in Appendix C.
[1] The score reported in the UBAR paper is the result of using the true dialogue state for the DB search. In order to compare under the same conditions as other algorithms, we report the result of UBAR using the predicted dialogue state for the DB search.
Table 1 and Table 2 show the results of policy iteration. Table 1 shows the performance of the training dataset and of the critic-guided self-generated dialogues used for each policy improvement step. Table 2 reports the intermediate performance of behavior cloning of the training dataset and of the critic-guided self-generated dialogues in each policy iteration. As shown in Table 1 and Table 2, the performance of the critic-guided self-generated dialogues improves gradually; the performance of GPT-Critic is also consistently improved through the behavior cloning of the improved dataset.
Table 3 summarizes the overall performance of GPT-Critic and the baseline algorithms in the end-to-end response generation setting, where the generated dialogue state and generated dialogue act are used for the DB search and response generation. The results show that GPT-Critic achieves the best performance in terms of inform rate, success rate, and combined score. Moreover, the performance of GPT-Critic on the BLEU score matches those of other pre-trained LM-based methods, since GPT-Critic inherits GPT-2's ability to generate human-like responses through the behavior cloning of responses generated by GPT-2. The results show that GPT-Critic improves the task performance of the agent without the issue of diverging from human language. In addition, as shown in Table 3, naive data augmentation is not effective, since it does not change GPT-2's sampling distribution in principle.
As for the offline RL baselines, CRR and Decision Transformer produce results that do not diverge from human language, since their policies are also trained by behavior cloning. However, both algorithms show limited performance because they perform behavior cloning on a fixed dataset. CRR has achieved remarkable success in continuous control tasks by performing weighted behavior cloning of a training dataset filtered by the critic, but it does not perform effectively in task-oriented dialogues because of data scarcity. Furthermore, to evaluate Decision Transformer, we adopt a delayed return where the agent receives the cumulative reward at the end of the dialogue, since the agent cannot observe the user goal. Therefore, without observing the user goal at test time, Decision Transformer reduces to behavior cloning of the successful dialogues.
5.3 EVALUATION ON CONVLAB EVALUATOR
In order to evaluate the performance of dialogue agents in an end-to-end fashion, we conduct a simulator-based evaluation on ConvLab (Zhu et al., 2020). ConvLab is an open-source toolkit that enables building task-oriented dialogue systems and performing end-to-end evaluation. The simulator-based evaluation is more reliable than dataset-based automatic evaluation because it measures performance while interacting with a user simulator. To interact with dialogue systems, ConvLab provides an agenda-based user simulator (Schatzmann et al., 2007) that consists of a BERT model (Devlin et al., 2019) for NLU, a rule-based policy, and a template-based NLG. We compare the performance of GPT-Critic with the baseline algorithms interacting with the same user simulator and user goals. We report the results with the following metrics: 1) Complete: evaluates whether the system completes the goal, 2) Success: evaluates whether all the user requests have been informed and the booked entities satisfy the constraints, 3) Book: evaluates how many booked entities satisfy the user constraints, 4) Inform (Precision / Recall / F1): evaluates how many user requests have been informed, 5) Turn (success / all): the average number of turns for successful/all dialogues.
We describe the performance of GPT-Critic and the baselines in Table 7. Each algorithm is tested for 1000 runs with randomly sampled user goals. The results show that GPT-Critic achieves the best performance in all metrics related to task accomplishment. However, they also show that GPT-Critic takes more dialogue turns to accomplish the task, because GPT-Critic is trained by maximizing the success rate without considering the number of dialogue turns.
5.4 HUMAN EVALUATION
We also conduct a human evaluation on Amazon Mechanical Turk (AMT) to assess the quality of the responses generated by GPT-Critic and the baseline algorithms, following the evaluation protocol of (Yang et al., 2021; Lin et al., 2020; Zhang et al., 2020). Specifically, human workers on AMT were asked to read the context and the responses generated by interactive simulation via ConvLab, and then score the following two evaluation metrics on a Likert scale (1-5): 1) Appropriateness: evaluates whether the generated responses are appropriate for the given context, 2) Fluency: evaluates whether the generated responses are comprehensible and human-like. We compare the performance of GPT-Critic with the same baselines as in the ConvLab evaluation. Figure 3 summarizes the overall results of the human evaluation, where 60 workers evaluated the quality of 30 randomly selected dialogues for each algorithm. The results show that GPT-Critic significantly outperforms the baseline algorithms in appropriateness, which is related to task accomplishment. Moreover, the fluency results show that GPT-Critic does not hurt the agent's capability to generate human-like sentences.
6 CONCLUSION
We presented GPT-Critic, an offline RL algorithm for task-oriented dialogue systems, which can be adopted for any generative pre-trained language model. GPT-Critic aims to learn an end-to-end task-oriented dialogue agent without the issue of diverging from human language. GPT-Critic starts by fine-tuning the GPT-2 model and learning the critic using the dialogue corpus. Then, GPT-Critic updates the policy through behavior cloning of the critic-guided self-generated responses; it is thus essentially free from the issue of diverging from human language. In the experiments, we demonstrated that GPT-Critic outperforms the state-of-the-art algorithms on the task-oriented dialogue benchmarks including MultiWOZ 2.0 and ConvLab.
ACKNOWLEDGMENTS
This work was supported by the National Research Foundation (NRF) of Korea (NRF-2019R1A2C1087634, NRF-2021M3I1A1097938) and by Institute of Information & communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT) (No.2019-0-00075, No.2020-0-00940, No.2021-0-02068).
A POLICY IMPROVEMENT THEOREM
Theorem 1. (Policy Improvement) Given a policy $\pi$ and a number of sampled actions $N \geq 1$, if we update the new policy $\pi^{new}_N$ by
$$\forall s, \quad \pi^{new}_N(\cdot \mid s) = \arg\max_{a \in \{a_k\}_N,\ \{a_k\}_N \sim \pi(s)} Q^{\pi}(s, a),$$
then $Q^{\pi^{new}_N}(s, a) \geq Q^{\pi}(s, a)$ holds for all $s, a$. Furthermore, for any $N, M$ such that $N \geq M \geq 1$, $Q^{\pi^{new}_N}(s, a) \geq Q^{\pi^{new}_M}(s, a)$ holds for all $s, a$.

Proof. For any $s, a$ and $N \geq M$,
$$
\begin{aligned}
Q^{\pi^{new}_M}(s, a)
&= \mathbb{E}_P\Big[ R(s_t, a_t) + \gamma\, \mathbb{E}_{a_{t+1} \sim \pi^{new}_M(s_{t+1})}\big[ Q^{\pi^{new}_M}(s_{t+1}, a_{t+1}) \big] \,\Big|\, s_t = s, a_t = a \Big] \\
&= \mathbb{E}_P\Big[ R(s_t, a_t) + \gamma\, \mathbb{E}_{\{a_i\}_M \sim \pi(s_{t+1})}\big[ \max_{a' \in \{a_i\}_M} Q^{\pi}(s_{t+1}, a') \big] \,\Big|\, s_t = s, a_t = a \Big] \\
&\leq \mathbb{E}_P\Big[ R(s_t, a_t) + \gamma\, \mathbb{E}_{\{a_i\}_N \sim \pi(s_{t+1})}\big[ \max_{a' \in \{a_i\}_N} Q^{\pi}(s_{t+1}, a') \big] \,\Big|\, s_t = s, a_t = a \Big] \\
&= \mathbb{E}_P\Big[ R(s_t, a_t) + \gamma\, \mathbb{E}_{a^{new}_{t+1} \sim \pi^{new}_N(s_{t+1})}\big[ Q^{\pi}(s_{t+1}, a^{new}_{t+1}) \big] \,\Big|\, s_t = s, a_t = a \Big] \\
&= \mathbb{E}_P\Big[ R(s_t, a_t) + \gamma \Big( \mathbb{E}_{a^{new}_{t+1} \sim \pi^{new}_N(s_{t+1})}\big[ R(s_{t+1}, a^{new}_{t+1}) \big] + \gamma\, \mathbb{E}_{a_{t+2} \sim \pi(s_{t+2})}\big[ Q^{\pi}(s_{t+2}, a_{t+2}) \big] \Big) \,\Big|\, s_t = s, a_t = a \Big] \\
&= \mathbb{E}_P\Big[ \sum_{\tau=t}^{t+1} \mathbb{E}_{a_{\tau+1} \sim \pi^{new}_N(s_{\tau+1})}\big[ \gamma^{\tau-t} R(s_\tau, a_\tau) \big] + \gamma^2\, \mathbb{E}_{a_{t+2} \sim \pi(s_{t+2})}\big[ Q^{\pi}(s_{t+2}, a_{t+2}) \big] \,\Big|\, s_t = s, a_t = a \Big] \\
&\leq \mathbb{E}_P\Big[ \sum_{\tau=t}^{t+1} \mathbb{E}_{a_{\tau+1} \sim \pi^{new}_N(s_{\tau+1})}\big[ \gamma^{\tau-t} R(s_\tau, a_\tau) \big] + \gamma^2\, \mathbb{E}_{a^{new}_{t+2} \sim \pi^{new}_N(s_{t+2})}\big[ Q^{\pi}(s_{t+2}, a^{new}_{t+2}) \big] \,\Big|\, s_t = s, a_t = a \Big] \\
&= \mathbb{E}_P\Big[ \sum_{\tau=t}^{t+2} \mathbb{E}_{a_{\tau+1} \sim \pi^{new}_N(s_{\tau+1})}\big[ \gamma^{\tau-t} R(s_\tau, a_\tau) \big] + \gamma^3\, \mathbb{E}_{a_{t+3} \sim \pi(s_{t+3})}\big[ Q^{\pi}(s_{t+3}, a_{t+3}) \big] \,\Big|\, s_t = s, a_t = a \Big] \\
&\;\;\vdots \\
&\leq \mathbb{E}_P\Big[ \sum_{\tau=t}^{\infty} \mathbb{E}_{a_{\tau+1} \sim \pi^{new}_N(s_{\tau+1})}\big[ \gamma^{\tau-t} R(s_\tau, a_\tau) \big] \,\Big|\, s_t = s, a_t = a \Big] = Q^{\pi^{new}_N}(s, a).
\end{aligned}
$$
For the case of $N = 1$, note that $\pi^{new}_1$ simply reduces to $\pi$, which concludes the proof:
$$Q^{\pi^{new}_N}(s, a) \;\geq\; Q^{\pi^{new}_M}(s, a) \;\geq\; Q^{\pi^{new}_1}(s, a) \;=\; Q^{\pi}(s, a) \qquad \text{for all } s, a \text{ and } N \geq M \geq 1.$$
B QUALITATIVE ANALYSIS OF SELF-GENERATED DIALOGUES
In this section, we provide a qualitative analysis of the critic-guided self-generated responses in GPT-Critic. We show critic-guided self-generated dialogue examples in Table 5, which illustrate how GPT-Critic improves performance through the behavior cloning of self-generated dialogues, compared with unsuccessful dialogues from the training dataset of MultiWOZ. Each example shows the critic-guided self-generated dialogue act and delexicalized system response, compared with the original dialogue. The generated responses contain all the user's requests with abundant information, whereas the original responses of the unsuccessful dialogues do not contain all the requested information. GPT-Critic improves performance through the behavior cloning of these revised responses. Moreover, Table 5 shows that the generated dialogues do not diverge from human language. Since GPT-Critic updates the policy through behavior cloning of the self-generated human-like responses, GPT-Critic is essentially free from the issue of diverging from human language.
C QUALITATIVE EXAMPLES OF STANDARD REINFORCEMENT LEARNING ALGORITHM
In this section, we provide examples of responses generated by a standard RL algorithm (REINFORCE) to show that policy-gradient-based standard RL algorithms suffer from diverging from human language. As shown in Table 6, the policy-gradient-based RL algorithm generates responses that diverge from human language.
D EVALUATION FOR THE QUALITY OF GENERATED DIALOGUE STATES AND DIALOGUE ACTS
We additionally conducted experiments with UBAR and GPT-Critic to explicitly evaluate the generated dialogue states and dialogue acts (rather than evaluating only the final system response). The table below shows the performance for the predicted dialogue state (joint accuracy / slot accuracy) and the predicted dialogue act (Dialogue Act F1), where the mean performance and the standard error are reported. As the table presents, GPT-Critic outperforms UBAR on Dialogue Act F1, which measures the performance of dialogue policy prediction. However, in the case of dialogue state tracking, there is no significant performance gap between GPT-Critic and UBAR, since our GPT-Critic revises only the dialogue act and system response (which are considered the action in GPT-Critic) but not the dialogue state in the dataset. | 1. How does the proposed model handle large-scale natural language action spaces in reinforcement learning?
2. What are the strengths of the proposed approach, particularly in incorporating policy and q networks?
3. What are the concerns regarding q-value estimation, and how does the paper address them?
4. What are the exploration perspectives of self-generation in the proposed method?
5. How does the choice of N impact the final performance, and what sampling methods are used for generating response candidates? | Summary Of The Paper
Review | Summary Of The Paper
The paper works on offline reinforcement learning for the natural language action space setting, particularly for task-oriented dialogue management. The paper nicely incorporates the policy network (to sample agent responses) and the Q network (to evaluate agent responses) into a single GPT-2 network and proposes a policy iteration algorithm to optimize both the Q and policy networks. During policy evaluation, the Q network is updated with sampled system actions and responses. During policy improvement, the sampled system actions and responses with maximum Q values are used as labels to update the policy network. The model achieves SoTA performance on the MultiWOZ dataset.
Review
The paper proposes a nice way to handle large-scale natural language action spaces for RL. An action in this model is a system action plus a response (a sequence of tokens) rather than pre-specified variables as in discrete latent models for dialogue (LaRL, etc.), so the proposed model can be easily incorporated into a transformer-based language model (e.g. GPT-2).
My biggest concern is the Q-value estimation. Estimating the Q value for an offline dataset is hard due to the problem of overestimation, as already pointed out by the authors in the paper. However, the authors didn't describe in detail how they mitigate this problem. The only sentence in the paper about this is "However, we avoid this OOD problem by xxx ... revised by generated system response and evaluated reward using offline automatic evaluation". More details are needed, and an ablation study about this is needed. Otherwise, I don't know if the proposed method can generalize to other natural language action space applications where the ground-truth reward function is unknown for offline automatic evaluation.
My second concern is the exploration perspective of self-generation. The paper mentioned that {a_k}^N is a set of N response candidates generated from the current policy \pi. What number is N set to? What is the impact of N on the final performance? Moreover, is {a_k}^N sampled by beam search or vanilla sampling? I'd assume that we want to sample a {a_k}^N that is diverse enough, rather than similar system responses with only one or two different words. More discussion about the generation is needed. |
ICLR | Title
RankedDrop: Enhancing Deep Graph Convolutional Networks Training
Abstract
Graph Neural Networks (GNNs) are playing a more and more important role in analyzing unstructured data from the complex real world. Randomly dropping edges from the input graph at training epochs can reduce the over-fitting and over-smoothing phenomena and increase the depth of GNNs. However, such a method relies strongly on the chosen randomness: the accuracy depends on the initialization of the randomness, which makes the selection of hyperparameters even more difficult. We propose in this paper RankedDrop, a novel method with a spatial-aware dropping-edge selection. The selection takes into account global graph information using PageRank, and local graph neighborhood information using node degree. RankedDrop provides more stable training results compared to the state-of-the-art solution, while maintaining the advantages of random edge dropping. Furthermore, RankedDrop is a general method that can be deployed in a deep learning framework to enhance the performance of GNNs.
1 INTRODUCTION & CONTEXT
Convolutional Neural Networks (CNNs) have demonstrated great success in our daily lives for image classification and many other applications. However, the real world still contains much non-Euclidean (graph) data, such as social networks or reference systems, that cannot be handled by CNNs. After introducing Graph Neural Networks (GNNs), Defferrard et al. (2016) generalized CNNs to graphs to exploit their potential for classification problems on non-Euclidean data structures. The computation of Graph Convolutional Neural Networks (GCNs) can be summarized as iterative neighborhood aggregations with a message passing schema (Huan et al. (2021)).
He et al. (2016) showed that deeper CNNs have a higher potential to achieve better precision. However, modern GCNs (Kipf & Welling (2017); Pei et al. (2021); Hamilton et al. (2017)) can only work with a very limited number of layers, because training deep neural networks is a very complex task (Claesen & De Moor (2015)), the complexity of the computed function grows exponentially with depth (Raghu et al. (2017)), and the deeper the networks are, the more they are subject to over-smoothing (Li et al. (2018); Chen et al. (2020)). Meanwhile, deeper GCNs and/or small graph datasets can lead to over-fitting, where a model fits the training data well but the testing data poorly.
Dropout (Hinton et al. (2012); Srivastava et al. (2014)) is a promising regularization technique for reducing over-fitting. In the field of GCNs, DropEdge, introduced by Rong et al. (2019), randomly removes a certain proportion of edges from the input graph at each epoch and showed promising results in slowing the convergence of over-fitting and over-smoothing. Moreover, the random dropping operates on the message-passing schema shared by most GCNs, so the method can be applied to many GCN backbone models such as GCN (Kipf & Welling (2017)), ResGCN (Pei et al. (2021)), GraphSage (Hamilton et al. (2017)), IncepGCN (Szegedy et al. (2016)) and JKNet (Xu et al. (2018)).
However, the accuracy obtained by DropEdge depends on how the randomness of the dropping is initialized. Moreover, the only parameter that can be adjusted in DropEdge is the percentage of edges that will be dropped. The lack of control over how the dropped edges are selected may limit the possibilities to optimize GCN training according to the application domain and the chosen backbone architecture. Furthermore, a graph structure includes a lot of useful information (Newman (2003)). Random dropping may destroy graph structure information and thus further limit the potential of optimizing GCN training.
This paper proposes RankedDrop, a novel method with spatial-aware dropped-edge selection. The selection takes into account global graph information via PageRank and local neighborhood information via node degree. Graph structure information is extracted both to reduce the impact of randomness in the selection and to improve the final accuracy after training. RankedDrop provides more stable training results than the state-of-the-art solution while maintaining the advantages of random edge dropping, including the reduction of over-fitting and over-smoothing and the generality across GCN backbones. As shown by our experiments, the accuracies of deep GCNs on semi-supervised learning are significantly improved by using RankedDrop.
2 RANKEDDROP METHOD WITH DATA SELECTION
RankedDrop is a general method applied to the input graph of a GCN training before each epoch. It first extracts a score for each node based on graph analysis (Sec 2.2); the nodes are then reordered to control the selection probability and selected according to the computed score (Sec 2.3); finally, edges of the selected nodes are selected and dropped to create the new input graph (Sec 2.4).
2.1 NOTATIONS AND PRELIMINARIES
We use an adjacency matrix A to represent the original input graph G, and nnz for the number of non-zero values of A, i.e., the number of edges of the graph G. We denote by p the proportion of edges of G that will be dropped. Therefore, after dropping, the new input graph Gdrop has (1 − p) × nnz edges. We denote the resulting adjacency matrix Adrop for Gdrop, and we use A′ to denote the matrix of the p × nnz dropped edges. The relation between the above three matrices is:
Adrop = A−A′ (1)
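To make the bookkeeping concrete, the following is a minimal sketch (ours, not from the paper; the helper name drop_edges is an assumption) of building Adrop from a list of selected edges, assuming SciPy sparse matrices:

```python
import numpy as np
import scipy.sparse as sp

def drop_edges(A, drop_rows, drop_cols):
    """Compute A_drop = A - A' by removing the selected edges from A.

    A         : sparse adjacency matrix of the original graph G
    drop_rows : row indices of the p * nnz edges to drop
    drop_cols : column indices of those edges
    """
    A_prime = sp.csr_matrix(
        (np.ones(len(drop_rows)), (drop_rows, drop_cols)), shape=A.shape
    )
    A_drop = (A - A_prime).tocsr()
    A_drop.eliminate_zeros()  # A_drop keeps (1 - p) * nnz stored edges
    return A_drop
```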
Theorem 1 of Rong et al. (2019) proved that training a GCN on Gdrop instead of G slows the convergence of over-smoothing and reduces the loss of information. The idea is based on the concept of mixing time in random walk theory (Lovász (1993)), and the proof builds on the work of Oono & Suzuki (2019). Luo et al. (2021) demonstrated again the effectiveness of such a dropping method.
In the following parts of the paper, we focus on the methods for selecting the edges to drop, which are the main contributions of RankedDrop. RankedDrop ranks the nodes in order to assign each a weight during the selection. Different from Sparsification (Eppstein et al. (1997)) or DropEdge (Rong et al. (2019)), the goal of RankedDrop is to control the randomness through several parameters, not to completely determine the choice of dropped edges, so as to create a sub-graph at each iteration in a more informed way. In other words, we bias the probability of each edge being selected (upwards or downwards) according to our graph analysis, creating a dropping strategy that reduces the dependency on full randomness.
2.2 GRAPH INFORMATION EXTRACTION
Most GCN architectures are mainly oriented toward inter-neighbor communication (Huan et al. (2021)). Information propagates through edges across GCN layers: the shorter the path between two nodes, the more they influence each other. Removing the most impactful neighbors limits such over-influence and reserves room for taking into account information from other neighbors, both within each epoch and across the epochs of a training run. With this idea, we propose a node ranking strategy to prepare a better dropping selection in the following steps. Two kinds of graph structure information are extracted and used in the node selection step:
Local structure information The degree of each node reflects the local impact of the node on its neighborhood: a higher degree reflects a stronger influence from a local point of view in the graph. If a node has many neighbors, it impacts them at each layer, and the information it contains is therefore strongly taken into account at the local level. The degrees are extracted from the adjacency matrix A. Consider that A is an n × n matrix with nnz nonzero elements; a vector of size n is used to store the degrees of the nodes of the graph.
Global structure information Different graph node ranking algorithms (Agarwal & Chakrabarti (2007)) could be used here to judge the importance of each node in the global graph. In this paper we use the PageRank algorithm (Page et al. (1999)) to generate the importance score, because (1) PageRank is one of the most studied algorithms of the last decades and, to our knowledge, it can easily be implemented in a distributed way to accelerate its computation; and (2) it has already been used in GNNs to reduce over-smoothing (Bojchevski et al. (2019)). From the adjacency matrix A, a vector of size n is returned containing the score of each node. Algorithm 1 shows the implementation of PageRank used in RankedDrop. The main operation of each PageRank iteration is a matrix-vector multiplication whose output vector feeds the next iteration. By treating A as sparse, the cost of this sequence of sparse matrix-vector multiplications is reduced, and it can be executed efficiently in a distributed way (Hugues & Petiton (2010)), which helps offset the extra computation that PageRank requires. This iterative method stops when convergence has reached the expected precision. The result vector of the last iteration then contains the score of each node of the graph, with all elements between 0 and 1; the higher the score, the more important the node is in the global graph. A coefficient β is also introduced in PageRank. It is an optimization that redistributes a part of each node's score among all the other nodes. In this way, the result vector converges faster, and the score is not distributed only within the strongly connected component. Conventionally, the coefficient β is fixed around 0.85; this value was used for the experiments in Section 3.
The local and global structure information is used to rank the nodes of the graph, i.e., to determine the overall importance of each node. This importance in the global and/or local structure gives each node a score, so that nodes with a higher score are more often included in the matrix Adrop. Thus, the structure of the graphs Gdrop generated at each iteration stays closer to the structure of the graph G than when the dropping is done purely at random. We denote by s the vector of size n that stores the final score of each node. The computation of this score is flexible: there are many possibilities to compute the values of s from the local and/or global structure information, and potentially from other information. In the end, the goal is to have a vector such that ∀i ∈ [1, n], 0 ≤ s_i ≤ 1 and ∑_{i=1}^{n} s_i = 1.
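As an illustration, one possible instantiation of this flexible score computation is sketched below (our code, not the paper's; the PR, Deg and PRxD names mirror the options listed in Appendix A, and reading PRxD as an element-wise product is our assumption):

```python
import numpy as np

def node_scores(pagerank, degree, mode="PRxD"):
    """Combine global (PageRank) and local (degree) information into a
    score vector s with 0 <= s_i <= 1 and sum_i s_i = 1."""
    if mode == "PR":
        s = pagerank.astype(float)
    elif mode == "Deg":
        s = degree.astype(float)
    elif mode == "PRxD":  # use both structure signals at once
        s = pagerank * degree
    else:
        raise ValueError(f"unknown score mode: {mode}")
    return s / s.sum()  # normalize so the scores form a distribution
```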
2.3 NODE SELECTION WITH PROBABILITY CONTROL
After obtaining the score vector s, we sort the nodes by score in decreasing order. The permutations performed during the sorting are stored in memory in order to keep the association between nodes and scores.
After the sorting, we create a probability scale from the sorted score vector by applying a Scan-With-Add (SWA) algorithm (a.k.a. prefix sum, Blelloch (1990)). SWA assigns each node an interval between 0 and 1. The node selection is therefore no longer fully random: the randomness is confined within these intervals. The resulting vector is of size n with monotonically increasing values, such that SWAs[n] = 1. In addition, SWA helps visualize the inequalities of score between the nodes in the graph, like the Lorenz curve used in economics (Lorenz (1905)).
The node selection is performed with the SWA vector. The SWA interval of each node corresponds to the probability of that node being selected, and formatting the score vector as a SWA accelerates the selection. For each node selection, we draw a random number between 0 and 1 and find the node associated with this value in the vector SWAs. A binary search (Knuth (1998)) on the SWA vector finds the node with O(log n) complexity, whereas otherwise the score vector s would have to be scanned element by element at each node selection. We discuss the selection of nodes from the SWA vector in more detail in the next section.
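The following minimal sketch (ours; np.searchsorted stands in for the binary search, rng is a numpy.random.Generator such as np.random.default_rng(), and the malus optimization of Algorithm 2 below is omitted) shows the SWA construction and a single O(log n) node draw:

```python
import numpy as np

def build_swa(s):
    """Sort the scores in decreasing order and return their prefix sums
    (Scan-With-Add) together with the sorting permutation."""
    order = np.argsort(-s)     # node indices by decreasing score
    swa = np.cumsum(s[order])  # swa[-1] == 1 since the scores sum to 1
    return swa, order

def draw_node(swa, order, rng):
    """Map a uniform r in (0, 1) to the node whose SWA interval contains it;
    higher-scored nodes own wider intervals, hence are drawn more often."""
    r = rng.random()
    i = np.searchsorted(swa, r, side="right")
    return order[min(i, len(swa) - 1)]  # clamp against round-off near r = 1
```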
Algorithm 1 Algorithm to get the PageRank score from the adjacency matrix

Input: A the adjacency (sparse) matrix, δ the precision, β the coefficient
Output: v the vector of PageRank scores of size n
Initialisation:
1: err ← ∞
2: new vector tmp of size n
3: assign 1/n to each element of v
4: while err > δ do
5:   reset all elements of tmp to 0
6:   tmp ← SpMV between A and v
7:   for each elem in tmp do
8:     elem ← β · elem + (1 − β) · (1/n)
9:   end for
10:  err ← norm between tmp and v
11:  v ← tmp
12: end while
13: return v
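A runnable NumPy transcription of Algorithm 1 might look as follows (a sketch only; we additionally assume A has been column-normalized by out-degree, a step Algorithm 1 leaves implicit):

```python
import numpy as np

def pagerank_scores(A, delta=1e-8, beta=0.85):
    """Power iteration for PageRank (Algorithm 1).

    A     : n x n (sparse) column-stochastic adjacency matrix
    delta : convergence precision
    beta  : damping coefficient, fixed around 0.85 in the experiments
    """
    n = A.shape[0]
    v = np.full(n, 1.0 / n)  # start from the uniform distribution
    err = np.inf
    while err > delta:
        tmp = A @ v                           # sparse matrix-vector product
        tmp = beta * tmp + (1.0 - beta) / n   # redistribute part of the score
        err = np.linalg.norm(tmp - v)         # distance between iterations
        v = tmp
    return v
```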
Algorithm 2 Node selection from the score vector

Input: SWAs the Scan-With-Add final score vector, sumScore the sum of the remaining nodes' scores, malus the vector of malus applied to each node
Output: ind the index of the node on which to perform the edge drop
Initialisation:
1: r ← random in (0, 1)
2: r ← r × sumScore
3: m ← 0
4: a ← 0
5: b ← size of SWAs
6: while b − a ≠ 1 do
7:   c ← (a + b)/2
8:   m ← m + malus of node c
9:   if SWAs[c] − m < r then
10:    a ← c
11:  else
12:    b ← c
13:    add (c + b)/2 to the potential malus node list
14:    add c to the explored nodes list
15:  end if
16: end while
17: return b
2.4 DROPPING EDGE SELECTION
The last step is to select the exact edges to drop. Unlike the previous steps, which can be performed once at the beginning of training, the edge selection is performed at each epoch to generate a different subgraph. At each epoch, p × nnz edges are chosen from the selected nodes and removed.
Different edge selection algorithms could be applied here. For example, the selection could be based on the tail or the head of the edge, or all edges of a selected node could be removed at once (a.k.a. DropNode). For the experiments presented in Section 3, we randomly select edges from the selected node, which adds randomness to the selection process. To select a node, we took inspiration from the bisection method (Burden & Faires (1985)): by drawing a number r uniformly at random between 0 and 1, we obtain via dichotomy the index of the node i that satisfies SWAs[i] < r < SWAs[i + 1]. Since the selection of edges and nodes is based on the SWA of the scores SWAs, the importance of a node in the graph influences its probability of being selected at each epoch. Thus, the randomness is controlled, but the selection probabilities differ across nodes so that the randomness takes the global structure of the graph into account. Using the PageRank results and/or the degree vector, it is possible to create subgraphs that keep the key nodes of the graph, so that the graph generated at each epoch remains consistent with the structure of the initial graph.
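Putting the pieces together, a simplified per-epoch dropping loop could read as below (our sketch, not the paper's implementation: it may pick the same edge twice and ignores the malus optimization described next; the returned index pairs can be fed to a routine such as drop_edges above):

```python
import numpy as np

def select_drop_edges(A_csr, swa, order, p, rng):
    """Pick about p * nnz edges to drop, one per SWA-weighted node draw."""
    n_drop = int(p * A_csr.nnz)
    indptr, indices = A_csr.indptr, A_csr.indices
    rows, cols = [], []
    while len(rows) < n_drop:
        i = min(np.searchsorted(swa, rng.random(), side="right"),
                len(swa) - 1)
        node = order[i]                   # SWA-weighted node draw
        nbrs = indices[indptr[node]:indptr[node + 1]]
        if nbrs.size == 0:
            continue  # Algorithm 2's malus system avoids such re-draws
        rows.append(node)
        cols.append(rng.choice(nbrs))     # uniform-random edge of the node
    return np.asarray(rows), np.asarray(cols)
```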
It is useful to keep the selection efficient because it is performed a large number of times. This is why we optimized the implementation of this selection (see Algorithm 2) by adding a malus system when exploring the SWA vector, to avoid selecting a node whose edges have all already been selected: once all edges associated with a node have been dropped, there is no interest in selecting this node again. Our optimization is based on the fact that the bisection method can be represented as a tree. At each step of the dichotomy, either the lower or the upper bound is moved. To be sure not to select a node i, it is possible to apply a malus (equal to the score of node i) to the explored branches whenever the upper bound is moved on the path to SWAs[i] (because all these paths lead to nodes further along the SWAs vector). Thus, it suffices to modify at most log n malus values instead of modifying a large number of values in SWAs.
3 EXPERIMENTS AND DISCUSSIONS
RankedDrop is a general method that can be combined with different node selection and edge selection algorithms and applied to different GNN architectures. In the experiments of this paper, we used the classic PageRank to compute scores for the node selection, basic random selection for the edge selection, and mostly the standard GCN as the training architecture, since Rong et al. (2019) and Luo et al. (2021) have already demonstrated the genericity of such dropping methods for other GNN architectures. We believe this standard configuration provides a clear and general idea of the potential offered by RankedDrop.
3.1 DATASETS AND ENVIRONMENT
Three standard citation datasets were used in our experiments: Cora, Citeseer and Pubmed. These datasets are collections of scientific articles classified according to each paper's main research topic (Sen et al. (2008)). More information on these datasets can be found in table x. We notice that these graphs are very sparse: their number of edges per node is very low, on average about two edges per node. However, looking more closely, the highest-degree nodes of these datasets have more than 100 edges (99 for Citeseer and up to 171 for Pubmed). This means that a very large number of nodes have a very low degree (≤ 2). Therefore, only a few important nodes propagate their information very widely; the information of the other low-degree nodes is quickly drowned out. For example, in Cora, the node with the highest degree is directly connected to more than 6% of the nodes in the graph.
The extraction of graph data and the score computation, up to edge selection and dropping, were done before the GCN training on an Intel Xeon Processor E5-2690 with 8 cores. The result was used to build Gdrop. The trainings with Gdrop were done on Nvidia Tesla V100 PCIe 16GB GPUs. The original GCN, the state-of-the-art DropEdge and our RankedDrop are compared in this section to validate our solution.
3.2 HOW SCAN-WITH-ADD HELPS NODE SELECTION
The SWA values of PageRank and Degree are plotted in figure 1. PageRank puts forward a small number of nodes: these curves increase very quickly. The distribution of the scores is very unequal: for all datasets, the 10% best-ranked nodes share more than 80% of the total score, because PageRank highlights the most important nodes in an exponential way. Therefore, when nodes are selected according to these scores, there is an 80% chance of choosing one of the 10% highest-ranked nodes. By using SWA-PageRank, the best-rated nodes of the graph G will very often be integrated into the graph Gdrop, because they are the nodes with the most impact on the global structure of the graph. On the other hand, the SWA-Degree scores rise more slowly, so the inequality between the nodes is less pronounced: the highest-degree nodes are privileged in terms of score, but not as strongly highlighted as with SWA-PageRank. We can therefore choose the most suitable SWA and decide to select either the most or the least important nodes to prepare the edge selection.
3.3 IMPACT OF RANKEDDROP ON OVER-FITTING
Figures 2 and 3 show the training and validation loss curves in fully-supervised and semi-supervised learning, respectively. All curves for the same dataset were obtained with the same hyperparameters; only the percentage of dropped edges differs between DropEdge and RankedDrop. We can observe that the two dropping methods behave similarly in general: both have better loss convergence than the original GCN, and in some cases the validation loss of RankedDrop converges even better than that of DropEdge. These experiments show that RankedDrop is the most effective of the compared methods at reducing the over-fitting phenomenon and stabilizing the loss. RankedDrop also behaves like DropEdge regarding over-smoothing reduction, so we do not discuss it here.
3.4 IMPACT OF DROPPING CONTROL
The accuracies after training with different proportions of non-dropped edges are shown in figure 4. The accuracies were obtained with the best hyper-parameters for each case, with GCN in the semi-supervised setting. We observe that for all three datasets the best accuracies all come from RankedDrop. Moreover, the best accuracy obtained by RankedDrop preserves more edges than the best one obtained by DropEdge. For example, on the Citeseer dataset, the best DropEdge accuracy uses subgraphs with 20% of the edges of the original graph, whereas the best RankedDrop accuracy maintains 60% of the edges. We believe that the fewer edges are dropped, the more information of the original graph is kept, and the better the chance of achieving a higher accuracy.
3.5 OVERALL PERFORMANCE RESULTS

3.5.1 SEMI-SUPERVISED
We first compare the accuracy of the original GCN, GCN with DropEdge and GCN with RankedDrop in semi-supervised learning, with 2, 4 and 8 layers (Table 2). The hyper-parameters for 2 layers are taken from the DropEdge paper, and those for 4 and 8 layers are the best ones we found. The parameters used for the node selection with RankedDrop are available in appendix A. The accuracies obtained with RankedDrop are all higher than with DropEdge. Moreover, the deeper the GCN, the larger the accuracy improvement RankedDrop offers over DropEdge: the accuracy obtained with RankedDrop for the 8-layer GCN is 20% better than with DropEdge. Even for the 2-layer GCN, the accuracies of RankedDrop are equivalent or superior to the well-tuned ones of DropEdge. This is particularly true on the Citeseer dataset, where the 2-layer GCN with RankedDrop obtained 1% higher accuracy than with DropEdge.
3.5.2 FULL-SUPERVISED
The accuracies of fully-supervised learning are presented in table 3. For each dataset, we evaluated three different backbones: GCN, IncepGCN and JKNet. The number of layers for each backbone was chosen from the best accuracy reported by DropEdge. We used the same hyper-parameters given by Rong et al. (2019); only the edge dropping percentage is modified for RankedDrop. The accuracies are globally equivalent between RankedDrop and DropEdge, and RankedDrop achieved better accuracies than DropEdge on Cora, the smallest dataset. This again shows that RankedDrop better reduces the over-fitting phenomenon. Moreover, the hyper-parameters used here are not specifically adapted to RankedDrop, yet RankedDrop still achieves good accuracies. We believe there is still room to increase the accuracies of RankedDrop by optimizing the hyper-parameters.
4 CONCLUSION & PERSPECTIVE
The RankedDrop method proposed in this paper provides more control over the selection of dropped edges and allows the dropping step to be customized for various neural network architectures. Thanks to a personalized score system and the addition of several parameters, the control of which edges to drop can be tailored. RankedDrop keeps the advantages of DropEdge concerning the reduction of over-smoothing and over-fitting, as well as its applicability to different architectures, while taking information on the graph structure into account. RankedDrop adds more control over the randomness and a new degree of freedom to the dropping selection. We have shown that the results given by RankedDrop are very encouraging and more stable. The degree of freedom brought by taking the structure of the graph into account opens the way to the construction of deeper GNNs.
It is also possible to imagine using this method to better control the training of neural networks on denser graphs. The computations performed to extract the data from the graph representation matrix can be executed in a distributed way. The choice of edges to drop at each epoch is more complex to perform in a distributed setting and will be the subject of future work.
A APPENDIX: HYPERPARAMETERS IN EXPERIMENTS
Table 4 gathers the parameters used to generate the accuracies presented in this paper. It includes both the hyperparameters of the models used to run the backbones and the few parameters we used to control the selection of the edges to drop. We implemented three ways of taking the information from the graph structure into account; this is the parameter named score. Either we used only the degree information or only the PageRank information, indicated respectively by Deg and PR, or we used both at the same time to build the score vector, which is noted PRxD. In addition to this parameter, we influenced the choice of edges to remove from the graph with the following parameters (a sketch follows the list):
• dd: a boolean that also removes the edge in the opposite direction of the selected edge when the dataset is symmetric. Edges are removed in pairs, which keeps the graph undirected.

• reverse: a boolean that transposes the adjacency matrix. By doing this, each edge is no longer associated with its tail node but with its head node; if the scores of the two nodes of an edge differ, this changes the probability of selecting that particular edge.

• lowest: a boolean that reverses the ranking of the nodes of the graph by using the reciprocal of the score associated with each node.
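A sketch of how the reverse and lowest flags could be applied before the score and drop steps, under our reading of the descriptions above (this is an assumption, not the paper's code; scores are assumed strictly positive):

```python
def apply_selection_flags(A, s, reverse=False, lowest=False):
    """Apply the `reverse` and `lowest` flags; `dd` acts at drop time,
    where the mirrored edge (v, u) is removed together with (u, v)."""
    if reverse:
        A = A.T.tocsr()  # associate each edge with its head node instead
    if lowest:
        s = 1.0 / s      # reciprocal scores invert the node ranking
        s = s / s.sum()  # renormalize to a probability vector
    return A, s
```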
Ref | Backbone | Dataset | nlayers | Hyper-parameters
Table 2 | GCN | Cora | 2 | lr: 0.001, weight-decay: 1e-4, sampling-percent: 0.7, score: PRxD, dd: false, reverse: true, lowest: true, niter: 400
Table 2 | GCN | Citeseer | 2 | lr: 0.007, weight-decay: 1e-4, sampling-percent: 0.6, score: PR, dd: false, reverse: false, lowest: true, niter: 400
Table 2 | GCN | Pubmed | 2 | lr: 0.009, weight-decay: 1e-2, sampling-percent: 0.8, score: PR, dd: true, reverse: false, lowest: true, niter: 400
Table 2 | GCN | Cora | 4 | lr: 0.004, weight-decay: 1e-4, sampling-percent: 0.3, score: PRxD, dd: false, reverse: true, lowest: true, niter: 400
Table 2 | GCN | Citeseer | 4 | lr: 0.008, weight-decay: 1e-3, sampling-percent: 0.1, score: Deg, dd: false, reverse: true, lowest: false, niter: 400
Table 2 | GCN | Pubmed | 4 | lr: 0.008, weight-decay: 1e-2, sampling-percent: 0.9, score: Deg, dd: false, reverse: true, lowest: false, niter: 400
Table 2 | GCN | Cora | 8 | lr: 0.003, weight-decay: 1e-5, sampling-percent: 0.7, score: PR, dd: true, reverse: true, lowest: false, niter: 1000
Table 2 | GCN | Citeseer | 8 | lr: 0.001, weight-decay: 1e-5, sampling-percent: 0.5, score: PRxD, dd: false, reverse: true, lowest: false, niter: 1000
Table 2 | GCN | Pubmed | 8 | lr: 0.006, weight-decay: 1e-4, sampling-percent: 0.5, score: PRxD, dd: true, reverse: true, lowest: true, niter: 1000
Table 3 | GCN | Cora | 4 | lr: 0.01, weight-decay: 0.005, sampling-percent: 0.6, score: Deg, dd: true, reverse: true, lowest: true, niter: 400
Table 3 | GCN | Citeseer | 4 | lr: 0.009, weight-decay: 1e-3, sampling-percent: 0.1, score: Deg, dd: true, reverse: true, lowest: false, niter: 400
Table 3 | GCN | Pubmed | 4 | lr: 0.01, weight-decay: 1e-3, sampling-percent: 0.2, score: PRxD, dd: true, reverse: true, lowest: false, niter: 400
Table 3 | IncepGCN | Cora | 8 | lr: 0.01, weight-decay: 1e-3, sampling-percent: 0.1, score: PR, dd: true, reverse: false, lowest: true, niter: 400
Table 3 | IncepGCN | Citeseer | 8 | lr: 0.002, weight-decay: 0.005, sampling-percent: 0.1, score: Deg, dd: true, reverse: true, lowest: false, niter: 400
Table 3 | IncepGCN | Pubmed | 4 | lr: 0.002, weight-decay: 1e-5, sampling-percent: 0.3, score: PRxD, dd: false, reverse: true, lowest: true, niter: 400
Table 3 | JKNet | Cora | 16 | lr: 0.008, weight-decay: 5e-4, sampling-percent: 0.1, score: PR, dd: true, reverse: true, lowest: true, niter: 400
Table 3 | JKNet | Citeseer | 8 | lr: 0.004, weight-decay: 5e-5, sampling-percent: 0.8, score: PR, dd: true, reverse: true, lowest: true, niter: 400
Table 3 | JKNet | Pubmed | 64 | lr: 0.005, weight-decay: 1e-4, sampling-percent: 0.9, score: Deg, dd: false, reverse: false, lowest: false, niter: 400

Table 4: Hyper-parameters used to obtain the accuracies presented in this paper with the RankedDrop method. | 1. What is the main contribution of the paper on increasing the depth of GNNs?
2. What are the strengths and weaknesses of the proposed RankedDrop method compared to other de-oversmoothing methods?
3. How does the paper illustrate the collaboration between local information and global information in PageRank?
4. What are some ambiguities and typos in the introduction of the paper that need improvement?
5. How do the experiments compare RankedDrop with other baselines, and what are some limitations regarding the number of layers considered? | Summary Of The Paper
Review | Summary Of The Paper
In this paper, the authors focus on increasing the depth of GNNs while maintaining performance. Starting from the DropEdge model, the authors propose an incremental improvement, called RankedDrop, which adds an ordering to reduce the randomness of the dropped-edge selection. Some experimental results show the superiority of RankedDrop over DropEdge. RankedDrop is straightforward and depends on existing technologies like PageRank, so its novelty and theoretical contribution are limited. Moreover, the presentation of the proposed RankedDrop is ambiguous and the corresponding experiments are weak to some extent. For detailed improvements, please refer to the following sections.
Review
Strengths:
The problem is important and interesting. In real-world scenarios, when increasing the depth of GNNs is indispensable for reducing representation uncertainty, maintaining performance at the same time is very necessary. Based on the existing DropEdge model, this paper proposes RankedDrop to reduce the randomness of the edge selection for better performance. RankedDrop is straightforward and seems easy to implement. However, there are some non-negligible weaknesses, listed below.
Weaknesses:
(1) Limited Novelty and Technical Contribution. The authors claimed that they use the classic PageRank to compute scores for the node selection and basic random selection for the edge selection. The novelty and technical contribution of this paper are limited. To be more specific, RankedDrop only gives the node selection an ordering (computed by PageRank); the follow-up edge selection is still random. I doubt there is an essential difference between DropEdge [2] and RankedDrop. Last year, several de-oversmoothing methods [1,3] were proposed that share the same intuition as the authors, i.e., reducing the randomness during the edge-dropping process.
(2) Ambiguous Illustration. The introduction of this paper needs to be improved to a large extent. Some aspects are listed below. (i) The SWA part comes very suddenly. Currently, there is only one sentence describing the intuition for using SWA, which is insufficient. Algorithm 1 is well-known, and the current context is enough; the authors may therefore want to emphasize more why SWA is indispensable, and what would happen if only the original ranking of the PageRank vector were used. (ii) How do local information and global information collaborate? The authors claimed that the selection takes the global information of PageRank and the local information of node degrees, but it seems only PageRank is used in the paper. Moreover, in Figure 1, how is the PageRank distribution of the whole graph obtained? Is it by aggregating the PageRank vector of every seed node? (iii) Some notation comes from nowhere; the authors may want to introduce symbols such as S_i, SWA_s[n], and malus before using them. (iv) Some details are missing, e.g., will the edges dropped in the last epoch be added back for the next epoch? (v) There are some typos.
(3) Weak Experiments. (i) Since DropEdge, many attempts at deeper GNNs were proposed last year, and some of them share the same insight as RankedDrop, i.e., dropping edges during the training process [1, 3], while others use additional regularizers to realize deeper GNNs [2, 5]. Setting only one baseline is not adequate to show the superiority of RankedDrop. To be helpful, some baselines [1-5] are listed below, all of which have code available online. Baselines with similar intuitions [1, 3] should be compared first; it would be good if other baselines could also be compared to see the effectiveness of different intuitions. (ii) Figure 2 and Figure 3 are not that convincing; vanilla GCN seems to have competitive performance w.r.t. training and validation loss. For example, in Figure 2 on Cora, GCN achieves lower training and validation loss, and on Pubmed in Figure 2, GCN seems to do the same as the DropEdge methods. The authors may want to increase the number of layers to see the variance of the loss in a more fine-grained way. (iii) The sampling of the number of layers is not sufficient; the paper only considers 2, 4 and 8 layers. The authors may want to check the adequate sampling in the references listed below: for example, 0-30 layers in [2], 2-64 layers in [3], 0-200 layers in [4], and 0-30 layers in [5]. The authors may want to increase the range and the granularity.
[1] Deli Chen, Yankai Lin, Wei Li, Peng Li, Jie Zhou, Xu Sun: Measuring and Relieving the Over-Smoothing Problem for Graph Neural Networks from the Topological View. AAAI 2020
[2] Lingxiao Zhao, Leman Akoglu: PairNorm: Tackling Oversmoothing in GNNs. ICLR 2020
[3] Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, Yaliang Li: Simple and Deep Graph Convolutional Networks. ICML 2020
[4] Meng Liu, Hongyang Gao, Shuiwang Ji: Towards Deeper Graph Neural Networks. KDD 2020
[5] Kaixiong Zhou, Xiao Huang, Yuening Li, Daochen Zha, Rui Chen, Xia Hu: Towards Deeper Graph Neural Networks with Differentiable Group Normalization. NeurIPS 2020
[6] Yimeng Min, Frederik Wenkel, Guy Wolf: Scattering GCN: Overcoming Oversmoothness in Graph Convolutional Networks. NeurIPS 2020
[7] Kenta Oono, Taiji Suzuki: Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks. NeurIPS 2020 |
ICLR | Title
RankedDrop: Enhancing Deep Graph Convolutional Networks Training
Abstract
Graph Neural Networks (GNNs) are playing a more and more important role for analyzing unstructured data from the complex real world. Introducing random edge dropping from the input graph at training epochs could reduce over-fitting and over-smoothing phenomenon and increase the depth of GNNs. However, such method relies strongly on the chosen randomness. It makes the accuracy depend on the initialization of the randomness, which lets the selection of hyperparameters be even more difficult. We propose in this paper RankedDrop a novel method with a spatial-aware dropping-edge selection. The selection takes into account the graph global information using PageRank, and graph local neighborhood information with node degree. RankedDrop provides a more stable training results comparing to the state-of-the-art solution, by maintaining the advantages of random edge dropping. Furthermore, RankedDrop is a general method that can be deployed on a deep learning framework for enhancing performance of GNNs.
1 INTRODUCTION & CONTEXT
Convolutional Neural Networks (CNNs) demonstrated a great success in our today’s daily life for image classification and many other applications. However in the real world there are still many non-Euclidean (graph) data like social networks or reference systems that cannot be handled by CNNs. After Defferrard et al. (2016) introducing Graph Neural Networks (GNNs), Defferrard et al. (2016) generalized CNNs to graph to exploit their potential for classification problems on nonEuclidean data structure. The computation of Graph Convolutional Neural Networks (GCNs) can be summarized as iterative neighborhood aggregations with a message passing schema (Huan et al. (2021)).
He et al. (2016) showed that deeper CNN has higher potential to achieve better precision. However, modern GCNs (Kipf & Welling (2017); Pei et al. (2021); Hamilton et al. (2017)) can work with very limited number of layers, because training deep neural networks is a very complex task (Claesen & De Moor (2015)), the complexity of the computed function grows exponentially with depth (Raghu et al. (2017)), and the deeper the networks are, the more they are subject to over-smoothing (Li et al. (2018); Chen et al. (2020)). Meanwhile, deeper GCN and/or small graph datasets could lead to over-fitting, where a model could fit well the training data but poorly the testing data.
Dropout (Hinton et al. (2012); Srivastava et al. (2014)) is a promising regularization techniques to reduce over-fitting. In the field of GCN, DropEdge introduced by Rong et al. (2019), which randomly removes a certain proportion of edges from the input graph at each epoch, showed promising results to reduce the convergence speed of over-fitting and over-smoothing. Moreover, the random dropping happened on the message passing schema of most of GCNs. Therefore such method could be applied for many GCN backbone models like GCN (Kipf & Welling (2017)), ResGCN (Pei et al. (2021)), GraphSage (Hamilton et al. (2017) ), IncepGCN (szegedy2016rethinking) and JKNet (Xu et al. (2018)).
However, the accuracy obtained by DropEdge depends on how the randomness of dropping is initialized. Moreover, the only parameter that can be adjusted in DropEdge is the percentage of edges that will be dropped. Missing of control on the way of how dropping edges be selected, may limit possibilities to optimize GCN training according to application domain and chosen backbone architecture. Furthermore, a graph structure includes a lot of useful information (Newman (2003)).
Random dropping may destroy graph structure information and again limits the potential of optimizing GCN training.
This paper proposes RankedDrop a novel method with a spatial-aware dropping-edge selection. The selection takes into account the graph global information using PageRank, and graph local neighborhood information with node degree. Graph structure information is extracted to reduce the impact of randomness in the selection and also to improve the final accuracy after training. RankedDrop provides a more stable training results comparing to the state-of-the-art solution, by maintaining the advantages of random edge dropping including over-fitting and over-smoothing reduction and being a general method for different GCN backbones. Shown by our experiments, the accuracies of deep GCNs on semi-supervised learning are significantly improved by using RankedDrop.
2 RANKEDDROP METHOD WITH DATA SELECTION
RankedDrop is a general method, and it is applied on the input graph of a GCN training before each epoch. It first extracts a score based on graph analysis for each node (Sec 2.2); after that the nodes are reordered to control the selection probability then selected according to the computed score (Sec 2.3); at the end the edges of selected nodes are selected and we drop the selected edges to create the new input graph (Sec 2.4).
2.1 NOTATIONS AND PRELIMINARIES
We use an adjacency matrix A to represent the original input graph G, and nnz the number of non-zero value of A, a.k.a. the number of edges of the graph G. We denote p the proportion of edges from G that will be dropped. Therefore, after dropping, the new input graph Gdrop has (1− p)× nnz edges. We denote the resulting adjacency matrix Adrop for Gdrop, and we use A′ to denote the matrix of p× nnz dropped edges. The relation between the above three matrices is:
Adrop = A−A′ (1)
The theorem 1 introduced in the paper (Rong et al. (2019)) proved that training GCN on Gdrop instead of G allows to reduce the speed of convergence of the over-smoothing and to reduce the loss of information. The idea is based on the concept of mixing time in the random walk theory (Lovász (1993)), and the proof is based on the work of Oono & Suzuki (2019). Luo et al. (2021) demonstrated again the effectiveness of such dropping method.
In the following parts of the paper, we focus on the methods of selection of edges to drop, which are the main contributions of RankedDrop. RankedDrop ranks the nodes in order to assign a weight during the selection. Different from Sparsification (Eppstein et al. (1997)) or DropEdge (Rong et al. (2019)), the goal of RankedDrop is to control the randomness with several parameters, but not to completely control the choice of the drop edges, to create at each iteration a sub-graph in a more intelligent way. It means that we bias the probability (greater or lesser) of being selected of each edge according to our graph analysis, to create dropping strategy by reducing the dependency on full randomness.
2.2 GRAPH INFORMATION EXTRACTION
Most GCN architectures are mainly oriented on inter-neighbor communication (Huan et al. (2021)). The information propagates through edges w.r.t. GCN layers. The shorter the path between two nodes, the more they will influence each other. Removing the most impactful neighbors limits such over-influence and reserves space for taking into account the information from other neighbors for each epoch and among the epochs in a training. With the above idea, we propose here a node ranking strategy in order to prepare a better dropping selection for the next steps. Two kinds of graph structure information are extracted and used in the node selection step:
Local structure information The degrees of each node, which reflects the local impact of the node on its neighborhood. Higher degree reflects stronger influence from a local point of view in the
graph. If a node has a lot of neighbors, it will have an impact at each layer on them and therefore the information it contains will be strongly taken into account at the local level. The degrees are extracted from the adjacency matrix A. Consider that A is an n × n matrix and that its number of nonzero elements is nnz. A vector of size n is used to store the degrees of the nodes of the graph.
Global structure information Different graph node ranking algorithms (Agarwal & Chakrabarti (2007)) could be used here to judge importance of each node on the global graph. We use in the paper the PageRank algorithm (Page et al. (1999)) to generate the score of importance, because (1) PageRank is the most studied algorithms of the last decades, by our knowledge it can be easily implemented in a distributed way to accelerate its computation; (2) It was already used in GNNs to reduce the over-smoothing (Bojchevski et al. (2019)). From the adjacency matrix A, a vector of size n will be returned and will contain the score of each node. The algorithm 1 represents the implementation of PageRank used in RankedDrop. It shows that the main operation of each iteration of the PageRank is a matrix-vector multiplication where the output vector is used to perform the next iteration multiplication. By considering A as sparse, the cost of this sequence of sparse matrixvector multiplications is reduced and can be executed efficiently in a distributed way (Hugues & Petiton (2010)), which allows to optimize the extra computations that PageRank requires. This iterative method stops when the convergence has reached the expected precision. The result vector of the last iteration contains then the scores of each node of the graph and all elements are between 0 and 1. The higher the score, the more important the node is in the global graph. A β coefficient is also introduced during the PageRank. It is an optimization allowing to redistribute a part of the scores of each node among all the other nodes. In this way, the convergence of the result vector is faster and avoids that all the score is distributed only within the strongly connected component. Conventionally, the β coefficient is fixed around 0.85, this value was used for the experiments in the section 3.
The local and global structure information is used to rank the nodes of the graph to determinate the overall importance of each node in the graph. The importance of the nodes in the global and/or local structure of the graph gives a score to each node so that the nodes with a higher score are more often included in the matrix Adrop. Thus, the structure of the graphs Gdrop that will be generated at each iteration will be closer to the structure of the graph G than when the dropping is done randomly. We note s the vector of size n which stores the final score of the associated nodes. The computation of this score is flexible. There are many possibilities to compute the values of s by taking the information of the local and/or global structure, and potentially other information. At the end, the goal is to have a vector such that ∀i ∈ [1, n], 0 ≤ Si ≤ 1 and ∑n i=1 Si = 1.
2.3 NODE SELECTION WITH PROBABILITY CONTROL
After getting the score vector s, we sort the nodes according to their scores in a decreasing order. The permutations performed during the sorting are stored in memory in order to keep the association information between the nodes and the scores.
After the sorting, we create a probability scale from the sorted score vector by applying a Scan-WithAdd (SWA) algorithm (a.k.a prefix sum, Blelloch (1990)). SWA will generate an interval between 0 and 1 for each node. Therefore, the node selection is no more in a fully random way but the randomness is limited in the interval. The resulting vector is of size n where the values are more and more ordered. The resulting vector is such that SWAs[n] = 1. In addition, SWA could help visualize the inequalities of score between the nodes in the graph, like the Lorenz curve used in economics (Lorenz (1905)).
The node selection is performed with the SWA vector. The SWA value of each node corresponds to the probability of the node being selected. Formatting the score vector as a SWA accelerates the selection of nodes. For each node selection, we take a random number between 0 and 1 and find the node associated with this value in the vector SWAs. A binary search (Knuth (1998)) on the SWA vector can find the node with a O(log n) complexity, where it is necessary to browse element by element the vector of s scores at each node selection. We will discuss the selection of nodes from the SWA vector in more detail in the next section.
Algorithm 1 Algorithm to get the PageRank score from adjacency matrix
Input: A the adjacency (sparse) matrix, δ precision, β coefficient Output: v vector of PageRank score of size n Initialisation :
1: sum← 0 2: err ← INF 3: new vector tmp of size n 4: assign 1n to each element in v
START LOOP 5: while err > δ do 6: reset all element of tmp to 0 7: tmp← SpMV between A and v 8: for each elem in tmp do 9: elem← β ∗ elem+ (1− β) ∗ 1n
10: end for 11: err ← norm between tpm and v 12: v ← tmp 13: end while 14: return v
Algorithm 2 Node selection from the score vector Input: SWAs the Scan-With-Add final score
vector, sumScore the sum of the remaining nodes’ scores, malus the vector of malus applied to each node. Output: ind the index of the node to perform the drop edge Initialisation :
1: r ← randomin]0, 1[ 2: r ← r ∗ sumScore 3: m← 0 4: a← 0 5: b← size of SWAs 6: while b-a != 1 do 7: c← (a+ b)/2 8: m← m+ malus on c node 9: if SWA S of c−m− sB < r then
10: a← c 11: else 12: b← c 13: add (c + b)/2 on the potential malus node list 14: add c in the explored nodes list 15: end if 16: end while 17: return b
2.4 DROPPING EDGE SELECTION
The last step is to select the exact edges to drop. Different from the previous steps that can be performed only once in the beginning of training, the edge selection is performed for each epoch to generate a different subgraph. At each epoch, p × nnz edges are chosen from the selected nodes and are removed.
Different edge selection algorithms could be applied here. For example, the selection could be based on the tail, on the head, or directly removing all edges of a selected node (a.k.a DropNode). For the experiments presented in the section 3, we randomly select edges from the selected node. This adds randomness to the selection process. To select a node, we took our inspiration from the bisection method (Burden & Faires (1985)). By randomly pulling in a uniform way a number r between 0 and 1, we obtain the index of the node i that checks SWAs[i] < r < SWAs[i + 1] by performing a dichotomy. The selection of edges and nodes is based on the SWA des scores SWAs, , the importance of the node in the graph will influence its probability to be selected at each epoch. Thus, the randomness is controlled but the selection probabilities are different for each node so that the randomness takes into account the global structure of the graph. It is possible from the PageRank results and/or degrees vector to create subgraphs that keep the key nodes of the graph so that at each epoch the graph generated is consistent with the structure of the initial graph.
It is useful to keep an efficient selection method because it is performed a large number of times. This is why we have tried to optimize the implementation of this selection (see the algorithm 2) by adding a malus system when exploring the SWA vector to avoid selecting a node from which all edges have already been selected. When all the edges associated to a node have been dropped, there is no more interest to select this node again. Our optimization is based on the fact that the bisection method can be represented as a tree. At each step of the dichotomy, there is the possibility to move either the lower or the upper bound. To be sure not to select a node i, it is possible to apply a malus (equal to the score of the node i) to the explored branches when the upper bound is moved in the path to access the SWAs[i] (because all these paths lead to nodes further in the SWAs vector). Thus,
it is enough to modify at most log n malus value instead of modifying a large number of values in SWAs.
3 EXPERIMENTS AND DISCUSSIONS
RankedDrop is a general method that could equip different algorithms for the node selection and edge selection, and can be applied for different GNN architectures. In the experiments of this paper, we used the classic PageRank to compute scores for the node selection; we used basic random selection for the edge selection; and we used mostly the standard GCN as the training architecture, since Rong et al. (2019) and Luo et al. (2021) have already demonstrated the genericity of these Dropping methods for other GNN architecture. We believe such standard configuration could provide a clear and general idea of the potential offered by RankedDrop.
3.1 DATASETS AND ENVIRONMENT
Three standard citation datasets were used in our experiments: Cora, Citeseer and Pubmed. These datasets represent collections of scientific articles that are classified according to the paper’s main research topic (Sen et al. (2008)). More information of these datasets can be found in the table x. We notice that these graphs are very sparse, since their number of edges per node is very low: in average about two edges per node. However, if we see closely, the highest degree nodes of those datasets have more than 100 edges (99 for Citeseer and up to 171 for Pubmed). This means that a very large number of nodes have a very low degree (≤ 2). Therefore, only few important nodes propagate their information very widely; the information of other low degree notes are quickly drowned. For example in Cora, the node with the highest degree is directly connected to more than 6% of the nodes in the graph.
The extraction of graph data, the score computation until edge selection and dropping were done before the GCN training on Intel Xeon Processor E5-2690 with 8 cores. The result was used to build Gdrop. The trainings with Gdrop were done on Nvidia Tesla V100 PCIe 16GB GPUs. The original GCN, the state of the art DropEdge and our RankedDrop were compared in this section to validate our solution.
3.2 HOW SCAN-WITH-ADD HELPS NODE SELECTION
The values of the SWA of PageRank and Degree are represented graphically in figure 1. We can see that PageRank allows to put forward a small number of nodes; these curves increase very quickly. The distribution of the scores is very unequal: for all the datasets, 10% of the best ranked nodes share more than 80% of the total score, because PageRank highlights the most important nodes in an exponential way. Therefore if the nodes are randomly selected, there is an 80% chance to choose one of the 10% high ranked nodes. By using SWA-PageRank, the best rated nodes of the G graph will be very often integrated to the G′ graph, because they are the nodes that have the most impact in the global structure of the graph. On the other hand, the scores of SWA-Degree rise more slowly. The inequality between the nodes is thus less important. The highest degree nodes are privileged in terms of scores but they are not too much highlighted compared to SWA-PageRank. Therefore, we can choose the most adapted SWA and decide to select either the most or the less important nodes to prepare the edge selection.
3.3 IMPACT OF RANKEDDROP ON OVER-FITTING
The figures 2 and 3 show the training and validation loss curves in full-supervised and semisupervised learning, respectively. All curves for the same dataset were obtained with the same hyperparameters, only the percentage of dropping edges is different between DropEdge and RankedDrop. We can observe that, the two dropping methods in general have the similar behavior, both have better loss convergence than the original GCN, and in some cases, the validation loss of RankedDrop converges again better than the one of DropEdge. These experiments show that RankedDrop is the best method to reduce the over-fitting phenomenon and to stabilize the loss. RankedDrop has also the same behavior on over-smoothing reduction as DropEdge, so we will not discuss here.
3.4 IMPACT OF DROPPING CONTROL
The accuracies after training with different proportion of non-drop edges are shown in figure 4. The accuracies were obtained with the best hyper-parameters for each cases with GCN in the semi-
supervised learning. We can observe that for all three datasets the best accuracies are all from RankedDrop. Moreover, the best accuracy obtained by RankedDrop preserve more edges than the best one by DropEdge. For example, for the Citeseer dataset, the best accuracy subgraphs by DropEdge using 20% edges of the original graph, whereas the best accuracy subgraphs by RankedDrop maintain 60% of the edges. We believe the fewer edges was dropped, the more information of the original graph is kept, and we have more chance to achieve a better accuracy.
3.5.1 SEMI-SUPERVISED
3.5 OVERALL PERFORMANCE RESULTS
We first compare the accuracy between original GCN, GCN with DropEdge and GCN with RankedDrop in the semi-supervised learning, with 2, 4 and 8 layers (Table 2). The hyper-parameters for 2 layers are from the paper of DropEdge, and the one for 4 and 8 layers are the best one that we found. The parameters used for the selection of the nodes with RankedDrop are available in the appendix A. The accuracies obtained with RankedDrop are all higher than with DropEdge. Moreover, the deeper the GCN is, the better accuracy improvement RankedDrop offers comparing to DropEdge. The accuracy obtained with RankedDrop for the 8-layer GCN is 20% better than the one with DropEdge. Even for the 2-layer GCN, the accuracies of RankedDrop are equivalent or superior to those well-tuned by DropEdge. This is particularly true with the Citeseer dataset where the 2-layer GCN with RankedDrop obtained 1% higher accuracy than with DropEdge.
3.5.2 FULL-SUPERVISED
The accuracies of full-supervised learning are presented in the table 3. For each of the datasets, we evaluated with three different backbones: GCN, IncepGCN and JKNet. The number of layers for each backbone was chosen from the best accuracy declared by DropEdge. We used the same hyper-parameters given by Rong et al. (2019), only the edge dropping percentage is modified for RankedDrop. The accuracies are globally equivalent between RankedDrop and DropEdge; and RankedDrop achieved better accuracies than DropEdge for Cora the smallest dataset. It again show that RankedDrop reduce better the over-fitting phenomenon. Moreover, the hyper-parameters used here are not specifically adapted to RankedDrop, but RankedDrop can still achieve good accuracies. We believe there are still space to increase accuracies with RankedDrop by optimizing the hyperparameters.
4 CONCLUSION & PERSPECTIVE
The RankedDrop method that we proposed in this paper provided more control on the selection of dropping edges and allows to customize the dropping step for various neural network architectures. Thanks to a personalized score system and the addition of several parameters, the control of the edges to drop is personalized. RankedDrop keeps the advantages of DropEdge concerning the reduction of over-smoothing and over-fitting as well as the possibility to use it on different architectures, while allowing to take into account information on the graph structure. RankedDrop add more control on randomness and a new degree of freedom for dropping selection. We have shown that the results given by RankedDrop are very encouraging and are more stable. The degree of freedom brought by taking into account the structure of the graph allows to project the construction of deeper GNNs.
It is also possible to imagine using this method to better control the training of neural networks on denser graphs. The computations that are performed to extract the data from the graph representation matrix can be executed in distributed computing. The choice of edges to drop at each epoch is more complex to do in distributed computing and will be the subject of future work.
A APPENDIX: HYPERPARAMETERS IN EXPERIMENTS
In the table 4 are gathered the parameters used to generate the accuracys that have been presented in the paper. There are both the hyperparameters of the models that are used for the execution of the backbones, and also the few parameters that we used to control the selection of the edges to drop. We have implemented three ways to take into account the information from the structure of the graph. This is the parameter which is named score. Either we have used only the degree information or the PageRank information, which is respectively indicated by Deg and PR, or we have used both at the same time to build the score vector and it is noted PRxD. In addition to this parameter, we have influenced the choice of edges to remove from the graph with the following parameters:
• dd: It is a boolean that removes the edge in the opposite direction of the selected edge when the dataset is symmetric. Vertices are removed in pairs, and this allows to keep a undirected graph.
• reverse: It is a boolean that allows to reverse the adjacency matrix. By doing this, each edge is no longer associated with the tail node but with the head node, and if the scores of the two nodes associated with that edge are not the same, it changes the probability of selecting that particular edge.
• lowest: a boolean that reverses the ranking of the nodes of the graph by using the reciprocal of the score associated with each node.
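To make the interplay of these parameters concrete, here is a minimal sketch (in Python, with illustrative names only) of how the score vector, the Scan-With-Add prefix sum, and the node selection of Section 2.3 could fit together. It assumes PRxD is the element-wise product of the PageRank and degree scores (the paper only states that both are used at the same time); lowest follows the reciprocal rule stated above, and the dd and reverse flags, which act on the adjacency matrix and the edge step, are omitted.

import numpy as np

def build_score(degree, pagerank, score="PRxD", lowest=False):
    # Combine local (degree) and global (PageRank) information into the
    # normalized score vector s, with 0 <= s_i <= 1 and sum_i s_i = 1.
    if score == "Deg":
        s = degree.astype(float)
    elif score == "PR":
        s = pagerank.astype(float)
    else:  # "PRxD": both at once; assumed here to be an element-wise product
        s = degree.astype(float) * pagerank.astype(float)
    if lowest:  # reverse the ranking via the reciprocal of each score
        s = 1.0 / s
    return s / s.sum()

def select_node(swa):
    # Draw one node index by binary search on the Scan-With-Add (prefix-sum)
    # vector, so node i is picked with probability s[i] in O(log n) time.
    r = np.random.uniform(0.0, 1.0)
    return min(int(np.searchsorted(swa, r, side="right")), len(swa) - 1)

# Toy usage on a 5-node graph.
degree = np.array([4, 1, 2, 1, 2])
pagerank = np.array([0.45, 0.05, 0.25, 0.05, 0.20])
s = build_score(degree, pagerank, score="PRxD", lowest=False)
swa = np.cumsum(s)       # SWA vector; swa[-1] == 1
node = select_node(swa)  # high-score nodes are drawn more often

Repeated draws of this kind, combined with the malus bookkeeping of Algorithm 2, then yield the p × nnz edges to drop at each epoch.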
Ref | Backbone | Dataset | nlayers | Hyper-parameters
Table 2 | GCN | Cora | 2 | lr:0.001, weight-decay:1e-4, sampling-percent:0.7, score:PRxD, dd:false, reverse:true, lowest:true, niter:400
Table 2 | GCN | Citeseer | 2 | lr:0.007, weight-decay:1e-4, sampling-percent:0.6, score:PR, dd:false, reverse:false, lowest:true, niter:400
Table 2 | GCN | Pubmed | 2 | lr:0.009, weight-decay:1e-2, sampling-percent:0.8, score:PR, dd:true, reverse:false, lowest:true, niter:400
Table 2 | GCN | Cora | 4 | lr:0.004, weight-decay:1e-4, sampling-percent:0.3, score:PRxD, dd:false, reverse:true, lowest:true, niter:400
Table 2 | GCN | Citeseer | 4 | lr:0.008, weight-decay:1e-3, sampling-percent:0.1, score:Deg, dd:false, reverse:true, lowest:false, niter:400
Table 2 | GCN | Pubmed | 4 | lr:0.008, weight-decay:1e-2, sampling-percent:0.9, score:Deg, dd:false, reverse:true, lowest:false, niter:400
Table 2 | GCN | Cora | 8 | lr:0.003, weight-decay:1e-5, sampling-percent:0.7, score:PR, dd:true, reverse:true, lowest:false, niter:1000
Table 2 | GCN | Citeseer | 8 | lr:0.001, weight-decay:1e-5, sampling-percent:0.5, score:PRxD, dd:false, reverse:true, lowest:false, niter:1000
Table 2 | GCN | Pubmed | 8 | lr:0.006, weight-decay:1e-4, sampling-percent:0.5, score:PRxD, dd:true, reverse:true, lowest:true, niter:1000
Table 3 | GCN | Cora | 4 | lr:0.01, weight-decay:0.005, sampling-percent:0.6, score:Deg, dd:true, reverse:true, lowest:true, niter:400
Table 3 | GCN | Citeseer | 4 | lr:0.009, weight-decay:1e-3, sampling-percent:0.1, score:Deg, dd:true, reverse:true, lowest:false, niter:400
Table 3 | GCN | Pubmed | 4 | lr:0.01, weight-decay:1e-3, sampling-percent:0.2, score:PRxD, dd:true, reverse:true, lowest:false, niter:400
Table 3 | IncepGCN | Cora | 8 | lr:0.01, weight-decay:1e-3, sampling-percent:0.1, score:PR, dd:true, reverse:false, lowest:true, niter:400
Table 3 | IncepGCN | Citeseer | 8 | lr:0.002, weight-decay:0.005, sampling-percent:0.1, score:Deg, dd:true, reverse:true, lowest:false, niter:400
Table 3 | IncepGCN | Pubmed | 4 | lr:0.002, weight-decay:1e-5, sampling-percent:0.3, score:PRxD, dd:false, reverse:true, lowest:true, niter:400
Table 3 | JKNet | Cora | 16 | lr:0.008, weight-decay:5e-4, sampling-percent:0.1, score:PR, dd:true, reverse:true, lowest:true, niter:400
Table 3 | JKNet | Citeseer | 8 | lr:0.004, weight-decay:5e-5, sampling-percent:0.8, score:PR, dd:true, reverse:true, lowest:true, niter:400
Table 3 | JKNet | Pubmed | 64 | lr:0.005, weight-decay:1e-4, sampling-percent:0.9, score:Deg, dd:false, reverse:false, lowest:false, niter:400
Table 4: Hyper-parameters used to obtain the accuracies presented in this paper with the RankedDrop method. | 1. What is the focus of the paper, and how does it build upon prior works?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its technical novelty and contributions to the field?
3. How does the reviewer assess the clarity and completeness of the paper's content, including the discussion of relevant papers and experimental results?
4. Are there any concerns or suggestions regarding the comparisons with other works and the presentation of the results? | Summary Of The Paper
Review | Summary Of The Paper
The authors of this paper propose RankedDrop, a follow-up work on DropEdge. Instead of the random edge dropping in DropEdge, RankedDrop uses global information from PageRank as well as local information from node degrees to determine the edges to be dropped.
Review
Pros:
The problem of over-smoothing in GNNs is very important and worth studying, and the naive DropEdge approach does leave a lot of room for improvement.
This paper is overall clearly written and easy to follow.
Cons:
This paper is missing a discussion of some very relevant papers, e.g., [1][2].
I found the technical novelty of the proposed method quite marginal.
In the experiments, the authors only compare with vanilla GNNs and DropEdge, while many closely related and more recent baselines on exactly the same problem exist, for example NeuralSparse [1], GAugM [2], and PTDNet [3].
The experimental performances reported in Tables 2 and 3 show marginal improvements, which are hard to compare without confidence intervals such as standard deviations.
[1] Robust Graph Representation Learning via Neural Sparsification, ICML'20
[2] Data Augmentation for Graph Neural Networks, AAAI'21
[3] Learning to Drop: Robust Graph Neural Network via Topological Denoising, WSDM'21 |
ICLR | Title
RankedDrop: Enhancing Deep Graph Convolutional Networks Training
| 1. What is the focus of the paper regarding node classification tasks?
2. What are the strengths of the proposed method, particularly its performance compared to baselines?
3. What are the weaknesses of the paper, such as limitations in dataset evaluation and unclear problem tackling?
4. Are there any questions regarding the absence of semi-supervised learning results? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a method with spatial-aware dropping-edge selection. The selection takes into account global graph information using PageRank and local neighborhood information via node degrees.
Review
Strengths
The authors' method obtains better performance than the baselines on full-supervised node classification tasks.
Weakness
The evaluated datasets are only Cora, Citeseer and PubMed; evaluation on more datasets would be better.
The reason why the proposed method can tackle the over-smoothing and over-fitting problems is not clear to me.
There seems to be no result for semi-supervised learning, although the authors mention related experiments in Section 3.5.1.
ICLR | Title
RankedDrop: Enhancing Deep Graph Convolutional Networks Training
Abstract
Graph Neural Networks (GNNs) are playing a more and more important role for analyzing unstructured data from the complex real world. Introducing random edge dropping from the input graph at training epochs could reduce over-fitting and over-smoothing phenomenon and increase the depth of GNNs. However, such method relies strongly on the chosen randomness. It makes the accuracy depend on the initialization of the randomness, which lets the selection of hyperparameters be even more difficult. We propose in this paper RankedDrop a novel method with a spatial-aware dropping-edge selection. The selection takes into account the graph global information using PageRank, and graph local neighborhood information with node degree. RankedDrop provides a more stable training results comparing to the state-of-the-art solution, by maintaining the advantages of random edge dropping. Furthermore, RankedDrop is a general method that can be deployed on a deep learning framework for enhancing performance of GNNs.
1 INTRODUCTION & CONTEXT
Convolutional Neural Networks (CNNs) demonstrated a great success in our today’s daily life for image classification and many other applications. However in the real world there are still many non-Euclidean (graph) data like social networks or reference systems that cannot be handled by CNNs. After Defferrard et al. (2016) introducing Graph Neural Networks (GNNs), Defferrard et al. (2016) generalized CNNs to graph to exploit their potential for classification problems on nonEuclidean data structure. The computation of Graph Convolutional Neural Networks (GCNs) can be summarized as iterative neighborhood aggregations with a message passing schema (Huan et al. (2021)).
He et al. (2016) showed that deeper CNN has higher potential to achieve better precision. However, modern GCNs (Kipf & Welling (2017); Pei et al. (2021); Hamilton et al. (2017)) can work with very limited number of layers, because training deep neural networks is a very complex task (Claesen & De Moor (2015)), the complexity of the computed function grows exponentially with depth (Raghu et al. (2017)), and the deeper the networks are, the more they are subject to over-smoothing (Li et al. (2018); Chen et al. (2020)). Meanwhile, deeper GCN and/or small graph datasets could lead to over-fitting, where a model could fit well the training data but poorly the testing data.
Dropout (Hinton et al. (2012); Srivastava et al. (2014)) is a promising regularization techniques to reduce over-fitting. In the field of GCN, DropEdge introduced by Rong et al. (2019), which randomly removes a certain proportion of edges from the input graph at each epoch, showed promising results to reduce the convergence speed of over-fitting and over-smoothing. Moreover, the random dropping happened on the message passing schema of most of GCNs. Therefore such method could be applied for many GCN backbone models like GCN (Kipf & Welling (2017)), ResGCN (Pei et al. (2021)), GraphSage (Hamilton et al. (2017) ), IncepGCN (szegedy2016rethinking) and JKNet (Xu et al. (2018)).
However, the accuracy obtained by DropEdge depends on how the randomness of dropping is initialized. Moreover, the only parameter that can be adjusted in DropEdge is the percentage of edges that will be dropped. Missing of control on the way of how dropping edges be selected, may limit possibilities to optimize GCN training according to application domain and chosen backbone architecture. Furthermore, a graph structure includes a lot of useful information (Newman (2003)).
Random dropping may destroy graph structure information and again limits the potential of optimizing GCN training.
This paper proposes RankedDrop a novel method with a spatial-aware dropping-edge selection. The selection takes into account the graph global information using PageRank, and graph local neighborhood information with node degree. Graph structure information is extracted to reduce the impact of randomness in the selection and also to improve the final accuracy after training. RankedDrop provides a more stable training results comparing to the state-of-the-art solution, by maintaining the advantages of random edge dropping including over-fitting and over-smoothing reduction and being a general method for different GCN backbones. Shown by our experiments, the accuracies of deep GCNs on semi-supervised learning are significantly improved by using RankedDrop.
2 RANKEDDROP METHOD WITH DATA SELECTION
RankedDrop is a general method, and it is applied on the input graph of a GCN training before each epoch. It first extracts a score based on graph analysis for each node (Sec 2.2); after that the nodes are reordered to control the selection probability then selected according to the computed score (Sec 2.3); at the end the edges of selected nodes are selected and we drop the selected edges to create the new input graph (Sec 2.4).
2.1 NOTATIONS AND PRELIMINARIES
We use an adjacency matrix A to represent the original input graph G, and nnz the number of non-zero value of A, a.k.a. the number of edges of the graph G. We denote p the proportion of edges from G that will be dropped. Therefore, after dropping, the new input graph Gdrop has (1− p)× nnz edges. We denote the resulting adjacency matrix Adrop for Gdrop, and we use A′ to denote the matrix of p× nnz dropped edges. The relation between the above three matrices is:
Adrop = A−A′ (1)
The theorem 1 introduced in the paper (Rong et al. (2019)) proved that training GCN on Gdrop instead of G allows to reduce the speed of convergence of the over-smoothing and to reduce the loss of information. The idea is based on the concept of mixing time in the random walk theory (Lovász (1993)), and the proof is based on the work of Oono & Suzuki (2019). Luo et al. (2021) demonstrated again the effectiveness of such dropping method.
In the following parts of the paper, we focus on the methods of selection of edges to drop, which are the main contributions of RankedDrop. RankedDrop ranks the nodes in order to assign a weight during the selection. Different from Sparsification (Eppstein et al. (1997)) or DropEdge (Rong et al. (2019)), the goal of RankedDrop is to control the randomness with several parameters, but not to completely control the choice of the drop edges, to create at each iteration a sub-graph in a more intelligent way. It means that we bias the probability (greater or lesser) of being selected of each edge according to our graph analysis, to create dropping strategy by reducing the dependency on full randomness.
2.2 GRAPH INFORMATION EXTRACTION
Most GCN architectures are mainly oriented on inter-neighbor communication (Huan et al. (2021)). The information propagates through edges w.r.t. GCN layers. The shorter the path between two nodes, the more they will influence each other. Removing the most impactful neighbors limits such over-influence and reserves space for taking into account the information from other neighbors for each epoch and among the epochs in a training. With the above idea, we propose here a node ranking strategy in order to prepare a better dropping selection for the next steps. Two kinds of graph structure information are extracted and used in the node selection step:
Local structure information The degrees of each node, which reflects the local impact of the node on its neighborhood. Higher degree reflects stronger influence from a local point of view in the
graph. If a node has a lot of neighbors, it will have an impact at each layer on them and therefore the information it contains will be strongly taken into account at the local level. The degrees are extracted from the adjacency matrix A. Consider that A is an n × n matrix and that its number of nonzero elements is nnz. A vector of size n is used to store the degrees of the nodes of the graph.
Global structure information Different graph node ranking algorithms (Agarwal & Chakrabarti (2007)) could be used here to judge importance of each node on the global graph. We use in the paper the PageRank algorithm (Page et al. (1999)) to generate the score of importance, because (1) PageRank is the most studied algorithms of the last decades, by our knowledge it can be easily implemented in a distributed way to accelerate its computation; (2) It was already used in GNNs to reduce the over-smoothing (Bojchevski et al. (2019)). From the adjacency matrix A, a vector of size n will be returned and will contain the score of each node. The algorithm 1 represents the implementation of PageRank used in RankedDrop. It shows that the main operation of each iteration of the PageRank is a matrix-vector multiplication where the output vector is used to perform the next iteration multiplication. By considering A as sparse, the cost of this sequence of sparse matrixvector multiplications is reduced and can be executed efficiently in a distributed way (Hugues & Petiton (2010)), which allows to optimize the extra computations that PageRank requires. This iterative method stops when the convergence has reached the expected precision. The result vector of the last iteration contains then the scores of each node of the graph and all elements are between 0 and 1. The higher the score, the more important the node is in the global graph. A β coefficient is also introduced during the PageRank. It is an optimization allowing to redistribute a part of the scores of each node among all the other nodes. In this way, the convergence of the result vector is faster and avoids that all the score is distributed only within the strongly connected component. Conventionally, the β coefficient is fixed around 0.85, this value was used for the experiments in the section 3.
The local and global structure information is used to rank the nodes of the graph to determinate the overall importance of each node in the graph. The importance of the nodes in the global and/or local structure of the graph gives a score to each node so that the nodes with a higher score are more often included in the matrix Adrop. Thus, the structure of the graphs Gdrop that will be generated at each iteration will be closer to the structure of the graph G than when the dropping is done randomly. We note s the vector of size n which stores the final score of the associated nodes. The computation of this score is flexible. There are many possibilities to compute the values of s by taking the information of the local and/or global structure, and potentially other information. At the end, the goal is to have a vector such that ∀i ∈ [1, n], 0 ≤ Si ≤ 1 and ∑n i=1 Si = 1.
2.3 NODE SELECTION WITH PROBABILITY CONTROL
After getting the score vector s, we sort the nodes according to their scores in a decreasing order. The permutations performed during the sorting are stored in memory in order to keep the association information between the nodes and the scores.
After the sorting, we create a probability scale from the sorted score vector by applying a Scan-WithAdd (SWA) algorithm (a.k.a prefix sum, Blelloch (1990)). SWA will generate an interval between 0 and 1 for each node. Therefore, the node selection is no more in a fully random way but the randomness is limited in the interval. The resulting vector is of size n where the values are more and more ordered. The resulting vector is such that SWAs[n] = 1. In addition, SWA could help visualize the inequalities of score between the nodes in the graph, like the Lorenz curve used in economics (Lorenz (1905)).
The node selection is performed with the SWA vector. The SWA value of each node corresponds to the probability of the node being selected. Formatting the score vector as a SWA accelerates the selection of nodes. For each node selection, we take a random number between 0 and 1 and find the node associated with this value in the vector SWAs. A binary search (Knuth (1998)) on the SWA vector can find the node with a O(log n) complexity, where it is necessary to browse element by element the vector of s scores at each node selection. We will discuss the selection of nodes from the SWA vector in more detail in the next section.
Algorithm 1 Algorithm to get the PageRank score from adjacency matrix
Input: A the adjacency (sparse) matrix, δ precision, β coefficient Output: v vector of PageRank score of size n Initialisation :
1: sum← 0 2: err ← INF 3: new vector tmp of size n 4: assign 1n to each element in v
START LOOP 5: while err > δ do 6: reset all element of tmp to 0 7: tmp← SpMV between A and v 8: for each elem in tmp do 9: elem← β ∗ elem+ (1− β) ∗ 1n
10: end for 11: err ← norm between tpm and v 12: v ← tmp 13: end while 14: return v
Algorithm 2 Node selection from the score vector Input: SWAs the Scan-With-Add final score
vector, sumScore the sum of the remaining nodes’ scores, malus the vector of malus applied to each node. Output: ind the index of the node to perform the drop edge Initialisation :
1: r ← randomin]0, 1[ 2: r ← r ∗ sumScore 3: m← 0 4: a← 0 5: b← size of SWAs 6: while b-a != 1 do 7: c← (a+ b)/2 8: m← m+ malus on c node 9: if SWA S of c−m− sB < r then
10: a← c 11: else 12: b← c 13: add (c + b)/2 on the potential malus node list 14: add c in the explored nodes list 15: end if 16: end while 17: return b
2.4 DROPPING EDGE SELECTION
The last step is to select the exact edges to drop. Different from the previous steps that can be performed only once in the beginning of training, the edge selection is performed for each epoch to generate a different subgraph. At each epoch, p × nnz edges are chosen from the selected nodes and are removed.
Different edge selection algorithms could be applied here. For example, the selection could be based on the tail, on the head, or directly removing all edges of a selected node (a.k.a DropNode). For the experiments presented in the section 3, we randomly select edges from the selected node. This adds randomness to the selection process. To select a node, we took our inspiration from the bisection method (Burden & Faires (1985)). By randomly pulling in a uniform way a number r between 0 and 1, we obtain the index of the node i that checks SWAs[i] < r < SWAs[i + 1] by performing a dichotomy. The selection of edges and nodes is based on the SWA des scores SWAs, , the importance of the node in the graph will influence its probability to be selected at each epoch. Thus, the randomness is controlled but the selection probabilities are different for each node so that the randomness takes into account the global structure of the graph. It is possible from the PageRank results and/or degrees vector to create subgraphs that keep the key nodes of the graph so that at each epoch the graph generated is consistent with the structure of the initial graph.
It is useful to keep an efficient selection method because it is performed a large number of times. This is why we have tried to optimize the implementation of this selection (see the algorithm 2) by adding a malus system when exploring the SWA vector to avoid selecting a node from which all edges have already been selected. When all the edges associated to a node have been dropped, there is no more interest to select this node again. Our optimization is based on the fact that the bisection method can be represented as a tree. At each step of the dichotomy, there is the possibility to move either the lower or the upper bound. To be sure not to select a node i, it is possible to apply a malus (equal to the score of the node i) to the explored branches when the upper bound is moved in the path to access the SWAs[i] (because all these paths lead to nodes further in the SWAs vector). Thus,
it is enough to modify at most log n malus value instead of modifying a large number of values in SWAs.
3 EXPERIMENTS AND DISCUSSIONS
RankedDrop is a general method that could equip different algorithms for the node selection and edge selection, and can be applied for different GNN architectures. In the experiments of this paper, we used the classic PageRank to compute scores for the node selection; we used basic random selection for the edge selection; and we used mostly the standard GCN as the training architecture, since Rong et al. (2019) and Luo et al. (2021) have already demonstrated the genericity of these Dropping methods for other GNN architecture. We believe such standard configuration could provide a clear and general idea of the potential offered by RankedDrop.
3.1 DATASETS AND ENVIRONMENT
Three standard citation datasets were used in our experiments: Cora, Citeseer and Pubmed. These datasets represent collections of scientific articles that are classified according to the paper’s main research topic (Sen et al. (2008)). More information of these datasets can be found in the table x. We notice that these graphs are very sparse, since their number of edges per node is very low: in average about two edges per node. However, if we see closely, the highest degree nodes of those datasets have more than 100 edges (99 for Citeseer and up to 171 for Pubmed). This means that a very large number of nodes have a very low degree (≤ 2). Therefore, only few important nodes propagate their information very widely; the information of other low degree notes are quickly drowned. For example in Cora, the node with the highest degree is directly connected to more than 6% of the nodes in the graph.
The extraction of graph data, the score computation until edge selection and dropping were done before the GCN training on Intel Xeon Processor E5-2690 with 8 cores. The result was used to build Gdrop. The trainings with Gdrop were done on Nvidia Tesla V100 PCIe 16GB GPUs. The original GCN, the state of the art DropEdge and our RankedDrop were compared in this section to validate our solution.
3.2 HOW SCAN-WITH-ADD HELPS NODE SELECTION
The values of the SWA of PageRank and Degree are represented graphically in figure 1. We can see that PageRank allows to put forward a small number of nodes; these curves increase very quickly. The distribution of the scores is very unequal: for all the datasets, 10% of the best ranked nodes share more than 80% of the total score, because PageRank highlights the most important nodes in an exponential way. Therefore if the nodes are randomly selected, there is an 80% chance to choose one of the 10% high ranked nodes. By using SWA-PageRank, the best rated nodes of the G graph will be very often integrated to the G′ graph, because they are the nodes that have the most impact in the global structure of the graph. On the other hand, the scores of SWA-Degree rise more slowly. The inequality between the nodes is thus less important. The highest degree nodes are privileged in terms of scores but they are not too much highlighted compared to SWA-PageRank. Therefore, we can choose the most adapted SWA and decide to select either the most or the less important nodes to prepare the edge selection.
3.3 IMPACT OF RANKEDDROP ON OVER-FITTING
The figures 2 and 3 show the training and validation loss curves in full-supervised and semisupervised learning, respectively. All curves for the same dataset were obtained with the same hyperparameters, only the percentage of dropping edges is different between DropEdge and RankedDrop. We can observe that, the two dropping methods in general have the similar behavior, both have better loss convergence than the original GCN, and in some cases, the validation loss of RankedDrop converges again better than the one of DropEdge. These experiments show that RankedDrop is the best method to reduce the over-fitting phenomenon and to stabilize the loss. RankedDrop has also the same behavior on over-smoothing reduction as DropEdge, so we will not discuss here.
3.4 IMPACT OF DROPPING CONTROL
The accuracies after training with different proportion of non-drop edges are shown in figure 4. The accuracies were obtained with the best hyper-parameters for each cases with GCN in the semi-
supervised learning. We can observe that for all three datasets the best accuracies are all from RankedDrop. Moreover, the best accuracy obtained by RankedDrop preserve more edges than the best one by DropEdge. For example, for the Citeseer dataset, the best accuracy subgraphs by DropEdge using 20% edges of the original graph, whereas the best accuracy subgraphs by RankedDrop maintain 60% of the edges. We believe the fewer edges was dropped, the more information of the original graph is kept, and we have more chance to achieve a better accuracy.
3.5.1 SEMI-SUPERVISED
3.5 OVERALL PERFORMANCE RESULTS
We first compare the accuracy between original GCN, GCN with DropEdge and GCN with RankedDrop in the semi-supervised learning, with 2, 4 and 8 layers (Table 2). The hyper-parameters for 2 layers are from the paper of DropEdge, and the one for 4 and 8 layers are the best one that we found. The parameters used for the selection of the nodes with RankedDrop are available in the appendix A. The accuracies obtained with RankedDrop are all higher than with DropEdge. Moreover, the deeper the GCN is, the better accuracy improvement RankedDrop offers comparing to DropEdge. The accuracy obtained with RankedDrop for the 8-layer GCN is 20% better than the one with DropEdge. Even for the 2-layer GCN, the accuracies of RankedDrop are equivalent or superior to those well-tuned by DropEdge. This is particularly true with the Citeseer dataset where the 2-layer GCN with RankedDrop obtained 1% higher accuracy than with DropEdge.
3.5.2 FULL-SUPERVISED
The accuracies of full-supervised learning are presented in the table 3. For each of the datasets, we evaluated with three different backbones: GCN, IncepGCN and JKNet. The number of layers for each backbone was chosen from the best accuracy declared by DropEdge. We used the same hyper-parameters given by Rong et al. (2019), only the edge dropping percentage is modified for RankedDrop. The accuracies are globally equivalent between RankedDrop and DropEdge; and RankedDrop achieved better accuracies than DropEdge for Cora the smallest dataset. It again show that RankedDrop reduce better the over-fitting phenomenon. Moreover, the hyper-parameters used here are not specifically adapted to RankedDrop, but RankedDrop can still achieve good accuracies. We believe there are still space to increase accuracies with RankedDrop by optimizing the hyperparameters.
4 CONCLUSION & PERSPECTIVE
The RankedDrop method proposed in this paper provides more control over the selection of dropped edges and allows the dropping step to be customized for various neural network architectures. Thanks to a personalized score system and several additional parameters, the choice of which edges to drop can be tailored to the task. RankedDrop keeps the advantages of DropEdge concerning the reduction of over-smoothing and over-fitting, as well as its applicability to different architectures, while also taking information about the graph structure into account. RankedDrop adds more control over randomness and a new degree of freedom for the dropping selection. We have shown that the results given by RankedDrop are very encouraging and more stable. The degree of freedom gained by exploiting the structure of the graph paves the way for building deeper GNNs.
One can also imagine using this method to better control the training of neural networks on denser graphs. The computations performed to extract the data from the graph representation matrix can be executed in a distributed setting. Choosing the edges to drop at each epoch is more complex to do in distributed computing and will be the subject of future work.
A APPENDIX: HYPERPARAMETERS IN EXPERIMENTS
Table 4 gathers the parameters used to generate the accuracies presented in this paper. It contains both the hyper-parameters of the models used to run the backbones and the few parameters we used to control the selection of the edges to drop. We implemented three ways of taking the structure of the graph into account; this is the parameter named score. Either we used only the degree information or only the PageRank information, indicated by Deg and PR respectively, or we used both at the same time to build the score vector, which is noted PRxD. In addition to this parameter, we influenced the choice of edges to remove from the graph with the following parameters (a schematic sketch of their effect follows the list):
• dd: a boolean that removes the edge in the opposite direction of the selected edge when the dataset is symmetric. Edges are removed in pairs, which keeps the graph undirected.
• reverse: a boolean that transposes the adjacency matrix. Each edge is then associated with its head node instead of its tail node; if the scores of the two nodes incident to an edge differ, this changes the probability of selecting that particular edge.
• lowest: a boolean that reverses the ranking of the nodes of the graph by using the reciprocal of the score associated with each node.
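As announced above, the sketch below illustrates one plausible way these parameters could act on score-based edge selection. It is our own schematic reading of the flags, not the authors' code: it assumes SciPy sparse matrices, and for brevity it keeps the top-ranked edges deterministically, whereas the paper samples edges randomly according to the SWA scores.

```python
import numpy as np
import scipy.sparse as sp

def ranked_drop(adj: sp.csr_matrix, node_scores: np.ndarray,
                keep_percent: float, dd: bool = False,
                reverse: bool = False, lowest: bool = False) -> sp.csr_matrix:
    # reverse: transpose the adjacency so each edge is scored by its
    # head node instead of its tail node.
    A = (adj.T if reverse else adj).tocoo()
    # lowest: invert the ranking so low-score nodes become the preferred ones.
    scores = 1.0 / (node_scores + 1e-12) if lowest else node_scores
    edge_scores = scores[A.row]            # one score per edge
    order = np.argsort(-edge_scores)       # best-ranked edges first
    n_keep = int(len(order) * keep_percent)
    keep = set(zip(A.row[order[:n_keep]].tolist(),
                   A.col[order[:n_keep]].tolist()))
    if dd:
        # dd: drop edges in symmetric pairs -- an edge survives only if its
        # opposite-direction twin survives too, keeping the graph undirected.
        keep = {(i, j) for (i, j) in keep if (j, i) in keep}
    if not keep:
        return sp.csr_matrix(adj.shape)
    r, c = zip(*keep)
    return sp.csr_matrix((np.ones(len(r)), (list(r), list(c))), shape=adj.shape)
```

A sweep over sampling-percent (the keep_percent argument here) then reproduces the kind of dropping-control experiment reported in Section 3.4.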
Ref | Backbone | Dataset | nlayers | Hyper-parameters
Table 2 | GCN | Cora | 2 | lr: 0.001, weight-decay: 1e-4, sampling-percent: 0.7, score: PRxD, dd: false, reverse: true, lowest: true, niter: 400
Table 2 | GCN | Citeseer | 2 | lr: 0.007, weight-decay: 1e-4, sampling-percent: 0.6, score: PR, dd: false, reverse: false, lowest: true, niter: 400
Table 2 | GCN | Pubmed | 2 | lr: 0.009, weight-decay: 1e-2, sampling-percent: 0.8, score: PR, dd: true, reverse: false, lowest: true, niter: 400
Table 2 | GCN | Cora | 4 | lr: 0.004, weight-decay: 1e-4, sampling-percent: 0.3, score: PRxD, dd: false, reverse: true, lowest: true, niter: 400
Table 2 | GCN | Citeseer | 4 | lr: 0.008, weight-decay: 1e-3, sampling-percent: 0.1, score: Deg, dd: false, reverse: true, lowest: false, niter: 400
Table 2 | GCN | Pubmed | 4 | lr: 0.008, weight-decay: 1e-2, sampling-percent: 0.9, score: Deg, dd: false, reverse: true, lowest: false, niter: 400
Table 2 | GCN | Cora | 8 | lr: 0.003, weight-decay: 1e-5, sampling-percent: 0.7, score: PR, dd: true, reverse: true, lowest: false, niter: 1000
Table 2 | GCN | Citeseer | 8 | lr: 0.001, weight-decay: 1e-5, sampling-percent: 0.5, score: PRxD, dd: false, reverse: true, lowest: false, niter: 1000
Table 2 | GCN | Pubmed | 8 | lr: 0.006, weight-decay: 1e-4, sampling-percent: 0.5, score: PRxD, dd: true, reverse: true, lowest: true, niter: 1000
Table 3 | GCN | Cora | 4 | lr: 0.01, weight-decay: 0.005, sampling-percent: 0.6, score: Deg, dd: true, reverse: true, lowest: true, niter: 400
Table 3 | GCN | Citeseer | 4 | lr: 0.009, weight-decay: 1e-3, sampling-percent: 0.1, score: Deg, dd: true, reverse: true, lowest: false, niter: 400
Table 3 | GCN | Pubmed | 4 | lr: 0.01, weight-decay: 1e-3, sampling-percent: 0.2, score: PRxD, dd: true, reverse: true, lowest: false, niter: 400
Table 3 | IncepGCN | Cora | 8 | lr: 0.01, weight-decay: 1e-3, sampling-percent: 0.1, score: PR, dd: true, reverse: false, lowest: true, niter: 400
Table 3 | IncepGCN | Citeseer | 8 | lr: 0.002, weight-decay: 0.005, sampling-percent: 0.1, score: Deg, dd: true, reverse: true, lowest: false, niter: 400
Table 3 | IncepGCN | Pubmed | 4 | lr: 0.002, weight-decay: 1e-5, sampling-percent: 0.3, score: PRxD, dd: false, reverse: true, lowest: true, niter: 400
Table 3 | JKNet | Cora | 16 | lr: 0.008, weight-decay: 5e-4, sampling-percent: 0.1, score: PR, dd: true, reverse: true, lowest: true, niter: 400
Table 3 | JKNet | Citeseer | 8 | lr: 0.004, weight-decay: 5e-5, sampling-percent: 0.8, score: PR, dd: true, reverse: true, lowest: true, niter: 400
Table 3 | JKNet | Pubmed | 64 | lr: 0.005, weight-decay: 1e-4, sampling-percent: 0.9, score: Deg, dd: false, reverse: false, lowest: false, niter: 400
Table 4: Hyper-parameters used to obtain the accuracies presented in this paper with the RankedDrop method. | 1. What is the focus of the paper on graph neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its novelty and technical contribution?
3. Do you have any concerns regarding the experimental evaluation and validation of the method?
4. Are there any minor issues or typos in the paper that should be addressed? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a novel edge-dropping method: RankedDrop to enhance the robustness of the training process for Graph Neural Networks.
Review
RankedDrop uses PageRank to calculate a score for each node and employs the SWA algorithm to perform node selection during training. Overall, this paper is well organized and easy to follow.
Weakness
The technical contribution is limited. RankedDrop combines two existing techniques to perform edge sampling for GNNs, and there is no theoretical analysis of how RankedDrop alleviates over-smoothing or why it surpasses other sampling methods; this combination therefore seems trivial.
This paper only conducts experiments on three small datasets, which are weak for modern GNN evaluation [1]; it would be better to evaluate performance on more challenging datasets. Meanwhile, the authors apply RankedDrop to only one GNN backbone, the Graph Convolutional Network, which is insufficient to validate the effectiveness of RankedDrop.
There are some small typos & mistakes in Algorithm 2.
[1] Pitfalls of Graph Neural Network Evaluation |
ICLR | Title
GPViT: A High Resolution Non-Hierarchical Vision Transformer with Group Propagation
Abstract
We present the Group Propagation Vision Transformer (GPViT): a novel nonhierarchical (i.e. non-pyramidal) transformer model designed for general visual recognition with high-resolution features. High-resolution features (or tokens) are a natural fit for tasks that involve perceiving fine-grained details such as detection and segmentation, but exchanging global information between these features is expensive in memory and computation because of the way self-attention scales. We provide a highly efficient alternative Group Propagation Block (GP Block) to exchange global information. In each GP Block, features are first grouped together by a fixed number of learnable group tokens; we then perform Group Propagation where global information is exchanged between the grouped features; finally, global information in the updated grouped features is returned back to the image features through a transformer decoder. We evaluate GPViT on a variety of visual recognition tasks including image classification, semantic segmentation, object detection, and instance segmentation. Our method achieves significant performance gains over previous works across all tasks, especially on tasks that require high-resolution outputs, for example, our GPViT-L3 outperforms Swin Transformer-B by 2.0 mIoU on ADE20K semantic segmentation with only half as many parameters. Code and pre-trained models are available at https://github.com/ChenhongyiYang/GPViT.
1 INTRODUCTION
Vision Transformer (ViT) architectures have achieved excellent results in general visual recognition tasks, outperforming ConvNets in many instances. In the original ViT architecture, image patches are passed through transformer encoder layers, each containing self-attention and MLP blocks. The spatial resolution of the image patches is constant throughout the network. Self-attention allows for information to be exchanged between patches across the whole image i.e. globally, however it is computationally expensive and does not place an emphasis on local information exchange between nearby patches, as a convolution would. Recent work has sought to build convolutional properties back into
vision transformers (Liu et al., 2021; Wu et al., 2021; Wang et al., 2021) through a hierarchical (pyramidal) architecture. This design reduces computational cost, and improves ViT performance on tasks such as detection and segmentation.
∗Equal Contribution
Is this design necessary for structured prediction? It incorporates additional inductive biases, e.g., the assumption that nearby image tokens contain similar information, which contrasts with the
motivation for ViTs in the first place. A recent study (Li et al., 2022a) demonstrates that a plain non-hierarchical ViT, a model that maintains the same feature resolution in all layers (non-pyramidal), can achieve comparable performance on object detection and segmentation tasks to a hierarchical counterpart. How do we go one step further and surpass this? One path would be to increase feature resolution (i.e. the number of image tokens). A plain ViT with more tokens would maintain high-resolution features throughout the network as there is no downsampling. This would facilitate fine-grained, detailed outputs ideal for tasks such as object detection and segmentation. It also simplifies the design for downstream applications, removing the need to find a way to combine different scales of features in a hierarchical ViT. However, this brings new challenges in terms of computation. Self-attention has quadratic complexity in the number of image tokens. Doubling feature resolution (i.e. quadrupling the number of tokens) would lead to a 16× increase in compute. How do we maintain global information exchange between image tokens without this huge increase in computational cost?
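The scaling argument can be checked with two lines of arithmetic. The sketch below counts only the token-token multiply-accumulates of global self-attention (QK^T and the attention-weighted sum), which is the quadratic term behind the 16× figure; the channel width 384 is illustrative and projection costs are omitted.

```python
def attn_matrix_macs(n_tokens: int, dim: int) -> int:
    # QK^T and attention @ V: the part of self-attention that is
    # quadratic in the number of tokens.
    return 2 * n_tokens ** 2 * dim

low = attn_matrix_macs(14 * 14, 384)   # 224px input, patch size 16 -> 196 tokens
high = attn_matrix_macs(28 * 28, 384)  # patch size 8 -> 4x as many tokens
print(high / low)                      # 16.0: quadrupling tokens costs 16x in attention
```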
In this paper, we propose the Group Propagation Vision Transformer (GPViT): a non-hierarchical ViT which uses high resolution features throughout, and allows for efficient global information exchange between image tokens. We design a novel Group Propagation Block (GP Block) for use in plain ViTs. Figure 1 provides a high-level illustration of how this block works. In detail, we use learnable group tokens and the cross-attention operation to group a large number of high-resolution image features into a fixed number of grouped features. Intuitively, we can view each group as a cluster of patches representing the same semantic concept. We then use an MLPMixer (Tolstikhin et al., 2021) module to update the grouped features and propagate global information among them. This process allows information exchange at a low computational cost, as the number of groups is much smaller than the number of image tokens. Finally, we ungroup the grouped features using another cross-attention operation where the updated grouped features act as key and value pairs, and are queried by the image token features. This updates the high resolution image token features with the group-propagated information. The GP Block only has a linear complexity in the number of image tokens, which allows it to scale better than ordinary self-attention. This block is the foundation of our simple non-hierarchical vision transformer architecture for general visual recognition.
We conduct experiments on multiple visual recognition tasks including image classification, object detection, instance segmentation, and semantic segmentation. We show significant improvements over previous approaches, including hierarchical vision transformers, under the same model size in all tasks. The performance gain is especially large for object detection and segmentation. For example, in Figure 2, we show GPViT’s advantage over the non-hierarchical DeiT (Touvron et al., 2021a) and hierarchical Swin Transformer (Liu et al., 2021) on those recognition tasks. In addition, our smallest model GPViT-L1 can outperform the Swin Transformer-B (Liu et al., 2021) by 2.6 APbb and 1.4 APmk in COCO Mask R-CNN (He et al., 2017) object detection and instance segmentation with only 30% as many parameters, and
GPViT-L2 outperforms Swin Transformer-B by 0.5 mIoU on UperNet (Xiao et al., 2018) ADE20K semantic segmentation also with only 40% as many parameters.
2 RELATED WORK
Vision Transformers. Vision Transformers have shown great success in visual recognition. They have fewer inductive biases, e.g. translation invariance, scale-invariance, and feature locality (Xu et al., 2021b) than ConvNets and can better capture long-range relationships between image pixels. In the original ViT architecture (Dosovitskiy et al., 2021; Touvron et al., 2021a), images are split into patches and are transformed into tokens that are passed through the encoder of a transformer (Vaswani et al.,
2017). Based on this framework, LeViT (Graham et al., 2021) achieves a significant performance improvement over ViT by combining convolutional and transformer encoder layers. An important development in ViT architectures is the incorporation of a hierarchical feature pyramid structure, as typically seen in ConvNets (Wang et al., 2021; Liu et al., 2021; Xu et al., 2021a; Wu et al., 2021; Fan et al., 2021). For example, Liu et al. (2021) propose a shifted windowing scheme to efficiently propagate feature information in the hierarchical ViT. Such a pyramid architecture provides multi-scale features for a wide range of visual recognition tasks. Following this line of research, recent work has studied the use of hierarchical features in ViTs (Ren et al., 2022b; Guo et al., 2022; Li et al., 2022b; Dong et al., 2022; Hatamizadeh et al., 2022; Chen et al., 2022a; d’Ascoli et al., 2021; Lee et al., 2022). For example, Ren et al. (2022b) introduce using multi-resolution features as attention keys and values to make the model learn better multi-scale information. While this is encouraging, it introduces extra complexity in the downstream model’s design on how to utilize the multi-scale features effectively. Recently, Li et al. (2022a) revisited the plain non-hierarchical ViT for visual recognition; using such a model simplifies the use of features and better decouples the pre-training and downstream stages of model design. Our work extends on this as we examine how to efficiently increase the feature resolution in a non-hierarchical ViT.
Attention Mechanisms in ViTs. A bottleneck when using high resolution features in ViTs is the quadratic complexity in the computation of the self-attention layer. To tackle this challenge, several local attention mechanisms have been proposed (Liu et al., 2021; Huang et al., 2019; Dong et al., 2022; Xu et al., 2021a; Zhang et al., 2022; Han et al., 2021) to allow each image token to attend to a local region instead of the whole image. However, using only local attention hinders a model’s ability to exchange information globally. To counter this problem, RegionViT (Chen et al., 2022a) and GCViT (Hatamizadeh et al., 2022) first down-sample their feature maps and exchange global information between the down-sampled features, before using self-attention to transfer information between the original image features and the down-sampled features. This is similar in spirit to our GP Block. However, unlike RegionViT and GCViT, in a GP Block the grouped features are not constrained to a particular rectangular region, but can correspond to any shape or even entirely disconnected image parts. There is recent work using transformer decoder layers with cross-attention between visual tokens and learnable tokens (Carion et al., 2020; Cheng et al., 2022; Jaegle et al., 2022; Hudson & Zitnick, 2021); however, there are three fundamental differences between these and ours: (i) each of our GP Blocks operates as an ‘encoder-decoder’ architecture with two rounds of cross-attention between visual tokens and group tokens: the first round groups the visual tokens for Group Propagation, and the second round ungroups the updated groups back into visual tokens; (ii) the underlying functionality is different: GP Blocks facilitate more efficient global information propagation throughout the ViT, while previous work applies the decoder to obtain the final results for inference (e.g. bounding boxes or masks in Carion et al. (2020); Cheng et al. (2022)); (iii) the GP Block is a general module that can be inserted into any layer of the ViT, while previous work utilizes the decoder only at the end of the network.
High-Resolution Visual Recognition. Previous work (Wang et al., 2020; Cheng et al., 2020) has shown that high-resolution images and features are beneficial to visual recognition tasks, especially to those requiring the perception of fine-grained image details, for example, semantic segmentation (Wang et al., 2020), pose-estimation (Sun et al., 2019), and small object detection (Yang et al., 2022). For example, HRNet (Wang et al., 2020) introduces a high-resolution ConvNet backbone. It maintains a high-resolution branch and exchanges information between different resolutions of features with interpolation and strided convolutions. Inspired by this work, HRFormer (Yuan et al., 2021) and HRViT (Gu et al., 2022) replace the convolutions in HRNet with self-attention blocks. GPViT is even simpler: it maintains single-scale and high-resolution feature maps without requiring any cross-resolution information to be maintained.
Object-Centric Representation. Our idea of performing information propagation among grouped regions is related to object-centric representation learning (Wang & Gupta, 2018; Kipf & Welling, 2017; Watters et al., 2017; Qi et al., 2021; Locatello et al., 2020; Kipf et al., 2022; Elsayed et al., 2022; Xu et al., 2022). For example, Locatello et al. (2020) proposes slot-attention, which allows automatic discovery of object segments via a self-supervised reconstruction objective. Instead of using reconstruction, Xu et al. (2022) utilizes language as an alternative signal for object segmentation discovery and shows it can be directly transferred to semantic segmentation in a zero-shot manner. All the above work extract object-centric features for downstream applications, while our work inserts this object-centric information propagation mechanism as a building block inside ViTs to compute
high-resolution representations more efficiently and improve high-resolution features. In this respect, our work is related to Li & Gupta (2018) where the graph convolution operations are inserted into ConvNets for better spatial reasoning.
3 METHOD
We present the overall architecture of our Group Propagation Vision Transformer (GPViT) in Figure 3 (a). GPViT is designed for general high-resolution visual recognition. For stable training, we first feed the input image into a down-sampling convolutional stem to generate image features (also known as image tokens), as in Dosovitskiy et al. (2021); Liu et al. (2021). In GPViT we downsample by a factor of 8 by default. The features are therefore higher resolution than in the original ViT where the factor is 16. Unlike most recently proposed methods (Liu et al., 2021; Li et al., 2022b) that adopt a pyramid structure to generate features in multiple resolutions, we keep the features at a high resolution without any down-sampling.
After combining the initial image features with positional embeddings (Vaswani et al., 2017), we feed them into the core GPViT architecture. We replace the original self-attention block in ViT with local attention to avoid the quadratic complexity of self-attention. However, stacking local attention blocks alone does not allow for long-range information exchange between patches and therefore is harmful to performance. To counter this problem, we propose the Group Propagation Block (GP Block)—which we describe in full in Section 3.1—to efficiently propagate global information across the whole image. In our implementation, we use a mixture of GP Blocks and local attention layers to form our GPViT and keep the overall depth unchanged. Lastly, we average the final features to get the model’s output.
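As a structural summary of this pipeline, the sketch below lays out the forward pass in PyTorch. It is a schematic, not the released implementation: the stand-in LocalAttentionBlock is a plain transformer block (the real model uses windowed LePE attention), GPBlock refers to the sketch given after Equation (6) below, and all widths and depths are illustrative.

```python
import torch
import torch.nn as nn

class LocalAttentionBlock(nn.Module):
    # Stand-in: a plain pre-norm transformer block; GPViT actually uses
    # windowed LePE attention here, omitted for brevity.
    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.ffn(self.norm2(x))

class GPViT(nn.Module):
    # Non-hierarchical backbone: an 8x-downsampling conv stem, then a mix of
    # local-attention blocks and GP Blocks at a single, high resolution.
    def __init__(self, dim=216, depth=12, gp_every=3, num_groups=64,
                 num_classes=1000):
        super().__init__()
        self.stem = nn.Conv2d(3, dim, kernel_size=8, stride=8)
        self.blocks = nn.ModuleList([
            GPBlock(dim, num_groups) if (i + 1) % gp_every == 0
            else LocalAttentionBlock(dim)
            for i in range(depth)
        ])
        self.head = nn.Linear(dim, num_classes)

    def forward(self, img, pos_embed):
        x = self.stem(img).flatten(2).transpose(1, 2)  # (B, N, C) tokens
        x = x + pos_embed
        for blk in self.blocks:
            x = blk(x)
        return self.head(x.mean(dim=1))                # average the final features
```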
3.1 GROUP PROPAGATION BLOCK
Our key technical contribution is the GP Block, which efficiently exchanges global information among image patches with linear complexity. We visualize the structure of the GP Block in Figure 3 (b). It has a bottleneck structure and comprises three stages, namely, Feature Grouping, Group Propagation, and Feature Ungrouping. In the first stage the image features are grouped, then in the second stage global information is propagated between the grouped features, and in the last stage, this global information is transferred back to the image features.
Feature Grouping. The input to a GP Block is a matrix of image features X ∈ R^{N×C} (the blue tokens in Figure 3 (b)), where N is the total number of image features (or image tokens) and C is the dimensionality of each feature vector. We use M learnable group tokens stored in a matrix G ∈ R^{M×C} (the multi-colored tokens in Figure 3 (b)), where the group number M is a model hyper-parameter. Grouping is performed using a simplified multi-head attention operation (Vaswani et al., 2017), which gives us grouped features Y ∈ R^{M×C} (the half-and-half tokens in Figure 3 (b)):
Attention(Q, K, V) = Softmax(QK^T / sqrt(d)) V,   (1)
Y = Concat_{h}( Attention(W^Q_h G_h, W^K_h X_h, W^V_h X_h) ),   (2)
where d is the channel number, h is the head index, and W^{Q,K,V}_h are the projection matrices for the queries, keys, and values, respectively, in the attention operation. We remove the feature projection layers after the concatenation operation and set W^Q_h and W^V_h to be identity matrices. Therefore, the grouped features are simply weighted sums of the image features at each head, where the weights are computed by the attention operation.
Group Propagation. After acquiring the grouped features, we can update and propagate global information between them. We use an MLPMixer (Tolstikhin et al., 2021) (Equation 3; the red box in Figure 3 (b)) to achieve this, as MLPMixer provides a good trade-off between model parameters, FLOPs, and model accuracy. MLPMixer requires a fixed-sized input, which is compatible with our fixed number of groups. Specifically, our MLPMixer contains two consecutive MLPs. Recall that Y ∈ R^{M×C} contains the grouped features from the first Feature Grouping stage. We can update these features to Ỹ ∈ R^{M×C} with the MLPMixer by computing:
Y′ = Y + (MLP_1(LayerNorm(Y)^T))^T,   (3)
Ỹ = Y′ + MLP_2(LayerNorm(Y′)),   (4)
where the first MLP is used for mixing information between each group, and the second is used to mix channel-wise information.
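A minimal PyTorch rendering of Equations (3)-(4) is given below, assuming a fixed number of groups M; the hidden-width expansion factor is our assumption.

```python
import torch.nn as nn

class GroupPropagation(nn.Module):
    # MLPMixer-style update over the M grouped features (Eqs. 3-4):
    # mlp1 mixes information across groups, mlp2 across channels.
    def __init__(self, num_groups: int, dim: int, expansion: int = 4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.mlp1 = nn.Sequential(      # group (token) mixing on the M axis
            nn.Linear(num_groups, expansion * num_groups), nn.GELU(),
            nn.Linear(expansion * num_groups, num_groups))
        self.norm2 = nn.LayerNorm(dim)
        self.mlp2 = nn.Sequential(      # channel mixing on the C axis
            nn.Linear(dim, expansion * dim), nn.GELU(),
            nn.Linear(expansion * dim, dim))

    def forward(self, y):               # y: (B, M, C) grouped features
        y = y + self.mlp1(self.norm1(y).transpose(1, 2)).transpose(1, 2)
        return y + self.mlp2(self.norm2(y))
```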
Feature Ungrouping. After updating the grouped features, we can return global information to the image features through a Feature Ungrouping process. Specifically, the features are ungrouped using a transformer decoder layer where grouped features are queried by the image features.
U = Concat_{h}( Attention(W̃^Q_h X_h, W̃^K_h Ỹ_h, W̃^V_h Ỹ_h) ),   (5)
Z′ = W_proj · Concat(U, X),   Z″ = Z′ + FFN(Z′),   Z = DWConv(Z″),   (6)
where W̃^{Q,K,V}_h are the projection matrices in the attention operation, W_proj is a linear matrix that projects the concatenated features Z′ back to the same dimension as the image features X, FFN is a feed-forward network, and DWConv is a depth-wise convolution layer. We modify the original transformer decoder layer by replacing the first residual connection with a concatenation operation (Equation 5; the blue box in Figure 3 (b)), and move the feature projection layer after it to transform the features back to the original dimension. We find this modification benefits downstream tasks across different model sizes. Taking inspiration from Ren et al. (2022b), we add a depth-wise convolution at the end of the GP Block to improve the locality of the features (Equation 6; the yellow box in Figure 3 (b)). Finally, the GP Block outputs Z.
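Putting the three stages together, a single-head sketch of the GP Block follows, reusing the GroupPropagation module above. It simplifies Equations (1)-(6): multi-head bookkeeping is collapsed to one head, and the depth-wise convolution assumes a square token map; exact details such as head counts and initialization will differ from the released code.

```python
import math
import torch
import torch.nn as nn

class GPBlock(nn.Module):
    def __init__(self, dim: int, num_groups: int = 64):
        super().__init__()
        self.group_tokens = nn.Parameter(torch.randn(num_groups, dim))  # G
        self.wk_group = nn.Linear(dim, dim, bias=False)  # W^K; W^Q, W^V are identity (Eq. 2)
        self.propagate = GroupPropagation(num_groups, dim)  # Eqs. (3)-(4)
        self.wq = nn.Linear(dim, dim, bias=False)        # ungrouping decoder, Eq. (5)
        self.wk = nn.Linear(dim, dim, bias=False)
        self.wv = nn.Linear(dim, dim, bias=False)
        self.proj = nn.Linear(2 * dim, dim)              # W_proj after Concat(U, X)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        self.dwconv = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)

    @staticmethod
    def attend(q, k, v):
        a = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(q.shape[-1]), dim=-1)
        return a @ v

    def forward(self, x):                                # x: (B, N, C) image tokens
        B, N, C = x.shape
        g = self.group_tokens.expand(B, -1, -1)
        y = self.attend(g, self.wk_group(x), x)          # Feature Grouping, Eqs. (1)-(2)
        y = self.propagate(y)                            # Group Propagation, Eqs. (3)-(4)
        u = self.attend(self.wq(x), self.wk(y), self.wv(y))  # Feature Ungrouping, Eq. (5)
        z = self.proj(torch.cat([u, x], dim=-1))
        z = z + self.ffn(z)                              # Eq. (6)
        h = w = int(math.isqrt(N))                       # assumes a square token map
        z = self.dwconv(z.transpose(1, 2).reshape(B, C, h, w))
        return z.reshape(B, C, N).transpose(1, 2)
```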
3.2 ARCHITECTURE VARIANTS OF GPVIT
In this paper we study four variants of the proposed GPViT. We present their architectural details in Table 1. These four variants largely differ in the number of feature channels used (i.e. the model width). We use the recently proposed LePE attention (Dong et al., 2022) as local attention by default. The FLOPs are counted using 224×224 inputs. Please refer to Section B.1 in our Appendix for detailed architectural hyper-parameters and training recipes for these variants.
3.3 COMPUTATIONAL COSTS OF HIERARCHICAL AND NON-HIERARCHICAL VITS.
We visualize both the non-hierarchical and hierarchical ViT in Figure 4 (a), where the non-hierarchical ViT simply stacks attention blocks and the hierarchical ViT divides the network into several stages and down-samples the feature map at each stage. Naturally, with the same resolution input, the non-hierarchical ViT will have a higher computation cost. The cost is divided into two parts as shown in Figure 4 (b): the self-attention module and the FFN module. Our GPViT largely reduces the computation of information propagation by using our GP Block instead of self-attention. However, the cost of the FFN stays high for high resolution image features. Therefore we expect higher FLOPs from GPViT than from a hierarchical ViT with a similar number of parameters. However, we believe non-hierarchical ViTs are still a direction worth exploring, given their simplicity in extracting high-resolution features and the removal of the need to study the design of efficient downstream models that utilize multi-scale features as required for a hierarchical ViT. This helps to maintain the independence of the model’s pre-training and fine-tuning designs (Li et al., 2022a). In our experiments, we show that our GPViT can achieve better detection and segmentation performance compared to state-of-the-art hierarchical ViTs with similar FLOP counts.
4 EXPERIMENTS
4.1 IMAGENET-1K CLASSIFICATION
Setting: To ensure a fair comparison with previous work, we largely follow the training recipe of Swin Transformer (Liu et al., 2021). We build models using the MMClassification (Contributors, 2020a) toolkit. The models are trained for 300 epochs with a batch size of 2048 using the AdamW optimizer with a weight decay of 0.05 and a peak learning rate of 0.002. A cosine learning rate schedule is used to gradually decrease the learning rate. We use the data augmentations from Liu et al. (2021); these include Mixup (Zhang et al., 2017), Cutmix (Yun et al., 2019), Random erasing (Zhong et al., 2020) and Rand augment (Cubuk et al., 2020).
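This recipe translates into a few lines of PyTorch. The sketch below covers only the optimizer and schedule (AdamW, weight decay 0.05, peak learning rate 0.002, cosine decay over 300 epochs); the warm-up length is our assumption, and the augmentation pipeline is omitted.

```python
import math
import torch

def build_optimizer_and_schedule(model, steps_per_epoch, epochs=300,
                                 peak_lr=2e-3, weight_decay=0.05,
                                 warmup_epochs=20):  # warm-up length assumed
    opt = torch.optim.AdamW(model.parameters(), lr=peak_lr,
                            weight_decay=weight_decay)
    total = epochs * steps_per_epoch
    warmup = warmup_epochs * steps_per_epoch

    def lr_lambda(step):
        if step < warmup:
            return step / max(1, warmup)            # linear warm-up
        t = (step - warmup) / max(1, total - warmup)
        return 0.5 * (1.0 + math.cos(math.pi * t))  # cosine decay to zero

    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
    return opt, sched
```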
We compare GPViT with hierarchical and non-hierarchical vision transformers on the ImageNet-1K classification task and report the results in Table 2. As shown in the table, because of our high-resolution design and effective global information propagation via the grouping mechanism, our GPViT outperforms the non-hierarchical baseline DeiT (Touvron et al., 2021a). In addition, GPViT also outperforms Swin Transformer (Liu et al., 2021) and two recently proposed hierarchical counterparts, RegionViT (Chen et al., 2022a) and DWViT (Ren et al., 2022a). This result showcases the potential of non-hierarchical vision transformers and suggests that the hierarchical design inherited from the ConvNet era is not necessary for obtaining a high-performing visual recognition model. This corroborates the work of Li et al. (2022a). That said, we do note that the FLOPs of our models are higher than most alternatives for a similar parameter count. However, for a similar FLOP count we observe that GPViT can achieve a comparable top-1 accuracy, but with many fewer parameters than the alternatives. For example, GPViT-L2 (15.0 G) has similar FLOPs to the Swin Transformer-B (15.4 G) and ShiftViT-B (15.6 G), but it achieves a similar accuracy with significantly fewer parameters (23.8 M vs. 88 M and 89 M).
4.2 COCO OBJECT DETECTION AND INSTANCE SEGMENTATION
Setting: We follow Chen et al. (2022b) to use Mask R-CNN and RetinaNet models for the COCO object detection and instance segmentation tasks. We use ViTAdapter (Chen et al., 2022b) to generate multi-scale features as FPN inputs and evaluate the model for both 1× and 3× training schedules.
Results: We compare GPViT to state-of-the-art backbones, all pre-trained on ImageNet-1K. We report the results in Table 3 and Table 4. For competing methods we report the performance of their largest-sized models. For both detectors our GPViT is able to surpass the other backbones by a large margin for a similar parameter count. With Mask R-CNN (Table 3), our smallest GPViT-L1 surpasses its Swin Transformer-B (Liu et al., 2021) counterpart by 2.6 APbb and 1.4 APmk for the 1× training schedule with fewer FLOPs and only 30% as many parameters. When comparing with models that are also equipped with ViTAdapter (Chen et al., 2022b), we observe that GPViT achieves a better
AP with fewer parameters, e.g. our smallest GPViT-L1 outperforms ViT-Adapter-B in both training schedules. These results showcase GPViT’s effectiveness at extracting good regional features for object detection and instance segmentation. A similar conclusion can be drawn from the single-stage RetinaNet detector; with RetinaNet (Table 4), GPViT-L1 has FLOPs similar to the recently proposed RegionViT-B (Chen et al., 2022a), but it outperforms RegionViT-B by 2.5 and 2.0 APbb in both 1× and 3× schedules with only 25% as many parameters. In Table 3, we also compare our Mask R-CNN with the recently proposed ViTDet (Li et al., 2021a) that also uses a non-hierarchical ViT as the backbone network. Here we continue to use the standard 3× (36 epochs) training recipe for GPViT. The results show that under similar FLOPs, even if ViTDet is equipped with more parameters (111M), advanced masked-auto-encoder (MAE) pre-training (He et al., 2022), a longer training schedule (100 epochs), and heavy regularizations like large-scale jittering (Ghiasi et al., 2021), our model can still achieve a comparable performance, which further validates the effectiveness of GPViT.
4.3 ADE20K SEMANTIC SEGMENTATION
Setting: We follow previous work (Liu et al., 2021) and use UperNet (Xiao et al., 2018) as the segmentation network. We also report performance when using the recently proposed SegFormer (Xie et al., 2021) model. For both models, we train for 160k iterations.
Results: We summarise the segmentation performance of GPViT and other state-of-the-art backbone networks in Table 5. For UperNet, we report results with the largest available model size for the competing methods to show how far we can go in the segmentation task. Thanks to its high-resolution design, GPViT outperforms all competing methods in mIoU with fewer FLOPs and fewer parameters. For example, GPViT-L1 only has 37M parameters but it can achieve comparable mIoU to methods with only half the number of FLOPs. This result tells us that for tasks requiring the perception of fine-grained details, scaling-up feature resolution is a better strategy than scaling up model size. GPViT also excels when used with SegFormer. Specifically, GPViT achieves better mIoU than recently proposed vision transformers with similar parameter counts, including HRViT (Gu et al., 2022) that was specifically designed for semantic segmentation. We attribute these promising results to GPViT’s high-resolution design and its effective encapsulation of global information.
4.4 ABLATION STUDIES
Setting: We conduct ablation studies using two types of local attention: the simple window attention (Liu et al., 2021) and the more advanced LePE attention (Dong et al., 2022), which we used in previous experiments. We use the L1 level models (Param < 10 M) for all experiments. All models are pre-trained on ImageNet classification task for 300 epochs using the same setting as in Section 4.1. We report both ImageNet Top-1 accuracy and ADE20K SegFormer mIOU. Please refer to our appendix for more ablation experiments.
Building GPViT step by step. Here we show how we build GPViT step by step and present the results in Table 6. We start building our GPViT from a low-resolution vanilla DeiT with a patch size of 16 and 216 embedding channels (the same as GPViT-Tiny). It achieves 77.4 top-1 accuracy on ImageNet and 42.2 mIoU on ADE20K. Then we increase the resolution by shrinking the patch size to 8. The FLOPs of the ImageNet and ADE20K models increase by 4.4× and 7.0× respectively. ImageNet accuracy increases to 79.2, but training this model for segmentation proves
to be unstable. We see that enlarging the feature resolution using global self-attention leads to the number of FLOPs exploding and makes convergence difficult. We now replace self-attention with window attention (Liu et al., 2021) and the more advanced LePE attention (Dong et al., 2022). For both local attention mechanisms, the FLOPs of the ImageNet and ADE20K models drop to 5.8G and 34G respectively. We then incorporate GP Blocks, and observe that the accuracy and mIoU improve for both types of local attention and FLOPs remain unchanged. These results showcase the effectiveness of using high-resolution features as well as the importance of our combination of local attention blocks and GP Blocks to maintain a reasonable computation cost.
Global information exchange. Here, we compare our GP Block with other blocks that can exchange global information between image features. The competing blocks include the global attention block, the convolution propagation block (Li et al., 2022a), and the shifting window-attention block (Liu et al., 2021) designed for window attention. We follow ViTDet (Li et al., 2022a) to build the convolution propagation block, which stacks two 3×3 convolution layers with a residual connection. We use the original version of the shifting window-attention block as in Liu et al. (2021). The resulting models are obtained by putting the competing blocks in the same place as our GP Block. We report the results in Table 7. We observe that simply replacing the local attention layers with convolution layers causes severe performance drops for both types of local attention. We also observe that replacing local attention with global attention can improve performance at a very large increase in FLOPs. For window attention, we found that using the shifting window strategy slightly hurts the performance. We postulate that this is caused by a deficit of shifting window layers: half of the Swin Transformer layers are shifting window layers, but we only use four here. For both types of local attention, the GP Block achieves the best performance on ImageNet and ADE20K. These results show the GP Block’s effectiveness in propagating global information.
Number of group tokens. Here we study how different numbers of group tokens in the GP Blocks affect the overall model performance. We report the results in Table 8. We find that using a large number of group tokens across the whole network gives higher accuracy on ImageNet, but at additional computational cost; using too few group tokens, e.g. 16, harms performance. In GPViT we choose to progressively decrease the
number of group tokens from 64 to 16. This strategy gives us a good trade-off between accuracy and computational cost.
Grouped features propagation. In Table 9 we compare different methods for global information propagation. The results show that even when we add nothing to explicitly propagate global information the model can still achieve a good performance (79.8% accuracy on ImageNet). The reason is that in this case the image features are still grouped and ungrouped so the
global information can still be exchanged in these two operations. We also find that self-attention achieves slightly better accuracy than MLPMixer (80.7 vs. 80.5), but is more expensive. In GPViT we use the MLPMixer for propagating global information.
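In the sketches above, swapping the propagation module is a one-line change. For instance, the self-attention alternative from this ablation could be a standard encoder layer over the M group tokens; the module below is a hypothetical drop-in for `propagate` in the GPBlock sketch, with illustrative widths.

```python
import torch.nn as nn

# Hypothetical alternative to the MLPMixer propagation: self-attention over
# the M group tokens (slightly higher accuracy at extra cost in Table 9).
self_attn_propagation = nn.TransformerEncoderLayer(
    d_model=216, nhead=4, dim_feedforward=864, batch_first=True)
```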
5 CONCLUSION
In this paper, we have presented the Group Propagation Vision Transformer (GPViT): a non-hierarchical vision transformer designed for high-resolution visual recognition. The core of GPViT is the GP Block, which was proposed to efficiently exchange global information among high-resolution features. The GP Block first forms grouped features and then updates them through Group Propagation. Finally, these updated grouped features are queried back into the image features. We have shown that GPViT can achieve better performance than previous work on ImageNet classification, COCO object detection and instance segmentation, and ADE20K semantic segmentation.
6 ACKNOWLEDGEMENT
Prof. Xiaolong Wang’s group was supported, in part, by gifts from Qualcomm and Amazon Research Award. Chenhongyi Yang was supported by a PhD studentship provided by the School of Engineering, University of Edinburgh.
A FURTHER ABLATION STUDIES
A.1 STUDY ON RUNNING EFFICIENCY
In Table 10, we compare the inference speed of GPViT with ViT baselines. Specifically, for each variant of our GPViT, we compare it to ViT models with patch size 16 (low-resolution) and patch size 8 (high-resolution) while keeping the channel dimensions the same. We report inference time using three different input sizes, which correspond to the three typical input sizes used by ImageNet-1k classification, COCO object detection and instance segmentation, and ADE20K semantic segmentation. We draw three observations from the results:
• When using small-sized inputs, GPViT runs slower than the low-resolution ViT. Despite the efficient design of our GP Block and the use of local attention, the high-resolution design still incurs a significant cost for forward passes, slowing down inference speed.
• When the models are applied to downstream tasks where they take larger-sized inputs, all but the largest GPViT models are faster than their low-resolution ViT counterparts. For example, when the model channel number is 216, GPViT takes 83 ms to process an 800×1280 sized image, while ViT-D216-P16 takes 155 ms. In this case, the self-attention operations with quadratic complexity severely slow down the speed of the ViT even with low resolution features. On the other hand, the computations in GP Block and local attentions grow much less than self-attention when the input scales up.
• GPViT is faster than the high-resolution ViT baselines when using small inputs. In addition, high-resolution ViTs are not even able to process large-sized inputs: we got Out of Memory
errors when using an NVIDIA 2080Ti GPU with 11 GB of memory. This highlights our technical contribution of efficiently processing high-resolution features with GPViT.
We further study how the computation cost for high-resolution features changes when the model size and input scale up by examining FLOP counts. The results are shown in Figure 5 where we compare GP Block with different group numbers to self-attention and local-attention operations: Self-attention and GP Block can both exchange global information between image features, but the computational cost of GP Block grows much slower than self-attention. Local attention operations have a similar level of efficiency to GP Block, but are unable to exchange global information because of their limited receptive field.
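The trends in Figure 5 can be reproduced with simple operation counts. The sketch below tallies only the dominant token-interaction terms of the three mechanisms; the window size, group count and channel width are illustrative choices, not the exact model settings.

```python
def self_attn_flops(n, d):
    return 2 * n * n * d           # QK^T plus attention @ V: quadratic in n

def local_attn_flops(n, d, window=49):
    return 2 * n * window * d      # each token attends within a fixed window

def gp_block_flops(n, d, m=64):
    # grouping (M x N) plus ungrouping (N x M) attention: linear in n
    return 4 * n * m * d

for n in (196, 784, 3136):         # increasing feature resolutions
    print(n, self_attn_flops(n, 384),
          local_attn_flops(n, 384), gp_block_flops(n, 384))
```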
B IMPLEMENTATION DETAILS
B.1 MODEL DETAILS OF GPVIT
The model details of the different GPViT variants are presented in Table 11. The variants differ mainly in their model width (channels) and share similar values for the other architectural hyper-parameters.
B.2 TRAINING RECIPE FOR IMAGENET
The ImageNet experiments are based on the MMClassification toolkit (Contributors, 2020a). The models are trained for 300 epochs with a batch size of 2048; the AdamW optimizer was used with a weight decay of 0.05 and a peak learning rate of 0.002. The cosine learning rate schedule is adopted. The gradient clip is set to 5.0 (we also tested 1.0 and found it worked well too); data augmentation strategies are from Liu et al. (2021) and include Mixup (Zhang et al., 2017), Cutmix (Yun et al., 2019), Random erasing (Zhong et al., 2020) and Rand augment (Cubuk et al., 2020).
B.3 TRAINING RECIPE FOR COCO
The COCO experiments are based on the MMDetection toolkit (Chen et al., 2019). Following commonly used training settings, both Mask R-CNN and RetinaNet models are trained for 12 epochs (1×) and 36 epochs (3×). For the 3× schedule, we follow previous work (Liu et al., 2021; Ren et al., 2022b) to use multi-scale inputs during training. The AdamW optimizer was used with an initial learning rate of 0.0002 and weight decay of 0.05. We used ViTAdapter (Chen et al., 2022b) to generate multi-scale features and followed the default hyper-parameter settings in Chen et al. (2022b).
B.4 TRAINING RECIPE FOR ADE20K
The ADE20K experiments are based on the MMSegmentation toolkit (Contributors, 2020b). Following commonly used training settings, both UperNet and SegFormer models are trained for 160000 iterations. The input images are cropped to 512×512 during training. The AdamW optimizer was used with an initial learning rate of 0.00006 and weight decay of 0.01. We did not use ViTAdapter (Chen et al., 2022b) for segmentation experiments.
C VISUALIZATIONS
In Figure 6, we visualise the feature grouping results using models trained on ImageNet, COCO and ADE20K. We observe that the feature grouping can separate an image’s foreground and background in all three datasets. When the model receives fine-grained supervision such as bounding boxes and semantic masks, the feature grouping corresponds to more details in the image.
D COMPREHENSIVE COMPARISON
In Table 12 and Table 13, we provide a more comprehensive comparison between GPViT and other visual recognition models on ImageNet-1k classification and COCO Mask R-CNN object detection and instance segmentation. | 1. What is the focus and contribution of the paper on visual recognition tasks?
2. What are the strengths of the proposed approach, particularly in terms of efficiency and global information exchange?
3. Do you have any concerns or questions regarding scaling up the GPViT model for better performance?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a non-hierarchical transformer model for visual recognition tasks (detection, segmentation). Unlike recently proposed hierarchical methods like Swin Transformer that use an hierarchical transformer architecture, exchanging global information between features is computationally expensive for non-hierarchical transformers. To deal with this challenge, the paper proposes an efficient Group Propagation Block (GP Block) to exchange global information between high-resolution features. In a GP block, grouped features formed by learnable group tokens and then global information is exchanged between grouped features. Finally, global information in updated grouped features is returned to the image features through the transformer decoder. GPVIT is evaluated on image classification, semantic segmentation, object detection, and instance segmentation and obtains state-of-the-art performance. Keeping parameters or FLOPs constant, GP-ViT shows improved performance over previous methods in all cases.
Strengths And Weaknesses
Strengths:
The paper is clearly written. The method is explained well and carefully evaluated.
The idea of group propagation is intelligent and is effective at reducing computational cost while keeping the architecture simple.
The ablation studies and especially the explanation of how the architecture was built are informative and help explain the contribution of different components of the architecture.
The method obtains SOTA performance across tasks, with impressive gains when compared to prior methods with similar number of parameters or FLOPs.
Questions: How would you scale up GPViT beyond L3 to obtain better performance? Are there marginal gains to be had on current datasets by scaling up further?
Clarity, Quality, Novelty And Reproducibility
See above. |
ICLR | Title
GPViT: A High Resolution Non-Hierarchical Vision Transformer with Group Propagation
Abstract
We present the Group Propagation Vision Transformer (GPViT): a novel nonhierarchical (i.e. non-pyramidal) transformer model designed for general visual recognition with high-resolution features. High-resolution features (or tokens) are a natural fit for tasks that involve perceiving fine-grained details such as detection and segmentation, but exchanging global information between these features is expensive in memory and computation because of the way self-attention scales. We provide a highly efficient alternative Group Propagation Block (GP Block) to exchange global information. In each GP Block, features are first grouped together by a fixed number of learnable group tokens; we then perform Group Propagation where global information is exchanged between the grouped features; finally, global information in the updated grouped features is returned back to the image features through a transformer decoder. We evaluate GPViT on a variety of visual recognition tasks including image classification, semantic segmentation, object detection, and instance segmentation. Our method achieves significant performance gains over previous works across all tasks, especially on tasks that require high-resolution outputs, for example, our GPViT-L3 outperforms Swin Transformer-B by 2.0 mIoU on ADE20K semantic segmentation with only half as many parameters. Code and pre-trained models are available at https://github.com/ChenhongyiYang/GPViT.
1 INTRODUCTION
Vision Transformer (ViT) architectures have achieved excellent results in general visual recognition tasks, outperforming ConvNets in many instances. In the original ViT architecture, image patches are passed through transformer encoder layers, each containing self-attention and MLP blocks. The spatial resolution of the image patches is constant throughout the network. Self-attention allows for information to be exchanged between patches across the whole image i.e. globally, however it is computationally expensive and does not place an emphasis on local information exchange between nearby patches, as a convolution would. Recent work has sought to build convolutional properties back into
vision transformers (Liu et al., 2021; Wu et al., 2021; Wang et al., 2021) through a hierarchical (pyramidal) architecture. This design reduces computational cost, and improves ViT performance on tasks such as detection and segmentation.
∗Equal Contribution
Is this design necessary for structured prediction? It incorporates additional inductive biases, e.g., the assumption that nearby image tokens contain similar information, which contrasts with the
motivation for ViTs in the first place. A recent study (Li et al., 2022a) demonstrates that a plain non-hierarchical ViT, a model that maintains the same feature resolution in all layers (non-pyramidal), can achieve comparable performance on object detection and segmentation tasks to a hierarchical counterpart. How do we go one step further and surpass this? One path would be to increase feature resolution (i.e. the number of image tokens). A plain ViT with more tokens would maintain high-resolution features throughout the network as there is no downsampling. This would facilitate fine-grained, detailed outputs ideal for tasks such as object detection and segmentation. It also simplifies the design for downstream applications, removing the need to find a way to combine different scales of features in a hierarchical ViT. However, this brings new challenges in terms of computation. Self-attention has quadratic complexity in the number of image tokens. Doubling feature resolution (i.e. quadrupling the number of tokens) would lead to a 16× increase in compute. How do we maintain global information exchange between image tokens without this huge increase in computational cost?
In this paper, we propose the Group Propagation Vision Transformer (GPViT): a non-hierarchical ViT which uses high resolution features throughout, and allows for efficient global information exchange between image tokens. We design a novel Group Propagation Block (GP Block) for use in plain ViTs. Figure 1 provides a high-level illustration of how this block works. In detail, we use learnable group tokens and the cross-attention operation to group a large number of high-resolution image features into a fixed number of grouped features. Intuitively, we can view each group as a cluster of patches representing the same semantic concept. We then use an MLPMixer (Tolstikhin et al., 2021) module to update the grouped features and propagate global information among them. This process allows information exchange at a low computational cost, as the number of groups is much smaller than the number of image tokens. Finally, we ungroup the grouped features using another cross-attention operation where the updated grouped features act as key and value pairs, and are queried by the image token features. This updates the high resolution image token features with the group-propagated information. The GP Block only has a linear complexity in the number of image tokens, which allows it to scale better than ordinary self-attention. This block is the foundation of our simple non-hierarchical vision transformer architecture for general visual recognition.
We conduct experiments on multiple visual recognition tasks including image classification, object detection, instance segmentation, and semantic segmentation. We show significant improvements over previous approaches, including hierarchical vision transformers, under the same model size in all tasks. The performance gain is especially large for object detection and segmentation. For example, in Figure 2, we show GPViT’s advantage over the non-hierarchical DeiT (Touvron et al., 2021a) and hierarchical Swin Transformer (Liu et al., 2021) on those recognition tasks. In addition, our smallest model GPViT-L1 can outperform the Swin Transformer-B (Liu et al., 2021) by 2.6 APbb and 1.4 APmk in COCO Mask R-CNN (He et al., 2017) object detection and instance segmentation with only 30% as many parameters, and
GPViT-L2 outperforms Swin Transformer-B by 0.5 mIoU on UperNet (Xiao et al., 2018) ADE20K semantic segmentation also with only 40% as many parameters.
2 RELATED WORK
Vision Transformers. Vision Transformers have shown great success in visual recognition. They have fewer inductive biases, e.g. translation invariance, scale-invariance, and feature locality (Xu et al., 2021b) than ConvNets and can better capture long-range relationships between image pixels. In the original ViT architecture (Dosovitskiy et al., 2021; Touvron et al., 2021a), images are split into patches and are transformed into tokens that are passed through the encoder of a transformer (Vaswani et al.,
2017). Based on this framework, LeViT (Graham et al., 2021) achieves a significant performance improvement over ViT by combining convolutional and transformer encoder layers. An important development in ViT architectures is the incorporation of a hierarchical feature pyramid structure, as typically seen in ConvNets (Wang et al., 2021; Liu et al., 2021; Xu et al., 2021a; Wu et al., 2021; Fan et al., 2021). For example, Liu et al. (2021) propose a shifted windowing scheme to efficiently propagate feature information in the hierarchical ViT. Such a pyramid architecture provides multi-scale features for a wide range of visual recognition tasks. Following this line of research, recent work has studied the use of hierarchical features in ViTs (Ren et al., 2022b; Guo et al., 2022; Li et al., 2022b; Dong et al., 2022; Hatamizadeh et al., 2022; Chen et al., 2022a; d’Ascoli et al., 2021; Lee et al., 2022). For example, Ren et al. (2022b) introduce using multi-resolution features as attention keys and values to make the model learn better multi-scale information. While this is encouraging, it introduces extra complexity in the downstream model’s design on how to utilize the multi-scale features effectively. Recently, Li et al. (2022a) revisited the plain non-hierarchical ViT for visual recognition; using such a model simplifies the use of features and better decouples the pre-training and downstream stages of model design. Our work extends on this as we examine how to efficiently increase the feature resolution in a non-hierarchical ViT.
Attention Mechanisms in ViTs. A bottleneck when using high resolution features in ViTs is the quadratic complexity in the computation of the self-attention layer. To tackle this challenge, several local attention mechanisms have been proposed (Liu et al., 2021; Huang et al., 2019; Dong et al., 2022; Xu et al., 2021a; Zhang et al., 2022; Han et al., 2021) to allow each image token to attend to a local region instead of the whole image. However, using only local attention hinders a model’s ability to exchange information globally. To counter this problem, RegionViT (Chen et al., 2022a) and GCViT (Hatamizadeh et al., 2022) first down-sample their feature maps and exchange global information between the down-sampled features, before using self-attention to transfer information between the original image features and the down-sampled features. This is similar in spirit to our GP Block. However, unlike RegionViT and GCViT, in a GP Block the grouped features are not constrained to a particular rectangular region, but can correspond to any shape or even entirely disconnected image parts. There is recent work using transformer decoder layers with cross-attention between visual tokens and learnable tokens (Carion et al., 2020; Cheng et al., 2022; Jaegle et al., 2022; Hudson & Zitnick, 2021); however, there are three fundamental differences between these and ours: (i) each of our GP Blocks operates as an ‘encoder-decoder’ architecture with two rounds of cross-attention between visual tokens and group tokens: the first round groups the visual tokens for Group Propagation, and the second round ungroups the updated groups back into visual tokens; (ii) the underlying functionality is different: GP Blocks facilitate more efficient global information propagation throughout the ViT, while previous work applies the decoder to obtain the final results for inference (e.g. bounding boxes or masks in Carion et al. (2020); Cheng et al. (2022)); (iii) the GP Block is a general module that can be inserted into any layer of the ViT, while previous work utilizes the decoder only at the end of the network.
High-Resolution Visual Recognition. Previous work (Wang et al., 2020; Cheng et al., 2020) has shown that high-resolution images and features are beneficial to visual recognition tasks, especially to those requiring the perception of fine-grained image details, for example, semantic segmentation (Wang et al., 2020), pose-estimation (Sun et al., 2019), and small object detection (Yang et al., 2022). For example, HRNet (Wang et al., 2020) introduces a high-resolution ConvNet backbone. It maintains a high-resolution branch and exchanges information between different resolutions of features with interpolation and strided convolutions. Inspired by this work, HRFormer (Yuan et al., 2021) and HRViT (Gu et al., 2022) replace the convolutions in HRNet with self-attention blocks. GPViT is even simpler: it maintains single-scale and high-resolution feature maps without requiring any cross-resolution information to be maintained.
Object-Centric Representation. Our idea of performing information propagation among grouped regions is related to object-centric representation learning (Wang & Gupta, 2018; Kipf & Welling, 2017; Watters et al., 2017; Qi et al., 2021; Locatello et al., 2020; Kipf et al., 2022; Elsayed et al., 2022; Xu et al., 2022). For example, Locatello et al. (2020) propose slot-attention, which allows automatic discovery of object segments via a self-supervised reconstruction objective. Instead of using reconstruction, Xu et al. (2022) utilize language as an alternative signal for object segmentation discovery and show it can be directly transferred to semantic segmentation in a zero-shot manner. All of the above approaches extract object-centric features for downstream applications, whereas our work inserts this object-centric information propagation mechanism as a building block inside ViTs to compute high-resolution representations more efficiently. In this respect, our work is related to Li & Gupta (2018), where graph convolution operations are inserted into ConvNets for better spatial reasoning.
3 METHOD
We present the overall architecture of our Group Propagation Vision Transformer (GPViT) in Figure 3 (a). GPViT is designed for general high-resolution visual recognition. For stable training, we first feed the input image into a down-sampling convolutional stem to generate image features (also known as image tokens), as in Dosovitskiy et al. (2021); Liu et al. (2021). In GPViT we downsample by a factor of 8 by default. The features are therefore higher resolution than in the original ViT where the factor is 16. Unlike most recently proposed methods (Liu et al., 2021; Li et al., 2022b) that adopt a pyramid structure to generate features in multiple resolutions, we keep the features at a high resolution without any down-sampling.
After combining the initial image features with positional embeddings (Vaswani et al., 2017), we feed them into the core GPViT architecture. We replace the original self-attention block in ViT with local attention to avoid the quadratic complexity of self-attention. However, stacking local attention blocks alone does not allow for long-range information exchange between patches and therefore is harmful to performance. To counter this problem, we propose the Group Propagation Block (GP Block)—which we describe in full in Section 3.1—to efficiently propagate global information across the whole image. In our implementation, we use a mixture of GP Blocks and local attention layers to form our GPViT and keep the overall depth unchanged. Lastly, we average the final features to get the model’s output.
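As a rough illustration of this macro-architecture, the sketch below wires together a stride-8 convolutional stem, positional embeddings, an interleaved stack of local attention layers and GP Blocks, and a final average pool. `LocalAttentionBlock` here is a plain self-attention stand-in for the window/LePE attention actually used, and `GPBlock` is a placeholder for the module detailed in Section 3.1; the depth, width, and interleaving pattern are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LocalAttentionBlock(nn.Module):
    """Stand-in: plain self-attention; the paper uses window/LePE attention."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):
        h = self.norm(x)
        return x + self.attn(h, h, h)[0]

class GPBlock(nn.Module):
    """Placeholder for the Group Propagation Block sketched in Section 3.1."""
    def __init__(self, dim):
        super().__init__()
        self.inner = nn.Identity()  # grouping / propagation / ungrouping go here

    def forward(self, x):
        return self.inner(x)

class GPViTSketch(nn.Module):
    def __init__(self, dim=216, depth=12, gp_every=3, num_classes=1000):
        super().__init__()
        # Stride-8 stem: features stay at twice the resolution of a patch-16 ViT.
        self.stem = nn.Sequential(
            nn.Conv2d(3, dim // 2, 3, 2, 1), nn.GELU(),
            nn.Conv2d(dim // 2, dim, 3, 2, 1), nn.GELU(),
            nn.Conv2d(dim, dim, 3, 2, 1),
        )
        self.pos_embed = nn.Parameter(torch.zeros(1, 28 * 28, dim))  # 224x224 input
        # Interleave local attention layers with GP Blocks at a fixed overall depth.
        self.blocks = nn.ModuleList(
            GPBlock(dim) if (i + 1) % gp_every == 0 else LocalAttentionBlock(dim)
            for i in range(depth)
        )
        self.head = nn.Linear(dim, num_classes)

    def forward(self, img):  # img: (B, 3, 224, 224)
        x = self.stem(img).flatten(2).transpose(1, 2)  # (B, N=784, C)
        x = x + self.pos_embed
        for blk in self.blocks:
            x = blk(x)
        return self.head(x.mean(dim=1))  # average the final features
```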
3.1 GROUP PROPAGATION BLOCK
Our key technical contribution is the GP block, which efficiently exchanges global information among all image patches with linear complexity. We visualize the structure of the GP block in Figure 3 (b). It has a bottleneck structure and comprises three stages, namely, Feature Grouping, Group Propagation, and Feature Ungrouping. In the first stage the image features are grouped, then in the second stage global information is propagated between the grouped features, and in the last stage this global information is transferred back to the image features.
Feature Grouping. The input to a GP Block is a matrix of image features $X \in \mathbb{R}^{N \times C}$ (the blue tokens in Figure 3 (b)), where N is the total number of image features (or image tokens) and C is the dimensionality of each feature vector. We use M learnable group tokens stored in a matrix $G \in \mathbb{R}^{M \times C}$ (the multi-colored tokens in Figure 3 (b)), where the group number M is a model hyper-parameter. Grouping is performed using a simplified multi-head attention operation (Vaswani et al., 2017), which gives us grouped features $Y \in \mathbb{R}^{M \times C}$ (the half-and-half tokens in Figure 3 (b)):
$$\mathrm{Attention}(Q, K, V) = \mathrm{Softmax}\left(\frac{QK^T}{\sqrt{d}}\right)V, \qquad (1)$$
$$Y = \mathrm{Concat}_h\left(\mathrm{Attention}(W^Q_h G_h,\ W^K_h X_h,\ W^V_h X_h)\right), \qquad (2)$$
where d is the channel number, h is the head index, and $W^{\{Q,K,V\}}_h$ are the projection matrices for the query, key, and values, respectively, in the attention operation. We remove the feature projection layer after the concatenation operation and set $W^Q_h$ and $W^V_h$ to be identity matrices. The grouped features are therefore simply attention-weighted sums of the image features at each head.
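To make the Feature Grouping stage concrete, below is a minimal PyTorch sketch of Equations 1 and 2. The head count, channel width, and group count are illustrative assumptions; as in the text, only the keys are projected while the query and value projections are identity, and there is no output projection after the head concatenation.

```python
import torch
import torch.nn as nn

class FeatureGrouping(nn.Module):
    """Groups N image tokens into M grouped features via cross-attention."""
    def __init__(self, dim=216, num_groups=64, num_heads=4):
        super().__init__()
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.group_tokens = nn.Parameter(torch.randn(num_groups, dim))  # G
        self.w_k = nn.Linear(dim, dim, bias=False)  # only the keys are projected

    def forward(self, x):  # x: (B, N, C) image tokens
        B, N, C = x.shape
        g = self.group_tokens.unsqueeze(0).expand(B, -1, -1)  # (B, M, C)
        # Split into heads; W_Q and W_V are identity, per the text.
        q = g.view(B, -1, self.num_heads, self.head_dim).transpose(1, 2)
        k = self.w_k(x).view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        v = x.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1) / self.head_dim ** 0.5).softmax(dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(B, -1, C)  # concat heads -> (B, M, C)
        return y  # grouped features Y
```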
Group Propagation. After acquiring the grouped features, we can update and propagate global information between them. We use an MLPMixer (Tolstikhin et al., 2021) (Equation 3; the red box in Figure 3 (b)) to achieve this, as MLPMixer provides a good trade-off between model parameters, FLOPs, and model accuracy. MLPMixer requires a fixed-sized input, which is compatible with our fixed number of groups. Specifically, our MLPMixer contains two consecutive MLPs. Recall that $Y \in \mathbb{R}^{M \times C}$ contains the grouped features from the first Feature Grouping stage. We can update these features to $\tilde{Y} \in \mathbb{R}^{M \times C}$ with the MLPMixer by computing:
$$Y' = Y + \mathrm{MLP}_1(\mathrm{LayerNorm}(Y)^T)^T, \qquad (3)$$
$$\tilde{Y} = Y' + \mathrm{MLP}_2(\mathrm{LayerNorm}(Y')), \qquad (4)$$
where the first MLP is used for mixing information between each group, and the second is used to mix channel-wise information.
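A corresponding sketch of the Group Propagation stage (Equations 3 and 4) is given below. The hidden widths of the two MLPs are illustrative assumptions; the first MLP mixes information across the M groups by operating on the transposed features, and the second mixes channels.

```python
import torch.nn as nn

def two_layer_mlp(d_in, d_hidden):
    return nn.Sequential(nn.Linear(d_in, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_in))

class GroupPropagation(nn.Module):
    """MLPMixer-style update over a fixed-size set of M grouped features."""
    def __init__(self, dim=216, num_groups=64):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mlp1 = two_layer_mlp(num_groups, 2 * num_groups)  # mixes between groups
        self.mlp2 = two_layer_mlp(dim, 4 * dim)                # mixes channels

    def forward(self, y):  # y: (B, M, C) grouped features
        y = y + self.mlp1(self.norm1(y).transpose(1, 2)).transpose(1, 2)  # Eq. 3
        y = y + self.mlp2(self.norm2(y))                                  # Eq. 4
        return y
```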
Feature Ungrouping. After updating the grouped features, we can return global information to the image features through a Feature Ungrouping process. Specifically, the features are ungrouped using a transformer decoder layer where grouped features are queried by the image features.
$$U = \mathrm{Concat}_h\left(\mathrm{Attention}(\tilde{W}^Q_h X_h,\ \tilde{W}^K_h \tilde{Y}_h,\ \tilde{W}^V_h \tilde{Y}_h)\right), \qquad (5)$$
$$Z' = W_{\mathrm{proj}} \cdot \mathrm{Concat}(U, X), \qquad Z'' = Z' + \mathrm{FFN}(Z'), \qquad Z = \mathrm{DWConv}(Z''), \qquad (6)$$
where $\tilde{W}^{\{Q,K,V\}}_h$ are the projection matrices in the attention operation, $W_{\mathrm{proj}}$ is a linear matrix that projects the concatenated features $Z'$ to the same dimension as the image features $X$, FFN is a feed-forward network, and DWConv is a depth-wise convolution layer. We modify the original transformer decoder layer by replacing the first residual connection with a concatenation operation (Equation 5; the blue box in Figure 3 (b)), and move the feature projection layer after this concatenation to transform the features back to the original dimension. We find this modification benefits downstream tasks across model sizes. We take inspiration from Ren et al. (2022b) and add a depth-wise convolution at the end of the GP Block to improve the locality of the features (Equation 6; the yellow box in Figure 3 (b)). Finally, a GP Block outputs Z as its final output.
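Below is a minimal sketch of the Feature Ungrouping stage (Equations 5 and 6). For brevity it uses PyTorch's built-in multi-head attention rather than the per-head projections of Equation 5, and the FFN width is an illustrative assumption; the concatenation in place of the usual residual, the projection back to C channels, and the final depth-wise convolution follow the text.

```python
import torch
import torch.nn as nn

class FeatureUngrouping(nn.Module):
    """Returns global information from M grouped features to N image tokens."""
    def __init__(self, dim=216, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(2 * dim, dim)  # W_proj applied to Concat(U, X)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.dwconv = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)  # depth-wise

    def forward(self, x, y_tilde, hw):  # x: (B, N, C), y_tilde: (B, M, C)
        u, _ = self.attn(query=x, key=y_tilde, value=y_tilde)  # Eq. 5
        z = self.proj(torch.cat([u, x], dim=-1))               # concat replaces residual
        z = z + self.ffn(z)                                    # Eq. 6, FFN with residual
        B, N, C = z.shape
        h, w = hw
        z = z.transpose(1, 2).reshape(B, C, h, w)
        z = self.dwconv(z).flatten(2).transpose(1, 2)          # back to (B, N, C)
        return z
```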
3.2 ARCHITECTURE VARIANTS OF GPVIT
In this paper we study four variants of the proposed GPViT. We present their architectural details in Table 1. These four variants largely differ in the number of feature channels used (i.e. the model width). We use the recently proposed LePE attention (Dong et al., 2022) as local attention by default. The FLOPs are counted using 224×224 inputs. Please refer to Section B.1 in our Appendix for detailed architectural hyper-parameters and training recipes for these variants.
3.3 COMPUTATIONAL COSTS OF HIERARCHICAL AND NON-HIERARCHICAL VITS
We visualize both the non-hierarchical and hierarchical ViT in Figure 4 (a): the non-hierarchical ViT simply stacks attention blocks, while the hierarchical ViT divides the network into several stages and down-samples the feature map at each stage. Naturally, with the same resolution input, the non-hierarchical ViT has a higher computation cost. The cost is divided into two parts as shown in Figure 4 (b): the self-attention module and the FFN module. Our GPViT largely reduces the computation of information propagation by using our GP Block instead of self-attention. However, the cost of the FFN stays high for high resolution image features. Therefore we expect higher FLOPs from GPViT than from a hierarchical ViT with similar model parameters. However, we believe non-hierarchical ViTs are still a direction worth exploring given their simplicity in extracting high-resolution features and because they remove the need to design efficient downstream models that utilize multi-scale features, as a hierarchical ViT requires. This helps to maintain the independence of the model's pre-training and fine-tuning designs (Li et al., 2022a). In our experiments, we show that our GPViT can achieve better detection and segmentation performance compared to state-of-the-art hierarchical ViTs with similar FLOP counts.
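To make the cost argument concrete, the following back-of-the-envelope calculation compares the attention cost of global self-attention, which is quadratic in the token count N, against the two cross-attention rounds of a GP Block, which are linear in N. Constant factors and the FFN cost are deliberately ignored, and the numbers are purely illustrative.

```python
def self_attention_cost(n, c):
    # QK^T and the attention-weighted sum of V each cost ~n*n*c multiply-adds.
    return 2 * n * n * c

def gp_attention_cost(n, m, c):
    # Grouping and ungrouping are each an (n x m) cross-attention.
    return 2 * (2 * n * m * c)

c, m = 216, 64
for n in (28 * 28, 56 * 56, 112 * 112):  # stride-8 tokens for 224/448/896 inputs
    ratio = self_attention_cost(n, c) / gp_attention_cost(n, m, c)
    print(f"N = {n:6d}: self-attention is ~{ratio:5.1f}x the GP Block attention cost")
# The ratio is n / (2m): it grows linearly with the token count, so the GP
# Block's advantage widens exactly when feature resolution is scaled up.
```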
4 EXPERIMENTS
4.1 IMAGENET-1K CLASSIFICATION
Setting: To ensure a fair comparison with previous work, we largely follow the training recipe of Swin Transformer (Liu et al., 2021). We build models using the MMClassification (Contributors, 2020a) toolkit. The models are trained for 300 epochs with a batch size of 2048 using the AdamW optimizer with a weight decay of 0.05 and a peak learning rate of 0.002. A cosine learning rate schedule is used to gradually decrease the learning rate. We use the data augmentations from Liu et al. (2021); these include Mixup (Zhang et al., 2017), Cutmix (Yun et al., 2019), Random erasing (Zhong et al., 2020) and Rand augment (Cubuk et al., 2020).
We compare GPViT with hierarchical and non-hierarchical vision transformers on the ImageNet-1K classification task and report the results in Table 2. As shown in the table, because of our high-resolution design and effective global information propagation via the grouping mechanism, our GPViT outperforms the non-hierarchical baseline DeiT (Touvron et al., 2021a). In addition, GPViT also outperforms Swin Transformer (Liu et al., 2021) and two recently proposed hierarchical counterparts, RegionViT (Chen et al., 2022a) and DWViT (Ren et al., 2022a). This result showcases the potential of non-hierarchical vision transformers and suggests that the hierarchical design inherited from the ConvNet era is not necessary for obtaining a high-performing visual recognition model. This corroborates the work of Li et al. (2022a). That said, we do note that the FLOPs of our models are higher than most alternatives for a similar parameter count. However, for a similar FLOP count we observe that GPViT can achieve a comparable top-1 accuracy, but with many fewer parameters than the alternatives. For example, GPViT-L2 (15.0 G) has similar FLOPs to the Swin Transformer-B (15.4 G) and ShiftViT-B (15.6 G), but it achieves similar accuracy with significantly fewer parameters (23.8 M vs. 88 M and 89 M).
4.2 COCO OBJECT DETECTION AND INSTANCE SEGMENTATION
Setting: We follow Chen et al. (2022b) to use Mask R-CNN and RetinaNet models for the COCO object detection and instance segmentation tasks. We use ViTAdapter (Chen et al., 2022b) to generate multi-scale features as FPN inputs and evaluate the model for both 1× and 3× training schedules.
Results: We compare GPViT to state-of-the-art backbones, all pre-trained on ImageNet-1K. We report the results in Table 3 and Table 4. For competing methods we report the performance of their largest-sized models. For both detectors our GPViT is able to surpass the other backbones by a large margin for a similar parameter count. With Mask R-CNN (Table 3), our smallest GPViT-L1 surpasses its Swin Transformer-B (Liu et al., 2021) counterpart by 2.6 AP^bb and 1.4 AP^mk for the 1× training schedule with fewer FLOPs and only 30% as many parameters. When comparing with models that are also equipped with ViTAdapter (Chen et al., 2022b), we observe that GPViT achieves a better AP with fewer parameters, e.g. our smallest GPViT-L1 outperforms ViT-Adapter-B in both training schedules. These results showcase GPViT's effectiveness at extracting good regional features for object detection and instance segmentation. A similar conclusion can be drawn from the single-stage RetinaNet detector; with RetinaNet (Table 4), GPViT-L1 has FLOPs similar to the recently proposed RegionViT-B (Chen et al., 2022a), but it outperforms RegionViT-B by 2.5 and 2.0 AP^bb in the 1× and 3× schedules respectively, with only 25% as many parameters. In Table 3, we also compare our Mask R-CNN with the recently proposed ViTDet (Li et al., 2021a), which also uses a non-hierarchical ViT as the backbone network. Here we continue to use the standard 3× (36 epochs) training recipe for GPViT. The results show that under similar FLOPs, even though ViTDet is equipped with more parameters (111M), advanced masked-auto-encoder (MAE) pre-training (He et al., 2022), a longer training schedule (100 epochs), and heavy regularizations like large-scale jittering (Ghiasi et al., 2021), our model can still achieve comparable performance, which further validates the effectiveness of GPViT.
4.3 ADE20K SEMANTIC SEGMENTATION
Setting: We follow previous work (Liu et al., 2021) and use UperNet (Xiao et al., 2018) as the segmentation network. We also report performance when using the recently proposed SegFormer (Xie et al., 2021) model. For both models, we train for 160k iterations.
Results: We summarise the segmentation performance of GPViT and other state-of-the-art backbone networks in Table 5. For UperNet, we report results with the largest available model size for the competing methods to show how far we can go in the segmentation task. Thanks to its high-resolution design, GPViT outperforms all competing methods in mIoU with fewer FLOPs and fewer parameters. For example, GPViT-L1 has only 37M parameters, yet it achieves mIoU comparable to competing methods while using only half as many FLOPs. This result tells us that for tasks requiring the perception of fine-grained details, scaling up feature resolution is a better strategy than scaling up model size. GPViT also excels when used with SegFormer. Specifically, GPViT achieves better mIoU than recently proposed vision transformers with similar parameter counts, including HRViT (Gu et al., 2022), which was specifically designed for semantic segmentation. We attribute these promising results to GPViT's high-resolution design and its effective encapsulation of global information.
4.4 ABLATION STUDIES
Setting: We conduct ablation studies using two types of local attention: the simple window attention (Liu et al., 2021) and the more advanced LePE attention (Dong et al., 2022), which we used in previous experiments. We use the L1 level models (Params < 10 M) for all experiments. All models are pre-trained on the ImageNet classification task for 300 epochs using the same setting as in Section 4.1. We report both ImageNet Top-1 accuracy and ADE20K SegFormer mIoU. Please refer to our appendix for more ablation experiments.
Building GPViT step by step. Here we show how we build GPViT step by step and present the results in Table 6. We start building our GPViT from a low-resolution vanilla DeiT with a patch size of 16 and an embedding width of 216 channels (the same as GPViT-Tiny). It achieves 77.4 top-1 accuracy on ImageNet and 42.2 mIoU on ADE20K. Then we increase the resolution by shrinking the patch size to 8. The FLOPs of the ImageNet and ADE20K models increase by 4.4× and 7.0× respectively. ImageNet accuracy increases to 79.2 but training this model for segmentation proves
to be unstable. We see that enlarging the feature resolution using global self-attention leads to the number of FLOPs exploding and makes convergence difficult. We now replace self-attention with window attention (Liu et al., 2021) and the more advanced LePE attention (Dong et al., 2022). For both local attention mechanisms, the FLOPs of the ImageNet and ADE20K models drop to 5.8G and 34G respectively. We then incorporate GP Blocks, and observe that the accuracy and mIoU improve for both types of local attention and FLOPs remain unchanged. These results showcase the effectiveness of using high-resolution features as well as the importance of our combination of local attention blocks and GP Blocks to maintain a reasonable computation cost.
Global information exchange. Here, we compare our GP Block with other blocks that can exchange global information between image features. The competing blocks include the global attention block, the convolution propagation block (Li et al., 2022a), and the shifting window-attention block (Liu et al., 2021) designed for window attention. We follow ViTDet (Li et al., 2022a) to build the convolution propagation block, which stacks two 3×3 convolution layers with a residual connection. We use the original version of the shifting window-attention block as in Liu et al. (2021). The resulting models are acquired by placing the competing blocks in the same position as our GP Block. We report the results in Table 7. We observe that simply replacing the local attention layers with convolution layers causes severe performance drops for both types of local attention. We also observe that replacing local attention with global attention can improve performance, but at a very large increase in FLOPs. For window attention, we found that using the shifting window strategy slightly hurts performance. We postulate that this is caused by a deficit of shifting window layers; half of the Swin Transformer layers are shifting window layers, but we only use four here. For both types of local attention, the GP Block achieves the best performance on ImageNet and ADE20K. These results show the GP Block's effectiveness in propagating global information.
Number of group tokens. Here we study how different numbers of group tokens in the GP Blocks affect the overall model performance. We report the results in Table 8. We find that using a large number of group tokens across the whole network gives us higher accuracy on ImageNet but at additional computational cost. However, using too few group tokens, e.g. 16, harms performance. In GPViT we choose to progressively decrease the
number of group tokens from 64 to 16. This strategy gives us a good trade-off between accuracy and computational cost.
Grouped features propagation. In Table 9 we compare different methods for global information propagation. The results show that even when we add nothing to explicitly propagate global information, the model still achieves good performance (79.8% accuracy on ImageNet). The reason is that in this case the image features are still grouped and ungrouped, so global information can still be exchanged in these two operations. We also find that self-attention achieves slightly better accuracy than MLPMixer (80.7 vs. 80.5), but is more expensive. In GPViT we use MLPMixer for propagating global information.
5 CONCLUSION
In this paper, we have presented the Group Propagation Vision Transformer (GPViT): a non-hierarchical vision transformer designed for high-resolution visual recognition. The core of GPViT is the GP Block, which was proposed to efficiently exchange global information among high-resolution features. The GP Block first forms grouped features and then updates them through Group Propagation. Finally, the updated grouped features are queried by the image features to return global information to them. We have shown that GPViT can achieve better performance than previous work on ImageNet classification, COCO object detection and instance segmentation, and ADE20K semantic segmentation.
6 ACKNOWLEDGEMENT
Prof. Xiaolong Wang's group was supported, in part, by gifts from Qualcomm and an Amazon Research Award. Chenhongyi Yang was supported by a PhD studentship provided by the School of Engineering, University of Edinburgh.
A FURTHER ABLATION STUDIES
A.1 STUDY ON RUNNING EFFICIENCY

In Table 10, we compare the inference speed of GPViT with ViT baselines. Specifically, for each variant of our GPViT, we compare it to ViT models with patch size 16 (low-resolution) and patch size 8 (high-resolution) while keeping the channel dimensions the same. We report inference time using three different input sizes, which correspond to the three typical input sizes used by ImageNet-1k classification, COCO object detection and instance segmentation, and ADE20K semantic segmentation. We draw three observations from the results:
• When using small-sized inputs, GPViT runs slower than the low-resolution ViT. Despite the efficient design of our GP Block and the use of local attention, the high-resolution features still incur a significant cost in the forward pass, slowing down inference.
• When the models are applied to downstream tasks where they take larger-sized inputs, all but the largest GPViT models are faster than their low-resolution ViT counterparts. For example, when the model channel number is 216, GPViT takes 83 ms to process an 800×1280 sized image, while ViT-D216-P16 takes 155 ms. In this case, the self-attention operations with quadratic complexity severely slow down the ViT even with low resolution features. On the other hand, the computation in GP Blocks and local attention grows much more slowly than self-attention as the input scales up.
• GPViT is faster than the high-resolution ViT baselines when using small inputs. In addition, the high-resolution ViTs are not even able to process large-sized inputs: we got Out of Memory errors when using an NVIDIA 2080 Ti GPU with 11 GB of memory. This highlights our technical contribution of efficiently processing high-resolution features with GPViT.
We further study how the computation cost for high-resolution features changes when the model size and input scale up by examining FLOP counts. The results are shown in Figure 5 where we compare GP Block with different group numbers to self-attention and local-attention operations: Self-attention and GP Block can both exchange global information between image features, but the computational cost of GP Block grows much slower than self-attention. Local attention operations have a similar level of efficiency to GP Block, but are unable to exchange global information because of their limited receptive field.
B IMPLEMENTATION DETAILS
B.1 MODEL DETAILS OF GPVIT
The model details of the different GPViT variants are presented in Table 11. The variants differ mainly in their model width (channels) and share similar hyper-parameters elsewhere in the architecture.
B.2 TRAINING RECIPE FOR IMAGENET
The ImageNet experiments are based on the MMClassification toolkit (Contributors, 2020a). The models are trained for 300 epochs with a batch size of 2048; the AdamW optimizer was used with a weight decay of 0.05 and a peak learning rate of 0.002. The cosine learning rate schedule is adopted. The gradient clip is set to 5.0 (we also tested 1.0 and found it worked well too); data augmentation strategies are from Liu et al. (2021) and include Mixup (Zhang et al., 2017), Cutmix (Yun et al., 2019), Random erasing (Zhong et al., 2020) and Rand augment (Cubuk et al., 2020).
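For reference, the recipe above can be collected into a single dictionary, shown below. The key names are illustrative and are not actual MMClassification config fields; only the values come from the text.

```python
imagenet_recipe = {
    "epochs": 300,
    "batch_size": 2048,
    "optimizer": "AdamW",
    "weight_decay": 0.05,
    "peak_lr": 2e-3,
    "lr_schedule": "cosine",
    "grad_clip": 5.0,  # 1.0 was also reported to work well
    "augmentations": ["Mixup", "CutMix", "Random erasing", "Rand augment"],
}
```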
B.3 TRAINING RECIPE FOR COCO
The COCO experiments are based on the MMDetection toolkit (Chen et al., 2019). Following commonly used training settings, both Mask R-CNN and RetinaNet models are trained for 12 epochs (1×) and 36 epochs (3×). For the 3× schedule, we follow previous work (Liu et al., 2021; Ren et al., 2022b) to use multi-scale inputs during training. The AdamW optimizer was used with an initial learning rate of 0.0002 and weight decay of 0.05. We used ViTAdapter (Chen et al., 2022b) to generate multi-scale features and followed the default hyper-parameter settings in Chen et al. (2022b).
B.4 TRAINING RECIPE FOR ADE20K
The ADE20K experiments are based on the MMSegmentation toolkit (Contributors, 2020b). Following commonly used training settings, both UperNet and SegFormer models are trained for 160000 iterations. The input images are cropped to 512×512 during training. The AdamW optimizer was used with an initial learning rate of 0.00006 and weight decay of 0.01. We did not use ViTAdapter (Chen et al., 2022b) for segmentation experiments.
C VISUALIZATIONS

In Figure 6, we visualise the feature grouping results using models trained on ImageNet, COCO and ADE20K. We observe that the feature grouping can separate an image's foreground and background in all three datasets. When the model receives fine-grained supervision such as bounding boxes and semantic masks, the feature grouping corresponds to finer details in the image.
D COMPREHENSIVE COMPARISON
In Table 12 and Table 13, we provide a more comprehensive comparison between GPViT and other visual recognition models on ImageNet-1k classification and COCO Mask R-CNN object detection and instance segmentation.

1. What is the focus and contribution of the paper regarding transformer-based architectures?
2. What are the strengths and weaknesses of the proposed approach, particularly in its application and design choices?
3. Do you have any concerns regarding the novelty of the paper, considering related work in the field?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or suggestions regarding the experimentation and analysis of the proposed method?

Summary Of The Paper
The paper presents a new transformer-based architecture that enables the extraction of high-resolution feature maps from an image, while avoiding hierarchical downsampling in intermediate layers. To deal with the quadratic complexity of normal attention layers, a set of latent group representations is learned that collects information from the high-resolution tokens. The latent group representations then mix their information via an MLPMixer block, and in turn the image patch tokens can cross-attend to the updated group features. Furthermore, local attention is used for self-attention among the patch tokens. Using these principles, the complexity is linear in the number of groups and in the number of tokens, allowing a larger number of input tokens to be used. The resulting architecture is evaluated on several tasks (classification, detection, segmentation) and achieves good results compared to other methods with similar FLOP counts.
Strengths And Weaknesses
Strengths:
While the idea to use such groups to reduce the complexity of attention is not really very novel, the application in this specific architecture is very interesting and shows some promising results. Furthermore, a lot of the design choices seem rather general, and similar systems could be applicable to other modalities such as point clouds, or potentially even multi-modal data.
The overall direction of not using intermediate downsampling is interesting and worth investigating.
The paper is easy to read and quite straightforward to follow.
Weaknesses:
I think there is quite some rather related work that is not discussed. For example, the Perceiver models by DeepMind have a different focus, but they are pretty similar in the underlying idea. Mask2Former also has a similar transformer part, albeit it works on top of a backbone to extract basic features, and similar ideas were also explored in "Generative Adversarial Transformers". These are just a few I can recall off the top of my head. I wouldn't be surprised if there are a lot of other similar approaches. I think it is crucial that such related work is discussed with more care. Given that this is one main part of the contributions, the paper loses quite a bit of novelty in my mind. (And just because this uses an MLPMixer instead of vanilla attention does not make the model inherently different!)
I'm quite sad to see yet another paper that simply claims a method achieves state of the art results on some task, when it's obviously not true. Simply looking at other approaches, it becomes clear that the numbers are far from state of the art. I know the focus here is likely on models with similar FLOP counts, but in most of the sentences where claims about being state of the art are made, this fact is simply omitted. It would be very important to clarify this! And apart from that, it would also be interesting to see how well this type of architecture generalizes to bigger versions of the model with FLOP counts similar to the actual state of the art. This should clearly be clarified more!
Just looking at parameter counts and FLOP counts is not really giving a clear picture, but the actual throughput of a model is also important. (Have a look at the paper called "The Efficiency Misnomer".) Considering this architecture, I wouldn't be surprised if such a comparison would actually be favorable for the model, but I think it's sad that we don't see such numbers.
I think the idea of the approach is interesting and it would be great to see a bit more experiments with respect to the design choices that went into the different building blocks. E.g. how many groups should be used, or why is the MLPMixer used instead of normal self-attention? Indeed some of these things are discussed in the appendix, but I think they should be featured more prominently in the main paper in order to highlight the important aspects of this architecture design.
Clarity, Quality, Novelty And Reproducibility
Overall the paper is pretty clear and easy to follow. Especially the overview figure of the architecture clearly explains the main building blocks. Even though I am not aware of approaches that follow the exact same approach, there are quite some very related existing methods that are not discussed here and I think this limits the novelty. Nevertheless a deeper view into this architecture could be interesting, but sadly the results here are more focused on showing "state of the art" performance. As such we also don't gain a lot of novel insights. Given the authors state that code will be released upon acceptance, I assume the results should be reproducible. Furthermore, the results are based on several MM toolkits, potentially further improving the reproducibility.
ICLR

Title
GPViT: A High Resolution Non-Hierarchical Vision Transformer with Group Propagation
Abstract
We present the Group Propagation Vision Transformer (GPViT): a novel nonhierarchical (i.e. non-pyramidal) transformer model designed for general visual recognition with high-resolution features. High-resolution features (or tokens) are a natural fit for tasks that involve perceiving fine-grained details such as detection and segmentation, but exchanging global information between these features is expensive in memory and computation because of the way self-attention scales. We provide a highly efficient alternative Group Propagation Block (GP Block) to exchange global information. In each GP Block, features are first grouped together by a fixed number of learnable group tokens; we then perform Group Propagation where global information is exchanged between the grouped features; finally, global information in the updated grouped features is returned back to the image features through a transformer decoder. We evaluate GPViT on a variety of visual recognition tasks including image classification, semantic segmentation, object detection, and instance segmentation. Our method achieves significant performance gains over previous works across all tasks, especially on tasks that require high-resolution outputs, for example, our GPViT-L3 outperforms Swin Transformer-B by 2.0 mIoU on ADE20K semantic segmentation with only half as many parameters. Code and pre-trained models are available at https://github.com/ChenhongyiYang/GPViT.
1 INTRODUCTION
Vision Transformer (ViT) architectures have achieved excellent results in general visual recognition tasks, outperforming ConvNets in many instances. In the original ViT architecture, image patches are passed through transformer encoder layers, each containing self-attention and MLP blocks. The spatial resolution of the image patches is constant throughout the network. Self-attention allows information to be exchanged between patches across the whole image, i.e. globally; however, it is computationally expensive and does not place an emphasis on local information exchange between nearby patches, as a convolution would. Recent work has sought to build convolutional properties back into
vision transformers (Liu et al., 2021; Wu et al., 2021; Wang et al., 2021) through a hierarchical (pyramidal) architecture. This design reduces computational cost, and improves ViT performance on tasks such as detection and segmentation.
Is this design necessary for structured prediction? It incorporates additional inductive biases, e.g. the assumption that nearby image tokens contain similar information, which contrasts with the
motivation for ViTs in the first place. A recent study (Li et al., 2022a) demonstrates that a plain non-hierarchical ViT, a model that maintains the same feature resolution in all layers (non-pyramidal), can achieve comparable performance on object detection and segmentation tasks to a hierarchical counterpart. How do we go one step further and surpass this? One path would be to increase feature resolution (i.e. the number of image tokens). A plain ViT with more tokens would maintain high-resolution features throughout the network as there is no downsampling. This would facilitate fine-grained, detailed outputs ideal for tasks such as object detection and segmentation. It also simplifies the design for downstream applications, removing the need to find a way to combine different scales of features in a hierarchical ViT. However, this brings new challenges in terms of computation. Self-attention has quadratic complexity in the number of image tokens. Doubling feature resolution (i.e. quadrupling the number of tokens) would lead to a 16× increase in compute. How do we maintain global information exchange between image tokens without this huge increase in computational cost?
In this paper, we propose the Group Propagation Vision Transformer (GPViT): a non-hierarchical ViT which uses high resolution features throughout, and allows for efficient global information exchange between image tokens. We design a novel Group Propagation Block (GP Block) for use in plain ViTs. Figure 1 provides a high-level illustration of how this block works. In detail, we use learnable group tokens and the cross-attention operation to group a large number of high-resolution image features into a fixed number of grouped features. Intuitively, we can view each group as a cluster of patches representing the same semantic concept. We then use an MLPMixer (Tolstikhin et al., 2021) module to update the grouped features and propagate global information among them. This process allows information exchange at a low computational cost, as the number of groups is much smaller than the number of image tokens. Finally, we ungroup the grouped features using another cross-attention operation where the updated grouped features act as key and value pairs, and are queried by the image token features. This updates the high resolution image token features with the group-propagated information. The GP Block only has a linear complexity in the number of image tokens, which allows it to scale better than ordinary self-attention. This block is the foundation of our simple non-hierarchical vision transformer architecture for general visual recognition.
We conduct experiments on multiple visual recognition tasks including image classification, object detection, instance segmentation, and semantic segmentation. We show significant improvements over previous approaches, including hierarchical vision transformers, under the same model size in all tasks. The performance gain is especially large for object detection and segmentation. For example, in Figure 2, we show GPViT's advantage over the non-hierarchical DeiT (Touvron et al., 2021a) and hierarchical Swin Transformer (Liu et al., 2021) on those recognition tasks. In addition, our smallest model GPViT-L1 can outperform the Swin Transformer-B (Liu et al., 2021) by 2.6 AP^bb and 1.4 AP^mk in COCO Mask R-CNN (He et al., 2017) object detection and instance segmentation with only 30% as many parameters, and
GPViT-L2 outperforms Swin Transformer-B by 0.5 mIoU on UperNet (Xiao et al., 2018) ADE20K semantic segmentation also with only 40% as many parameters.
2 RELATED WORK
Vision Transformers. Vision Transformers have shown great success in visual recognition. They have fewer inductive biases, e.g. translation invariance, scale-invariance, and feature locality (Xu et al., 2021b) than ConvNets and can better capture long-range relationships between image pixels. In the original ViT architecture (Dosovitskiy et al., 2021; Touvron et al., 2021a), images are split into patches and are transformed into tokens that are passed through the encoder of a transformer (Vaswani et al.,
2017). Based on this framework, LeViT (Graham et al., 2021) achieves a significant performance improvement over ViT by combining convolutional and transformer encoder layers. An important development in ViT architectures is the incorporation of a hierarchical feature pyramid structure, as typically seen in ConvNets (Wang et al., 2021; Liu et al., 2021; Xu et al., 2021a; Wu et al., 2021; Fan et al., 2021). For example, Liu et al. (2021) propose a shifted windowing scheme to efficiently propagate feature information in the hierarchical ViT. Such a pyramid architecture provides multi-scale features for a wide range of visual recognition tasks. Following this line of research, recent work has studied the use of hierarchical features in ViTs (Ren et al., 2022b; Guo et al., 2022; Li et al., 2022b; Dong et al., 2022; Hatamizadeh et al., 2022; Chen et al., 2022a; d’Ascoli et al., 2021; Lee et al., 2022). For example, Ren et al. (2022b) introduce using multi-resolution features as attention keys and values to make the model learn better multi-scale information. While this is encouraging, it introduces extra complexity in the downstream model’s design on how to utilize the multi-scale features effectively. Recently, Li et al. (2022a) revisited the plain non-hierarchical ViT for visual recognition; using such a model simplifies the use of features and better decouples the pre-training and downstream stages of model design. Our work extends on this as we examine how to efficiently increase the feature resolution in a non-hierarchical ViT.
Attention Mechanisms in ViTs. A bottleneck when using high resolution features in ViTs is the quadratic complexity in the computation of the self-attention layer. To tackle this challenge, several local attention mechanisms have been proposed (Liu et al., 2021; Huang et al., 2019; Dong et al., 2022; Xu et al., 2021a; Zhang et al., 2022; Han et al., 2021) to allow each image token to attend to local region instead of the whole image. However, using only local attention hinders a model’s ability to exchange information globally. To counter this problem, RegionViT (Chen et al., 2022a) and GCViT (Hatamizadeh et al., 2022) first down-sample their feature maps and exchange global information between the down-sampled features, before using self-attention to transfer information between the original image features and the down-sampled features. This is similar in spirit to our GP Block. However, unlike RegionViT and GCViT, in a GP Block the grouped features are not constrained to a particular rectangular region, but can correspond to any shape or even entirely disconnected image parts. There is recent work using transformer decoder layers with cross-attention between visual tokens and learnable tokens (Carion et al., 2020; Cheng et al., 2022; Jaegle et al., 2022; Hudson & Zitnick, 2021), however, there are three fundamental differences between these and ours: (i) Each of our GP blocks operates as an ‘encoder-decoder’ architecture with two rounds of cross-attention between visual tokens and group tokens: the first round groups the visual tokens for group propagation, and the second round ungroups the updated groups back into visual tokens; (ii) The underlying functionality is different: GP blocks facilitate more efficient global information propagation throughout the ViT, while previous work applies the decoder to obtain the final results for inference (e.g bounding boxes, or masks in Carion et al. (2020); Cheng et al. (2022)); (iii) The GP block is a general module that can be insert into any layer of the ViT, while previous work utilizes the decoder only in the end of the network.
High-Resolution Visual Recognition. Previous work (Wang et al., 2020; Cheng et al., 2020) has shown that high-resolution images and features are beneficial to visual recognition tasks, especially to those requiring the perception of fine-grained image details, for example, semantic segmentation (Wang et al., 2020), pose-estimation (Sun et al., 2019), and small object detection (Yang et al., 2022). For example, HRNet (Wang et al., 2020) introduces a high-resolution ConvNet backbone. It maintains a high-resolution branch and exchanges information between different resolutions of features with interpolation and strided convolutions. Inspired by this work, HRFormer (Yuan et al., 2021) and HRViT (Gu et al., 2022) replace the convolutions in HRNet with self-attention blocks. GPViT is even simpler: it maintains single-scale and high-resolution feature maps without requiring any cross-resolution information to be maintained.
Object-Centric Representation. Our idea of performing information propagation among grouped regions is related to object-centric representation learning (Wang & Gupta, 2018; Kipf & Welling, 2017; Watters et al., 2017; Qi et al., 2021; Locatello et al., 2020; Kipf et al., 2022; Elsayed et al., 2022; Xu et al., 2022). For example, Locatello et al. (2020) proposes slot-attention, which allows automatic discovery of object segments via a self-supervised reconstruction objective. Instead of using reconstruction, Xu et al. (2022) utilizes language as an alternative signal for object segmentation discovery and shows it can be directly transferred to semantic segmentation in a zero-shot manner. All the above work extract object-centric features for downstream applications, while our work inserts this object-centric information propagation mechanism as a building block inside ViTs to compute
high-resolution representations more efficiently and improve high-resolution features. In this respect, our work is related to Li & Gupta (2018) where the graph convolution operations are inserted into ConvNets for better spatial reasoning.
3 METHOD
We present the overall architecture of our Group Propagation Vision Transformer (GPViT) in Figure 3 (a). GPViT is designed for general high-resolution visual recognition. For stable training, we first feed the input image into a down-sampling convolutional stem to generate image features (also known as image tokens), as in Dosovitskiy et al. (2021); Liu et al. (2021). In GPViT we downsample by a factor of 8 by default. The features are therefore higher resolution than in the original ViT where the factor is 16. Unlike most recently proposed methods (Liu et al., 2021; Li et al., 2022b) that adopt a pyramid structure to generate features in multiple resolutions, we keep the features at a high resolution without any down-sampling.
After combining the initial image features with positional embeddings (Vaswani et al., 2017), we feed them into the core GPViT architecture. We replace the original self-attention block in ViT with local attention to avoid the quadratic complexity of self-attention. However, stacking local attention blocks alone does not allow for long-range information exchange between patches and therefore is harmful to performance. To counter this problem, we propose the Group Propagation Block (GP Block)—which we describe in full in Section 3.1—to efficiently propagate global information across the whole image. In our implementation, we use a mixture of GP Blocks and local attention layers to form our GPViT and keep the overall depth unchanged. Lastly, we average the final features to get the model’s output.
3.1 GROUP PROPAGATION BLOCK
Our key technical contribution is the GP block, which efficiently exchanges global information between each image patch with a linear complexity. We visualize the structure of the GP block in Figure 3 (b). It has a bottleneck structure and comprises of three stages, namely, Feature Grouping, Group Propagation, and Feature Ungrouping. In the first stage the image features are grouped, then in the second stage global information is propagated between the grouped features, and in the last stage, this global information is transferred back to the image features.
Feature Grouping. The input to a GP Block is a matrix of image features X ∈ RN×C (The blue tokens in Figure 3 (b)) where N is the total number of image features (or image tokens) and C is the dimensionality of each feature vector. We use M learnable group tokens stored in a matrix G ∈ RM×C (the multi-colored tokens in Figure 3 (b)) where the group number M is a model hyper-parameter. Grouping is performed using a simplified multi-head attention operation (Vaswani et al., 2017), which gives us grouped features Y ∈ RM×C (the half-and-half tokens in Figure 3 (b)):
Attention(Q,K, V ) = Softmax( QKT√
d )V, (1) Y = Concat{h} ( Attention(WQh Gh,W K h Xh,W V h Xh) ) , (2)
where d is the channel number, h is the head index, and W {Q,K,V }h are projection matrices for the query, key, and values, respectively in the attention operation. We remove the feature projection layers after the concatenation operation and set WQh and W V h to be identity matrix. Therefore, the grouped features are simply the weighted sum of image features at each head where the weights are computed by the attention operation.
Group Propagation. After acquiring the grouped features, we can update and propagate global information between them. We use an MLPMixer (Tolstikhin et al., 2021) (Equation 3; the red box in Figure 3 (b)) to achieve this, as MLPMixer provides a good trade-off between model parameters, FLOPs, and model accuracy. MLPMixer requires a fixed-sized input, which is compatible with our fixed number of groups. Specifically, our MLPMixer contains two consecutive MLPs. Recall that Y ∈ RM×C contains the grouped features from the first Feature Grouping stage. We can update these features to Ỹ ∈ RM×C with the MLPMixer by computing:
Y ′ = Y + MLP1(LayerNorm(Y )T ))T , (3)
Ỹ = Y ′ + MLP2(LayerNorm(Y ′))), (4) where the first MLP is used for mixing information between each group, and the second is used to mix channel-wise information.
Feature Ungrouping. After updating the grouped features, we can return global information to the image features through a Feature Ungrouping process. Specifically, the features are ungrouped using a transformer decoder layer where grouped features are queried by the image features.
U = Concat{h} ( Attention(W̃Qh Xh, W̃ K h Ỹh, W̃ V h Ỹh) ) , (5)
Z ′ = Wproj ∗ Concat(U,X), Z ′′ = Z ′ + FFN(Z ′), Z = DWConv(Z ′′), (6)
where W̃ {Q,K,V }h are the projection matrices in the attention operation, Wproj is a linear matrix that projects concatenated features Z ′ to the same dimension as image features X , FFN is a feed-forward network, and DWConv is a depth-wise convolution layer. We modify the original transformer decoder layer by replacing the first residual connection with a concatenation operation (Equation 5; the blue box in Figure 3 (b)), and move the feature projection layer after this to transform the feature to the original dimension. We find this modification benefits the downstream tasks in different sizes of models. We take inspiration from Ren et al. (2022b) and add a depth-wise convolution at the end of the GP Block to improve the locality property of the features (Equation 6; the yellow box in Figure 3 (b)). Finally, a GP Block outputs Z as its final output.
3.2 ARCHITECTURE VARIANTS OF GPVIT
In this paper we study four variants of the proposed GPViT. We present their architectural details in Table 1. These four variants largely differ in the number of feature channels used (i.e. the model width). We use the recently proposed LePE attention (Dong et al., 2022) as local attention by default. The FLOPs are counted using 224×224 inputs. Please refer to Section B.1 in our Appendix for detailed architectural hyper-parameters and training recipes for these variants.
3.3 COMPUTATIONAL COSTS OF HIERARCHICAL AND NON-HIERARCHICAL VITS.
We visualize both the non-hierarchical and hierarchical ViT in Figure 4 (a), where the non-hierarchical ViT simply stacks attention blocks and the hierarchical ViT divides the network into several stages and down-samples the feature map at each stage. Naturally, with the same resolution input, the non-hierarchical ViT will have a higher computation cost. The cost is divided into two parts as shown in Figure 4 (b): the self-attention module and the FFN module. Our GPViT largely reduces the computation of information propagation by using our GP Block instead of self-attention. However, the cost of the FFN stays high for high resolution image features. Therefore we will expect higher FLOPs from GPViT compared to a hierarchical ViT given similar model parameters. However, we believe non-
hierarchical ViTs are still a direction worthy of exploration given their simplicity in extracting high-resolution features and the removal of the need to study the design of efficient downstream models that utilize multi-scale features as required for a hierarchical ViT. This helps to maintain the independence of the model’s pre-training and fine-tuning designs (Li et al., 2022a). In our experiments, we show that our GPViT can achieve better detection and segmentation performance compared to state-of-the-art hierarchical ViTs with similar FLOP counts.
4 EXPERIMENTS
4.1 IMAGENET-1K CLASSIFICATION
Setting: To ensure a fair comparison with previous work, we largely follow the training recipe of Swin Transformer (Liu et al., 2021). We build models using the MMClassification (Contributors, 2020a) toolkit. The models are trained for 300 epochs with a batch size of 2048 using the AdamW optimizer with a weight decay of 0.05 and a peak learning rate of 0.002. A cosine learning rate schedule is used to gradually decrease the learning rate. We use the data augmentations from Liu et al. (2021); these include Mixup (Zhang et al., 2017), Cutmix (Yun et al., 2019), Random erasing (Zhong et al., 2020) and Rand augment (Cubuk et al., 2020).
We compare GPViT with hierarchical and nonhierarchical vision transformers on the ImageNet-1K classification task and report the results in Table 2. As shown in the table, because of our high-resolution design and effective global information propagation via the grouping mechanism, our GPViT outperforms outperforms the non-hierarchical baseline DeiT (Touvron et al., 2021a). In addition, GPViT also outperforms Swin Transformer (Liu et al., 2021) and two recently proposed hierarchical counterparts RegionViT (Chen et al., 2022a) and DWViT (Ren et al., 2022a). This result showcases the potential of nonhierarchical vision transformers and suggests that the hierarchical design inherited from the ConvNet era is not necessary for obtaining a high-performing visual recognition model. This corroborates the work of Li et al. (2022a). That said, we do note that the FLOPs of our models are higher than most alternatives for a
similar parameter count. However, for a similar FLOP count we observe that GPViT can achieve a comparable top-1 accuracy, but with many fewer parameters than the alternatives. For example, GPViT-L2 (15.0 G) has similar FLOPs to the Swin Transformer-B (15.4 G) and ShiftViT-B (15.6 G), but it achieves a similar accuracy with significantly fewer parameters (23.8 M v.s. 88 M and 89 M).
4.2 COCO OBJECT DETECTION AND INSTANCE SEGMENTATION
Setting: We follow Chen et al. (2022b) to use Mask R-CNN and RetinaNet models for the COCO object detection and instance segmentation tasks. We use ViTAdapter (Chen et al., 2022b) to generate multi-scale features as FPN inputs and evaluate the model for both 1× and 3× training schedules.
Results: We compare GPViT to state-of-the-art backbones, all pre-trained on ImageNet-1K. We report the results in Table 3 and Table 4. For competing methods we report the performance of their largest-sized models. For both detectors our GPViT is able to surpass the other backbones by a large margin for a similar parameter count. With Mask R-CNN (Table 3), our smallest GPViT-L1 surpasses its Swin Transformer-B (Liu et al., 2021) counterpart by 2.6 APbb and 1.4 APmk for the 1× training schedule with fewer FLOPs and only 30% as many parameters. When comparing with models that are also equipped with ViTAdapter (Chen et al., 2022b), we observe that GPViT achieves a better
AP with fewer parameters, e.g. our smallest GPViT-L1 outperforms ViT-Adapter-B in both training schedules. These results showcase GPViT’s effectiveness at extracting good regional features for object detection and instance segmentation. A similar conclusion can be drawn from the single-stage RetinaNet detector; with RetinaNet (Table 4), GPViT-L1 has FLOPs similar to the recently proposed RegionViT-B (Chen et al., 2022a), but it outperforms RegionViT-B by 2.5 and 2.0 APbb in both 1× and 3× schedules with only 25% as many parameters. In Table 3, we also compare our Mask R-CNN with the recently proposed ViTDet (Li et al., 2021a) that also uses a non-hierarchical ViT as the backbone network. Here we continue to use the standard 3× (36 epochs) training recipe for GPViT. The results show that under similar FLOPs, even if ViTDet is equipped with more parameters (111M), advanced masked-auto-encoder (MAE) pre-training (He et al., 2022), a longer training schedule (100 epochs), and heavy regularizations like large-scale jittering (Ghiasi et al., 2021), our model can still achieve a comparable performance, which further validates the effectiveness of GPViT.
4.3 ADE20K SEMANTIC SEGMENTATION
Setting: We follow previous work (Liu et al., 2021) and use UperNet (Xiao et al., 2018) as the segmentation network. We also report performance when using the recently proposed SegFormer (Xie et al., 2021) model. For both models, we train for 160k iterations.
Results: We summarise the segmentation performance of GPViT and other state-of-the-art backbone networks in Table 5. For UperNet, we report results with the largest available model size for the competing methods to show how far we can go in the segmentation task. Thanks to its high-resolution design, GPViT outperforms all competing methods in mIoU with fewer FLOPs and fewer parameters. For example, GPViT-L1 only has 37M parameters but it can achieve comparable mIoU to methods with only half the number of FLOPs. This result tells us that for tasks requiring the perception of fine-grained details, scaling-up feature resolution is a better strategy than scaling up model size. GPViT also excels when used with SegFormer. Specifically, GPViT achieves better mIoU than recently proposed vision transformers with similar parameter counts, including HRViT (Gu et al., 2022) that was specifically designed for semantic segmentation. We attribute these promising results to GPViT’s high-resolution design and its effective encapsulation of global information.
4.4 ABLATION STUDIES
Setting: We conduct ablation studies using two types of local attention: the simple window attention (Liu et al., 2021) and the more advanced LePE attention (Dong et al., 2022), which we used in previous experiments. We use the L1 level models (Param < 10 M) for all experiments. All models are pre-trained on ImageNet classification task for 300 epochs using the same setting as in Section 4.1. We report both ImageNet Top-1 accuracy and ADE20K SegFormer mIOU. Please refer to our appendix for more ablation experiments.
Building GPViT step by step. Here we show how we build GPViT step by step and present the results in Table 6. We start building our GPViT from a lowresolution vanilla DeiT with patch sizes of 16 and embedding channels of 216 (same as GPViT-Tiny). It achieves 77.4 top-1 accuracy on ImageNet and 42.2 mIoU on ADE20K. Then we increase the resolution by shrinking the patch size to 8. The FLOPs of the ImageNet and ADE20K models increase by 4.4× and 7.0× respectively. ImageNet accuracy increases to 79.2 but training this model for segmentation proves
to be unstable. We see that enlarging the feature resolution using global self-attention leads to the number of FLOPs exploding and makes convergence difficult. We now replace self-attention with window attention (Liu et al., 2021) and the more advanced LePE attention (Dong et al., 2022). For both local attention mechanisms, the FLOPs of the ImageNet and ADE20K models drop to 5.8G and 34G respectively. We then incorporate GP Blocks, and observe that the accuracy and mIoU improve for both types of local attention and FLOPs remain unchanged. These results showcase the effectiveness of using high-resolution features as well as the importance of our combination of local attention blocks and GP Blocks to maintain a reasonable computation cost.
Global information exchange. Here, we compare our GP Block with other blocks that can exchange global information between image features. The competing blocks include the global attention block, the convolution propagation block (Li et al., 2022a), and the shifting window-attention block (Liu et al., 2021) designed for window attention. We follow ViTDet (Li et al., 2022a) to build the convolution propagation block, which stacks two 3×3 convolution layers with a residual connection. We use the original version of the shifting window-attention block as in Liu et al. (2021). The resulting models are obtained by placing the competing blocks in the same position as our GP Block. We report the results in Table 7. We observe that simply replacing the local attention layers with convolution layers causes severe performance drops for both types of local attention. We also observe that replacing local attention with global attention can improve performance, but at the cost of a very large increase in FLOPs. For window attention, we found that using the shifting window strategy slightly hurts performance. We postulate that this is caused by a deficit of shifting window layers: half of the Swin Transformer layers are shifting window layers, but we only use four here. For both types of local attention, GP Block achieves the best performance on ImageNet and ADE20K. These results show GP Block's effectiveness in propagating global information.
Number of group tokens. Here we study how different combinations of the number of group tokens in GP Blocks affect overall model performance. We report the results in Table 8. We find that using a large number of group tokens across the whole network gives higher accuracy on ImageNet but at additional computational cost, while using too few group tokens (e.g., 16 throughout) harms performance. In GPViT we choose to progressively decrease the number of group tokens from 64 to 16. This strategy gives us a good trade-off between accuracy and computational cost.
Grouped features propagation. In Table 9 we compare different methods for global information propagation. The results show that even when we add nothing to explicitly propagate global information, the model can still achieve good performance (79.8% accuracy on ImageNet). The reason is that the image features are still grouped and ungrouped, so global information can still be exchanged through these two operations. We also find that self-attention achieves slightly better accuracy than MLPMixer (80.7 vs. 80.5), but is more expensive. In GPViT we use MLPMixer for propagating global information.
5 CONCLUSION
In this paper, we have presented the Group Propagation Vision Transformer (GPViT): a non-hierarchical vision transformer designed for high-resolution visual recognition. The core of GPViT is the GP Block, which was proposed to efficiently exchange global information among high-resolution features. The GP Block first forms grouped features and then updates them through Group Propagation. Finally, these updated group features are queried back to the image features. We have shown that GPViT can achieve better performance than previous work on ImageNet classification, COCO object detection and instance segmentation, and ADE20K semantic segmentation.
6 ACKNOWLEDGEMENT
Prof. Xiaolong Wang’s group was supported, in part, by gifts from Qualcomm and Amazon Research Award. Chenhongyi Yang was supported by a PhD studentship provided by the School of Engineering, University of Edinburgh.
A FURTHER ABLATION STUDIES
A.1 STUDY ON RUNNING EFFICIENCY
In Table 10, we compare the inference speed of GPViT with ViT baselines. Specifically, for each variant of our GPViT, we compare it to ViT models with patch size 16 (low-resolution) and patch size 8 (high-resolution) while keeping the channel dimensions the same. We report inference time using three different input sizes, which correspond to the three typical input sizes used by ImageNet-1k classification, COCO object detection and instance segmentation, and ADE20K semantic segmentation. We draw three observations from the results (a minimal timing sketch follows the list below):
• When using small-sized inputs, GPViT runs slower than the low-resolution ViT. Despite the efficient design of our GP Block and the use of local attention, the high-resolution design still incurs a significant cost for forward passes, slowing down inference speed.
• When the models are applied to downstream tasks where they take larger-sized inputs, all but the largest GPViT models are faster than their low-resolution ViT counterparts. For example, when the model channel number is 216, GPViT takes 83 ms to process an 800×1280 sized image, while ViT-D216-P16 takes 155 ms. In this case, the self-attention operations with quadratic complexity severely slow down the speed of the ViT even with low resolution features. On the other hand, the computations in GP Block and local attentions grow much less than self-attention when the input scales up.
• GPViT is faster than the high-resolution ViT baselines when using small inputs. In addition, the high-resolution ViTs are not even able to process large-sized inputs: we got out-of-memory errors when using an NVIDIA 2080Ti GPU with 11 GB of memory. This highlights our technical contribution of efficiently processing high-resolution features with GPViT.
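For reference, below is a minimal sketch of how per-image latencies like those in Table 10 can be measured in PyTorch. It is illustrative only: the exact benchmarking protocol used for the paper may differ (e.g., CUDA event timing or different batch sizes), and `model` is assumed to be an already-constructed GPViT or ViT backbone.

```python
import time
import torch

@torch.no_grad()
def measure_latency_ms(model, input_hw, n_warmup=10, n_runs=50, device="cuda"):
    """Average forward-pass latency in milliseconds for one input size."""
    model = model.eval().to(device)
    x = torch.randn(1, 3, *input_hw, device=device)
    for _ in range(n_warmup):                 # warm up CUDA kernels / caches
        model(x)
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_runs):
        model(x)
    torch.cuda.synchronize()                  # wait for all kernels to finish
    return (time.perf_counter() - start) / n_runs * 1000.0

# Typical input sizes for ImageNet-1k, COCO, and ADE20K respectively:
# for hw in [(224, 224), (800, 1280), (512, 512)]:
#     print(hw, measure_latency_ms(model, hw))
```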
We further study how the computation cost for high-resolution features changes when the model size and input scale up by examining FLOP counts. The results are shown in Figure 5 where we compare GP Block with different group numbers to self-attention and local-attention operations: Self-attention and GP Block can both exchange global information between image features, but the computational cost of GP Block grows much slower than self-attention. Local attention operations have a similar level of efficiency to GP Block, but are unable to exchange global information because of their limited receptive field.
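As a rough illustration of why the GP Block's cost grows more slowly than self-attention's, the back-of-the-envelope count below compares the quadratic token-mixing term of global self-attention with the linear grouping and ungrouping terms of a GP Block. The constants are simplified assumptions of ours (linear projections and the group-propagation update itself are ignored), not the exact FLOP counts plotted in Figure 5.

```python
def self_attention_flops(n_tokens, channels):
    # QK^T and the attention-weighted sum over V each cost ~N^2 * C.
    return 2 * n_tokens**2 * channels

def gp_block_flops(n_tokens, n_groups, channels):
    # Grouping and ungrouping are cross-attentions between N image tokens
    # and M group tokens, each ~2 * N * M * C: linear in N for fixed M.
    return 2 * (2 * n_tokens * n_groups * channels)

for n in (196, 784, 3136):  # 14x14, 28x28, 56x56 feature maps
    print(n, self_attention_flops(n, 256), gp_block_flops(n, 64, 256))
```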
B IMPLEMENTATION DETAILS
B.1 MODEL DETAILS OF GPVIT
The model details of the different GPViT variants are presented in Table 11. The variants differ mainly in their model width (number of channels) and share similar hyper-parameters in all other architectural design choices.
B.2 TRAINING RECIPE FOR IMAGENET
The ImageNet experiments are based on the MMClassification toolkit (Contributors, 2020a). The models are trained for 300 epochs with a batch size of 2048; the AdamW optimizer is used with a weight decay of 0.05 and a peak learning rate of 0.002, and a cosine learning rate schedule is adopted. The gradient clip is set to 5.0 (we also tested 1.0 and found it worked well too). Data augmentation strategies are from Liu et al. (2021) and include Mixup (Zhang et al., 2017), CutMix (Yun et al., 2019), Random Erasing (Zhong et al., 2020) and RandAugment (Cubuk et al., 2020).
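The snippet below is a minimal plain-PyTorch sketch of these optimization settings. The actual experiments use MMClassification configs, and the stand-in model and data here exist only so the sketch runs; warm-up and the augmentation pipeline are omitted.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR
from torch.utils.data import DataLoader, TensorDataset

# Stand-in model and data (not GPViT) so the sketch is runnable.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 1000))
criterion = torch.nn.CrossEntropyLoss()
train_loader = DataLoader(
    TensorDataset(torch.randn(8, 3, 32, 32), torch.randint(0, 1000, (8,))),
    batch_size=4)

optimizer = AdamW(model.parameters(), lr=2e-3, weight_decay=0.05)
scheduler = CosineAnnealingLR(optimizer, T_max=300)   # cosine decay over 300 epochs

for epoch in range(300):
    for images, targets in train_loader:              # batch size 2048 in the paper
        loss = criterion(model(images), targets)
        optimizer.zero_grad()
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=5.0)  # clip 5.0
        optimizer.step()
    scheduler.step()
```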
B.3 TRAINING RECIPE FOR COCO
The COCO experiments are based on the MMDetection toolkit (Chen et al., 2019). Following commonly used training settings, both Mask R-CNN and RetinaNet models are trained for 12 epochs (1×) and 36 epochs (3×). For the 3× schedule, we follow previous work (Liu et al., 2021; Ren et al., 2022b) to use multi-scale inputs during training. The AdamW optimizer was used with an initial learning rate of 0.0002 and weight decay of 0.05. We used ViTAdapter (Chen et al., 2022b) to generate multi-scale features and followed the default hyper-parameter settings in Chen et al. (2022b).
B.4 TRAINING RECIPE FOR ADE20K
The ADE20K experiments are based on the MMSegmentation toolkit (Contributors, 2020b). Following commonly used training settings, both UperNet and SegFormer models are trained for 160000 iterations. The input images are cropped to 512×512 during training. The AdamW optimizer was used with an initial learning rate of 0.00006 and weight decay of 0.01. We did not use ViTAdapter (Chen et al., 2022b) for segmentation experiments.
C VISUALIZATIONS
In Figure 6, we visualise the feature grouping results using models trained on ImageNet, COCO and ADE20K. We observe that the feature grouping can separate an image's foreground and background in all three datasets. When the model receives fine-grained supervision such as bounding boxes and semantic masks, the feature grouping corresponds to more details in the image.
D COMPREHENSIVE COMPARISON
In Table 12 and Table 13, we provide a more comprehensive comparison between GPViT and other visual recognition models on ImageNet-1k classification and COCO Mask R-CNN object detection and instance segmentation. | 1. What is the focus and contribution of the paper on visual recognition using transformers?
2. What are the strengths of the proposed approach, particularly in its ability to exchange global information tokens?
3. What are the weaknesses of the paper regarding its claims, experiments, and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or questions regarding the paper's methodology, results, or conclusions? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors present GPViT, a non-hierarchical transformer model for visual recognition. The authors argue that by not using a hierarchical approach, and by relying only on high-resolution features, they achieve better recognition when fine-grained details are needed. The authors introduce the Group Propagation Block to exchange global information between tokens, which achieves runtime efficiencies under these scenarios. They evaluate their approach on image classification, semantic segmentation, object detection, and instance segmentation.
Strengths And Weaknesses
Strengths:
S1: tested on a variety of tasks
S2: a good discussion on past transformer work, and the paper's place within this past work is discussed
S3: thorough testing and a thorough ablation study are performed. The authors show that their model achieves good accuracy. For ImageNet, however, it is not always clear to me that there is an improvement in terms of the number of parameters used. Discuss this in more detail in the results.
Weaknesses:
W1: Unless I missed it, will source code be provided? I did not notice a discussion on whether source code will be released to make it easy for someone to replicate these results.
W2: Section 2: clarify what you mean by "inductive biases".
W3: Eq 2: where is the concat operation shown in Figure 2?
Clarity, Quality, Novelty And Reproducibility
See my comments above. Overall the paper is well written. It has a few items that need clarification, but I do not think these constitute grounds for rejection. The model is described in detail. It is not clear to me whether source code will be provided; a clarification from the authors is needed on this. The results are good and I think the ICLR audience would be interested in the paper.
ICLR | Title
Stochastic Quantized Activation: To prevent Overfitting in Fast Adversarial Training
Abstract
Existing neural networks are vulnerable to "adversarial examples"—created by adding maliciously designed small perturbations to inputs to induce a misclassification by the networks. The most investigated defense strategy is adversarial training, which augments training data with adversarial examples. However, applying single-step adversaries in adversarial training does not improve the robustness of the networks; instead, it can even cause the networks to overfit. In contrast to single-step training, multi-step training achieves the state-of-the-art performance on MNIST and CIFAR-10, yet it requires a massive amount of time. Therefore, we propose a method, Stochastic Quantized Activation (SQA), that solves the overfitting problem in single-step adversarial training and quickly achieves robustness comparable to multi-step training. SQA attenuates adversarial effects by providing random selectivity to activation functions and allows the network to learn robustness with only single-step training. Throughout the experiments, our method demonstrates state-of-the-art robustness against one of the strongest white-box attacks, comparable to PGD training but with much less computational cost. Finally, we visualize how the network with SQA learns to handle strong adversaries, which differs from existing methods.
1 INTRODUCTION
As Convolutional Neural Networks (CNNs) stand out as a solution to many real-world computer vision tasks (LeCun et al., 2015; Angelova et al., 2015; Levine et al., 2016; Litjens et al., 2017), achieving a certain level of robustness has become indispensable for security-sensitive systems, such as autonomous driving, robot vision, and identity authentication. However, recent studies (Szegedy et al., 2014; Goodfellow et al., 2015) have shown that existing CNNs are vulnerable to small perturbations of the input that are intentionally or adversarially designed to fool the system. Adversarial attacks are a serious problem since these maliciously designed attacks have been shown to be effective in physical-world scenarios, where inputs are obtained from the signals of cameras and other sensors (Kurakin et al., 2016; Evtimov et al., 2017). Another disconcerting feature of adversarial examples is their transferability across different models (Szegedy et al., 2014; Papernot et al., 2016; Liu et al., 2016), which enables black-box attacks. In other words, adversarial examples can be designed from a different model without having any information about the target network.
The most studied defense strategy against adversarial attacks is adversarial training (Goodfellow et al., 2015; Kurakin et al., 2017; Tramèr et al., 2018; Madry et al., 2018), which increases robustness by augmenting training data with adversarial examples. Since adversarial training requires the model to train on adversarial examples in addition to the training data, the model consumes extra time to learn the features of these examples via fine-tuning. Even though the model is trained on more examples, it might still be defenseless against new examples generated by a different attack due to overfitting. Recently, Madry et al. (2018) found that adversarial training on examples created via gradient descent with random restarts, Projected Gradient Descent (PGD) training, results in a broadly robust model on MNIST and CIFAR-10. This method shows the state-of-the-art performance on MNIST and CIFAR-10 to the best of our knowledge, but the examples are created iteratively and the training time increases proportionally with the number of steps. For instance, in our CIFAR-10 training, FGSM training on ResNet18 took less than 2 hours for 30 epochs, whereas PGD training took about 30 hours for the same number of epochs. Thus, it is essential to find a universal method that is resistant to all of these attacks at a lower computational cost.
Since the high-dimensional representations of neural networks give extreme complexity to the boundary of the trained manifolds (Tanay & Griffin, 2016; Dube, 2018), we start from the idea of reducing the degrees of freedom available to the adversary. In this sense, we propose Stochastic Quantized Activation (SQA), which provides stochastic randomness to the output of an original activation and reduces the opportunity for the attacker to craft adversaries. The key advantage of SQA is that SQA with fast adversarial training (training with only FGSM examples) allows the model to attain robustness comparable to PGD training at a lower computational cost. In particular, although SQA is one of the obfuscated gradients defined by Athalye et al. (2018), iterative optimization-based methods do not successfully circumvent our defense. Besides, SQA can be combined with any deep learning model in a few lines of code while guaranteeing a certain level of robustness against adversarial attacks.
In this paper, we first explain the existing methods for adversarial attacks and defenses that we refer to in Section 2. We separate the existing defense strategies into two categories and analyze their strengths and weaknesses. In Section 3, we introduce the procedure of SQA, described in Algorithm 1. In Section 4, we show our experimental results on MNIST and CIFAR-10 and compare with existing defense systems. Lastly, we visualize the penultimate layer of our networks and show how SQA with fast adversarial training learns differently from existing methods. Section 5 concludes the work. The contributions of this paper are as follows:
• We propose a Stochastic Quantized Activation (SQA) which achieves a significant level of robustness combined with FGSM training, comparable to state-of-the-art PGD adversarial training with much less computational cost.
• Due to the efficiency and the flexibility of the proposed method, it can be quickly and widely applied to any existing deep neural network and combined with other types of defense strategies.
• We analytically demonstrate how SQA makes the model robust against adversaries at both high and low levels by using t-SNE and by plotting activation maps.
2 RELATED WORK
In this section, we investigate the existing methods of adversarial attacks and defenses that appear in the following subsections. First, we define adversarial examples with the notation formally used in this paper. Let x denote the input and y denote the prediction of the input from the DNN classifier f, y = f(x). Then, an adversarial example is crafted by adding a malicious noise η to the original input x, causing a prediction different from the true label y*. The formal representation is as follows, where x′ is the adversarial example and ε is the noise level:
$$x' = x + \epsilon \cdot \eta, \quad \text{where } f(x') \neq y^* \qquad (1)$$
2.1 GENERATING ADVERSARIAL EXAMPLES
Fast Gradient Sign Method (FGSM) is a fast single-step method for creating adversarial examples proposed by Goodfellow et al. (2015). The authors suggest that adversarial examples arise from the effects of linear summation in DNNs, and the method is as follows.
$$x' = x + \epsilon \cdot \text{sign}\big(\nabla_x J(f(x), y^*)\big) \qquad (2)$$
Here J(f(x), y*) is the loss between the output prediction f(x) and the true label y*. However, calculating the loss based on the difference between predictions and true labels causes the label leaking effect (Kurakin et al., 2017); one simple way to prevent it is to use the prediction y instead of y*. The intuition behind Equation 2 is to increase the loss J by perturbing the input x along the sign of the gradient of the loss, which pushes the prediction away from the extremum.
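A minimal PyTorch sketch of Equation 2 with the label-leaking fix described above (using the model's prediction y in place of y*) is shown below. It assumes inputs normalized to [0, 1] and is our illustration, not the exact implementation used in the paper.

```python
import torch

def fgsm(model, x, eps, loss_fn=torch.nn.CrossEntropyLoss()):
    """Single-step FGSM: x' = x + eps * sign(grad_x J(f(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    y = logits.argmax(dim=1).detach()      # model's prediction avoids label leaking
    loss_fn(logits, y).backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in the valid range
```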
Projected Gradient Descent (PGD) is one of the strongest known white-box attacks (Madry et al., 2018). It is a multi-step variant of FGSM: it finds the adversarial perturbation η_n by applying the FGSM update iteratively. What makes this attack stronger is that it starts from a random ε-uniform perturbation, clipped to the range of the normalized pixel values, [0, 1].
$$x'_0 = \text{Clip}_{x,\epsilon}\big(x + \text{uniform}(-\epsilon, \epsilon)\big), \qquad x'_{n+1} = \text{Clip}_{x,\epsilon}\Big(x'_n + \alpha \cdot \text{sign}\big(\nabla_x J(f(x'_n), y^*)\big)\Big) \qquad (3)$$
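A corresponding sketch of Equation 3 follows; again it assumes inputs in [0, 1] and is illustrative rather than the attack library actually used in the experiments.

```python
import torch

def pgd(model, x, y, eps, alpha, steps, loss_fn=torch.nn.CrossEntropyLoss()):
    """Multi-step PGD: random uniform start, then projected FGSM steps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        x_adv = x_adv + alpha * x_adv.grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to eps-ball
        x_adv = x_adv.clamp(0.0, 1.0).detach()
    return x_adv
```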
Carlini & Wagner Attack (C&W Attack) is a strong optimization-based iterative attack proposed by Carlini & Wagner (2017). It uses Adam (Kingma & Ba, 2014) to optimize over the adversarial perturbation η_n via an auxiliary variable ω_n and solves the equation below.
$$\text{minimize } \|\eta_n\|_p + c \cdot f(x_n + \eta_n), \quad \text{where } \eta_n = \tfrac{1}{2}\big(\tanh(\omega_n) + 1\big) - x_n. \qquad (4)$$
The function f(·) is defined as
$$f(x) = \max\Big(Z(x)_{y^*} - \max_{i \neq y^*} Z(x)_i,\; -\kappa\Big), \qquad (5)$$
and we can determine the confidence with which the misclassification occurs by adjusting κ.
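For illustration, the margin term f(·) of Equation 5 can be written as below; the outer Adam optimization over the auxiliary variable ω_n is omitted, so this is only a sketch of one ingredient of the full attack.

```python
import torch

def cw_margin(logits, y_true, kappa=0.0):
    """f(x) = max(Z(x)_{y*} - max_{i != y*} Z(x)_i, -kappa), per example."""
    true_logit = logits.gather(1, y_true.unsqueeze(1)).squeeze(1)
    masked = logits.clone()
    masked.scatter_(1, y_true.unsqueeze(1), float("-inf"))  # drop the true class
    runner_up = masked.max(dim=1).values
    return torch.clamp(true_logit - runner_up, min=-kappa)
```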
2.2 DEFENSIVE STRATEGY
Adversarial training increases robustness by augmenting training data with adversarial examples. Previous studies (Goodfellow et al., 2015; Kurakin et al., 2017; Tramèr et al., 2018) have shown that adversarially trained models improve classification accuracy when presented with adversarial examples. However, the intrinsic problem of this method is the high cost associated with additionally generating adversarial examples and patching them into a training batch. For this reason, practical adversarial training on a large-scale dataset such as ImageNet uses only fast FGSM-generated adversarial examples for the training data. However, Madry et al. (2018) have shown that FGSM adversaries do not increase robustness, especially for large ε, since the network overfits to these adversarial examples. They instead suggest training the network with multi-step FGSM^k (PGD) adversaries, which shows the state-of-the-art performance on MNIST and CIFAR-10.
Obfuscated gradients make it hard to generate adversaries against a network by denying the attacker useful gradients. Recently, Athalye et al. (2018) defined three types of obfuscated gradients: shattered gradients, stochastic gradients, and exploding & vanishing gradients. Dhillon et al. (2018), Buckman et al. (2018), Song et al. (2018), and Xie et al. (2018) each rely on one of these gradient types, but Athalye et al. (2018) constructed attacks that successfully circumvent such defenses, reducing 6 out of 7 defenses presented at ICLR 2018 to 0% accuracy. SQA can be considered as both shattered gradients and stochastic gradients. However, we found that our method does not overfit to the adversarial examples and shows robustness against different types of attacks, including the ones used to break obfuscated gradients. The next section explains the details of our method.
3 STOCHASTIC QUANTIZED ACTIVATION
Algorithm 1 Stochastic Quantized Activation
1: function FORWARD($h_i$, $\lambda$)
2:     $g_i \leftarrow (h_i - \min_{\forall j \subseteq J} h_i^j) \,/\, (\max_{\forall j \subseteq J} h_i^j - \min_{\forall j \subseteq J} h_i^j) \cdot \lambda$
3:     $g_i \leftarrow \lfloor g_i \rfloor + \text{Bernoulli}(g_i - \lfloor g_i \rfloor)$
4:     $g_i \leftarrow (g_i \,/\, \lambda) \cdot (\max_{\forall j \subseteq J} h_i^j - \min_{\forall j \subseteq J} h_i^j) + \min_{\forall j \subseteq J} h_i^j$
5:     return $g_i$
6: function BACKWARD($\partial g_i / \partial h_i$)
7:     return $\partial g_i / \partial h_i$
In this section, we introduce the concept of SQA, starting from the typical low-bit representations in DNNs as prerequisites (Courbariaux et al., 2015). Then, we describe the procedure of our stochastic quantization. The difference between typical low-bit DNNs (Hubara et al., 2016a; Courbariaux et al., 2015; Hubara et al., 2016b) and our proposed method is that we quantize only the activations, not the weight vectors. We found that this does not significantly slow down training with PyTorch (Paszke et al., 2017) while maintaining a full-precision weight representation, which enables easier convergence than BNNs without additional training strategies.
BinaryConnect constrains the weights to either +1 or -1 during propagation (Courbariaux et al., 2015). Two types of binarization, deterministic and stochastic, are introduced. They are described by the following equations, respectively.
$$w_b = \begin{cases} +1 & \text{if } w \geq 0, \\ -1 & \text{otherwise.} \end{cases} \qquad (6)$$

$$w_b = \begin{cases} +1 & \text{with probability } p = \sigma(w), \\ -1 & \text{with probability } 1 - p, \end{cases} \quad \text{where } \sigma(x) = \text{clip}\left(\frac{x+1}{2}, 0, 1\right) = \max\left(0, \min\left(1, \frac{x+1}{2}\right)\right). \qquad (7)$$
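A one-function sketch of the stochastic binarization in Equation 7 is given below; it is our illustration, not the BinaryConnect reference code.

```python
import torch

def binarize_stochastic(w):
    """w_b = +1 with probability sigma(w) = clip((w + 1) / 2, 0, 1), else -1."""
    p = ((w + 1.0) / 2.0).clamp(0.0, 1.0)
    return torch.where(torch.rand_like(w) < p,
                       torch.ones_like(w), -torch.ones_like(w))
```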
BNNs were originally designed to reduce the significant memory consumption and cost incurred by propagating through full-precision networks. Recently, however, Galloway et al. (2018) showed another benefit of low-precision neural networks: improved robustness against some adversarial attacks.
Thus, we propose SQA, a stochastic activation function that introduces quantized threshold effects into vanilla CNNs, as described in Algorithm 1. The algorithm can be divided into three steps:
• Min-Max normalization with scaling
• Stochastic quantization
• Inverse Min-Max normalization after rescaling
Let $h_i$ be a latent representation, the output of the $i$-th convolutional layer after ReLU activation. We first perform min-max normalization, mapping $h_i$ to the range [0, 1]. We then scale $h_i$ to the range [0, λ] by multiplying by a scale factor λ, which determines the level of quantization (from binary to quaternary in our experiments). In the next step, we stochastically quantize the scaled activation $g_i$ as presented in the equation below.
$$g_i = \lfloor g_i \rfloor + \text{Bernoulli}(g_i - \lfloor g_i \rfloor) \qquad (8)$$
This rounds $g_i$ to one of the two nearest integers, either $\lfloor g_i \rfloor$ or $\lfloor g_i \rfloor + 1$, with probabilities $1 - (g_i - \lfloor g_i \rfloor)$ and $g_i - \lfloor g_i \rfloor$, respectively. For instance, if $g_i = 1.7$, then $g_i$ becomes 1 with probability 0.3 and 2 with probability 0.7. The final step is rescaling $g_i$ to the range of the original ReLU activation $h_i$: $g_i$ is first divided by λ, and inverse min-max normalization is applied, as presented in Algorithm 1.
Since it is impossible to find exact derivatives with respect to discretized activations, an alternative is to approximate them with a straight-through estimator (Bengio et al., 2013). The idea of a straight-through estimator is to set the incoming gradient of a threshold function equal to its outgoing gradient, ignoring the derivative of the threshold function itself. This is why we rescale $g_i$ to the original range of $h_i$: we do not want the scale factors applied inside the activation function to affect the gradient passed through by the straight-through estimator.
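To make the procedure concrete, below is a minimal PyTorch sketch of SQA as a custom autograd function. It is our illustrative implementation rather than the authors' released code: the min/max are taken here over the whole tensor (the paper's index set J may instead be, e.g., per channel), and the small constant guarding against division by zero is our addition.

```python
import torch

class _StochasticQuantize(torch.autograd.Function):
    @staticmethod
    def forward(ctx, h, lam):
        lo, hi = h.min(), h.max()
        g = (h - lo) / (hi - lo + 1e-8) * lam           # min-max normalize, scale to [0, lam]
        g = g.floor() + torch.bernoulli(g - g.floor())  # stochastic rounding (Eq. 8)
        return (g / lam) * (hi - lo) + lo               # rescale to the original range

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None                           # straight-through estimator

class SQA(torch.nn.Module):
    """Drop-in stochastic quantized activation with quantization level lam."""
    def __init__(self, lam=1):
        super().__init__()
        self.lam = lam

    def forward(self, h):
        return _StochasticQuantize.apply(h, self.lam)
```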
4 EXPERIMENT
4.1 DATASET AND IMPLEMENTATION DETAILS
In this experiment, we show the feasibility of our approach in several different settings on MNIST and CIFAR-10 using PyTorch (Paszke et al., 2017). We use Adversarial Box (Wang & Gavin Ding, 2018) to generate FGSM and PGD adversaries and implement C&W adversaries (ℓ∞) based on Athalye et al. (2018). The results for MNIST and CIFAR-10 are shown in Sections 4.2 and 4.3.
Model Parameters For MNIST, we use a vanilla CNN as the baseline model, consisting of three convolutional layers with two fully-connected layers on top. Since there is a correlation between robustness and model capacity (Madry et al., 2018), we use two networks with different channel sizes, increasing the channels by a factor of 2. This results in networks with (16, 32, 64) and (64, 128, 256) filters, denoted as SMALL and LARGE in Table 1, respectively. We apply SQA on the first and second layers with λ = 1 and λ = 2, respectively. We use Stochastic Gradient Descent (SGD) with a learning rate of 0.1, momentum of 0.9, and weight decay of 5e-4. We decay the learning rate by a factor of 0.1 every 30 epochs over 100 epochs in total.
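For concreteness, a sketch of what the SMALL network with these SQA placements could look like is shown below, reusing the SQA module sketched in Section 3. The kernel sizes, pooling, and fully-connected width are our assumptions; the paper does not specify them here.

```python
import torch.nn as nn

small_sqa = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), SQA(lam=1), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), SQA(lam=2), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 3 * 3, 128), nn.ReLU(),   # 28x28 MNIST input -> 3x3 after pooling
    nn.Linear(128, 10),
)
```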
For CIFAR-10, we use the ResNet model (He et al., 2015) as the baseline. We adopt ResNets with 34 and 101 layers, denoted as RES34 and RES101 in Table 2. An interesting property found when training on the CIFAR-10 dataset is that stochastic quantization achieves much higher accuracy than deterministic quantization. This seems reasonable, since stochasticity provides higher capacity for learning complex RGB images. We apply SQA on the output of the first layer of the ResNet and its bottleneck module with λ = 1 and λ = 2, respectively. The same hyper-parameters as in the MNIST training are applied, except that we train for 350 epochs in total and decay the learning rate by 0.1 every 150 epochs.
Attack Parameters Throughout the experiments, different ℓ∞ intensity levels are applied to the attacks. For MNIST, ε = 0.2 and 0.3 are used for FGSM and C&W attacks to give strong adversarial perturbations, with 40 steps for C&W attacks. We also set ε = 0.2, a step size of 0.01, and 40 steps for PGD attacks. For CIFAR-10, ε = 4 and 8 are considered for the adversarial attacks, with 30 steps for C&W attacks. For PGD attacks we fix 7 steps and a step size of 2, with a random initial perturbation within ε = 8. Note that the values for MNIST are on the scale of (0, 1) and those for CIFAR-10 on (0, 255). Step sizes for the attacks are chosen to be consistent with Madry et al. (2018).
4.2 ATTACK ON MNIST
Quantization on Different Layers Since quantizing the weights or activations lowers the accuracy on clean images (Courbariaux et al., 2015; Hubara et al., 2016a), it is important to determine where to put SQA modules in the network. Thus, we investigate layer-wise quantization, applying deterministic quantization to each of the first through third layers of the CNN. The result is shown in Figure 1. It is clear that applying quantization to the earlier layers gives higher robustness. This observation is further evidence for the argument of Liao et al. (2017) that a small perturbation in an image is amplified to a large perturbation in a higher-level representation, so quantizing the activations in a lower-level representation gives more robustness. We further found empirically that binary quantization on the first layer and ternary quantization on the second layer provide little degradation in accuracy and a fair amount of robustness.
SQA v.s. Full-Precision We explore the robustness of SQA against three types of adversarial attacks; the results are shown in Table 1. The networks are all trained with fast single-step adversaries, and we find two known but interesting properties in the experiments. First, FGSM training of the full-precision networks, denoted SMALL_full and LARGE_full, makes them overfit to the adversaries. They show depressed accuracy, especially on PGD attacks, where it falls nearly to 0. However, the SQA models do not overfit to the adversaries. Even though the SQA models show lower performance on FGSM attacks, they exhibit remarkably high accuracy on the other adversarial examples that they have not seen before. The second interesting fact is the correlation between robustness and model capacity. Madry et al. (2018) have shown that increasing model capacity helps train the network successfully against strong adversaries. Our experiment also confirms this phenomenon. LARGE_SQA is stronger than SMALL_SQA against FGSM attacks and more than ten times more robust against PGD attacks. This result shows that model capacity not only increases robustness against the adversaries that have been learned but also prevents overfitting to them.
4.3 ATTACK ON CIFAR-10
SQA v.s. Full-Precision We performed experiments on CIFAR-10 to show the effectiveness of SQA on an RGB image dataset. We tried the same types of white-box attacks as in the MNIST experiments; the results are shown in Table 2. Instead of training vanilla networks, we adopt ResNet (He et al., 2015), since vanilla networks struggle to learn useful features on CIFAR-10. Two different ResNets are used to compare robustness with respect to model capacity, and we found the same phenomena as in the MNIST experiments. In other words, the SQA module helps avoid overfitting to the FGSM adversaries, and larger capacity provides higher robustness against different types of attacks.
SQA v.s. Other Existing Methods We compare our module, SQA, with recently proposed defenses, including the state of the art, Madry et al. (2018). We also include SAP, PixelDefend, and Thermometer (Dhillon et al., 2018; Song et al., 2018; Buckman et al., 2018), since they use stochastic gradients or shattered gradients, which are among the obfuscated gradients our method belongs to. Table 3 shows the performance comparison¹ against PGD and C&W attacks for ℓ∞ (ε = 8). Note that the architectures of the defenses in Table 3 are all different, so it is impossible to compare their robustness exactly. We denote the architectures as RESN^k_{W,C}, where W stands for Wide ResNets, N is the depth, C is the channel size of the first layer, and k is the widening factor. As Athalye et al. (2018) claimed, our method is more robust against gradient-based PGD attacks than against optimization-based C&W attacks, pushing the state-of-the-art accuracy to 52% against PGD attacks. It also shows a fair amount of accuracy against C&W attacks, comparable to adversarial training. This result is significant in the sense that other methods based on obfuscated gradients almost completely fail to defend against these strong adversaries.
1 The performance of SAP, PixelDefend, and Thermometer is from Athalye et al. (2018).
4.4 TIME COMPLEXITY FOR ADVERSARIAL TRAINING
In this subsection, we explore the time complexity of both single-step and multi-step adversarial training. Let τ be the time taken by forward and backward propagation in the neural network, κ the number of steps used to find adversaries, and υ the remaining processing time, including data loading, weight updates, etc. Then, we can define the time complexity of adversarial training as follows:
$$T_{\text{Adv.Training}} = (1 + \kappa) \cdot \tau + \upsilon \qquad (9)$$
Then, taking α as the processing time of the SQA module and comparing SQA + FGSM training with PGD training, we have
$$\frac{\kappa - 1}{2} \cdot \tau \gg \alpha \qquad (10)$$
As we can see in Table 4, SQA + FGSM training is almost 18 times faster than PGD training when κ is 100.
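As a quick illustration of Equation 9 (with made-up timings, not the paper's measurements), the following computes the ratio between PGD training and SQA + FGSM training:

```python
tau, upsilon, alpha = 1.0, 4.0, 0.05   # forward/backward, overhead, SQA cost (assumed units)

def t_adv_training(kappa, extra=0.0):
    return (1 + kappa) * tau + upsilon + extra   # Eq. 9 plus any module overhead

speedup = t_adv_training(kappa=100) / t_adv_training(kappa=1, extra=alpha)
print(f"PGD / (SQA + FGSM) time ratio: {speedup:.1f}x")   # ~17x with these numbers
```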
4.5 VISUALIZING PENULTIMATE LAYERS
In this subsection, we analyze the penultimate layers of the network trained with our method, comparing them with two full-precision networks: one with no defense and one with FGSM training. We use C&W attacks to craft adversaries with the parameters described in Section 4.1. We use two different ways to visualize the penultimate layers at a high level and a low level: t-SNE (van der Maaten & Hinton, 2008) and plots of activation maps, with both clean images and adversarial examples. Firstly, Figure 2 shows t-SNE results from the penultimate layer of our network, where each point in the t-SNE plot is represented by its image. We select four classes to clearly show how the networks learn and what happens when adversarial noise is added. Here, we demonstrate that the full-precision network trained with FGSM does not correctly classify the classes under adversarial attack, as depicted in (B). Only (C), our method, shows clusters that are less broken than those of the other methods. Furthermore, in light of the fact that a robust classifier requires a more complicated decision boundary (Madry et al., 2018), our model appears to have learned such a boundary from the adversarial examples.
Secondly, we look closely at the penultimate layer at a low level by plotting each of the activations. Here, each point of an activation map stands for the mean value of the activations across about a thousand images per class. We find that the yellow spots, which are the highest values, stay in the same locations under adversarial attack, as depicted in (C) of Figure 3. In other words, our method shows stable activation frequencies against adversarial attacks, whereas training full-precision models with FGSM adversaries does not help increase robustness, as shown in (B).
5 CONCLUSION
In this paper, we have found that SQA, a stochastic quantization in an activation function, prevents existing neural networks from overfitting during FGSM training. It provides stochastic randomness in quantization to learn a robust decision boundary against adversarial attacks with FGSM training. Our method not only shows dramatic improvements against one of the strongest white-box attacks, comparable to state-of-the-art PGD training, but also significantly reduces the computational cost. By visualizing the penultimate layers of our network, we demonstrate that the network learns strong adversaries without overfitting. We expect that SQA can be quickly and widely applied to other defense strategies because of its efficiency and flexibility. In future work, we plan to experiment on large-scale image datasets.
2. What are the strengths of the proposed approach, particularly in terms of improving robustness to attacks?
3. What are the weaknesses of the paper regarding its claims, experiments, and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. What are some technical questions the reviewer has regarding the paper's approach, such as the susceptibility of min-max normalization to outliers and the choice of uniform noise?
6. What are some potential improvements the reviewer suggests for the paper, such as discussing the trade-off of quantization level vs. robustness and addressing the significant accuracy hit from the method? | Review | Review
The paper proposes a model to improve adversarial training, by introducing random perturbations in the activations of one of the hidden layers. Experiments show that robustness to attacks can be improved, but seemingly at a significant cost to accuracy on non-adversarial input.
I have not spent significant time on adversarial training, and review the paper under the following understanding: It was observed that the decision regions of a class are sprinkled with "holes" that get misclassified. These holes do not occur naturally. Their existence allows a potential attacker to coerce a model into mis-classifying by providing specially crafted inputs, in order to attain a benefit. Therefore, those holes are called "adversarial" examples. The risk is heightened by the fact that adversarial examples are commonly not mis-classified by humans (or even detectable by the eye). To "plug" the holes, one includes adversarial examples in the training, called "adversarial training." A resulting system should now have a much improved accuracy for the "holes", while ideally not affecting classification accuracy for the natural examples, which will continue to constitute nearly 100% of the samples the system will be used on. (The "hole" metaphor may not be entirely appropriate, since the space of adversarial examples that are neither misclassified by humans nor detectable is likely much larger than the space of naturally occurring samples.)
The paper proposes a way of plugging the hole by quantizing layer activations. The results show that this makes the system robust to adversarial attacks.
Clarity:
I spent a lot of time figuring out, as someone who has not spent a lot of time with this, what is being evaluated. It is very unclear whether the non-clean systems in Tables 1 and 2 apply FGSM etc. also in training (in combination with SQA), or only to the test samples. Table 4, the wording in 4.2, and the wording of the Conclusion indicate that they are. But then, where do I find the accuracy on the naturally occurring (non-manipulated) samples?
The only combination of interpretations that makes sense in the end is to parse "The networks are all trained with fast single-step adversaries" as to mean "The networks are all trained with FGSM", and that the non-Clean columns in Table 1 refer to test data perturbed by the respective method, while the Clean column shows the accuracy on the natural data. This *must* be clarified in the final version, as it took way too long to understand this. I strongly suggest to do this with the naming: change small_full to small_FGSM, and small_SQA to small_SQA+FGSM.
Assuming I figured this out right, the tables still lack the baseline accuracy of doing nothing (clean-clean), so one can know how much the nearly-100% use case gets affected.
Results:
The second concern I have is that, assuming my reading of the results as described above is correct, that the SQA method quite severely affects accuracy on the clean test data, e.g. increasing the error rate on CIFAR by 72% (from 12.33% to 17.06%). There must be a discussion on why such severe performance hit is worth it, especially since there often is an accuracy cliff below which there is a steep loss of usability of a system. For example, according to my personal experience in speech recognition, the difference between 12% and 17% is the difference between decent and unacceptable user experience (also considering that a few percent of errors are caused by ambiguities in the ground-truth annotations themselves, which should be the case for CIFAR as well).
Figure 1 seems a little misleading in this regard since the areas of good accuracy are very condensed. It should be rescaled, as only the area close to the optimum performance is relevant. It does not matter whether we degrade from 99.x% to 77% or 58%, or even 95-ish. All of those hurt performance to the point of not being useful.
It would be nice to discuss what an accuracy metric would be that is useful for the end user. It would have to be a combination of the expected cost of a misclassification of a natural image and the expected cost caused by attacks. A good method would improve this overall metric. A paper attempting to address adversarial attacks should at least discuss this topic briefly, in my view.
Technical soundness:
A technical question I have is whether the min-max normalization may be too susceptible to outliers. A single extreme activation can drastically shift the threshold for \lambda=1. How about a mean-var normalization? If there is batch or layer normalization in the system, your activations may already be scaled into a consistent range anyway, that might allow you to use a constant scaling on top of that.
Another question I have is: quantization is often modeled as adding uniform noise. Why not add noise directly? And why uniform noise? For example, would compute g = h + Gaussian noise with std dev=(max-min)/lambda work equally well? What is special about quantization?
And another technical question: My guess is that the notable loss of accuracy is caused by the strong quantization (two values only in the case of \lambda=1). I think the paper should show results for larger lambdas, specifically whether there is a better trade-off point between the accuracy loss from quantization vs. robustness to adversarial samples.
Section 3/SQA: "This is the reason why we rescale g^i to the original range of h^i" This seems wrong. I think the main reason is that one would not want to totally change the dynamic ranges of the network, as it may affect convergence merely by scaling. You'd want to limit any impact on convergence to the quantization itself.
Significance:
I think the significance is limited. Given that the accuracy impact of the mitigation method is very large, I do not consider this paper as substantially solving the problem, or even bringing a practical solution much closer in reach.
Pros:
- Interesting idea;
- comparison against various attacks.
Cons:
- Hard to understand because it was left unclear what is evaluated, at least to readers who are not familiar with a possibly existing implied convention;
- The method seems to harm accuracy on clean data a lot, which is the main use case of such a system.
I would in the current form reject the paper. To make it acceptable, the clarity of presentation, especially of the results, must be improved, but more importantly, more work seems necessary to reduce the currently significant accuracy hit from the method, and the trade-off of quantization level vs. robustness should be addressed.
Minor feedback:
Please review the paper for grammar and spelling errors (e.g. "BinaryConnect constraints" or the use of "make", which is often not correct).
In Algorithm 1, I suggest to not use 'g', as it may be mis-read as "gradient." Unless this is a common symbol in this context.
"Thus, we propose SQA" warrants another \subsubsection{}, to indicate where \subsubsection{BinaryConnect} ends.
Section 2.2's early reference to SQA is a little confusing, since SQA has not formally been defined. I would smooth this a little, e.g. change "SQA can be considered" to "We will see that our SQA, as introduced in the next section, can be considered"
"an alternative is to approximate it" probably should be "our approach is to approximate it" |
ICLR | Title
Stochastic Quantized Activation: To prevent Overfitting in Fast Adversarial Training
Abstract
Existing neural networks are vulnerable to ”adversarial examples”—created by adding maliciously designed small perturbations in inputs to induce a misclassification by the networks. The most investigated defense strategy is adversarial training which augments training data with adversarial examples. However, applying single-step adversaries in adversarial training does not support the robustness of the networks, instead, they will even make the networks to be overfitted. In contrast to the single-step, multi-step training results in the state-of-the-art performance on MNIST and CIFAR10, yet it needs a massive amount of time. Therefore, we propose a method, Stochastic Quantized Activation (SQA) that solves overfitting problems in single-step adversarial training and fastly achieves the robustness comparable to the multi-step. SQA attenuates the adversarial effects by providing random selectivity to activation functions and allows the network to learn robustness with only single-step training. Throughout the experiment, our method demonstrates the state-of-the-art robustness against one of the strongest white-box attacks as PGD training, but with much less computational cost. Finally, we visualize the learning process of the network with SQA to handle strong adversaries, which is different from existing methods.
1 INTRODUCTION
As Convolutional Neural Networks (CNNs) stand out as a solution to many real world computer vision tasks (LeCun et al., 2015; Angelova et al., 2015; Levine et al., 2016; Litjens et al., 2017), achieving a certain level of robustness has become indispensable for security-sensitive systems, such as autonomous driving, robot vision, and identity authentication. However, recent studies (Szegedy et al., 2014; Goodfellow et al., 2015) have shown that the existing CNNs are vulnerable to small perturbations of the input that are intentionally or adversarially designed to fool the system. The adversarial attack is a serious problem since these maliciously designed attacks have shown effective in physical world scenarios, where inputs are obtained from signals of cameras and other sensors (Kurakin et al., 2016; Evtimov et al., 2017). Another disconcerting feature about adversarial examples is their transferability across different models (Szegedy et al., 2014; Papernot et al., 2016; Liu et al., 2016) that enables black-box attacks. In other words, adversarial examples can be designed from a different model without having the information about the target network.
The most studied defense strategy against adversarial attacks is adversarial training (Goodfellow et al., 2015; Kurakin et al., 2017; Tramr et al., 2018; Madry et al., 2018), which increases robustness by augmenting training data with adversarial examples. Since adversarial training requires the model to train adversarial examples in addition to training data, the model consumes extra time to learn features of the examples via fine-tuning. Even though the model is trained on more examples, it still might be defenseless to new examples generated by different attack due to the overfitting problem. Recently, Madry et al. (2018) have found that adversarial training on examples created via gradient descent with random restarts, Projected Gradient Descent (PGD) training, results in a universally and partially unbreakable model on MNIST and CIFAR-10. This method shows the state-of-the-art performance on MNIST and CIFAR-10 to the best of our knowledge, but the examples are created iteratively and the time increases proportionally to the number of steps. For instance, in our CIFAR10 training, FGSM training on ResNet18 took less than 2 hours for 30 epochs; however, PGD training took about 30 hours for the same epochs. Thus, it is essential to find the universal method that is resistant against all of the attacks, with less computational cost.
Since high dimensional representations of the neural networks give extreme complexity to the boundary of trained manifolds (Tanay & Griffin, 2016; Dube, 2018), we start from the idea that is to reduce degrees of freedom available to the adversary. In this sense, we propose a Stochastic Quantized Activation (SQA) that provides stochastic randomness to the output of an original activation and reduces the opportunity for the attacker to make adversaries. The best advantage of SQA is that SQA with fast adversarial training, training with only FGSM examples, allows the model to have robustness comparable to PGD training with less computational cost. In particular, although SQA is one of the obfuscated gradients defined by Athalye et al. (2018), iterative optimization-based methods does not successfully circumvent our defense. Besides, SQA can be combined with any deep learning models with a few lines of code but guarantees a certain level of robustness against adversarial attacks.
In this paper, we first explain existing methods for adversarial attacks and defenses we refer in Section 2. We separate the existing defense strategies into two categories and analyze the strengths and weaknesses. In Section 3, we introduce the procedure of SQA, with an algorithm described in 1. In Section 4, we show our experimental results on MNIST and CIFAR-10 and compare with existing defense systems. Lastly, we visualize the penultimate layer of our networks and compare how SQA with fast adversarial training, learns differently from the existing methods. Section 5 concludes the work and contributions of this paper are as follows:
• We propose a Stochastic Quantized Activation (SQA) which achieves a significant level of robustness combined with FGSM training, comparable to state-of-the-art PGD adversarial training with much less computational cost.
• Due to the efficiency and the flexibility of the proposed method, it can be fastly and widely applied to any existing deep neural networks and combine with other types of defense strategies.
• We analytically demonstrate how SQA makes the model robust against adversaries in highlevel and low-level by using t-SNE, and plotting activation maps.
2 RELATED WORK
In this section, we investigate the existing methods of adversarial attacks and defenses that appear in the following subsections. First, we define the adversarial examples with the notations formally used in this paper. Let x denote input and y denote the prediction of the input from the DNN classifier f , y = f(x). Then, an adversarial example is crafted by adding a malicious noise η into the original input x, causing a different prediction from the true label, y∗. The formal representation is as follows, where x′ is an adversarial example and is the noise level.
x′ = x+ · η, where f(x′) 6= y∗ (1)
2.1 GENERATING ADVERSARIAL EXAMPLES
Fast Gradient Sign Method (FGSM) is a fast single-step method to create adversarial examples proposed by Goodfellow et al. (2015). The authors suggest the adversarial examples are crafted because of the effects of the linear summation in DNNs, and the algorithm is as follows.
x′ = x + · sign(∇x J(f(x), y∗)) (2)
Here J(f(x), y∗) is the loss between the output prediction f(x) and the true label y∗. However, calculating the loss based on the difference between predictions and true labels makes the label leaking effect (Kurakin et al., 2017), so one simple way to prevent it is to put the prediction y instead of y∗. The intuition behind of the Equation 2 is that increasing loss J by perturbing the input x adding the gradient of loss, which makes the prediction get out of the extrema.
Projected Gradient Descent (PGD) is one of the strongest known white box attacks (Madry et al., 2018). It is a multi-step variant of FGSM, which means that it finds the adversarial perturbation ηn by using the same equation from FGSM, but iteratively. What makes this attack stronger is that
it finds the adversary from starts with random -uniform perturbation clipped in the range of the normalized pixel values, [0,1].
x′0 = Clipx(x+ uniform(− , )), x′n+1 = Clipx, (x′n + α · sign(∇x J(f(x′n), y∗))) (3)
Carlini & Wagner Attack (C & W Attack) is strong optimization-based iterative attack proposed by Carlini & Wagner (2017). It uses Adam (Kingma & Ba, 2014) to optimize over the adversarial perturbation ηn using an auxiliary variable ωn and solves the equation below.
minimize ||ηn||p + c · f(xn + ηn)
where ηn = 1
2 (tanh (ωn) + 1)− xn.
(4)
The function f(·) is defined as
f(x) = max(Z(x)y∗ −max i 6=y∗ (Z(x)i),−κ), (5)
and we can determine the confidence with which the misclassification occurs by adjusting κ.
2.2 DEFENSIVE STRATEGY
Adversarial training increases robustness by augmenting training data in relation to adversarial examples. Previous studies (Goodfellow et al., 2015; Kurakin et al., 2017; Tramr et al., 2018) have shown that adversarially training models improve the classification accuracy when presenting them with adversarial examples. However, the intrinsic problem of this method is the high cost associated with additionally generating adversarial examples and patching them into a training batch. For this reason, practical adversarial training on a large scale dataset such as ImageNet uses fast-generated adversarial examples using FGSM only for training data. However, Madry et al. (2018) have shown that FGSM adversaries don’t increase robustness especially for large since the network overfits to these adversarial examples. They instead, suggest to train the network with a multi-step FGSMk, PGD adversaries, and it shows the state-of-the-art performance on MNIST and CIFAR-10.
Obfuscated Gradients make the network hard to generate adversaries by not having useful gradients. Recently, Athalye et al. (2018) defined three types of obfuscated gradients: Shattered Gradients, Stochastic Gradients, and Exploding & Vanishing Gradients. (Dhillon et al., 2018; Buckman et al., 2018; Song et al., 2018; Xie et al., 2018) have considered one of these gradients, but Athalye et al. (2018) make the attacks which successfully circumvent the defense by making 0% accuracy on 6 out of 7 defenses at ICLR2018. SQA can be considered as both shattered gradients and stochastic gradients. However, we found that our method does not overfit to the adversarial examples and shows robustness against the different type of attacks including the one used to break obfuscated gradients. The next section explains the details of our method.
3 STOCHASTIC QUANTIZED ACTIVATION
Algorithm 1 Stochastic Quantized Activation 1: function FORWARD(hi, λ) 2: gi ← (hi −min∀j⊆J hiJ) / (max∀j⊆J hiJ −min∀j⊆J hiJ) ∗ λ 3: gi ← bgic + Bernoulli(gi − bgic) 4: gi ← (gi / λ) ∗ (max∀j⊆J hiJ −min∀j⊆J hiJ) + min∀j⊆J hiJ 5: return gi
6: function BACKWARD(∂gi/∂hi) 7: return ∂gi/∂hi
In this section, we introduce the concept of SQA starting from a typical low-bit representation in DNNs as prerequisites (Courbariaux et al., 2015). Then, we show the procedure of our quantization stochasticity. The difference between typical low-bit DNNs (Hubara et al., 2016a; Courbariaux et al.,
2015; Hubara et al., 2016b) and our proposed method is that we only consider the quantization of activations except weight vectors. We found that this does not significantly slow down the training with PyTorch (Paszke et al., 2017) but maintains full-precision weight representation, which enables easier convergence than BNNs without additional training strategies.
BinaryConnect constraints the weights to either +1 or -1 during propagations (Courbariaux et al., 2015). Two types of binarization, deterministic and stochastic, are introduced. They are respectively described by the following equations.
wb = { +1 if w ≥ 0, −1 otherwise. (6)
wb = { +1 with probability p = σ(w), −1 with probability 1− p.
where σ(x) = clip( x+ 1
2 , 0, 1) = max(0, min(1,
x+ 1
2 ))
(7)
BNNs are originally designed to reduce the significant amount of memory consumption and costs taken by propagating in full-precision networks. Recently, however, Galloway et al. (2018) shows another benefit of low-precision neural networks, which improves robustness against some adversarial attacks.
Thus, we propose SQA, a stochastic activation function giving the quantized threshold effects into vanilla CNNs, which is described in Algorithm 1. The algorithm can be divided into three steps.
• Min-Max normalization with scaling • Stochastic Quantization • Inverse Min-Max normalization after rescaling
Let hi be a latent space, the output from a ith convolutional layer after ReLU activation. We first perform min-max normalization, making hi ranging from 0 to 1. Then we scale the hi ranging from 0 to λ by multiplying a scale factor λ, which determines the level of quantization from binary to quaternary in our experiment. In the next step, we stochastically quantize the scaled gi as gi presented in the below equation.
gi = bgic + Bernoulli(gi − bgic) (8)
This makes gi converge into the closest or second closest integers, either bgic or bgic + 1 with a probability of each, 1 - (gi−bgic) and gi−bgic. For instance, if we let gi = 1.7, then the probability of gi = 1 is 0.3 and gi = 2 is 0.7. The final step is rescaling gi into the range of original output ReLU activation hi. To rescale the value within the original range, gi is first divided by λ, and inverse min-max normalization is applied as presented in Algorithm 1.
Since it is impossible to find exact derivatives with respect of discretized activations, an alternative is to approximate it by a straight through estimator (Bengio et al., 2013). The concept of a straight through estimator is fixing the incoming gradients to a threshold function equal to its outgoing gradients, ignoring the derivative of the threshold function itself. This is the reason why we rescale gi to the original range of hi. In other words, we do not want to consider the scale factors multiplied in the activation function when we use a straight through estimator.
4 EXPERIMENT
4.1 DATASET AND IMPLEMENTATION DETAILS
In this experiment, we show the feasibility of our approach in several different settings on MNIST and CIFAR-10 using PyTorch (Paszke et al., 2017). We use Adversarial Box (Wang & Gavin Ding, 2018) to generate FGSM and PGD adversaries and implement C&W adversaries (l∞) based on Athalye et al. (2018). The results for MNIST and CIFAR-10 are shown in Sections 4.2 and 4.3, respectively.
Model Parameters For MNIST, we use a vanilla CNN as the baseline model, consisting of three convolutional layers with two fully-connected layers on top. Since there is a correlation between robustness and model capacity (Madry et al., 2018), we use two networks with different channel sizes, increasing the channels by a factor of 2. This results in networks with (16, 32, 64) and (64, 128, 256) filters, denoted as SMALL and LARGE in Table 1. We apply SQA on the first and second layers with λ = 1 and λ = 2, respectively. We use Stochastic Gradient Descent (SGD) with a learning rate of 0.1, momentum of 0.9, and weight decay of 5e-4. We decay the learning rate by a factor of 0.1 every 30 epochs over a total of 100 epochs.
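For concreteness, the optimizer and schedule described above translate into the following PyTorch setup (a sketch; the placeholder model stands in for the vanilla CNN).

```python
import torch
import torch.nn as nn

model = nn.Conv2d(1, 16, kernel_size=3)  # placeholder for the vanilla CNN above
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
# decay the learning rate by a factor of 0.1 every 30 epochs (100 epochs total)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)
```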
For CIFAR-10, we use a ResNet model (He et al., 2015) as a baseline. We adopt ResNets with 34 and 101 layers, denoted as RES34 and RES101 in Table 2. An interesting property found when training on the CIFAR-10 dataset is that quantization with stochasticity achieves much higher accuracy than deterministic quantization. This seems reasonable, since stochasticity provides higher capacity to learn complex RGB images. We apply SQA on the output of the first layer of ResNet and its bottleneck module with λ = 1 and λ = 2, respectively. The same hyper-parameters as for MNIST are applied, except that we train for a total of 350 epochs and decay the learning rate by 0.1 every 150 epochs.
Attack Parameters Throughout the experiments, different l∞ intensity levels ε are applied to the attacks. For MNIST, ε = 0.2 and 0.3 are used for FGSM and C&W attacks to give strong adversarial perturbations. We choose 40 steps for C&W attacks. Also, we set ε = 0.2, a step size of 0.01, and 40 steps for PGD attacks. For CIFAR-10, ε = 4 and 8 are considered for the adversarial attacks. We choose 30 steps for C&W attacks. For PGD attacks we fix 7 steps and a step size of 2, with a random initial perturbation within ε = 8. Note that the ε values for MNIST are on the scale of (0, 1) and those for CIFAR-10 on (0, 255). Step sizes for the attacks are chosen to be consistent with Madry et al. (2018).
4.2 ATTACK ON MNIST
Quantization on Different Layers Since quantizing the weights or activations lowers the accuracy on clean images (Courbariaux et al., 2015; Hubara et al., 2016a), it is important to find where to put SQA modules in a network. Thus, we investigate layer-wise quantization, applying deterministic quantization to the first through third layers of the CNN. The result is shown in Figure 1. It is clear that applying quantization at earlier layers gives higher robustness. This observation further supports the argument of Liao et al. (2017) that a small perturbation in an image is amplified to a large perturbation in a higher-level representation, so that quantizing the activations in a lower-level representation gives more robustness. We further found empirically that binary quantization on the first layer and ternary quantization on the second layer provides less degradation in accuracy together with a fair amount of robustness.
SQA v.s. Full-Precision We explore the robustness of SQA against three types of adversarial attacks; the result is shown in Table 1. The networks are all trained with fast single-step adversaries, and we found two known but interesting properties in the experiments. First, FGSM training of the full-precision networks, denoted as SMALLfull and LARGEfull, makes them overfit to the adversaries. They show severely degraded accuracy, especially on PGD attacks, nearly close to 0. However, SQA models do not overfit to the adversaries. Even though SQA models show lower performance on FGSM attacks, they exhibit remarkably high accuracy on the other adversarial examples that they have not seen before. The second interesting fact is the correlation between robustness and model capacity. Madry et al. (2018) have shown that increasing model capacity helps to train the network successfully against strong adversaries. Our experiment also confirms this phenomenon. LARGESQA is stronger than SMALLSQA against FGSM attacks and more than ten times more robust against PGD attacks. This result shows that model capacity not only increases robustness against the adversaries that have been learned but also prevents overfitting to them.
4.3 ATTACK ON CIFAR10
SQA v.s. Full-Precision We performed experiments on CIFAR-10 to show the effectiveness of SQA on an RGB image dataset. We tried the same types of white-box attacks as in the MNIST experiments; the result is shown in Table 2. Instead of training vanilla networks, we adopt ResNet (He et al., 2015), since vanilla networks struggle to learn useful features on CIFAR-10. Two different ResNets are used to compare robustness with respect to model capacity, and we found the same phenomena as in the MNIST experiments. In other words, the SQA module helps to avoid overfitting to the FGSM adversaries, and the larger the capacity, the higher the robustness against different types of attacks.
SQA v.s. Other Existing Methods We compare our module, SQA, with recently proposed defenses, including the state of the art, Madry et al. (2018). We also include SAP, PixelDefend, and Thermometer (Dhillon et al., 2018; Song et al., 2018; Buckman et al., 2018), since they use stochastic or shattered gradients, which are among the obfuscated gradients to which our method belongs. Table 3 shows the performance 1 comparison against PGD and C&W attacks for l∞ (ε = 8). Note that the architectures of the defenses in Table 3 all differ, so it is impossible to compare the robustness exactly. We denote the architectures as RESNkW,C, where W stands for Wider ResNets, N is the depth, C is the channel size of the first layer, and k is the widen factor. As Athalye et al. (2018) claimed, our method is more robust against gradient-based PGD than against optimization-based C&W, pushing the state-of-the-art accuracy to 52% against PGD attacks. It also shows a fair amount of accuracy against C&W attacks, comparable to Adv. Training. This result has a dramatic impact in the sense that other methods based on obfuscated gradients almost completely fail to defend against these strong adversaries.
1 The performance of SAP, PixelDefend, and Thermometer is from Athalye et al. (2018).
4.4 TIME COMPLEXITY FOR ADVERSARIAL TRAINING
In this subsection, we explore the time complexity of adversarial training for both single-step and multi-step adversaries. Let τ be the time taken by forward and backward propagation in the neural network, κ the number of steps to find adversaries, and υ the remaining processing time, including data loading, weight updates, etc. Then, we can define the time complexity of adversarial training as follows,
TAdv.Training = (1 + κ) · τ + υ (9)
Then, considering α as the processing time of the SQA module and comparing SQA + FGSM training with PGD training, we have

(κ − 1)/2 · τ ≫ α (10)
As we can see in Table 4, SQA + FGSM training is almost 18 times faster than PGD training where κ is 100.
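A small illustrative computation of the cost model in Equation 9 is given below. The values of τ, υ, and α are chosen by us purely for illustration and are not measured numbers; the 18x in Table 4 reflects the real overheads of the training pipeline.

```python
def adv_training_time(tau, kappa, upsilon, alpha=0.0):
    # Eq. 9, with alpha the extra cost added by the SQA module
    return (1 + kappa) * tau + upsilon + alpha

# illustrative values only (arbitrary time units, not measurements)
tau, upsilon, alpha = 1.0, 3.0, 0.1
speedup = adv_training_time(tau, 100, upsilon) / adv_training_time(tau, 1, upsilon, alpha)
print(f"estimated speedup of SQA + FGSM over 100-step PGD: {speedup:.1f}x")
```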
4.5 VISUALIZING PENULTIMATE LAYERS
In this subsection, we analyze the penultimate layers of the network trained with our method, comparing with two full-precision networks: one with no defense and one with FGSM training. We use C&W attacks to craft adversaries with the parameters described in Section 4.1. We visualize the penultimate layers in two ways, at a high level and at a low level, by using t-SNE (van der Maaten & Hinton, 2008) and by plotting activation maps for both clean images and adversarial examples. First, Figure 2 shows t-SNE results from the penultimate layer of our network, where each point in the t-SNE plot is represented as an image. We select four classes to clearly show how the networks learn and what happens when adversarial noise is added. Here, we demonstrate that the full-precision network trained with FGSM does not correctly classify the classes under adversarial attack, as depicted in (B). Only (C), which is our method, shows clusters that are less broken than those of the other methods. Furthermore, in light of the fact that a robust classifier requires a more complicated decision boundary (Madry et al., 2018), our model appears to have learned such a boundary from the adversarial examples.
Second, we look closely into the penultimate layer at a low level by plotting each of the activations. Here, a point of an activation map stands for the mean value of the activations across about a thousand images per class. We found that the yellow spots, which are the highest values, stay in the same locations under adversarial attack, as depicted in (C), Figure 3. In other words, our method shows stable activation frequencies against the adversarial attacks, whereas training full-precision models with FGSM adversaries does not help to increase robustness, as shown in (B).
5 CONCLUSION
In this paper, we have found that SQA, a stochastic quantization in an activation function, prevents existing neural networks from overfitting during FGSM training. It provides stochastic randomness in quantization to learn a robust decision boundary against adversarial attacks with FGSM training alone. Our method not only shows dramatic improvements against one of the strongest white-box attacks, comparable to state-of-the-art PGD training, but also significantly reduces the computational cost. Through visualizations of the penultimate layers of our network, we demonstrate that the network learns strong adversaries without overfitting. We expect that SQA could be quickly and widely applied to other defense strategies because of its efficiency and flexibility. In future work, we plan to experiment on large-scale image datasets. | 1. What is the novelty of the proposed approach in the paper regarding quantization and adversarial training?
2. What are the strengths and weaknesses of the proposed approach compared to prior works, particularly in terms of performance and training time?
3. Do you have any concerns or suggestions regarding the comparisons and discussions in the paper, especially with regards to relevant works in the field? | Review | Review
The paper proposes to quantize activation outputs in FGSM training. The algorithm itself is not novel. The straight through approach for training quantized network has been used in previous papers, as also pointed out by the authors. The new thing is that the authors found that quantization of activation function improves robustness, and the approach can be naturally combined with FGSM adversarial training. Experimental results show comparable (and slightly worse) results compared to adversarial training with PGD, while the proposed approach is faster in training time.
I have the following questions/comments:
1. Why not do SQA with PGD-adversarial training? If SQA+FGSM performs similar to PGD training, SQA+PGD might perform even better.
2. There are several important papers missing in the discussion/comparisons:
- That quantization improves robustness has been reported in a previous paper: "Defend Deep Neural Networks Against Adversarial Examples via Fixed and Dynamic Quantized Activation Functions". How does the proposed algorithm compare with this paper?
- Adding stochastic noise in each layer has been used in some recent papers: "Towards Robust Neural Networks via Random Self-ensemble". It will be good to include into discussions.
3. I can't find the comparison between PGD-training and SQA on MNIST. Are they also comparable on MNIST? Showing results on more datasets will make the conclusion more convincing. If the benefit of the proposed approach is training time, showing the scalability on ImageNet will make the argument stronger. |
ICLR | Title
Stochastic Quantized Activation: To prevent Overfitting in Fast Adversarial Training
Abstract
Existing neural networks are vulnerable to "adversarial examples"—created by adding maliciously designed small perturbations to inputs to induce misclassification by the networks. The most investigated defense strategy is adversarial training, which augments training data with adversarial examples. However, applying single-step adversaries in adversarial training does not improve the robustness of the networks; instead, it can even cause the networks to overfit. In contrast to single-step training, multi-step training achieves state-of-the-art performance on MNIST and CIFAR-10, yet it needs a massive amount of time. Therefore, we propose a method, Stochastic Quantized Activation (SQA), that solves the overfitting problem in single-step adversarial training and quickly achieves robustness comparable to the multi-step approach. SQA attenuates adversarial effects by providing random selectivity to activation functions and allows the network to learn robustness with only single-step training. Throughout the experiments, our method demonstrates state-of-the-art robustness against one of the strongest white-box attacks, comparable to PGD training, but with much less computational cost. Finally, we visualize the learning process of the network with SQA to handle strong adversaries, which differs from existing methods.
1 INTRODUCTION
As Convolutional Neural Networks (CNNs) stand out as a solution to many real-world computer vision tasks (LeCun et al., 2015; Angelova et al., 2015; Levine et al., 2016; Litjens et al., 2017), achieving a certain level of robustness has become indispensable for security-sensitive systems, such as autonomous driving, robot vision, and identity authentication. However, recent studies (Szegedy et al., 2014; Goodfellow et al., 2015) have shown that existing CNNs are vulnerable to small perturbations of the input that are intentionally or adversarially designed to fool the system. The adversarial attack is a serious problem since these maliciously designed attacks have proven effective in physical-world scenarios, where inputs are obtained from the signals of cameras and other sensors (Kurakin et al., 2016; Evtimov et al., 2017). Another disconcerting feature of adversarial examples is their transferability across different models (Szegedy et al., 2014; Papernot et al., 2016; Liu et al., 2016), which enables black-box attacks. In other words, adversarial examples can be designed on a different model without having information about the target network.
The most studied defense strategy against adversarial attacks is adversarial training (Goodfellow et al., 2015; Kurakin et al., 2017; Tramèr et al., 2018; Madry et al., 2018), which increases robustness by augmenting training data with adversarial examples. Since adversarial training requires the model to train on adversarial examples in addition to the training data, the model consumes extra time to learn features of the examples via fine-tuning. Even though the model is trained on more examples, it might still be defenseless against new examples generated by a different attack due to the overfitting problem. Recently, Madry et al. (2018) have found that adversarial training on examples created via gradient descent with random restarts, Projected Gradient Descent (PGD) training, results in a universally and partially unbreakable model on MNIST and CIFAR-10. This method shows the state-of-the-art performance on MNIST and CIFAR-10 to the best of our knowledge, but the examples are created iteratively and the time increases proportionally to the number of steps. For instance, in our CIFAR-10 training, FGSM training on ResNet18 took less than 2 hours for 30 epochs; however, PGD training took about 30 hours for the same number of epochs. Thus, it is essential to find a universal method that is resistant against all of the attacks, with less computational cost.
Since the high-dimensional representations of neural networks give extreme complexity to the boundary of trained manifolds (Tanay & Griffin, 2016; Dube, 2018), we start from the idea of reducing the degrees of freedom available to the adversary. In this sense, we propose Stochastic Quantized Activation (SQA), which provides stochastic randomness to the output of an original activation and reduces the opportunity for the attacker to craft adversaries. The key advantage of SQA is that, combined with fast adversarial training (training with only FGSM examples), it allows the model to achieve robustness comparable to PGD training at much lower computational cost. In particular, although SQA is one of the obfuscated gradients defined by Athalye et al. (2018), iterative optimization-based methods do not successfully circumvent our defense. Besides, SQA can be combined with any deep learning model in a few lines of code, yet provides a certain level of robustness against adversarial attacks.
In this paper, we first explain the existing methods for adversarial attacks and defenses that we refer to in Section 2. We separate the existing defense strategies into two categories and analyze their strengths and weaknesses. In Section 3, we introduce the procedure of SQA, described in Algorithm 1. In Section 4, we show our experimental results on MNIST and CIFAR-10 and compare with existing defense systems. Lastly, we visualize the penultimate layer of our networks and compare how SQA with fast adversarial training learns differently from existing methods. Section 5 concludes the work. The contributions of this paper are as follows:
• We propose Stochastic Quantized Activation (SQA), which, combined with FGSM training, achieves a significant level of robustness comparable to state-of-the-art PGD adversarial training with much less computational cost.
• Due to the efficiency and flexibility of the proposed method, it can be quickly and widely applied to any existing deep neural network and combined with other types of defense strategies.
• We analytically demonstrate, at both high and low levels, how SQA makes the model robust against adversaries, by using t-SNE and plotting activation maps.
2 RELATED WORK
In this section, we investigate the existing methods of adversarial attacks and defenses that appear in the following subsections. First, we define adversarial examples with the notation formally used in this paper. Let x denote the input and y the prediction of the input from the DNN classifier f, y = f(x). Then, an adversarial example is crafted by adding a malicious noise η to the original input x, causing a prediction different from the true label y∗. The formal representation is as follows, where x′ is an adversarial example and ε is the noise level.
x′ = x + ε · η, where f(x′) ≠ y∗ (1)
2.1 GENERATING ADVERSARIAL EXAMPLES
Fast Gradient Sign Method (FGSM) is a fast single-step method to create adversarial examples, proposed by Goodfellow et al. (2015). The authors suggest that adversarial examples can be crafted because of the effects of linear summation in DNNs, and the algorithm is as follows.
x′ = x + ε · sign(∇x J(f(x), y∗)) (2)
Here J(f(x), y∗) is the loss between the output prediction f(x) and the true label y∗. However, calculating the loss based on the difference between predictions and true labels causes the label leaking effect (Kurakin et al., 2017); one simple way to prevent it is to use the prediction y instead of y∗. The intuition behind Equation 2 is to increase the loss J by perturbing the input x with the gradient of the loss, which pushes the prediction away from the extremum.
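A minimal PyTorch sketch of Equation 2, using the model's own prediction in place of y∗ to avoid label leaking as discussed above, and assuming inputs normalized to [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, eps):
    """Single-step FGSM (Eq. 2); the model's predicted label replaces the
    true label to avoid the label-leaking effect."""
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    y = logits.argmax(dim=1).detach()
    loss = F.cross_entropy(logits, y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()
```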
Projected Gradient Descent (PGD) is one of the strongest known white-box attacks (Madry et al., 2018). It is a multi-step variant of FGSM, meaning that it finds the adversarial perturbation ηn using the same equation as FGSM, but iteratively. What makes this attack stronger is that it starts from a random ε-uniform perturbation, clipped to the range of the normalized pixel values, [0, 1].
x′0 = Clipx(x + uniform(−ε, ε)), x′n+1 = Clipx,ε(x′n + α · sign(∇x J(f(x′n), y∗))) (3)
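Equation 3 corresponds to the following sketch, again assuming inputs in [0, 1]:

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y_true, eps, alpha, steps):
    """Multi-step PGD (Eq. 3) with a random eps-uniform start."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_true)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()
        # project back into the eps-ball around x and the valid pixel range
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0).detach()
    return x_adv
```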
Carlini & Wagner Attack (C&W Attack) is a strong optimization-based iterative attack proposed by Carlini & Wagner (2017). It uses Adam (Kingma & Ba, 2014) to optimize over the adversarial perturbation ηn using an auxiliary variable ωn, solving the equation below.
minimize ||ηn||p + c · f(xn + ηn), where ηn = (1/2)(tanh(ωn) + 1) − xn. (4)
The function f(·) is defined as
f(x) = max(Z(x)y∗ − maxi≠y∗ (Z(x)i), −κ), (5)
and we can determine the confidence with which the misclassification occurs by adjusting κ.
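The margin term f(x) of Equation 5 is straightforward to express on top of the logits Z(x); a sketch:

```python
import torch

def cw_margin(logits, y_true, kappa=0.0):
    """f(x) from Eq. 5: logit of the true class minus the largest other
    logit, floored at -kappa (larger kappa -> higher-confidence adversaries)."""
    true_logit = logits.gather(1, y_true.unsqueeze(1)).squeeze(1)
    others = logits.clone()
    others.scatter_(1, y_true.unsqueeze(1), float('-inf'))
    best_other = others.max(dim=1).values
    return torch.clamp(true_logit - best_other, min=-kappa)
```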
2.2 DEFENSIVE STRATEGY
Adversarial training increases robustness by augmenting training data with adversarial examples. Previous studies (Goodfellow et al., 2015; Kurakin et al., 2017; Tramèr et al., 2018) have shown that adversarially trained models improve classification accuracy when presented with adversarial examples. However, the intrinsic problem of this method is the high cost associated with additionally generating adversarial examples and patching them into a training batch. For this reason, practical adversarial training on a large-scale dataset such as ImageNet uses fast-generated adversarial examples produced by FGSM only. However, Madry et al. (2018) have shown that FGSM adversaries do not increase robustness, especially for large ε, since the network overfits to these adversarial examples. They instead suggest training the network with multi-step FGSM (FGSMk), i.e., PGD adversaries, which shows state-of-the-art performance on MNIST and CIFAR-10.
Obfuscated gradients make it hard to generate adversaries against a network by withholding useful gradients. Recently, Athalye et al. (2018) defined three types of obfuscated gradients: Shattered Gradients, Stochastic Gradients, and Exploding & Vanishing Gradients. Several defenses (Dhillon et al., 2018; Buckman et al., 2018; Song et al., 2018; Xie et al., 2018) rely on one of these gradient types, but Athalye et al. (2018) devised attacks that successfully circumvent them, driving 6 out of 7 defenses at ICLR 2018 to 0% accuracy. SQA can be considered as both shattered gradients and stochastic gradients. However, we found that our method does not overfit to the adversarial examples and shows robustness against different types of attacks, including the ones used to break obfuscated gradients. The next section explains the details of our method.
3 STOCHASTIC QUANTIZED ACTIVATION
Algorithm 1 Stochastic Quantized Activation
1: function FORWARD(hi, λ)
2:   gi ← (hi − min∀j⊆J hiJ) / (max∀j⊆J hiJ − min∀j⊆J hiJ) ∗ λ
3:   gi ← ⌊gi⌋ + Bernoulli(gi − ⌊gi⌋)
4:   gi ← (gi / λ) ∗ (max∀j⊆J hiJ − min∀j⊆J hiJ) + min∀j⊆J hiJ
5:   return gi
6: function BACKWARD(∂gi/∂hi)
7:   return ∂gi/∂hi
In this section, we introduce the concept of SQA, starting from a typical low-bit representation in DNNs as a prerequisite (Courbariaux et al., 2015). Then, we show the procedure of our quantization stochasticity. The difference between typical low-bit DNNs (Hubara et al., 2016a; Courbariaux et al., 2015; Hubara et al., 2016b) and our proposed method is that we quantize only the activations, leaving the weight vectors untouched. We found that this does not significantly slow down training with PyTorch (Paszke et al., 2017) but maintains a full-precision weight representation, which enables easier convergence than BNNs without additional training strategies.
BinaryConnect constrains the weights to either +1 or −1 during propagations (Courbariaux et al., 2015). Two types of binarization, deterministic and stochastic, are introduced. They are respectively described by the following equations.
wb = { +1 if w ≥ 0, −1 otherwise. (6)

wb = { +1 with probability p = σ(w), −1 with probability 1 − p, (7)

where σ(x) = clip((x + 1)/2, 0, 1) = max(0, min(1, (x + 1)/2)).
BNNs were originally designed to reduce the significant memory consumption and cost of propagating through full-precision networks. Recently, however, Galloway et al. (2018) showed another benefit of low-precision neural networks: improved robustness against some adversarial attacks.
Thus, we propose SQA, a stochastic activation function that introduces quantized threshold effects into vanilla CNNs, as described in Algorithm 1. The algorithm can be divided into three steps.
• Min-Max normalization with scaling
• Stochastic Quantization
• Inverse Min-Max normalization after rescaling
Let hi be a latent representation, the output of the ith convolutional layer after ReLU activation. We first perform min-max normalization, mapping hi to the range 0 to 1. Then we scale hi to the range 0 to λ by multiplying by a scale factor λ, which determines the level of quantization, from binary to quaternary in our experiments. In the next step, we stochastically quantize the scaled activation gi as presented in the equation below.
gi = ⌊gi⌋ + Bernoulli(gi − ⌊gi⌋) (8)
This rounds gi to one of the two nearest integers, either ⌊gi⌋ or ⌊gi⌋ + 1, with probabilities 1 − (gi − ⌊gi⌋) and gi − ⌊gi⌋, respectively. For instance, if gi = 1.7, then gi becomes 1 with probability 0.3 and 2 with probability 0.7. The final step is rescaling gi into the range of the original ReLU activation hi: gi is first divided by λ, and then inverse min-max normalization is applied, as presented in Algorithm 1.
Since it is impossible to find exact derivatives with respect to discretized activations, an alternative is to approximate them with a straight-through estimator (Bengio et al., 2013). The idea of a straight-through estimator is to set the incoming gradient of a threshold function equal to its outgoing gradient, ignoring the derivative of the threshold function itself. This is why we rescale gi to the original range of hi: we do not want the scale factors multiplied inside the activation function to affect the gradients passed through the straight-through estimator.
4 EXPERIMENT
4.1 DATASET AND IMPLEMENTATION DETAILS
In this experiment, we show the feasibility of our approach in several different settings on MNIST and CIFAR-10 using PyTorch (Paszke et al., 2017). We use Adversarial Box (Wang & Gavin Ding, 2018) to generate FGSM and PGD adversaries and implement C&W adversaries (l∞) based on Athalye et al. (2018). The results for MNIST and CIFAR-10 are shown in Sections 4.2 and 4.3, respectively.
Model Parameters For MNIST, we use a vanilla CNN as the baseline model, consisting of three convolutional layers with two fully-connected layers on top. Since there is a correlation between robustness and model capacity (Madry et al., 2018), we use two networks with different channel sizes, increasing the channels by a factor of 2. This results in networks with (16, 32, 64) and (64, 128, 256) filters, denoted as SMALL and LARGE in Table 1. We apply SQA on the first and second layers with λ = 1 and λ = 2, respectively. We use Stochastic Gradient Descent (SGD) with a learning rate of 0.1, momentum of 0.9, and weight decay of 5e-4. We decay the learning rate by a factor of 0.1 every 30 epochs over a total of 100 epochs.
For CIFAR-10, we use a ResNet model (He et al., 2015) as a baseline. We adopt ResNets with 34 and 101 layers, denoted as RES34 and RES101 in Table 2. An interesting property found when training on the CIFAR-10 dataset is that quantization with stochasticity achieves much higher accuracy than deterministic quantization. This seems reasonable, since stochasticity provides higher capacity to learn complex RGB images. We apply SQA on the output of the first layer of ResNet and its bottleneck module with λ = 1 and λ = 2, respectively. The same hyper-parameters as for MNIST are applied, except that we train for a total of 350 epochs and decay the learning rate by 0.1 every 150 epochs.
Attack Parameters Throughout the experiments, different l∞ intensity levels ε are applied to the attacks. For MNIST, ε = 0.2 and 0.3 are used for FGSM and C&W attacks to give strong adversarial perturbations. We choose 40 steps for C&W attacks. Also, we set ε = 0.2, a step size of 0.01, and 40 steps for PGD attacks. For CIFAR-10, ε = 4 and 8 are considered for the adversarial attacks. We choose 30 steps for C&W attacks. For PGD attacks we fix 7 steps and a step size of 2, with a random initial perturbation within ε = 8. Note that the ε values for MNIST are on the scale of (0, 1) and those for CIFAR-10 on (0, 255). Step sizes for the attacks are chosen to be consistent with Madry et al. (2018).
4.2 ATTACK ON MNIST
Quantization on Different Layers Since quantizing the weights or activations lowers the accuracy on clean images (Courbariaux et al., 2015; Hubara et al., 2016a), it is important to find where to put SQA modules in a network. Thus, we investigate layer-wise quantization, applying deterministic quantization to the first through third layers of the CNN. The result is shown in Figure 1. It is clear that applying quantization at earlier layers gives higher robustness. This observation further supports the argument of Liao et al. (2017) that a small perturbation in an image is amplified to a large perturbation in a higher-level representation, so that quantizing the activations in a lower-level representation gives more robustness. We further found empirically that binary quantization on the first layer and ternary quantization on the second layer provides less degradation in accuracy together with a fair amount of robustness.
SQA v.s. Full-Precision We explore the robustness of SQA against three types of adversarial attacks; the result is shown in Table 1. The networks are all trained with fast single-step adversaries, and we found two known but interesting properties in the experiments. First, FGSM training of the full-precision networks, denoted as SMALLfull and LARGEfull, makes them overfit to the adversaries. They show severely degraded accuracy, especially on PGD attacks, nearly close to 0. However, SQA models do not overfit to the adversaries. Even though SQA models show lower performance on FGSM attacks, they exhibit remarkably high accuracy on the other adversarial examples that they have not seen before. The second interesting fact is the correlation between robustness and model capacity. Madry et al. (2018) have shown that increasing model capacity helps to train the network successfully against strong adversaries. Our experiment also confirms this phenomenon. LARGESQA is stronger than SMALLSQA against FGSM attacks and more than ten times more robust against PGD attacks. This result shows that model capacity not only increases robustness against the adversaries that have been learned but also prevents overfitting to them.
4.3 ATTACK ON CIFAR10
SQA v.s. Full-Precision We performed experiments on CIFAR-10 to show the effectiveness of SQA on an RGB image dataset. We tried the same types of white-box attacks as in the MNIST experiments; the result is shown in Table 2. Instead of training vanilla networks, we adopt ResNet (He et al., 2015), since vanilla networks struggle to learn useful features on CIFAR-10. Two different ResNets are used to compare robustness with respect to model capacity, and we found the same phenomena as in the MNIST experiments. In other words, the SQA module helps to avoid overfitting to the FGSM adversaries, and the larger the capacity, the higher the robustness against different types of attacks.
SQA v.s. Other Existing Methods We compare our module, SQA, with recently proposed defenses, including the state of the art, Madry et al. (2018). We also include SAP, PixelDefend, and Thermometer (Dhillon et al., 2018; Song et al., 2018; Buckman et al., 2018), since they use stochastic or shattered gradients, which are among the obfuscated gradients to which our method belongs. Table 3 shows the performance 1 comparison against PGD and C&W attacks for l∞ (ε = 8). Note that the architectures of the defenses in Table 3 all differ, so it is impossible to compare the robustness exactly. We denote the architectures as RESNkW,C, where W stands for Wider ResNets, N is the depth, C is the channel size of the first layer, and k is the widen factor. As Athalye et al. (2018) claimed, our method is more robust against gradient-based PGD than against optimization-based C&W, pushing the state-of-the-art accuracy to 52% against PGD attacks. It also shows a fair amount of accuracy against C&W attacks, comparable to Adv. Training. This result has a dramatic impact in the sense that other methods based on obfuscated gradients almost completely fail to defend against these strong adversaries.
1 The performance of SAP, PixelDefend, and Thermometer is from Athalye et al. (2018).
4.4 TIME COMPLEXITY FOR ADVERSARIAL TRAINING
In this subsection, we explore the time complexity of adversarial training for both single-step and multi-step adversaries. Let τ be the time taken by forward and backward propagation in the neural network, κ the number of steps to find adversaries, and υ the remaining processing time, including data loading, weight updates, etc. Then, we can define the time complexity of adversarial training as follows,
TAdv.Training = (1 + κ) · τ + υ (9)
Then, considering α as the processing time of the SQA module and comparing SQA + FGSM training with PGD training, we have

(κ − 1)/2 · τ ≫ α (10)
As we can see in Table 4, SQA + FGSM training is almost 18 times faster than PGD training where κ is 100.
4.5 VISUALIZING PENULTIMATE LAYERS
In this subsection, we analyze the penultimate layers of the network trained with our method, comparing with two full-precision networks: one with no defense and one with FGSM training. We use C&W attacks to craft adversaries with the parameters described in Section 4.1. We visualize the penultimate layers in two ways, at a high level and at a low level, by using t-SNE (van der Maaten & Hinton, 2008) and by plotting activation maps for both clean images and adversarial examples. First, Figure 2 shows t-SNE results from the penultimate layer of our network, where each point in the t-SNE plot is represented as an image. We select four classes to clearly show how the networks learn and what happens when adversarial noise is added. Here, we demonstrate that the full-precision network trained with FGSM does not correctly classify the classes under adversarial attack, as depicted in (B). Only (C), which is our method, shows clusters that are less broken than those of the other methods. Furthermore, in light of the fact that a robust classifier requires a more complicated decision boundary (Madry et al., 2018), our model appears to have learned such a boundary from the adversarial examples.
Second, we look closely into the penultimate layer at a low level by plotting each of the activations. Here, a point of an activation map stands for the mean value of the activations across about a thousand images per class. We found that the yellow spots, which are the highest values, stay in the same locations under adversarial attack, as depicted in (C), Figure 3. In other words, our method shows stable activation frequencies against the adversarial attacks, whereas training full-precision models with FGSM adversaries does not help to increase robustness, as shown in (B).
5 CONCLUSION
In this paper, we have found that SQA, a stochastic quantization in an activation function, prevents existing neural networks from overfitting during FGSM training. It provides stochastic randomness in quantization to learn a robust decision boundary against adversarial attacks with FGSM training alone. Our method not only shows dramatic improvements against one of the strongest white-box attacks, comparable to state-of-the-art PGD training, but also significantly reduces the computational cost. Through visualizations of the penultimate layers of our network, we demonstrate that the network learns strong adversaries without overfitting. We expect that SQA could be quickly and widely applied to other defense strategies because of its efficiency and flexibility. In future work, we plan to experiment on large-scale image datasets. | 1. What is the main contribution of the paper regarding adversarial training?
2. What are the strengths of the proposed approach, particularly in its ability to generalize to unseen attacks?
3. What are the weaknesses of the paper, especially regarding experimental validation and comparisons with other methods?
4. How does the reviewer assess the clarity and interest level of the presentation, including figures and algorithms? | Review | Review
This paper proposes to use a stochastically quantized network combined with adversarial training to improve the robustness of models against adversarial examples. The main finding is that, compared to a full precision network, the quantized network can generalize to unseen adversarial attacks better while training only on FGSM-perturbed input. This provides a modest speedup over traditional adversarial training.
While the findings are certainly interesting, the method lacks experimental validation in certain aspects. The comparison with other adversarial training methods is not standardized across networks, making the efficiency claims questionable. Furthermore, I am uncertain whether the authors implemented expectation over transformations (EoT) for the C&W attack. Since the network produces randomized output, vanilla gradient descent against an adversarial loss is likely to fail. It is conceivable that by taking an average over gradients from different quantizations, the C&W adversary would be able to circumvent the defense better. I would be willing to reconsider my review if the authors can address the above weaknesses.
Pros:
- Surprising result showing that quantization leads to improved generalization to unseen attack methods.
Cons:
- Invalid comparison to other adversarial training techniques since the evaluated models are very different.
- Lack of evaluation against EoT adversary.
- Algorithm 1 is poorly presented. I'm sure there are better ways of expressing such a simple quantization scheme.
- Figures 2 and 3 are uninteresting. The fact that the model is robust against adversaries implies that the activations remain unchanged when presented with perturbed input. |
ICLR | Title
EXPLORING VULNERABILITIES OF BERT-BASED APIS
Abstract
Natural language processing (NLP) tasks, ranging from text classification to text generation, have been revolutionised by pretrained BERT models. This allows corporations to easily build powerful APIs by encapsulating fine-tuned BERT models. These BERT-based APIs are often designed to not only provide reliable service but also protect intellectual property and privacy-sensitive information of the training data. However, a series of privacy and robustness issues may still exist when a fine-tuned BERT model is deployed as a service. In this work, we first present an effective model extraction attack, where the adversary can practically steal a BERT-based API (the target/victim model). We then demonstrate: (1) how the extracted model can be further exploited to develop an effective attribute inference attack to expose sensitive information of the training data of the victim model; (2) how the extracted model can lead to highly transferable adversarial attacks against the victim model. Extensive experiments on multiple benchmark datasets under various realistic settings validate the potential privacy and adversarial vulnerabilities of BERT-based APIs.
1 INTRODUCTION
The emergence of Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) has revolutionised the natural language processing (NLP) field, leading to state-of-the-art performance on a wide range of NLP tasks with minimal task-specific supervision. In the meantime, with the increasing success of contextualised pretrained representations for transfer learning, powerful NLP models can be easily built by fine-tuning pretrained models like BERT or XLNet (Yang et al., 2019). Building NLP models on pretrained representations typically requires only several task-specific layers, or just a single feedforward layer, on top of BERT. To protect data privacy, system integrity and intellectual property (IP), commercial NLP models such as task-specific BERT models are often made indirectly accessible through pay-per-query prediction APIs (Krishna et al., 2019). This leaves model prediction as the only information an attacker can access.
Prior works have found that existing NLP APIs are still vulnerable to model extraction attacks, which reconstruct a copy of the remote NLP model based on carefully-designed queries and the outputs of the API (Krishna et al., 2019; Wallace et al., 2020). Pretrained BERT models further make it easier to apply model extraction attacks to specialised NLP models obtained by fine-tuning pretrained BERT models (Krishna et al., 2019). In addition to model extraction, it is important to ask the following two questions: 1) will the extracted model also leak sensitive information about the training data of the target model; and 2) can the extracted model expose more vulnerabilities of the target model (i.e., the black-box API)?
To answer the above two questions, in this work, we first launch a model extraction attack, where the adversary queries the target model with the goal of stealing it and turning it into a white-box model. With the extracted model, we further demonstrate that: 1) it is possible to infer sensitive information about the training data; and 2) the extracted model can be exploited to generate highly transferable adversarial attacks against the remote victim model behind the API. Our results highlight the risks of publicly-hosted NLP APIs being stolen and attacked if they are trained by fine-tuning BERT.
Contributions: First, we demonstrate that the extracted model can be exploited by an attribute inference attack to expose sensitive information about the original training data, leading to a significant privacy leakage. Second, we show that adversarial examples crafted on the extracted model are highly
transferable to the target model, exposing more adversarial vulnerabilities of the target model. Third, extensive experiments with the extracted model on benchmark NLP datasets highlight the potential privacy issues and adversarial vulnerabilities of BERT-based APIs. We also show that both attacks developed on the extracted model can evade the investigated defence strategies.
2 RELATED WORK
2.1 MODEL EXTRACTION ATTACK (MEA)
Model extraction attacks (also referred to as "stealing" or "reverse-engineering") have been studied both empirically and theoretically, for simple classification tasks (Tramèr et al., 2016), vision tasks (Orekondy et al., 2019), and NLP tasks (Krishna et al., 2019; Wallace et al., 2020). As opposed to stealing parameters (Tramèr et al., 2016), hyperparameters (Wang & Gong, 2018), architectures (Oh et al., 2019), training data information (Shokri et al., 2017) or decision boundaries (Tramèr et al., 2016; Papernot et al., 2017), in this work we attempt to create a local copy, or steal the functionality, of a black-box victim model (Krishna et al., 2019; Orekondy et al., 2019), i.e., a model that replicates the performance of the victim model as closely as possible. If the reconstruction is successful, the attacker has effectively stolen the intellectual property.
Furthermore, this extracted model could be used as a reconnaissance step to facilitate later attacks (Krishna et al., 2019). For instance, the adversary could use the extracted model to facilitate private information inference about the training data of the victim model, or to construct adversarial examples that will force the victim model to make incorrect predictions.
2.2 ATTRIBUTE INFERENCE ATTACK
Fredrikson et al. (2014) first proposed a model inversion attack on biomedical data. The goal is to infer some missing attributes of an input feature vector based on interaction with a trained ML model. Since deep neural networks have the ability to memorise arbitrary information (Zhang et al., 2017), private information can be memorised by BERT as well, which poses a threat of information leakage (Krishna et al., 2019). In NLP applications, the input text often provides sufficient clues to portray the author, such as gender, age, and other important attributes. For example, sentiment analysis tasks often have privacy implications for authors whose text is used to train models. Prior works (Coavoux et al., 2018) have shown that user attributes are easily detectable from online review data, as used extensively in sentiment analysis (Hovy et al., 2015). One might argue that sensitive information like gender, age, location and passwords is not explicitly included in model predictions. Nonetheless, model predictions are produced from the input text and can encode personal information which might be exploited for adversarial purposes, especially as modern deep learning models have more capacity than they need to perform well on their tasks (Zhang et al., 2017). The naive solution of removing protected attributes is insufficient: other features may be highly correlated with, and thus predictive of, the protected attributes (Pedreshi et al., 2008).
2.3 ADVERSARIAL TRANSFERABILITY AGAINST NLP SYSTEM
An important property of adversarial examples is their transferability (Szegedy et al., 2014; Goodfellow et al., 2015; Papernot et al., 2017). It has been shown that adversarial examples generated against one network can also successfully fool other networks (Liu et al., 2016; Papernot et al., 2017), especially adversarial image examples in computer vision. Similarly, in the NLP domain, adversarial examples that are designed to manipulate a substitute model and are also misclassified by the target model are considered transferable (Papernot et al., 2017; Ebrahimi et al., 2018b). Adversarial transferability against NLP systems remains largely unexplored. A few recent works have attempted to transfer adversarial examples to NLP systems (Sun et al., 2020; Wallace et al., 2020); however, it remains unclear how transferability works against BERT-based APIs, and whether transfer would succeed when the victim model and the substitute (extracted) model have different architectures.
3 ATTACKING BERT-BASED API
In this work, we consider an adversary attempting to steal or attack BERT-based APIs, either for financial gain or to exploit private information or model errors. As shown in Figure 1, the whole attack pipeline against BERT-based APIs can be summarised into two phases. In phase one (model extraction attack (MEA)), we first sample queries, label them by the victim API, and then train an extracted model on the resulting data. In phase two, we conduct attribute inference attack (AIA) and adversarial example transfer (AET) based on the extracted model. We empirically validate that the extracted model can help enhance privacy leakage and adversarial example transferability in Section 4.3 and Section 4.4.
We remark that our attack pipeline is applicable to many remote BERT-based APIs, as we assume: (a) the capabilities required are limited to observing model output by the APIs; (b) the number of queries is limited.
3.1 VICTIM MODEL: BERT-BASED API
Modern NLP systems are typically based on a pretrained BERT (Devlin et al., 2018; Liu et al., 2019a; Nogueira & Cho, 2019; Joshi et al., 2020). BERT produces rich natural language representations which transfer well to most downstream NLP tasks (sentiment analysis, topic classification, etc.). Modern NLP systems typically leverage a fine-tuning methodology: adding a few task-specific layers on top of the publicly available BERT base,1 and fine-tuning the whole model.
3.2 MODEL EXTRACTION ATTACK (MEA)
Model extraction attacks aim to steal an intellectual model from cloud services (Tramèr et al., 2016; Orekondy et al., 2019; Krishna et al., 2019; Wallace et al., 2020). In this attack, we assume the victim model is a commercially available black-box API. An adversary with black-box query access to the victim model attempts to reconstruct a local copy ("extracted model") of the victim model. In a nutshell, we perform a model extraction attack in a transfer learning setting, where both the adversary and the victim fine-tune a pretrained BERT. The goal is to extract a model with comparable accuracy to the victim model. Generally, MEA can be formulated as a two-step approach, as illustrated at the top of Figure 1:
1https://github.com/google-research/bert
1. Attacker crafts a set of inputs as queries (transfer set), then sends them to the victim model (BERT-based API) to obtain predictions;
2. Attacker reconstructs a copy of the victim model as an "extracted model" using the resulting query-prediction pairs.
Since the attacker does not have the training data of the target model, we apply a task-specific query generator to construct m queries {xi}_{i=1}^m to the victim model. For each xi, the target model returns a K-dimensional posterior probability vector yi ∈ [0, 1]^K with Σk yi^k = 1. The resulting dataset {(xi, yi)}_{i=1}^m is used to train the extracted model. Once the extracted model is obtained, the attacker no longer has to pay the provider of the original API for predictions on new data points.
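In code, the extraction step amounts to distilling the API's soft labels into a local model. A minimal sketch is given below; victim_api, query_generator, and extracted_model are placeholders for components the paper leaves unspecified, and inputs are assumed to be already tokenized.

```python
import torch
import torch.nn.functional as F

def extract(victim_api, query_generator, extracted_model, optimizer, m):
    """Train a local copy on (query, posterior) pairs returned by the API."""
    # step 1: label the transfer set with the victim's posterior probabilities
    transfer_set = []
    for _ in range(m):
        x = query_generator()            # an already-tokenized input batch
        with torch.no_grad():
            y_soft = victim_api(x)       # K-dim posterior from the black box
        transfer_set.append((x, y_soft))
    # step 2: distill the soft labels into the extracted model
    for x, y_soft in transfer_set:
        log_probs = F.log_softmax(extracted_model(x), dim=-1)
        loss = F.kl_div(log_probs, y_soft, reduction='batchmean')
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```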
3.3 ATTRIBUTE INFERENCE ATTACK (AIA)
Next, we investigate how to use the extracted model to aid attribute inference on the private training data of the victim model, i.e., an attribute inference attack (AIA) (Song & Raghunathan, 2020). We remark that AIA is different from inferring the attribute distribution as in model inversion attacks (Yeom et al., 2018). The intuition behind AIA is that the BERT representation generated by the extracted model can be used to infer sensitive attributes of the private training data of the victim model (Li et al., 2018b; Coavoux et al., 2018; Lyu et al., 2020b). Note that in our work, the only explicit information accessible to the attacker is the prediction given by the victim model for the chosen inputs, rather than the original BERT representation. We specifically exploit the BERT representation of the extracted model, as it encodes the most informative message for the follow-up classification. A more detailed description can be found in Appendix B.
3.4 ADVERSARIAL EXAMPLE TRANSFER (AET)
Due to the success of BERT-based models, numerous works have been proposed to evaluate the vulnerability of BERT-based models to adversarial attacks (Jin et al., 2019; Sun et al., 2020). However, most recent works on adversarial example transfer focus on the black-box setting (Gao et al., 2018; Ebrahimi et al., 2018a). In such a setting, the adversary attacks the model via query feedback only. To circumvent this issue, we leverage the transferability of adversarial examples: we first generate adversarial examples for our extracted model, then transfer them to the BERT-based API. The intuition lies in two facts: 1) the rationale of a good model should rely on the salient words; 2) the functional similarity between our extracted model and the victim model allows for the direct transfer of adversarial examples obtained via gradient-based attacks, which are able to locate the most informative words (Sun et al., 2020). Here our extracted model serves as a surrogate to craft adversarial examples in a white-box manner.
4 EXPERIMENTS AND ANALYSIS
4.1 NLP TASKS AND DATASETS
We extract models on four diverse NLP datasets that focus on two main tasks: sentiment analysis and topic classification. The four NLP datasets include TP-US from the Trustpilot Sentiment dataset (Hovy et al., 2015), the AG news corpus (Del Corso et al., 2005), the Blog posts dataset from the blog authorship corpus (Schler et al., 2006), and the YELP dataset (Zhang et al., 2015). Table 1 summarises the statistics of the used datasets. A more detailed description can be found in Appendix A.
4.2 MEA
To assess the functional similarity between the victim model and the extracted one, we compare the accuracy of the two models: closer accuracy indicates higher similarity. In line with prior work (Krishna et al., 2019), we first choose the size of the resulting transfer set (queries) to be comparable (e.g., 1x) to the size of the victim's training set, then scale up to 5x.
Attack Strategies We first study model extraction through simulated experiments: we train victim models, query them as if they were black-box APIs, and then train the extracted model to mimic the victim model. We assume that the attacker has access to the freely available pretrained BERT model used by the victim model.
Query Distribution To investigate how the data distribution of queries (PA) may impact the attack on a victim model trained on data from PV (c.f., Table 1), we conduct the following experiments.
1. We use the same architecture, hyperparameters, and the original data as the victim (All Same).
2. We use the same architecture and hyperparameters as the victim, but sample queries from a different distribution (Data Different).
The second scenario makes fewer assumptions and is more realistic and challenging, as the attacker may not know the target data distribution a priori. Therefore, in addition to the same data distribution as the victim, we additionally investigate query distributions PA sourced from the following corpora:
• Reviews data: Yelp and Amazon reviews dataset (Zhang et al., 2015). It is worth noting that we exclude the Yelp reviews dataset from the Yelp task to guarantee a fair evaluation.
• News data: CNN/DailyMail dataset (Hermann et al., 2015)
Regarding the MEA experiments, our general findings from Table 2 include: (1) using the same data (All Same) as queries achieves the best extraction performance, validating that the closeness of the domain between the victim training data and the queries is positively correlated with extraction quality; (2) queries from the same distribution can achieve comparable accuracies, and even outperform the victim models; we hypothesise this is due to the regularising effect of training on soft labels (Hinton et al., 2015); (3) our MEA is effective despite the fact that queries may come from different distributions. Using samples from different corpora (reviews and news) as queries, our MEA can still achieve 0.85-0.99× the victim models' accuracies when the number of queries varies in {1x, 5x}, and the extraction is more successful with 5x queries, as expected. This facilitates the follow-up AIA and AET. Even with small query budgets (0.1x and 0.5x), extraction is often successful. More results are available in Appendix C. We also noticed that AG news prefers news data, while reviews data is superior to news data on TP-US, Blog and Yelp. Intuitively, one can attribute this preference to genre similarity, i.e., news data is close to AG news, while distant from TP-US, Blog and Yelp. To study this phenomenon rigorously, we calculate the uni-gram and 5-gram overlap between test sets and different queries in the 1x setting. Table 3 corroborates that there is a positive correlation between accuracy and lexical similarity. From now on, unless otherwise mentioned, because of their effectiveness (c.f.,
Table 2), we will use news data as queries for AG news, and reviews data as queries for TP-US, Blog and Yelp.2
4.3 AIA
For AIA, we conduct our studies on TP-US, AG news and Blog datasets, as there is no matching demographic information for Yelp. AIA is appraised via the following metrics:
• For demographic variables (i.e., gender and age): 1 − X, where X is the average prediction accuracy of the attack models on these two variables.
• For named entities: 1 − F, where F is the F1 score between the ground truths and the predictions by the attackers on the presence of all named entities.
Following Coavoux et al. (2018); Lyu et al. (2020a), we denote the value of 1 − X or 1 − F as empirical privacy, i.e., the inverse accuracy or F1 score of the attacker, higher means better empirical privacy, i.e., lower attack performance.
We first randomly split each dataset in Table 1 into two halves. The first half (denoted as DV) is used to train a victim model, whereas the second half (denoted as DA) is reserved as the public data for training the AIA attack model. Given the extracted model from MEA, attackers can determine how to infer the private attributes from the BERT representation h of the extracted model over DA. Each attack model consists of a multi-layer feed-forward network and a binary classifier, which takes h as input and emits the predicted private attribute. Once the attack models are obtained, we measure the empirical privacy by the ability of the attack model to accurately predict the specific private attribute in DV. Apart from the standard three corpora used for MEA (c.f., Section 4.2), in AIA we also consider DA (2nd half) as queries, which is drawn from the same distribution as DV. It is worth noting that for AG news, we use the filtered AG news (c.f., Appendix A) with sensitive entity information for AIA.
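A sketch of one such attack model is given below; the hidden sizes and the 768-dimensional input (the usual BERT-base hidden size) are our assumptions, as the paper does not specify them.

```python
import torch.nn as nn

class AttributeAttacker(nn.Module):
    """Feed-forward network plus binary classifier that predicts a single
    private attribute from the extracted model's BERT representation h."""
    def __init__(self, hidden_dim=768):   # 768 = BERT-base hidden size
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 2),              # binary attribute, e.g. gender
        )

    def forward(self, h):
        return self.net(h)
```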
To gauge the private information leakage, we consider a majority-class prediction of each attribute as a baseline. To evaluate whether our extracted model can help enhance AIA, we also take the pretrained BERT without (w/o) fine-tuning as a baseline. Table 4 shows that, compared to the pretrained-only BERT, the attack model built on the BERT representation of the extracted model indeed largely enhances the attribute inference of the training data of the victim model — more than 4x as effective for AG news compared with the majority baseline, even when MEA is based on queries from a different data distribution. This implies that target model predictions inadvertently capture sensitive information about users, such as their gender, age and other important attributes, apart from the useful information for the main task (c.f., Table 2). By contrast, BERT (w/o fine-tuning) is a plain model that does not contain any information about the target model’s training data.

2 Empirically, we do not have access to the training data of the victim model.
Interestingly, compared with queries from the same distribution, Table 4 shows that queries from different distributions make AIA easier (see the best results, corresponding to the lower privacy protections, in bold in Table 4). We believe this counter-intuitive phenomenon is caused by the posterior probability, as the posterior probability for the same distribution is sharper than that for a different distribution.3 This argument is also confirmed in Section 5, in which we use a temperature coefficient τ at the softmax layer to control the sharpness of the posterior probability.
We speculate that the effectiveness of AIA is related to the undesired memorisation of the victim model, which can spread to the extracted model through the model predictions, incurring information leakage.
We further investigate which kind of attribute is more vulnerable, i.e., the relationship between the attribute distribution (histogram variance) and privacy leakage. We empirically found that, compared with attributes with higher variance, attributes with lower variance are harder to attack.4
4.4 AET
Since we have access to the parameters of the locally extracted model, we craft white-box adversarial examples on it and test whether such examples are transferable to the target model. We evaluate sample crafting using the metric of transferability, which refers to the percentage of adversarial examples transferring from the extracted model to the victim model. We use Blog, TP-US, AG news (full) and Yelp for AET.
How Do We Generate Natural Adversarial Examples? Following Sun et al. (2020), we first leverage the gradients of the gold labels w.r.t. the embeddings of the input tokens to find the most informative tokens. Then we corrupt the selected tokens with the following six sources of typos: 1) Insertion; 2) Deletion; 3) Swap; 4) Mistype: mistyping a word through the keyboard, such as “oh”→ “0h”; 5) Pronounce: wrongly typing due to the close pronunciation of the word, such as “egg”→ “agg”; 6) Replace-W: replacing the word with a frequent human keyboard typo based on behavioural statistics.5 Note that the above operations are constrained by the character distribution on the keyboard. This approach is denoted as adv-bert.
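A much simplified sketch of this procedure is shown below, assuming a HuggingFace-style sequence classifier (e.g., BertForSequenceClassification). For brevity, the six typo sources are reduced to a single character swap, the subword/word alignment is glossed over, and all names are illustrative rather than taken from the adv-bert code.

# Simplified adv-bert-style crafting: rank tokens by input-gradient magnitude,
# then corrupt the top-ranked tokens with a typo (here: a character swap).
import random
import torch

def most_informative_positions(model, embeddings, label, k=1):
    # embeddings: (1, seq_len, hidden) input embeddings of the example.
    # label: tensor of shape (1,) holding the gold class index.
    embeddings = embeddings.clone().detach().requires_grad_(True)
    logits = model(inputs_embeds=embeddings).logits
    loss = torch.nn.functional.cross_entropy(logits, label)
    loss.backward()
    scores = embeddings.grad.norm(dim=-1).squeeze(0)  # per-token gradient norm
    return scores.topk(k).indices.tolist()

def swap_typo(word):
    if len(word) < 2:
        return word
    i = random.randrange(len(word) - 1)
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def perturb(tokens, positions):
    return [swap_typo(t) if i in positions else t for i, t in enumerate(tokens)]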
To evaluate whether our extracted model is needed to mount transferable attacks, we also attack the victim model using black-box adversarial examples. Moreover, following Sun et al. (2020), we also experiment with a variant of adv-bert where the target tokens are randomly selected instead of chosen by the maximum gradients, namely random adv-bert. Compared with the adversarial examples crafted by the black-box and random adv-bert approaches, Table 5 shows that the adversarial examples crafted on our extracted model in a white-box manner make the target model more vulnerable to adversarial examples in terms of transferability — more than twice as effective in the best case. This validates that our extracted model, which is designed to be a high-fidelity imitation of the victim model, considerably enhances the adversarial example transferability, thus severely damaging the output integrity of the target model.

3 Please refer to Appendix C for the detailed analysis.
4 Please refer to Appendix C for details.
5 https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings
We examine potential factors that contribute to successful transferability. We found that collecting a larger number of queries contributes to better attack performance, i.e., 5x queries generally result in much better transferability compared with 1x. This implies that an extracted model with higher fidelity (closer to the victim model, c.f., Table 2) can considerably enhance the adversarial example transferability.
4.5 ARCHITECTURE MISMATCH
In practice, it is more likely that the adversary does not know the victim’s model architecture. A natural question is whether model extraction is still possible even when the extracted model and the victim model have different architectures. To study the influence of the architectural mismatch, we fix the architecture of the extracted model, while varying the victim model from BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019b) to XLNet (Yang et al., 2019). According to Table 6, when there is an architecture mismatch between the victim model and the extracted model, the efficacy of AIA and AET is reduced, as expected. However, the leakage of private information is still severe (c.f., the majority class in Table 4). Surprisingly, we observe that for AG news (full), MEA cannot benefit from a more accurate victim, which differs from the findings in Hinton et al. (2015). We conjecture this difference is due to the distribution mismatch between the training data of the victim model and the queries. We will conduct an in-depth study of this in the future.
5 DEFENCE
Although we primarily focus on the vulnerabilities of BERT-based APIs in this work, we briefly discuss several counter strategies the victim model may adopt to reduce the informativeness of prediction while minimising the overall drop in API performance (Shokri et al., 2017).
Hard label only. The posterior probability usually leaks more information from the victim model, thus the victim model can choose to return only the hard label.
Softening predictions. A temperature coefficient τ on the softmax layer manipulates the distribution of the posterior probability. A higher τ leads to a smoother probability distribution, whereas a lower one produces a sharper distribution. As τ approaches 0, the posterior probability collapses to a hard label.
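A minimal sketch of these two defences applied to the victim model’s output logits (the choice of PyTorch here is ours):

# Two defences on the API output: softened probabilities or hard label only.
import torch

def soften(logits, tau):
    # tau > 1 smooths the posterior; tau < 1 sharpens it towards a hard label.
    return torch.softmax(logits / tau, dim=-1)

def hard_label_only(logits):
    # Return only the index of the most confident class.
    return logits.argmax(dim=-1)

logits = torch.tensor([[2.0, 0.5, -1.0]])
print(soften(logits, tau=5.0))   # smoother distribution
print(soften(logits, tau=0.5))   # sharper, closer to a hard label
print(hard_label_only(logits))   # tensor([0])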
Table 7 indicates that although varying the softmax temperature cannot defend the victim model against MEA, it is an effective defence against AIA when τ = 0.5, i.e., closer to the hard label. Similarly, compared with no defence (ND), returning only the hard label can help mitigate all attacks to some extent.6
However, there is no defence that is effective against all our attacks (c.f., Table 2, Table 4, Table 5), as all these defences preserve the rank of the most confident label. Models can still be effectively stolen and exploited using just the hard label or the smoothed predictions returned by the black-box API. This further validates that the adversary only needs access to the victim model’s hard label, and does not always need access to the confidence scores for our attacks to be successful.

6 We observe similar behaviour for Yelp and Blog.
6 DISCUSSION
Understanding how well our attacks work in various settings is important for defenders to know how vulnerable their systems are. Extensive experiments in this paper indicate that the privacy and robustness of an NLP system depend on the model complexity as well as the task. For example, the privacy leakage of the victim model becomes more serious by inferring from the extracted model for AG news and Blog, while this phenomenon is less obvious for TP-US dataset (c.f., Table 4). In terms of robustness against adversarial example transferability, Blog is more vulnerable (c.f., Table 5).
Adversarial attacks focus more on the study of the robustness of a model. However, in a business context, we believe adversarial attacks can also be utilised for other purposes. For instance, if a business competitor manages to spot incorrect predictions, they can improve the robustness of their own model while launching an advertising campaign against the victim model with these adversarial examples. If a rival company directly mounts black-box adversarial attacks on the victim model, its owner can detect the suspicious querying, which involves many intensive, similar queries (Jin et al., 2019; Li et al., 2020; Garg & Ramakrishnan, 2020), and thereby ban the abnormal usage. Since the queries used for our model extraction are genuine instances generated on the Internet, they are unlikely to be suspended by the cloud services. As evidenced in Section 4.4, the victim model is vulnerable to our proposed AET.
Defence against all the attacks investigated in this work is a hard and open problem. An ideal defence should resist all possible attacks while striving to have a minimal impact on legitimate users of the model (Orekondy et al., 2019). While current defences are marginally effective, they may fail when adversaries adapt to the defence — sophisticated adversaries might anticipate these defences and develop simple modifications to their attacks to circumvent them (Krishna et al., 2019). We hope that this work highlights the need for more research into the development of effective countermeasures to defend against these attacks, or at least to increase the cost to adversaries.
7 CONCLUSIONS
This work goes far beyond model extraction from BERT-based APIs: we also identified that the extracted model can largely enhance the privacy leakage and the adversarial example transferability even in difficult scenarios (e.g., a limited query budget, or queries from different distributions). Extensive experiments based on representative NLP datasets and tasks under various settings demonstrate the effectiveness of our attacks against BERT-based APIs. We hope that our in-depth investigation can provide new insights and raise the awareness of the community for building more trustworthy BERT-based APIs. A number of avenues for further work are attractive. More broadly, we expect to extend our work to more complex NLP tasks, and to develop defences that can ensure privacy, robustness and accuracy simultaneously.
A DATASET DESCRIPTION
Trustpilot (TP) The Trustpilot Sentiment dataset (Hovy et al., 2015) contains reviews associated with a sentiment score on a five-point scale, and each review is associated with three attributes: gender, age and location, which are self-reported by users. The original dataset comprises reviews from different locations; however, in this paper we only derive TP-US for study. Following Coavoux et al. (2018), we extract examples containing information on both gender and age, and treat them as the private information. We categorise “age” into two groups: “under 34” (U34) and “over 45” (O45).
AG news We use AG news corpus (Del Corso et al., 2005). This task is to predict the topic label of the document, with four different topics in total. Following (Zhang et al., 2015; Jin et al., 2019), we use both “title” and “description” fields as the input document.
We use full AG news dataset for MEA and AET, which we call AG news (full). As AIA requires entity information, we use the corpus filtered by Coavoux et al. (2018)7, which we call AG news. The resultant AG news merely includes sentences with the five most frequent person entities, and each sentence contains at least one of these named entities. Thus, the attacker aims to identify these five entities as 5 independent binary classification tasks.
Blog posts (Blog) We derive a blog posts dataset (Blog) from the blog authorship corpus presented in Schler et al. (2006). We recycle the corpus preprocessed by Coavoux et al. (2018), which covers 10 different topics. Similar to TP-US, the private variables comprise the age and gender of the author. The age attribute is binned into two categories, “under 20” (U20) and “over 30” (O30).
Yelp Polarity (Yelp) The Yelp dataset is a document-level sentiment classification corpus (Zhang et al., 2015). The original dataset uses a five-point scale (1-5), while the polarised version assigns negative labels to ratings of 1 and 2 and positive labels to ratings of 4 and 5.
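A sketch of this binning is shown below; dropping 3-star reviews follows the standard Yelp polarity construction of Zhang et al. (2015) and is our assumption here.

# Map five-point Yelp ratings to polarity labels; neutral (3-star) reviews are dropped.
def polarity(rating):
    if rating in (1, 2):
        return 0  # negative
    if rating in (4, 5):
        return 1  # positive
    return None   # neutral, excluded

reviews = [("great food", 5), ("it was ok", 3), ("awful service", 1)]
polarised = [(text, polarity(r)) for text, r in reviews if polarity(r) is not None]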
B AIA ALGORITHM
The main algorithm for the Attribute Inference Attack (AIA) is shown in Algorithm 1. For each dataset, once the extracted model g′V is built, we query g′V with the available public data DA to collect the BERT representation h(xi) for each xi ∈ DA. For each sensitive attribute s, a specific inference model (c.f., Section 4.3) is trained on {(h(xi), si)}, in order to infer the private attributes of interest; in our case, they are gender, age and named entities (c.f., Table 1).
In more detail, in Algorithm 1, given DA, we take all the non-sensitive attributes xi as input, and the sensitive attribute si as the label, to train an AIA attack model. At test time, the attacker can feed the non-sensitive attributes of any input into the trained model to infer the sensitive attribute. In the case where the attacker obtains the non-sensitive attributes of any training record of the victim model, the attacker can successfully infer its sensitive attributes, thus causing privacy leakage of the victim model’s training data (c.f., Table 4, where we use the non-sensitive attributes of DV as test data and demonstrate the sensitive-attribute privacy leakage of DV ). Note that the non-sensitive attributes of the victim training data could be accessible to any attacker.
Algorithm 1 Attribute inference attack
1: Input: extracted model g′V ; labelled auxiliary data DA = {(xi, si)}; BERT representation layer h; non-sensitive attributes x∗
2: Query g′V with DA and collect {(h(xi), si) | (xi, si) ∈ DA}.
3: Train an inference model f on {(h(xi), si)}.
4: Query g′V with x∗ to get the target BERT representation h(x∗).
5: return f(h(x∗))

C ABLATION STUDY

Query Size Due to the budget limit, malicious users cannot issue massive numbers of requests. To investigate the attack performance of model extraction under the low-resource setting, we conduct two additional experiments, which utilise only 0.1x and 0.5x the size of the training data of the victim models, respectively. According to Table 8, although some datasets such as Blog suffer from a drastic drop, the overall performance of the extracted models is comparable to the victim models. In addition, distant domains exhibit significant degradation when compared to the close ones. For example, sampling 0.1x-5x queries from news data presents a more stable attack performance against the victim model trained on AG news than on Blog.

7 https://github.com/mcoavoux/pnet/tree/master/datasets.
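A minimal PyTorch sketch of Algorithm 1 is given below. We assume a HuggingFace-style extracted classifier whose encoder’s [CLS] hidden state serves as h; the names, layer sizes and training loop are illustrative.

# Sketch of Algorithm 1: collect (h(x), s) pairs from the extracted model and
# train an inference model f to predict the sensitive attribute.
import torch
import torch.nn as nn

class InferenceModel(nn.Module):
    def __init__(self, hidden=768, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden, 256), nn.ReLU(),
                                 nn.Linear(256, num_classes))

    def forward(self, h):
        return self.net(h)

def bert_representation(extracted_model, input_ids, attention_mask):
    # h: the [CLS] hidden state of the extracted model's encoder (an assumption).
    out = extracted_model.bert(input_ids=input_ids, attention_mask=attention_mask)
    return out.last_hidden_state[:, 0]

def train_inference_model(reps, attrs, epochs=10, lr=1e-3):
    f = InferenceModel(hidden=reps.size(-1))
    opt = torch.optim.Adam(f.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(f(reps), attrs)
        loss.backward()
        opt.step()
    return f  # f(h(x*)) then predicts the sensitive attribute of a new input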
Impact Factor on AIA In Section 5, we found that there is a correlation between the success of AIA and the temperature τ on the softmax layer. We conjecture that the causal factor is the sharpness of the posterior probability, i.e., if the model is less confident in its most likely prediction, then AIA is more likely to be successful. This speculation is confirmed by Figure 2, where a higher maximum posterior probability leads to higher empirical privacy.
Figure 3 and Table 9 indicate that AIA is also affected by the distribution of attributes. Attributes with higher variances cause more information leakage, i.e., lower empirical privacy. For example, for AG news, entities 2-4, with higher variances, result in lower empirical privacy, while entities 0-1 are more resistant to AIA. For TP-US and Blog, as age and gender exhibit similar distributions, the AIA performance gap across these two attributes is less obvious, as evidenced by the last two rows in Table 9.
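The histogram variance referred to here can be computed as, e.g., the variance of the attribute’s normalised class frequencies; this concrete definition is our assumption.

# Sketch: histogram variance of an attribute's empirical distribution.
import numpy as np

def histogram_variance(attribute_values):
    _, counts = np.unique(attribute_values, return_counts=True)
    freqs = counts / counts.sum()
    return float(np.var(freqs))

print(histogram_variance(["U34", "U34", "O45"]))  # skewed attribute, higher variance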
D ADVERSARIAL EXAMPLES
We provide several adversarial examples generated by adv-bert (Sun et al., 2020) in Table 10. Note that all these examples cause a misclassification on both extracted models and victim models.
[Figure 2: Correlation between maximum posterior probability and privacy for (a) AG news, (b) TP-US and (c) Blog. Each panel plots the mean/median maximum posterior probability against empirical privacy for the query settings All Same, Data Diff. (2nd half), Data Diff., Data Diff. (temp=0.5) and Data Diff. (temp=5).]
AG news            entity 0   entity 1   entity 2   entity 3   entity 4
All Same              15.61      15.10       7.71       6.95       5.49
Data Diff. (news)     14.79      12.38       3.84       5.33       2.02

1. What is the main contribution of the paper regarding the pipeline proposed for attacking and stealing sensitive information from a BERT-based API service?
2. What are the strengths and weaknesses of the assumptions made in the experiment settings?
3. How effective are the model inversion and adversarial transfer attacks in the proposed pipeline?
4. What are the limitations of the stealing method used in the pipeline, particularly in comparison to conventional distillation methods?
5. Are there any potential issues with the assumption of using the same pre-trained BERT parameters for both victim and stealer models? If so, how would using different pre-trained BERT parameters or sizes affect the results? | Review | Review
Overview
The authors propose a pipeline to attack and steal sensitive information from a BERT-based API service, which can subsequently be used to perform adversarial attacks on the victim model by crafting white-box adversarial samples on the stolen model.
The pipeline can be summarized as follows:
Using distillation to train (steal) a model from the API.
Conduct a model inversion attack on the stolen model from step 1 to expose sensitive information about the training data.
Create adversarial samples for the stolen model in step 1 and use them to attack the original API.
Some of the assumptions in the experimental settings are too strong and far from real-world conditions, but the idea of using this pipeline to conduct model inversion and adversarial transfer attacks is very interesting.
Pros
The pipeline proposed by the authors is very insightful. The experimental results also show the effectiveness of the model inversion and adversarial attacks.
Cons
The assumption in the Model Extraction Attack part is too strong. The authors use the same pre-trained BERT parameters for both the victim and the stealer model. However, in real practice we are not able to know which pre-trained BERT parameter set was used for fine-tuning, let alone obtain the pre-trained model. What if we use different pre-trained BERT parameters? What if we use a pre-trained BERT of a different size (number of layers, hidden dimension, etc.)?
Besides, the stealing method is just conventional distillation. Although the authors claim three differences between their method and distillation, namely (1) the goal, (2) the accessibility of the original data, and (3) the accessibility of hard labels, only the first one is appropriately justified. For (2), distillation is also broadly used in transfer learning without access to the original data. For (3), distillation with only soft labels is also very popular and useful, from conventional distillation for compression to self-distillation.
I'd like to hear why the authors assume that the stealer has the same pre-trained BERT when attacking, and I am also curious about the results of using a different pre-trained BERT model. I might change the rating if the authors address these questions.
1. What is the main contribution of the paper regarding model extraction attacks and attribute inference attacks?
2. What are the strengths of the proposed approach, particularly in its novelty and differentiation from prior works?
3. What are the weaknesses of the paper, especially regarding its assumptions, limitations, and motivation?
4. Do you have any questions regarding the paper's experiments and their relation to real-world scenarios?
5. How does the reviewer assess the impact of the paper's findings, and what are the potential implications for NLP systems? | Review | Review
################################
Summary:
This paper presents a model extraction attack (MEA) for BERT-based models that are hosted behind an API. Using the model obtained in this step, the work aims to subsequently demonstrate attribute inference attacks (AIA) to expose sensitive information of the underlying data used during fine-tuning and adversarial example transfer (AET) that can be used to attack the hosted model.
################################
Reasons for score:
Overall, I lean toward reject. The underlying idea is interesting and timely, and core to this interest is that "the adversary can steal a BERT-based API (the victim model), without knowing the victim model's architecture, parameters or the training data distribution." As demonstrated, a substantial portion of the architecture (BERT) is known and the exploration of fine-tuning as the only mechanism for tailoring the model (rather than continual pretraining) limits potential impact.
################################
Strengths:
Broad interest. The underlying ideas are of general interest, especially given recent examples of language models hosted behind APIs. The notion that they can be efficiently reproduced from that API and that they may in turn leak training information is an emerging concern.
Clear differentiation from prior work. In particular, the section on comparison to knowledge distillation is helpful in grounding the setting for experimentation.
################################
Weaknesses:
Limitations. There is an implicit assumption that the models being hosted behind APIs are fine-tuned BERT models. Limitations of this should be more explicitly discussed. Many works in specific domains (e.g., legal, biomedical, etc.) appear to rely on continual pretraining to integrate sensitive data rather than fine-tuning toward a single task. Others even appear to train these models from scratch on data. It's unclear from this paper how common the case of fine-tuned BERT models behind APIs is.
Motivation of AET. The motivation of adversarial attacks against a pay-per-query API are unclear. Yes, it's possible to cause the API to create incorrect predictions, but why is that problematic for the owner of the model? It's clearly undesirable with respect to creating robust models, but as presented it's unclear why this is problematic.
Impact. Similar to the point above, the assertion that "modern NLP systems typically leverage the fine-tuning methodology by adding a few task-specific layers on top of the publicly available BERT base" is not substantiated by this work or by citation. While BERT has certainly become abundant, many recent advances are either not BERT-based (though perhaps the underlying transformer architecture) or do more than fine-tuning.
Knowledge of black-box model. While a stated goal is that a knowledge of the architecture and training data is not required, the experiments leverage a knowledge of the architecture (BERT) and appear to share an architecture for layers used during fine-tuning.
################################
Questions:
Given the positioning of "stealing" a model, how many queries are required to obtain an approximate model? How many are required if knowledge of the previously issued queries is known?
Can you provide pointers to models that are BERT-based and fine-tuned for a specific task only? |
ICLR | Title
EXPLORING VULNERABILITIES OF BERT-BASED APIS
Abstract
Natural language processing (NLP) tasks, ranging from text classification to text generation, have been revolutionised by pretrained BERT models. This allows corporations to easily build powerful APIs by encapsulating fine-tuned BERT models. These BERT-based APIs are often designed to not only provide reliable service but also protect intellectual properties or privacy-sensitive information of the training data. However, a series of privacy and robustness issues may still exist when a fine-tuned BERT model is deployed as a service. In this work, we first present an effective model extraction attack, where the adversary can practically steal a BERT-based API (the target/victim model). We then demonstrate: (1) how the extracted model can be further exploited to develop effective attribute inference attack to expose sensitive information of the training data of the victim model; (2) how the extracted model can lead to highly transferable adversarial attacks against the victim model. Extensive experiments on multiple benchmark datasets under various realistic settings validate the potential privacy and adversarial vulnerabilities of BERT-based APIs.
1 INTRODUCTION
The emergence of Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) has revolutionised the natural language processing (NLP) field, leading to state-of-the-art performance on a wide range of NLP tasks with minimal task-specific supervision. In the meantime, with the increasing success of contextualised pretrained representations for transfer learning, powerful NLP models can be easily built by fine-tuning the pretrained models like BERT or XLNet (Yang et al., 2019). Building NLP models on pretrained representations typically requires only several task-specific layers or just a single feedforward layer on top of BERT. To protect data privacy, system integrity and Intellectual Property (IP), commercial NLP models such as task-specific BERT models are often made indirectly accessible through pay-per-query prediction APIs (Krishna et al., 2019). This leaves the model prediction as the only information an attacker can access.
Prior works have found that existing NLP APIs are still vulnerable to model extraction attack, which reconstructs a copy of the remote NLP model based on carefully-designed queries and the outputs of the API (Krishna et al., 2019; Wallace et al., 2020). Pretrained BERT models further make it easier to apply model extraction attack to specialised NLP models obtained by fine-tuning pretrained BERT models (Krishna et al., 2019). In addition to model extraction, it is important to ask the following two questions: 1) does the extracted model also leak sensitive information about the training data of the target model; and 2) can the extracted model cause further vulnerabilities of the target model (i.e. the black-box API)?
To answer the above two questions, in this work, we first launch a model extraction attack, where the adversary queries the target model with the goal to steal it and turn it into a white-box model. With the extracted model, we further demonstrate that: 1) it is possible to infer sensitive information about the training data; and 2) the extracted model can be exploited to generate highly transferable adversarial attacks against the remote victim model behind the API. Our results highlight the risks of publicly-hosted NLP APIs being stolen and attacked if they are trained by fine-tuning BERT.
Contributions: First, we demonstrate that the extracted model can be exploited by an attribute inference attack to expose sensitive information about the original training data, leading to a significant privacy leakage. Second, we show that adversarial examples crafted on the extracted model are highly
transferable to the target model, exposing more adversarial vulnerabilities of the target model. Third, extensive experiments with the extracted model on benchmark NLP datasets highlight the potential privacy issues and adversarial vulnerabilities of BERT-based APIs. We also show that both attacks developed on the extracted model can evade the investigated defence strategies.
2 RELATED WORK
2.1 MODEL EXTRACTION ATTACK (MEA)
Model extraction attacks (also referred to as “stealing” or “reverse-engineering”) have been studied both empirically and theoretically, for simple classification tasks (Tramèr et al., 2016), vision tasks (Orekondy et al., 2019), and NLP tasks (Krishna et al., 2019; Wallace et al., 2020). As opposed to stealing parameters (Tramèr et al., 2016), hyperparameters (Wang & Gong, 2018), architectures (Oh et al., 2019), training data information (Shokri et al., 2017) and decision boundaries (Tramèr et al., 2016; Papernot et al., 2017), in this work, we attempt to create a local copy or steal the functionality of a black-box victim model (Krishna et al., 2019; Orekondy et al., 2019), that is, a model that replicates the performance of the victim model as closely as possible. If reconstruction is successful, the attacker has effectively stolen the intellectual property.
Furthermore, this extracted model could be used as a reconnaissance step to facilitate later attacks (Krishna et al., 2019). For instance, the adversary could use the extracted model to facilitate private information inference about the training data of the victim model, or to construct adversarial examples that will force the victim model to make incorrect predictions.
2.2 ATTRIBUTE INFERENCE ATTACK
Fredrikson et al. (2014) first proposed model inversion attack on biomedical data. The goal is to infer some missing attributes of an input feature vector based on the interaction with a trained ML model. Since deep neural networks have the ability to memorise arbitrary information (Zhang et al., 2017), the private information can be memorised by BERT as well, which poses a threat of information leakage (Krishna et al., 2019). In NLP applications, the input text often provides sufficient clues to portray the author, such as gender, age, and other important attributes. For example, sentiment analysis tasks often have privacy implications for authors whose text is used to train models. Prior works (Coavoux et al., 2018) have shown that user attributes are easily detectable from online review data, as used extensively in sentiment analysis (Hovy et al., 2015). One might argue that sensitive information like gender, age, location and passwords is not explicitly included in model predictions. Nonetheless, since model predictions are produced from the input text, they can encode personal information that might be exploited for adversarial purposes, especially as modern deep learning models have more capacity than they need to perform well on their tasks (Zhang et al., 2017). The naive solution of removing protected attributes is insufficient: other features may be highly correlated with, and thus predictive of, the protected attributes (Pedreshi et al., 2008).
2.3 ADVERSARIAL TRANSFERABILITY AGAINST NLP SYSTEM
An important property of adversarial examples is their transferability (Szegedy et al., 2014; Goodfellow et al., 2015; Papernot et al., 2017). It has been shown that adversarial examples generated against one network can also successfully fool other networks (Liu et al., 2016; Papernot et al., 2017), especially adversarial image examples in computer vision. Similarly, in the NLP domain, adversarial examples that are designed to manipulate the substitute model and are also misclassified by the target model are considered transferable (Papernot et al., 2017; Ebrahimi et al., 2018b). Adversarial transferability against NLP systems remains largely unexplored. A few recent works have attempted to transfer adversarial examples to NLP systems (Sun et al., 2020; Wallace et al., 2020); however, it is unclear how the transferability works against BERT-based APIs, and whether the transfer would succeed when the victim model and the substitute (extracted) model have different architectures.
3 ATTACKING BERT-BASED API
In this work, we consider an adversary attempting to steal or attack BERT-based APIs, either for financial gain or to exploit private information or model errors. As shown in Figure 1, the whole attack pipeline against BERT-based APIs can be summarised into two phases. In phase one (model extraction attack (MEA)), we first sample queries, label them by the victim API, and then train an extracted model on the resulting data. In phase two, we conduct attribute inference attack (AIA) and adversarial example transfer (AET) based on the extracted model. We empirically validate that the extracted model can help enhance privacy leakage and adversarial example transferability in Section 4.3 and Section 4.4.
We remark that our attack pipeline is applicable to many remote BERT-based APIs, as we assume: (a) the capabilities required are limited to observing model output by the APIs; (b) the number of queries is limited.
3.1 VICTIM MODEL: BERT-BASED API
Modern NLP systems are typically based on a pretrained BERT (Devlin et al., 2018; Liu et al., 2019a; Nogueira & Cho, 2019; Joshi et al., 2020). BERT produces rich natural language representations which transfer well to most downstream NLP tasks (sentiment analysis, topic classification, etc.). Modern NLP systems typically leverage the fine-tuning methodology by adding a few task-specific layers on top of the publicly available BERT base,1 and fine-tune the whole model.
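To make the assumed victim concrete, the following is a minimal sketch of such a fine-tuned classifier in PyTorch with the Hugging Face transformers library; the label count, training texts and gold labels are illustrative placeholders, not the paper's actual setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Task-specific head on top of BERT base; 4 labels as in AG news topic classification.
victim = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4)
optimizer = torch.optim.AdamW(victim.parameters(), lr=2e-5)

texts = ["stocks rally on strong earnings", "local team wins the championship"]
labels = torch.tensor([2, 1])  # hypothetical gold topics

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
out = victim(**batch, labels=labels)  # cross-entropy loss computed internally
optimizer.zero_grad()
out.loss.backward()
optimizer.step()  # one fine-tuning step over the whole model
```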
3.2 MODEL EXTRACTION ATTACK (MEA)
Model extraction attack aims to steal an intellectual model from cloud services (Tramèr et al., 2016; Orekondy et al., 2019; Krishna et al., 2019; Wallace et al., 2020). In this attack, we assume the victim model is a commercially available black-box API. An adversary with black-box query access to the victim model attempts to reconstruct a local copy (“extracted model”) of the victim model. In a nutshell, we perform model extraction attack in a transfer learning setting, where both the adversary and the victim model fine-tune a pretrained BERT. The goal is to extract a model with comparable accuracy to the victim model. Generally, MEA can be formulated as a two-step approach, as illustrated by the top figure in Figure 1:
1https://github.com/google-research/bert
1. Attacker crafts a set of inputs as queries (transfer set), then sends them to the victim model (BERT-based API) to obtain predictions;
2. Attacker reconstructs a copy of the victim model as an “extracted model” by using the collected query-prediction pairs.
Since the attacker does not have training data for the target model, we apply a task-specific query generator to construct $m$ queries $\{x_i\}_{i=1}^{m}$ to the victim model. For each $x_i$, the target model returns a $K$-dimensional posterior probability vector $y_i \in [0, 1]^K$ with $\sum_{k=1}^{K} y_i^k = 1$. The resulting dataset $\{(x_i, y_i)\}_{i=1}^{m}$ is used to train the extracted model. Once the extracted model is obtained, the attacker no longer has to pay the provider of the original API for predictions on new data points.
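A hedged sketch of this two-step procedure under the paper's simulated setting is shown below; the local `victim` model stands in for the black-box API, and the toy transfer set and distillation-style soft-label loss are illustrative choices rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
victim = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4)     # stands in for the black-box API
extracted = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4)
opt = torch.optim.AdamW(extracted.parameters(), lr=2e-5)

# Step 1: label a transfer set with the victim's K-dim posteriors.
transfer_set = ["markets fell sharply today", "the film was a delight"]
enc = tok(transfer_set, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    y = torch.softmax(victim(**enc).logits, dim=-1)  # y_i in [0,1]^K

# Step 2: train the extracted model to match those soft labels.
log_p = F.log_softmax(extracted(**enc).logits, dim=-1)
loss = F.kl_div(log_p, y, reduction="batchmean")
opt.zero_grad()
loss.backward()
opt.step()
```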
3.3 ATTRIBUTE INFERENCE ATTACK (AIA)
Next, we investigate how to use the extracted model to aid the attribute inference of the private training data of the victim model, i.e., attribute inference attack (AIA) (Song & Raghunathan, 2020). We remark that AIA is different from inferring attribute distribution as in model inversion attack (Yeom et al., 2018). The intuition behind AIA is that the BERT representation generated by the extracted model can be used to infer the sensitive attribute of the private training data of the victim model (Li et al., 2018b; Coavoux et al., 2018; Lyu et al., 2020b). Note that in our work, the only explicit information accessible to the attacker is the model prediction returned by the victim model for the chosen inputs, rather than the original BERT representation. We specifically exploit the BERT representation of the extracted model, as it encodes the most informative message for the follow-up classification. A more detailed description is given in Appendix B.
3.4 ADVERSARIAL EXAMPLE TRANSFER (AET)
Due to the success of BERT-based models, numerous works have been proposed to evaluate the vulnerability of BERT-based models to adversarial attacks (Jin et al., 2019; Sun et al., 2020). However, most recent works for adversarial example transfer focus on the black-box setting (Gao et al., 2018; Ebrahimi et al., 2018a). In such a setting, the adversary attacks the model via the query feedback only. To circumvent this issue, we leverage the transferability of adversarial examples: we first generate adversarial examples for our extracted model, then transfer them to the BERT-based APIs. The intuition lies in two facts: 1) the rationale of a good model should rely on the salient words; 2) the functional similarity between our extracted model and the victim model allows for the direct transfer of adversarial examples obtained via gradient-based attacks, which are able to locate the most informative words (Sun et al., 2020). Here our extracted model serves as a surrogate to craft adversarial examples in a white-box manner.
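One plausible realisation of the gradient-based step is to rank tokens by the gradient norm of the loss with respect to their embeddings on the extracted model, as sketched below; the example sentence and gold label are hypothetical.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
extracted = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4)

enc = tok("the service was absolutely wonderful", return_tensors="pt")
embeds = extracted.get_input_embeddings()(enc["input_ids"])
embeds.retain_grad()  # keep the gradient on this non-leaf tensor
out = extracted(inputs_embeds=embeds,
                attention_mask=enc["attention_mask"],
                labels=torch.tensor([3]))  # hypothetical gold label
out.loss.backward()

saliency = embeds.grad.norm(dim=-1).squeeze(0)  # one score per token
tokens = tok.convert_ids_to_tokens(enc["input_ids"][0])
ranked = sorted(zip(tokens, saliency.tolist()), key=lambda t: -t[1])
print(ranked[:3])  # most informative tokens, candidates for corruption
```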
4 EXPERIMENTS AND ANALYSIS
4.1 NLP TASKS AND DATASETS
We extract models on four diverse NLP datasets that focus on two main tasks: sentiment analysis and topic classification. The four NLP datasets include TP-US from Trustpilot Sentiment dataset (Hovy et al., 2015), AG news corpus (Del Corso et al., 2005), Blog posts dataset from the blog authorship corpus (Schler et al., 2006), and YELP dataset (Zhang et al., 2015). Table 1 summarises the statistics of the used datasets. A more detailed description is given in Appendix A.
4.2 MEA
To assess the functional similarity between the victim model and the extracted one, we compare the accuracy of the two models, i.e., closer accuracies indicate higher similarity. In line with prior work (Krishna et al., 2019), we first choose the size of the resulting transfer set (queries) to be comparable (e.g., 1x) to the size of the victim's training set, then scale up to 5x.
Attack Strategies We first study model extraction through simulated experiments: we train victim models, query them as if they are black-box APIs, and then train the extracted model to mimic the victim model. We assume that the attacker has access to the freely available pretrained BERT model used by the victim model.
Query Distribution To investigate how the data distribution of queries (PA) may impact the attack on the victim model trained on data from PV (c.f., Table 1), we conduct the following experiments.
1. We use the same architecture, hyperparameters, and the original data as the victim (All Same).
2. We use the same architecture and hyperparameters as the victim, but sample queries from different distribution (Data Different).
The second scenario makes fewer assumptions and is more realistic and challenging, as the attacker may not know the target data distribution a priori. Therefore, in addition to the same data distribution as the victim, we additionally investigate the query distribution PA sourced from the following corpora:
• Reviews data: Yelp and Amazon reviews dataset (Zhang et al., 2015). It is worth noting that we exclude the Yelp reviews dataset from the Yelp task to guarantee a fair evaluation.
• News data: CNN/DailyMail dataset (Hermann et al., 2015)
Regarding the experiments of MEA, our general findings from Table 2 include: (1) using the same data (All Same) as queries achieves the best extraction performance, validating that the closeness of the domain between the victim training data and the queries is positively correlated with the extraction; (2) using the same data can achieve comparable accuracies and even outperform the victim models; we hypothesise this is due to the regularising effect of training on soft labels (Hinton et al., 2015); (3) our MEA is effective despite the fact that queries may come from different distributions. Using samples from different corpora (review and news) as queries, our MEA can still achieve 0.85-0.99× the victim models' accuracies when the number of queries varies in {1x,5x}, and the extraction is more successful with 5x queries as expected. This facilitates the follow-up AIA and AET. Even with small query budgets (0.1x and 0.5x), extraction is often successful. More results are available in Appendix C. We also noticed that AG news prefers news data, while reviews data is superior to news data on TP-US, Blog and Yelp. Intuitively, one can attribute this preference to the genre similarity, i.e., news data is close to AG news, while distant from TP-US, Blog and Yelp. To rigorously study this phenomenon, we calculate the uni-gram and 5-gram overlap between the test sets and the different queries in the 1x setting. Table 3 corroborates that there is a positive correlation between accuracy and lexical similarity. From now on, unless otherwise mentioned, because of their effectiveness (c.f.,
Table 2), we will use news data as queries for AG news, and reviews data as queries for TP-US, Blog and Yelp.2
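As a rough illustration of the lexical-overlap computation behind Table 3, the sketch below measures the fraction of test-set n-grams covered by a query corpus; whitespace tokenisation and this particular overlap definition are simplifying assumptions, since the paper does not specify its exact formula.

```python
def ngrams(texts, n):
    grams = set()
    for t in texts:
        toks = t.lower().split()
        grams.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return grams

def overlap(test_texts, query_texts, n):
    a, b = ngrams(test_texts, n), ngrams(query_texts, n)
    return len(a & b) / max(len(a), 1)  # fraction of test n-grams covered

test = ["stocks rally as earnings beat expectations"]
queries = ["earnings beat expectations across the board"]
print(overlap(test, queries, 1), overlap(test, queries, 5))
```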
4.3 AIA
For AIA, we conduct our studies on TP-US, AG news and Blog datasets, as there is no matching demographic information for Yelp. AIA is appraised via the following metrics:
• For demographic variables (i.e., gender and age): 1 − X, where X is the average prediction accuracy of the attack models on these two variables.
• For named entities: 1 − F, where F is the F1 score between the ground truths and the predictions by the attackers on the presence of all named entities.
Following Coavoux et al. (2018); Lyu et al. (2020a), we denote the value of 1 − X or 1 − F as empirical privacy, i.e., the inverse accuracy or F1 score of the attacker, higher means better empirical privacy, i.e., lower attack performance.
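A small sketch of how these empirical-privacy numbers could be computed with scikit-learn is given below; the attacker predictions and ground truths are toy values.

```python
from sklearn.metrics import accuracy_score, f1_score

# Toy attacker predictions vs. ground truth.
gender_true, gender_pred = [0, 1, 1, 0], [0, 1, 0, 0]
age_true, age_pred = [1, 1, 0, 0], [1, 0, 0, 1]
x = (accuracy_score(gender_true, gender_pred)
     + accuracy_score(age_true, age_pred)) / 2
print("demographic empirical privacy (1 - X):", 1 - x)

entity_true, entity_pred = [1, 0, 1, 1], [1, 1, 0, 1]
print("entity empirical privacy (1 - F):",
      1 - f1_score(entity_true, entity_pred))
```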
We first randomly split each dataset in Table 1 into two halves. The first half (denoted as DV) is used to train a victim model, whereas the second half (denoted as DA) is specifically reserved as the public data for the training of the AIA attack model. On the extracted model from MEA, attackers can determine how to infer the private attributes from the BERT representation h of the extracted model over DA. Each attack model consists of a multi-layer feed-forward network and a binary classifier, which takes h as input and emits the predicted private attribute. Once the attack models are obtained, we measure the empirical privacy by the ability of the attack model to accurately predict the specific private attribute in DV. Apart from the standard three corpora used for MEA (c.f., Section 4.2), in AIA, we also consider DA (2nd half) as queries, which is derived from the same distribution as DV. It is worth noting that for AG news, we use the filtered AG news (c.f., Appendix A) with sensitive entity information for AIA.
To gauge the private information leakage, we consider a majority class prediction of each attribute as a baseline. To evaluate whether our extracted model can help enhance AIA, we also take the pretrained BERT without (w/o) fine-tuning as a baseline. Table 4 shows that compared to the pretrained-only BERT, the attack model built on the BERT representation of the extracted model indeed largely enhances the attribute inference of the training data of the victim model — more than 4x more effective for AG news compared with the majority baseline, even when MEA is based on queries from a different data distribution. This implies that target model predictions inadvertently capture sensitive information about users, such as their gender, age, and other important attributes, apart from the useful
2Empirically, we do not have access to the training data of the victim model.
information for the main task (c.f., Table 2). By contrast, BERT (w/o fine-tuning) is a plain model that did not contain any information about the target model training data.
Interestingly, compared with queries from the same distribution, Table 4 shows that queries from different distributions make AIA easier (see the best results corresponding to the lower privacy protections in bold in Table 4). We believe this counter-intuitive phenomenon is caused by the posterior probability, as the posterior probability of the same distribution is sharper than that of a different distribution.3 This argument can also be confirmed from Section 5, in which we use a temperature coefficient τ at the softmax layer to control the sharpness of the posterior probability.
We speculate that the effectiveness of AIA is related to the undesired deep model memorisation of the victim model, which can be spread to the extracted model through model prediction, incurring information leakage.
We further investigate which kind of attribute is more vulnerable, i.e., the relationship between attribute distribution (histogram variance) and privacy leakage. We empirically found that, compared with attributes with higher variance, attributes with lower variance are harder to attack.4
4.4 AET
Since we have access to the parameters of the locally extracted model, we craft white-box adversarial examples on it and test whether such examples are transferable to the target model. We evaluate sample crafting using the metric of transferability, which refers to the percentage of adversarial examples transferring from the extracted model to the victim model. We use Blog, TP-US, AG news (full) and Yelp for AET.
How We Generate Natural Adversarial Examples? Following Sun et al. (2020), we first leverage the gradients of the gold labels w.r.t. the embeddings of the input tokens to find the most informative tokens. Then we corrupt the selected tokens with the following six sources of typos: 1) Insertion; 2) Deletion; 3) Swap; 4) Mistype: Mistyping a word through the keyboard, such as “oh”→ “0h”; 5) Pronounce: Wrongly typing due to the similar pronunciation of the word, such as “egg”→ “agg”; 6) Replace-W: Replace the word with a frequent human behavioural keyboard typo based on the statistics.5 Note that the above operations are constrained by the character distribution on the keyboard. This approach is denoted as adv-bert.
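A hedged sketch of a few of these corruption operations is given below; the keyboard-neighbour map is a tiny illustrative excerpt rather than a full layout, and the Pronounce and Replace-W operations, which need external resources, are omitted.

```python
import random

KEY_NEIGHBOURS = {"o": "0ip", "e": "wr3", "a": "qsz"}  # partial, illustrative

def insertion(w, i):
    return w[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + w[i:]

def deletion(w, i):
    return w[:i] + w[i + 1:]

def swap(w, i):
    if i + 1 >= len(w):
        return w
    return w[:i] + w[i + 1] + w[i] + w[i + 2:]

def mistype(w, i):
    # Replace the character with a keyboard neighbour, e.g. "oh" -> "0h".
    return w[:i] + random.choice(KEY_NEIGHBOURS.get(w[i], w[i])) + w[i + 1:]

word = "wonderful"  # a selected (salient) token
i = random.randrange(len(word))
print(random.choice([insertion, deletion, swap, mistype])(word, i))
```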
To evaluate whether our extracted model is needed to mount transferable attacks, we also attack it using black-box adversarial examples. Moreover, following Sun et al. (2020), we also experiment with a variant of adv-bert, where the target tokens are randomly selected instead of chosen by maximum gradients, namely random adv-bert. Compared with
the adversarial examples crafted by black-box and random adv-bert approaches, Table 5 shows that the adversarial examples crafted on our extracted model in a white-box manner make the target model
3Please refer to the Appendix C for the detailed analysis. 4Please refer to the Appendix C for the detail. 5https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings
more vulnerable to adversarial examples in terms of transferability — more than twice effective in the best case. This validates that our extracted model, which is designed to be a high-fidelity imitation of the victim model, considerably enhances the adversarial example transferability, thus severely damaging the output integrity of the target model.
We examine potential factors that contribute to the successful transferability. We found that collecting a larger number of queries contributes to a better attack performance, i.e., 5x queries generally results in much better transferability compared with 1x. This implies that the extracted model with higher fidelity (closer to the victim model, c.f., Table 2) can considerably enhance the adversarial example transferability.
4.5 ARCHITECTURE MISMATCH
In practice, it is more likely that the adversary does not know the victim’s model architecture. A natural question is whether model extraction is still possible even when the extracted models and the victim models have different architectures. To study the influence of the architectural mismatch, we fix the architecture of the extracted model, while varying the victim model from BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019b) to XLNET (Yang et al., 2019). According to Table 6, when there is an architecture mismatch between the victim model and the extracted model, the efficacy of AIA and AET is alleviated as expected. However, the leakage of the private information is still severe (c.f., the majority class in Table 4). Surprisingly, we observe that for AG news (full), MEA cannot benefit from a more accurate victim, which is different from the findings in Hinton et al. (2015). We conjecture such difference is ascribed to the distribution mismatch between the training data of the victim model and the queries. We will conduct an in-depth study on this in the future.
5 DEFENCE
Although we primarily focus on the vulnerabilities of BERT-based APIs in this work, we briefly discuss several counter strategies the victim model may adopt to reduce the informativeness of prediction while minimising the overall drop in API performance (Shokri et al., 2017).
Hard label only. The posterior probability usually leaks more information from the victim model, thus the victim model can choose to only return the hard label.
Softening predictions. A temperature coefficient τ on softmax layer manipulates the distribution of the posterior probability. A higher τ leads to smoother probability, whereas a lower one produces a sharper distribution. When τ is approaching 0, the posterior probability becomes a hard label.
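A minimal sketch of this softening defence: the API rescales its logits by τ before the softmax, so τ > 1 smooths the returned posterior while τ < 1 sharpens it towards the hard label; the logits below are toy values.

```python
import torch

def api_response(logits, tau=1.0):
    # tau > 1 smooths the posterior; tau < 1 sharpens it towards a hard label.
    return torch.softmax(logits / tau, dim=-1)

logits = torch.tensor([[2.0, 0.5, -1.0, -1.5]])  # toy victim logits
for tau in (0.5, 1.0, 5.0):
    probs = api_response(logits, tau).squeeze(0).tolist()
    print(tau, [round(p, 3) for p in probs])
```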
Table 7 indicates that although varying temperature on softmax cannot defend the victim model against MEA, it is an effective defensive approach to AIA when τ = 0.5, i.e., closer to the hard label. Similarly, compared with ND, hard label can help mitigate all attacks to some extent.6
However, there is no defence that is effective against all our attacks (c.f., Table 2, Table 4, Table 5), as all these defences preserve the rank of the most confident label. Models can still be effectively stolen
6We observe the similar behaviours for Yelp and Blog.
and exploited using just the hard label or the smoothed predictions returned by the black-box API. This further validates that the adversary only needs to have access to the victim model’s hard label, and does not always need to have access to the confidence scores for our attacks to be successful.
6 DISCUSSION
Understanding how well our attacks work in various settings is important for defenders to know how vulnerable their systems are. Extensive experiments in this paper indicate that the privacy and robustness of an NLP system depend on the model complexity as well as the task. For example, the privacy leakage of the victim model becomes more serious by inferring from the extracted model for AG news and Blog, while this phenomenon is less obvious for TP-US dataset (c.f., Table 4). In terms of robustness against adversarial example transferability, Blog is more vulnerable (c.f., Table 5).
Adversarial attacks focus more on the study of the robustness of a model. However, under the context of business, we believe adversarial attacks can also be utilised for other purposes. For instance, if a business competitor manages to spot incorrect predictions, they can improve the robustness of their model while launching an advertising campaign against the victim model with these adversarial examples. If a rival company directly leverages black-box adversarial attacks on the victim model, its owner can detect the suspicious querying, which involves intensive similar queries (Jin et al., 2019; Li et al., 2020; Garg & Ramakrishnan, 2020), thereby banning the abnormal usage. Since queries used for our model extraction are genuine instances generated on the Internet, it is unlikely to be suspended by the cloud services. As evidenced in Section 4.4, the victim model is vulnerable to our proposed AET.
Defence against all our investigated attacks in this work is a hard and open problem. An ideal defence should resist against all the possible attacks while striving to have a minimal impact on legitimate users of the model (Orekondy et al., 2019). While current defences are marginally effective, they may fail when adversaries adapt to the defence — sophisticated adversaries might anticipate these defences and develop simple modifications to their attacks to circumvent these defences (Krishna et al., 2019). We hope that this work highlights the need for more research in the development of effective countermeasures to defend against these attacks, or at least to increase the cost of adversaries.
7 CONCLUSIONS
This work goes far beyond model extraction from BERT-based APIs: we also identified that the extracted model can largely enhance the privacy leakage and adversarial example transferability even in difficult scenarios (e.g., limited query budget, queries from different distributions). Extensive experiments based on representative NLP datasets and tasks under various settings demonstrate the effectiveness of our attacks against BERT-based APIs. We hope that our in-depth investigation can provide new insights and raise the awareness of the community for building more trustworthy BERT-based APIs. A number of avenues for further work are attractive. More broadly, we expect to extend our work to more complex NLP tasks, and develop defences that can ensure privacy, robustness, and accuracy simultaneously.
A DATASET DESCRIPTION
Trustpilot (TP) Trustpilot Sentiment dataset (Hovy et al., 2015) contains reviews associated with a sentiment score on a five point scale, and each review is associated with 3 attributes: gender, age and location, which are self-reported by users. The original dataset is comprised of reviews from different locations, however in this paper, we only derive TP-US for study. Following Coavoux et al. (2018), we extract examples containing information of both gender and age, and treat them as the private information. We categorise “age” into two groups: “under 34” (U34) and “over 45” (O45).
AG news We use AG news corpus (Del Corso et al., 2005). This task is to predict the topic label of the document, with four different topics in total. Following (Zhang et al., 2015; Jin et al., 2019), we use both “title” and “description” fields as the input document.
We use full AG news dataset for MEA and AET, which we call AG news (full). As AIA requires entity information, we use the corpus filtered by Coavoux et al. (2018)7, which we call AG news. The resultant AG news merely includes sentences with the five most frequent person entities, and each sentence contains at least one of these named entities. Thus, the attacker aims to identify these five entities as 5 independent binary classification tasks.
Blog posts (Blog) We derive a blog posts dataset (Blog) from the blog authorship corpus presented (Schler et al., 2006). We recycle the corpus preprocessed by Coavoux et al. (2018), which covers 10 different topics. Similar to TP-US, the private variables are comprised of the age and gender of the author. And the age attribute is binned into two categories, “under 20” (U20) and “over 30” (O30).
Yelp Polarity (Yelp) Yelp dataset is a document-level sentiment classification (Zhang et al., 2015). The original dataset is in a five point scale (1-5), while the polarised version assigns negative labels to the rating of 1 and 2 and positive ones to 4 and 5.
B AIA ALGORITHM
The main algorithm for Attribute Inference Attack (AIA) is shown in Algorithm 1. For each dataset, once the extracted model g′V is built, we query g′V with the available public data DA to collect the BERT representation h(xi) for each xi ∈ DA. For each sensitive attribute s, a specific inference model (c.f., Section 4.3) is trained on {(h(xi), si)} in order to infer the private attributes of interest; in our case, they are gender, age and named entities (c.f., Table 1).
In more detail, in Algorithm 1, given DA, we take all the non-sensitive attributes xi as input, and the sensitive attribute si as the label to train an AIA attack model. At test time, the attacker can feed the non-sensitive attributes of any input into the trained model to infer the sensitive attribute. In the case where the attacker obtains the non-sensitive attributes of any training record of the victim model, the attacker can successfully infer its sensitive attributes, thus causing privacy leakage of the victim model's training data (c.f., Table 4, where we use the non-sensitive attributes of DV as test data and demonstrate the sensitive-attribute privacy leakage of DV). Note that the non-sensitive attributes of the victim training data could be accessible to any attacker.
C ABLATION STUDY
Query Size Due to the budget limit, malicious users cannot issue massive requests. To investigate the attack performance of model extraction under the low-resource setting, we conduct two additional experiments, which only utilise 0.1x and 0.5x the size of the training data of the victim models respectively. According to Table 8, although some datasets such as Blog suffer from a drastic drop, the overall performance of the extracted models is comparable to the victim models. In addition, distant domains exhibit significant degradation when compared to the close ones. For example, sampling 0.1x-5x queries from news data presents a more stable attack performance against the victim model trained on AG news than against the one trained on Blog.
7https://github.com/mcoavoux/pnet/tree/master/datasets.
Algorithm 1 Attribute inference attack
1: Input: extracted model g′V, labelled auxiliary data DA = {(xi, si)}, BERT representation layer h, non-sensitive attributes x∗
2: Query g′V with DA and collect {(h(xi), si) | (xi, si) ∈ DA}.
3: Train an inference model f on {(h(xi), si)}.
4: Query g′V with x∗ to get the target BERT representation h(x∗).
5: return f(h(x∗))
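The following is a minimal PyTorch sketch of Algorithm 1, using the [CLS] vector of the extracted model's last hidden layer as h; the attribute labels, texts and inference-model architecture are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
extracted = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=4)

def h(texts):
    """BERT representation of the extracted model: the [CLS] vector."""
    enc = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = extracted(**enc, output_hidden_states=True)
    return out.hidden_states[-1][:, 0]  # shape (batch, 768)

# D_A: public texts with a known binary sensitive attribute (toy example).
texts_A = ["review written by author one", "review written by author two"]
s_A = torch.tensor([0, 1])

f = nn.Sequential(nn.Linear(768, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(f.parameters(), lr=1e-3)
loss = nn.functional.cross_entropy(f(h(texts_A)), s_A)
opt.zero_grad()
loss.backward()
opt.step()

# Infer the sensitive attribute from a target record's non-sensitive text x*.
print(f(h(["some text from the victim's training set"])).argmax(dim=-1))
```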
Impact Factor on AIA In Section 6, we found that there is a correlation between the success of AIA and the temperature τ on the softmax layer. We conjecture that the causal factor is the sharpness of the posterior probability, i.e., if the model is less confident in its most likely prediction, then AIA is more likely to be successful. This speculation is confirmed by Figure 2, where a higher posterior probability leads to higher empirical privacy.
Figure 3 and Table 9 indicate that AIA is also affected by the distribution of attributes. Attributes with higher variances cause more information leakage, i.e., lower empirical privacy. For example, for AG news, entities 2-4, which have higher variances, result in lower empirical privacy, while entities 0-1 are more resistant to AIA. For TP-US and Blog, as age and gender exhibit similar distributions, the AIA performance gap across these two attributes is less obvious, as evidenced by the last two rows in Table 9.
D ADVERSARIAL EXAMPLES
We provide several adversarial examples generated by adv-bert (Sun et al., 2020) in Table 10. Note that all these examples cause a misclassification on both extracted models and victim models.
Figure 2: Correlation between maximum posterior probability and privacy. Panels: (a) AG news, (b) TP-US, (c) Blog. Each panel plots mean and median privacy under the settings All Same, Data Diff. (2nd half), Data Diff., Data Diff. (temp=0.5) and Data Diff. (temp=5).
AG news            entity 0  entity 1  entity 2  entity 3  entity 4
All Same              15.61     15.10      7.71      6.95      5.49
Data Diff. (news)     14.79     12.38      3.84      5.33      2.02
| 1. What is the main contribution of the paper regarding model extraction attacks on neural network-based models?
2. What are the concerns regarding the proposed method, particularly in terms of its applicability in real-world scenarios?
3. How does the reviewer assess the effectiveness of the attribute inference attack and the transferability of adversarial attacks using the extracted model?
4. Are there any questions regarding the supporting evidence or references provided in the paper for its claims?
5. How does the reviewer evaluate the similarity measurement between the victim model and the extracted model, and its impact on the success of the attack? | Review | Review
The paper is motivated by a challenging problem in deploying neural network-based models for sensitive domains, and research in this direction is essential for making such models usable in those domains. The paper presents a model extraction attack, where the adversary can steal a BERT-based API (i.e. the victim model) without knowing the victim model's architecture, parameters or the training data distribution. In the model extraction attack, the adversary queries the target model with the goal of stealing it and turning it into a white-box model. They demonstrated using simulated experiments how the extracted model can be exploited to develop an effective attribute inference attack to expose sensitive information of the training data. They claimed that the extracted model can lead to highly transferable adversarial attacks against the original model (victim model).
The model extraction step of the proposed method is the main concern for me. Conclusions made from simulated experiments on the model extraction attack might not hold for a real experiment. The simulated experiments make both the victim model and the extracted model accessible, and thus measuring functional similarity is fairly easy. However, without knowledge of the victim model and with a limited query budget, the simulated experiment might not resemble a real scenario. Some explanations with real scenarios would make the claim more realistic.
Some thoughts:
Re: “Modern NLP systems are typically based on a pre-trained BERT. ”: provide references or evidence to support the statement.
Re: “Model extraction attack aims to steal an intellectual model from cloud services.”: provide references or evidence to support the statement.
Re: “Most existing adversarial attacks on BERT are white-box settings, requiring knowledge of either the model internals (e.g., model architecture, hyperparameters) or training data.”: provide references or evidence to support the statement.
Re: “The intuition lies in the fact that the similarity of our extracted model and the victim model allows for direct transfer of adversarial examples obtained via gradient-based attacks.” — The BERT part is the same for both the victim and the extracted model, but the rest is still unknown; how does the complexity of the similarity measurement increase in a real scenario?
Re: “We measure the accuracy (on the same held-out test set for evaluation purposes) between the outputs of the victim model and the extracted model to assess their functional similarity.” — Can this be arbitrarily true by accident? Is there a robust way that we can use to measure the similarity? |
ICLR | Title
EXPLORING VULNERABILITIES OF BERT-BASED APIS
Abstract
Natural language processing (NLP) tasks, ranging from text classification to text generation, have been revolutionised by pretrained BERT models. This allows corporations to easily build powerful APIs by encapsulating fine-tuned BERT models. These BERT-based APIs are often designed to not only provide reliable service but also protect intellectual properties or privacy-sensitive information of the training data. However, a series of privacy and robustness issues may still exist when a fine-tuned BERT model is deployed as a service. In this work, we first present an effective model extraction attack, where the adversary can practically steal a BERT-based API (the target/victim model). We then demonstrate: (1) how the extracted model can be further exploited to develop effective attribute inference attack to expose sensitive information of the training data of the victim model; (2) how the extracted model can lead to highly transferable adversarial attacks against the victim model. Extensive experiments on multiple benchmark datasets under various realistic settings validate the potential privacy and adversarial vulnerabilities of BERT-based APIs.
1 INTRODUCTION
The emergence of Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2018) has revolutionised the natural language processing (NLP) field, leading to state-of-the-art performance on a wide range of NLP tasks with minimal task-specific supervision. In the meantime, with the increasing success of contextualised pretrained representations for transfer learning, powerful NLP models can be easily built by fine-tuning the pretrained models like BERT or XLNet (Yang et al., 2019). Building NLP models on pretrained representations typically requires only several task-specific layers or just a single feedforward layer on top of BERT. To protect data privacy, system integrity and Intellectual Property (IP), commercial NLP models such as task-specific BERT models are often made indirectly accessible through pay-per-query prediction APIs (Krishna et al., 2019). This leaves the model prediction as the only information an attacker can access.
Prior works have found that existing NLP APIs are still vulnerable to model extraction attack, which reconstructs a copy of the remote NLP model based on carefully-designed queries and the outputs of the API (Krishna et al., 2019; Wallace et al., 2020). Pretrained BERT models further make it easier to apply model extraction attack to specialised NLP models obtained by fine-tuning pretrained BERT models (Krishna et al., 2019). In addition to model extraction, it is important to ask the following two questions: 1) does the extracted model also leak sensitive information about the training data of the target model; and 2) can the extracted model cause further vulnerabilities of the target model (i.e. the black-box API)?
To answer the above two questions, in this work, we first launch a model extraction attack, where the adversary queries the target model with the goal to steal it and turn it into a white-box model. With the extracted model, we further demonstrate that: 1) it is possible to infer sensitive information about the training data; and 2) the extracted model can be exploited to generate highly transferable adversarial attacks against the remote victim model behind the API. Our results highlight the risks of publicly-hosted NLP APIs being stolen and attacked if they are trained by fine-tuning BERT.
Contributions: First, we demonstrate that the extracted model can be exploited by an attribute inference attack to expose sensitive information about the original training data, leading to a significant privacy leakage. Second, we show that adversarial examples crafted on the extracted model are highly
transferable to the target model, exposing more adversarial vulnerabilities of the target model. Third, extensive experiments with the extracted model on benchmark NLP datasets highlight the potential privacy issues and adversarial vulnerabilities of BERT-based APIs. We also show that both attacks developed on the extracted model can evade the investigated defence strategies.
2 RELATED WORK
2.1 MODEL EXTRACTION ATTACK (MEA)
Model extraction attacks (also referred to as “stealing” or “reverse-engineering”) have been studied both empirically and theoretically, for simple classification tasks (Tramèr et al., 2016), vision tasks (Orekondy et al., 2019), and NLP tasks (Krishna et al., 2019; Wallace et al., 2020). As opposed to stealing parameters (Tramèr et al., 2016), hyperparameters (Wang & Gong, 2018), architectures (Oh et al., 2019), training data information (Shokri et al., 2017) and decision boundaries (Tramèr et al., 2016; Papernot et al., 2017), in this work, we attempt to create a local copy or steal the functionality of a black-box victim model (Krishna et al., 2019; Orekondy et al., 2019), that is, a model that replicates the performance of the victim model as closely as possible. If reconstruction is successful, the attacker has effectively stolen the intellectual property.
Furthermore, this extracted model could be used as a reconnaissance step to facilitate later attacks (Krishna et al., 2019). For instance, the adversary could use the extracted model to facilitate private information inference about the training data of the victim model, or to construct adversarial examples that will force the victim model to make incorrect predictions.
2.2 ATTRIBUTE INFERENCE ATTACK
Fredrikson et al. (2014) first proposed model inversion attack on biomedical data. The goal is to infer some missing attributes of an input feature vector based on the interaction with a trained ML model. Since deep neural networks have the ability to memorise arbitrary information (Zhang et al., 2017), the private information can be memorised by BERT as well, which poses a threat of information leakage (Krishna et al., 2019). In NLP applications, the input text often provides sufficient clues to portray the author, such as gender, age, and other important attributes. For example, sentiment analysis tasks often have privacy implications for authors whose text is used to train models. Prior works (Coavoux et al., 2018) have shown that user attributes are easily detectable from online review data, as used extensively in sentiment analysis (Hovy et al., 2015). One might argue that sensitive information like gender, age, location and passwords is not explicitly included in model predictions. Nonetheless, since model predictions are produced from the input text, they can encode personal information that might be exploited for adversarial purposes, especially as modern deep learning models have more capacity than they need to perform well on their tasks (Zhang et al., 2017). The naive solution of removing protected attributes is insufficient: other features may be highly correlated with, and thus predictive of, the protected attributes (Pedreshi et al., 2008).
2.3 ADVERSARIAL TRANSFERABILITY AGAINST NLP SYSTEM
An important property of adversarial examples is their transferability (Szegedy et al., 2014; Goodfellow et al., 2015; Papernot et al., 2017). It has been shown that adversarial examples generated against one network can also successfully fool other networks (Liu et al., 2016; Papernot et al., 2017), especially adversarial image examples in computer vision. Similarly, in the NLP domain, adversarial examples that are designed to manipulate the substitute model and are also misclassified by the target model are considered transferable (Papernot et al., 2017; Ebrahimi et al., 2018b). Adversarial transferability against NLP systems remains largely unexplored. A few recent works have attempted to transfer adversarial examples to NLP systems (Sun et al., 2020; Wallace et al., 2020); however, it is unclear how the transferability works against BERT-based APIs, and whether the transfer would succeed when the victim model and the substitute (extracted) model have different architectures.
3 ATTACKING BERT-BASED API
In this work, we consider an adversary attempting to steal or attack BERT-based APIs, either for financial gain or to exploit private information or model errors. As shown in Figure 1, the whole attack pipeline against BERT-based APIs can be summarised into two phases. In phase one (model extraction attack (MEA)), we first sample queries, label them by the victim API, and then train an extracted model on the resulting data. In phase two, we conduct attribute inference attack (AIA) and adversarial example transfer (AET) based on the extracted model. We empirically validate that the extracted model can help enhance privacy leakage and adversarial example transferability in Section 4.3 and Section 4.4.
We remark that our attack pipeline is applicable to many remote BERT-based APIs, as we assume: (a) the capabilities required are limited to observing model output by the APIs; (b) the number of queries is limited.
3.1 VICTIM MODEL: BERT-BASED API
Modern NLP systems are typically based on a pretrained BERT (Devlin et al., 2018; Liu et al., 2019a; Nogueira & Cho, 2019; Joshi et al., 2020). BERT produces rich natural language representations which transfer well to most downstream NLP tasks (sentiment analysis, topic classification, etc.). Modern NLP systems typically leverage the fine-tuning methodology by adding a few task-specific layers on top of the publicly available BERT base,1 and fine-tune the whole model.
3.2 MODEL EXTRACTION ATTACK (MEA)
Model extraction attack aims to steal an intellectual model from cloud services (Tramèr et al., 2016; Orekondy et al., 2019; Krishna et al., 2019; Wallace et al., 2020). In this attack, we assume the victim model is a commercially available black-box API. An adversary with black-box query access to the victim model attempts to reconstruct a local copy (“extracted model”) of the victim model. In a nutshell, we perform model extraction attack in a transfer learning setting, where both the adversary and the victim model fine-tune a pretrained BERT. The goal is to extract a model with comparable accuracy to the victim model. Generally, MEA can be formulated as a two-step approach, as illustrated by the top figure in Figure 1:
1https://github.com/google-research/bert
1. Attacker crafts a set of inputs as queries (transfer set), then sends them to the victim model (BERT-based API) to obtain predictions;
2. Attacker reconstructs a copy of the victim model as an “extracted model” by using the collected query-prediction pairs.
Since the attacker does not have training data for the target model, we apply a task-specific query generator to construct $m$ queries $\{x_i\}_{i=1}^{m}$ to the victim model. For each $x_i$, the target model returns a $K$-dimensional posterior probability vector $y_i \in [0, 1]^K$ with $\sum_{k=1}^{K} y_i^k = 1$. The resulting dataset $\{(x_i, y_i)\}_{i=1}^{m}$ is used to train the extracted model. Once the extracted model is obtained, the attacker no longer has to pay the provider of the original API for predictions on new data points.
3.3 ATTRIBUTE INFERENCE ATTACK (AIA)
Next, we investigate how to use the extracted model to aid the attribute inference of the private training data of the victim model, i.e., attribute inference attack (AIA) (Song & Raghunathan, 2020). We remark that AIA is different from inferring attribute distribution as in model inversion attack (Yeom et al., 2018). The intuition behind AIA is that the BERT representation generated by the extracted model can be used to infer the sensitive attribute of the private training data of the victim model (Li et al., 2018b; Coavoux et al., 2018; Lyu et al., 2020b). Note that in our work, the only explicit information accessible to the attacker is the model prediction returned by the victim model for the chosen inputs, rather than the original BERT representation. We specifically exploit the BERT representation of the extracted model, as it encodes the most informative message for the follow-up classification. A more detailed description is given in Appendix B.
3.4 ADVERSARIAL EXAMPLE TRANSFER (AET)
Due to the success of BERT-based models, numerous works have been proposed to evaluate the vulnerability of BERT-based models to adversarial attacks (Jin et al., 2019; Sun et al., 2020). However, most recent works for adversarial example transfer focus on the black-box setting (Gao et al., 2018; Ebrahimi et al., 2018a). In such a setting, the adversary attacks the model via the query feedback only. To circumvent this issue, we leverage the transferability of adversarial examples: we first generate adversarial examples for our extracted model, then transfer them to the BERT-based APIs. The intuition lies in two facts: 1) the rationale of a good model should rely on the salient words; 2) the functional similarity between our extracted model and the victim model allows for the direct transfer of adversarial examples obtained via gradient-based attacks, which are able to locate the most informative words (Sun et al., 2020). Here our extracted model serves as a surrogate to craft adversarial examples in a white-box manner.
4 EXPERIMENTS AND ANALYSIS
4.1 NLP TASKS AND DATASETS
We extract models on four diverse NLP datasets that focus on two main tasks: sentiment analysis and topic classification. The four NLP datasets include TP-US from Trustpilot Sentiment dataset (Hovy et al., 2015), AG news corpus (Del Corso et al., 2005), Blog posts dataset from the blog authorship corpus (Schler et al., 2006), and YELP dataset (Zhang et al., 2015). Table 1 summarises the statistics of the used datasets. A more detailed description is given in Appendix A.
4.2 MEA
To assess the functional similarity between the victim model and the extracted one, we compare the accuracy of the two models, i.e., closer accuracies indicate higher similarity. In line with prior work (Krishna et al., 2019), we first choose the size of the resulting transfer set (queries) to be comparable (e.g., 1x) to the size of the victim's training set, then scale up to 5x.
Attack Strategies We first study model extraction through simulated experiments: we train victim models, query them as if they are black-box APIs, and then train the extracted model to mimic the victim model. We assume that the attacker has access to the freely available pretrained BERT model used by the victim model.
Query Distribution To investigate how the data distribution of queries (PA) may impact the attack on the victim model trained on data from PV (c.f., Table 1), we conduct the following experiments.
1. We use the same architecture, hyperparameters, and the original data as the victim (All Same).
2. We use the same architecture and hyperparameters as the victim, but sample queries from different distribution (Data Different).
The second scenario makes fewer assumptions and is more realistic and challenging, as the attacker may not know the target data distribution a priori. Therefore, in addition to the same data distribution as the victim, we additionally investigate the query distribution PA sourced from the following corpora:
• Reviews data: Yelp and Amazon reviews dataset (Zhang et al., 2015). It is worth noting that we exclude the Yelp reviews dataset from the Yelp task to guarantee a fair evaluation.
• News data: CNN/DailyMail dataset (Hermann et al., 2015)
Regarding the experiments of MEA, our general findings from Table 2 include: (1) using the same data (All Same) as queries achieves the best extraction performance, validating that the closeness of the domain between the victim training data and the queries is positively correlated with the extraction; (2) using the same data can achieve comparable accuracies and even outperform the victim models; we hypothesise this is due to the regularising effect of training on soft labels (Hinton et al., 2015); (3) our MEA is effective despite the fact that queries may come from different distributions. Using samples from different corpora (review and news) as queries, our MEA can still achieve 0.85-0.99× the victim models' accuracies when the number of queries varies in {1x,5x}, and the extraction is more successful with 5x queries as expected. This facilitates the follow-up AIA and AET. Even with small query budgets (0.1x and 0.5x), extraction is often successful. More results are available in Appendix C. We also noticed that AG news prefers news data, while reviews data is superior to news data on TP-US, Blog and Yelp. Intuitively, one can attribute this preference to the genre similarity, i.e., news data is close to AG news, while distant from TP-US, Blog and Yelp. To rigorously study this phenomenon, we calculate the uni-gram and 5-gram overlap between the test sets and the different queries in the 1x setting. Table 3 corroborates that there is a positive correlation between accuracy and lexical similarity. From now on, unless otherwise mentioned, because of their effectiveness (c.f.,
Table 2), we will use news data as queries for AG news, and reviews data as queries for TP-US, Blog and Yelp.2
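To make the extraction procedure concrete, the following is a minimal sketch of MEA as distillation on the victim's soft labels. It assumes a HuggingFace-style BERT classifier; victim_predict is a hypothetical name for the black-box API, which maps a list of query texts to a tensor of posterior probabilities, and the batch size and hyperparameters are illustrative.

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModelForSequenceClassification

def extract(victim_predict, queries, num_labels, epochs=3, lr=2e-5):
    # victim_predict: black-box API returning a (num_queries, num_labels)
    # tensor of posterior probabilities; queried once per transfer-set item.
    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=num_labels)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    soft_labels = victim_predict(queries)
    for _ in range(epochs):
        for i in range(0, len(queries), 16):
            batch = tok(queries[i:i + 16], padding=True, truncation=True,
                        return_tensors="pt")
            logits = model(**batch).logits
            # cross-entropy against the victim's soft labels (distillation)
            loss = -(soft_labels[i:i + 16]
                     * F.log_softmax(logits, -1)).sum(-1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model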
4.3 AIA
For AIA, we conduct our studies on TP-US, AG news and Blog datasets, as there is no matching demographic information for Yelp. AIA is appraised via the following metrics:
• For demographic variables (i.e., gender and age): 1 − X, where X is the average prediction accuracy of the attack models on these two variables.
• For named entities: 1 − F, where F is the F1 score between the ground truths and the attacker's predictions on the presence of all named entities.
Following Coavoux et al. (2018); Lyu et al. (2020a), we denote the value of 1 − X or 1 − F as empirical privacy, i.e., the inverse accuracy or F1 score of the attacker, higher means better empirical privacy, i.e., lower attack performance.
We first randomly split each dataset in Table 1 into two halves. The first half (denoted DV) is used to train a victim model, whereas the second half (denoted DA) is reserved as the public data for training the AIA attack model. On the extracted model from MEA, attackers can determine how to infer the private attributes from the BERT representation h of the extracted model over DA. Each attack model consists of a multi-layer feed-forward network and a binary classifier, which takes h as input and emits the predicted private attribute. Once the attack models are obtained, we measure the empirical privacy by the ability of the attack model to accurately predict the specific private attribute in DV. Apart from the standard three corpora used for MEA (c.f., Section 4.2), in AIA we also consider DA (the 2nd half) as queries, which is drawn from the same distribution as DV. It is worth noting that for AG news, we use the filtered AG news (c.f., Appendix A) with sensitive entity information for AIA.
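A minimal sketch of one such attack model, assuming the extracted model's BERT representations h(x) have already been collected as a tensor; the hidden width and the full-batch training loop are illustrative assumptions.

import torch
import torch.nn as nn

class AttributeProbe(nn.Module):
    # Multi-layer feed-forward network + binary classifier over h(x).
    def __init__(self, dim=768, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, h):
        return self.net(h).squeeze(-1)  # logit for the private attribute

def train_probe(reps, attrs, epochs=100, lr=1e-3):
    # reps: (n, dim) BERT representations over D_A; attrs: (n,) labels in {0, 1}
    probe = AttributeProbe(reps.shape[1])
    loss_fn = nn.BCEWithLogitsLoss()
    opt = torch.optim.Adam(probe.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(probe(reps), attrs.float()).backward()
        opt.step()
    return probe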
To gauge the private information leakage, we consider a majority-class prediction of each attribute as a baseline. To evaluate whether our extracted model can help enhance AIA, we also take the pretrained BERT without (w/o) fine-tuning as a baseline. Table 4 shows that compared to the pretrained-only BERT, the attack model built on the BERT representation of the extracted model indeed largely enhances the attribute inference of the training data of the victim model; for AG news it is more than 4x more effective than the majority baseline, even when MEA is based on queries from a different data distribution. This implies that target model predictions inadvertently capture sensitive information about users, such as their gender, age, and other important attributes, apart from the useful information for the main task (c.f., Table 2). By contrast, BERT (w/o fine-tuning) is a plain model that does not contain any information about the target model's training data.
2Empirically, we do not have access to the training data of the victim model.
Interestingly, compared with queries from the same distribution, Table 4 shows that queries from different distributions make AIA easier (see the best results, corresponding to the lowest privacy protection, in bold in Table 4). We believe this counter-intuitive phenomenon is caused by the posterior probability: the posterior probability under the same distribution is sharper than under a different distribution.3 This argument is also confirmed in Section 5, in which we use a temperature coefficient τ at the softmax layer to control the sharpness of the posterior probability.
We speculate that the effectiveness of AIA is related to the undesired deep model memorisation of the victim model, which can be spread to the extracted model through model prediction, incurring information leakage.
We further investigate which kinds of attributes are more vulnerable, i.e., the relationship between the attribute distribution (histogram variance) and privacy leakage. We empirically found that, compared with attributes with higher variance, attributes with lower variance are harder to attack.4
4.4 AET
Since we have access to the parameters of the locally extracted model, we craft white-box adversarial examples on it and test whether such examples are transferable to the target model. We evaluate sample crafting using the metric of transferability, which refers to the percentage of adversarial examples transferring from the extracted model to the victim model. We use Blog, TP-US, AG news (full) and Yelp for AET.
How Do We Generate Natural Adversarial Examples? Following Sun et al. (2020), we first leverage the gradients of the gold labels w.r.t. the embeddings of the input tokens to find the most informative tokens. Then we corrupt the selected tokens with the following six sources of typos: 1) Insertion; 2) Deletion; 3) Swap; 4) Mistype: mistyping a word through the keyboard, such as "oh" → "0h"; 5) Pronounce: wrongly typing due to the close pronunciation of the word, such as "egg" → "agg"; 6) Replace-W: replacing the word with a frequent human behavioural keyboard typo based on collected statistics.5 Note that the above operations are constrained by the character distribution on the keyboard. This approach is denoted as adv-bert. A simplified sketch of these corruptions is given below.
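The following sketch covers the Insertion, Deletion, Swap and Mistype operations; the full adv-bert attack additionally ranks tokens by gradient magnitude and includes the Pronounce and Replace-W operations. The keyboard-neighbour map is a small illustrative subset, and the token is assumed non-empty.

import random

# Illustrative subset of keyboard/visual adjacency used for Mistype.
NEIGHBOURS = {"o": "0pik", "e": "wsdr", "a": "qwsz", "i": "1uok"}

def corrupt(token):
    op = random.choice(["insert", "delete", "swap", "mistype"])
    i = random.randrange(len(token))
    if op == "insert":
        return token[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + token[i:]
    if op == "delete" and len(token) > 1:
        return token[:i] + token[i + 1:]
    if op == "swap" and len(token) > 1:
        j = min(i + 1, len(token) - 1)
        chars = list(token)
        chars[i], chars[j] = chars[j], chars[i]
        return "".join(chars)
    # Mistype: replace with a character close on the keyboard, e.g. "oh" -> "0h".
    return token[:i] + random.choice(NEIGHBOURS.get(token[i], token[i])) + token[i + 1:]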
To evaluate whether our extracted model is needed to mount transferable attacks, we also attack the victim with black-box adversarial examples. Moreover, following Sun et al. (2020), we also experiment with a variant of adv-bert, where the target tokens are randomly selected instead of chosen by maximum gradients, namely random adv-bert. Compared with the adversarial examples crafted by the black-box and random adv-bert approaches, Table 5 shows that the adversarial examples crafted on our extracted model in a white-box manner make the target model more vulnerable in terms of transferability: more than twice as effective in the best case. This validates that our extracted model, which is designed to be a high-fidelity imitation of the victim model, considerably enhances adversarial example transferability, thus severely damaging the output integrity of the target model.
3Please refer to Appendix C for the detailed analysis.
4Please refer to Appendix C for the details.
5https://en.wikipedia.org/wiki/Wikipedia:Lists_of_common_misspellings
We examine potential factors that contribute to successful transferability. We find that collecting a larger number of queries leads to better attack performance, i.e., 5x queries generally result in much better transferability than 1x. This implies that an extracted model with higher fidelity (closer to the victim model, c.f., Table 2) considerably enhances adversarial example transferability.
4.5 ARCHITECTURE MISMATCH
In practice, it is more likely that the adversary does not know the victim's model architecture. A natural question is whether model extraction is still possible even when the extracted model and the victim model have different architectures. To study the influence of the architectural mismatch, we fix the architecture of the extracted model, while varying the victim model among BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019b) and XLNET (Yang et al., 2019). According to Table 6, when there is an architecture mismatch between the victim model and the extracted model, the efficacy of AIA and AET is reduced as expected. However, the leakage of private information is still severe (c.f., the majority class in Table 4). Surprisingly, we observe that for AG news (full), MEA cannot benefit from a more accurate victim, which differs from the findings in Hinton et al. (2015). We conjecture this difference is due to the distribution mismatch between the training data of the victim model and the queries. We will conduct an in-depth study of this in the future.
5 DEFENCE
Although we primarily focus on the vulnerabilities of BERT-based APIs in this work, we briefly discuss several counter strategies the victim model may adopt to reduce the informativeness of prediction while minimising the overall drop in API performance (Shokri et al., 2017).
Hard label only. The posterior probability usually leaks more information from the victim model; thus the victim can choose to return only the hard label.
Softening predictions. A temperature coefficient τ on softmax layer manipulates the distribution of the posterior probability. A higher τ leads to smoother probability, whereas a lower one produces a sharper distribution. When τ is approaching 0, the posterior probability becomes a hard label.
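A sketch of this defence: soften is what the API would apply to its logits before returning them, and hard_label is the hard-label-only variant.

import numpy as np

def soften(logits, tau=1.0):
    # Temperature-scaled softmax: tau > 1 smooths, tau < 1 sharpens.
    z = logits / tau
    z = z - z.max(axis=-1, keepdims=True)  # for numerical stability
    p = np.exp(z)
    return p / p.sum(axis=-1, keepdims=True)

def hard_label(logits):
    # Limiting case tau -> 0: return only the argmax class.
    return logits.argmax(axis=-1)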
Table 7 indicates that although varying the temperature on softmax cannot defend the victim model against MEA, it is an effective defence against AIA when τ = 0.5, i.e., closer to the hard label. Similarly, compared with no defence (ND), the hard label can help mitigate all attacks to some extent.6
However, there is no defence that is effective against all our attacks (c.f., Table 2, Table 4, Table 5), as all these defences preserve the rank of the most confident label. Models can still be effectively stolen and exploited using just the hard label or the smoothed predictions returned by the black-box API. This further validates that the adversary only needs access to the victim model's hard label, and does not need access to the confidence scores, for our attacks to be successful.
6We observe similar behaviours for Yelp and Blog.
6 DISCUSSION
Understanding how well our attacks work in various settings is important for defenders to know how vulnerable their systems are. Extensive experiments in this paper indicate that the privacy and robustness of an NLP system depend on the model complexity as well as the task. For example, the privacy leakage of the victim model becomes more serious by inferring from the extracted model for AG news and Blog, while this phenomenon is less obvious for TP-US dataset (c.f., Table 4). In terms of robustness against adversarial example transferability, Blog is more vulnerable (c.f., Table 5).
Adversarial attacks focus more on the study of the robustness of a model. However, under the context of business, we believe adversarial attacks can also be utilised for other purposes. For instance, if a business competitor manages to spot incorrect predictions, they can improve the robustness of their model while launching an advertising campaign against the victim model with these adversarial examples. If a rival company directly leverages black-box adversarial attacks on the victim model, its owner can detect the suspicious querying, which involves intensive similar queries (Jin et al., 2019; Li et al., 2020; Garg & Ramakrishnan, 2020), thereby banning the abnormal usage. Since queries used for our model extraction are genuine instances generated on the Internet, it is unlikely to be suspended by the cloud services. As evidenced in Section 4.4, the victim model is vulnerable to our proposed AET.
Defence against all our investigated attacks in this work is a hard and open problem. An ideal defence should resist against all the possible attacks while striving to have a minimal impact on legitimate users of the model (Orekondy et al., 2019). While current defences are marginally effective, they may fail when adversaries adapt to the defence — sophisticated adversaries might anticipate these defences and develop simple modifications to their attacks to circumvent these defences (Krishna et al., 2019). We hope that this work highlights the need for more research in the development of effective countermeasures to defend against these attacks, or at least to increase the cost of adversaries.
7 CONCLUSIONS
This work goes far beyond model extraction from BERT-based APIs: we also identified that the extracted model can largely enhance privacy leakage and adversarial example transferability, even in difficult scenarios (e.g., limited query budget, queries from different distributions). Extensive experiments based on representative NLP datasets and tasks under various settings demonstrate the effectiveness of our attacks against BERT-based APIs. We hope that our in-depth investigation provides new insights and raises the awareness of the community for building more trustworthy BERT-based APIs. A number of avenues for further work are attractive. More broadly, we expect to extend our work to more complex NLP tasks, and to develop defences that can ensure privacy, robustness, and accuracy simultaneously.
A DATASET DESCRIPTION
Trustpilot (TP) The Trustpilot Sentiment dataset (Hovy et al., 2015) contains reviews associated with a sentiment score on a five-point scale, and each review is associated with 3 attributes: gender, age and location, which are self-reported by users. The original dataset comprises reviews from different locations; in this paper, we only derive TP-US for study. Following Coavoux et al. (2018), we extract examples containing information on both gender and age, and treat them as the private information. We categorise "age" into two groups: "under 34" (U34) and "over 45" (O45).
AG news We use the AG news corpus (Del Corso et al., 2005). The task is to predict the topic label of the document, with four different topics in total. Following Zhang et al. (2015) and Jin et al. (2019), we use both the "title" and "description" fields as the input document.
We use full AG news dataset for MEA and AET, which we call AG news (full). As AIA requires entity information, we use the corpus filtered by Coavoux et al. (2018)7, which we call AG news. The resultant AG news merely includes sentences with the five most frequent person entities, and each sentence contains at least one of these named entities. Thus, the attacker aims to identify these five entities as 5 independent binary classification tasks.
Blog posts (Blog) We derive a blog posts dataset (Blog) from the blog authorship corpus presented in Schler et al. (2006). We recycle the corpus preprocessed by Coavoux et al. (2018), which covers 10 different topics. Similar to TP-US, the private variables comprise the age and gender of the author, and the age attribute is binned into two categories, "under 20" (U20) and "over 30" (O30).
Yelp Polarity (Yelp) The Yelp dataset is a document-level sentiment classification dataset (Zhang et al., 2015). The original dataset uses a five-point scale (1-5), while the polarised version assigns negative labels to ratings of 1 and 2 and positive labels to ratings of 4 and 5.
B AIA ALGORITHM
The main algorithm for the Attribute Inference Attack (AIA) is shown in Algorithm 1. For each dataset, once the extracted model g′V is built, we query g′V with the available public data DA to collect the BERT representation h(xi) for each xi ∈ DA. For each sensitive attribute s, a specific inference model (c.f., Section 4.3) is trained on {(h(xi), si)}, in order to infer the private attributes of interest; in our case, they are gender, age and named entities (c.f., Table 1).
In more detail, in Algorithm 1, given DA, we take all the non-sensitive attributes xi as input, and the sensitive attribute si as the label, to train an AIA attack model. At test time, the attacker can feed the non-sensitive attributes of any input into the trained model to infer the sensitive attribute. In the case when the attacker obtains the non-sensitive attributes of any training record of the victim model, the attacker can successfully infer its sensitive attributes, thus causing privacy leakage of the victim model's training data (c.f., Table 4, where we use the non-sensitive attributes of DV as test data, and demonstrate the sensitive attribute privacy leakage of DV). Note that the non-sensitive attributes of the victim training data could be accessible to any attacker.
C ABLATION STUDY
Query Size Due to budget limits, malicious users cannot issue massive requests. To investigate the attack performance of model extraction under the low-resource setting, we conduct two additional experiments, which utilise only 0.1x and 0.5x the size of the training data of the victim models, respectively. According to Table 8, although some datasets such as Blog suffer from a drastic drop, the overall performance of the extracted models is comparable to the victim models. In addition, distant domains exhibit significant degradation when compared to close ones. For example, sampling 0.1x-5x queries from news data presents a more stable attack performance against the victim model trained on AG news than on Blog.

Algorithm 1 Attribute inference attack
1: Input: extracted model g′V, labelled auxiliary data DA = {(xi, si)}, BERT representation layer h, non-sensitive attributes x∗
2: Query g′V with DA and collect {(h(xi), si) | (xi, si) ∈ DA}.
3: Train an inference model f on {(h(xi), si)}.
4: Query g′V with x∗ to get the target BERT representation h(x∗)
5: return f(h(x∗))

7https://github.com/mcoavoux/pnet/tree/master/datasets.
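A sketch of Algorithm 1 end to end, under the assumption that the extracted model exposes its BERT representation layer as a callable h_layer and that train_probe is any trainer for the inference model f (both names are hypothetical):

import torch

def attribute_inference(h_layer, D_A, x_star, train_probe):
    # D_A: list of (x_i, s_i) auxiliary pairs; x_star: non-sensitive attributes.
    reps = torch.stack([h_layer(x) for x, _ in D_A])   # step 2: collect h(x_i)
    attrs = torch.tensor([s for _, s in D_A])
    f = train_probe(reps, attrs)                       # step 3: train f
    return f(h_layer(x_star))                          # steps 4-5: predict for x*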
Impact Factor on AIA In Section 5, we found that there is a correlation between the success of AIA and the temperature τ on the softmax layer. We conjecture that the causal factor is the sharpness of the posterior probability, i.e., if the model is less confident in its most likely prediction, then AIA is more likely to be successful. This speculation is confirmed by Figure 2, where a higher maximum posterior probability leads to higher empirical privacy.
Figure 3 and Table 9 indicate that AIA is also affected by the distribution of the attributes. Attributes with higher variance cause more information leakage, i.e., lower empirical privacy. For example, for AG news, entities 2-4, with higher variances, result in lower empirical privacy, while entities 0-1 are more resistant to AIA. For TP-US and Blog, as age and gender exhibit similar distributions, the AIA performance gap across these two attributes is less obvious, as evidenced by the last two rows in Table 9.
D ADVERSARIAL EXAMPLES
We provide several adversarial examples generated by adv-bert (Sun et al., 2020) in Table 10. Note that all these examples cause a misclassification on both extracted models and victim models.
Figure 2: Correlation between maximum posterior probability and privacy on (a) AG news, (b) TP-US, and (c) Blog. Each panel reports mean and median privacy under the All Same, Data Diff. (2nd half), Data Diff., Data Diff. (temp=0.5), and Data Diff. (temp=5) query settings.
Table 9 (AG news portion): empirical privacy per entity.
AG news            entity 0  entity 1  entity 2  entity 3  entity 4
All Same             15.61     15.10      7.71      6.95      5.49
Data Diff. (news)    14.79     12.38      3.84      5.33      2.02
| 1. What are the strengths and weaknesses of the proposed approach regarding vulnerability analysis?
2. How effective are the reported attacks compared to competitive baselines?
3. Are there any concerns about experimental setup or assumptions made in the paper?
4. How does the reviewer assess the overall quality and relevance of the work?
5. Are there any suggestions for improving the paper, such as including qualitative analysis or exploring different setups? | Review | Review
Summary: This paper is studying the vulnerabilities of modern BERT-based classifiers, which a service provider is hosting using a black-box inference API. Consistent with prior work [2], the authors succeed in extracting high performing copies of the APIs, by training models using the outputs of the API to queries (akin to distillation). The authors then study two attacks on the copy model --- private attribute identification of sentences in the API's training data & adversarial example transfer from the white-box copy model to the black-box API. The authors report high attack success rates, better than those from competitive baselines (which do not require constructing a copy model). A few defences are also explored but are ineffective to prevent these attacks.
Strengths of the Paper:
While model extraction on BERT models has been studied previously [2], this paper goes beyond the setting of utility theft and explores information leakage and adversarial example transfer. These are extremely practical real-world settings. Moreover, the paper uses modern NLP techniques (finetuning BERT), which is ubiquitous in NLP systems these days.
The reported attacks seem to significantly outperform some competitive baselines which didn't use an extracted model. While I have concerns about the experimental setup (below), these are very interesting results highlighting vulnerabilities of the models. This can encourage more research in defending against model extraction.
Weaknesses of the Paper:
Query distribution: These distributions seem fairly similar to the downstream task for all datasets, for instance, "reviews" contains Yelp reviews, which is one of the datasets the victim model was trained on (I suspect some amount of overlap at the very least). The best MEA scores are observed when the domains are aligned, which might not be a practical setting for an attacker who has no knowledge of the victim model's training distribution. I suggest, at the very least, authors to provide n-gram overlap statistics between their preferred query distribution and downstream test set (the GPT2 paper [3] had similar statistics). The paper's story will be stronger if a corpus like Wikipedia is used for the query distribution, with the same set of downstream datasets.
AIA Attacks: I have a few concerns here. First, isn't access to private attributes in half of the victim data (D_a) too strong an assumption? In a more practical setting, an attacker will have no access to D_a. It's even possible that the attacker doesn't know the output space of attributes. I think the more interesting setting is where the attacker is able to infer some information about the training data without supervising a classifier with gold data (D_a), perhaps using something like model inversion. This information need not be a private binary label, it could even be some canary string like a credit card number [4]. One more concern I had here was regarding the main baseline in this experiment, "BERT (w/o fine-tuning)". I find it quite strange that this is much worse than the majority class in two datasets. What happens when you fine-tune it on D_a? (using the standard practice of [CLS] vector for classification). This is a valid baseline if access to D_a is assumed, I think this will do quite well if it is possible to infer the private variable from the text.
Adversarial example transfer: My main concern here is that "transfer rate" by itself is insufficient. You can make transfer rate 100% by retrieving examples from the target adversarial class. The more interesting evaluation is, what fraction of adversarial examples are both (1) transferred correctly; (2) not adversarial to a human (the changes are so minor that humans ignore them). Some kind of human evaluation for (2) will be helpful. Also, a good baseline here would be using adv-bert but with randomly chosen words (instead of white-box gradients), and an upper bound with adv-bert attacks on the victim model itself.
Overall Recommendation:
While this is a very practically important setting, I'm not entirely convinced the proposed attacks work. My main concerns are regarding some of the experimental decisions and lack of baselines while comparing attacks. Overall I think the paper needs more work to be ready for publication.
Other Feedback:
While these points are not a make or break for me, they will make the paper stronger. It will be nice to include some fine-grained qualitative analysis of the adversarial examples (along with samples), perhaps highlighting why generating that example would only be possible with access to an extracted model, and confirming the victim API model generates the same example. It will also be nice to see work beyond classification setting. Setups like question answering, machine translation, unconditional text generation are exciting testbeds which might be a lot more vulnerable to AIA style attacks than classifiers. With GPT3, black-box text generation APIs are probably going to get very common in the next 2-3 years!
Errors / Typos / Stylistic:
I had some trouble understanding parts of the paper. I think with a bit more polishing and careful proof-reading, the paper will be easier to understand. There were also a few incorrect statements. I've pointed them below along with typos / stylistic suggestions,
"commercial NLP models such as Google’s BERT and OpenAI’s GPT-2 (Radford et al., 2019) are often made indirectly accessible through pay-per-query prediction APIs." --> This is not a correct statement, both pretrained models are freely available
"and NLP tasks (Chandrasekaran et al., 2020)." is a mis-citation, you probably wanted to cite Pal et al. 2019 [1] or Krishna et al. 2020 [2] here?
In 3.2 and the Abstract / Intro I would remove the claim that "architecture, hyperparameter is not known", since both the victim / attacker are finetuning BERT.
There's some unnecessary mathiness in 3.2 (variables which are not referred to later on, like f_{bert}_theta*). I would suggest avoiding variables unless you plan to re-use them to reduce confusion.
In Table 3 I would suggest reporting attack success rather than privacy, to be consistent with other tables in paper (higher means more attack success)
Table 4 caption, "Transferability is the ratio" --> "Transferability is the percentage"?
After Author Response: I really appreciate the authors' efforts over the course of the rebuttal period for rigorously testing their method with several new baselines in such a short period of time.
For AIA attacks, the baseline numbers provided in the rebuttal are helpful but raise concerns about whether the proposed AIA attacks are working. I find it hard to believe that victim models have less private information than extracted models in 2 out of 3 datasets, and I suspect some other factors are contributing to this counterintuitive trend (like you said, maybe dark knowledge). I will stick with my stance that the AIA setting is broken since you are inferring private attributes using information from an identically distributed D_a (I think model inversion is a more valid setting to measure leakage).
For adversarial attack baselines, I agree with your argument that conducting black-box attacks directly on the victim models may need minimal-difference queries which can be detected on the API side. However, you are going to need several orders of magnitude more queries to do extraction in the first place (which may or may not be easy to detect). I still encourage you to run this baseline in the next version of the paper, instead of only doing black-box attacks on extracted models. These minimal-difference checks may not be in place, and directly doing black-box attacks on the victim model is much easier than extracting and then constructing adversarial examples. It is good to know what additional benefit you get by doing model extraction.
Overall, I have decided to raise my score to 6 (more like ~5.5-6). This is conditional on the authors performing much more rigorous hypothesis-driven testing in the next version of the paper (just like they did in the rebuttal) to really validate the hypothesis "extracting models make APIs more vulnerable to adversarial attacks".
References
[1] - https://arxiv.org/abs/1905.09165
[2] - https://arxiv.org/abs/1910.12366
[3] - https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf
[4] - https://arxiv.org/abs/1802.08232 |
ICLR | Title
Learning-Augmented k-means Clustering
Abstract
k-means clustering is a well-studied problem due to its wide applicability. Unfortunately, there exist strong theoretical limits on the performance of any algorithm for the k-means problem on worst-case inputs. To overcome this barrier, we consider a scenario where "advice" is provided to help perform clustering. Specifically, we consider the k-means problem augmented with a predictor that, given any point, returns its cluster label in an approximately optimal clustering up to some, possibly adversarial, error. We present an algorithm whose performance improves along with the accuracy of the predictor, even though naïvely following the accurate predictor can still lead to a high clustering cost. Thus if the predictor is sufficiently accurate, we can retrieve a close to optimal clustering with nearly optimal runtime, breaking known computational barriers for algorithms that do not have access to such advice. We evaluate our algorithms on real datasets and show significant improvements in the quality of clustering.
1 INTRODUCTION
Clustering is a fundamental task in data analysis that is typically one of the first methods used to understand the structure of large datasets. The most common formulation of clustering is the k-means problem where, given a set P ⊂ R^d of n points, the goal is to find a set of centers C ⊂ R^d of k points to minimize the objective
$\mathrm{cost}(P, C) = \sum_{p \in P} \min_{c \in C} \lVert p - c \rVert_2^2$.   (1)
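For concreteness, a direct NumPy evaluation of objective (1):

import numpy as np

def kmeans_cost(P, C):
    # cost(P, C): sum over points of the squared distance to the nearest center.
    d2 = ((P[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # (n, k) squared distances
    return d2.min(axis=1).sum()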
Despite decades of work, there exist strong theoretical limitations on the performance of any algorithm for the k-means problem. Finding the optimal set C is NP-hard even for the case of k = 2 (Dasgupta, 2008), and even finding an approximate solution whose objective value is within a factor 1.07 of the optimal solution is NP-hard (Cohen-Addad & S., 2019; Lee et al., 2017). Furthermore, the best-known practical polynomial time algorithms can only provably achieve a large constant factor approximation to the optimal clustering, e.g., the 50-approximation in Song & Rajasekaran (2010), or use techniques such as linear programming that do not scale, e.g., the 6.357-approximation in Ahmadian et al. (2020).
A natural approach to overcome these computational barriers is to leverage the fact that in many applications, the input is often not arbitrary and contains auxiliary information that can be used to construct a good clustering, e.g., in many applications, the input can be similar to past instances. Thus, it is reasonable to create a (possibly erroneous) predictor by using auxiliary information or through clusterings of similar datasets, which can inform the proper label of an item in our current dataset. Indeed, inspired by the developments in machine learning, many recent papers have studied algorithms augmented with predictions (Mitzenmacher & Vassilvitskii, 2020). Such algorithms utilize a predictor that, when invoked, provides an (imperfect) prediction for future inputs. The predictions are then used by the algorithm to improve performance (see references in Section 1.3).
Hence, we consider the problem of k-means clustering given additional access to a predictor that outputs advice for which points should be clustered together, by outputting a label for each point. The goal is to find k centers that minimize objective (1) and assign each point to one of these centers.
The question is then whether one can utilize such predictions to boost the accuracy and runtime of clustering of new datasets. Our results demonstrate the answer in the affirmative.
Formal learning-augmented problem definition. Given a set P ⊆ Rd of n points, the goal is to find a set of k points C (called centers) to minimize objective (1). In the learning-augmented setting, we assume we have access to a predictor Π that provides information about the label of each point consistent with a (1+α)-approximately optimal clustering C. We say that a predictor has label error rate λ ≤ α if for each label i ∈ [k] := {1, . . . , k}, Π errs on at most a λ ≤ α fraction of all points in cluster i in C, and Π errs on at most a λ ≤ α fraction of all points given label i by Π. In other words, Π has at least (1− λ) precision and recall for each label. Our predictor model subsumes both random and adversarial errors by the predictor. For example if the cluster sizes are somewhat well-balanced, then a special case of our model is when Π(p) outputs the correct label of point p ∈ P with some probability 1 − λ and otherwise outputs a random label in [k] with probability λ. The example where the predictor outputs an adversarial label instead of a random label with probability λ also falls under our model. For more detail, see Theorems 2.1 and 3.4. We also adjust our algorithm to have better performance when the errors are random rather than adversarial in the supplementary material.
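As a concrete illustration, the random-error special case above can be simulated as follows (a sketch; true_labels is assumed to be an integer NumPy array holding the labels of an approximately optimal clustering):

import numpy as np

def random_error_predictor(true_labels, k, lam, seed=0):
    # Pi(p): correct label w.p. 1 - lam, else a uniform random label in [k].
    rng = np.random.default_rng(seed)
    noisy = true_labels.copy()
    flip = rng.random(len(true_labels)) < lam
    noisy[flip] = rng.integers(0, k, flip.sum())
    return noisy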
1.1 MOTIVATION FOR OUR WORK
We first motivate studying k-means clustering under the learning-augmented algorithms framework.
Overcoming theoretical barriers. As stated above, no polynomial time algorithm can achieve better than a constant factor approximation to the optimal clustering. In addition, the best provable approximation guarantees by polynomial time algorithms have a large constant factor (for example, the 50-approximation in Song & Rajasekaran (2010)), or use methods which do not scale (such as the linear programming based algorithm in Ahmadian et al. (2020), which gives a 6.357-approximation). Therefore, it is of interest to study whether a natural assumption can overcome these complexity barriers. In our work, we show that knowing the true labels up to some possibly adversarial noise can give us arbitrarily good clusterings, depending on the noise level, which breaks these computational barriers. Furthermore, we present an algorithm that runs in nearly linear time, rather than just polynomial time. Lastly, we introduce tools from the robust statistics literature to study k-means clustering, rather than the distance-based sampling procedure that is commonly analyzed (this is the basis of kmeans++). This new toolkit and connection could have further applications in other learning-augmented clustering problems.
Practical considerations. In practice, good predictors can be learned for datasets with auxiliary information. For a concrete example, we can take any dataset that has a train/test split and use a clustering on the training dataset to help us cluster the testing portion of the dataset. Therefore, datasets do not have to be specifically curated to fit our modelling assumption, which is a requirement in other modelling formulations that leverage extra information such as the SSAC model discussed in Section 1.3. A predictor can also be created from the natural class of datasets that vary over time, such as Census data or spectral clustering for temporal graphs (graphs slowly varying over time). For this class of datasets, a clustering from an earlier time step can function as a predictor for later time steps. Lastly, we can simply use the labels given by another clustering algorithm (such as kmeans++) or heuristic as a predictor. Therefore, predictors are readily and easily available for a wide class of natural datasets.
Following the predictor alone is insufficient. Given a predictor that outputs noisy labels, it is conceivable that its output alone can give us a good clustering relative to optimal. However, this is not the case, and naïvely using the label provided by the predictor for each point can result in an arbitrarily bad solution, even when the predictor errs with low probability. For example, consider a cluster of n/2 points at the origin and a cluster of n/2 points at x = 1. Then for k = 2, choosing centers at the origin and at x = 1 induces a k-means clustering cost of zero. However, even for a predictor that errs with probability 1/n, some point will be mislabeled with constant probability, which results in a positive k-means clustering cost, and so does not provide a relative error approximation. Thus, using the labels provided by the predictor can induce an arbitrarily bad clustering, even as the label error rate of the predictor tends to zero. This subtlety makes the model rich and interesting, and requires us to create non-trivial clustering algorithms.
Predictors with adversarial errors. Since the predictor is separate from the clustering algorithm, interference with the output of the predictor following the clustering algorithm's query can be a source of non-random noise. Thus any scenario in which communication is performed over a noisy channel (for example, if the predictor is hosted at one server and the algorithm is hosted at another server) is susceptible to such errors. Another source of adversarial failure by the predictor is when the predictor is trained on a dataset that can be generated by an adversary, such as in the context of adversarial machine learning. Moreover, our algorithms have better guarantees when the predictor does not fail adversarially (e.g., see the supplementary material).
1.2 OUR RESULTS
In this paper we study "learning-augmented" methods for efficient k-means clustering. Our contributions are both theoretical and empirical. On the theoretical side, we introduce an algorithm that provably solves the k-means problem almost optimally, given access to a predictor that outputs a label for each point p ∈ P according to a (1 + α)-approximately optimal clustering, up to some noise. Specifically, suppose we have access to a predictor Π with label error rate λ upper bounded by a parameter α. Then, Algorithm 1 outputs a set of centers C̃ in Õ(knd) time,1 such that cost(P, C̃) ≤ (1 + O(α)) · cost(P, C_opt), where C_opt is an optimal set of centers. We improve the runtime in Section 3 by introducing Algorithm 3, which has the same error guarantees but uses Õ(nd) runtime, which is nearly optimal since one needs at least nd time to read the points for dense inputs (Theorem 3.4 and Remark A.14).
To output labels for all points, Algorithm 3 requires n queries to the predictor. However, if the goal is to just output centers for each cluster, then we only require Õ(k/α) queries. This is essentially optimal; we show in Theorem 3.5 that any polynomial time algorithm must perform approximately Ω̃(k/α) queries to output a (1 + α)-approximate solution, assuming the Exponential Time Hypothesis, a well-known complexity-theoretic assumption (Impagliazzo & Paturi, 2001). Note that one could ignore the oracle entirely, but then one is limited by the constant factor hardness for polynomial time algorithms, which we bypass with a small number of queries.
Surprisingly, we do not require assumptions that the input is well-separated or approximation-stable (Braverman et al., 2011; Balcan et al., 2013), which are assumed in other works. Finally in the supplementary material, we also give a learning-augmented algorithm for the related problem of k-median clustering, which has less algebraic structure than that of k-means clustering. We also consider a deletion predictor, which either outputs a correct label or a failure symbol ⊥ and give a (1 + α)-approximation algorithm even when the “deletion rate” is 1− 1/poly(k). On the empirical side, we evaluate our algorithms on real and synthetic datasets. We experimentally show that good predictors can be learned for all of our varied datasets, which can aid in clustering. We also show our methodology is more robust than other heuristics such as random sampling.
1.3 RELATED WORK
Learning-augmented algorithms. Our paper adds to the growing body of work on learning-augmented algorithms. In this framework, additional "advice" from a possibly erroneous predictor is used to improve performance of classical algorithms. For example, a common predictor is a "heaviness" predictor that outputs how "important" a given input point is. It has been shown that such predictors can be learned using modern machine learning techniques or other methods on training datasets and can be successfully applied to similar testing datasets. This methodology has found applications in improving data structures (Kraska et al., 2018; Mitzenmacher, 2018), streaming algorithms (Hsu et al., 2019; Jiang et al., 2020), online algorithms (Lykouris & Vassilvtiskii, 2018; Purohit et al., 2018), graph algorithms (Dai et al., 2017), and many other domains (Mousavi et al., 2015; Wang et al., 2016; Bora et al., 2017; Sablayrolles et al., 2019; Dong et al., 2020; Sanchez et al., 2020; Eden et al., 2021). See Mitzenmacher & Vassilvitskii (2020) for an overview and applications.
Clustering with additional information. There have been numerous works that study clustering in a semi-supervised setting where extra information is given. Basu et al. (2004) gave an active learning framework of clustering with "must-link"/"cannot-link" constraints, where an algorithm is allowed to interact with a predictor that determines if two points must or cannot belong to the same cluster. Their objective function is different from that of k-means, and they do not give theoretical bounds on the quality of their solution. Balcan & Blum (2008) and Awasthi et al. (2017) studied an interactive framework for clustering, where a predictor interactively provides feedback about whether or not to split a current cluster or merge two clusters. Vikram & Dasgupta (2016) also worked with an interactive oracle, but for the Bayesian hierarchical clustering problem. These works differ from ours in their assumptions, since their predictors must answer different questions about partitions of the input points. In contrast, Howe (2017) used logistic regression to aid k-means clustering but did not give any theoretical guarantees.
1The notation Õ hides logarithmic factors.
The framework closest in spirit to ours is the semi-supervised active clustering framework (SSAC) introduced by Ashtiani et al. (2016) and further studied by Kim & Ghosh (2017); Mazumdar & Saha (2017); Gamlath et al. (2018); Ailon et al. (2018); Chien et al. (2018); Huleihel et al. (2019). The goal of this framework is also to produce a (1 + α)-approximate clustering while minimizing the number of queries to a predictor that instead answers queries of the form “same-cluster(u, v)”, which returns 1 if points u, v ∈ P are in the same cluster in a particular optimal clustering and 0 otherwise. Our work differs from the SSAC framework in terms of both runtime guarantees, techniques used, and model assumptions, as detailed below.
We briefly compare to the most relevant works in the SSAC framework, which are Ailon et al. (2018) and Mazumdar & Saha (2017). First, the runtime of Ailon et al. (2018) is O(ndk^9/α^4) even for a perfectly accurate predictor, while the algorithm of Mazumdar & Saha (2017) uses O(nk^2) queries and runtime Õ(ndk^2). By comparison, we use significantly fewer queries, with near linear runtime Õ(nd), even for an erroneous predictor. Moreover, the predictor of Mazumdar & Saha (2017) independently fails each query with probability p, so repeating queries with pairs containing the same point can determine the correct label of a point; in contrast, our oracle always fails in the same way on the same query, so repeated queries do not help.
The SSAC framework uses the predictor to perform importance sampling to obtain a sufficient number of points from each cluster whereas we use techniques from robust mean estimation, dimensionality reduction, and approximate nearest neighbor data structures. Moreover, it is unclear how the SSAC predictor can be implemented in practice to handle adversarial corruptions. One may consider simulating the SSAC predictor using information from individual points by simply checking if the labels of the two input points are the same. However, if a particular input is mislabeled, then all of the pairs containing this input can also be reported incorrectly, which violates their independent noise assumption. Finally, the noisy predictor algorithm in Ailon et al. (2018) invokes a step of recovering a hidden clique in a stochastic block model, making it prohibitively costly to implement.
Lastly, in the SSAC framework, datasets need to be specifically created to fit into their model since one requires pairwise information. In contrast, our predictor requires information about individual points, which can be learned from either a training dataset, from past similar datasets, or from another approximate or heuristic clustering and is able to handle adversarial corruptions. Thus, we obtain significantly faster algorithms while using an arguably more realistic predictor.
Approximation stability. Another approach to overcome the NP-hardness of approximation for k-means clustering is the assumption that the underlying dataset follows certain distributional properties. Introduced by Balcan et al. (2013), the notion of (c, α)-approximate stability (Agarwal et al., 2015; Awasthi et al., 2019; Balcan et al., 2020) requires that every c-approximation is α-close to the optimal solution in terms of the fraction of incorrectly clustered points. In contrast, we allow inputs so that an arbitrarily small fraction of incorrectly clustered points can induce arbitrarily bad approximations, as previously discussed, e.g., in Section 1.1.
2 LEARNING-AUGMENTED k-MEANS ALGORITHM
Preliminaries. We use [n] to denote the set {1, . . . , n}. Given the set of cluster centers C, we can partition the input points P into k clusters {C1, . . . , Ck} according to the closest center to each point. If a point is grouped in Ci in the clustering, we refer to its label as i. Note that labels can be arbitrarily permuted as long as the labeling across the points of each cluster is consistent. It is well-known that in k-means clustering, the i-th center is given by the coordinate-wise mean of the points in Ci. Given x ∈ R^d and a set C ⊂ R^d, we define d(x, C) = min_{c ∈ C} ‖x − c‖_2. Note that there may be many approximately optimal clusterings, but we consider a fixed one for our analysis.

Algorithm 1 Learning-augmented k-means clustering
Input: A point set X with labels given by a predictor Π with label error rate λ
Output: (1 + O(α))-approximate k-means clustering of X
1: for i = 1 to k do
2:   Let Yi be the set of points with label i.
3:   Run CRDEST for each of the d coordinates of Yi.
4:   Let C′i be the coordinate-wise outputs of CRDEST.
5: end for
6: Return clustering with centers C′1, . . . , C′k.

Algorithm 2 Coordinate-wise estimation CRDEST
Input: Points x1, . . . , x2m ∈ R, corruption level λ ≤ α
1: Randomly partition the points into two groups X1, X2 of size m.
2: Let I = [a, b] be the shortest interval containing m(1 − 5α) points of X1.
3: Z ← X2 ∩ I
4: z ← (1/|Z|) Σ_{x ∈ Z} x
5: Return z
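A direct NumPy sketch of CRDEST and its use inside Algorithm 1, following the listings above; the empty-interval fallback is a defensive assumption not in the listing.

import numpy as np

def crdest(x, alpha, seed=0):
    # Robust 1-d mean estimate from 2m (possibly corrupted) samples.
    rng = np.random.default_rng(seed)
    x = rng.permutation(x)
    m = len(x) // 2
    X1, X2 = np.sort(x[:m]), x[m:]
    t = max(1, int(m * (1 - 5 * alpha)))    # points the interval must contain
    widths = X1[t - 1:] - X1[:m - t + 1]    # width of each candidate interval
    i = int(widths.argmin())                # shortest interval I = [a, b]
    a, b = X1[i], X1[i + t - 1]
    Z = X2[(X2 >= a) & (X2 <= b)]           # "test" X2 on I
    return Z.mean() if len(Z) else X2.mean()

def learning_augmented_kmeans(X, labels, k, alpha):
    # Algorithm 1: coordinate-wise robust means of the points sharing each label.
    return np.stack([
        np.array([crdest(X[labels == i][:, j], alpha) for j in range(X.shape[1])])
        for i in range(k)])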
2.1 OUR ALGORITHM
Our main result is an algorithm for outputting a clustering that achieves a (1 + 20α)-approximation2 to the optimal objective cost when given access to approximations of the correct labeling of the points in P. We first present a suboptimal algorithm in Algorithm 1 for intuition, and then optimize the runtime in Algorithm 3, which is provided in Section 3.
The intuition for Algorithm 1 is as follows. We first address the problem of identifying an approximate center for each cluster. Let C^opt_1, . . . , C^opt_k be an optimal grouping of the points, and consider all the points labeled i by our predictor for some fixed 1 ≤ i ≤ k. Since our predictor can err, a large number of points that are not in C^opt_i may also be labeled i. This is especially problematic when points that are "significantly far" from cluster C^opt_i are given the label i, which may increase the objective function arbitrarily if we simply take the mean of the points labeled i by the predictor.
To filter out such outliers, we consider a two-step view from the robust statistics literature, e.g., Prasad et al. (2019); these two steps can respectively be interpreted as a "training" phase and a "testing" phase that removes "bad" outliers. We first randomly partition the points that are given label i into two groups, X1 and X2, of equal size. We then estimate the mean of C^opt_i using a coordinate-wise approach through Algorithm 2 (CRDEST), decomposing the total cost as the sum of the costs in each dimension.
For each coordinate, we find the smallest interval I that contains a (1 − 4α) fraction of the points in X1. We show that for label error rate λ ≤ α, this "training" phase removes any outliers and thus provides a rough estimate of the location of the "true" points that are labeled i. To remove dependency issues, we then "test" X2 on I by computing the mean of X2 ∩ I. This allows us to get empirical centers that are a sufficiently good approximation to the coordinates of the true center for each coordinate. We then repeat on the other labels. The key insight is that the error from mean estimation can be directly charged to the approximation error, due to the special structure of the k-means problem. Our main theoretical result considers predictors that err on at most a λ-fraction of all cluster labels. Note that all omitted proofs appear in the supplementary material. Theorem 2.1. Let α ∈ (10 log n/√n, 1/7), Π be a predictor with label error rate λ ≤ α, and γ ≥ 1 a sufficiently large constant. If each cluster in the (1 + α)-approximately optimal k-means clustering of the predictor has at least γηk/α points, then Algorithm 1 can be used to output a (1 + 20α)-approximation to the k-means objective with probability 1 − 1/η, using O(kdn log n) runtime.
We improve the running time to O(nd log n + poly(k, log n)) in Theorem 3.4 in Section 3. Our algorithms can also tolerate similar error rates when failures correspond to random labels, adversarial labels, or a special failure symbol.
2Note that we have not attempted to optimize the constant 20.
Error rate λ vs. accuracy parameter α. We emphasize that λ is the error rate of the predictor, and α is only some loose upper bound on λ. It is reasonable that some algorithms can provide lossy guarantees on their outputs, which translates to the desired loose upper bound α on the accuracy of the predictor. Even if λ is not known, multiple instances of the algorithm can be run in parallel with separate, exponentially decreasing "guesses" for the value of α. We can simply return the best clustering among these algorithms, which will provide the same theoretical guarantees as if we set α = 1.01λ, for example. Thus α does not need to be known in advance, and it does not need to be tuned as a hyperparameter.
3 NEARLY OPTIMAL RUNTIME ALGORITHM
We now describe Algorithm 3, which is an optimized-runtime version of Algorithm 1 and whose guarantees we present in Theorem 3.4. The bottleneck for Algorithm 1 is that after selecting k empirical centers, it must still assign each of the n points to the closest empirical center. The main intuition for Algorithm 3 is that although reading all points uses O(nd) time, we do not need to spend O(dk) time per point to find its closest empirical center, if we set up the correct data structures. In fact, as long as we assign each point to a "relatively good" center, the assigned clustering is still a "good" approximation to the optimal solution. Thus we proceed in a similar manner as before to sample a number of input points and find the optimal k centers for the sampled points.
We use dimensionality reduction and an approximate nearest neighbor (ANN) data structure to efficiently assign each point to a "sufficiently close" center. Namely, if a point p ∈ P should be assigned to its closest empirical center Ci, then p must be assigned to some empirical center Cj such that ‖p − Cj‖2 ≤ 2‖p − Ci‖2. Hence, points that are not assigned to their optimal centers only incur a "small" penalty due to the ANN data structure, and so the cost of the clustering does not increase "too much" in expectation. Formally, we need the following definitions.
Theorem 3.1 (JL transform). (Johnson & Lindenstrauss, 1984) Let d(·, ·) be the standard Euclidean distance. There exists a family of linear maps A from R^d to R^k and an absolute constant C > 0 such that for any x, y ∈ R^d,
Pr_{φ ∼ A} [d(φ(x), φ(y)) ∈ (1 ± α) d(x, y)] ≥ 1 − e^{−Cα²k}.
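A standard Gaussian construction giving such a family (a sketch; the constant 8 in the target dimension is illustrative):

import numpy as np

def jl_map(d, alpha, n, seed=0):
    # Random linear map R^d -> R^k preserving pairwise distances up to (1 +/- alpha).
    rng = np.random.default_rng(seed)
    k = int(np.ceil(8 * np.log(n) / alpha ** 2))  # k = O(log n / alpha^2)
    A = rng.normal(size=(k, d)) / np.sqrt(k)      # scaled Gaussian matrix
    return lambda x: A @ x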
Definition 3.2 (Terminal dimension reduction). Given a set of points called terminals C ⊂ R^d, we call a map f : R^d → R^k a terminal dimension reduction with distortion D if for every terminal c ∈ C and point p ∈ R^d, we have d(p, c) ≤ d(f(p), f(c)) ≤ D · d(p, c).
Definition 3.3 (Approximate nearest neighbor search). Given a set P of n points in a metric space (X, d), a (c, r)-approximate nearest neighbor search (ANN) data structure takes any query point q ∈ X with non-empty {p ∈ P : 0 < d(p, q) ≤ r} and outputs a point in {p ∈ P : 0 < d(p, q) ≤ cr}.
To justify the guarantees of Algorithm 3, we need runtime guarantees on creating a suitable dimensionality reduction map and an ANN data structure. These are from Makarychev et al. (2019) and from Indyk & Motwani (1998); Har-Peled et al. (2012); Andoni et al. (2018), respectively, and are stated in Theorems A.12 and A.13 in the supplementary section. They ensure that each point is mapped to a "good" center. Thus, we obtain our main result describing the guarantees of Algorithm 3. Theorem 3.4. Let α ∈ (10 log n/√n, 1/7), Π be a predictor with label error rate λ ≤ α, and γ ≥ 1 be a sufficiently large constant. If each cluster in the optimal k-means clustering of the predictor has at least γk log k/α points, then Algorithm 3 outputs a (1 + 20α)-approximation to the k-means objective with probability at least 3/4, using O(nd log n + poly(k, log n)) total time.
Note that if we wish to only output the k centers rather than labeling all of the input points, then the query complexity of Algorithm 3 is Õ(k/α) (see Step 1 of Algorithm 3) with high probability. We show in the supplementary material that this is nearly optimal.
Theorem 3.5. For any δ ∈ (0, 1], any algorithm that makes O(k^{1−δ}/(α log n)) queries to the predictor with label error rate α cannot output a (1 + Cα)-approximation to the optimal k-means clustering cost in 2^{O(n^{1−δ})} time, assuming the Exponential Time Hypothesis.
Algorithm 3 Fast learning-augmented algorithm for k-means clustering.
Input: A point set X, a predictor Π with label error rate λ ≤ α, and a tradeoff parameter ζ
Output: A (1 + α)-approximate k-means clustering of X
1: Form S by sampling each point of X with probability (100 log k)/(α|Ax|), where Ax is the set of points with the same label as x according to Π.
2: Let C1, . . . , Ck be the output of Algorithm 1 on S.
3: Let φ2 be a random JL linear map with distortion 5/4, i.e., dimension O(log n).
4: Let φ1 be a terminal dimension reduction with distortion 5/4.
5: Let φ := φ1 ◦ φ2 be the composition map.
6: Let A be a (2, r)-ANN data structure on the points φ(C1), . . . , φ(Ck).
7: for x ∈ X do
8:   Let ℓx be the label of x from Π.
9:   ρ ← d(x, Cℓx)
10:  Query A to find the closest center φ(Cpx) to x with r = ρ/2.
11:  if d(x, Cpx) < 2 d(x, Cℓx) then
12:    Assign label px to x.
13:  else
14:    Assign label ℓx to x.
15:  end if
16: end for
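The per-point assignment rule of steps 7-16 in isolation (a sketch; ann_query stands in for the (2, r)-ANN data structure over the projected centers and is an assumption here, returning a center index or None):

import numpy as np

def assign(x, centers, oracle_label, ann_query):
    # Keep the predictor's label unless the ANN finds a sufficiently closer center.
    rho = np.linalg.norm(x - centers[oracle_label])   # step 9
    p = ann_query(x, r=rho / 2)                       # step 10
    if p is not None and np.linalg.norm(x - centers[p]) < 2 * rho:  # step 11
        return p                                      # step 12
    return oracle_label                               # step 14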
4 EXPERIMENTS
In this section we evaluate Algorithm 1 empirically on real datasets. We choose to implement Algorithm 1, as opposed to the runtime optimal Algorithm 3, for simplicity and because the goal of our experiments is to highlight the error guarantees of our methodology, which both algorithms share. Further, we will see that Algorithm 1 is already very fast compared to alternatives. Thus, we implement the simpler of the two algorithms. We primarily fix the number of clusters to be k = 10 and k = 25 throughout our experiments for all datasets. Note that our predictors can readily generalize to other values of k but we focus on these two values for clarity. All of our experiments were done on a CPU with i5 2.7 GHz dual core and 8 GB RAM. Furthermore, all our experimental results are averaged over 20 independent trials and ± one standard deviation error is shaded when applicable. We give the full details of our datasets below.
1) Oregon: Dataset of 9 graph snapshots sampled across 3 months from an internet router communication network (Leskovec et al., 2005). We then use the top two eigenvectors of the normalized Laplacian matrix to give us node embeddings into R^2 for each graph, which gives us 9 datasets, one for each graph. Each dataset has roughly n ∼ 10^4 points. This is an instance of spectral clustering. 2) PHY: Dataset from KDD Cup 2004 (kdd, 2004). We take 10^4 random samples to form our dataset. 3) CIFAR10: Testing portion of CIFAR-10 (n = 10^4, dimension 3072) (Krizhevsky, 2009).
Baselines. We compare against the following algorithms. Additional experimental results on Lloyd’s heuristic are given in Section E.3 in the supplementary material.
1) kmeans++: We measure the performance of our algorithm in comparison to the kmeans++ seeding algorithm. Since kmeans++ is a randomized algorithm, we take the average clustering cost after running kmeans++ seeding on 20 independent trials. We then standardize this value to have cost 1.0 and report all other costs in terms of this normalization. For example, the cost 2.0 means that the clustering cost is twice that of the average kmeans++ clustering cost. We also use the labels of kmeans++ as the predictor in the input for Algorithm 1 (denoted as “Alg + kmeans++”) which serves to highlight the fact that one can use any heuristic or approximate clustering algorithm as a predictor.
2) Random sampling: For this algorithm, we subsample the predictor labels with probability q ranging from 1% to 50%. We then construct the k-means centers using the labels of the sampled points and measure the clustering cost using the whole dataset. We use the best value of q in our range every time to give this baseline as much power as possible. We emphasize that random sampling cannot have theoretical guarantees since the random sample can be corrupted (similarly
as in the example in Section 1.1). Thus some outlier detection steps (such as our algorithms) are required.
Predictor Description. We use the following predictors in our experiments.
1) Nearest neighbor: We use this predictor for the Oregon dataset. We find the best clustering of the node embeddings in Graph #1. In practice, this means running many steps of Lloyd’s algorithm until convergence after initial seeding by kmeans++. Our predictor takes as input a point in R2 representing a node embedding of any of the later 8 graphs and outputs the label of the closest node in Graph #1.
2) Noisy predictor. This is the main predictor for PHY. We form this predictor by first finding the best k-means solution on our datasets. This again means initial seeding by kmeans++ and then many steps of Lloyd’s algorithm until convergence. We then randomly corrupt the resulting labels by changing them to a uniformly random label independently with error probability ranging from 0 to 1. We report the cost of clustering using only these noisy labels versus processing these labels using Algorithm 1.
3) Neural network. We use a standard neural network architecture (ResNet18), trained on the training portion of the CIFAR-10 dataset, as the oracle for the testing portion, which we use in our experiments. We used a pretrained model obtained from Huy (2020). Note that the neural network predicts the class of the input image; however, the class value is highly correlated with the optimal k-means cluster group. A code sketch of the noisy-predictor construction follows.
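As an illustration, the noisy predictor above can be simulated in a few lines of Python. This is a sketch under the assumption that `reference_labels` holds the converged kmeans++/Lloyd's labels; it is not the exact code used in our experiments.

```python
import numpy as np

def noisy_predictor(reference_labels, k, error_prob, seed=0):
    """Corrupt each reference label independently with probability error_prob,
    replacing it by a uniformly random label in {0, ..., k-1}."""
    rng = np.random.default_rng(seed)
    labels = np.array(reference_labels, copy=True)
    corrupt = rng.random(len(labels)) < error_prob
    labels[corrupt] = rng.integers(0, k, size=int(corrupt.sum()))
    return labels
```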
Summary of results. Our experiments show that our algorithm can leverage predictors to significantly improve the cost of k-means clustering and that good predictors can be easily tailored to the data at hand. The cost of k-means clustering reduces significantly after applying our algorithm compared to just using the predictor labels for two of our predictors. Lastly, the quality of the predictor remains high for the Oregon dataset even though the later graphs have changed and “moved away” from Graph #1.
Selecting α in Algorithm 2. In practice, the choice of α to use in our algorithm depends on the given predictor, whose properties may be unknown. Since our goal is to minimize the k-means clustering objective (1), we can simply pick the value that performs best. To do so, we iterate over a small range of possible α from 0.01 to 0.15 in Algorithm 2 with a step size of 0.01 and select the clustering that results in the lowest objective cost. The range is fixed for all of our experiments (see Section 2.1).
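A sketch of this selection loop is below. The callable `cluster_with_alpha`, which runs Algorithms 1–2 for a given α and returns labels, is a hypothetical name introduced only for this illustration.

```python
import numpy as np

def kmeans_cost(X, labels, k):
    """Objective (1): sum of squared distances to each cluster's centroid."""
    return sum(((X[labels == i] - X[labels == i].mean(axis=0)) ** 2).sum()
               for i in range(k) if np.any(labels == i))

def select_alpha(X, predicted_labels, k, cluster_with_alpha):
    """Try alpha in {0.01, 0.02, ..., 0.15} and keep the lowest-cost clustering."""
    best_cost, best_labels = np.inf, None
    for alpha in np.arange(0.01, 0.1501, 0.01):
        labels = cluster_with_alpha(X, predicted_labels, k, alpha)  # hypothetical
        cost = kmeans_cost(X, labels, k)
        if cost < best_cost:
            best_cost, best_labels = cost, labels
    return best_labels
```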
4.1 RESULTS
Oregon. We first compare our algorithm with Graph #1 as the predictor against various baselines. This is shown in Figures 1(a) and Figure 1(b). In the k = 10 case, Figure 1(a) shows that the predictor returns a clustering better than using just the kmeans++ seeding, which is normalized to have cost 1.0. This is to be expected since the subsequent graphs represent a similar network as Graph #1, just sampled later in time. However, the clustering improves significantly after using our algorithm on the predictor labels as the average cost drops by 55%. We also see that using our algorithm after kmeans++ is also sufficient to give significant decrease in clustering cost. Lastly,
random sampling also gives comparable results. This can be explained because we are iterating over a large range of subsampling probabilities for random sampling.
In the k = 25 case, Figure 1(b) shows that the oracle performance degrades and is worse than the baseline in 5 of the 8 graphs. However our algorithm again improves the quality of the clustering over the oracle across all graphs. Using kmeans++ as the predictor in our algorithm also improves the cost of clustering. The performance of random sampling is also worse. For example in Graph #3 for k = 25, it performed the worst out of all the tested algorithms.
Our algorithm also remains competitive with kmeans++ seeding even if the predictor for the Oregon dataset is highly corrupted. We consider a later graph, Graph #5, and corrupt the labels of the predictor randomly with probability q ranging from 1% to 25% for the k = 10 case in Figure 1(c). While the cost of clustering using just the predictor labels can become increasingly worse, our algorithm is able to sufficiently “clean” the predictions. In addition, the cost of random sampling also gets worse as the corruptions increase, implying that it is much more sensitive to noise than our algorithm. The qualitatively similar plot for k = 25 is given in the supplementary section. Note that in spectral clustering, one may wish to get a mapping to Rd for d > 2. We envision that our results translate to those settings as well since having higher order spectral information only results in a stronger predictor. We continue the discussion on the PHY and CIFAR-10 datasets in Section E.
Comparison to Lloyd’s Heuristic. In Section E.3, we provide additional results on experiments using Lloyd’s heuristic. In summary, we give both theoretical and empirical justifications for why our algorithms are superior to blindly following a predictor and then running Lloyd’s heuristic.
ACKNOWLEDGEMENTS
Zhili Feng, David P. Woodruff, and Samson Zhou gratefully acknowledge partial support from NSF grant No. CCF-181584, Office of Naval Research (ONR) grant N00014-18-1-256, and a Simons Investigator Award. Sandeep Silwal was supported in part by an NSF Graduate Research Fellowship.
A APPENDIX
Theorem A.1 (Chernoff Bounds). Let X_1, . . . , X_n be independent random variables taking values in {0, 1}. Let X = Σ_{i=1}^n X_i denote their sum and let µ = E[X] denote the sum's expected value. Then for any δ ∈ (0, 1),
Pr[X ≤ (1 − δ)µ] ≤ e^{−δ²µ/2}.
For any δ > 0,
Pr[X ≥ (1 + δ)µ] ≤ e^{−δ²µ/3}.
Furthermore, for any t > 0,
Pr[|X − µ| ≥ t] ≤ e^{−t²/(4n)}.
A.1 PROOF OF THEOREM 2.1
We first prove Theorem 2.1, which shows that Algorithm 1 provides a (1 +α)-approximation to the optimal k-means clustering, but uses suboptimal time compared to a faster algorithm we present in Section 3. All omitted proofs of lemmas appear in Section A.2.
We first show that for each coordinate, the empirical center for any (1 − α)-fraction of the input points provides a good approximation to the optimal k-means clustering cost.
Lemma A.2. Let P, Q ⊆ R be sets of points on the real line such that |P| ≥ (1 − α)n and |Q| ≤ αn. Let X = P ∪ Q, let C_P be the mean of P and C_X be the mean of X. Then
cost(X, C_P) ≤ (1 + α/(1−α)²) cost(X, C_X).
We now show that a conceptual interval I* ⊂ R with "small" length contains a significant fraction of the true points. Ultimately, we will show that the interval I computed in the "training" phase in CRDEST has smaller length than I* with high probability and yet I also contains a significant fraction of the true points. The main purpose of I* (and eventually I) is to filter out extreme outliers, because the "testing" phase only considers points in I ∩ X_2.
Lemma A.3. For a fixed set X ⊆ R, let C be the mean of X and let σ² = (1/(2|X|)) Σ_{x∈X} (x − C)² be the variance. Then the interval I* = [C − σ/√α, C + σ/√α] contains at least a (1 − 4α) fraction of the points in X.
Using Lemma A.3, we show that the interval I that is computed in the “training” phase contains a significant fraction of the true points.
Lemma A.4. Let m be a sufficiently large constant. We have that I := [a, b] contains at least a 1 − 6α fraction of points of X_2 and b − a ≤ 2σ/√α, with high probability, i.e., 1 − 1/poly(m).
We next show that the optimal clustering on a subset obtained by independently sampling each input point provides a rough approximation of the optimal clustering. That is, the optimal center is well-approximated by the empirical center of the sampled points.
Lemma A.5. Let S be a set of points obtained by independently sampling each point of X ⊆ R^d with probability p = 1/2. Let C be the optimal center of these points and C_S be the empirical center of these points. Conditioned on |S| ≥ 1, we have E[C_S] = x̄, and there exists a constant γ such that for η ≥ 1 and |X| > ηγk/α,
E[‖C_S − x̄‖₂²] ≤ (γ/|X|²) · Σ_{x∈X} ‖x − x̄‖₂²  and  Pr[cost(X, C_S) > (1 + α) cost(X, C)] < 1/(ηk).
Using Lemma A.2, Lemma A.4, and Lemma A.5, we justify the correctness of the subroutine CRDEST.
Lemma A.6. Let α ∈ (10 log n/√n, 1/7). Let P, Q ⊆ R be sets of points on the real line such that |P| ≥ (1 − α)2m and |Q| ≤ 2αm, and let X = P ∪ Q. Let C be the center of P. Then CRDEST on input set X outputs a point C′ such that, with probability at least 1 − 1/(ηk), cost(P, C′) ≤ (1 + 18α)(1 + α)(1 + α/(1−α)²) cost(P, C).
Using CRDEST as a subroutine for each coordinate, we now prove Theorem 2.1, justifying the correctness of Algorithm 1 by generalizing to all coordinates and centers and analyzing the runtime of Algorithm 1.
Proof of Theorem 2.1. Since Π has label error rate λ ≤ α, then by definition of label error rate, at least a (1 − α) fraction of the points in each cluster are correctly labeled. Note that the k-means clustering cost can be decomposed into the sum of the costs induced by the centers in each dimension. Specifically, for a set C = {C_1, . . . , C_k} of optimal centers,
cost(X, C) := Σ_{x∈X} d(x, C)² = Σ_{i=1}^k Σ_{x∈S_i} d(x, C_i)²,
where Si is the set of points in X that are assigned to center Ci. For a particular i ∈ [k], we have
Σ_{x∈S_i} d(x, C_i)² = Σ_{x∈S_i} Σ_{j=1}^d d(x_j, (C_i)_j)²,
where x_j and (C_i)_j denote the j-th coordinates of x and C_i, respectively.
By Lemma A.6, the cost induced by CRDEST for each dimension in each center C′_i is a (1 + α)-approximation of the total clustering cost for the optimal center C_i in that dimension with probability 1 − 1/(ηk). That is,
Σ_{x∈S_i} d(x_j, (C′_i)_j)² ≤ (1 + 18α)(1 + α)(1 + α/(1−α)²) Σ_{x∈S_i} d(x_j, (C_i)_j)²
for each j ∈ [d]. Thus, taking a sum over all dimensions j ∈ [d] and union bounding over all centers i ∈ [k], we have that the total cost induced by Algorithm 1 is a (1 + 20α)-approximation to the optimal k-means clustering cost with probability at least 1− 1/η. To analyze the time complexity of Algorithm 1, first consider the subroutine CRDEST. It takes O(kdn) time to first split each of the points in each cluster and dimension into two disjoint groups. Finding the smallest interval that contains a certain number of points can be done by first sorting the points and then iterating from the smallest point to the largest point and taking the smallest interval that contains enough points. This requires O(n log n) time for each dimension and each center, which results in O(kdn log n) total time. Once each of the intervals is found, computing the approximate center then takes O(kdn) total time. Hence, the total running time of Algorithm 1 is O(kdn log n).
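The "smallest interval" step referenced in the runtime analysis above translates directly into code. The following is a minimal sketch of the sort-then-slide routine for one coordinate; it assumes `count ≤ len(points)`, as in the analysis.

```python
def shortest_interval(points, count):
    """Return (a, b), the shortest interval containing `count` of the points.
    Sorting dominates, so the routine runs in O(n log n) time."""
    pts = sorted(points)
    best = (pts[0], pts[count - 1])
    # Slide a window of `count` consecutive sorted points.
    for i in range(len(pts) - count + 1):
        a, b = pts[i], pts[i + count - 1]
        if b - a < best[1] - best[0]:
            best = (a, b)
    return best
```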
A.2 PROOF OF AUXILIARY LEMMAS
Lemma A.2. Let P, Q ⊆ R be sets of points on the real line such that |P| ≥ (1 − α)n and |Q| ≤ αn. Let X = P ∪ Q, let C_P be the mean of P and C_X be the mean of X. Then
cost(X, C_P) ≤ (1 + α/(1−α)²) cost(X, C_X).
Proof. Suppose without loss of generality, that CX = 0 and CP ≤ 0, so that CQ ≥ 0, where CQ is the mean of Q. Then it is well-known, e.g., see Inaba et al. (1994), that
cost(X, C_P) = cost(X, C_X) + |X| · |C_P − C_X|².
Hence, it suffices to show that |X| · |C_P − C_X|² ≤ (α/(1−α)²) cost(X, C_X).
Since C_X = 0, we have |P|·C_P = −|Q|·C_Q, with |P| ≥ (1−α)n and |Q| ≤ αn. Let |P| = (1−ρ)n and |Q| = ρn for some ρ ≤ α. Thus, C_Q = −((1−ρ)/ρ)·C_P. By convexity, we thus have that
cost(Q, C_X) ≥ |Q| · ((1−ρ)²/ρ²) · |C_P|² = (n(1−ρ)²/ρ) · |C_P|² ≥ (n(1−α)²/α) · |C_P|².
Therefore, we have
|C_P − C_X|² = |C_P|² ≤ (α/(n(1−α)²)) cost(Q, C_X) ≤ (α/(n(1−α)²)) cost(X, C_X).
Thus,
|X| · |C_P − C_X|² ≤ (α/(1−α)²) cost(X, C_X),
as desired.
Lemma A.3. For a fixed set X ⊆ R, let C be the mean of X and let σ² = (1/(2|X|)) Σ_{x∈X} (x − C)² be the variance. Then the interval I* = [C − σ/√α, C + σ/√α] contains at least a (1 − 4α) fraction of the points in X.
Proof. Note that any point x ∈ X \ I* satisfies |x − C|² > σ²/α. Thus, if more than a 4α fraction of the points of X were outside of I*, the total contribution to Σ_{x∈X} (x − C)² would exceed 4α|X| · σ²/α = 4|X|σ² > 2|X|σ² = Σ_{x∈X} (x − C)², which is a contradiction.
For ease of presentation, we analyze λ = 1/2 and we note that the analysis extends easily to general λ. We now prove the technical lemma that we will use in the proof of Lemma A.8.
Lemma A.7. We have
Σ_{j=1}^m \binom{m}{j}/(j · 2^m) = Θ(1/m).
Proof. Let m be sufficiently large. A Chernoff bound implies that for a sufficiently large constant C,
Σ_{|j−m/2| ≥ C√m} \binom{m}{j}/2^m ≤ 1/m².
Furthermore,
Σ_{j ≥ C′m} \binom{m}{j}/(j · 2^m) = O(1/m) · Σ_{j≥1} \binom{m}{j}/2^m = O(1/m),
so the upper bound on the desired relation holds. A similar analysis provides a lower bound.
Lemma A.4. Let m be a sufficiently large constant. We have that I := [a, b] contains at least a 1 − 6α fraction of points of X_2 and b − a ≤ 2σ/√α, with high probability, i.e., 1 − 1/poly(m).
Proof. By Lemma A.3, I∗ contains at least 2m(1 − 4α) of the points in X . Hence, by applying an additive Chernoff bound for t = O( √ m logm) and for sufficiently large m, we have that the number of points in I∗ ∩ X1 is at least m(1 − 5α) with high probability. Since I is the interval of minimal length with at least m(1 − 5α) points, then the length of I is at most the length of I∗. Moreover, again applying Chernoff bounds, we have that the number of points in I ∩X2 is at least m(1− 6α). More formally, suppose we have a set of 2m points that we randomly partition into two sets X1 and X2. Consider any fixed interval J that has at least 2cm total points for c ≥ 1 − 5α (note there
are at most O(m²) intervals in total since our points are in one dimension). Let J_1 and J_2 denote the number of points in J that are in X_1 and X_2, respectively. By a Chernoff bound, we have that both J_1 and J_2 are at least mc(1 − α) with high probability. In particular, |J_1 − J_2| ≤ αmc with high probability. Thus, by using a union bound, all intervals with at least cm total points satisfy the property that the number of points partitioned to X_1 and the number of points partitioned to X_2 differ by at most αmc with high probability. Conditioning on this event, I must also contain m(1 − 6α) points in X_2 since it contains at least m(1 − 5α) points in X_1, as desired.
Lemma A.8. Let S be a set of points obtained by independently sampling each point of X ⊆ R^d with probability 1/2, and let C_S be the centroid of S. Let x̄ be the centroid of X. Conditioned on |S| ≥ 1, we have E[C_S] = x̄, and there exists a constant γ such that
E[‖C_S − x̄‖₂²] ≤ (γ/|X|²) · Σ_{x∈X} ‖x − x̄‖₂².
Proof. We first prove that E[C_S] = x̄. Note that by the law of iterated expectations,
E[C_S] = E_{|S|}[ E[C_S | |S|] ].
Let x_{i_1}, . . . , x_{i_{|S|}} be a random permutation of the elements in S, so that for each 1 ≤ j ≤ |S|, we have E[x_{i_j}] = x̄. Now, conditioning on the size of S, we can write
C_S = (x_{i_1} + · · · + x_{i_{|S|}})/|S|.
Therefore,
E[C_S | |S|] = x̄ · |S|/|S| = x̄,
and it follows that E[C_S] = x̄.
To prove that
E[‖C_S − x̄‖²] ≤ (γ/|X|²) · Σ_{x∈X} ‖x − x̄‖²,
we again condition on |S|. Suppose that |S| = j. Then,
C_S − x̄ = ((x_{i_1} − x̄) + · · · + (x_{i_j} − x̄))/j.
Now let y_{i_t} = x_{i_t} − x̄ for all 1 ≤ t ≤ j. Therefore,
E_{|S|=j}[‖C_S − x̄‖²] = (1/j²) · E[‖y_{i_1} + · · · + y_{i_j}‖²] = (1/j) · E[‖y_{i_1}‖²] + ((j−1)/j) · E[y_{i_1}ᵀ y_{i_2}].
Note that x_{i_1} is uniform over elements in X, so it follows that
E[‖y_{i_1}‖²] = (1/|X|) Σ_{x∈X} ‖x − x̄‖².
Now if j ≥ 2, we have that
E[y_{i_1}ᵀ y_{i_2}] = Σ_{a<b} y_aᵀ y_b / \binom{|X|}{2} = (‖Σ_i y_i‖² − Σ_i ‖y_i‖²)/(|X|(|X|−1)) ≤ 0,
since Σ_i y_i = 0 by definition. Hence, for j ≥ 2,
E_{|S|=j}[‖C_S − x̄‖²] ≤ (1/(j·|X|)) Σ_{x∈X} ‖x − x̄‖².
Now the probability that |S| = j is precisely \binom{|X|}{j}/2^{|X|}, so we have
Pr[|S| ≥ 2] · E_{|S|≥2}[‖C_S − x̄‖²] ≤ (1/|X|) · (Σ_{x∈X} ‖x − x̄‖²) · Σ_{j=1}^{|X|} \binom{|X|}{j}/(j · 2^{|X|}).
From Lemma A.7, we have that
Σ_{j=1}^{|X|} \binom{|X|}{j}/(j · 2^{|X|}) ≤ c/|X|
for some constant c, so it follows that
E[‖C_S − x̄‖²] ≤ (c′/|X|²) · (Σ_{x∈X} ‖x − x̄‖²)
for some constant c′.
For j = 1, note that
E_{|S|=1}[‖C_S − x̄‖²] = (1/|X|) Σ_{x∈X} ‖x − x̄‖².
Moreover, we have Pr[|S| = 1] = |X|/2^{|X|} and Pr[|S| = 0] = 1/2^{|X|}. Thus, from the law of total expectation, we have
E[‖C_S − x̄‖²] = Pr[|S| < 2] · E_{|S|<2}[‖C_S − x̄‖²] + Pr[|S| ≥ 2] · E_{|S|≥2}[‖C_S − x̄‖²]
≤ (|X|/2^{|X|}) · (1/|X|) Σ_{x∈X} ‖x − x̄‖² + (c′/|X|²) · (Σ_{x∈X} ‖x − x̄‖²)
≤ (γ/|X|²) · (Σ_{x∈X} ‖x − x̄‖²)
for some constant γ, as desired.
Lemma A.9. Let S be a set of points obtained by independently sampling each point of X ⊆ R^d with probability p = 1/2. Let C be the optimal center of X and C_S be the empirical center of S. Let γ ≥ 1 be the constant from Lemma A.8. Then for η ≥ 1 and |X| > ηγk/α,
Pr[cost(X, C_S) > (1 + α) cost(X, C)] < 1/(ηk).
Proof. By Lemma A.8 and Markov's inequality, we have
Pr[‖C_S − C‖₂² ≥ (ηγk/|X|²) Σ_{x∈X} ‖x − C‖₂²] ≤ 1/(ηk).
We have
Σ_{x∈X} ‖x − C_S‖₂² = Σ_{x∈X} ‖x − C‖₂² + |X| · ‖C − C_S‖₂²,
so that by Lemma A.8,
Σ_{x∈X} ‖x − C_S‖₂² ≤ (1 + ηγk/|X|) Σ_{x∈X} ‖x − C‖₂² = (1 + ηγk/|X|) cost(X, C),
with probability at least 1 − 1/(ηk). Hence, for |X| ≥ ηγk/α, the approximate centroid of each cluster induces a (1 + α)-approximation to the cost of the corresponding cluster.
Lemma A.5. Let S be a set of points obtained by independently sampling each point of X ⊆ R^d with probability p = 1/2. Let C be the optimal center of these points and C_S be the empirical center of these points. Conditioned on |S| ≥ 1, we have E[C_S] = x̄, and there exists a constant γ such that for η ≥ 1 and |X| > ηγk/α,
E[‖C_S − x̄‖₂²] ≤ (γ/|X|²) · Σ_{x∈X} ‖x − x̄‖₂²  and  Pr[cost(X, C_S) > (1 + α) cost(X, C)] < 1/(ηk).
Proof. Lemma A.5 follows immediately from Lemma A.8 and Lemma A.9.
Lemma A.6. Let α ∈ (10 log n/√n, 1/7). Let P, Q ⊆ R be sets of points on the real line such that |P| ≥ (1 − α)2m and |Q| ≤ 2αm, and let X = P ∪ Q. Let C be the center of P. Then CRDEST on input set X outputs a point C′ such that, with probability at least 1 − 1/(ηk), cost(P, C′) ≤ (1 + 18α)(1 + α)(1 + α/(1−α)²) cost(P, C).
Proof. Let α ∈ (10 log n/√n, 1/7). Then from Lemma A.4, we have that I ∩ X contains at least (1 − 6α)m points of P ∩ X_2 and at most 2αm points of Q in an interval of length 2σ/√α, where
σ² = (1/(2|P|)) Σ_{p∈P} (p − C)² = (1/(2|P|)) · cost(P, C).
From Lemma A.2, we have that
cost(P, C_0) ≤ (1 + α/(1−α)²) cost(P, C_1),
where C_0 is the center of I ∩ P ∩ X_2 and C_1 is the center of P ∩ X_2. For sufficiently large m and from Lemma A.9, we have that
cost(P, C_1) ≤ (1 + α) cost(P, C),
with probability at least 1 − 1/(ηk). Thus, it remains to show that cost(P, C′) ≤ (1 + O(α)) cost(P, C_0).
Since C_0 is the center of I ∩ P ∩ X_2 and C′ is the center of I ∩ X_2, we have
|I ∩ P ∩ X_2| · C_0 + Σ_{q∈I∩Q∩X_2} q = |I ∩ X_2| · C′.
Since I has length 2σ/√α, each q ∈ [C_0 − 2σ/√α, C_0 + 2σ/√α]. Because |I ∩ P ∩ X_2| ≥ (1 − 6α)m and |Q| = 2αm, for sufficiently small α we have that |C′ − C_0| ≤ 6√α·σ. Note that we have cost(P, C′) = cost(P, C_0) + |P| · |C_0 − C′|², so that cost(P, C′) ≤ cost(P, C_0) + |P| · 36ασ². Finally, σ² = (1/(2|P|)) · cost(P, C) and cost(P, C) ≤ cost(P, C_0) due to the optimality of C. This implies
cost(P, C′) ≤ cost(P, C_0) + |P| · 36ασ² ≤ cost(P, C_0) + |P| · 36α · (1/(2|P|)) · cost(P, C) ≤ cost(P, C_0) + 18α cost(P, C_0) = (1 + 18α) cost(P, C_0),
as desired. Thus, putting things together, we have
cost(P, C′) ≤ (1 + 18α)(1 + α)(1 + α/(1−α)²) cost(P, C).
A.3 PROOF OF THEOREM 3.4
We now give the proofs for optimal query complexity and runtime. We first require the following analogue to Lemma A.5:
Lemma A.10. Let S be a set of points obtained by independently sampling each point of X ⊆ R^d with probability p = min(1, (100 log k)/(α|X|)). Let C be the optimal center of these points and C_S be the empirical center of these points. Conditioned on |S| ≥ 1, we have E[C_S] = x̄ and, for |X| > γk/α,
E[‖C_S − x̄‖₂²] ≤ (γ/(p|X|²)) · Σ_{x∈X} ‖x − x̄‖₂²
for some constant γ.
Lemma A.11. For α ∈ (10 log n/ √ n, 1/7), let Π be a predictor with error rate λ ≤ α/2. If each cluster has at least γk log k/α points, then Algorithm 3 outputs a (1 + 20α)-approximation to the k-means objective value with probability at least 3/4.
Proof. Since S samples each point independently with probability inversely proportional to the cluster sizes given by Π, for a fixed i ∈ [k], at least (90 log k)/α points with label i are sampled, with probability at least 1 − 1/k⁴ by Chernoff bounds. Let γ_1, . . . , γ_k be the empirical means corresponding to the sampled points with labels 1, . . . , k, respectively, and let Γ_0 = {γ_1, . . . , γ_k}. Let C_1, . . . , C_k be centers of a (1 + α)-approximate optimal solution C with corresponding clusters X_1, . . . , X_k. By Lemma A.10, we have that
E[‖C_i − γ_i‖₂²] ≤ (γ/(p|X_i|²)) · Σ_{x∈X_i} ‖x − C_i‖₂²,
where p = min(1, (100 log k)/(α|X_i|)). By Markov's inequality, we have that
Σ_{i∈[k]} ‖C_i − γ_i‖₂² ≤ 100 Σ_{i∈[k]} (γ/(p|X_i|²)) · Σ_{x∈X_i} ‖x − C_i‖₂²
with probability at least 0.99. Similar to the proof of Lemma A.9, we use the identity
Σ_{x∈X_i} ‖x − γ_i‖₂² = Σ_{x∈X_i} ‖x − C_i‖₂² + |X_i| · ‖C_i − γ_i‖₂².
Hence, we have that cost(X,Γ0) ≤ (1 + α) · cost(X,C),
with probability at least 0.99.
Suppose Π has error rate λ ≤ α and each error chooses a label uniformly at random from the k possible labels. Then by definition of error rate, at most α/2 fraction of the points are erroneously labeled for each cluster. Each cluster in the optimal k-means clustering of the predictor Π has at least n/(ζk) points, so that at least a (1 − α) fraction of the points in each cluster are correctly labeled. Thus, by the same argument as in the proof of Lemma A.6, we have that Algorithm 1 outputs a set of centers C1, . . . , Ck such that for Γ = {C1, . . . , Ck}, we have
cost(X, Γ) ≤ (1 + 18α)(1 + α/(1−α)²) · cost(X, Γ_0),
with sufficiently large probability. Let E be the event that cost(X, Γ) ≤ (1 + α)(1 + 18α)(1 + α/(1−α)²) · cost(X, C), so that Pr[E] ≥ 1 − 1/poly(k). Conditioned on E, let X_1 be the subset of X that is assigned the correct label by Π, and let X_2 be the subset of X assigned the incorrect label. For each point x ∈ X_1 assigned the correct label ℓ_x by Π, the closest center to x in Γ is C_{ℓ_x}, so Algorithm 3 will always label x with ℓ_x. Thus,
cost(X_1, Γ) ≤ cost(X, Γ) ≤ (1 + α)(1 + 18α)(1 + α/(1−α)²) · cost(X, C),
conditioned on E. On the other hand, if x ∈ X_2 is assigned an incorrect label ℓ_x by Π, then the (2, r)-approximate nearest neighbor data structure assigns the label p_x to x, where φ(C_{p_x}) is the closest center to φ(x) in the projected space. Recall that φ is the composition map φ_1 ∘ φ_2, where φ_1 is a terminal dimension reduction with distortion 5/4, and φ_2 is a random JL linear map with distortion 5/4. Thus, the distance between x and C_{p_x} is a 2-approximation of the distance between x and its closest center C_i. Hence, by assigning all points x to their respective centers C_{p_x}, we have d(x, C_{p_x}) ≤ 2 d(x, Γ). Since each point x ∈ X is assigned the incorrect label with probability λ ≤ α/2, the expected cost of the labels assigned to X_2 is α cost(X, Γ). By Markov's inequality, the cost of the labels assigned to X_2 is at most 10α cost(X, Γ) < 10α(1 + α) cost(X, C), with probability at least 1 − 1/5, conditioned on E. Therefore, by a union bound, the total cost is at most (1 + 20α) · cost(X, C), with probability at least 3/4.
We need the following theorems on the quality of the data structures utilized in Algorithm 3.
Theorem A.12. Makarychev et al. (2019) For every set C ⊂ R^d of size k, a parameter 0 < α < 1/2, and the standard Euclidean norm d(·, ·), there exists a terminal dimension reduction f : C → R^{d′} with distortion (1 + α), where d′ = O((log k)/α²). The dimension reduction can be computed in polynomial time.
Theorem A.13. Indyk & Motwani (1998); Har-Peled et al. (2012); Andoni et al. (2018) For α > 0, there exists a (1 + α, r)-ANN data structure over R^d equipped with the standard Euclidean norm that achieves query time O(d · (log n)/α²) and space S := O((1/α²) log(1/α) + d(n + q)), where q := (log n)/α². The runtime of building the data structure is O(S + ndq).
We now prove Theorem 3.4. Theorem 3.4. Let α ∈ (10 log n/ √ n, 1/7), Π be a predictor with label error rate λ ≤ α, and γ ≥ 1 be a sufficiently large constant. If each cluster in the optimal k-means clustering of the predictor has at least γk log k/α points, then Algorithm 3 outputs a (1 + 20α)-approximation to the k-means objective with probability at least 3/4, using O(nd log n+ poly(k, log n)) total time.
Proof. The approximation guarantee of the algorithm follows from Lemma A.11. To analyze the running time, we first note that we apply a JL matrix with dimension O(log n) to each of the n points in R^d, which uses O(nd log n) time. As a result of the JL embedding, each of the n points has dimension O(log n). Thus, by Theorem A.12, constructing the terminal embedding uses poly(k, log n) time. As a result of the terminal embedding, each of the k possible centers has dimension O(log k). Hence, by Theorem A.13, constructing the (2, r)-ANN data structure for the k possible centers uses O(k log² k) time. Subsequently, each query to the data structure uses O(log² k) time. Therefore, the overall runtime is O(nd log n + poly(k, log n)).
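For concreteness, the random JL linear map used in the proof can be sketched as a scaled Gaussian matrix; the target dimension O(log n) and the distortion parameter are dictated by the analysis above, and the snippet below is an illustrative sketch rather than the exact construction.

```python
import numpy as np

def jl_project(X, target_dim, seed=0):
    """Project n points in R^d to R^target_dim with a random Gaussian map.
    With target_dim = O(log n / eps^2), pairwise distances are preserved
    up to a (1 +/- eps) factor with high probability."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    G = rng.normal(size=(d, target_dim)) / np.sqrt(target_dim)
    return X @ G
```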
A.4 REMARK ON TRULY-POLYNOMIAL TIME ALGORITHMS VS. PTAS/PRAS.
Remark A.14. We emphasize that the runtime of our algorithm in Theorem 2.1 is truly polynomial in all input parameters n, d, k and 1/α (and even near-linear in the input size nd). Although there exist polynomial-time randomized approximation schemes for k-means clustering, e.g., Inaba et al. (1994); Feldman et al. (2007); Kumar et al. (2004), their runtimes all have exponential dependency on k and 1/α, i.e., 2^{poly(k, 1/α)}. However, this does not suffice for many applications, since k and 1/α should be treated as input parameters rather than constants. For example, it is undesirable to pay an exponential amount of time to linearly improve the accuracy α of the algorithm. Similarly, if the number of desired clusters is k = O(log² n), then the runtime would be superpolynomial. Thus, we believe the exponential improvement of Theorem 2.1 over existing PRAS in terms of k and 1/α is significant.
A.5 REMARK ON POSSIBLE INSTANTIATIONS OF PREDICTOR
Remark A.15. We can instantiate Theorem 2.1 with various versions of the predictor. Assume each cluster in the (1 + α)-approximately optimal k-means clustering of the predictor has size at least n/(ζk) for some tradeoff parameter ζ ∈ [1, √n/(8k log n)]. Then the clustering quality and runtime guarantees of Theorem 2.1 hold if the predictor Π is such that
1. Π outputs the right label for each point independently with probability 1−λ and otherwise outputs a random label for λ ≤ O(α/ζ),
2. Π outputs the right label for each point independently with probability 1−λ and otherwise outputs an adversarial label for λ ≤ O(α/(kζ)).
In addition, if the predictor Π outputs a failure symbol when it fails, then for constant ζ > 0, there exists an algorithm (see supplementary material) that outputs a (1 + α)-approximation to the k-means objective with probability at least 2/3, even when Π has failure rate λ = 1 − 1/poly(k). Note that this remark (but not Theorem 2.1) assumes that each of the k clusters in the (1 + α)-approximately optimal clustering has at least n/(ζk) points. This is a natural assumption that the clusters are "roughly balanced", which often holds in practice, e.g., for Zipfian distributions.
B DELETION PREDICTOR
In this section, we present a fast and simple algorithm for k-means clustering, given access to a label predictor Π with deletion rate λ. That is, for each point, the predictor Π either outputs a label for the point consistent with an optimal k-means clustering with probability 1 − λ, or outputs nothing at all (a failure symbol ⊥) with probability λ. Since the deletion predictor fails explicitly, we can actually achieve a (1 + α)-approximation even when λ = 1 − 1/poly(k).
Our algorithm first queries all points in the input X. Although the predictor does not output the label for each point, for each cluster C_i with a sufficiently large number of points, with high probability the predictor assigns at least ((1 − λ)/2)·|C_i| points of C_i to the correct label. We show that if |C_i| = Ω(k/α), then with high probability, the empirical center is a good estimator for the true center. That is, the k-means objective using the centroid of the points labeled i is a (1 + α)-approximation to the k-means objective using the true center of C_i. We give the full details in Algorithm 4.
To show that the empirical center is a good estimator for the true center, recall that a common approach for mean estimation is to sample roughly O(1/α²) points uniformly at random with replacement. The argument follows from observing that each sample is an unbiased estimator of the true mean, and repeating O(1/α²) times sufficiently upper bounds the variance.
Observe that the predictor can be viewed as sampling the points from each cluster without replacement. Thus, for sufficiently large cluster sizes, we actually have a huge number of samples, which intuitively should sufficiently upper bound the variance. Moreover, the empirical mean is again an unbiased estimator of the true mean. Thus, although the above analysis does not quite hold due to dependencies between the number of samples and the resulting averaging term, we show that the above intuition does hold.
Algorithm 4 Linear-time k-means algorithm with access to a label predictor Π with deletion rate λ.
Input: A point set X with labels given by a label predictor Π with deletion rate λ.
Output: A (1 + α)-approximate k-means clustering of X.
1: for each label i ∈ [k] do
2:   Let S_i be the set of points labeled i.
3:   c_i ← (1/|S_i|) · Σ_{x∈S_i} x
4: end for
5: for all points x ∈ X do
6:   if x is unlabeled then
7:     ℓ_x ← arg min_{i∈[k]} d(x, c_i)
8:     Assign label ℓ_x to x.
9:   end if
10: end for
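A direct Python transcription of Algorithm 4 is short. This is a sketch under two assumptions of the illustration: unlabeled points carry the placeholder label −1, and every label class 0, . . . , k−1 is non-empty.

```python
import numpy as np

def deletion_predictor_kmeans(X, labels, k):
    """Algorithm 4: average the labeled points per cluster, then assign each
    unlabeled point (label -1) to its closest empirical center."""
    centers = np.stack([X[labels == i].mean(axis=0) for i in range(k)])
    out = labels.copy()
    for idx in np.where(labels == -1)[0]:
        out[idx] = int(np.argmin(np.linalg.norm(centers - X[idx], axis=1)))
    return out, centers
```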
We first show that independently sampling points uniformly at random from a sufficiently large point set guarantees a (1 +α)-approximation to the objective cost. Inaba et al. (1994); Ailon et al. (2018) proved a similar statement for sampling with replacement.
It remains to justify the correctness of Algorithm 4 by arguing that with high probability, the overall k-means cost is preserved up to a (1+α)-factor by the empirical means. We also analyze the running time of Algorithm 4.
Theorem B.1. If each cluster in the optimal k-means clustering of the predictor Π has at least 3k/α points, then Algorithm 4 outputs a (1 + α)-approximation to the k-means objective with probability at least 2/3, using O(kdn) total time.
Proof. We first justify the correctness of Algorithm 4. Suppose each cluster in the optimal k-means clustering of the predictor Π has at least 3k/α points. Let C = {c_1, . . . , c_k} be the optimal centers selected by Π and let C_S = {c′_1, . . . , c′_k} be the empirical centers chosen by Algorithm 4. For each i ∈ [k], let C_i be the points of X that are assigned to c_i by the predictor Π. By Lemma A.9 with η = 3, the approximate centroid of a cluster induces a (1 + α)-approximation to the cost of the corresponding cluster, so that
cost(C_i, c′_i) ≤ (1 + α) cost(C_i, c_i),
with probability at least 1 − 1/(3k). Taking a union bound over all k clusters, we have that
Σ_{i∈[k]} cost(C_i, c′_i) ≤ Σ_{i∈[k]} (1 + α) cost(C_i, c_i),
with probability at least 2/3. Equivalently, cost(X, C_S) ≤ (1 + α) cost(X, C). To analyze the running time of Algorithm 4, observe that the estimated centroids for all labels can be computed in O(dn) time. Subsequently, assigning each unlabeled point to the closest estimated centroid uses O(kd) time per unlabeled point. Thus, the total running time is O(kdn).
C k-MEDIAN CLUSTERING
We first recall that a well-known result states that the geometric median that results from uniformly sampling a number of points from the input is a “good” approximation to the actual geometric median for the 1-median problem.
Theorem C.1. Krauthgamer (2019) Given a set P of n points in R^d, the geometric median of a sample of O((d/α²) log(d/α)) points of P provides a (1 + α)-approximation to the 1-median clustering problem with probability at least 1 − 1/poly(d).
Note that we can first apply Theorem A.12 to project all points to a space with dimension O((1/α²) log(k/α)) before applying Theorem C.1. Instead of computing the geometric median exactly, we recall the following procedure that produces a (1 + α)-approximation to the geometric median.
Theorem C.2. Cohen et al. (2016) There exists an algorithm that outputs a (1 + α)-approximation to the geometric median in O(nd log³(n/α)) time.
We give our algorithm in full in Algorithm 5.
Theorem C.3. For α ∈ (0, 1), let Π be a predictor with error rate λ = O(α⁴/(k log(k/α) log log(k/α))). If each cluster in the optimal k-median clustering of the predictor has at least n/(ζk) points, then there exists an algorithm that outputs a (1 + α)-approximation to the k-median objective with probability at least 1 − 1/poly(k), using O(nd log³ n + poly(k, log n)) total time.
Proof. Observe that Algorithm 5 samples O((1/α⁴) log²(k/α)) points for each of the clusters labeled i, with i ∈ [k]. Thus, Algorithm 5 samples O((k/α⁴) log²(k/α)) points in total. For λ = O(α⁴/(k log(k/α) log log(k/α))) with a sufficiently small constant, the expected number of incorrectly labeled points sampled by Algorithm 5 is less than 1/32.
Algorithm 5 Learning-Augmented k-median Clustering
Input: A point set X with labels given by a predictor Π with error rate λ.
Output: A (1 + α)-approximate k-median clustering of X.
1: Use a terminal embedding to project all points into a space with dimension O((1/α²) log(k/α)).
2: for i = 1 to i = k do
3:   Let ℓ_i be the most common remaining label.
4:   Sample O((1/α⁴) log²(k/α)) points with label ℓ_i.
5:   Let C′_i be a (1 + α/4)-approximation to the geometric median of the sampled points.
6: end for
7: Return C′_1, . . . , C′_k.
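As a practical stand-in for the Cohen et al. (2016) solver in step 5, a few Weiszfeld iterations already give a good approximate geometric median; the sketch below is an illustration and does not carry the stated runtime guarantee.

```python
import numpy as np

def approx_geometric_median(P, iters=100, eps=1e-8):
    """Weiszfeld iterations: repeatedly re-weight points by inverse distance
    to the current estimate, converging toward the geometric median."""
    z = P.mean(axis=0)  # initialize at the centroid
    for _ in range(iters):
        w = 1.0 / np.maximum(np.linalg.norm(P - z, axis=1), eps)
        z = (P * w[:, None]).sum(axis=0) / w.sum()
    return z
```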
Thus, by Markov's inequality, the probability that no incorrectly labeled points are sampled by Algorithm 5 is at least 3/4. Conditioned on the event that no incorrectly labeled points are sampled by Algorithm 5, by Theorem C.1 the empirical geometric median for each cluster induces a (1 + α/4)-approximation to the optimal geometric median in the projected space. Hence, the set of k empirical geometric medians induces a (1 + α/4)-approximation to the optimal k-median clustering cost in the projected space. Since the projected space is the result of a terminal embedding, the set of k empirical geometric medians for the sampled points in the projected space induces a k-median clustering cost that is a (1 + α/4)-approximation to the k-median clustering cost induced by the set of k empirical geometric medians for the sampled points in the original space. Taking the set of k empirical geometric medians for the sampled points in the original space thus induces a (1 + α/4)²-approximation to the k-median clustering cost. We take a (1 + α/4)-approximation to each of the geometric medians. Thus, for sufficiently small α, Algorithm 5 outputs a (1 + α)-approximation to the k-median clustering problem.
To embed the points into the space of dimension O((1/α²) log(k/α)), Algorithm 5 spends O(nd log n) total time. By Theorem C.2, it takes O(nd log³ n) total time to compute the approximate geometric medians.
D LOWER BOUNDS
MAX-E3-LIN-2 is the optimization problem of maximizing the number of equations satisfied by a system of linear equations over Z_2 with exactly 3 distinct variables in each equation. EK-MAX-E3-LIN-2 is the problem of MAX-E3-LIN-2 when each variable appears in exactly k equations. Fotakis et al. (2016) showed that, assuming the Exponential Time Hypothesis (ETH) (Impagliazzo & Paturi, 2001), there exists an absolute constant C_1 such that MAX k-SAT (and thus MAX k-CSP) instances with fewer than O(n^{k−1}) clauses cannot be approximated within a factor of C_1 in time 2^{O(n^{1−δ})} for any δ > 0. As a consequence, the reduction by Håstad (2001) shows that there exist absolute constants C_2, C_3 such that EK-MAX-E3-LIN-2 with k ≥ C_2 cannot be approximated within a factor of C_3 in time 2^{O(n^{1−δ})} for any δ > 0. Hence, the reduction by Chlebík & Chlebíková (2006) shows that there exists a constant C_4 such that approximating the minimum vertex cover of 4-regular graphs within a factor of C_4 cannot be done in time 2^{O(n^{1−δ})} for any δ > 0. Thus, the reduction by Lee et al. (2017) shows that there exists a constant C_5 such that approximating k-means within a factor of C_5 cannot be done in time 2^{O(n^{1−δ})} for any δ > 0, assuming ETH. Namely, the reduction of Lee et al. (2017) shows that an algorithm that provides a C_5-approximation to the optimal k-means clustering can be used to compute a C_4-approximation to the minimum vertex cover.
Theorem D.1. If ETH is true, then there does not exist an algorithm A that takes a set S of n^{1−δ}/log n vertices and finds a C_4-approximation to the minimum vertex cover in time 2^{O(n^{1−δ})}.
Summary Of The Paper
This paper considers the problem of k-means clustering with the aid of a predictor which supplies a proxy to the optimal clustering subject to some possible errors. The motivation for this setting is the inherent computational issues with solving the vanilla k-means clustering problem. For this model, the authors propose and analyze an efficient algorithm whose approximation factor scales gracefully with the predictor error guarantees.
Review
Strengths:
The problem is interesting. Given the fact that hard clustering is generally NP-hard in the worst case, it makes sense to try to find scenarios where this discouraging fact can be bypassed, e.g., by introducing reasonable side information. As the authors mention, perhaps the first attempt at introducing such side information is the SSAC framework. This paper introduces another possible approach.
The paper is well-written and motivated.
The numerical part is thorough.
Weaknesses:
Similarly to the SSAC framework, it is unclear to me whether such predictors exist in practice. I appreciate that, at least based on the way things are presented, the current framework seems more "practical" than the SSAC framework. In particular, I am not completely sure what is required of the predictor; unless I am missing something trivial, we need α to be less than 1/7, which seems far from easy to obtain. How did you manage to achieve this in your experiments?
I am not sure if it is a real weakness, but I find both the algorithm and (especially) the analysis quite standard, or at least not surprising. Perhaps the authors could elaborate a bit more on the technical novelty in their proofs.
Learning-Augmented k-means Clustering
Abstract
k-means clustering is a well-studied problem due to its wide applicability. Unfortunately, there exist strong theoretical limits on the performance of any algorithm for the k-means problem on worst-case inputs. To overcome this barrier, we consider a scenario where "advice" is provided to help perform clustering. Specifically, we consider the k-means problem augmented with a predictor that, given any point, returns its cluster label in an approximately optimal clustering up to some, possibly adversarial, error. We present an algorithm whose performance improves along with the accuracy of the predictor, even though naïvely following the accurate predictor can still lead to a high clustering cost. Thus, if the predictor is sufficiently accurate, we can retrieve a close-to-optimal clustering with nearly optimal runtime, breaking known computational barriers for algorithms that do not have access to such advice. We evaluate our algorithms on real datasets and show significant improvements in the quality of clustering.
1 INTRODUCTION
Clustering is a fundamental task in data analysis that is typically one of the first methods used to understand the structure of large datasets. The most common formulation of clustering is the k-means problem where, given a set P ⊂ R^d of n points, the goal is to find a set of centers C ⊂ R^d of k points to minimize the objective
cost(P, C) = Σ_{p∈P} min_{c∈C} ‖p − c‖₂².  (1)
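Objective (1) translates directly into code; the following is a small sketch that evaluates cost(P, C) for a candidate set of centers.

```python
import numpy as np

def cost(P, C):
    """Objective (1): each point pays its squared distance to the nearest center."""
    d2 = ((P[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)  # |P| x |C| squared distances
    return d2.min(axis=1).sum()
```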
Despite decades of work, there exist strong theoretical limitations about the performance of any algorithm for the k-means problem. Finding the optimal set C is NP-hard even for the case of k = 2 (Dasgupta, 2008) and even finding an approximate solution with objective value that is within a factor 1.07 of the optimal solution is NP-hard (Cohen-Addad & S., 2019; Lee et al., 2017). Furthermore, the best-known practical polynomial time algorithms can only provably achieve a large constant factor approximation to the optimal clustering, e.g., the 50-approximation in Song & Rajasekaran (2010), or use techniques such as linear programming that do not scale, e.g., the 6.357- approximation in Ahmadian et al. (2020).
A natural approach to overcome these computational barriers is to leverage the fact that in many applications, the input is often not arbitrary and contains auxiliary information that can be used to construct a good clustering, e.g., in many applications, the input can be similar to past instances. Thus, it is reasonable to create a (possibly erroneous) predictor by using auxiliary information or through clusterings of similar datasets, which can inform the proper label of an item in our current dataset. Indeed, inspired by the developments in machine learning, many recent papers have studied algorithms augmented with predictions (Mitzenmacher & Vassilvitskii, 2020). Such algorithms utilize a predictor that, when invoked, provides an (imperfect) prediction for future inputs. The predictions are then used by the algorithm to improve performance (see references in Section 1.3).
Hence, we consider the problem of k-means clustering given additional access to a predictor that outputs advice for which points should be clustered together, by outputting a label for each point. The goal is to find k centers that minimize objective (1) and assign each point to one of these centers.
The question is then whether one can utilize such predictions to boost the accuracy and runtime of clustering of new datasets. Our results demonstrate the answer in the affirmative.
Formal learning-augmented problem definition. Given a set P ⊆ Rd of n points, the goal is to find a set of k points C (called centers) to minimize objective (1). In the learning-augmented setting, we assume we have access to a predictor Π that provides information about the label of each point consistent with a (1+α)-approximately optimal clustering C. We say that a predictor has label error rate λ ≤ α if for each label i ∈ [k] := {1, . . . , k}, Π errs on at most a λ ≤ α fraction of all points in cluster i in C, and Π errs on at most a λ ≤ α fraction of all points given label i by Π. In other words, Π has at least (1− λ) precision and recall for each label. Our predictor model subsumes both random and adversarial errors by the predictor. For example if the cluster sizes are somewhat well-balanced, then a special case of our model is when Π(p) outputs the correct label of point p ∈ P with some probability 1 − λ and otherwise outputs a random label in [k] with probability λ. The example where the predictor outputs an adversarial label instead of a random label with probability λ also falls under our model. For more detail, see Theorems 2.1 and 3.4. We also adjust our algorithm to have better performance when the errors are random rather than adversarial in the supplementary material.
1.1 MOTIVATION FOR OUR WORK
We first motivate studying k-means clustering under the learning-augmented algorithms framework.
Overcoming theoretical barriers. As stated above, no polynomial time algorithm can achieve better than a constant factor approximation to the optimal clustering. In addition, the best provable approximation guarantees by polynomial time algorithms have a large constant factor (for example the 50 approximation in Song & Rajasekaran (2010)), or use methods which do not scale (such as the linear programming based algorithm in Ahmadian et al. (2020) which gives a 6.357-approximation). Therefore, it is of interest to study whether a natural assumption can overcome these complexity barriers. In our work, we show that knowing the true labels up to some possibly adversarial noise can give us arbitrarily good clusterings, depending on the noise level, which breaks these computational barriers. Furthermore, we present an algorithm that runs in nearly linear time, rather than just polynomial time. Lastly, we introduce tools from the robust statistics literature to study k-means clustering rather than the distance-based sampling procedure that is commonly analyzed (this is the basis of kmeans++). This new toolkit and connection could have further applications in other learning-augmented clustering problems.
Practical considerations. In practice, good predictors can be learned for datasets with auxiliary information. For a concrete example, we can take any dataset that has a train/test split and use a clustering on the training dataset to help us cluster the testing portion of the dataset. Therefore, datasets do not have to be specifically curated to fit our modelling assumption, which is a requirement in other modelling formulations that leverage extra information such as the SSAC model discussed in Section 1.3. A predictor can also be created from the natural class of datasets that vary over time, such as Census data or spectral clustering for temporal graphs (graphs slowly varying over time). For this class of datasets, a clustering from an earlier time step can function as a predictor for later time steps. Lastly, we can simply use the labels given by another clustering algorithm (such as kmeans++) or heuristic as a predictor. Therefore, predictors are readily and easily available for a wide class of natural datasets.
Following the predictor alone is insufficient. Given a predictor that outputs noisy labels, it is conceivable that its output alone can give us a good clustering relative to optimal. However, this is not the case, and naïvely using the label provided by the predictor for each point can result in an arbitrarily bad solution, even when the predictor errs with low probability. For example, consider a cluster of n/2 points at the origin and a cluster of n/2 points at x = 1. Then for k = 2, choosing centers at the origin and at x = 1 induces a k-means clustering cost of zero. However, even for a predictor that errs with probability 1/n, some point will be mislabeled with constant probability, which results in a positive k-means clustering cost, and so does not provide a relative error approximation. Thus, using the labels provided by the predictor can induce an arbitrarily bad clustering, even as the label error rate of the predictor tends to zero. This subtlety makes the model rich and interesting, and requires us to create non-trivial clustering algorithms.
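The two-cluster example above is easy to verify numerically. The sketch below, an illustration only, plants n/2 points at 0 and n/2 at 1, flips each label independently with probability 1/n, and compares the cost of following the predictor to the optimal cost of zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
P = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])[:, None]
true_labels = np.concatenate([np.zeros(n // 2, int), np.ones(n // 2, int)])
noisy = np.where(rng.random(n) < 1.0 / n, 1 - true_labels, true_labels)
# Centers from the (noisy) predictor labels; the optimal cost here is exactly 0.
centers = np.stack([P[noisy == i].mean(axis=0) for i in (0, 1)])
cost = sum(((P[noisy == i] - centers[i]) ** 2).sum() for i in (0, 1))
print(cost)  # strictly positive whenever at least one label flipped
```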
Predictors with adversarial errors. Since the predictor is separate from the clustering algorithm, interference with the output of the predictor following the clustering algorithm's query can be a source of non-random noise. Thus, any scenario in which communication is performed over a noisy channel (for example, if the predictor is hosted at one server and the algorithm at another) is susceptible to such errors. Another source of adversarial failure by the predictor is when the predictor is trained on a dataset that can be generated by an adversary, such as in the context of adversarial machine learning. Moreover, our algorithms have better guarantees when the predictor does not fail adversarially (see the supplementary material).
1.2 OUR RESULTS
In this paper we study "learning-augmented" methods for efficient k-means clustering. Our contributions are both theoretical and empirical. On the theoretical side, we introduce an algorithm that provably solves the k-means problem almost optimally, given access to a predictor that outputs a label for each point p ∈ P according to a (1 + α)-approximately optimal clustering, up to some noise. Specifically, suppose we have access to a predictor Π with label error rate λ upper bounded by a parameter α. Then, Algorithm 1 outputs a set of centers C̃ in Õ(knd) time¹, such that cost(P, C̃) ≤ (1 + O(α)) · cost(P, C_opt), where C_opt is an optimal set of centers. We improve the runtime in Section 3 by introducing Algorithm 3, which has the same error guarantees but uses Õ(nd) runtime, which is nearly optimal since one needs at least nd time to read the points for dense inputs (Theorem 3.4 and Remark A.14).
To output labels for all points, Algorithm 3 requires n queries to the predictor. However, if the goal is to just output centers for each cluster, then we only require Õ(k/α) queries. This is essentially optimal; we show in Theorem 3.5 that any polynomial-time algorithm must perform approximately Ω̃(k/α) queries to output a (1 + α)-approximate solution assuming the Exponential Time Hypothesis, a well-known complexity-theoretic assumption (Impagliazzo & Paturi, 2001). Note that one could ignore the oracle entirely, but then one is limited by the constant-factor hardness for polynomial-time algorithms, which we bypass with a small number of queries.
Surprisingly, we do not require assumptions that the input is well-separated or approximation-stable (Braverman et al., 2011; Balcan et al., 2013), which are assumed in other works. Finally in the supplementary material, we also give a learning-augmented algorithm for the related problem of k-median clustering, which has less algebraic structure than that of k-means clustering. We also consider a deletion predictor, which either outputs a correct label or a failure symbol ⊥ and give a (1 + α)-approximation algorithm even when the “deletion rate” is 1− 1/poly(k). On the empirical side, we evaluate our algorithms on real and synthetic datasets. We experimentally show that good predictors can be learned for all of our varied datasets, which can aid in clustering. We also show our methodology is more robust than other heuristics such as random sampling.
1.3 RELATED WORK
Learning-augmented algorithms. Our paper adds to the growing body of work on learningaugmented algorithms. In this framework, additional “advice” from a possibly erroneous predictor is used to improve performance of classical algorithms. For example, a common predictor is a “heaviness” predictor that outputs how “important” a given input point is. It has been shown that such predictors can be learned using modern machine learning techniques or other methods on training datasets and can be successfully applied to similar testing datasets. This methodology has found applications in improving data structures (Kraska et al., 2018; Mitzenmacher, 2018), streaming algorithms (Hsu et al., 2019; Jiang et al., 2020), online algorithms (Lykouris & Vassilvtiskii, 2018; Purohit et al., 2018), graph algorithms (Dai et al., 2017), and many other domains (Mousavi et al., 2015; Wang et al., 2016; Bora et al., 2017; Sablayrolles et al., 2019; Dong et al., 2020; Sanchez et al., 2020; Eden et al., 2021). See Mitzenmacher & Vassilvitskii (2020) for an overview and applications.
Clustering with additional information. There have been numerous works that study clustering in a semi-supervised setting where extra information is given. Basu et al. (2004) gave an active learning framework of clustering with “must-link”/“cannot-link” constraints, where an algorithm is allowed
to interact with a predictor that determines if two points must or cannot belong to the same cluster. Their objective function is different than that of k-means and they do not give theoretical bounds on the quality of their solution. Balcan & Blum (2008) and Awasthi et al. (2017) studied an interactive framework for clustering, where a predictor interactively provides feedback about whether or not to split a current cluster or merge two clusters. Vikram & Dasgupta (2016) also worked with an interactive oracle but for the Bayesian hierarchical clustering problem. These works differ from ours in their assumptions since their predictors must answer different questions about partitions of the input points. In contrast, Howe (2017) used logistic regression to aid k-means clustering but did not give any theoretical guarantees.
¹The notation Õ hides logarithmic factors.
The framework closest in spirit to ours is the semi-supervised active clustering framework (SSAC) introduced by Ashtiani et al. (2016) and further studied by Kim & Ghosh (2017); Mazumdar & Saha (2017); Gamlath et al. (2018); Ailon et al. (2018); Chien et al. (2018); Huleihel et al. (2019). The goal of this framework is also to produce a (1 + α)-approximate clustering while minimizing the number of queries to a predictor that instead answers queries of the form “same-cluster(u, v)”, which returns 1 if points u, v ∈ P are in the same cluster in a particular optimal clustering and 0 otherwise. Our work differs from the SSAC framework in terms of both runtime guarantees, techniques used, and model assumptions, as detailed below.
We briefly compare to the most relevant works in the SSAC framework, which are Ailon et al. (2018) and Mazumdar & Saha (2017). First, the runtime of Ailon et al. (2018) is O(ndk⁹/α⁴) even for a perfectly accurate predictor, while the algorithm of Mazumdar & Saha (2017) uses O(nk²) queries and runtime Õ(ndk²). By comparison, we use significantly fewer queries, with near-linear runtime Õ(nd), even for an erroneous predictor. Moreover, a predictor of Mazumdar & Saha (2017) independently fails each query with probability p, so that repeating with pairs containing the same point can determine the correct label of a point, whereas our oracle will always repeatedly fail with the same query, so that repeated queries do not help.
The SSAC framework uses the predictor to perform importance sampling to obtain a sufficient number of points from each cluster whereas we use techniques from robust mean estimation, dimensionality reduction, and approximate nearest neighbor data structures. Moreover, it is unclear how the SSAC predictor can be implemented in practice to handle adversarial corruptions. One may consider simulating the SSAC predictor using information from individual points by simply checking if the labels of the two input points are the same. However, if a particular input is mislabeled, then all of the pairs containing this input can also be reported incorrectly, which violates their independent noise assumption. Finally, the noisy predictor algorithm in Ailon et al. (2018) invokes a step of recovering a hidden clique in a stochastic block model, making it prohibitively costly to implement.
Lastly, in the SSAC framework, datasets need to be specifically created to fit into their model since one requires pairwise information. In contrast, our predictor requires information about individual points, which can be learned from either a training dataset, from past similar datasets, or from another approximate or heuristic clustering and is able to handle adversarial corruptions. Thus, we obtain significantly faster algorithms while using an arguably more realistic predictor.
Approximation stability. Another approach to overcome the NP-hardness of approximation for k-means clustering is the assumption that the underlying dataset follows certain distributional properties. Introduced by Balcan et al. (2013), the notion of (c, α)-approximate stability (Agarwal et al., 2015; Awasthi et al., 2019; Balcan et al., 2020) requires that every c-approximation is α-close to the optimal solution in terms of the fraction of incorrectly clustered points. In contrast, we allow inputs so that an arbitrarily small fraction of incorrectly clustered points can induce arbitrarily bad approximations, as previously discussed, e.g., in Section 1.1.
2 LEARNING-AUGMENTED k-MEANS ALGORITHM
Preliminaries. We use [n] to denote the set {1, . . . , n}. Given the set of cluster centers C, we can partition the input points P into k clusters {C_1, . . . , C_k} according to the closest center to each point. If a point is grouped in C_i in the clustering, we refer to its label as i. Note that labels can be arbitrarily permuted as long as the labeling across the points of each cluster is consistent. It is well-known that in k-means clustering, the i-th center is given by the coordinate-wise mean of the points in C_i. Given x ∈ R^d and a set C ⊂ R^d, we define d(x, C) = min_{c∈C} ‖x − c‖_2. Note that there may be many approximately optimal clusterings but we consider a fixed one for our analysis.

Algorithm 1 Learning-augmented k-means clustering
Input: A point set X with labels given by a predictor Π with label error rate λ
Output: A (1 + O(α))-approximate k-means clustering of X
1: for i = 1 to i = k do
2:   Let Y_i be the set of points with label i.
3:   Run CRDEST for each of the d coordinates of Y_i.
4:   Let C′_i be the coordinate-wise outputs of CRDEST.
5: end for
6: Return clustering with centers C′_1, . . . , C′_k.

Algorithm 2 Coordinate-wise estimation CRDEST
Input: Points x_1, . . . , x_{2m} ∈ R, corruption level λ ≤ α
1: Randomly partition the points into two groups X_1, X_2 of size m.
2: Let I = [a, b] be the shortest interval containing m(1 − 5α) points of X_1.
3: Z ← X_2 ∩ I
4: z̄ ← (1/|Z|) · Σ_{x∈Z} x
5: Return z̄
2.1 OUR ALGORITHM
Our main result is an algorithm that outputs a clustering achieving a (1 + 20α)-approximation² to the optimal objective cost when given access to approximations of the correct labeling of the points in P. We first present a suboptimal algorithm in Algorithm 1 for intuition and then optimize the runtime in Algorithm 3, which is provided in Section 3.
The intuition for Algorithm 1 is as follows. We first address the problem of identifying an approximate center for each cluster. Let C_1^{opt}, . . . , C_k^{opt} be an optimal grouping of the points, and consider all the points labeled i by our predictor for some fixed 1 ≤ i ≤ k. Since our predictor can err, a large number of points that are not in C_i^{opt} may also be labeled i. This is especially problematic when points that are “significantly far” from cluster C_i^{opt} are given the label i, which may increase the objective function arbitrarily if we simply take the mean of the points labeled i by the predictor.

To filter out such outliers, we consider a two-step view from the robust statistics literature, e.g., Prasad et al. (2019); these two steps can respectively be interpreted as a “training” phase and a “testing” phase that removes “bad” outliers. We first randomly partition the points that are given label i into two groups, X_1 and X_2, of equal size. We then estimate the mean of C_i^{opt} using a coordinate-wise approach through Algorithm 2 (CRDEST), decomposing the total cost as the sum of the costs in each dimension.
For each coordinate, we find the smallest interval I that contains m(1 − 5α) of the points in X_1. We show that for label error rate λ ≤ α, this “training” phase removes any outliers and thus provides a rough estimate of the location of the “true” points that are labeled i. To avoid dependency issues, we then “test” X_2 on I by computing the mean of X_2 ∩ I. This yields empirical centers that are a sufficiently good approximation to the true center in each coordinate. We then repeat on the other labels. The key insight is that the error from mean estimation can be charged directly to the approximation error, due to the special structure of the k-means problem. Our main theoretical result considers predictors that err on at most a λ-fraction of all cluster labels. Note that all omitted proofs appear in the supplementary material.

Theorem 2.1. Let α ∈ (10 log n/√n, 1/7), let Π be a predictor with label error rate λ ≤ α, and let γ ≥ 1 be a sufficiently large constant. If each cluster in the (1 + α)-approximately optimal k-means clustering of the predictor has at least γηk/α points, then Algorithm 1 can be used to output a (1 + 20α)-approximation to the k-means objective with probability 1 − 1/η, using O(kdn log n) runtime.
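To make the two-phase estimator concrete, the following Python sketch implements CRDEST and the per-cluster loop of Algorithm 1 (a simplification for illustration; the function names crdest and learning_augmented_kmeans are ours, not from the paper):

import numpy as np

def crdest(coords, alpha, rng):
    # Sketch of CRDEST (Algorithm 2) for a single coordinate.
    pts = rng.permutation(coords)
    m = len(pts) // 2
    x1, x2 = np.sort(pts[:m]), pts[m:2 * m]   # "training" and "testing" halves
    keep = max(1, int(np.ceil(m * (1 - 5 * alpha))))
    # Shortest interval containing `keep` points of the sorted training half.
    widths = x1[keep - 1:] - x1[:m - keep + 1]
    i = int(np.argmin(widths))
    a, b = x1[i], x1[i + keep - 1]
    z = x2[(x2 >= a) & (x2 <= b)]             # testing half restricted to I
    return z.mean() if len(z) else x2.mean()

def learning_augmented_kmeans(X, labels, alpha, rng):
    # Sketch of Algorithm 1: one robust coordinate-wise center per label.
    return np.stack([
        np.array([crdest(X[labels == i][:, j], alpha, rng)
                  for j in range(X.shape[1])])
        for i in np.unique(labels)
    ])

The fallback to the plain mean when the interval catches no testing points is our own choice to keep the sketch total; the paper's analysis guarantees the interval is well populated when λ ≤ α.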
We improve the running time to O(nd log n+ poly(k, log n)) in Theorem 3.4 in Section 3. Our algorithms can also tolerate similar error rates when failures correspond to random labels, adversarial labels, or a special failure symbol.
² Note that we have not attempted to optimize the constant 20.
Error rate λ vs. accuracy parameter α. We emphasize that λ is the error rate of the predictor and α is only a loose upper bound on λ. It is reasonable that some algorithms can provide lossy guarantees on their outputs, which translates into the desired loose upper bound α on the accuracy of the predictor. Even if λ is not known, multiple instances of the algorithm can be run in parallel with separate, exponentially decreasing “guesses” for the value of α. We can simply return the best clustering among these runs, which provides the same theoretical guarantees as if we had set α = 1.01λ, for example. Thus α does not need to be known in advance, and it does not need to be tuned as a hyperparameter.
3 NEARLY OPTIMAL RUNTIME ALGORITHM
We now describe Algorithm 3, which is a runtime-optimized version of Algorithm 1 and whose guarantees we present in Theorem 3.4. The bottleneck for Algorithm 1 is that after selecting k empirical centers, it must still assign each of the n points to the closest empirical center. The main intuition for Algorithm 3 is that although reading all points uses O(nd) time, we do not need to spend O(dk) time per point to find its closest empirical center, provided we set up the correct data structures. In fact, as long as we assign each point to a “relatively good” center, the assigned clustering is still a “good” approximation to the optimal solution. Thus, we proceed in a similar manner as before to sample a number of input points and find the optimal k centers for the sampled points.
We use dimensionality reduction and an approximate nearest neighbor (ANN) data structure to efficiently assign each point to a “sufficiently close” center. Namely, if a point p ∈ P should be assigned to its closest empirical center C_i, then p must be assigned to some empirical center C_j such that ‖p − C_j‖_2 ≤ 2‖p − C_i‖_2. Hence, points that are not assigned to their optimal centers only incur a “small” penalty due to the ANN data structure, and so the cost of the clustering does not increase “too much” in expectation. Formally, we need the following definitions.
Theorem 3.1 (JL transform; Johnson & Lindenstrauss (1984)). Let d(·, ·) be the standard Euclidean norm. There exists a family A of linear maps φ : R^d → R^k and an absolute constant C > 0 such that for any x, y ∈ R^d,
Pr_{φ∈A}[d(φ(x), φ(y)) ∈ (1 ± α) d(x, y)] ≥ 1 − e^{−Cα²k}.

Definition 3.2 (Terminal dimension reduction). Given a set of points called terminals C ⊂ R^d, we call a map f : R^d → R^k a terminal dimension reduction with distortion D if for every terminal c ∈ C and point p ∈ R^d, we have d(p, c) ≤ d(f(p), f(c)) ≤ D · d(p, c).

Definition 3.3 (Approximate nearest neighbor search). Given a set P of n points in a metric space (X, d), a (c, r)-approximate nearest neighbor search (ANN) data structure takes any query point q ∈ X with non-empty {p ∈ P : 0 < d(p, q) ≤ r} and outputs a point in {p ∈ P : 0 < d(p, q) ≤ cr}.
To justify the guarantees of Algorithm 3, we need runtime guarantees on creating a suitable dimensionality reduction map and an ANN data structure. These are from Makarychev et al. (2019) and from Indyk & Motwani (1998); Har-Peled et al. (2012); Andoni et al. (2018), respectively, and are stated in Theorems A.12 and A.13 in the supplementary section. They ensure that each point is mapped to a “good” center. Thus, we obtain our main result describing the guarantees of Algorithm 3.

Theorem 3.4. Let α ∈ (10 log n/√n, 1/7), let Π be a predictor with label error rate λ ≤ α, and let γ ≥ 1 be a sufficiently large constant. If each cluster in the optimal k-means clustering of the predictor has at least γk log k/α points, then Algorithm 3 outputs a (1 + 20α)-approximation to the k-means objective with probability at least 3/4, using O(nd log n + poly(k, log n)) total time.
Note that if we wish to only output the k centers rather than labeling all of the input points, then the query complexity of Algorithm 3 is Õ(k/α) (see Step 1 of Algorithm 3) with high probability. We show in the supplementary material that this is nearly optimal.
Theorem 3.5. For any δ ∈ (0, 1], any algorithm that makes O(k^{1−δ}/(α log n)) queries to the predictor with label error rate α cannot output a (1 + Cα)-approximation to the optimal k-means clustering cost in time 2^{O(n^{1−δ})}, assuming the Exponential Time Hypothesis.
Algorithm 3 Fast learning-augmented algorithm for k-means clustering
Input: A point set X, a predictor Π with label error rate λ ≤ α, and a tradeoff parameter ζ
Output: A (1 + α)-approximate k-means clustering of X
1: Form S by sampling each point of X with probability (100 log k)/(α|A_x|), where A_x is the set of points with the same label as x according to Π.
2: Let C_1, . . . , C_k be the output of Algorithm 1 on S.
3: Let φ_2 be a random JL linear map with distortion 5/4, i.e., dimension O(log n).
4: Let φ_1 be a terminal dimension reduction with distortion 5/4.
5: Let φ := φ_1 ∘ φ_2 be the composition map.
6: Let A be a (2, r)-ANN data structure on the points φ(C_1), . . . , φ(C_k).
7: for x ∈ X do
8:   Let ℓ_x be the label of x from Π.
9:   ϱ ← d(x, C_{ℓ_x})
10:  Query A to find the closest center φ(C_{p_x}) to x with r = ϱ/2.
11:  if d(x, C_{p_x}) < 2 d(x, C_{ℓ_x}) then
12:    Assign label p_x to x.
13:  else
14:    Assign label ℓ_x to x.
15:  end if
16: end for
4 EXPERIMENTS
In this section we evaluate Algorithm 1 empirically on real datasets. We choose to implement Algorithm 1, as opposed to the runtime optimal Algorithm 3, for simplicity and because the goal of our experiments is to highlight the error guarantees of our methodology, which both algorithms share. Further, we will see that Algorithm 1 is already very fast compared to alternatives. Thus, we implement the simpler of the two algorithms. We primarily fix the number of clusters to be k = 10 and k = 25 throughout our experiments for all datasets. Note that our predictors can readily generalize to other values of k but we focus on these two values for clarity. All of our experiments were done on a CPU with i5 2.7 GHz dual core and 8 GB RAM. Furthermore, all our experimental results are averaged over 20 independent trials and ± one standard deviation error is shaded when applicable. We give the full details of our datasets below.
1) Oregon: Dataset of 9 graph snapshots sampled across 3 months from an internet router communication network (Leskovec et al., 2005). We use the top two eigenvectors of the normalized Laplacian matrix to obtain node embeddings in R² for each graph, which gives us 9 datasets, one per graph; each dataset has roughly n ≈ 10^4 points. This is an instance of spectral clustering (a sketch of this embedding step follows this list).
2) PHY: Dataset from KDD Cup 2004 (kdd, 2004). We take 10^4 random samples to form our dataset.
3) CIFAR10: Testing portion of CIFAR-10 (n = 10^4, dimension 3072) (Krizhevsky, 2009).
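For the Oregon graphs, node embeddings of this kind can be produced along the following lines (a sketch assuming a SciPy sparse adjacency matrix; the function name is ours):

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def spectral_embedding(adj, dim=2):
    # Embed nodes via eigenvectors of the normalized Laplacian (sketch).
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    norm_adj = d_inv_sqrt @ adj @ d_inv_sqrt
    # Top eigenvectors of D^{-1/2} A D^{-1/2} = bottom eigenvectors of L.
    _, vecs = eigsh(norm_adj, k=dim, which='LA')
    return vecs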
Baselines. We compare against the following algorithms. Additional experimental results on Lloyd’s heuristic are given in Section E.3 in the supplementary material.
1) kmeans++: We measure the performance of our algorithm against the kmeans++ seeding algorithm. Since kmeans++ is randomized, we take the average clustering cost of kmeans++ seeding over 20 independent trials. We then standardize this value to have cost 1.0 and report all other costs relative to this normalization; for example, a cost of 2.0 means that the clustering cost is twice the average kmeans++ clustering cost (see the code sketch following this list). We also use the labels of kmeans++ as the predictor in the input to Algorithm 1 (denoted “Alg + kmeans++”), which highlights the fact that one can use any heuristic or approximate clustering algorithm as a predictor.
2) Random sampling: For this algorithm, we subsample the predictor labels with probability q ranging from 1% to 50%. We then construct the k-means centers using the labels of the sampled points and measure the clustering cost using the whole dataset. We use the best value of q in our range every time to give this baseline as much power as possible. We emphasize that random sampling cannot have theoretical guarantees since the random sample can be corrupted (similarly
as in the example in Section 1.1). Thus some outlier detection steps (such as our algorithms) are required.
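The normalization in baseline 1) can be reproduced roughly as follows (a sketch; kmeans_plusplus is scikit-learn's seeding routine, and kmeans_cost evaluates objective (1)):

import numpy as np
from sklearn.cluster import kmeans_plusplus

def kmeans_cost(X, centers):
    # Objective (1): sum of squared distances to the nearest center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).sum()

def kmeanspp_normalizer(X, k, trials=20, seed=0):
    # Average kmeans++ seeding cost over independent trials (reported as 1.0).
    costs = [kmeans_cost(X, kmeans_plusplus(X, n_clusters=k,
                                            random_state=seed + t)[0])
             for t in range(trials)]
    return float(np.mean(costs))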
Predictor Description. We use the following predictors in our experiments.
1) Nearest neighbor: We use this predictor for the Oregon dataset. We find the best clustering of the node embeddings in Graph #1. In practice, this means running many steps of Lloyd’s algorithm until convergence after initial seeding by kmeans++. Our predictor takes as input a point in R2 representing a node embedding of any of the later 8 graphs and outputs the label of the closest node in Graph #1.
2) Noisy predictor. This is the main predictor for PHY. We form this predictor by first finding the best k-means solution on our dataset: again, initial seeding by kmeans++ followed by many steps of Lloyd's algorithm until convergence. We then randomly corrupt the resulting labels by changing them to a uniformly random label, independently, with error probability ranging from 0 to 1 (a sketch of this corruption step follows this list). We report the cost of clustering using only these noisy labels versus processing these labels using Algorithm 1.
3) Neural network. We use a standard neural network architecture (ResNet18) trained on the training portion of the CIFAR-10 dataset as the oracle for the testing portion which we use in our experiments. We used a pretrained model obtained from Huy (2020). Note that the neural network is predicting the class of the input image. However, the class value is highly correlated with the optimal k-means cluster group.
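The corruption step of predictor 2) can be written as (a sketch; the function name is ours):

import numpy as np

def corrupt_labels(labels, k, q, rng):
    # With probability q, replace each label by a uniformly random one in [k].
    noisy = labels.copy()
    flip = rng.random(len(labels)) < q
    noisy[flip] = rng.integers(0, k, size=int(flip.sum()))
    return noisy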
Summary of results. Our experiments show that our algorithm can leverage predictors to significantly improve the cost of k-means clustering and that good predictors can be easily tailored to the data at hand. The cost of k-means clustering reduces significantly after applying our algorithm compared to just using the predictor labels for two of our predictors. Lastly, the quality of the predictor remains high for the Oregon dataset even though the later graphs have changed and “moved away” from Graph #1.
Selecting α in Algorithm 2. In practice, the choice of α to use in our algorithm depends on the given predictor, whose properties may be unknown. Since our goal is to minimize the k-means clustering objective (1), we can simply pick the ‘best’ value: we iterate α over a small range from 0.01 to 0.15 in Algorithm 2 with a step size of 0.01 and select the clustering that yields the lowest objective cost. The range is fixed for all of our experiments. (See also the paragraph on the error rate λ versus the accuracy parameter α in Section 2.)
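In code, the sweep looks like the following (a sketch reusing the hypothetical learning_augmented_kmeans and kmeans_cost helpers from the earlier snippets):

import numpy as np

def pick_best_alpha(X, labels, rng):
    # Sweep alpha over [0.01, 0.15] and keep the lowest-cost clustering.
    best_cost, best_centers = np.inf, None
    for alpha in np.arange(0.01, 0.1501, 0.01):
        centers = learning_augmented_kmeans(X, labels, alpha, rng)
        cost = kmeans_cost(X, centers)
        if cost < best_cost:
            best_cost, best_centers = cost, centers
    return best_centers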
4.1 RESULTS
Oregon. We first compare our algorithm with Graph #1 as the predictor against various baselines, shown in Figures 1(a) and 1(b). In the k = 10 case, Figure 1(a) shows that the predictor returns a clustering better than just using the kmeans++ seeding, which is normalized to have cost 1.0. This is to be expected, since the subsequent graphs represent a similar network to Graph #1, just sampled later in time. However, the clustering improves significantly after using our algorithm on the predictor labels, as the average cost drops by 55%. We also see that using our algorithm after kmeans++ is sufficient to give a significant decrease in clustering cost. Lastly,
random sampling also gives comparable results. This can be explained because we are iterating over a large range of subsampling probabilities for random sampling.
In the k = 25 case, Figure 1(b) shows that the oracle performance degrades and is worse than the baseline in 5 of the 8 graphs. However our algorithm again improves the quality of the clustering over the oracle across all graphs. Using kmeans++ as the predictor in our algorithm also improves the cost of clustering. The performance of random sampling is also worse. For example in Graph #3 for k = 25, it performed the worst out of all the tested algorithms.
Our algorithm also remains competitive with kmeans++ seeding even if the predictor for the Oregon dataset is highly corrupted. We consider a later graph, Graph #5, and corrupt the labels of the predictor randomly with probability q ranging from 1% to 25% for the k = 10 case in Figure 1(c). While the cost of clustering using just the predictor labels can become increasingly worse, our algorithm is able to sufficiently “clean” the predictions. In addition, the cost of random sampling also gets worse as the corruptions increase, implying that it is much more sensitive to noise than our algorithm. The qualitatively similar plot for k = 25 is given in the supplementary section. Note that in spectral clustering, one may wish to get a mapping to Rd for d > 2. We envision that our results translate to those settings as well since having higher order spectral information only results in a stronger predictor. We continue the discussion on the PHY and CIFAR-10 datasets in Section E.
Comparison to Lloyd’s Heuristic. In Section E.3, we provide additional results on experiments using Lloyd’s heuristic. In summary, we give both theoretical and empirical justifications for why our algorithms are superior to blindly following a predictor and then running Lloyd’s heuristic.
ACKNOWLEDGEMENTS
Zhili Feng, David P. Woodruff, and Samson Zhou gratefully acknowledge partial support from NSF grant No. CCF-181584, Office of Naval Research (ONR) grant N00014-18-1-256, and a Simons Investigator Award. Sandeep Silwal was supported in part by an NSF Graduate Research Fellowship.
A APPENDIX
Theorem A.1 (Chernoff Bounds). Let X_1, . . . , X_n be independent random variables taking values in {0, 1}. Let X = ∑_{i=1}^{n} X_i denote their sum and let µ = E[X] denote its expected value. Then for any δ ∈ (0, 1) and t > 0,
Pr[X ≤ (1 − δ)µ] ≤ e^{−δ²µ/2}.
For any δ > 0,
Pr[X ≥ (1 + δ)µ] ≤ e^{−δ²µ/3}.
Furthermore,
Pr[|X − µ| ≥ t] ≤ e^{−t²/(4n)}.
A.1 PROOF OF THEOREM 2.1
We first prove Theorem 2.1, which shows that Algorithm 1 provides a (1 + O(α))-approximation to the optimal k-means clustering but uses suboptimal time compared to the faster algorithm we present in Section 3. All omitted proofs of lemmas appear in Section A.2.
We first show that for each coordinate, the empirical center for any (1 − α)-fraction of the input points provides a good approximation to the optimal k-means clustering cost.
Lemma A.2. Let P, Q ⊆ R be sets of points on the real line such that |P| ≥ (1 − α)n and |Q| ≤ αn. Let X = P ∪ Q, let C_P be the mean of P and C_X be the mean of X. Then cost(X, C_P) ≤ (1 + α/(1 − α)²) cost(X, C_X).
We now show that a conceptual interval I* ⊂ R with “small” length contains a significant fraction of the true points. Ultimately, we will show that the interval I computed in the “training” phase of CRDEST has smaller length than I* with high probability, and yet I also contains a significant fraction of the true points. The main purpose of I* (and eventually I) is to filter out extreme outliers, because the “testing” phase only considers points in I ∩ X_2.

Lemma A.3. For a fixed set X ⊆ R, let C be the mean of X and σ² = (1/(2|X|)) ∑_{x∈X}(x − C)² be the variance. Then the interval I* = [C − σ/√α, C + σ/√α] contains at least a (1 − 4α) fraction of the points in X.
Using Lemma A.3, we show that the interval I that is computed in the “training” phase contains a significant fraction of the true points.
Lemma A.4. Let m be sufficiently large. Then I := [a, b] contains at least a (1 − 6α) fraction of the points of X_2 and b − a ≤ 2σ/√α, with high probability, i.e., 1 − 1/poly(m).
We next show that the optimal clustering on a subset obtained by independently sampling each input point provides a rough approximation of the optimal clustering. That is, the optimal center is well-approximated by the empirical center of the sampled points.
Lemma A.5. Let S be a set of points obtained by independently sampling each point of X ⊆ R^d with probability p = 1/2. Let C be the optimal center of X and let C_S be the empirical center of S. Conditioned on |S| ≥ 1, we have E[C_S] = x̄, and there exists a constant γ such that for η ≥ 1 and |X| > ηγk/α,
E[‖C_S − x̄‖₂²] ≤ (γ/|X|²) · ∑_{x∈X} ‖x − x̄‖₂² and Pr[cost(X, C_S) > (1 + α) cost(X, C)] < 1/(ηk).
Using Lemma A.2, Lemma A.4, and Lemma A.5, we justify the correctness of the subroutine CRDEST.
Lemma A.6. Let α ∈ (10 log n/√n, 1/7). Let P, Q ⊆ R be sets of points on the real line such that |P| ≥ (1 − α)2m and |Q| ≤ 2αm, and let X = P ∪ Q. Let C be the center of P. Then CRDEST on input set X outputs a point C′ such that, with probability at least 1 − 1/(ηk), cost(P, C′) ≤ (1 + 18α)(1 + α)(1 + α/(1 − α)²) cost(P, C).
Using CRDEST as a subroutine for each coordinate, we now prove Theorem 2.1, justifying the correctness of Algorithm 1 by generalizing to all coordinates and centers and analyzing the runtime of Algorithm 1.
Proof of Theorem 2.1. Since Π has label error rate λ ≤ α, by the definition of label error rate at least a (1 − α) fraction of the points in each cluster are correctly labeled. Note that the k-means clustering cost can be decomposed into the sum of the costs induced by the centers in each dimension. Specifically, for a set C = {C_1, . . . , C_k} of optimal centers,
cost(X, C) := ∑_{x∈X} d(x, C)² = ∑_{i=1}^{k} ∑_{x∈S_i} d(x, C_i)²,
where S_i is the set of points in X that are assigned to center C_i. For a particular i ∈ [k], we have
∑_{x∈S_i} d(x, C_i)² = ∑_{x∈S_i} ∑_{j=1}^{d} d(x_j, (C_i)_j)²,
where x_j and (C_i)_j are the j-th coordinates of x and C_i, respectively.
By Lemma A.6, the cost induced by CRDEST for each dimension of each center C′_i is a (1 + O(α))-approximation of the total clustering cost for the optimal center C_i in that dimension, with probability 1 − 1/(ηk). That is,
∑_{x∈S_i} d(x_j, (C′_i)_j)² ≤ (1 + 18α)(1 + α)(1 + α/(1 − α)²) ∑_{x∈S_i} d(x_j, (C_i)_j)²
for each j ∈ [d]. Thus, taking a sum over all dimensions j ∈ [d] and union bounding over all centers i ∈ [k], we have that the total cost induced by Algorithm 1 is a (1 + 20α)-approximation to the optimal k-means clustering cost with probability at least 1− 1/η. To analyze the time complexity of Algorithm 1, first consider the subroutine CRDEST. It takes O(kdn) time to first split each of the points in each cluster and dimension into two disjoint groups. Finding the smallest interval that contains a certain number of points can be done by first sorting the points and then iterating from the smallest point to the largest point and taking the smallest interval that contains enough points. This requires O(n log n) time for each dimension and each center, which results in O(kdn log n) total time. Once each of the intervals is found, computing the approximate center then takes O(kdn) total time. Hence, the total running time of Algorithm 1 is O(kdn log n).
A.2 PROOF OF AUXILIARY LEMMAS
Lemma A.2. Let P, Q ⊆ R be sets of points on the real line such that |P| ≥ (1 − α)n and |Q| ≤ αn. Let X = P ∪ Q, let C_P be the mean of P and C_X be the mean of X. Then cost(X, C_P) ≤ (1 + α/(1 − α)²) cost(X, C_X).

Proof. Suppose without loss of generality that C_X = 0 and C_P ≤ 0, so that C_Q ≥ 0, where C_Q is the mean of Q. Then it is well known, e.g., see Inaba et al. (1994), that
cost(X, C_P) = cost(X, C_X) + |X| · |C_P − C_X|².
Hence, it suffices to show that |X| · |C_P − C_X|² ≤ (α/(1 − α)²) cost(X, C_X).
Since C_X = 0, we have |P| · C_P = −|Q| · C_Q, with |P| ≥ (1 − α)n and |Q| ≤ αn. Let |P| = (1 − ϱ)n and |Q| = ϱn for some ϱ ≤ α. Thus, C_Q = −((1 − ϱ)/ϱ) · C_P. By convexity, we then have
cost(Q, C_X) ≥ |Q| · ((1 − ϱ)²/ϱ²) · |C_P|² = (n(1 − ϱ)²/ϱ) · |C_P|² ≥ (n(1 − α)²/α) · |C_P|².
Therefore, we have
|C_P − C_X|² = |C_P|² ≤ (α/(n(1 − α)²)) cost(Q, C_X) ≤ (α/(n(1 − α)²)) cost(X, C_X).
Thus,
|X| · |C_P − C_X|² ≤ (α/(1 − α)²) cost(X, C_X),
as desired.

Lemma A.3. For a fixed set X ⊆ R, let C be the mean of X and σ² = (1/(2|X|)) ∑_{x∈X}(x − C)² be the variance. Then the interval I* = [C − σ/√α, C + σ/√α] contains at least a (1 − 4α) fraction of the points in X.
Proof. Note that any point x ∈ X \ I* satisfies |x − C|² > σ²/α. Thus, if more than a 4α fraction of the points of X were outside of I*, the total variance would be larger than σ², which is a contradiction.
For ease of presentation, we analyze λ = 1/2, and we note that the analysis extends easily to general λ. We now prove the technical lemma that we will use in the proof of Lemma A.8.
Lemma A.7. We have
∑_{j=1}^{m} \binom{m}{j}/(j · 2^m) = Θ(1/m).

Proof. Let m be sufficiently large. A Chernoff bound implies that for a sufficiently large constant C,
∑_{|j−m/2| ≥ C√m} \binom{m}{j}/2^m ≤ 1/m².
Furthermore,
∑_{j ≥ C′m} \binom{m}{j}/(j · 2^m) = O(1/m) · ∑_{j≥1} \binom{m}{j}/2^m = O(1/m),
so the upper bound on the desired relation holds. A similar analysis provides a lower bound.
Lemma A.4. Let m be sufficiently large. Then I := [a, b] contains at least a (1 − 6α) fraction of the points of X_2 and b − a ≤ 2σ/√α, with high probability, i.e., 1 − 1/poly(m).

Proof. By Lemma A.3, I* contains at least 2m(1 − 4α) of the points in X. Hence, by applying an additive Chernoff bound with t = O(√(m log m)) and for sufficiently large m, the number of points in I* ∩ X_1 is at least m(1 − 5α) with high probability. Since I is the interval of minimal length with at least m(1 − 5α) points of X_1, the length of I is at most the length of I*. Moreover, again applying Chernoff bounds, the number of points in I ∩ X_2 is at least m(1 − 6α). More formally, suppose we have a set of 2m points that we randomly partition into two sets X_1 and X_2. Consider any fixed interval J that contains at least 2cm total points for c ≥ 1 − 5α (note that there are at most O(m²) relevant intervals in total, since our points are in one dimension). Let J_1 and J_2 denote the number of points in J that are in X_1 and X_2, respectively. By a Chernoff bound, both J_1 and J_2 are at least mc(1 − α) with high probability; in particular, |J_1 − J_2| ≤ αmc with high probability. Thus, by a union bound, all intervals with at least 2cm total points satisfy the property that the number of points partitioned to X_1 and the number partitioned to X_2 differ by at most αmc, with high probability. Conditioning on this event, I must also contain m(1 − 6α) points of X_2, since it contains at least m(1 − 5α) points of X_1, as desired.
Lemma A.8. Let S be a set of points obtained by independently sampling each point of X ⊆ R^d with probability 1/2, and let C_S be the centroid of S. Let x̄ be the centroid of X. Conditioned on |S| ≥ 1, we have E[C_S] = x̄, and there exists a constant γ such that
E[‖C_S − x̄‖₂²] ≤ (γ/|X|²) · ∑_{x∈X} ‖x − x̄‖₂².

Proof. We first prove that E[C_S] = x̄. Note that by the law of iterated expectations,
E[C_S] = E_{|S|} E[C_S | |S|].
Let x_{i_1}, . . . , x_{i_{|S|}} be a random permutation of the elements in S, so that for each 1 ≤ j ≤ |S|, we have E[x_{i_j}] = x̄. Conditioning on the size of S, we can write
C_S = (x_{i_1} + · · · + x_{i_{|S|}})/|S|.
Therefore,
E[C_S | |S|] = x̄ · |S|/|S| = x̄,
and it follows that E[C_S] = x̄.
To prove that
E[‖C_S − x̄‖²] ≤ (γ/|X|²) · ∑_{x∈X} ‖x − x̄‖²,
we again condition on |S|. Suppose that |S| = j. Then
C_S − x̄ = ((x_{i_1} − x̄) + · · · + (x_{i_j} − x̄))/j.
Now let y_{i_t} = x_{i_t} − x̄ for all 1 ≤ t ≤ j. Therefore,
E_{|S|=j}[‖C_S − x̄‖²] = (1/j²) · E[‖y_{i_1} + · · · + y_{i_j}‖²] = (1/j) · E[‖y_{i_1}‖²] + ((j − 1)/j) · E[y_{i_1}^⊤ y_{i_2}].
Note that x_{i_1} is uniform over the elements of X, so it follows that
E[‖y_{i_1}‖²] = (1/|X|) ∑_{x∈X} ‖x − x̄‖².
Now if j ≥ 2, we have that
E[y_{i_1}^⊤ y_{i_2}] = (∑_{a<b} y_a^⊤ y_b) / \binom{|X|}{2} = (‖∑_i y_i‖² − ∑_i ‖y_i‖²) / (|X|(|X| − 1)) ≤ 0,
since ∑_i y_i = 0 by definition. Hence,
E_{|S|=j}[‖C_S − x̄‖²] ≤ (1/(j|X|)) ∑_{x∈X} ‖x − x̄‖².
Now the probability that |S| = j is precisely \binom{|X|}{j}/2^{|X|}, so we have
Pr[|S| ≥ 2] · E_{|S|≥2}[‖C_S − x̄‖²] ≤ (1/|X|) · (∑_{x∈X} ‖x − x̄‖²) · ∑_{j=1}^{|X|} \binom{|X|}{j}/(j · 2^{|X|}).
From Lemma A.7, we have that
∑_{j=1}^{|X|} \binom{|X|}{j}/(j · 2^{|X|}) ≤ c/|X|
for some constant c, so it follows that
Pr[|S| ≥ 2] · E_{|S|≥2}[‖C_S − x̄‖²] ≤ (c′/|X|²) · ∑_{x∈X} ‖x − x̄‖²
for some constant c′.
For j = 1, note that
E_{|S|=1}[‖C_S − x̄‖²] = (1/|X|) ∑_{x∈X} ‖x − x̄‖².
Moreover, we have Pr[|S| = 1] = |X|/2^{|X|} and Pr[|S| = 0] = 1/2^{|X|}. Thus, from the law of total expectation, we have
E[‖C_S − x̄‖²] = Pr[|S| < 2] · E_{|S|<2}[‖C_S − x̄‖²] + Pr[|S| ≥ 2] · E_{|S|≥2}[‖C_S − x̄‖²]
≤ (|X|/2^{|X|}) · (1/|X|) ∑_{x∈X} ‖x − x̄‖² + (c′/|X|²) · ∑_{x∈X} ‖x − x̄‖²
≤ (γ/|X|²) · ∑_{x∈X} ‖x − x̄‖²
for some constant γ, as desired.
Lemma A.9. Let S be a set of points obtained by independently sampling each point of X ⊆ R^d with probability p = 1/2. Let C be the optimal center of X and let C_S be the empirical center of S. Let γ ≥ 1 be the constant from Lemma A.8. Then for η ≥ 1 and |X| > ηγk/α,
Pr[cost(X, C_S) > (1 + α) cost(X, C)] < 1/(ηk).
Proof. By Lemma A.8 and Markov's inequality, we have
Pr[‖C_S − C‖₂² ≥ (ηγk/|X|²) ∑_{x∈X} ‖x − C‖₂²] ≤ 1/(ηk).
We have
∑_{x∈X} ‖x − C_S‖₂² = ∑_{x∈X} ‖x − C‖₂² + |X| · ‖C − C_S‖₂²,
so that by Lemma A.8,
∑_{x∈X} ‖x − C_S‖₂² ≤ (1 + ηγk/|X|) ∑_{x∈X} ‖x − C‖₂² = (1 + ηγk/|X|) cost(X, C),
with probability at least 1 − 1/(ηk). Hence, for |X| ≥ ηγk/α, the approximate centroid of each cluster induces a (1 + α)-approximation to the cost of the corresponding cluster.
Lemma A.5. Let S be a set of points obtained by independently sampling each point of X ⊆ R^d with probability p = 1/2. Let C be the optimal center of X and let C_S be the empirical center of S. Conditioned on |S| ≥ 1, we have E[C_S] = x̄, and there exists a constant γ such that for η ≥ 1 and |X| > ηγk/α,
E[‖C_S − x̄‖₂²] ≤ (γ/|X|²) · ∑_{x∈X} ‖x − x̄‖₂² and Pr[cost(X, C_S) > (1 + α) cost(X, C)] < 1/(ηk).

Proof. Lemma A.5 follows immediately from Lemma A.8 and Lemma A.9.

Lemma A.6. Let α ∈ (10 log n/√n, 1/7). Let P, Q ⊆ R be sets of points on the real line such that |P| ≥ (1 − α)2m and |Q| ≤ 2αm, and let X = P ∪ Q. Let C be the center of P. Then CRDEST on input set X outputs a point C′ such that, with probability at least 1 − 1/(ηk), cost(P, C′) ≤ (1 + 18α)(1 + α)(1 + α/(1 − α)²) cost(P, C).
Proof. Let α ∈ (10 log n/√n, 1/7). From Lemma A.4, we have that I ∩ X contains at least (1 − 6α)m points of P ∩ X_2 and at most 2αm points of Q, in an interval of length 2σ/√α, where
σ² = (1/(2|P|)) ∑_{p∈P} (p − C)² = (1/(2|P|)) · cost(P, C).
From Lemma A.2, we have that
cost(P, C_0) ≤ (1 + α/(1 − α)²) cost(P, C_1),
where C_0 is the center of I ∩ P ∩ X_2 and C_1 is the center of P ∩ X_2. For sufficiently large m, from Lemma A.9 we have that
cost(P, C_1) ≤ (1 + α) cost(P, C),
with probability at least 1 − 1/(ηk). Thus, it remains to show that cost(P, C′) ≤ (1 + O(α)) cost(P, C_0).
Since C_0 is the center of I ∩ P ∩ X_2 and C′ is the center of I ∩ X_2, we have
|I ∩ P ∩ X_2| C_0 + ∑_{q∈I∩Q∩X_2} q = |I ∩ X_2| C′.
Since I has length 2σ/√α, each q ∈ [C_0 − 2σ/√α, C_0 + 2σ/√α]. Because |I ∩ P ∩ X_2| ≥ (1 − 6α)m and |Q| ≤ 2αm, for sufficiently small α we have |C′ − C_0| ≤ 6√α σ. Note that
cost(P, C′) = cost(P, C_0) + |P| · |C_0 − C′|²,
so that cost(P, C′) ≤ cost(P, C_0) + |P| · 36ασ². Finally, σ² = (1/(2|P|)) · cost(P, C) and cost(P, C) ≤ cost(P, C_0) due to the optimality of C. This implies
cost(P, C′) ≤ cost(P, C_0) + |P| · 36α · (1/(2|P|)) · cost(P, C) ≤ cost(P, C_0) + 18α cost(P, C_0) = (1 + 18α) cost(P, C_0),
as desired. Putting everything together, we have
cost(P, C′) ≤ (1 + 18α)(1 + α)(1 + α/(1 − α)²) cost(P, C).
A.3 PROOF OF THEOREM 3.4
We now give the proofs of the optimal query complexity and runtime. We first require the following analogue of Lemma A.5:
Lemma A.10. Let S be a set of points obtained by independently sampling each point of X ⊆ R^d with probability p = min(1, (100 log k)/(α|S|)). Let C be the optimal center of X and let C_S be the empirical center of S. Conditioned on |S| ≥ 1, we have E[C_S] = x̄, and for |X| > γk/α,
E[‖C_S − x̄‖₂²] ≤ (γ/(p|X|²)) · ∑_{x∈X} ‖x − x̄‖₂²
for some constant γ.
Lemma A.11. For α ∈ (10 log n/√n, 1/7), let Π be a predictor with error rate λ ≤ α/2. If each cluster has at least γk log k/α points, then Algorithm 3 outputs a (1 + 20α)-approximation to the k-means objective value with probability at least 3/4.
Proof. Since S samples each point independently with probability inversely proportional to the cluster sizes given by Π, for a fixed i ∈ [k] at least (90 log k)/α points with label i are sampled, with probability at least 1 − 1/k⁴ by Chernoff bounds. Let γ_1, . . . , γ_k be the empirical means corresponding to the sampled points with labels 1, . . . , k, respectively, and let Γ_0 = {γ_1, . . . , γ_k}. Let C_1, . . . , C_k be the centers of a (1 + α)-approximate optimal solution C with corresponding clusters X_1, . . . , X_k. By Lemma A.10, we have that
E[‖C_i − γ_i‖₂²] ≤ (γ/(p|X_i|²)) · ∑_{x∈X_i} ‖x − C_i‖₂²,
where p = min(1, (100 log k)/(α|S|)). By Markov's inequality, we have that
∑_{i∈[k]} ‖C_i − γ_i‖₂² ≤ 100 ∑_{i∈[k]} (γ/(p|X_i|²)) · ∑_{x∈X_i} ‖x − C_i‖₂²
with probability at least 0.99. Similar to the proof of Lemma A.9, we use the identity
∑_{x∈X_i} ‖x − γ_i‖₂² = ∑_{x∈X_i} ‖x − C_i‖₂² + |X_i| · ‖C_i − γ_i‖₂².
Hence, we have that
cost(X, Γ_0) ≤ (1 + α) · cost(X, C),
with probability at least 0.99.
Suppose Π has error rate λ ≤ α/2 and each error chooses a label uniformly at random from the k possible labels. Then, by the definition of the error rate, at most an α/2 fraction of the points are erroneously labeled in each cluster. Each cluster in the optimal k-means clustering of the predictor Π has at least n/(ζk) points, so at least a (1 − α) fraction of the points in each cluster are correctly labeled. Thus, by the same argument as in the proof of Lemma A.6, Algorithm 1 outputs a set of centers C_1, . . . , C_k such that, for Γ = {C_1, . . . , C_k},
cost(X, Γ) ≤ (1 + 18α)(1 + α/(1 − α)²) · cost(X, Γ_0),
with sufficiently large probability.
Let E be the event that cost(X, Γ) ≤ (1 + α)(1 + 18α)(1 + α/(1 − α)²) · cost(X, C), so that Pr[E] ≥ 1 − 1/poly(k). Conditioned on E, let X′ be the subset of X that is assigned the correct label by Π, and let X″ be the subset of X assigned an incorrect label. For each point x ∈ X′ with correct label ℓ_x from Π, the closest center to x in Γ is C_{ℓ_x}, so Algorithm 3 will always label x with ℓ_x. Thus,
cost(X′, Γ) ≤ cost(X, Γ) ≤ (1 + α)(1 + 18α)(1 + α/(1 − α)²) · cost(X, C),
conditioned on E. On the other hand, if x ∈ X″ is assigned an incorrect label ℓ_x by Π, then the (2, r)-approximate nearest neighbor data structure assigns the label p_x to x, where φ(C_{p_x}) is the closest center to φ(x) in the projected space. Recall that φ is the composition map φ_1 ∘ φ_2, where φ_1 is a terminal dimension reduction with distortion 5/4 and φ_2 is a random JL linear map with distortion 5/4. Thus the distance between x and C_{p_x} is a 2-approximation of the distance between x and its closest center. Hence, by assigning all such points x to their respective centers C_{p_x}, we have d(x, C_{p_x}) ≤ 2 d(x, Γ). Since each point x ∈ X is assigned an incorrect label with probability λ ≤ α/2, the expected cost of the labels assigned to X″ is at most α cost(X, Γ). By Markov's inequality, the cost of the labels assigned to X″ is at most 10α cost(X, Γ) < 10α(1 + α) cost(X, C), with probability at least 1 − 1/5, conditioned on E. Therefore, by a union bound, the total cost is at most (1 + 20α) · cost(X, C), with probability at least 3/4.
We need the following theorems on the quality of the data structures utilized in Algorithm 3.
Theorem A.12 (Makarychev et al. (2019)). For every set C ⊂ R^d of size k, a parameter 0 < α < 1/2, and the standard Euclidean norm d(·, ·), there exists a terminal dimension reduction f : C → R^{d′} with distortion (1 + α), where d′ = O((log k)/α²). The dimension reduction can be computed in polynomial time.
Theorem A.13 (Indyk & Motwani (1998); Har-Peled et al. (2012); Andoni et al. (2018)). For α > 0, there exists a (1 + α, r)-ANN data structure over R^d equipped with the standard Euclidean norm that achieves query time O(d · (log n)/α²) and space S := O((1/α²) log(1/α) + d(n + q)), where q := (log n)/α². The runtime of building the data structure is O(S + ndq).
We now prove Theorem 3.4.
Theorem 3.4. Let α ∈ (10 log n/√n, 1/7), let Π be a predictor with label error rate λ ≤ α, and let γ ≥ 1 be a sufficiently large constant. If each cluster in the optimal k-means clustering of the predictor has at least γk log k/α points, then Algorithm 3 outputs a (1 + 20α)-approximation to the k-means objective with probability at least 3/4, using O(nd log n + poly(k, log n)) total time.

Proof. The approximation guarantee of the algorithm follows from Lemma A.11. To analyze the running time, we first note that we apply a JL matrix with dimension O(log n) to each of the n points in R^d, which uses O(nd log n) time. As a result of the JL embedding, each of the n points has dimension O(log n). Thus, by Theorem A.12, constructing the terminal embedding uses poly(k, log n) time. As a result of the terminal embedding, each of the k possible centers has dimension O(log k). Hence, by Theorem A.13, constructing the (2, r)-ANN data structure for the k possible centers uses O(k log² k) time, and each query to the data structure subsequently uses O(log² k) time. Therefore, the overall runtime is O(nd log n + poly(k, log n)).
A.4 REMARK ON TRULY-POLYNOMIAL TIME ALGORITHMS VS. PTAS/PRAS.
Remark A.14. We emphasize that the runtime of our algorithm in Theorem 2.1 is truly polynomial in all input parameters n, d, k, and 1/α (and even near-linear in the input size nd). Although there exist polynomial-time randomized approximation schemes for k-means clustering, e.g., Inaba et al. (1994); Feldman et al. (2007); Kumar et al. (2004), their runtimes all have an exponential dependency on k and 1/α, i.e., 2^{poly(k, 1/α)}. This does not suffice for many applications, since k and 1/α should be treated as input parameters rather than constants. For example, it is undesirable to pay an exponential amount of time to linearly improve the accuracy α of the algorithm. Similarly, if the number of desired clusters is k = O(log² n), then the runtime would be exponential. Thus, we believe the exponential improvement of Theorem 2.1 over existing PRAS in terms of k and 1/α is significant.
A.5 REMARK ON POSSIBLE INSTANTIATIONS OF PREDICTOR
Remark A.15. We can instantiate Theorem 2.1 with various versions of the predictor. Assume each cluster in the (1 + α)-approximately optimal k-means clustering of the predictor has size at least n/(ζk) for some tradeoff parameter ζ ∈ [1, √n/(8k log n)]. Then the clustering quality and runtime guarantees of Theorem 2.1 hold if the predictor Π is such that
1. Π outputs the right label for each point independently with probability 1 − λ and otherwise outputs a random label, for λ ≤ O(α/ζ);
2. Π outputs the right label for each point independently with probability 1 − λ and otherwise outputs an adversarial label, for λ ≤ O(α/(kζ)).
In addition, if the predictor Π outputs a failure symbol when it fails, then for constant ζ > 0 there exists an algorithm (see the supplementary material) that outputs a (1 + α)-approximation to the k-means objective with probability at least 2/3, even when Π has failure rate λ = 1 − 1/poly(k). Note that this remark (but not Theorem 2.1) assumes that each of the k clusters in the (1 + α)-approximately optimal clustering has at least n/(ζk) points. This is a natural assumption that the clusters are “roughly balanced,” which often holds in practice, e.g., for Zipfian distributions.
B DELETION PREDICTOR
In this section, we present a fast and simple algorithm for k-means clustering given access to a label predictor Π with deletion rate λ. That is, for each point, the predictor Π either outputs a label for the point consistent with an optimal k-means clustering algorithm with probability λ, or outputs nothing at all (or a failure symbol ⊥) with probability 1 − λ. Since the deletion predictor fails explicitly, we can actually achieve a (1 + α)-approximation even when λ = 1 − 1/poly(k).
Our algorithm first queries all points in the input X. Although the predictor does not output the label for every point, for each cluster C_i with a sufficiently large number of points, with high probability the predictor assigns at least (λ/2)|C_i| points of C_i the correct label. We show that if |C_i| = Ω(k/α), then with high probability the empirical center is a good estimator for the true center. That is, the k-means objective using the centroid of the points labeled i is a (1 + α)-approximation to the k-means objective using the true center of C_i. We give the full details in Algorithm 4.
To show that the empirical center is a good estimator for the true center, recall that a common approach for mean estimation is to sample roughly O(1/α²) points uniformly at random with replacement. The argument follows from observing that each sample is an unbiased estimator of the true mean, and repeating O(1/α²) times sufficiently upper bounds the variance.
Observe that the predictor can be viewed as sampling the points from each cluster without replacement. Thus, for sufficiently large cluster sizes, we actually have a huge number of samples, which intuitively should sufficiently upper bound the variance. Moreover, the empirical mean is again an unbiased estimator of the true mean. Thus, although the above analysis does not quite hold due to dependencies between the number of samples and the resulting averaging term, we show that the above intuition does hold.
Algorithm 4 Linear-time k-means algorithm with access to a label predictor Π with deletion rate λ
Input: A point set X with labels given by a label predictor Π with deletion rate λ
Output: A (1 + α)-approximate k-means clustering of X
1: for each label i ∈ [k] do
2:   Let S_i be the set of points labeled i.
3:   c_i ← (1/|S_i|) · ∑_{x∈S_i} x
4: end for
5: for all points x ∈ X do
6:   if x is unlabeled then
7:     ℓ_x ← arg min_{i∈[k]} d(x, c_i)
8:     Assign label ℓ_x to x.
9:   end if
10: end for
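A minimal Python sketch of Algorithm 4 (our own simplification; unlabeled points are encoded as label -1):

import numpy as np

def kmeans_with_deletions(X, labels):
    # Centroids of labeled points, then nearest-centroid assignment.
    ks = sorted(set(labels[labels >= 0].tolist()))
    centers = np.stack([X[labels == i].mean(axis=0) for i in ks])
    out = labels.copy()
    for i in np.where(labels < 0)[0]:
        out[i] = ks[int(np.argmin(((centers - X[i]) ** 2).sum(axis=1)))]
    return centers, out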
We first show that independently sampling points uniformly at random from a sufficiently large point set guarantees a (1 +α)-approximation to the objective cost. Inaba et al. (1994); Ailon et al. (2018) proved a similar statement for sampling with replacement.
It remains to justify the correctness of Algorithm 4 by arguing that with high probability, the overall k-means cost is preserved up to a (1+α)-factor by the empirical means. We also analyze the running time of Algorithm 4.
Theorem B.1. If each cluster in the optimal k-means clustering of the predictor Π has at least 3k/α points, then Algorithm 4 outputs a (1 + α)-approximation to the k-means objective with probability at least 2/3, using O(kdn) total time.
Proof. We first justify the correctness of Algorithm 4. Suppose each cluster in the optimal k-means clustering of the predictor Π has at least 3k/α points. Let C = {c_1, . . . , c_k} be the optimal centers selected by Π and let C_S = {c′_1, . . . , c′_k} be the empirical centers chosen by Algorithm 4. For each i ∈ [k], let C_i be the points of X that are assigned to c_i by the predictor Π. By Lemma A.9 with η = 3, the approximate centroid of a cluster induces a (1 + α)-approximation to the cost of the corresponding cluster, so that
cost(C_i, c′_i) ≤ (1 + α) cost(C_i, c_i),
with probability at least 1 − 1/(3k). Taking a union bound over all k clusters, we have that
∑_{i∈[k]} cost(C_i, c′_i) ≤ (1 + α) ∑_{i∈[k]} cost(C_i, c_i),
with probability at least 2/3; equivalently, cost(X, C_S) ≤ (1 + α) cost(X, C). To analyze the running time of Algorithm 4, observe that the estimated centroids for all labels can be computed in O(dn) time. Subsequently, assigning each unlabeled point to the closest estimated centroid uses O(kd) time per unlabeled point. Thus, the total running time is O(kdn).
C k-MEDIAN CLUSTERING
We first recall a well-known result: the geometric median of a uniform sample of points from the input is a “good” approximation to the actual geometric median for the 1-median problem.
Theorem C.1 (Krauthgamer (2019)). Given a set P of n points in R^d, the geometric median of a sample of O((d/α²) log(d/α)) points of P provides a (1 + α)-approximation to the 1-median clustering problem with probability at least 1 − 1/poly(d).
Note that we can first apply Theorem A.12 to project all points to a space of dimension O((1/α²) log(k/α)) before applying Theorem C.1. Instead of computing the geometric median exactly, we recall the following procedure that produces a (1 + α)-approximation to the geometric median.
Theorem C.2 (Cohen et al. (2016)). There exists an algorithm that outputs a (1 + α)-approximation to the geometric median in O(nd log³(n/α)) time.
We give our algorithm in full in Algorithm 5.

Algorithm 5 Learning-Augmented k-median Clustering
Input: A point set X with labels given by a predictor Π with error rate λ
Output: A (1 + α)-approximate k-median clustering of X
1: Use a terminal embedding to project all points into a space of dimension O((1/α²) log(k/α)).
2: for i = 1 to k do
3:   Let ℓ_i be the most common remaining label.
4:   Sample O((1/α⁴) log²(k/α)) points with label ℓ_i.
5:   Let C′_i be a (1 + α/4)-approximation to the geometric median of the sampled points.
6: end for
7: Return C′_1, . . . , C′_k.

Theorem C.3. For α ∈ (0, 1), let Π be a predictor with error rate λ = O(α⁴/(k log(k/α) log log(k/α))). If each cluster in the optimal k-median clustering of the predictor has at least n/(ζk) points, then there exists an algorithm that outputs a (1 + α)-approximation to the k-median objective with probability at least 1 − 1/poly(k), using O(nd log³ n + poly(k, log n)) total time.

Proof. Observe that Algorithm 5 samples O((1/α⁴) log²(k/α)) points for each of the clusters labeled i, for i ∈ [k]. Thus, Algorithm 5 samples O((k/α⁴) log²(k/α)) points in total. For λ = O(α⁴/(k log(k/α) log log(k/α))) with a sufficiently small constant, the expected number of incorrectly labeled points sampled by Algorithm 5 is less than 1/32. Thus, by Markov's inequality, the probability that no incorrectly labeled
points are sampled by Algorithm 5 is at least 3/4. Conditioned on the event that no incorrectly labeled points are sampled by Algorithm 5, by Theorem C.1 the empirical geometric median for each cluster induces a (1 + α/4)-approximation to the optimal geometric median in the projected space. Hence, the set of k empirical geometric medians induces a (1 + α/4)-approximation to the optimal k-median clustering cost in the projected space. Since the projected space is the result of a terminal embedding, the set of k empirical geometric medians for the sampled points in the projected space induces a k-median clustering cost that is a (1 + α/4)-approximation to the k-median clustering cost induced by the set of k empirical geometric medians for the sampled points in the original space. Taking the set of k empirical geometric medians for the sampled points in the original space thus induces a (1 + α/4)²-approximation to the k-median clustering cost. We take a (1 + α/4)-approximation to each of the geometric medians. Thus, for sufficiently small α, Algorithm 5 outputs a (1 + α)-approximation to the k-median clustering problem.
To embed the points into the space of dimension O((1/α²) log(k/α)), Algorithm 5 spends O(nd log n) total time. By Theorem C.2, it takes O(nd log³ n) total time to compute the approximate geometric medians.
D LOWER BOUNDS
MAX-E3-LIN-2 is the optimization problem of maximizing the number of equations satisfied by a system of linear equations over Z_2 with exactly 3 distinct variables in each equation. EK-MAX-E3-LIN-2 is the variant of MAX-E3-LIN-2 in which each variable appears in exactly k equations. Fotakis et al. (2016) showed that, assuming the Exponential Time Hypothesis (ETH) (Impagliazzo & Paturi, 2001), there exists an absolute constant C_1 such that MAX k-SAT (and thus MAX k-CSP) instances with fewer than O(n^{k−1}) clauses cannot be approximated within a factor of C_1 in time 2^{O(n^{1−δ})} for any δ > 0. As a consequence, the reduction by Håstad (2001) shows that there exist absolute constants C_2, C_3 such that EK-MAX-E3-LIN-2 with k ≥ C_2 cannot be approximated within a factor of C_3 in time 2^{O(n^{1−δ})} for any δ > 0. Hence, the reduction by Chlebík & Chlebíková (2006) shows that there exists a constant C_4 such that approximating the minimum vertex cover of 4-regular graphs within a factor of C_4 cannot be done in time 2^{O(n^{1−δ})} for any δ > 0. Thus, the reduction by Lee et al. (2017) shows that there exists a constant C_5 such that approximating k-means within a factor of C_5 cannot be done in time 2^{O(n^{1−δ})} for any δ > 0, assuming ETH. Namely, the reduction of Lee et al. (2017) shows that an algorithm that provides a C_5-approximation to the optimal k-means clustering can be used to compute a C_4-approximation to the minimum vertex cover.
Theorem D.1. If ETH is true, then there does not exist an algorithm A that takes a set S of n^{1−δ}/log n vertices and finds a C_4-approximation to the minimum vertex cover

1. What is the main contribution of the paper on the k-means problem in a learning-augmented setting?
2. What are the strengths of the proposed algorithm, particularly in its ability to leverage predictions and provide guarantees?
3. How does the reviewer assess the limitations of the work, especially regarding the algorithm's sensitivity to prediction accuracy?
4. What are some concerns regarding proof details and experiment results, such as the application of Chernoff bound, the correlation between Alg+Predictor and Alg+k-means++, and the choice of baseline in experiments?
5. How does the reviewer view the overall significance and novelty of the paper in the context of previous works on learning-augmented algorithms?

Summary Of The Paper
The paper studies the k-means problem in a learning-augmented setting. Recall that in k-means, a point set P ⊂ R^d is given and the goal is to find a set C of k centers such that cost(P, C) = ∑_{p∈P} min_{c∈C} ‖p − c‖₂² is minimized. In addition to the input point set, a predicted solution, which is an approximately optimal clustering of P, is also provided in the proposed learning-augmented setting. The algorithm can access this solution by querying the predictor for the cluster that a point x belongs to.
The main result is an algorithm that can leverage the predictions. For α ∈ (10 log n/√n, 1/7), given access to a predictor with label error rate λ ≤ α, and letting γ ≥ 1 be a sufficiently large constant, if the predicted solution is such that every cluster has at least γk log k/α points, then the algorithm outputs a (1 + 20α)-approximate solution with probability at least 3/4, using O(nd log n + poly(k, log n)) total time. Furthermore, the algorithm only uses Õ(k/α) queries to the predictor for outputting the centers. The authors claim that, for any δ ∈ (0, 1], any algorithm that makes O(k^{1−δ}/(α log n)) queries to the predictor with label error rate α cannot output a (1 + Cα)-approximate solution for the k-means problem in 2^{O(n^{1−δ})} time, assuming ETH. The authors also experimented with their algorithm on three datasets, along with three different predictors. The experiment results show that their algorithm significantly improves performance when used with the predictor or with kmeans++, and that its accuracy is stable against corrupted predictors.
Technically, since it is guaranteed that the predicted solution is a (1 + α)-approximately optimal solution with some of the labels corrupted, Algorithm 1 aims to reconstruct the center of each cluster X in the (1 + α)-approximately optimal solution, i.e., to find a point c such that cost(X, c) ≤ (1 + O(α)) cost(X, C_X). Notice that the d dimensions are independent of each other in the k-means objective, hence Algorithm 1 can estimate each coordinate of c independently, which is implemented by Algorithm 2. Algorithm 2 then (randomly) divides the input (1-D) data points into two parts; one part is used for the computation of an interval I, an estimate of the value range of the uncorrupted data points. The points outside I are then filtered out, and the mean of the remaining coordinates is returned as the final estimate.
Review
I'd like to point out that this setting/algorithm is conceptually different from (and weaker than) many previous papers on learning-augmented algorithms (cf. [Competitive Caching with Machine Learned Advice, Lykouris and Vassilvitskii, J. ACM]), in the sense that this algorithm does not have a strong guarantee when the prediction is very inaccurate (for instance, Theorem 2.1 requires \alpha to be at most 1/7). Instead, the focus of this algorithm seems to be “de-noising” an already good but slightly noisy predictor. This could be a limitation of the work, since “good” predictors themselves may not be easily obtained, and it could be that the major issue of using predictions is not to be “misled”. Nonetheless, I can still see that this “de-noising” setting makes sense, and it is relevant for dealing with adversarial datasets. But I'd still like to see how your algorithm performs when the predictor is inaccurate (e.g., when it is only a 10-approximation).
I also have other concerns about some proof details and the experiment results. The major ones are listed as follows.
1. Lemma A.4. In the proof, one of the steps is to use a Chernoff bound to argue |I \cap X_2| >= m(1 - 6\alpha). Can you elaborate on how the Chernoff bound is applied? Note that I and X_2 are not independent and both of them are random (recalling that X_2 is the complement of X_1, and I is defined w.r.t. X_1, so they both depend on the random set X_1).
2. In Figure 1(a), it seems Alg+Predictor and Alg+kmeans++ are highly correlated, but the performance of the predictor and kmeans++ are shown to be quite different in some of the graphs. Can you explain why this happens? Also, this correlation is not observed in Figure 1(b). Can you also explain the difference?
3. In your experiments, it seems the kmeans++ baseline only runs the random seeding step, without any iteration of Lloyd's heuristic. I suggest running at least one round of Lloyd's heuristic after your seeding as another baseline, since your Algorithm 1 is essentially Lloyd's if no bad point is eliminated in Algorithm 2.
Overall, this is an interesting paper, and it is somewhat the first of its kind, in the sense that it studies how machine-learned advice could help to improve the time complexity of k-clustering, instead of looking at the competitive ratio as in the commonly studied online setting. I would be glad to accept the paper if the authors could properly address my major concerns in the follow-up discussions.
Minor comments:
Lemma A.9, please specify what “these points” refer to, in the second line of the statement.
In the proof of Lemma A.11, could you restate the definition of p in the display math?
Also in the proof of A.11: should the equation below “By Markov's Inequality, …” be ‘\sum_{i \in [k]} |X_i| \cdot \|C_i - \gamma_i\|_2^2’ instead of ‘\sum_{i \in [k]} \|C_i - \gamma_i\|_2^2’?
Page 9, the paragraph about CIFAR-10. You mentioned that your algorithm “could improve upon this highly precise predictor” – I think the claim is a bit weird, because the neural network predictor is for the purpose of classification, while your task is clustering. The objectives might be related, but I don’t see a strong correlation, and it’s possible that this neural network baseline has a bad clustering cost. Hence, your improvement over the neural network one is not clearly convincing to me. I suggest to also show the clustering cost of the neural network predictor.
I don't get why your Theorem 2.1 needs O(kdn) time – it seems that Algorithm 1 + Algorithm 2 only take O(nd) time, because each iteration of the for-loop in Algorithm 1 takes O(|Y_i|) time/accesses to the predictor, and \sum_i |Y_i| = n? This also confuses me about the necessity/improvement of Theorem 3.4.
ICLR

Title
Learning-Augmented k-means Clustering

Abstract
k-means clustering is a well-studied problem due to its wide applicability. Unfortunately, there exist strong theoretical limits on the performance of any algorithm for the k-means problem on worst-case inputs. To overcome this barrier, we consider a scenario where “advice” is provided to help perform clustering. Specifically, we consider the k-means problem augmented with a predictor that, given any point, returns its cluster label in an approximately optimal clustering up to some, possibly adversarial, error. We present an algorithm whose performance improves along with the accuracy of the predictor, even though naïvely following the accurate predictor can still lead to a high clustering cost. Thus, if the predictor is sufficiently accurate, we can retrieve a close-to-optimal clustering with nearly optimal runtime, breaking known computational barriers for algorithms that do not have access to such advice. We evaluate our algorithms on real datasets and show significant improvements in the quality of clustering.
1 INTRODUCTION
Clustering is a fundamental task in data analysis that is typically one of the first methods used to understand the structure of large datasets. The most common formulation of clustering is the k-means problem: given a set P ⊂ R^d of n points, the goal is to find a set C ⊂ R^d of k centers to minimize the objective
cost(P, C) = ∑_{p∈P} min_{c∈C} ‖p − c‖₂².   (1)
Despite decades of work, there exist strong theoretical limitations about the performance of any algorithm for the k-means problem. Finding the optimal set C is NP-hard even for the case of k = 2 (Dasgupta, 2008), and even finding an approximate solution with objective value within a factor 1.07 of the optimal solution is NP-hard (Cohen-Addad & S., 2019; Lee et al., 2017). Furthermore, the best-known practical polynomial time algorithms can only provably achieve a large constant factor approximation to the optimal clustering, e.g., the 50-approximation in Song & Rajasekaran (2010), or use techniques such as linear programming that do not scale, e.g., the 6.357-approximation in Ahmadian et al. (2020).
A natural approach to overcome these computational barriers is to leverage the fact that in many applications, the input is often not arbitrary and contains auxiliary information that can be used to construct a good clustering, e.g., in many applications, the input can be similar to past instances. Thus, it is reasonable to create a (possibly erroneous) predictor by using auxiliary information or through clusterings of similar datasets, which can inform the proper label of an item in our current dataset. Indeed, inspired by the developments in machine learning, many recent papers have studied algorithms augmented with predictions (Mitzenmacher & Vassilvitskii, 2020). Such algorithms utilize a predictor that, when invoked, provides an (imperfect) prediction for future inputs. The predictions are then used by the algorithm to improve performance (see references in Section 1.3).
Hence, we consider the problem of k-means clustering given additional access to a predictor that outputs advice for which points should be clustered together, by outputting a label for each point. The goal is to find k centers that minimize objective (1) and assign each point to one of these centers.
The question is then whether one can utilize such predictions to boost the accuracy and runtime of clustering of new datasets. Our results demonstrate the answer in the affirmative.
Formal learning-augmented problem definition. Given a set P ⊆ R^d of n points, the goal is to find a set of k points C (called centers) to minimize objective (1). In the learning-augmented setting, we assume we have access to a predictor Π that provides information about the label of each point consistent with a (1 + α)-approximately optimal clustering C. We say that a predictor has label error rate λ ≤ α if for each label i ∈ [k] := {1, . . . , k}, Π errs on at most a λ fraction of all points in cluster i in C, and Π errs on at most a λ fraction of all points given label i by Π. In other words, Π has at least (1 − λ) precision and recall for each label. Our predictor model subsumes both random and adversarial errors by the predictor. For example, if the cluster sizes are somewhat well-balanced, then a special case of our model is when Π(p) outputs the correct label of point p ∈ P with some probability 1 − λ and otherwise outputs a random label in [k] with probability λ. The example where the predictor outputs an adversarial label instead of a random label with probability λ also falls under our model. For more detail, see Theorems 2.1 and 3.4. We also adjust our algorithm to have better performance when the errors are random rather than adversarial in the supplementary material.
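To make the error model concrete, the following sketch (our own illustration, not code from the paper) checks whether a labeling satisfies the label error rate condition, i.e., at least (1 − λ) precision and recall for every label:

```python
import numpy as np

def label_error_rate_ok(true_labels, predicted_labels, k, lam):
    """Return True iff, for each label i, the predictor errs on at most a
    lam fraction of the points of cluster i (recall) and of the points it
    assigns label i (precision)."""
    true_labels = np.asarray(true_labels)
    predicted_labels = np.asarray(predicted_labels)
    for i in range(k):
        in_cluster = true_labels == i          # points of cluster i
        given_label = predicted_labels == i    # points predicted as i
        agree = (in_cluster & given_label).sum()
        if in_cluster.sum() > 0 and agree / in_cluster.sum() < 1 - lam:
            return False  # recall for label i is too low
        if given_label.sum() > 0 and agree / given_label.sum() < 1 - lam:
            return False  # precision for label i is too low
    return True
```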
1.1 MOTIVATION FOR OUR WORK
We first motivate studying k-means clustering under the learning-augmented algorithms framework.
Overcoming theoretical barriers. As stated above, no polynomial time algorithm can achieve better than a constant factor approximation to the optimal clustering. In addition, the best provable approximation guarantees by polynomial time algorithms have a large constant factor (for example, the 50-approximation in Song & Rajasekaran (2010)), or use methods which do not scale (such as the linear programming based algorithm in Ahmadian et al. (2020), which gives a 6.357-approximation). Therefore, it is of interest to study whether a natural assumption can overcome these complexity barriers. In our work, we show that knowing the true labels up to some possibly adversarial noise can give us arbitrarily good clusterings, depending on the noise level, which breaks these computational barriers. Furthermore, we present an algorithm that runs in nearly linear time, rather than just polynomial time. Lastly, we introduce tools from the robust statistics literature to study k-means clustering, rather than the distance-based sampling procedure that is commonly analyzed (this is the basis of kmeans++). This new toolkit and connection could have further applications in other learning-augmented clustering problems.
Practical considerations. In practice, good predictors can be learned for datasets with auxiliary information. For a concrete example, we can take any dataset that has a train/test split and use a clustering on the training dataset to help us cluster the testing portion of the dataset. Therefore, datasets do not have to be specifically curated to fit our modelling assumption, which is a requirement in other modelling formulations that leverage extra information such as the SSAC model discussed in Section 1.3. A predictor can also be created from the natural class of datasets that vary over time, such as Census data or spectral clustering for temporal graphs (graphs slowly varying over time). For this class of datasets, a clustering from an earlier time step can function as a predictor for later time steps. Lastly, we can simply use the labels given by another clustering algorithm (such as kmeans++) or heuristic as a predictor. Therefore, predictors are readily and easily available for a wide class of natural datasets.
Following the predictor alone is insufficient. Given a predictor that outputs noisy labels, it is conceivable that its output alone can give us a good clustering relative to optimal. However, this is not the case, and naïvely using the label provided by the predictor for each point can result in an arbitrarily bad solution, even when the predictor errs with low probability. For example, consider a cluster of n/2 points at the origin and a cluster of n/2 points at x = 1. Then for k = 2, choosing centers at the origin and at x = 1 induces a k-means clustering cost of zero. However, even for a predictor that errs with probability 1/n, some point will be mislabeled with constant probability, which results in a positive k-means clustering cost, and so does not provide a relative error approximation. Thus, using the labels provided by the predictor can induce an arbitrarily bad clustering, even as the label error rate of the predictor tends to zero. This subtlety makes the model rich and interesting, and requires us to create non-trivial clustering algorithms.
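The following small numerical sketch (our own illustration of the instance above, with n/2 points at 0 and n/2 points at 1) shows how a single predictor error already yields a positive cost while the optimal cost is zero:

```python
import numpy as np

n = 1000
X = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])
labels = np.concatenate([np.zeros(n // 2, dtype=int), np.ones(n // 2, dtype=int)])
labels[0] = 1  # one predictor error: a point at the origin gets label 1

# Naive approach: centers are the means of the predicted label classes.
centers = np.array([X[labels == i].mean() for i in range(2)])
naive_cost = sum(min((x - c) ** 2 for c in centers) for x in X)
print(naive_cost)  # strictly positive, while the optimal 2-means cost is 0
```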
Predictors with adversarial errors. Since the predictor is separate from the clustering algorithm, interference with the output of the predictor following the clustering algorithm’s query can be a source of non-random noise. Thus any scenario in which communication is performed over a noisy channel (for example, if the predictor is hosted at one server and the algorithm is hosted at another server) is susceptible to such errors. Another source of adversarial failure by the predictor is when the predictor is trained on a dataset that can be generated by an adversary, such as in the context of adversarial machine learning. Moreover, our algorithms have better guarantees when the predictor does not fail adversarially, e.g., see the supplementary material).
1.2 OUR RESULTS
In this paper we study "learning-augmented" methods for efficient k-means clustering. Our contributions are both theoretical and empirical. On the theoretical side, we introduce an algorithm that provably solves the k-means problem almost optimally, given access to a predictor that outputs a label for each point p ∈ P according to a (1 + α)-approximately optimal clustering, up to some noise. Specifically, suppose we have access to a predictor Π with label error rate λ upper bounded by a parameter α. Then, Algorithm 1 outputs a set of centers C̃ in Õ(knd) time¹, such that cost(P, C̃) ≤ (1 + O(α)) · cost(P, C_opt), where C_opt is an optimal set of centers. We improve the runtime in Section 3 by introducing Algorithm 3, which has the same error guarantees but uses Õ(nd) runtime, which is nearly optimal since one needs at least nd time to read the points for dense inputs (Theorem 3.4 and Remark A.14).
To output labels for all points, Algorithm 3 requires n queries to the predictor. However, if the goal is to just output centers for each cluster, then we only require Õ(k/α) queries. This is essentially optimal; we show in Theorem 3.5 that any polynomial time algorithm must perform approximately Ω̃(k/α) queries to output a (1 + α)-approximate solution, assuming the Exponential Time Hypothesis, a well-known complexity-theoretic assumption (Impagliazzo & Paturi, 2001). Note that one could ignore the oracle entirely, but then one is limited by the constant factor hardness for polynomial time algorithms, which we bypass with a small number of queries.
Surprisingly, we do not require assumptions that the input is well-separated or approximation-stable (Braverman et al., 2011; Balcan et al., 2013), which are assumed in other works. Finally in the supplementary material, we also give a learning-augmented algorithm for the related problem of k-median clustering, which has less algebraic structure than that of k-means clustering. We also consider a deletion predictor, which either outputs a correct label or a failure symbol ⊥ and give a (1 + α)-approximation algorithm even when the “deletion rate” is 1− 1/poly(k). On the empirical side, we evaluate our algorithms on real and synthetic datasets. We experimentally show that good predictors can be learned for all of our varied datasets, which can aid in clustering. We also show our methodology is more robust than other heuristics such as random sampling.
1.3 RELATED WORK
Learning-augmented algorithms. Our paper adds to the growing body of work on learning-augmented algorithms. In this framework, additional "advice" from a possibly erroneous predictor is used to improve performance of classical algorithms. For example, a common predictor is a "heaviness" predictor that outputs how "important" a given input point is. It has been shown that such predictors can be learned using modern machine learning techniques or other methods on training datasets and can be successfully applied to similar testing datasets. This methodology has found applications in improving data structures (Kraska et al., 2018; Mitzenmacher, 2018), streaming algorithms (Hsu et al., 2019; Jiang et al., 2020), online algorithms (Lykouris & Vassilvitskii, 2018; Purohit et al., 2018), graph algorithms (Dai et al., 2017), and many other domains (Mousavi et al., 2015; Wang et al., 2016; Bora et al., 2017; Sablayrolles et al., 2019; Dong et al., 2020; Sanchez et al., 2020; Eden et al., 2021). See Mitzenmacher & Vassilvitskii (2020) for an overview and applications.
Clustering with additional information. There have been numerous works that study clustering in a semi-supervised setting where extra information is given. Basu et al. (2004) gave an active learning framework of clustering with “must-link”/“cannot-link” constraints, where an algorithm is allowed
¹The notation Õ hides logarithmic factors.
to interact with a predictor that determines if two points must or cannot belong to the same cluster. Their objective function is different than that of k-means and they do not give theoretical bounds on the quality of their solution. Balcan & Blum (2008) and Awasthi et al. (2017) studied an interactive framework for clustering, where a predictor interactively provides feedback about whether or not to split a current cluster or merge two clusters. Vikram & Dasgupta (2016) also worked with an interactive oracle but for the Bayesian hierarchical clustering problem. These works differ from ours in their assumptions since their predictors must answer different questions about partitions of the input points. In contrast, Howe (2017) used logistic regression to aid k-means clustering but do not give any theoretical guarantees.
The framework closest in spirit to ours is the semi-supervised active clustering framework (SSAC) introduced by Ashtiani et al. (2016) and further studied by Kim & Ghosh (2017); Mazumdar & Saha (2017); Gamlath et al. (2018); Ailon et al. (2018); Chien et al. (2018); Huleihel et al. (2019). The goal of this framework is also to produce a (1 + α)-approximate clustering while minimizing the number of queries to a predictor that instead answers queries of the form “same-cluster(u, v)”, which returns 1 if points u, v ∈ P are in the same cluster in a particular optimal clustering and 0 otherwise. Our work differs from the SSAC framework in terms of both runtime guarantees, techniques used, and model assumptions, as detailed below.
We briefly compare to the most relevant works in the SSAC framework, which are Ailon et al. (2018) and Mazumdar & Saha (2017). First, the runtime of Ailon et al. (2018) is O(ndk⁹/α⁴) even for a perfectly accurate predictor, while the algorithm of Mazumdar & Saha (2017) uses O(nk²) queries and runtime Õ(ndk²). By comparison, we use significantly fewer queries, with near-linear runtime Õ(nd) even for an erroneous predictor. Moreover, a predictor of Mazumdar & Saha (2017) independently fails each query with probability p, so that repeating with pairs containing the same point can determine the correct label of a point, whereas our oracle will always repeatedly fail on the same query, so that repeated queries do not help.
The SSAC framework uses the predictor to perform importance sampling to obtain a sufficient number of points from each cluster whereas we use techniques from robust mean estimation, dimensionality reduction, and approximate nearest neighbor data structures. Moreover, it is unclear how the SSAC predictor can be implemented in practice to handle adversarial corruptions. One may consider simulating the SSAC predictor using information from individual points by simply checking if the labels of the two input points are the same. However, if a particular input is mislabeled, then all of the pairs containing this input can also be reported incorrectly, which violates their independent noise assumption. Finally, the noisy predictor algorithm in Ailon et al. (2018) invokes a step of recovering a hidden clique in a stochastic block model, making it prohibitively costly to implement.
Lastly, in the SSAC framework, datasets need to be specifically created to fit into their model since one requires pairwise information. In contrast, our predictor requires information about individual points, which can be learned from either a training dataset, from past similar datasets, or from another approximate or heuristic clustering and is able to handle adversarial corruptions. Thus, we obtain significantly faster algorithms while using an arguably more realistic predictor.
Approximation stability. Another approach to overcome the NP-hardness of approximation for k-means clustering is the assumption that the underlying dataset follows certain distributional properties. Introduced by Balcan et al. (2013), the notion of (c, α)-approximate stability (Agarwal et al., 2015; Awasthi et al., 2019; Balcan et al., 2020) requires that every c-approximation is α-close to the optimal solution in terms of the fraction of incorrectly clustered points. In contrast, we allow inputs so that an arbitrarily small fraction of incorrectly clustered points can induce arbitrarily bad approximations, as previously discussed, e.g., in Section 1.1.
2 LEARNING-AUGMENTED k-MEANS ALGORITHM
Preliminaries. We use [n] to denote the set {1, . . . , n}. Given the set of cluster centers C, we can partition the input points P into k clusters {C_1, . . . , C_k} according to the closest center to each point. If a point is grouped in C_i in the clustering, we refer to its label as i. Note that labels can be arbitrarily permuted as long as the labeling across the points of each cluster is consistent. It is well-known that in k-means clustering, the i-th center is given by the coordinate-wise mean of the points in C_i. Given x ∈ R^d and a set C ⊂ R^d, we define d(x, C) = min_{c∈C} ‖x − c‖₂. Note that there may be many approximately optimal clusterings, but we consider a fixed one for our analysis.

Algorithm 1 Learning-augmented k-means clustering
Input: A point set X with labels given by a predictor Π with label error rate λ
Output: A (1 + O(α))-approximate k-means clustering of X
1: for i = 1 to k do
2:   Let Y_i be the set of points with label i.
3:   Run CRDEST for each of the d coordinates of Y_i.
4:   Let C′_i be the coordinate-wise outputs of CRDEST.
5: end for
6: Return the clustering with centers C′_1, . . . , C′_k.

Algorithm 2 Coordinate-wise estimation CRDEST
Input: Points x_1, . . . , x_{2m} ∈ R, corruption level λ ≤ α
1: Randomly partition the points into two groups X_1, X_2 of size m.
2: Let I = [a, b] be the shortest interval containing m(1 − 5α) points of X_1.
3: Z ← X_2 ∩ I
4: z ← (1/|Z|) ∑_{x∈Z} x
5: Return z
2.1 OUR ALGORITHM
Our main result is an algorithm for outputting a clustering that achieves a (1 + 20α)-approximation² to the optimal objective cost when given access to approximations of the correct labeling of the points in P. We first present a suboptimal algorithm in Algorithm 1 for intuition and then optimize the runtime in Algorithm 3, which is provided in Section 3.
The intuition for Algorithm 1 is as follows. We first address the problem of identifying an approximate center for each cluster. Let C^opt_1, . . . , C^opt_k be an optimal grouping of the points and consider all the points labeled i by our predictor, for some fixed 1 ≤ i ≤ k. Since our predictor can err, a large number of points that are not in C^opt_i may also be labeled i. This is especially problematic when points that are "significantly far" from cluster C^opt_i are given the label i, which may increase the objective function arbitrarily if we simply take the mean of the points labeled i by the predictor.
To filter out such outliers, we consider a two-step view from the robust statistics literature, e.g., Prasad et al. (2019); these two steps can respectively be interpreted as a "training" phase and a "testing" phase that removes "bad" outliers. We first randomly partition the points that are given label i into two groups, X_1 and X_2, of equal size. We then estimate the mean of C^opt_i using a coordinate-wise approach through Algorithm 2 (CRDEST), decomposing the total cost as the sum of the costs in each dimension.
For each coordinate, we find the smallest interval I that contains a (1 − 4α) fraction of the points in X_1. We show that for label error rate λ ≤ α, this "training" phase removes any outliers and thus provides a rough estimate of the location of the "true" points that are labeled i. To remove dependency issues, we then "test" X_2 on I by computing the mean of X_2 ∩ I. This allows us to get empirical centers that are a sufficiently good approximation to the coordinates of the true center for each coordinate. We then repeat on the other labels. The key insight is that the error from mean estimation can be directly charged to the approximation error due to the special structure of the k-means problem. Our main theoretical result considers predictors that err on at most a λ-fraction of all cluster labels. Note that all omitted proofs appear in the supplementary material. Theorem 2.1. Let α ∈ (10 log n/√n, 1/7), let Π be a predictor with label error rate λ ≤ α, and let γ ≥ 1 be a sufficiently large constant. If each cluster in the (1 + α)-approximately optimal k-means clustering of the predictor has at least γηk/α points, then Algorithm 1 can be used to output a (1 + 20α)-approximation to the k-means objective with probability 1 − 1/η, using O(kdn log n) runtime.
We improve the running time to O(nd log n+ poly(k, log n)) in Theorem 3.4 in Section 3. Our algorithms can also tolerate similar error rates when failures correspond to random labels, adversarial labels, or a special failure symbol.
²Note that we have not attempted to optimize the constant 20.
Error rate λ vs. accuracy parameter α. We emphasize that λ is the error rate of the predictor and α is only some loose upper bound on λ. It is reasonable that some algorithms can provide lossy guarantees on their outputs, which translates to the desired loose upper bound α on the accuracy of the predictor. Even if λ is not known, multiple instances of the algorithm can be run in parallel with separate, exponentially decreasing "guesses" for the value of α. We can simply return the best clustering among these runs, which provides the same theoretical guarantees as if we set α = 1.01λ, for example. Thus α does not need to be known in advance and it does not need to be tuned as a hyperparameter.
3 NEARLY OPTIMAL RUNTIME ALGORITHM
We now describe Algorithm 3, which is an optimized-runtime version of Algorithm 1 and whose guarantees we present in Theorem 3.4. The bottleneck for Algorithm 1 is that after selecting k empirical centers, it must still assign each of the n points to the closest empirical center. The main intuition for Algorithm 3 is that although reading all points uses O(nd) time, we do not need to spend O(dk) time per point to find its closest empirical center, if we set up the correct data structures. In fact, as long as we assign each point to a "relatively good" center, the assigned clustering is still a "good" approximation to the optimal solution. Thus we proceed in a similar manner as before to sample a number of input points and find the optimal k centers for the sampled points.

We use dimensionality reduction and an approximate nearest neighbor (ANN) data structure to efficiently assign each point to a "sufficiently close" center. Namely, if a point p ∈ P should be assigned to its closest empirical center C_i, then p must be assigned to some empirical center C_j such that ‖p − C_j‖₂ ≤ 2‖p − C_i‖₂. Hence, points that are not assigned to their optimal centers only incur a "small" penalty due to the ANN data structure, and so the cost of the clustering does not increase "too much" in expectation. Formally, we need the following definitions.
Theorem 3.1 (JL transform). Johnson & Lindenstrauss (1984) Let d(·, ·) be the standard Euclidean distance. There exists a family A of linear maps φ : R^d → R^k and an absolute constant C > 0 such that for any x, y ∈ R^d,
Pr_{φ∈A} [d(φ(x), φ(y)) ∈ (1 ± α) d(x, y)] ≥ 1 − e^{−Cα²k}.
Definition 3.2 (Terminal dimension reduction). Given a set of points called terminals C ⊂ Rd, we call a map f : Rd → Rk a terminal dimension reduction with distortion D if for every terminal c ∈ C and point p ∈ Rd, we have d(p, c) ≤ d(f(p), f(c)) ≤ D · d(p, c).
Definition 3.3 (Approximate nearest neighbor search). Given a set P of n points in a metric space (X, d), a (c, r)-approximate nearest neighbor search (ANN) data structure takes any query point q ∈ X with non-empty {p ∈ P : 0 < d(p, q) ≤ r} and outputs a point in {p ∈ P : 0 < d(p, q) ≤ cr}.
To justify the guarantees of Algorithm 3, we need runtime guarantees on creating a suitable dimensionality reduction map and an ANN data structure. These are from Makarychev et al. (2019) and Indyk & Motwani (1998); Har-Peled et al. (2012); Andoni et al. (2018), respectively, and are stated in Theorems A.12 and A.13 in the supplementary section. They ensure that each point is mapped to a "good" center. Thus, we obtain our main result describing the guarantees of Algorithm 3. Theorem 3.4. Let α ∈ (10 log n/√n, 1/7), Π be a predictor with label error rate λ ≤ α, and γ ≥ 1 be a sufficiently large constant. If each cluster in the optimal k-means clustering of the predictor has at least γk log k/α points, then Algorithm 3 outputs a (1 + 20α)-approximation to the k-means objective with probability at least 3/4, using O(nd log n + poly(k, log n)) total time.
Note that if we wish to only output the k centers rather than labeling all of the input points, then the query complexity of Algorithm 3 is Õ(k/α) (see Step 1 of Algorithm 3) with high probability. We show in the supplementary material that this is nearly optimal.
Theorem 3.5. For any δ ∈ (0, 1], any algorithm that makes O(k^{1−δ}/(α log n)) queries to the predictor with label error rate α cannot output a (1 + Cα)-approximation to the optimal k-means clustering cost in 2^{O(n^{1−δ})} time, assuming the Exponential Time Hypothesis.
Algorithm 3 Fast learning-augmented algorithm for k-means clustering
Input: A point set X, a predictor Π with label error rate λ ≤ α, and a tradeoff parameter ζ
Output: A (1 + α)-approximate k-means clustering of X
1: Form S by sampling each point of X with probability (100 log k)/(α|A_x|), where A_x is the set of points with the same label as x according to Π.
2: Let C_1, . . . , C_k be the output of Algorithm 1 on S.
3: Let φ_2 be a random JL linear map with distortion 5/4, i.e., dimension O(log n).
4: Let φ_1 be a terminal dimension reduction with distortion 5/4.
5: Let φ := φ_1 ◦ φ_2 be the composition map.
6: Let A be a (2, r)-ANN data structure on the points φ(C_1), . . . , φ(C_k).
7: for x ∈ X do
8:   Let ℓ_x be the label of x from Π.
9:   ρ ← d(x, C_{ℓ_x})
10:  Query A with r = ρ/2 to find the (approximately) closest center φ(C_{p_x}) to φ(x).
11:  if d(x, C_{p_x}) < 2 d(x, C_{ℓ_x}) then
12:    Assign label p_x to x.
13:  else
14:    Assign label ℓ_x to x.
15:  end if
16: end for
4 EXPERIMENTS
In this section we evaluate Algorithm 1 empirically on real datasets. We choose to implement Algorithm 1, as opposed to the runtime optimal Algorithm 3, for simplicity and because the goal of our experiments is to highlight the error guarantees of our methodology, which both algorithms share. Further, we will see that Algorithm 1 is already very fast compared to alternatives. Thus, we implement the simpler of the two algorithms. We primarily fix the number of clusters to be k = 10 and k = 25 throughout our experiments for all datasets. Note that our predictors can readily generalize to other values of k but we focus on these two values for clarity. All of our experiments were done on a CPU with i5 2.7 GHz dual core and 8 GB RAM. Furthermore, all our experimental results are averaged over 20 independent trials and ± one standard deviation error is shaded when applicable. We give the full details of our datasets below.
1) Oregon: Dataset of 9 graph snapshots sampled across 3 months from an internet router communication network (Leskovec et al., 2005). We then use the top two eigenvectors of the normalized Laplacian matrix to obtain node embeddings into R² for each graph, which gives us 9 datasets, one per graph. Each dataset has roughly n ∼ 10⁴ points. This is an instance of spectral clustering. 2) PHY: Dataset from KDD Cup 2004 (kdd, 2004). We take 10⁴ random samples to form our dataset. 3) CIFAR10: Testing portion of CIFAR-10 (n = 10⁴, dimension 3072) (Krizhevsky, 2009).
Baselines. We compare against the following algorithms. Additional experimental results on Lloyd’s heuristic are given in Section E.3 in the supplementary material.
1) kmeans++: We measure the performance of our algorithm in comparison to the kmeans++ seeding algorithm. Since kmeans++ is a randomized algorithm, we take the average clustering cost after running kmeans++ seeding on 20 independent trials. We then standardize this value to have cost 1.0 and report all other costs in terms of this normalization. For example, the cost 2.0 means that the clustering cost is twice that of the average kmeans++ clustering cost. We also use the labels of kmeans++ as the predictor in the input for Algorithm 1 (denoted as “Alg + kmeans++”) which serves to highlight the fact that one can use any heuristic or approximate clustering algorithm as a predictor.
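For instance, the "Alg + kmeans++" baseline can be sketched as follows (assuming scikit-learn's kmeans_plusplus seeding is available; learning_augmented_kmeans is the Algorithm 1 sketch from Section 2.1):

```python
import numpy as np
from sklearn.cluster import kmeans_plusplus

def alg_plus_kmeanspp(X, k, alpha):
    seeds, _ = kmeans_plusplus(X, n_clusters=k, random_state=0)
    # Predictor labels: nearest kmeans++ seed for every point.
    labels = np.argmin(((X[:, None, :] - seeds[None, :, :]) ** 2).sum(-1), axis=1)
    return learning_augmented_kmeans(X, labels, k, alpha)
```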
2) Random sampling: For this algorithm, we subsample the predictor labels with probability q ranging from 1% to 50%. We then construct the k-means centers using the labels of the sampled points and measure the clustering cost using the whole dataset. We use the best value of q in our range every time to give this baseline as much power as possible. We emphasize that random sampling cannot have theoretical guarantees since the random sample can be corrupted (similarly
as in the example in Section 1.1). Thus some outlier detection steps (such as our algorithms) are required.
Predictor Description. We use the following predictors in our experiments.
1) Nearest neighbor: We use this predictor for the Oregon dataset. We find the best clustering of the node embeddings in Graph #1. In practice, this means running many steps of Lloyd’s algorithm until convergence after initial seeding by kmeans++. Our predictor takes as input a point in R2 representing a node embedding of any of the later 8 graphs and outputs the label of the closest node in Graph #1.
2) Noisy predictor. This is the main predictor for PHY. We form this predictor by first finding the best k-means solution on our datasets. This again means initial seeding by kmeans++ and then many steps of Lloyd’s algorithm until convergence. We then randomly corrupt the resulting labels by changing them to a uniformly random label independently with error probability ranging from 0 to 1. We report the cost of clustering using only these noisy labels versus processing these labels using Algorithm 1.
3) Neural network. We use a standard neural network architecture (ResNet18) trained on the training portion of the CIFAR-10 dataset as the oracle for the testing portion which we use in our experiments. We used a pretrained model obtained from Huy (2020). Note that the neural network is predicting the class of the input image. However, the class value is highly correlated with the optimal k-means cluster group.
Summary of results. Our experiments show that our algorithm can leverage predictors to significantly improve the cost of k-means clustering and that good predictors can be easily tailored to the data at hand. The cost of k-means clustering reduces significantly after applying our algorithm compared to just using the predictor labels for two of our predictors. Lastly, the quality of the predictor remains high for the Oregon dataset even though the later graphs have changed and “moved away” from Graph #1.
Selecting α in Algorithm 2. In practice, the choice of α to use in our algorithm depends on the given predictor, whose properties may be unknown. Since our goal is to minimize the k-means clustering objective (1), we can simply pick the "best" value. To do so, we iterate over a small range of possible α, from 0.01 to 0.15 in Algorithm 2 with a step size of 0.01, and select the clustering that results in the lowest objective cost. The range is fixed for all of our experiments. (See also the discussion of the error rate λ versus the accuracy parameter α in Section 2.1.)
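A sketch of this selection loop (our own illustration; learning_augmented_kmeans is again the Algorithm 1 sketch from Section 2.1, and the cost function implements objective (1) directly):

```python
import numpy as np

def kmeans_cost(X, centers):
    # Objective (1): sum of squared distances to the nearest center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return float(d2.min(axis=1).sum())

def select_alpha(X, predictor_labels, k):
    best_centers, best_cost = None, float("inf")
    for alpha in np.arange(0.01, 0.151, 0.01):
        centers = learning_augmented_kmeans(X, predictor_labels, k, alpha)
        cost = kmeans_cost(X, centers)
        if cost < best_cost:
            best_centers, best_cost = centers, cost
    return best_centers
```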
4.1 RESULTS
Oregon. We first compare our algorithm with Graph #1 as the predictor against various baselines. This is shown in Figures 1(a) and 1(b). In the k = 10 case, Figure 1(a) shows that the predictor returns a clustering better than using just the kmeans++ seeding, which is normalized to have cost 1.0. This is to be expected since the subsequent graphs represent a similar network as Graph #1, just sampled later in time. However, the clustering improves significantly after using our algorithm on the predictor labels, as the average cost drops by 55%. We also see that using our algorithm after kmeans++ is sufficient to give a significant decrease in clustering cost. Lastly,
random sampling also gives comparable results. This can be explained because we are iterating over a large range of subsampling probabilities for random sampling.
In the k = 25 case, Figure 1(b) shows that the oracle performance degrades and is worse than the baseline in 5 of the 8 graphs. However our algorithm again improves the quality of the clustering over the oracle across all graphs. Using kmeans++ as the predictor in our algorithm also improves the cost of clustering. The performance of random sampling is also worse. For example in Graph #3 for k = 25, it performed the worst out of all the tested algorithms.
Our algorithm also remains competitive with kmeans++ seeding even if the predictor for the Oregon dataset is highly corrupted. We consider a later graph, Graph #5, and corrupt the labels of the predictor randomly with probability q ranging from 1% to 25% for the k = 10 case in Figure 1(c). While the cost of clustering using just the predictor labels can become increasingly worse, our algorithm is able to sufficiently “clean” the predictions. In addition, the cost of random sampling also gets worse as the corruptions increase, implying that it is much more sensitive to noise than our algorithm. The qualitatively similar plot for k = 25 is given in the supplementary section. Note that in spectral clustering, one may wish to get a mapping to Rd for d > 2. We envision that our results translate to those settings as well since having higher order spectral information only results in a stronger predictor. We continue the discussion on the PHY and CIFAR-10 datasets in Section E.
Comparison to Lloyd’s Heuristic. In Section E.3, we provide additional results on experiments using Lloyd’s heuristic. In summary, we give both theoretical and empirical justifications for why our algorithms are superior to blindly following a predictor and then running Lloyd’s heuristic.
ACKNOWLEDGEMENTS
Zhili Feng, David P. Woodruff, and Samson Zhou would like to thank partial support from NSF grant No. CCF-181584, Office of Naval Research (ONR) grant N00014-18-1-256, and a Simons Investigator Award. Sandeep Silwal was supported in part by an NSF Graduate Research Fellowship Program.
A APPENDIX
Theorem A.1 (Chernoff Bounds). Let X_1, . . . , X_n be independent random variables taking values in {0, 1}. Let X = ∑_{i=1}^n X_i denote their sum and let µ = E[X] denote the sum's expected value. Then for any δ ∈ (0, 1) and t > 0,
Pr[X ≤ (1 − δ)µ] ≤ e^{−δ²µ/2}.
For any δ > 0,
Pr[X ≥ (1 + δ)µ] ≤ e^{−δ²µ/3}.
Furthermore,
Pr[|X − µ| ≥ t] ≤ e^{−t²/(4n)}.
A.1 PROOF OF THEOREM 2.1
We first prove Theorem 2.1, which shows that Algorithm 1 provides a (1 +α)-approximation to the optimal k-means clustering, but uses suboptimal time compared to a faster algorithm we present in Section 3. All omitted proofs of lemmas appear in Section A.2.
We first show that for each coordinate, the empirical center for any (1 − α)-fraction of the input points provides a good approximation to the optimal k-means clustering cost.
Lemma A.2. Let P, Q ⊆ R be sets of points on the real line such that |P| ≥ (1 − α)n and |Q| ≤ αn. Let X = P ∪ Q, let C_P be the mean of P and let C_X be the mean of X. Then cost(X, C_P) ≤ (1 + α/(1 − α)²) cost(X, C_X).
We now show that a conceptual interval I* ⊂ R with "small" length contains a significant fraction of the true points. Ultimately, we will show that the interval I computed in the "training" phase of CRDEST has smaller length than I* with high probability, and yet I also contains a significant fraction of the true points. The main purpose of I* (and eventually I) is to filter out extreme outliers, because the "testing" phase only considers points in I ∩ X_2.
Lemma A.3. For a fixed set X ⊆ R, let C be the mean of X and let σ² = (1/(2|X|)) ∑_{x∈X} (x − C)² be the variance. Then the interval I* = [C − σ/√α, C + σ/√α] contains at least a (1 − 4α) fraction of the points in X.
Using Lemma A.3, we show that the interval I that is computed in the “training” phase contains a significant fraction of the true points.
Lemma A.4. Let m be sufficiently large. Then I := [a, b] contains at least a (1 − 6α) fraction of the points of X_2 and b − a ≤ 2σ/√α, with high probability, i.e., probability 1 − 1/poly(m).
We next show that the optimal clustering on a subset obtained by independently sampling each input point provides a rough approximation of the optimal clustering. That is, the optimal center is well-approximated by the empirical center of the sampled points.
Lemma A.5. Let S be a set of points obtained by independently sampling each point of X ⊆ R^d with probability p = 1/2. Let C be the optimal center of X and let C_S be the empirical center of S. Conditioned on |S| ≥ 1, we have E[C_S] = x̄, where x̄ is the centroid of X, and there exists a constant γ such that for η ≥ 1 and |X| > ηγk/α,
E[‖C_S − x̄‖₂²] ≤ (γ/|X|²) · (∑_{x∈X} ‖x − x̄‖₂²),
Pr[cost(X, C_S) > (1 + α) cost(X, C)] < 1/(ηk).
Using Lemma A.2, Lemma A.4, and Lemma A.5, we justify the correctness of the subroutine CRDEST.
Lemma A.6. Let α ∈ (10 log n/√n, 1/7). Let P, Q ⊆ R be sets of points on the real line such that |P| ≥ (1 − α)2m and |Q| ≤ 2αm, and let X = P ∪ Q. Let C be the center of P. Then CRDEST on input set X outputs a point C′ such that with probability at least 1 − 1/(ηk), cost(P, C′) ≤ (1 + 18α)(1 + α)(1 + α/(1 − α)²) cost(P, C).
Using CRDEST as a subroutine for each coordinate, we now prove Theorem 2.1, justifying the correctness of Algorithm 1 by generalizing to all coordinates and centers and analyzing the runtime of Algorithm 1.
Proof of Theorem 2.1. Since Π has label error rate λ ≤ α, by definition of label error rate at least a (1 − α) fraction of the points in each cluster are correctly labeled. Note that the k-means clustering cost can be decomposed into the sum of the costs induced by the centers in each dimension. Specifically, for a set C = {C_1, . . . , C_k} of optimal centers,
cost(X, C) := ∑_{x∈X} d(x, C)² = ∑_{i=1}^k ∑_{x∈S_i} d(x, C_i)²,
where S_i is the set of points in X that are assigned to center C_i. For a particular i ∈ [k], we have
∑_{x∈S_i} d(x, C_i)² = ∑_{x∈S_i} ∑_{j=1}^d d(x_j, (C_i)_j)²,
where x_j and (C_i)_j are the j-th coordinates of x and C_i, respectively.

By Lemma A.6, the cost induced by CRDEST for each dimension of each center C′_i is a (1 + α)-approximation of the total clustering cost for the optimal center C_i in that dimension, with probability 1 − 1/(ηk). That is,
∑_{x∈S_i} d(x_j, (C′_i)_j)² ≤ (1 + 18α)(1 + α)(1 + α/(1 − α)²) ∑_{x∈S_i} d(x_j, (C_i)_j)²
for each j ∈ [d]. Thus, summing over all dimensions j ∈ [d] and union bounding over all centers i ∈ [k], the total cost induced by Algorithm 1 is a (1 + 20α)-approximation to the optimal k-means clustering cost with probability at least 1 − 1/η.

To analyze the time complexity of Algorithm 1, first consider the subroutine CRDEST. It takes O(kdn) time to split the points in each cluster and dimension into two disjoint groups. Finding the smallest interval that contains a certain number of points can be done by first sorting the points and then sliding a window from the smallest point to the largest point, taking the smallest interval that contains enough points. This requires O(n log n) time for each dimension and each center, which results in O(kdn log n) total time. Once the intervals are found, computing the approximate centers takes O(kdn) total time. Hence, the total running time of Algorithm 1 is O(kdn log n).
A.2 PROOF OF AUXILIARY LEMMAS
Lemma A.2. Let P, Q ⊆ R be sets of points on the real line such that |P| ≥ (1 − α)n and |Q| ≤ αn. Let X = P ∪ Q, let C_P be the mean of P and let C_X be the mean of X. Then cost(X, C_P) ≤ (1 + α/(1 − α)²) cost(X, C_X).

Proof. Suppose without loss of generality that C_X = 0 and C_P ≤ 0, so that C_Q ≥ 0, where C_Q is the mean of Q. Then it is well-known, e.g., see Inaba et al. (1994), that
cost(X, C_P) = cost(X, C_X) + |X| · |C_P − C_X|².
Hence, it suffices to show that |X| · |C_P − C_X|² ≤ (α/(1 − α)²) cost(X, C_X).

Since C_X = 0, we have |P| · C_P = −|Q| · C_Q, with |P| ≥ (1 − α)n and |Q| ≤ αn. Let |P| = (1 − ρ)n and |Q| = ρn for some ρ ≤ α. Thus, C_Q = −((1 − ρ)/ρ) · C_P. By convexity, we thus have that
cost(Q, C_X) ≥ |Q| · ((1 − ρ)²/ρ²) · |C_P|² = n · ((1 − ρ)²/ρ) · |C_P|² ≥ n · ((1 − α)²/α) · |C_P|².
Therefore, we have
|C_P − C_X|² = |C_P|² ≤ (α/(n(1 − α)²)) cost(Q, C_X) ≤ (α/(n(1 − α)²)) cost(X, C_X).
Thus,
|X| · |C_P − C_X|² ≤ (α/(1 − α)²) cost(X, C_X),
as desired.

Lemma A.3. For a fixed set X ⊆ R, let C be the mean of X and let σ² = (1/(2|X|)) ∑_{x∈X} (x − C)² be the variance. Then the interval I* = [C − σ/√α, C + σ/√α] contains at least a (1 − 4α) fraction of the points in X.

Proof. Note that any point x ∈ X \ I* satisfies |x − C|² > σ²/α. Thus, if more than a 4α fraction of the points of X were outside of I*, then the total variance would be larger than σ², which is a contradiction.
For ease of presentation, we analyze λ = 1/2, and we note that the analysis extends easily to general λ. We now prove the technical lemma that we will use in the proof of Lemma A.8.

Lemma A.7. We have
∑_{j=1}^m \binom{m}{j}/(j · 2^m) = Θ(1/m).

Proof. Let m be sufficiently large. A Chernoff bound implies that for a sufficiently large constant C,
∑_{|j−m/2|≥C√m} \binom{m}{j}/2^m ≤ 1/m².
Furthermore,
∑_{j≥C′m} \binom{m}{j}/(j · 2^m) = O(1/m) · ∑_{j≥1} \binom{m}{j}/2^m = O(1/m),
so the upper bound on the desired relation holds. A similar analysis provides the lower bound.
Lemma A.4. Let m be sufficiently large. Then I := [a, b] contains at least a (1 − 6α) fraction of the points of X_2 and b − a ≤ 2σ/√α, with high probability, i.e., probability 1 − 1/poly(m).

Proof. By Lemma A.3, I* contains at least 2m(1 − 4α) of the points in X. Hence, by applying an additive Chernoff bound with t = O(√(m log m)), for sufficiently large m the number of points in I* ∩ X_1 is at least m(1 − 5α) with high probability. Since I is the interval of minimal length containing at least m(1 − 5α) points of X_1, the length of I is at most the length of I*. Moreover, again applying Chernoff bounds, the number of points in I ∩ X_2 is at least m(1 − 6α). More formally, suppose we have a set of 2m points that we randomly partition into two sets X_1 and X_2. Consider any fixed interval J that contains at least 2cm of the points, for c ≥ 1 − 5α (note there are at most O(m²) distinct intervals in total, since our points are in one dimension). Let J_1 and J_2 denote the numbers of points of J that fall in X_1 and X_2, respectively. By a Chernoff bound, both J_1 and J_2 are at least mc(1 − α) with high probability; in particular, |J_1 − J_2| ≤ αmc with high probability. Thus, by a union bound over the O(m²) intervals, every interval with at least 2cm total points satisfies the property that the number of its points in X_1 and the number in X_2 differ by at most αmc, with high probability. Conditioning on this event, I must also contain m(1 − 6α) points of X_2, since it contains at least m(1 − 5α) points of X_1, as desired.
Lemma A.8. Let S be a set of points obtained by independently sampling each point of X ⊆ R^d with probability 1/2, and let C_S be the centroid of S. Let x̄ be the centroid of X. Conditioned on |S| ≥ 1, we have E[C_S] = x̄, and there exists a constant γ such that
E[‖C_S − x̄‖₂²] ≤ (γ/|X|²) · (∑_{x∈X} ‖x − x̄‖₂²).

Proof. We first prove that E[C_S] = x̄. Note that by the law of iterated expectations,
E[C_S] = E_{|S|} E[C_S | |S|].
Let x_{i_1}, . . . , x_{i_{|S|}} be a random permutation of the elements in S, so that for each 1 ≤ j ≤ |S|, we have E[x_{i_j}] = x̄. Conditioning on the size of S, we can write
C_S = (x_{i_1} + · · · + x_{i_{|S|}})/|S|.
Therefore,
E[C_S | |S|] = x̄ · |S|/|S| = x̄,
and it follows that E[C_S] = x̄.

To prove that
E[‖C_S − x̄‖²] ≤ (γ/|X|²) · (∑_{x∈X} ‖x − x̄‖²),
we again condition on |S|. Suppose that |S| = j. Then
C_S − x̄ = ((x_{i_1} − x̄) + · · · + (x_{i_j} − x̄))/j.
Now let y_{i_t} = x_{i_t} − x̄ for all 1 ≤ t ≤ j. Therefore,
E_{|S|=j}[‖C_S − x̄‖²] = (1/j²) · E[‖y_{i_1} + · · · + y_{i_j}‖²] = (1/j) · E[‖y_{i_1}‖²] + ((j − 1)/j) · E[y_{i_1}ᵀ y_{i_2}].
Note that x_{i_1} is uniform over the elements of X, so it follows that
E[‖y_{i_1}‖²] = (1/|X|) ∑_{x∈X} ‖x − x̄‖².
Now if j ≥ 2, we have that
E[y_{i_1}ᵀ y_{i_2}] = (∑_{a<b} y_aᵀ y_b) / \binom{|X|}{2} = (‖∑_i y_i‖² − ∑_i ‖y_i‖²)/(|X|(|X| − 1)) ≤ 0,
since ∑_i y_i = 0 by definition. Hence,
E_{|S|=j}[‖C_S − x̄‖²] ≤ (1/(j · |X|)) ∑_{x∈X} ‖x − x̄‖².
Now the probability that |S| = j is precisely \binom{|X|}{j}/2^{|X|}, so we have
Pr[|S| ≥ 2] · E_{|S|≥2}[‖C_S − x̄‖²] ≤ (1/|X|) · (∑_{x∈X} ‖x − x̄‖²) · ∑_{j=1}^{|X|} \binom{|X|}{j}/(j · 2^{|X|}).
From Lemma A.7, we have that
∑_{j=1}^{|X|} \binom{|X|}{j}/(j · 2^{|X|}) ≤ c/|X|
for some constant c, so it follows that
Pr[|S| ≥ 2] · E_{|S|≥2}[‖C_S − x̄‖²] ≤ (c′/|X|²) · (∑_{x∈X} ‖x − x̄‖²)
for some constant c′.

For j = 1, note that
E_{|S|=1}[‖C_S − x̄‖²] = (1/|X|) ∑_{x∈X} ‖x − x̄‖².
Moreover, we have Pr[|S| = 1] = |X|/2^{|X|} and Pr[|S| = 0] = 1/2^{|X|}. Thus, by the law of total expectation,
E[‖C_S − x̄‖²] = Pr[|S| < 2] · E_{|S|<2}[‖C_S − x̄‖²] + Pr[|S| ≥ 2] · E_{|S|≥2}[‖C_S − x̄‖²]
≤ (|X|/2^{|X|}) · (1/|X|) ∑_{x∈X} ‖x − x̄‖² + (c′/|X|²) · (∑_{x∈X} ‖x − x̄‖²)
≤ (γ/|X|²) · (∑_{x∈X} ‖x − x̄‖²)
for some constant γ, as desired.
Lemma A.9. Let S be a set of points obtained by independently sampling each point of X ⊆ R^d with probability p = 1/2. Let C be the optimal center of X (i.e., its centroid) and let C_S be the empirical center of S. Let γ ≥ 1 be the constant from Lemma A.8. Then for η ≥ 1 and |X| > ηγk/α,
Pr[cost(X, C_S) > (1 + α) cost(X, C)] < 1/(ηk).

Proof. By Lemma A.8 and Markov's inequality, we have
Pr[‖C_S − C‖₂² ≥ (ηγk/|X|²) ∑_{x∈X} ‖x − C‖₂²] ≤ 1/(ηk).
We have
∑_{x∈X} ‖x − C_S‖₂² = ∑_{x∈X} ‖x − C‖₂² + |X| · ‖C − C_S‖₂²,
so that by Lemma A.8,
∑_{x∈X} ‖x − C_S‖₂² ≤ (1 + ηγk/|X|) ∑_{x∈X} ‖x − C‖₂² = (1 + ηγk/|X|) cost(X, C),
with probability at least 1 − 1/(ηk). Hence, for |X| ≥ ηγk/α, the approximate centroid of each cluster induces a (1 + α)-approximation to the cost of the corresponding cluster.
Lemma A.5. Let S be a set of points obtained by independently sampling each point of X ⊆ R^d with probability p = 1/2. Let C be the optimal center of X and let C_S be the empirical center of S. Conditioned on |S| ≥ 1, we have E[C_S] = x̄, where x̄ is the centroid of X, and there exists a constant γ such that for η ≥ 1 and |X| > ηγk/α,
E[‖C_S − x̄‖₂²] ≤ (γ/|X|²) · (∑_{x∈X} ‖x − x̄‖₂²),
Pr[cost(X, C_S) > (1 + α) cost(X, C)] < 1/(ηk).

Proof. Lemma A.5 follows immediately from Lemma A.8 and Lemma A.9.

Lemma A.6. Let α ∈ (10 log n/√n, 1/7). Let P, Q ⊆ R be sets of points on the real line such that |P| ≥ (1 − α)2m and |Q| ≤ 2αm, and let X = P ∪ Q. Let C be the center of P. Then CRDEST on input set X outputs a point C′ such that with probability at least 1 − 1/(ηk),
cost(P, C′) ≤ (1 + 18α)(1 + α)(1 + α/(1 − α)²) cost(P, C).
Proof. Let α ∈ (10 log n/√n, 1/7). By Lemma A.4, I ∩ X contains at least (1 − 6α)m points of P ∩ X_2 and at most 2αm points of Q, in an interval of length 2σ/√α, where
σ² = (1/(2|P|)) ∑_{p∈P} (p − C)² = (1/(2|P|)) · cost(P, C).
From Lemma A.2, we have that
cost(P, C_0) ≤ (1 + α/(1 − α)²) cost(P, C_1),
where C_0 is the center of I ∩ P ∩ X_2 and C_1 is the center of P ∩ X_2. For sufficiently large m, Lemma A.9 gives
cost(P, C_1) ≤ (1 + α) cost(P, C),
with probability at least 1 − 1/(ηk). Thus, it remains to show that cost(P, C′) ≤ (1 + O(α)) cost(P, C_0).

Since C_0 is the center of I ∩ P ∩ X_2 and C′ is the center of I ∩ X_2, we have
|I ∩ P ∩ X_2| · C_0 + ∑_{q∈I∩Q∩X_2} q = |I ∩ X_2| · C′.
Since I has length 2σ/√α, each such q satisfies q ∈ [C_0 − 2σ/√α, C_0 + 2σ/√α]. Because |I ∩ P ∩ X_2| ≥ (1 − 6α)m and |Q| ≤ 2αm, for sufficiently small α we have |C′ − C_0| ≤ 6√α σ. Note that
cost(P, C′) = cost(P, C_0) + |P| · |C_0 − C′|²,
so that cost(P, C′) ≤ cost(P, C_0) + |P| · 36ασ². Finally, σ² = (1/(2|P|)) · cost(P, C) and cost(P, C) ≤ cost(P, C_0) due to the optimality of C. This implies
cost(P, C′) ≤ cost(P, C_0) + |P| · 36ασ² ≤ cost(P, C_0) + |P| · 36α · (1/(2|P|)) · cost(P, C) ≤ cost(P, C_0) + 18α cost(P, C_0) = (1 + 18α) cost(P, C_0),
as desired. Putting everything together, we have
cost(P, C′) ≤ (1 + 18α)(1 + α)(1 + α/(1 − α)²) cost(P, C).
A.3 PROOF OF THEOREM 3.4
We now give the proofs for optimal query complexity and runtime. We first require the following analogue to Lemma A.5:
Lemma A.10. Let S be a set of points obtained by independently sampling each point of X ⊆ R^d with probability p = min(1, (100 log k)/(α|X|)). Let C be the optimal center of X and let C_S be the empirical center of S. Conditioned on |S| ≥ 1, we have E[C_S] = x̄, where x̄ is the centroid of X, and for |X| > γk/α,
E[‖C_S − x̄‖₂²] ≤ (γ/(p|X|²)) · (∑_{x∈X} ‖x − x̄‖₂²)
for some constant γ.
Lemma A.11. For α ∈ (10 log n/ √ n, 1/7), let Π be a predictor with error rate λ ≤ α/2. If each cluster has at least γk log k/α points, then Algorithm 3 outputs a (1 + 20α)-approximation to the k-means objective value with probability at least 3/4.
Proof. Since S samples each point independently with probability inversely proportional to the size of its label class under Π, for each fixed i ∈ [k] at least (90 log k)/α points with label i are sampled, with probability at least 1 − 1/k⁴ by Chernoff bounds. Let γ_1, . . . , γ_k be the empirical means corresponding to the sampled points with labels 1, . . . , k, respectively, and let Γ_0 = {γ_1, . . . , γ_k}. Let C_1, . . . , C_k be the centers of a (1 + α)-approximately optimal solution C with corresponding clusters X_1, . . . , X_k. By Lemma A.10, we have that
E[‖C_i − γ_i‖₂²] ≤ (γ/(p|X_i|²)) · (∑_{x∈X_i} ‖x − C_i‖₂²),
where p = min(1, (100 log k)/(α|X_i|)). By Markov's inequality, we have that
∑_{i∈[k]} ‖C_i − γ_i‖₂² ≤ 100 ∑_{i∈[k]} (γ/(p|X_i|²)) · (∑_{x∈X_i} ‖x − C_i‖₂²)
with probability at least 0.99. Similar to the proof of Lemma A.9, we use the identity
∑_{x∈X_i} ‖x − γ_i‖₂² = ∑_{x∈X_i} ‖x − C_i‖₂² + |X_i| · ‖C_i − γ_i‖₂².
Hence, we have that
cost(X, Γ_0) ≤ (1 + α) · cost(X, C),
with probability at least 0.99.
Suppose Π has error rate λ ≤ α/2 and each error chooses a label uniformly at random from the k possible labels. Then, by the definition of error rate, at most an α/2 fraction of the points are erroneously labeled in each cluster. Each cluster in the optimal k-means clustering of the predictor Π has at least n/(ζk) points, so at least a (1 − α) fraction of the points in each cluster are correctly labeled. Thus, by the same argument as in the proof of Lemma A.6, Algorithm 1 outputs a set of centers C_1, . . . , C_k such that for Γ = {C_1, . . . , C_k}, we have
cost(X, Γ) ≤ (1 + 18α)(1 + α/(1 − α)²) · cost(X, Γ_0),
with sufficiently large probability.

Let E be the event that cost(X, Γ) ≤ (1 + α)(1 + 18α)(1 + α/(1 − α)²) · cost(X, C), so that Pr[E] ≥ 1 − 1/poly(k). Conditioned on E, let X¹ be the subset of X that is assigned the correct label by Π, and let X² be the subset of X assigned an incorrect label. For each point x ∈ X¹ assigned the correct label ℓ_x by Π, the closest center to x in Γ is C_{ℓ_x}, so Algorithm 3 will always label x with ℓ_x. Thus,
cost(X¹, Γ) ≤ cost(X, Γ) ≤ (1 + α)(1 + 18α)(1 + α/(1 − α)²) · cost(X, C),
conditioned on E. On the other hand, if x ∈ X² is assigned an incorrect label ℓ_x by Π, then the (2, r)-approximate nearest neighbor data structure assigns the label p_x to x, where φ(C_{p_x}) is the closest center to φ(x) in the projected space. Recall that φ is the composition map φ_1 ◦ φ_2, where φ_1 is a terminal dimension reduction with distortion 5/4 and φ_2 is a random JL linear map with distortion 5/4. Thus, the distance between x and C_{p_x} is a 2-approximation of the distance between x and its closest center C_i. Hence, by assigning each such point x to its center C_{p_x}, we have d(x, C_{p_x}) ≤ 2 d(x, Γ). Since each point x ∈ X is assigned an incorrect label with probability λ ≤ α/2, the expected cost of the labels assigned to X² is at most α cost(X, Γ). By Markov's inequality, the cost of the labels assigned to X² is at most 10α cost(X, Γ) < 10α(1 + α) cost(X, C), with probability at least 1 − 1/5, conditioned on E. Therefore, by a union bound, the total cost is at most (1 + 20α) · cost(X, C), with probability at least 3/4.
We need the following theorems on the quality of the data structures utilized in Algorithm 3.
Theorem A.12. Makarychev et al. (2019) For every set C ⊂ R^d of size k, a parameter 0 < α < 1/2, and the standard Euclidean distance d(·, ·), there exists a terminal dimension reduction f : C → R^{d′} with distortion (1 + α), where d′ = O((log k)/α²). The dimension reduction can be computed in polynomial time.
Theorem A.13. Indyk & Motwani (1998); Har-Peled et al. (2012); Andoni et al. (2018) For α > 0, there exists a (1 + α, r)-ANN data structure over R^d equipped with the standard Euclidean norm that achieves query time O(d · (log n)/α²) and space S := O((1/α²) log(1/α) + d(n + q)), where q := (log n)/α². The runtime of building the data structure is O(S + ndq).
We now prove Theorem 3.4. Theorem 3.4. Let α ∈ (10 log n/√n, 1/7), Π be a predictor with label error rate λ ≤ α, and γ ≥ 1 be a sufficiently large constant. If each cluster in the optimal k-means clustering of the predictor has at least γk log k/α points, then Algorithm 3 outputs a (1 + 20α)-approximation to the k-means objective with probability at least 3/4, using O(nd log n + poly(k, log n)) total time.

Proof. The approximation guarantee of the algorithm follows from Lemma A.11. To analyze the running time, we first note that we apply a JL matrix with dimension O(log n) to each of the n points in R^d, which uses O(nd log n) time. As a result of the JL embedding, each of the n points has dimension O(log n). Thus, by Theorem A.12, constructing the terminal embedding uses poly(k, log n) time. As a result of the terminal embedding, each of the k possible centers has dimension O(log k). Hence, by Theorem A.13, constructing the (2, r)-ANN data structure for the k possible centers uses O(k log² k) time. Subsequently, each query to the data structure uses O(log² k) time. Therefore, the overall runtime is O(nd log n + poly(k, log n)).
A.4 REMARK ON TRULY-POLYNOMIAL TIME ALGORITHMS VS. PTAS/PRAS.
Remark A.14. We emphasize that the runtime of our algorithm in Theorem 2.1 is truly polynomial in all input parameters n, d, k and 1/α (and even near-linear in the input size nd). Although there exist polynomial-time randomized approximation schemes for k-means clustering, e.g., Inaba et al. (1994); Feldman et al. (2007); Kumar et al. (2004), their runtimes all have exponential dependency on k and 1/α, i.e., 2^{poly(k, 1/α)}. However, this does not suffice for many applications, since k and 1/α should be treated as input parameters rather than constants. For example, it is undesirable to pay an exponential amount of time to linearly improve the accuracy α of the algorithm. Similarly, if the number of desired clusters is k = O(log² n), then the runtime would be exponential. Thus we believe the exponential improvement of Theorem 2.1 over existing PRAS in terms of k and 1/α is significant.
A.5 REMARK ON POSSIBLE INSTANTIATIONS OF PREDICTOR
Remark A.15. We can instantiate Theorem 2.1 with various versions of the predictor. Assume each cluster in the (1 + α)-approximately optimal k-means clustering of the predictor has size at least n/(ζk) for some tradeoff parameter ζ ∈ [1, √n/(8k log n)]. Then the clustering quality and runtime guarantees of Theorem 2.1 hold if the predictor Π is such that
1. Π outputs the right label for each point independently with probability 1 − λ and otherwise outputs a random label, for λ ≤ O(α/ζ), or
2. Π outputs the right label for each point independently with probability 1 − λ and otherwise outputs an adversarial label, for λ ≤ O(α/(kζ)).
In addition, if the predictor Π outputs a failure symbol when it fails, then for constant ζ > 0, there exists an algorithm (see the supplementary material) that outputs a (1 + α)-approximation to the k-means objective with probability at least 2/3, even when Π has failure rate λ = 1 − 1/poly(k). Note that this remark (but not Theorem 2.1) assumes that each of the k clusters in the (1 + α)-approximately optimal clustering has at least n/(ζk) points. This is a natural assumption that the clusters are "roughly balanced", which often holds in practice, e.g., for Zipfian distributions.
B DELETION PREDICTOR
In this section, we present a fast and simple algorithm for k-means clustering, given access to a label predictor Π with deletion rate λ. That is, for each point, the predictor Π either outputs a label for the point consistent with an optimal k-means clustering with probability 1 − λ, or outputs nothing at all (i.e., a failure symbol ⊥) with probability λ. Since the deletion predictor fails explicitly, we can actually achieve a (1 + α)-approximation even when λ = 1 − 1/poly(k).
Our algorithm first queries all points in the input X. Although the predictor does not output a label for every point, for each cluster C_i with a sufficiently large number of points, with high probability the predictor assigns at least (1 − λ)|C_i|/2 points of C_i the correct label. We show that if |C_i| = Ω(k/α), then with high probability the empirical center is a good estimator for the true center. That is, the k-means objective using the centroid of the points labeled i is a (1 + α)-approximation to the k-means objective using the true center of C_i. We give the full details in Algorithm 4.
To show that the empirical center is a good estimator for the true center, recall that a common approach for mean estimation is to sample roughly O(1/α²) points uniformly at random with replacement. The argument follows from observing that each sample is an unbiased estimator of the true mean, and repeating O(1/α²) times sufficiently upper bounds the variance.
Observe that the predictor can be viewed as sampling the points from each cluster without replacement. Thus, for sufficiently large cluster sizes, we actually have a huge number of samples, which intuitively should sufficiently upper bound the variance. Moreover, the empirical mean is again an unbiased estimator of the true mean. Thus, although the above analysis does not quite hold due to dependencies between the number of samples and the resulting averaging term, we show that the above intuition does hold.
Algorithm 4 Linear-time k-means algorithm with access to a label predictor Π with deletion rate λ.
Input: A point set X with labels given by a label predictor Π with deletion rate λ.
Output: A (1 + α)-approximate k-means clustering of X.
1: for each label i ∈ [k] do
2:     Let S_i be the set of points labeled i.
3:     c_i ← (1/|S_i|) · Σ_{x∈S_i} x
4: end for
5: for all points x ∈ X do
6:     if x is unlabeled then
7:         ℓ_x ← arg min_{i∈[k]} d(x, c_i)
8:         Assign label ℓ_x to x.
9:     end if
10: end for
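For concreteness, a minimal NumPy sketch of Algorithm 4 is given below. It assumes the predictor's output is encoded as an integer label array with −1 marking points for which the predictor output nothing; this encoding and all names are our illustration, not part of the original algorithm.

```python
import numpy as np

def kmeans_with_deletion_predictor(X, labels, k):
    """Sketch of Algorithm 4. X: (n, d) points; labels[i] in {0,...,k-1},
    or -1 if the predictor output nothing for point i. Assumes every
    label occurs at least once."""
    centers = np.stack([X[labels == i].mean(axis=0) for i in range(k)])
    out = labels.copy()
    for idx in np.where(labels == -1)[0]:
        # assign each unlabeled point to its closest empirical center
        out[idx] = int(np.argmin(np.linalg.norm(centers - X[idx], axis=1)))
    return centers, out
```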
We first show that independently sampling points uniformly at random from a sufficiently large point set guarantees a (1 + α)-approximation to the objective cost. Inaba et al. (1994); Ailon et al. (2018) proved a similar statement for sampling with replacement.
It remains to justify the correctness of Algorithm 4 by arguing that with high probability, the overall k-means cost is preserved up to a (1+α)-factor by the empirical means. We also analyze the running time of Algorithm 4.
Theorem B.1. If each cluster in the optimal k-means clustering of the predictor Π has at least 3k/α points, then Algorithm 4 outputs a (1 + α)-approximation to the k-means objective with probability at least 2/3, using O(kdn) total time.
Proof. We first justify the correctness of Algorithm 4. Suppose each cluster in the optimal k-means clustering of the predictor Π has at least 3k/α points. Let C = {c_1, . . . , c_k} be the optimal centers selected by Π and let C_S = {c′_1, . . . , c′_k} be the empirical centers chosen by Algorithm 4. For each i ∈ [k], let C_i be the points of X that are assigned to c_i by the predictor Π. By Lemma A.9 with η = 3, the approximate centroid of a cluster induces a (1 + α)-approximation to the cost of the corresponding cluster, so that
cost(C_i, c′_i) ≤ (1 + α) cost(C_i, c_i),

with probability at least 1 − 1/(3k). Taking a union bound over all k clusters, we have that

Σ_{i∈[k]} cost(C_i, c′_i) ≤ Σ_{i∈[k]} (1 + α) cost(C_i, c_i),
with probability at least 2/3. Equivalently, cost(X, C_S) ≤ (1 + α) cost(X, C). To analyze the running time of Algorithm 4, observe that the estimated centroids for all labels can be computed in O(dn) time. Subsequently, assigning each unlabeled point to the closest estimated centroid uses O(kd) time per unlabeled point. Thus, the total running time is O(kdn).
C k-MEDIAN CLUSTERING
We first recall that a well-known result states that the geometric median that results from uniformly sampling a number of points from the input is a “good” approximation to the actual geometric median for the 1-median problem.
Theorem C.1. Krauthgamer (2019) Given a set P of n points in R^d, the geometric median of a sample of O((d/α²) log(d/α)) points of P provides a (1 + α)-approximation to the 1-median clustering problem with probability at least 1 − 1/poly(d).

Note that we can first apply Theorem A.12 to project all points to a space with dimension O((1/α²) log(k/α)) before applying Theorem C.1. Instead of computing the geometric median, we recall the following procedure that produces a (1 + α)-approximation to the geometric median.
Theorem C.2. Cohen et al. (2016) There exists an algorithm that outputs a (1 + α)-approximation to the geometric median in O(nd log³(n/α)) time.
We give our algorithm in full in Algorithm 5.

Theorem C.3. For α ∈ (0, 1), let Π be a predictor with error rate λ = O(α⁴/(k log(k/α) log log(k/α))). If each cluster in the optimal k-median clustering of the predictor has at least n/(ζk) points, then there exists an algorithm that outputs a (1 + α)-approximation to the k-median objective with probability at least 1 − 1/poly(k), using O(nd log³ n + poly(k, log n)) total time.

Proof. Observe that Algorithm 5 samples O((1/α⁴) log²(k/α)) points for each of the clusters labeled i, with i ∈ [k]. Thus Algorithm 5 samples O((k/α⁴) log²(k/α)) points in total. For λ = O(α⁴/(k log(k/α) log log(k/α))) with a sufficiently small constant, the expected number of incorrectly labeled points sampled by Algorithm 5 is less than 1/32. Thus, by Markov's inequality, the probability that no incorrectly labeled
Algorithm 5 Learning-Augmented k-median Clustering
Input: A point set X with labels given by a predictor Π with error rate λ.
Output: A (1 + α)-approximate k-median clustering of X.
1: Use a terminal embedding to project all points into a space with dimension O((1/α²) log(k/α)).
2: for i = 1 to k do
3:     Let ℓ_i be the most common remaining label.
4:     Sample O((1/α⁴) log²(k/α)) points with label ℓ_i.
5:     Let C′_i be a (1 + α/4)-approximation to the geometric median of the sampled points.
6: end for
7: Return C′_1, . . . , C′_k.
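A rough NumPy sketch of Algorithm 5 follows. We substitute a standard Weiszfeld iteration for the Cohen et al. (2016) geometric-median routine and omit the terminal embedding of step 1; these substitutions, and the sample-size constant, are simplifications for illustration only.

```python
import numpy as np

def weiszfeld(P, iters=100, eps=1e-9):
    """Standard Weiszfeld iteration; a stand-in for the (1 + α/4)-approximate
    geometric-median routine of Cohen et al. (2016)."""
    m = P.mean(axis=0)
    for _ in range(iters):
        w = 1.0 / np.maximum(np.linalg.norm(P - m, axis=1), eps)
        m = (P * w[:, None]).sum(axis=0) / w.sum()
    return m

def kmedian_with_predictor(X, labels, k, alpha, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    n_samp = int(np.ceil(np.log(k / alpha) ** 2 / alpha**4))  # O((1/α⁴) log²(k/α))
    centers, remaining = [], labels.copy()
    for _ in range(k):
        vals, counts = np.unique(remaining[remaining >= 0], return_counts=True)
        ell = vals[np.argmax(counts)]            # most common remaining label
        idx = np.where(labels == ell)[0]
        pick = rng.choice(idx, size=min(n_samp, idx.size), replace=False)
        centers.append(weiszfeld(X[pick]))
        remaining[labels == ell] = -1            # retire this label
    return np.asarray(centers)
```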
points are sampled by Algorithm 5 is at least 3/4. Conditioned on the event that no incorrectly labeled points are sampled by Algorithm 5, then by Theorem C.1, the empirical geometric median for each cluster induces a (1 + α/4)-approximation to the optimal geometric median in the projected space.

Hence the set of k empirical geometric medians induces a (1 + α/4)-approximation to the optimal k-median clustering cost in the projected space. Since the projected space is the result of a terminal embedding, the set of k empirical geometric medians for the sampled points in the projected space induces a k-median clustering cost that is a (1 + α/4)-approximation to the k-median clustering cost induced by the set of k empirical geometric medians for the sampled points in the original space. Taking the set of k empirical geometric medians for the sampled points in the original space thus induces a (1 + α/4)²-approximation to the k-median clustering cost. Since we take a (1 + α/4)-approximation to each of the geometric medians, for sufficiently small α, Algorithm 5 outputs a (1 + α)-approximation to the k-median clustering problem.
To embed the points into the space of dimension O((1/α²) log(k/α)), Algorithm 5 spends O(nd log n) total time. By Theorem C.2, it takes O(nd log³ n) total time to compute the approximate geometric medians.
D LOWER BOUNDS
MAX-E3-LIN-2 is the optimization problem of maximizing the number of equations satisfied by a system of linear equations over Z₂ with exactly 3 distinct variables in each equation. EK-MAX-E3-LIN-2 is the problem of MAX-E3-LIN-2 when each variable appears in exactly k equations. Fotakis et al. (2016) showed that assuming the exponential time hypothesis (ETH) (Impagliazzo & Paturi, 2001), there exists an absolute constant C1 such that MAX k-SAT (and thus MAX k-CSP) instances with fewer than O(n^{k−1}) clauses cannot be approximated within a factor of C1 in time 2^{O(n^{1−δ})} for any δ > 0. As a consequence, the reduction by Håstad (2001) shows that there exist absolute constants C2, C3 such that EK-MAX-E3-LIN-2 with k ≥ C2 cannot be approximated within a factor of C3 in time 2^{O(n^{1−δ})} for any δ > 0. Hence, the reduction by Chlebík & Chlebíková (2006) shows that there exists a constant C4 such that approximating the minimum vertex cover of 4-regular graphs within a factor of C4 cannot be done in time 2^{O(n^{1−δ})} for any δ > 0. Thus the reduction by Lee et al. (2017) shows that there exists a constant C5 such that approximating k-means within a factor of C5 cannot be done in time 2^{O(n^{1−δ})} for any δ > 0, assuming ETH. Namely, the reduction of Lee et al. (2017) shows that an algorithm that provides a C5-approximation to the optimal k-means clustering can be used to compute a C4-approximation to the minimum vertex cover.
Theorem D.1. If ETH is true, then there does not exist an algorithm A that takes a set S of n^{1−δ}/log n vertices and finds a C4-approximation to the minimum vertex cover | 1. What is the main contribution of the paper regarding the k-means clustering problem?
2. What are the strengths and weaknesses of the proposed method, particularly in its theoretical guarantee and empirical performance?
3. Do you have any concerns or suggestions regarding the paper's experimental design and dataset choices?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any other relevant works or references that the authors could consider to enhance their analysis and discussion? | Summary Of The Paper
Review | Summary Of The Paper
This work aims to find a solution of the k-means clustering problem based on (prior) predicted labels. Here the predicted labels could be obtained from some clustering algorithms or some supervised models, with additional noise. A polynomial time procedure (Algorithm 1) is proposed as follows: for each fitted component from the predicted labels, a robust mean is computed (in a coordinate-wise way). These robust means are output as the final clustering solution. The running time is O(knd log n).
The paper rigorously establishes a theoretical guarantee on the approximation ratio of the solution assuming the predicted labels have a bounded label error. To further improve the running time, the authors utilize a dimension reduction technique to cluster O(k/α) points in dimension O(log n) and then obtain the labels for the original data points with an approximate nearest neighbor data structure. This modified approach (Algorithm 3) has running time O(nd log n + poly(k, log n)) and attains a solution with a similar theoretical guarantee. To empirically evaluate the proposed method, the authors perform experiments on synthetic data and a few real datasets. The experiments demonstrate that the proposed method with k-means++ initialization achieves better performance than k-means++; moreover, the performance is competitive and robust even when the predicted labels are corrupted.
Review
The paper is nicely written, with the motivation clearly stated. k-means clustering is a fundamental and important problem. The authors consider a setting in which some prior knowledge about the clustering is available, and derive a statistical method to recover the solution with provable guarantees.
Weakness
• The theorem applies when the label error is small, less than 1/7. However, it might be non-trivial to obtain a predictor with that quality in the first place. For example, in the experiments, the initial solutions are derived from the k-means (Lloyd's) algorithm, which might require many initial seeds to attain a good solution. Can any guarantees be made when the initial label error is larger?
• When k gets larger, the k-means algorithm's solution (even with k-means++) can be stuck at local minima, with an arbitrarily worse objective [1]. How would algo+k-means++/predictor behave compared with k-means++ (with multiple seeds)? Can the algorithm help escape the local minima and attain a much better solution? There is a collection of synthetic benchmark datasets [2] for k-means for understanding the performance of the algorithm. I suggest the authors take these benchmark datasets into consideration in the evaluation experiments. In the current experiments, only k = 10 and k = 25 are tested, and it is hard to see the comparison of algorithms when k gets larger, which is a more challenging case for the k-means problem.
• Minor suggestion: the average of the k-means objectives over multiple seeds is used as a baseline; I think the minimal k-means objective over multiple seeds is more reasonable.
[1] Jin, Chi, et al. "Local maxima in the likelihood of gaussian mixture models: Structural results and algorithmic consequences." Advances in Neural Information Processing Systems 29 (2016): 4116-4124.
[2] Fränti, Pasi, and Sami Sieranoja. "K-means properties on six clustering benchmark datasets." Applied Intelligence 48.12 (2018): 4743-4759.
ICLR | Title
Grassmannian Class Representation in Deep Learning
Abstract
We generalize the class representative vector found in deep classification networks to linear subspaces and show that the new formulation enables the simultaneous enhancement of the inter-class discrimination and intra-class feature variation. Traditionally, the logit is computed by the inner product between a feature and the class vector. In our modeling, classes are subspaces and the logit is defined as the norm of the projection from a feature onto the subspace. Since the set of subspaces forms Grassmann manifolds, finding the optimal subspace representation for classes is to optimize the loss on a Grassmannian. We integrate the Riemannian SGD into existing deep learning frameworks such that the class subspaces in a Grassmannian are jointly optimized with other model parameters in Euclidean space. Compared to the vector form, subspaces have two appealing properties: they can be multi-dimensional and they are scaleless. Empirically, we reveal that these distinct characteristics improve various tasks. (1) Image classification. The new formulation brings the top-1 accuracy of ResNet50-D on ImageNet-1K from 78.04% to 79.37% using the standard augmentation in 100 training epochs. This confirms that the representative capability of subspaces is more powerful than that of vectors. (2) Feature transfer. Subspaces provide freedom for features to vary, and we observed that the intra-class variability of features increases when the subspace dimensions are larger. Consequently, the quality of features is better for downstream tasks. The average transfer accuracy across 6 datasets improves from 77.98% to 80.12% compared to the strong baseline of vanilla softmax. (3) Long-tail classification. The scaleless property of subspaces benefits classification in the long-tail scenario and improves the accuracy of ImageNet-LT from 46.83% to 48.94% compared to the standard formulation. With these encouraging results, we believe that more applications could benefit from the Grassmannian class representation. Code will be released.
1 INTRODUCTION
The idea of representing classes as linear subspaces in machine learning can be dated back, at least, to 1973 (Watanabe & Pakvasa (1973)), yet it is mostly ignored in the current deep learning literature. In this paper, we revisit the scheme of representing classes as linear subspaces in the deep learning context. To be specific, each class i is associated with a linear subspace Si, and for any feature vector x, the i-th class logit is defined as the norm of projection
l_i := ‖proj_{S_i}x‖. (1)
Since a subspace is a point in the Grassmann manifold (Absil et al. (2009)), we call this formulation the Grassmannian class representation. In the following, we answer the two critical questions,
1. Is Grassmannian class representation useful in real applications?
2. How to optimize the subspaces in training?
The procedure fully-connected layer → softmax → cross-entropy loss is the standard practice in deep classification networks. Each column of the weight matrix of the fully-connected layer is called the class representative vector and serves as a prototype for one class. This representation of class has achieved huge success, yet it is not without imperfections.
In the study of transferable features, researchers noticed a dilemma that representations with higher classification accuracy on the original task lead to less transferable features for downstream tasks (Kornblith et al. (2021); Müller et al. (2019)). This is connected to the fact that they tend to collapse intra-class variability of representations, resulting in loss of information in the logits about the resemblances between instances of different classes. Furthermore, the neural collapse phenomenon (Papyan et al. (2020)) indicates that as training progresses, the intra-class variation becomes negligible, and features collapse to their class-means. So this dilemma inherently originates from the practice of representing classes by a single vector. The Grassmannian class representation sheds light on this issue, as features of each class are allowed to vary in a high-dimensional subspace without incurring losses in classification.
In the study of the long-tail classification, researchers found that the norm of class representative vectors is highly related to the number of training instances in the corresponding class (Kang et al. (2019)) and the recognition accuracy is affected. To counter this effect, the class representative vector is typically rescaled to unit length during training (Liu et al. (2019)) or re-calibrated in an extra post-processing step (Kang et al. (2019)). In addition to these techniques, the Grassmannian class representation provides a natural and elegant solution for this, as a subspace is scaleless.
It is well known that the set of k-dimensional linear subspaces forms a Grassmann manifold, so finding the optimal subspace representation for classes is to optimize on the Grassmann manifold. Thus for the second question, the natural solution is to use geometric optimization (Edelman et al. (1998)), which optimizes an objective function under the constraint of a given manifold. Points being optimized move along geodesics instead of following the direction of Euclidean gradients. The preliminary concepts of geometric optimization are reviewed in Section 3, and the technical details of subspace learning are presented in Section 4. We implemented an efficient Riemannian SGD for optimization in the Grassmann manifold as shown in Algorithm 1, which integrates the geometric optimization algorithms into deep learning frameworks so that both the linear subspaces on the Grassmannian and the model weights in Euclidean space are jointly optimized.
Going back to the first question, we experiment on three concrete tasks in Section 5 to demonstrate the practicality and effectiveness of Grassmannian class representation. We find that (1) Grassmannian class representation improves large-scale image classification accuracy. (2) Grassmannian class representation produces high-quality features that can better transfer to downstream tasks. (3) Grassmannian class representation improves the long-tail classification accuracy. With these encouraging results, we believe that Grassmannian class representation is a promising formulation and more applications may benefit from its attractive features.
2 RELATED WORK
Geometric Optimization Edelman et al. (1998) developed the geometric Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds in their seminal paper. Riemannian SGD was introduced in Bonnabel (2013) with an analysis on convergence, and there are variants such as Riemannian SGD with momentum (Roy et al. (2018)) or adaptive variants (Kasai et al. (2019)). Other popular Euclidean optimization methods such as Adam are also studied in the Riemannian manifold context (Becigneul & Ganea (2019)). Lezcano-Casado & Martínez-Rubio (2019) study the special cases of SO(n) and U(n) and use the exponential map to enable Euclidean optimization methods for Lie groups. The idea was generalized into trivialization in Lezcano Casado (2019). Our Riemannian SGD Algorithm 1 is tailored for the Grassmannian, so we have a closed-form equation for geodesics. Other applications of geometric optimization include matrix completion (Mishra & Sepulchre (2014); Li et al. (2015b;a); Nimishakavi et al. (2018)), hyperbolic taxonomy embedding (Nickel & Kiela (2018)), etc. Hamm & Lee (2008) propose the Grassmann discriminant analysis, in which features are modeled as linear subspaces. These applications mostly use shallow models. Zhang et al. (2018) use subspaces to model clusters in unsupervised learning, which shares a similar spirit with our work. Simon et al. (2020) model classes as subspaces in few-shot learning; however, their subspaces are computed from the data matrix rather than explicitly parametrized and learned. Roy et al. (2019) use the Stiefel manifold to construct the Mahalanobis distance matrix in Siamese networks in order to improve feature embeddings in deep metric learning.
Orthogonal Constraints in Deep Learning There are works that enforce orthogonality on weights, which study the regularization effect of orthogonal constraints. In contrast, we use orthonormal matrices as the numerical representation of the geometric object of subspaces and focus on the representation of classes. The approaches for enforcing orthogonality include regularizations (Arjovsky et al. (2016); Xie et al. (2017a); Bansal et al. (2018); Qi et al. (2020); Wang et al. (2020), etc.), geometric constraints (Ozay & Okatani (2018); Harandi & Fernando (2016)) and paraunitary systems (Su et al. (2022)). Orthogonally constrained data is also explored by Huang et al. (2018).
Improving Diversity in Feature Learning Grassmannian class representation encourages the intra-class variation implicitly by providing a subspace in which to vary. In metric learning, there are efforts to explicitly encourage feature diversity. For example, the SoftTriple Loss (Qian et al. (2019)) models each class as local clusters with several centers. Zhang et al. (2017) use a global orthogonal regularization to encourage local descriptors to spread out in the feature space. Yu et al. (2020) propose to learn low-dimensional structures from the maximal coding rate reduction principle. The subspaces are estimated using PCA on feature vectors after the training. In our formulation, subspaces are directly optimized in the Grassmann manifold during training.
Normalized Classification Weights Normalizing class representative vectors has been found useful in representation learning (Wang et al. (2017; 2018); Deng et al. (2019)) and long-tail classification (Liu et al. (2019); Wang et al. (2021)). However, works such as ArcFace (Deng et al. (2019)) focus on adding an extra margin to suppress intra-class variance. In contrast, our subspace formulation encourages intra-class variation.
3 PRELIMINARIES
In this section, we briefly review the essential concepts in geometric optimization. A detailed exposition can be found in Edelman et al. (1998) and Absil et al. (2009). Given an n-dimensional Euclidean space Rⁿ, the set of k-dimensional linear subspaces forms the Grassmann manifold G(k, n). A computation-friendly representation for a subspace S ∈ G(k, n) is an orthonormal matrix S ∈ R^{n×k}, where SᵀS = I_k and I_k is the k × k identity matrix. The columns of the matrix S can be interpreted as an orthonormal basis for the subspace S. The matrix representation is not unique, as right-multiplying by an orthogonal matrix yields a new matrix representing the same subspace. Formally, the Grassmannian is a quotient space of the Stiefel manifold by the orthogonal group, G(k, n) = St(k, n)/O(k), where St(k, n) = {X ∈ R^{n×k} | XᵀX = I_k} and O(k) = {X ∈ R^{k×k} | XᵀX = I_k}. When the context is clear, we use the notation of the space S and one of its matrix representations S interchangeably. The tangent space of the Grassmann manifold at S consists of all n × k matrices T such that SᵀT = 0. Given a function f : G(k, n) → R defined on the Grassmann manifold, the Riemannian gradient of f at a point S ∈ G(k, n) is given by (Edelman et al., 1998, Equ. (2.70)),
∇f(S) = f_S − SSᵀf_S, (2)
where f_S is the Euclidean gradient with elements (f_S)_{ij} = ∂f/∂S_{ij}. When performing gradient descent on the Grassmann manifold, suppose the current point is S and the current Riemannian gradient is G; then the next point is the endpoint of S moving along the geodesic toward the tangent G with some step size. The formula of the geodesic is given by (Edelman et al., 1998, Equ. (2.65)),
S(t) = (SV cos(tΣ) + U sin(tΣ))Vᵀ, (3)
where UΣVᵀ = G is the thin singular value decomposition of G.
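The two formulas above translate directly into a few lines of NumPy; the following sketch (function names ours) projects a Euclidean gradient onto the tangent space and takes one geodesic step.

```python
import numpy as np

def riemannian_grad(S, euc_grad):
    """Equation (2): project the Euclidean gradient f_S onto the tangent
    space of the Grassmannian at S (S has orthonormal columns)."""
    return euc_grad - S @ (S.T @ euc_grad)

def geodesic_step(S, G, t):
    """Equation (3): move S along the geodesic in the direction G with step t."""
    U, sigma, Vt = np.linalg.svd(G, full_matrices=False)  # thin SVD, G = U diag(σ) Vᵀ
    # (S V cos(tΣ) + U sin(tΣ)) Vᵀ; broadcasting scales columns by cos/sin
    return (S @ Vt.T * np.cos(t * sigma) + U * np.sin(t * sigma)) @ Vt
```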
4 LEARNING THE GRASSMANNIAN CLASS REPRESENTATION
Denote the weight of the last fully-connected layer in a classification network by W ∈ R^{n×C} and the bias by b ∈ R^C, where n is the dimension of features and C is the number of classes. The i-th column vector w_i of W is called the i-th class representative vector. The i-th logit is computed as the inner product between a feature x and the class vector (and optionally offset by a bias b_i), namely w_iᵀx + b_i. We extend this well-established formula to a multi-dimensional subspace form
l_i := ‖proj_{S_i}x‖, (4)
where S_i ∈ G(k, n) is a k-dimensional subspace of the n-dimensional feature space. We call S_i the i-th class representative space, or class space in short. Comparing the new logit to the standard one, the inner product of the feature x with the class vector is replaced by the norm of the subspace projection proj_{S_i}x, and the bias term is omitted. We found that re-normalizing features to a constant length γ improves training. Incorporating this, Equation (4) becomes ‖proj_{S_i}(γx/‖x‖)‖. To simplify notation, we assume feature x has been properly re-normalized throughout this paper unless otherwise specified.
The application of the subspace class representation requires two modifications to an existing network. Firstly, the last fully-connected layer is replaced by its geometric counterpart, which is detailed in Section 4.1. The new geometric layer will transform features to logits using Equation (4). Secondly, the optimizer should be extended to process the new geometric layer simultaneously, which is explained in Section 4.2. Parameters of the geometric layer are optimized using Geometric SGD, while all other parameters are optimized as usual using the standard SGD algorithm.
4.1 GRASSMANNIAN CLASS REPRESENTATION
Suppose for class i, i = 1, 2, . . . , C, its subspace representation is S_i ∈ G(k_i, n), where the dimension k_i is a hyperparameter and is fixed during training. Then the tuple of subspaces (S_1, S_2, . . . , S_C) will be optimized in the product space G(k_1, n) × G(k_2, n) × · · · × G(k_C, n). Denote a matrix instantiation of S_i as S_i ∈ R^{n×k_i}, where the column vectors form an orthonormal basis of S_i; then we concatenate the matrices into a big matrix
S = [S_1 S_2 · · · S_C] ∈ R^{n×(k_1+k_2+···+k_C)}. (5)
The matrix S contains the parameters that are optimized numerically. For a feature x, the product S_iᵀx gives the coordinates of proj_{S_i}x under the orthonormal basis formed by the columns of S_i. By the definition in Equation (4), the logit for class i and feature x is computed by
l_i = ‖proj_{S_i}x‖ = ‖S_iᵀx‖. (6)
Grassmannian Fully-Connected Layer We can implement a geometric fully-connected layer using the plain old fully-connected layer. The shape of the weight S is n × (k_1 + k_2 + · · · + k_C), as shown in Equation (5). In the forward pass, the input feature is multiplied with the weight matrix to get a temporary vector t = Sᵀx; then the first element of the output is the norm of the sub-vector (t_1, . . . , t_{k_1}), the second element of the output is the norm of (t_{k_1+1}, t_{k_1+2}, . . . , t_{k_1+k_2}), and so on.
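A minimal PyTorch sketch of this forward pass is shown below, assuming for simplicity that every class uses the same subspace dimension k; the feature renormalization to length γ is included, and the module and argument names are illustrative rather than those of the released code.

```python
import torch
import torch.nn as nn

class GrassmannianFC(nn.Module):
    """Logit i is the norm of the projection of the renormalized feature
    onto the i-th class subspace, Equation (6)."""
    def __init__(self, in_dim, num_classes, k, gamma=25.0):
        super().__init__()
        # one orthonormal n x k block per class, stored as an n x (C*k) matrix
        blocks = [torch.linalg.qr(torch.randn(in_dim, k))[0] for _ in range(num_classes)]
        self.S = nn.Parameter(torch.cat(blocks, dim=1))
        self.num_classes, self.k, self.gamma = num_classes, k, gamma

    def forward(self, x):                                   # x: (batch, n)
        x = self.gamma * nn.functional.normalize(x, dim=1)  # renormalize to length γ
        t = x @ self.S                                      # blockwise t = Sᵢᵀx
        return t.view(-1, self.num_classes, self.k).norm(dim=2)  # ‖Sᵢᵀx‖
```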
Parameter Initialization Each matrix instantiation of a subspace should be initialized as an orthonormal matrix. The geometric optimization algorithm described in Section 4.2 ensures their orthonormality during training. Specifically, for the Grassmannian fully-connected layer, each block S_i of the weight S in Equation (5) is orthonormal. The whole matrix S need not be orthonormal.
4.2 OPTIMIZE THE SUBSPACES
Geometric optimization is to optimize functions defined on manifolds. The key step is to find the Riemannian gradient of the loss function and then descend along the geodesic. Here the manifold in concern is the Grassmannian G(k, n). As an intuitive example, G(1, 2) consists of all lines through the origin in a two-dimensional plane. We can visualize it as a unit circle where each point on the unit circle represents the line passing through it. Antipodal points represent the same line. To illustrate
Algorithm 1 An Iteration of the Riemannian SGD with Momentum for Grassmannian at Step t
Input: Learning rate γ > 0, momentum µ ∈ [0, 1), Grassmannian weight matrix S^(t) ∈ R^{n×k}, momentum buffer M^(t−1) ∈ R^{n×k}, Euclidean gradient D ∈ R^{n×k}.
1: Compute the Riemannian gradient G ← (I_n − SSᵀ)D. ▷ Equation (8)
2: Approximately parallel transport M to the tangent space of the current point S^(t) by the projection M ← (I_n − SSᵀ)M^(t−1). (11)
3: New momentum M^(t) ← µM + G. ▷ PyTorch version
4: Move along the geodesic using Equation (3): if UΣVᵀ = M^(t) is the thin singular value decomposition, then S^(t+1) ← (S^(t)V cos(γΣ) + U sin(γΣ))Vᵀ.
5: (Optional) Re-orthonormalize S^(t+1) by QR decomposition. ▷ For numerical stability
how geometric optimization works, we define a toy problem on G(1, 2) that maximizes the norm of the projection of a fixed vector x0 onto a line through the origin, namely
max_{S∈G(1,2)} ‖proj_S x_0‖. (7)
As shown in Figure 1, we represent S with a unit vector w ∈ S. Suppose at step t the current point is w^(t); then it is easy to compute that the Euclidean gradient at w^(t) is d = x_0, and the Riemannian gradient g is the Euclidean gradient d projected onto the tangent space of G(1, 2) at the point w^(t). The next iterate w^(t+1) is obtained by moving w^(t) along the geodesic toward the direction g. Without geometric optimization, the next iterate would have lain at w^(t) + γd, jumping outside of the manifold.
The following proposition computes the Riemannian gradient we need.

Proposition 1. Let S ∈ R^{n×k} be a matrix instantiation of subspace S ∈ G(k, n), and let x ∈ Rⁿ be a vector in the Euclidean space. Then the Riemannian gradient G of l(S, x) = ‖proj_S x‖ w.r.t. S is

G = (1/l)(I_n − SSᵀ)xxᵀS. (8)
Proof. Rewrite ‖proj_S x‖ = √(xᵀSSᵀx), and compute the Euclidean derivatives as

∂l/∂S = (1/l)xxᵀS,  ∂l/∂x = (1/l)SSᵀx. (9)
Then Equation (8) follows from Equation (2).
We give a geometric interpretation of Proposition 1. Let w_1 be the unit vector along the direction of proj_S x, and expand it to an orthonormal basis of S, say {w_1, w_2, . . . , w_k}. Since the Riemannian gradient is invariant to the matrix instantiation, we can set S = [w_1 w_2 · · · w_k]. Then Equation (8) becomes

G = [(I_n − SSᵀ)x  0  · · ·  0], (10)

since w_iᵀx = 0 for i = 2, 3, . . . , k and w_1ᵀx = l. Equation (10) shows that in the single-sample case, only one basis vector, w_1, needs to be rotated towards the vector x, where w_1 is the unit vector in S that is closest to x.
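Proposition 1 can be verified numerically by comparing the closed form (8) with the projection of an autograd-computed Euclidean gradient; a quick sketch (dimensions and names are arbitrary):

```python
import torch

n, k = 32, 4
S = torch.linalg.qr(torch.randn(n, k))[0].requires_grad_(True)
x = torch.randn(n)
(S.T @ x).norm().backward()                    # Euclidean gradient lands in S.grad
with torch.no_grad():
    l = (S.T @ x).norm()
    P = torch.eye(n) - S @ S.T                 # projector I_n - SSᵀ
    G_proj = P @ S.grad                        # Riemannian gradient via Equation (2)
    G_closed = P @ torch.outer(x, x) @ S / l   # closed form, Equation (8)
    print(torch.allclose(G_proj, G_closed, atol=1e-5))  # expected: True
```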
Riemannian SGD During training, parameters of non-geometric layers are optimized as usual using the vanilla SGD algorithm. For geometric layers such as the Grassmannian fully-connected layer, their parameters are optimized using the Riemannian SGD algorithm. The pseudo-code of the Riemannian SGD with momentum, which we implemented in our experiments, is described in Algorithm 1. We only show the code for the single-sample, single-Grassmannian case. It is trivial to extend it to the batch version and to products of Grassmannians. Note that in step 2, we use a projection to approximate the parallel translation of the momentum for efficiency, and in step 5 an optional extra orthogonalization can improve numerical stability. The momentum update formula is adapted from the PyTorch implementation of the vanilla SGD. Weight decay does not apply here since subspaces are scaleless. Algorithm 1 works together with the vanilla SGD and modifies the gradient from Euclidean to Grassmannian on-the-fly for geometric parameters.
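A condensed sketch of one update of Algorithm 1 for a single Grassmannian block is given below (names ours; the optional step 5 and the optimizer bookkeeping are omitted, and the released code summarized in Appendix E handles products of Grassmannians):

```python
import torch

def riemannian_sgd_step(S, M, D, lr, mu):
    """One iteration of Algorithm 1. S: current point (n x k, orthonormal
    columns), M: momentum buffer, D: Euclidean gradient."""
    proj = lambda T: T - S @ (S.T @ T)    # projection onto the tangent space at S
    G = proj(D)                           # step 1: Riemannian gradient
    M = mu * proj(M) + G                  # steps 2-3: transport, then momentum
    U, sigma, Vt = torch.linalg.svd(M, full_matrices=False)   # step 4: thin SVD
    S_new = (S @ Vt.T * torch.cos(lr * sigma) + U * torch.sin(lr * sigma)) @ Vt
    return S_new, M
```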
5 EXPERIMENT
In this section, we study the influence of Grassmannian class representation through experiments. Firstly, in Section 5.1, we show that the expressive power of Grassmannian class representation improves accuracy in large-scale image classification. Secondly, in Section 5.2, we show that the Grassmannian class representation improves feature transferability by allowing larger intra-class variation. Thirdly, in Section 5.3, we demonstrate that the scaleless property of the Grassmannian class representation improves the classification accuracy in the long-tail scenario. Additional experiments on hyper-parameter choices and design decisions are presented in Appendix B.
We choose the vanilla softmax loss and the cosine softmax loss (without margin) as baselines since they reflect the current typical class representations. The former uses a plain vector and the latter uses a normalized vector. Other innovations on losses, such as adding margins (Deng et al. (2019)) and re-balancing class-wise gradients (Wang et al. (2021)), are orthogonal to our contribution.
5.1 GRASSMANNIAN CLASS REPRESENTATION IMPROVES CLASSIFICATION ACCURACY
We apply the Grassmannian class representation to large-scale classification, where consistent improvement over baselines is shown. We then analyze the characteristics of both the learned features and the learned class subspaces. On the feature representation side, we compare the feature sparsity and intra-class variability. On the class representation side, we visualize the principal angles between any pair of classes, a concept that only appears when classes are Grassmannian.
Experimental Setting We use the ResNet50-D (He et al. (2019)) architecture as the base model, and benchmark on ImageNet-1K (Deng et al. (2009)). ResNet50-D is a slight modification of the original ResNet-50 (He et al. (2016)) with about 1% improvement in accuracy. ImageNet-1K is a large-scale image classification dataset containing 1.28M training images and 50K validation images in 1000 categories. We set γ = 25 for both the cosine softmax and the Grassmannian class representation. Our method replaces the last fully-connected layer of ResNet50-D with a Grassmannian fully-connected layer. To reduce the number of hyper-parameters, we simply set the subspace dimension k to be the same for all classes. We vary the hyper-parameter k over the values {1, 2, 4, 8, 16}. Since the dimension of the feature is 2048, the Grassmannian fully-connected layer has the geometry of ∏_{i=1}^{1000} G(k, 2048).
Training Strategy All settings share the same training strategy. Each training run includes 100 epochs with a total batch size of 256 on 8 NVIDIA Tesla V100 GPUs. SGD is used for the baselines, and the Riemannian SGD described in Algorithm 1 is used for the Grassmannian class representations. The momentum is 0.9 and the weight decay is 0.0001. The initial learning rate is 0.1 and follows the cosine learning rate decay. The checkpoint with the best validation score is used. The input size is 224 × 224 and we use the standard augmentation for ImageNet, namely, random resized crop followed by random horizontal flip. The code is implemented using the mmclassification (MMClassification Contributors (2020)) package, and uses PyTorch as the training backend. Note that to make the number of experiments tractable given our limited computation resources, we omitted many tricks that have been shown to improve representation learning, such as stronger augmentation (Cubuk et al. (2020)), longer training (Wightman et al. (2021)), adding margins (Deng et al. (2019)), etc., and focus on the improvements solely contributed by the Grassmannian formulation.
Feature Norm Regularization We noticed that the norm of the feature (before re-normalization) decreases as training progresses (see Appendix A for details). For example, in the case of k = 16, the average norm of the feature decreases from 1.051 at epoch 10 to 0.332 at epoch 100. Although the norm of the feature does not affect the inference result, due to the feature re-normalization when computing logits, we empirically find that encouraging the norm to be larger than a constant L improves the training. Specifically, we propose a feature norm regularization loss L_FN,
L_FN = (1/K) ∑_i ½ (relu(L − ‖x_i‖))², (12)
where x_i is the feature of the i-th sample before normalization and K is the number of features with norm larger than L. In our experiments, L = 1 and the loss is directly added to the softmax loss
with equal weight. We also tried larger values of L and regularizing the norm of the feature on both sides; however, they degrade the performance.
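A PyTorch sketch of the regularizer in Equation (12) is given below. It follows the definition of K in the text; the clamp that keeps K at least one is our own safeguard against division by zero.

```python
import torch

def feature_norm_loss(x, L=1.0):
    """Equation (12): penalize features whose pre-normalization norm is below L."""
    norms = x.norm(dim=1)                  # per-sample feature norms
    K = (norms > L).sum().clamp(min=1)     # K as defined in the text
    return (0.5 * torch.relu(L - norms) ** 2).sum() / K
```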
Results The validation accuracies of the different models on ImageNet-1K are listed in Table 1. All models with the Grassmannian class representation achieve higher top-1 and top-5 accuracies than the vanilla softmax and the cosine softmax. A general trend is that, with a larger subspace dimension k, the accuracy improvement is greater. When the subspace dimension is 16, the top-1 accuracy is 79.21%, which is 1.17 percentage points higher than the vanilla softmax loss. With feature norm regularization, the top-1 accuracy further improves from 79.12% to 79.37% for dimension 8.
Intra-Class Variability Increases with Dimension The intra-class variability is measured by the mean pair-wise angle (in degrees) between features within the same class, averaged over all classes. The inter-class variability is the average of the mean pair-wise angles between features from different classes. Following the convention in the study of neural collapse (Papyan et al. (2020)), we use the globally centered training features to compute variabilities. Kornblith et al. (2021) showed that alternative objectives that improve accuracy, including label smoothing, dropout, sigmoid, cosine softmax, logit normalization, etc., collapse the intra-class variability in representation, which in consequence degrades the quality of features on downstream tasks. However, this conclusion does not apply when the classes are modeled by subspaces. The intra-class variability does reduce from the baseline's 60.12 to the Grassmannian formulation's 56.52 when the subspace dimension is k = 1; however, as k increases, both the top-1 accuracy and the intra-class variability grow. This indicates that representing classes as subspaces enables the simultaneous improvement of class discriminative power and expansion of intra-class variability.
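The variability statistic can be computed as in the following sketch, which assumes globally centered features, integer class labels, and at least two samples per class (names ours):

```python
import torch

def intra_class_variability(feats, labels):
    """Mean pairwise angle (degrees) between same-class features, averaged
    over classes; feats are assumed to be globally centered."""
    f = torch.nn.functional.normalize(feats, dim=1)
    angles = []
    for c in labels.unique():
        fc = f[labels == c]
        cos = (fc @ fc.T).clamp(-1.0, 1.0)
        iu = torch.triu_indices(fc.size(0), fc.size(0), offset=1)
        angles.append(torch.rad2deg(torch.acos(cos[iu[0], iu[1]])).mean())
    return torch.stack(angles).mean()
```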
Feature Sparsity The feature sparsity is measured by the average percentage of zero activations on the validation set. As shown in Table 1, the features from the vanilla softmax network are very dense, with only 0.55% zero activations. The cosine softmax and the Grassmannian class representations all result in sparser representations, with around 78% zero activations. The feature norm regularization decreases the sparsity by about a half.
Principal Angles Between Class Representative Spaces When classes are subspaces, relationships between two classes can be measured by k angles called principal angles, which contain richer information than a single angle between two class vectors. The principal angles between two k-dimensional subspaces S and R are recursively defined as (Absil et al. (2006))
cos(θ_i) = max_{s∈S} max_{r∈R} sᵀr = s_iᵀr_i, s.t. ‖s‖ = ‖r‖ = 1, sᵀs_j = rᵀr_j = 0, j = 1, . . . , i − 1, (13)
for i = 1, . . . , k and θ_i ∈ [0, π/2]. In Figure 2, we illustrate the smallest and largest principal angles between any pair of classes for a model with k = 8. From the figure, we can see that the smallest principal angle reflects class similarity, and the largest principal angle is around π/2. A smaller angle means the two classes are correlated in some directions, and a π/2 angle means that some directions in one class subspace are completely irrelevant (orthogonal) to the other class.
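In practice, the principal angles can be read off the singular values of SᵀR, since the cos(θ_i) in Equation (13) are exactly those singular values for orthonormal bases S and R; a short sketch:

```python
import torch

def principal_angles(S, R):
    """Principal angles (radians) between the subspaces spanned by the
    orthonormal columns of S and R; cos(θᵢ) = i-th singular value of SᵀR."""
    sigma = torch.linalg.svdvals(S.T @ R).clamp(max=1.0)  # guard against rounding above 1
    return torch.acos(sigma)  # ascending angles, θ₁ the smallest
```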
5.2 GRASSMANNIAN CLASS REPRESENTATION IMPROVES FEATURE TRANSFERABILITY
In this section we compare the linear transferability of the features learned by the different models trained on the ImageNet-1K dataset. The feature transfer benchmark includes CIFAR-10 (Krizhevsky et al. (2009)), CIFAR-100 (Krizhevsky et al. (2009)), Food-101 (Bossard et al. (2014)), Oxford-IIIT Pets (Parkhi et al. (2012)), Stanford Cars (Krause et al. (2013)), and Oxford 102 Flowers (Nilsback & Zisserman (2008)). For each transfer dataset, we use the same trained models as in Table 1 to extract features. All features are then normalized to unit length. We fit a linear SVM with the one-vs-rest multi-class policy on the training set, and report the accuracies on the test set. The regularization hyper-parameter for the SVM is grid searched with candidates [0.1, 0.2, 0.5, 1, 2, 5, 10, 15, 20] and determined by five-fold cross-validation on the training set.
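This protocol corresponds to the following scikit-learn sketch; the candidate list matches the text, while the function name and the omitted data loading are our own.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

def transfer_accuracy(train_f, train_y, test_f, test_y):
    # features are normalized to unit length before fitting
    train_f = train_f / np.linalg.norm(train_f, axis=1, keepdims=True)
    test_f = test_f / np.linalg.norm(test_f, axis=1, keepdims=True)
    grid = {"C": [0.1, 0.2, 0.5, 1, 2, 5, 10, 15, 20]}
    clf = GridSearchCV(LinearSVC(), grid, cv=5)  # LinearSVC is one-vs-rest by default
    clf.fit(train_f, train_y)
    return clf.score(test_f, test_y)
```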
Results As shown in Table 2, the cosine softmax and the Grassmannian with subspace dimension k = 1 have comparable transfer performance, but both are lower than the vanilla softmax. However, when the subspace dimension increases, the transfer performance gradually improves, and when k = 16, the transfer performance is on par with the vanilla softmax. The feature norm regularization improves the transfer quality, as shown in the k = 1, 8 cases. We hypothesize that this might relate to the fact that features with norm regularization are less sparse, so more information is encoded.
Class Separation The class separation is measured by the index R2, which is defined as one minus the ratio of the average intra-class cosine distance to the overall average cosine distance (Kornblith et al., 2021, Eq. (11)). Kornblith et al. (2021) found that greater class separation R2 is associated with less transferable features. This may explain the feature transfer performance of Grassmannian class
representations. The vanilla softmax has lower separation (0.495) compared to the cosine softmax (0.528) and the Grassmannian class representation with subspace dimension k = 1 (0.534). From subspace dimension k = 1 to k = 16, the separation from Grassmannian models decreases from a high value (0.534) to a low value (0.395). The change in class separation is roughly in line with the change of transfer performances.
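A simple way to compute this index is sketched below; note that Kornblith et al. (2021, Eq. (11)) average the within-class distance per class before taking the ratio, whereas this sketch pools all same-class pairs.

```python
import torch

def class_separation(feats, labels):
    """R² = 1 − (average within-class cosine distance) / (overall average)."""
    f = torch.nn.functional.normalize(feats, dim=1)
    D = 1.0 - f @ f.T                              # pairwise cosine distances
    off = ~torch.eye(f.size(0), dtype=torch.bool)  # exclude self-pairs
    same = (labels[:, None] == labels[None, :]) & off
    return 1.0 - D[same].mean() / D[off].mean()
```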
5.3 SCALELESS OF SUBSPACE IMPROVES LONG-TAIL RECOGNITION
We benchmark the effectiveness of the Grassmannian class representation in long-tail classification using the ImageNet-LT dataset (Liu et al. (2019)). ImageNet-LT is a subset of ImageNet-1K, where the number of images per class ranges from 5 to 1280. There are 115.8K images in total, roughly 1/10 the size of ImageNet-1K. We use the same ResNet50-D networks as in Section 5.1. All training settings, including the optimizer, augmentation, and initial learning rate, are kept the same, except that we modify the total number of epochs to 200 and decay the learning rate by 1/10 at epochs 150, 180, and 195. The last checkpoint is used for evaluation. We use instance-balanced sampling, as it was reported by Kang et al. (2019) that class-balanced sampling and square-root sampling both degrade the performance.
We report the top-1 accuracies on the test set in Table 3. We find that both the cosine softmax and the Grassmannian class representation with a small subspace dimension improve the long-tail classification accuracy. Specifically, the cosine softmax is 1.62% higher in score compared to the vanilla softmax, and the Grassmannian class representation with subspace dimension k = 1 is 2.11% higher in score compared to the vanilla softmax. However, when the subspace dimension increases, the accuracy drops. We notice that for few-shot classes there are not enough samples to learn a good higher-dimensional subspace for their representation, as the accuracy on few-shot classes degrades significantly when the dimension is large. Too little training data for a class is an example scenario in which a larger dimension does not offer much help.
6 LIMITATION AND FUTURE DIRECTION
One problem that remains open is how to choose the optimal dimension. Currently, we treat it as a hyper-parameter and determine it through experiments. On the computational side, geometric optimization incurs some overhead since it involves SVD, which might hinder the training speed when k is very large. The Grassmannian class representation allows for greater intra-class variability, but we did not explicitly promote the intra-class variability in any form. It will be very interesting to explore ways to explicitly encourage intra-class variability; for example, a potential way is to combine it with self-supervised learning. We hope our work will stimulate progress in these directions.
7 CONCLUSION
In this work, we proposed to use linear subspaces as the class prototypes in deep neural networks. The geometric structures of the related Grassmannian fully-connected layer and the Grassmannian convolutional layer are products of Grassmannians. We optimize the subspaces using geometric optimization and provide an efficient Riemannian SGD implementation tailored for Grassmannians. We apply the new formulation to large-scale image classification, feature transfer, and long-tail classification tasks. Experiments demonstrate that the new Grassmannian class representation is able to improve performance in these settings.
A TECHNICAL DETAILS
Alternative Implementation of Riemannian SGD Step 4 of Algorithm 1 is called a retraction in geometric optimization. There are alternative implementations of the retraction other than moving parameters along the geodesic. For example, replace step 4 with the Euclidean gradient update followed by the re-orthogonalization via QR decomposition in step 5. The subspace parameter may move away from the Grassmannian after the Euclidean gradient update, but it will be pulled back to the manifold after the QR re-orthogonalization (for details see Absil et al. (2009, Equ. (4.11))). For ease of reference, we call this version of Riemannian SGD the "Algorithm 1 variant". We compare the two implementations in the first two rows of Table 4. The results show that the Grassmannian class representation is effective with both versions of Riemannian SGD.
Necessity of Grassmannian Formulation and Geometric Optimization To show the necessity of constraining the subspace parameters to lie in the Grassmannian, we replace the Riemannian SGD with the vanilla SGD and compare the two. Note that with SGD, the logit formula ‖S_iᵀx‖ no longer means the projection norm because S_i is no longer orthonormal. The result is shown in the third row of Table 4, from which we observe a significant performance drop for the unconstrained setting.
Numerical Stability of Algorithm 1 The numerical stability issue is caused by the accumulation of tiny computational errors of Equation (3). After many iterations, the resulting matrix S might not be perfectly orthonormal. For example, after 100, 1000, and 5000 iterations of the Grassmannian ResNet50-D with subspace dimension k = 8, we observed that the error max_i ‖S_iᵀS_i − I‖_∞ is 1.9e-5, 9.6e-5, and 3.7e-4, respectively. After 50 epochs, the error accumulates to 0.0075. One can run step 5 every 100 iterations to keep the error at a low level, and the computational cost is negligible. For this reason, we marked this step as "optional".
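The drift can be monitored, and corrected per step 5, with a few lines; a sketch for equally sized blocks (names ours):

```python
import torch

def orthogonality_error(S, k):
    """max over blocks of ||SᵢᵀSᵢ − I||_∞ for the n x k blocks of S."""
    I = torch.eye(k)
    return max((b.T @ b - I).abs().max().item() for b in S.split(k, dim=1))

def reorthonormalize(S, k):
    """Step 5 of Algorithm 1: pull each block back to the Grassmannian via QR."""
    return torch.cat([torch.linalg.qr(b)[0] for b in S.split(k, dim=1)], dim=1)
```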
Decreasing Feature Norm During Training We show the change of the average norm on the validation set of ImageNet from epoch 10 to epoch 100 in Figure 3. The subspace dimension is k = 16.
B HYPER-PARAMETERS AND DESIGN DECISIONS
Choice of Gamma We use γ = 25 throughout the main text. Here we give more results with different choices of γ for subspace dimension k = 8 in Table 5. Because we conducted this set of experiments at an early exploration stage, the learning rate decay policy is to divide by 10 at epochs 30, 60, and 90, which is different from our main results using the cosine learning rate schedule. The top-1 accuracy is slightly lower than the cosine learning rate counterpart. Other training settings such as augmentation are the same as in Table 1.
Importance of Re-Normalizing Features Re-normalizing the feature is critical to effectively learning the class representative subspaces. We provide training results without feature re-normalization in Table 6. There is a significant performance drop without re-normalization. For reference, the cosine softmax also requires feature re-normalization for effective learning.
Importance of Joint Training Jointly training the subspaces and the features is essential. To support this claim, we add an experiment that only fine-tunes the class subspaces from weights pre-trained using the regular softmax (third row of Table 7). For comparison, we also add another experiment that fine-tunes all parameters (fourth row of Table 7). We find that if the features are fixed, changing the regular fc to the geometric version does not increase performance noticeably (top-1 from 78.04% to 78.14%). But when all parameters are free to learn, the pre-trained weights are a better initialization than the random initialization (top-1 from 79.12% to 79.44%).
More Results of FN We present more results using the feature norm regularization trick in Table 8. From the results, we observe that FN also works for the baseline cosine softmax. For Grassmannian + FN, the performance reaches its peak at dimension k = 8 and then decreases at k = 16.
Stronger Augmentation Improves Accuracy Generally speaking, stronger augmentation mitigates the overfitting problem and benefits models with larger capacity. To demonstrate the effect of stronger augmentations, we run experiments using RandAug (Cubuk et al. (2020)) in Table 9. We can see that stronger augmentation indeed further increases the accuracy. Together with longer training and SyncBN, the top-1 accuracy for ResNet50-D reaches 80.17%.
C MORE BASELINES
We have compared the proposed method with the vanilla softmax and the cosine softmax in the main text. In this section we compare with baselines that use the same number of parameters, and run experiments on different network structures.
Multi-FC We add multiple classification fc layers to the network. During training, these independent fcs are trained side by side, and their losses are averaged. During testing, the logits are first averaged, and then a softmax outputs the prediction probability.
SoftTriple In the SoftTriple loss (Qian et al. (2019)), each class is modeled by multiple centers. The logit is a weighted average of logits computed from individual class centers. We adapted the official code into our codebase to train on the ImageNet dataset. The recommended parameters are used. Specifically, λ = 20, γ = 0.1, τ = 0.2 and δ = 0.01.
For the above two settings, we use the same training protocols as in Table 1. Results are shown in Table 10, from which we find that the Grassmannian class representation is the most effective one.
More Architectures We show experiments on ResNet101-D and ResNeXt (Xie et al. (2017b)) in Table 11. The training settings are the same as in Table 1, namely, we use the standard augmentation, cosine learning rate schedule, and train for 100 epochs. The results show that our formulation is effective across different model architectures.
D TRAINING SPEED AND SVD SPEED
During inference, the computational cost is k times that of the vanilla softmax. Since it is mostly matrix multiplication, GPU acceleration can speed it up even further. For example, on a V100 GPU, the average time of multiplying a 1000 × 2048 matrix with a 2048-dimensional vector is 20 ± 2.9 µs, while multiplying an 8000 × 2048 matrix with a 2048-dimensional vector takes about 105 ± 7.6 µs. The cost is negligible compared to the network forward time.
During training, the most costly operation in Algorithm 1 is the SVD. We measure the actual iteration time during training in Table 12. We observe that when k is small, it is as fast as the vanilla softmax. When k = 8, the full training needs roughly 1.7× the time of the vanilla softmax (this can be reduced greatly with the new version of PyTorch, as we discuss below).
Since the release of PyTorch 1.13, the fast approximate SVD algorithm GESVDA has been supported. We saw great speed improvements in the cases of k = 8 and k = 16. The benchmark times are shown in Table 13. With computational optimizations such as this, we expect the computational cost of the SVD to be minimal for k ≤ 32.
E PYTORCH CODE FOR RIEMANNIAN SGD
We provide a sample implementation of Algorithm 1 in Figure 4 using PyTorch (Paszke et al. (2019)). The sample code checks whether a parameter is geometric by checking whether it has a ‘geometry’ attribute. If not, it runs the original SGD on that parameter. If the ‘geometry’ property is not None, then it is a list of numbers indicating the dimensions of the class representative subspaces for all classes. If all the dimensions are the same, the code takes the batch version (line 23 of the code in Figure 4); otherwise, it takes the for-loop version (line 46 of the code in Figure 4). | 1. What is the focus and contribution of the paper on neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its implementation and experimental demonstrations?
3. Do you have any concerns or questions about the method's numerical stability, optimization, and relationship to other representations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any additional experiments or considerations that could enhance the paper's findings or provide further insights into its contributions? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The last layer of a neural network trained to classify instances usually linearly projects the feature vector from the previous layer to compute the log odds of a class. This paper proposes replacing that linear projection with the norm of the projection of the feature vector onto a subspace. The paper shows how to optimize all weights of such a network and studies the advantages of the representation. The authors argue that the learned features transfer better in downstream tasks.
Strengths And Weaknesses
The strengths: The paper is clearly written and well-motivated. The method is sufficiently described for a reader's implementation. The experiments are diverse: beyond testing accuracy on ImageNet, the paper also studies feature transfer to new datasets as well as learning from a small number of examples.
The weaknesses: Some of the improvements demonstrated in experiments are fairly small (e.g. Table 1). In fact, given the variability in numbers one might get in all the experiments through small tinkering with known tricks, it is difficult to know if the demonstrated advantages would hold under further optimization for any given task (and if they are statistically significant across initializations in learning), although experiments and illustrations as a whole do paint a convincing picture. Some of the choices of the implementation are not fully explained.
Overall, my first impression is positive.
Questions:
Given the need for step 5 in the algorithm (orthogonalization) for numerical stability, why bother with the geodesic at all and simply do gradient descent on S followed by step 5? Are there some illustrations of why it fails?
Just looking at Eq. 6, one might ask if the orthogonal subspace basis and Grassmann manifold view is really necessary, or if the benefit simply comes from the quadratic form of the logit computation (instead of linear). I.e., going beyond the previous question: Can the optimization be simply done on unconstrained S_i? Or, for that matter, can the logit be l_i = xᵀWx, with unconstrained W (gradient descent optimization of W tends to regularize it to be low rank anyhow)?
Given the quadratic nature of the computation, is there a relationship to tensor product representations (Smolensky et al.), where such computations are done in all layers of a network? (And do you plan to move your subspace projection into earlier layers, too?)
Norm regularization (12), as well as the renormalization of x to constant γ norm in Equation (4), may play big roles in learning, reducing the real effect of the subspace modeling (and also, do you do both of these things or just (12)?)
In the first ImageNet experiment, how would you account for the change in the modeling power by simply having more parameters in the last layer?
In the transfer experiments, I am assuming that the issue above no longer exists, because you treat the features from the previous layer the same way (i.e. not through fine-tuned subspace projections, but using a linear classifier). Is that right?
If the above is right, then Table 2 may be slightly confusing, as the results for ImageNet seem to be copied from Table 1, where logits are computed using the norm of the subspace projections, but for the rest of the datasets, they are computed using linear projections.
Finally, the premise of the experiments is that the joint training of the backbone and the (subspace-based) classifier results in features that are better in the ways described in the paper. If you initialize the network trained with regular softmax or cosine softmax classifier layer, and then switch to the subspace-based layer, what happens? Can keeping the features fixed and finding good subspaces increase accuracy? Does further training of the network change the features and how? (or is this not a meaningful experiment because of the lack of the bias term in your model?)
Clarity, Quality, Novelty And Reproducibility
As I mentioned above, I think the paper is clear and reproducible. |
ICLR | Title
Grassmannian Class Representation in Deep Learning
Abstract
We generalize the class representative vector found in deep classification networks to linear subspaces and show that the new formulation enables the simultaneous enhancement of the inter-class discrimination and intra-class feature variation. Traditionally, the logit is computed by the inner product between a feature and the class vector. In our modeling, classes are subspaces and the logit is defined as the norm of the projection from a feature onto the subspace. Since the set of subspaces forms Grassmann manifolds, finding the optimal subspace representation for classes is to optimize the loss on a Grassmannian. We integrate the Riemannian SGD into existing deep learning frameworks such that the class subspaces in a Grassmannian are jointly optimized with other model parameters in Euclidean. Compared to the vector form, subspaces have two appealing properties: they can be multi-dimensional and they are scaleless. Empirically, we reveal that these distinct characteristics improve various tasks. (1) Image classification. The new formulation brings the top-1 accuracy of ResNet50-D on ImageNet-1K from 78.04% to 79.37% using the standard augmentation in 100 training epochs. This confirms that the representative capability of subspaces is more powerful than vectors. (2) Feature transfer. Subspaces provide freedom for features to vary and we observed that the intra-class variability of features increases when the subspace dimensions are larger. Consequently, the quality of features is better for downstream tasks. The average transfer accuracy across 6 datasets improves from 77.98% to 80.12% compared to the strong baseline of vanilla softmax. (3) Long-tail classification. The scaleless property of subspaces benefits classification in the long-tail scenario and improves the accuracy of ImageNet-LT from 46.83% to 48.94% compared to the standard formulation. With these encouraging results, we believe that more applications could benefit from the Grassmannian class representation. Codes will be released.
1 INTRODUCTION
The idea of representing classes as linear subspaces in machine learning can be dated back, at least, to 1973 (Watanabe & Pakvasa (1973)), yet it is mostly ignored in the current deep learning literature. In this paper, we revisit the scheme of representing classes as linear subspaces in the deep learning context. To be specific, each class i is associated with a linear subspace Si, and for any feature vector x, the i-th class logit is defined as the norm of projection
l_i := ‖proj_{S_i} x‖. (1)
Since a subspace is a point in the Grassmann manifold (Absil et al. (2009)), we call this formulation the Grassmannian class representation. In the following, we answer the two critical questions,
1. Is Grassmannian class representation useful in real applications?
2. How to optimize the subspaces in training?
The procedure fully-connected layer → softmax → cross-entropy loss is the standard practice in deep classification networks. Each column of the weight matrix of the fully-connected layer is called the class representative vector and serves as a prototype for one class. This representation of classes has achieved huge success, yet it is not without imperfections.
In the study of transferable features, researchers noticed a dilemma that representations with higher classification accuracy on the original task lead to less transferable features for downstream tasks (Kornblith et al. (2021); Müller et al. (2019)). This is connected to the fact that they tend to collapse intra-class variability of representations, resulting in loss of information in the logits about the resemblances between instances of different classes. Furthermore, the neural collapse phenomenon (Papyan et al. (2020)) indicates that as training progresses, the intra-class variation becomes negligible, and features collapse to their class-means. So this dilemma inherently originates from the practice of representing classes by a single vector. The Grassmannian class representation sheds light on this issue, as features of each class are allowed to vary in a high-dimensional subspace without incurring losses in classification.
In the study of the long-tail classification, researchers found that the norm of class representative vectors is highly related to the number of training instances in the corresponding class (Kang et al. (2019)) and the recognition accuracy is affected. To counter this effect, the class representative vector is typically rescaled to unit length during training (Liu et al. (2019)) or re-calibrated in an extra post-processing step (Kang et al. (2019)). In addition to these techniques, the Grassmannian class representation provides a natural and elegant solution, as subspaces are scaleless.
It is well known that the set of k-dimensional linear subspaces forms a Grassmann manifold, so finding the optimal subspace representation for classes is to optimize on the Grassmann manifold. Thus for the second question, the natural solution is to use geometric optimization (Edelman et al. (1998)), which optimizes an objective function under the constraint of a given manifold. Points being optimized move along geodesics instead of following the direction of Euclidean gradients. The preliminary concepts of geometric optimization are reviewed in Section 3, and the technical details of subspace learning are presented in Section 4. We implemented an efficient Riemannian SGD for optimization in the Grassmann manifold, as shown in Algorithm 1, which integrates the geometric optimization algorithms into deep learning frameworks so that both the linear subspaces in the Grassmannian and the model weights in Euclidean space are jointly optimized.
Going back to the first question, we experiment on three concrete tasks in Section 5 to demonstrate the practicality and effectiveness of Grassmannian class representation. We find that (1) Grassmannian class representation improves large-scale image classification accuracy. (2) Grassmannian class representation produces high-quality features that can better transfer to downstream tasks. (3) Grassmannian class representation improves the long-tail classification accuracy. With these encouraging results, we believe that Grassmannian class representation is a promising formulation and more applications may benefit from its attractive features.
2 RELATED WORK
Geometric Optimization Edelman et al. (1998) developed the geometric Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds in their seminal paper. Riemannian SGD was introduced in Bonnabel (2013) with an analysis of convergence, and there are variants such as Riemannian SGD with momentum (Roy et al. (2018)) or adaptivity (Kasai et al. (2019)). Other popular Euclidean optimization methods such as Adam are also studied in the Riemannian manifold context (Becigneul & Ganea (2019)). Lezcano-Casado & Martínez-Rubio (2019) study the special cases of SO(n) and U(n) and use the exponential map to enable Euclidean optimization methods for Lie groups. The idea was generalized into trivialization in Lezcano Casado (2019). Our Riemannian SGD Algorithm 1 is tailored for the Grassmannian, so we have a closed-form equation for geodesics. Other applications of geometric optimization include matrix completion (Mishra & Sepulchre (2014); Li et al. (2015b;a); Nimishakavi et al. (2018)), hyperbolic taxonomy embedding (Nickel & Kiela (2018)), etc. Hamm & Lee (2008) propose the Grassmann discriminant analysis, in which features are modeled as linear subspaces. These applications mostly use shallow models. Zhang et al. (2018) use subspaces to model clusters in unsupervised learning, which shares a similar spirit with our work. Simon et al. (2020) model classes as subspaces in few-shot learning; however, their subspaces are computed from the data matrix rather than explicitly parametrized and learned. Roy et al. (2019) use the Stiefel manifold to construct the Mahalanobis distance matrix in Siamese networks in order to improve feature embeddings in deep metric learning.
Orthogonal Constraints in Deep Learning There are works that enforce orthogonality on weights, which study the regularization effect of orthogonal constraints. In contrast, we use orthogonal matrices as the numerical representation of the geometric object of subspaces and focus on the representation of classes. The approaches to enforcing orthogonality include regularizations (Arjovsky et al. (2016); Xie et al. (2017a); Bansal et al. (2018); Qi et al. (2020); Wang et al. (2020), etc.), geometric constraints (Ozay & Okatani (2018); Harandi & Fernando (2016)) and paraunitary systems (Su et al. (2022)). Orthogonally constrained data is also explored by Huang et al. (2018).
Improving Diversity in Feature Learning The Grassmannian class representation encourages intra-class variation implicitly by providing a subspace in which features can vary. In metric learning, there are efforts to explicitly encourage feature diversity. For example, SoftTriplet Loss (Qian et al. (2019)) models each class as local clusters with several centers. Zhang et al. (2017) use a global orthogonal regularization to encourage local descriptors to spread out in the feature space. Yu et al. (2020) propose to learn low-dimensional structures from the maximal coding rate reduction principle. The subspaces are estimated using PCA on feature vectors after the training. In our formulation, subspaces are directly optimized in the Grassmann manifold during training.
Normalized Classification Weights Normalizing class representative vectors has been found useful in representation learning (Wang et al. (2017; 2018); Deng et al. (2019)) and long-tail classification (Liu et al. (2019); Wang et al. (2021)). However, works such as ArcFace (Deng et al. (2019)) focus on adding an extra margin to suppress intra-class variance. In contrast, our subspace formulation encourages intra-class variation.
3 PRELIMINARIES
In this section, we briefly review the essential concepts in geometric optimization. A detailed exposition can be found in Edelman et al. (1998) and Absil et al. (2009). Given an n-dimensional Euclidean space R^n, the set of k-dimensional linear subspaces forms the Grassmann manifold G(k, n). A computation-friendly representation for a subspace S ∈ G(k, n) is an orthonormal matrix S ∈ R^{n×k}, where S^T S = I_k and I_k is the k × k identity matrix. The columns of the matrix S can be interpreted as an orthonormal basis for the subspace S. The matrix representation is not unique, as right-multiplying by an orthonormal matrix gives a new matrix representing the same subspace. Formally, the Grassmannian is a quotient space of the Stiefel manifold and the orthogonal group, G(k, n) = St(k, n)/O(k), where St(k, n) = {X ∈ R^{n×k} | X^T X = I_k} and O(k) = {X ∈ R^{k×k} | X^T X = I_k}. When the context is clear, we use the notation of the space S and one of its matrix representations S interchangeably. The tangent space of the Grassmann manifold at S consists of all n × k matrices T such that S^T T = 0. Given a function f : G(k, n) → R defined on the Grassmann manifold, the Riemannian gradient of f at a point S ∈ G(k, n) is given by (Edelman et al., 1998, Equ. (2.70)),
∇f(S) = f_S − S S^T f_S, (2)
where f_S is the Euclidean gradient with elements (f_S)_{ij} = ∂f/∂S_{ij}. When performing gradient descent on the Grassmann manifold, suppose the current point is S and the current Riemannian gradient is G; then the next point is the endpoint of S moving along the geodesic toward the tangent G with some step size. The formula of the geodesic is given by (Edelman et al., 1998, Equ. (2.65)),
S(t) = (S V cos(tΣ) + U sin(tΣ)) V^T, (3)
where U Σ V^T = G is the thin singular value decomposition of G.
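To make Equations (2) and (3) concrete, the following NumPy sketch (an illustration of ours, not the authors' released code; all names are ours) projects a Euclidean gradient onto the tangent space and moves a point along the geodesic:

```python
import numpy as np

def riemannian_grad(S, fS):
    """Equation (2): project the Euclidean gradient fS onto the tangent space at S."""
    return fS - S @ (S.T @ fS)

def geodesic_step(S, G, t):
    """Equation (3): move S along the geodesic toward the tangent G with step size t."""
    U, sigma, Vt = np.linalg.svd(G, full_matrices=False)  # thin SVD, G = U diag(sigma) V^T
    return (S @ Vt.T @ np.diag(np.cos(t * sigma)) + U @ np.diag(np.sin(t * sigma))) @ Vt

# A random point on G(4, 16), represented by an orthonormal 16 x 4 matrix.
rng = np.random.default_rng(0)
S, _ = np.linalg.qr(rng.standard_normal((16, 4)))
D = rng.standard_normal((16, 4))  # some Euclidean gradient
S_next = geodesic_step(S, riemannian_grad(S, D), t=0.1)
assert np.allclose(S_next.T @ S_next, np.eye(4), atol=1e-6)  # still on the manifold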
4 LEARNING THE GRASSMANNIAN CLASS REPRESENTATION
Denote the weight of the last fully-connected layer in a classification network by W ∈ R^{n×C} and the bias by b ∈ R^C, where n is the dimension of features and C is the number of classes. The i-th column vector w_i of W is called the i-th class representative vector. The i-th logit is computed as the inner product between a feature x and the class vector (and optionally offset by a bias b_i), namely w_i^T x + b_i. We extend this well-established formula to a multi-dimensional subspace form
l_i := ‖proj_{S_i} x‖, (4)
where S_i ∈ G(k, n) is a k-dimensional subspace in the n-dimensional feature space. We call S_i the i-th class representative space, or class space in short. Comparing the new logit to the standard one, the inner product of the feature x with the class vector is replaced by the norm of the subspace projection proj_{S_i} x, and the bias term is omitted. We found that re-normalizing features to a constant length γ
improves training. Incorporating this, Equation (4) becomes l_i = ‖proj_{S_i} (γ x / ‖x‖)‖. To simplify notation, we assume the feature x has been properly re-normalized throughout this paper unless otherwise specified.
The application of the subspace class representation requires two modifications to an existing network. Firstly, the last fully-connected layer is replaced by its geometric counterpart, which is detailed in Section 4.1. The new geometric layer will transform features to logits using Equation (4). Secondly, the optimizer should be extended to process the new geometric layer simultaneously, which is explained in Section 4.2. Parameters of the geometric layer are optimized using Geometric SGD, while all other parameters are optimized as usual using the standard SGD algorithm.
4.1 GRASSMANNIAN CLASS REPRESENTATION
Suppose for class i, i = 1, 2, . . . , C, its subspace representation is S_i ∈ G(k_i, n), where the dimension k_i is a hyperparameter and is fixed during training. Then the tuple of subspaces (S_1, S_2, . . . , S_C) will be optimized in the product space G(k_1, n) × G(k_2, n) × · · · × G(k_C, n). Denote a matrix instantiation of S_i by S_i ∈ R^{n×k_i}, where the column vectors form an orthonormal basis of S_i; then we concatenate the matrices into a big matrix
S = [S_1 S_2 · · · S_C] ∈ R^{n×(k_1+k_2+···+k_C)}. (5)
The matrix S contains the parameters that are optimized numerically. For a feature x, the product S_i^T x gives the coordinates of proj_{S_i} x under the orthonormal basis formed by the columns of S_i. By the definition in Equation (4), the logit for class i and feature x is computed by
l_i = ‖proj_{S_i} x‖ = ‖S_i^T x‖. (6)
Grassmannian Fully-Connected Layer We can implement a geometric fully-connected layer using the plain old fully-connected layer. The shape of the weight S is n × (k_1 + k_2 + · · · + k_C), as shown in Equation (5). In the forward pass, the input feature is multiplied with the weight matrix to get a temporary vector t = S^T x; then the first element of the output is the norm of the sub-vector (t_1, . . . , t_{k_1}), the second element of the output is the norm of (t_{k_1+1}, t_{k_1+2}, . . . , t_{k_1+k_2}), etc.
Parameter Initialization Each matrix instantiation of the subspace should be initialized as an orthonormal matrix. The geometric optimization algorithm described in Section 4.2 ensures their orthonormality during training. Specifically, for the Grassmannian fully-connected layer, each block S_i of the weight S in Equation (5) is orthonormal. The whole matrix S need not be orthonormal.
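A minimal PyTorch sketch of this layer, combining the forward pass above with the orthonormal initialization (the module name, an equal dimension k for all classes, and the placement of the γ re-normalization are our own choices):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GrassmannianFC(nn.Module):
    """Computes logit_i = ||S_i^T x|| as in Equation (6)."""
    def __init__(self, in_features, num_classes, k, gamma=25.0):
        super().__init__()
        # One orthonormal n x k block per class, concatenated as in Equation (5).
        blocks = [torch.linalg.qr(torch.randn(in_features, k))[0] for _ in range(num_classes)]
        self.S = nn.Parameter(torch.cat(blocks, dim=1))  # to be updated by Riemannian SGD
        self.num_classes, self.k, self.gamma = num_classes, k, gamma

    def forward(self, x):
        x = self.gamma * F.normalize(x, dim=1)  # re-normalize features to length gamma
        t = x @ self.S                          # (B, C*k) projection coordinates
        return t.view(-1, self.num_classes, self.k).norm(dim=2)  # (B, C) logits
```

In a network, this module would replace the final nn.Linear; its parameter must be routed to the Riemannian SGD of Section 4.2 rather than to the vanilla optimizer.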
4.2 OPTIMIZE THE SUBSPACES
Geometric optimization is to optimize functions defined on manifolds. The key step is to find the Riemannian gradient of the loss function and then descend along the geodesic. Here the manifold in question is the Grassmannian G(k, n). As an intuitive example, G(1, 2) consists of all lines through the origin in a two-dimensional plane. We can visualize it as a unit circle where each point on the unit circle represents the line passing through it. Antipodal points represent the same line. To illustrate
Algorithm 1 An Iteration of the Riemannian SGD with Momentum for Grassmannian at Step t
Input: Learning rate γ > 0, momentum µ ∈ [0, 1), Grassmannian weight matrix S^{(t)} ∈ R^{n×k}, momentum buffer M^{(t−1)} ∈ R^{n×k}, Euclidean gradient D ∈ R^{n×k}.
1: Compute the Riemannian gradient G ← (I_n − S S^T) D. ▷ Equation (8)
2: Approximately parallel transport M to the tangent space of the current point S^{(t)} by projection: M ← (I_n − S S^T) M^{(t−1)}. (11)
3: New momentum M^{(t)} ← µ M + G. ▷ PyTorch version
4: Move along the geodesic using Equation (3): if U Σ V^T = M^{(t)} is the thin singular value decomposition, then S^{(t+1)} ← (S^{(t)} V cos(γΣ) + U sin(γΣ)) V^T.
5: (Optional) Re-orthogonalize S^{(t+1)} by QR decomposition. ▷ For numerical stability
how geometric optimization works, we define a toy problem on G(1, 2) that maximizes the norm of the projection of a fixed vector x_0 onto a line through the origin, namely
max_{S∈G(1,2)} ‖proj_S x_0‖. (7)
As shown in Figure 1, we represent S with a unit vector w ∈ S. Suppose at step t the current point is w^{(t)}; then it is easy to compute that the Euclidean gradient at w^{(t)} is d = x_0, and the Riemannian gradient g is the Euclidean gradient d projected onto the tangent space of G(1, 2) at the point w^{(t)}. The next iterate w^{(t+1)} is obtained by moving w^{(t)} along the geodesic toward the direction g. Without geometric optimization, the next iterate would have been at w^{(t)} + γd, jumping outside of the manifold.
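This toy problem can be run numerically in a few lines (a sketch of ours; on G(1, 2) the thin SVD of the 2 × 1 Riemannian gradient reduces to its norm):

```python
import numpy as np

x0 = np.array([[3.0], [1.0]])
w = np.array([[0.0], [1.0]])  # a point on G(1, 2) as a 2 x 1 orthonormal matrix
lr = 0.05
for _ in range(200):
    l = np.linalg.norm(w.T @ x0)
    d = (x0 @ x0.T @ w) / l        # Euclidean gradient, Equation (9)
    g = d - w @ (w.T @ d)          # Riemannian gradient, Equation (8)
    sigma = np.linalg.norm(g)      # thin SVD of a 2 x 1 matrix: U = g/sigma, V = 1
    if sigma < 1e-12:
        break
    w = w * np.cos(lr * sigma) + (g / sigma) * np.sin(lr * sigma)  # geodesic step, Eq. (3)
print(np.linalg.norm(w.T @ x0))    # approaches ||x0|| ~ 3.1623, i.e. the line through x0
```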
The following proposition computes the Riemannian gradient we need. Proposition 1. Let S ∈ R^{n×k} be a matrix instantiation of a subspace S ∈ G(k, n), and let x ∈ R^n be a vector in the Euclidean space; then the Riemannian gradient G of l(S, x) = ‖proj_S x‖ w.r.t. S is
G = (1/l) (I_n − S S^T) x x^T S. (8)
Proof. Rewrite ‖proj_S x‖ = √(x^T S S^T x), and compute the Euclidean derivatives as
∂l/∂S = (1/l) x x^T S, ∂l/∂x = (1/l) S S^T x. (9)
Then Equation (8) follows from Equation (2).
We give a geometric interpretation of Proposition 1. Let w_1 be the unit vector along the direction proj_S x, then expand it to an orthonormal basis of S, say {w_1, w_2, . . . , w_k}. Since the Riemannian gradient is invariant to the matrix instantiation, we can set S = [w_1 w_2 · · · w_k]. Then Equation (8) becomes
G = [(I_n − S S^T) x  0 · · · 0], (10)
since w_i ⊥ x for i = 2, 3, . . . , k and w_1^T x = l. Equation (10) shows that in the single-sample case, only one basis vector w_1 needs to be rotated towards the vector x, where w_1 is the unit vector in S that is closest to x.
Riemannian SGD During training, parameters of non-geometric layers are optimized as usual using the vanilla SGD algorithm. For geometric layers such as the Grassmannian fully-connected layer, their parameters are optimized using the Riemannian SGD algorithm. The pseudo-code of the Riemannian SGD with momentum, which we implemented in our experiments, is described in Algorithm 1. We only show the code for the single-sample, single Grassmannian case. It is trivial to extend it to the batch version and the product of Grassmannians. Note that in step 2, we use projection to approximate the parallel translation of momentum for efficiency, and in step 5 an optional extra orthogonalization can improve numerical stability. The momentum update formula is adapted from the PyTorch implementation of the vanilla SGD. Weight decay does not apply here since spaces are scaleless. Algorithm 1 works together with the vanilla SGD and modifies the gradient from Euclidean to Grassmannian on-the-fly for geometric parameters.
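A condensed single-Grassmannian sketch of this update (our own compression of Algorithm 1; the full batched PyTorch implementation is given in Appendix E, and sign conventions for descent follow the caller's choice of D):

```python
import torch

@torch.no_grad()
def riemannian_sgd_step(S, M, D, lr=0.1, momentum=0.9):
    """S: n x k orthonormal parameter, M: momentum buffer, D: Euclidean gradient."""
    G = D - S @ (S.T @ D)                    # step 1: Riemannian gradient, Equation (8)
    M = momentum * (M - S @ (S.T @ M)) + G   # steps 2-3: transport momentum, then update
    U, sigma, Vt = torch.linalg.svd(M, full_matrices=False)  # step 4: thin SVD
    S = (S @ Vt.T @ torch.diag(torch.cos(lr * sigma))
         + U @ torch.diag(torch.sin(lr * sigma))) @ Vt       # geodesic move, Equation (3)
    S, _ = torch.linalg.qr(S)                # step 5 (optional): re-orthogonalize
    return S, M
```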
5 EXPERIMENT
In this section, we study the influence of the Grassmannian class representation through experiments. Firstly, in Section 5.1, we show that the expressive power of the Grassmannian class representation improves accuracy in large-scale image classification. Secondly, in Section 5.2, we show that the Grassmannian class representation improves feature transferability by allowing larger intra-class variation. Thirdly, in Section 5.3, we demonstrate that the scaleless property of the Grassmannian class representation improves classification accuracy in the long-tail scenario. Additional experiments on hyper-parameter choices and design decisions are presented in Appendix B.
We choose the vanilla softmax loss and the cosine softmax loss (without margin) as baselines since they reflect the current typical class representations. The former uses a plain vector and the latter uses a normalized vector. Other innovations on losses, such as adding margins (Deng et al. (2019)), re-balancing class-wise gradients (Wang et al. (2021)), are orthogonal to our contribution.
5.1 GRASSMANNIAN CLASS REPRESENTATION IMPROVES CLASSIFICATION ACCURACY
We apply the Grassmannian class representation to large-scale classification, where consistent improvement over baselines is shown. We then analyze the characteristics of both the learned features and the learned class subspaces. On the feature representation side, we compare the feature sparsity and intra-class variability. On the class representation side, we visualize the principal angles between any pair of classes, a concept that only appears when classes are Grassmannian.
Experimental Setting We use the ResNet50-D (He et al. (2019)) architecture as the base model, and benchmark on ImageNet-1K (Deng et al. (2009)). ResNet50-D is a slight modification of the original ResNet-50 (He et al. (2016)) with about a 1% improvement in accuracy. ImageNet-1K is a large-scale image classification dataset containing 1.28M training images and 50K validation images in 1000 categories. We set γ = 25 for both the cosine softmax and the Grassmannian class representation. Our method replaces the last fully-connected layer of ResNet50-D with a Grassmannian fully-connected layer. To reduce the number of hyper-parameters, we simply set the subspace dimension k to be the same for all classes. We vary the hyper-parameter k over the values {1, 2, 4, 8, 16}. Since the feature dimension is 2048, the Grassmannian fully-connected layer has the geometry of ∏_{i=1}^{1000} G(k, 2048).
Training Strategy All settings share the same training strategy. Each training includes 100 epochs with total batch size 256 on 8 NVIDIA Tesla V100 GPUs. SGD is used for baselines and Riemannian SGD described in Algorithm 1 is used for Grassmannian class representations. The momentum is 0.9 and the weight decay is 0.0001. The initial learning rate is 0.1 and then follows the cosine learning rate decay. The checkpoint with best validation score is used. The input size is 224× 224 and we use the standard augmentation for ImageNet, namely, random resized crop followed by random horizontal flip. The code is implemented using the mmclassification (MMClassification Contributors (2020)) package, and uses PyTorch as the training backend. Note that to make the number of experiments tractable due to our limited computation resources, we omitted many tricks that has shown to improve representation learning, such as stronger augmentation (Cubuk et al. (2020)), longer training (Wightman et al. (2021)), adding margins (Deng et al. (2019)) etc., and focus on the improvements solely contributed by the Grassmannian formulation.
Feature Norm Regularization We noticed that the norm of the feature (before re-normalization) decreases as training progresses (see Appendix A for details). For example, in the case of k = 16, the average feature norm decreases from 1.051 at epoch 10 to 0.332 at epoch 100. Although the norm of the feature does not affect the inference result, due to the feature re-normalization when computing logits, we empirically find that encouraging the norm to be larger than a constant L improves the training. Specifically, we propose a feature norm regularization loss L_FN,
L_FN = (1/K) Σ_i (1/2) (relu(L − ‖x_i‖))^2, (12)
where x_i is the feature of the i-th sample before normalization and K is the number of features with norm larger than L. In our experiments, L = 1 and the loss is directly added to the softmax loss
with equal weight. We also tried larger values of L, as well as regularizing the feature norm on both sides; however, these degrade the performance.
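A direct PyTorch rendering of Equation (12) (ours; K is taken verbatim from the text above as the count of features with norm above L, with a clamp to avoid division by zero):

```python
import torch

def feature_norm_loss(x, L=1.0):
    """L_FN of Equation (12); x holds pre-normalization features of shape (B, n)."""
    norms = x.norm(dim=1)
    K = (norms > L).sum().clamp(min=1)  # as defined in the text
    return 0.5 * torch.relu(L - norms).pow(2).sum() / K
```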
Results The validation accuracies of different models on ImageNet-1K are listed in Table 1. All models with the Grassmannian class representation achieve higher top-1 and top-5 accuracies than the vanilla softmax and the cosine softmax. A general trend is that, with larger subspace dimension k, the accuracy improvement is greater. When the subspace dimension is 16, the top-1 accuracy is 79.21%, which is 1.17 percentage points higher than the vanilla softmax loss. With feature norm regularization, the top-1 accuracy further improves from 79.12% to 79.37% for dimension 8.
Intra-Class Variability Increases with Dimension The intra-class variability is measured by the mean pair-wise angle (in degrees) between features within the same class, averaged over all classes. The inter-class variability is the average of the mean pair-wise angles between features from different classes. Following the convention in the study of neural collapse (Papyan et al. (2020)), we use the globally centered training features to compute variabilities. Kornblith et al. (2021) showed that alternative objectives that improve accuracy, including label smoothing, dropout, sigmoid, cosine softmax, logit normalization, etc., collapse the intra-class variability in the representation, which in consequence degrades the quality of features on downstream tasks. However, this conclusion does not apply when the classes are modeled by subspaces. The intra-class variability does reduce from the baseline’s 60.12 to the Grassmannian formulation’s 56.52 when the subspace dimension is k = 1; however, as k increases, both the top-1 accuracy and the intra-class variability grow. This indicates that representing classes as subspaces enables the simultaneous improvement of class discriminative power and expansion of intra-class variability.
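These statistics can be computed along the following lines (a sketch of ours, assuming features feats of shape (N, n) with integer labels; classes with fewer than two samples are skipped):

```python
import torch

def mean_pairwise_angle(F):
    """Mean pairwise angle in degrees between the rows of F."""
    F = torch.nn.functional.normalize(F, dim=1)
    cos = (F @ F.T).clamp(-1.0, 1.0)
    iu = torch.triu_indices(len(F), len(F), offset=1)
    return torch.rad2deg(torch.acos(cos[iu[0], iu[1]])).mean()

def intra_class_variability(feats, labels):
    feats = feats - feats.mean(dim=0)  # global centering, per the neural-collapse convention
    angles = [mean_pairwise_angle(feats[labels == c])
              for c in labels.unique() if (labels == c).sum() > 1]
    return torch.stack(angles).mean()
```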
Feature Sparsity The feature sparsity is measured by the average percentage of zero activations on the validation set. As shown in Table 1, the features from the vanilla softmax network are very dense, with only 0.55% zero activations. The cosine softmax and the Grassmannian class representations all result in sparser representations, with around 78% zero activations. The feature norm regularization decreases the sparsity by about a half.
Principal Angles Between Class Representative Spaces When classes are subspaces, relationships between two classes can be measured by k angles called principal angles, which contain richer information than a single angle between two class vectors. The principal angles between two k-dimensional subspaces S and R are recursively defined as (Absil et al. (2006))
cos(θ_i) = max_{s∈S} max_{r∈R} s^T r = s_i^T r_i, s.t. ‖s‖ = ‖r‖ = 1, s^T s_j = r^T r_j = 0, j = 1, . . . , i−1, (13)
for i = 1, . . . , k and θ_i ∈ [0, π/2]. In Figure 2, we illustrate the smallest and largest principal angles between any pair of classes for a model with k = 8. From the figure, we can see that the smallest principal angle reflects class similarity, and the largest principal angle is around π/2. A smaller angle means the two classes are correlated in some directions, and a π/2 angle means that some directions in one class subspace are completely irrelevant (orthogonal) to the other class.
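Numerically, the principal angles are the arccosines of the singular values of S^T R, a standard identity; a sketch:

```python
import torch

def principal_angles(S, R):
    """Principal angles (radians) between subspaces given as orthonormal n x k matrices."""
    sigma = torch.linalg.svdvals(S.T @ R).clamp(0.0, 1.0)  # cos(theta_1) >= ... >= cos(theta_k)
    return torch.acos(sigma)  # ascending: angles[0] is the smallest, angles[-1] the largest
```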
5.2 GRASSMANNIAN CLASS REPRESENTATION IMPROVES FEATURE TRANSFERABILITY
In this section, we compare the linear transferability of the features learned by different models trained on the ImageNet-1K dataset. The feature transfer benchmark includes CIFAR-10 (Krizhevsky et al. (2009)), CIFAR-100 (Krizhevsky et al. (2009)), Food-101 (Bossard et al. (2014)), Oxford-IIIT Pets (Parkhi et al. (2012)), Stanford Cars (Krause et al. (2013)), and Oxford 102 Flowers (Nilsback & Zisserman (2008)). For each of the transfer datasets, we use the same trained models as in Table 1 to extract features. Then all features are normalized to unit length. We fit a linear SVM with a one-vs-rest multi-class policy on the training set, and report the accuracies on the test set. The regularization hyper-parameter for the SVM is grid searched with candidates [0.1, 0.2, 0.5, 1, 2, 5, 10, 15, 20] and determined by five-fold cross-validation on the training set.
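The protocol corresponds roughly to the following scikit-learn pipeline (a sketch of our reading of the setup; variable names are ours):

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

def transfer_accuracy(train_x, train_y, test_x, test_y):
    train_x = train_x / np.linalg.norm(train_x, axis=1, keepdims=True)  # unit-length features
    test_x = test_x / np.linalg.norm(test_x, axis=1, keepdims=True)
    grid = {"C": [0.1, 0.2, 0.5, 1, 2, 5, 10, 15, 20]}
    svm = GridSearchCV(LinearSVC(), grid, cv=5)  # one-vs-rest is LinearSVC's default policy
    svm.fit(train_x, train_y)
    return svm.score(test_x, test_y)
```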
Results As shown in Table 2, the cosine softmax and the Grassmannian with subspace dimension k = 1 have comparable transfer performance, but both are lower than the vanilla softmax. However, when the subspace dimension increases, the transfer performance gradually improves, and when k = 16, the transfer performance is on par with the vanilla softmax. The feature norm regularization improves the transfer quality, as shown in the k = 1, 8 cases. We hypothesize that this might relate to the fact that features with norm regularization are less sparse, so more information is encoded.
Class Separation The class separation is measured by the index R², which is defined as one minus the ratio of the average intra-class cosine distance to the overall average cosine distance (Kornblith et al., 2021, Eq. (11)). Kornblith et al. (2021) found that greater class separation R² is associated with less transferable features. This may explain the feature transfer performance of the Grassmannian class representations. The vanilla softmax has lower separation (0.495) compared to the cosine softmax (0.528) and the Grassmannian class representation with subspace dimension k = 1 (0.534). From subspace dimension k = 1 to k = 16, the separation of the Grassmannian models decreases from a high value (0.534) to a low value (0.395). The change in class separation is roughly in line with the change in transfer performance.
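A sketch of the R² index from its verbal description above (our rendering, not Kornblith et al.'s exact code):

```python
import torch

def class_separation(feats, labels):
    """R^2 = 1 - (mean intra-class cosine distance) / (mean overall cosine distance)."""
    F = torch.nn.functional.normalize(feats, dim=1)
    dist = 1.0 - F @ F.T                                   # pairwise cosine distances
    off_diag = ~torch.eye(len(F), dtype=torch.bool)
    same = (labels[:, None] == labels[None, :]) & off_diag
    return 1.0 - dist[same].mean() / dist[off_diag].mean()
```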
5.3 SCALELESS OF SUBSPACE IMPROVES LONG-TAIL RECOGNITION
We benchmark its effectiveness in long-tail classification using the ImageNet-LT dataset (Liu et al. (2019)). ImageNet-LT is a subset of ImageNet-1K, where the number of images per class ranges from 5 to 1280. There are totally 115.8K images, roughly 1/10 the size of ImageNet-1K. We use the same ResNet50-D networks as in Section 5.1. All training settings, including the optimizer, augmentation, and initial learning rate, are kept the same, except that we modify the total epochs to 200 and the learning rate is decayed by 1/10 at epochs 150, 180, and 195. The last checkpoint is used for evaluation. We use instance-balanced sampling, as it was reported by Kang et al. (2019) that class-balanced sampling and square-root sampling both degrade the performance.
We report the top-1 accuracies on the test set in Table 3. We find that both the cosine softmax and the Grassmannian class representation with small subspace dimension improve the long-tail classification accuracy. Specifically, the cosine softmax is 1.62% higher in score compared to the vanilla softmax, and the Grassmannian class representation with subspace dimension k = 1 is 2.11% higher in score compared to the vanilla softmax. However, when the subspace dimension increases, the accuracy drops. We notice that for few-shot classes, there are not enough samples to learn a good higher-dimensional subspace for their representation, as the accuracy of few-shot classes degrades significantly when the dimension is large. Too little training data for a class is an example scenario where a larger dimension does not offer much help.
6 LIMITATION AND FUTURE DIRECTION
One problem that remains open is how to choose the optimal dimension. Currently, we treat it as a hyper-parameter and decide it through experiments. On the computational side, geometric optimization incurs some overhead since it involves an SVD decomposition. This might hinder the training speed when k is very large. The Grassmannian class representation allows for greater intra-class variability, but we did not explicitly promote the intra-class variability in any form. It would be very interesting to explore ways to explicitly encourage intra-class variability. For example, a potential way is to combine it with self-supervised learning. We hope our work will stimulate progress in these directions.
7 CONCLUSION
In this work, we proposed to use linear subspaces as the class prototype in deep neural networks. The geometric structures of the related Grassmannian fully-connected layer and the Grassmannian convolutional layer are products of Grassmannians. We optimize the subspaces using geometric optimization and provide an efficient Riemannian SGD implementation tailored for Grassmannians. We apply the new formulation to large-scale image classification, feature transfer, and long-tail classification tasks. Experiments demonstrate that the new Grassmannian class representation is able to improve performance in these settings.
A TECHNICAL DETAILS
Alternative Implementation of Riemannian SGD Step 4 of Algorithm 1 is called a retraction in geometric optimization. There are alternative implementations of retraction other than moving parameters along the geodesic. For example, replace step 4 with the Euclidean gradient update, followed by re-orthogonalization via QR decomposition in step 5. The subspace parameter may move away from the Grassmannian after the Euclidean gradient update, but it will be pulled back to the manifold after the QR re-orthogonalization (see Absil et al. (2009, Equ. (4.11)) for details). For ease of reference, we call this version of Riemannian SGD the “Algorithm 1 variant”. We compare the two implementations in the first two rows of Table 4. The results show that the Grassmannian class representation is effective with both versions of Riemannian SGD.
Necessity of Grassmannian Formulation and Geometric Optimization To show the necessity of constraining the subspace parameters to lie in the Grassmannian, we replace the Riemannian SGD with the vanilla SGD and compare the two. Note that with SGD, the logit formula ‖S_i^T x‖ no longer means the projection norm because S_i is not orthogonal anymore. The result is shown in the third row of Table 4, from which we observe a significant performance drop for the unconstrained setting.
Numerical Stability of Algorithm 1 The numerical stability issue is caused by the accumulation of tiny computational errors of Equation (3). After many iterations, the resultant matrix S might not be perfectly orthogonal. For example, after 100, 1000, and 5000 iterations of the Grassmannian ResNet50-D with subspace dimension k = 8, we observed that the error max_i ‖S_i^T S_i − I‖_∞ is 1.9e-5, 9.6e-5, and 3.7e-4, respectively. After 50 epochs, the error accumulates to 0.0075. One can run step 5 every 100 iterations to keep the error at a low level, and the computational cost is negligible. For this reason, we marked this step as “optional”.
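The drift can be monitored, and corrected by step 5, along these lines (a sketch; S is the concatenated weight of Equation (5) with equal k, and the error is the elementwise maximum of |S_i^T S_i − I|, matching the reported numbers):

```python
import torch

@torch.no_grad()
def orthogonality_error(S, num_classes, k):
    """max over classes of the largest entry of |S_i^T S_i - I|."""
    blocks = S.view(S.size(0), num_classes, k).permute(1, 0, 2)  # (C, n, k)
    gram = blocks.transpose(1, 2) @ blocks                       # (C, k, k)
    return (gram - torch.eye(k, device=S.device)).abs().amax()

@torch.no_grad()
def reorthogonalize_(S, num_classes, k):
    for i in range(num_classes):                                 # step 5, per class block
        S[:, i * k:(i + 1) * k] = torch.linalg.qr(S[:, i * k:(i + 1) * k])[0]
```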
Decreasing Feature Norm During Training We show the change of the average norm on the validation set of ImageNet from epoch 10 to epoch 100 in Figure 3. The subspace dimension is k = 16.
B HYPER-PARAMETERS AND DESIGN DECISIONS
Choice of Gamma We use γ = 25 throughout the main text. Here we give more results with different choices of γ for subspace dimension k = 8 in Table 5. Because we conducted this set of experiments in the early exploration stage, the learning rate decay policy is to divide by 10 at epochs 30, 60, and 90, which is different from our main results using the cosine learning rate schedule. The top-1 accuracy is slightly lower than the cosine learning rate counterpart. Other training settings such as augmentation are the same as in Table 1.
Importance of Re-Normalizing Features Re-normalizing the features is critical to effectively learning the class representative subspaces. Below we provide training results without feature re-normalization in Table 6. There is a significant performance drop without re-normalization. For reference, the cosine softmax also requires feature re-normalization for effective learning.
Importance of Joint Training Jointly training the subspaces and the features is essential. To support this claim, we add an experiment that only fine-tunes the class subspaces from weights pre-trained using the regular softmax (third row of Table 7). For comparison, we also add another experiment that fine-tunes all parameters (fourth row of Table 7). We find that if the features are fixed, changing the regular fc to the geometric version does not increase performance noticeably (top-1 from 78.04% to 78.14%). But when all parameters are free to learn, the pre-trained weights are a better initialization than the random initialization (top-1 from 79.12% to 79.44%).
More Results of FN We present more results using the feature norm regularization trick in Table 8. From the results, we observe that FN also works for the baseline cosine softmax. For Grassmannian + FN, the performance peaks at dimension k = 8 and then decreases at k = 16.
Stronger Augmentation Improves Accuracy Generally speaking, stronger augmentation mitigates the overfitting problem and benefits models with larger capacity. To demonstrate the effect of stronger augmentations, we run experiments using RandAug (Cubuk et al. (2020)) in Table 9. We can see that stronger augmentation indeed further increases the accuracy. Together with longer training and SyncBN, the top-1 accuracy for ResNet50-D reaches 80.17%.
C MORE BASELINES
We have compared the proposed method with the vanilla softmax and the cosine softmax in the main text. In this section, we compare with baselines that use the same amount of parameters, and run experiments on different network structures.
Multi-FC We add multiple classification fc layers to the network. During training, these independent fcs are trained side by side, and their losses are averaged. During testing, the logits are first averaged and then passed through softmax to output the prediction probability.
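A sketch of this baseline head (the module name and the number of heads are our own assumptions):

```python
import torch
import torch.nn as nn

class MultiFCHead(nn.Module):
    """Independent classification fcs: losses averaged in training, logits averaged at test."""
    def __init__(self, in_features, num_classes, num_heads=8):
        super().__init__()
        self.fcs = nn.ModuleList(nn.Linear(in_features, num_classes) for _ in range(num_heads))

    def forward(self, x):
        logits = [fc(x) for fc in self.fcs]
        if self.training:
            return logits                        # the caller averages the per-head losses
        return torch.stack(logits).mean(dim=0)   # average logits, then softmax downstream
```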
SoftTriple In the SoftTriple loss (Qian et al. (2019)), each class is modeled by multiple centers. The logit is a weighted average of logits computed from individual class centers. We adapted the official code into our codebase to train on the ImageNet dataset. The recommended parameters are used. Specifically, λ = 20, γ = 0.1, τ = 0.2 and δ = 0.01.
For the above two settings, we use the same training protocols as in Table 1. Results are shown in Table 10, from which we find that the Grassmannian class representation is the most effective one.
More Architectures We show experiments on ResNet101-D and ResNeXt (Xie et al. (2017b)) in Table 11. The training settings are the same as in Table 1, namely, we use the standard augmentation, cosine learning rate schedule, and train for 100 epochs. The results show that our formulation is effective across different model architectures.
D TRAINING SPEED AND SVD SPEED
During inference, the computational cost is k times that of the vanilla softmax. Since it is mostly matrix multiplication, GPU acceleration can speed it up even further. For example, on a V100 GPU, the average time of multiplying a 1000 × 2048 matrix with a 2048-dimensional vector is 20 ± 2.9 µs, while multiplying an 8000 × 2048 matrix with a 2048-dimensional vector takes about 105 ± 7.6 µs. The cost is negligible compared to the network forward time.
During training, the most costly operation in Algorithm 1 is the SVD. We measure the actual iteration time during training in Table 12. We observe that when k is small, it is as fast as the vanilla softmax. When k = 8, the full training needs roughly 1.7x the time of the vanilla softmax (this can be reduced greatly with a newer version of PyTorch, as we discuss below).
Since the release of PyTorch 1.13, the fast approximate SVD algorithm GESVDA has been supported. We saw a great speed improvement in the cases of k = 8 and k = 16. The benchmark times are shown in Table 13. With computational optimizations such as this, we expect the computational cost of the SVD to be minimal for k ≤ 32.
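Timings of this kind can be reproduced with a snippet along the following lines (ours; it assumes a CUDA device, and absolute numbers depend on hardware and PyTorch version):

```python
import time
import torch

def time_batched_svd(num_classes=1000, n=2048, k=8, reps=50, device="cuda"):
    M = torch.randn(num_classes, n, k, device=device)  # one gradient block per class
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(reps):
        torch.linalg.svd(M, full_matrices=False)       # batched thin SVD, as in Algorithm 1
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / reps        # seconds per batched SVD
```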
E PYTORCH CODE FOR RIEMANNIAN SGD
We provide a sample implementation of Algorithm 1 in Figure 4 using PyTorch (Paszke et al. (2019)). The sample code checks whether a parameter is geometric by checking whether it has a ‘geometry’ attribute. If not, it runs the original SGD on that parameter. If the ‘geometry’ property is not None, then it is a list of numbers indicating the dimensions of the class representative subspaces for all classes. If all the dimensions are the same, it goes to the batch version (line 23 of the code in Figure 4). Otherwise, it goes to the for-loop version (line 46 of the code in Figure 4). | 1. What is the main contribution of the paper regarding the learning formulation?
2. What are the strengths of the proposed approach, particularly in its presentation and technical solidity?
3. What are the weaknesses of the paper, especially regarding its experimental validation and novelty?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The overall goal of the submission is a learning formulation that simultaneously models inter-class discrimination while promoting intra-class variation in classification. To this end, it considers a linear subspace approach that is scaleless and thus, in theory, more suitable for long-tail classification than the vector counterpart. Since the set of subspaces forms a Grassmann manifold, the submission replaces the fully connected layer in deep networks with a geometric one and optimizes it with Riemannian SGD. The method is validated on various benchmarks where it is shown to improve the vanilla baseline on transfer learning as well as long-tail classification tasks.
Strengths And Weaknesses
Strength:
-Presentation: The submission is easy to read and follow. It is well written with intuitions provided where necessary. The problem is well motivated and contextualized in the broader scientific context.
-Technically solid and grounded work.
-Interesting empirical results wrt baselines considered in the submission and thus a promising direction.
Weakness:
Technical novelty, from a deep (Riemannian) manifold learning perspective, is somewhat marginal.
Experimental validation is heavily centered around Grassmannian baselines. While this submission cites several works that explicitly encourage/promote intra-class variability, comparison with such baselines is completely missing.
While the problem of promoting intra-class variability is of great interest in deep learning, the proposed method does not explicitly model it as such. I concede it is not a strong weakness, and based on the empirical findings in this work, future work can address this explicitly.
Clarity, Quality, Novelty And Reproducibility
-The paper is well written and easy to read. Related work is fairly covered.
The idea of replacing the fully connected layer with a geometric layer, and the resulting impact on transfer learning and long-tail classification, is an interesting technical contribution.
The authors have promised to release the code for reproducibility and have also provided enough technical details in the submission. |
ICLR | Title
Grassmannian Class Representation in Deep Learning
Abstract
We generalize the class representative vector found in deep classification networks to linear subspaces and show that the new formulation enables the simultaneous enhancement of the inter-class discrimination and intra-class feature variation. Traditionally, the logit is computed by the inner product between a feature and the class vector. In our modeling, classes are subspaces and the logit is defined as the norm of the projection from a feature onto the subspace. Since the set of subspaces forms Grassmann manifolds, finding the optimal subspace representation for classes is to optimize the loss on a Grassmannian. We integrate the Riemannian SGD into existing deep learning frameworks such that the class subspaces in a Grassmannian are jointly optimized with other model parameters in Euclidean. Compared to the vector form, subspaces have two appealing properties: they can be multi-dimensional and they are scaleless. Empirically, we reveal that these distinct characteristics improve various tasks. (1) Image classification. The new formulation brings the top-1 accuracy of ResNet50-D on ImageNet-1K from 78.04% to 79.37% using the standard augmentation in 100 training epochs. This confirms that the representative capability of subspaces is more powerful than vectors. (2) Feature transfer. Subspaces provide freedom for features to vary and we observed that the intra-class variability of features increases when the subspace dimensions are larger. Consequently, the quality of features is better for downstream tasks. The average transfer accuracy across 6 datasets improves from 77.98% to 80.12% compared to the strong baseline of vanilla softmax. (3) Long-tail classification. The scaleless property of subspaces benefits classification in the long-tail scenario and improves the accuracy of ImageNet-LT from 46.83% to 48.94% compared to the standard formulation. With these encouraging results, we believe that more applications could benefit from the Grassmannian class representation. Codes will be released.
1 INTRODUCTION
The idea of representing classes as linear subspaces in machine learning can be dated back, at least, to 1973 (Watanabe & Pakvasa (1973)), yet it is mostly ignored in the current deep learning literature. In this paper, we revisit the scheme of representing classes as linear subspaces in the deep learning context. To be specific, each class i is associated with a linear subspace Si, and for any feature vector x, the i-th class logit is defined as the norm of projection
li := ∥∥projSix∥∥ . (1)
Since a subspace is a point in the Grassmann manifold (Absil et al. (2009)), we call this formulation the Grassmannian class representation. In the following, we answer the two critical questions,
1. Is Grassmannian class representation useful in real applications?
2. How to optimize the subspaces in training?
The procedure fully-connected layer → softmax → cross-entropy loss is the standard practice in deep classification networks. Each column of the weight matrix of the fullyconnected layer is called the class representative vector and serves as a prototype for one class. This representation of class has achieved huge success, yet it is not without imperfections.
In the study of transferable features, researchers noticed a dilemma that representations with higher classification accuracy on the original task lead to less transferable features for downstream tasks (Kornblith et al. (2021); Müller et al. (2019)). This is connected to the fact that they tend to collapse intra-class variability of representations, resulting in loss of information in the logits about the resemblances between instances of different classes. Furthermore, the neural collapse phenomenon (Papyan et al. (2020)) indicates that as training progresses, the intra-class variation becomes negligible, and features collapse to their class-means. So this dilemma inherently originates from the practice of representing classes by a single vector. The Grassmannian class representation shed light on this issue as features of each class are allowed to vary in a high-dimensional subspace without incurring losses in classification.
In the study of the long-tail classification, researchers found that the norm of class representative vectors is highly related to the number of training instances in the corresponding class (Kang et al. (2019)) and the recognition accuracy is affected. To counter this effect, the class representative vector is typically been rescaled to unit length during training (Liu et al. (2019)) or re-calibrated in an extra post-processing step (Kang et al. (2019)). In addition to these techniques, the Grassmannian class representation provides a natural and elegant solution for this as subspace is scaleless.
It is well known that the set of k-dimensional linear subspaces form a Grassmann manifold, so finding the optimal subspace representation for classes is to optimize on the Grassmann manifold. Thus for the second question, the natural solution is to use the geometric optimization (Edelman et al. (1998)), which optimizes an objective function under the constraint of a given manifold. Points being optimized are moving along geodesics instead of following the direction of Euclidean gradients. The preliminary concepts of geometric optimization are reviewed in Section 3, and the technical details of subspace learning are presented in Section 4. We implemented an efficient Riemannian SGD for optimization in Grassmann manifold as shown in Algorithm 1, which integrates the geometric optimization algorithms to deep learning frameworks so that both the linear subspaces in Grassmannian and model weights in Euclidean are jointly optimized.
Going back to the first question, we experiment on three concrete tasks in Section 5 to demonstrate the practicality and effectiveness of Grassmannian class representation. We find that (1) Grassmannian class representation improves large-scale image classification accuracy. (2) Grassmannian class representation produces high-quality features that can better transfer to downstream tasks. (3) Grassmannian class representation improves the long-tail classification accuracy. With these encouraging results, we believe that Grassmannian class representation is a promising formulation and more applications may benefit from its attractive features.
2 RELATED WORK
Geometric Optimization Edelman et al. (1998) developed the geometric Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds in their seminal paper. Riemannian SGD was introduced in Bonnabel (2013) with an analysis on convergence and there are variants such as Riemannian SGD with momentum (Roy et al. (2018)) or adaptive (Kasai et al. (2019)). Other popular Euclidean optimization methods such as Adam are also studied in the Riemannian manifold context (Becigneul & Ganea (2019)). Lezcano-Casado & Martınez-Rubio (2019) study the special case of SO(n) and U(n) and uses the exponential map to enable Euclidean optimization methods for Lie groups. The idea was generalized into trivialization in Lezcano Casado (2019). Our Riemannian SGD Algorithm 1 is tailored for Grassmannian, so we have a closed-form equation for geodesics. Other applications of geometric optimization include matrix completion (Mishra & Sepulchre (2014); Li et al. (2015b;a); Nimishakavi et al. (2018)), hyperbolic taxonomy embedding (Nickel & Kiela (2018)), etc. Hamm & Lee (2008) propose the Grassmann discriminant analysis, in which features are modeled as linear subspaces. These applications are mostly using shallow models. Zhang et al. (2018) use subspaces to model clusters in unsupervised learning, which share similar spirit with our work. Simon et al. (2020) model classes as subspaces in few-shot learning, however, their subspaces are computed from data matrix rather than explicitly parametrized and learned. Roy et al. (2019) use Stiefel manifold to construct Mahalanobis distance matrix in Siamese networks in order to improve feature embeddings of deep metric learning.
Orthogonal Constraints in Deep Learning There are works that enforce orthogonality on weights, which study the regularization effect of orthogonal constraints. Contrastingly, we used orthogonal matrices as the numerical representation of the geometry object of subspaces and focus on the representation of classes. The approaches of enforcing orthogonality include regularizations (Arjovsky et al. (2016); Xie et al. (2017a); Bansal et al. (2018); Qi et al. (2020); Wang et al. (2020), etc.), geometric constraints (Ozay & Okatani (2018); Harandi & Fernando (2016)) and paraunitary systems (Su et al. (2022)). Orthogonally constrained data is also explored by Huang et al. (2018).
Improving Diversity in Feature Learning Grassmannian class representation encourages the intra-class variation implicitly by providing a subspace to vary. In metric learning, there are efforts to explicitly encourage feature diversity. For example, SoftTriplet Loss (Qian et al. (2019)) models each class as local clusters with several centers. Zhang et al. (2017) use a global orthogonal regularization to encourage local descriptors to spread out in the features space. Yu et al. (2020) propose to learn low-dimensional structures from the maximal coding rate reduction principle. The subspaces are estimated using PCA on feature vectors after the training. In our formulation, subspaces are directly optimized in the Grassmann manifold during training.
Normalized Classification Weights Normalizing class representative vectors has been found useful in representation learning (Wang et al. (2017; 2018); Deng et al. (2019)) and long-tail classification (Liu et al. (2019); Wang et al. (2021)). However, works such as ArcFace (Deng et al. (2019)) focus on adding an extra margin to suppress intra-class variance. In contrast, our subspace formulation encourages intra-class variation.
3 PRELIMINARIES
In this section, we briefly review the essential concepts in geometric optimization. Detailed exposition can be found in Edelman et al. (1998) and Absil et al. (2009). Given an n-dimensional Euclidean space Rn, the set of k-dimensional linear subspaces forms the Grassmann manifold G(k, n). A computational-friendly representation for subspace S ∈ G(k, n) is an orthonormal matrix S ∈ Rn×k, where STS = Ik and Ik is the k × k identity matrix. Columns of matrix S can be interpreted as an orthonormal basis for the subspace S. The matrix representation is not unique, as right multiplying by an orthonormal matrix will get a new matrix representing the same subspace. Formally, Grassmannian is a quotient space of the Stiefel manifold and the orthogonal group G(k, n) = St(k, n)/O(k), where St(k, n) = {X ∈ Rn×k|XTX = Ik} and O(k) = {X ∈ Rk×k|XTX = Ik}. When the context is clear, we use the notation of space S and one of its matrix representations S interchangeably. The tangent space of the Grassmann manifold at S consists of all n× k matrices T such that STT = 0. Given a function f : G(k, n)→ R defined on the Grassmann manifold, the Riemannian gradient of f at point S ∈ G(k, n) is given by (Edelman et al., 1998, Equ. (2.70)),
∇f(S) = fS − SST fS , (2)
where fS is the Euclidean gradient with elements (fS)ij = ∂f∂Sij . When performing gradient descend on the Grassmann manifold, and suppose the current point is S and the current Riemannian gradient is G, then the next point is the endpoint of S moving along the geodesic toward the tangent G with some step size. The formula of the geodesic is given by (Edelman et al., 1998, Equ. (2.65)),
S(t) = (SV cos(tΣ) + U sin(tΣ))V T , (3)
where UΣV T = G is the thin singular value decomposition of G.
4 LEARNING THE GRASSMANNIAN CLASS REPRESENTATION
Denote the weight of the last fully-connected layer in a classification network by W ∈ Rn×C and the bias by b ∈ RC , where n is the dimension of features and C is the number of classes. The i-th column vector wi of W is called the i-th class representative vector. The i-th logit is computed as the inner product between a feature x and the class vector (and optionally offset by a bias bi), namely wTi x + bi. We extend this well-established formula to a multi-dimensional subspace form
li := ∥∥projSix∥∥ , (4)
where Si ∈ G(k, n) is a k-dimensional subspace in the n-dimensional feature space. We call Si the i-th class representative space, or class space in short. Comparing the new logit to the standard one, the inner product of feature x with class vector is replaced by the norm of the subspace projection projSix and the bias term is omitted. We found that re-normalizing features to a constant length γ
improves training. Incorporating this, Equation (4) becomes ∥∥∥projSi γx‖x‖∥∥∥. To simplify notation, we assume feature x has been properly re-normalized throughout this paper unless otherwise specified.
The application of the subspace class representation requires two modifications to an existing network. Firstly, the last fully-connected layer is replaced by its geometric counterpart, which is detailed in Section 4.1. The new geometric layer will transform features to logits using Equation (4). Secondly, the optimizer should be extended to process the new geometric layer simultaneously, which is explained in Section 4.2. Parameters of the geometric layer are optimized using Geometric SGD, while all other parameters are optimized as usual using the standard SGD algorithm.
4.1 GRASSMANNIAN CLASS REPRESENTATION
Suppose for class i, i = 1, 2, . . . , C, its subspace representation is Si ∈ G(ki, n), where the dimension ki is a hyperparameter and is fixed during training. Then the tuple of subspaces (S1, S2, . . . , SC) will be optimized in the product space G(k1, n)×G(k2, n)×· · ·×G(kC , n). Denote a matrix instantiation of Si as Si ∈ Rn×k, where the column vectors form an orthonormal basis Si, then we concatenate the matrices into a big matrix
S = [S1 S2 · · · SC ] ∈ Rn×(k1+k2+···+kC). (5)
The matrix S contains the parameters that are optimized numerically. For feature x, the product STi x gives the coordinate of projSix under the orthonormal basis formed by the columns of Si. By definition in Equation (4), the logit for class i and feature x is computed by
li = ∥∥projSix∥∥ = ∥∥STi x∥∥ . (6)
Grassmannian Fully-Connected Layer We can implement a geometric fully-connected layer using the plain old fully-connected layer. The shape of the weight S is n× (k1 + k2 + · · ·+ kC), as shown in Equation (5). In the forward pass, the input feature is multiplied with the weight matrix to get a temporary vector t = STx, then the first element of the output is the norm of the sub-vector (t1, . . . , tk1), and the second element of the output is the norm of (tk1+1, tk1+2, . . . , tk1+k2), etc.
Parameter Initialization Each matrix instantiation of the subspace should be initialized as an orthonormal matrix. The geometric optimization algorithm described in Section 4.2 ensures their orthonormality during training. Specifically, for Grassmannian fully-connected layer, each block Si of the weight S in Equation (5) is orthonormal. The whole matrix S needs not be orthonormal.
4.2 OPTIMIZE THE SUBSPACES
Geometric optimization is to optimize functions defined on manifolds. The key step is to find the Riemannian gradient of the loss function and then descend along the geodesic. Here the manifold in concern is the Grassmannian G(k, n). As an intuitive example, G(1, 2) consists of all lines through the origin in a two-dimensional plane. We can visualize it as a unit circle where each point on the unit circle represents the line passing through it. Antipodal points represent the same line. To illustrate
Algorithm 1 An Iteration of the Riemannian SGD with Momentum for Grassmannian at Step $t$

Input: Learning rate $\gamma > 0$, momentum $\mu \in [0, 1)$, Grassmannian weight matrix $S^{(t)} \in \mathbb{R}^{n \times k}$, momentum buffer $M^{(t-1)} \in \mathbb{R}^{n \times k}$, Euclidean gradient $D \in \mathbb{R}^{n \times k}$.

1: Compute the Riemannian gradient $G \leftarrow (I_n - SS^T)D$. ▷ Equation (8)
2: Approximately parallel transport $M$ to the tangent space of the current point $S^{(t)}$ by projection: $M \leftarrow (I_n - SS^T)M^{(t-1)}$. (11)
3: Update the momentum: $M^{(t)} \leftarrow \mu M + G$. ▷ PyTorch version
4: Move along the geodesic using Equation (3): if $U\Sigma V^T = M^{(t)}$ is the thin singular value decomposition, then $S^{(t+1)} \leftarrow \left(S^{(t)}V\cos(\gamma\Sigma) + U\sin(\gamma\Sigma)\right)V^T$.
5: (Optional) Re-orthogonalize $S^{(t+1)}$ by QR decomposition. ▷ For numerical stability
how geometric optimization works, we define a toy problem on G(1, 2) that maximizes the norm of the projection of a fixed vector $x_0$ onto a line through the origin, namely
$$\max_{\mathcal{S} \in G(1,2)} \left\| \mathrm{proj}_{\mathcal{S}}\, x_0 \right\|. \quad (7)$$
As shown in Figure 1, we represent $\mathcal{S}$ by a unit vector $w \in \mathcal{S}$. Suppose at step $t$ the current point is $w^{(t)}$; it is easy to compute that the Euclidean gradient at $w^{(t)}$ is $d = x_0$, and the Riemannian gradient $g$ is the Euclidean gradient $d$ projected onto the tangent space of G(1, 2) at $w^{(t)}$. The next iterate $w^{(t+1)}$ is obtained by moving $w^{(t)}$ along the geodesic in the direction $g$. Without geometric optimization, the next iterate would have been $w^{(t)} + \gamma d$, jumping off the manifold.
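As a concrete illustration, a small NumPy sketch of this toy problem follows; since (7) is a maximization, the iterate ascends along g. The sketch is ours and purely illustrative.

```python
import numpy as np

x0 = np.array([3.0, 4.0])
w = np.array([1.0, 0.0])   # unit vector representing a line in G(1, 2)
lr = 0.1

for _ in range(100):
    d = np.sign(w @ x0) * x0          # Euclidean gradient of |w^T x0|
    g = d - w * (w @ d)               # project onto the tangent space at w
    norm_g = np.linalg.norm(g)
    if norm_g < 1e-12:                # g vanishes at the optimum
        break
    # Geodesic (great-circle) step; w stays on the unit circle since w is
    # orthogonal to g.
    w = w * np.cos(lr * norm_g) + (g / norm_g) * np.sin(lr * norm_g)

print(w, abs(w @ x0))   # w -> ±x0 / ||x0||, projection norm -> ||x0|| = 5
```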
The following proposition computes the Riemannian gradient we need.
Proposition 1. Let $S \in \mathbb{R}^{n \times k}$ be a matrix instantiation of a subspace $\mathcal{S} \in G(k, n)$, and let $x \in \mathbb{R}^n$ be a vector in Euclidean space. Then the Riemannian gradient $G$ of $l(S, x) = \|\mathrm{proj}_{\mathcal{S}} x\|$ w.r.t. $S$ is
$$G = \frac{1}{l}(I_n - SS^T)xx^TS. \quad (8)$$
Proof. Rewrite $\|\mathrm{proj}_{\mathcal{S}} x\| = \sqrt{x^T S S^T x}$, and compute the Euclidean derivatives as
$$\frac{\partial l}{\partial S} = \frac{1}{l}xx^TS, \qquad \frac{\partial l}{\partial x} = \frac{1}{l}SS^Tx. \quad (9)$$
Then Equation (8) follows from Equation (2).
We give a geometric interpretation of Proposition 1. Let $w_1$ be the unit vector along the direction of $\mathrm{proj}_{\mathcal{S}} x$, and expand it to an orthonormal basis $\{w_1, w_2, \dots, w_k\}$ of $\mathcal{S}$. Since the Riemannian gradient is invariant to the matrix instantiation, we can set $S = [w_1 \; w_2 \; \cdots \; w_k]$. Then Equation (8) becomes
$$G = \left[(I_n - SS^T)x \;\; 0 \;\; \cdots \;\; 0\right], \quad (10)$$
since $w_i \perp x$ for $i = 2, 3, \dots, k$ and $w_1^T x = l$. Equation (10) shows that in the single-sample case, only one basis vector $w_1$ needs to be rotated towards the vector $x$, where $w_1$ is the unit vector in $\mathcal{S}$ that is closest to $x$.
Riemannian SGD During training, parameters of non-geometric layers are optimized as usual using the vanilla SGD algorithm. For geometric layers such as the Grassmannian fully-connected layer, the parameters are optimized using the Riemannian SGD algorithm. Pseudo-code of the Riemannian SGD with momentum that we implemented in our experiments is given in Algorithm 1. We only show the single-sample, single-Grassmannian case; extending it to the batch version and to products of Grassmannians is trivial. Note that in step 2 we use projection to approximate the parallel transport of the momentum for efficiency, and in step 5 an optional extra orthogonalization can improve numerical stability. The momentum update formula is adapted from the PyTorch implementation of vanilla SGD. Weight decay does not apply here since subspaces are scaleless. Algorithm 1 works together with vanilla SGD and modifies the gradient from Euclidean to Grassmannian on the fly for geometric parameters.
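For reference, a minimal PyTorch sketch of one such update for a single (n, k) Grassmannian weight is given below. It follows Algorithm 1, except that the momentum buffer accumulates descent directions (the negated Riemannian gradient), matching how a loss is minimized in practice; the function and argument names are ours.

```python
import torch


@torch.no_grad()
def grassmannian_sgd_step(S, D, buf, lr=0.1, momentum=0.9):
    """One iteration of Algorithm 1 for an (n, k) Grassmannian weight S.

    D is the Euclidean gradient of the loss w.r.t. S; `buf` is the
    momentum buffer M. S and `buf` are updated in place.
    """
    proj = torch.eye(S.shape[0], device=S.device) - S @ S.T
    G = proj @ (-D)              # Riemannian descent direction, Equation (8)
    buf.copy_(proj @ buf)        # approximate parallel transport, Equation (11)
    buf.mul_(momentum).add_(G)   # momentum update, PyTorch convention
    U, sig, Vh = torch.linalg.svd(buf, full_matrices=False)
    # Geodesic step of Equation (3) along the accumulated direction M.
    S_new = (S @ Vh.T * torch.cos(lr * sig) + U * torch.sin(lr * sig)) @ Vh
    S.copy_(torch.linalg.qr(S_new).Q)  # optional step 5: re-orthogonalize
```

A caller would keep one zero-initialized buffer per geometric weight, e.g. `buf = torch.zeros_like(S)`, and invoke this step after each backward pass.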
5 EXPERIMENTS
In this section, we study the influence of the Grassmannian class representation through experiments. First, in Section 5.1, we show that the expressive power of the Grassmannian class representation improves accuracy in large-scale image classification. Second, in Section 5.2, we show that the Grassmannian class representation improves feature transferability by allowing larger intra-class variation. Third, in Section 5.3, we demonstrate that the scaleless property of the Grassmannian class representation improves classification accuracy in the long-tail scenario. Additional experiments on hyper-parameter choices and design decisions are presented in Appendix B.
We choose the vanilla softmax loss and the cosine softmax loss (without margin) as baselines since they reflect the current typical class representations. The former uses a plain vector and the latter uses a normalized vector. Other innovations on losses, such as adding margins (Deng et al. (2019)), re-balancing class-wise gradients (Wang et al. (2021)), are orthogonal to our contribution.
5.1 GRASSMANNIAN CLASS REPRESENTATION IMPROVES CLASSIFICATION ACCURACY
We apply the Grassmannian class representation to large-scale classification, where consistent improvement over baselines is shown. We then analyze the characteristics of both the learned features and the learned class subspaces. On the feature representation side, we compare the feature sparsity and intra-class variability. On the class representation side, we visualize the principal angles between any pair of classes, a concept that only appears when classes are Grassmannian.
Experimental Setting We use the ResNet50-D (He et al. (2019)) architecture as the base model, and benchmark on ImageNet-1K (Deng et al. (2009)). ResNet50-D is a slight modification of the original ResNet-50 (He et al. (2016)) with about 1% improvement in accuracy. ImageNet-1K is a large-scale image classification dataset containing 1.28M training images and 50K validation images in 1000 categories. We set γ = 25 for both the cosine softmax and the Grassmannian class representation. Our method replaces the last fully-connected layer of ResNet50-D by a Grassmannian fully-connected layer. To reduce the number of hyper-parameters, we simply set the subspace dimension k to be the same for all classes, and vary it over {1, 2, 4, 8, 16}. Since the feature dimension is 2048, the Grassmannian fully-connected layer has the geometry of $\prod_{i=1}^{1000} G(k, 2048)$.
Training Strategy All settings share the same training strategy. Each training run comprises 100 epochs with a total batch size of 256 on 8 NVIDIA Tesla V100 GPUs. SGD is used for the baselines and the Riemannian SGD described in Algorithm 1 is used for the Grassmannian class representations. The momentum is 0.9 and the weight decay is 0.0001. The initial learning rate is 0.1 and then follows the cosine learning rate decay. The checkpoint with the best validation score is used. The input size is 224 × 224 and we use the standard augmentation for ImageNet, namely, random resized crop followed by random horizontal flip. The code is implemented using the mmclassification (MMClassification Contributors (2020)) package, and uses PyTorch as the training backend. Note that, to keep the number of experiments tractable under our limited computation resources, we omitted many tricks that have been shown to improve representation learning, such as stronger augmentation (Cubuk et al. (2020)), longer training (Wightman et al. (2021)), and adding margins (Deng et al. (2019)), and focus on the improvements contributed solely by the Grassmannian formulation.
Feature Norm Regularization We noticed that the norm of the feature (before re-normalization) decreases as training progresses (see Appendix A for details). For example, in the case of k = 16, the average feature norm decreases from 1.051 at epoch 10 to 0.332 at epoch 100. Although the feature norm does not affect the inference result, thanks to the feature re-normalization when computing logits, we empirically find that encouraging the norm to stay above a constant L improves training. Specifically, we propose a feature norm regularization loss $L_{\mathrm{FN}}$,
$$L_{\mathrm{FN}} = \frac{1}{K}\sum_i \frac{1}{2}\left(\mathrm{relu}\left(L - \|x_i\|\right)\right)^2, \quad (12)$$
where $x_i$ is the feature of the $i$-th sample before normalization and $K$ is the number of features with norm larger than $L$. In our experiments, $L = 1$ and the loss is added directly to the softmax loss with equal weight. We also tried larger values of $L$ and regularizing the feature norm on both sides; however, both degrade the performance.
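A direct PyTorch rendering of Equation (12) might look like the sketch below; note that it averages over the K features that actually incur a nonzero penalty (i.e., those with norm below L), which is one plausible reading of the definition of K.

```python
import torch
import torch.nn.functional as F


def feature_norm_loss(x: torch.Tensor, L: float = 1.0) -> torch.Tensor:
    """Sketch of the feature norm regularization L_FN of Equation (12).

    x: (batch, n) features before re-normalization.
    """
    penalty = 0.5 * F.relu(L - x.norm(dim=1)) ** 2  # zero when ||x_i|| >= L
    K = (penalty > 0).sum().clamp(min=1)            # features being penalized
    return penalty.sum() / K
```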
Results The validation accuracies of the different models on ImageNet-1K are listed in Table 1. All models with the Grassmannian class representation achieve higher top-1 and top-5 accuracies than the vanilla softmax and the cosine softmax. The general trend is that the accuracy improvement grows with the subspace dimension k. When the subspace dimension is 16, the top-1 accuracy is 79.21%, which is 1.17 percentage points higher than the vanilla softmax loss. With feature norm regularization, the top-1 accuracy further improves from 79.12% to 79.37% for dimension 8.
Intra-Class Variability Increases with Dimension The intra-class variability is measured by the mean pairwise angle (in degrees) between features within the same class, averaged over all classes. The inter-class variability is the average of the mean pairwise angles between features from different classes. Following the convention in the study of neural collapse (Papyan et al. (2020)), we use globally centered training features to compute the variabilities. Kornblith et al. (2021) showed that alternative objectives that improve accuracy, including label smoothing, dropout, sigmoid, cosine softmax, logit normalization, etc., collapse the intra-class variability of representations, which in consequence degrades the quality of features on downstream tasks. However, this conclusion does not apply when classes are modeled by subspaces. The intra-class variability does reduce from the baseline's 60.12 to the Grassmannian formulation's 56.52 when the subspace dimension is k = 1; however, as k increases, both the top-1 accuracy and the intra-class variability grow. This indicates that representing classes as subspaces enables the simultaneous improvement of class discriminative power and expansion of intra-class variability.
Feature Sparsity The feature sparsity is measured by the average percentage of zero activations on the validation set. As shown in Table 1, the features from vanilla softmax networks are very dense, with only 0.55% zero activations. The cosine softmax and the Grassmannian class representations all result in sparser representations, with around 78% zero activations. The feature norm regularization decreases the sparsity by about a half.
Principal Angles Between Class Representative Spaces When classes are subspaces, the relationship between two classes can be measured by k angles called principal angles, which contain richer information than the single angle between two class vectors. The principal angles between two k-dimensional subspaces $\mathcal{S}$ and $\mathcal{R}$ are recursively defined as (Absil et al. (2006))
$$\cos(\theta_i) = \max_{s \in \mathcal{S}} \max_{r \in \mathcal{R}} s^T r = s_i^T r_i, \quad \text{s.t. } \|s\| = \|r\| = 1, \; s^T s_j = r^T r_j = 0, \; j = 1, \dots, i-1, \quad (13)$$
for $i = 1, \dots, k$ and $\theta_i \in [0, \pi/2]$. In Figure 2, we illustrate the smallest and largest principal angles between any pair of classes for a model with k = 8. From the figure, we can see that the smallest principal angle reflects class similarity, while the largest principal angle is around $\pi/2$. A smaller angle means the two classes are correlated in some directions, and a $\pi/2$ angle means that some directions in one class subspace are completely irrelevant (orthogonal) to the other class.
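Numerically, the cosines of the principal angles in Equation (13) are the singular values of $S^T R$ when the columns of $S$ and $R$ are orthonormal; a short sketch:

```python
import torch


def principal_angles(S: torch.Tensor, R: torch.Tensor) -> torch.Tensor:
    """Principal angles (in radians) between the subspaces spanned by the
    orthonormal columns of S and R, each of shape (n, k)."""
    cosines = torch.linalg.svdvals(S.T @ R)
    return torch.arccos(cosines.clamp(-1.0, 1.0))  # clamp for numerical safety
```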
5.2 GRASSMANNIAN CLASS REPRESENTATION IMPROVES FEATURE TRANSFERABILITY
In this section we compare the linear transferability of the features learned by the different models trained on the ImageNet-1K dataset. The feature transfer benchmark includes CIFAR-10 (Krizhevsky et al. (2009)), CIFAR-100 (Krizhevsky et al. (2009)), Food-101 (Bossard et al. (2014)), Oxford-IIIT Pets (Parkhi et al. (2012)), Stanford Cars (Krause et al. (2013)), and Oxford 102 Flowers (Nilsback & Zisserman (2008)). For each transfer dataset, we use the same trained models as in Table 1 to extract features, and all features are normalized to unit length. We fit a linear SVM with the one-vs-rest multi-class policy on the training set and report the accuracies on the test set. The SVM regularization hyper-parameter is grid searched over the candidates [0.1, 0.2, 0.5, 1, 2, 5, 10, 15, 20] and determined by five-fold cross-validation on the training set.
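This protocol can be reproduced roughly with scikit-learn as sketched below; the random arrays are placeholders for the backbone features, which are assumed to be extracted elsewhere.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)  # placeholder data; use backbone features in practice
X_train = rng.normal(size=(500, 2048)); y_train = rng.integers(0, 10, 500)
X_test = rng.normal(size=(100, 2048)); y_test = rng.integers(0, 10, 100)
for X in (X_train, X_test):     # normalize features to unit length
    X /= np.linalg.norm(X, axis=1, keepdims=True)

grid = GridSearchCV(            # LinearSVC is one-vs-rest by default
    LinearSVC(),
    param_grid={"C": [0.1, 0.2, 0.5, 1, 2, 5, 10, 15, 20]},
    cv=5,
)
grid.fit(X_train, y_train)
print("transfer accuracy:", grid.score(X_test, y_test))
```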
Results As shown in Table 2, the cosine softmax and the Grassmannian with subspace dimension k = 1 have comparable transfer performance, but both are lower than the vanilla softmax. However, as the subspace dimension increases, the transfer performance gradually improves, and at k = 16 it is on par with the vanilla softmax. The feature norm regularization improves the transfer quality, as shown in the k = 1 and k = 8 cases. We hypothesize that this may relate to the fact that features trained with norm regularization are less sparse, so more information is encoded.
Class Separation The class separation is measured by the index R², defined as one minus the ratio of the average intra-class cosine distance to the overall average cosine distance (Kornblith et al., 2021, Eq. (11)). Kornblith et al. (2021) found that greater class separation R² is associated with less transferable features, which may explain the transfer performance of the Grassmannian class representations. The vanilla softmax has lower separation (0.495) than the cosine softmax (0.528) and the Grassmannian class representation with subspace dimension k = 1 (0.534). From subspace dimension k = 1 to k = 16, the separation of the Grassmannian models decreases from a high value (0.534) to a low value (0.395). The change in class separation roughly tracks the change in transfer performance.
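For reference, a sketch of the separation index under our reading of that definition (self-pairs included for brevity):

```python
import torch
import torch.nn.functional as F


def class_separation_r2(feats: torch.Tensor, labels: torch.Tensor) -> float:
    """R^2 = 1 - (mean within-class cosine distance) / (mean overall cosine distance)."""
    f = F.normalize(feats, dim=1)
    dist = 1.0 - f @ f.T                        # pairwise cosine distances
    same = labels[:, None] == labels[None, :]   # within-class pair mask
    return float(1.0 - dist[same].mean() / dist.mean())
```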
5.3 SCALELESSNESS OF SUBSPACES IMPROVES LONG-TAIL RECOGNITION
We benchmark the effectiveness of the scaleless property in long-tail classification using the ImageNet-LT dataset (Liu et al. (2019)). ImageNet-LT is a subset of ImageNet-1K in which the number of images per class ranges from 5 to 1280. There are 115.8K images in total, roughly 1/10 the size of ImageNet-1K. We use the same ResNet50-D networks as in Section 5.1. All training settings, including optimizer, augmentation, and initial learning rate, are kept the same, except that we train for 200 epochs and decay the learning rate by 1/10 at epochs 150, 180, and 195. The last checkpoint is used for evaluation. We use instance-balanced sampling, as Kang et al. (2019) reported that class-balanced sampling and square-root sampling both degrade the performance.
We report the top-1 accuracies on the test set in Table 3. Both the cosine softmax and the Grassmannian class representation with a small subspace dimension improve the long-tail classification accuracy. Specifically, the cosine softmax scores 1.62% higher than the vanilla softmax, and the Grassmannian class representation with subspace dimension k = 1 scores 2.11% higher than the vanilla softmax. However, as the subspace dimension increases, the accuracy drops. We notice that the few-shot classes do not have enough samples to learn a good higher-dimensional subspace representation, as their accuracy degrades significantly when the dimension is large. Too little training data for a class is one scenario where a larger dimension does not help.
6 LIMITATIONS AND FUTURE DIRECTIONS
One problem that remains open is how to choose the optimal dimension. Currently, we treat it as a hyper-parameter and decide it through experiments. On the computational side, geometric optimization incurs some overhead since it involves an SVD, which may slow training when k is very large. The Grassmannian class representation allows for greater intra-class variability, but we did not explicitly promote intra-class variability in any form. It would be very interesting to explore ways to explicitly encourage it; a potential avenue is to combine it with self-supervised learning. We hope our work will stimulate progress in these directions.
7 CONCLUSION
In this work, we proposed to use linear subspaces as class prototypes in deep neural networks. The parameter spaces of the associated Grassmannian fully-connected layer and Grassmannian convolutional layer are products of Grassmannians. We optimize the subspaces using geometric optimization and provide an efficient Riemannian SGD implementation tailored to Grassmannians. We apply the new formulation to large-scale image classification, feature transfer, and long-tail classification tasks. Experiments demonstrate that the new Grassmannian class representation improves performance in all of these settings.
A TECHNICAL DETAILS
Alternative Implementation of Riemannian SGD Step 4 of Algorithm 1 is called retraction in geometric optimization. There are alternative implementations of retraction besides moving parameters along the geodesic. For example, one can replace step 4 with a Euclidean gradient update followed by the re-orthogonalization via QR decomposition in step 5. The subspace parameter may move away from the Grassmannian after the Euclidean gradient update, but it is pulled back to the manifold by the QR re-orthogonalization (see Absil et al. (2009, Eq. (4.11)) for details). For ease of reference, we call this version of Riemannian SGD the "Algorithm 1 variant". We compare the two implementations in the first two rows of Table 4. The results show that the Grassmannian class representation is effective with both versions of Riemannian SGD.
Necessity of Grassmannian Formulation and Geometric Optimization To show the necessity of constraining the subspace parameters to lie on the Grassmannian, we replace the Riemannian SGD with the vanilla SGD and compare the two. Note that with vanilla SGD, the logit formula $\|S_i^T x\|$ no longer represents the projection norm because $S_i$ is no longer orthonormal. The result is shown in the third row of Table 4, where we observe a significant performance drop for the unconstrained setting.
Numerical Stability of Algorithm 1 The numerical stability issue is caused by the accumulation of tiny computational errors in Equation (3). After many iterations, the resulting matrix $S$ might not be perfectly orthonormal. For example, after 100, 1000, and 5000 iterations of the Grassmannian ResNet50-D with subspace dimension k = 8, we observed that the error $\max_i \|S_i^T S_i - I\|_\infty$ is 1.9e-5, 9.6e-5, and 3.7e-4, respectively. After 50 epochs, the error accumulates to 0.0075. One can run step 5 every 100 iterations to keep the error at a low level, at negligible computational cost. For this reason, we marked this step as "optional".
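This error can be monitored with a few lines like the following (an illustrative sketch, taking the elementwise maximum of $|S_i^T S_i - I|$ as the reported norm):

```python
import torch


def max_orthogonality_error(S: torch.Tensor, k: int) -> float:
    """Largest entry of |S_i^T S_i - I| over the (n, k) blocks of S."""
    eye = torch.eye(k, device=S.device)
    return max(float((B.T @ B - eye).abs().max()) for B in S.split(k, dim=1))
```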
Decreasing Feature Norm During Training We show how the average feature norm on the ImageNet validation set changes from epoch 10 to epoch 100 in Figure 3; the subspace dimension is k = 16.
B HYPER-PARAMETERS AND DESIGN DECISIONS
Choice of Gamma We use γ = 25 throughout the main text. Here we give more results with different choices of γ for subspace dimension k = 8 in Table 5. Because we conducted this set of experiments at an early exploration stage, the learning rate decay policy divides the rate by 10 at epochs 30, 60, and 90, which differs from the cosine learning rate schedule used for our main results; the top-1 accuracy is accordingly slightly lower than its cosine learning rate counterpart. Other training settings such as augmentation are the same as in Table 1.
Importance of Re-Normalizing Features Re-normalizing the feature is critical to effectively learning the class representative subspaces. We provide training results without feature re-normalization in Table 6; there is a significant performance drop without re-normalization. For reference, the cosine softmax also requires feature re-normalization for effective learning.
Importance of Joint Training Jointly training the subspaces and the features is essential. To support this claim, we add an experiment that only fine-tunes the class subspaces from weights pre-trained with the regular softmax (third row of Table 7). For comparison, we also add an experiment that fine-tunes all parameters (fourth row of Table 7). We find that if the features are fixed, changing the regular fc to the geometric version does not noticeably increase performance (top-1 from 78.04% to 78.14%). But when all parameters are free to learn, the pre-trained weights are a better initialization than random initialization (top-1 from 79.12% to 79.44%).
More Results of FN We present more results using the feature norm regularization trick in Table 8. We observe that FN also works for the cosine softmax baseline. For Grassmannian + FN, the performance peaks at dimension k = 8 and then decreases at k = 16.
Stronger Augmentation Improves Accuracy Generally speaking, stronger augmentation mitigates overfitting and benefits models with larger capacity. To demonstrate the effect of stronger augmentation, we run experiments using RandAug (Cubuk et al. (2020)) in Table 9. Stronger augmentation indeed further increases the accuracy; together with longer training and SyncBN, the top-1 accuracy of ResNet50-D reaches 80.17%.
C MORE BASELINES
We have compared the proposed method with the vanilla softmax and the cosine softmax in the main text. In this section we compare against baselines that use the same number of parameters, and run experiments on different network architectures.
Multi-FC We add multiple classification fc layers to the network. During training, these independent fcs are trained side by side and their losses are averaged. During testing, the logits are first averaged and then passed through softmax to output the prediction probability.
SoftTriple In the SoftTriple loss (Qian et al. (2019)), each class is modeled by multiple centers, and the logit is a weighted average of the logits computed from the individual class centers. We adapted the official code into our codebase to train on the ImageNet dataset, using the recommended parameters: λ = 20, γ = 0.1, τ = 0.2, and δ = 0.01.
For the above two settings, we use the same training protocol as in Table 1. Results are shown in Table 10, from which we find that the Grassmannian class representation is the most effective.
More Architectures We show experiments on ResNet101-D and ResNeXt (Xie et al. (2017b)) in Table 11. The training settings are the same as in Table 1: standard augmentation, cosine learning rate schedule, and 100 training epochs. The results show that our formulation is effective across different model architectures.
D TRAINING SPEED AND SVD SPEED
During inference, the computational cost is k times that of the vanilla softmax. Since it is mostly matrix multiplication, GPU acceleration speeds it up even further. For example, on a V100 GPU, the average time for multiplying a 1000 × 2048 matrix with a 2048-dimensional vector is 20 ± 2.9 µs, while multiplying an 8000 × 2048 matrix with a 2048-dimensional vector takes about 105 ± 7.6 µs. The cost is negligible compared to the network forward time.
During training, the most costly operation in Algorithm 1 is the SVD. We measure the actual iteration time during training in Table 12. When k is small, training is as fast as the vanilla softmax; when k = 8, a full training run needs roughly 1.7× the time of the vanilla softmax (this can be reduced greatly with newer versions of PyTorch, as discussed below).
Since release 1.13, PyTorch has supported the fast approximate SVD algorithm GESVDA. We saw great speed improvements in the cases of k = 8 and k = 16; the benchmark times are shown in Table 13. With computational optimizations such as this, we expect the computational cost of the SVD to be minimal for k ≤ 32.
E PYTORCH CODE FOR RIEMANNIAN SGD
We provide a sample implementation of Algorithm 1 in Figure 4 using PyTorch (Paszke et al. (2019)). The sample code checks whether a parameter is geometric by checking whether it has a 'geometry' attribute. If not, it runs the original SGD on that parameter. If the 'geometry' attribute is not None, it is a list of numbers indicating the dimensions of the class representative subspaces for all classes. If all the dimensions are the same, the code takes the batched path (line 23 of the code in Figure 4); otherwise, it takes the for-loop path (line 46 of the code in Figure 4).
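The figure itself is not reproduced here; the sketch below illustrates the dispatch logic it describes, reusing `grassmannian_sgd_step` from the Section 4.2 sketch and the 'geometry' attribute convention. All names are ours, and weight decay is omitted for brevity.

```python
import torch


@torch.no_grad()
def optimizer_step(params, bufs, lr=0.1, momentum=0.9):
    """Sketch: apply Algorithm 1 to parameters tagged with a 'geometry'
    attribute, and vanilla SGD with momentum to everything else.
    `bufs` maps each parameter to its momentum buffer."""
    for p in params:
        if p.grad is None:
            continue
        buf = bufs.setdefault(p, torch.zeros_like(p))
        dims = getattr(p, "geometry", None)
        if dims is None:                   # ordinary Euclidean parameter
            buf.mul_(momentum).add_(p.grad)
            p.add_(buf, alpha=-lr)
        else:                              # product of Grassmannians
            offset = 0                     # per-class loop; when all dims
            for k in dims:                 # agree, a batched version applies
                sl = slice(offset, offset + k)
                grassmannian_sgd_step(p[:, sl], p.grad[:, sl], buf[:, sl],
                                      lr, momentum)
                offset += k
```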
1. What is the main contribution of the paper regarding interpreting high-dimensional feature output?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. Do you have any questions or concerns about the methodology, experiments, or results presented in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
This paper points out that using softmax does not take intra-class and inter-class feature variation into account, and aims to interpret the high-dimensional feature output as lying in a union of linear subspaces. In classification, each feature representation falls into one of the K class subspaces, where each subspace is a point on a Grassmann manifold. To achieve this, the paper incorporates Riemannian SGD into the ResNet50-D backbone to optimize the network and the K subspaces. The authors validate that this assumption is powerful and outperforms the softmax and cross-entropy combination in ImageNet-1K classification, feature transfer, and long-tail classification.
Strengths And Weaknesses
Pros:
Replacing the de-facto combination of softmax and cross-entropy is interesting, and the authors provide intensive experiments to validate their claim.
The experiments on the long-tail tasks provide new insight into handling the data imbalance issue; it somehow correlates to an extra memory bank or dictionary.
Cons:
The authors miss an important reference [1]. The proposed method has already been employed in subspace clustering rather than classification; the authors should clearly clarify their contribution and the differences.
[1] Scalable Deep k-Subspace Clustering, Tong Zhang, Pan Ji, Mehrtash Harandi, Richard Hartley, Ian Reid, ACCV 2018.
Unclear parts:
When there are more than two identical eigenvalues, there is a sign-flip issue in the corresponding eigenvectors, which may cause the subspace to update in a different way. I would therefore like the authors to provide an analysis of this randomness in the subspace update.
In Table 2, as the Grassmannian with 16 dimensions is better than 8, why is there no FN experiment at 16 dimensions? I also wonder how the sparsity is related to FN: at dimension 8, the accuracy with and without FN is quite similar, but the sparsity changes a lot.
Batch normalization projects the feature space onto a unit sphere, which goes against the linear subspace assumption. Could the authors explain more about this and how they resolve the issue?
Besides, I am also wondering how data augmentation affects the accuracy since the authors did not use augmentation in their implementation.
Clarity, Quality, Novelty And Reproducibility
Clarity. The paper states its goal in a straightforward and clear manner and is very easy to follow.
Novelty. Considering reference [1], the novelty of the algorithm is limited, but I do appreciate that the paper finds good applications for it.
Reproducibility. It should be easy to reproduce the paper given the provided details.
improves training. Incorporating this, Equation (4) becomes ∥∥∥projSi γx‖x‖∥∥∥. To simplify notation, we assume feature x has been properly re-normalized throughout this paper unless otherwise specified.
The application of the subspace class representation requires two modifications to an existing network. Firstly, the last fully-connected layer is replaced by its geometric counterpart, which is detailed in Section 4.1. The new geometric layer will transform features to logits using Equation (4). Secondly, the optimizer should be extended to process the new geometric layer simultaneously, which is explained in Section 4.2. Parameters of the geometric layer are optimized using Geometric SGD, while all other parameters are optimized as usual using the standard SGD algorithm.
4.1 GRASSMANNIAN CLASS REPRESENTATION
Suppose for class i, i = 1, 2, . . . , C, its subspace representation is Si ∈ G(ki, n), where the dimension ki is a hyperparameter and is fixed during training. Then the tuple of subspaces (S1, S2, . . . , SC) will be optimized in the product space G(k1, n)×G(k2, n)×· · ·×G(kC , n). Denote a matrix instantiation of Si as Si ∈ Rn×k, where the column vectors form an orthonormal basis Si, then we concatenate the matrices into a big matrix
S = [S1 S2 · · · SC ] ∈ Rn×(k1+k2+···+kC). (5)
The matrix S contains the parameters that are optimized numerically. For feature x, the product STi x gives the coordinate of projSix under the orthonormal basis formed by the columns of Si. By definition in Equation (4), the logit for class i and feature x is computed by
li = ∥∥projSix∥∥ = ∥∥STi x∥∥ . (6)
Grassmannian Fully-Connected Layer We can implement a geometric fully-connected layer using the plain old fully-connected layer. The shape of the weight S is n× (k1 + k2 + · · ·+ kC), as shown in Equation (5). In the forward pass, the input feature is multiplied with the weight matrix to get a temporary vector t = STx, then the first element of the output is the norm of the sub-vector (t1, . . . , tk1), and the second element of the output is the norm of (tk1+1, tk1+2, . . . , tk1+k2), etc.
Parameter Initialization Each matrix instantiation of the subspace should be initialized as an orthonormal matrix. The geometric optimization algorithm described in Section 4.2 ensures their orthonormality during training. Specifically, for Grassmannian fully-connected layer, each block Si of the weight S in Equation (5) is orthonormal. The whole matrix S needs not be orthonormal.
4.2 OPTIMIZE THE SUBSPACES
Geometric optimization is to optimize functions defined on manifolds. The key step is to find the Riemannian gradient of the loss function and then descend along the geodesic. Here the manifold in concern is the Grassmannian G(k, n). As an intuitive example, G(1, 2) consists of all lines through the origin in a two-dimensional plane. We can visualize it as a unit circle where each point on the unit circle represents the line passing through it. Antipodal points represent the same line. To illustrate
Algorithm 1 An Iteration of the Riemannian SGD with Momentum for Grassmannian at Step t
Input: Learning rate γ > 0, momentum µ ∈ [0, 1), Grassmannian weight matrix S(t) ∈ Rn×k, momentum buffer M (t−1) ∈ Rn×k, Euclidean gradient D ∈ Rn×k.
1: Compute Riemannian gradient G← (In − SST )D. . Equation (8) 2: Approximately parallel transport M to the tangent space of current point S(t) by projection
M ← (In − SST )M (t−1). (11) 3: New momentum M (t) ← µM + G. . PyTorch version 4: Move along geodesic using equation (3). If UΣV T = M (t) is the thin singular value decompo-
sition, then S(t+1) ← ( S(t)V cos(γΣ) + U sin(γΣ) ) V T .
5: (Optional) Re-orthogonalization S(t+1) by QR decomposition. . For numerical stability
how geometric optimization works, we define a toy problem on G(1, 2) that maximizes the norm of the projection of a fixed vector x0 onto a line through the origin, namely
max S∈G(1,2) ‖projSx0‖ . (7)
As shown in Figure 1, we represent S with a unit vector w ∈ S. Suppose at step t, the current point is w(t), then it is easy to compute that the Euclidean gradient at w(t) is d = x0, and the Riemannian gradient g is the Euclidean gradient d projected to the tangent space of G(1, 2) at point w(t). The next iterative point w(t+1) is to move w(t) along the geodesic toward the direction g. Without geometric optimization, the next iterative point would have lied at w(t) + γd, jumping outside of the manifold.
The following proposition computes the Riemannian gradient we needed. Proposition 1. Let S ∈ Rn×k be a matrix instantiation of subspace S ∈ G(k, n), and x ∈ Rn is a vector in Euclidean space, then the Riemannian gradient G of l(S,x) = ‖projSx‖ w.r.t. S is
G = 1
l (In − SST )xxTS. (8)
Proof. Rewrite ‖projSx‖ = √ xTSSTx, and compute the Euclidean derivatives as
∂l
∂S =
1 l xxTS, ∂l ∂x = 1 l SSTx. (9)
Then Equation (8) follows from Equation (2).
We give a geometric interpretation of Proposition 1. Let w1 be the unit vector along direction projSx, then expand it to an orthonormal basis of S, say {w1,w2, . . . ,wk}. Since Riemannian gradient is invariant to the matrix instantiation, we can set S = [w1 w2 · · · wk]. Then Equation (8) becomes
G = [ (In − SST )x 0 · · · 0 ] , (10)
since wi ⊥ x, i = 2, 3, . . . , k and wT1 x = l. Equation (10) shows that in the single-sample case, only one basis vector w1 needs to be rotated towards vector x, where w1 is the unit vector in S that is closest to x.
Riemannian SGD During training, parameters of non-geometric layers are optimized as usual using the vanilla SGD algorithm. For geometric layers such as the Grassmannian fully-connected layer, their parameters are optimized using the Riemannian SGD algorithm. The pseudo-code of the Riemannian SGD with momentum, which we implemented in our experiments, is described in Algorithm 1. We only show the code for the single-sample, single Grassmannian case. It is trivial to extend them to the batch version and the product of Grassmannians. Note that in step 2, we use projection to approximate the parallel translation of momentum for efficiency, and in step 5 an optional extra orthogonalization can improve numerical stability. The momentum update formula is adapted from the PyTorch implementation of the vanilla SGD. Weight decay does not apply here since spaces are scaleless. Algorithm 1 works together with the vanilla SGD and modifies the gradient from Euclidean to Grassmannian on-the-fly for geometric parameters.
5 EXPERIMENT
In this section, we study the influence of Grassmannian class representation through experiments. Firstly, in Section 5.1, we show that the expressive power of Grassmannian class representation improves accuracy in large-scale image classification. Secondly, in Section 5.2, we show that the Grassmannian class representation improves the feature transferability by allowing larger intra-class variation. Thirdly, in Section 5.3, we demonstrated that the scaleless property of the Grassmannian class representation improves the classification accuracy in the long-tail scenario. Additional experiments on hyper-parameter choices and design decisions are presented in Appendix B.
We choose the vanilla softmax loss and the cosine softmax loss (without margin) as baselines since they reflect the current typical class representations. The former uses a plain vector and the latter uses a normalized vector. Other innovations on losses, such as adding margins (Deng et al. (2019)), re-balancing class-wise gradients (Wang et al. (2021)), are orthogonal to our contribution.
5.1 GRASSMANNIAN CLASS REPRESENTATION IMPROVES CLASSIFICATION ACCURACY
We apply the Grassmannian class representation to large-scale classification, where consistent improvement over baselines is shown. We then analyze the characteristics of both the learned features and the learned class subspaces. On the feature representation side, we compare the feature sparsity and intra-class variability. On the class representation side, we visualize the principal angles between any pair of classes, a concept that only appears when classes are Grassmannian.
Experimental Setting We use the ResNet50-D (He et al. (2019)) architecture as the base model, and benchmark on ImageNet-1K (Deng et al. (2009)). ResNet50-D is a slight modification of the original ResNet-50 (He et al. (2016)) with about 1% improvement in accuracy. ImageNet-1K is a large-scale image classification dataset containing 1.28M training images and 50K validation images in 1000 categories. We set γ = 25 for both cosine softmax and the Grassmannian class representation. Our method replaces the last fully-connected layer of ResNet50-D by a Grassmannian fully-connected layer. To reduce the number of hyper-parameters, we simply set the subspace dimension k to be the same for all classes. We vary the hyper-parameter k in the range [1, 2, 4, 8, 16]. Since the dimension of feature is 2048, the Grassmannian fully-connected layer has the geometry of Π1000i=1 G(k, 2048).
Training Strategy All settings share the same training strategy. Each training includes 100 epochs with total batch size 256 on 8 NVIDIA Tesla V100 GPUs. SGD is used for baselines and Riemannian SGD described in Algorithm 1 is used for Grassmannian class representations. The momentum is 0.9 and the weight decay is 0.0001. The initial learning rate is 0.1 and then follows the cosine learning rate decay. The checkpoint with best validation score is used. The input size is 224× 224 and we use the standard augmentation for ImageNet, namely, random resized crop followed by random horizontal flip. The code is implemented using the mmclassification (MMClassification Contributors (2020)) package, and uses PyTorch as the training backend. Note that to make the number of experiments tractable due to our limited computation resources, we omitted many tricks that has shown to improve representation learning, such as stronger augmentation (Cubuk et al. (2020)), longer training (Wightman et al. (2021)), adding margins (Deng et al. (2019)) etc., and focus on the improvements solely contributed by the Grassmannian formulation.
Feature Norm Regularization We noticed that the norm of the feature (before re-normalization) decreases as training progresses (details see Appendix A). For example, in the case of k = 16, the average norm of feature decreases from 1.051 at epoch 10 to 0.332 at epoch 100. Although the norm of the feature does not affect inference result due to the feature re-normalization when computing logits, we empirically find that encouraging the norm to be larger than a constant L improves the training. Specifically, we propose a feature norm regularization loss LFN,
LFN = 1
K ∑ i 1 2 (relu (L− ‖xi‖))2 , (12)
where xi is the feature of the i-th sample before normalization and K is the number of features with norm larger than L. In our experiments, L = 1 and the loss is directly added to the softmax loss
with equal weight. We also tried larger values of L or to regularize the norm of feature on both sides, however, they degrade the performance.
Results The validation accuracies of different models on ImageNet-1K is listed in Table 1. All models with the Grassmannian class representation achieve higher top-1 and top-5 accuracies than the vanilla softmax and the cosine softmax. A general trend is that, with larger subspace dimension k, the accuracy improvement is greater. When subspace dimension is 16, the top-1 accuracy is 79.21%, which is 1.17% points higher than the vanilla softmax loss. With feature norm regularization, the top-1 accuracy further improves from 79.12% to 79.37% for dimension 8.
Intra-Class Variability Increases with Dimension The intra-class variability is measured by the mean pair-wise angles (in degrees) between features within the same class, and then average over all classes. The inter-class variability is the average of mean pair-wise angles between features from different classes. Following the convention in the study of neural collapse (Papyan et al. (2020)), we use the global centered training feature to compute variabilities. Kornblith et al. (2021) showed that alternative objectives that improve accuracy, including label smoothing, dropout, sigmoid, cosine softmax, logit normalization, etc., collapse the intra-class variability in representation, which in consequence degrades the quality of feature on downstream tasks. However, this conclusion does not apply when the classes are modeled by subspaces. The intra-class variability does reduces from baseline’s 60.12 to Grassmannian formulation’s 56.52 when the subspace dimension k = 1, however, as k increases, both the top-1 accuracy and the intra-class variability grow. This indicates that representing classes as subspaces enables the simultaneous improvement of class discriminative power and expansion of intra-class variability.
Feature Sparsity The feature sparsity is measured by the average percentage of zero activations on the validation set. As shown in Table 1, the feature from vanilla softmax networks are very dense, with only 0.55% zero activations. Cosine softmax and Grassmannian class representations all result in more sparse representations, with around 78% zero activations. The feature norm regularization decreases the sparsity about a half.
Principal Angles Between Class Representative Spaces When classes are subspaces, relationships between two classes can be measured by k angles called principal angles, which contain richer information than a single angle between two class vectors. The principal angles between two k-dimensional subspaces S and R are recursively defined as (Absil et al. (2006))
cos(θi) = max s∈S max r∈R
sTr = sTi ri, s.t.‖s‖ = ‖r‖ = 1, sTsj = rTrj = 0, j = 1, . . . , i− 1, (13)
for i = 1, . . . , k and θi ∈ [0, π/2]. In Figure 2, we illustrate the smallest and largest principal angles between any pair of classes for a model with k = 8. From the figure, we can see that the smallest principal angle reflects class similarity, and the largest principal angle is around π/2. A smaller angle means the two classes are correlated in some directions, and a π/2 angle means that some directions in one class subspace is completely irrelevant (orthogonal) to the other class.
5.2 GRASSMANNIAN CLASS REPRESENTATION IMPROVES FEATURE TRANSFERABILITY
In this section we compare the linear transferability of the features learned by different models trained on the ImageNet-1K dataset. The feature transfer benchmark dataset includes CIFAR-10 (Krizhevsky et al. (2009)), CIFAR-100 (Krizhevsky et al. (2009)), Food-101 (Bossard et al. (2014)), Oxford-IIIT Pets (Parkhi et al. (2012)), Stanford Cars (Krause et al. (2013)), and Oxford 102 Flowers (Nilsback & Zisserman (2008)). For each of the transfer dataset, we use the same trained models as in Table 1 to extract their features. Then all features are normalized to unit length. We fit linear SVM with one-vs-rest multi-class policy on the training set, and report the accuracies on their test set. The regularization hyper-parameter for SVM is grid searched with candidates [0.1, 0.2, 0.5, 1, 2, 5, 10, 15, 20] and determined by five-fold cross-validation on the training set.
Results As shown in Table 2, the cosine softmax and the Grassmannian with subspace dimension k = 1 has comparable transfer performance, but both are lower than the vanilla softmax. However, when the subspace dimension increases, the transfer performance gradually improves, and when k = 16, the transfer performance is on par with vanilla softmax. The feature norm regularization improves the transfer quality, as shown in the k = 1, 8 cases. We hypothesize that this might relate to the fact that features with norm regularization are less sparse, so more information are encoded.
Class Separation The class separation is measured by the index R2, which is defined as one minus the ratio of the average intra-class cosine distance to the overall average cosine distance (Kornblith et al., 2021, Eq. (11)). Kornblith et al. (2021) found that greater class separation R2 is associated with less transferable features. This may explain the feature transfer performance of Grassmannian class
representations. The vanilla softmax has lower separation (0.495) compared to the cosine softmax (0.528) and the Grassmannian class representation with subspace dimension k = 1 (0.534). From subspace dimension k = 1 to k = 16, the separation from Grassmannian models decreases from a high value (0.534) to a low value (0.395). The change in class separation is roughly in line with the change of transfer performances.
5.3 SCALELESS OF SUBSPACE IMPROVES LONG-TAIL RECOGNITION
We benchmark its effectiveness in long-tail classification using the ImageNet-LT dataset (Liu et al. (2019)). ImageNet-LT is a subset of ImageNet-1K, where the number of images per class ranges from 5 to 1280. There are totally 115.8K images, roughly 1/10 the size of ImageNet-1K. We use the same ResNet50-D networks as in Section 5.1. All training settings including optimizer, augmentation, initial learning rate are also kept the same except we modify the total epochs to 200 and the learning rate is decayed by 1/10 at epoch 150, 180, and 195. The last checkpoint is used for evaluation. We use the instance-balanced sampling, as it was reported by Kang et al. (2019) that class-balanced sampling, and square-root sampling both degrade the performance.
We report the top-1 accuracies on the test set in Table 3. We find that both the cosine softmax and the Grassmannian class representation with a small subspace dimension improve the long-tail classification accuracy. Specifically, the cosine softmax is 1.62% higher in score compared to the vanilla softmax, and the Grassmannian class representation with subspace dimension k = 1 is 2.11% higher in score compared to the vanilla softmax. However, when the subspace dimension increases, the accuracy drops. We notice that for few-shot classes, there are not enough samples to learn a good higher-dimensional subspace for their representation, as the accuracy on few-shot classes degrades significantly when the dimension is large. Too little training data for a class is an example scenario where a larger dimension does not offer much help.
6 LIMITATION AND FUTURE DIRECTION
One problem that remains open is how to choose the optimal dimension. Currently, we treat it as a hyper-parameter and decide it through experiments. On the computational side, geometric optimization incurs some overhead since it involves an SVD. This might hinder the training speed when k is very large. The Grassmannian class representation allows for greater intra-class variability, but we did not explicitly promote the intra-class variability in any form. It will be very interesting to explore ways to explicitly encourage intra-class variability. For example, a potential way is to combine it with self-supervised learning. We hope our work will stimulate progress in these directions.
7 CONCLUSION
In this work, we proposed to use linear subspaces as the class prototypes in deep neural networks. The geometric structures of the related Grassmannian fully-connected layer and the Grassmannian convolutional layer are products of Grassmannians. We optimize the subspaces using geometric optimization and provide an efficient Riemannian SGD implementation tailored for Grassmannians. We apply the new formulation to large-scale image classification, feature transfer, and long-tail classification tasks. Experiments demonstrate that the new Grassmannian class representation is able to improve performance in these settings.
A TECHNICAL DETAILS
Alternative Implementation of Riemannian SGD Step 4 of Algorithm 1 is called retraction in geometric optimization. There are alternative implementations of retraction other than moving parameters along the geodesic. For example, one can replace step 4 with a Euclidean gradient update followed by re-orthogonalization via QR decomposition in step 5. The subspace parameter may move away from the Grassmannian after the Euclidean gradient update, but it will be pulled back to the manifold by the QR re-orthogonalization (for details see Absil et al. (2009, Equ. (4.11))). For ease of reference, we call this version of Riemannian SGD the “Algorithm 1 variant”. We compare the two implementations in the first two rows of Table 4. The results show that the Grassmannian class representation is effective with both versions of Riemannian SGD.
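A minimal sketch of one step of this variant (our illustration, assuming a single class block S of shape n × k and its Euclidean gradient):

import torch

def qr_retraction_step(S, euc_grad, lr):
    # Project the Euclidean gradient to the tangent space at S, take a plain
    # gradient step, then pull the result back onto the manifold with QR.
    rgrad = euc_grad - S @ (S.T @ euc_grad)
    Q, R = torch.linalg.qr(S - lr * rgrad)
    # Fix column signs so the retraction does not flip basis directions.
    return Q * torch.sign(torch.diagonal(R))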
Necessity of Grassmannian Formulation and Geometric Optimization To show the necessity of constraining the subspace parameters to lie in the Grassmannian, we replace the Riemannian SGD with the vanilla SGD and compare the two. Note that with SGD, the logit formula $\|S_i^T x\|$ no longer means the projection norm because $S_i$ is not orthogonal anymore. The result is shown in the third row of Table 4, from which we observe a significant performance drop for the unconstrained setting.
Numerical Stability of Algorithm 1 The numerical stability issue is caused by the accumulation of tiny computational errors in Equation (3). After many iterations, the resulting matrix S might not be perfectly orthogonal. For example, after 100, 1000, and 5000 iterations of the Grassmannian ResNet50-D with subspace dimension k = 8, we observed that the error $\max_i \|S_i^T S_i - I\|_\infty$ is 1.9e-5, 9.6e-5, and 3.7e-4, respectively. After 50 epochs, the error accumulates to 0.0075. One can run step 5 every 100 iterations to keep the error at a low level, and the computational cost is negligible. For this reason, we marked this step as “optional”.
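The periodic check and repair could look like the following sketch (our own helper names; torch.split along the column dimension returns views, so the in-place copy writes back into the concatenated weight):

import torch

def orthogonality_error(S, k):
    # max_i ||S_i^T S_i - I||_inf over the per-class blocks of width k.
    return max((Si.T @ Si - torch.eye(k, device=S.device)).abs().max()
               for Si in torch.split(S, k, dim=1))

@torch.no_grad()
def reorthogonalize_(S, k):
    # Step 5 of Algorithm 1, applied block-wise, e.g. every 100 iterations.
    for Si in torch.split(S, k, dim=1):
        Si.copy_(torch.linalg.qr(Si).Q)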
Decreasing Feature Norm During Training We show the change of the average feature norm on the validation set of ImageNet from epoch 10 to epoch 100 in Figure 3. The subspace dimension is k = 16.
B HYPER-PARAMETERS AND DESIGN DECISIONS
Choice of Gamma We use γ = 25 throughout the main text. Here we give more results with different choices of γ for subspace dimension k = 8 in Table 5. Because we conducted this set of experiments in an early exploration stage, the learning rate decay policy is to divide by 10 at epochs 30, 60, and 90, which is different from our main results using the cosine learning rate schedule. The top-1 accuracy is slightly lower than the cosine learning rate counterpart. Other training settings such as augmentation are the same as in Table 1.
Importance of Re-Normalizing Features Re-normalizing the feature is critical to effectively learn the class representative subspaces. We provide training results without feature re-normalization in Table 6. There is a significant performance drop without re-normalization. For reference, the cosine softmax also requires feature re-normalization for effective learning.
Importance of Joint Training Jointly training the subspaces and the features is essential. To support this claim, we add an experiment that only fine-tunes the class subspaces from weights pre-trained using the regular softmax (third row of Table 7). For comparison, we also add another experiment that fine-tunes all parameters (fourth row of Table 7). We find that if the features are fixed, changing the regular fc to the geometric version does not increase performance noticeably (top-1 from 78.04% to 78.14%). But when all parameters are free to learn, the pre-trained weights are a better initialization than the random initialization (top-1 from 79.12% to 79.44%).
More Results of FN We present more results using the feature norm regularization trick in Table 8. From the results, we observe that FN also works for the baseline cosine softmax. For Grassmannian + FN, the performance reaches its peak at dimension k = 8 and then decreases at k = 16.
Stronger Augmentation Improves Accuracy Generally speaking, stronger augmentation mitigates the overfitting problem and benefits models with larger capacity. To demonstrate the effect of stronger augmentations, we run experiments using RandAug (Cubuk et al. (2020)) in Table 9. We can see that stronger augmentation indeed further increases the accuracy. Together with longer training and SyncBN, the top-1 accuracy for ResNet50-D reaches 80.17%.
C MORE BASELINES
We have compared the proposed method with the vanilla softmax and the cosine softmax in the main text. In this section we compare with baselines that use the same number of parameters, and run experiments on different network structures.
Multi-FC We add multiple classification fc layers to the network. During training, these independent fcs are trained side by side, and their losses are averaged. During testing, the logits are first averaged and then passed through softmax to output the prediction probability.
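A minimal sketch of such a multi-head classifier (our illustration; the module and method names are not from the paper):

import torch
import torch.nn as nn

class MultiFC(nn.Module):
    # Several independent classification heads trained side by side.
    def __init__(self, in_dim, num_classes, num_heads):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(in_dim, num_classes)
                                   for _ in range(num_heads))

    def forward(self, x):
        return [head(x) for head in self.heads]

    def loss(self, logits_list, target):
        # Average of the per-head cross-entropy losses (training).
        losses = [nn.functional.cross_entropy(l, target) for l in logits_list]
        return torch.stack(losses).mean()

    @torch.no_grad()
    def predict(self, x):
        # Average logits first, then softmax (testing).
        return torch.stack(self.forward(x)).mean(dim=0).softmax(dim=-1)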
SoftTriple In the SoftTriple loss (Qian et al. (2019)), each class is modeled by multiple centers. The logit is a weighted average of logits computed from individual class centers. We adapted the official code into our codebase to train on the ImageNet dataset. The recommended parameters are used. Specifically, λ = 20, γ = 0.1, τ = 0.2 and δ = 0.01.
For the above two settings, we use the same training protocols as in Table 1. Results are shown in Table 10, from which we find that the Grassmannian class representation is the most effective one.
More Architectures We show experiments on ResNet101-D and ResNeXt (Xie et al. (2017b)) in Table 11. The training settings are the same as in Table 1, namely, we use the standard augmentation, cosine learning rate schedule, and train for 100 epochs. The results show that our formulation is effective across different model architectures.
D TRAINING SPEED AND SVD SPEED
During inference, the computational cost is k times that of the vanilla softmax. Since it is mostly matrix multiplication, GPU acceleration can speed it up even further. For example, on a V100 GPU, the average time of multiplying a 1000 × 2048 matrix with a 2048-dimensional vector is 20 ± 2.9 µs, while multiplying an 8000 × 2048 matrix with a 2048-dimensional vector takes about 105 ± 7.6 µs. The cost is negligible compared to the network forward time.
During training, the most costly operation in Algorithm 1 is the SVD. We measure the actual iteration time during training in Table 12. We observe that when k is small, it is as fast as the vanilla softmax. When k = 8, the full training needs roughly 1.7x the time of the vanilla softmax (this can be reduced greatly with a newer version of PyTorch, as we discuss below).
Since the release of PyTorch 1.13, the fast approximate SVD algorithm GESVDA has been supported. We saw great speed improvements in the cases of k = 8 and k = 16. The benchmark times are shown in Table 13. With computational optimizations such as this, we expect the computational cost of the SVD to be minimal for k ≤ 32.
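For reference, selecting this algorithm is a one-argument change in recent PyTorch; the driver option of torch.linalg.svd is only honored for CUDA inputs, and the shapes below are illustrative:

import torch

# Batched thin SVD of C momentum blocks of shape (n, k); the 'gesvda'
# driver (PyTorch >= 1.13, CUDA only) is a fast approximate method
# for tall-and-thin matrices.
M = torch.randn(1000, 2048, 8, device="cuda")
U, S, Vh = torch.linalg.svd(M, full_matrices=False, driver="gesvda")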
E PYTORCH CODE FOR RIEMANNIAN SGD
We provide a sample implementation of Algorithm 1 in Figure 4 using PyTorch (Paszke et al. (2019)). The sample code checks whether a parameter is geometric by checking whether it has a ‘geometry’ attribute. If not, it runs the original SGD on that parameter. If the ‘geometry’ property is not None, then it is a list of numbers indicating the dimensions of the class representative subspaces for all classes. If all the dimensions are the same, the code takes the batch version (line 23 of the code in Figure 4). Otherwise, it takes the for-loop version (line 46 of the code in Figure 4). | 1. What is the focus and contribution of the paper regarding traditional class representative vectors in deep neural networks?
2. What are the strengths of the proposed approach, particularly in introducing subspace learning and geometric optimization?
3. What are the weaknesses of the paper, especially regarding scalability, comparisons with other methods, and the role of the FN loss?
4. Do you have any questions regarding the proposed method's performance with k=1 and its similarity to cosine softmax?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
In this paper, the traditional class representative vectors in deep neural networks are replaced by linear subspaces on Grassmann manifolds, which are supposed to be more informative for intra-class feature variations. The proposed method optimizes the subspaces using geometric optimization, with an efficient Riemannian SGD implementation tailored for Grassmannians. Experiments on image classification, feature transfer, and long-tail classification tasks show that the new method improves upon the vanilla softmax and cosine softmax.
Strengths And Weaknesses
Strengths:
(1) Introducing subspace learning in deep neural networks is interesting, and introducing geometric optimization with Riemannian SGD is a useful way to solve this problem.
(2) Experimental results on three tasks show improvements over traditional softmax methods.
(3) The paper is well written and organized.
Weaknesses:
(1) With larger k, the proposed method introduces many more parameters, not to mention the SVD operation in the Riemannian SGD solver. As in Table 2, without FN only k=16 shows improvements. This makes scalability very difficult for large-scale learning, for example, training face recognition models with millions of classes.
(2) If computation is not an issue, there are also traditional methods for expanding the class representative vectors, e.g., multiple experts and fusion. It is not clear whether the improvement is due to the enlarged classifier parameters or to the new learning framework. Therefore, it would be better to show a comparison against multiple experts.
(3) The FN loss contributes a lot to the improvements. However, it should be a general trick that can also be applied to the traditional softmax baselines, which should be reported as well for a fair comparison. Without the FN loss, the improvement of the proposed method appears to be limited.
(4) With k=1 the linear subspaces degenerate to class vectors. In that case, what is the difference between the proposed method with k=1 and the cosine softmax? They perform quite similarly to each other across all three tables.
(5) How would the proposed method incorporate margin parameters, and what would their effect be?
Clarity, Quality, Novelty And Reproducibility
Clarity: good.
Quality: fair.
Novelty: good.
Reproducibility: good. |
ICLR | Title
Grassmannian Class Representation in Deep Learning
Abstract
We generalize the class representative vector found in deep classification networks to linear subspaces and show that the new formulation enables the simultaneous enhancement of the inter-class discrimination and intra-class feature variation. Traditionally, the logit is computed by the inner product between a feature and the class vector. In our modeling, classes are subspaces and the logit is defined as the norm of the projection from a feature onto the subspace. Since the set of subspaces forms Grassmann manifolds, finding the optimal subspace representation for classes is to optimize the loss on a Grassmannian. We integrate the Riemannian SGD into existing deep learning frameworks such that the class subspaces in a Grassmannian are jointly optimized with other model parameters in Euclidean. Compared to the vector form, subspaces have two appealing properties: they can be multi-dimensional and they are scaleless. Empirically, we reveal that these distinct characteristics improve various tasks. (1) Image classification. The new formulation brings the top-1 accuracy of ResNet50-D on ImageNet-1K from 78.04% to 79.37% using the standard augmentation in 100 training epochs. This confirms that the representative capability of subspaces is more powerful than vectors. (2) Feature transfer. Subspaces provide freedom for features to vary and we observed that the intra-class variability of features increases when the subspace dimensions are larger. Consequently, the quality of features is better for downstream tasks. The average transfer accuracy across 6 datasets improves from 77.98% to 80.12% compared to the strong baseline of vanilla softmax. (3) Long-tail classification. The scaleless property of subspaces benefits classification in the long-tail scenario and improves the accuracy of ImageNet-LT from 46.83% to 48.94% compared to the standard formulation. With these encouraging results, we believe that more applications could benefit from the Grassmannian class representation. Codes will be released.
1 INTRODUCTION
The idea of representing classes as linear subspaces in machine learning dates back at least to 1973 (Watanabe & Pakvasa (1973)), yet it is mostly ignored in the current deep learning literature. In this paper, we revisit the scheme of representing classes as linear subspaces in the deep learning context. To be specific, each class $i$ is associated with a linear subspace $S_i$, and for any feature vector $x$, the i-th class logit is defined as the norm of the projection
$l_i := \| \mathrm{proj}_{S_i} x \|$. (1)
Since a subspace is a point in the Grassmann manifold (Absil et al. (2009)), we call this formulation the Grassmannian class representation. In the following, we answer the two critical questions,
1. Is Grassmannian class representation useful in real applications?
2. How to optimize the subspaces in training?
The procedure fully-connected layer → softmax → cross-entropy loss is the standard practice in deep classification networks. Each column of the weight matrix of the fully-connected layer is called the class representative vector and serves as a prototype for one class. This representation of classes has achieved huge success, yet it is not without imperfections.
In the study of transferable features, researchers noticed a dilemma that representations with higher classification accuracy on the original task lead to less transferable features for downstream tasks (Kornblith et al. (2021); Müller et al. (2019)). This is connected to the fact that they tend to collapse the intra-class variability of representations, resulting in a loss of information in the logits about the resemblances between instances of different classes. Furthermore, the neural collapse phenomenon (Papyan et al. (2020)) indicates that as training progresses, the intra-class variation becomes negligible, and features collapse to their class-means. So this dilemma inherently originates from the practice of representing classes by a single vector. The Grassmannian class representation sheds light on this issue, as features of each class are allowed to vary in a high-dimensional subspace without incurring losses in classification.
In the study of long-tail classification, researchers found that the norm of class representative vectors is highly related to the number of training instances in the corresponding class (Kang et al. (2019)), and the recognition accuracy is affected. To counter this effect, the class representative vector is typically rescaled to unit length during training (Liu et al. (2019)) or re-calibrated in an extra post-processing step (Kang et al. (2019)). In addition to these techniques, the Grassmannian class representation provides a natural and elegant solution, as subspaces are scaleless.
It is well known that the set of k-dimensional linear subspaces forms a Grassmann manifold, so finding the optimal subspace representation for classes amounts to optimizing on the Grassmann manifold. Thus for the second question, the natural solution is to use geometric optimization (Edelman et al. (1998)), which optimizes an objective function under the constraint of a given manifold. Points being optimized move along geodesics instead of following the direction of Euclidean gradients. The preliminary concepts of geometric optimization are reviewed in Section 3, and the technical details of subspace learning are presented in Section 4. We implemented an efficient Riemannian SGD for optimization on the Grassmann manifold, as shown in Algorithm 1, which integrates geometric optimization into deep learning frameworks so that both the linear subspaces on the Grassmannian and the model weights in Euclidean space are jointly optimized.
Going back to the first question, we experiment on three concrete tasks in Section 5 to demonstrate the practicality and effectiveness of Grassmannian class representation. We find that (1) Grassmannian class representation improves large-scale image classification accuracy. (2) Grassmannian class representation produces high-quality features that can better transfer to downstream tasks. (3) Grassmannian class representation improves the long-tail classification accuracy. With these encouraging results, we believe that Grassmannian class representation is a promising formulation and more applications may benefit from its attractive features.
2 RELATED WORK
Geometric Optimization Edelman et al. (1998) developed the geometric Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds in their seminal paper. Riemannian SGD was introduced in Bonnabel (2013) with an analysis of convergence, and there are variants such as Riemannian SGD with momentum (Roy et al. (2018)) or adaptive versions (Kasai et al. (2019)). Other popular Euclidean optimization methods such as Adam have also been studied in the Riemannian manifold context (Becigneul & Ganea (2019)). Lezcano-Casado & Martınez-Rubio (2019) study the special cases of SO(n) and U(n) and use the exponential map to enable Euclidean optimization methods for Lie groups. The idea was generalized into trivialization in Lezcano Casado (2019). Our Riemannian SGD Algorithm 1 is tailored for the Grassmannian, so we have a closed-form equation for geodesics. Other applications of geometric optimization include matrix completion (Mishra & Sepulchre (2014); Li et al. (2015b;a); Nimishakavi et al. (2018)), hyperbolic taxonomy embedding (Nickel & Kiela (2018)), etc. Hamm & Lee (2008) propose the Grassmann discriminant analysis, in which features are modeled as linear subspaces. These applications mostly use shallow models. Zhang et al. (2018) use subspaces to model clusters in unsupervised learning, which shares a similar spirit with our work. Simon et al. (2020) model classes as subspaces in few-shot learning; however, their subspaces are computed from data matrices rather than explicitly parametrized and learned. Roy et al. (2019) use the Stiefel manifold to construct the Mahalanobis distance matrix in Siamese networks in order to improve feature embeddings in deep metric learning.
Orthogonal Constraints in Deep Learning There are works that enforce orthogonality on weights, which study the regularization effect of orthogonal constraints. In contrast, we use orthogonal matrices as the numerical representation of the geometric object of subspaces and focus on the representation of classes. The approaches for enforcing orthogonality include regularizations (Arjovsky et al. (2016); Xie et al. (2017a); Bansal et al. (2018); Qi et al. (2020); Wang et al. (2020), etc.), geometric constraints (Ozay & Okatani (2018); Harandi & Fernando (2016)) and paraunitary systems (Su et al. (2022)). Orthogonally constrained data is also explored by Huang et al. (2018).
Improving Diversity in Feature Learning The Grassmannian class representation encourages intra-class variation implicitly by providing a subspace in which features can vary. In metric learning, there are efforts to explicitly encourage feature diversity. For example, the SoftTriple loss (Qian et al. (2019)) models each class as local clusters with several centers. Zhang et al. (2017) use a global orthogonal regularization to encourage local descriptors to spread out in the feature space. Yu et al. (2020) propose to learn low-dimensional structures from the maximal coding rate reduction principle. Their subspaces are estimated using PCA on feature vectors after training. In our formulation, subspaces are directly optimized in the Grassmann manifold during training.
Normalized Classification Weights Normalizing class representative vectors has been found useful in representation learning (Wang et al. (2017; 2018); Deng et al. (2019)) and long-tail classification (Liu et al. (2019); Wang et al. (2021)). However, works such as ArcFace (Deng et al. (2019)) focus on adding an extra margin to suppress intra-class variance. In contrast, our subspace formulation encourages intra-class variation.
3 PRELIMINARIES
In this section, we briefly review the essential concepts in geometric optimization. A detailed exposition can be found in Edelman et al. (1998) and Absil et al. (2009). Given an n-dimensional Euclidean space $\mathbb{R}^n$, the set of k-dimensional linear subspaces forms the Grassmann manifold $G(k, n)$. A computation-friendly representation for a subspace $S \in G(k, n)$ is an orthonormal matrix $S \in \mathbb{R}^{n \times k}$, where $S^T S = I_k$ and $I_k$ is the $k \times k$ identity matrix. Columns of the matrix $S$ can be interpreted as an orthonormal basis for the subspace. The matrix representation is not unique, as right multiplying by an orthonormal matrix gives a new matrix representing the same subspace. Formally, the Grassmannian is a quotient space of the Stiefel manifold and the orthogonal group, $G(k, n) = \mathrm{St}(k, n)/O(k)$, where $\mathrm{St}(k, n) = \{X \in \mathbb{R}^{n \times k} \mid X^T X = I_k\}$ and $O(k) = \{X \in \mathbb{R}^{k \times k} \mid X^T X = I_k\}$. When the context is clear, we use the notation of the space and one of its matrix representations $S$ interchangeably. The tangent space of the Grassmann manifold at $S$ consists of all $n \times k$ matrices $T$ such that $S^T T = 0$. Given a function $f : G(k, n) \to \mathbb{R}$ defined on the Grassmann manifold, the Riemannian gradient of $f$ at a point $S \in G(k, n)$ is given by (Edelman et al., 1998, Equ. (2.70)),
$\nabla f(S) = f_S - S S^T f_S$, (2)
where $f_S$ is the Euclidean gradient with elements $(f_S)_{ij} = \partial f / \partial S_{ij}$. When performing gradient descent on the Grassmann manifold, suppose the current point is $S$ and the current Riemannian gradient is $G$; then the next point is the endpoint of $S$ moving along the geodesic toward the tangent $G$ with some step size. The formula for the geodesic is given by (Edelman et al., 1998, Equ. (2.65)),
$S(t) = (S V \cos(t\Sigma) + U \sin(t\Sigma)) V^T$, (3)
where $U \Sigma V^T = G$ is the thin singular value decomposition of $G$.
4 LEARNING THE GRASSMANNIAN CLASS REPRESENTATION
Denote the weight of the last fully-connected layer in a classification network by $W \in \mathbb{R}^{n \times C}$ and the bias by $b \in \mathbb{R}^C$, where $n$ is the dimension of features and $C$ is the number of classes. The i-th column vector $w_i$ of $W$ is called the i-th class representative vector. The i-th logit is computed as the inner product between a feature $x$ and the class vector (and optionally offset by a bias $b_i$), namely $w_i^T x + b_i$. We extend this well-established formula to a multi-dimensional subspace form
$l_i := \| \mathrm{proj}_{S_i} x \|$, (4)
where $S_i \in G(k, n)$ is a k-dimensional subspace of the n-dimensional feature space. We call $S_i$ the i-th class representative space, or class space in short. Comparing the new logit to the standard one, the inner product of the feature $x$ with the class vector is replaced by the norm of the subspace projection $\mathrm{proj}_{S_i} x$, and the bias term is omitted. We found that re-normalizing features to a constant length $\gamma$ improves training. Incorporating this, Equation (4) becomes $\left\| \mathrm{proj}_{S_i} \frac{\gamma x}{\|x\|} \right\|$. To simplify notation, we assume feature $x$ has been properly re-normalized throughout this paper unless otherwise specified.
The application of the subspace class representation requires two modifications to an existing network. Firstly, the last fully-connected layer is replaced by its geometric counterpart, which is detailed in Section 4.1. The new geometric layer will transform features to logits using Equation (4). Secondly, the optimizer should be extended to process the new geometric layer simultaneously, which is explained in Section 4.2. Parameters of the geometric layer are optimized using Geometric SGD, while all other parameters are optimized as usual using the standard SGD algorithm.
4.1 GRASSMANNIAN CLASS REPRESENTATION
Suppose for class $i$, $i = 1, 2, \dots, C$, its subspace representation is $S_i \in G(k_i, n)$, where the dimension $k_i$ is a hyperparameter and is fixed during training. Then the tuple of subspaces $(S_1, S_2, \dots, S_C)$ will be optimized in the product space $G(k_1, n) \times G(k_2, n) \times \cdots \times G(k_C, n)$. Denote a matrix instantiation of $S_i$ by $S_i \in \mathbb{R}^{n \times k_i}$, whose column vectors form an orthonormal basis of the subspace; then we concatenate the matrices into a big matrix
$S = [S_1\; S_2\; \cdots\; S_C] \in \mathbb{R}^{n \times (k_1 + k_2 + \cdots + k_C)}$. (5)
The matrix $S$ contains the parameters that are optimized numerically. For a feature $x$, the product $S_i^T x$ gives the coordinates of $\mathrm{proj}_{S_i} x$ under the orthonormal basis formed by the columns of $S_i$. By the definition in Equation (4), the logit for class $i$ and feature $x$ is computed by
$l_i = \| \mathrm{proj}_{S_i} x \| = \| S_i^T x \|$. (6)
Grassmannian Fully-Connected Layer We can implement the geometric fully-connected layer using the plain old fully-connected layer. The shape of the weight $S$ is $n \times (k_1 + k_2 + \cdots + k_C)$, as shown in Equation (5). In the forward pass, the input feature is multiplied with the weight matrix to get a temporary vector $t = S^T x$; then the first element of the output is the norm of the sub-vector $(t_1, \dots, t_{k_1})$, the second element of the output is the norm of $(t_{k_1+1}, t_{k_1+2}, \dots, t_{k_1+k_2})$, and so on.
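A minimal PyTorch sketch of this layer for the equal-dimension case (our illustration, not the paper's released code; it assumes the input feature has already been re-normalized):

import torch
import torch.nn as nn

class GrassmannianFC(nn.Module):
    # Logits l_i = ||S_i^T x||; all classes share subspace dimension k here.
    def __init__(self, feat_dim, num_classes, k):
        super().__init__()
        # Initialize each class block as an orthonormal n x k matrix.
        w = torch.linalg.qr(torch.randn(num_classes, feat_dim, k)).Q
        self.weight = nn.Parameter(w)  # shape (C, n, k)

    def forward(self, x):                                # x: (B, n)
        t = torch.einsum("bn,cnk->bck", x, self.weight)  # projection coords
        return t.norm(dim=-1)                            # logits: (B, C)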
Parameter Initialization Each matrix instantiation of a subspace should be initialized as an orthonormal matrix. The geometric optimization algorithm described in Section 4.2 ensures orthonormality during training. Specifically, for the Grassmannian fully-connected layer, each block $S_i$ of the weight $S$ in Equation (5) is orthonormal. The whole matrix $S$ need not be orthonormal.
4.2 OPTIMIZE THE SUBSPACES
Geometric optimization optimizes functions defined on manifolds. The key step is to find the Riemannian gradient of the loss function and then descend along the geodesic. Here the manifold in question is the Grassmannian $G(k, n)$. As an intuitive example, $G(1, 2)$ consists of all lines through the origin in a two-dimensional plane. We can visualize it as a unit circle where each point on the unit circle represents the line passing through it. Antipodal points represent the same line. To illustrate
Algorithm 1 An Iteration of the Riemannian SGD with Momentum for Grassmannian at Step t
Input: Learning rate $\gamma > 0$, momentum $\mu \in [0, 1)$, Grassmannian weight matrix $S^{(t)} \in \mathbb{R}^{n \times k}$, momentum buffer $M^{(t-1)} \in \mathbb{R}^{n \times k}$, Euclidean gradient $D \in \mathbb{R}^{n \times k}$.
1: Compute the Riemannian gradient $G \leftarrow (I_n - S S^T) D$. (Equation (8))
2: Approximately parallel transport $M$ to the tangent space of the current point $S^{(t)}$ by projection: $M \leftarrow (I_n - S S^T) M^{(t-1)}$. (11)
3: New momentum $M^{(t)} \leftarrow \mu M + G$. (PyTorch version)
4: Move along the geodesic using Equation (3): if $U \Sigma V^T = M^{(t)}$ is the thin singular value decomposition, then $S^{(t+1)} \leftarrow (S^{(t)} V \cos(\gamma \Sigma) + U \sin(\gamma \Sigma)) V^T$.
5: (Optional) Re-orthogonalize $S^{(t+1)}$ by QR decomposition. (For numerical stability)
how geometric optimization works, we define a toy problem on $G(1, 2)$ that maximizes the norm of the projection of a fixed vector $x_0$ onto a line through the origin, namely
$\max_{S \in G(1,2)} \| \mathrm{proj}_{S} x_0 \|$. (7)
As shown in Figure 1, we represent $S$ with a unit vector $w \in S$. Suppose at step $t$ the current point is $w^{(t)}$; then it is easy to compute that the Euclidean gradient at $w^{(t)}$ is $d = x_0$, and the Riemannian gradient $g$ is the Euclidean gradient $d$ projected onto the tangent space of $G(1, 2)$ at the point $w^{(t)}$. The next iterate $w^{(t+1)}$ is obtained by moving $w^{(t)}$ along the geodesic toward the direction $g$. Without geometric optimization, the next iterate would lie at $w^{(t)} + \gamma d$, jumping outside of the manifold.
The following proposition computes the Riemannian gradient we need. Proposition 1. Let $S \in \mathbb{R}^{n \times k}$ be a matrix instantiation of a subspace $S \in G(k, n)$, and let $x \in \mathbb{R}^n$ be a vector in the Euclidean space; then the Riemannian gradient $G$ of $l(S, x) = \| \mathrm{proj}_{S} x \|$ w.r.t. $S$ is
$G = \frac{1}{l} (I_n - S S^T) x x^T S$. (8)
Proof. Rewrite $\| \mathrm{proj}_{S} x \| = \sqrt{x^T S S^T x}$, and compute the Euclidean derivatives as
$\frac{\partial l}{\partial S} = \frac{1}{l} x x^T S, \quad \frac{\partial l}{\partial x} = \frac{1}{l} S S^T x$. (9)
Then Equation (8) follows from Equation (2).
We give a geometric interpretation of Proposition 1. Let $w_1$ be the unit vector along the direction $\mathrm{proj}_{S} x$, and expand it to an orthonormal basis of the subspace, say $\{w_1, w_2, \dots, w_k\}$. Since the Riemannian gradient is invariant to the matrix instantiation, we can set $S = [w_1\; w_2\; \cdots\; w_k]$. Then Equation (8) becomes
$G = [\,(I_n - S S^T) x \;\; 0 \;\; \cdots \;\; 0\,]$, (10)
since $w_i \perp x$, $i = 2, 3, \dots, k$ and $w_1^T x = l$. Equation (10) shows that in the single-sample case, only one basis vector $w_1$ needs to be rotated towards the vector $x$, where $w_1$ is the unit vector in the subspace that is closest to $x$.
Riemannian SGD During training, parameters of non-geometric layers are optimized as usual using the vanilla SGD algorithm. For geometric layers such as the Grassmannian fully-connected layer, their parameters are optimized using the Riemannian SGD algorithm. The pseudo-code of the Riemannian SGD with momentum, which we implemented in our experiments, is described in Algorithm 1. We only show the code for the single-sample, single Grassmannian case. It is trivial to extend them to the batch version and the product of Grassmannians. Note that in step 2, we use projection to approximate the parallel translation of momentum for efficiency, and in step 5 an optional extra orthogonalization can improve numerical stability. The momentum update formula is adapted from the PyTorch implementation of the vanilla SGD. Weight decay does not apply here since spaces are scaleless. Algorithm 1 works together with the vanilla SGD and modifies the gradient from Euclidean to Grassmannian on-the-fly for geometric parameters.
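For concreteness, a single-sample, single-block sketch of Algorithm 1 might look as follows (our illustration; the function name is not from the paper):

import torch

@torch.no_grad()
def riemannian_sgd_step(S, M, D, lr, momentum):
    # S: subspace block (n, k); M: momentum buffer; D: Euclidean gradient.
    G = D - S @ (S.T @ D)                    # step 1: Riemannian gradient
    M = momentum * (M - S @ (S.T @ M)) + G   # steps 2-3: transport + momentum
    U, Sig, Vh = torch.linalg.svd(M, full_matrices=False)  # thin SVD of M
    V = Vh.T
    S_new = (S @ V @ torch.diag(torch.cos(lr * Sig))
             + U @ torch.diag(torch.sin(lr * Sig))) @ V.T  # step 4: geodesic
    return S_new, M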
5 EXPERIMENT
In this section, we study the influence of the Grassmannian class representation through experiments. Firstly, in Section 5.1, we show that the expressive power of the Grassmannian class representation improves accuracy in large-scale image classification. Secondly, in Section 5.2, we show that the Grassmannian class representation improves feature transferability by allowing larger intra-class variation. Thirdly, in Section 5.3, we demonstrate that the scaleless property of the Grassmannian class representation improves the classification accuracy in the long-tail scenario. Additional experiments on hyper-parameter choices and design decisions are presented in Appendix B.
We choose the vanilla softmax loss and the cosine softmax loss (without margin) as baselines since they reflect the current typical class representations. The former uses a plain vector and the latter uses a normalized vector. Other innovations on losses, such as adding margins (Deng et al. (2019)), re-balancing class-wise gradients (Wang et al. (2021)), are orthogonal to our contribution.
5.1 GRASSMANNIAN CLASS REPRESENTATION IMPROVES CLASSIFICATION ACCURACY
We apply the Grassmannian class representation to large-scale classification, where consistent improvement over baselines is shown. We then analyze the characteristics of both the learned features and the learned class subspaces. On the feature representation side, we compare the feature sparsity and intra-class variability. On the class representation side, we visualize the principal angles between any pair of classes, a concept that only appears when classes are Grassmannian.
Experimental Setting We use the ResNet50-D (He et al. (2019)) architecture as the base model, and benchmark on ImageNet-1K (Deng et al. (2009)). ResNet50-D is a slight modification of the original ResNet-50 (He et al. (2016)) with about 1% improvement in accuracy. ImageNet-1K is a large-scale image classification dataset containing 1.28M training images and 50K validation images in 1000 categories. We set γ = 25 for both the cosine softmax and the Grassmannian class representation. Our method replaces the last fully-connected layer of ResNet50-D with a Grassmannian fully-connected layer. To reduce the number of hyper-parameters, we simply set the subspace dimension k to be the same for all classes. We vary the hyper-parameter k in the range [1, 2, 4, 8, 16]. Since the feature dimension is 2048, the Grassmannian fully-connected layer has the geometry of $\prod_{i=1}^{1000} G(k, 2048)$.
Training Strategy All settings share the same training strategy. Each training run includes 100 epochs with a total batch size of 256 on 8 NVIDIA Tesla V100 GPUs. SGD is used for the baselines and the Riemannian SGD described in Algorithm 1 is used for the Grassmannian class representations. The momentum is 0.9 and the weight decay is 0.0001. The initial learning rate is 0.1 and then follows the cosine learning rate decay. The checkpoint with the best validation score is used. The input size is 224 × 224 and we use the standard augmentation for ImageNet, namely, random resized crop followed by random horizontal flip. The code is implemented using the mmclassification (MMClassification Contributors (2020)) package, and uses PyTorch as the training backend. Note that to keep the number of experiments tractable under our limited computation resources, we omitted many tricks that have been shown to improve representation learning, such as stronger augmentation (Cubuk et al. (2020)), longer training (Wightman et al. (2021)), adding margins (Deng et al. (2019)), etc., and focus on the improvements solely contributed by the Grassmannian formulation.
Feature Norm Regularization We noticed that the norm of the feature (before re-normalization) decreases as training progresses (see Appendix A for details). For example, in the case of k = 16, the average norm of the feature decreases from 1.051 at epoch 10 to 0.332 at epoch 100. Although the norm of the feature does not affect the inference result due to the feature re-normalization when computing logits, we empirically find that encouraging the norm to be larger than a constant L improves training. Specifically, we propose a feature norm regularization loss $\mathcal{L}_{FN}$,
$\mathcal{L}_{FN} = \frac{1}{K} \sum_i \frac{1}{2} \left( \mathrm{relu}(L - \|x_i\|) \right)^2$, (12)
where $x_i$ is the feature of the i-th sample before normalization and $K$ is the number of features with norm larger than $L$. In our experiments, $L = 1$ and the loss is directly added to the softmax loss
with equal weight. We also tried larger values of L and regularizing the norm of the feature on both sides; however, both degrade the performance.
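A sketch of this loss in PyTorch (our reading of Equation (12); the text defines K as the number of features with norm larger than L, and we clamp it to at least one to keep the average well-defined, which is an assumption on our part):

import torch

def feature_norm_loss(x, L=1.0):
    # x: (B, n) features before re-normalization.
    norms = x.norm(dim=1)
    K = (norms > L).sum().clamp(min=1)  # the clamp is our own safeguard
    return 0.5 * (torch.relu(L - norms) ** 2).sum() / K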
Results The validation accuracies of different models on ImageNet-1K is listed in Table 1. All models with the Grassmannian class representation achieve higher top-1 and top-5 accuracies than the vanilla softmax and the cosine softmax. A general trend is that, with larger subspace dimension k, the accuracy improvement is greater. When subspace dimension is 16, the top-1 accuracy is 79.21%, which is 1.17% points higher than the vanilla softmax loss. With feature norm regularization, the top-1 accuracy further improves from 79.12% to 79.37% for dimension 8.
Intra-Class Variability Increases with Dimension The intra-class variability is measured by the mean pair-wise angle (in degrees) between features within the same class, averaged over all classes. The inter-class variability is the average of the mean pair-wise angles between features from different classes. Following the convention in the study of neural collapse (Papyan et al. (2020)), we use the globally centered training features to compute variabilities. Kornblith et al. (2021) showed that alternative objectives that improve accuracy, including label smoothing, dropout, sigmoid, cosine softmax, logit normalization, etc., collapse the intra-class variability in representations, which in consequence degrades the quality of the features on downstream tasks. However, this conclusion does not apply when the classes are modeled by subspaces. The intra-class variability does reduce from the baseline’s 60.12 to the Grassmannian formulation’s 56.52 when the subspace dimension is k = 1; however, as k increases, both the top-1 accuracy and the intra-class variability grow. This indicates that representing classes as subspaces enables the simultaneous improvement of class discriminative power and expansion of intra-class variability.
Feature Sparsity The feature sparsity is measured by the average percentage of zero activations on the validation set. As shown in Table 1, the features from the vanilla softmax network are very dense, with only 0.55% zero activations. The cosine softmax and the Grassmannian class representations all result in more sparse representations, with around 78% zero activations. The feature norm regularization decreases the sparsity by about a half.
Principal Angles Between Class Representative Spaces When classes are subspaces, relationships between two classes can be measured by k angles called principal angles, which contain richer information than a single angle between two class vectors. The principal angles between two k-dimensional subspaces S and R are recursively defined as (Absil et al. (2006))
$\cos(\theta_i) = \max_{s \in \mathcal{S}} \max_{r \in \mathcal{R}} s^T r = s_i^T r_i, \quad \text{s.t. } \|s\| = \|r\| = 1,\; s^T s_j = r^T r_j = 0,\; j = 1, \dots, i-1,$ (13)
for $i = 1, \dots, k$ and $\theta_i \in [0, \pi/2]$. In Figure 2, we illustrate the smallest and largest principal angles between any pair of classes for a model with k = 8. From the figure, we can see that the smallest principal angle reflects class similarity, and the largest principal angle is around $\pi/2$. A smaller angle means the two classes are correlated in some directions, and a $\pi/2$ angle means that some directions in one class subspace are completely irrelevant (orthogonal) to the other class.
5.2 GRASSMANNIAN CLASS REPRESENTATION IMPROVES FEATURE TRANSFERABILITY
In this section we compare the linear transferability of the features learned by different models trained on the ImageNet-1K dataset. The feature transfer benchmark includes CIFAR-10 (Krizhevsky et al. (2009)), CIFAR-100 (Krizhevsky et al. (2009)), Food-101 (Bossard et al. (2014)), Oxford-IIIT Pets (Parkhi et al. (2012)), Stanford Cars (Krause et al. (2013)), and Oxford 102 Flowers (Nilsback & Zisserman (2008)). For each transfer dataset, we use the same trained models as in Table 1 to extract features. Then all features are normalized to unit length. We fit a linear SVM with a one-vs-rest multi-class policy on the training set, and report the accuracies on the test sets. The regularization hyper-parameter for the SVM is grid searched with candidates [0.1, 0.2, 0.5, 1, 2, 5, 10, 15, 20] and determined by five-fold cross-validation on the training set.
Results As shown in Table 2, the cosine softmax and the Grassmannian with subspace dimension k = 1 have comparable transfer performance, but both are lower than the vanilla softmax. However, when the subspace dimension increases, the transfer performance gradually improves, and when k = 16, the transfer performance is on par with the vanilla softmax. The feature norm regularization improves the transfer quality, as shown in the k = 1, 8 cases. We hypothesize that this might relate to the fact that features with norm regularization are less sparse, so more information is encoded.
Class Separation The class separation is measured by the index $R^2$, which is defined as one minus the ratio of the average intra-class cosine distance to the overall average cosine distance (Kornblith et al., 2021, Eq. (11)). Kornblith et al. (2021) found that greater class separation $R^2$ is associated with less transferable features. This may explain the feature transfer performance of Grassmannian class representations. The vanilla softmax has lower separation (0.495) compared to the cosine softmax (0.528) and the Grassmannian class representation with subspace dimension k = 1 (0.534). From subspace dimension k = 1 to k = 16, the separation of the Grassmannian models decreases from a high value (0.534) to a low value (0.395). The change in class separation is roughly in line with the change in transfer performance.
5.3 SCALELESSNESS OF SUBSPACES IMPROVES LONG-TAIL RECOGNITION
We benchmark its effectiveness in long-tail classification using the ImageNet-LT dataset (Liu et al. (2019)). ImageNet-LT is a subset of ImageNet-1K, where the number of images per class ranges from 5 to 1280. There are 115.8K images in total, roughly 1/10 the size of ImageNet-1K. We use the same ResNet50-D networks as in Section 5.1. All training settings including the optimizer, augmentation, and initial learning rate are kept the same, except that we modify the total number of epochs to 200 and the learning rate is decayed by 1/10 at epochs 150, 180, and 195. The last checkpoint is used for evaluation. We use instance-balanced sampling, as it was reported by Kang et al. (2019) that class-balanced sampling and square-root sampling both degrade the performance.
We report the top-1 accuracies on the test set in Table 3. We find that both the cosine softmax and the Grassmannian class representation with a small subspace dimension improve the long-tail classification accuracy. Specifically, the cosine softmax is 1.62% higher in score compared to the vanilla softmax, and the Grassmannian class representation with subspace dimension k = 1 is 2.11% higher in score compared to the vanilla softmax. However, when the subspace dimension increases, the accuracy drops. We notice that for few-shot classes, there are not enough samples to learn a good higher-dimensional subspace for their representation, as the accuracy on few-shot classes degrades significantly when the dimension is large. Too little training data for a class is an example scenario where a larger dimension does not offer much help.
6 LIMITATION AND FUTURE DIRECTION
One problem that remains open is how to choose the optimal dimension. Currently, we treat it as a hyper-parameter and decide it through experiments. On the computational side, geometric optimization incurs some overhead since it involves an SVD. This might hinder the training speed when k is very large. The Grassmannian class representation allows for greater intra-class variability, but we did not explicitly promote the intra-class variability in any form. It will be very interesting to explore ways to explicitly encourage intra-class variability. For example, a potential way is to combine it with self-supervised learning. We hope our work will stimulate progress in these directions.
7 CONCLUSION
In this work, we proposed to use linear subspaces as the class prototypes in deep neural networks. The geometric structures of the related Grassmannian fully-connected layer and the Grassmannian convolutional layer are products of Grassmannians. We optimize the subspaces using geometric optimization and provide an efficient Riemannian SGD implementation tailored for Grassmannians. We apply the new formulation to large-scale image classification, feature transfer, and long-tail classification tasks. Experiments demonstrate that the new Grassmannian class representation is able to improve performance in these settings.
A TECHNICAL DETAILS
Alternative Implementation of Riemannian SGD Step 4 of Algorithm 1 is called retraction in geometric optimization. There are alternative implementations of retraction other than moving parameters along the geodesic. For example, one can replace step 4 with a Euclidean gradient update followed by re-orthogonalization via QR decomposition in step 5. The subspace parameter may move away from the Grassmannian after the Euclidean gradient update, but it will be pulled back to the manifold by the QR re-orthogonalization (for details see Absil et al. (2009, Equ. (4.11))). For ease of reference, we call this version of Riemannian SGD the “Algorithm 1 variant”. We compare the two implementations in the first two rows of Table 4. The results show that the Grassmannian class representation is effective with both versions of Riemannian SGD.
Necessity of Grassmannian Formulation and Geometric Optimization To show the necessity of constraining the subspace parameters to lie in the Grassmannian, we replace the Riemannian SGD with the vanilla SGD and compare the two. Note that with SGD, the logit formula $\|S_i^T x\|$ no longer means the projection norm because $S_i$ is not orthogonal anymore. The result is shown in the third row of Table 4, from which we observe a significant performance drop for the unconstrained setting.
Numerical Stability of Algorithm 1 The numerical stability issue is caused by the accumulation of tiny computational errors in Equation (3). After many iterations, the resulting matrix S might not be perfectly orthogonal. For example, after 100, 1000, and 5000 iterations of the Grassmannian ResNet50-D with subspace dimension k = 8, we observed that the error $\max_i \|S_i^T S_i - I\|_\infty$ is 1.9e-5, 9.6e-5, and 3.7e-4, respectively. After 50 epochs, the error accumulates to 0.0075. One can run step 5 every 100 iterations to keep the error at a low level, and the computational cost is negligible. For this reason, we marked this step as “optional”.
Decreasing Feature Norm During Training We show the change of the average feature norm on the validation set of ImageNet from epoch 10 to epoch 100 in Figure 3. The subspace dimension is k = 16.
B HYPER-PARAMETERS AND DESIGN DECISIONS
Choice of Gamma We use γ = 25 throughout the main text. Here we give more results with different choices of γ for subspace dimension k = 8 in Table 5. Because we conducted this set of experiments in an early exploration stage, the learning rate decay policy is to divide by 10 at epochs 30, 60, and 90, which is different from our main results using the cosine learning rate schedule. The top-1 accuracy is slightly lower than the cosine learning rate counterpart. Other training settings such as augmentation are the same as in Table 1.
Importance of Re-Normalizing Features Re-normalizing the feature is critical to effectively learn the class representative subspaces. We provide training results without feature re-normalization in Table 6. There is a significant performance drop without re-normalization. For reference, the cosine softmax also requires feature re-normalization for effective learning.
Importance of Joint Training Jointly training the subspaces and the features is essential. To support this claim, we add an experiment that only fine-tunes the class subspaces from weights pre-trained using the regular softmax (third row of Table 7). For comparison, we also add another experiment that fine-tunes all parameters (fourth row of Table 7). We find that if the features are fixed, changing the regular fc to the geometric version does not increase performance noticeably (top-1 from 78.04% to 78.14%). But when all parameters are free to learn, the pre-trained weights are a better initialization than the random initialization (top-1 from 79.12% to 79.44%).
More Results of FN We present more results using the feature norm regularization trick in Table 8. From the results, we observe that FN also works for the baseline cosine softmax. For Grassmannian + FN, the performance reaches its peak at dimension k = 8 and then decreases at k = 16.
Stronger Augmentation Improves Accuracy Generally speaking, stronger augmentation mitigates the overfitting problem and benefits models with larger capacity. To demonstrate the effect of stronger augmentations, we run experiments using RandAug (Cubuk et al. (2020)) in Table 9. We can see that stronger augmentation indeed further increases the accuracy. Together with longer training and SyncBN, the top-1 accuracy for ResNet50-D reaches 80.17%.
C MORE BASELINES
We have compared the proposed method with the vanilla softmax and the cosine softmax in the main text. In this section we compare with baselines that use the same number of parameters, and run experiments on different network structures.
Multi-FC We add multiple classification fc layers to the network. During training, these independent fcs are trained side by side, and their losses are averaged. During testing, the logits are first averaged and then passed through softmax to output the prediction probability.
SoftTriple In the SoftTriple loss (Qian et al. (2019)), each class is modeled by multiple centers. The logit is a weighted average of logits computed from individual class centers. We adapted the official code into our codebase to train on the ImageNet dataset. The recommended parameters are used. Specifically, λ = 20, γ = 0.1, τ = 0.2 and δ = 0.01.
For the above two settings, we use the same training protocols as in Table 1. Results are shown in Table 10, from which we find that the Grassmannian class representation is the most effective one.
More Architectures We show experiments on ResNet101-D and ResNeXt (Xie et al. (2017b)) in Table 11. The training settings are the same as in Table 1, namely, we use the standard augmentation, cosine learning rate schedule, and train for 100 epochs. The results show that our formulation is effective across different model architectures.
D TRAINING SPEED AND SVD SPEED
During inference, the computational cost is k times that of the vanilla softmax. Since it is mostly matrix multiplication, GPU acceleration can speed it up even further. For example, on a V100 GPU, the average time of multiplying a 1000 × 2048 matrix with a 2048-dimensional vector is 20 ± 2.9 µs, while multiplying an 8000 × 2048 matrix with a 2048-dimensional vector takes about 105 ± 7.6 µs. The cost is negligible compared to the network forward time.
During training, the most costly operation in Algorithm 1 is the SVD. We measure the actual iteration time during training in Table 12. We observe that when k is small, it is as fast as the vanilla softmax. When k = 8, the full training needs roughly 1.7x the time of the vanilla softmax (this can be reduced greatly with a newer version of PyTorch, as we discuss below).
Since the release of PyTorch 1.13, the fast approximate SVD algorithm GESVDA has been supported. We saw great speed improvements in the cases of k = 8 and k = 16. The benchmark times are shown in Table 13. With computational optimizations such as this, we expect the computational cost of the SVD to be minimal for k ≤ 32.
E PYTORCH CODE FOR RIEMANNIAN SGD
We provide a sample implementation of Algorithm 1 in Figure 4 using PyTorch (Paszke et al. (2019)). The sample code checks whether a parameter is geometric by checking whether it has a ‘geometry’ attribute. If not, it runs the original SGD on that parameter. If the ‘geometry’ property is not None, then it is a list of numbers indicating the dimensions of the class representative subspaces for all classes. If all the dimensions are the same, the code takes the batch version (line 23 of the code in Figure 4). Otherwise, it takes the for-loop version (line 46 of the code in Figure 4). | 1. What is the main contribution of the paper regarding class representation in deep learning?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the novelty, clarity, quality, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the motivation, formulation, and empirical results of the paper? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a method to represent a class as a subspace in the deep learning regime. The contributions of this paper are the formulation of classes as subspaces, the Grassmannian layer to update subspaces, and learning the Grassmannian layer using constrained optimization. The core contribution is to represent a class as a subspace, where the softmax layer can easily be replaced by fully-connected networks with the projection between subspaces and the input feature. The experiments show strong performance improvements in some cases on large datasets (e.g., ImageNet). As an example of its effectiveness compared to the softmax counterpart, the top-1 accuracy on ImageNet-1K is improved by 1.3%.
Strengths And Weaknesses
Strengths:
The feature norm regularization is somewhat novel for training the neural network under geometry constraints.
This work has a strong performance in terms of empirical results compared to prior methods in image classification and transfer learning.
The empirical results also show that the representations using the Grassmannian layer are sparser.
Weaknesses:
This work has an unclear motivation for why second-order representations in the form of linear subspaces yield better performance than first-order representations. There are no motivating examples or theories about when subspaces are suitable for representing classes. Citing the work of Watanabe and Pakvasa does not directly explain why the linear subspace approach is a better model for representing classes in the era of deep learning.
The novelty of this work is marginal, with many overlapping points and contributions compared to the work of Simon et al. The problems of image classification and transfer learning are covered by the work of Simon et al., which demonstrates the superiority of linear subspaces over prototypes (a single vector) for representing classes in few-shot learning. In the experiments, this work does not discuss or even compare with that method.
The proposed method updates linear subspaces in classifiers with some constraints, and this is also not novel, as other works by Harandi and Fernando, “Generalized BackPropagation Etude De Cas: Orthogonality,” and Roy et al., “Siamese Networks: The Tale of Two Manifolds,” have discussed similar concepts (i.e., geometry-aware layers) and implemented them for image classification, but this work has no comparison to these prior works.
Moreover, the types of data that can feasibly be represented using the linear subspace method are also not discussed in the paper. Is the proposed method only applicable to visual data?
Regarding the statement “The whole matrix S need not be orthonormal”: for discriminative purposes, even though it needs more investigation, the straightforward idea is to force subspaces to be as different as possible to avoid collapse (see Simon et al., Arjovsky et al., Ozay and Okatani, and Harandi et al.). However, this work neither discusses this idea nor includes any mechanism to discriminate between subspaces.
No properties of the proposed Grassmannian class representation layers are given. For instance, what are the properties and benefits of preserving the orthogonality of each subspace? What are the properties of not constraining subspaces from different classes to be orthogonal?
The design of this approach is somewhat limited as a neural network module. How would the proposed method be designed with a multi-layer version of fully-connected layers (if possible)?
The experiments require some prior methods for comparison with some variants in class representations, e.g., prototypes (the average of all representations within a class), non-learnable subspaces (a similar concept as in Simon et al.), .
The performance of long-tail classification is marginally improved compared to cosine softmax. That shows that the proposed method might not be quite effective in addressing such issue compared to transfer learning and common image classification.
Is there any comparison in terms of speed between softmax strategies and the Grassmanian one? The discussion of trade-off between the performance gain and the processing time is crucial for this type of method because it usually requires additional processing time especially with contstraint optimization.
The experiments are also lacking of comparison to some other models as a backbone. For instance, the proposed method can compare the methods using transformer models, another ResNet type (e.g., ResNet101), VGG, Inception.
The feature sparsity is not very clear, is that 78% zero activations on the feature before the Grasmannian layer? or the elements of the Grassmannian layer (i.e., each subspace)? or the output after the Grassmannian layer?
References:
Simon et al, “Adaptive Subspaces for Few-Shot Learning,“ CVPR, 2020.
Harandi et al., “Extrinsic Methods for Coding and Dictionary Learning on Grassmann Manifolds,” IJCV, 2015.
Arjovsky et al. "Unitary evolution recurrent neural networks," ICML, 2016.
Ozay and Okatani, "Training CNNs with normalized kernels," AAAI, 2018.
Clarity, Quality, Novelty And Reproducibility
The novelty of this work is considered marginal for the reasons explained in the weaknesses, the clarity of the statements needs to be improved, and there is currently no code for reproducibility. |
ICLR | Title
GEASS: Neural causal feature selection for high-dimensional biological data
Abstract
Identifying nonlinear causal relationships in high-dimensional biological data is an important task. However, current neural network based causality detection approaches for such data suffer from poor interpretability and cannot scale well to the high-dimensional regime. Here we present GEASS (Granger fEAture Selection of Spatiotemporal data), which identifies sparse Granger causal interacting features of high-dimensional spatiotemporal data by a single neural network. GEASS maximizes sparsity-regularized modified transfer entropy with a theoretical guarantee of recovering features with spatial/temporal Granger causal relationships. The sparsity regularization is achieved by a novel combinatorial stochastic gate layer to select sparse non-overlapping feature subsets. We demonstrate the efficacy of GEASS in several synthetic datasets and real biological data from single-cell RNA sequencing and spatial transcriptomics.
1 INTRODUCTION
Advances in single-cell omics research enable full characterizations of high-dimensional gene dynamics in biological systems on either a temporal or a spatial scale. An example of the temporal case is single-cell RNA sequencing (scRNA-seq) trajectories, where cells are sampled from a dynamical biological process, sequenced, and ordered based on either real sampled time or inferred pseudo-time (Cannoodt et al., 2016; Saelens et al., 2019). Gene dynamics along the specified cell order encodes information about causal regulation of the underlying biological process. An example of the spatial case is single-cell level spatial transcriptomics (e.g. SeqFISH+ (Eng et al., 2019), Merfish (Fang et al., 2022)), in which cells from a tissue slice are sequenced with their spatial coordinates preserved (Moses and Pachter, 2022; Rao et al., 2021; Palla et al., 2022). Spatial profiling allows investigations of the cellular interplay, corresponding to conditional gene expression change caused by neighborhood phenotypic states. However, despite the potential significance, data-driven causal discovery for such data remains largely unexplored, especially for spatial omics data.
Identification of causal regulatory patterns in such data can be reformulated into the general task of causal feature selection in observational data with intrinsic structure, e.g. spatial or temporal data. Identification of causal interactions in time series has led to valuable findings in multiple disciplines, including but not limited to economics, climate science, and biology (Hoover, 2006; Kamiński et al., 2001; Runge et al., 2019a).
Learning directed causal relationships in temporal/spatial data is feasible because time and space both induce asymmetric dependencies. In the case of time-series data, a feature in the future cannot affect past values of other features. For spatial data, a similar definition of causal dependency can be established (Herrera Gómez et al., 2014).
The concept of Granger causality was proposed in order to uncover such asymmetric causal dependencies (Granger, 1969; Shojaie and Fox, 2022). In time-series data, this translates to identifying one variable's causal relationship with other variables based on how well the historical observations of the other variables predict the variable's present value. The application of Granger causality in a spatial context corresponds to detecting significant relationships between neighboring observations of other variables and the specified variable (Mielke et al., 2020), which is a key insight used in recent works aiming to discover cellular interaction patterns in spatial omics data (Fischer et al., 2021; Valdés-Sosa et al., 18).
In the nonlinear regime, information-theoretic measures such as directed information, transfer entropy (Schreiber, 2000), and partial transfer entropy (Staniek and Lehnertz, 2008) are used as counterparts of linear Granger causality. Moreover, some works consider modeling conditional independence (CI) in time-series data to identify the underlying causal graph (Entner and Hoyer, 2010; Malinsky and Spirtes, 2018; Moneta et al., 2011; Runge et al., 2019a; Pfister et al., 2019; Mastakouri et al., 2021). Two examples are VarLINGAM (Hyvärinen et al., 2010) and PCMCI (Runge et al., 2019b), which are generalizations of LINGAM (Shimizu et al., 2006) and PC (Spirtes et al., 2000), respectively. Finally, multiple recent works have proposed neural network approaches to model nonlinear Granger causality, including MLP-, LSTM-, and neural-ODE based approaches, resulting in improved prediction power for nonlinear time-series dynamics (Li et al., 2017; Tank et al., 2021; Nauta et al., 2019; Yin and Barucca, 2022; Bellot et al., 2021).
Despite the success of these methods in various systems of interest, multiple challenges limit their use in high-dimensional biological datasets.
• Although linear methods (LINGAM, linear Granger causality) have succeeded in various settings and can potentially scale to high feature numbers, these methods may completely fail when the feature dependency in data is highly complex and nonlinear.
• As the number of conditional independencies generally scales exponentially, or at least polynomially, with the feature size, applying causal discovery methods based on CI tests to high-dimensional data is not realistic. In contrast, Granger-causality based methods are built with a prediction model for each feature in the data; the time complexity of solving the stacked prediction models for all features is polynomial in the feature size.
• In previous methods, the number of causal edges between features is assumed to be sparse (edge sparsity) to maximize interpretability of the identified causal graph. However, in biological data, there exists a large proportion of nuisance features. Also, one functional gene may activate a large number of downstream genes in neighboring cells. Sparsifying the number of interacting features (feature sparsity) has the potential to improve causal discovery in biological systems, which remains to be explored.
• While a large number of methods are designed for causal discovery in time-series data, only a limited number of present works aim for causal discovery in general graph-structured data. Time-series based methods cannot be directly adopted on data with multi-branch trajectory dynamics or spatial structures.
Our contribution. In this work, we present GEASS (Granger fEAture Selection of Spatiotemporal data), which identifies causally interacting features of high-dimensional temporal/spatial data by a single neural network. GEASS enforces the aforementioned feature sparsity instead of edge sparsity, thus selecting the most significant interacting features for downstream causal discovery. Our contributions are threefold.
1. Instead of performing causal discovery directly on the data, we formulate the task as two steps: causal feature selection and causal graph identification. We provide a novel solution to the causal feature selection problem in general graph-structured data by the use of modified transfer entropy maximization with theoretical guarantees.
2. In order to solve our proposed optimization problem, we design a novel combinatorial stochastic gate layer to select non-overlapping sparse feature sets with a newly designed initialization procedure.
3. We demonstrate the power of our method by benchmarking it on both temporal data and spatial data of multiple settings. Our method gives accurate and robust causal feature identification and reveals novel biology in real datasets.
1.1 RELATED WORKS
Neural Granger causality. Despite the large body of work based on linear Granger causal discovery, neural Granger causality remains an active area of research. Various neural network architectures, such as MLPs, sequential models, and attention-based architectures (Tank et al., 2021; Nauta et al., 2019; Khanna and Tan, 2019; Sun et al., 2021), have been proposed for nonlinear Granger causality discovery. A recent work uses the information of a proxy variable to learn a latent confounder for Granger causality by a dual-decoder neural network (Yin and Barucca, 2022). One recent biology-oriented work extends the definition of Granger causality to DAGs, where a linear graph neural network is proposed to model the underlying Granger causality (Wu et al., 2021). Meanwhile, a neural-ODE based approach has been proposed to reformulate the Granger causality problem in terms of local dependence graph identification (Bellot et al., 2021).
Causal feature selection. The task of causal feature selection has been considered by multiple groups. Most works in this category use constraint-based methods to identify each feature's causal relations with all other features, equivalent to identifying the whole causal graph structure; these include VARLINGAM, tsFCI, SVAR-FCI, and PCMCI (Hyvärinen et al., 2010; Entner and Hoyer, 2010; Malinsky and Spirtes, 2018; Moneta et al., 2011; Runge et al., 2019a). Meanwhile, seqICP focuses on identifying the direct or indirect cause of each feature, assuming sufficient interventions in the dataset (Pfister et al., 2019). SyPI tackles the causal feature selection problem without the assumption of causal sufficiency and avoids issues in multi-hypothesis testing by constructing the correct conditioning set (Mastakouri et al., 2021). Finally, Guo et al. (2022) consider dual correction of causal feature selection to control both false positive rates and false negative rates.
2 MODIFIED TRANSFER ENTROPY (MTE)
In order to tackle the issue that a neural network may overfit each per-feature prediction model and therefore overestimate the number of causal interactions, we need a prediction-free loss function that directly indicates causal significance. In this work, we propose a novel function, the modified transfer entropy (mTE), based on transfer entropy (Schreiber, 2000), as a metric of causal interaction significance.
Transfer entropy is an information-theoretic measure of cross dependence (Schreiber, 2000). Consider two vectorized time series $x_t$ and $y_t$ for $t \in \{1, \ldots, T\}$. In a Markovian model, the transfer entropy from $x$ to $y$ at time $t$ is defined as the mutual information between the present value $x_t$ and the future value $y_{t+1}$, conditioned on $y_t$ to eliminate possible autocorrelation: $\mathrm{TE}_t(x, y) = I(x_t; y_{t+1} \mid y_t)$. By the use of mutual information, transfer entropy is able to model general nonlinear dependencies beyond linear Granger causality. In this work, we further consider the generalization of transfer entropy to graph-structured $x^i$ and $y^i$, where $i$ denotes a vertex of the data graph $G = (V, E)$:
$$\mathrm{TE}_i(x, y) := I(x^i;\ y^{N(i)} \mid y^i), \quad \text{where } N(i) := \{ j \mid (i, j) \in E \}. \tag{1}$$
Note here the graph can be either directed (the time-series case) or undirected (the spatial case). In this study, we introduce a novel function, the modified transfer entropy, that enables the application of bivariate transfer entropy for causal discovery in high-dimensional data. Our key insight is to consider two feature subsets in the dataset that maximize the mutual information difference:

Definition 2.1. Let $X = [x^1 x^2 \ldots x^n] \in \mathbb{R}^{p \times n}$ be a matrix containing a graph-structured vector series $x^i$, with $i$ the vertices of the data graph $G = (V, E)$. Let $S_1$ and $S_2$ be two subsets of $\{1, 2, \ldots, p\}$. The modified transfer entropy $\mathrm{mTE}_i(S_1, S_2)$ and its maximum $\mathrm{mTE}^*_i$ are defined by

$$\mathrm{mTE}_i(S_1, S_2) := I(x^i_{S_1};\ x^{N(i)}_{S_2}) - I(x^i_{S_1};\ x^i_{S_2}); \qquad \mathrm{mTE}^*_i := \max_{S_1, S_2} \mathrm{mTE}_i(S_1, S_2). \tag{2}$$
Note that the mTE function requires strictly stronger dependence than the analogously defined transfer entropy $\mathrm{TE}_i(S_1, S_2)$, as shown by the proposition below (the proof is in Appendix A.1):

Proposition 2.2. $\forall S_1, S_2 \subset \{1, \ldots, p\}$: $\mathrm{mTE}_i(S_1, S_2) > 0 \Rightarrow \mathrm{TE}_i(S_1, S_2) > 0$.
Let $(S_1^*, S_2^*)$ be a maximizer with the smallest size of $|S_1 \cup S_2|$, and denote $S^* := S_1^* \cup S_2^*$ (note that $(S_1^*, S_2^*)$ may not be unique). Under the mild assumptions listed below, we are able to provide a theoretical justification for mTE maximization in the time-series setting (Theorem 2.4). A proof can be seen in Appendix A.3.
Assumptions:
A1-A3 Causal Markov assumption, faithfulness, and causal sufficiency for the causal graph.
A4 Ergodicity and stationarity of the stochastic process defined by the causal graph: the ensemble average equals the time average, and the functional relationships encoded by the causal graph do not change over time (or location). This also implies that $\mathrm{mTE}_i(S_1, S_2)$ is constant across $i$.
A5 DAG causal graph: we assume $X^T = [t_1, \ldots, t_m, u_{m+1}, \ldots, u_p]$ up to a permutation, where the $t_i$ are causally interacting features forming a directed acyclic graph (DAG), and the $u_k$ are nuisance features that may correlate with the $t_i$. An illustration based on the time-series setting can be seen in Figure 1.
A6 Interaction regularity: given two disjoint feature sets $A, B$ such that $A$ is a subset of the parent features of $B$ or $B$ is a subset of the child features of $A$, and conditioning on any other feature set $C$ such that $I(A^i; B^{N(i)} \mid C^i) > 0$ and $I(A^i; B^{N(i)} \mid C^{N(i)}) > 0$, we have:

$$\forall i, \quad \min\{ I(A^i; B^{N(i)} \mid C^i),\ I(A^i; B^{N(i)} \mid C^{N(i)}) \} > I(A^i; B^i \mid C^i). \tag{3}$$

Remark 2.3. Our only additional assumption relative to the prevalent literature (Pearl, 2009; Spirtes et al., 2000) is A6, which aims to filter out features with spurious causations and to regularize the algorithmic complexity of causal interactions, thus enabling information-theoretic analysis. A6 has direct connections with the concept of conditional transfer entropy (Faes et al., 2016; Shahsavari Baboukani et al., 2020); further discussion can be found in Appendix A.2.

Theorem 2.4. Given A1-A6, $S^* := S_1^* \cup S_2^* \subseteq \{1, \ldots, m\}$ (the index set of true interacting features described in A5). Moreover, each feature in $S^*$ is connected to other features in the set $S^*$.
3 NEURAL OPTIMIZATION OF MODIFIED TRANSFER ENTROPY
With Theorem 3.1 stated below, we are able to give a theoretical guarantee for the $l_0$-penalized optimization of the mTE. A proof can be seen in Appendix A.4. Here $\odot$ stands for the Hadamard product.

Theorem 3.1. Assume A1-A6 hold and $f, g, h$ define one-to-one mappings on $X \odot 1_{S_1}$ (for $f$) or $X \odot 1_{S_2}$ (for $g, h$). Then $\exists \lambda > 0$ such that, for (4), any solution satisfies $S^* := S_1^* \cup S_2^* \subseteq \{1, \ldots, m\}$. Moreover, each feature in $S^*$ is connected to other features in the set.

$$\min_{f, g, h, S_1, S_2}\ -\left( I(f(x^i \odot 1_{S_1});\ h(x^{N(i)} \odot 1_{S_2})) - I(f(x^i \odot 1_{S_1});\ g(x^i \odot 1_{S_2})) \right) + \lambda |S_1 \cup S_2| \tag{4}$$
Remark 3.2. The estimation of mutual information by various approaches is an active field in itself (Belghazi et al., 2018; Hjelm et al., 2018; McAllester and Stratos, 2020; Zhang et al., 2019). In contrast, by this theorem we show that an accurate estimation of the transfer entropy (such as in Zhang et al. (2019)) may not be needed, as optimizing the upper bound of the modified transfer entropy automatically gives the best feature subset selection.

Remark 3.3. Our theoretical guarantee is derived based on one-to-one embeddings $f, g, h$. In a neural network, injectivity may be encouraged by various architecture designs yet may not perfectly hold. Empirically, we have found that the optimization of the mTE is robust to the embedding injectivity, compared with the original transfer entropy. This is due to our stricter design of the mTE function (Proposition 2.2) and is further illustrated by our experiments in the next section.
Given Theorem 3.1, we are able to construct a neural network for optimizing the proposed loss function. However, the estimation of mutual information is not directly tractable. Because mutual information is invariant under one-to-one transforms, we can restrict the function class of $f, g, h$ in the optimization problem (4) to flows transforming the original feature distributions into Gaussian distributions with fixed dimensionality. We are then able to formulate the target for neural network optimization via the explicit formula for the mutual information between Gaussians:

$$I(X, Y) = \frac{1}{2} \log \frac{\det \Sigma_X \det \Sigma_Y}{\det \Sigma_{[X, Y]}}.$$

The Gaussian regularization can be applied either by penalizing the discrepancy between the embedding distributions $[f, g, h]$ and Gaussian distributions, or by applying an adversarial training procedure. In this work we implement the former approach, constructing the means and covariance matrices of the concatenated embedding as learnable parameters and minimizing the cross entropy between the target distributions and the parametrized Gaussian distributions.
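As a concrete illustration, here is a minimal NumPy sketch of this closed-form Gaussian mutual information estimator (the function name and the small ridge term are our own choices, not from the paper):

```python
import numpy as np

def gaussian_mutual_information(X, Y, eps=1e-6):
    """Empirical Gaussian MI: I(X, Y) = 0.5 * log(det(Sx) det(Sy) / det(Sxy)).

    X: (n, dx) and Y: (n, dy) arrays of embedding samples (dx, dy >= 2).
    A small ridge `eps` keeps the covariance estimates positive definite.
    """
    XY = np.concatenate([X, Y], axis=1)
    Sx = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])
    Sy = np.cov(Y, rowvar=False) + eps * np.eye(Y.shape[1])
    Sxy = np.cov(XY, rowvar=False) + eps * np.eye(XY.shape[1])
    # slogdet is numerically safer than log(det(...))
    return 0.5 * (np.linalg.slogdet(Sx)[1] + np.linalg.slogdet(Sy)[1]
                  - np.linalg.slogdet(Sxy)[1])
```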
3.1 COMBINATORIAL STOCHASTIC GATES
In order to solve the optimization problem, we need to learn two sparse sets $S_1, S_2$, which involves combinatorial optimization, making the task impractical for high-dimensional data. To overcome this issue, we use a stochastic gate based approach (Yamada et al., 2020; Lindenbaum et al., 2021), which performs a probabilistic relaxation of deterministic $l_0$ norms. To explicitly construct $S_1$ and $S_2$ by stochastic gates, we define two random vectors $T^1$ and $T^2$ ranging in $[0, 1]$, with lengths equal to the feature number and with each element independently sampled from the STG distribution defined as $T^i_d = \max(0, \min(1, \mu^i_d + \epsilon^i_d))$, where $\epsilon^i_d \sim N(0, \sigma_i^2)$ is sampled i.i.d. with fixed variance and $\mu^i_d$ is a parameter trainable by reparametrization (Miller et al., 2017; Figurnov et al., 2018).
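For illustration, a minimal PyTorch-style sketch of sampling such gates (the function name and the train/eval switch are our own assumptions):

```python
import torch

def sample_stg(mu, sigma, training=True):
    """Sample stochastic gates T_d = clip(mu_d + eps_d, 0, 1).

    mu: trainable gate means (one per feature); sigma: fixed noise scale.
    Gradients flow to `mu` through the reparametrized noise.
    """
    if training:
        eps = torch.randn_like(mu) * sigma
    else:
        eps = torch.zeros_like(mu)  # deterministic gates at evaluation time
    return torch.clamp(mu + eps, 0.0, 1.0)
```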
The new loss function with stochastic gates can be formulated as:

$$\mathbb{E}_{T^1, T^2}\left[ -\left( \hat{I}(f(\tilde{X}_{S_1});\ h(W \tilde{X}_{S_2})) - \hat{I}(f(\tilde{X}_{S_1});\ g(\tilde{X}_{S_2})) \right) \right] + \sum_{d=1}^{p} \left[ \lambda_1 \mathbb{P}(T^1_d > 0) + \lambda_2 \mathbb{P}(T^2_d \in (0, 1)) \right], \quad \text{s.t. } \tilde{X}_{S_1} = X \odot T^1 \odot T^2, \ \tilde{X}_{S_2} = X \odot T^1 \odot (1 - T^2). \tag{5}$$
Here $\hat{I}$ is defined as the empirical Gaussian mutual information, $\hat{I}(X, Y) = \frac{1}{2} \log \frac{\det \hat{\Sigma}_X \det \hat{\Sigma}_Y}{\det \hat{\Sigma}_{[X, Y]}}$, and $W$ is defined as the graph diffusion operator, $W x^i = x^{N(i)}$. In our construction, $T^1$ controls the sparsity of feature selection, while $T^2$ controls the expected overlap between $\tilde{X}_{S_1}$ and $\tilde{X}_{S_2}$. Denoting the Gaussian error function as $\operatorname{erf}(\cdot)$, the regularization term for the first layer takes the form

$$\sum_{d=1}^{p} \mathbb{P}(T^1_d > 0) = \sum_{d=1}^{p} \left( \frac{1}{2} + \frac{1}{2} \operatorname{erf}\!\left( \frac{\mu^1_d}{\sqrt{2}\, \sigma_1} \right) \right). \tag{6}$$

The regularization term for the second layer can be expressed as

$$\sum_{d=1}^{p} \mathbb{P}(T^2_d \in (0, 1)) = \sum_{d=1}^{p} \left[ \mathbb{P}(T^2_d > 0) - \mathbb{P}(T^2_d \geq 1) \right] = \frac{1}{2} \sum_{d=1}^{p} \left( \operatorname{erf}\!\left( \frac{\mu^2_d}{\sqrt{2}\, \sigma_2} \right) - \operatorname{erf}\!\left( \frac{\mu^2_d - 1}{\sqrt{2}\, \sigma_2} \right) \right). \tag{7}$$
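A short sketch of both penalties, assuming the erf-based forms in (6)-(7) above (function name and tensor conventions are ours):

```python
import math
import torch

def stg_regularizers(mu1, mu2, sigma1, sigma2):
    """Expected-L0 penalties for the two gate layers, via the Gaussian CDF.

    reg1: sum_d P(T^1_d > 0), the sparsity penalty of the first layer (eq. 6).
    reg2: sum_d P(T^2_d in (0,1)), pushing the second layer toward hard 0/1 gates (eq. 7).
    """
    reg1 = (0.5 + 0.5 * torch.erf(mu1 / (math.sqrt(2.0) * sigma1))).sum()
    reg2 = 0.5 * (torch.erf(mu2 / (math.sqrt(2.0) * sigma2))
                  - torch.erf((mu2 - 1.0) / (math.sqrt(2.0) * sigma2))).sum()
    return reg1, reg2
```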
We can show strong consistency of our stochastic-gate based feature selection scheme via the theorem below (a proof can be seen in Appendix A.5):

Theorem 3.4. Assume A1-A6 hold and $f, g, h$ are one-to-one Gaussian embeddings as described above. For the optimal solution of (5), denote a sample of the stochastic gates by $(T^1, T^2)$ and the ground-truth interacting feature set by $S$. Then there exist $\lambda_1, \lambda_2 > 0$ for (5) such that, as $n \to \infty$,

$$\forall i \in \{0, 1\}, \quad \mathbb{P}(B_i \subseteq S) \xrightarrow{a.s.} 1, \quad \text{where } B_i := \{ d \mid T^1_d > 0,\ T^2_d = i \}. \tag{8}$$
In practice, we have also observed that the method's solution depends strongly on the stochastic gate initialization. We therefore provide a heuristic initialization scheme that shows superior empirical performance; details can be found in Appendix B.
3.2 PROPOSED NETWORK ARCHITECTURE
Our proposed network architecture is summarized in Figure 2. For an input dataset $X \in \mathbb{R}^{p \times n}$ and its corresponding graph adjacency matrix $A \in \mathbb{R}^{n \times n}$, we first pass each feature through two sequential stochastic gate layers $T^1, T^2$. The $l_0$ penalty is applied to the first stochastic gate layer, while the second is regularized with the 0-1 penalty, consistent with the descriptions in the previous section.

After gating each feature, and denoting $\hat{T}^2 = 1 - T^2$, we have two intermediate embeddings defined by $\tilde{X}_{S_1} = X \odot T^1 \odot T^2$ and $\tilde{X}_{S_2} = X \odot T^1 \odot \hat{T}^2$, respectively. These two embeddings are passed through MLP1 ($f$) and MLP2 ($g$) to generate Gaussian embeddings $f(\tilde{X}_{S_1})$, $g(\tilde{X}_{S_2})$, corresponding to (5). For the design of the function $h$, we consider two crucial elements: 1. an additional layer to aggregate the information from the different nodes in $x^{N(i)}$; 2. the injectivity of the mappings $f, g, h$. Note that $f, h$ in (5) are automatically enforced to be injective on interacting features in order to maximize the first term of the mTE, but $g$ is not. Therefore, our final design of $h$ is the composition of first applying $g$ (enforcing the injectivity of $g$), then a mean aggregation layer without self-loops, consistent with the GCN design (Kipf and Welling, 2016) and implemented by multiplying with the adjacency matrix $A$, and finally another MLP layer (MLP3). We then compute the negative empirical Gaussian mTE, $\hat{I}(f, g) - \hat{I}(f, h)$, and add the cross-entropy penalty between the concatenated embedding distribution and a learnable Gaussian distribution.
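To make the data flow concrete, here is a simplified sketch of one forward pass (module names follow Figure 2 loosely; the row-normalized adjacency and the exact wiring are our own assumptions):

```python
import torch

def geass_forward(X, A_norm, T1, T2, f, g, mlp3):
    """One forward pass of the architecture sketched above.

    X: (n, p) features; A_norm: (n, n) row-normalized adjacency without self-loops;
    T1, T2: sampled gates of length p; f, g, mlp3: MLP embedding modules.
    """
    X_s1 = X * T1 * T2          # candidate source features, X ⊙ T1 ⊙ T2
    X_s2 = X * T1 * (1.0 - T2)  # candidate sink features,   X ⊙ T1 ⊙ (1 - T2)
    zf = f(X_s1)                # f(x^i ⊙ 1_{S1})
    zg = g(X_s2)                # g(x^i ⊙ 1_{S2})
    zh = mlp3(A_norm @ zg)      # h = MLP3 ∘ neighbor-mean ∘ g
    # training maximizes the Gaussian mTE estimate I(zf, zh) - I(zf, zg)
    return zf, zg, zh
```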
3.3 OUTPUT INTERPRETATION
Upon convergence, GEASS provides both the active features ($B_0 \cup B_1$) and the embeddings $(f, g, h)$ produced by causally interacting features. In this paper, we emphasize the use of the identified interacting features $B_0 \cup B_1$. The embedding output $(f, g, h)$ may be complex and nonlinear, potentially requiring additional architectures to maximize its interpretability.
By construction, GEASS yields two separate sparse feature subsets: source features $B_1$ and sink features $B_0$. These features may be used as inputs to further causal analysis, such as LPCMCI (Gerhardus and Runge, 2020) for time-series data, which, despite its statistical power in modeling possible lags, identifying latent confounders, and allowing nonlinear tests, can only work on data with moderate feature sizes. These features may also be used in other machine learning models for improved model interpretability.
4 EXPERIMENTS
4.1 GAUSSIAN TIME-SERIES WITH POSSIBLE NONLINEARITY
In order to benchmark the method in time-series data, we consider two settings: 1. Minor effect of latent processes, with autocorrelation present; 2. Significant effect of latent processes, with autocorrelation present. Both settings are modeled by Gaussian structural processes with an underlying causal graph. Further details can be seen in Appendix C.1.
We use the false discovery rate (FDR) and F1 score between ground-truth interacting features and recovered features as two metrics for high-dimensional causal discovery. We compare GEASS with two categories of methods: conditional independence based (CI-based) methods and Granger causality based (GC-based) methods. The first category includes VAR-LINGAM (Hyvärinen et al., 2010), PCMCI (Runge et al., 2019b), and LPCMCI (Gerhardus and Runge, 2020); despite its statistical power, LPCMCI is not included in our experiment, as it failed to converge within the given time in our preliminary experiments. The second category includes a neural-network based generalized vector autoregression model, GVAR (Marcinkevičs and Vogt, 2021), and GrID-net, which generalizes the definition of Granger causality to directed acyclic graphs (DAGs) (Wu et al., 2021); moreover, we include two state-of-the-art approaches, DCM and NGM (Bellot et al., 2021), which use neural ODEs to model nonlinear dependence graphs.
Table 1 shows our benchmarking results. Among the alternative methods, GVAR and GrID-net fail in all settings, as they are not designed for causal feature selection. VAR-LINGAM achieves high accuracy in the linear settings but fails in the nonlinear ones. In contrast, PCMCI fails when latent processes contribute to both true causally interacting features and nuisance features, creating spurious correlations. Empirically, we also observe that DCM and NGM achieve comparable performance when the dynamics are linear but perform worse in the nonlinear setting, where the dynamics are more irregular. Finally, GEASS consistently gives accurate causal feature identification (high F1) and a low false discovery rate (low FDR) in all settings considered.
Table 1: FDR and F1 scores, reported as mean (SD), for each of the four time-series settings.

| Method | FDR | F1 | FDR | F1 | FDR | F1 | FDR | F1 |
|---|---|---|---|---|---|---|---|---|
| GVAR (GC) | .94 (.00) | .11 (.00) | .94 (.00) | .11 (.00) | .94 (.00) | .11 (.00) | .94 (.00) | .11 (.00) |
| GrID-net (GC) | 1.0 (.00) | .00 (.00) | 1.0 (.00) | .00 (.00) | 1.0 (.00) | .00 (.00) | 1.0 (.00) | .00 (.00) |
| DCM (GC) | .12 (.20) | .88 (.20) | .65 (.12) | .35 (.12) | .18 (.09) | .82 (.09) | .93 (.11) | .07 (.11) |
| NGM (GC) | .07 (.08) | .88 (.04) | .48 (.17) | .50 (.17) | .00 (.00) | .91 (.00) | .62 (.25) | .38 (.25) |
| GEASS (Ours) | .05 (.15) | .97 (.10) | .03 (.06) | .92 (.05) | .03 (.07) | .90 (.04) | .00 (.00) | .91 (.00) |
Furthermore, we evaluate the scalability of the different methods with respect to the feature size (experimental details can be seen in Appendix C.1.2). As described before, we anticipate high computational complexity for both conditional independence based methods and neural network based methods with respect to the feature size, which prohibits their use for high-dimensional biological data analysis, where the feature number is typically on the scale of $10^3$-$10^4$. Meanwhile, GEASS constructs a single neural network with a parameter count approximately proportional to $p$, largely reducing the complexity in the high-dimensional regime. We benchmark PCMCI, GVAR, GrID-net, NGM, GEASS, and additionally the combination of GEASS with the downstream CI-test based causal graph identification method LPCMCI. Our experimental results show the superior time complexity of GEASS, as well as of GEASS+LPCMCI, consistent with our qualitative analysis (Figure 3).
4.2 SIMULATED SPATIAL OMICS DATA WITH CELL TYPE CONFOUNDER
To jointly consider spatial confounders and the corresponding autocorrelation patterns that are potentially enriched in specific niches, we consider the case of spatial omics data, where autocorrelation is modeled by a higher likelihood of same-type cells in the neighborhood, and the confounder (nuisance features) is modeled by a coherent shift of global gene expression for each cell type. We first simulate scRNA-seq datasets; each synthetic scRNA-seq dataset is then assigned to a fixed-size grid, with cell type labels simulated by an Ising model. We then add artificial genes that are spatially correlated with a given gene set of neighboring cells. Finally, each dataset is normalized and log1p-transformed following the standard pipeline in Scanpy (Wolf et al., 2018).
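As an illustration of this kind of label simulation, here is a small Metropolis-style sketch (the paper's exact Ising simulator, temperature, and number of updates are not specified, so all choices below are ours):

```python
import numpy as np

def ising_cell_types(rows, cols, n_types, beta=0.8, n_steps=20000, seed=0):
    """Potts/Ising-style grid sampling: neighboring cells tend to share a type."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_types, size=(rows, cols))

    def n_matching(r, c, lab):
        # number of 4-neighbors sharing label `lab`
        m = 0
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                m += int(labels[rr, cc] == lab)
        return m

    for _ in range(n_steps):
        r, c = rng.integers(rows), rng.integers(cols)
        prop = rng.integers(n_types)
        # energy decreases with matching neighbors, so dE = matches(old) - matches(new)
        dE = n_matching(r, c, labels[r, c]) - n_matching(r, c, prop)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            labels[r, c] = prop
    return labels
```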
The majority of the methods above are not applicable here, as they focus on time-series data. Therefore, in our benchmarking study we compare GEASS with Lasso Granger, as well as with our implemented L1-regularized version of NCEM, an approach proposed to detect interactions in spatial omics data (Fischer et al., 2021). Finally, we also implemented a method that maximizes the original transfer entropy to select causal features (TE).
As shown in Table 2, the original LASSO cannot identify causal features because of the strong correlation between features. L1-NCEM alleviates this issue by conditioning on cell type labels in the regression. TE outperforms the linear methods yet generates a number of false positives, as it may learn spurious causations, as discussed in Remark 3.3. Finally, GEASS consistently outperforms the other methods in identifying the causal features of the data, as shown by both a high F1 score and a low FDR.
4.3 SCRNA-SEQ PANCREATIC ENDOCRINOGENESIS TRAJECTORY
We test GEASS on the pancreatic endocrinogenesis trajectory data, a standard dataset for the scRNA-seq trajectory inference task (Bergen et al., 2020; Bastidas-Ponce et al., 2019). The pancreas trajectory data contains 3696 cells and 27998 genes. After preprocessing, lowly-expressed genes are filtered out following the standard scVelo pipeline (Bergen et al., 2020), with 2000 genes remaining for further analysis. We use GEASS to identify causally-related genes along the developmental trajectory to reveal the underlying biology (see Appendix C.3 for experimental details).
scRNA-seq data provides a snapshot of the cell population distribution, so time-series based analysis methods cannot be directly applied. However, due to GEASS's flexible choice of the forward operator $W$, we are able to define the time flow by RNA velocity analysis. RNA velocity analysis uses the additional information of intronic reads to infer the underlying dynamics of gene expression change. We thus define a velocity kernel matrix $A_{\mathrm{velo}}$, which provides weighted adjacency relationships between cells based on velocity direction and cell phenotypic proximity.
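As a sketch of how such a velocity-based kernel can be obtained with scVelo's public API (the exact construction of $A_{\mathrm{velo}}$ used in the paper, e.g., any thresholding or symmetrization, may differ):

```python
import scvelo as scv

adata = scv.datasets.pancreas()                       # pancreatic endocrinogenesis data
scv.pp.filter_and_normalize(adata, n_top_genes=2000)  # standard scVelo preprocessing
scv.pp.moments(adata)
scv.tl.velocity(adata)
scv.tl.velocity_graph(adata)
A_velo = adata.uns["velocity_graph"]  # sparse cell-by-cell transition weights
```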
GEASS identifies 50 causally-related features with high biological relevance. For example, the gene list includes the key transcriptional regulator NEUROG3, which is required for the specification of a common precursor of the 4 pancreatic terminal states (uni, 2021). As the ground-truth causal interactions here are unknown, for further quantitative validation we assume that the underlying biological process is driven by a causal cascade of gene interactions, meaning target genes activated in earlier phases of the trajectory cause downstream gene activation at later phases. In this case, the higher a gene's velocity is, the more likely the gene is associated with causal gene-gene relationships. Our benchmarking result suggests that GEASS achieves the best performance in selecting genes with high mean velocity likelihood, compared with alternative gene selection schemes with a fixed gene number (50), including highly-expressed genes (HEG), highly-variable genes (HVG), and genes with high correlation to the inferred latent time (HCG) (Figure 4).
4.4 MERFISH HUMAN CORTEX SINGLE-CELL LEVEL SPATIAL TRANSCRIPTOMICS
Spatial transcriptomics represents a wide category of methods that achieve spatial profiling of gene expression in tissues (Moses and Pachter, 2022; Rao et al., 2021; Palla et al., 2022). With the additional information of spatial locations, such measurements enable deeper understanding of cellular interactions (Palla et al., 2022; Jerby-Arnon and Regev, 2022; Fischer et al., 2021). However, current computational methods revealing interaction modules (Jerby-Arnon and Regev, 2022) or niche effects (Fischer et al., 2021; Raredon et al., 2023) for spatial omics data lack causal interpretation. Applying GEASS, we aim to reveal underlying causal intercellular patterns to fully realize the potential of spatial omics data for biological discovery.
Here we use GEASS on a recently published MERFISH dataset measuring spatially-resolved single-cell gene expression of human cortex (Fang et al., 2022). The dataset we used comprises 3044 cells and 4000 genes; each cell is annotated as one of eight cell types: excitatory neurons (EXC), inhibitory neurons (INC), astrocytes (ASC), microglial cells (MGC), oligodendrocytes (OGC), oligodendrocyte progenitor cells (OPC), endothelial cells (ENDO), and mural cells (MURAL), as shown in the first panel of Figure 6 in Appendix D. Our GEASS analysis selects 9 genes, namely FILIP1, SLC17A7, MYH11, RP11-10j21.2, PIRT, C3ORF67, TRDMT1, RGS8, and SPTLC2 (Appendix Figure 6), with further experimental details available in Appendix C.4. Among these genes, MYH11, RP11-10j21.2, and TRDMT1 are enriched in the endothelial cells adjacent to mural cells, corresponding to underlying vascular structures (marked by ellipses in the first panel of Appendix Figure 6). We next aim to verify whether their expression difference with respect to non-adjacent endothelial cells is statistically significant. Indeed, applying the Wilcoxon rank-sum test, we find significant enrichment for both MYH11 and TRDMT1, with p-values 0.003 and 0.015 respectively, while the p-value for the gene RP11-10j21.2 is not significant (0.5) due to the sparsity of its expression. The finding is consistent with the MERFISH images, which reveal rich cellular interactions between neuronal cells and the blood vessels (Fang et al., 2022). Therefore, these identified marker genes of vascular structure may encode meaningful underlying cellular interactions.
Next, we focus on two GEASS-identified genes, C3ORF67 and PIRT, which are highly expressed at nearby spatial locations. To probe the possible causal relationship between the two genes, we consider three models: 1. the two genes are expressed in the same cell without spatial causal relationships; 2. the expression of C3ORF67 in each cell causes the expression of PIRT in neighboring cells (C3ORF67 → PIRT); 3. the expression of PIRT in each cell causes the expression of C3ORF67 in neighboring cells (PIRT → C3ORF67). To this end, we first compare Pearson and Spearman p-values of the intracellular correlation (model 1), C3ORF67 against neighboring PIRT (model 2), and PIRT against neighboring C3ORF67 (model 3). Our comparison shows that, for both correlation measures, model 3 is favored (0.004, 0.001) over model 1 (0.014, 0.003) and model 2 (0.049, 0.004). The validity of model 3 (PIRT → C3ORF67) is further supported by a linear model predicting C3ORF67 expression from both the intracellular and the neighbor expression of PIRT, where the neighboring-cell effect coefficient is significant at the confidence level of 0.01 by bootstrap, while the corresponding coefficient of the alternative model is not significant. Our finding is consistent with the predicted role of PIRT in transmembrane transporter binding and phosphatidylinositol-mediated signaling (Safran et al., 2021). As the role of C3ORF67 in human cortex remains unclear, this revealed causal link may lead to novel biological discoveries with further experimental validation.
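As an illustration of these tests, a short SciPy sketch (variable names are illustrative; the exact neighbor graph and expression preprocessing follow Appendix C.4 only loosely):

```python
import numpy as np
from scipy.stats import pearsonr, ranksums

def neighbor_expression(expr, A):
    """Average expression of a gene over each cell's spatial neighbors (A: binary adjacency)."""
    deg = np.maximum(A.sum(axis=1), 1)
    return (A @ expr) / deg

def model_pvalues(pirt, c3orf67, A):
    """Pearson p-values for the three candidate models discussed above."""
    p_model1 = pearsonr(pirt, c3orf67)[1]                          # same-cell correlation
    p_model2 = pearsonr(c3orf67, neighbor_expression(pirt, A))[1]  # C3ORF67 vs. neighboring PIRT
    p_model3 = pearsonr(pirt, neighbor_expression(c3orf67, A))[1]  # PIRT vs. neighboring C3ORF67
    return p_model1, p_model2, p_model3

# Wilcoxon rank-sum enrichment test, e.g., MYH11 in mural-adjacent vs. other endothelial cells:
# p = ranksums(expr[adjacent_mask], expr[~adjacent_mask])[1]
```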
5 CONCLUSIONS
In this work, we present GEASS, a causal feature selection method based on information-theoretic tools and neural networks. GEASS is able to scale to high dimensions and identify sparse interacting features. We provide both theoretical guarantees and empirical validations of GEASS on synthetic and real biological data. Our results show that GEASS can be integrated into high-dimensional spatiotemporal data analysis pipelines to provide unique insights for further findings.
Limitations. GEASS is a method designed for nonlinear causal feature selection. GEASS does not output a causal graph itself, as it optimizes a latent embedding corresponding to different causal mechanisms. Therefore, in applications where a causal graph output is desired, constraint-based methods may need to be applied after GEASS. Moreover, when the underlying causal graph has a large number of vertices, the sparsity assumption is violated and GEASS is not guaranteed to work. Further efforts may also be taken to incorporate lag selection into GEASS.
Broader impact. We anticipate a wide use of GEASS in high-dimensional graph-structured data, especially for high-dimensional biological data such as single cell trajectories and spatial omics measurements. Applying GEASS along with causal graph identification methods to a wider range of real biological data may greatly facilitate downstream biological discoveries.
ACKNOWLEDGEMENTS
The authors thank Ofir Lindenbaum, Boaz Nadler, Yifei Min, and Ronen Basri for helpful discussions. Y.K. acknowledges support by NIH grants R01GM131642, UM1DA051410, U54AG076043, P50CA121974, and U01DA053628.
APPENDIX
A PROOFS
A.1 PROOF OF PROPOSITION 2.2.
Proposition 2.2. $\forall S_1, S_2 \subset \{1, \ldots, p\}$: $\mathrm{mTE}_i(S_1, S_2) > 0 \Rightarrow \mathrm{TE}_i(S_1, S_2) > 0$.

Proof. By standard properties of mutual information (Cover, 1999) we have

$$\begin{aligned}
\mathrm{TE}_i(X_{S_1}, X_{S_2}) &= I(X^i_{S_1};\ X^{N(i)}_{S_2} \mid X^i_{S_2}) \\
&= I(X^i_{S_1};\ X^{N(i)}_{S_2}, X^i_{S_2}) - I(X^i_{S_1};\ X^i_{S_2}) \\
&= I(X^i_{S_1};\ X^{N(i)}_{S_2}) - I(X^i_{S_1};\ X^i_{S_2}) + I(X^i_{S_1};\ X^i_{S_2} \mid X^{N(i)}_{S_2}).
\end{aligned} \tag{9}$$

Therefore $\mathrm{TE}_i(S_1, S_2) \geq \mathrm{mTE}_i(S_1, S_2)$ holds, and thus $\mathrm{mTE}_i(S_1, S_2) > 0 \Rightarrow \mathrm{TE}_i(S_1, S_2) > 0$.
A.2 DISCUSSION OF ASSUMPTION A6.
Our assumption A6 is based on the concept of conditional mutual information, which aims to filter out possible indirect causal relationships.

Here are two simple examples showing why TE/mTE can have problems with indirect causal interactions in the time-series setting. Consider the relationships $s_t \to w_t \to v_{t+1}$ and $s_t \to w_{t+1} \to v_{t+1}$. In both cases we may have $I(s_t, v_{t+1}) - I(s_t, v_t) > 0$ and $I(s_t, v_{t+1} \mid v_t) > 0$, although there is no direct causal relationship between $s$ and $v$. Note that in our setting we include the possibility of such indirect interactions by allowing correlation between nuisance features and true interacting features.
The issue can be resolved by considering the conditional mutual information $I(s_t, v_{t+1} \mid w_t)$ or $I(s_t, v_{t+1} \mid w_{t+1})$, which equals 0. This insight is also captured by the concept of conditional transfer entropy:

Definition (Conditional transfer entropy) (Shahsavari Baboukani et al., 2020). Assume $X$ and $Y$ are the features of interest and the conditioning features are $Z$. Denoting the history $[1, 2, \ldots, t]$ by the subscript $-$, we have

$$\mathrm{cTE}_t(X, Y, Z) = I(Y_{t+1};\ X_- \mid Y_-, Z_-).$$
The classical formulation of conditional transfer entropy is widely used in high-dimensional observational data to learn direct causal dependencies (Faes et al., 2016; Shahsavari Baboukani et al., 2020). It implicitly assumes that there is a direct causal relationship between $X$ and $Y$ if $\forall Z, t$, $\mathrm{cTE}_t(X, Y, Z) > 0$. Here, we extend this assumption in the context of conditional mTE, covering both examples described above. The conditional mTEs are defined in analogy to cTE for generalized graph-structured data in the Markovian model setting:

Definition (Two forms of conditional mTE). Assume $X$ and $Y$ are the feature sets of interest and the conditioning features are $Z$. Then we have

$$\mathrm{cmTE}^1_i(X, Y, Z) = I(X^i;\ Y^{N(i)} \mid Z^i) - I(X^i;\ Y^i \mid Z^i); \qquad \mathrm{cmTE}^2_i(X, Y, Z) = I(X^i;\ Y^{N(i)} \mid Z^{N(i)}) - I(X^i;\ Y^i \mid Z^i).$$

By requiring both forms of conditional mTE to be larger than zero, we rule out both possibilities $X^i \to Z^i \to Y^{N(i)}$ and $X^i \to Z^{N(i)} \to Y^{N(i)}$, as mTE is a stricter version of the original transfer entropy, as discussed in Proposition 2.2. In summary, our A6 can be reformulated as $\forall Z, i$: $\mathrm{cmTE}^1_i(X, Y, Z) > 0$ and $\mathrm{cmTE}^2_i(X, Y, Z) > 0$ for ground-truth interacting $X, Y$ in non-degenerate cases, where $Z$ does not fully overlap with $X$/$Y$ at the same point.
A.3 PROOF OF THEOREM 2.4.
Theorem 2.4. Given A1-A6, $S^* := S_1^* \cup S_2^* \subseteq \{1, \ldots, m\}$ (the index set of true interacting features described in A5). Moreover, each feature in $S^*$ is connected to other features in the set $S^*$.
Proof. Step 1. First we prove $S_1^* \cap S_2^* = \emptyset$. If not, assume $p$ is an overlapping element. For simplicity, denote $N(i) := \{j \mid (i, j) \in E\}$, $A = X_{S_1^*}$, $B = X_{S_2^*}$. Then we have

$$\begin{aligned}
&\mathrm{mTE}(S_1^*, S_2^*) - \mathrm{mTE}(S_1^* \setminus p,\ S_2^*) \\
&= I(A^i \setminus p^i, p^i;\ B^{N(i)} \setminus p^{N(i)}, p^{N(i)}) - I(A^i \setminus p^i, p^i;\ B^i \setminus p^i, p^i) \\
&\quad - I(A^i \setminus p^i;\ B^{N(i)} \setminus p^{N(i)}, p^{N(i)}) + I(A^i \setminus p^i;\ B^i \setminus p^i, p^i) \\
&= I(p^i;\ B^{N(i)} \setminus p^{N(i)}, p^{N(i)} \mid A^i \setminus p^i) - I(p^i;\ B^i \setminus p^i, p^i \mid A^i \setminus p^i) < 0.
\end{aligned} \tag{10}$$

Therefore removing $p$ would increase the value of the mTE, leading to a contradiction.
Step 2. Now we prove that nuisance signals cannot be in either $S_1^*$ or $S_2^*$. Otherwise, first assume a set of nuisance signals $U$ is in $S_1^*$. Denote $A := X_{S_1^*}$, $B := X_{S_2^*}$. As $U$ only interacts with variables at the same time point, $U$ can only interact with $B^{N(i)}$ via indirect links through a subset of interacting features at $i$. Denote this feature set as $\mathrm{Pa}_U(B)^i \subseteq \{t_1^i, \ldots, t_m^i\}$, and the difference set $\mathrm{Pa}_U^-(B)^i := \mathrm{Pa}_U(B)^i \setminus B^i$. We first note that $\mathrm{Pa}_U^-(B)^i$ cannot be an empty set. Otherwise, denote $S_1 := S_1^* \setminus U$; noting that $A$ and $B$ do not overlap, we would have

$$\begin{aligned}
&\mathrm{mTE}(S_1^*, S_2^*) - \mathrm{mTE}(S_1, S_2^*) \\
&= I(A^i \setminus U^i, U^i;\ B^{N(i)}) - I(A^i \setminus U^i, U^i;\ B^i) - I(A^i \setminus U^i;\ B^{N(i)}) + I(A^i \setminus U^i;\ B^i) \\
&= I(U^i;\ B^{N(i)} \mid A^i \setminus U^i) - I(U^i;\ B^i \mid A^i \setminus U^i) \\
&= -h(U^i \mid B^{N(i)}, A^i \setminus U^i) + h(U^i \mid B^i, A^i \setminus U^i) \\
&\leq -h(U^i \mid B^{N(i)}, A^i \setminus U^i) + h(U^i \mid \mathrm{Pa}_U(B)^i, A^i \setminus U^i) \quad \text{(conditioning reduces entropy)} \\
&\leq 0.
\end{aligned} \tag{11}$$

This means $(S_1, S_2^*)$'s mTE is not smaller than $(S_1^*, S_2^*)$'s while having a smaller union size, leading to a contradiction. Then, because $\mathrm{Pa}_U^-(B)$ does not overlap with either $U$ or $B$, with A6 we have

$$\begin{aligned}
&\mathrm{mTE}(S_1^*, S_2^*) - \mathrm{mTE}(S_1^* \cup \mathrm{Index}(\mathrm{Pa}_U^-(B)),\ S_2^*) \\
&= I(A^i \setminus U^i, U^i;\ B^{N(i)}) - I(A^i \setminus U^i, U^i;\ B^i) \\
&\quad - I(A^i \setminus U^i, U^i, \mathrm{Pa}_U^-(B)^i;\ B^{N(i)}) + I(A^i \setminus U^i, U^i, \mathrm{Pa}_U^-(B)^i;\ B^i) \\
&= I(\mathrm{Pa}_U^-(B)^i;\ B^i \mid A^i) - I(\mathrm{Pa}_U^-(B)^i;\ B^{N(i)} \mid A^i) \overset{\text{A6}}{\leq} 0.
\end{aligned} \tag{12}$$

The equal sign above is attained iff $\mathrm{Pa}_U^-(B)^i \subseteq A^i$. Further we have

$$\begin{aligned}
&\mathrm{mTE}(S_1^* \cup \mathrm{Index}(\mathrm{Pa}_U^-(B)),\ S_2^*) - \mathrm{mTE}(S_1 \cup \mathrm{Index}(\mathrm{Pa}_U^-(B)),\ S_2^*) \\
&= I(A^i \setminus U^i, U^i, \mathrm{Pa}_U^-(B)^i;\ B^{N(i)}) - I(A^i \setminus U^i, U^i, \mathrm{Pa}_U^-(B)^i;\ B^i) \\
&\quad - I(A^i \setminus U^i, \mathrm{Pa}_U^-(B)^i;\ B^{N(i)}) + I(A^i \setminus U^i, \mathrm{Pa}_U^-(B)^i;\ B^i) \\
&= I(U^i;\ B^{N(i)} \mid \mathrm{Pa}_U^-(B)^i, A^i \setminus U^i) - I(U^i;\ B^i \mid \mathrm{Pa}_U^-(B)^i, A^i \setminus U^i) \\
&= -h(U^i \mid B^{N(i)}, \mathrm{Pa}_U^-(B)^i, A^i \setminus U^i) + h(U^i \mid B^i, \mathrm{Pa}_U^-(B)^i, A^i \setminus U^i) \\
&\leq -h(U^i \mid B^{N(i)}, \mathrm{Pa}_U^-(B)^i, A^i \setminus U^i) + h(U^i \mid \mathrm{Pa}_U(B)^i, A^i \setminus U^i) \leq 0.
\end{aligned} \tag{13}$$

Therefore, in all possible cases, $\mathrm{mTE}(S_1 \cup \mathrm{Index}(\mathrm{Pa}_U^-(B)),\ S_2^*)$ is either strictly larger than $\mathrm{mTE}(S_1^*, S_2^*)$ or equal to it with a smaller union size, leading to a contradiction.
Next, given the result above, assume a nuisance signal set $U$ is in $S_2^*$, and $S_1^*$ does not include any nuisance features. As $U$ only interacts with variables at the same time point, $U^{N(i)}$ can only interact with $S_1^*$ via indirect links through a subset of interacting features at $N(i)$. Denote the whole intermediate feature set for $A$ as $\mathrm{Ch}_U(A)^{N(i)} \subseteq \{t_1^{N(i)}, \ldots, t_m^{N(i)}\}$, and $\mathrm{Ch}_U^-(A)^{N(i)} := \mathrm{Ch}_U(A)^{N(i)} \setminus A^{N(i)}$. Then, as above, denote $S_2 = S_2^* \setminus U$; if $\mathrm{Ch}_U^-(A)$ were an empty set we would have

$$\begin{aligned}
&\mathrm{mTE}(S_1^*, S_2^*) - \mathrm{mTE}(S_1^*, S_2) \\
&= I(A^i;\ B^{N(i)} \setminus U^{N(i)}, U^{N(i)}) - I(A^i;\ B^i \setminus U^i, U^i) - I(A^i;\ B^{N(i)} \setminus U^{N(i)}) + I(A^i;\ B^i \setminus U^i) \\
&= I(A^i;\ U^{N(i)} \mid B^{N(i)} \setminus U^{N(i)}) - I(A^i;\ U^i \mid B^i \setminus U^i) \\
&= -h(U^{N(i)} \mid B^{N(i)} \setminus U^{N(i)}, A^i) + h(U^i \mid B^i \setminus U^i, A^i) \\
&\leq -h(U^{N(i)} \mid B^{N(i)} \setminus U^{N(i)}, A^i) + h(U^i \mid \mathrm{Ch}_U(A)^i, B^i \setminus U^i) \leq 0.
\end{aligned} \tag{14}$$

The above derivation holds due to stationarity (as $|N(i)| \equiv 1$ in the time-series setting). Therefore $\mathrm{Ch}_U^-(A)$ cannot be an empty set. Because $\mathrm{Ch}_U^-(A)$ does not overlap with either $A$ or $U$, with A6 we have

$$\begin{aligned}
&\mathrm{mTE}(S_1^*, S_2^*) - \mathrm{mTE}(S_1^*, S_2^* \cup \mathrm{Index}(\mathrm{Ch}_U^-(A))) \\
&= I(A^i;\ B^{N(i)} \setminus U^{N(i)}, U^{N(i)}) - I(A^i;\ B^i \setminus U^i, U^i) \\
&\quad - I(A^i;\ B^{N(i)} \setminus U^{N(i)}, U^{N(i)}, \mathrm{Ch}_U^-(A)^{N(i)}) + I(A^i;\ B^i \setminus U^i, U^i, \mathrm{Ch}_U^-(A)^i) \\
&= I(A^i;\ \mathrm{Ch}_U^-(A)^i \mid B^i) - I(A^i;\ \mathrm{Ch}_U^-(A)^{N(i)} \mid B^{N(i)}) \overset{\text{A6}}{\leq} 0.
\end{aligned} \tag{15}$$

The equal sign above is attained iff $\mathrm{Ch}_U^-(A)^i \subseteq B^i$. Further we have

$$\begin{aligned}
&\mathrm{mTE}(S_1^*, S_2^* \cup \mathrm{Index}(\mathrm{Ch}_U^-(A))) - \mathrm{mTE}(S_1^*, S_2 \cup \mathrm{Index}(\mathrm{Ch}_U^-(A))) \\
&= I(A^i;\ B^{N(i)} \setminus U^{N(i)}, U^{N(i)}, \mathrm{Ch}_U^-(A)^{N(i)}) - I(A^i;\ B^i \setminus U^i, U^i, \mathrm{Ch}_U^-(A)^i) \\
&\quad - I(A^i;\ B^{N(i)} \setminus U^{N(i)}, \mathrm{Ch}_U^-(A)^{N(i)}) + I(A^i;\ B^i \setminus U^i, \mathrm{Ch}_U^-(A)^i) \\
&= I(A^i;\ U^{N(i)} \mid B^{N(i)} \setminus U^{N(i)}, \mathrm{Ch}_U^-(A)^{N(i)}) - I(A^i;\ U^i \mid B^i \setminus U^i, \mathrm{Ch}_U^-(A)^i) \leq 0.
\end{aligned} \tag{16}$$

Therefore, in all possible cases, $\mathrm{mTE}(S_1^*, S_2 \cup \mathrm{Index}(\mathrm{Ch}_U^-(A)))$ is either strictly larger than $\mathrm{mTE}(S_1^*, S_2^*)$ or equal to it with a smaller union size, leading to a contradiction.
Step 3. Moreover, suppose there exists a component in $S_1^* \cup S_2^*$ not connected to any other feature component; denote this feature $q$. In this case, with A1-A4, the feature $q$ is independent of all other features in $S_1^* \cup S_2^*$. From Step 1 it can be deduced that $q$ cannot be in both $S_1^*$ and $S_2^*$. Therefore $\mathrm{mTE}(S_1^* \setminus q,\ S_2^* \setminus q) = \mathrm{mTE}(S_1^*, S_2^*)$, which contradicts minimality: we would have found an $(S_1, S_2)$ with the same mTE but smaller $|S_1 \cup S_2|$.
A.4 PROOF OF THEOREM 3.1.
Theorem 3.1. Assume A1-A6 hold and $f, g, h$ define one-to-one mappings on $X \odot 1_{S_1}$ (for $f$) or $X \odot 1_{S_2}$ (for $g, h$). Then $\exists \lambda > 0$ such that, for (4), any solution satisfies $S^* := S_1^* \cup S_2^* \subseteq \{1, \ldots, m\}$. Moreover, each feature in $S^*$ is connected to other features in the set.

$$\min_{f, g, h, S_1, S_2}\ -\left( I(f(x^i \odot 1_{S_1});\ h(x^{N(i)} \odot 1_{S_2})) - I(f(x^i \odot 1_{S_1});\ g(x^i \odot 1_{S_2})) \right) + \lambda |S_1 \cup S_2|$$

Proof. With A4 (ergodicity and stationarity), the optimization problem (4) is equivalent to

$$\min_{f, g, h, S_1, S_2}\ -\left( I(f(x^i_{S_1});\ h(x^{N(i)}_{S_2})) - I(f(x^i_{S_1});\ g(x^i_{S_2})) \right) + \lambda |S_1 \cup S_2|. \tag{17}$$

Given the assumption that $f, g, h$ define injective mappings on $x^i_{S_1}, x^i_{S_2}$ respectively, and since one-to-one transformations do not change mutual information, the optimization problem is equivalent to

$$\min_{S_1, S_2}\ -\left( I(x^i_{S_1};\ x^{N(i)}_{S_2}) - I(x^i_{S_1};\ x^i_{S_2}) \right) + \lambda |S_1 \cup S_2|. \tag{18}$$

Using Theorem 2.4, a minimizer of the mTE term with the smallest union size satisfies $S^* := S_1^* \cup S_2^* \subseteq \{1, \ldots, m\}$; moreover, each feature in $S_1^* \cup S_2^*$ is connected to other features in the set. Note that, with our definition of the optimal $(S_1, S_2)$, the minimal gap between $\mathrm{mTE}(S_1^*, S_2^*)$ and any other value $\mathrm{mTE}(S_1, S_2)$ with smaller $|S_1 \cup S_2|$ is larger than zero. Denote this minimal gap as $\delta$ and take $\lambda < \delta / |S_1^* \cup S_2^*|$; then for these other solutions we have

$$-\mathrm{mTE}(S_1, S_2) + \lambda |S_1 \cup S_2| \geq -\mathrm{mTE}(S_1^*, S_2^*) + \delta + \lambda |S_1 \cup S_2| \geq -\mathrm{mTE}(S_1^*, S_2^*) + \delta > -\mathrm{mTE}(S_1^*, S_2^*) + \lambda |S_1^* \cup S_2^*|. \tag{19}$$

Meanwhile, for $(S_1, S_2)$ with larger union size, by the definition of the mTE we have

$$-\mathrm{mTE}(S_1, S_2) + \lambda |S_1 \cup S_2| \geq -\mathrm{mTE}(S_1^*, S_2^*) + \lambda |S_1 \cup S_2| = -\mathrm{mTE}(S_1^*, S_2^*) + \lambda (|S_1 \cup S_2| - |S_1^* \cup S_2^*|) + \lambda |S_1^* \cup S_2^*| > -\mathrm{mTE}(S_1^*, S_2^*) + \lambda |S_1^* \cup S_2^*|. \tag{20}$$

Therefore, when taking $\lambda \in (0, \delta / |S_1^* \cup S_2^*|)$, the desired optimal $(S_1, S_2)$ under the mTE is the optimal output of the constructed optimization problem.
A.5 PROOF OF THEOREM 3.4.
Theorem 3.4. Assume A1-A6 hold and $f, g, h$ are one-to-one Gaussian embeddings as described above. For the optimal solution of (5), denote a sample of the stochastic gates by $(T^1, T^2)$ and the ground-truth interacting feature set by $S$. Then there exist $\lambda_1, \lambda_2 > 0$ for (5) such that, as $n \to \infty$,

$$\forall i \in \{0, 1\}, \quad \mathbb{P}(B_i \subseteq S) \xrightarrow{a.s.} 1, \quad \text{where } B_i := \{ d \mid T^1_d > 0,\ T^2_d = i \}.$$
Proof. In the following proof, for simplicity we denote $\tilde{x}_{S_1} = x \odot T^1 \odot T^2$ and $\tilde{x}_{S_2} = x \odot T^1 \odot (1 - T^2)$.

Step 1. Given that $f, g, h$ project input distributions into joint Gaussian distributions with fixed dimensionality, by the convergence of Gaussian covariance matrices we have:

$$\hat{\Sigma}(f(\tilde{x}^i_{S_1}), h(\tilde{x}^{N(i)}_{S_2})) = \frac{1}{n} \sum_{i=1}^{n} [f(\tilde{x}^i_{S_1}); h(\tilde{x}^{N(i)}_{S_2})][f(\tilde{x}^i_{S_1}); h(\tilde{x}^{N(i)}_{S_2})]^T \xrightarrow{a.s.} \Sigma_{f(\tilde{x}^i_{S_1}), h(\tilde{x}^{N(i)}_{S_2})}; \qquad \hat{\Sigma}(f(\tilde{x}^i_{S_1}), g(\tilde{x}^i_{S_2})) = \frac{1}{n} \sum_{i=1}^{n} [f(\tilde{x}^i_{S_1}); g(\tilde{x}^i_{S_2})][f(\tilde{x}^i_{S_1}); g(\tilde{x}^i_{S_2})]^T \xrightarrow{a.s.} \Sigma_{f(\tilde{x}^i_{S_1}), g(\tilde{x}^i_{S_2})}. \tag{21}$$

As, in the Gaussian case, the mutual information between jointly Gaussian random variables is a function of the covariance matrix, we have

$$\hat{I}(f(\tilde{x}^i_{S_1});\ h(\tilde{x}^{N(i)}_{S_2})) \xrightarrow{a.s.} I(f(\tilde{x}^i_{S_1});\ h(\tilde{x}^{N(i)}_{S_2})) = I(\tilde{x}^i_{S_1};\ \tilde{x}^{N(i)}_{S_2}); \qquad \hat{I}(f(\tilde{x}^i_{S_1});\ g(\tilde{x}^i_{S_2})) \xrightarrow{a.s.} I(f(\tilde{x}^i_{S_1});\ g(\tilde{x}^i_{S_2})) = I(\tilde{x}^i_{S_1};\ \tilde{x}^i_{S_2}); \qquad \mathbb{P}\left(\lim_{N \to \infty} \text{empirical mTE} = \text{mTE}\right) = 1. \tag{22}$$
Step 2. Importantly, in our formulation (5), $T^1, T^2$ are sampled once per epoch, meaning they are fixed across features when computing the mTE. Further note that $\sum_{d=1}^{p} \mathbb{P}(T^1_d > 0) = \mathbb{E}\|T^1\|_0$ and $\sum_{d=1}^{p} \mathbb{P}(T^2_d \in (0, 1)) = \mathbb{E}\|1_{T^2 \in (0,1)}\|_0$. Denoting the value of (5) as $L$, we have

$$\begin{aligned}
L &\xrightarrow{a.s.} \mathbb{E}_{T^1, T^2}\left[ -\mathrm{mTE}(1_{T^1 \odot T^2 > 0},\ 1_{T^1 \odot (1 - T^2) > 0}) + \lambda_1 \|T^1\|_0 + \lambda_2 \|1_{T^2 \in (0,1)}\|_0 \right] \\
&\geq \min_{T^1, T^2}\ -\mathrm{mTE}(1_{T^1 \odot T^2 > 0},\ 1_{T^1 \odot (1 - T^2) > 0}) + \lambda_1 \|T^1\|_0 + \lambda_2 \|1_{T^2 \in (0,1)}\|_0.
\end{aligned} \tag{23}$$

Note that, by Step 1 of the proof of Theorem 2.4, for any $T^1$ we have

$$-\mathrm{mTE}(1_{T^1 \odot T^2 > 0},\ 1_{T^1 \odot (1 - T^2) > 0}) + \lambda_1 \|T^1\|_0 + \lambda_2 \|1_{T^2 \in (0,1)}\|_0 \geq -\mathrm{mTE}(1_{T^1 \odot T^2 > 0},\ 1_{T^1 \odot (1 - T^2) > 0}) + \lambda_1 \|T^1\|_0, \tag{24}$$

with equality when every $T^2_d$ is deterministically 0 or 1, i.e., $\forall d$, $\mathbb{P}(T^2_d = 1) \in \{0, 1\}$ and $\mathbb{P}(T^2_d = 0) = 1 - \mathbb{P}(T^2_d = 1)$. In this case,

$$\|T^1\|_0 = \|T^1 \odot T^2\|_0 + \|T^1 \odot (1 - T^2)\|_0.$$

Applying Theorem 3.1, for $\lambda_1 = \lambda$ as in Theorem 3.1 we have

$$\min_{T^1, T^2}\ -\mathrm{mTE}(1_{T^1 \odot T^2 > 0},\ 1_{T^1 \odot (1 - T^2) > 0}) + \lambda_1 \|T^1\|_0 + \lambda_2 \|1_{T^2 \in (0,1)}\|_0 = -\mathrm{mTE}(S_1^*, S_2^*) + \lambda |S_1^* \cup S_2^*| := L^*. \tag{25}$$

Here $(S_1^*, S_2^*)$ satisfies the properties described by Theorem 3.1. Note that the minimizer may not be unique; denote the set containing all minimizers as $\{(S_1^*, S_2^*)\}$. The equality in (23) then holds if and only if $\mathbb{P}\left((1_{T^1 \odot T^2 > 0},\ 1_{T^1 \odot (1 - T^2) > 0}) \in \{(S_1^*, S_2^*)\}\right) = 1$. Further noting that every $T^2_d$ is then deterministically 0 or 1, and that the analysis above holds as $n \to \infty$ with probability 1 by almost sure convergence, we finally have

$$\mathbb{P}\left(\lim_{N \to \infty} \mathbb{P}(B_1 \subseteq S) = 1\right) = 1; \qquad \mathbb{P}\left(\lim_{N \to \infty} \mathbb{P}(B_0 \subseteq S) = 1\right) = 1,$$

which is the desired result.
B GATE INITIALIZATION
Our proposed initialization scheme is based on an analysis of the linear case. Assume

$$f(X_{S_1}) = Xa, \qquad g(X_{S_2}) = Xb,$$

where $a, b \in \mathbb{R}^p$ represent two feature loadings. Then:

1. $a$ and $b$ should be non-overlapping, so we expect $|a^T b|$ to be small.
2. We should have $f(X) \approx W g(X)$ to maximize the mTE.

The second constraint can be formulated as a regression problem $WXb = Xa$, so a natural solution is given by $a = X^{\dagger} W X b = (X^T X)^{-1} X^T W X b$. In this case, $|a^T b| = |b^T (X^T X)^{-1} X^T W X b| = \|b\|^2_{(X^T X)^{-1} X^T W X}$. Given that $b$ is normalized, it can be shown that the optimal $b$ corresponds to the eigenvector with the least absolute eigenvalue of the matrix $(X^T X)^{-1} X^T W X$.

After obtaining $a$ and $b$, we select a quantile threshold over $a/(a + b)$ to initialize the second stochastic gate layer. The first stochastic gate layer is initialized with uniform weights.
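A NumPy sketch of this initialization (the pseudoinverse, the absolute-value ratio, and the default quantile are our own regularizing choices, not from the paper):

```python
import numpy as np

def init_gates(X, W, quantile=0.5):
    """Heuristic gate initialization from the linear analysis above.

    X: (n, p) data matrix; W: (n, n) graph diffusion operator.
    Returns loadings a, b and a 0/1 split initializing the second gate layer.
    """
    M = np.linalg.pinv(X.T @ X) @ (X.T @ (W @ X))        # (X^T X)^+ X^T W X
    eigvals, eigvecs = np.linalg.eig(M)
    b = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])  # least-|eigenvalue| eigenvector
    b /= np.linalg.norm(b)
    a = M @ b                                            # a = (X^T X)^+ X^T W X b
    ratio = np.abs(a) / (np.abs(a) + np.abs(b) + 1e-12)
    t2_init = (ratio > np.quantile(ratio, quantile)).astype(float)
    return a, b, t2_init
```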
C EXPERIMENTAL DETAILS
C.1 TIME-SERIES BENCHMARKING STUDY
In this study the causal process is simulated with the Python package Tigramite. Among the 100 features in total, there are 6 interacting features {1, 2, 3, 4, 5, 6}. The causal links are: 1→2 with time lag 2, 2→3 with time lag 1, 5→4 with time lag 1, 1→5 with time lag 1, and 3→6 with time lag 3. These features also have autocorrelations with time lags ranging from 1 to 3. There is also a latent confounder, modeled by Tigramite, interacting with feature 0 and feature 2. In the strong latent process case, the latent confounder also affects 43 other features. All remaining features (93 in the weak-latent setting, 50 in the strong-latent setting) are nuisance features with white-noise dynamics. The forward operator is defined by a 5-neighbor lower triangular matrix.
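A sketch of such a forward operator for time series, assuming row-averaging over the five preceding time points (the paper's exact normalization is not specified):

```python
import numpy as np

def lagged_forward_operator(n, lags=5):
    """Lower-triangular, banded forward operator for a length-n time series.

    Row i averages the `lags` preceding time points, so (W x)_i approximates x^{N(i)}.
    """
    W = np.zeros((n, n))
    for i in range(n):
        lo = max(0, i - lags)
        if i > 0:
            W[i, lo:i] = 1.0 / (i - lo)
    return W
```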
C.1.1 ALGORITHM IMPLEMENTATION
If not particularly mentioned, default settings of the algorithms are used throughout.
• VAR-LINGAM. The VAR-LINGAM algorithm is implemented in the Python package LINGAM, available at https://github.com/cdt15/lingam. VAR-LINGAM outputs a weighted matrix; in our benchmarking study, we therefore select the features corresponding to the most significant edges, with the number matching the sparsity level.
• PCMCI. The PCMCI algorithm is implemented in the Python package Tigramite, which outputs a weighted matrix. We select the features corresponding to the most significant edges, with the number matching the sparsity level.
• GVAR. The GVAR algorithm is implemented at https://github.com/i6092467/GVAR. The sparsity parameter is set to 1. We use the stable training option in GVAR, which trains on the first and second halves of the time series respectively to optimize the edge selection sparsity level and then trains on the whole time series, giving a binary output, so no threshold selection is needed.
• GrID-net. The GrID-net algorithm is implemented at https://github.com/alexw16/gridnet. The parameter set order=5, hidden_layer_size=10, end_epoch=50, batch_size=50, lmbd=1 is used throughout our study. After training finishes, we select the features corresponding to the most significant edges, with the number matching the sparsity level.
• DCM, NGM. The two algorithms are both implemented at https://github.com/alexisbellot/Graphical-modelling-continuous-time. For DCM the default setting is used, and we use hidden dim = 10 for NGM. After training finishes, we select the features corresponding to the most significant edges, with the number matching the sparsity level.
• GEASS. We use the same training parameters in all time-series settings, with the key sparsity regularization parameter λ1 set to 0.04/0.05 based on a validation set; the remaining parameters are kept at their defaults.
C.1.2 SCALABILITY ANALYSIS
We test the running times of PCMCI, GVAR, GrID-net, NGM, GEASS, and GEASS+LPCMCI, with settings consistent with those described in the section above (LPCMCI's setting is consistent with PCMCI's). We use the same data generation pipeline and vary the total feature number over [100, 200, 400, 800, 1600].
C.2 SIMULATED SPATIAL OMICS DATA BENCHMARKING STUDY
In this study the spatial omics data is simulated with the Python package Scsim (Kotliar et al., 2019). 1000 genes are simulated in total, of which 990 are expressed in a cell-type-specific manner. Each of the remaining 10 genes has a functional (linear/nonlinear) relationship with one cell-type-specific gene, plus a noise term, in order to model cell-type-specific interactions. The data is then normalized and log-transformed according to the standard Scanpy pipeline (Wolf et al., 2018). The forward operator is defined by a 4-neighbor adjacency matrix.
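For completeness, a small sketch constructing such a 4-neighbor adjacency (the flattening convention is ours):

```python
import numpy as np

def grid_adjacency(rows, cols):
    """Binary 4-neighbor (von Neumann) adjacency for cells on a rows x cols grid.

    Cell (r, c) is flattened to index r * cols + c.
    """
    n = rows * cols
    A = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    A[i, rr * cols + cc] = 1.0
    return A
```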
C.2.1 ALGORITHM IMPLEMENTATION
If not particularly mentioned, default settings of the algorithms are used throughout.
• Lasso Granger. The Lasso algorithm is implemented with Scipy, with α tuned (0.12) to match the sparsity level.
• NCEM. NCEM (linear) is a linear graph neural network, which in the grid case corresponds to a standard linear regression on neighbors and the cell type label. Based on the original work, we implemented an equivalent version by Lasso regression with α = 0.019 to match the sparsity level.
• GEASS. We use the same training parameters in all settings, with the key sparsity regularization parameter λ1 set to 0.02 based on a validation set; the latent dimension number is set to 64.
• TE. To give a fair comparison, we use the same architecture as GEASS, changing only the loss function. We use the same training parameters in all settings, with the key sparsity regularization parameter λ1 set to 0.05 based on a validation set; the latent dimension number is set to 64, consistent with GEASS.
C.3 SCRNA-SEQ PANCREAS TRAJECTORY
The data preprocessing is consistent with the scVelo tutorial: https://scvelo.readthedocs.io/VelocityBasics/ (Bergen et al., 2020). The parameter set is λ1 = 0.06, λ2 = 0.1. Because the gene regulatory network here is fully connected and activated in a cascade along the developmental trajectory, we consider the opposite initialization, with b taken as the eigenvector corresponding to the largest absolute eigenvalue of the matrix (X^TX)^{-1}X^TWX.
C.4 MERFISH SPATIAL TRANSCRIPTOMICS DATA
The data is downloaded from Dryad and preprocessed with the standard Scanpy pipeline (Wolf et al., 2018): we first normalize and log-transform the data with the default Scanpy functions, and then select 1000 highly variable genes, again with the default Scanpy functions. The forward operator is defined by a 5-neighbor adjacency matrix. The GEASS parameter set is consistent with that used in the spatial omics benchmarking.
D ADDITIONAL EXPERIMENTAL RESULTS | 1. What is the focus and contribution of the paper on feature selection for causal discovery?
2. What are the strengths of the proposed approach, particularly in terms of its optimization and gate-based design?
3. What are the weaknesses of the paper regarding its clarity, experimentation, and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The manuscript proposes a feature selection model for the causal discovery task, based on the optimization of the modified transfer entropy of the input data by combinatorial stochastic gates. The proposed model was evaluated on synthetic time series datasets with latent processes present, synthetic spatially-arranged scRNA-seq data, as well as real (pancreatic endocrinogenesis trajectory) data. Experiment results indicate superior performance of the model over both classic and recent causal discovery methods.
Strengths And Weaknesses
Strengths: The methodology involved in the optimized feature subset selection for causal discovery (Theorems 3.1, 3.4) and the design of the stochastic gate-based approach are important to the field of data mining. Experiment results show superior performance on both synthetic and real data, further validating the effectiveness of the model.
Weakness: First of all, in Section 3.2 the authors mention that "GEASS provides both outputs of active features and embeddings produced by causally interacting features. In this paper, we emphasize the use of the former as the latter embedding output may be complex and nonlinear, potentially requiring additional architectures to maximize its interpretability." The statement is confusing, as it does not specify where the "embeddings produced by causally interacting features" are obtained: from the stochastic gates, or from the MLPs indicated in Fig. 2? Further, there is no explanation of the design and purpose of the MLPs in Fig. 2. Also, the term "STG layer" in Fig. 2 is never mentioned in the manuscript; the reviewer guesses it refers to the stochastic gates but is not sure.
Clarity, Quality, Novelty And Reproducibility
Clarity: The manuscript is clearly written, although a full understanding of the algorithm details requires frequent reference to the appendix and the cited literature.
Quality: The quality of the manuscript is good, with an extensive investigation of the algorithm design, proof of the key theorems, and insights into the model development.
Reproducibility: The manuscript does not provide any code repository associated with the model. With the current description of the methodology, the work can potentially be reproduced, yet this is not guaranteed.
ICLR | Title
GEASS: Neural causal feature selection for high-dimensional biological data
Abstract
Identifying nonlinear causal relationships in high-dimensional biological data is an important task. However, current neural network based causality detection approaches for such data suffer from poor interpretability and cannot scale well to the high dimensional regime. Here we present GEASS (Granger fEAture Selection of Spatiotemporal data), which identifies sparse Granger causal interacting features of high dimensional spatiotemporal data by a single neural network. GEASS maximizes sparsity-regularized modified transfer entropy with a theoretical guarantee of recovering features with spatial/temporal Granger causal relationships. The sparsity regularization is achieved by a novel combinatorial stochastic gate layer to select sparse non-overlapping feature subsets. We demonstrate the efficacy of GEASS in several synthetic datasets and real biological data from single-cell RNA sequencing and spatial transcriptomics.
1 INTRODUCTION
Advances in single-cell omics research enable full characterization of high-dimensional gene dynamics in biological systems on either a temporal or a spatial scale. An example of the temporal case is single-cell RNA sequencing (scRNA-seq) trajectories, where cells are sampled from a dynamical biological process, sequenced, and ordered based on either the real sampling time or an inferred pseudo-time (Cannoodt et al., 2016; Saelens et al., 2019). Gene dynamics along the specified cell order encodes information about causal regulation in the underlying biological process. An example of the spatial case is single-cell level spatial transcriptomics (e.g. SeqFISH+ (Eng et al., 2019), MERFISH (Fang et al., 2022)), in which cells from a tissue slice are sequenced with their spatial coordinates preserved (Moses and Pachter, 2022; Rao et al., 2021; Palla et al., 2022). Spatial profiling allows investigations of the cellular interplay, corresponding to conditional gene expression changes caused by neighborhood phenotypic states. However, despite the potential significance, data-driven causal discovery for such data remains largely unexplored, especially for spatial omics data.
The identification of causal regulatory patterns in such data can be reformulated as the general task of causal feature selection in observational data with intrinsic structure, e.g. spatial or temporal data. The identification of causal interactions in time series has led to valuable findings in multiple disciplines, including but not limited to economics, climate science, and biology (Hoover, 2006; Kamiński et al., 2001; Runge et al., 2019a).
Learning directed causal relationships in temporal/spatial data is feasible, as time and space both induce asymmetric dependencies. In the case of time-series data, a feature in the future cannot have an effect on past values of other features. For spatial data, a similar definition of causal dependency can be established (Herrera Gómez et al., 2014).
The concept of Granger causality was proposed in order to uncover such asymmetric causal dependencies (Granger, 1969; Shojaie and Fox, 2022). In time-series data, this translates to identifying one variable's causal relationship with other variables based on how well the historical observations of the other variables predict the variable's present value. The application of Granger causality in a spatial context corresponds to detecting significant relationships between the neighboring observations of other variables and the specified variable (Mielke et al., 2020), which is a key insight used in recent works aiming to discover cellular interaction patterns in spatial omics data (Fischer et al., 2021; Valdés-Sosa et al., 2018).
In the nonlinear regime, information-theoretic measures such as directed information, transfer entropy (Schreiber, 2000), and partial transfer entropy (Staniek and Lehnertz, 2008), are used as a counterpart of linear Granger causality. Moreover, some works consider modeling conditional independence (CI) in time-series data to identify the underlying causal graph (Entner and Hoyer, 2010; Malinsky and Spirtes, 2018; Moneta et al., 2011; Runge et al., 2019a; Pfister et al., 2019; Mastakouri et al., 2021). Two examples are VarLINGAM (Hyvärinen et al., 2010) and PCMCI (Runge et al., 2019b), which are generalizations of LINGAM (Shimizu et al., 2006) and PC (Spirtes et al., 2000) respectively. Finally, multiple recent works have proposed to use neural network approaches to model the nonlinear Granger causality, including MLP, LSTM, and neural-ODE based approaches, resulting in improved prediction power for nonlinear time-series dynamics (Li et al., 2017; Tank et al., 2021; Nauta et al., 2019; Yin and Barucca, 2022; Bellot et al., 2021).
Despite the success of these methods in various systems of interest, multiple challenges limit their use in high-dimensional biological datasets.
• Although linear methods (LINGAM, linear Granger causality) have succeeded in various settings and can potentially scale to high feature numbers, these methods may completely fail when the feature dependency in data is highly complex and nonlinear.
• As the number of conditional independencies generally scales exponentially, or at least polynomially, with the feature size, applying causal discovery methods based on CI tests to high-dimensional data is not realistic. Distinctively, Granger-causality based methods are built with a prediction model for each feature in the data; the time complexity of solving the stacked prediction models for all features scales polynomially with the feature size.
• In previous methods, the number of causal edges between features is assumed to be sparse (edge sparsity) to maximize the interpretability of the identified causal graph. However, in biological data there exists a large proportion of nuisance features; also, one functional gene may activate a large number of downstream genes in neighboring cells. Sparsifying the number of interacting features (feature sparsity) has the potential to improve causal discovery in biological systems, which remains to be explored.
• While a large number of methods are designed for causal discovery in time-series data, only a limited number of existing works aim at causal discovery in general graph-structured data. Time-series based methods cannot be directly adopted for data with multi-branch trajectory dynamics or spatial structures.
Our contribution. In this work, we present GEASS (Granger fEAture Selection of Spatiotemporal data), which identifies causally interacting features of high-dimensional temporal/spatial data by a single neural network. GEASS considers the aforementioned feature sparsity instead of edge sparsity, and thus selects the most significant interacting features for downstream causal discovery. Our contributions are three-fold.
1. Instead of direct causal discovery in data, we formulate the task as two steps: causal feature selection and causal graph identification. We provide a novel solution to the causal feature selection problem in general graph-structured data by the use of modified transfer entropy maximization with theoretical guarantees.
2. In order to solve our proposed optimization problem, we design a novel combinatorial stochastic gate layer to select non-overlapping sparse feature sets with a newly designed initialization procedure.
3. We demonstrate the power of our method by benchmarking it on both temporal data and spatial data of multiple settings. Our method gives accurate and robust causal feature identification and reveals novel biology in real datasets.
1.1 RELATED WORKS
Neural Granger causality. Despite the large body of work based on linear Granger causal discovery, neural Granger causality still remains an active area of research. Various neural network architectures, such as MLP, sequential model, and attention-based architecture (Tank et al., 2021; Nauta et al., 2019; Khanna and Tan, 2019; Sun et al., 2021), have been proposed for nonlinear Granger causality
discovery. A recent work uses the information of a proxy variable to learn a latent confounder for Granger causality by a dual-decoder neural network (Yin and Barucca, 2022). One recent biology-oriented work extends the definition of Granger causality to DAGs, where the use of a linear graph neural network is proposed to model the underlying Granger causality (Wu et al., 2021). Meanwhile, a neural-ODE based approach has been proposed to reformulate the Granger causality problem in terms of local dependence graph identification (Bellot et al., 2021).
Causal feature selection. The task of causal feature selection has been considered by multiple groups. Most works in this category use constraint-based methods to identify each feature's causal relations with all other features, equivalent to identifying the whole causal graph structure; these include VAR-LINGAM, tsFCI, SVAR-FCI, and PCMCI (Hyvärinen et al., 2010; Entner and Hoyer, 2010; Malinsky and Spirtes, 2018; Moneta et al., 2011; Runge et al., 2019a). Meanwhile, seqICP focuses on identifying the direct or indirect cause of each feature, assuming sufficient interventions in the dataset (Pfister et al., 2019). SyPI tackles the causal feature selection problem without the assumption of causal sufficiency and avoids issues in multi-hypothesis testing by construction of the correct conditioning set (Mastakouri et al., 2021). Finally, Guo et al. (2022) consider dual correction of causal feature selection to control both false positive rates and false negative rates.
2 MODIFIED TRANSFER ENTROPY (MTE)
In order to tackle the issue that a neural network may overfit each model and therefore overestimate the number of causal interactions, we need a prediction-free loss function that directly indicates causal significance. In this work, we propose a novel function, modified transfer entropy (mTE), based on transfer entropy (Schreiber, 2000), as a metric of causal interaction significance.
Transfer entropy is an information-theoretic measure of cross dependence (Schreiber, 2000). Consider two vectorized time series $x_t$ and $y_t$ for $t \in 1, \ldots, T$. In a Markovian model, the transfer entropy from $x$ to $y$ at time $t$ is defined as the mutual information between the present value $x_t$ and the future value $y_{t+1}$, conditioned on $y_t$ to eliminate possible autocorrelation: $\mathrm{TE}_t(x,y) = I(x_t; y_{t+1} \mid y_t)$. By the use of mutual information, transfer entropy is able to model general nonlinear dependencies beyond linear Granger causality. In this work, we further consider the generalization of transfer entropy to graph-structured $x^i$ and $y^i$, where $i$ denotes a vertex on the data graph $G = (V, E)$:
$$\mathrm{TE}_i(x, y) := I(x^i; y^{N(i)} \mid y^i), \quad \text{where } N(i) := \{j \mid (i, j) \in E\}. \tag{1}$$
Note here the graph can be either directed (the time-series case) or undirected (the spatial case). In this study, we introduce a novel function, modified transfer entropy, that enables the application of bivariate transfer entropy for causal discovery in high-dimensional data. Our key insight is to consider two feature subsets in the dataset that maximize the mutual information difference:

Definition 2.1. Let $X = [x^1 x^2 \ldots x^n] \in \mathbb{R}^{p \times n}$ be a matrix containing graph-structured vector series $x^i$, with $i$ as vertices of the data graph $G = (V, E)$. Let $S_1$ and $S_2$ be two subsets of $\{1, 2, \ldots, p\}$. The modified transfer entropy $\mathrm{mTE}_i(S_1, S_2)$ and its maximum $\mathrm{mTE}^*_i$ are defined by

$$\mathrm{mTE}_i(S_1, S_2) := I(x^i_{S_1}; x^{N(i)}_{S_2}) - I(x^i_{S_1}; x^i_{S_2}); \qquad \mathrm{mTE}^*_i := \max_{S_1, S_2} \mathrm{mTE}_i(S_1, S_2). \tag{2}$$
Note the mTE function requires strictly stronger dependence than the analogously defined transfer entropy $\mathrm{TE}_i(S_1, S_2)$, as shown by the proposition below (the proof can be seen in Appendix A.1): Proposition 2.2. $\forall S_1, S_2 \subset \{1, \ldots, p\}$: $\mathrm{mTE}_i(S_1, S_2) > 0 \Rightarrow \mathrm{TE}_i(S_1, S_2) > 0$.
Let $(S^*_1, S^*_2)$ be one of the maximizers with the smallest size of $|S_1 \cup S_2|$, and denote $S^* := S^*_1 \cup S^*_2$ (note $(S^*_1, S^*_2)$ may not be unique). Under some mild assumptions listed below, we are able to provide the theoretical justification for mTE maximization in the time-series setting (Theorem 2.4). A proof can be seen in Appendix A.3.
Assumptions:
A1-A3 Causal Markov assumption, faithfulness, and causal sufficiency for the causal graph.
A4 Ergodicity and stationarity of the stochastic process defined by the causal graph, meaning the ensemble average equals the time average, and the functional relationships encoded by the causal graph do not change over time (or location). This also implies that $\mathrm{mTE}_i(S_1, S_2)$ is constant across $i$.
A5 DAG causal graph: We assume $X^T = [t_1, \ldots, t_m, u_{m+1}, \ldots, u_p]$ up to a permutation, where $t_i$ are causally interacting features forming a directed acyclic graph (DAG), and $u_k$ are nuisance features that may correlate with $t_i$. An illustration based on the time-series setting can be seen in Figure 1.
A6 Interaction regularity: Given two disjoint feature sets $A, B$ such that $A$ is a subset of the parent features of $B$ or $B$ is a subset of the child features of $A$, then, conditioning on any other feature set $C$ such that $I(A^i, B^{N(i)} \mid C^i) > 0$ and $I(A^i, B^{N(i)} \mid C^{N(i)}) > 0$, we have:

$$\forall i, \quad \min\{I(A^i, B^{N(i)} \mid C^i),\; I(A^i, B^{N(i)} \mid C^{N(i)})\} > I(A^i, B^i \mid C^i). \tag{3}$$

Remark 2.3. Here our only additional assumption beyond the prevalent literature (Pearl, 2009; Spirtes et al., 2000) is A6, which aims to filter out features with spurious causations and to regularize the algorithmic complexity of causal interactions, thus enabling information-theoretic analysis. A6 has direct connections with the concept of conditional transfer entropy (Faes et al., 2016; Shahsavari Baboukani et al., 2020); further discussion can be found in Appendix A.2.

Theorem 2.4. Given A1-A6, $S^* := (S^*_1 \cup S^*_2) \subseteq \{1, \ldots, m\}$ (the index set of true interacting features described in A5). Moreover, each feature in $S^*$ is connected to other features in the set $S^*$.
3 NEURAL OPTIMIZATION OF MODIFIED TRANSFER ENTROPY
With Theorem 3.1 stated below, we are able to give a theoretical guarantee for the $l_0$-penalized optimization of mTE. A proof can be seen in Appendix A.4. Here $\odot$ stands for the Hadamard product.

Theorem 3.1. Assume A1-A6 hold and $f, g, h$ define one-to-one mappings on $X \odot 1_{S_1}$ (for $f$) or $X \odot 1_{S_2}$ (for $g, h$). Then $\exists \lambda > 0$ such that, for (4), any solution $(S^*_1, S^*_2)$ satisfies $S^* := (S^*_1 \cup S^*_2) \subseteq \{1, \ldots, m\}$. Moreover, each feature in $S^*$ is connected to other features in the set.

$$\min_{f, g, h, S_1, S_2} \; -\big( I(f(x^i \odot 1_{S_1}); h(x^{N(i)} \odot 1_{S_2})) - I(f(x^i \odot 1_{S_1}); g(x^i \odot 1_{S_2})) \big) + \lambda |S_1 \cup S_2| \tag{4}$$
Remark 3.2. The estimation of mutual information by various approaches is an active field itself (Belghazi et al., 2018; Hjelm et al., 2018; McAllester and Stratos, 2020; Zhang et al., 2019). In contrast, by this theorem, we show that an accurate estimation of the transfer entropy (such as in (Zhang et al., 2019)) may not be needed as optimizing the upper bound of the modified transfer entropy automatically gives the best feature subset selection. Remark 3.3. Our theoretical guarantee is derived based on one-to-one embeddings f, g, h. In a neural network, the injectivity may be enforced with various architecture designs yet may not perfectly hold. Empirically, we have found that the optimization of mTE is robust to the embedding injectivity, compared with the original transfer entropy. This is due to our stricter design of the mTE function (Proposition 2.2) and is further illustrated by our experiments in the next section.
Given Theorem 3.1, we are able to construct a neural network for optimizing the proposed loss function. However, the estimation of mutual information is not directly tractable. In this case, because mutual information is invariant under one-to-one transforms, we can restrict the function class of $f, g, h$ in the optimization problem (4) to flows transforming the original feature distributions into Gaussian distributions with fixed dimensionality. We are able to formulate the target for neural network optimization by the explicit formula for the mutual information between Gaussians:

$$I(X, Y) = \frac{1}{2} \log \frac{\det \Sigma_X \det \Sigma_Y}{\det \Sigma_{[X,Y]}}.$$

The Gaussian regularization can be applied either by penalizing the discrepancy between the embedding distributions $[f, g, h]$ and Gaussian distributions, or by applying an adversarial training procedure. In this work, we implement the former approach, constructing the means and covariance matrices of the concatenated embedding as learnable parameters and minimizing the cross entropy between the target distributions and the parametrized Gaussian distributions.
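As a small illustration (our own sketch, not the authors' code), the empirical Gaussian mutual information used throughout can be computed directly from sample covariances:

```python
import numpy as np

def gaussian_mi(X, Y, eps=1e-6):
    """Empirical Gaussian mutual information I(X; Y).

    X: (n, dx) and Y: (n, dy) sample matrices; eps regularizes the
    log-determinants for numerical stability (our addition).
    """
    dx = X.shape[1]
    cov = np.cov(np.hstack([X, Y]), rowvar=False)
    cov_x, cov_y = cov[:dx, :dx], cov[dx:, dx:]
    _, ld_x = np.linalg.slogdet(cov_x + eps * np.eye(dx))
    _, ld_y = np.linalg.slogdet(cov_y + eps * np.eye(cov_y.shape[0]))
    _, ld_xy = np.linalg.slogdet(cov + eps * np.eye(cov.shape[0]))
    return 0.5 * (ld_x + ld_y - ld_xy)
```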
3.1 COMBINATORIAL STOCHASTIC GATES
In order to solve the optimization problem, we need to learn two sparse sets $S_1, S_2$, which involves combinatorial optimization, making the task impractical for high-dimensional data. To overcome this issue, we use a stochastic gate based approach (Yamada et al., 2020; Lindenbaum et al., 2021), which performs a probabilistic relaxation of deterministic $l_0$ norms. In order to explicitly construct $S_1$ and $S_2$ by stochastic gates, we define two random vectors $T^1$ and $T^2$ ranging in $[0, 1]$ with lengths equal to the feature number, with each element independently sampled from the STG distribution defined as $T^i_d = \max(0, \min(1, \mu^i_d + \epsilon^i_d))$, where $\epsilon^i_d \sim N(0, \sigma^2_i)$ is i.i.d. sampled with fixed variance and $\mu^i_d$ is a parameter trainable by reparametrization (Miller et al., 2017; Figurnov et al., 2018).
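A minimal sketch of sampling such gates (the variable names are ours; training backpropagates through $\mu$ via the reparametrization trick):

```python
import torch

def sample_stg(mu, sigma=0.5):
    """Sample gates T_d = clamp(mu_d + eps_d, 0, 1) with eps_d ~ N(0, sigma^2)."""
    return torch.clamp(mu + sigma * torch.randn_like(mu), 0.0, 1.0)

x = torch.randn(32, 100)                     # toy batch: 32 cells, 100 features
mu1 = torch.zeros(100, requires_grad=True)   # T^1: sparsity gates
mu2 = torch.zeros(100, requires_grad=True)   # T^2: source/sink assignment gates
T1, T2 = sample_stg(mu1), sample_stg(mu2)
x_S1 = x * T1 * T2          # features routed to the first set (X_tilde_S1)
x_S2 = x * T1 * (1.0 - T2)  # features routed to the second set (X_tilde_S2)
```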
The new loss function applying stochastic gates can be formulated as:

$$\mathbb{E}_{T^1, T^2} \left[ -\big( \hat{I}(f(\tilde{X}_{S_1}); h(W\tilde{X}_{S_2})) - \hat{I}(f(\tilde{X}_{S_1}); g(\tilde{X}_{S_2})) \big) \right] + \sum_{d=1}^{p} \big[ \lambda_1 \mathbb{P}(T^1_d > 0) + \lambda_2 \mathbb{P}(T^2_d \in (0, 1)) \big],$$

$$\text{s.t.} \quad \tilde{X}_{S_1} = X \odot T^1 \odot T^2, \qquad \tilde{X}_{S_2} = X \odot T^1 \odot (1 - T^2). \tag{5}$$
Here $\hat{I}$ is defined as the empirical Gaussian mutual information $\hat{I}(X,Y) = \frac{1}{2} \log \frac{\det \hat{\Sigma}_X \det \hat{\Sigma}_Y}{\det \hat{\Sigma}_{[X,Y]}}$, and $W$ is defined as the graph diffusion operator: $Wx^i = x^{N(i)}$. In our construction, $T^1$ controls the sparsity of the feature selection, while $T^2$ controls the expected overlap between $\tilde{X}_{S_1}$ and $\tilde{X}_{S_2}$. Denoting the Gaussian error function as $\mathrm{erf}(\cdot)$, the regularization term for the first layer is of the form:

$$\sum_{d=1}^{p} \mathbb{P}(T^1_d > 0) = \sum_{d=1}^{p} \left( \frac{1}{2} - \frac{1}{2}\,\mathrm{erf}\!\left( -\frac{\mu^1_d}{\sqrt{2}\sigma_1} \right) \right). \tag{6}$$
The regularization term for the second layer can be expressed as:

$$\sum_{d=1}^{p} \mathbb{P}(T^2_d \in (0, 1)) = \sum_{d=1}^{p} \left[ \mathbb{P}(T^2_d > 0) - \mathbb{P}(T^2_d \geq 1) \right] = \frac{1}{2} \sum_{d=1}^{p} \left( \mathrm{erf}\!\left( \frac{\mu^2_d}{\sqrt{2}\sigma_2} \right) - \mathrm{erf}\!\left( \frac{\mu^2_d - 1}{\sqrt{2}\sigma_2} \right) \right). \tag{7}$$
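Both penalties have the closed forms above and are straightforward to implement; a sketch (the λ values are placeholders, not the paper's tuned settings):

```python
import torch

def stg_sparsity_penalty(mu, sigma=0.5):
    """sum_d P(T_d > 0) = sum_d [1/2 - 1/2 erf(-mu_d / (sqrt(2) sigma))], cf. eq. (6)."""
    return (0.5 - 0.5 * torch.erf(-mu / (sigma * 2.0 ** 0.5))).sum()

def stg_binarization_penalty(mu, sigma=0.5):
    """sum_d P(T_d in (0, 1)), cf. eq. (7): pushes the second gate layer toward {0, 1}."""
    s = sigma * 2.0 ** 0.5
    return 0.5 * (torch.erf(mu / s) - torch.erf((mu - 1.0) / s)).sum()

mu1 = torch.zeros(100, requires_grad=True)
mu2 = torch.zeros(100, requires_grad=True)
lambda1, lambda2 = 0.05, 0.1  # placeholder regularization weights
reg = lambda1 * stg_sparsity_penalty(mu1) + lambda2 * stg_binarization_penalty(mu2)
```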
We are able to show strong consistency for our stochastic-gate based feature selection scheme by the theorem below (a proof can be seen in Appendix A.5):

Theorem 3.4. Assume A1-A6 hold and $f, g, h$ are one-to-one Gaussian embeddings as described above. For the optimal solution of (5), denote a sample of the stochastic gates by $(T^1, T^2)$ and denote the ground-truth interacting feature set as $S$; then there exist $\lambda_1, \lambda_2 > 0$ for (5) such that, as $n \to \infty$,

$$\forall i \in \{0, 1\}, \quad \mathbb{P}(B_i \subseteq S) \xrightarrow{a.s.} 1, \quad \text{where } B_i := \{d \mid T^1_d > 0,\, T^2_d = i\}. \tag{8}$$
In practice, we have also observed that the method's solution depends strongly on the stochastic gate initialization. Here we provide a heuristic initialization scheme that shows superior empirical performance. Details of the initialization scheme can be seen in Appendix B.
3.2 PROPOSED NETWORK ARCHITECTURE
Our proposed network architecture is summarized in Figure 2. For an input dataset $X \in \mathbb{R}^{p \times n}$ and its corresponding graph adjacency matrix $A \in \mathbb{R}^{n \times n}$, we first pass each feature through two sequential stochastic gate layers $T^1, T^2$. The $l_0$ penalty is applied to the first STG layer, while the second STG layer is regularized with the 0-1 penalty, consistent with the descriptions in the previous section.
After the gating, denoting $\hat{T}^2_i = 1 - T^2_i$, we obtain two intermediate embeddings defined by $\tilde{X}_{S_1} = X \odot T^1 \odot T^2$ and $\tilde{X}_{S_2} = X \odot T^1 \odot \hat{T}^2$ respectively. These two embeddings are then passed through MLP1 ($f$) and MLP2 ($g$) to generate the Gaussian embeddings $f(\tilde{X}_{S_1})$ and $g(\tilde{X}_{S_2})$ corresponding to (5). For the design of the function $h$, we consider two crucial elements: 1. an additional layer to aggregate the information from the different nodes in $x^{N(i)}$; 2. the injectivity of the mappings $f, g, h$. Note that $f, h$ in (5) are automatically enforced to be injective on interacting features in order to maximize the first term of mTE, but $g$ is not. Therefore, our final design of $h$ is the composition of first applying $g$ (enforcing the injectivity of $g$), then a mean aggregation layer without self-loops, consistent with the GCN design (Kipf and Welling, 2016), realized by multiplying with the adjacency matrix $A$, and finally another MLP layer (MLP3). Finally, we compute the negative empirical Gaussian mTE $\hat{I}(f, g) - \hat{I}(f, h)$ and add the cross-entropy penalty between the concatenated embedding distribution and a learnable Gaussian distribution.
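Putting the pieces together, a schematic (and much simplified) forward pass might look as follows; the layer sizes, the row-normalized adjacency, and feeding the outputs into the Gaussian mTE estimator are our assumptions, not the authors' released code:

```python
import torch
import torch.nn as nn

class GEASSSketch(nn.Module):
    def __init__(self, p, d=64, sigma=0.5):
        super().__init__()
        self.mu1 = nn.Parameter(torch.zeros(p))   # T^1: sparsity gates
        self.mu2 = nn.Parameter(torch.zeros(p))   # T^2: source/sink assignment gates
        self.sigma = sigma
        self.f = nn.Sequential(nn.Linear(p, d), nn.ReLU(), nn.Linear(d, d))  # MLP1
        self.g = nn.Sequential(nn.Linear(p, d), nn.ReLU(), nn.Linear(d, d))  # MLP2
        self.h = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))  # MLP3

    def forward(self, X, A_norm):
        # X: (n, p) features; A_norm: (n, n) row-normalized adjacency (no self-loops)
        T1 = torch.clamp(self.mu1 + self.sigma * torch.randn_like(self.mu1), 0, 1)
        T2 = torch.clamp(self.mu2 + self.sigma * torch.randn_like(self.mu2), 0, 1)
        zf = self.f(X * T1 * T2)            # f(X_tilde_S1)
        zg = self.g(X * T1 * (1 - T2))      # g(X_tilde_S2)
        zh = self.h(A_norm @ zg)            # h = MLP3 ∘ mean-aggregation ∘ g
        return zf, zg, zh                   # feed into the empirical Gaussian mTE loss
```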
3.3 OUTPUT INTERPRETATION
Upon algorithm convergence, GEASS provides two outputs: the active features ($B_0 \cup B_1$) and the embeddings ($f, g, h$) produced by the causally interacting features. In this paper, we emphasize the use of the identified interacting features $B_0 \cup B_1$; the embedding output ($f, g, h$) may be complex and nonlinear, potentially requiring additional architectures to maximize its interpretability.
By the construction of GEASS, we are able to get two separate sparse feature subsets as source features B1 and sink features B0. These features may be used as inputs to further proper causal analysis, such as LPCMCI (Gerhardus and Runge, 2020) for time-series data, which despite its statistical power in depicting possible lags, identifying latent confounders, and allowing nonlinear tests, can only work on data with moderate feature sizes. Also, these features may be used in other machine learning models for improved model interpretability.
4 EXPERIMENTS
4.1 GAUSSIAN TIME-SERIES WITH POSSIBLE NONLINEARITY
In order to benchmark the method in time-series data, we consider two settings: 1. Minor effect of latent processes, with autocorrelation present; 2. Significant effect of latent processes, with autocorrelation present. Both settings are modeled by Gaussian structural processes with an underlying causal graph. Further details can be seen in Appendix C.1.
We use the false discovery rate (FDR) and F1 score between the ground-truth interacting features and the recovered features as two metrics for high-dimensional causal discovery. We compare GEASS with two categories of methods, namely conditional independence based (CI-based) methods and Granger causality based (GC-based) methods. The first category includes VAR-LINGAM (Hyvärinen et al., 2010), PCMCI (Runge et al., 2019b), and LPCMCI (Gerhardus and Runge, 2020); among them, despite its statistical power, LPCMCI is not included in our experiment as it failed to converge in the allotted time in our preliminary experiments. The second category includes a neural-network based generalized vector autoregression model, GVAR Granger (Marcinkevičs and Vogt, 2021), and GrID-net, which generalizes the definition of Granger causality to directed acyclic graphs (DAGs) (Wu et al., 2021); moreover, we include two state-of-the-art approaches, DCM and NGM, implemented in (Bellot et al., 2021), which use neural ODEs to model nonlinear dependence graphs.
Table 1 shows our benchmarking results. Among the alternative methods, GVAR and GrID-net fail in all settings as they are not designed for causal feature selection. VAR-LINGAM achieves high accuracy in the linear settings while failing in the nonlinear settings. In contrast, PCMCI fails when latent processes contribute to both true causally interacting features and nuisance features, creating spurious correlations. Empirically, we also observe that DCM and NGM achieve comparable performance when the dynamics are linear but perform worse in the nonlinear setting, where the dynamics are more irregular. Finally, GEASS consistently gives accurate causal feature identification (high F1) and a low false discovery rate (low FDR) in all settings considered.
Table 1: FDR and F1 (mean (s.d.)) on the synthetic time series; the four column-pair groups correspond to the four experimental settings of Section 4.1.

Method          FDR        F1         FDR        F1         FDR        F1         FDR        F1
GVAR (GC)       .94 (.00)  .11 (.00)  .94 (.00)  .11 (.00)  .94 (.00)  .11 (.00)  .94 (.00)  .11 (.00)
GrID-net (GC)   1.0 (.00)  .00 (.00)  1.0 (.00)  .00 (.00)  1.0 (.00)  .00 (.00)  1.0 (.00)  .00 (.00)
DCM (GC)        .12 (.20)  .88 (.20)  .65 (.12)  .35 (.12)  .18 (.09)  .82 (.09)  .93 (.11)  .07 (.11)
NGM (GC)        .07 (.08)  .88 (.04)  .48 (.17)  .50 (.17)  .00 (.00)  .91 (.00)  .62 (.25)  .38 (.25)
GEASS (Ours)    .05 (.15)  .97 (.10)  .03 (.06)  .92 (.05)  .03 (.07)  .90 (.04)  .00 (.00)  .91 (.00)
Furthermore, we evaluate the different methods' scalability with respect to the feature size (experimental details can be seen in Appendix C.1.2). As described before, we anticipate a high computational complexity for both conditional independence based methods and neural network based methods with respect to the feature size, which prohibits further use of these methods for high-dimensional biological data analysis, where the feature number is typically on the scale of $10^3$-$10^4$. Meanwhile, GEASS constructs a single neural network with a parameter count approximately proportional to $p$, thus largely reducing the complexity in the high-dimensional regime. We benchmark PCMCI, GVAR, GrID-net, NGM, GEASS, and an additional combination of GEASS with LPCMCI, a downstream CI-test based causal graph identification method. Our experimental results show the superior performance of GEASS, as well as GEASS+LPCMCI, in time complexity, consistent with our qualitative analysis (Figure 3).
4.2 SIMULATED SPATIAL OMICS DATA WITH CELL TYPE CONFOUNDER
In order to jointly consider spatial confounders and the corresponding autocorrelation patterns that are potentially enriched in specific niches, we consider the case of spatial omics data, where the autocorrelation is modeled by a higher likelihood of the same cell type occurring in the neighborhood, and the confounder (nuisance features) is modeled by a coherent shift of global gene expression for each cell type. We first simulate scRNA-seq datasets; each synthetic scRNA-seq dataset is then assigned to a fixed-size grid with cell type labels simulated by an Ising model. We then add artificial genes that are spatially correlated with a given gene set in neighboring cells. Finally, each dataset is normalized and log1p-transformed following the standard Scanpy pipeline (Wolf et al., 2018).
The majority of the methods above are not applicable, as their focus is on time-series data. Therefore, in order to perform our benchmarking study, we compare GEASS with Lasso Granger, as well as our implemented L1-regularized version of NCEM, an approach proposed to detect interactions in spatial omics data (Fischer et al., 2021). Finally, we also implement a baseline that maximizes the original transfer entropy to select causal features (TE).
As shown in Table 2, the original LASSO cannot identify causal features because of the strong correlation between features. L1-NCEM alleviates the issue by conditioning on cell type labels in the regression. TE outperforms the linear methods yet generates a number of false positives, as it may learn spurious causations, as discussed in Remark 3.3. Finally, GEASS consistently outperforms the other methods in identifying the causal features of the data, as shown by both a high F1 score and a low FDR.
4.3 SCRNA-SEQ PANCREATIC ENDOCRINOGENESIS TRAJECTORY
We test GEASS on the pancreatic endocrinogenesis trajectory data, which is a standard dataset for the scRNA-seq trajectory inference task (Bergen et al., 2020; Bastidas-Ponce et al., 2019). The pancreas trajectory data contains 3696 cells and 27998 genes. After preprocessing, lowly-expressed genes are filtered out following the standard scVelo pipeline (Bergen et al., 2020), with 2000 genes remaining for further analysis. We aim to use GEASS to identify causally-related genes along the developmental trajectory to reveal the underlying biology (see Appendix C.3 for experimental details).
scRNA-seq data provides a snapshot of the cell population distribution, so time-series based analysis methods cannot be directly applied. However, due to GEASS's flexible choice of the forward operator $W$, we are able to define the time flow by RNA velocity analysis. RNA velocity analysis uses the additional information of intronic RNA to infer the underlying dynamics of gene expression change. Thus, we are able to define a velocity kernel matrix $A_{velo}$, which provides weighted adjacency relationships between cells based on velocity direction and cell phenotypic proximity.
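For illustration, a sketch of one way to obtain such a velocity-based forward operator with scVelo (our assumed toolchain; the exact kernel construction used in the paper is not specified here, and the row normalization is our simplification):

```python
import scvelo as scv

adata = scv.datasets.pancreas()  # pancreas endocrinogenesis dataset
scv.pp.filter_and_normalize(adata, min_shared_counts=20, n_top_genes=2000)
scv.pp.moments(adata, n_pcs=30, n_neighbors=30)
scv.tl.velocity(adata)
scv.tl.velocity_graph(adata)

# Row-normalize the velocity graph into a forward operator A_velo; any kernel
# combining velocity direction with phenotypic proximity could be used instead.
V = adata.uns["velocity_graph"]
W = V.multiply(1.0 / (V.sum(axis=1) + 1e-12))
```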
GEASS identifies 50 causally-related features with high biological relevance. For example, the gene list includes the key transcriptional regulator NEUROG3, which is required for the specification of a common precursor of the 4 pancreatic terminal states (uni, 2021). As the ground-truth causal interactions here are unknown, for further quantitative validation we assume the underlying biological process is driven by a causal cascade of gene interactions, meaning target genes activated in earlier phases of the trajectory further cause downstream gene activation in later phases. In this case, the higher a gene's velocity is, the more likely the gene is associated with causal gene-gene relationships. Our benchmarking result here suggests GEASS achieves the best performance in selecting genes with a high mean velocity likelihood, compared with alternative gene selection schemes with the same fixed gene number (50): highly-expressed genes (HEG), highly-variable genes (HVG), and genes with a high correlation with the inferred latent time (HCG) (Figure 4).
4.4 MERFISH HUMAN CORTEX SINGLE-CELL LEVEL SPATIAL TRANSCRIPTOMICS
Spatial transcriptomics represents a wide category of methods that achieve spatial profiling of gene expression in tissues (Moses and Pachter, 2022; Rao et al., 2021; Palla et al., 2022). With the additional information of spatial locations, such measurements enable deeper understanding of cellular interactions (Palla et al., 2022; Jerby-Arnon and Regev, 2022; Fischer et al., 2021). However, current computational methods revealing interaction modules (Jerby-Arnon and Regev, 2022) or niche effects (Fischer et al., 2021; Raredon et al., 2023) for spatial omics data lack causal interpretation. By applying GEASS, we aim to reveal underlying causal intercellular patterns and fully utilize the potential of spatial omics data for biological discovery.
Here we use GEASS on a recently published MERFISH dataset measuring spatially-resolved single-cell gene expression of the human cortex (Fang et al., 2022). The dataset we use comprises 3044 cells and 4000 genes; each cell is annotated as one of eight cell types: excitatory neurons (EXC), inhibitory neurons (INC), astrocytes (ASC), microglial cells (MGC), oligodendrocytes (OGC), oligodendrocyte progenitor cells (OPC), endothelial cells (ENDO), and mural cells (MURAL), as shown in the first panel of Figure 6 in Appendix D. Our GEASS analysis selects 9 genes, namely FILIP1, SLC17A7, MYH11, RP11-10j21.2, PIRT, C3ORF67, TRDMT1, RGS8, and SPTLC2 (Appendix Figure 6), with further experimental details available in Appendix C.4. Among these genes, MYH11, RP11-10j21.2, and TRDMT1 are enriched in the endothelial cells adjacent to mural cells, corresponding to underlying vascular structures (marked by ellipses in the first panel of Appendix Figure 6). We next aim to verify whether their expression difference with respect to non-adjacent endothelial cells is statistically significant. Indeed, by applying the Wilcoxon rank-sum test, we find significant enrichment for both MYH11 and TRDMT1, with p-values 0.003 and 0.015 respectively, while the p-value for the gene RP11-10j21.2 is not significant (0.5) due to the gene's expression sparsity. This finding is consistent with the MERFISH images, which reveal rich cellular interactions between neuronal cells and the blood vessels (Fang et al., 2022). Therefore, these identified marker genes of vascular structure may encode meaningful underlying cellular interactions.
Next, we focus on two GEASS identified genes, C3ORF67 and PIRT, which are highly expressed at nearby spatial locations. In order to confirm the possible causal relationship between the two genes, we consider three models: 1. the two genes are expressed in the same cell without spatial causal relationships; 2. The expression of C3ORF67 in each cell causes the expression of PIRT in neighboring cells (C3ORF67 → PIRT); 3. The expression of PIRT in each cell causes the expression of C3ORF67 in neighboring cells (PIRT → C3ORF67). To this end, we first compare Pearson and Spearman p-values of intracellular correlation (model 1), C3ORF67 to neighboring PIRT
(model 2), and PIRT to neighboring C3ORF67 (model 3). Our comparison shows that, for the p-values of both correlation measures, model 3 is favored (0.004, 0.001) over model 1 (0.014, 0.003) and model 2 (0.049, 0.004). The validity of model 3 (PIRT → C3ORF67) is further supported by a linear model predicting C3ORF67 expression from both the intracellular and the neighboring expression of PIRT, where the neighboring-cell effect coefficient is significant at the 0.01 confidence level by bootstrap, while the corresponding coefficient of the alternative model is not significant. Our finding is consistent with the predicted role of PIRT in transmembrane transporter binding and phosphatidylinositol-mediated signaling (Safran et al., 2021). As the role of C3ORF67 in the human cortex remains unclear, this revealed causal link may lead to novel biological discoveries with further experimental validation.
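A sketch of the kind of neighbor-regression comparison described above (the variable names and the bootstrap routine are ours; the original analysis details are in Appendix C.4):

```python
import numpy as np

def neighbor_effect_coef(y, x_self, x_neigh, n_boot=1000, seed=0):
    """Fit y ~ 1 + x_self + x_neigh by least squares; bootstrap the neighbor coefficient."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones_like(y), x_self, x_neigh])
    coefs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        coefs.append(beta[2])
    lo, hi = np.percentile(coefs, [0.5, 99.5])  # 99% bootstrap CI for the neighbor effect
    return np.mean(coefs), (lo, hi)

# y: C3ORF67 per cell; x_self: PIRT in the same cell; x_neigh: mean PIRT of neighbors (W @ pirt)
```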
5 CONCLUSIONS
In this work, we present GEASS, a causal feature selection method based on information-theoretic tools and neural networks. GEASS is able to scale to high dimensions and identify sparse interacting features. We provide both theoretical guarantees and empirical validation of GEASS on synthetic and real biological data. Our results show that GEASS can be integrated into high-dimensional spatiotemporal data analysis pipelines to provide unique insights for further findings.
Limitations. GEASS is a method designed for nonlinear causal feature selection. GEASS does not provide a causal graph itself, as it optimizes a latent embedding corresponding to different causal mechanisms. Therefore, in applications where a causal graph output is desired, constraint-based methods may need to be applied after GEASS. Moreover, when the underlying causal graph has a large number of vertices, the sparsity assumption is violated and GEASS is not guaranteed to work. Further efforts may also be taken to incorporate lag selection into GEASS.
Broader impact. We anticipate a wide use of GEASS in high-dimensional graph-structured data, especially for high-dimensional biological data such as single cell trajectories and spatial omics measurements. Applying GEASS along with causal graph identification methods to a wider range of real biological data may greatly facilitate downstream biological discoveries.
ACKNOWLEDGEMENTS
The authors thank Ofir Lindenbaum, Boaz Nadler, Yifei Min, and Ronen Basri for helpful discussions. Y.K. acknowledges support by NIH grants R01GM131642, UM1DA051410, U54AG076043, P50CA121974, and U01DA053628.
APPENDIX
A PROOFS
A.1 PROOF OF PROPOSITION 2.2.
Proposition 2.2. $\forall S_1, S_2 \subset \{1, \ldots, p\}$: $\mathrm{mTE}_i(S_1, S_2) > 0 \Rightarrow \mathrm{TE}_i(S_1, S_2) > 0$.
Proof. By standard properties of mutual information (Cover, 1999), we have

$$\begin{aligned}
\mathrm{TE}_i(X_{S_1}, X_{S_2}) &= I(X^i_{S_1}; X^{j:(i,j)\in E}_{S_2} \mid X^i_{S_2}) \\
&= I(X^i_{S_1}; X^{j:(i,j)\in E}_{S_2}, X^i_{S_2}) - I(X^i_{S_1}; X^i_{S_2}) \\
&= I(X^i_{S_1}; X^{j:(i,j)\in E}_{S_2}) - I(X^i_{S_1}; X^i_{S_2}) + I(X^i_{S_1}; X^i_{S_2} \mid X^{j:(i,j)\in E}_{S_2}).
\end{aligned} \tag{9}$$

Therefore $\mathrm{TE}_i(S_1, S_2) \geq \mathrm{mTE}_i(S_1, S_2)$ holds, and thus $\mathrm{mTE}_i(S_1, S_2) > 0 \Rightarrow \mathrm{TE}_i(S_1, S_2) > 0$.
A.2 DISCUSSION OF ASSUMPTION A6.
Our assumption A6 is based on the concept of conditional mutual information, and aims to filter out possible indirect causal relationships.
Here are two simple examples to see why TE/mTE can have problems with indirect causal interactions in the time-series setting. Consider the relationships $s_t \to w_t \to v_{t+1}$ and $s_t \to w_{t+1} \to v_{t+1}$. Then, in both cases, we may have $I(s_t, v_{t+1}) - I(s_t, v_t) > 0$ and $I(s_t, v_{t+1} \mid v_t) > 0$, although there is no direct causal relationship between $s$ and $v$. Note that in our setting we include the possibility of such indirect interactions by allowing correlation between nuisance features and true interacting features.
The issue can be resolved by considering the conditional mutual information $I(s_t, v_{t+1} \mid w_t)$ or $I(s_t, v_{t+1} \mid w_{t+1})$, which equals 0. This insight is also addressed by the concept of conditional transfer entropy:
Definition (Conditional transfer entropy) (Shahsavari Baboukani et al., 2020). Assume $X$ and $Y$ are the features of interest and the conditioning features are $Z$. Denoting $-$ as $[1, 2, \ldots, t]$, we have

$$\mathrm{cTE}_t(X, Y, Z) = I(Y_{t+1}, X_- \mid Y_-, Z_-).$$
The classical formulation of conditional transfer entropy is widely used in high-dimensional observational data to learn direct causal dependencies (Faes et al., 2016; Shahsavari Baboukani et al., 2020). It implicitly assumes that there is a direct causal relationship between $X$ and $Y$ if $\forall Z, t$, $\mathrm{cTE}_t(X, Y, Z) > 0$. Here, we extend this assumption to conditional mTE, covering both examples described above. The conditional mTEs are defined in analogy to cTE for generalized graph-structured data in the Markovian model setting:
Definition (Two forms of conditional mTE). Assume X and Y are the feature sets of interest and the conditioning features are Z. Then we have
$$\mathrm{cmTE}^1_i(X, Y, Z) = I(X^i, Y^{N(i)} \mid Z^i) - I(X^i, Y^i \mid Z^i);$$
$$\mathrm{cmTE}^2_i(X, Y, Z) = I(X^i, Y^{N(i)} \mid Z^{N(i)}) - I(X^i, Y^i \mid Z^i).$$
By requiring both forms of conditional mTE to be larger than zero, we rule out both possibilities $X^i \to Z^i \to Y^{N(i)}$ and $X^i \to Z^{N(i)} \to Y^{N(i)}$, as mTE is a stricter version of the original transfer entropy, as discussed in Proposition 2.2. In summary, our A6 can be reformulated as $\forall Z, i$: $\mathrm{cmTE}^1_i(X, Y, Z) > 0$ and $\mathrm{cmTE}^2_i(X, Y, Z) > 0$ for ground-truth interacting $X, Y$ in non-degenerate cases, where $Z$ does not fully overlap with $X$/$Y$ at the same point.
A.3 PROOF OF THEOREM 2.4.
Theorem 2.4. Given A1-A6, $S^* := (S^*_1 \cup S^*_2) \subseteq \{1, \ldots, m\}$ (the index set of true interacting features described in A5). Moreover, each feature in $S^*$ is connected to other features in the set $S^*$.
Proof. Step 1. First we prove $S^*_1 \cap S^*_2 = \emptyset$. If not, assume $p$ is an overlapping element. For simplicity, we denote $N(i) := \{j \mid (i, j) \in E\}$, $A = X_{S^*_1}$, $B = X_{S^*_2}$. Then we have

$$\begin{aligned}
&\mathrm{mTE}(S^*_1, S^*_2) - \mathrm{mTE}(S^*_1 \setminus p, S^*_2) \\
&= I(A^i \setminus p^i, p^i; B^{N(i)} \setminus p^{N(i)}, p^{N(i)}) - I(A^i, p^i; B^i, p^i) - I(A^i \setminus p^i; B^{N(i)} \setminus p^{N(i)}, p^{N(i)}) + I(A^i; B^i, p^i) \\
&= I(p^i; B^{N(i)} \setminus p^{N(i)}, p^{N(i)} \mid A^i \setminus p^i) - I(p^i; B^i \setminus p^i, p^i \mid A^i \setminus p^i) < 0.
\end{aligned} \tag{10}$$
Therefore removing p would increase the value of mTE, leading to a contradiction.
Step 2. Now we prove nuisance signals cannot be in either $S^*_1$ or $S^*_2$. Otherwise, first assume a set of nuisance signals $U$ is in $S^*_1$. Here we denote $A := X_{S^*_1}$, $B := X_{S^*_2}$. As $U$ only interacts with variables at the same time point, $U$ can only interact with $B^{N(i)}$ via indirect links through a subset of interacting features at $i$. Denote this feature set as $\mathrm{Pa}_U(B)^i \subseteq \{t^i_1, \ldots, t^i_m\}$, and the difference set $\mathrm{Pa}^-_U(B)^i := \mathrm{Pa}_U(B)^i \setminus B^i$. We first note that $\mathrm{Pa}^-_U(B)^i$ cannot be an empty set. Otherwise, denoting $S_1 := S^*_1 \setminus U$ and noting the non-overlap between $A$ and $B$, we would have

$$\begin{aligned}
&\mathrm{mTE}(S^*_1, S^*_2) - \mathrm{mTE}(S_1, S^*_2) \\
&= I(A^i \setminus U^i, U^i; B^{N(i)}) - I(A^i \setminus U^i, U^i; B^i) - I(A^i \setminus U^i; B^{N(i)}) + I(A^i \setminus U^i; B^i) \\
&= I(U^i; B^{N(i)} \mid A^i \setminus U^i) - I(U^i; B^i \mid A^i \setminus U^i) \\
&= -h(U^i \mid B^{N(i)}, A^i \setminus U^i) + h(U^i \mid B^i, A^i \setminus U^i) \\
&\leq -h(U^i \mid B^{N(i)}, A^i \setminus U^i) + h(U^i \mid \mathrm{Pa}_U(B)^i, A^i \setminus U^i) \quad \text{(conditioning reduces entropy)} \\
&\leq 0.
\end{aligned} \tag{11}$$

This means $(S_1, S^*_2)$'s mTE is not smaller than $(S^*_1, S^*_2)$'s while having a smaller union size, leading to a contradiction. Then, because $\mathrm{Pa}^-_U(B)$ does not overlap with either $U$ or $B$, with A6 we have

$$\begin{aligned}
&\mathrm{mTE}(S^*_1, S^*_2) - \mathrm{mTE}(S^*_1 \cup \mathrm{Index}(\mathrm{Pa}^-_U(B)), S^*_2) \\
&= I(A^i \setminus U^i, U^i; B^{N(i)}) - I(A^i \setminus U^i, U^i; B^i) - I(A^i \setminus U^i, U^i, \mathrm{Pa}^-_U(B)^i; B^{N(i)}) + I(A^i \setminus U^i, U^i, \mathrm{Pa}^-_U(B)^i; B^i) \\
&= I(\mathrm{Pa}^-_U(B)^i; B^i \mid A^i) - I(\mathrm{Pa}^-_U(B)^i; B^{N(i)} \mid A^i) \overset{A6}{\leq} 0.
\end{aligned} \tag{12}$$

The equality above holds iff $\mathrm{Pa}^-_U(B)^i \subseteq A^i$. Further we have

$$\begin{aligned}
&\mathrm{mTE}(S^*_1 \cup \mathrm{Index}(\mathrm{Pa}^-_U(B)), S^*_2) - \mathrm{mTE}(S_1 \cup \mathrm{Index}(\mathrm{Pa}^-_U(B)), S^*_2) \\
&= I(A^i \setminus U^i, U^i, \mathrm{Pa}^-_U(B)^i; B^{N(i)}) - I(A^i \setminus U^i, U^i, \mathrm{Pa}^-_U(B)^i; B^i) - I(A^i \setminus U^i, \mathrm{Pa}^-_U(B)^i; B^{N(i)}) + I(A^i \setminus U^i, \mathrm{Pa}^-_U(B)^i; B^i) \\
&= I(U^i; B^{N(i)} \mid \mathrm{Pa}^-_U(B)^i, A^i \setminus U^i) - I(U^i; B^i \mid \mathrm{Pa}^-_U(B)^i, A^i \setminus U^i) \\
&= -h(U^i \mid B^{N(i)}, \mathrm{Pa}^-_U(B)^i, A^i \setminus U^i) + h(U^i \mid B^i, \mathrm{Pa}^-_U(B)^i, A^i \setminus U^i) \\
&\leq -h(U^i \mid B^{N(i)}, \mathrm{Pa}^-_U(B)^i, A^i \setminus U^i) + h(U^i \mid \mathrm{Pa}_U(B)^i, A^i \setminus U^i) \leq 0.
\end{aligned} \tag{13}$$

Therefore, in all possible cases, $\mathrm{mTE}(S_1 \cup \mathrm{Index}(\mathrm{Pa}^-_U(B)), S^*_2)$ is either strictly larger than $\mathrm{mTE}(S^*_1, S^*_2)$, or equal to $\mathrm{mTE}(S^*_1, S^*_2)$ but with a smaller union size, leading to a contradiction.
Next, given the result above, assume a nuisance signal set $U$ is in $S^*_2$, and $S^*_1$ does not include any nuisance features. Then, as $U$ only interacts with variables at the same time point, $U^{N(i)}$ can only interact with $S^*_1$ via indirect links through a subset of interacting features at $N(i)$. Denote the whole intermediate feature set for $A$ as $\mathrm{Ch}_U(A)^{N(i)} \subseteq \{t^{N(i)}_1, \ldots, t^{N(i)}_m\}$, and $\mathrm{Ch}^-_U(A)^{N(i)} := \mathrm{Ch}_U(A)^{N(i)} \setminus A^{N(i)}$. Then, as above, denoting $S_2 = S^*_2 \setminus U$, if $\mathrm{Ch}^-_U(A)$ were an empty set we would have

$$\begin{aligned}
&\mathrm{mTE}(S^*_1, S^*_2) - \mathrm{mTE}(S^*_1, S_2) \\
&= I(A^i; B^{N(i)} \setminus U^{N(i)}, U^{N(i)}) - I(A^i; B^i \setminus U^i, U^i) - I(A^i; B^{N(i)} \setminus U^{N(i)}) + I(A^i; B^i \setminus U^i) \\
&= I(A^i; U^{N(i)} \mid B^{N(i)} \setminus U^{N(i)}) - I(A^i; U^i \mid B^i \setminus U^i) \\
&= -h(U^{N(i)} \mid B^{N(i)} \setminus U^{N(i)}, A^i) + h(U^i \mid B^i \setminus U^i, A^i) \\
&\leq -h(U^{N(i)} \mid B^{N(i)} \setminus U^{N(i)}, A^i) + h(U^i \mid \mathrm{Ch}_U(A)^i, B^i \setminus U^i) \leq 0.
\end{aligned} \tag{14}$$

The above derivation holds due to stationarity (as $|N(i)| \equiv 1$ in the time-series setting). Therefore $\mathrm{Ch}^-_U(A)$ cannot be an empty set. Because of the non-overlap between $\mathrm{Ch}^-_U(A)$ and either $A$ or $U$, with A6 we have

$$\begin{aligned}
&\mathrm{mTE}(S^*_1, S^*_2) - \mathrm{mTE}(S^*_1, S_2 \cup \mathrm{Index}(\mathrm{Ch}^-_U(A))) \\
&= I(A^i; B^{N(i)} \setminus U^{N(i)}, U^{N(i)}) - I(A^i; B^i \setminus U^i, U^i) - I(A^i; B^{N(i)} \setminus U^{N(i)}, U^{N(i)}, \mathrm{Ch}^-_U(A)^{N(i)}) + I(A^i; B^i \setminus U^i, U^i, \mathrm{Ch}^-_U(A)^i) \\
&= I(A^i; \mathrm{Ch}^-_U(A)^i \mid B^i) - I(A^i; \mathrm{Ch}^-_U(A)^{N(i)} \mid B^{N(i)}) \overset{A6}{\leq} 0.
\end{aligned} \tag{15}$$

The equality above holds iff $\mathrm{Ch}^-_U(A)^i \subseteq B^i$. Further we have

$$\begin{aligned}
&\mathrm{mTE}(S^*_1, S^*_2 \cup \mathrm{Index}(\mathrm{Ch}^-_U(A))) - \mathrm{mTE}(S^*_1, S_2 \cup \mathrm{Index}(\mathrm{Ch}^-_U(A))) \\
&= I(A^i; B^{N(i)} \setminus U^{N(i)}, U^{N(i)}, \mathrm{Ch}^-_U(A)^{N(i)}) - I(A^i; B^i \setminus U^i, U^i, \mathrm{Ch}^-_U(A)^i) - I(A^i; B^{N(i)} \setminus U^{N(i)}, \mathrm{Ch}^-_U(A)^{N(i)}) + I(A^i; B^i \setminus U^i, \mathrm{Ch}^-_U(A)^i) \\
&= I(A^i; U^{N(i)} \mid B^{N(i)} \setminus U^{N(i)}, \mathrm{Ch}^-_U(A)^{N(i)}) - I(A^i; U^i \mid B^i \setminus U^i, \mathrm{Ch}^-_U(A)^i) \leq 0.
\end{aligned} \tag{16}$$

Therefore, in all possible cases, $\mathrm{mTE}(S^*_1, S_2 \cup \mathrm{Index}(\mathrm{Ch}^-_U(A)))$ is either strictly larger than $\mathrm{mTE}(S^*_1, S^*_2)$, or equal to $\mathrm{mTE}(S^*_1, S^*_2)$ but with a smaller union size, leading to a contradiction.
Step 3. Moreover, if there exists a component in $S^*_1 \cup S^*_2$ not connected to any other feature components, denote this feature as $q$. Then, in this case, with A1-A4 the feature $q$ is independent of all other features in $S^*_1 \cup S^*_2$. From Step 1 it can be deduced that $q$ cannot be in both $S^*_1$ and $S^*_2$. Therefore we have $\mathrm{mTE}(S^*_1 - q, S^*_2 - q) = \mathrm{mTE}(S^*_1, S^*_2)$, leading to the contradiction of finding an $(S_1, S_2)$ with the same mTE but smaller $|S_1 \cup S_2|$.
A.4 PROOF OF THEOREM 3.1.
Theorem 3.1. Assume A1-A6 hold and $f, g, h$ define one-to-one mappings on $X \odot 1_{S_1}$ (for $f$) or $X \odot 1_{S_2}$ (for $g, h$). Then $\exists \lambda > 0$ such that, for (4), any solution $(S^*_1, S^*_2)$ satisfies $S^* := (S^*_1 \cup S^*_2) \subseteq \{1, \ldots, m\}$. Moreover, each feature in $S^*$ is connected to other features in the set.

$$\min_{f, g, h, S_1, S_2} \; -\big( I(f(x^i \odot 1_{S_1}); h(x^{N(i)} \odot 1_{S_2})) - I(f(x^i \odot 1_{S_1}); g(x^i \odot 1_{S_2})) \big) + \lambda |S_1 \cup S_2|$$

Proof. With A4 (ergodicity and stationarity), the optimization problem (4) is equivalent to

$$\min_{f, g, h, S_1, S_2} \; -\big( I(f(x^i_{S_1}); h(x^{N(i)}_{S_2})) - I(f(x^i_{S_1}); g(x^i_{S_2})) \big) + \lambda |S_1 \cup S_2|. \tag{17}$$

Given the assumption that $f, g, h$ define injective mappings on $x^i_{S_1}$, $x^i_{S_2}$ respectively, and since one-to-one transformations do not change mutual information, the optimization problem is equivalent to

$$\min_{S_1, S_2} \; -\big( I(x^i_{S_1}; x^{N(i)}_{S_2}) - I(x^i_{S_1}; x^i_{S_2}) \big) + \lambda |S_1 \cup S_2|. \tag{18}$$

Using Theorem 2.4, a minimizer of the mTE term with the smallest union size satisfies $S^* := (S^*_1 \cup S^*_2) \subseteq \{1, \ldots, m\}$; moreover, each feature in $S^*_1 \cup S^*_2$ is connected to other features in the set. Note that, with our definition of the optimal $(S_1, S_2)$, the minimal gap between $\mathrm{mTE}(S^*_1, S^*_2)$ and any other value $\mathrm{mTE}(S_1, S_2)$ with smaller $|S_1 \cup S_2|$ is larger than zero. Denote the minimal gap as $\delta$ and take $\lambda < \frac{\delta}{|S^*_1 \cup S^*_2|}$; then for these other solutions we have

$$-\mathrm{mTE}(S_1, S_2) + \lambda|S_1 \cup S_2| \geq -\mathrm{mTE}(S^*_1, S^*_2) + \delta + \lambda|S_1 \cup S_2| \geq -\mathrm{mTE}(S^*_1, S^*_2) + \delta > -\mathrm{mTE}(S^*_1, S^*_2) + \lambda|S^*_1 \cup S^*_2|. \tag{19}$$

Meanwhile, for the $(S_1, S_2)$ with larger union size, with the definition of mTE we have

$$-\mathrm{mTE}(S_1, S_2) + \lambda|S_1 \cup S_2| \geq -\mathrm{mTE}(S^*_1, S^*_2) + \lambda|S_1 \cup S_2| = -\mathrm{mTE}(S^*_1, S^*_2) + \lambda(|S_1 \cup S_2| - |S^*_1 \cup S^*_2|) + \lambda|S^*_1 \cup S^*_2| > -\mathrm{mTE}(S^*_1, S^*_2) + \lambda|S^*_1 \cup S^*_2|. \tag{20}$$

Therefore, when taking $\lambda \in (0, \frac{\delta}{|S^*_1 \cup S^*_2|})$, the desired optimal $(S_1, S_2)$ under mTE is the optimal output of the constructed optimization problem.
A.5 PROOF OF THEOREM 3.4.
Theorem 3.4. Assume A1-A6 hold and $f, g, h$ are one-to-one Gaussian embeddings as described above. For the optimal solution of (5), denote a sample of the stochastic gates by $(T^1, T^2)$ and denote the ground-truth interacting feature set as $S$; then there exist $\lambda_1, \lambda_2 > 0$ for (5) such that, as $n \to \infty$,

$$\forall i \in \{0, 1\}, \quad \mathbb{P}(B_i \subseteq S) \xrightarrow{a.s.} 1, \quad \text{where } B_i := \{d \mid T^1_d > 0,\, T^2_d = i\}.$$
Proof. In the following proof, for simplicity we denote $\tilde{x}_{S_1} = x \odot T^1 \odot T^2$ and $\tilde{x}_{S_2} = x \odot T^1 \odot (1 - T^2)$.

Step 1. Given that $f, g, h$ project the input distributions into joint Gaussian distributions with fixed dimensionality, by the convergence of Gaussian covariance matrices we have:

$$\hat{\Sigma}(f(\tilde{x}^i_{S_1}), h(\tilde{x}^{N(i)}_{S_2})) = \frac{1}{n} \sum_{i=1}^{n} [f(\tilde{x}^i_{S_1}); h(\tilde{x}^{N(i)}_{S_2})][f(\tilde{x}^i_{S_1}); h(\tilde{x}^{N(i)}_{S_2})]^T \xrightarrow{a.s.} \Sigma_{f(\tilde{x}^i_{S_1}), h(\tilde{x}^{N(i)}_{S_2})};$$

$$\hat{\Sigma}(f(\tilde{x}^i_{S_1}), g(\tilde{x}^i_{S_2})) = \frac{1}{n} \sum_{i=1}^{n} [f(\tilde{x}^i_{S_1}); g(\tilde{x}^i_{S_2})][f(\tilde{x}^i_{S_1}); g(\tilde{x}^i_{S_2})]^T \xrightarrow{a.s.} \Sigma_{f(\tilde{x}^i_{S_1}), g(\tilde{x}^i_{S_2})}. \tag{21}$$

As, in the Gaussian case, the mutual information between jointly Gaussian random variables is a function of the covariance matrix, we have

$$\hat{I}(f(\tilde{x}^i_{S_1}); h(\tilde{x}^{N(i)}_{S_2})) \xrightarrow{a.s.} I(f(\tilde{x}^i_{S_1}); h(\tilde{x}^{N(i)}_{S_2})) = I(\tilde{x}^i_{S_1}; \tilde{x}^{N(i)}_{S_2});$$
$$\hat{I}(f(\tilde{x}^i_{S_1}); g(\tilde{x}^i_{S_2})) \xrightarrow{a.s.} I(f(\tilde{x}^i_{S_1}); g(\tilde{x}^i_{S_2})) = I(\tilde{x}^i_{S_1}; \tilde{x}^i_{S_2});$$
$$\mathbb{P}\big( \lim_{N \to \infty} \text{Empirical mTE} = \mathrm{mTE} \big) = 1. \tag{22}$$
Step 2. Importantly, in our formulation (5), $T^1, T^2$ are sampled once per epoch, meaning they are fixed across features for computing mTE. Further note that $\sum_{d=1}^{p} \mathbb{P}(T^1_d > 0) = \mathbb{E}\|T^1\|_0$ and $\sum_{d=1}^{p} \mathbb{P}(T^2_d \in (0,1)) = \mathbb{E}\|1_{T^2 \in (0,1)}\|_0$. This means, denoting the value of (5) as $L$, we have

$$L \xrightarrow{a.s.} \mathbb{E}_{T^1, T^2}\left[ -\mathrm{mTE}(1_{T^1 \odot T^2 > 0}, 1_{T^1 \odot (1-T^2) > 0}) + \lambda_1 \|T^1\|_0 + \lambda_2 \|1_{T^2 \in (0,1)}\|_0 \right] \geq \min_{T^1, T^2} -\mathrm{mTE}(1_{T^1 \odot T^2 > 0}, 1_{T^1 \odot (1-T^2) > 0}) + \lambda_1 \|T^1\|_0 + \lambda_2 \|1_{T^2 \in (0,1)}\|_0. \tag{23}$$

Note that, with Step 1 of the proof of Theorem 2.4, for any $T^1$ we have

$$-\mathrm{mTE}(1_{T^1 \odot T^2 > 0}, 1_{T^1 \odot (1-T^2) > 0}) + \lambda_1 \|T^1\|_0 + \lambda_2 \|1_{T^2 \in (0,1)}\|_0 \geq -\mathrm{mTE}(1_{T^1 \odot T^2 > 0}, 1_{T^1 \odot (1-T^2) > 0}) + \lambda_1 \|T^1\|_0, \tag{24}$$

where equality is attained when $\forall d$, $\mathbb{P}(T^2_d = 1) = 0/1$ and $\mathbb{P}(T^2_d = 0) = 1/0$. In this case,

$$\|T^1\|_0 = \|T^1 \odot T^2\|_0 + \|T^1 \odot (1 - T^2)\|_0.$$

Applying Theorem 3.1, we have, for $\lambda_1 = \lambda$ in Theorem 3.1,

$$\min_{T^1, T^2} -\mathrm{mTE}(1_{T^1 \odot T^2 > 0}, 1_{T^1 \odot (1-T^2) > 0}) + \lambda_1 \|T^1\|_0 + \lambda_2 \|1_{T^2 \in (0,1)}\|_0 = -\mathrm{mTE}(S^*_1, S^*_2) + \lambda |S^*_1 \cup S^*_2| := L^*. \tag{25}$$

Here $(S^*_1, S^*_2)$ satisfies the properties described by Theorem 3.1. Note the minimizer may not be unique; denote the set containing all minimizers as $\{(S^*_1, S^*_2)\}$. Then the equality in (23) holds if and only if $\mathbb{P}\big( (1_{T^1 \odot T^2 > 0}, 1_{T^1 \odot (1-T^2) > 0}) \in \{(S^*_1, S^*_2)\} \big) = 1$. Further noting $\forall d$, $\mathbb{P}(T^2_d = 1) = 0/1$, $\mathbb{P}(T^2_d = 0) = 1/0$, and that our analysis above holds as $n \to \infty$ with probability 1 by a.s. convergence, we finally have

$$\mathbb{P}\big( \lim_{N \to \infty} \mathbb{P}(B_1 \subseteq S) = 1 \big) = 1; \qquad \mathbb{P}\big( \lim_{N \to \infty} \mathbb{P}(B_0 \subseteq S) = 1 \big) = 1.$$
B GATE INITIALIZATION
Our proposed initialization scheme is based on an analysis of the linear case. Assume

$$f(X_{S_1}) = Xa, \qquad g(X_{S_2}) = Xb,$$

where $a, b \in \mathbb{R}^p$ represent two feature loadings. Then:

1. $a, b$ should be non-overlapping, therefore we expect $|a^T b|$ to be small.
2. We should have $f(X) \approx Wg(X)$ to maximize the mTE.

The constraint can be formulated as a regression problem $WXb = Xa$; therefore a natural solution is given by $a = X^{\dagger}WXb = (X^TX)^{-1}X^TWXb$. In this case, $\|a^T b\| = \|b^T (X^TX)^{-1}X^TWX b\| = \|b\|^2_{(X^TX)^{-1}X^TWX}$. Given $b$ is normalized, it can be shown that the optimal $b$ corresponds to the eigenvector with the smallest absolute eigenvalue of the matrix $(X^TX)^{-1}X^TWX$.

After obtaining $a, b$, we select a quantile threshold over $a/(a+b)$ to initialize the second stochastic gate layer. The first stochastic gate layer is initialized with uniform weights.
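A sketch of this initialization in NumPy (a direct transcription of the formulas above; the pseudo-inverse, the score smoothing, and the quantile threshold value are our choices):

```python
import numpy as np

def init_gates(X, W, quantile=0.5):
    """Heuristic gate initialization following Appendix B.

    X: (n, p) data matrix; W: (n, n) forward operator.
    Returns loadings a, b and a binary initialization for the second gate layer.
    """
    M = np.linalg.pinv(X.T @ X) @ (X.T @ (W @ X))        # (X^T X)^{-1} X^T W X
    eigvals, eigvecs = np.linalg.eig(M)
    b = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])  # least-|eigenvalue| eigenvector
    a = M @ b                                            # a = (X^T X)^{-1} X^T W X b
    score = np.abs(a) / (np.abs(a) + np.abs(b) + 1e-12)  # smoothed version of a/(a+b)
    t2_init = (score > np.quantile(score, quantile)).astype(float)
    return a, b, t2_init
```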
C EXPERIMENTAL DETAILS
C.1 TIME-SERIES BENCHMARKING STUDY
In this study, the causal processes are simulated with the Python package Tigramite. Among the 100 features in total, there are 6 interacting features {1, 2, 3, 4, 5, 6}. The causal links are: 1->2 with time lag 2, 2->3 with time lag 1, 5->4 with time lag 1, 1->5 with time lag 1, 3->6 with time lag 3. These features also have autocorrelations with time lags ranging from 1 to 3. There is also a latent confounder modeled by Tigramite interacting with feature 0 and feature 2. In the case of a strong latent process, the latent confounder also has effects on 43 other features. All other features (93/50) not mentioned above are nuisance features with white-noise dynamics. The forward operator is defined by a 5-neighbor lower-triangular matrix.
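For illustration, a sketch of how such a process could be generated with Tigramite's toy-model utilities (our reconstruction of the setup; the coefficients and the linear link function are placeholders, and the latent confounder is omitted for brevity):

```python
from tigramite.toymodels import structural_causal_processes as toys

lin = lambda x: x  # replace with a nonlinear function for the nonlinear settings

links = {j: [] for j in range(100)}     # nuisance features: pure white noise
for j in [1, 2, 3, 4, 5, 6]:            # interacting features: add autocorrelation
    links[j] = [((j, -1), 0.4, lin)]    # lag-1 autocorrelation (placeholder coefficient)
links[2].append(((1, -2), 0.5, lin))    # 1 -> 2, lag 2
links[3].append(((2, -1), 0.5, lin))    # 2 -> 3, lag 1
links[4].append(((5, -1), 0.5, lin))    # 5 -> 4, lag 1
links[5].append(((1, -1), 0.5, lin))    # 1 -> 5, lag 1
links[6].append(((3, -3), 0.5, lin))    # 3 -> 6, lag 3

data, nonstationary = toys.structural_causal_process(links, T=1000, seed=0)
```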
C.1.1 ALGORITHM IMPLEMENTATION
Unless otherwise mentioned, the default settings of the algorithms are used throughout.
• VAR-LINGAM. The VAR-LINGAM algorithm is implemented in the Python package LINGAM, available at https://github.com/cdt15/lingam. VAR-LINGAM gives a weighted matrix as output; therefore, in our benchmarking study, we choose the features corresponding to the most significant edges, with the number matching the sparsity level.
• PCMCI. The PCMCI algorithm is implemented in the Python package Tigramite, which gives a weighted matrix as output. We choose the features corresponding to the most significant edges, with the number matching the sparsity level.
• GVAR. The GVAR algorithm is implemented at https://github.com/i6092467/GVAR. The sparsity parameter is set to 1. We use the stable training option in GVAR, which trains on the first and second halves of the time series respectively to optimize the edge selection sparsity level and then trains on the whole time series, giving a binary output so that no threshold selection is needed.
• GrID-net. The GrID-net algorithm is implemented at https://github.com/alexw16/gridnet. The parameter set order=5, hidden_layer_size=10, end_epoch=50, batch_size=50, lmbd=1 is used throughout our study. After training finishes, we choose the features corresponding to the most significant edges, with the number matching the sparsity level.
• DCM, NGM. The two algorithms are both implemented at https://github.com/alexisbellot/Graphical-modelling-continuous-time. For DCM, the default setting is used, and we use hidden dim = 10 for NGM. After training finishes, we choose the features corresponding to the most significant edges, with the number matching the sparsity level.
• GEASS. We use the same training parameters in all time-series settings, with the key sparsity regularization parameter λ1 set to 0.04/0.05 based on a validation set; the remaining parameter settings are the defaults.
C.1.2 SCALABILITY ANALYSIS
We test the running times of PCMCI, GVAR, GrID-net, NGM, GEASS, and GEASS+LPCMCI with settings consistent with those described in the section above (LPCMCI's setting is consistent with PCMCI's). We use the same data generation pipeline and vary the total feature number over [100, 200, 400, 800, 1600].
C.2 SIMULATED SPATIAL OMICS DATA BENCHMARKING STUDY
In this study, the spatial omics data is simulated with the Python package Scsim (Kotliar et al., 2019). 1000 genes are simulated in total, of which 990 genes are expressed in a cell-type-specific manner. The remaining 10 genes each have a functional relationship (linear/nonlinear) with one cell-type-specific gene, plus a noise term, in order to model cell-type-specific interactions. The data is then normalized and log-transformed following the standard Scanpy pipeline (Wolf et al., 2018). The forward operator is defined by a 4-neighbor adjacency matrix.
C.2.1 ALGORITHM IMPLEMENTATION
Unless otherwise mentioned, the default settings of the algorithms are used throughout.
• Lasso Granger. The Lasso algorithm is implemented with Scipy, with α tuned (0.12) to match the sparsity level.
• NCEM. NCEM (Linear) is a linear graph neural network, which in the grid case corresponds to a standard linear regression based on the neighbors and the cell type label. Based on the original work, we implemented an equivalent version via Lasso regression with α = 0.019 to match the sparsity level.
• GEASS. We use the same training parameters in all settings, with the key sparsity regularization parameter λ1 set to 0.02 based on a validation set, and the latent dimension number set to 64.
• TE. To give a fair comparison, we use the same architecture as GEASS except that the loss function is changed. We use the same training parameters in all settings, with the key sparsity regularization parameter λ1 set to 0.05 based on a validation set, and the latent dimension number set to 64, consistent with GEASS.
C.3 SCRNA-SEQ PANCREAS TRAJECTORY
The data preprocessing is consistent with the scVelo tutorial: https://scvelo.readthedocs.io/VelocityBasics/ (Bergen et al., 2020). The parameter set is λ1 = 0.06, λ2 = 0.1. Here, because the gene regulatory network is fully connected and activated in cascade along the developmental trajectory, we consider the opposite initialization, with $b$ set to the eigenvector corresponding to the largest eigenvalue of the matrix $(X^TX)^{-1}X^TWX$.
C.4 MERFISH SPATIAL TRANSCRIPTOMICS DATA
The data is downloaded from Dryad and preprocessed with the standard Scanpy pipeline (Wolf et al., 2018): we first normalize and log-transform the data with the default Scanpy functions, then select 1000 highly variable genes with the default Scanpy functions. The forward operator is defined by a 5-neighbor adjacency matrix. The GEASS parameter set is consistent with the one used in the spatial omics benchmarking.
D ADDITIONAL EXPERIMENTAL RESULTS | 1. What is the focus and contribution of the paper on causal discovery in data?
2. What are the strengths of the proposed approach, particularly in terms of its applicability to various datasets?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or weaknesses in the paper, such as limitations in the proposed method or unclear aspects of the theoretical analysis? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors introduce a framework for causal discovery in data using two steps: causal feature selection in general graph-structured data, and causal graph identification. To do the former, the authors define a multi-dimensional transfer entropy maximization metric and devise ways to optimise it. They test the framework on single-cell and spatial data.
Strengths And Weaknesses
The problem of detecting causal features in spatial transcriptomics is an important one. The authors propose a nice framework and benchmark it on a variety of datasets.
Clarity, Quality, Novelty And Reproducibility
It is quite well written. |
ICLR | Title
GEASS: Neural causal feature selection for high-dimensional biological data
Abstract
Identifying nonlinear causal relationships in high-dimensional biological data is an important task. However, current neural network based causality detection approaches for such data suffer from poor interpretability and cannot scale well to the high dimensional regime. Here we present GEASS (Granger fEAture Selection of Spatiotemporal data), which identifies sparse Granger causal interacting features of high dimensional spatiotemporal data by a single neural network. GEASS maximizes sparsity-regularized modified transfer entropy with a theoretical guarantee of recovering features with spatial/temporal Granger causal relationships. The sparsity regularization is achieved by a novel combinatorial stochastic gate layer to select sparse non-overlapping feature subsets. We demonstrate the efficacy of GEASS in several synthetic datasets and real biological data from single-cell RNA sequencing and spatial transcriptomics.
1 INTRODUCTION
Advances in single-cell omics research enable full characterizations of high-dimensional gene dynamics in biological systems on either a temporal or spatial scale. An example of the temporal case is single-cell RNA sequencing (scRNA-seq) trajectories, where cells are sampled from a dynamical biological process, sequenced, and ordered based on either real sampling time or inferred pseudo-time (Cannoodt et al., 2016; Saelens et al., 2019). Gene dynamics along the specified cell order encode information about causal regulation in the underlying biological process. An example of the spatial case is single-cell level spatial transcriptomics (e.g., SeqFISH+ (Eng et al., 2019), Merfish (Fang et al., 2022)), in which cells from a tissue slice are sequenced with their spatial coordinates preserved (Moses and Pachter, 2022; Rao et al., 2021; Palla et al., 2022). Spatial profiling allows investigations of the cellular interplay, corresponding to conditional gene expression changes caused by neighborhood phenotypic states. However, despite its potential significance, data-driven causal discovery for such data remains largely unexplored, especially for spatial omics data.
Identification of causal regulatory patterns in such data can be reformulated as the general task of causal feature selection in observational data with intrinsic structure, e.g., spatial or temporal data. Identification of causal interactions in time series has led to valuable findings in multiple disciplines, including but not limited to economics, climate science, and biology (Hoover, 2006; Kamiński et al., 2001; Runge et al., 2019a).
Learning directed causal relationships in temporal/spatial data is feasible as time and space both induce asymmetric dependencies. In the case of time-series data, a feature in the future cannot have effect on past values of other features. For spatial data, a similar definition of causal dependency can be established (Herrera Gómez et al., 2014).
The concept of Granger causality was proposed in order to uncover such asymmetric causal dependencies (Granger, 1969; Shojaie and Fox, 2022). In time-series data, this translates to identifying one variable's causal relationship with other variables based on how well the historical observations of the other variables can predict the variable's present value. The application of Granger causality in a spatial context corresponds to detecting significant relationships between neighboring observations of other variables and the specified variable (Mielke et al., 2020), which is a key insight used in recent works aimed at discovering cellular interaction patterns in spatial omics data (Fischer et al., 2021; Valdés-Sosa et al., 18).
In the nonlinear regime, information-theoretic measures such as directed information, transfer entropy (Schreiber, 2000), and partial transfer entropy (Staniek and Lehnertz, 2008), are used as a counterpart of linear Granger causality. Moreover, some works consider modeling conditional independence (CI) in time-series data to identify the underlying causal graph (Entner and Hoyer, 2010; Malinsky and Spirtes, 2018; Moneta et al., 2011; Runge et al., 2019a; Pfister et al., 2019; Mastakouri et al., 2021). Two examples are VarLINGAM (Hyvärinen et al., 2010) and PCMCI (Runge et al., 2019b), which are generalizations of LINGAM (Shimizu et al., 2006) and PC (Spirtes et al., 2000) respectively. Finally, multiple recent works have proposed to use neural network approaches to model the nonlinear Granger causality, including MLP, LSTM, and neural-ODE based approaches, resulting in improved prediction power for nonlinear time-series dynamics (Li et al., 2017; Tank et al., 2021; Nauta et al., 2019; Yin and Barucca, 2022; Bellot et al., 2021).
Despite the success of these methods in various systems of interest, multiple challenges limit their use in high-dimensional biological datasets.
• Although linear methods (LINGAM, linear Granger causality) have succeeded in various settings and can potentially scale to high feature numbers, these methods may completely fail when the feature dependency in data is highly complex and nonlinear.
• As the number of conditional independencies generally scales exponentially, or at least polynomially, with the feature size, applying causal discovery methods based on CI tests to high-dimensional data is not realistic. In contrast, Granger-causality based methods fit a prediction model for each feature in the data; the time complexity of solving the stacked prediction models for all features is polynomial in the feature size.
• In previous methods, the number of causal edges between features is assumed to be sparse (edge sparsity) to maximize interpretability of the identified causal graph. However, in biological data, there exists a large proportion of nuisance features. Also, one functional gene may activate a large number of downstream genes in neighboring cells. Sparsifying the number of interacting features (feature sparsity) has the potential to improve causal discovery in biological systems, which remains to be explored.
• While a large number of methods are designed for causal discovery in time-series data, only a limited number of existing works aim at causal discovery in general graph-structured data. Time-series based methods cannot be directly applied to data with multi-branch trajectory dynamics or spatial structures.
Our contribution. In this work, we present GEASS (Granger fEAture Selection of Spatiotemporal data), which identifies causally interacting features of high-dimensional temporal/spatial data with a single neural network. GEASS considers the aforementioned feature sparsity instead of edge sparsity, and thus selects the most significant interacting features for downstream causal discovery. Our contributions are three-fold.
1. Instead of direct causal discovery in data, we formulate the task as two steps: causal feature selection and causal graph identification. We provide a novel solution to the causal feature selection problem in general graph-structured data by the use of modified transfer entropy maximization with theoretical guarantees.
2. In order to solve our proposed optimization problem, we design a novel combinatorial stochastic gate layer to select non-overlapping sparse feature sets with a newly designed initialization procedure.
3. We demonstrate the power of our method by benchmarking it on both temporal data and spatial data of multiple settings. Our method gives accurate and robust causal feature identification and reveals novel biology in real datasets.
1.1 RELATED WORKS
Neural Granger causality. Despite the large body of work based on linear Granger causal discovery, neural Granger causality still remains an active area of research. Various neural network architectures, such as MLP, sequential model, and attention-based architecture (Tank et al., 2021; Nauta et al., 2019; Khanna and Tan, 2019; Sun et al., 2021), have been proposed for nonlinear Granger causality
discovery. A recent work uses the information of a proxy variable to learn latent confounders for Granger causality via a dual-decoder neural network (Yin and Barucca, 2022). One recent biology-oriented work extends the definition of Granger causality to DAGs, proposing a linear graph neural network to model the underlying Granger causality (Wu et al., 2021). Meanwhile, a neural-ODE based approach has been proposed to reformulate the Granger causality problem in terms of local dependence graph identification (Bellot et al., 2021).
Causal feature selection. The task of causal feature selection has been considered by multiple groups. Most works in this category use constraint-based methods to identify each feature's causal relations with all other features, equivalent to identifying the whole causal graph structure; these include VARLINGAM, tsFCI, SVAR-FCI, and PCMCI (Hyvärinen et al., 2010; Entner and Hoyer, 2010; Malinsky and Spirtes, 2018; Moneta et al., 2011; Runge et al., 2019a). Meanwhile, seqICP focuses on identifying the direct or indirect cause of each feature, assuming sufficient interventions in the dataset (Pfister et al., 2019). SyPI tackles the causal feature selection problem without the assumption of causal sufficiency and avoids issues in multi-hypothesis testing by construction of the correct conditioning set (Mastakouri et al., 2021). Finally, Guo et al. (2022) consider a dual correction for causal feature selection to control both false positive and false negative rates.
2 MODIFIED TRANSFER ENTROPY (MTE)
In order to address the issue that a neural network may overfit each prediction model and therefore overestimate the number of causal interactions, we need a prediction-free loss function that directly indicates causal significance. In this work, we propose a novel function, modified transfer entropy (mTE), based on transfer entropy (Schreiber, 2000), as a metric of causal interaction significance.
Transfer entropy is an information-theoretic measure of cross dependence (Schreiber, 2000). Consider two vectorized time series $x_t$ and $y_t$ for $t \in \{1, ..., T\}$. In a Markovian model, the transfer entropy from $x$ to $y$ at time $t$ is defined as the mutual information between the present value $x_t$ and the future value $y_{t+1}$, conditioning on $y_t$ to eliminate possible autocorrelation: $\mathrm{TE}_t(x, y) = I(x_t; y_{t+1} \mid y_t)$. By the use of mutual information, transfer entropy is able to model general nonlinear dependencies beyond linear Granger causality. In this work, we further consider the generalization of transfer entropy to graph-structured $x^i$ and $y^i$, where $i$ denotes a vertex of the data graph $G = (V, E)$:
$$\mathrm{TE}_i(x, y) := I\big(x^i;\, y^{N(i)} \mid y^i\big), \quad \text{where } N(i) := \{j \mid (i, j) \in E\}. \tag{1}$$
Note that the graph can be either directed (the time-series case) or undirected (the spatial case). In this study, we introduce a novel function, modified transfer entropy, that enables the application of bivariate transfer entropy for causal discovery in high-dimensional data. Our key insight is to consider two feature subsets of the dataset that maximize the mutual information difference:
Definition 2.1. Let $X = [x^1 x^2 \dots x^n] \in \mathbb{R}^{p \times n}$ be a matrix containing a graph-structured vector series $x^i$, with $i$ as vertices of the data graph $G = (V, E)$. Let $S_1$ and $S_2$ be two subsets of $\{1, 2, ..., p\}$. The modified transfer entropy $\mathrm{mTE}_i(S_1, S_2)$ and its maximum $\mathrm{mTE}^*_i$ are defined by
$$\mathrm{mTE}_i(S_1, S_2) := I\big(x^i_{S_1};\, x^{N(i)}_{S_2}\big) - I\big(x^i_{S_1};\, x^i_{S_2}\big); \qquad \mathrm{mTE}^*_i := \max_{S_1, S_2} \mathrm{mTE}_i(S_1, S_2). \tag{2}$$
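For intuition, the quantity in Definition 2.1 can be estimated with a simple Gaussian plug-in, pooling over vertices under the stationarity assumption A4. The sketch below is ours, not the authors' implementation, and stores data row-per-vertex (the transpose of the $X$ in Definition 2.1):

```python
import numpy as np

def gaussian_mi(a, b):
    """Plug-in Gaussian estimate of I(a; b) from samples a: (n, da), b: (n, db)."""
    def logdet_cov(m):
        c = np.atleast_2d(np.cov(m, rowvar=False))
        return np.linalg.slogdet(c + 1e-6 * np.eye(c.shape[0]))[1]
    return 0.5 * (logdet_cov(a) + logdet_cov(b) - logdet_cov(np.hstack([a, b])))

def mte_hat(X, W, S1, S2):
    """X: (n, p) data, one row per vertex; W: (n, n) neighbor operator with
    (W @ X)[i] approximating x^{N(i)}; S1, S2: lists of feature indices."""
    return (gaussian_mi(X[:, S1], (W @ X)[:, S2])
            - gaussian_mi(X[:, S1], X[:, S2]))
```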
Note that the mTE function requires strictly stronger dependence than the analogously defined transfer entropy $\mathrm{TE}_i(S_1, S_2)$, as shown by the proposition below (the proof can be found in Appendix A.1):
Proposition 2.2. $\forall S_1, S_2 \subset \{1, ..., p\}$: $\mathrm{mTE}_i(S_1, S_2) > 0 \Rightarrow \mathrm{TE}_i(S_1, S_2) > 0$.
Let $(S^*_1, S^*_2)$ be a maximizer with the smallest size of $|S_1 \cup S_2|$, and denote $S^* := S^*_1 \cup S^*_2$ (note that $(S^*_1, S^*_2)$ may not be unique). Under the mild assumptions listed below, we are able to provide a theoretical justification for mTE maximization in the time-series setting (Theorem 2.4). A proof can be found in Appendix A.3.
Assumptions:
A1-A3 Causal Markov assumption, faithfulness, and causal sufficiency for the causal graph.
A4 Ergodicity and stationarity of the stochastic process defined by the causal graph, meaning that ensemble averages equal time averages, and that the functional relationships encoded by the causal graph do not change over time (or location). This also implies that $\mathrm{mTE}_i(S_1, S_2)$ is constant across $i$.
A5 DAG causal graph: We assume $X^T = [t_1, ..., t_m, u_{m+1}, ..., u_p]$ up to a permutation, where the $t_i$ are causally interacting features forming a directed acyclic graph (DAG), and the $u_k$ are nuisance features that may correlate with the $t_i$. An illustration based on the time-series setting can be seen in Figure 1.
A6 Interaction regularity: Let $A, B$ be two disjoint feature sets such that $A$ is a subset of the parent features of $B$, or $B$ is a subset of the child features of $A$. Then, conditioning on any other feature set $C$ such that $I(A^i, B^{N(i)} \mid C^i) > 0$ and $I(A^i, B^{N(i)} \mid C^{N(i)}) > 0$, we have
$$\forall i, \quad \min\big\{ I(A^i, B^{N(i)} \mid C^i),\; I(A^i, B^{N(i)} \mid C^{N(i)}) \big\} > I(A^i, B^i \mid C^i). \tag{3}$$
Remark 2.3. Here, our only assumption beyond the prevailing literature (Pearl, 2009; Spirtes et al., 2000) is A6, which aims to filter out features with spurious causations and to regularize the algorithmic complexity of causal interactions, thus enabling information-theoretic analysis. A6 has direct connections with the concept of conditional transfer entropy (Faes et al., 2016; Shahsavari Baboukani et al., 2020); further discussion can be found in Appendix A.2.
Theorem 2.4. Given A1-A6, $S^* := S^*_1 \cup S^*_2 \subseteq \{1, ..., m\}$ (the index set of true interacting features described in A5). Moreover, each feature in $S^*$ is connected to other features in the set $S^*$.
3 NEURAL OPTIMIZATION OF MODIFIED TRANSFER ENTROPY
With Theorem 3.1 stated below, we are able to give a theoretical guarantee for the $l_0$-penalized optimization of mTE. A proof can be found in Appendix A.4. Here $\odot$ stands for the Hadamard product.
Theorem 3.1. Assume A1-A6 hold and $f, g, h$ define one-to-one mappings on $X \odot 1_{S_1}$ (for $f$) or $X \odot 1_{S_2}$ (for $g, h$). Then $\exists \lambda > 0$ such that, for (4), any solution satisfies $S^* := S^*_1 \cup S^*_2 \subseteq \{1, ..., m\}$. Moreover, each feature in $S^*$ is connected to other features in the set.
$$\min_{f,g,h,S_1,S_2} \; -\Big( I\big(f(x^i \odot 1_{S_1});\, h(x^{N(i)} \odot 1_{S_2})\big) - I\big(f(x^i \odot 1_{S_1});\, g(x^i \odot 1_{S_2})\big) \Big) + \lambda |S_1 \cup S_2| \tag{4}$$
Remark 3.2. The estimation of mutual information by various approaches is an active field in itself (Belghazi et al., 2018; Hjelm et al., 2018; McAllester and Stratos, 2020; Zhang et al., 2019). In contrast, with this theorem we show that an accurate estimation of the transfer entropy (such as in Zhang et al. (2019)) may not be needed, as optimizing the upper bound of the modified transfer entropy automatically yields the best feature subset selection.
Remark 3.3. Our theoretical guarantee is derived for one-to-one embeddings $f, g, h$. In a neural network, injectivity may be encouraged by various architectural designs yet may not hold perfectly. Empirically, we have found that the optimization of mTE is robust to imperfect embedding injectivity, compared with the original transfer entropy. This is due to the stricter design of the mTE function (Proposition 2.2) and is further illustrated by our experiments in the next section.
Given Theorem 3.1, we are able to construct a neural network for optimizing the proposed loss function. However, the estimation of mutual information is not directly tractable. Because mutual information is invariant under one-to-one transforms, we restrict the function class of $f, g, h$ in the optimization problem (4) to flows transforming the original feature distributions into Gaussian distributions with fixed dimensionality. We can then formulate the target for neural network optimization via the explicit formula for the mutual information between Gaussians:
$$I(X, Y) = \frac{1}{2} \log \frac{\det \Sigma_X \det \Sigma_Y}{\det \Sigma_{[X,Y]}}.$$
The Gaussian regularization can be applied either by penalizing the discrepancy between the embedding distributions $[f, g, h]$ and Gaussian distributions, or by an adversarial training procedure. In this work, we implement the former approach, constructing the means and covariance matrices of the concatenated embedding as learnable parameters and minimizing the cross entropy between the target distributions and the parametrized Gaussian distributions.
3.1 COMBINATORIAL STOCHASTIC GATES
In order to solve the optimization problem, we need to learn two sparse sets $S_1, S_2$, which involves combinatorial optimization, making the task impractical for high-dimensional data. To overcome this issue, we use a stochastic gate based approach (Yamada et al., 2020; Lindenbaum et al., 2021), which performs a probabilistic relaxation of deterministic $l_0$ norms. In order to explicitly construct $S_1$ and $S_2$ by stochastic gates, we define two random vectors $T^1$ and $T^2$ ranging in $[0, 1]$, with lengths equal to the feature number and with each element independently sampled from the STG distribution defined as $T^i_d = \max(0, \min(1, \mu^i_d + \epsilon^i_d))$, where $\epsilon^i_d \sim N(0, \sigma_i^2)$ is sampled i.i.d. with fixed variance and $\mu^i_d$ is a parameter trainable by reparametrization (Miller et al., 2017; Figurnov et al., 2018).
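A minimal PyTorch sketch of one such gate (an illustration of the hard-sigmoid relaxation with reparametrized noise; the parameter initialization is our assumption, not the released code):

```python
import torch

class StochasticGate(torch.nn.Module):
    def __init__(self, p, sigma=0.5):
        super().__init__()
        self.mu = torch.nn.Parameter(0.5 * torch.ones(p))  # trainable mu_d
        self.sigma = sigma                                  # fixed noise scale

    def forward(self, x):
        # Reparametrized sample T_d = max(0, min(1, mu_d + eps_d))
        eps = torch.randn_like(self.mu) * self.sigma if self.training else 0.0
        gate = torch.clamp(self.mu + eps, 0.0, 1.0)
        return x * gate, gate
```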
The new loss function applying stochastic gates can be formulated as:
$$\mathbb{E}_{T^1, T^2} \Big[ -\big( \hat{I}(f(\tilde{X}_{S_1});\, h(W\tilde{X}_{S_2})) - \hat{I}(f(\tilde{X}_{S_1});\, g(\tilde{X}_{S_2})) \big) \Big] + \sum_{d=1}^{p} \big[ \lambda_1 P(T^1_d > 0) + \lambda_2 P(T^2_d \in (0, 1)) \big],$$
$$\text{s.t. } \tilde{X}_{S_1} = X \odot T^1 \odot T^2, \qquad \tilde{X}_{S_2} = X \odot T^1 \odot (1 - T^2). \tag{5}$$
Here $\hat{I}$ is defined as the empirical Gaussian mutual information $\hat{I}(X, Y) = \frac{1}{2} \log \frac{\det \hat{\Sigma}_X \det \hat{\Sigma}_Y}{\det \hat{\Sigma}_{[X,Y]}}$, and $W$ is defined as the graph diffusion operator, $Wx^i = x^{N(i)}$. In our construction, $T^1$ controls the sparsity of feature selection, while $T^2$ controls the expected overlap between $\tilde{X}_{S_1}$ and $\tilde{X}_{S_2}$. Denoting the Gaussian error function as $\mathrm{erf}(\cdot)$, the regularization term for the first layer is of the form:
$$\sum_{d=1}^{p} P(T^1_d > 0) = \sum_{d=1}^{p} \Big( \frac{1}{2} + \frac{1}{2} \mathrm{erf}\Big( \frac{\mu^1_d}{\sqrt{2}\sigma_1} \Big) \Big). \tag{6}$$
The regularization term for the second layer can be expressed as:
$$\sum_{d=1}^{p} P(T^2_d \in (0, 1)) = \sum_{d=1}^{p} \big[ P(T^2_d > 0) - P(T^2_d \geq 1) \big] = \frac{1}{2} \sum_{d=1}^{p} \Big( \mathrm{erf}\Big( \frac{\mu^2_d}{\sqrt{2}\sigma_2} \Big) - \mathrm{erf}\Big( \frac{\mu^2_d - 1}{\sqrt{2}\sigma_2} \Big) \Big). \tag{7}$$
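Both penalties admit the closed forms above; for the first gate, $P(T_d > 0) = P(\mu_d + \epsilon_d > 0) = \Phi(\mu_d/\sigma)$. A small sketch of the two regularizers (PyTorch; the function and variable names are ours):

```python
import math
import torch

SQRT2 = math.sqrt(2.0)

def gate_open_prob(mu, sigma):
    """Eq. (6): sum_d P(T^1_d > 0) = sum_d Phi(mu_d / sigma)."""
    return (0.5 + 0.5 * torch.erf(mu / (SQRT2 * sigma))).sum()

def gate_fractional_prob(mu, sigma):
    """Eq. (7): sum_d P(T^2_d in (0, 1))."""
    return 0.5 * (torch.erf(mu / (SQRT2 * sigma))
                  - torch.erf((mu - 1.0) / (SQRT2 * sigma))).sum()
```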
We are able to show strong consistency of our stochastic-gate based feature selection scheme by the theorem below (a proof can be found in Appendix A.5):
Theorem 3.4. Assume A1-A6 hold and $f, g, h$ are one-to-one Gaussian embeddings as described above. For the optimal solution of (5), denote a sample of the stochastic gates by $T^1, T^2$ and the ground-truth interacting feature set by $S$. Then there exist $\lambda_1, \lambda_2 > 0$ for (5) such that, as $n \to \infty$,
$$\forall i \in \{0, 1\}, \quad P(B_i \subseteq S) \xrightarrow{a.s.} 1, \quad \text{where } B_i := \{d \mid T^1_d > 0,\, T^2_d = i\}. \tag{8}$$
In practice, we have also observed that the method's solution depends strongly on the stochastic gate initialization. We therefore provide a heuristic initialization scheme that shows superior empirical performance. Details of the initialization scheme can be found in Appendix B.
3.2 PROPOSED NETWORK ARCHITECTURE
Our proposed network architecture is summarized in Figure 2. For an input dataset $X \in \mathbb{R}^{p \times n}$ and its corresponding graph adjacency matrix $A \in \mathbb{R}^{n \times n}$, we first pass each feature through two sequential stochastic gate layers $T^1, T^2$. The $l_0$ penalty is applied to the first STG layer, while the second STG layer is regularized with the 0-1 penalty, consistent with the descriptions in the previous section.
After gating each feature, denoting $\hat{T}^2_i = 1 - T^2_i$, we obtain two intermediate embeddings, $\tilde{X}_{S_1} = X \odot T^1 \odot T^2$ and $\tilde{X}_{S_2} = X \odot T^1 \odot \hat{T}^2$ respectively. These two embeddings are passed through MLP1 ($f$) and MLP2 ($g$) to generate Gaussian embeddings $f(\tilde{X}_{S_1}), g(\tilde{X}_{S_2})$ corresponding to (5). For the design of the function $h$, we consider two crucial elements: 1. an additional layer to aggregate the information from different nodes in $x^{N(i)}$; 2. the injectivity of the mappings $f, g, h$. Note that $f, h$ in (5) are automatically enforced to be injective on interacting features in order to maximize the first term of mTE, but $g$ is not. Therefore, our final design of $h$ is the composition of first applying $g$ (enforcing the injectivity of $g$), then a mean aggregation layer without self-loops, consistent with the GCN design (Kipf and Welling, 2016), implemented by multiplication with the adjacency matrix $A$, and finally another MLP layer (MLP3). Finally, we compute the negative empirical Gaussian mTE, $\hat{I}(f, g) - \hat{I}(f, h)$, and add the cross-entropy penalty between the concatenated embedding distribution and a learnable Gaussian distribution.
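To make the dataflow concrete, here is a condensed PyTorch sketch of this forward pass. It is our reading of Figure 2, not the released implementation; it reuses the StochasticGate and penalty helpers sketched above and omits the learnable-Gaussian cross-entropy term for brevity:

```python
import torch

def gaussian_mi_hat(a, b, eps=1e-4):
    """Empirical Gaussian MI between embedding batches a, b of shape (n, d)."""
    def logdet_cov(m):
        return torch.logdet(torch.cov(m.T) + eps * torch.eye(m.shape[1]))
    return 0.5 * (logdet_cov(a) + logdet_cov(b)
                  - logdet_cov(torch.cat([a, b], dim=1)))

def geass_loss(X, A, gate1, gate2, f, g, mlp3, lam1, lam2):
    """X: (n, p) data; A: (n, n) mean-aggregation matrix without self-loops;
    f, g, mlp3: MLPs; gate1, gate2: StochasticGate modules sketched above."""
    Xg, T1 = gate1(X)                 # X * T^1 (l0-penalized layer)
    XS1, T2 = gate2(Xg)               # X * T^1 * T^2
    XS2 = Xg * (1.0 - T2)             # X * T^1 * (1 - T^2)
    zf, zg = f(XS1), g(XS2)           # Gaussian embeddings (MLP1, MLP2)
    zh = mlp3(A @ zg)                 # h: aggregate neighbors, then MLP3
    mte = gaussian_mi_hat(zf, zh) - gaussian_mi_hat(zf, zg)
    reg = (lam1 * gate_open_prob(gate1.mu, gate1.sigma)
           + lam2 * gate_fractional_prob(gate2.mu, gate2.sigma))
    return -mte + reg                 # minimize: -mTE + sparsity penalties
```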
3.3 OUTPUT INTERPRETATION
Upon convergence, GEASS provides as outputs both the active features ($B_0 \cup B_1$) and the embeddings $(f, g, h)$ produced from the causally interacting features. In this paper, we emphasize the use of the identified interacting features $B_0 \cup B_1$. The embedding output $(f, g, h)$ may be complex and nonlinear, potentially requiring additional architectures to maximize its interpretability.
By the construction of GEASS, we obtain two separate sparse feature subsets: source features $B_1$ and sink features $B_0$. These features may be used as inputs to further proper causal analysis, such as LPCMCI (Gerhardus and Runge, 2020) for time-series data, which, despite its statistical power in depicting possible lags, identifying latent confounders, and allowing nonlinear tests, can only work on data with moderate feature sizes. These features may also be used in other machine learning models for improved model interpretability.
4 EXPERIMENTS
4.1 GAUSSIAN TIME-SERIES WITH POSSIBLE NONLINEARITY
In order to benchmark the method in time-series data, we consider two settings: 1. Minor effect of latent processes, with autocorrelation present; 2. Significant effect of latent processes, with autocorrelation present. Both settings are modeled by Gaussian structural processes with an underlying causal graph. Further details can be seen in Appendix C.1.
We test the false discovery rate (FDR) and F1 score between ground-truth interacting features and recovered features as two metrics for high-dimensional causal discovery. We compare GEASS with two categories of methods, namely conditional independence based (CI-based) methods and Granger causality based (GC-based) methods. The first category includes VAR-LINGAM (Hyvärinen et al., 2010), PCMCI (Runge et al., 2019b), and LPCMCI (Gerhardus and Runge, 2020). Among them, despite its statistical power, LPCMCI is not included in our experiment as it failed to converge within the given time in our preliminary experiments. The second category includes a neural-network based generalized vector autoregression model, GVAR (Marcinkevičs and Vogt, 2021), and GrID-net, which generalizes the definition of Granger causality to directed acyclic graphs (DAGs) (Wu et al., 2021); moreover, we include two state-of-the-art approaches, DCM and NGM, implemented in (Bellot et al., 2021), which use neural ODEs to model nonlinear dependence graphs.
Table 1 shows our benchmarking results. Among the alternative methods, GVAR and GrID-net fail in all settings, as they are not designed for causal feature selection. VAR-LINGAM achieves high accuracy in linear settings but fails in nonlinear settings. In contrast, PCMCI fails when latent processes contribute to both true causally interacting features and nuisance features, creating spurious correlations. Empirically, we also observe that DCM and NGM achieve comparable performance when the dynamics are linear but perform worse in the nonlinear setting, where the dynamics are more irregular. Finally, GEASS consistently gives accurate causal feature identification (high F1) and a low false discovery rate (low FDR) in all settings considered.
Table 1: FDR and F1 score, mean (SD), across the four simulated time-series settings (lower FDR and higher F1 are better):

Method | Setting 1 FDR / F1 | Setting 2 FDR / F1 | Setting 3 FDR / F1 | Setting 4 FDR / F1
GVAR (GC) | .94 (.00) / .11 (.00) | .94 (.00) / .11 (.00) | .94 (.00) / .11 (.00) | .94 (.00) / .11 (.00)
GrID-net (GC) | 1.0 (.00) / .00 (.00) | 1.0 (.00) / .00 (.00) | 1.0 (.00) / .00 (.00) | 1.0 (.00) / .00 (.00)
DCM (GC) | .12 (.20) / .88 (.20) | .65 (.12) / .35 (.12) | .18 (.09) / .82 (.09) | .93 (.11) / .07 (.11)
NGM (GC) | .07 (.08) / .88 (.04) | .48 (.17) / .50 (.17) | .00 (.00) / .91 (.00) | .62 (.25) / .38 (.25)
GEASS (Ours) | .05 (.15) / .97 (.10) | .03 (.06) / .92 (.05) | .03 (.07) / .90 (.04) | .00 (.00) / .91 (.00)
Furthermore, we evaluate the different methods' scalability with respect to the feature size (experimental details can be found in Appendix C.1.2). As described before, we anticipate high computational complexity, with respect to the feature size, for both conditional independence based methods and neural network based methods, which prohibits their use for high-dimensional biological data analysis, where the feature number is typically on the scale of $10^3$-$10^4$. Meanwhile, GEASS constructs a single neural network with a parameter count approximately proportional to $p$, thus largely reducing the complexity in the high-dimensional regime. We benchmark PCMCI, GVAR, GrID-net, NGM, GEASS, and additionally the combination of GEASS with the downstream CI-test based causal graph identification method LPCMCI. Our experimental results show the superior time complexity of GEASS as well as GEASS+LPCMCI, consistent with our qualitative analysis (Figure 3).
4.2 SIMULATED SPATIAL OMICS DATA WITH CELL TYPE CONFOUNDER
In order to jointly consider spatial confounders and corresponding autocorrelation patterns that are potentially enriched in specific niches, we consider the case of spatial omics data, where the autocorrelation is modeled by a higher likelihood of same-type cells in the neighborhood, and the confounder (nuisance features) is modeled by a coherent shift of global gene expression for each cell type. We first simulate scRNA-seq datasets; each synthetic scRNA-seq dataset is then assigned to a fixed-size grid, with cell type labels simulated by an Ising model (see the sketch below). We then add artificial genes that are spatially correlated with a given gene set in neighboring cells. Finally, each dataset is normalized and log1p-transformed following the standard Scanpy pipeline (Wolf et al., 2018).
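As a toy illustration of the layout step (our assumption about the procedure, not the paper's exact script), spatial clustering of cell types can be produced by Metropolis sweeps of a q-state Potts/Ising model on the grid:

```python
import numpy as np

def simulate_labels(size=50, n_types=4, beta=0.8, sweeps=200, seed=0):
    rng = np.random.default_rng(seed)
    lab = rng.integers(n_types, size=(size, size))
    for _ in range(sweeps * size * size):
        i, j = rng.integers(size, size=2)
        new = rng.integers(n_types)
        nbrs = [lab[(i + di) % size, (j + dj) % size]
                for di, dj in [(-1, 0), (1, 0), (0, -1), (0, 1)]]
        # Potts energy change for replacing lab[i, j] with `new`
        d_e = sum(n == lab[i, j] for n in nbrs) - sum(n == new for n in nbrs)
        if rng.random() < np.exp(-beta * d_e):  # Metropolis accept/reject
            lab[i, j] = new
    return lab
```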
The majority of the methods above are not applicable here, as they focus on time-series data. Therefore, in our benchmarking study we compare GEASS with Lasso Granger, as well as with our implemented L1-regularized version of NCEM, an approach proposed to detect interactions in spatial omics data (Fischer et al., 2021). Finally, we also implemented a method that maximizes the original transfer entropy to select causal features (TE).
As shown in Table 2, the original LASSO cannot identify causal features because of the strong correlation between features. L1-NCEM alleviates the issue by conditioning on cell type labels in the regression. TE outperforms the linear methods yet generates a number of false positives, as it may learn spurious causations, as discussed in Remark 3.3. Finally, GEASS consistently outperforms the other methods in identifying causal features, as shown by both its high F1 score and low FDR.
4.3 SCRNA-SEQ PANCREATIC ENDOCRINOGENESIS TRAJECTORY
We test GEASS on the pancreatic endocrinogenesis trajectory data, a standard dataset for the scRNA-seq trajectory inference task (Bergen et al., 2020; Bastidas-Ponce et al., 2019). The pancreas trajectory data contains 3696 cells and 27998 genes. After preprocessing, lowly expressed genes are filtered out following the standard scVelo pipeline (Bergen et al., 2020), with the remaining 2000 genes retained for further analysis. We aim to use GEASS to identify causally related genes along the developmental trajectory to reveal the underlying biology (see Appendix C.3 for experimental details).
scRNA-seq data provides a snapshot of the cell population distribution, so time-series based analysis methods cannot be directly applied. However, thanks to GEASS's flexible choice of the forward operator $W$, we are able to define the time flow by RNA velocity analysis. RNA velocity analysis uses the additional information of intron (unspliced) RNAs to infer the underlying dynamics of gene expression change. Thus, we are able to define a velocity kernel matrix $A_{velo}$, which provides weighted adjacency relationships between cells based on velocity direction and cell phenotypic proximity.
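A hedged sketch of constructing such a velocity-based forward operator with scVelo's public API (the row normalization step and variable names are our assumptions; `adata` is assumed preprocessed as in the tutorial):

```python
import numpy as np
import scvelo as scv

scv.pp.moments(adata)            # neighborhood smoothing of counts
scv.tl.velocity(adata)           # per-gene RNA velocity
scv.tl.velocity_graph(adata)     # cell-to-cell transition likelihoods

G = adata.uns["velocity_graph"]  # sparse (cells x cells) velocity kernel
row_sums = np.maximum(np.asarray(G.sum(axis=1)), 1e-12)
W = G.multiply(1.0 / row_sums)   # row-normalized A_velo as forward operator
```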
GEASS identifies 50 causally related features with high biological relevance. For example, the gene list includes the key transcriptional regulator NEUROG3, which is required for the specification of a common precursor of the four pancreatic terminal states (uni, 2021). As the ground-truth causal interactions here are unknown, for further quantitative validation we assume that the underlying biological process is driven by a causal cascade of gene interactions, meaning that target genes activated in earlier phases of the trajectory cause downstream gene activation at later phases. In this case, the higher a gene's velocity is, the more likely the gene is associated with causal gene-gene relationships. Our benchmarking result suggests that GEASS achieves the best performance in selecting genes with high mean velocity likelihood, compared with alternative gene selection schemes using the same fixed gene number (50), including highly expressed genes (HEG), highly variable genes (HVG), and genes highly correlated with inferred latent time (HCG) (Figure 4).
4.4 MERFISH HUMAN CORTEX SINGLE-CELL LEVEL SPATIAL TRANSCRIPTOMICS
Spatial transcriptomics represents a broad category of methods that achieve spatial profiling of gene expression in tissues (Moses and Pachter, 2022; Rao et al., 2021; Palla et al., 2022). With the additional information of spatial locations, such measurements enable a deeper understanding of cellular interactions (Palla et al., 2022; Jerby-Arnon and Regev, 2022; Fischer et al., 2021). However, current computational methods revealing interaction modules (Jerby-Arnon and Regev, 2022) or niche effects (Fischer et al., 2021; Raredon et al., 2023) in spatial omics data lack causal interpretation. Applying GEASS, we aim to reveal underlying causal intercellular patterns to fully utilize the potential of spatial omics data for biological discovery.
Here we use GEASS on a recently published MERFISH dataset measuring spatially resolved single-cell gene expression in the human cortex (Fang et al., 2022). The dataset we used comprises 3044 cells and 4000 genes; each cell is annotated as one of eight cell types: excitatory neurons (EXC), inhibitory neurons (INC), astrocytes (ASC), microglial cells (MGC), oligodendrocytes (OGC), oligodendrocyte progenitor cells (OPC), endothelial cells (ENDO), and mural cells (MURAL), as shown in the first panel of Figure 6 in Appendix D. Our GEASS analysis selects 9 genes, namely FILIP1, SLC17A7, MYH11, RP11-10j21.2, PIRT, C3ORF67, TRDMT1, RGS8, and SPTLC2 (Appendix Figure 6); further experimental details are available in Appendix C.4. Among these genes, MYH11, RP11-10j21.2, and TRDMT1 are enriched in the endothelial cells adjacent to mural cells, corresponding to underlying vascular structures (marked by ellipses in the first panel of Appendix Figure 6). We next verify whether their expression difference from non-adjacent endothelial cells is statistically significant. Indeed, applying the Wilcoxon rank-sum test, we find significant enrichment for both MYH11 and TRDMT1, with p-values 0.003 and 0.015 respectively, while the p-value for the gene RP11-10j21.2 is not significant (0.5) due to the sparsity of its expression. This finding is consistent with the MERFISH images, which reveal rich cellular interactions between neuronal cells and the blood vessels (Fang et al., 2022). Therefore, these identified marker genes of vascular structure may encode meaningful underlying cellular interactions.
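The enrichment test can be sketched as follows (scipy's rank-sum test; `adj_mural`, a boolean mask marking endothelial cells adjacent to mural cells, and the `cell_type` column name are hypothetical):

```python
import numpy as np
from scipy.stats import ranksums

expr = np.ravel(adata[:, "MYH11"].to_df().values)     # one gene's expression
endo = (adata.obs["cell_type"] == "ENDO").values      # endothelial cells
stat, pval = ranksums(expr[endo & adj_mural], expr[endo & ~adj_mural])
```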
Next, we focus on two GEASS-identified genes, C3ORF67 and PIRT, which are highly expressed at nearby spatial locations. In order to assess a possible causal relationship between the two genes, we consider three models: 1. the two genes are expressed in the same cell without spatial causal relationships; 2. the expression of C3ORF67 in each cell causes the expression of PIRT in neighboring cells (C3ORF67 → PIRT); 3. the expression of PIRT in each cell causes the expression of C3ORF67 in neighboring cells (PIRT → C3ORF67). To this end, we first compare the Pearson and Spearman p-values of the intracellular correlation (model 1), C3ORF67 to neighboring PIRT (model 2), and PIRT to neighboring C3ORF67 (model 3). Our comparison shows that, for the p-values of both correlation measures, model 3 is favored (0.004, 0.001) over model 1 (0.014, 0.003) and model 2 (0.049, 0.004). The validity of model 3 (PIRT → C3ORF67) is further supported by a linear model predicting C3ORF67 expression from both the intracellular and the neighboring-cell expression of PIRT, where the neighboring-cell effect coefficient is significant at the 0.01 confidence level by bootstrap, while the corresponding coefficient of the alternative model is not significant. Our finding is consistent with the predicted role of PIRT in transmembrane transporter binding and phosphatidylinositol-mediated signaling (Safran et al., 2021). As the role of C3ORF67 in the human cortex remains unclear, this revealed causal link may lead to novel biological discoveries upon further experimental validation.
5 CONCLUSIONS
In this work, we present GEASS, a causal feature selection method based on information-theoretic tools and neural networks. GEASS is able to scale to high dimensions and identify sparse interacting features. We provide both theoretical guarantees and empirical validations of GEASS on synthetic and real biological data. Our results show that GEASS can be integrated into high-dimensional spatiotemporal data analysis pipelines to provide unique insights for further findings.
Limitations. GEASS is a method designed for nonlinear causal feature selection. GEASS does not output a causal graph itself, as it optimizes a latent embedding corresponding to different causal mechanisms. Therefore, in applications where a causal graph output is desired, constraint-based methods may need to be applied after GEASS. Moreover, when the underlying causal graph has a large number of vertices, the sparsity assumption is violated and GEASS is not guaranteed to work. Further efforts may also be needed to incorporate lag selection into GEASS.
Broader impact. We anticipate a wide use of GEASS in high-dimensional graph-structured data, especially for high-dimensional biological data such as single cell trajectories and spatial omics measurements. Applying GEASS along with causal graph identification methods to a wider range of real biological data may greatly facilitate downstream biological discoveries.
ACKNOWLEDGEMENTS
The authors thank Ofir Lindenbaum, Boaz Nadler, Yifei Min, and Ronen Basri for helpful discussions. Y.K. acknowledges support by NIH grants R01GM131642, UM1DA051410, U54AG076043, P50CA121974, and U01DA053628.
APPENDIX
A PROOFS
A.1 PROOF OF PROPOSITION 2.2.
Proposition 2.2. $\forall S_1, S_2 \subset \{1, ..., p\}$: $\mathrm{mTE}_i(S_1, S_2) > 0 \Rightarrow \mathrm{TE}_i(S_1, S_2) > 0$.
Proof. By standard properties of mutual information (Cover, 1999), we have
$$\begin{aligned} \mathrm{TE}_i(X_{S_1}, X_{S_2}) &= I\big(X^i_{S_1};\, X^{N(i)}_{S_2} \mid X^i_{S_2}\big) \\ &= I\big(X^i_{S_1};\, X^{N(i)}_{S_2}, X^i_{S_2}\big) - I\big(X^i_{S_1};\, X^i_{S_2}\big) \\ &= I\big(X^i_{S_1};\, X^{N(i)}_{S_2}\big) - I\big(X^i_{S_1};\, X^i_{S_2}\big) + I\big(X^i_{S_1};\, X^i_{S_2} \mid X^{N(i)}_{S_2}\big). \end{aligned} \tag{9}$$
Therefore $\mathrm{TE}_i(S_1, S_2) \geq \mathrm{mTE}_i(S_1, S_2)$ holds, and thus $\mathrm{mTE}_i(S_1, S_2) > 0 \Rightarrow \mathrm{TE}_i(S_1, S_2) > 0$.
A.2 DISCUSSION OF ASSUMPTION A6.
Our assumption A6 is based on the concept of conditional mutual information, which aims to filter out possible indirect causal relationships.
Here are two simple examples of why TE/mTE can have problems with indirect causal interactions in the time-series setting. Consider the relationships $s_t \to w_t \to v_{t+1}$ and $s_t \to w_{t+1} \to v_{t+1}$. In both cases we may have $I(s_t, v_{t+1}) - I(s_t, v_t) > 0$ and $I(s_t, v_{t+1} \mid v_t) > 0$, although there is no direct causal relationship between $s$ and $v$. Note that in our setting we include the possibility of such indirect interactions by allowing correlation between nuisance features and true interacting features.
The issue can be resolved by considering the conditional mutual information $I(s_t, v_{t+1} \mid w_t)$ or $I(s_t, v_{t+1} \mid w_{t+1})$, which equals 0. This insight is also captured by the concept of conditional transfer entropy:
Definition (Conditional transfer entropy) (Shahsavari Baboukani et al., 2020). Assume $X$ and $Y$ are the features of interest and the conditioning features are $Z$. Denoting $-$ as $[1, 2, ..., t]$, we have
$$\mathrm{cTE}_t(X, Y, Z) = I(Y_{t+1}, X_- \mid Y_-, Z_-).$$
The classical formulation of conditional transfer entropy is widely used for high-dimensional observational data to learn direct causal dependencies (Faes et al., 2016; Shahsavari Baboukani et al., 2020). It implicitly assumes that there is a direct causal relationship between $X$ and $Y$ if $\forall Z, t$, $\mathrm{cTE}_t(X, Y, Z) > 0$. Here, we extend this assumption in the context of a conditional mTE covering both examples described above. The conditional mTEs are defined in analogy to cTE for generalized graph-structured data in the Markovian model setting:
Definition (Two forms of conditional mTE). Assume $X$ and $Y$ are the feature sets of interest and the conditioning features are $Z$. Then we have
$$\mathrm{cmTE}^1_i(X, Y, Z) = I\big(X^i, Y^{N(i)} \mid Z^i\big) - I\big(X^i, Y^i \mid Z^i\big);$$
$$\mathrm{cmTE}^2_i(X, Y, Z) = I\big(X^i, Y^{N(i)} \mid Z^{N(i)}\big) - I\big(X^i, Y^i \mid Z^i\big).$$
By requiring both forms of conditional mTE to be larger than zero, we rule out both possibilities $X^i \to Z^i \to Y^{N(i)}$ and $X^i \to Z^{N(i)} \to Y^{N(i)}$, as mTE is a stricter version of the original transfer entropy, as discussed in Proposition 2.2. In summary, our A6 can be reformulated as: $\forall Z, i$, $\mathrm{cmTE}^1_i(X, Y, Z) > 0$ and $\mathrm{cmTE}^2_i(X, Y, Z) > 0$ for ground-truth interacting $X, Y$ in non-degenerate cases, where $Z$ does not fully overlap with $X$/$Y$ at the same point.
A.3 PROOF OF THEOREM 2.4.
Theorem 2.4. Given A1-A6, $S^* := S^*_1 \cup S^*_2 \subseteq \{1, ..., m\}$ (the index set of true interacting features described in A5). Moreover, each feature in $S^*$ is connected to other features in the set $S^*$.
Proof. Step 1. First we prove $S^*_1 \cap S^*_2 = \emptyset$. If not, let $p$ be an overlapping element. For simplicity, denote $N(i) := \{j \mid (i, j) \in E\}$, $A = X_{S^*_1}$, $B = X_{S^*_2}$. Then we have
$$\begin{aligned} &\mathrm{mTE}(S^*_1, S^*_2) - \mathrm{mTE}(S^*_1 \setminus p, S^*_2) \\ &= I\big(A^i \setminus p^i, p^i;\, B^{N(i)} \setminus p^{N(i)}, p^{N(i)}\big) - I\big(A^i \setminus p^i, p^i;\, B^i \setminus p^i, p^i\big) - I\big(A^i \setminus p^i;\, B^{N(i)} \setminus p^{N(i)}, p^{N(i)}\big) + I\big(A^i \setminus p^i;\, B^i \setminus p^i, p^i\big) \\ &= I\big(p^i;\, B^{N(i)} \setminus p^{N(i)}, p^{N(i)} \mid A^i \setminus p^i\big) - I\big(p^i;\, B^i \setminus p^i, p^i \mid A^i \setminus p^i\big) < 0. \end{aligned} \tag{10}$$
Therefore removing $p$ would increase the value of mTE, leading to a contradiction.
Step 2. Now we prove that nuisance signals cannot be in either $S^*_1$ or $S^*_2$. Suppose otherwise; first assume a set of nuisance signals $U$ is in $S^*_1$. Denote $A := X_{S^*_1}$, $B := X_{S^*_2}$. As $U$ only interacts with variables at the same time point, $U$ can only interact with $B^{N(i)}$ via indirect links through a subset of interacting features at $i$. Denote this feature set by $\mathrm{Pa}_U(B)^i \subseteq \{t^i_1, ..., t^i_m\}$, and the difference set $\mathrm{Pa}^-_U(B)^i := \mathrm{Pa}_U(B)^i \setminus B^i$. We first note that $\mathrm{Pa}^-_U(B)^i$ cannot be an empty set. Otherwise, denoting $S_1 := S^*_1 \setminus U$ and noting the non-overlap between $A$ and $B$, we would have
$$\begin{aligned} &\mathrm{mTE}(S^*_1, S^*_2) - \mathrm{mTE}(S_1, S^*_2) \\ &= I(A^i \setminus U^i, U^i; B^{N(i)}) - I(A^i \setminus U^i, U^i; B^i) - I(A^i \setminus U^i; B^{N(i)}) + I(A^i \setminus U^i; B^i) \\ &= I(U^i; B^{N(i)} \mid A^i \setminus U^i) - I(U^i; B^i \mid A^i \setminus U^i) \\ &= -h(U^i \mid B^{N(i)}, A^i \setminus U^i) + h(U^i \mid B^i, A^i \setminus U^i) \\ &\leq -h(U^i \mid B^{N(i)}, A^i \setminus U^i) + h(U^i \mid \mathrm{Pa}_U(B)^i, A^i \setminus U^i) \quad \text{(conditioning reduces entropy)} \\ &\leq 0. \end{aligned} \tag{11}$$
This means $(S_1, S^*_2)$'s mTE is not smaller than $(S^*_1, S^*_2)$'s while having a smaller union size, leading to a contradiction. Then, because $\mathrm{Pa}^-_U(B)$ does not overlap with either $U$ or $B$, with A6 we have
$$\begin{aligned} &\mathrm{mTE}(S^*_1, S^*_2) - \mathrm{mTE}(S^*_1 \cup \mathrm{Index}(\mathrm{Pa}^-_U(B)), S^*_2) \\ &= I(A^i \setminus U^i, U^i; B^{N(i)}) - I(A^i \setminus U^i, U^i; B^i) - I(A^i \setminus U^i, U^i, \mathrm{Pa}^-_U(B)^i; B^{N(i)}) + I(A^i \setminus U^i, U^i, \mathrm{Pa}^-_U(B)^i; B^i) \\ &= I(\mathrm{Pa}^-_U(B)^i; B^i \mid A^i) - I(\mathrm{Pa}^-_U(B)^i; B^{N(i)} \mid A^i) \overset{A6}{\leq} 0. \end{aligned} \tag{12}$$
The equality above is attained iff $\mathrm{Pa}^-_U(B)^i \subseteq A^i$. Further, we have
$$\begin{aligned} &\mathrm{mTE}(S^*_1 \cup \mathrm{Index}(\mathrm{Pa}^-_U(B)), S^*_2) - \mathrm{mTE}(S_1 \cup \mathrm{Index}(\mathrm{Pa}^-_U(B)), S^*_2) \\ &= I(A^i \setminus U^i, U^i, \mathrm{Pa}^-_U(B)^i; B^{N(i)}) - I(A^i \setminus U^i, U^i, \mathrm{Pa}^-_U(B)^i; B^i) - I(A^i \setminus U^i, \mathrm{Pa}^-_U(B)^i; B^{N(i)}) + I(A^i \setminus U^i, \mathrm{Pa}^-_U(B)^i; B^i) \\ &= I(U^i; B^{N(i)} \mid \mathrm{Pa}^-_U(B)^i, A^i \setminus U^i) - I(U^i; B^i \mid \mathrm{Pa}^-_U(B)^i, A^i \setminus U^i) \\ &= -h(U^i \mid B^{N(i)}, \mathrm{Pa}^-_U(B)^i, A^i \setminus U^i) + h(U^i \mid B^i, \mathrm{Pa}^-_U(B)^i, A^i \setminus U^i) \\ &\leq -h(U^i \mid B^{N(i)}, \mathrm{Pa}^-_U(B)^i, A^i \setminus U^i) + h(U^i \mid \mathrm{Pa}_U(B)^i, A^i \setminus U^i) \leq 0. \end{aligned} \tag{13}$$
Therefore, in all possible cases, $\mathrm{mTE}(S_1 \cup \mathrm{Index}(\mathrm{Pa}^-_U(B)), S^*_2)$ is either strictly larger than $\mathrm{mTE}(S^*_1, S^*_2)$, or equal to $\mathrm{mTE}(S^*_1, S^*_2)$ but with a smaller union size, leading to a contradiction.
Next, given the result above, assume a nuisance signal set $U$ is in $S^*_2$, and $S^*_1$ does not include any nuisance features. As $U$ only interacts with variables at the same time point, $U^{N(i)}$ can only interact with $S^*_1$ via indirect links through a subset of interacting features at $N(i)$. Denote the whole intermediate feature set for $A$ by $\mathrm{Ch}_U(A)^{N(i)} \subseteq \{t^{N(i)}_1, ..., t^{N(i)}_m\}$, and $\mathrm{Ch}^-_U(A)^{N(i)} := \mathrm{Ch}_U(A)^{N(i)} \setminus A^{N(i)}$. Then, as above, denoting $S_2 = S^*_2 \setminus U$, if $\mathrm{Ch}^-_U(A)$ were an empty set we would have
$$\begin{aligned} &\mathrm{mTE}(S^*_1, S^*_2) - \mathrm{mTE}(S^*_1, S_2) \\ &= I(A^i; B^{N(i)} \setminus U^{N(i)}, U^{N(i)}) - I(A^i; B^i \setminus U^i, U^i) - I(A^i; B^{N(i)} \setminus U^{N(i)}) + I(A^i; B^i \setminus U^i) \\ &= I(A^i; U^{N(i)} \mid B^{N(i)} \setminus U^{N(i)}) - I(A^i; U^i \mid B^i \setminus U^i) \\ &= -h(U^{N(i)} \mid B^{N(i)} \setminus U^{N(i)}, A^i) + h(U^i \mid B^i \setminus U^i, A^i) \\ &\leq -h(U^{N(i)} \mid B^{N(i)} \setminus U^{N(i)}, A^i) + h(U^i \mid \mathrm{Ch}_U(A)^i, B^i \setminus U^i) \leq 0. \end{aligned} \tag{14}$$
The above derivation holds due to stationarity (as $|N(i)| \equiv 1$ in the time-series setting). Therefore $\mathrm{Ch}^-_U(A)$ cannot be an empty set. Because of the non-overlap between $\mathrm{Ch}^-_U(A)$ and either $A$ or $U$, with A6 we have
$$\begin{aligned} &\mathrm{mTE}(S^*_1, S^*_2) - \mathrm{mTE}(S^*_1, S^*_2 \cup \mathrm{Index}(\mathrm{Ch}^-_U(A))) \\ &= I(A^i; B^{N(i)} \setminus U^{N(i)}, U^{N(i)}) - I(A^i; B^i \setminus U^i, U^i) - I(A^i; B^{N(i)} \setminus U^{N(i)}, U^{N(i)}, \mathrm{Ch}^-_U(A)^{N(i)}) + I(A^i; B^i \setminus U^i, U^i, \mathrm{Ch}^-_U(A)^i) \\ &= I(A^i; \mathrm{Ch}^-_U(A)^i \mid B^i) - I(A^i; \mathrm{Ch}^-_U(A)^{N(i)} \mid B^{N(i)}) \overset{A6}{\leq} 0. \end{aligned} \tag{15}$$
The equality above is attained iff $\mathrm{Ch}^-_U(A)^i \subseteq B^i$. Further, we have
$$\begin{aligned} &\mathrm{mTE}(S^*_1, S^*_2 \cup \mathrm{Index}(\mathrm{Ch}^-_U(A))) - \mathrm{mTE}(S^*_1, S_2 \cup \mathrm{Index}(\mathrm{Ch}^-_U(A))) \\ &= I(A^i; B^{N(i)} \setminus U^{N(i)}, U^{N(i)}, \mathrm{Ch}^-_U(A)^{N(i)}) - I(A^i; B^i \setminus U^i, U^i, \mathrm{Ch}^-_U(A)^i) - I(A^i; B^{N(i)} \setminus U^{N(i)}, \mathrm{Ch}^-_U(A)^{N(i)}) + I(A^i; B^i \setminus U^i, \mathrm{Ch}^-_U(A)^i) \\ &= I(A^i; U^{N(i)} \mid B^{N(i)} \setminus U^{N(i)}, \mathrm{Ch}^-_U(A)^{N(i)}) - I(A^i; U^i \mid B^i \setminus U^i, \mathrm{Ch}^-_U(A)^i) \leq 0. \end{aligned} \tag{16}$$
Therefore, in all possible cases, $\mathrm{mTE}(S^*_1, S_2 \cup \mathrm{Index}(\mathrm{Ch}^-_U(A)))$ is either strictly larger than $\mathrm{mTE}(S^*_1, S^*_2)$, or equal to $\mathrm{mTE}(S^*_1, S^*_2)$ but with a smaller union size, leading to a contradiction.
Step 3. Finally, if there exists a component in $S^*_1 \cup S^*_2$ not connected to any other feature component, denote this feature by $q$. In this case, with A1-A4, the feature $q$ is independent of all other features in $S^*_1 \cup S^*_2$. From Step 1 it can be deduced that $q$ cannot be in both $S^*_1$ and $S^*_2$. Therefore we have $\mathrm{mTE}(S^*_1 \setminus q, S^*_2 \setminus q) = \mathrm{mTE}(S^*_1, S^*_2)$, contradicting minimality, since we would have found an $(S_1, S_2)$ with the same mTE but smaller $|S_1 \cup S_2|$.
A.4 PROOF OF THEOREM 3.1.
Theorem 3.1. Assume A1-A6 hold and $f, g, h$ define one-to-one mappings on $X \odot 1_{S_1}$ (for $f$) or $X \odot 1_{S_2}$ (for $g, h$). Then $\exists \lambda > 0$ such that, for (4), any solution satisfies $S^* := S^*_1 \cup S^*_2 \subseteq \{1, ..., m\}$. Moreover, each feature in $S^*$ is connected to other features in the set.
$$\min_{f,g,h,S_1,S_2} \; -\Big( I\big(f(x^i \odot 1_{S_1});\, h(x^{N(i)} \odot 1_{S_2})\big) - I\big(f(x^i \odot 1_{S_1});\, g(x^i \odot 1_{S_2})\big) \Big) + \lambda |S_1 \cup S_2| \tag{4}$$
Proof. With A4 (ergodicity and stationarity), the optimization problem (4) is equivalent to
$$\min_{f,g,h,S_1,S_2} -\Big( I\big(f(x^i_{S_1}); h(x^{N(i)}_{S_2})\big) - I\big(f(x^i_{S_1}); g(x^i_{S_2})\big) \Big) + \lambda |S_1 \cup S_2|. \tag{17}$$
Given the assumption that $f, g, h$ define injective mappings on $x^i_{S_1}$ and $x^i_{S_2}$ respectively, and since one-to-one transformations do not change mutual information, this optimization problem is equivalent to
$$\min_{S_1,S_2} -\Big( I\big(x^i_{S_1}; x^{N(i)}_{S_2}\big) - I\big(x^i_{S_1}; x^i_{S_2}\big) \Big) + \lambda |S_1 \cup S_2|. \tag{18}$$
Using Theorem 2.4, a minimizer of the mTE term with the smallest union size satisfies $S^* := S^*_1 \cup S^*_2 \subseteq \{1, ..., m\}$; moreover, each feature in $S^*_1 \cup S^*_2$ is connected to other features in the set. Note that, with our definition of the optimal $(S_1, S_2)$, the minimal gap between $\mathrm{mTE}(S^*_1, S^*_2)$ and any other value $\mathrm{mTE}(S_1, S_2)$ with smaller $|S_1 \cup S_2|$ is larger than zero. Denote this minimal gap by $\delta$ and take $\lambda < \frac{\delta}{|S^*_1 \cup S^*_2|}$; then for these other solutions we have
$$\begin{aligned} -\mathrm{mTE}(S_1, S_2) + \lambda |S_1 \cup S_2| &\geq -\mathrm{mTE}(S^*_1, S^*_2) + \delta + \lambda |S_1 \cup S_2| \\ &\geq -\mathrm{mTE}(S^*_1, S^*_2) + \delta \\ &> -\mathrm{mTE}(S^*_1, S^*_2) + \lambda |S^*_1 \cup S^*_2|. \end{aligned} \tag{19}$$
Meanwhile, for $(S_1, S_2)$ with larger union size, by the definition of mTE we have
$$\begin{aligned} -\mathrm{mTE}(S_1, S_2) + \lambda |S_1 \cup S_2| &\geq -\mathrm{mTE}(S^*_1, S^*_2) + \lambda |S_1 \cup S_2| \\ &= -\mathrm{mTE}(S^*_1, S^*_2) + \lambda \big(|S_1 \cup S_2| - |S^*_1 \cup S^*_2|\big) + \lambda |S^*_1 \cup S^*_2| \\ &> -\mathrm{mTE}(S^*_1, S^*_2) + \lambda |S^*_1 \cup S^*_2|. \end{aligned} \tag{20}$$
Therefore, when taking $\lambda \in \big(0, \frac{\delta}{|S^*_1 \cup S^*_2|}\big)$, the desired optimal $(S_1, S_2)$ under mTE is the optimal output of the constructed optimization problem.
A.5 PROOF OF THEOREM 3.4.
Theorem 3.4. Assume A1-A6 hold and $f, g, h$ are one-to-one Gaussian embeddings as described above. For the optimal solution of (5), denote a sample of the stochastic gates by $T^1, T^2$ and the ground-truth interacting feature set by $S$. Then there exist $\lambda_1, \lambda_2 > 0$ for (5) such that, as $n \to \infty$,
$$\forall i \in \{0, 1\}, \quad P(B_i \subseteq S) \xrightarrow{a.s.} 1, \quad \text{where } B_i := \{d \mid T^1_d > 0,\, T^2_d = i\}.$$
Proof. In the following proof, for simplicity we denote $\tilde{x}_{S_1} = x \odot T^1 \odot T^2$ and $\tilde{x}_{S_2} = x \odot T^1 \odot (1 - T^2)$.
Step 1. Given that $f, g, h$ project the input distributions into joint Gaussian distributions of fixed dimensionality, by the convergence of Gaussian covariance matrices we have:
$$\hat{\Sigma}\big(f(\tilde{x}^i_{S_1}), h(\tilde{x}^{N(i)}_{S_2})\big) = \frac{1}{n} \sum_{i=1}^{n} \big[f(\tilde{x}^i_{S_1}); h(\tilde{x}^{N(i)}_{S_2})\big]\big[f(\tilde{x}^i_{S_1}); h(\tilde{x}^{N(i)}_{S_2})\big]^T \xrightarrow{a.s.} \Sigma_{f(\tilde{x}^i_{S_1}), h(\tilde{x}^{N(i)}_{S_2})};$$
$$\hat{\Sigma}\big(f(\tilde{x}^i_{S_1}), g(\tilde{x}^i_{S_2})\big) = \frac{1}{n} \sum_{i=1}^{n} \big[f(\tilde{x}^i_{S_1}); g(\tilde{x}^i_{S_2})\big]\big[f(\tilde{x}^i_{S_1}); g(\tilde{x}^i_{S_2})\big]^T \xrightarrow{a.s.} \Sigma_{f(\tilde{x}^i_{S_1}), g(\tilde{x}^i_{S_2})}. \tag{21}$$
As, in the Gaussian case, the mutual information between jointly Gaussian random variables is a function of the covariance matrix, we have
$$\hat{I}\big(f(\tilde{x}^i_{S_1}); h(\tilde{x}^{N(i)}_{S_2})\big) \xrightarrow{a.s.} I\big(f(\tilde{x}^i_{S_1}); h(\tilde{x}^{N(i)}_{S_2})\big) = I\big(\tilde{x}^i_{S_1}; \tilde{x}^{N(i)}_{S_2}\big);$$
$$\hat{I}\big(f(\tilde{x}^i_{S_1}); g(\tilde{x}^i_{S_2})\big) \xrightarrow{a.s.} I\big(f(\tilde{x}^i_{S_1}); g(\tilde{x}^i_{S_2})\big) = I\big(\tilde{x}^i_{S_1}; \tilde{x}^i_{S_2}\big);$$
$$P\Big( \lim_{N \to \infty} \text{Empirical mTE} = \mathrm{mTE} \Big) = 1. \tag{22}$$
Step 2. Importantly, in our formulation (5), $T^1, T^2$ are sampled once per epoch, meaning they are fixed across features when computing the mTE. Further note that $\sum_{d=1}^{p} P(T^1_d > 0) = \mathbb{E}\|T^1\|_0$ and $\sum_{d=1}^{p} P(T^2_d \in (0, 1)) = \mathbb{E}\|1_{T^2 \in (0,1)}\|_0$. This means that, denoting the value of (5) by $L$, we have
$$\begin{aligned} L &\xrightarrow{a.s.} \mathbb{E}_{T^1, T^2}\big[ -\mathrm{mTE}(1_{T^1 \odot T^2 > 0}, 1_{T^1 \odot (1 - T^2) > 0}) + \lambda_1 \|T^1\|_0 + \lambda_2 \|1_{T^2 \in (0,1)}\|_0 \big] \\ &\geq \min_{T^1, T^2} -\mathrm{mTE}(1_{T^1 \odot T^2 > 0}, 1_{T^1 \odot (1 - T^2) > 0}) + \lambda_1 \|T^1\|_0 + \lambda_2 \|1_{T^2 \in (0,1)}\|_0. \end{aligned} \tag{23}$$
Note that, with Step 1 of the proof of Theorem 2.4, for any $T^1$ we have
$$-\mathrm{mTE}(1_{T^1 \odot T^2 > 0}, 1_{T^1 \odot (1 - T^2) > 0}) + \lambda_1 \|T^1\|_0 + \lambda_2 \|1_{T^2 \in (0,1)}\|_0 \geq -\mathrm{mTE}(1_{T^1 \odot T^2 > 0}, 1_{T^1 \odot (1 - T^2) > 0}) + \lambda_1 \|T^1\|_0, \tag{24}$$
where equality is attained when, $\forall d$, $P(T^2_d = 1)$ is either 0 or 1 and $P(T^2_d = 0) = 1 - P(T^2_d = 1)$. In this case,
$$\|T^1\|_0 = \|T^1 \odot T^2\|_0 + \|T^1 \odot (1 - T^2)\|_0.$$
Applying Theorem 3.1, for $\lambda_1 = \lambda$ as in Theorem 3.1 we have
$$\min_{T^1, T^2} -\mathrm{mTE}(1_{T^1 \odot T^2 > 0}, 1_{T^1 \odot (1 - T^2) > 0}) + \lambda_1 \|T^1\|_0 + \lambda_2 \|1_{T^2 \in (0,1)}\|_0 = -\mathrm{mTE}(S^*_1, S^*_2) + \lambda |S^*_1 \cup S^*_2| := L^*. \tag{25}$$
Here $(S^*_1, S^*_2)$ satisfies the properties described by Theorem 3.1. Note that the minimizer may not be unique; denote the set containing all minimizers by $\{(S^*_1, S^*_2)\}$. Then the equality in (23) holds if and only if $P\big((1_{T^1 \odot T^2 > 0}, 1_{T^1 \odot (1 - T^2) > 0}) \in \{(S^*_1, S^*_2)\}\big) = 1$. Further noting that, $\forall d$, $P(T^2_d = 1)$ is either 0 or 1 with $P(T^2_d = 0) = 1 - P(T^2_d = 1)$, and that our analysis above holds as $n \to \infty$ with probability 1 by a.s. convergence, we finally have
$$P\Big( \lim_{N \to \infty} P(B_1 \subseteq S) = 1 \Big) = 1; \qquad P\Big( \lim_{N \to \infty} P(B_0 \subseteq S) = 1 \Big) = 1,$$
as desired.
B GATE INITIALIZATION
Our proposed initialization scheme is based on an analysis of the linear case. Assume
$$f(X_{S_1}) = Xa, \qquad g(X_{S_2}) = Xb,$$
where $a, b \in \mathbb{R}^p$ represent two feature loadings. Then:
1. $a, b$ should be non-overlapping; therefore we expect $|a^T b|$ to be small.
2. We should have $f(X) \approx W g(X)$ to maximize the mTE.
The constraint can be formulated as a regression problem $WXb = Xa$, so a natural solution is given by $a = X^{\dagger} W X b = (X^T X)^{-1} X^T W X b$. In this case, $\|a^T b\| = \|b^T (X^T X)^{-1} X^T W X b\| = \|b\|^2_{(X^T X)^{-1} X^T W X}$. Given that $b$ is normalized, it can be shown that the optimal $b$ corresponds to the eigenvector with the smallest absolute eigenvalue of the matrix $(X^T X)^{-1} X^T W X$.
After obtaining $a, b$, we select a quantile threshold over $a/(a + b)$ to initialize the second stochastic gate layer. The first stochastic gate layer is initialized with uniform weights.
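A numpy sketch of this initialization (the pseudo-inverse, the absolute-value ratio, and the 0.5 quantile are our additions for numerical stability and illustration):

```python
import numpy as np
from scipy.linalg import eig

M = np.linalg.pinv(X.T @ X) @ (X.T @ (W @ X))        # (X^T X)^{-1} X^T W X
vals, vecs = eig(M)
b = np.real(vecs[:, np.argmin(np.abs(vals))])        # smallest |eigenvalue|
a = M @ b                                            # a = M b
ratio = np.abs(a) / (np.abs(a) + np.abs(b) + 1e-12)  # a / (a + b), stabilized
mu2_init = (ratio > np.quantile(ratio, 0.5)).astype(float)  # second-gate mu
```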
C EXPERIMENTAL DETAILS
C.1 TIME-SERIES BENCHMARKING STUDY
In this study, the causal processes are simulated with the Python package Tigramite. Among the 100 features in total, there are 6 interacting features {1, 2, 3, 4, 5, 6}. The causal links are: 1→2 with time lag 2, 2→3 with time lag 1, 5→4 with time lag 1, 1→5 with time lag 1, and 3→6 with time lag 3. These features also have autocorrelations with time lags ranging from 1 to 3. There is also a latent confounder, modeled by Tigramite, interacting with feature 0 and feature 2. In the strong-latent-process setting, the latent confounder also affects 43 other features. All remaining features (93 in the weak setting, 50 in the strong setting) are nuisance features with white-noise dynamics. The forward operator is defined by the 5-neighbor lower triangular matrix.
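A hedged sketch of the generator (the toy-model module and function names follow recent Tigramite releases and should be treated as assumptions; the coefficients are illustrative, not the exact values used in the paper, and the latent confounder is omitted):

```python
import numpy as np
from tigramite.toymodels import structural_causal_processes as toys

lin = lambda x: x
links = {0: [],                                          # confounded feature
         1: [((1, -1), 0.5, lin)],                       # autocorrelation
         2: [((2, -1), 0.4, lin), ((1, -2), 0.6, lin)],  # 1 -> 2, lag 2
         3: [((3, -1), 0.4, lin), ((2, -1), 0.6, lin)],  # 2 -> 3, lag 1
         4: [((5, -1), 0.6, lin)],                       # 5 -> 4, lag 1
         5: [((1, -1), 0.6, lin)],                       # 1 -> 5, lag 1
         6: [((3, -3), 0.6, lin)]}                       # 3 -> 6, lag 3
data, _ = toys.structural_causal_process(links, T=1000, seed=0)
noise = np.random.default_rng(0).normal(size=(1000, 93))  # nuisance features
X = np.hstack([data, noise])
```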
C.1.1 ALGORITHM IMPLEMENTATION
If not particularly mentioned, default settings of the algorithms are used throughout.
• VAR-LINGAM. The VAR-LINGAM algorithm is implemented in the Python package LINGAM, available at https://github.com/cdt15/lingam. VAR-LINGAM outputs a weighted matrix; in our benchmarking study we therefore select the features corresponding to the most significant edges, with the number of features matching the sparsity level.
• PCMCI. The PCMCI algorithm is implemented in the Python package Tigramite, which outputs a weighted matrix. We select the features corresponding to the most significant edges, with the number of features matching the sparsity level.
• GVAR. The GVAR algorithm is implemented at https://github.com/i6092467/GVAR. The sparsity parameter is set to 1. We use the stable training option in GVAR, which trains on the first and second halves of the time series respectively to optimize the edge selection sparsity level and then trains on the whole time series, giving a binary output, so no threshold selection is needed.
• GrID-net. The GrID-net algorithm is implemented at https://github.com/alexw16/gridnet. The parameter set order=5, hidden_layer_size=10, end_epoch=50, batch_size=50, lmbd=1 is used throughout our study. After training finishes, we select the features corresponding to the most significant edges, with the number of features matching the sparsity level.
• DCM, NGM. Both algorithms are implemented at https://github.com/alexisbellot/Graphical-modelling-continuous-time. For DCM, the default setting is used, and we use hidden dim = 10 for NGM. After training finishes, we select the features corresponding to the most significant edges, with the number of features matching the sparsity level.
• GEASS. We use the same training parameters in all time-series settings, with the key sparsity regularization parameter λ1 set to 0.04/0.05 based on a validation set; the remaining parameter settings are consistent with the defaults.
C.1.2 SCALABILITY ANALYSIS
We test the running times of PCMCI, GVAR, GrID-net, NGM, GEASS, and GEASS+LPCMCI with settings consistent with those described in the section above (LPCMCI's setting matches PCMCI's). We use the same data generation pipeline and vary the total feature number over [100, 200, 400, 800, 1600].
C.2 SIMULATED SPATIAL OMICS DATA BENCHMARKING STUDY
In this study, the spatial omics data is simulated with the Python package Scsim (Kotliar et al., 2019). 1000 genes are simulated in total, of which 990 are expressed in a cell-type-specific manner. Each of the remaining 10 genes has a functional relationship (linear/nonlinear) with one of the cell-type-specific genes, plus a noise term, in order to model cell-type-specific interactions. The data is then normalized and log-transformed according to the standard Scanpy pipeline (Wolf et al., 2018). The forward operator is defined by the 4-neighbor adjacency matrix, built as sketched below.
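The 4-neighbor grid operator can be built as follows (a sketch of our construction):

```python
import scipy.sparse as sp

def grid_adjacency(rows, cols):
    """Binary 4-neighbor adjacency for a rows x cols grid of cells."""
    n = rows * cols
    A = sp.lil_matrix((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    A[i, rr * cols + cc] = 1.0
    return A.tocsr()
```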
C.2.1 ALGORITHM IMPLEMENTATION
If not particularly mentioned, default settings of the algorithms are used throughout.
• Lasso Granger. The Lasso algorithm is implemented by Scipy with tuned α (0.12) to match the sparsity level.
• NCEM. NCEM (Linear) is a linear graph neural network, which in the grid case corresponds to a standard linear regression on neighbors and the cell type label. Based on the original work, we implemented an equivalent version via Lasso regression with α = 0.019 to match the sparsity level.
• GEASS. We use the same training parameters in all settings, with the key sparsity regularization parameter λ1 set to 0.02 based on a validation set, and the latent dimension set to 64.
• TE. To give a fair comparison, we use the same architecture as GEASS, except that the loss function is changed. We use the same training parameters in all settings, with the key sparsity regularization parameter λ1 set to 0.05 based on a validation set, and the latent dimension set to 64, consistent with GEASS.
C.3 SCRNA-SEQ PANCREAS TRAJECTORY
The data preprocessing is consistent with the scVelo tutorial: https://scvelo.readthedocs.io/VelocityBasics/ (Bergen et al., 2020). The parameters are λ1 = 0.06, λ2 = 0.1. Because the gene regulatory network here is fully connected and activated in cascade along the developmental trajectory, we use the opposite initialization, with b taken as the eigenvector corresponding to the largest absolute eigenvalue of the matrix $(X^T X)^{-1} X^T W X$.
C.4 MERFISH SPATIAL TRANSCRIPTOMICS DATA
The data is downloaded from Dryad and preprocessed with the standard Scanpy pipeline (Wolf et al., 2018): the data is first normalized and log-transformed using Scanpy's default functions, and 1000 highly variable genes are then selected using Scanpy's default functions. The forward operator is defined by the 5-neighbor adjacency matrix. The GEASS parameters are consistent with those used in the spatial omics benchmarking.
D ADDITIONAL EXPERIMENTAL RESULTS | 1. What is the focus and contribution of the paper regarding spatiotemporal data analysis?
2. What are the strengths of the proposed approach, particularly in terms of its novel use of transfer entropy?
3. What are the weaknesses of the paper, especially regarding the forward operator and mutual information computation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper provides a novel methodology for identifying significant causal features in spatiotemporal data, with primary applications in biology (e.g. scRNA-Seq data). Its primary novelty lies in the judicious use of a transfer entropy characterisation of feature significance/relevance and the design of a combinatorial stochastic gate layer.
Strengths And Weaknesses
The strengths are as follows:
The paper is well-written and coherently organised.
It has clear novelty in the use of maximum transfer entropy.
All the non-trivial mathematical statements are supported by proofs in the appendix.
The weaknesses are as follows:
It is unclear how the forward operator translates to spatial settings; the authors vaguely mention something to do with diffusion, but that would not be applicable unless the diffusion trajectory could be reconstructed in time.
It is also unclear how the mutual information is computed (via approximation? based on frequencies in the data?).
Many proofs are hand-wavy; this may make them clear/obvious to experts, but for the general reader they should be made more explicit.
Some of the extensions of the method to specific datasets seem somewhat ad hoc, rather than being motivated by the same principles as the main method.
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written, with high-quality exposition and proofs supporting each non-trivial assertion. The methods appear to be novel, but I am not very familiar with this subfield. The results are well-described, but no code repository is provided, so reproducibility is limited. |
ICLR | Title
GEASS: Neural causal feature selection for high-dimensional biological data
Abstract
Identifying nonlinear causal relationships in high-dimensional biological data is an important task. However, current neural network based causality detection approaches for such data suffer from poor interpretability and cannot scale well to the high dimensional regime. Here we present GEASS (Granger fEAture Selection of Spatiotemporal data), which identifies sparse Granger causal interacting features of high dimensional spatiotemporal data by a single neural network. GEASS maximizes sparsity-regularized modified transfer entropy with a theoretical guarantee of recovering features with spatial/temporal Granger causal relationships. The sparsity regularization is achieved by a novel combinatorial stochastic gate layer to select sparse non-overlapping feature subsets. We demonstrate the efficacy of GEASS in several synthetic datasets and real biological data from single-cell RNA sequencing and spatial transcriptomics.
1 INTRODUCTION
Advances in single-cell omics research enable full characterizations of high-dimensional gene dynamics in biological systems on a either temporal or spatial scale. An example for the temporal case is single-cell RNA sequencing (scRNA-seq) trajectories, where cells are sampled from a dynamical biological process, sequenced, and ordered based on either real sampled time or inferred pseudo-time (Cannoodt et al., 2016; Saelens et al., 2019). Gene dynamics along the specified cell order encodes information of causal regulation for the underlying biological process. An example for the spatial case is single-cell level spatial transcriptomics (e.g. SeqFISH+ (Eng et al., 2019), Merfish (Fang et al., 2022)), in which cells from a tissue slice are sequenced with their spatial coordinates preserved (Moses and Pachter, 2022; Rao et al., 2021; Palla et al., 2022). Spatial profiling allows investigations of the cellular interplay, corresponding to conditional gene expression change caused by neighborhood phenotypic states. However, despite the potential significance, data-driven causal discovery for such data remains largely unexplored, especially for the spatial omics data.
Identification of causal regulatory patterns in such data can be reformulated as the general task of causal feature selection in observational data with intrinsic structure, e.g. spatial or temporal data. Identification of causal interactions in time series has led to valuable findings in multiple disciplines, including but not limited to economics, climate science, and biology (Hoover, 2006; Kamiński et al., 2001; Runge et al., 2019a).
Learning directed causal relationships in temporal/spatial data is feasible as time and space both induce asymmetric dependencies. In the case of time-series data, a feature in the future cannot have effect on past values of other features. For spatial data, a similar definition of causal dependency can be established (Herrera Gómez et al., 2014).
The concept of Granger causality was proposed in order to uncover this asymmetric causal dependency (Granger, 1969; Shojaie and Fox, 2022). In time-series data, this translates to identifying one variable's causal relationship with other variables based on how well the historical observations of the other variables can predict the variable's present value. The application of Granger causality in a spatial context corresponds to predicting significant relationships between neighboring observations of other variables and the specified variable (Mielke et al., 2020), which is a key insight used in recent works aimed at discovering cellular interaction patterns in spatial omics data (Fischer et al., 2021; Valdés-Sosa et al., 18).
In the nonlinear regime, information-theoretic measures such as directed information, transfer entropy (Schreiber, 2000), and partial transfer entropy (Staniek and Lehnertz, 2008), are used as a counterpart of linear Granger causality. Moreover, some works consider modeling conditional independence (CI) in time-series data to identify the underlying causal graph (Entner and Hoyer, 2010; Malinsky and Spirtes, 2018; Moneta et al., 2011; Runge et al., 2019a; Pfister et al., 2019; Mastakouri et al., 2021). Two examples are VarLINGAM (Hyvärinen et al., 2010) and PCMCI (Runge et al., 2019b), which are generalizations of LINGAM (Shimizu et al., 2006) and PC (Spirtes et al., 2000) respectively. Finally, multiple recent works have proposed to use neural network approaches to model the nonlinear Granger causality, including MLP, LSTM, and neural-ODE based approaches, resulting in improved prediction power for nonlinear time-series dynamics (Li et al., 2017; Tank et al., 2021; Nauta et al., 2019; Yin and Barucca, 2022; Bellot et al., 2021).
Despite the success of these methods in various systems of interest, multiple challenges limit their use in high-dimensional biological datasets.
• Although linear methods (LINGAM, linear Granger causality) have succeeded in various settings and can potentially scale to high feature numbers, these methods may completely fail when the feature dependency in data is highly complex and nonlinear.
• As the number of conditional independencies generally scales exponentially or at least polynomially with the feature size, applying causal discovery methods that are based on CI tests to high-dimensional data is not realistic. Distinctively, Granger-causality based methods are built with a prediction model for each feature in the data. The time complexity of solving the stacked prediction models for all features is polynomial in the feature size.
• In previous methods, the number of causal edges between features is assumed to be sparse (edge sparsity) to maximize interpretability of the identified causal graph. However, in biological data, there exists a large proportion of nuisance features. Also, one functional gene may activate a large number of downstream genes in neighboring cells. Sparsifying the number of interacting features (feature sparsity) has the potential to improve causal discovery in biological systems, which remains to be explored.
• While a large number of methods are designed for causal discovery in time-series data, only a limited number of existing works aim at causal discovery in general graph-structured data. Time-series based methods cannot be directly adopted for data with multi-branch trajectory dynamics or spatial structures.
Our contribution. In this work, we present GEASS (Granger fEAture Selection of Spatiotemporal data), which identifies causally interacting features of high-dimensional temporal/spatial data by a single neural network. GEASS considers the aforementioned feature sparsity instead of edge sparsity, and thus selects the most significant interacting features for downstream causal discovery. Our contributions are threefold.
1. Instead of performing causal discovery directly on the data, we formulate the task as two steps: causal feature selection and causal graph identification. We provide a novel solution to the causal feature selection problem in general graph-structured data through the use of modified transfer entropy maximization, with theoretical guarantees.
2. In order to solve our proposed optimization problem, we design a novel combinatorial stochastic gate layer to select non-overlapping sparse feature sets with a newly designed initialization procedure.
3. We demonstrate the power of our method by benchmarking it on both temporal data and spatial data of multiple settings. Our method gives accurate and robust causal feature identification and reveals novel biology in real datasets.
1.1 RELATED WORKS
Neural Granger causality. Despite the large body of work based on linear Granger causal discovery, neural Granger causality still remains an active area of research. Various neural network architectures, such as MLP, sequential model, and attention-based architecture (Tank et al., 2021; Nauta et al., 2019; Khanna and Tan, 2019; Sun et al., 2021), have been proposed for nonlinear Granger causality
discovery. A recent work uses the information of a proxy variable to learn a latent confounder for Granger causality by a dual-decoder neural network (Yin and Barucca, 2022). One recent biology-oriented work extends the definition of Granger causality to DAGs, where the use of a linear graph neural network is proposed to model the underlying Granger causality (Wu et al., 2021). Meanwhile, a neural-ODE based approach has been proposed to reformulate the Granger causality problem in terms of local dependence graph identification (Bellot et al., 2021).
Causal feature selection. The task of causal feature selection has been considered by multiple groups. Most works in this category use constraint-based methods to identify each feature's causal relation with all other features, equivalent to identifying the whole causal graph structure, including VAR-LINGAM, tsFCI, SVAR-FCI, and PCMCI (Hyvärinen et al., 2010; Entner and Hoyer, 2010; Malinsky and Spirtes, 2018; Moneta et al., 2011; Runge et al., 2019a). Meanwhile, seqICP focuses on identifying the direct or indirect cause of each feature, assuming sufficient interventions in the dataset (Pfister et al., 2019). SyPI tackles the causal feature selection problem without the assumption of causal sufficiency and avoids issues in multi-hypothesis testing by construction of the correct conditioning set (Mastakouri et al., 2021). Finally, Guo et al. (2022) consider dual correction of causal feature selection to control both false positive rates and false negative rates.
2 MODIFIED TRANSFER ENTROPY (MTE)
In order to tackle the issue that a neural network may overfit each per-feature prediction model and therefore overestimate the number of causal interactions, we need a prediction-free loss function that directly indicates causal significance. In this work, we propose a novel function, modified transfer entropy (mTE), based on transfer entropy (Schreiber, 2000), as a metric of causal interaction significance.
Transfer entropy is an information-theoretic measure of cross dependence (Schreiber, 2000). Consider two vectorized time series $x_t$ and $y_t$ for $t \in 1, \dots, T$. In a Markovian model, the transfer entropy from $x$ to $y$ at time $t$ is defined as the mutual information between the present value $x_t$ and the future value $y_{t+1}$, conditioned on $y_t$ to eliminate possible autocorrelation: $\mathrm{TE}_t(x, y) = I(x_t; y_{t+1} \mid y_t)$. Through the use of mutual information, transfer entropy is able to model general nonlinear dependencies beyond linear Granger causality. In this work, we further consider the generalization of transfer entropy to graph-structured $x^i$ and $y^i$, where $i$ denotes a vertex of the data graph $G = (V, E)$:
$\mathrm{TE}_i(x, y) := I(x^i; y^{N(i)} \mid y^i), \quad \text{where } N(i) := \{j \mid (i, j) \in E\}.$ (1)
Note that the graph can be either directed (the time-series case) or undirected (the spatial case). In this study, we introduce a novel function, the modified transfer entropy, that enables the application of bivariate transfer entropy for causal discovery in high-dimensional data. Our key insight is to consider the two feature subsets in the dataset that maximize the mutual information difference:

Definition 2.1. Let $X = [x^1 x^2 \dots x^n] \in \mathbb{R}^{p \times n}$ be a matrix containing a graph-structured vector series $x^i$, with $i$ as vertices of the data graph $G = (V, E)$. Let $S_1$ and $S_2$ be two subsets of $\{1, 2, \dots, p\}$. The modified transfer entropy $\mathrm{mTE}_i(S_1, S_2)$ and its maximum $\mathrm{mTE}^*_i$ are defined by
$\mathrm{mTE}_i(S_1, S_2) := I(x^i_{S_1}; x^{N(i)}_{S_2}) - I(x^i_{S_1}; x^i_{S_2}); \qquad \mathrm{mTE}^*_i := \max_{S_1, S_2} \mathrm{mTE}_i(S_1, S_2).$ (2)
Note that the mTE function requires strictly stronger dependence than the analogously defined transfer entropy $\mathrm{TE}_i(S_1, S_2)$, as shown by the proposition below (the proof can be seen in Appendix A.1):

Proposition 2.2. $\forall S_1, S_2 \subset \{1, \dots, p\}$, $\mathrm{mTE}_i(S_1, S_2) > 0 \Rightarrow \mathrm{TE}_i(S_1, S_2) > 0$.
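For intuition, both quantities reduce to log-determinant ratios of covariance blocks under a joint-Gaussian assumption, using the chain rule $I(a; b \mid c) = I(a; [b, c]) - I(a; c)$. Below is a minimal plug-in sketch of this idea; it only illustrates the definitions, not the estimator GEASS itself optimizes, and the array shapes and names are our assumptions.

```python
import numpy as np

def gaussian_mi(x, y):
    """Plug-in Gaussian mutual information I(x; y) from samples.
    x: (n, dx), y: (n, dy).  I = 0.5 * log(det Sx * det Sy / det Sxy)."""
    sx = np.atleast_2d(np.cov(x, rowvar=False))
    sy = np.atleast_2d(np.cov(y, rowvar=False))
    sxy = np.atleast_2d(np.cov(np.concatenate([x, y], axis=1), rowvar=False))
    logdet = lambda m: np.linalg.slogdet(m)[1]
    return 0.5 * (logdet(sx) + logdet(sy) - logdet(sxy))

def te_and_mte(x_node, x_nbr, s1, s2):
    """TE_i and mTE_i for index sets s1, s2, given samples of the features at
    a node (x_node: (n, p)) and aggregated over its neighbors (x_nbr: (n, p)).
    Uses I(a; b | c) = I(a; [b, c]) - I(a; c) for the conditioning in TE."""
    a = x_node[:, s1]
    b_nbr, b_node = x_nbr[:, s2], x_node[:, s2]
    te = gaussian_mi(a, np.concatenate([b_nbr, b_node], axis=1)) \
         - gaussian_mi(a, b_node)
    mte = gaussian_mi(a, b_nbr) - gaussian_mi(a, b_node)
    return te, mte
```

Mirroring Proposition 2.2, the plug-in estimates satisfy `mte <= te`, since the two differ by a nonnegative conditional mutual information term (see Eq. (9) in Appendix A.1).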
Let $(S_1^*, S_2^*)$ be one of the maximizers with the smallest size of $|S_1 \cup S_2|$, and denote $S^* := S_1^* \cup S_2^*$ (note that $(S_1^*, S_2^*)$ may not be unique). Under some mild assumptions listed below, we are able to provide the theoretical justification for mTE maximization in the time-series setting (Theorem 2.4). A proof can be seen in Appendix A.3.
Assumptions:
A1-A3 Causal Markov assumption, faithfulness, and causal sufficiency for the causal graph.
A4 Ergodicity and stationarity of the stochastic process defined by the causal graph: the ensemble average equals the time average, and the functional relationships encoded by the causal graph do not change over time (or location). This also implies that $\mathrm{mTE}_i(S_1, S_2)$ is constant across $i$.
A5 DAG causal graph: We assume $X^T = [t_1, \dots, t_m, u_{m+1}, \dots, u_p]$ up to a permutation, where $t_i$ are causally interacting features forming a directed acyclic graph (DAG), and $u_k$ are nuisance features that may correlate with $t_i$. An illustration based on the time-series setting can be seen in Figure 1.
A6 Interaction regularity: Let $A, B$ be two disjoint feature sets such that $A$ is a subset of the parent features of $B$ or $B$ is a subset of the child features of $A$. Then, conditioning on any other feature set $C$ such that $I(A^i; B^{N(i)} \mid C^i) > 0$ and $I(A^i; B^{N(i)} \mid C^{N(i)}) > 0$, we have:
$\forall i, \; \min\{I(A^i; B^{N(i)} \mid C^i), \; I(A^i; B^{N(i)} \mid C^{N(i)})\} > I(A^i; B^i \mid C^i).$ (3)

Remark 2.3. Here our only additional assumption beyond the prevalent literature (Pearl, 2009; Spirtes et al., 2000) is A6, which aims to filter out features with spurious causations and to regularize the algorithmic complexity of causal interactions, thus enabling information-theoretic analysis. A6 has direct connections with the concept of conditional transfer entropy (Faes et al., 2016; Shahsavari Baboukani et al., 2020); further discussion can be seen in Appendix A.2.

Theorem 2.4. Given A1-A6, $S^* := S_1^* \cup S_2^* \subseteq \{1, \dots, m\}$ (the index set of true interacting features described in A5). Moreover, each feature in $S^*$ is connected to other features in the set $S^*$.
3 NEURAL OPTIMIZATION OF MODIFIED TRANSFER ENTROPY
With Theorem 3.1 stated below, we are able to give a theoretical guarantee for the $l_0$-penalized optimization of mTE. A proof can be seen in Appendix A.4. Here $\odot$ stands for the Hadamard product.

Theorem 3.1. Assume A1-A6 hold and $f, g, h$ define one-to-one mappings on $X \odot 1_{S_1}$ (for $f$) or $X \odot 1_{S_2}$ (for $g, h$). Then $\exists \lambda > 0$ such that for (4), any solution $(S_1^*, S_2^*)$ satisfies $S^* := S_1^* \cup S_2^* \subseteq \{1, \dots, m\}$. Moreover, each feature in $S^*$ is connected to other features in the set.
$\min_{f,g,h,S_1,S_2} \; -\left(I(f(x^i \odot 1_{S_1}); h(x^{N(i)} \odot 1_{S_2})) - I(f(x^i \odot 1_{S_1}); g(x^i \odot 1_{S_2}))\right) + \lambda |S_1 \cup S_2|$ (4)
Remark 3.2. The estimation of mutual information by various approaches is an active field itself (Belghazi et al., 2018; Hjelm et al., 2018; McAllester and Stratos, 2020; Zhang et al., 2019). In contrast, by this theorem, we show that an accurate estimation of the transfer entropy (such as in Zhang et al. (2019)) may not be needed, as optimizing the upper bound of the modified transfer entropy automatically gives the best feature subset selection.

Remark 3.3. Our theoretical guarantee is derived based on one-to-one embeddings $f, g, h$. In a neural network, the injectivity may be enforced with various architecture designs, yet may not perfectly hold. Empirically, we have found that the optimization of mTE is robust to the embedding injectivity, compared with the original transfer entropy. This is due to our stricter design of the mTE function (Proposition 2.2) and is further illustrated by our experiments in the next section.
Given Theorem 3.1, we are able to construct a neural network for optimizing the proposed loss function. However, the estimation of mutual information is not directly tractable. Because mutual information is invariant under one-to-one transforms, we can restrict the function class of $f, g, h$ in the optimization problem (4) to flows transforming the original feature distributions into Gaussian distributions of fixed dimensionality. We can then formulate the target for neural network optimization via the explicit formula for the mutual information between Gaussians: $I(X, Y) = \frac{1}{2} \log \frac{\det \Sigma_X \det \Sigma_Y}{\det \Sigma_{[X,Y]}}$. The Gaussian regularization can be applied either by penalizing the discrepancy between the embedding distributions $[f, g, h]$ and Gaussian distributions, or by an adversarial training procedure. In this work, we implement the former approach, constructing the mean and covariance matrix of the concatenated embedding as learnable parameters and minimizing the cross-entropy between the target distributions and the parametrized Gaussian distributions.
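As a concrete illustration of this objective, a differentiable plug-in version of the Gaussian mutual information can be written directly from batch covariances. The sketch below (in PyTorch, with a hypothetical jitter term for numerical stability) shows one way this could be implemented; it is not the authors' exact code.

```python
import torch

def gaussian_mi_hat(x, y, eps=1e-4):
    """Differentiable plug-in Gaussian MI between embedding batches.
    x: (n, dx), y: (n, dy).  I_hat = 0.5 * log(det Sx * det Sy / det Sxy)."""
    def logdet_cov(z):
        zc = z - z.mean(dim=0, keepdim=True)
        cov = zc.T @ zc / (z.shape[0] - 1)
        cov = cov + eps * torch.eye(cov.shape[0])  # jitter keeps logdet finite
        return torch.logdet(cov)
    xy = torch.cat([x, y], dim=1)
    return 0.5 * (logdet_cov(x) + logdet_cov(y) - logdet_cov(xy))
```

Because every operation is differentiable, gradients flow back through the covariance estimates into the embedding networks $f, g, h$.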
3.1 COMBINATORIAL STOCHASTIC GATES
In order to solve the optimization problem, we need to learn two sparse sets $S_1, S_2$, which involves combinatorial optimization, making the task impractical for high-dimensional data. To overcome this issue, we use a stochastic gate based approach (Yamada et al., 2020; Lindenbaum et al., 2021), which performs a probabilistic relaxation of deterministic $l_0$ norms. In order to explicitly construct $S_1$ and $S_2$ by stochastic gates, we define two random vectors $T^1$ and $T^2$ ranging in $[0, 1]$ with lengths equal to the feature number, with each element independently sampled from the STG distribution defined as $T^i_d = \max(0, \min(1, \mu^i_d + \epsilon^i_d))$, where $\epsilon^i_d \sim N(0, \sigma_i^2)$ is i.i.d. sampled with fixed variance and $\mu^i_d$ is a parameter trainable by reparametrization (Miller et al., 2017; Figurnov et al., 2018).
The new loss function applying stochastic gates can be formulated as:

$\mathbb{E}_{T^1, T^2}\left[-\left(\hat{I}(f(\tilde{X}_{S_1}); h(W\tilde{X}_{S_2})) - \hat{I}(f(\tilde{X}_{S_1}); g(\tilde{X}_{S_2}))\right)\right] + \sum_{d=1}^{p}\left[\lambda_1 P(T^1_d > 0) + \lambda_2 P(T^2_d \in (0, 1))\right],$

s.t. $\tilde{X}_{S_1} = X \odot T^1 \odot T^2, \quad \tilde{X}_{S_2} = X \odot T^1 \odot (1 - T^2).$ (5)
Here $\hat{I}$ is defined as the empirical Gaussian mutual information $\hat{I}(X, Y) = \frac{1}{2} \log \frac{\det \hat{\Sigma}_X \det \hat{\Sigma}_Y}{\det \hat{\Sigma}_{[X,Y]}}$, and $W$ is defined as the graph diffusion operator: $W x^i = x^{N(i)}$. In our construction, $T^1$ controls the sparsity of feature selection, while $T^2$ controls the expected overlap between $\tilde{X}_{S_1}$ and $\tilde{X}_{S_2}$. Denoting the Gaussian error function as $\mathrm{erf}(\cdot)$, the regularization term for the first layer is of the form:

$\sum_{d=1}^{p} P(T^1_d > 0) = \sum_{d=1}^{p} \left(\frac{1}{2} - \frac{1}{2}\,\mathrm{erf}\!\left(-\frac{\mu^1_d}{\sqrt{2}\,\sigma_1}\right)\right).$ (6)
The regularization term for the second layer can be expressed as:

$\sum_{d=1}^{p} P(T^2_d \in (0, 1)) = \sum_{d=1}^{p} \left[P(T^2_d > 0) - P(T^2_d \geq 1)\right] = \frac{1}{2}\sum_{d=1}^{p} \left(\mathrm{erf}\!\left(\frac{\mu^2_d}{\sqrt{2}\,\sigma_2}\right) - \mathrm{erf}\!\left(\frac{\mu^2_d - 1}{\sqrt{2}\,\sigma_2}\right)\right).$ (7)
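A minimal sketch of the two stacked gate layers and the two closed-form regularizers (6)-(7) is given below; the class name, the default $\sigma$ values, and the per-forward gate sampling are our own assumptions (the paper samples the gates once per epoch; see Appendix A.5).

```python
import math
import torch

class CombinatorialSTG(torch.nn.Module):
    """Two stacked stochastic gate layers: T1 selects active features
    (l0 penalty), T2 splits them into the S1 / S2 subsets (0-1 penalty)."""
    def __init__(self, p, sigma1=0.5, sigma2=0.5):
        super().__init__()
        self.mu1 = torch.nn.Parameter(torch.zeros(p))
        self.mu2 = torch.nn.Parameter(torch.zeros(p))
        self.sigma1, self.sigma2 = sigma1, sigma2

    def sample_gates(self):
        # Reparametrized STG sample: T_d = clamp(mu_d + sigma * eps, 0, 1).
        t1 = (self.mu1 + self.sigma1 * torch.randn_like(self.mu1)).clamp(0, 1)
        t2 = (self.mu2 + self.sigma2 * torch.randn_like(self.mu2)).clamp(0, 1)
        return t1, t2

    def regularizers(self):
        # Eq. (6): sum_d P(T1_d > 0)
        reg1 = (0.5 - 0.5 * torch.erf(-self.mu1 / (math.sqrt(2) * self.sigma1))).sum()
        # Eq. (7): sum_d P(T2_d in (0, 1))
        reg2 = 0.5 * (torch.erf(self.mu2 / (math.sqrt(2) * self.sigma2))
                      - torch.erf((self.mu2 - 1) / (math.sqrt(2) * self.sigma2))).sum()
        return reg1, reg2

    def forward(self, X):  # X: (n, p)
        t1, t2 = self.sample_gates()
        return X * t1 * t2, X * t1 * (1 - t2)  # X_tilde_S1, X_tilde_S2
```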
We are able to show strong consistency for our stochastic-gate based feature selection scheme by the theorem below (a proof can be seen in Appendix A.5):

Theorem 3.4. Assume A1-A6 hold and $f, g, h$ are one-to-one Gaussian embeddings as described above. For the optimal solution of (5), denote a sample of the stochastic gates as $T^1, T^2$ and denote the ground-truth interacting feature set as $S$. Then there exist $\lambda_1, \lambda_2 > 0$ for (5) such that, as $n \to \infty$,

$\forall i \in \{0, 1\}, \; P(B_i \subseteq S) \xrightarrow{a.s.} 1, \quad \text{where } B_i := \{d \mid T^1_d > 0, T^2_d = i\}.$ (8)
In practice, we have also observed that the method's solution depends strongly on the stochastic gate initialization. We therefore provide a heuristic initialization scheme with superior empirical performance. Details of the initialization scheme can be seen in Appendix B.
3.2 PROPOSED NETWORK ARCHITECTURE
Our proposed network architecture is summarized in Figure 2. For an input dataset $X \in \mathbb{R}^{p \times n}$ and its corresponding graph adjacency matrix $A \in \mathbb{R}^{n \times n}$, we first pass each feature through two sequential stochastic gate layers $T^1, T^2$. The $l_0$ penalty is applied to the first STG layer, while the second STG layer is regularized with the 0-1 penalty, consistent with the descriptions in the previous section.
After the gating, denoting $\hat{T}^2 = 1 - T^2$, we have two intermediate embeddings defined by $\tilde{X}_{S_1} = X \odot T^1 \odot T^2$ and $\tilde{X}_{S_2} = X \odot T^1 \odot \hat{T}^2$, respectively. These two embeddings are passed through MLP1 ($f$) and MLP2 ($g$) to generate Gaussian embeddings $f(\tilde{X}_{S_1})$, $g(\tilde{X}_{S_2})$ corresponding to (5). For the design of the function $h$, we consider two crucial elements: 1. an additional layer to aggregate the information from different nodes in $x^{N(i)}$; 2. the injectivity of the mappings $f, g, h$. Note that $f, h$ in (5) are automatically enforced to be injective on interacting features to maximize the first term of mTE, but $g$ is not. Therefore, our final design of $h$ is the composition of first applying $g$ (enforcing the injectivity of $g$), a mean aggregation layer without self-loops consistent with the GCN design (Kipf and Welling, 2016), implemented by multiplying by the adjacency matrix $A$, and another MLP layer (MLP3). Finally, we compute the negative empirical Gaussian mTE $\hat{I}(f, g) - \hat{I}(f, h)$ and add the cross-entropy penalty between the concatenated embedding distribution and a learnable Gaussian distribution.
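Putting the pieces together, one training step might look as follows. This sketch reuses the `gaussian_mi_hat` and `CombinatorialSTG` helpers from the sketches above; the layer widths, sparsity weights, and the random adjacency matrix are hypothetical placeholders.

```python
import torch

p, n, d = 100, 512, 16
X = torch.randn(n, p)                              # toy data matrix
A = (torch.rand(n, n) < 0.02).float()
A.fill_diagonal_(0)                                # no self-loops, as in GCN
A = A / A.sum(dim=1, keepdim=True).clamp(min=1.0)  # mean aggregation operator

gates = CombinatorialSTG(p)
f = torch.nn.Sequential(torch.nn.Linear(p, 64), torch.nn.ReLU(), torch.nn.Linear(64, d))
g = torch.nn.Sequential(torch.nn.Linear(p, 64), torch.nn.ReLU(), torch.nn.Linear(64, d))
mlp3 = torch.nn.Linear(d, d)                       # h = mlp3(A @ g(.))

params = (list(gates.parameters()) + list(f.parameters())
          + list(g.parameters()) + list(mlp3.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

x_s1, x_s2 = gates(X)
zf, zg = f(x_s1), g(x_s2)
zh = mlp3(A @ zg)                                  # aggregate neighbor embeddings
reg1, reg2 = gates.regularizers()
loss = -(gaussian_mi_hat(zf, zh) - gaussian_mi_hat(zf, zg)) \
       + 0.05 * reg1 + 0.1 * reg2                  # hypothetical lambda_1, lambda_2
opt.zero_grad(); loss.backward(); opt.step()
```

The cross-entropy Gaussianity penalty on the concatenated embedding would be added to `loss` in the same way.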
3.3 OUTPUT INTERPRETATION
Upon algorithm convergence, GEASS provides both the active features ($B_0 \cup B_1$) and the embeddings ($f, g, h$) produced by causally interacting features. In this paper, we emphasize the use of the identified interacting features $B_0 \cup B_1$. The output embeddings ($f, g, h$) may be complex and nonlinear, potentially requiring additional architectures to maximize their interpretability.
By the construction of GEASS, we are able to obtain two separate sparse feature subsets, source features $B_1$ and sink features $B_0$. These features may be used as inputs to a subsequent proper causal analysis, such as LPCMCI (Gerhardus and Runge, 2020) for time-series data, which, despite its statistical power in depicting possible lags, identifying latent confounders, and allowing nonlinear tests, can only work on data with moderate feature sizes. These features may also be used in other machine learning models for improved model interpretability.
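Continuing the training sketch above, at convergence the gates can be read out deterministically; the 0.5 threshold on $T^2$ is a practical assumption, since the 0-1 penalty pushes $T^2$ towards $\{0, 1\}$.

```python
import torch

with torch.no_grad():
    t1 = gates.mu1.clamp(0, 1)                    # deterministic gates at convergence
    t2 = gates.mu2.clamp(0, 1)
    B1 = torch.where((t1 > 0) & (t2 >= 0.5))[0]   # source features (S1 side)
    B0 = torch.where((t1 > 0) & (t2 < 0.5))[0]    # sink features (S2 side)
```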
4 EXPERIMENTS
4.1 GAUSSIAN TIME-SERIES WITH POSSIBLE NONLINEARITY
In order to benchmark the method in time-series data, we consider two settings: 1. Minor effect of latent processes, with autocorrelation present; 2. Significant effect of latent processes, with autocorrelation present. Both settings are modeled by Gaussian structural processes with an underlying causal graph. Further details can be seen in Appendix C.1.
We use the false discovery rate (FDR) and the F1 score between ground-truth interacting features and recovered features as two metrics for high-dimensional causal discovery. We compare GEASS with two categories of methods: conditional independence based (CI-based) methods and Granger causality based (GC-based) methods. The first category includes VAR-LINGAM (Hyvärinen et al., 2010), PCMCI (Runge et al., 2019b), and LPCMCI (Gerhardus and Runge, 2020); among them, despite its statistical power, LPCMCI is not included in our experiment as it failed to converge within the given time in our preliminary experiments. The second category includes a neural-network based generalized vector autoregression model, GVAR (Marcinkevičs and Vogt, 2021), and GrID-net, which generalizes the definition of Granger causality to directed acyclic graphs (DAGs) (Wu et al., 2021); moreover, we include two state-of-the-art approaches, DCM and NGM, implemented in Bellot et al. (2021), which use neural ODEs to model nonlinear dependence graphs.
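For clarity, the two metrics are computed from the recovered and ground-truth feature index sets as sketched below (plain Python; the example sets are hypothetical).

```python
def feature_fdr_f1(selected, truth):
    """FDR and F1 between a recovered feature set and the ground-truth set."""
    selected, truth = set(selected), set(truth)
    tp = len(selected & truth)
    fdr = 1.0 - tp / len(selected) if selected else 0.0
    precision = tp / len(selected) if selected else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return fdr, f1

# e.g. truth = {1, 2, 3, 4, 5, 6}, as in the synthetic setting of Appendix C.1:
print(feature_fdr_f1({1, 2, 3, 7}, {1, 2, 3, 4, 5, 6}))  # (0.25, 0.6)
```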
Table 1 shows our benchmarking results. Among the alternative methods, GVAR and GrID-net fail in all settings, as they are not designed for causal feature selection. VAR-LINGAM achieves high accuracy in linear settings but fails in nonlinear settings. In contrast, PCMCI fails when latent processes contribute to both true causally interacting features and nuisance features, creating spurious correlations. Empirically, we also observe that DCM and NGM achieve comparable performance when the dynamics are linear but perform worse in the nonlinear setting, where the dynamics are more irregular. Finally, GEASS consistently gives accurate causal feature identification (high F1) and a low false discovery rate (low FDR) in all settings considered.
Table 1: FDR / F1 (mean (std)) for causal feature identification across the four synthetic settings (linear/nonlinear dynamics, weak/strong latent processes).

Method | Linear, weak: FDR / F1 | Nonlinear, weak: FDR / F1 | Linear, strong: FDR / F1 | Nonlinear, strong: FDR / F1
GVAR (GC) | .94 (.00) / .11 (.00) | .94 (.00) / .11 (.00) | .94 (.00) / .11 (.00) | .94 (.00) / .11 (.00)
GrID-net (GC) | 1.0 (.00) / .00 (.00) | 1.0 (.00) / .00 (.00) | 1.0 (.00) / .00 (.00) | 1.0 (.00) / .00 (.00)
DCM (GC) | .12 (.20) / .88 (.20) | .65 (.12) / .35 (.12) | .18 (.09) / .82 (.09) | .93 (.11) / .07 (.11)
NGM (GC) | .07 (.08) / .88 (.04) | .48 (.17) / .50 (.17) | .00 (.00) / .91 (.00) | .62 (.25) / .38 (.25)
GEASS (Ours) | .05 (.15) / .97 (.10) | .03 (.06) / .92 (.05) | .03 (.07) / .90 (.04) | .00 (.00) / .91 (.00)
Furthermore, we evaluate the different methods' scalability with respect to the feature size (experimental details can be seen in Appendix C.1.2). As described before, we anticipate high computational complexity of both conditional independence based methods and neural network based methods with respect to the feature size, which prohibits further use of these methods for high-dimensional biological data analysis, where the feature number is typically at the scale of $10^3$-$10^4$. Meanwhile, GEASS constructs a single neural network with a parameter count approximately proportional to $p$, thus largely reducing the complexity in the high-dimensional regime. We benchmark PCMCI, GVAR, GrID-net, NGM, GEASS, and additionally the combination of GEASS with a downstream CI-test based causal graph identification method, LPCMCI. Our experimental results show the superior time complexity of GEASS as well as GEASS+LPCMCI, consistent with our qualitative analysis (Figure 3).
4.2 SIMULATED SPATIAL OMICS DATA WITH CELL TYPE CONFOUNDER
In order to jointly consider spatial confounders and the corresponding autocorrelation patterns that are potentially enriched in specific niches, we consider the case of spatial omics data, where the autocorrelation is modeled by a higher likelihood of same-type cells in the neighborhood, and the confounder (nuisance features) is modeled by a coherent shift of global gene expression for each cell type. We first simulate scRNA-seq datasets; then each synthetic scRNA-seq dataset is assigned to a fixed-size grid with cell type labels simulated by an Ising model. We then add artificial genes that are spatially correlated with a given gene set of neighboring cells. Finally, each dataset is normalized and log1p-transformed following the standard Scanpy pipeline (Wolf et al., 2018).
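A minimal Gibbs-sampling sketch of the Ising-style cell-type simulation is shown below; the inverse temperature, grid size, and step count are hypothetical, as the exact simulation parameters are not specified in the text.

```python
import numpy as np

def ising_cell_types(size=50, beta=0.6, steps=200_000, seed=0):
    """Gibbs-sample a two-type Ising grid: neighboring cells are more likely
    to share a type, mimicking spatial cell-type autocorrelation."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(size, size))
    for _ in range(steps):
        i, j = rng.integers(size), rng.integers(size)
        nb = (s[(i - 1) % size, j] + s[(i + 1) % size, j]
              + s[i, (j - 1) % size] + s[i, (j + 1) % size])
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * nb))  # P(s_ij = +1 | neighbors)
        s[i, j] = 1 if rng.random() < p_up else -1
    return s  # -1 / +1 labels mark the two cell types
```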
The majority of the methods above are not applicable here, as they focus on time-series data. Therefore, in our benchmarking study we compare GEASS with Lasso Granger, as well as with our implemented L1-regularized version of NCEM, an approach proposed to detect interactions in spatial omics data (Fischer et al., 2021). Finally, we also implemented a method that maximizes the original transfer entropy to select causal features (TE).
As shown in Table 2, the original LASSO cannot identify causal features because of the strong correlation between features. L1-NCEM alleviates the issue by conditioning on cell type labels in the regression. TE outperforms the linear methods yet generates a number of false positives, as it may learn spurious causations, as discussed in Remark 3.3. Finally, GEASS consistently outperforms the other methods in identifying causal features, as shown by both a high F1 score and a low FDR.
4.3 SCRNA-SEQ PANCREATIC ENDOCRINOGENESIS TRAJECTORY
We test GEASS on the pancreatic endocrinogenesis trajectory data, a standard dataset for the scRNA-seq trajectory inference task (Bergen et al., 2020; Bastidas-Ponce et al., 2019). The pancreas trajectory data contain 3696 cells and 27998 genes. After preprocessing, lowly-expressed genes are filtered out following the standard scVelo pipeline (Bergen et al., 2020), with the remaining 2000 genes kept for further analysis. We aim to use GEASS to identify causally related genes along the developmental trajectory to reveal the underlying biology (see Appendix C.3 for experimental details).
scRNA-seq data provide a snapshot of the cell population distribution, so time-series based analysis methods cannot be directly applied. However, due to GEASS's flexible choice of the forward operator $W$, we are able to define the time flow by RNA velocity analysis. RNA velocity analysis uses the additional information of intronic RNA to infer the underlying dynamics of gene expression change. Thus, we are able to define a velocity kernel matrix $A_{velo}$, which provides weighted adjacency relationships of cells based on velocity direction and cell phenotypic proximity.
GEASS identifies 50 causally related features with high biological relevance. For example, the gene list includes the key transcriptional regulator NEUROG3, which is required for the specification of a common precursor of the 4 pancreatic terminal states (uni, 2021). As the ground-truth causal interactions here are unknown, for further quantitative validation we assume the underlying biological process is driven by a causal cascade of gene interactions, meaning target genes activated in earlier phases of the trajectory further cause downstream gene activation in later phases. In this case, the higher a gene's velocity is, the more likely the gene is associated with causal gene-gene relationships. Our benchmarking result suggests GEASS achieves the best performance in selecting genes with high mean velocity likelihood, compared with alternative gene selection schemes with fixed gene number (50), including highly expressed genes (HEG), highly variable genes (HVG), and genes having high correlation with inferred latent time (HCG) (Figure 4).
4.4 MERFISH HUMAN CORTEX SINGLE-CELL LEVEL SPATIAL TRANSCRIPTOMICS
Spatial transcriptomics represents a wide category of methods that achieve spatial profiling of gene expression in tissues (Moses and Pachter, 2022; Rao et al., 2021; Palla et al., 2022). With the additional information of spatial locations, such measurements enable deeper understanding of cellular interactions (Palla et al., 2022; Jerby-Arnon and Regev, 2022; Fischer et al., 2021). However, current computational methods revealing interaction modules (Jerby-Arnon and Regev, 2022) or niche effects (Fischer et al., 2021; Raredon et al., 2023) for spatial omics data lack causal interpretation. Applying GEASS, we aim to reveal underlying causal intercellular patterns to fully utilize the potential of spatial omics data for biological discovery.
Here we use GEASS on a recently published MERFISH dataset measuring spatially resolved single-cell gene expression of the human cortex (Fang et al., 2022). The dataset we used comprises 3044 cells and 4000 genes; each cell is annotated as one of eight cell types: excitatory neurons (EXC), inhibitory neurons (INC), astrocytes (ASC), microglial cells (MGC), oligodendrocytes (OGC), oligodendrocyte progenitor cells (OPC), endothelial cells (ENDO), and mural cells (MURAL), as shown in the first panel of Figure 6 in Appendix D. Our GEASS analysis selects 9 genes, namely FILIP1, SLC17A7, MYH11, RP11-10j21.2, PIRT, C3ORF67, TRDMT1, RGS8, and SPTLC2 (Appendix Figure 6), with further experimental details available in Appendix C.4. Among these genes, MYH11, RP11-10j21.2, and TRDMT1 are enriched in the endothelial cells adjacent to mural cells, corresponding to underlying vascular structures (marked by ellipses in the first panel of Appendix Figure 6). We next aim to verify whether their expression difference with respect to non-adjacent endothelial cells is statistically significant. Indeed, applying the Wilcoxon rank-sum test, we find significant enrichment for both MYH11 and TRDMT1, with p-values 0.003 and 0.015 respectively, while the p-value for the gene RP11-10j21.2 is not significant (0.5) due to the gene expression sparsity. The finding is consistent with the MERFISH images, which reveal rich cellular interactions between neuronal cells and the blood vessels (Fang et al., 2022). Therefore, these identified marker genes of vascular structure may encode meaningful underlying cellular interactions.
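The enrichment test used here corresponds to a standard two-sample Wilcoxon rank-sum test; a sketch with hypothetical expression arrays:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# Hypothetical expression of one gene (e.g. MYH11) in endothelial cells
# adjacent vs. non-adjacent to mural cells.
expr_adjacent = rng.lognormal(mean=0.5, sigma=1.0, size=40)
expr_distant = rng.lognormal(mean=0.0, sigma=1.0, size=120)
stat, pval = ranksums(expr_adjacent, expr_distant)
print(f"Wilcoxon rank-sum p-value: {pval:.4f}")
```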
Next, we focus on two GEASS-identified genes, C3ORF67 and PIRT, which are highly expressed at nearby spatial locations. In order to examine the possible causal relationship between the two genes, we consider three models: 1. the two genes are expressed in the same cell without spatial causal relationships; 2. the expression of C3ORF67 in each cell causes the expression of PIRT in neighboring cells (C3ORF67 → PIRT); 3. the expression of PIRT in each cell causes the expression of C3ORF67 in neighboring cells (PIRT → C3ORF67). To this end, we first compare the Pearson and Spearman p-values of the intracellular correlation (model 1), C3ORF67 to neighboring PIRT (model 2), and PIRT to neighboring C3ORF67 (model 3). Our comparison shows that, for both correlation measures, model 3 is favored (0.004, 0.001) over model 1 (0.014, 0.003) and model 2 (0.049, 0.004). The validity of model 3 (PIRT → C3ORF67) is further supported by a linear model predicting C3ORF67 expression from both the intracellular and the neighboring expression of PIRT, where the neighboring-cell effect coefficient is significant at the 0.01 confidence level by bootstrap, while the corresponding coefficient of the alternative model is not significant. Our finding is consistent with the predicted role of PIRT in transmembrane transporter binding and phosphatidylinositol-mediated signaling (Safran et al., 2021). As the role of C3ORF67 in the human cortex remains unclear, this revealed causal link may lead to novel biological discoveries upon further experimental validation.
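The bootstrap significance check for a regression coefficient can be sketched as follows; the resampling count and the percentile construction are assumptions about one standard way to carry it out.

```python
import numpy as np

def bootstrap_coef_ci(y, X, n_boot=2000, alpha=0.01, seed=0):
    """Percentile bootstrap CI for OLS coefficients of y ~ [1, X]; a
    coefficient is significant at level alpha if its CI excludes zero."""
    rng = np.random.default_rng(seed)
    Xd = np.column_stack([np.ones(len(y)), X])
    coefs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        b, *_ = np.linalg.lstsq(Xd[idx], y[idx], rcond=None)
        coefs.append(b)
    lo, hi = np.percentile(np.array(coefs),
                           [100 * alpha / 2, 100 * (1 - alpha / 2)], axis=0)
    return lo, hi

# e.g. y = C3ORF67 expression; X columns = [PIRT intracellular, PIRT neighbor]
```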
5 CONCLUSIONS
In this work, we present GEASS, a causal feature selection method based on information-theoretic tools and neural networks. GEASS is able to scale to high dimensions and identify sparse interacting features. We provide both theoretical guarantees and empirical validations of GEASS on synthetic and real biological data. Our results show GEASS can be integrated into high-dimensional spatiotemporal data analysis pipelines to provide unique insights for further findings.
Limitations. GEASS is a method designed for nonlinear causal feature selection. GEASS does not provide a causal graph itself, as it optimizes a latent embedding corresponding to different causal mechanisms. Therefore, in applications where a causal graph output is desired, constraint-based methods may need to be applied after GEASS. Moreover, when the underlying causal graph has a large number of vertices, the sparsity assumption is violated and GEASS is not guaranteed to work. Further efforts may also be taken to incorporate lag selection into GEASS.
Broader impact. We anticipate a wide use of GEASS in high-dimensional graph-structured data, especially for high-dimensional biological data such as single cell trajectories and spatial omics measurements. Applying GEASS along with causal graph identification methods to a wider range of real biological data may greatly facilitate downstream biological discoveries.
ACKNOWLEDGEMENTS
The authors thank Ofir Lindenbaum, Boaz Nadler, Yifei Min, and Ronen Basri for helpful discussions. Y.K. acknowledges support by NIH grants R01GM131642, UM1DA051410, U54AG076043, P50CA121974, and U01DA053628.
APPENDIX
A PROOFS
A.1 PROOF OF PROPOSITION 2.2.
Proposition 2.2. $\forall S_1, S_2 \subset \{1, \dots, p\}$, $\mathrm{mTE}_i(S_1, S_2) > 0 \Rightarrow \mathrm{TE}_i(S_1, S_2) > 0$.
Proof. By standard properties of mutual information (Cover, 1999) we have
$\mathrm{TE}_i(X_{S_1}, X_{S_2}) = I(X^i_{S_1}; X^{N(i)}_{S_2} \mid X^i_{S_2})$
$= I(X^i_{S_1}; X^{N(i)}_{S_2}, X^i_{S_2}) - I(X^i_{S_1}; X^i_{S_2})$
$= I(X^i_{S_1}; X^{N(i)}_{S_2}) - I(X^i_{S_1}; X^i_{S_2}) + I(X^i_{S_1}; X^i_{S_2} \mid X^{N(i)}_{S_2}).$ (9)
Therefore $\mathrm{TE}_i(S_1, S_2) \geq \mathrm{mTE}_i(S_1, S_2)$ holds, and thus $\mathrm{mTE}_i(S_1, S_2) > 0 \Rightarrow \mathrm{TE}_i(S_1, S_2) > 0$.
A.2 DISCUSSION OF ASSUMPTION A6.
Our assumption A6 is based on the concept of conditional mutual information, which aims to filter out possible indirect causal relationships.
Here are two simple examples of why TE/mTE can have problems with indirect causal interactions in the time-series setting. Consider the relationships $s_t \to w_t \to v_{t+1}$ and $s_t \to w_{t+1} \to v_{t+1}$. In both cases we may have $I(s_t; v_{t+1}) - I(s_t; v_t) > 0$ and $I(s_t; v_{t+1} \mid v_t) > 0$, although there is no direct causal relationship between $s$ and $v$. Note that in our setting we include the possibility of such indirect interactions by allowing correlation between nuisance features and true interacting features.
The issue can be resolved by considering the conditional mutual information $I(s_t; v_{t+1} \mid w_t)$ or $I(s_t; v_{t+1} \mid w_{t+1})$, which equals 0. This insight is also addressed by the concept of conditional transfer entropy:
Definition (Conditional transfer entropy) (Shahsavari Baboukani et al., 2020). Assume $X$ and $Y$ are the features of interest and the conditioning features are $Z$. Denoting by $-$ the index set $[1, 2, \dots, t]$, we have

$\mathrm{cTE}_t(X, Y, Z) = I(Y_{t+1}; X_- \mid Y_-, Z_-).$
The classical formulation of conditional transfer entropy is widely used in high-dimensional observational data to learn direct causal dependencies (Faes et al., 2016; Shahsavari Baboukani et al., 2020). It implicitly assumes that there is a direct causal relationship between $X$ and $Y$ if $\forall Z, t$, $\mathrm{cTE}_t(X, Y, Z) > 0$. Here, we extend this assumption in the context of conditional mTE, covering both examples described above. The conditional mTEs are defined in analogy to cTE for generalized graph-structured data in the Markovian model setting:
Definition (Two forms of conditional mTE). Assume $X$ and $Y$ are the feature sets of interest and the conditioning features are $Z$. Then

$\mathrm{cmTE}^1_i(X, Y, Z) = I(X^i; Y^{N(i)} \mid Z^i) - I(X^i; Y^i \mid Z^i);$
$\mathrm{cmTE}^2_i(X, Y, Z) = I(X^i; Y^{N(i)} \mid Z^{N(i)}) - I(X^i; Y^i \mid Z^i).$
By requiring the two forms of conditional mTE to be larger than zero, we rule out both possibilities $X^i \to Z^i \to Y^{N(i)}$ and $X^i \to Z^{N(i)} \to Y^{N(i)}$, as mTE is a stricter version of the original transfer entropy, as discussed in Proposition 2.2. In summary, A6 can be reformulated as $\forall Z, i$: $\mathrm{cmTE}^1_i(X, Y, Z) > 0$ and $\mathrm{cmTE}^2_i(X, Y, Z) > 0$ for ground-truth interacting $X, Y$ in non-degenerate cases, where $Z$ does not fully overlap with $X$/$Y$ at the same point.
A.3 PROOF OF THEOREM 2.4.
Theorem 2.4. Given A1-A6, $S^* := S_1^* \cup S_2^* \subseteq \{1, \dots, m\}$ (the index set of true interacting features described in A5). Moreover, each feature in $S^*$ is connected to other features in the set $S^*$.
Proof. Step 1. First we prove $S_1^* \cap S_2^* = \emptyset$. If not, assume $p$ is an overlapping element. For simplicity, we denote $N(i) := \{j \mid (i, j) \in E\}$, $A = X_{S_1^*}$, $B = X_{S_2^*}$. Then we have
$\mathrm{mTE}(S_1^*, S_2^*) - \mathrm{mTE}(S_1^* \setminus p, S_2^*)$
$= I(A^i \setminus p^i, p^i; B^{N(i)} \setminus p^{N(i)}, p^{N(i)}) - I(A^i \setminus p^i, p^i; B^i \setminus p^i, p^i) - I(A^i \setminus p^i; B^{N(i)} \setminus p^{N(i)}, p^{N(i)}) + I(A^i \setminus p^i; B^i \setminus p^i, p^i)$
$= I(p^i; B^{N(i)} \setminus p^{N(i)}, p^{N(i)} \mid A^i \setminus p^i) - I(p^i; B^i \setminus p^i, p^i \mid A^i \setminus p^i) < 0.$ (10)
Therefore removing $p$ would increase the value of mTE, leading to a contradiction.
Step 2. Now we prove that nuisance signals cannot be in either $S_1^*$ or $S_2^*$. Otherwise, first assume a set of nuisance signals $U$ is in $S_1^*$. Here we denote $A := X_{S_1^*}$, $B := X_{S_2^*}$. As $U$ only interacts with variables at the same time point, $U$ can only interact with $B^{N(i)}$ via indirect links through a subset of interacting features at $i$. Denote this feature set as $\mathrm{Pa}_U(B)^i \subseteq \{t^i_1, \dots, t^i_m\}$, and the difference set $\mathrm{Pa}^-_U(B)^i := \mathrm{Pa}_U(B)^i \setminus B^i$. We first note that $\mathrm{Pa}^-_U(B)^i$ cannot be an empty set. Otherwise, denoting $S_1 := S_1^* \setminus U$ and noting the non-overlap between $A$ and $B$, we would have
$\mathrm{mTE}(S_1^*, S_2^*) - \mathrm{mTE}(S_1, S_2^*)$
$= I(A^i \setminus U^i, U^i; B^{N(i)}) - I(A^i \setminus U^i, U^i; B^i) - I(A^i \setminus U^i; B^{N(i)}) + I(A^i \setminus U^i; B^i)$
$= I(U^i; B^{N(i)} \mid A^i \setminus U^i) - I(U^i; B^i \mid A^i \setminus U^i)$
$= -h(U^i \mid B^{N(i)}, A^i \setminus U^i) + h(U^i \mid B^i, A^i \setminus U^i)$
$\leq -h(U^i \mid B^{N(i)}, A^i \setminus U^i) + h(U^i \mid \mathrm{Pa}_U(B)^i, A^i \setminus U^i)$ (conditioning reduces entropy)
$\leq 0.$ (11)
This means that $(S_1, S_2^*)$'s mTE is not smaller than $(S_1^*, S_2^*)$'s while having a smaller union size, leading to a contradiction. Then, because $\mathrm{Pa}^-_U(B)$ does not overlap with either $U$ or $B$, with A6 we have
$\mathrm{mTE}(S_1^*, S_2^*) - \mathrm{mTE}(S_1^* \cup \mathrm{Index}(\mathrm{Pa}^-_U(B)), S_2^*)$
$= I(A^i \setminus U^i, U^i; B^{N(i)}) - I(A^i \setminus U^i, U^i; B^i) - I(A^i \setminus U^i, U^i, \mathrm{Pa}^-_U(B)^i; B^{N(i)}) + I(A^i \setminus U^i, U^i, \mathrm{Pa}^-_U(B)^i; B^i)$
$= I(\mathrm{Pa}^-_U(B)^i; B^i \mid A^i) - I(\mathrm{Pa}^-_U(B)^i; B^{N(i)} \mid A^i) \overset{A6}{\leq} 0.$ (12)
The equality above holds iff $\mathrm{Pa}^-_U(B)^i \subseteq A^i$. Further, we have
$\mathrm{mTE}(S_1^* \cup \mathrm{Index}(\mathrm{Pa}^-_U(B)), S_2^*) - \mathrm{mTE}(S_1 \cup \mathrm{Index}(\mathrm{Pa}^-_U(B)), S_2^*)$
$= I(A^i \setminus U^i, U^i, \mathrm{Pa}^-_U(B)^i; B^{N(i)}) - I(A^i \setminus U^i, U^i, \mathrm{Pa}^-_U(B)^i; B^i) - I(A^i \setminus U^i, \mathrm{Pa}^-_U(B)^i; B^{N(i)}) + I(A^i \setminus U^i, \mathrm{Pa}^-_U(B)^i; B^i)$
$= I(U^i; B^{N(i)} \mid \mathrm{Pa}^-_U(B)^i, A^i \setminus U^i) - I(U^i; B^i \mid \mathrm{Pa}^-_U(B)^i, A^i \setminus U^i)$
$= -h(U^i \mid B^{N(i)}, \mathrm{Pa}^-_U(B)^i, A^i \setminus U^i) + h(U^i \mid B^i, \mathrm{Pa}^-_U(B)^i, A^i \setminus U^i)$
$\leq -h(U^i \mid B^{N(i)}, \mathrm{Pa}^-_U(B)^i, A^i \setminus U^i) + h(U^i \mid \mathrm{Pa}_U(B)^i, A^i \setminus U^i) \leq 0.$ (13)

Therefore, in all possible cases, $\mathrm{mTE}(S_1 \cup \mathrm{Index}(\mathrm{Pa}^-_U(B)^i), S_2^*)$ is either strictly larger than $\mathrm{mTE}(S_1^*, S_2^*)$, or equal to $\mathrm{mTE}(S_1^*, S_2^*)$ but with a smaller union size, leading to a contradiction.
Next, given the result above, assume a nuisance signal set $U$ is in $S_2^*$, and $S_1^*$ does not include any nuisance features. Then, as $U$ only interacts with variables at the same time point, $U^{N(i)}$ can only interact with $S_1^*$ via indirect links through a subset of interacting features at $N(i)$. Denote the whole intermediate feature set for $A$ as $\mathrm{Ch}_U(A)^{N(i)} \subseteq \{t^{N(i)}_1, \dots, t^{N(i)}_m\}$, and $\mathrm{Ch}^-_U(A)^{N(i)} := \mathrm{Ch}_U(A)^{N(i)} \setminus A^{N(i)}$. Then, as above, denoting $S_2 = S_2^* \setminus U$, if $\mathrm{Ch}^-_U(A)$ were an empty set we would have
$\mathrm{mTE}(S_1^*, S_2^*) - \mathrm{mTE}(S_1^*, S_2)$
$= I(A^i; B^{N(i)} \setminus U^{N(i)}, U^{N(i)}) - I(A^i; B^i \setminus U^i, U^i) - I(A^i; B^{N(i)} \setminus U^{N(i)}) + I(A^i; B^i \setminus U^i)$
$= I(A^i; U^{N(i)} \mid B^{N(i)} \setminus U^{N(i)}) - I(A^i; U^i \mid B^i \setminus U^i)$
$= -h(U^{N(i)} \mid B^{N(i)} \setminus U^{N(i)}, A^i) + h(U^i \mid B^i \setminus U^i, A^i)$
$\leq -h(U^{N(i)} \mid B^{N(i)} \setminus U^{N(i)}, A^i) + h(U^i \mid \mathrm{Ch}_U(A)^i, B^i \setminus U^i)$
$\leq 0.$ (14)
The above derivation holds due to stationarity (as $|N(i)| \equiv 1$ in the time-series setting). Therefore $\mathrm{Ch}^-_U(A)$ cannot be an empty set. Because of the non-overlap between $\mathrm{Ch}^-_U(A)$ and either $A$ or $U$, with A6 we have
$\mathrm{mTE}(S_1^*, S_2^*) - \mathrm{mTE}(S_1^*, S_2 \cup \mathrm{Index}(\mathrm{Ch}^-_U(A)))$
$= I(A^i; B^{N(i)} \setminus U^{N(i)}, U^{N(i)}) - I(A^i; B^i \setminus U^i, U^i) - I(A^i; B^{N(i)} \setminus U^{N(i)}, U^{N(i)}, \mathrm{Ch}^-_U(A)^{N(i)}) + I(A^i; B^i \setminus U^i, U^i, \mathrm{Ch}^-_U(A)^i)$
$= I(A^i; \mathrm{Ch}^-_U(A)^i \mid B^i) - I(A^i; \mathrm{Ch}^-_U(A)^{N(i)} \mid B^{N(i)}) \overset{A6}{\leq} 0.$ (15)
The equality above holds iff $\mathrm{Ch}^-_U(A)^i \subseteq B^i$. Further, we have
$\mathrm{mTE}(S_1^*, S_2^* \cup \mathrm{Index}(\mathrm{Ch}^-_U(A))) - \mathrm{mTE}(S_1^*, S_2 \cup \mathrm{Index}(\mathrm{Ch}^-_U(A)))$
$= I(A^i; B^{N(i)} \setminus U^{N(i)}, U^{N(i)}, \mathrm{Ch}^-_U(A)^{N(i)}) - I(A^i; B^i \setminus U^i, U^i, \mathrm{Ch}^-_U(A)^i) - I(A^i; B^{N(i)} \setminus U^{N(i)}, \mathrm{Ch}^-_U(A)^{N(i)}) + I(A^i; B^i \setminus U^i, \mathrm{Ch}^-_U(A)^i)$
$= I(A^i; U^{N(i)} \mid B^{N(i)} \setminus U^{N(i)}, \mathrm{Ch}^-_U(A)^{N(i)}) - I(A^i; U^i \mid B^i \setminus U^i, \mathrm{Ch}^-_U(A)^i) \leq 0.$ (16)

Therefore, in all possible cases, $\mathrm{mTE}(S_1^*, S_2 \cup \mathrm{Index}(\mathrm{Ch}^-_U(A)))$ is either strictly larger than $\mathrm{mTE}(S_1^*, S_2^*)$, or equal to $\mathrm{mTE}(S_1^*, S_2^*)$ but with a smaller union size, leading to a contradiction.
Step 3. Moreover, suppose there exists a component in $S_1^* \cup S_2^*$ not connected to any other feature components, and denote this feature as $q$. In this case, with A1-A4, the feature $q$ is independent of all other features in $S_1^* \cup S_2^*$. From Step 1 it can be deduced that $q$ cannot be in both $S_1^*$ and $S_2^*$. Therefore, we have $\mathrm{mTE}(S_1^* \setminus q, S_2^* \setminus q) = \mathrm{mTE}(S_1^*, S_2^*)$, leading to the contradiction of finding an $(S_1, S_2)$ with the same mTE but smaller $|S_1 \cup S_2|$.
A.4 PROOF OF THEOREM 3.1.
Theorem 3.1. Assume A1-A6 hold and $f, g, h$ define one-to-one mappings on $X \odot 1_{S_1}$ (for $f$) or $X \odot 1_{S_2}$ (for $g, h$). Then $\exists \lambda > 0$ such that for (4), any solution $(S_1^*, S_2^*)$ satisfies $S^* := S_1^* \cup S_2^* \subseteq \{1, \dots, m\}$. Moreover, each feature in $S^*$ is connected to other features in the set.

$\min_{f,g,h,S_1,S_2} \; -\left(I(f(x^i \odot 1_{S_1}); h(x^{N(i)} \odot 1_{S_2})) - I(f(x^i \odot 1_{S_1}); g(x^i \odot 1_{S_2}))\right) + \lambda |S_1 \cup S_2|$
Proof. With A4 (ergodicity and stationarity), the optimization problem (4) is equivalent to

$\min_{f,g,h,S_1,S_2} \; -\left(I(f(x^i_{S_1}); h(x^{N(i)}_{S_2})) - I(f(x^i_{S_1}); g(x^i_{S_2}))\right) + \lambda |S_1 \cup S_2|.$ (17)
Given the assumption that $f, g, h$ define injective mappings on $x^i_{S_1}, x^i_{S_2}$ respectively, and since one-to-one transformations do not change mutual information, the optimization problem is equivalent to

$\min_{S_1,S_2} \; -\left(I(x^i_{S_1}; x^{N(i)}_{S_2}) - I(x^i_{S_1}; x^i_{S_2})\right) + \lambda |S_1 \cup S_2|.$ (18)
Using Theorem 2.4, a minimizer of the mTE term with the smallest union size satisfies $S^* := S_1^* \cup S_2^* \subseteq \{1, \dots, m\}$. Moreover, each feature in $S_1^* \cup S_2^*$ is connected to other features in the set. Note that, with our definition of the optimal $S_1, S_2$, the minimal gap between $\mathrm{mTE}(S_1^*, S_2^*)$ and any other value $\mathrm{mTE}(S_1, S_2)$ with smaller $|S_1 \cup S_2|$ is larger than zero. Denote the minimal gap as $\delta$ and take $\lambda < \frac{\delta}{|S_1^* \cup S_2^*|}$; then for these other solutions we have

$-\mathrm{mTE}(S_1, S_2) + \lambda |S_1 \cup S_2| \geq -\mathrm{mTE}(S_1^*, S_2^*) + \delta + \lambda |S_1 \cup S_2| \geq -\mathrm{mTE}(S_1^*, S_2^*) + \delta > -\mathrm{mTE}(S_1^*, S_2^*) + \lambda |S_1^* \cup S_2^*|.$ (19)
Meanwhile, for $(S_1, S_2)$ with larger union size, by the definition of the mTE we have

$-\mathrm{mTE}(S_1, S_2) + \lambda |S_1 \cup S_2| \geq -\mathrm{mTE}(S_1^*, S_2^*) + \lambda |S_1 \cup S_2| = -\mathrm{mTE}(S_1^*, S_2^*) + \lambda (|S_1 \cup S_2| - |S_1^* \cup S_2^*|) + \lambda |S_1^* \cup S_2^*| > -\mathrm{mTE}(S_1^*, S_2^*) + \lambda |S_1^* \cup S_2^*|.$ (20)
Therefore, when taking $\lambda \in (0, \frac{\delta}{|S_1^* \cup S_2^*|})$, the desired optimal $S_1, S_2$ for mTE is the optimal output of the constructed optimization problem.
A.5 PROOF OF THEOREM 3.4.
Theorem 3.4. Assume A1-A6 hold and $f, g, h$ are one-to-one Gaussian embeddings as described above. For the optimal solution of (5), denote a sample of the stochastic gates as $T^1, T^2$ and denote the ground-truth interacting feature set as $S$. Then there exist $\lambda_1, \lambda_2 > 0$ for (5) such that, as $n \to \infty$,

$\forall i \in \{0, 1\}, \; P(B_i \subseteq S) \xrightarrow{a.s.} 1, \quad \text{where } B_i := \{d \mid T^1_d > 0, T^2_d = i\}.$
Proof. In the following proof, for simplicity we denote $\tilde{x}_{S_1} = x \odot T^1 \odot T^2$ and $\tilde{x}_{S_2} = x \odot T^1 \odot (1 - T^2)$.

Step 1. Given that $f, g, h$ project input distributions into joint Gaussian distributions of fixed dimensionality, by the convergence of Gaussian covariance matrices we have:
$\hat{\Sigma}(f(\tilde{x}^i_{S_1}), h(\tilde{x}^{N(i)}_{S_2})) = \frac{1}{n}\sum_{i=1}^{n} [f(\tilde{x}^i_{S_1}); h(\tilde{x}^{N(i)}_{S_2})][f(\tilde{x}^i_{S_1}); h(\tilde{x}^{N(i)}_{S_2})]^T \xrightarrow{a.s.} \Sigma_{f(\tilde{x}^i_{S_1}), h(\tilde{x}^{N(i)}_{S_2})};$
$\hat{\Sigma}(f(\tilde{x}^i_{S_1}), g(\tilde{x}^i_{S_2})) = \frac{1}{n}\sum_{i=1}^{n} [f(\tilde{x}^i_{S_1}); g(\tilde{x}^i_{S_2})][f(\tilde{x}^i_{S_1}); g(\tilde{x}^i_{S_2})]^T \xrightarrow{a.s.} \Sigma_{f(\tilde{x}^i_{S_1}), g(\tilde{x}^i_{S_2})}.$ (21)
Since in the Gaussian case the mutual information between jointly Gaussian random variables is a function of the covariance matrix, we have
$\hat{I}(f(\tilde{x}^i_{S_1}); h(\tilde{x}^{N(i)}_{S_2})) \xrightarrow{a.s.} I(f(\tilde{x}^i_{S_1}); h(\tilde{x}^{N(i)}_{S_2})) = I(\tilde{x}^i_{S_1}; \tilde{x}^{N(i)}_{S_2});$
$\hat{I}(f(\tilde{x}^i_{S_1}); g(\tilde{x}^i_{S_2})) \xrightarrow{a.s.} I(f(\tilde{x}^i_{S_1}); g(\tilde{x}^i_{S_2})) = I(\tilde{x}^i_{S_1}; \tilde{x}^i_{S_2});$
$P(\lim_{N \to \infty} \widehat{\mathrm{mTE}} = \mathrm{mTE}) = 1.$ (22)
Step 2. Importantly, in our formulation (5), $T^1, T^2$ are sampled once per epoch, meaning they are fixed across features for computing the mTE. Further note that $\sum_{d=1}^{p} P(T^1_d > 0) = \mathbb{E}\|T^1\|_0$ and $\sum_{d=1}^{p} P(T^2_d \in (0, 1)) = \mathbb{E}\|1_{T^2 \in (0,1)}\|_0$. Denoting the value of (5) as $L$, we have
$L \xrightarrow{a.s.} \mathbb{E}_{T^1, T^2}\left[-\mathrm{mTE}(1_{T^1 \odot T^2 > 0}, 1_{T^1 \odot (1 - T^2) > 0}) + \lambda_1 \|T^1\|_0 + \lambda_2 \|1_{T^2 \in (0,1)}\|_0\right]$
$\geq \min_{T^1, T^2} -\mathrm{mTE}(1_{T^1 \odot T^2 > 0}, 1_{T^1 \odot (1 - T^2) > 0}) + \lambda_1 \|T^1\|_0 + \lambda_2 \|1_{T^2 \in (0,1)}\|_0.$ (23)
Note that, by Step 1 of the proof of Theorem 2.4, for any $T^1$ we have

$-\mathrm{mTE}(1_{T^1 \odot T^2 > 0}, 1_{T^1 \odot (1 - T^2) > 0}) + \lambda_1 \|T^1\|_0 + \lambda_2 \|1_{T^2 \in (0,1)}\|_0 \geq -\mathrm{mTE}(1_{T^1 \odot T^2 > 0}, 1_{T^1 \odot (1 - T^2) > 0}) + \lambda_1 \|T^1\|_0,$ (24)
where equality is attained when $\forall d$, $P(T^2_d = 1) \in \{0, 1\}$ and $P(T^2_d = 0) = 1 - P(T^2_d = 1)$. In this case,

$\|T^1\|_0 = \|T^1 \odot T^2\|_0 + \|T^1 \odot (1 - T^2)\|_0.$
Applying Theorem 3.1, with $\lambda_1 = \lambda$ as in Theorem 3.1 we have

$\min_{T^1, T^2} -\mathrm{mTE}(1_{T^1 \odot T^2 > 0}, 1_{T^1 \odot (1 - T^2) > 0}) + \lambda_1 \|T^1\|_0 + \lambda_2 \|1_{T^2 \in (0,1)}\|_0 = -\mathrm{mTE}(S_1^*, S_2^*) + \lambda |S_1^* \cup S_2^*| := L^*.$ (25)
Here $(S_1^*, S_2^*)$ satisfies the properties described in Theorem 3.1. The minimizer may not be unique; denote the set containing all minimizers as $\{(S_1^*, S_2^*)\}$. Then the equality in (23) holds if and only if $P((1_{T^1 \odot T^2 > 0}, 1_{T^1 \odot (1 - T^2) > 0}) \in \{(S_1^*, S_2^*)\}) = 1$. Further noting that $\forall d$, $P(T^2_d = 1) \in \{0, 1\}$, and that our analysis above holds as $n \to \infty$ with probability 1 by almost-sure convergence, we finally have

$P(\lim_{N \to \infty} P(B_1 \subseteq S) = 1) = 1; \quad P(\lim_{N \to \infty} P(B_0 \subseteq S) = 1) = 1.$
B GATE INITIALIZATION
Our proposed initialization scheme is based on an analysis of the linear case. Assume

$f(X_{S_1}) = Xa, \quad g(X_{S_2}) = Xb,$

where $a, b \in \mathbb{R}^p$ represent two feature loadings. Then:
1. $a, b$ should be non-overlapping; therefore we expect $|a^T b|$ to be small.
2. We should have $f(X) \approx W g(X)$ to maximize the mTE.
The constraint can be formulated as a regression problem $WXb = Xa$, so a natural solution is given by $a = X^{\dagger} W X b = (X^T X)^{-1} X^T W X b$. In this case, $|a^T b| = |b^T (X^T X)^{-1} X^T W X b| = \|b\|^2_{(X^T X)^{-1} X^T W X}$. Given that $b$ is normalized, it can be shown that the optimal $b$ corresponds to the eigenvector with the smallest absolute eigenvalue of the matrix $(X^T X)^{-1} X^T W X$.
After obtaining $a, b$, we select a quantile threshold over $a/(a + b)$ to initialize the second stochastic gate layer. The first stochastic gate layer is initialized with uniform weights.
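A numpy sketch of this initialization is given below; the quantile level is left as a parameter, and the jitter in the ratio is a hypothetical safeguard against division by zero.

```python
import numpy as np

def init_second_gate(X, W, quantile=0.5):
    """b = eigenvector of M = (X^T X)^{-1} X^T W X with smallest |eigenvalue|,
    a = X^+ W X b = M b; threshold a / (a + b) to initialize the second gate."""
    M = np.linalg.solve(X.T @ X, X.T @ (W @ X))
    eigvals, eigvecs = np.linalg.eig(M)
    b = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
    a = M @ b
    ratio = a / (a + b + 1e-12)
    return (ratio > np.quantile(ratio, quantile)).astype(float)
```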
C EXPERIMENTAL DETAILS
C.1 TIME-SERIES BENCHMARKING STUDY
In this study the causal processes are simulated with the Python package Tigramite. Among the 100 features in total, there are 6 interacting features {1, 2, 3, 4, 5, 6}. The causal links are: 1→2 with time lag 2, 2→3 with time lag 1, 5→4 with time lag 1, 1→5 with time lag 1, and 3→6 with time lag 3. These features also have autocorrelations with time lags ranging from 1 to 3. There is also a latent confounder, modeled by Tigramite, interacting with feature 0 and feature 2. In the case of a strong latent process, the latent confounder also affects 43 other features. All other features (93 in the weak-latent case / 50 in the strong-latent case) are nuisance features with white-noise dynamics. The forward operator is defined by a 5-neighbor lower-triangular matrix.
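A sketch of such a banded forward operator is shown below; which side of the diagonal the band sits on depends on the indexing convention (the text calls it lower-triangular), so the direction is exposed as a flag.

```python
import numpy as np

def banded_forward_operator(n, k=5, future=True):
    """k-neighbor banded operator for a length-n series: row t averages the
    k adjacent time points (t+1..t+k if future=True, else t-k..t-1)."""
    A = np.zeros((n, n))
    for t in range(n):
        nbrs = range(t + 1, min(t + k + 1, n)) if future else range(max(t - k, 0), t)
        A[t, list(nbrs)] = 1.0
    deg = A.sum(axis=1, keepdims=True)
    return np.divide(A, deg, out=np.zeros_like(A), where=deg > 0)
```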
C.1.1 ALGORITHM IMPLEMENTATION
If not particularly mentioned, default settings of the algorithms are used throughout.
• VAR-LINGAM. The VAR-LINGAM algorithm is implemented in the Python package LINGAM, available at https://github.com/cdt15/lingam. VAR-LINGAM gives a weighted matrix as output; therefore, in our benchmarking study we choose the features corresponding to the most significant edges, with the number of features matching the sparsity level.
• PCMCI. The PCMCI algorithm is implemented in the Python package Tigramite, which gives a weighted matrix as output. We choose the features corresponding to the most significant edges, with the number of features matching the sparsity level.
• GVAR. The GVAR algorithm is implemented at https://github.com/i6092467/GVAR. The sparsity parameter is set to 1. We use the stable training option in GVAR, which trains on the first and second halves of the time series respectively to optimize the edge-selection sparsity level and then trains on the whole time series, giving a binary output so that no threshold selection is needed.
• GrID-net. The GrID-net algorithm is implemented at https://github.com/alexw16/gridnet. The parameter set order=5, hidden_layer_size=10, end_epoch=50, batch_size=50, lmbd=1 is used throughout our study. After training finishes, we choose the features corresponding to the most significant edges, with the number of features matching the sparsity level.
• DCM, NGM. Both algorithms are implemented at https://github.com/alexisbellot/Graphical-modelling-continuous-time. For DCM the default setting is used, and we use hidden dim = 10 for NGM. After training finishes, we choose the features corresponding to the most significant edges, with the number of features matching the sparsity level.
• GEASS. We use the same training parameters in all time-series settings, with the key sparsity regularization parameter λ1 set to 0.04/0.05 based on a validation set; the remaining parameter settings are the defaults.
C.1.2 SCALABILITY ANALYSIS
We test the running times of PCMCI, GVAR, GrID-net, NGM, GEASS, and GEASS+LPCMCI with settings consistent with those described in the section above (LPCMCI's setting is consistent with PCMCI's). We use the same data generation pipeline and vary the total feature number over [100, 200, 400, 800, 1600].
C.2 SIMULATED SPATIAL OMICS DATA BENCHMARKING STUDY
In this study the spatial omics data are simulated with the Python package Scsim (Kotliar et al., 2019). 1000 genes are simulated in total, of which 990 genes are expressed cell-type-specifically. The remaining 10 genes each have a functional relationship (linear/nonlinear) with one cell-type-specific gene, plus a noise term, in order to model the cell-type-specific interactions. The data are then normalized and log-transformed according to the standard Scanpy pipeline (Wolf et al., 2018). The forward operator is defined by a 4-neighbor adjacency matrix.
C.2.1 ALGORITHM IMPLEMENTATION
If not particularly mentioned, default settings of the algorithms are used throughout.
• Lasso Granger. The Lasso algorithm is implemented via Scipy with a tuned α (0.12) to match the sparsity level.
• NCEM. NCEM (linear) is a linear graph neural network, which in the grid case corresponds to a standard linear regression based on neighbors and the cell type label. Based on the original work, we implemented an equivalent version by Lasso regression with α = 0.019 to match the sparsity level.
• GEASS. We use the same training parameters in all settings, with the key sparsity regularization parameter λ1 set to 0.02 based on a validation set, and the latent dimension number set to 64.
• TE. To give a fair comparison, we use the same architecture as GEASS except that the loss function is changed. We use the same training parameters in all settings, with the key sparsity regularization parameter λ1 set to 0.05 based on a validation set, and the latent dimension number set to 64, consistent with GEASS.
C.3 SCRNA-SEQ PANCREAS TRAJECTORY
The data preprocessing is consistent with the scVelo tutorial (https://scvelo.readthedocs.io/VelocityBasics/; Bergen et al., 2020). The parameter set is λ1 = 0.06, λ2 = 0.1. Because the gene regulatory network here is fully connected and activated in cascade along the developmental trajectory, we consider the opposite initialization, with $b$ taken as the eigenvector corresponding to the largest eigenvalue of the matrix $(X^T X)^{-1} X^T W X$.
C.4 MERFISH SPATIAL TRANSCRIPTOMICS DATA
The data are downloaded from Dryad and preprocessed with the standard Scanpy pipeline (Wolf et al., 2018): we first normalize and log-transform the data and then select 1000 highly variable genes, using the default Scanpy functions. The forward operator is defined by a 5-neighbor adjacency matrix. The GEASS parameter set is consistent with that used in the spatial omics benchmarking.
D ADDITIONAL EXPERIMENTAL RESULTS | 1. What is the focus and contribution of the paper regarding causal feature selection?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and theoretical analysis?
3. What are the weaknesses of the paper, especially regarding the model and its interpretability?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper identifies causally interacting features of high-dimensional temporal/spatial data by considering the sparsity of underlying causal mechanisms instead of link sparsity, which allows selecting the critical features for downstream causal discovery. Theoretical studies are provided, and empirical evaluations on synthetic and real biological data show its superiority over competing methods.
Strengths And Weaknesses
Strengths:
1. This paper proposes a novel solution to the causal feature selection problem in general graph-structured data.
2. Theoretical analysis is provided.
3. Empirical results show superior performance.
Weaknesses:
1. I think the current version takes too much space on the theoretical analysis while neglecting the model itself. Figure 2 is not clear to me.
2. For the MERFISH data experiment, what is the reason to downsample the gene number from 4000 to 1000? In real situations, the gene number can be in the millions. If the proposed method cannot deal with high-dimensional features, then scalability will be an issue.
3. What are the criteria for GEASS to select 9 genes in the MERFISH data? Can the authors elaborate more / give insight on interpreting the selection of this subset for biological discovery?
Clarity, Quality, Novelty And Reproducibility
The code is not provided. See above for my questions. |
ICLR | Title
Human Perception-based Evaluation Criterion for Ultra-high Resolution Cell Membrane Segmentation
Abstract
Computer vision technology is widely used in biological and medical data analysis and understanding. However, there are still two major bottlenecks in the field of cell membrane segmentation, which seriously hinder further research: the lack of sufficient high-quality data and the lack of suitable evaluation criteria. In order to solve these two problems, this paper first introduces an Ultra-high Resolution Image Segmentation dataset for the Cell membrane, called U-RISC, the largest annotated Electron Microscopy (EM) dataset for the cell membrane, with multiple iterative annotations and uncompressed high-resolution raw data. During the analysis of U-RISC, we found that the currently popular segmentation evaluation criteria are inconsistent with human perception. This interesting phenomenon is confirmed by a subjective experiment involving twenty people. Furthermore, to resolve this inconsistency, we propose a new evaluation criterion called Perceptual Hausdorff Distance (PHD) to measure the quality of cell membrane segmentation results. A detailed performance comparison and discussion of classic segmentation methods, along with two iterative manual annotation results, under existing evaluation criteria and PHD is given.
1 INTRODUCTION
Electron Microscopy (EM) is a powerful tool to explore ultra-fine structures in biological tissues, and it has been widely used in the research areas of medicine and biology (ERLANDSON (2009); Curry et al. (2006); Harris et al. (2006)). In recent years, EM techniques have pioneered an emerging field called "Connectomics" (Lichtman et al. (2014)), which aims to scan and reconstruct whole-brain circuitry at the nanoscale. "Connectomics" has played a key role in several ambitious projects, including the BRAIN Initiative (Insel et al. (2013)) and MICrONS (Gleeson & Sawyer (2018)) in the U.S., Brain/MINDS in Japan (Dando (2020)), and the China Brain Project (Poo et al. (2016)). Because EM scans brain slices at the nanoscale, it produces massive images with ultra-high resolution and inevitably leads to an explosion of data. However, compared to the advances of EM, techniques for data analysis fall far behind. In particular, how to automatically extract information from massive raw data to reconstruct the circuitry map has increasingly become the bottleneck of EM applications.
One critical step in automatic EM data analysis is membrane segmentation. With the introduction of deep learning techniques, significant improvements have been achieved on several publicly available EM datasets, ISBI 2012 and SNEMI3D (ISBI 2012 (2012); ISBI 2013 (2013); Arganda-Carreras et al. (2015b); Lee et al. (2017)). One of the earliest works (Ciresan et al. (2012)) used a succession of max-pooling convolutional networks as a pixel classifier, which estimated the probability that a pixel is a membrane. Ronneberger et al. (2015) presented a U-Net structure with contracting paths, which captures multi-contextual information. Fully convolutional networks (FCNs), proposed by Long et al. (2015), led to a breakthrough in semantic segmentation. Follow-up works based on the U-Net and FCN structures (Xie & Tu (2015); Drozdzal et al. (2016); Hu et al. (2018); Zhou et al. (2018); Chaurasia & Culurciello (2017); Yu et al. (2017); Chen et al. (2019b)) have also achieved outstanding results with near-human performance.
Despite the progress that has been made in cell membrane segmentation for EM data thanks to deep learning, one risk for these popular and classic methods is that they might be "saturated" on the current datasets, as their performance appears to be "exceedingly accurate" (Lee et al. (2017)). How can these classic deep learning based segmentation methods work on new EM datasets with higher resolution and perhaps more challenges? Moreover, how robust are these methods when they are compared with human performance on such EM images?
To expand research on membrane segmentation to more comprehensive EM data, we first established a dataset, U-RISC, containing images at their original resolution (10000 × 10000 pixels, Fig. 1). To ensure the quality of annotation, we spent over 10,000 labor hours labeling and double-checking the data. To the best of our knowledge, U-RISC is the largest uncompressed annotated EM dataset to date. Next, we tested several classic deep learning based segmentation methods on U-RISC and compared the results to human performance. We found that the performance of these methods was much lower than that of even the first round of annotation. To understand why human perception is better than the popular segmentation methods, we examined their membrane segmentation results in detail. How to measure the similarity between two image segmentation results has been widely discussed (Yeghiazaryan & Voiculescu (2018); Niessen et al. (2000); Veltkamp & Hagedoorn (2000); Lee et al. (2017)). Yeghiazaryan & Voiculescu (2018) discussed the family of boundary overlap metrics for the evaluation of medical image segmentation. Veltkamp & Hagedoorn (2000) formulated and summarized similarity measures in a more general setting. Some challenges, such as ISBI 2012 (Arganda-Carreras et al. (2015a)), also consider multiple metrics, such as Rand scores on both original and thinned images. However, we found a certain inconsistency between the currently most popular evaluation criteria for segmentation (e.g., F1 score, IoU) and human perception: some figures rated significantly lower in F1 score or IoU were nevertheless “perceived” as better by humans (Fig. 4).
This inconsistency motivated us to propose a human-perception based criterion, the Perceptual Hausdorff Distance (PHD), to evaluate segmentation quality. We then set up a subjective experiment to collect human judgments of membrane segmentations and found that the PHD criterion is more consistent with human choices than traditional evaluation criteria. Finally, we found that the currently popular and classic segmentation methods need to be revisited under the PHD criterion.
Overall, our contribution in this work is twofold: (1) we established the largest original-resolution EM dataset for training and testing; (2) we proposed a human-perception based evaluation criterion, PHD, and verified its superiority through subjective experiments. The dataset and the PHD criterion may help researchers gain insight into the difference between human perception and conventional evaluation criteria, and thus motivate further segmentation methods that catch up with human performance on original-resolution EM images.
2 U-RISC: ULTRA-HIGH RESOLUTION IMAGE SEGMENTATION DATASET FOR CELL MEMBRANE
Supervised learning methods rely heavily on high-quality datasets. To alleviate the lack of training data for cell membrane segmentation, we propose U-RISC, an Ultra-high Resolution Image Segmentation dataset for Cell membranes. The dataset was annotated upon RC1, a large-scale retinal serial-section transmission electron microscopy (ssTEM) dataset, publicly available upon request and described in Anderson et al. (2011). The original RC1 dataset is a 0.25 mm diameter, 370-slice TEM volume spanning the inner nuclear, inner plexiform, and ganglion cell layers, acquired at 2.18 nm/pixel across both axes with 70 nm slice thickness along the z-axis. From the 370-section volume, we clipped 120 images of 10000 × 10000 pixels from randomly chosen sections. We then manually annotated the cell membranes in an iterative annotation-correction procedure. Since the human labeling process is valuable for studying how humans learn this task, we preserved the intermediate results of each relabeling round for public release. The U-RISC dataset will be released on https://Anonymous.com upon acceptance.
2.1 COMPARISON WITH OTHER DATASETS
ISBI 2012 (Cardona et al. (2010)) published a set of 30 training images captured from the ventral nerve cord of a Drosophila first-instar larva at a resolution of 4×4×50 nm/pixel through ssTEM (Arganda-Carreras et al. (2015b); ISBI 2012 (2012)). Each image contains 512×512 pixels, spanning a physical area of approximately 2×2 µm. In the SNEMI3D challenge (Kasthuri et al. (2015); ISBI 2013 (2013)), the training data is a 3D stack of 100 images of 1024×1024 pixels with a voxel resolution of 6×6×29 nm/pixel; the raw images were acquired at 3×3×29 nm/pixel using serial-section scanning electron microscopy (ssSEM) from mouse somatosensory cortex (Kasthuri et al. (2015); ISBI 2013 (2013)). U-RISC contains 120 annotated images (10000×10000 pixels) at a resolution of 2.18×2.18×70 nm/pixel from rabbit retina.
Owing to the difference in species and tissue, U-RISC fills a gap as an annotated vertebrate retinal segmentation dataset. Beyond that, U-RISC has several other characteristics worth attention in future segmentation studies. First, both the image size and the physical extent of U-RISC are much larger: the image size is 400 and 100 times that of ISBI 2012 and SNEMI3D respectively, and the physical extent is 100 and 9 times theirs respectively (Fig. 1 (c)), which supports developing deep learning based segmentation methods for various demands. Second, thanks to the iterative annotation procedure, U-RISC contains three sets of annotation results with increasing accuracy, which can serve as ground truth at different quality levels. Third, the total number of annotated images is 12 and 3.6 times the number of publicly annotated images in ISBI 2012 and SNEMI3D respectively (Fig. 1 (d)). An example image with its label is shown in the Supplementary Material; due to the size limit of the supplementary material, we uploaded only a quarter (5000 × 5000 pixels) of the original image with its label.
2.2 TRIPLE LABELING PROCESS
The high resolution of TEM images reveals much more detailed sub-cellular structure, which demands more patience to delineate each cell (Fig. 2(a)). Besides, imaging quality can be affected by many factors, such as section thickness or sample staining (Fig. 2(b)), and low imaging quality also demands extra labeling effort. Therefore, substantial labeling effort was essential to completely annotate U-RISC. To guarantee labeling accuracy, we set up an iterative correction mechanism in the labeling process (Fig. 3). Before annotation started, labeling rules were introduced to all annotators, and 58 qualified annotators were allowed to participate in the final labeling process. After the first round of annotation, 5 experienced lab staff with sufficient background knowledge pointed out labeling errors pixel by pixel during the second and third rounds. The third-round annotation results were regarded as the final “ground truth”, and the previous two rounds of manual annotations were also saved for later analysis. Fig. 3 shows an example of the two inspection processes: there are quite a few mislabeled and missing cell membranes in each round, so the iterative correction mechanism is clearly necessary.
3 PERCEPTION-BASED EVALUATION
In the analysis of EM data, membrane segmentation is generally an indispensable step. However, most previous studies in this field, such as Zhou et al. (2018); Chaurasia & Culurciello (2017); Drozdzal et al. (2016), were not specifically designed for high-resolution datasets such as U-RISC. In addition, although many researchers have discussed various evaluation criteria for medical and general tasks, few have actually incorporated them into the design of cell membrane segmentation architectures.
By comparing the segmentation results of popular and classic segmentation methods, we found that the widely used evaluation criteria were inconsistent with human perception in some cases, which we further examine through a perceptual consistency experiment (Sec. 3.2). To address this issue, we propose a new evaluation criterion called the Perceptual Hausdorff Distance (PHD). The experimental results show that it is more consistent with human perception.
3.1 INCONSISTENCY BETWEEN EXISTING EVALUATION CRITERIA AND PERCEPTION
Many metrics have been proposed for segmentation evaluation (Yeghiazaryan & Voiculescu (2018); Niessen et al. (2000); Veltkamp & Hagedoorn (2000); Arganda-Carreras et al. (2015b); Lee et al. (2017)). The most popular ones, such as F1 score, Dice coefficient, and IoU (Sasaki et al. (2007); Dice (1945); Kosub (2019)), are used for evaluation in most segmentation methods (Ronneberger et al. (2015); Zhou et al. (2018); Chaurasia & Culurciello (2017); Yu et al. (2017); Chen et al. (2019b)). The ISBI 2012 cell segmentation challenge used Rand scores (V-Rand and V-Info) (Arganda-Carreras et al. (2015b)) on thinned membranes for evaluation. Recently, researchers have discussed various boundary overlap metrics for the evaluation of medical image segmentation (Yeghiazaryan & Voiculescu (2018)). The most popular criteria, such as the F1 score, are based on statistics of how many pixels are classified correctly. There are also metrics based on point-set distance, such as ASSD (Yeghiazaryan & Voiculescu (2018)), which is not widely used in recent deep learning research.
However, the quality of segmentation should be judged with respect to the ultimate goal. When segmentation is used to reconstruct the whole membrane structure and connect its parts, such pixel statistics may not be consistent with human perception of cell membrane segmentation. During our segmentation experiments, we observed some interesting phenomena. Fig. 4 shows an example of the original image with its manual annotation and segmentation results from two methods, GLNet (Chen et al. (2019b)) and U-Net (Ronneberger et al. (2015)). The scores indicated that (d) was more similar to (b) than (c) was.
However, if these segmentation results are used for reconstructing the structure of cells, the mistakes and losses of structure are far more noticeable when inspecting the areas surrounded by the red dashed lines in the images. We therefore consider (c) the better prediction, because (d) misses some edges. The reason the three scores of (c) were lower is that its predicted cell membranes are thicker than the manual labeling. It can thus be inferred that the existing evaluation criteria might not be sufficiently robust to variations in membrane thickness and structure, and that their evaluations can be inconsistent with human perception.
3.2 PERCEPTUAL CONSISTENCY EXPERIMENTS
To verify the above conjecture, we designed a subjective experiment to explore the consistency between the existing evaluation criteria and human subjective perception. Six popular and classic segmentation methods were used to generate cell membrane segmentation results on U-RISC: U-Net (Ronneberger et al. (2015)), LinkNet (Chaurasia & Culurciello (2017)), CASENet (Yu et al. (2017)), SENet (Hu et al. (2018)), U-Net++ (Zhou et al. (2018)), and GLNet (Chen et al. (2019b)). From these segmentation results, 200 groups of images were randomly selected. Each group contained three images: the final manual annotation (ground truth) and two automatically generated segmentation results for the same input cell image.
Twenty subjects were recruited to participate in the experiments. They had either a biological background or experience in cell membrane segmentation and reconstruction. For each group, each of the 20 subjects had three options: if the subject could tell which segmentation result was more similar to the ground truth, he or she chose that one; otherwise, the subject chose “Difficult to choose”. The experiment interface is shown in Appendix I.
Before the experiment, the subjects were briefed on the purpose and source of the images. During the experiment, the 200 groups of images were evenly divided into four batches to prevent the subjects from choosing randomly due to fatigue. Within each batch, the subjects had to complete the judgments continuously without interruption.
After the experiment, a group was considered valid if one of its options received more than 10 votes; otherwise it was discarded. There were 113 valid groups in total. Based on these valid groups, the consistency of the F1 score, IoU, and Dice with human choices was calculated.
According to our experimental results, the consistency of F1 score, IoU, and Dice with human choices was only 34.51%, 35.40%, and 34.51%, respectively. It can therefore be inferred that these three criteria are inconsistent with human subjective perception in most cases. More results and details of the subjective experiment design are given in Appendices II and IV.
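To make the vote filtering and consistency computation concrete, here is a minimal illustrative sketch (our own, not the authors' analysis code). The array layout and the threshold of 10 votes follow the description above; the function and variable names are assumptions.

```python
import numpy as np

def filter_valid_groups(votes, threshold=10):
    """Keep groups in which one option received more than `threshold` votes.

    votes: (n_groups, 3) integer array of vote counts per group for
           [result A, result B, "difficult to choose"].
    Returns indices of valid groups and the winning option in each.
    """
    winners = votes.argmax(axis=1)
    valid = votes.max(axis=1) > threshold
    return np.flatnonzero(valid), winners[valid]

def consistency_with_humans(human_choice, metric_choice):
    """Fraction of valid groups where a metric agrees with the human majority."""
    human_choice = np.asarray(human_choice)
    metric_choice = np.asarray(metric_choice)
    return float((human_choice == metric_choice).mean())
```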
3.3 PERCEPTUAL HAUSDORFF DISTANCE
The subjective experimental results verified that the widely used general-purpose evaluation criteria are inconsistent with human perception of cell membrane segmentation. We therefore propose a new human-perception based evaluation criterion, the Perceptual Hausdorff Distance (PHD), which considers the structure of the cell membrane while ignoring its thickness.
An Overview of PHD.
As Fig. 4 shows, from the perspective of neuronal reconstruction, the thickness of the cell membrane is not key for evaluation. When the goal is to reconstruct the structure of cells, humans pay more attention to structural changes than to thickness changes. Hence, to eliminate the influence of thickness when measuring the similarity of two cell membrane segmentation results, both segmentations are first skeletonized, and the distance between the two skeletons is then calculated to measure their difference. Since a skeleton is a set of points, and the Hausdorff distance is a common distance between two point sets, the proposed PHD is built upon the Hausdorff distance.
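As an illustration of the skeletonization step, the sketch below uses scikit-image's thinning routine; this is our own assumption about a reasonable implementation, not necessarily the exact routine used by the authors.

```python
import numpy as np
from skimage.morphology import skeletonize

def segmentation_to_skeleton_points(mask):
    """Reduce a binary membrane mask to a set of skeleton pixel coordinates.

    mask: 2-D array where membrane pixels are nonzero.
    Returns an (n, 2) integer array of (row, col) skeleton coordinates,
    removing the influence of membrane thickness before comparison.
    """
    skeleton = skeletonize(mask.astype(bool))  # 1-pixel-wide skeleton
    return np.argwhere(skeleton)
```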
On the other hand, the subjective experiments showed that people tend to ignore slight offsets between membranes. Based on these two considerations, we designed the Perceptual Hausdorff Distance (PHD), a modification of the Hausdorff distance (Huttenlocher et al. (1993); Aspert et al. (2002); Rachasingho & Tasena (2020)). Fig. 5 shows an overview of PHD; the details are as follows.
Step 1. Skeletonize the segmentation results, removing the influence of membrane thickness and turning each segmentation into a set of skeleton points.

Step 2. Calculate the distance between skeletons. The Hausdorff distance is a common distance between two point sets. Consider two unordered nonempty point sets X and Y and the Euclidean distance d(x, y) between points. The Hausdorff distance between X and Y is defined as

$$d_H(X, Y) = \max\{d_{X,Y},\, d_{Y,X}\} = \max\Big\{ \max_{x \in X} \min_{y \in Y} d(x, y),\ \max_{y \in Y} \min_{x \in X} d(x, y) \Big\}, \quad (1)$$
which can be understood as the maximum over both sets of the shortest distance from a point in one set to the other set. It is easy to prove that the Hausdorff distance is a metric (Choi (2019)).
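For concreteness, a minimal NumPy sketch of Eq. 1 follows (an illustration of the classic definition, not the authors' code); it builds the full pairwise distance matrix, so it assumes the point sets are small enough to fit in memory.

```python
import numpy as np

def hausdorff_distance(X, Y):
    """Classic symmetric Hausdorff distance (Eq. 1) between 2-D point sets.

    X: (n, 2) array, Y: (m, 2) array of point coordinates.
    """
    # Pairwise Euclidean distances, shape (n, m).
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    d_xy = D.min(axis=1).max()  # max over x in X of the nearest y in Y
    d_yx = D.min(axis=0).max()  # max over y in Y of the nearest x in X
    return max(d_xy, d_yx)
```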
In the task of cell membrane segmentation, we care about the global distance between two point sets, whereas the Hausdorff distance is sensitive to outliers. Therefore, we replace the outer max operations with averages, naturally obtaining the average distance between the two point sets.
Furthermore, we found that people tolerate small offsets between segmentation results: if the distance between two points is very small, people tend to ignore it. We therefore define a Tolerance Distance t, which represents human tolerance for small errors.
The Perceptual Hausdorff Distance (PHD) is defined as

$$d_{PHD}(X, Y) = \frac{1}{|X|} \sum_{x \in X} \min_{y \in Y} d^*(x, y) + \frac{1}{|Y|} \sum_{y \in Y} \min_{x \in X} d^*(x, y), \quad (2)$$

$$d^*(x, y) = \begin{cases} \|x - y\|, & \|x - y\| > t \\ 0, & \|x - y\| \le t \end{cases} \quad (3)$$
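The following sketch implements Eq. 2–3 directly (our own illustration; the KD-tree is an efficiency assumption, not something the paper specifies): nearest-neighbor distances within the tolerance t are zeroed, and the outer max operations of Eq. 1 are replaced by averages.

```python
import numpy as np
from scipy.spatial import cKDTree

def perceptual_hausdorff_distance(X, Y, t=3.0):
    """Perceptual Hausdorff Distance (Eq. 2-3) between skeleton point sets.

    X: (n, 2) array, Y: (m, 2) array of skeleton pixel coordinates.
    t: tolerance distance; nearest-neighbor distances <= t count as 0.
    """
    if len(X) == 0 or len(Y) == 0:
        raise ValueError("Both point sets must be nonempty.")
    d_xy, _ = cKDTree(Y).query(X)   # distance from each x in X to nearest y
    d_yx, _ = cKDTree(X).query(Y)   # distance from each y in Y to nearest x
    d_xy[d_xy <= t] = 0.0           # human tolerance for small offsets (Eq. 3)
    d_yx[d_yx <= t] = 0.0
    return d_xy.mean() + d_yx.mean()  # averaged directed terms (Eq. 2)
```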
To intuitively understand the influence of the tolerance distance in PHD, consider the toy cases (a) and (b) shown in Fig. 5. In case (a), the blue skeleton contains 19 points and the orange one contains 18 points. The two skeletons are close in Euclidean space but do not coincide: among all Euclidean distances d(x, y) for x ∈ X and y ∈ Y, the maximum is 2 pixels and the most common is 1.
When t = 0, meaning no mistake is tolerated, the PHD is high. With t = 1 the PHD drops considerably, and with t = 2 it becomes 0. In case (b), there is a large offset between the two skeletons: when t is in [2, 4], the PHD declines slowly, and at t = 6, the maximum distance between the two skeletons, it drops to 0. Different settings of t represent different degrees of tolerance for the offset between skeletons; in practice, the tolerance distance can be chosen according to the application.
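A hypothetical usage of the sketch above, with invented coordinates loosely mimicking toy case (a) (two nearly parallel skeletons of 19 and 18 points); the exact point layout in Fig. 5 is not recoverable from the text, so these numbers are illustrative only.

```python
import numpy as np

# Invented coordinates: blue has 19 points, orange has 18, offset by one row.
blue = np.array([[0, c] for c in range(19)])
orange = np.array([[1, c] for c in range(18)])

for t in (0, 1, 2):
    print(t, perceptual_hausdorff_distance(blue, orange, t=t))
# As t grows, the small offset is tolerated and the PHD shrinks toward 0.
```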
Consistency between PHD and human perception.
We also computed the consistency with human perception of PHD and of existing related criteria (TPVF, TNVF, Prec, RVD, Hausdorff, ASSD, V-Rand, and V-Info) based on the subjective experimental results (as described in Sec. 3.2; the formulas are given in Appendix V). The results show that, compared with the other criteria, PHD with an appropriate tolerance distance is more consistent with human perception.
As shown in Fig. 6, as the tolerance distance t of PHD increases from 0 to 800, PHD’s consistency with human perception first rises and then drops slowly to 0, suggesting that human vision does tolerate a certain offset. The maximum consistency of 65.48% is reached at t = 3, suggesting that our perception tolerates only small perturbations. It is worth noting that the optimal PHD consistency (65.48%) is nearly double the consistency scores obtained by pixel-error based metrics such as the F1 score.
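A sketch of how such a tolerance sweep could be computed, reusing the helpers defined earlier; `groups` (triples of skeleton point sets) and `human_choice` are assumed inputs from the subjective experiment, not variables defined in the paper.

```python
# For each tolerance t, record which result PHD prefers in every valid group,
# then measure agreement with the human majority vote.
for t in (0, 1, 3, 5, 10, 50, 100, 400, 800):
    phd_choice = [
        0 if perceptual_hausdorff_distance(a, gt, t=t)
             < perceptual_hausdorff_distance(b, gt, t=t) else 1
        for (a, b, gt) in groups  # two predictions and ground truth, as points
    ]
    print(t, consistency_with_humans(human_choice, phd_choice))
```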
Our experiment also shows that most of the compared criteria can be improved to some extent by skeletonizing the segmentation results before evaluation. In Fig. 6, the consistency of these criteria with human perception reaches only about 30% on the original images but improves by about 10% on skeletons. Even the best of these metrics on skeletons (ASSD) achieves only 52.43%, which is significantly lower than PHD with t = 3. We therefore conclude that PHD is more consistent with human perception.
4 RE-EXAMINING PHD ON CLASSIC DEEP LEARNING BASED SEGMENTATION METHODS WITH U-RISC
In the previous two sections, we proposed a new ultra-high resolution cell membrane segmentation dataset, U-RISC, and a new perceptual criterion, PHD, to help resolve the two bottlenecks in cell membrane segmentation. The subjective experiment on a small-scale dataset demonstrated that PHD is more consistent with human perception than several widely used criteria for evaluating cell membrane segmentation.
To understand the performance of deep learning methods on the U-RISC dataset, we conducted an in-depth investigation with representative deep learning based segmentation methods and different evaluation criteria. Specifically, we chose six representative algorithms (U-Net (Ronneberger et al. (2015)), LinkNet (Chaurasia & Culurciello (2017)), CASENet (Yu et al. (2017)), SENet (Hu et al. (2018)), U-Net++ (Zhou et al. (2018)), and GLNet (Chen et al. (2019b))) and re-implemented them on U-RISC. We then compared the segmentation results under eleven evaluation criteria: F1 score, IoU, TPVF, TNVF, Prec, RVD, Hausdorff, ASSD, V-Rand, V-Info, and PHD.
As mentioned in Sec. 2, the results of the first two rounds of manual labeling are retained. Therefore, the manual annotations are also analyzed under the different evaluation criteria.
Experiment Settings. All six methods used the same training and testing data. The parameters and loss functions were the same as proposed in their original papers. The parameters for each method and other details are given in Appendix IV.
Experiment Results. The results are shown in Table 1, which lists the scores of the different evaluation criteria for the first two rounds of manual annotation and the six segmentation methods, each compared with the ground truth.
Our first finding is that U-RISC is a challenging dataset for cell membrane segmentation. As shown in Table 1, the deep learning based methods reach only around 0.6 in F1 score on U-RISC, far below the human level of 0.98–0.99 (the first-round annotation performance); by contrast, they all exceed 0.95 on ISBI 2012. Even allowing for possible improvements through parameter tuning, there is clearly a huge gap between the current popular segmentation methods and human performance on such ultra-high resolution images.
Our second finding is that the rankings under F1 score, IoU, V-Rand-sk, and V-Info-sk are largely consistent with each other but differ from the PHD-based ranking. In particular, PHD favors CASENet, while none of the other metrics selects CASENet as the best method. Given the subjective experimental results of Sec. 3.2, PHD is much closer to human perception, so the change of ranking under PHD may prompt researchers to reconsider the evaluation criteria for cell membrane segmentation algorithms. It also provides a new perspective for advancing segmentation algorithms.
Discussion. Among the six algorithms, LinkNet and CASENet outperform the others. From the perspective of network design, LinkNet makes full use of low-level local information by directly connecting each encoder level to the decoder of corresponding size; this design emphasizes capturing local information, leading to more accurate local predictions. CASENet accounts for edge continuity and lets low-level features strengthen high-level semantic information through skip connections between low-level and high-level features, emphasizing structural information. The design of LinkNet may therefore be favored by the traditional evaluation criteria, while that of CASENet may be favored by PHD, which explains why the two methods rank differently under the two types of criteria. More local segmentation results of the different algorithms are shown in Appendix III.
In addition, as an example, we ran experiments with U-Net, CASENet, and LinkNet on the ISBI 2012 and SNEMI3D datasets (Appendices VI, VIII, IX). The results in Appendix VI show that U-Net with our chosen parameters performs close to the state of the art on ISBI 2012 (ours: V-Rand = 0.9689, V-Info = 0.9723; SOTA: V-Rand = 0.9837, V-Info = 0.9878, on skeletons) and SNEMI3D (ours: V-Rand = 0.9389; SOTA: V-Rand = 0.9751, on skeletons), even though we made little effort at parameter tuning. However, with the same parameter settings, U-Net obtains poor scores on U-RISC (V-Rand = 0.5288, V-Info = 0.5178). Such a large performance gap between U-RISC and the previous datasets confirms the challenge posed by U-RISC, which will hopefully motivate novel machine learning method designs in the future. The results in Appendices VIII and IX again show that the rankings under F1 score, IoU, V-Rand-sk, and V-Info-sk are consistent with each other but differ from the PHD-based rankings.
5 DISCUSSION AND CONCLUSION
This paper addresses two bottlenecks in the development of cell membrane segmentation. First, we proposed U-RISC, an Ultra-high Resolution Image Segmentation dataset for Cell membranes, the largest annotated EM dataset for cell membranes so far. To our best knowledge, U-RISC is the only annotated EM dataset with multiple iterative annotations and uncompressed high-resolution raw image data. During the analysis of U-RISC, we found a certain inconsistency between the current evaluation criteria for segmentation (e.g., F1 score, IoU) and human perception. We therefore proposed a human-perception based evaluation criterion, the Perceptual Hausdorff Distance (PHD). Through a subjective experiment on a small-scale dataset, we demonstrated that the new criterion is more consistent with human perception for evaluating cell membrane segmentation. In addition, we re-examined classic deep learning segmentation methods under PHD and the existing evaluation criteria.
In future research, we will consider how to improve deep learning segmentation methods from the perspective of cell membrane structure and apply the PHD criterion to connectomics research. More discussion is given in Appendix VII.
A APPENDIX
I. Fig. 7 shows the interface of the perceptual consistency experiment.

II. Fig. 8 and Fig. 9 show examples of the subjective experiment images.

III. Fig. 10 and Fig. 11 show examples of segmentation results from the different algorithms.
IV. Experiment Details.
4.1 Subjective Experiment
1) Before testing, the 20 human raters were introduced to the value of cell membrane segmentation for connectivity and the importance of structure. We then walked them through the experimental process with several simple examples.
2) During the formal experiment, the distribution and selection of data were random. For each group, the subjects only needed to choose which of the two images they thought was more similar to the ground truth.
3) The 200 groups of images for the subjective experiment were randomly selected from the segmentation results produced by the six methods above. The training data therefore came from the same dataset as the images used to create the 200 groups that the 20 subjects evaluated.
4) In order to ensure the continuity of the experiment, each subject was asked to judge each group of images within a specified time (less than 10 minutes).
5) To prevent fatigue over a long experiment, the 200 groups of images were distributed to the subjects evenly over four sessions.
4.2 Experiments on U-RISC dataset.
All six methods used the same training and testing data, with the parameters and loss functions as proposed in their original papers.
In the training stage, 60% of the dataset was used as training data; the original images were randomly cut into 1024 × 1024 patches to generate 50,000 training images and 20,000 validation images.
Random flipping and cropping were used for data augmentation, and four V100 GPUs were used to train each algorithm. In the testing stage, each original image was cut into patches of the same size as the training images, each patch was tested, and the patches were finally stitched back to the original size for evaluation.
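A minimal sketch of the patch-and-stitch pipeline described above (our own illustration with assumed function names); it drops any border remainder when the image side is not divisible by the patch size, whereas a production pipeline would typically pad or use overlapping patches.

```python
import numpy as np

def cut_patches(image, size=1024):
    """Tile a large image into non-overlapping size x size patches."""
    h, w = image.shape[:2]
    return [image[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

def stitch_patches(patches, h, w, size=1024):
    """Reassemble predicted patches back into the original layout."""
    out = np.zeros((h, w), dtype=patches[0].dtype)
    i = 0
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            out[r:r + size, c:c + size] = patches[i]
            i += 1
    return out
```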
V. Formulas of the criteria mentioned in the text.
The formulas of the metrics we compared are shown in Table 3. The symbols in the formulas are explained as follows.
- Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances, while recall (also known as sensitivity) is the fraction of the total amount of relevant instances that were actually retrieved.
- TP (true positives), TN (true negatives), FP (false positives), and FN (false negatives) compare the classifier’s predictions with the ground truth. The terms positive and negative refer to the classifier’s prediction, and the terms true and false refer to whether that prediction matches the ground truth (a minimal computation sketch is given after this list).
- X and Y are two point sets, x and y are points in X and Y respectively, and d(x, y) is the Euclidean distance between them.
- In V-Rand (Arganda-Carreras et al. (2015b)), suppose that S is the predicted segmentation and T is the ground-truth segmentation. Define $p_{ij}$ as the probability that a randomly chosen pixel belongs to segment i in S and segment j in T. This joint probability distribution satisfies the normalization condition $\sum_{ij} p_{ij} = 1$. The marginal $s_i = \sum_j p_{ij}$ is the probability that a randomly chosen pixel belongs to segment i in S, and the marginal $t_j = \sum_i p_{ij}$ is defined similarly.
- In V-Info (Arganda-Carreras et al. (2015b)), the mutual information $I(S; T) = \sum_{ij} p_{ij} \log p_{ij} - \sum_i s_i \log s_i - \sum_j t_j \log t_j$ is a measure of similarity between S and T, and $H(S) = -\sum_i s_i \log s_i$ is the entropy function (a computation sketch follows this list).
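As a concrete reading of the confusion-matrix symbols above, here is a minimal sketch (our own, with assumed function names) computing F1 and IoU for the membrane (positive) class:

```python
def f1_iou_from_counts(tp, fp, fn):
    """F1 score and IoU for the positive class from pixel counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return f1, iou
```

And a sketch of the mutual information used by V-Info, computed from a joint probability matrix; note this implements only the quantity I(S;T) defined above, not the full normalized V-Info score, whose normalization we do not reproduce here.

```python
import numpy as np

def mutual_information(p):
    """I(S;T) from a joint probability matrix p[i, j] that sums to 1.

    Zero entries are skipped, following the convention 0 * log 0 = 0.
    """
    s = p.sum(axis=1)  # marginal over predicted segments S
    t = p.sum(axis=0)  # marginal over ground-truth segments T
    nz = p > 0
    return (np.sum(p[nz] * np.log(p[nz]))
            - np.sum(s[s > 0] * np.log(s[s > 0]))
            - np.sum(t[t > 0] * np.log(t[t > 0])))
```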
VI. Experiments on ISBI 2012 and SNEMI3D with U-Net (Table 4).
VII. Further Discussion.
Although this paper mainly concerns EM cell segmentation, we believe its significance goes beyond that.
According to the experimental results, the popular methods do not perform well on our dataset (exceeding 95% on ISBI 2012 but only about 60% on U-RISC). This shows that U-RISC is a challenging dataset that can promote the development of related machine learning and deep learning methods.
The U-RISC dataset may reveal several classic, still-unsolved challenges in the field. One is the “sample imbalance problem” (Alejo et al. (2016); Li et al. (2010); Zhang et al. (2020)): because of the ultra-high resolution, labeled cell membrane pixels account for only 5.64% of the total pixels in the training set, in contrast to 21.96% in ISBI 2012 and 33.23% in SNEMI3D. Future deep learning methods designed for U-RISC will have to address this issue.
Other challenges include, e.g., ultra-high resolution image segmentation (Demir et al. (2018); Zhao et al. (2018); Chen et al. (2019a)), appropriate loss function design (Sudre et al. (2017); Spiring (1993); Choromanska et al. (2015)), and issues related to “unclosed” edges as suggested by Reviewer 3.
Taken together, we strongly believe that the U-RISC dataset will make substantial contributions by revealing defects in existing popular methods and promoting novel algorithms for solving classic challenges in the machine learning and deep learning community.
In addition, the design of evaluation criteria has received wide attention in computer science (Gerl et al. (2020); Lin et al. (2015); Liu et al. (2018)). The PHD we propose may inspire researchers from a new perspective and further promote the development of algorithms. The technical novelty of the PHD metric lies in several aspects. To list a few: (1) it can potentially be used in other tasks, such as vascular segmentation (Gerl et al. (2020)), bone segmentation (Lin et al. (2015)), edge detection (Liu et al. (2018)), and other tasks related to structural and shape information; for example, Gerl et al. (2020) successfully used a distance-based criterion to improve skin layer segmentation in optoacoustic images. (2) It can be modified into a loss function, which is part of our ongoing work; notably, several works have successfully integrated the Hausdorff distance into loss functions (Genovese et al. (2012); Karimi & Salcudean (2019); Ribera et al. (2019)).
VIII. Experiments on ISBI 2012 with U-Net, CASENet, and LinkNet (Table 5).

IX. Experiments on SNEMI3D with U-Net, CASENet, and LinkNet (Table 6).
[Figure: example segmentation results, with panel F1 scores 0.3880, 0.5084, 0.5347, 0.5084, 0.5386, 0.5482]

1. What is the focus of the paper regarding cell membrane segmentation?
2. What is the novel evaluation metric proposed in the paper, and how does it differ from existing metrics?
3. What are the strengths of the paper, particularly in terms of its contribution to the field and the quality of the dataset provided?
4. Are there any limitations or potential improvements to the proposed evaluation metric?
5. Can the proposed metric be adapted for use in training processes, or is it primarily intended for evaluation purposes?

Review
This paper presents a large high-resolution cell membrane segmentation dataset and proposes a new evaluation metric that is more consistent with human perception. The new metric, called Perceptual Hausdorff Distance (PHD), first skeletonizes the segmentation outcome by thinning and then computes a Hausdorff-style distance between skeletons. PHD has a hyper-parameter, the tolerance distance, representing human tolerance for small offsets.
Overall, I think this is a good paper that addresses how to correctly evaluate cell membrane segmentation, which is essential for fair evaluation but has not been studied extensively. The authors provide strong reasons illustrating the limitations of existing evaluation metrics, and present a relatively large-scale, high-quality dataset for evaluating different techniques. Pixel counts and image counts are presented to demonstrate the advantages over ISBI 2012 and SNEMI3D. The image collection, annotation, and evaluation seem to have been performed very carefully, with 20 subjects involved.
Some minor additional questions: i) The metric seems tailored only for cell membrane segmentation. Is it possible to make the criterion more general?
ii) If I understand correctly, the metric seems very hard to turn into a loss function since it involves thinning and other heuristics. Could PHD be used not only for evaluation but also to improve standard training?