venue: stringclasses (2 values)
paper_content: stringlengths (7.54k to 83.7k)
prompt: stringlengths (161 to 2.5k)
format: stringclasses (5 values)
review: stringlengths (293 to 9.84k)
NIPS
Title: Improving Conditional Coverage via Orthogonal Quantile Regression

Abstract
We develop a method to generate prediction intervals that have a user-specified coverage level across all regions of feature space, a property called conditional coverage. A typical approach to this task is to estimate the conditional quantiles with quantile regression; it is well-known that this leads to correct coverage in the large-sample limit, although it may not be accurate in finite samples. We find in experiments that traditional quantile regression can have poor conditional coverage. To remedy this, we modify the loss function to promote independence between the size of the intervals and the indicator of a miscoverage event. For the true conditional quantiles, these two quantities are independent (orthogonal), so the modified loss function continues to be valid. Moreover, we empirically show that the modified loss function leads to improved conditional coverage, as evaluated by several metrics. We also introduce two new metrics that check conditional coverage by looking at the strength of the dependence between the interval size and the indicator of miscoverage.

1 Introduction
Learning algorithms are increasingly prevalent within consequential real-world systems, where reliability is an essential consideration: confidently deploying learning algorithms requires more than high prediction accuracy in controlled testbeds [1, 2]. Consider, for example, estimating the effects of a drug for a specific person given their demographic information and medical measurements. In such a high-stakes setting, giving a point prediction for the drug's effect is insufficient; the decision-maker must know the plausible range of effects for this specific individual. Instance-wise uncertainty quantification in such settings is critical [3–5].

One approach to this problem comes from the quantile regression and prediction interval literature [6–8]; instead of a point prediction, we can return a range of outcomes that represents the plausible response for a given input. We would like these prediction intervals to achieve a pre-specified coverage level (e.g., 90%) for all inputs, that is, across all regions of feature space. Training models to satisfy this validity guarantee is challenging, however, particularly with complex models like neural networks [9, 10]. In this work, we show how to generate prediction intervals that achieve coverage closer to the desired level evenly across all sub-populations. Technically, we achieve this by augmenting the quantile regression loss function with an additional term that promotes appropriately balanced coverage across the feature space.

Formally, consider a regression problem where we are given n training samples $\{(X_i, Y_i)\}_{i=1}^{n}$, where $X \in \mathbb{R}^p$ is a feature vector and $Y \in \mathbb{R}$ is a response variable. At test time, we observe a feature vector $X_{n+1}$, and our goal is to predict the unknown value of $Y_{n+1}$ and, importantly, to report on its uncertainty. In this work, we represent this uncertainty by constructing a prediction interval $\hat{C}(X_{n+1}) \subseteq \mathbb{R}$ that is likely to contain the response $Y_{n+1}$. In particular, we seek to produce intervals that contain the response with a user-specified probability $1-\alpha$ and that are valid across all regions of feature space:
$$\mathbb{P}[Y_{n+1} \in \hat{C}(X_{n+1}) \mid X_{n+1} = x] \ge 1 - \alpha, \qquad (1)$$
a property known as conditional coverage.
Notice that such prediction intervals are both correct, in that they satisfy a coverage guarantee, and adaptive, in that the size of the prediction intervals can change with the difficulty of the inputs: easy inputs give small intervals and hard inputs give large intervals. Returning to our medical example, consider predicting the outcome of a drug from age, gender, blood pressure, and so on. The conditional coverage requirement in (1) asks that intervals are correct for any age, gender, and health status combination. That is, no matter what an individual's value of the features x is, the uncertainty quantification must be valid.

The conditional quantiles of Y | X = x are the natural way to produce intervals satisfying (1). Let $\alpha_{lo} = \alpha/2$ and $\alpha_{hi} = 1 - \alpha/2$. Given the true conditional quantiles $q_{\alpha_{lo}}(x)$ and $q_{\alpha_{hi}}(x)$, we can build an oracle prediction interval satisfying (1) in the following way:
$$C(x) = [q_{\alpha_{lo}}(x), q_{\alpha_{hi}}(x)]. \qquad (2)$$
In practice, the conditional quantiles are unknown but can be estimated with quantile regression, yielding the interval $\hat{C}(x) = [\hat{q}_{\alpha_{lo}}(x), \hat{q}_{\alpha_{hi}}(x)]$. This approach is attractive because quantile regression yields intervals that are adaptive to heteroscedasticity without requiring parametric assumptions [11–13], but these intervals might not satisfy the conditional coverage statement (1), since $\hat{q}_{\alpha_{lo}}(x)$ and $\hat{q}_{\alpha_{hi}}(x)$ are merely estimates of the true quantiles [14]. Indeed, we observe in the experiments of Section 5 that traditional quantile regression often gives intervals with poor conditional coverage, prompting the present investigation.

In this work, we propose a novel regularization scheme to push quantile regression algorithms towards solutions that better satisfy the conditional coverage requirement (1). The core idea is to force the coverage and interval length to be approximately independent, since this independence must hold for the optimal oracle intervals in (2). A method that constructs intervals whose coverage and length are dependent is either sometimes too conservative (generating overly wide intervals), sometimes too liberal (yielding overly short intervals), or both. In addition to improved training schemes, we propose two new tools to check the validity of the resulting predictions in a meaningful way. Specifically, in Section 4, we present two new interpretable metrics to assess the violation of conditional coverage, taking advantage of the orthogonality property identified above. We use these (and other) metrics in Section 5 to study our proposal on simulated data and nine real benchmark data sets. We find that our training scheme yields improvements when used together with both a classic [6] and a more recent [15] quantile regression method.

A synthetic two-group example
We begin with a small synthetic experiment that demonstrates the challenges of constructing prediction intervals with accurate conditional coverage. We generate a dataset with two unbalanced subpopulations: 80% of the samples belong to a majority group and the remaining 20% to a minority group, where the conditional distribution Y | X of the minority group is more dispersed than that of the majority group. In our experiments, group membership is included as one of the features. See Section S3.1 of the Supplementary Material for a full description of the distribution. As a baseline method, we first fit a quantile neural network model (vanilla QR) by optimizing the pinball loss (see Section 2), attempting to estimate the low $\alpha_{lo} = 0.05$ and high $\alpha_{hi} = 0.95$ conditional quantiles.
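The group-wise coverage and average-length figures quoted in the next paragraph are simple functionals of the fitted quantiles. The sketch below shows one way such summaries can be computed; the helper name, the toy data, and the crude interval endpoints are illustrative assumptions, not the paper's actual synthetic distribution or model.

```python
import numpy as np

def interval_metrics(y, lo, hi, group):
    """Per-group empirical coverage and average interval length.

    y, lo, hi, group: 1-D arrays holding the responses, the estimated lower
    and upper quantiles, and a binary group indicator (1 = minority group).
    """
    covered = (y >= lo) & (y <= hi)   # coverage indicator for each sample
    length = hi - lo                  # interval length for each sample
    return {int(g): {"coverage": covered[group == g].mean(),
                     "avg_length": length[group == g].mean()}
            for g in np.unique(group)}

# Toy illustration (made-up numbers, not the paper's data-generating process):
rng = np.random.default_rng(0)
group = (rng.random(1000) < 0.2).astype(int)   # roughly 20% minority
scale = 1.0 + 4.0 * group                      # minority group is noisier
y = rng.normal(0.0, scale)
lo, hi = -1.645 * scale, 1.645 * scale         # crude 90% intervals
print(interval_metrics(y, lo, hi, group))
```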
The left panel of Figure 1 shows the coverage obtained by the vanilla QR model across training epochs. The coverage on the test data is far lower than suggested by the training data, failing to reach the desired 90% level and increasing as the training progresses. In particular, this gap remains large at the epoch in which the model achieves the minimal loss evaluated on an independent validation set. Here, the empirical test coverage measured over the majority and minority groups is equal to 80% and 68%, and the empirical average lengths evaluated over them are 1.55 and 6.45, respectively. Next, we fit our proposed orthogonal QR model (which we will formally introduce in Section 3) on the same data. The coverage rate across epochs is illustrated in the right panel of Figure 1. In contrast to vanilla QR, the training and testing curves of the majority group overlap throughout training and both approximately reach the desired 90% level. The minority group coverage has similar behavior, with a significantly smaller gap between the two curves compared to vanilla QR. Here, the model that corresponds to the best epoch achieves an 86% coverage rate with 1.62 average length on the majority group, and an 83% coverage rate with 9.33 average length on the minority group. Importantly, here we are able to train for more epochs before overfitting, which leads to a better final model. To conclude, in this example orthogonal QR prevents overfitting and leads to better conditional coverage.

2 Preliminaries and related work

2.1 Quantile regression
The task of estimating the low and high conditional quantiles, $\hat{q}_{\alpha_{lo}}(x)$ and $\hat{q}_{\alpha_{hi}}(x)$, can be expressed as a minimization problem of the following form:
$$(\hat{q}_{\alpha_{lo}}, \hat{q}_{\alpha_{hi}}) = \operatorname*{argmin}_{f_{\alpha_{lo}}, f_{\alpha_{hi}} \in \mathcal{F}} \; \frac{1}{n} \sum_{i=1}^{n} \ell_\alpha(Y_i, f_{\alpha_{lo}}(X_i), f_{\alpha_{hi}}(X_i)). \qquad (3)$$
Above, $\ell_\alpha$ is a loss function designed to fit quantiles; we discuss two examples next. The most common loss function for estimating a conditional quantile $\hat{q}_\alpha$ is the pinball loss, or check function [6, 9, 16], expressed as
$$\rho_\alpha(y, \hat{y}) = \begin{cases} \alpha (y - \hat{y}) & y - \hat{y} > 0, \\ (1 - \alpha)(\hat{y} - y) & \text{otherwise}. \end{cases} \qquad (4)$$
Observe that for the choice $\alpha = 1/2$ the above becomes the L1 loss, which is known to estimate the conditional median. By setting $\alpha \neq 1/2$ we get a tilted version of the latter, with a degree of tilt controlled by the value of $\alpha$. In the context of (3), we can set
$$\ell^{\mathrm{pb}}_\alpha(y, f_{\alpha_{lo}}(x), f_{\alpha_{hi}}(x)) = \rho_{\alpha_{lo}}(y, f_{\alpha_{lo}}(x)) + \rho_{\alpha_{hi}}(y, f_{\alpha_{hi}}(x))$$
to simultaneously estimate the low and high quantiles. Throughout this paper, we will refer to this procedure as vanilla QR.

A recently-developed alternative to the pinball loss is the interval score loss [17], defined as
$$\ell^{\mathrm{int}}_\alpha(y, f_{\alpha_{lo}}(x), f_{\alpha_{hi}}(x)) = (f_{\alpha_{hi}}(x) - f_{\alpha_{lo}}(x)) + \frac{2}{\alpha}(f_{\alpha_{lo}}(x) - y)\,\mathbf{1}[y < f_{\alpha_{lo}}(x)] + \frac{2}{\alpha}(y - f_{\alpha_{hi}}(x))\,\mathbf{1}[y > f_{\alpha_{hi}}(x)]. \qquad (5)$$
Note that the left-most term encourages short intervals, and the two remaining components promote intervals with the right coverage. To improve statistical efficiency, it is recommended by [15] to simultaneously estimate all conditional quantiles by minimizing the empirical risk $\mathbb{E}_{\alpha \sim U[0,1]}[\ell^{\mathrm{int}}_\alpha(\cdot)]$ (the code is hosted at https://github.com/YoungseogChung/calibrated-quantile-uq). This is the approach we take in our experiments. Note that [15] proposed additional learning schemes for improving efficiency, such as group batching and ensemble learning; see also [18]. These ideas are complementary to our proposal and may further improve the performance of our proposed orthogonal QR.
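For concreteness, here is a minimal PyTorch sketch of the two base losses in Eqs. (4) and (5). The function names and the batch-averaging convention are ours, not part of the paper's specification; lo and hi stand for the two outputs of a quantile network at the nominal levels $\alpha/2$ and $1-\alpha/2$.

```python
import torch

def pinball_loss(y, y_hat, alpha):
    """Pinball (check) loss of Eq. (4), averaged over a batch."""
    diff = y - y_hat
    return torch.mean(torch.where(diff > 0, alpha * diff, (alpha - 1.0) * diff))

def vanilla_qr_loss(y, lo, hi, alpha=0.1):
    """Two-sided pinball objective (vanilla QR): fit q_{alpha/2} and q_{1-alpha/2}."""
    return pinball_loss(y, lo, alpha / 2) + pinball_loss(y, hi, 1 - alpha / 2)

def interval_score_loss(y, lo, hi, alpha=0.1):
    """Interval score loss of Eq. (5): interval width plus 2/alpha miscoverage penalties."""
    width = hi - lo
    below = (2.0 / alpha) * torch.clamp(lo - y, min=0.0)  # penalty when y falls below lo
    above = (2.0 / alpha) * torch.clamp(y - hi, min=0.0)  # penalty when y falls above hi
    return torch.mean(width + below + above)
```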
Quantile regression is a large, active area of research, and we finish by pointing out a few representative strands of work in the literature. Estimates of conditional quantile functions obtained with the pinball loss and specific models are proven to be asymptotically consistent under regularity conditions [16]. The work reported in [11–13] offers a non-parametric version of quantile regression. This line of research was further developed by [19], which presented a generalization to non-parametric additive models. The pinball loss and interval score loss can also be used to estimate conditional probability distributions [17, 20]. Nevertheless, quantiles can be estimated in ways other than minimizing these two loss functions. These include quantile regression forests [8] and the method proposed in [21], which iteratively estimates the conditional quantile through the Majorize-Minimize algorithm. Besides the regularization term we propose, there are other penalties that are useful in different situations, such as sparse modeling for high-dimensional responses [22–24].

2.2 Conformal inference
Conformal inference [25] is a framework for building prediction intervals that provably attain a weaker, marginal coverage property:
$$\mathbb{P}[Y_{n+1} \in \hat{C}(X_{n+1})] \ge 1 - \alpha. \qquad (6)$$
Importantly, one can guarantee that this holds for any joint distribution $P_{XY}$, sample size n, and predictive algorithm. In contrast to (1), the probability statement in (6) is marginal and taken over all the training and test samples $\{(X_i, Y_i)\}_{i=1}^{n+1}$. For example, in the context of the data from Figure 1, intervals that satisfy (6) would be allowed to undercover the minority group and overcover the majority group. Therefore, the statement in (6) is much weaker than that in (1). Yet, the former can be achieved for any distribution, whereas the latter cannot be achieved for badly-behaved distributions; see [14, 26]. While variants of the guarantee in (6) are possible [27–29], achieving coverage exactly balanced across a continuous feature cannot be done without further assumptions. Much recent work in conformal inference with quantile regression attempts to generate intervals that are adaptive to heteroscedasticity so that they approximately achieve the conditional coverage in (1), while ensuring that the $1-\alpha$ marginal coverage in (6) is exactly achieved [7, 30–35]. The experiments we conduct show that our proposed orthogonal QR method can be used in combination with conformal prediction to improve the conditional coverage property while attaining valid marginal coverage.

3 Proposed method: orthogonal quantile regression

3.1 Formulating the learning scheme
This section presents a modification of the pinball loss or interval score loss in order to fit models with improved conditional coverage. Denote by $V = \mathbf{1}[Y \in \hat{C}(X)]$ the coverage identifier, and by $L = |\hat{C}(X)|$ the interval's length. Our proposal is motivated by the following observation.

Proposition 1 (Independence of width and coverage). Let (X, Y) be a sample drawn from $P_{XY}$, and let $\mathcal{X}$ be the support of X. If the distribution of Y | X = x is continuous for all $x \in \mathcal{X}$, and the fixed, deterministic interval-valued function $\hat{C}(X)$ satisfies $\mathbb{P}[Y \in \hat{C}(X) \mid X = x] = 1 - \alpha$ for all $x \in \mathcal{X}$ and some $\alpha \in (0, 1)$, then the interval satisfies V ⊥⊥ L.

In particular, the above implies that the length $L = q_{\alpha_{hi}}(X) - q_{\alpha_{lo}}(X)$ of an interval constructed from the true low and high quantiles is independent of the coverage identifier V, as stated next.

Corollary 1.
Under the assumptions of Proposition 1, an interval constructed from the true conditional quantiles satisfies V ⊥⊥ L.

We note that an earlier, limited version of this observation appears in [36] in the context of conditional coverage for classification problems. All proofs are presented in Section S2 of the Supplementary Material. Since intervals constructed from the true conditional quantiles obey the independence property, forcing the fitted model to approximately satisfy this property during training can result in better conditional coverage for future test points. This leads us to our proposed orthogonal QR objective:
$$(\hat{q}_{\alpha_{lo}}, \hat{q}_{\alpha_{hi}}) = \operatorname*{argmin}_{f_{\alpha_{lo}}, f_{\alpha_{hi}} \in \mathcal{F}} \; \frac{1}{n} \sum_{i=1}^{n} \ell_\alpha(Y_i, f_{\alpha_{lo}}(X_i), f_{\alpha_{hi}}(X_i)) + \gamma \mathcal{R}(\mathbf{L}, \mathbf{V}), \qquad (7)$$
where $\ell_\alpha$ is either $\ell^{\mathrm{pb}}_\alpha$ or $\ell^{\mathrm{int}}_\alpha$, $\mathbf{L} \in \mathbb{R}^n$ is a vector with elements $L_i = f_{\alpha_{hi}}(X_i) - f_{\alpha_{lo}}(X_i) = |\hat{C}(X_i)|$, and $\mathbf{V} \in \mathbb{R}^n$ is a vector with entries $V_i = \mathbf{1}[Y_i \in \hat{C}(X_i)]$. (To facilitate training with gradient methods, in practice we use a smooth approximation to the indicator function; see Section S1.1 of the Supplementary Material.) The function $\mathcal{R}(\mathbf{L}, \mathbf{V}) \in \mathbb{R}_+$ returns a real-valued score that quantifies the strength of the dependence between $\mathbf{L}$ and $\mathbf{V}$, where a larger value indicates that the two are more dependent; we discuss specific choices in Section 3.2. The regularization strength is controlled by the hyperparameter $\gamma$. In Supplementary Section S4.1 we explain how this parameter is determined, and in Supplementary Section S5.1.3 we demonstrate the effect of this parameter on the performance of our method.

Lastly, we point out that our proposal falls into the broader theme of fitting models while enforcing conditional independence properties, a goal that is important for algorithmic fairness [e.g., 37–39]. This work aims to achieve uncertainty estimates that are equally good across all of feature space, a prediction-interval analog of the goal of [40].

3.2 The orthogonality loss
We now turn to the question of choosing the specific dependence penalty $\mathcal{R}$ in (7). In principle, we could use any dependence measure from the many in the literature: chi-squared tests [41], Pearson's correlation, distance correlation [42], the Kolmogorov-Smirnov statistic [43], the Randomized Dependence Coefficient [44], the Hilbert-Schmidt independence criterion (HSIC) [45], and so on. In this work we focus on Pearson's correlation and HSIC, which are described hereafter.

Pearson's correlation measures the linear dependency between two random variables. Here, the loss is defined as
$$\mathcal{R}_{\mathrm{corr}}(L, V) = \left| \frac{\mathrm{Cov}(L, V)}{\sqrt{\mathrm{Var}(L)}\sqrt{\mathrm{Var}(V)}} \right|. \qquad (8)$$
The advantages of this choice are its simplicity and its minimal computational burden. Next, HSIC is a more sophisticated, nonlinear complement to the Pearson's correlation measure, which can detect arbitrarily complex relationships between the coverage identifier and the interval length. It is an analog of the well-known Maximum Mean Discrepancy (MMD) distance [46], but is a measure of dependence. The idea is that while $\mathcal{R}_{\mathrm{corr}}(L, V) = 0$ does not necessarily imply that L and V are independent, having $\mathcal{R}_{\mathrm{corr}}(g(L), h(V)) = 0$ for all continuous bounded functions g, h guarantees the independence property [47]. While it is impossible to sweep over all possible continuous bounded functions, HSIC offers a tractable alternative, guaranteeing that $\mathrm{HSIC}(L, V) = 0$ if and only if L ⊥⊥ V [45]. In our work, we utilize this measure and define the orthogonality loss as $\mathcal{R}_{\mathrm{HSIC}}(L, V) = \sqrt{\mathrm{HSIC}(L, V)}$ (taking the square root to magnify small values). This choice is similar to the one advocated in [48].
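As a sketch of how the objective in Eq. (7) with the Pearson penalty in Eq. (8) could be implemented, the snippet below relaxes the coverage indicator with a sigmoid so the penalty is differentiable. The sigmoid form, the temperature value, and the placeholder gamma are our assumptions (the paper's exact smoothing is described in Section S1.1), and pinball_loss refers to the helper defined in the earlier sketch.

```python
import torch

def soft_coverage(y, lo, hi, tau=50.0):
    """Smooth surrogate for the indicator 1[lo <= y <= hi], with temperature tau."""
    return torch.sigmoid(tau * (y - lo)) * torch.sigmoid(tau * (hi - y))

def pearson_penalty(lengths, coverage, eps=1e-8):
    """R_corr of Eq. (8): absolute Pearson correlation between lengths and coverage."""
    l = lengths - lengths.mean()
    v = coverage - coverage.mean()
    return torch.abs((l * v).sum() / (torch.sqrt((l * l).sum() * (v * v).sum()) + eps))

def orthogonal_qr_loss(y, lo, hi, alpha=0.1, gamma=0.5):
    """Eq. (7): base quantile loss plus gamma times the orthogonality penalty.

    gamma = 0.5 is an arbitrary placeholder; the paper tunes this hyperparameter
    (Section S4.1). pinball_loss is the helper from the earlier sketch.
    """
    base = pinball_loss(y, lo, alpha / 2) + pinball_loss(y, hi, 1 - alpha / 2)
    lengths = hi - lo
    coverage = soft_coverage(y, lo, hi)
    return base + gamma * pearson_penalty(lengths, coverage)
```

Because the penalty is a statistic of the whole mini-batch, it is only informative when computed over reasonably large batches, which matches the limitation discussed in the conclusion.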
With these choices of $\mathcal{R}$, we now show that the true conditional quantiles are a solution of the orthogonal QR problem.

Theorem 1 (Validity of orthogonal quantile regression). Suppose Y | X = x follows a continuous distribution for each $x \in \mathcal{X}$, and suppose that $q_{\alpha_{lo}}(X), q_{\alpha_{hi}}(X) \in \mathcal{F}$. Consider the infinite-data version of the orthogonal QR optimization in (7):
$$\operatorname*{argmin}_{f_{\alpha_{lo}}, f_{\alpha_{hi}} \in \mathcal{F}} \; \mathbb{E}\left[ \ell_\alpha(Y, f_{\alpha_{lo}}(X), f_{\alpha_{hi}}(X)) + \gamma \mathcal{R}\big( |\hat{C}(X)|, \mathbf{1}[Y \in \hat{C}(X)] \big) \right],$$
where $\hat{C}(X) = [f_{\alpha_{lo}}(X), f_{\alpha_{hi}}(X)]$, $\ell_\alpha$ is either $\ell^{\mathrm{pb}}_\alpha$ or $\ell^{\mathrm{int}}_\alpha$, and $\mathcal{R}$ is $\mathcal{R}_{\mathrm{corr}}$ or $\mathcal{R}_{\mathrm{HSIC}}$. Then, the true conditional quantiles are solutions to the above optimization problem. Moreover, if the solution is unique for $\gamma = 0$, then the solution is unique for all $\gamma > 0$.

The uniqueness part of the theorem means that whenever vanilla QR is guaranteed to give the correct quantiles in the large-sample limit, orthogonal QR will give the same (correct) solution. The result continues to hold for any dependence measure $\mathcal{R}$ that achieves its minimum value for any two independent variables, a basic property satisfied by all dependence measures that we are aware of. See the proof in Section S2 of the Supplementary Material for further details.

4 Metrics for assessing conditional coverage
We next discuss several quantitative measures of conditional coverage. We will introduce two new metrics for conditional coverage, and then review one existing proposal from the literature. Lastly, we will discuss two ad-hoc metrics to help us compare orthogonal QR with vanilla QR in our upcoming simulation experiments.

4.1 Two new metrics for conditional coverage
Pearson's correlation: As previewed in the previous section, the Pearson correlation between the interval size and the indicator of coverage (i.e., $\mathcal{R}_{\mathrm{corr}}$ from (8)) is a simple, effective way to measure conditional coverage. To the best of our knowledge, however, we are the first to leverage it for this purpose.

HSIC: Similarly, we consider the HSIC measure of dependence between the interval size and the indicator of coverage (i.e., $\mathcal{R}_{\mathrm{HSIC}}$ above). We estimate this metric as described in [40] (the code is available at https://github.com/danielgreenfeld3/XIC). As before, to our knowledge this has never been leveraged as a metric to assess conditional coverage.

4.2 Other metrics for our empirical evaluations
∆WSC: As an additional measure of conditional coverage, we evaluate the coverage over the worst slab, as proposed in [49] (we used the implementation from https://github.com/msesia/arc/). To avoid a case where an improvement in this quantity is obtained by naively enlarging all prediction intervals, we suggest a variant that we call ∆WSC. This metric is defined as the absolute difference between the worst-slab coverage and the marginal coverage, both evaluated on the test set indexed by $\mathcal{I}$:
$$\Delta\mathrm{WSC} = \left| \mathrm{WSC}\big(\{(X_i, Y_i)\}_{i \in \mathcal{I}}; \hat{C}\big) - \mathrm{Coverage}\big(\{(X_i, Y_i)\}_{i \in \mathcal{I}}; \hat{C}\big) \right|.$$
Above, $\mathrm{Coverage}\big(\{(X_i, Y_i)\}_{i \in \mathcal{I}}; \hat{C}\big) = \frac{1}{|\mathcal{I}|} \sum_{i \in \mathcal{I}} \mathbf{1}[Y_i \in \hat{C}(X_i)]$, where $\hat{C}(x)$ is a prediction interval method. Importantly, a uniform increase in the length of all intervals will not deceive the ∆WSC measure, as it will remain fixed.

∆ILS-Coverage: We next consider a measure that checks whether the intervals made larger by orthogonal QR compared to vanilla QR are necessary for improving the conditional coverage. In general, suppose we are given two algorithms $A_1$ and $A_2$ for constructing prediction intervals. Let $\Delta L_i = |\hat{C}_{A_1}(X_i)| - |\hat{C}_{A_2}(X_i)|$ be the difference between the interval lengths obtained by $A_1$ and $A_2$, evaluated on the same test point $X_i$.
Next, let $q_{0.9}(\{\Delta L_i\}_{i \in \mathcal{I}})$ be the 90% empirical quantile of $\{\Delta L_i\}_{i \in \mathcal{I}}$, and let ILS be the 10% of samples whose length increased the most: $\mathrm{ILS} = \{i \in \mathcal{I} : \Delta L_i \ge q_{0.9}(\{\Delta L_i\}_{i \in \mathcal{I}})\}$. With this notation in place, we propose the ∆ILS-Coverage metric:
$$\Delta\mathrm{ILS\text{-}Coverage} = \left| \mathrm{Coverage}\big(\{(X_i, Y_i)\}_{i \in \mathrm{ILS}}; \hat{C}_{A_k}\big) - \mathrm{Coverage}\big(\{(X_i, Y_i)\}_{i \in \mathcal{I}}; \hat{C}_{A_k}\big) \right|.$$
In words, the above is the absolute difference between the coverage over the ILS samples and the marginal coverage, evaluated for each algorithm $A_k$, $k = 1, 2$. A smaller value for $k = 1$ indicates that the points whose interval sizes differ greatly between $A_1$ and $A_2$ are handled better by $A_1$.

∆Node-Coverage: As a variant of ∆ILS-Coverage, we identify a sub-population, characterized by a small set of features, on which the two algorithms $A_1$ and $A_2$ produce very different intervals, and check the coverage on this region. To this end, we label the ILS samples as the positive class and fit a binary classifier, formulated as a decision tree, aiming to predict whether a sample X belongs to the ILS set. Denote the set of tree nodes at depth at most three that contain at least 5% of the samples by $\{\mathrm{Node}_j\}_j$, where $\mathrm{Node}_j \subseteq \mathcal{I}$. Next, let ND be the set of indices of the samples that belong to the node maximizing the ratio $|\mathrm{Node}_j \cap \mathrm{ILS}| \, / \, |\mathrm{Node}_j \setminus \mathrm{ILS}|$. Finally, given a method $\hat{C}(\cdot)$ for constructing prediction intervals, compute the distance between the coverage over the ND samples and the marginal coverage:
$$\Delta\mathrm{Node\text{-}Coverage} = \left| \mathrm{Coverage}\big(\{(X_i, Y_i)\}_{i \in \mathrm{ND}}; \hat{C}\big) - \mathrm{Coverage}\big(\{(X_i, Y_i)\}_{i \in \mathcal{I}}; \hat{C}\big) \right|.$$

5 Experiments
Armed with the performance metrics described in Section 4, we now systematically quantify the effectiveness of the proposed independence penalty when combined with baseline quantile regression methods. In all experiments, we apply a deep neural network as the base model for constructing prediction intervals with a $1 - \alpha = 0.9$ coverage level. Section S4 of the Supplementary Material gives details about the network architecture, training strategy, and the rest of the experimental setup. Software implementing the proposed method and reproducing our experiments can be found at https://github.com/Shai128/oqr.

5.1 A synthetic two-group setting
We return to the synthetic two-group setting previewed in Section 1, but first provide more details about the data. In this data set, the difference in distribution between the majority and minority groups is controlled by modifying the noise level of the conditional distribution Y | X of the minority group. Furthermore, X has 50 coordinates, the first of which indicates the group membership. Section S3.1 of the Supplementary Material contains more details about the generation of this synthetic data. To analyze the performance of our method, we generate 7000 i.i.d. samples and repeat the following experiment for 30 random train/validation/test splits of the data. We fit a quantile regression model with the pinball loss on 5040 training samples, tuning the number of epochs on an independent validation set that contains 560 samples. The remaining 1400 samples are used to test the model's performance. We pre-processed the features and response variable with z-score standardization, so that each has zero mean and unit variance. In Table 1 we report the average coverage, length, and conditional coverage metrics of vanilla QR and orthogonal QR for two minority-group noise levels.
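Before turning to the results, here is a hedged numpy sketch of two of the Section 4 metrics as they could be evaluated on a held-out test set; the function names and input conventions (arrays of interval lengths and 0/1 coverage indicators) are ours, not the paper's implementation.

```python
import numpy as np

def pearson_metric(lengths, covered):
    """Section 4.1 metric: |Pearson correlation| between interval length and coverage."""
    return abs(np.corrcoef(lengths, covered.astype(float))[0, 1])

def delta_ils_coverage(len_a1, len_a2, covered):
    """Section 4.2 Delta ILS-Coverage for one algorithm's coverage indicators `covered`:
    absolute gap between the coverage over the 10% of points whose intervals grew the
    most under A1 relative to A2 and the marginal coverage."""
    delta_l = len_a1 - len_a2
    ils = delta_l >= np.quantile(delta_l, 0.9)   # the ILS set
    return abs(covered[ils].mean() - covered.mean())
```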
(The ∆ILS-Coverage and ∆Node-Coverage metrics are not reported for this setting, since they both essentially correspond to the minority group's coverage level, which is already given in the table.) The intervals constructed by the baseline vanilla QR undercover both the majority and minority groups, but this tendency is more severe for the latter. By contrast, the regularized model achieves similar coverage rates for the two groups, with levels that are closer to the nominal 90%. This is also reflected by the improvement in the Pearson's correlation, HSIC, and ∆WSC metrics of conditional coverage. Overall, orthogonal QR gives wider intervals for the minority group, which is anticipated since the coverage of the baseline model is far below the nominal rate. As for the majority group, our proposed training gives intervals of about the same length as the baseline model, while achieving a considerably higher coverage rate. In fact, the regularized model constructs even shorter intervals in the high noise level case. We note that this performance continues to hold even when using the interval score loss in place of the pinball loss; see Section S5.1.1 of the Supplementary Material. Lastly, we also examine the effect of our regularization on weighted QR [50], and display the results in Section S5.1.2 of the Supplementary Material.

5.2 Real data
Next, we compare the performance of the proposed orthogonal QR to vanilla QR on nine benchmark data sets, as in [15, 30]: Facebook comment volume variants one and two (facebook_1, facebook_2), blog feedback (blog_data), physicochemical properties of protein tertiary structure (bio), forward kinematics of an 8-link robot arm (kin8nm), condition-based maintenance of naval propulsion plants (naval), and medical expenditure panel surveys number 19–21 (meps_19, meps_20, and meps_21). See Section S3.2 of the Supplementary Material for details about these data sets. We follow the experimental protocol and training strategy described in Section 5.1. We randomly split each data set into disjoint training (54%), validation (6%), and testing (40%) sets. We normalized the features and response variables to have zero mean and unit variance, except for the facebook_1, facebook_2, blog_data, and bio data sets, for which we log-transform Y before the standardization.

Table 2 summarizes the performance of vanilla QR and orthogonal QR. Our proposed method consistently improves the conditional coverage, as measured by the Pearson's correlation, HSIC, and ∆WSC metrics, even though the latter two are not optimized directly by orthogonal QR. In Figure 2, we show the coverage as a function of the interval's length evaluated on the meps_21 data set, and find that orthogonal QR (in orange) is closer to the nominal 90% level than the baseline method. Our penalty also improves the ∆Node-Coverage in most data sets (see Table 2), indicating that the baseline model tends to undercover the response of at least one sub-population. Turning to statistical efficiency, observe that the intervals produced by the regularized models tend to be wider than those of the baseline method, which is needed to better achieve conditional coverage. We probe this phenomenon further by checking the ∆ILS-Coverage metric, which shows that the regions with wider intervals now have better coverage. In Section S5.2 of the Supplementary Material we provide additional experiments in which the pinball loss is replaced by the recently-introduced interval score loss.
The effect of the decorrelation penalty is similar to the one described above. Moreover, in Section S5.2 we also compare the decorrelation and HSIC penalties for independence, and show that in most data sets the decorrelation penalty achieves better performance over all metrics of conditional independence, at the cost of producing wider intervals. For completeness, we also present the results obtained by quantile regression forests [8] on the real data sets in Supplementary Table S15.

Conformalized quantile regression results
In the previous experiments the quantile regression methods tend to (marginally) undercover the response variable. This limitation is easily remedied by combining vanilla QR or orthogonal QR with conformalized quantile regression [30], which adjusts the estimated intervals to exactly achieve the marginal coverage property in (6). Table 3 summarizes the results, demonstrating that by conformalizing the intervals our proposed method precisely achieves the desired marginal coverage while improving the conditional coverage of the baseline model, as measured by the Pearson's correlation, HSIC, ∆WSC, and ∆Node-Coverage metrics. The two independence metrics indicate that even after adjusting the intervals to achieve marginal coverage, our method still results in improved independence between the coverage event and the interval length. We note that the ∆WSC and ∆Node-Coverage metrics show a more muted improvement compared to the setting without conformalization, since the conformalization step smooths out the coverage to some extent. Further details regarding this conformalization setting are given in Section S4.4 of the Supplementary Material.

6 Conclusion
In this work we presented the orthogonal QR approach for achieving coverage closer to the desired level evenly across all sub-populations in the setting of quantile regression algorithms. A technical limitation of our method is the use of large batches during training, required to effectively detect the dependence between the interval lengths and coverage events. In our experiments we focus on i.i.d. data, but we believe that the orthogonal loss can be beneficial beyond this setting, such as for time-series data, which we hope to investigate in future work. A related future direction is to encourage independence between a coverage event and a function of the feature vector other than the interval length; by the same logic as our proposal, this independence also holds for the true quantiles. A clever choice may better capture the relationships between X and the coverage obtained by the model, and further improve conditional coverage. As a concluding remark, while empirical evidence shows that our orthogonal QR approach produces intervals that represent the uncertainty in subgroups more reliably than standard methods, it does not guarantee valid coverage across all of feature space given access to only a finite sample. This guarantee may be necessary for ensuring that predictions are unbiased against a minority group of interest, indexed by an individual's gender or race, for example. To alleviate this, one can combine our method with the equalized coverage framework [28], which builds upon conformal inference to achieve the desired coverage for pre-defined sub-populations, a weaker but achievable demand compared to conditional coverage.

Acknowledgments and Disclosure of Funding
S.F. and Y.R. were supported by the ISRAEL SCIENCE FOUNDATION (grant No. 729/21). Y.R.
also thanks the Career Advancement Fellowship, Technion, for providing research support. We thank John Cherian for comments on an earlier version of this manuscript.
1. What is the novel method proposed by the authors for constructing prediction intervals?
2. What is the key idea behind the authors' approach, and how does it differ from traditional quantile regression?
3. What is the purpose of the penalty term in the authors' method, and how does it help improve the performance of the prediction intervals?
4. How do the authors evaluate the performance of their method compared to other approaches in the literature?
5. What is the reviewer's main concern regarding the authors' approach, and what alternative approach might the reviewer suggest?
Summary Of The Paper Review
Summary Of The Paper
The authors propose a novel method to construct prediction intervals with a pre-specified conditional coverage probability. The starting point of the paper is the observation that the length $|\hat{C}(X)|$ and the conditional coverage indicator $\mathbf{1}[Y \in \hat{C}(X)]$ are independent whenever the prediction interval $\hat{C}(X)$ has exact conditional coverage probability. The authors therefore suggest computing the lower and upper bounds of the prediction interval by solving a quantile regression problem augmented with a penalty term that penalizes correlation between $|\hat{C}(X)|$ and $\mathbf{1}[Y \in \hat{C}(X)]$. The authors compare the performance of their procedure and competing methods on simulated data and nine benchmark data sets.
Review
I enjoyed reading this paper. It is very clearly written and makes an interesting contribution. Using quantile regression to construct prediction intervals with (asymptotically) correct conditional coverage probability is well-established. Augmenting this traditional approach with a penalty that encourages independence of the length and the conditional coverage indicator, in order to improve finite-sample performance, is creative. My main concern is that the key reason for the poor finite-sample performance of the vanilla QR approach might not be the violation of the independence property, but simply the variability of the quantile regression estimates themselves. Instead of using classical QR, it would be interesting to see whether weighted/efficient QR (Koenker 2005, Ch. 5.3) already leads to improved conditional coverage. By how much would enforcing the orthogonality condition further improve coverage based on weighted QR?
NIPS
Title Improving Conditional Coverage via Orthogonal Quantile Regression Abstract We develop a method to generate prediction intervals that have a user-specified coverage level across all regions of feature-space, a property called conditional coverage. A typical approach to this task is to estimate the conditional quantiles with quantile regression—it is well-known that this leads to correct coverage in the large-sample limit, although it may not be accurate in finite samples. We find in experiments that traditional quantile regression can have poor conditional coverage. To remedy this, we modify the loss function to promote independence between the size of the intervals and the indicator of a miscoverage event. For the true conditional quantiles, these two quantities are independent (orthogonal), so the modified loss function continues to be valid. Moreover, we empirically show that the modified loss function leads to improved conditional coverage, as evaluated by several metrics. We also introduce two new metrics that check conditional coverage by looking at the strength of the dependence between the interval size and the indicator of miscoverage. 1 Introduction Learning algorithms are increasingly prevalent within consequential real-world systems, where reliability is an essential consideration: confidently deploying learning algorithms requires more than high prediction accuracy in controlled testbeds [1, 2]. Consider, for example, estimating the effects of a drug for a specific person given their demographic information and medical measurements. In such a high-stakes setting, giving a point prediction for the drug’s effect is insufficient; the decision-maker must know what the plausible range of effects for this specific individual. Instance-wise uncertainty quantification in such settings is critical [3–5]. One approach to this problem comes from the quantile regression and prediction interval literature [6–8]; instead of a point prediction, we can return a range of outcomes that represent the plausible response for a given input. We would like these prediction intervals to achieve a pre-specified coverage level (e.g., 90%) for all inputs—that is, across all regions of feature space. Training models to satisfy this validity guarantee is challenging, however, particularly with complex models like neural networks [9, 10]. In this work, we show how to generate prediction intervals that achieve coverage closer to the desired level evenly across all sub-populations. Technically, we achieve this by augmenting the quantile regression loss function with an additional term that promotes appropriately balanced coverage across the feature space. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Formally, consider a regression problem where we are given n training samples {(Xi, Yi)}ni=1, where X ∈ Rp is a feature vector , and Y ∈ R is a response variable. At test time, we observe a feature vector Xn+1 and our goal is to predict the unknown value of Yn+1 and—importantly—to report on its uncertainty. In this work, we represent this uncertainty by constructing a prediction interval Ĉ(Xn+1) ⊆ R that is likely to contain the response Yn+1. In particular, we seek to produce intervals that contain the response with a user-specified probability 1− α that are valid across all regions of feature space: P[Yn+1 ∈ Ĉ(Xn+1) | Xn+1 = x] ≥ 1− α, (1) a property known as conditional coverage. 
Notice that such prediction intervals are both correct in that they satisfy a coverage guarantee and also are adaptive, in that the size of the prediction intervals can change with the difficulty of the inputs: easy inputs give small intervals and hard inputs give large intervals. Returning to our medical example, consider predicting the outcome of a drug from age, gender, blood pressure, and so on. The conditional coverage requirement in (1) asks that intervals are correct for any age, gender, and health status combination. That is, no matter what an individual’s value of the features x, the uncertainty quantification must be valid. The conditional quantiles of Y | X = x are the natural way to produce intervals satisfying (1). Let αlo = α/2 and αhi = 1− α/2. Given the true conditional quantiles, qαlo(x), qαhi(x), we can build an oracle prediction interval satisfying (1) in the following way: C(x) = [qαlo(x), qαhi(x)]. (2) In practice, the conditional quantiles are unknown but can be estimated with quantile regression, yielding the interval Ĉ(x) = [q̂αlo(x), q̂αhi(x)]. This approach is attractive because quantile regression yields intervals that are adaptive to heteroscedasticity without requiring parametric assumptions [11– 13], but these intervals might not satisfy the conditional coverage statement (1), since q̂αlo(x), q̂αhi(x) are merely estimations of the true quantiles [14]. Indeed, we observe in experiments 5 that traditional quantile regression often gives intervals with poor conditional coverage, prompting the present investigation. In this work, we propose a novel regularization scheme to push quantile regression algorithms towards solutions that better satisfies the conditional coverage requirement (1). The core idea is to force the coverage and interval length to be approximately independent, since this independence must hold for the optimal oracle intervals in (2). A method that constructs intervals whose coverage and length are dependent is either sometimes too conservative (generating too wide intervals), sometimes too liberal (yeilding too short intervals), or both. In addition to improved training schemes, we propose two new tools to check the validity of the resulting predictions in a meaningful way. Specifically, in Section 4, we present two new interpretable metrics to asses the violation of conditional coverage, taking advantage of the orthogonality property identified above. We use these (and other) metrics in Section 5 to study our proposal on simulated data and nine real benchmark data sets. We find that our training scheme yields improvements when used together with both a classic [6] and a more recent [15] quantile regression method. A synthetic two-group example We begin with a small synthetic experiment that demonstrates the challenges of constructing prediction intervals with accurate conditional coverage. We generate a dataset with two unbalanced subpopulations: 80% of the samples belong to a majority group and the remaining 20% to a minority group, where the conditional distribution Y | X of the minority group is more dispersed than the majority group. In our experiments, group membership is included as one of the features. See Section S3.1 of the Supplementary Material for a full description of the distribution. As a baseline method, we first fit a quantile neural network model (vanilla QR) by optimizing the pinball loss (see Section 2), attempting to estimate the low αlo = 0.05 and high αhi = 0.95 conditional quantiles. 
The left panel of Figure 1 shows the coverage obtained by the vanilla QR model across training epochs. The coverage on the test data is far lower than suggested by the training data, failing to reach the desired 90% level and increasing as the training progresses. In particular, this gap remains large at the epoch in which the model achieves the minimal loss evaluated on an independent validation set. Here, the empirical test coverage measured over the majority and minority groups is equal to 80% and 68%, and the empirical average lengths evaluated over them are 1.55, 6.45, respectively. Next, we fit our proposed orthogonal QR model (which we will formally introduce in Section 3) on the same data. The coverage rate across epochs is illustrated in the right panel of Figure 1. In contrast to vanilla QR, the training and testing curves of the majority group overlap for the entire training epochs and both approximately reach the desired 90% level. The minority group coverage has similar behavior, with a significantly smaller gap between the two curves compared to vanilla QR. Here, the model that corresponds to the best epoch achieves 86% coverage rate with 1.62 average length on the majority group, and 83% coverage rate with 9.33 average length on the minority group. Importantly, here we are able to train for more epochs before overfitting, which leads to a better final model. To conclude, in this example orthogonal QR prevents overfitting and leads to better conditional coverage. 2 Preliminaries and related work 2.1 Quantile regression The task of estimating the low and high conditional quantiles, q̂αlo(x), q̂αhi(x), can be expressed as a minimization problem taking the following form: (q̂αlo , q̂αhi) = argmin fαlo ,fαhi∈F 1 n n∑ i=1 `α(Yi, fαlo(Xi), fαhi(Xi)). (3) Above, `α is a loss function designed to fit quantiles; we discuss two examples next. The most common loss function for estimating a conditional quantile q̂α is called the pinball loss or check function [6, 9, 16], expressed as ρα(y, ŷ) = { α(y − ŷ) y − ŷ > 0, (1− α)(ŷ − y) otherwise. (4) Observe that for the choice α = 1/2 the above becomes the L1 norm that is known to estimate the conditional median. By setting α 6= 1/2 we get a tilted version of the latter, with a degree controlled by the value of α. In the context of (3), we can set `pbα (y, fαlo(x), fαhi(x)) = ραlo(y, fαlo(x)) + ραhi(y, fαhi(x)) to simultaneously estimate the low and high quantiles. Throughout this paper, we will refer to this procedure as vanilla QR. A recently-developed alternative to the pinball loss is the interval score loss [17], defined as: `intα (y, fαlo(x), fαhi(x)) = (fαhi(x)− fαlo(x)) + 2 α (fαlo(x)− y)1[y < fαlo(x)] + 2 α (y − fαhi(x))1[y > fαhi(x)]. (5) Note that the left-most term encourages short intervals and the two remaining components promote intervals with the right coverage. To improve statistical efficiency, it is recommended by [15] to simultaneously estimate all conditional quantiles by minimizing the following empirical risk function: Eα∼U [0,1][`intα (·)].1 This is the approach we take in our experiments. Note that [15] proposed additional learning schemes for improving efficiency, such as group batching and ensemble learning; see also [18]. These ideas are complementary to our proposal and may further improve the performance of our proposed orthogonal QR. Quantile regression is a large, active area of research, and we finish by pointing out a few representative strands of work in the literature. 
Estimates of conditional quantile functions using the pinball loss with specific models are proven to be asymptotically consistent under regularity conditions [16]. The work reported in [11–13] offer a non-parametric version of quantile regression. This line of research was further developed by [19] that presented a generalization to additive models that are non-parametric. The pinball loss and interval score loss can be also used to estimate conditional probability distribution [17, 20]. Nevertheless, quantiles can be estimated in other ways rather than minimizing these two loss functions. These include quantile random forest [8] and the method proposed in [21] that iteratively estimates the conditional quantile through the Majorize-Minimize algorithm. Besides the regularization term we propose, there are other suggested penalties that are useful in different situations, such as using sparse modeling in high dimensional response [22–24]. 2.2 Conformal inference Conformal inference [25] is a framework for building a prediction intervals that provably attain a weaker marginal coverage property: P[Yn+1 ∈ Ĉ(Xn+1)] ≥ 1− α. (6) Importantly, one can guarantee that this holds for any joint distribution PXY , sample size n, and predictive algorithm. In contrast to (1), the probability statement in (6) is marginal and taken over all the training and test samples {(Xi, Yi)}n+1i=1 . For example, in the context of the data from Figure 1, intervals that satisfy (6) would be allowed to undercover the minority group and overcover the majority group. Therefore, the statement in (6) is much weaker than that in (1). Yet, the former can be achieved for any distribution whereas the latter can not be achieved for badly-behaved distributions; see [14, 26]. While variants of the guarantee in (6) are possible [27–29], achieving coverage exactly balanced across a continuous feature cannot be done without further assumptions. Much recent work in conformal inference with quantile regression attempts to generate intervals that are adaptive to such heteroscedasticity so that they approximately achieve conditional coverage in (1), while ensuring that 1− α marginal coverage in (6) is exactly achieved [7, 30–35]. The experiments we conduct show that our proposed Orthogonal QR method can be used in combination with conformal prediction to improve the conditional coverage property while attaining valid marginal coverage. 3 Proposed method: orthogonal quantile regression 3.1 Formulating the learning scheme This section presents a modification of the pinball loss or interval score loss in order to fit models with improved conditional coverage. Denote by V = 1[Y ∈ Ĉ(X)] the coverage identifier, and by L = |Ĉ(X)| the interval’s length. Our proposal is motivated by the following observation. 1The code is hosted at https://github.com/YoungseogChung/calibrated-quantile-uq Proposition 1 (Independence of width and coverage). Let (X,Y ) be a sample drawn from PXY , and let X be the support of X . If the distribution of Y | X = x is continuous for all x ∈ X , and the fixed, deterministic interval-valued function Ĉ(X) satisfies P[Y ∈ Ĉ(X)|X = x] = 1− α for all x ∈ X and some α ∈ (0, 1), then the interval satisfies V ⊥⊥ L. In particular, the above implies that the length of an interval L = qαhi(X)− qαlo(X) constructed by the true low and high quantiles is independent of the coverage identifier V , as stated next. Corollary 1. 
Under the assumptions of Proposition 1, an interval constructed by the true conditional quantiles satisfies V ⊥⊥ L. We note that an earlier, limited version of this observation appears in [36] in the context conditional coverage for classification problems. All proofs are presented in Section S2 of the Supplementary Material. Since intervals constructed by the true conditional quantiles obey the independence property, forcing the fitted model to approximately satisfy this property during training can result in better conditional coverage for future test points. This leads us to our proposed orthogonal QR objective: (q̂αlo , q̂αhi) = argmin fαlo ,fαhi∈F 1 n n∑ i=1 `α(Yi, fαlo(Xi), fαhi(Xi)) + γR(L,V). (7) where `α is either ` pb α or `intα , L ∈ Rn is a vector that contains Li = fαhi(Xi)− fαlo(Xi) = |Ĉ(Xi)| as its elements, and V ∈ Rn is a vector with entries Vi = 1[Yi ∈ Ĉ(Xi)]. (To facilitate training with gradient methods, in practice we use a smooth approximation to the indicator function; see Section S1.1 of the Supplementary Material.) The functionR(L,V) ∈ R+ returns a real-valued score that quantifies the strength of the dependence between L and V , where a large value indicates that the two are more dependent; we discuss specific choices in Section 3.2. The regularization strength is controlled by the hyperparameter γ. In Supplementary Section S4.1 we explain how this parameter is determined, and in Supplementary Section S5.1.3 we demonstrate the effect of this parameter on the performance of our method. Lastly, we point out that our proposal falls into the broader theme of fitting models while enforcing conditional independence properties, a goal that is important for algorithmic fairness [e.g., 37–39]. This work aims to achieve uncertainty estimates that are equally good across all feature space, a prediction interval analog to the goal of [40]. 3.2 The orthogonality loss We now turn to the question of choosing the specific dependence loss penalty,R in (7). In principle, we could use any dependence measure from the many in the literature: chi-squared tests [41], Pearson’s correlation, distance correlation [42], Kolmogorov-Smirnov statistic [43], Randomized Dependence Coefficient [44], Hilbert-Schmidt independence criterion (HSIC) [45], and so on. In this work we focus on Pearson’s correlation and HSIC which are described hereafter. The Pearson’s correlation measures the linear dependency between two random variables. Here, the loss is defined as: Rcorr(L, V ) = ∣∣∣∣∣ Cov(L, V )√Var(L)√Var(V ) ∣∣∣∣∣ . (8) The advantages of this choice are its simplicity and the minimal computational burden. Next, HSIC is a more sophisticated, nonlinear complement to the Pearson’s correlation measure, which can detect arbitrary complex relationships between the coverage identifier and the interval length. It is an analog of the well-known Maximum Mean Discrepancy (MMD) distance [46], but is a measure of dependence. The idea is that while Rcorr(L, V ) = 0 does not necessarily imply that L and V are independent, having Rcorr(g(L), h(V )) = 0 for every continuous bounded functions g, h guarantees the independence property [47]. While it is impossible to sweep over all possible continuous bounded functions, HSIC offers a tractable solution, guaranteeing that HSIC(L, V ) = 0 if and only if L ⊥⊥ V [45]. In our work, we utilize this measure, and define the orthogonality loss as RHSIC(L, V ) = √ HSIC(L, V ) (taking the square root to magnify small values). This choice is similar to the one advocated in [48]. 
With these choices of R, we now show that the true conditional quantiles are a solution for the orthogonal QR problem. Theorem 1 (Validity of orthogonal quantile regression). Suppose Y | X = x follows a continuous distribution for each x ∈ X , and suppose that qαlo(X), qαhi(X) ∈ F . Consider the infinite-data version of the orthogonal QR optimization in (7): argmin fαlo ,fαhi∈F E [ `α(Y, fαlo(X), fαhi(X)) + γR ( |Ĉ(X)|, I[Y ∈ Ĉ(X)] )] , where Ĉ(X) = [fαlo(X), fαhi(X)], `α is either ` pb α or ` int α , and R is Rcorr or RHSIC. Then, true conditional quantiles are solutions to the above optimization problem. Moreover, if the solution is unique for γ = 0 then the solution is unique for all γ > 0. The uniqueness part of the theorem means that whenever vanilla QR is guaranteed to give the correct quantiles in the large-sample limit, then orthogonal QR will give the same (correct) solution. The result continues to hold for any dependence measure R that achieves its minimum value for any two independent variables, a basic property that all dependence measures that we are aware of satisfy. See the proof in Section S2 of the Supplementary Material for further details. 4 Metrics for assessing conditional coverage We next discuss several quantitative measures of conditional coverage. We will introduce two new metrics for conditional coverage, and then review one existing proposal from the literature. Lastly, we will discuss two ad-hoc metrics to help us compare orthogonal QR with vanilla QR in our upcoming simulation experiments. 4.1 Two new metrics for conditional coverage Pearson’s correlation: As previewed in the previous section, the Pearson correlation between the interval size and the indicator of coverage (i.e.,Rcorr from (8)) is a simple, effective way to measure conditional coverage. However, to the best of our knowledge, we are the first to leverage it for this purpose. HSIC: Similarly, we consider the HSIC measure of dependence between the interval size and the indicator of coverage (i.e.,RHSIC above). We estimate this metric as described in [40].2 As before, to our knowledge this has never been leveraged as a metric to asses conditional coverage. 4.2 Other metrics for our empirical evaluations ∆WSC: As an additional measure of conditional coverage, we evaluate the coverage over the worstslab as proposed in [49].3 To avoid a case where an improvement in this quantity is obtained by naively enlarging all prediction intervals, we suggest a variant that we call ∆WSC. This metric is defined as the absolute difference between the worst-slab coverage and the marginal coverage, both evaluated on test data I: ∆WSC = ∣∣∣WSC({(Xi, Yi)}i∈I ; Ĉ)− Coverage({(Xi, Yi)}i∈I ; Ĉ)∣∣∣ . Above, Coverage ( {(Xi, Yi)}i∈I ; Ĉ ) = 1|I| ∑ i∈I 1[Yi ∈ Ĉ(Xi)] where Ĉ(x) is a prediction interval method. Importantly, a uniform increase of the length of all intervals will not deceive the ∆WSC measure as it will remain fixed. ∆ILS-Coverage: We next consider a measure that checks whether the intervals made larger by orthogonal QR compared to vanilla QR are necessary for improving the conditional coverage. In general, suppose we are given two algorithms A1 and A2 for constructing prediction intervals. Let ∆Li = |ĈA1(Xi)| − |ĈA2(Xi)| 2The code is available at https://github.com/danielgreenfeld3/XIC 3We used the implementation from https://github.com/msesia/arc/ be the difference between the interval length |ĈA(Xi)| obtained by A1 and A2, evaluated on the same test point Xi. 
Next, let q0.9({∆Li}i∈I) be the 90% empirical quantile of {∆Li}i∈I . Then, let ILS be the 10% of samples whose length increased the most: ILS = {i : ∆Li ≥ q0.9({∆Li}i∈I), i ∈ I}. With this notation in place, we propose the ∆ILS-Coverage metric: ∆ILS-Coverage = ∣∣∣Coverage({(Xi, Yi)}i∈ILS ; ĈAk)− Coverage({(Xi, Yi)}i∈I ; ĈAk)∣∣∣. In words, the above is the absolute difference between the coverage over the ILS samples and the marginal coverage, evaluated for each algorithm Ak, k = 1, 2. A smaller value for k = 1 indicates that the points with very different size under A1 and A2 are handled better by A1. ∆Node-Coverage: As a variant of ∆ILS-Coverage, we identify a sub-population characterized by a small set of features such that the two algorithmsA1 andA2 produce very different intervals, and check the coverage on this region. To this end, we label the ILS samples as the positive class and fit a binary classifier formulated as a decision tree, aiming to predict whether a sample X belongs to the ILS set. Denote the set of tree nodes in depth at most three that contain at least 5% of the samples by {Nodej}j , where Nodej ⊆ I. Next, let ND be the set of indices of the samples that belong to the node that maximizes the following ratio: |Nodej ∩ ILS| / |Nodej \ ILS|. Finally, given a method for constructing prediction intervals Ĉ(·), compute the distance between the coverage over the ND samples and the marginal coverage, formulated as ∆Node-Coverage = ∣∣∣Coverage({(Xi, Yi)}i∈ND ; Ĉ)− Coverage({(Xi, Yi)}i∈I ; Ĉ)∣∣∣ . 5 Experiments Armed with the performance metrics described in Section 4, we now systematically quantify the effectiveness of the proposed independence penalty when combined with baseline quantile regression methods. In all experiments, we apply a deep neural network as a base model for constructing prediction intervals with 1 − α = 0.9 coverage level. Section S4 of the Supplementary Material gives the details about the network architecture, training strategy, and details about this experimental setup. Software implementing the proposed method and reproducing our experiments can be found at https://github.com/Shai128/oqr 5.1 A synthetic two-group setting We return to the synthetic two-group setting previewed in Section 1, but first provide more details about the data. In this data set, the difference in distribution between the majority and minority groups is controlled by modifying the noise level of the conditional distribution Y | X of the minority group. Furthermore, X has 50 coordinates, the first of which indicates the group membership. Section S3.1 of the Supplementary Material contains more details about the generation of this synthetic data. To analyze the performance of our method, we generate 7000 i.i.d. samples and repeat the following experiment for 30 random train/validation/test splits of the data. We fit a quantile regression model with pinball loss on 5040 training samples, where we tune the number of epochs on an independent validation set that contains 560 samples. The remaining 1400 samples are used to test the model’s performance. We pre-processed the feature vector using z-score standardization, and normalized the features and response variables to have a zero mean and a unit variance. In Table 1 we report the average coverage, length, and conditional coverage metrics for vanilla QR and orthogonal QR for two minority-group noise levels. 
The ∆ILS-Coverage and ∆Node-Coverage metrics are not reported for this experiment, since they both essentially correspond to the minority group's coverage level, which is already given in the table. The intervals constructed by the baseline vanilla QR undercover both the majority and the minority group, but this tendency is more severe for the latter. By contrast, the regularized model achieves similar coverage rates for the two groups, with levels that are closer to the nominal 90%. This is also reflected by the improvement in the Pearson's correlation, HSIC, and ∆WSC metrics of conditional coverage. Overall, orthogonal QR gives wider intervals for the minority group, which is anticipated since the coverage of the baseline model is far below the nominal rate. As for the majority group, our proposed training gives intervals of about the same length as the baseline model while achieving a considerably higher coverage rate; in fact, the regularized model constructs even shorter intervals in the high-noise case. We note that this performance continues to hold when the interval score loss is used in place of the pinball loss; see Section S5.1.1 of the Supplementary Material. Lastly, we also examine the effect of our regularization on weighted QR [50], and display the results in Section S5.1.2 of the Supplementary Material.

5.2 Real data

Next, we compare the performance of the proposed orthogonal QR to vanilla QR on nine benchmark data sets, as in [15, 30]: Facebook comment volume variants one and two (facebook_1, facebook_2), blog feedback (blog_data), physicochemical properties of protein tertiary structure (bio), forward kinematics of an 8-link robot arm (kin8nm), condition-based maintenance of naval propulsion plants (naval), and medical expenditure panel surveys number 19-21 (meps_19, meps_20, and meps_21). See Section S3.2 of the Supplementary Material for details about these data sets. We follow the experimental protocol and training strategy described in Section 5.1. We randomly split each data set into disjoint training (54%), validation (6%), and testing (40%) sets. We normalized the features and response variables to have zero mean and unit variance, except for the facebook_1, facebook_2, blog_data, and bio data sets, for which we log-transform Y before the standardization.

Table 2 summarizes the performance of vanilla QR and orthogonal QR. Our proposed method consistently improves the conditional coverage, as measured by the Pearson's correlation, HSIC, and ∆WSC metrics, even though the latter two are not optimized directly by orthogonal QR. In Figure 2, we show the coverage as a function of the interval length evaluated on the meps_21 data set, and find that orthogonal QR (in orange) is closer to the nominal 90% level than the baseline method. Our penalty also improves the ∆Node-Coverage in most data sets (see Table 2), indicating that the baseline model tends to undercover the response of at least one sub-population. Turning to statistical efficiency, observe that the intervals produced by the regularized models tend to be wider than those of the baseline method, which is needed to better achieve conditional coverage. We further probe this phenomenon by checking the ∆ILS-Coverage metric, which shows that the regions with wider intervals now have better coverage. In Section S5.2 of the Supplementary Material we provide additional experiments in which the pinball loss is replaced by the recently introduced interval score loss.
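For reference, the interval score loss just mentioned (introduced in Section 2.1) takes only a few lines to evaluate; the sketch below is our own NumPy rendering of that formula, not the authors' code, and the argument names are illustrative.

```python
import numpy as np

def interval_score_loss(y, f_lo, f_hi, alpha=0.1):
    """Interval score loss for a (1 - alpha)-level interval [f_lo, f_hi]:
    the interval width plus 2/alpha times the distance by which y falls
    outside the interval (zero when y is covered)."""
    width = f_hi - f_lo
    below = (2.0 / alpha) * np.maximum(f_lo - y, 0.0)   # penalty when y < f_lo
    above = (2.0 / alpha) * np.maximum(y - f_hi, 0.0)   # penalty when y > f_hi
    return width + below + above
```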
The effect of the decorrelation penalty there is similar to the one described above. Moreover, in Section S5.2 we also compare the decorrelation and HSIC penalties for independence, and show that in most data sets the decorrelation penalty achieves better performance across all metrics of conditional independence, at the cost of producing wider intervals. For completeness, we also present the results obtained by quantile regression forests [8] on the real data sets in Supplementary Table S15.

Conformalized quantile regression results. In the previous experiments, the quantile regression methods tend to (marginally) undercover the response variables. This limitation is easily remedied by combining vanilla QR or orthogonal QR with conformalized quantile regression [30], which adjusts the estimated intervals to exactly achieve the marginal coverage property in (6). Table 3 summarizes the results, demonstrating that by conformalizing the intervals our proposed method precisely achieves the desired marginal coverage while improving the conditional coverage of the baseline model, as measured by the Pearson's correlation, HSIC, ∆WSC, and ∆Node-Coverage metrics. The two independence metrics indicate that even after adjusting the intervals to achieve marginal coverage, our method still results in improved independence between the coverage event and the interval length. We note that the ∆WSC and ∆Node-Coverage metrics show a more muted improvement compared to the setting without conformalization, since the conformalization step smooths out the coverage to some extent. Further details regarding this conformalization setting are given in Section S4.4 of the Supplementary Material.

6 Conclusion

In this work we presented the orthogonal QR approach for achieving coverage closer to the desired level evenly across all sub-populations in the setting of quantile regression algorithms. A technical limitation of our method is the use of large batches during training, required to effectively detect the dependence between the interval lengths and coverage events. In our experiments we focus on i.i.d. data, but we believe that the orthogonal loss can be beneficial beyond this setting, such as for time-series data, which we hope to investigate in future work. A related future direction is to encourage independence between the coverage event and a function of the feature vector other than the interval length; by the same logic as our proposal, this independence would also hold for the true quantiles. A clever choice may better capture the relationship between X and the coverage obtained by the model, and further improve conditional coverage. As a concluding remark, while empirical evidence shows that our orthogonal QR approach produces intervals that represent the uncertainty in subgroups more reliably than standard methods, it does not guarantee valid coverage across the entire feature space given access to only a finite sample. Such a guarantee may be necessary for ensuring that predictions are unbiased against a minority group of interest, indexed by an individual's gender or race, for example. To alleviate this, one can combine our method with the equalized coverage framework [28], which builds upon conformal inference to achieve the desired coverage for pre-defined sub-populations, a weaker but achievable demand compared to conditional coverage.

Acknowledgments and Disclosure of Funding

S.F. and Y.R. were supported by the ISRAEL SCIENCE FOUNDATION (grant No. 729/21). Y.R.
also thanks the Career Advancement Fellowship, Technion, for providing research support. We thank John Cherian for comments on an earlier version of this manuscript.
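To make the core training objective in (7)-(8) concrete, the following is a minimal PyTorch-style sketch of the pinball loss with the Pearson decorrelation penalty; the sigmoid-based soft coverage indicator, the hyperparameter defaults, and all names are our own illustrative assumptions rather than the released implementation at https://github.com/Shai128/oqr.

```python
import torch

def pinball_loss(y, q_lo, q_hi, alpha=0.1):
    """Sum of pinball (check) losses for the alpha/2 and 1 - alpha/2 quantiles."""
    def rho(y, q, a):
        diff = y - q
        return torch.maximum(a * diff, (a - 1.0) * diff)
    return rho(y, q_lo, alpha / 2) + rho(y, q_hi, 1 - alpha / 2)

def orthogonal_qr_loss(y, q_lo, q_hi, alpha=0.1, gamma=0.5, temp=50.0, eps=1e-8):
    """Pinball loss plus gamma * |corr(interval length, soft coverage indicator)|.
    A tempered sigmoid replaces the hard indicator 1[y in C(x)] so the penalty
    is differentiable; the exact smoothing used by the authors may differ."""
    base = pinball_loss(y, q_lo, q_hi, alpha).mean()
    length = q_hi - q_lo                                    # L_i = |C(X_i)|
    soft_cover = torch.sigmoid(temp * (y - q_lo)) * \
                 torch.sigmoid(temp * (q_hi - y))           # ~ 1[Y_i in C(X_i)]
    l_c = length - length.mean()
    v_c = soft_cover - soft_cover.mean()
    corr = (l_c * v_c).mean() / (l_c.std() * v_c.std() + eps)
    return base + gamma * corr.abs()
```

During training, q_lo and q_hi would be the two outputs of the quantile network on a mini-batch; as noted in the conclusion, the batch must be fairly large for the correlation estimate to be informative.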
1. What is the focus of the paper in terms of quantile regression?
2. What are the strengths of the proposed approach, particularly in improving conditional coverage?
3. Do you have any concerns or questions regarding the method's application or effectiveness?
4. Are there any limitations to the proposed approach that should be considered?
Summary Of The Paper Review
Summary Of The Paper
This paper discusses a regularization scheme that aims to improve the conditional coverage of prediction intervals that are learned while performing quantile regression for a pair of quantiles. The empirical experiments show the effect of applying the regularization scheme by demonstrating improvement across a suite of metrics.

Review
This paper is clearly written and well motivated. Learning accurate conditional quantiles and prediction intervals (PIs) is a relevant problem, especially to the general UQ community. The proposed method is based on the findings presented in Proposition 1, which is applied via a regularization term as stated in equation (7). I believe these are relevant and interesting findings that provide insights into the problem of learning accurate quantiles and PIs. I have some questions about the method, which are stated below in the Limitations section.
NIPS
1. What is the focus and contribution of the paper on quantile regression?
2. What are the strengths of the proposed approach, particularly in terms of its technical correctness and relevance to uncertainty quantification?
3. What are the weaknesses of the paper regarding the choice of penalty factor and sensitivity to its choice?
4. Do you have any concerns about the comparison with other quantile estimators and the use of other metrics to complement the results?
5. Are there any minor issues or suggestions you have for improving the paper, such as incorrect references or lack of explanation of certain terms?
Summary Of The Paper Review
Summary Of The Paper
The paper shows a new loss function that can be used for quantile regression. The key idea is to combine loss functions often used in quantile regression with a new penalty that enforces conditional coverage. Results show improvements according to several metrics.

Review
The language of the paper is very clear. The ideas and math are also technically correct. Quantile regression is relevant for uncertainty quantification, and thus the contributions are important. The key to the development of the paper (in particular of the loss function) is Proposition 1, which is neat. Although similar results have been used in other contexts, I have never seen it being used for quantile regression. The empirical results are encouraging.

Major comments: How is gamma (the penalty factor) chosen in practice? How sensitive are the empirical results to such a choice? I think this is a very important point that is not addressed in the paper. It is natural that the method will outperform others in terms of the Pearson and HSIC metrics (because they are explicitly used in the loss function); the improvements with respect to the other metrics are more appealing. However, if gamma is large, wouldn't it be the case that all metrics computed in the experiments would be better for OQR (because they basically measure conditional coverage), even though a large gamma does not guarantee that the quantiles would be well estimated? That is, a large gamma enforces good conditional coverage, but by itself it would not enforce the quantiles to be correctly estimated; other estimates also satisfy conditional coverage. If this is the case, I feel that other metrics would be necessary to complement the results (such as the pinball loss). I also miss comparisons to other quantile estimators that don't directly target the pinball loss, such as quantile regression forests.

Minor comments: Is the reference to the interval score loss "[17]" correct? I think it should be "[15]". It would also be useful to explain what the worst-slab coverage means.
NIPS
Title Maximum Margin Interval Trees

Abstract Learning a regression function using censored or interval-valued output data is an important problem in fields such as genomics and medicine. The goal is to learn a real-valued prediction function, and the training output labels indicate an interval of possible values. Whereas most existing algorithms for this task are linear models, in this paper we investigate learning nonlinear tree models. We propose to learn a tree by minimizing a margin-based discriminative objective function, and we provide a dynamic programming algorithm for computing the optimal solution in log-linear time. We show empirically that this algorithm achieves state-of-the-art speed and prediction accuracy in a benchmark of several data sets.

1 Introduction

In the typical supervised regression setting, we are given a set of learning examples, each associated with a real-valued output. The goal is to learn a predictor that accurately estimates the outputs, given new examples. This fundamental problem has been extensively studied and has given rise to algorithms such as Support Vector Regression (Basak et al., 2007). A similar, but far less studied, problem is that of interval regression, where each learning example is associated with an interval (y̲i, ȳi), indicating a range of acceptable output values, and the expected predictions are real numbers. Interval-valued outputs arise naturally in fields such as computational biology and survival analysis. In the latter setting, one is interested in predicting the time until some adverse event, such as death, occurs. The available information is often limited, giving rise to outputs that are said to be either un-censored (−∞ < y̲i = ȳi < ∞), left-censored (−∞ = y̲i < ȳi < ∞), right-censored (−∞ < y̲i < ȳi = ∞), or interval-censored (−∞ < y̲i < ȳi < ∞) (Klein and Moeschberger, 2005). For instance, right-censored data occur when all that is known is that an individual is still alive after a period of time. Another recent example is from the field of genomics, where interval regression was used to learn a penalty function for changepoint detection in DNA copy number and ChIP-seq data (Rigaill et al., 2013). Despite the ubiquity of this type of problem, there are surprisingly few existing algorithms that have been designed to learn from such outputs, and most are linear models. Decision tree algorithms were proposed in the 1980s with the pioneering work of Breiman et al. (1984) and Quinlan (1986). Such algorithms rely on a simple framework, where trees are grown by recursive partitioning of leaves, each time maximizing some task-specific criterion. Advantages of these algorithms include the ability to learn non-linear models from both numerical and categorical data of various scales, and their relatively low training time complexity. In this work, we extend the work of Breiman et al. (1984) to learning non-linear interval regression tree models.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

1.1 Contributions and organization

Our first contribution is Section 3, in which we propose a new decision tree algorithm for interval regression. We propose to partition leaves using a margin-based hinge loss, which yields a sequence of convex optimization problems. Our second contribution is Section 4, in which we propose a dynamic programming algorithm that computes the optimal solution to all of these problems in log-linear time.
In Section 5 we show that our algorithm achieves state-of-the-art prediction accuracy in several real and simulated data sets. In Section 6 we discuss the significance of our contributions and propose possible future research directions. An implementation is available at https://git.io/mmit.

2 Related work

The bulk of related work comes from the field of survival analysis. Linear models for censored outputs have been extensively studied under the name accelerated failure time (AFT) models (Wei, 1992). Recently, L1-regularized variants have been proposed to learn from high-dimensional data (Cai et al., 2009; Huang et al., 2005). Nonlinear models for censored data have also been studied, including decision trees (Segal, 1988; Molinaro et al., 2004), Random Forests (Hothorn et al., 2006) and Support Vector Machines (Pölsterl et al., 2016). However, most of these algorithms are limited to the case of right-censored and un-censored data. In contrast, in the interval regression setting, the data are either left-, right- or interval-censored. To the best of our knowledge, the only existing nonlinear model for this setting is the recently proposed Transformation Tree of Hothorn and Zeileis (2017). Another related method, which shares great similarity with ours, is the L1-regularized linear model of Rigaill et al. (2013). Like our proposed algorithm, their method optimizes a convex loss function with a margin hyperparameter. Nevertheless, one key limitation of their algorithm is that it is limited to modeling linear patterns, whereas our regression tree algorithm is not.

3 Problem

3.1 Learning from interval outputs

Let S def= {(x1, y1), ..., (xn, yn)} ∼ Dn be a data set of n learning examples, where xi ∈ Rp is a feature vector, yi def= (y̲i, ȳi), with y̲i, ȳi ∈ R and y̲i < ȳi, are the lower and upper limits of a target interval, and D is an unknown data-generating distribution. In the interval regression setting, a predicted value is only considered erroneous if it is outside of the target interval. Formally, let ℓ : R → R be a function and define φℓ(x) def= ℓ[(x)+] as its corresponding hinge loss, where (x)+ is the positive part function, i.e., (x)+ = x if x > 0 and (x)+ = 0 otherwise. In this work, we will consider two possible hinge loss functions: the linear one, where ℓ(x) = x, and the squared one, where ℓ(x) = x². Our goal is to find a function h : Rp → R that minimizes the expected error on data drawn from D:

minimize_h  E_{(xi,yi)∼D} [ φℓ(−h(xi) + y̲i) + φℓ(h(xi) − ȳi) ].

Notice that, if ℓ(x) = x², this is a generalization of the mean squared error to interval outputs. Moreover, this can be seen as a surrogate to a zero-one loss that measures whether a predicted value lies within the target interval (Rigaill et al., 2013).

3.2 Maximum margin interval trees

We will seek an interval regression tree model T : Rp → R that minimizes the total hinge loss on the data set S:

C(T) def= Σ_{(xi,yi)∈S} [ φℓ(−T(xi) + y̲i + ε) + φℓ(T(xi) − ȳi + ε) ],   (1)

where ε ∈ R+0 is a margin hyperparameter introduced to improve regularity (see the supplementary material for details). A decision tree is an arrangement of nodes and leaves. The leaves are responsible for making predictions, whereas the nodes guide the examples to the leaves based on the outcome of some boolean-valued rules (Breiman et al., 1984). Let T̃ denote the set of leaves in a decision tree T. Each leaf τ ∈ T̃ is associated with a set of examples Sτ ⊆ S, for which it is responsible for making predictions.
The sets Sτ obey the following properties: S = ∪_{τ∈T̃} Sτ and Sτ ∩ Sτ′ ≠ ∅ ⇔ τ = τ′. Hence, the contribution of a leaf τ to the total loss of the tree C(T), given that it predicts µ ∈ R, is

Cτ(µ) def= Σ_{(xi,yi)∈Sτ} [ φℓ(−µ + y̲i + ε) + φℓ(µ − ȳi + ε) ]   (2)

and the optimal predicted value for the leaf is obtained by minimizing this function over all µ ∈ R. As in the CART algorithm (Breiman et al., 1984), our tree growing algorithm relies on recursive partitioning of the leaves. That is, at any step of the tree growing algorithm, we obtain a new tree T′ from T by selecting a leaf τ0 ∈ T̃ and dividing it into two leaves τ1, τ2 ∈ T̃′, such that Sτ0 = Sτ1 ∪ Sτ2 and τ0 ∉ T̃′. This partitioning results from applying a boolean-valued rule r : Rp → B to each example (xi, yi) ∈ Sτ0 and sending it to τ1 if r(xi) = True and to τ2 otherwise. The rules that we consider are threshold functions on the value of a single feature, i.e., r(xi) def= “xij ≤ δ”. This is illustrated in Figure 1. According to Equation (2), for any such rule, the total hinge losses for the examples that are sent to τ1 and τ2 are

Cτ1(µ) = ←Cτ0(µ | j, δ) def= Σ_{(xi,yi)∈Sτ0 : xij ≤ δ} [ φℓ(−µ + y̲i + ε) + φℓ(µ − ȳi + ε) ]   (3)

Cτ2(µ) = →Cτ0(µ | j, δ) def= Σ_{(xi,yi)∈Sτ0 : xij > δ} [ φℓ(−µ + y̲i + ε) + φℓ(µ − ȳi + ε) ].   (4)

The best rule is the one that leads to the smallest total cost C(T′). This rule, as well as the optimal predicted values for τ1 and τ2, are obtained by solving the following optimization problem:

argmin_{j, δ, µ1, µ2} [ ←Cτ0(µ1 | j, δ) + →Cτ0(µ2 | j, δ) ].   (5)

In the next section we propose a dynamic programming algorithm for this task.

4 Algorithm

First note that, for a given j, δ, the optimization separates into two convex minimization sub-problems, each of which amounts to minimizing a sum of convex loss functions:

min_{j, δ, µ1, µ2} [ ←Cτ(µ1 | j, δ) + →Cτ(µ2 | j, δ) ] = min_{j, δ} [ min_{µ1} ←Cτ(µ1 | j, δ) + min_{µ2} →Cτ(µ2 | j, δ) ].   (6)

We will show that if there exists an efficient dynamic program Ω which, given any set of hinge loss functions defined over µ, computes their sum and returns the minimum value, along with a minimizing value of µ, then the minimization problem of Equation (6) can be solved efficiently. Observe that, although there is a continuum of possible values for δ, we can limit the search to the values of feature j that are observed in the data (i.e., δ ∈ {xij ; i = 1, ..., n}), since all other values do not lead to different configurations of Sτ1 and Sτ2. Thus, there are at most nj ≤ n unique thresholds to consider for each feature. Let these thresholds be δj,1 < ... < δj,nj. Now, let Φj,k be the set that contains all the losses φℓ(−µ + y̲i + ε) and φℓ(µ − ȳi + ε) for which we have (xi, yi) ∈ Sτ0 and xij = δj,k. Since we now only consider a finite number of δ-values, it follows from Equation (3) that one can obtain ←Cτ(µ | j, δj,k) from ←Cτ(µ | j, δj,k−1) by adding all the losses in Φj,k. Similarly, one can obtain →Cτ(µ | j, δj,k) from →Cτ(µ | j, δj,k−1) by removing all the losses in Φj,k (see Equation (4)). This, in turn, implies that min_µ ←Cτ(µ | j, δj,k) = Ω(Φj,1 ∪ ... ∪ Φj,k) and min_µ →Cτ(µ | j, δj,k) = Ω(Φj,k+1 ∪ ... ∪ Φj,nj). Hence, the cost associated with a split on each threshold δj,k is given by:

δj,1 : Ω(Φj,1) + Ω(Φj,2 ∪ ··· ∪ Φj,nj)
...
δj,i : Ω(Φj,1 ∪ ··· ∪ Φj,i) + Ω(Φj,i+1 ∪ ··· ∪ Φj,nj)
...
δj,nj−1 : Ω(Φj,1 ∪ ··· ∪ Φj,nj−1) + Ω(Φj,nj)   (7)

and the best threshold is the one with the smallest cost.
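To make Equations (1)-(7) concrete, here is a small NumPy sketch of the interval hinge loss, the leaf cost, and a naive best-split search over the observed thresholds of one feature; all function names and defaults are our own, and this brute-force search re-solves each sub-problem from scratch, which is precisely the inefficiency that the dynamic program of Section 4 removes. For the linear hinge, each leaf cost is a convex piecewise-linear function whose minimum is attained at one of the breakpoints y̲i + ε or ȳi − ε, which the naive minimizer below exploits.

```python
import numpy as np

def hinge(x, kind="linear"):
    """phi_ell(x) = ell[(x)_+], with ell(x) = x (linear) or ell(x) = x**2 (squared)."""
    pos = np.maximum(x, 0.0)
    return pos if kind == "linear" else pos ** 2

def leaf_cost(mu, lower, upper, eps=0.1, kind="linear"):
    """Total hinge loss C_tau(mu) of predicting mu for target intervals [lower_i, upper_i]."""
    return hinge(-mu + lower + eps, kind).sum() + hinge(mu - upper + eps, kind).sum()

def best_constant(lower, upper, eps=0.1):
    """Naive minimizer of the linear-hinge leaf cost: the cost is convex and
    piecewise linear, so its minimum is attained at one of the breakpoints."""
    candidates = np.concatenate([lower + eps, upper - eps])
    costs = np.array([leaf_cost(mu, lower, upper, eps) for mu in candidates])
    best = costs.argmin()
    return candidates[best], costs[best]

def best_split(x_j, lower, upper, eps=0.1):
    """Naive search over the observed thresholds of feature j (Equation (7)).
    Returns the smallest total cost and the corresponding threshold delta."""
    thresholds = np.unique(x_j)[:-1]   # the largest value would give an empty right leaf
    best = (np.inf, None)
    for delta in thresholds:
        left = x_j <= delta
        _, c_left = best_constant(lower[left], upper[left], eps)
        _, c_right = best_constant(lower[~left], upper[~left], eps)
        if c_left + c_right < best[0]:
            best = (c_left + c_right, delta)
    return best
```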
Note that, in contrast with the other thresholds, δj,nj need not be considered, since it leads to an empty leaf. Note also that, since Ω is a dynamic program, one can efficiently compute Equation (7) by using Ω twice, from the top down for the first column and from the bottom up for the second. Below, we propose such an algorithm.

4.1 Definitions

A general expression for the hinge losses φℓ(−µ + y̲i + ε) and φℓ(µ − ȳi + ε) is φℓ(si(µ − yi) + ε), where si = −1 or 1, respectively. Now, choose any convex function ℓ : R → R and let

Pt(µ) def= Σ_{i=1}^{t} φℓ(si(µ − yi) + ε)   (8)

be a sum of t hinge loss functions. In this notation, Ω(Φj,1 ∪ ... ∪ Φj,i) = min_µ Pt(µ), where t = |Φj,1 ∪ ... ∪ Φj,i|.

Observation 1. Each of the t hinge loss functions has a breakpoint at yi − si ε, where it transitions from a zero function to a non-zero one if si = 1, and the converse if si = −1.

For the sake of simplicity, we will now consider the case where these breakpoints are all different; the generalization is straightforward, but would needlessly complicate the presentation (see the supplementary material for details). Now, note that Pt(µ) is a convex piecewise function that can be uniquely represented as:

Pt(µ) = pt,1(µ)   if µ ∈ (−∞, bt,1]
        ...
        pt,i(µ)   if µ ∈ (bt,i−1, bt,i]
        ...
        pt,t+1(µ) if µ ∈ (bt,t, ∞)   (9)

where we call pt,i the ith piece of Pt and bt,i the ith breakpoint of Pt (see Figure 2 for an example). Observe that each piece pt,i is the sum of all the functions that are non-zero on the interval (bt,i−1, bt,i]. It therefore follows from Observation 1 that

pt,i(µ) = Σ_{j=1}^{t} ℓ[sj(µ − yj) + ε] · I[(sj = −1 ∧ bt,i−1 < yj + ε) ∨ (sj = 1 ∧ yj − ε < bt,i)]   (10)

where I[·] is the (Boolean) indicator function, i.e., I[True] = 1 and 0 otherwise.

Lemma 1. For any i ∈ {1, ..., t}, we have that pt,i+1(µ) = pt,i(µ) + ft,i(µ), where ft,i(µ) = sk ℓ[sk(µ − yk) + ε] for some k ∈ {1, ..., t} such that yk − sk ε = bt,i.

Proof. The proof relies on Equation (10) and is detailed in the supplementary material.

4.2 Minimizing a sum of hinge losses by dynamic programming

Our algorithm works by recursively adding a hinge loss to the total function Pt(µ), each time keeping track of the minima. To achieve this, we use a pointer Jt, which points to the rightmost piece of Pt(µ) that contains a minimum. Since Pt(µ) is a convex function of µ, we know that this minimum is global. In the algorithm, we refer to the segment pt,Jt as Mt, and the essence of the dynamic programming update is moving Jt to its correct position after a new hinge loss is added to the sum. At any time step t, let Bt = {(bt,1, ft,1), ..., (bt,t, ft,t) | bt,1 < ... < bt,t} be the current set of breakpoints (bt,i) together with their corresponding difference functions (ft,i). Moreover, assume the convention bt,0 = −∞ and bt,t+1 = ∞, which are defined, but not stored in Bt. The initialization (t = 0) is

B0 = {}, J0 = 1, M0(µ) = 0.   (11)

Now, at any time step t > 0, start by inserting the new breakpoint and difference function. Hence,

Bt = Bt−1 ∪ {(yt − st ε, st ℓ[st(µ − yt) + ε])}.   (12)

Recall that, by definition, the set Bt remains sorted after the insertion. Let jt ∈ {1, ..., t + 1} be the updated value of the previous minimum pointer (Jt−1) after adding the tth hinge loss (i.e., the index of bt−1,Jt−1 in the sorted set of breakpoints at time t). It is obtained by adding 1 if the new breakpoint is before Jt−1 and 0 otherwise. In other words,

jt = Jt−1 + I[yt − st ε < bt−1,Jt−1].   (13)
If there is no minimum of Pt(µ) in piece pt,jt, we must move the pointer from jt to its final position Jt ∈ {1, ..., t + 1}, where Jt is the index of the rightmost function piece that contains a minimum:

Jt = max { i ∈ {1, ..., t + 1} : (bt,i−1, bt,i] ∩ {x ∈ R | Pt(x) = min_µ Pt(µ)} ≠ ∅ }.   (14)

See Figure 2 for an example. The minimum after optimization is in piece Mt, which is obtained by adding or subtracting a series of difference functions ft,i. Hence, applying Lemma 1 multiple times, we obtain:

Mt(µ) def= pt,Jt(µ) = pt,jt(µ) +
    0                           if jt = Jt
    Σ_{i=jt}^{Jt−1} ft,i(µ)     if jt < Jt
    −Σ_{i=Jt}^{jt−1} ft,i(µ)    if Jt < jt   (15)

Then, the optimization problem can be solved using min_µ Pt(µ) = min_{µ ∈ (bt,Jt−1, bt,Jt]} Mt(µ). The proof of this statement is available in the supplementary material, along with detailed pseudocode and implementation details.

4.3 Complexity analysis

The ℓ functions that we consider are ℓ(x) = x and ℓ(x) = x². Notice that any such function can be encoded by three coefficients a, b, c ∈ R. Therefore, summing two functions amounts to summing their respective coefficients and takes O(1) time. The set of breakpoints Bt can be stored using any data structure that allows sorted insertions in logarithmic time (e.g., a binary search tree). Assume that we have n hinge losses. Inserting a new breakpoint at Equation (12) takes O(log n) time. Updating the jt pointer at Equation (13) takes O(1). In contrast, the complexity of finding the new pointer position Jt and updating Mt at Equations (14) and (15) varies depending on the nature of ℓ. For the case where ℓ(x) = x, we are guaranteed that Jt is at distance at most one from jt. This is demonstrated in Theorem 2 of the supplementary material. Since we can sum two functions in O(1) time, the worst-case time complexity of the linear hinge loss algorithm is O(n log n). However, for the case where ℓ(x) = x², the worst case could involve going through the n breakpoints. Hence, the worst-case time complexity of the squared hinge loss algorithm is O(n²). Nevertheless, in Section 5.1, we show that, when tested on a variety of real-world data sets, the algorithm achieved a time complexity of O(n log n) in this case as well. Finally, the space complexity of this algorithm is O(n), since a list of n breakpoints (bt,i) and difference functions (ft,i) must be stored, along with the coefficients (a, b, c ∈ R) of Mt. Moreover, it follows from Lemma 1 that the function pieces pt,i need not be stored, since they can be recovered using the bt,i and ft,i.

5 Results

5.1 Empirical evaluation of time complexity

We performed two experiments to evaluate the expected O(n(m + log n)) time complexity for n interval limits and m pointer moves per limit. First, we ran our algorithm (MMIT) with both the squared and linear hinge loss solvers on a variety of real-world data sets of varying sizes (Rigaill et al., 2013; Lichman, 2013), and recorded the number of pointer moves. We plot the average and maximum number of pointer moves over a wide range of margin parameters and all possible feature orderings (Figure 3, left). In agreement with our theoretical result (supplementary material, Theorem 2), we observed a maximum of one move per interval limit for the linear hinge loss. On average, we observed that the number of moves does not increase with the data set size, even for the squared hinge loss. These results suggest that the number of pointer moves per limit is generally constant, m = O(1), so we expect an overall time complexity of O(n log n) in practice, even for the squared hinge loss.
5 Results

5.1 Empirical evaluation of time complexity

We performed two experiments to evaluate the expected $O(n(m + \log n))$ time complexity for $n$ interval limits and $m$ pointer moves per limit. First, we ran our algorithm (MMIT) with both squared and linear hinge loss solvers on a variety of real-world data sets of varying sizes (Rigaill et al., 2013; Lichman, 2013), and recorded the number of pointer moves. We plot the average and maximum pointer moves over a wide range of margin parameters and all possible feature orderings (Figure 3, left). In agreement with our theoretical result (supplementary material, Theorem 2), we observed a maximum of one move per interval limit for the linear hinge loss. On average, we observed that the number of moves does not increase with data set size, even for the squared hinge loss. These results suggest that the number of pointer moves per limit is generally constant, $m = O(1)$, so we expect an overall time complexity of $O(n \log n)$ in practice, even for the squared hinge loss.

Second, we used the limits of the target intervals in the neuroblastoma changepoint data set (see Section 5.3) to simulate data sets from $n = 10^3$ to $n = 10^7$ limits. We recorded the time required to run the solvers (Figure 3, right), and observed timings that are consistent with the expected $O(n \log n)$ complexity.

5.2 MMIT recovers a good approximation in simulations with nonlinear patterns

We demonstrate one key limitation of the margin-based interval regression algorithm of Rigaill et al. (2013) (L1-Linear): it is limited to modeling linear patterns. To achieve this, we created three simulated data sets, each containing 200 examples and 20 features. Each data set was generated in such a way that the target intervals followed a specific pattern $f : \mathbb{R} \to \mathbb{R}$ according to a single feature, which we call the signal feature. The width of the intervals and a small random shift around the true value of $f$ were determined randomly. The details of the data generation protocol are available in the supplementary material. MMIT (linear hinge loss) and L1-Linear were trained on each data set, using cross-validation to choose the hyperparameter values. The resulting data sets and the predictions of each algorithm are illustrated in Figure 4. As expected, L1-Linear fails to fit the non-linear patterns, but achieves a near perfect fit for the linear pattern. In contrast, MMIT learns stepwise approximations of the true functions, which results from each leaf predicting a constant value. Notice the fluctuations in the models of both algorithms, which result from using irrelevant features.

5.3 Empirical evaluation of prediction accuracy

In this section, we compare the accuracy of predictions made by MMIT and other learning algorithms on real and simulated data sets.

Evaluation protocol. To evaluate the accuracy of the algorithms, we performed 5-fold cross-validation and computed the mean squared error (MSE) with respect to the intervals in each of the five testing sets (Figure 5). For a data set $S = \{(\mathbf{x}_i, \mathbf{y}_i)\}_{i=1}^{n}$ with $\mathbf{x}_i \in \mathbb{R}^p$ and $\mathbf{y}_i \in \mathbb{R}^2$, and for a model $h : \mathbb{R}^p \to \mathbb{R}$, the MSE is given by
$$\mathrm{MSE}(h, S) = \frac{1}{n} \sum_{i=1}^{n} \big( [h(\mathbf{x}_i) - \underline{y}_i]\, I[h(\mathbf{x}_i) < \underline{y}_i] + [h(\mathbf{x}_i) - \overline{y}_i]\, I[h(\mathbf{x}_i) > \overline{y}_i] \big)^2. \qquad (16)$$
At each step of the cross-validation, another cross-validation (nested within the former) was used to select the hyperparameters of each algorithm based on the training data. The hyperparameters selected for MMIT are available in the supplementary material.

Algorithms. The linear and squared hinge loss variants of Maximum Margin Interval Trees (MMIT-L and MMIT-S) were compared to two state-of-the-art interval regression algorithms: the margin-based L1-regularized linear model of Rigaill et al. (2013) (L1-Linear) and the Transformation Trees of Hothorn and Zeileis (2017) (TransfoTree). Moreover, two baseline methods were included in the comparison. To provide an upper bound for prediction error, we computed the trivial model that ignores all features and simply learns a constant function $h(\mathbf{x}) = \mu$ that minimizes the MSE on the training data (Constant). To demonstrate the importance of using a loss function designed for interval regression, we also considered the CART algorithm (Breiman et al., 1984). Specifically, CART was used to fit a regular regression tree on a transformed training set, where each interval regression example $(\mathbf{x}, [\underline{y}, \overline{y}])$ was replaced by two real-valued regression examples with features $\mathbf{x}$ and labels $\underline{y} + \epsilon$ and $\overline{y} - \epsilon$. This algorithm, which we call Interval-CART, uses a margin hyperparameter and minimizes a squared loss with respect to the interval limits. However, in contrast with MMIT, it does not take the structure of the interval regression problem into account, i.e., it ignores the fact that no cost should be incurred for values predicted inside the target intervals.
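To make the evaluation metric and the Interval-CART baseline concrete, here is a small sketch; it is our own illustration (NumPy and scikit-learn are assumed, and the function names, the handling of infinite limits, and the margin treatment are our choices, not taken from the paper). `interval_mse` follows Equation (16), and `fit_interval_cart` fits an ordinary CART regression tree on the transformed training set described above.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def interval_mse(pred, lower, upper):
    """Equation (16): only the part of a prediction that falls outside its target
    interval is penalized, and that excess is squared.  Infinite limits encode censoring."""
    pred, lower, upper = (np.asarray(a, dtype=float) for a in (pred, lower, upper))
    below = np.where(pred < lower, pred - lower, 0.0)
    above = np.where(pred > upper, pred - upper, 0.0)
    return float(np.mean((below + above) ** 2))

def fit_interval_cart(X, lower, upper, eps, **tree_kwargs):
    """Interval-CART baseline: duplicate every example, once with label lower + eps and
    once with label upper - eps, then fit a plain CART regression tree.  Copies whose
    limit is infinite (censored) are simply dropped, a choice made for this sketch."""
    X = np.asarray(X, dtype=float)
    lo = np.asarray(lower, dtype=float) + eps
    hi = np.asarray(upper, dtype=float) - eps
    rows, labels = [], []
    for x, lo_i, hi_i in zip(X, lo, hi):
        if np.isfinite(lo_i):
            rows.append(x)
            labels.append(lo_i)
        if np.isfinite(hi_i):
            rows.append(x)
            labels.append(hi_i)
    tree = DecisionTreeRegressor(**tree_kwargs)
    tree.fit(np.vstack(rows), np.asarray(labels))
    return tree

# Tiny usage example with one right-censored target interval.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
lower = np.array([0.5, 1.0, 2.0, 4.0])
upper = np.array([1.5, 2.0, np.inf, 6.0])
model = fit_interval_cart(X, lower, upper, eps=0.1, max_depth=2)
print(interval_mse(model.predict(X), lower, upper))
```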
Results in changepoint data sets. The problem in the first two data sets is to learn a penalty function for changepoint detection in DNA copy number and ChIP-seq data (Hocking et al., 2013; Rigaill et al., 2013), two significant interval regression problems from the field of genomics. For the neuroblastoma data set, all methods, except the constant model, perform comparably. Interval-CART achieves the lowest error for one fold, but L1-Linear is the overall best performing method. For the histone data set, the margin-based models clearly outperform the non-margin-based models, Constant and TransfoTree. MMIT-S achieves the lowest error on one of the folds. Moreover, MMIT-S tends to outperform MMIT-L, suggesting that a squared loss is better suited for this task. Interestingly, MMIT-S outperforms Interval-CART, which also uses a squared loss, supporting the importance of using a loss function adapted to the interval regression problem.

Results in UCI data sets. The next two data sets are regression problems taken from the UCI repository (Lichman, 2013). For the sake of our comparison, the real-valued outputs in these data sets were transformed into censored intervals, using a protocol that we detail in the supplementary material. For the difficult triazines data set, all methods struggle to surpass the Constant model. Nevertheless, some achieve lower errors for one fold. For the servo data set, the margin-based tree models (MMIT-S, MMIT-L, and Interval-CART) perform comparably and outperform the other models. This highlights the importance of developing non-linear models for interval regression and suggests a positive effect of the margin hyperparameter on accuracy.

Results in simulated data sets. The last three data sets are the simulated data sets discussed in the previous section. As expected, the L1-Linear model tends to outperform the others on the linear data set. However, surprisingly, on a few folds, the MMIT-L and Interval-CART models were able to achieve low test errors. For the non-linear data sets (sin and abs), MMIT-S, MMIT-L and Interval-CART clearly outperform the TransfoTree, L1-Linear and Constant models. Observe that the TransfoTree algorithm achieves results comparable to those of L1-Linear which, in Section 5.2, was shown to learn a roughly constant model in these situations. Hence, although these data sets are simulated, they highlight situations where this non-linear interval regression algorithm fails to yield accurate models, but where MMITs do not. Results for more data sets are available in the supplementary material.

6 Discussion and conclusions

We proposed a new margin-based decision tree algorithm for the interval regression problem. We showed that it can be trained by solving a sequence of convex sub-problems, for which we proposed a new dynamic programming algorithm. We showed empirically that the latter's time complexity is log-linear in the number of intervals in the data set. Hence, like classical regression trees (Breiman et al., 1984), our tree growing algorithm's time complexity is linear in the number of features and log-linear in the number of examples.
Moreover, we studied the prediction accuracy in several real and simulated data sets, showing that our algorithm is competitive with other linear and nonlinear models for interval regression. This initial work on Maximum Margin Interval Trees opens a variety of research directions, which we will explore in future work. We will investigate learning ensembles of MMITs, such as random forests. We also plan to extend the method to learning trees with non-constant leaves. This will increase the smoothness of the models, which, as observed in Figure 4, tend to have a stepwise nature. Moreover, we plan to study the average time complexity of the dynamic programming algorithm. Assuming a certain regularity in the data-generating distribution, we should be able to bound the number of pointer moves and justify the time complexity that we observed empirically. In addition, we will study the conditions under which the proposed MMIT algorithm is expected to surpass methods that do not exploit the structure of the target intervals, such as the proposed Interval-CART method. Intuitively, one weakness of Interval-CART is that it does not properly model left- and right-censored intervals, for which it favors predictions that are near the finite limits. Finally, we plan to extend the dynamic programming algorithm to data with un-censored outputs. This will make Maximum Margin Interval Trees applicable to survival analysis problems, where they should rank among the state of the art.

Reproducibility

• Implementation: https://git.io/mmit
• Experimental code: https://git.io/mmit-paper
• Data: https://git.io/mmit-data

The versions of the software used in this work are also provided in the supplementary material.

Acknowledgements

We are grateful to Ulysse Côté-Allard, Mathieu Blanchette, Pascal Germain, Sébastien Giguère, Gaël Letarte, Mario Marchand, and Pier-Luc Plante for their insightful comments and suggestions. This work was supported by the Natural Sciences and Engineering Research Council of Canada, through an Alexander Graham Bell Canada Graduate Scholarship Doctoral Award awarded to AD and a Discovery Grant awarded to FL (#262067).
1. What is the focus of the paper regarding computational efficiency in decision tree algorithms?
2. What are the strengths of the proposed method, particularly in its performance on simulated and real datasets?
3. What are the weaknesses of the paper regarding its comparisons with other interval-based prediction methods and lack of demonstration on classic survival data sets?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. What are some potential extensions or improvements to the proposed method, such as integrating it with other machine learning techniques?
Review
The paper introduces a computationally efficient decision tree algorithm for learning from interval-valued outcomes. The paper is well written and provides useful illustrations to document the method. The method shows fine performance on simulated data and several real data sets. The paper compares against other interval-based prediction methods; however, several of them are not contrasted in the related work. Adding differentiating factors from those would strengthen the paper in identifying when to use MMIT. While the paper is motivated heavily by survival analysis, the algorithm was not demonstrated on classic survival data sets (e.g., UCI thoracic surgery, PBC, etc.). It is not clear why not, given the motivation and that the algorithm appears to work with intervals defined as $\underline{y} = \overline{y}$. In survival analysis one is typically interested in rate estimation and risk attribution. In this case, the method would predict a time to event. I imagine risk attribution would be done in the same way as, e.g., random survival forests. Decision trees can be limited in their performance. Food for thought: how might you extend this to MMI forests?
1. What is the focus of the paper regarding decision tree algorithms?
2. What are the strengths of the proposed maximum margin interval tree (MMIT) algorithm compared to prior works?
3. How does the reviewer assess the originality and significance of the paper's content?
4. What is the author's contribution to interval regression problems?
5. How does the reviewer evaluate the quality and clarity of the paper?
Review
The authors of this paper present a new decision tree algorithm for the interval regression problem. Leaves are partitioned using a margin-based hinge loss similar to the L1-regularized hinge loss of Rigaill et al., Proc. ICML 2013. However, the regression tree algorithm presented in this work is not limited to modeling linear patterns, unlike the L1-regularized linear models of Rigaill et al. For training the non-linear tree model, a sequence of convex optimization subproblems is optimally solved in log-linear time by dynamic programming (DP). The new maximum margin interval tree (MMIT) algorithm is compared with state-of-the-art margin-based and non-margin-based methods on several real and simulated datasets.
- In terms of originality, the proposed margin-based hinge loss is similar to the L1-regularized hinge loss of Rigaill et al., Proc. ICML 2013. However, the regression tree algorithm presented in this work is not limited to modeling linear patterns, unlike the L1-regularized linear models of Rigaill et al. MMIT achieves low test errors on nonlinear datasets and yields accurate models on simulated linear datasets as well.
- In terms of significance: interval regression is a fundamental problem in fields such as survival analysis and computational biology. There are very few algorithms designed for this task, and most of them are linear models. A method learning nonlinear tree models like Maximum Margin Interval Trees (MMIT) could be helpful for practitioners in the field.
- In terms of quality, the paper is well executed: the authors compare MMIT with state-of-the-art margin-based and non-margin-based methods on several real and simulated datasets. The 11-page supplementary material contains proofs, pseudocode, details of the open-source implementation (link was hidden for anonymity) and of the experiments.
- In terms of clarity, this is a well written paper.

In summary, my opinion is that this is a well written and nicely organized work; the proposed method would be useful in real-world applications, and the novelty of the work satisfies the NIPS standards. Thus I recommend this work for publication.
NIPS
Title Maximum Margin Interval Trees Abstract Learning a regression function using censored or interval-valued output data is an important problem in fields such as genomics and medicine. The goal is to learn a real-valued prediction function, and the training output labels indicate an interval of possible values. Whereas most existing algorithms for this task are linear models, in this paper we investigate learning nonlinear tree models. We propose to learn a tree by minimizing a margin-based discriminative objective function, and we provide a dynamic programming algorithm for computing the optimal solution in log-linear time. We show empirically that this algorithm achieves state-of-the-art speed and prediction accuracy in a benchmark of several data sets. 1 Introduction In the typical supervised regression setting, we are given set of learning examples, each associated with a real-valued output. The goal is to learn a predictor that accurately estimates the outputs, given new examples. This fundamental problem has been extensively studied and has given rise to algorithms such as Support Vector Regression (Basak et al., 2007). A similar, but far less studied, problem is that of interval regression, where each learning example is associated with an interval (y i , yi), indicating a range of acceptable output values, and the expected predictions are real numbers. Interval-valued outputs arise naturally in fields such as computational biology and survival analysis. In the latter setting, one is interested in predicting the time until some adverse event, such as death, occurs. The available information is often limited, giving rise to outputs that are said to be either un-censored (−∞ < y i = yi < ∞), left-censored (−∞ = yi < yi < ∞), right-censored (−∞ < y i < yi =∞), or interval-censored (−∞ < yi < yi <∞) (Klein and Moeschberger, 2005). For instance, right censored data occurs when all that is known is that an individual is still alive after a period of time. Another recent example is from the field of genomics, where interval regression was used to learn a penalty function for changepoint detection in DNA copy number and ChIP-seq data (Rigaill et al., 2013). Despite the ubiquity of this type of problem, there are surprisingly few existing algorithms that have been designed to learn from such outputs, and most are linear models. Decision tree algorithms have been proposed in the 1980s with the pioneering work of Breiman et al. (1984) and Quinlan (1986). Such algorithms rely on a simple framework, where trees are grown by recursive partitioning of leaves, each time maximizing some task-specific criterion. Advantages of these algorithms include the ability to learn non-linear models from both numerical and categorical data of various scales, and having a relatively low training time complexity. In this work, we extend the work of Breiman et al. (1984) to learning non-linear interval regression tree models. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. 1.1 Contributions and organization Our first contribution is Section 3, in which we propose a new decision tree algorithm for interval regression. We propose to partition leaves using a margin-based hinge loss, which yields a sequence of convex optimization problems. Our second contribution is Section 4, in which we propose a dynamic programming algorithm that computes the optimal solution to all of these problems in log-linear time. 
In Section 5 we show that our algorithm achieves state-of-the-art prediction accuracy in several real and simulated data sets. In Section 6 we discuss the significance of our contributions and propose possible future research directions. An implementation is available at https://git.io/mmit. 2 Related work The bulk of related work comes from the field of survival analysis. Linear models for censored outputs have been extensively studied under the name accelerated failure time (AFT) models (Wei, 1992). Recently, L1-regularized variants have been proposed to learn from high-dimensional data (Cai et al., 2009; Huang et al., 2005). Nonlinear models for censored data have also been studied, including decision trees (Segal, 1988; Molinaro et al., 2004), Random Forests (Hothorn et al., 2006) and Support Vector Machines (Pölsterl et al., 2016). However, most of these algorithms are limited to the case of right-censored and un-censored data. In contrast, in the interval regression setting, the data are either left, right or interval-censored. To the best of our knowledge, the only existing nonlinear model for this setting is the recently proposed Transformation Tree of Hothorn and Zeileis (2017). Another related method, which shares great similarity with ours, is the L1-regularized linear models of Rigaill et al. (2013). Like our proposed algorithm, their method optimizes a convex loss function with a margin hyperparameter. Nevertheless, one key limitation of their algorithm is that it is limited to modeling linear patterns, whereas our regression tree algorithm is not. 3 Problem 3.1 Learning from interval outputs Let S def= {(x1,y1), ..., (xn,yn)} ∼ Dn be a data set of n learning examples, where xi ∈ Rp is a feature vector, yi def = (yi, yi), with yi, yi ∈ R and yi < yi, are the lower and upper limits of a target interval, and D is an unknown data generating distribution. In the interval regression setting, a predicted value is only considered erroneous if it is outside of the target interval. Formally, let ` : R→ R be a function and define φ`(x) def = `[(x)+] as its corresponding hinge loss, where (x)+ is the positive part function, i.e. (x)+ = x if x > 0 and (x)+ = 0 otherwise. In this work, we will consider two possible hinge loss functions: the linear one, where `(x) = x, and the squared one where `(x) = x2. Our goal is to find a function h : Rp → R that minimizes the expected error on data drawn from D: minimize h E (xi,yi)∼D φ`(−h(xi) + yi) + φ`(h(xi)− yi), Notice that, if `(x) = x2, this is a generalization of the mean squared error to interval outputs. Moreover, this can be seen as a surrogate to a zero-one loss that measures if a predicted value lies within the target interval (Rigaill et al., 2013). 3.2 Maximum margin interval trees We will seek an interval regression tree model T : Rp → R that minimizes the total hinge loss on data set S: C(T ) def = ∑ (xi,yi)∈S [ φ` ( −T (xi) + yi + ) + φ` (T (xi)− yi + ) ] , (1) where ∈ R+0 is a hyperparameter introduced to improve regularity (see supplementary material for details). A decision tree is an arrangement of nodes and leaves. The leaves are responsible for making predictions, whereas the nodes guide the examples to the leaves based on the outcome of some boolean-valued rules (Breiman et al., 1984). Let T̃ denote the set of leaves in a decision tree T . Each leaf τ ∈ T̃ is associated with a set of examples Sτ ⊆ S, for which it is responsible for making predictions. 
The sets Sτ obey the following properties: S = ⋃ τ∈T̃ Sτ and Sτ ∩ Sτ ′ 6= ∅ ⇔ τ = τ ′. Hence, the contribution of a leaf τ to the total loss of the tree C(T ), given that it predicts µ ∈ R, is Cτ (µ) def = ∑ (xi,yi)∈Sτ [ φ`(−µ+ yi + ) + φ`(µ− yi + ) ] (2) and the optimal predicted value for the leaf is obtained by minimizing this function over all µ ∈ R. As in the CART algorithm (Breiman et al., 1984), our tree growing algorithm relies on recursive partitioning of the leaves. That is, at any step of the tree growing algorithm, we obtain a new tree T ′ from T by selecting a leaf τ0 ∈ T̃ and dividing it into two leaves τ1, τ2 ∈ T̃ ′, s.t. Sτ0 = Sτ1 ∪ Sτ2 and τ0 6∈ T̃ ′. This partitioning results from applying a boolean-valued rule r : Rp → B to each example (xi,yi) ∈ Sτ0 and sending it to τ1 if r(xi) = True and to τ2 otherwise. The rules that we consider are threshold functions on the value of a single feature, i.e., r(xi) def = “xij ≤ δ ”. This is illustrated in Figure 1. According to Equation (2), for any such rule, we have that the total hinge loss for the examples that are sent to τ1 and τ2 are Cτ1(µ) = ←− Cτ0(µ|j, δ) def = ∑ (xi,yi)∈Sτ0 :xij≤δ [ φ`(−µ+ yi + ) + φ`(µ− yi + ) ] (3) Cτ2(µ) = −→ Cτ0(µ|j, δ) def = ∑ (xi,yi)∈Sτ0 :xij>δ [ φ`(−µ+ yi + ) + φ`(µ− yi + ) ] . (4) The best rule is the one that leads to the smallest total cost C(T ′). This rule, as well as the optimal predicted values for τ1 and τ2, are obtained by solving the following optimization problem: argmin j,δ,µ1,µ2 [ ←− Cτ0(µ1|j, δ) + −→ Cτ0(µ2|j, δ) ] . (5) In the next section we propose a dynamic programming algorithm for this task. 4 Algorithm First note that, for a given j, δ, the optimization separates into two convex minimization sub-problems, which each amount to minimizing a sum of convex loss functions: min j,δ,µ1,µ2 [ ←− Cτ (µ1|j, δ) + −→ Cτ (µ2|j, δ) ] = min j,δ [ min µ1 ←− Cτ (µ1|j, δ) + min µ2 −→ Cτ (µ2|j, δ) ] . (6) We will show that if there exists an efficient dynamic program Ω which, given any set of hinge loss functions defined over µ, computes their sum and returns the minimum value, along with a minimizing value of µ, the minimization problem of Equation (6) can be solved efficiently. Observe that, although there is a continuum of possible values for δ, we can limit the search to the values of feature j that are observed in the data (i.e., δ ∈ {xij ; i = 1, ... , n}), since all other values do not lead to different configurations of Sτ1 and Sτ2 . Thus, there are at most nj ≤ n unique thresholds to consider for each feature. Let these thresholds be δj,1 < ... < δj,nj . Now, consider Φj,k as the set that contains all the losses φ`(−µ + yi + ) and φ`(µ− yi + ) for which we have (xi,yi) ∈ Sτ0 and xij = δj,k. Since we now only consider a finite number of δ-values, it follows from Equation (3), that one can obtain ←− Cτ (µ1|j, δj,k) from ←− Cτ (µ1|j, δj,k−1) by adding all the losses in Φj,k. Similarly, one can also obtain −→ Cτ (µ1|j, δj,k) from −→ Cτ (µ1|j, δj,k−1) by removing all the losses in Φj,k (see Equation (4)). This, in turn, implies that minµ ←− Cτ (µ|j, δj,k) = Ω(Φj,1∪...∪Φj,k) and minµ −→ Cτ (µ|j, δj,k) = Ω(Φj,k+1 ∪ ... ∪ Φj,nj ) . Hence, the cost associated with a split on each threshold δj,k is given by: δj,1 : Ω(Φj,1) + Ω(Φj,2 ∪ · · · ∪ Φj,nj ) . . . . . . . . . δj,i : Ω(Φj,1 ∪ · · · ∪ Φj,i) + Ω(Φj,i+1 ∪ · · · ∪ Φj,nj ) . . . . . . . . . δj,nj−1 : Ω(Φj,1 ∪ · · · ∪ Φj,nj−1) + Ω(Φj,nj ) (7) and the best threshold is the one with the smallest cost. 
Note that, in contrast with the other thresholds, δj,nj needs not be considered, since it leads to an empty leaf. Note also that, since Ω is a dynamic program, one can efficiently compute Equation (7) by using Ω twice, from the top down for the first column and from the bottom up for the second. Below, we propose such an algorithm. 4.1 Definitions A general expression for the hinge losses φ`(−µ+ yi + ) and φ`(µ− yi + ) is φ`(si(µ− yi) + ), where si = −1 or 1 respectively. Now, choose any convex function ` : R→ R and let Pt(µ) def = t∑ i=1 φ`(si(µ− yi) + ) (8) be a sum of t hinge loss functions. In this notation, Ω(Φj,1 ∪ ... ∪ Φj,i) = minµ Pt(µ), where t = |Φj,1 ∪ ... ∪ Φj,i|. Observation 1. Each of the t hinge loss functions has a breakpoint at yi − si , where it transitions from a zero function to a non-zero one if si = 1 and the converse if si = −1. For the sake of simplicity, we will now consider the case where these breakpoints are all different; the generalization is straightforward, but would needlessly complexify the presentation (see the supplementary material for details). Now, note that Pt(µ) is a convex piecewise function that can be uniquely represented as: Pt(µ) = pt,1(µ) if µ ∈ (−∞, bt,1] . . . pt,i(µ) if µ ∈ (bt,i−1, bt,i] . . . pt,t+1(µ) if µ ∈ (bt,t,∞) (9) where we will call pt,i the ith piece of Pt and bt,i the ith breakpoint of Pt (see Figure 2 for an example). Observe that each piece pt,i is the sum of all the functions that are non-zero on the interval (bt,i−1, bt,i]. It therefore follows from Observation 1 that pt,i(µ) = t∑ j=1 `[sj(µ− yj) + ] I[(sj = −1 ∧ bt,i−1 < yj + ) ∨ (sj = 1 ∧ yj − < bt,i)] (10) where I[·] is the (Boolean) indicator function, i.e., I[True] = 1 and 0 otherwise. Lemma 1. For any i ∈ {1, ..., t}, we have that pt,i+1(µ) = pt,i(µ) + ft,i(µ), where ft,i(µ) = sk`[sk(µ− yk) + ] for some k ∈ {1, ..., t} such that yk − sk = bt,i. Proof. The proof relies on Equation (10) and is detailed in the supplementary material. 4.2 Minimizing a sum of hinge losses by dynamic programming Our algorithm works by recursively adding a hinge loss to the total function Pt(µ), each time, keeping track of the minima. To achieve this, we use a pointer Jt, which points to rightmost piece of Pt(µ) that contains a minimum. Since Pt(µ) is a convex function of µ, we know that this minimum is global. In the algorithm, we refer to the segment pt,Jt as Mt and the essence of the dynamic programming update is moving Jt to its correct position after a new hinge loss is added to the sum. At any time step t, let Bt = {(bt,1, ft,1), ..., (bt,t, ft,t) | bt,1 < ... < bt,t} be the current set of breakpoints (bt,i) together with their corresponding difference functions (ft,i). Moreover, assume the convention bt,0 = −∞ and bt,t+1 =∞, which are defined, but not stored in Bt. The initialization (t = 0) is B0 = {}, J0 = 1, M0(µ) = 0 . (11) Now, at any time step t > 0, start by inserting the new breakpoint and difference function. Hence, Bt = Bt−1 ∪ {(yt − st , st `[st(µ− yt) + ])} . (12) Recall that, by definition, the set Bt remains sorted after the insertion. Let jt ∈ {1, . . . , t+ 1}, be the updated value for the previous minimum pointer (Jt−1) after adding the tth hinge loss (i.e., the index of bt−1,Jt−1 in the sorted set of breakpoints at time t). It is obtained by adding 1 if the new breakpoint is before Jt−1 and 0 otherwise. In other words, jt = Jt−1 + I[yt − st < bt−1,Jt−1 ] . 
(13) If there is no minimum of Pt(µ) in piece pt,jt , we must move the pointer from jt to its final position Jt ∈ {1, ..., t+ 1}, where Jt is the index of the rightmost function piece that contains a minimum: Jt = max i∈{1,...,t+1} i, s.t. (bt,i−1, bt,i] ∩ {x ∈ R | Pt(x) = min µ Pt(µ)} 6= ∅ . (14) See Figure 2 for an example. The minimum after optimization is in piece Mt, which is obtained by adding or subtracting a series of difference functions ft,i. Hence, applying Lemma 1 multiple times, we obtain: Mt(µ) def = pt,Jt(µ) = pt,jt(µ) + 0 if jt = Jt∑Jt−1 i=jt ft,i(µ) if jt < Jt − ∑jt−1 i=Jt ft,i(µ) if Jt < jt (15) Then, the optimization problem can be solved using minµ Pt(µ) = minµ∈(bt,Jt−1,bt,Jt ]Mt(µ). The proof of this statement is available in the supplementary material, along with a detailed pseudocode and implementation details. 4.3 Complexity analysis The ` functions that we consider are `(x) = x and `(x) = x2. Notice that any such function can be encoded by three coefficients a, b, c ∈ R. Therefore, summing two functions amounts to summing their respective coefficients and takes time O(1). The set of breakpoints Bt can be stored using any data structure that allows sorted insertions in logarithmic time (e.g., a binary search tree). Assume that we have n hinge losses. Inserting a new breakpoint at Equation (12) takes O(log n) time. Updating the jt pointer at Equation (13) takes O(1). In contrast, the complexity of finding the new pointer position Jt and updating Mt at Equations (14) and (15) varies depending on the nature of `. For the case where `(x) = x, we are guaranteed that Jt is at distance at most one of jt. This is demonstrated in Theorem 2 of the supplementary material. Since we can sum two functions in O(1) time, we have that the worst case time complexity of the linear hinge loss algorithm is O(n log n). However, for the case where `(x) = x2, the worst case could involve going through the n breakpoints. Hence, the worst case time complexity of the squared hinge loss algorithm is O(n2). Nevertheless, in Section 5.1, we show that, when tested on a variety real-world data sets, the algorithm achieved a time complexity of O(n log n) in this case also. Finally, the space complexity of this algorithm is O(n), since a list of n breakpoints (bt,i) and difference functions (ft,i) must be stored, along with the coefficients (a, b, c ∈ R) of Mt. Moreover, it follows from Lemma 1 that the function pieces pt,i need not be stored, since they can be recovered using the bt,i and ft,i. 5 Results 5.1 Empirical evaluation of time complexity We performed two experiments to evaluate the expected O(n(m + log n)) time complexity for n interval limits and m pointer moves per limit. First, we ran our algorithm (MMIT) with both squared and linear hinge loss solvers on a variety of real-world data sets of varying sizes (Rigaill et al., 2013; Lichman, 2013), and recorded the number of pointer moves. We plot the average and max pointer moves over a wide range of margin parameters, and all possible feature orderings (Figure 3, left). In agreement with our theoretical result (supplementary material, Theorem 2), we observed a maximum of one move per interval limit for the linear hinge loss. On average we observed that the number of moves does not increase with data set size, even for the squared hinge loss. These results suggest that the number of pointer moves per limit is generally constant m = O(1), so we expect an overall time complexity of O(n log n) in practice, even for the squared hinge loss. 
Second, we used the limits of the target intervals in the neuroblastoma changepoint data set (see Section 5.3) to simulate data sets from n = 103 to n = 107 limits. We recorded the time required to run the solvers (Figure 3, right), and observed timings which are consistent with the expected O(n log n) complexity. 5.2 MMIT recovers a good approximation in simulations with nonlinear patterns We demonstrate one key limitation of the margin-based interval regression algorithm of Rigaill et al. (2013) (L1-Linear): it is limited to modeling linear patterns. To achieve this, we created three simulated data sets, each containing 200 examples and 20 features. Each data set was generated in such a way that the target intervals followed a specific pattern f : R→ R according to a single feature, which we call the signal feature. The width of the intervals and a small random shift around the true value of f were determined randomly. The details of the data generation protocol are available in the supplementary material. MMIT (linear hinge loss) and L1-Linear were trained on each data set, using cross-validation to choose the hyperparameter values. The resulting data sets and the predictions of each algorithm are illustrated in Figure 4. As expected, L1-Linear fails to fit the non-linear patterns, but achieves a near perfect fit for the linear pattern. In contrast, MMIT learns stepwise approximations of the true functions, which results from each leaf predicting a constant value. Notice the fluctuations in the models of both algorithms, which result from using irrelevant features. 5.3 Empirical evaluation of prediction accuracy In this section, we compare the accuracy of predictions made by MMIT and other learning algorithms on real and simulated data sets. Evaluation protocol To evaluate the accuracy of the algorithms, we performed 5-fold crossvalidation and computed the mean squared error (MSE) with respect to the intervals in each of the five testing sets (Figure 5). For a data set S = {(xi,yi)}ni=1 with xi ∈ Rp and yi ∈ R 2 , and for a model h : Rp → R, the MSE is given by MSE(h, S) = 1 n n∑ i=1 ( [h(xi)− yi] I[h(xi) < yi] + [h(xi)− yi] I[h(xi) > yi] )2 . (16) At each step of the cross-validation, another cross-validation (nested within the former) was used to select the hyperparameters of each algorithm based on the training data. The hyperparameters selected for MMIT are available in the supplementary material. Algorithms The linear and squared hinge loss variants of Maximum Margin Interval Trees (MMITL and MMIT-S) were compared to two state-of-the-art interval regression algorithms: the marginbased L1-regularized linear model of Rigaill et al. (2013) (L1-Linear) and the Transformation Trees of Hothorn and Zeileis (2017) (TransfoTree). Moreover, two baseline methods were included in the comparison. To provide an upper bound for prediction error, we computed the trivial model that ignores all features and just learns a constant function h(x) = µ that minimizes the MSE on the training data (Constant). To demonstrate the importance of using a loss function designed for interval regression, we also considered the CART algorithm (Breiman et al., 1984). Specifically, CART was used to fit a regular regression tree on a transformed training set, where each interval regression example (x, [y, y]) was replaced by two real-valued regression examples with features x and labels y + and y − . 
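As a concrete illustration of the evaluation metric (16) and the Interval-CART baseline just described, here is a short Python sketch. It assumes the metric penalizes only predictions that fall outside the target interval (finite limits are not required); the helper names and the direction of the margin shift are our own illustrative choices, not the authors' code.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def interval_mse(pred, y_lower, y_upper):
    """MSE of Eq. (16): only predictions outside [y_lower, y_upper] are penalized.
    Infinite limits (censored intervals) simply contribute zero on that side."""
    below = np.minimum(pred - y_lower, 0.0)   # negative where pred < y_lower
    above = np.maximum(pred - y_upper, 0.0)   # positive where pred > y_upper
    return np.mean((below + above) ** 2)

def interval_cart(X, y_lower, y_upper, margin=0.0, **tree_kwargs):
    """Baseline sketch: duplicate each example with point targets y_lower + margin
    and y_upper - margin, then fit an ordinary CART regression tree."""
    X_aug = np.vstack([X, X])
    y_aug = np.concatenate([y_lower + margin, y_upper - margin])
    return DecisionTreeRegressor(**tree_kwargs).fit(X_aug, y_aug)
```

Note that the transformed tree minimizes an ordinary squared loss on the shifted limits, so, unlike MMIT, it still incurs a cost for predictions that lie strictly inside a wide target interval.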
This algorithm, which we call Interval-CART, uses a margin hyperparameter and minimizes a squared loss with respect to the interval limits. However, in contrast with MMIT, it does not take the structure of the interval regression problem into account, i.e., it ignores the fact that no cost should be incurred for values predicted inside the target intervals. Results in changepoint data sets The problem in the first two data sets is to learn a penalty function for changepoint detection in DNA copy number and ChIP-seq data (Hocking et al., 2013; Rigaill et al., 2013), two significant interval regression problems from the field of genomics. For the neuroblastoma data set, all methods, except the constant model, perform comparably. Interval-CART achieves the lowest error for one fold, but L1-Linear is the overall best performing method. For the histone data set, the margin-based models clearly outperform the non-margin-based models: Constant and TransfoTree. MMIT-S achieves the lowest error on one of the folds. Moreover, MMIT-S tends to outperform MMIT-L, suggesting that a squared loss is better suited for this task. Interestingly, MMIT-S outperforms Interval-CART, which also uses a squared loss, supporting the importance of using a loss function adapted to the interval regression problem. Results in UCI data sets The next two data sets are regression problems taken from the UCI repository (Lichman, 2013). For the sake of our comparison, the real-valued outputs in these data sets were transformed into censored intervals, using a protocol that we detail in the supplementary material. For the difficult triazines data set, all methods struggle to surpass the Constant model. Neverthess, some achieve lower errors for one fold. For the servo data set, the margin-based tree models: MMIT-S, MMIT-L, and Interval-CART perform comparably and outperform the other models. This highlights the importance of developping non-linear models for interval regression and suggests a positive effect of the margin hyperparameter on accuracy. Results in simulated data sets The last three data sets are the simulated data sets discussed in the previous section. As expected, the L1-linear model tends outperforms the others on the linear data set. However, surprisingly, on a few folds, the MMIT-L and Interval-CART models were able to achieve low test errors. For the non-linear data sets (sin and abs), MMIT-S, MMIT-L and Interval-Cart clearly outperform the TransfoTree, L1-linear and Constant models. Observe that the TransfoTree algorithm achieves results comparable to those of L1-linear which, in Section 5.2, has been shown to learn a roughly constant model in these situations. Hence, although these data sets are simulated, they highlight situations where this non-linear interval regression algorithm fails to yield accurate models, but where MMITs do not. Results for more data sets are available in the supplementary material. 6 Discussion and conclusions We proposed a new margin-based decision tree algorithm for the interval regression problem. We showed that it could be trained by solving a sequence of convex sub-problems, for which we proposed a new dynamic programming algorithm. We showed empirically that the latter’s time complexity is log-linear in the number of intervals in the data set. Hence, like classical regression trees (Breiman et al., 1984), our tree growing algorithm’s time complexity is linear in the number of features and log-linear in the number of examples. 
Moreover, we studied the prediction accuracy in several real and simulated data sets, showing that our algorithm is competitive with other linear and nonlinear models for interval regression. This initial work on Maximum Margin Interval Trees opens a variety of research directions, which we will explore in future work. We will investigate learning ensembles of MMITs, such as random forests. We also plan to extend the method to learning trees with non-constant leaves. This will increase the smoothness of the models, which, as observed in Figure 4, tend to have a stepwise nature. Moreover, we plan to study the average time complexity of the dynamic programming algorithm. Assuming a certain regularity in the data generating distribution, we should be able to bound the number of pointer moves and justify the time complexity that we observed empirically. In addition, we will study the conditions in which the proposed MMIT algorithm is expected to surpass methods that do not exploit the structure of the target intervals, such as the proposed Interval-CART method. Intuitively, one weakness of Interval-CART is that it does not properly model left and right-censored intervals, for which it favors predictions that are near the finite limits. Finally, we plan to extend the dynamic programming algorithm to data with un-censored outputs. This will make Maximum Margin Interval Trees applicable to survival analysis problems, where they should rank among the state of the art. Reproducibility • Implementation: https://git.io/mmit • Experimental code: https://git.io/mmit-paper • Data: https://git.io/mmit-data The versions of the software used in this work are also provided in the supplementary material. Acknowledgements We are grateful to Ulysse Côté-Allard, Mathieu Blanchette, Pascal Germain, Sébastien Giguère, Gaël Letarte, Mario Marchand, and Pier-Luc Plante for their insightful comments and suggestions. This work was supported by the National Sciences and Engineering Research Council of Canada, through an Alexander Graham Bell Canada Graduate Scholarship Doctoral Award awarded to AD and a Discovery Grant awarded to FL (#262067).
1. What is the focus and contribution of the paper on interval regression problems? 2. What are the strengths of the proposed dynamic programming algorithm? 3. What are the weaknesses of the paper regarding its claims, experiments, and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any concerns or suggestions regarding the algorithm's applicability to different scenarios?
Review
Review Maximum Margin Interval Trees --------------------------------- In this paper the authors study interval regression problems, where each example is associated with a range output instead of a single point. Specifically, the authors investigate how to modify trees to produce such output by minimizing a modified sum of hinge losses. The key contribution of the paper is a dynamic programming algorithm that efficiently constructs these trees. The authors provide experimental results on a variety of simulated and real datasets. This is a generally well written paper, though it gets a little subscriptitis in 4.1-4.2. The algorithm seems sound and the experiments do a good job of comparing the approach to a reasonable set of baselines on a variety of datasets along both accuracy and time complexity metrics (which is the key selling point of the paper). I did have some questions that the authors could clarify: 1. What is epsilon in the experiments? How was it chosen? What is the effect of varying it? 2. What changes if the breakpoints are not all different (line 117)? 3. Does the algorithm work for trees with nonconstant leaves? (a model tree of some sort) What would need to change? 4. The choice of the examples with CART in the experiments seems rather strange. I think a baseline with CART where the examples are (x, (y_low+y_high)/2) would make more sense. 5. Along the same lines, would just building two trees one for y_low and the other for y_high work? I'm not very convinced we need a separate method just for this kind of problem. To summarize, this is a nice paper that proposes and studies an algorithm for learning interval trees and supports it with experimental results. Some key clarifications and experimental modifications would make the paper stronger. After feedback: The authors have clarified several questions I had, and I have upgraded my score to take this into account.
NIPS
Title Post-processing for Individual Fairness Abstract Post-processing in algorithmic fairness is a versatile approach for correcting bias in ML systems that are already used in production. The main appeal of postprocessing is that it avoids expensive retraining. In this work, we propose general post-processing algorithms for individual fairness (IF). We consider a setting where the learner only has access to the predictions of the original model and a similarity graph between individuals guiding the desired fairness constraints. We cast the IF post-processing problem as a graph smoothing problem corresponding to graph Laplacian regularization that preserves the desired “treat similar individuals similarly” interpretation. Our theoretical results demonstrate the connection of the new objective function to a local relaxation of the original individual fairness. Empirically, our post-processing algorithms correct individual biases in large-scale NLP models such as BERT, while preserving accuracy. 1 Introduction There are many instances of algorithmic bias in machine learning (ML) models [1]–[4], which has led to the development of methods for quantifying and correcting algorithmic bias. To quantify algorithmic bias, researchers have proposed numerous mathematical definitions of algorithmic fairness. Broadly speaking, these definitions fall into two categories: group fairness [5] and individual fairness [6]. The former formalizes the idea that ML system should treat certain groups of individuals similarly, e.g., requiring the average loan approval rate for applicants of different ethnicities be similar [7]. The latter asks for similar treatment of similar individuals, e.g., same outcome for applicants with resumes that differ only in names [8]. Researchers have also developed many ways of correcting algorithmic bias. These fairness interventions broadly fall into three categories: pre-processing the data, enforcing fairness during model training (also known as in-processing), and post-processing the outputs of a model. While both group and individual fairness (IF) definitions have their benefits and drawbacks [5], [6], [9], the existing suite of algorithmic fairness solutions mostly enforces group fairness. The few prior works on individual fairness are all in-processing methods [10]–[13]. Although in-processing is arguably the most-effective type of intervention, it has many practical limitations. For example, it requires training models from scratch. Nowadays, it is more common to fine-tune publicly available models (e.g., language models such as BERT [14] and GPT-3 [15]) than to train models afresh, as many practitioners do not have the necessary computational resources. Even with enough computational resources, training large deep learning models has a significant environmental impact [4], [16]. Equal Contribution. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Post-processing offers an easier path towards incorporating algorithmic fairness into deployed ML models, and has potential to reduce environmental harm from re-training with in-processing fairness techniques. IF, the graph needs to be smooth, i.e., the similar / connected candidates should have similar node signals, which can be accomplished by also offering the job to Alice. In contrast, directly enforcing IF constraints [6] requires a certain degree of output similarity on all pairs of candidates, and not just on those which are connected and thus similar. Our main contributions are summarized below. 1. 
We cast post-processing for individual fairness as a graph smoothing problem and propose a coordinate descent algorithm to scale the approach to large data sets. 2. We demonstrate theoretically and verify empirically that graph smoothing enforces individual fairness constraints locally, i.e., it guarantees similar treatment of similar individuals. 3. We empirically compare the Laplacian smoothing method to the post-processing adaptation of the algorithm by Dwork et al. [6] enforcing global Lipschitz continuity. The Laplacian smoothing method is not only computationally more efficient but is also more effective in reducing algorithmic bias while preserving accuracy of the original model. 4. We demonstrate the efficacy of Laplacian smoothing on two large-scale text data sets by reducing biases in fine-tuned BERT models.

2 Post-processing Problem Formulation

Let $\mathcal{X}$ be the feature space, $\mathcal{Y}$ be the set of possible labels/targets, and $h : \mathcal{X} \to \mathcal{Y}$ be a (possibly unfair) ML model trained for the task. Our goal is to post-process the outputs of $h$ so that they are individually fair. Formally, the post-processor is provided with a set of inputs $\{x_i\}_{i=1}^n$ and the outputs of $h$ on those inputs, $\{\hat{y}_i \triangleq h(x_i)\}_{i=1}^n$, and its goal is to produce $\{\hat{f}_i\}_{i=1}^n$ that is both individually fair and similar to the $\hat{y}_i$'s. Recall that individual fairness of $h$ is the Lipschitz continuity of $h$ with respect to a fair metric $d_{\mathcal{X}}$ on the input space:
$$d_{\mathcal{Y}}(h(x), h(x')) \le L\, d_{\mathcal{X}}(x, x') \quad \text{for all } x, x' \in \mathcal{X}, \qquad (2.1)$$
where $L > 0$ is a Lipschitz constant. The fair metric encodes problem-specific intuition of which samples should be treated similarly by the ML model. It is analogous to the knowledge of protected attributes in group fairness needed to define the corresponding fairness constraints. Recent literature proposes several practical methods for learning a fair metric from data [18], [19]. We assume the post-processor is either given access to the fair metric (it can evaluate the fair distance on any pair of points in $\mathcal{X}$), or receives feedback on which inputs should be treated similarly. We encode this information in an adjacency matrix $W \in \mathbb{R}^{n \times n}$ of a graph with individuals as nodes. If the post-processor is given the fair metric, then the entries of $W$ are
$$W_{ij} = \begin{cases} \exp\!\big(-\theta\, d_{\mathcal{X}}(x_i, x_j)^2\big) & d_{\mathcal{X}}(x_i, x_j) \le \tau \\ 0 & \text{otherwise,} \end{cases} \qquad (2.2)$$
where $\theta > 0$ is a scale parameter and $\tau > 0$ is a threshold parameter. If the post-processor is given an annotator's feedback, then $W$ is a binary matrix with $W_{ij} = 1$ if $i$ and $j$ are considered to be treated similarly by the annotator and 0 otherwise. Extensions to multiple annotators are straightforward.

We start with a simple post-processing adaptation of the algorithm by Dwork et al. [6] for enforcing individual fairness, which projects the (possibly unfair) outputs of $h$ onto a constraint set enforcing (2.1). In other words, the post-processor seeks the closest set of outputs to the $\hat{y}_i$'s that satisfies individual fairness:
$$\{\hat{f}_i\}_{i=1}^n \in \left\{ \begin{array}{ll} \arg\min_{f_1,...,f_n} & \frac{1}{2}\sum_{i=1}^n d_{\mathcal{Y}}(f_i, \hat{y}_i)^2 \\[2pt] \text{subject to} & d_{\mathcal{Y}}(f_i, f_j) \le L\, d_{\mathcal{X}}(x_i, x_j) \end{array} \right. \qquad (2.3)$$
This objective function, though convex, scales poorly due to the order of $n^2$ constraints. Empirically, we observe that (2.3) leads to post-processed outputs that are dissimilar to the $\hat{y}_i$'s, leading to poor performance in practice. The goal of our method is to improve performance and scalability, while preserving the IF desideratum of treating similar individuals similarly.
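To make this comparison point concrete, the projection (2.3) can be written in a few lines with a generic convex solver. The sketch below assumes scalar outputs and $d_{\mathcal{Y}}$ taken as the absolute difference; the function name is ours, and the explicit list of pairwise constraints makes the O(n^2) scaling visible.

```python
import cvxpy as cp
import numpy as np

def global_if_projection(y_hat, d_fair, L):
    """Project model outputs y_hat (shape (n,)) onto the global IF constraint
    set of Eq. (2.3). d_fair is an (n, n) matrix of fair distances; scalar
    outputs and d_Y = absolute difference are assumed."""
    n = len(y_hat)
    f = cp.Variable(n)
    constraints = [
        cp.abs(f[i] - f[j]) <= L * d_fair[i, j]
        for i in range(n) for j in range(i + 1, n)   # O(n^2) constraints
    ]
    problem = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(f - y_hat)), constraints)
    problem.solve()
    return f.value
```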
Before presenting our method, we discuss other post-processing perspectives that differ in their applicability and input requirements.

2.1 Alternative Post-processing Formulations

We review three post-processing problem setups and the corresponding methods in the literature. First, one can fine-tune a model via an in-processing algorithm to reduce algorithmic biases. Yurochkin et al. [12] proposed an in-processing algorithm for IF and used it to train fair models for text classification using sentence BERT embeddings. This setting is the most demanding in terms of input and computational requirements: a user needs access to the original model parameters and the fair metric function, and must train a predictor, e.g., a moderately deep fully connected neural network, with a non-trivial fairness-promoting objective function. Second, it is possible to post-process by training additional models to correct the initial model's behavior. For example, Kim et al. [20] propose a boosting-based method for group fairness post-processing. This perspective can be adapted to individual fairness; however, it implicitly assumes that we can train weak learners to boost. Lohia et al. [21], [22] propose to train a bias detector to post-process for group fairness and a special, group-based, notion of individual fairness. Such methods are challenging to apply to text data or other non-tabular data types. The third perspective is the most generic: a user has access to original model outputs only, plus minimal additional feedback guiding fairness constraints. Wei et al. [23] consider such a setting and propose a method to satisfy group fairness constraints; however, it is not applicable to individual fairness. Our problem formulation belongs to this post-processing setup. The main benefit of this approach is its broad applicability and ease of deployment.

3 Graph Laplacian Individual Fairness

To formulate our method, we cast IF post-processing as a graph smoothing problem. Using the fair metric or human annotations as discussed in Section 2, we obtain an $n \times n$ matrix $W$ that we treat as an adjacency matrix. As elaborated earlier, the goal of post-processing is to obtain a model $f$ that is individually fair and accurate. The accuracy is achieved by minimizing the distance between the outputs of $f$ and $h$, a pre-trained model assumed to be accurate but possibly biased. Recall that we do not have access to the parameters of $h$, but can evaluate its predictions. Our method enforces fairness using a graph Laplacian quadratic form [24] regularizer:
$$\hat{f} = \arg\min_f g(f) = \arg\min_f \|f - \hat{y}\|_2^2 + \lambda\, f^\top L_n f, \qquad (3.1)$$
where $\hat{y}$ is the vector of outputs of the model $h$, and $\hat{f}$ is the vector of post-processed outputs, i.e., $\hat{f}_i = f(x_i)$ for $i = 1, \dots, n$. The matrix $L_n \in \mathbb{R}^{n \times n}$ is called the graph Laplacian matrix and is a function of $W$. There are multiple versions of $L_n$ popularized in the graph literature (see, e.g., [25] or [26]). To elucidate the connection to individual fairness, consider the unnormalized Laplacian $L_{un,n} = D - W$, where $D$ is the degree matrix corresponding to $W$, with $D_{ii} = \sum_{j=1}^n W_{ij}$ and $D_{ij} = 0$ for $i \ne j$. Then a known identity is:
$$f^\top L_{un,n} f = \frac{1}{2} \sum_{i \ne j} W_{ij} (f_i - f_j)^2. \qquad (3.2)$$
Hence, the Laplacian regularizer is small if the post-processed model outputs $\hat{f}_i$ and $\hat{f}_j$ (i.e., treatment) are similar for large $W_{ij}$ (i.e., for similar individuals $i$ and $j$). This promotes the philosophy of individual fairness: "treat similar individuals similarly".
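As a quick numerical sanity check of (2.2) and the identity (3.2), the short NumPy snippet below builds the similarity matrix and the unnormalized Laplacian and verifies the quadratic form on random data. It is an illustrative sketch (function names are ours); self-similarities are zeroed out, which does not affect the Laplacian.

```python
import numpy as np

def similarity_matrix(d_fair, theta, tau):
    """Adjacency matrix of Eq. (2.2) from a symmetric matrix of fair distances."""
    W = np.exp(-theta * d_fair ** 2) * (d_fair <= tau)
    np.fill_diagonal(W, 0.0)   # drop self-similarities; they do not change L
    return W

def unnormalized_laplacian(W):
    return np.diag(W.sum(axis=1)) - W

# Numerical check of the identity (3.2).
rng = np.random.default_rng(0)
d_fair = np.abs(rng.normal(size=(5, 5)))
d_fair = (d_fair + d_fair.T) / 2          # symmetrize for the check
W = similarity_matrix(d_fair, theta=1.0, tau=2.0)
L = unnormalized_laplacian(W)
f = rng.normal(size=5)
lhs = f @ L @ f
rhs = 0.5 * sum(W[i, j] * (f[i] - f[j]) ** 2
                for i in range(5) for j in range(5) if i != j)
assert np.isclose(lhs, rhs)
```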
This observation intuitively explains the motivation for minimizing the graph Laplacian quadratic form to achieve IF. In Section 4, we present a more formal discussion on the connections between the graph Laplacian regularization and IF. Our post-processing problem (3.1) is easy to solve: setting the gradient of g to 0 implies that the optimal solution bf is: −1 b Ln + Ln ⊤ f = I + λ yb . (3.3) 2 The Laplacian Ln is a positive semi-definite matrix ensuring that (3.1) is strongly convex and that (3.3) is a global minimum. In comparison to the computationally expensive constraint optimization problem (2.3), this approach has a simple closed-form expression. Note that the symmetry of the unnormalized Laplacian Lun,n simplifies (3.3); however, there are also non-symmetric Laplacian variations. In this work, we also consider the normalized random walk Laplacian Lnrw,n = (I − De−1Wf), where Wf = D−1/2WD−1/2 is the normalized adjacency matrix and De is its degree matrix. We discuss its properties in the context of IF in Section 4. Henceforth, we refer to our method as Graph Laplacian Individual Fairness (GLIF) when using the unnormalized Laplacian, and GLIF-NRW when using Normalized Random Walk Laplacian. 3.1 Prior Work on Graph Laplacians Graph-based learning via a similarity matrix is prevalent in statistics and ML literature, specifically, in semi-supervised learning. The core idea is to gather information from similar unlabeled inputs to improve prediction accuracy (e.g., see [27], [28], [29] and references therein). Laplacian regularization is widely used in science engineering. We refer to Chapelle [17] for a survey. We note that [30], [31] also use graph Laplacian regularizers to enforce individual fairness. Our work builds on their work by elucidating the key role played by the graph Laplacian in enforcing individual fairness. In particular, we clarify the connection between the choice of the graph Laplacian and the exact notion of individual fairness the corresponding graph Laplacian regularizer enforces. 3.2 Extensions of the Basic Method In this subsection, we present four extensions of our method: multi-dimensional outputs, coordinate descent for large-scale data, an inductive setting, and alternative output space discrepancy measures. 3.2.1 Multi-dimensional Output We presented our objective function (3.1) and post-processing procedure (3.3) for the case of univariate outputs. This covers regression and binary classification. Our method readily extends to multi-dimensional output space, for example, in classification, fi, ybi ∈ RK can represent logits, i.e., softmax inputs, of the K classes. In this case, f and ŷ are n × K matrices, and the term f ⊤Lnf is a K × K matrix. We use the trace of it as a regularizer. The optimization problem (3.1) then becomes: � b yk2 f⊤f = arg min g (f) = arg min kf − ˆ F + λ tr Lnf , (3.4) f f where k · kF is the Frobenius norm. Similar to the univariate output case, this yields: −1 b Ln + Ln ⊤ f = I + λ yb . (3.5) 2 The solution is the same as (3.3); however, now it accounts for multi-dimensional outputs. 3.2.2 Coordinate Descent for Large-Scale Data Although our method has a closed form solution, it is not immediately scalable, as we have to invert a n × n matrix to obtain the optimal solution. We propose a coordinate descent variant of our method that readily scales to any data size. The idea stems primarily from the gradient of equation (3.4), where we solve: Ln + L ⊤ nf − yb + λ f = 0 . 
(3.6) 2 Fixing {fj }j 6=i, we can solve (3.6) for fi: P ŷi − 2 j=6 i(Ln,ij + Ln,ji)fjfi ← . (3.7) 1 + λLn,ii This gives rise to the coordinate descent algorithm. We perform asynchronous updates over randomly selected coordinate batches until convergence. We refer the reader to Wright [32] and the references therein for the convergence properties of (asynchronous) coordinate descent. 3.2.3 Extension to the Inductive Setting This coordinate descent update is key to extending our approach to the inductive setting. To handle new unseen points, we assume we have a set of test points on which we have already post-processed the outputs of the ML model. To post-process new unseen points, we simply fix the outputs of the other test points and perform a single coordinate descent step with respect to the output of the new point. Similar strategies are often employed to extend transductive graph-based algorithms to the inductive setting [17]. 3.2.4 Alternative Discrepancy Measures on the Output Space So far, we have considered the squared Euclidean distance as a measure of discrepancy between outputs. This is a natural choice for post-processing models with continuous-valued outputs. For models that output a probability distribution over the possible classes, we consider alternative discrepancy measures on the output space. It is possible to replace the squared Euclidean distance with a Bregman divergence with very little change to the algorithm in the case of the unnormalized Laplacian. Below, we work through the details for the KL divergence as a demonstration of the idea. A result for the general Bregman divergence can be found in Appendix B.3 (see Theorem B.4). PKoi,j / oi,k }KSuppose the output of the pre-trained model h is ŷi ∈ K , where ˆ = {e -yi k=1 e j=1 a K dimensional probability vector corresponding to a K class classification problem ({oi,j} is the output of the penultimate layer of the pre-trained model and ŷi is obtained by passing it through softmax) PK and K = {x ∈ RK : xi ≥ 0, = 1} is the probability simplex in RK . Let Pv denote the i=1 xi multinomial distribution with success probabilities v for any v ∈ k . Define η̂i ∈ RK−1 (resp. ηi) as the natural parameter corresponding to ŷi (resp. fi), i.e., η̂i,j = log (ŷi,j/ŷi,K ) = oi,j − oi,K for 1 ≤ j ≤ K − 1. The (unnormalized) Laplacian smoothing problem with the KL divergence is n λ n o X X (ỹ1, . . . , ỹn) = arg min ∈ K KL (Pyi ||Pŷi )+ Wij KL Pyi ||Pyj . (3.8)y1,...,yn 2 i j=1,j=6 i A coordinate descent approach for solving the above equation is: n P � o n ỹi = arg miny∈ k KL (Py||Pŷi ) + Wij KL Py||Pỹj . (3.9)2 j=1,j=6 i The following theorem establishes that (3.5) solves the above problem in the logit space, or equivalently in the space of the corresponding natural parameters (see Appendix B for the proof): Theorem 3.1. Consider the following optimization problem on the space of natural parameters: h iPn η̃i = arg min kη − η̂ik2 + ηjk2 . (3.10)2 j=1,j 6=i Wij kη − ˜ Then, the minimizer η̃i of equation (3.10) is the natural parameter corresponding to the minimizer ỹi of (3.8). 4 Local IF and Graph Laplacian Regularization In this section, we provide theoretical insights into why the graph Laplacian regularizer enforces individual fairness. As pointed out in Section 2, enforcing IF globally is expensive and often reduces a significant amount of accuracy of the final classifier. 
Here, we establish that solving (3.1) is tantamount to enforcing a localized version of individual fairness, namely Local Individual Fairness, which is defined below: Definition 4.1 (Local Individual Fairness). An ML model h is said to be locally individually fair if it satisfies: " # dY (h(x), h(x ′ )) Ex∼P lim sup ≤ L < ∞ . (4.1) x ′ :dX (x,x ′)↓0 dX (x, x ′) For practical purposes, this means that h is locally individually fair with constants ǫ and L if it satisfies ′ dY (h(x), h(x ′ )) ≤ LdX (x, x ′ ) for all x, x ∈ X where dX (x, x ′ ) ≤ ǫ (4.2) in analogy to equation (2.1). Equation (4.2) is a relaxation of traditional IF, where we only care about the Lipschitz-constraint for all pairs of points with small fair distances, i.e., where it is less than some user-defined threshold ǫ. Example 4.2. For our theoretical analysis, we need to specify a functional form of the fair metric. A popular choice is a Mahalanobis fair metric proposed by [19], which is defined as: d2 (x, x ′ ) = (x − x ′ )⊤ (x − x ′ ), (4.3)X where is a dispersion matrix that puts lower weight in the directions of sensitive attributes and higher weight in the directions of relevant attributes. [19] also proposed several algorithms to learn such a fair metric from the data. If we further assume dY (y1, y2) = |y1 − y2|, then a simple application of Lagrange’s mean value theorem yields: |h(x) − h(x ′ )| lim sup ≤ k −1/2∇h(x)k . (4.4) x ′ :dX (x,x ′ )↓0 dX (x, x ′) This immediately implies: " # dY (h(x), h(x ′ )) Ex∼P lim sup ≤ E[k −1/2∇h(x)k] , (4.5) x ′ :dX (x,x ′ )↓0 dX (x, x ′) i.e., h satisfies local individual fairness constraint as long as E[k −1/2∇h(x)k] < ∞. On the other hand, the global IF constraint necessitates sup k −1/2∇h(x)k < ∞, i.e., h is Lipschitz x∈X continuous with respect to the Mahalanobis distance. The main advantage of this local notion of IF over its global counterpart is that the local definition concentrates on the input pairs with smaller fair distance and ignores those with larger distance. For example, in Figure 1, the edge-weights among Alice, Charlie, and Dave are much larger than among any other pairs (which have a weight of 0); therefore, our local notion enforces fairness constraint on the corresponding similar pairs, while ignoring (or being less stringent on) others. This prevents over-smoothing and consequently preserves accuracy while enforcing fairness as is evident from our real data experiments in Section 5. We now present our main theorem, which establishes that, under certain assumptions on the underlying hypothesis class and the distribution of inputs, the graph Laplacian regularizers (both unnormalized and normalized random walk) enforce the local IF constraint (as defined in Definition 4.1) in the limit. For our theory, we work with dX as the Mahalanobis distance introduced in Example 4.2 in equation (2.2) along with θ = 1/(2σ2) (σ is a bandwidth parameter which goes to 0 at an appropriate rate as n →∞) and τ = ∞. All our results will be thorough for any finite τ but with more tedious technical analysis. Therefore, our weight matrix W becomes: | |1/2 1 Wij = exp − (xi − xj ) ⊤ (xi − xj ) . (4.6) (2π)d/2σd 2σ2 The constant | |1/2/((2π)d/2σd) is for the normalization purpose and can be absorbed into the penalty parameter λ. We start by listing our assumptions: Assumption 4.3 (Assumption on the domain). The domain of the inputs X is a compact subset of R d where d is the underlying dimension. Assumption 4.4 (Assumption on the hypothesis). 
All functions f ∈ F of the hypothesis class satisfy the following: 1. The ith derivative f (i) is uniformly bounded over the domain X of inputs for i ∈ {0, 1, 2}. 2. f (1)(x) = 0 for all x ∈ ∂X , where ∂X denotes the boundary of X . Assumption 4.5 (Assumption on the density of inputs). The density p of the input random variable x on the domain X satisfies the following: 1. There exists pmax < ∞ and pmin > 0 such that, for all x ∈ X , we have pmin ≤ p(x) ≤ pmax. 2. The derivatives {p(i)}i=0,1,2 of the density p are uniformly bounded on the domain X . Discussion on the assumptions Most of our assumptions (e.g., compactness of the domain, bounded derivatives of f or p) are for technical simplicity and are fairly common for the asymptotic analysis of graph regularization (see, e.g., Hein et al. [25], [33] and references therein). It is possible to relax some of the assumptions: for example, if the domain X of inputs is unbounded, then the target function f and the density p should decay at certain rate so that observations far away will not be able to affect the convergence (e.g., sub-exponential tails). Part (2.) of Assumption 4.4 can be relaxed if we assume p(x) is 0 at boundary. However, we do not pursue these extensions further in this manuscript, as they are purely technical and do not add anything of significance to the main intuition of the result. Theorem 4.6. Under Assumptions 4.3 - 4.5, we have: 1. If the sequence of bandwidths σ ≡ σn ↓ 0 such that nσn 2 → ∞ and Lun,n is unnormalized Laplacian matrix, then 2 P n2σ2 f⊤Lun,nf −→ Ex∼p ∇f(x) ⊤ −1∇f(x) p(x) . (4.7) 2. If the sequence of bandwidths σ ≡ σn ↓ 0 such that (nσd+4)/(log (1/σ)) → ∞ and Lnrw,n is the normalized random walk Laplacian matrix, then: 1 P f⊤Lnrw,nf −→ Ex∼p ∇f(x) ⊤ −1∇f(x) . (4.8) nσ2 where f = {f(xi)}ni=1. Consequently, both Laplacian regularizers asymptotically enforce local IF. The proof of the above theorem can be found in Appendix B. When we use a normalized random walk graph Laplacian matrix Lnrw,n as regularizer, the regularizer does (asymptotically) penalize E ∇f(x)⊤ −1∇f(x) = E k −1/2∇f(x)k2 , which, by Example 4.2, is equivalent to enforcing the local IF constraint. Similarly, the un-normalized Laplacian matrix Lun,n, also enforces the same under Assumption 4.5 as: h i 1 E k −1/2∇f(x)k2 ≤ E ∇f(x)⊤ −1∇f(x) p(x) , where pmin = inf p(x). (4.9) pmin x∈X Although both the Laplacian matrices enforce local IF, the primary difference between them is that the limit of the unnormalized Laplacian involves the density p(x), i.e., it upweights the high-density region (consequently stringent imposition of fairness constraint), whereas it down-weights the underrepresented/low-density region. On the other hand, the limit corresponding to the normalized random walk Laplacian matrix does not depend on p(x) and enforces fairness constraint with equal intensity on the entire input space. We used both regularizers in our experiments, comparing and contrasting their performance on several practical ML problems. 5 Experiments The goals of our experiments are threefold: 1. Exploring the trade-offs between post-processing for local IF with GLIF and post-processing with (global) IF constraints using our adaptation of the algorithm by Dwork et al. [6] described in (2.3). 2. Studying practical implications of theoretical differences between GLIF and GLIF-NRW, i.e., different graph Laplacians, presented in Section 4. 3. 
Evaluating the effectiveness of GLIF in its main application, i.e., computationally light debiasing of large deep learning models such as BERT. The implementation of this work is available at github.com/Felix-Petersen/fairness-post-processing. 5.1 Comparing GLIF and Global IF-constraints For our first experiment, we consider the sentiment prediction task [34], where our goal is to classify words as having a positive or negative sentiment. The baseline model is a neural network trained with GloVe word embeddings [35]. Following Yurochkin et al. [11], we evaluate the model on a set of names and observe that it assigns varying sentiments to names. An individually fair model should assign similar sentiment scores to all names. Further, we observe that there is a gap between average sentiments of names typical for Caucasian and African-American ethnic groups [36], which is violating group fairness. Yurochkin et al. [11] propose a fair metric learning procedure for this task using a side data set of names, and an in-processing technique for achieving individual fairness. We use their method to obtain the fair metric and compare post-processing of the baseline model with GLIF, GLIF-NRW and the global IF-constraints method. The test set comprises 663 words from the original task and 94 names. For post-processing, no problem specific knowledge is used. The resulting post-processed predictions for the original test set are used to evaluate accuracy, and the predictions on the names are used for evaluating fairness metrics. Even for this small problem, the global IF-constraints method, i.e., a CVXPY [37] implementation of (2.3), takes 7 minutes to run. Due to the poor scalability of the global IF-constraints method, we can use it only for the study of this smaller data set and can not consider it for the large language model experiments in Section 5.2. For GLIF(-NRW), we implement the closed-form solution (3.3) that takes less than a tenth of a second to run. See Appendix A for additional experimental details and a runtime analysis. We evaluate the fairness-accuracy trade-offs for a range of threshold parameters τ (for GLIF and GLIF-NRW) and for a range of Lipschitz-constants L (for IF-constraints) in Figure 2. Figure 2 (left) shows the standard deviation of the post-processed outputs on all names as a function of test accuracy on the original sentiment task. Lower standard deviations imply that all names received similar predictions, which is the goal of individual fairness. Figure 2 (center) visualizes group fairness and accuracy, i.e., difference in average name sentiment scores for the two ethnic groups. In this problem, individual fairness is a stronger notion of fairness: achieving similar predictions for all names implies similar group averages, but not vice a versa. Therefore, for this task, post-processing for individual fairness also corrects group disparities. In both settings, GLIF and GLIF-NRW achieve substantially better fairness metrics for the same levels of test accuracy in comparison to the IF-constraints method. To understand the reason for this, we study which global IF constraints are violated after applying the GLIF method in Figure 2 (right). Corresponding to the unique pairs of words in our test set, there are n(n − 1)/2 unique constraints in (2.3), and the global IF-constraints method satisfies all of them by design. 
Each constraint (i.e., each pair of words) corresponds to a fair distance, which is small for (under the fair metric) similar words and large for dissimilar words. We bin the constraints by fair distance and present the proportion of global IF constraints violated after applying the GLIF method for each bin in the histogram in Figure 2 (right). Here, we set the Lipschitz-constant L in (2.3) to L = 2.25 corresponding to a 89.4% accuracy of the IF-constraints method and show global IF constraint violations of GLIF corresponding to 95% accuracy in blue. This means that we use strong global IF constraints and use a setting of the GLIF method which maintains most of the accuracy, which would not be possible using the IF-constraints method. GLIF does not violate any constraints corresponding to small fair distances, i.e., it satisfies IF on similar individuals, while violating many large fair distance constraints. This can be seen as basically all constraint violations (blue) are at large fair distances of greater or equal 6. This demonstrates the effect of enforcing local individual fairness from our theoretical analysis in Section 4. At the same time, we display frequency of constraints that correspond to pairs of names in orange, where we can see that almost all constraints corresponding to names occur at small fair distances of smaller or equal to 6. This is expected in this task because we consider all names similar, so fair distances between them should be small. We can see that the distributions of constraint violations after applying GLIF (blue, right) and names (orange, left) are almost disjoint. We mark all global IF constraint violations after applying GLIF that correspond to names in green, and observe that there are none. Summarizing, GLIF ignores unnecessary (in the context of this problem) constraints allowing it to achieve higher accuracy, while satisfying the more relevant local IF constraints leading to improved fairness. Regarding the practical differences between GLIF and GLIF-NRW, in Figure 2 (left) GLIF has smaller standard deviations on the name outputs, but in in Figure 2 (center) GLIF-NRW achieves lower race gap. In Theorem 4.6, we showed that GLIF penalizes fairness violations in high density data regions stronger. As a result, GLIF may favor enforcing similar outputs in the high density region causing lower standard deviation, while leaving outputs nearly unchanged in the lower density region, resulting in larger race gaps. GLIF-NRW weights all data density regions equally, i.e., it is less likely to miss a small subset of names, but is less stringent in the high density regions. 5.2 Post-processing for Debiasing Large Language Models Large language models have achieved impressive results on many tasks; however, there is also significant evidence demonstrating that they are prone to biases [4], [38], [39]. Debiasing these models remains largely an open problem: most in-processing algorithms are not applicable or computationally prohibitive due to large and highly complex model architectures, and challenges in handling text inputs. Even if an appropriate in-processing algorithm arises, significant environmental impact due to re-training is unavoidable [4], [16]. In our experiments, we evaluate effectiveness of GLIF as a simple post-processing technique to debias BERT-based models for text classification. Another possible solution is to fine-tune BERT with an in-processing technique as was done by Yurochkin et al. [12]. 
The two approaches are not directly comparable: fine-tuning with SenSeI [12] requires knowledge of the model parameters, alleviates only part of the computational burden, and has more stringent requirements on the fair metric, while post-processing with GLIF is transductive, i.e., it requires access to unlabeled test data (see extended discussion in Section 2.1).

Table 1: Results for the Bios task.
  Method      Test Acc.        Pred. Consist.
  Baseline    0.846 ± 0.003    0.942 ± 0.002
  GLIF        0.830 ± 0.004    0.986 ± 0.002
  GLIF-NRW    0.834 ± 0.003    0.988 ± 0.002
  SenSEI      0.843 ± 0.003    0.977 ± 0.001

Table 2: Results for the Toxicity task.
  Method      Test Acc.        Pred. Consist.
  Baseline    0.809 ± 0.004    0.614 ± 0.013
  GLIF        0.803 ± 0.003    0.835 ± 0.012
  GLIF-NRW    0.803 ± 0.003    0.844 ± 0.013
  SenSEI      0.791 ± 0.005    0.773 ± 0.043

We replicate the experiments of Yurochkin et al. [12] on the Bios [40] and Toxicity1 data sets. They use the approach of Mukherjee et al. [19] for fair metric learning, which we reproduce. We refer to Appendix B.1 of [12] for details. In both tasks, following [12], we quantify performance with balanced accuracy due to class imbalance, and measure individual fairness via prediction consistency, i.e., the fraction of test points where the prediction remains unchanged when performing task-specific input modifications. For implementation details, see Appendix A. In Appendix A.4, we analyze the runtime and distinguish between the closed-form and coordinate descent variants of GLIF.

In Bios, the goal is to predict the occupation of a person based on their textual biography. Such models can be useful for recruiting purposes. However, due to historical gender bias in some occupations, the baseline BERT model learns to associate gender pronouns and names with the corresponding occupations. Individual fairness is measured with prediction consistency with respect to gender pronoun and name alterations. A prediction is considered consistent if it is the same after swapping the gender pronouns and names. We present the fairness-accuracy trade-off in Figure 3 (left) for a range of threshold parameters τ, and compare performance based on hyperparameter values selected on validation data in Table 1. Both GLIF and GLIF-NRW noticeably improve individual fairness measured with prediction consistency, while retaining most of the accuracy.

In Toxicity, the task is to identify toxic comments—an important tool for facilitating inclusive discussions online. The baseline BERT model learns to associate certain identity words with toxicity (e.g., "gay") because they are often abused in online conversations. The prediction consistency is measured with respect to changes to identity words in the inputs. There are 50 identity words, e.g., "gay", "muslim", "asian", etc., and a prediction is considered consistent if it is the same for all 50 identities. We present the trade-off plots in Figure 3 (right) and compare performance in Table 2. Our methods reduce individual biases in BERT predictions. We note that in both the Toxicity and Bios experiments, we observe no practical differences between GLIF and GLIF-NRW.
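For reference, the prediction consistency metric used in both experiments can be computed in a few lines. The sketch below assumes the predicted class is available for every perturbed copy of each test input (e.g., the gender-swapped biographies in Bios, or the 50 identity substitutions in Toxicity); the function name and input layout are ours.

```python
import numpy as np

def prediction_consistency(preds):
    """preds: array of shape (n_examples, n_variants) holding the predicted class
    for each perturbed copy of an input. An example counts as consistent only if
    every variant receives the same prediction; the metric is the consistent fraction."""
    preds = np.asarray(preds)
    consistent = (preds == preds[:, :1]).all(axis=1)
    return consistent.mean()
```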
We also note that it is possible to use our objective for in-processing. We conclude with two warnings: First, enforcing any algorithmic fairness definition does not guarantee complete fairness from the perspective of the user. The problem-specific meaning of fairness is often hard to encode exactly with a mathematical fairness definition. Second, while local individual fairness is a reasonable choice in many applications, this choice should be understood and verified by the practitioner depending on the situation. 1Based on the Kaggle “Toxic Comment Classification Challenge”. Acknowledgments and Disclosure of Funding This note is based upon work supported by the National Science Foundation (NSF) under grants no. 1916271, 2027737, and 2113373 and supported by the German Research Foundation (DFG) under Germany’s Excellence Strategy EXC–2117–390829875. Any opinions, findings, and conclusions or recommendations expressed in this note are those of the authors and do not necessarily reflect the views of the NSF nor the DFG.
1. What is the main contribution of the paper regarding individual fairness in machine learning? 2. What are the strengths and weaknesses of the proposed approach, particularly in its novelty, motivation, and limitations? 3. Do you have any questions or concerns about the paper's experimental results, baselines, and comparisons? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a post-processing approach to promote individual fairness of the outcome of any machine learning model. Based on the problem formulation, a basic approach together with several variants utilizing graph Laplacian to enforce individual fairness (GLIF) are presented. In the experiments, GLIF shows better efficiency, fairness metrics, and balance between test accuracy and prediction consistency. Generally, this paper is with limited technical novelty and lacks baselines for a comprehensive comparison. However, making trails to enforce individual fairness in different realms, such as debiasing the output of language models, is always encouraged. Review Strengths: (1) This paper considers a practical scenario and is well-organized. (2) This paper is well-motivated, and utilizing graph Laplacian to promote individual fairness is a reasonable way despite the lack of novelty. (3) Compared to the chosen baselines, the proposed GLIF achieves better performance in different ways. Weaknesses: (1) Some techniques proposed in this paper lack novelty and motivation. (2) This paper lacks the study on related work, which should have been included to enrich the baselines for a comprehensive investigation. (3) Some potential limitations are not well considered and presented. Details: In this paper, a post-processing approach based on graph Laplacian to promote individual fairness is proposed. On the one hand, this paper considers a practical scenario, which is the post-processing stage, to promote individual fairness. Generally, this paper is well-motivated and well-organized. Experiments also verify GLIF’s better performance on efficiency, fairness metrics, and balance between test accuracy and prediction consistency. Nevertheless, on the other hand, the drawbacks of this paper are also obvious. Firstly, some techniques proposed in this paper lacks novelty. For example, this paper lacks a survey on related works, which should have been included to notify the audience that similar approaches have been proposed and adopted in various previous works, such as [1, 2]. Also, although enforcing individual fairness based on graph Laplacian is already a widely adopted approach, what would be the specific motivation in this paper? Secondly, some techniques in this paper also lack motivation. For example, what would be the advantage of making use of alternative discrepancy measures on the output space compared with your approach in section 3.2.1? Both of the outputs here can be regarded as a distribution. Also, if KL Divergence is adopted here, do we need to explicitly assign the distribution? If the answer is positive, would you believe that this can also be a limitation of the approach in section 3.2.4? Can you provide a theoretical analysis on what is the main motivation and advantage of the alternative discrepancy measure? Thirdly, some potential limitation is not well considered. Beyond the limitation we mentioned above, why should we base on sensitive attributes to promote individual fairness in Example 4.2? What if there is no sensitive attribute assignment? What is the definition of the mentioned sensitive attribute here? Also, does the dispersion matrix only have the ability to reflect linear relationships? Will this be another limitation? Beyond the problems I mentioned above, I would still have the following concerns: (1) For inductive settings, only performing a single coordinate descent step could be far from optimal, and the step length can be hard to specify. 
How could you tackle such problem? (2) What is the exact definition of the bandwidth here? In the paper, is h a function or a bandwidth parameter? [1] Lahoti, P., Gummadi, K. P., & Weikum, G. (2019). Operationalizing individual fairness with pairwise fair representations. arXiv preprint arXiv:1907.01439. [2] Kang, J., He, J., Maciejewski, R., & Tong, H. (2020, August). Inform: Individual fairness on graph mining. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (pp. 379-389).
NIPS
Title Post-processing for Individual Fairness Abstract Post-processing in algorithmic fairness is a versatile approach for correcting bias in ML systems that are already used in production. The main appeal of postprocessing is that it avoids expensive retraining. In this work, we propose general post-processing algorithms for individual fairness (IF). We consider a setting where the learner only has access to the predictions of the original model and a similarity graph between individuals guiding the desired fairness constraints. We cast the IF post-processing problem as a graph smoothing problem corresponding to graph Laplacian regularization that preserves the desired “treat similar individuals similarly” interpretation. Our theoretical results demonstrate the connection of the new objective function to a local relaxation of the original individual fairness. Empirically, our post-processing algorithms correct individual biases in large-scale NLP models such as BERT, while preserving accuracy. 1 Introduction There are many instances of algorithmic bias in machine learning (ML) models [1]–[4], which has led to the development of methods for quantifying and correcting algorithmic bias. To quantify algorithmic bias, researchers have proposed numerous mathematical definitions of algorithmic fairness. Broadly speaking, these definitions fall into two categories: group fairness [5] and individual fairness [6]. The former formalizes the idea that ML system should treat certain groups of individuals similarly, e.g., requiring the average loan approval rate for applicants of different ethnicities be similar [7]. The latter asks for similar treatment of similar individuals, e.g., same outcome for applicants with resumes that differ only in names [8]. Researchers have also developed many ways of correcting algorithmic bias. These fairness interventions broadly fall into three categories: pre-processing the data, enforcing fairness during model training (also known as in-processing), and post-processing the outputs of a model. While both group and individual fairness (IF) definitions have their benefits and drawbacks [5], [6], [9], the existing suite of algorithmic fairness solutions mostly enforces group fairness. The few prior works on individual fairness are all in-processing methods [10]–[13]. Although in-processing is arguably the most-effective type of intervention, it has many practical limitations. For example, it requires training models from scratch. Nowadays, it is more common to fine-tune publicly available models (e.g., language models such as BERT [14] and GPT-3 [15]) than to train models afresh, as many practitioners do not have the necessary computational resources. Even with enough computational resources, training large deep learning models has a significant environmental impact [4], [16]. Equal Contribution. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Post-processing offers an easier path towards incorporating algorithmic fairness into deployed ML models, and has potential to reduce environmental harm from re-training with in-processing fairness techniques. IF, the graph needs to be smooth, i.e., the similar / connected candidates should have similar node signals, which can be accomplished by also offering the job to Alice. In contrast, directly enforcing IF constraints [6] requires a certain degree of output similarity on all pairs of candidates, and not just on those which are connected and thus similar. Our main contributions are summarized below. 1. 
We cast post-processing for individual fairness as a graph smoothing problem and propose a coordinate descent algorithm to scale the approach to large data sets. 2. We demonstrate theoretically and verify empirically that graph smoothing enforces individual fairness constraints locally, i.e., it guarantees similar treatment of similar individuals. 3. We empirically compare the Laplacian smoothing method to the post-processing adaptation of the algorithm by Dwork et al. [6] enforcing global Lipschitz continuity. The Laplacian smoothing method is not only computationally more efficient but is also more effective in reducing algorithmic bias while preserving accuracy of the original model. 4. We demonstrate the efficacy of Laplacian smoothing on two large-scale text data sets by reducing biases in fine-tuned BERT models. 2 Post-processing Problem Formulation Let X be the feature space, Y be the set of possible labels/targets, and h : X → Y be a (possibly unfair) ML model trained for the task. Our goal is to post-process the outputs of h so that they are individually fair. Formally, the post-processor is provided with a set of inputs {xi} n i=1 and the outputs of h on the inputs {ybi , h(xi)}in =1, and its goal is to produce {fb i}in =1 that is both individually fair and similar to the ybi’s. Recall that individual fairness of h is the Lipschitz continuity of h with respect to a fair metric dX on the input space: ′ dY (h(x), h(x ′ )) ≤ LdX (x, x ′ ) for all x, x ∈ X , (2.1) where L > 0 is a Lipschitz constant. The fair metric encodes problem-specific intuition of which samples should be treated similarly by the ML model. It is analogous to the knowledge of protected attributes in group fairness needed to define corresponding fairness constraints. Recent literature proposes several practical methods for learning fair metric from data [18], [19]. We assume the postprocessor is either given access to the fair metric (it can evaluate the fair distance on any pair of points in X ), or receives feedback on which inputs should be treated similarly. We encode this information in an adjacency matrix W ∈ Rn×n of a graph with individuals as nodes. If the post-processor is given the fair metric, then the entries of W are ˆ exp(−θdX (xi, xj) 2) dX (xi, xj) ≤ τ Wij = (2.2) 0 otherwise, where θ > 0 is a scale parameter and τ > 0 is a threshold parameter. If the post-processor is given an annotator’s feedback, then W is a binary matrix with Wij = 1 if i and j are considered to be treated similarly by the annotator and 0 otherwise. Extensions to multiple annotators are straightforward. We start with a simple post-processing adaptation of the algorithm by Dwork et al. [6] for enforcing individual fairness, that projects the (possibly unfair) outputs of h onto a constraint set to enforce (2.1). In other words, the post-processor seeks the closest set of outputs to the ybi’s that satisfies individual fairness: n 1 ( arg minf1,...,fn P dY (fi, ybi)2 ) i=1 2{fb i}ni=1 ∈ . (2.3) subject to dY (fi, fj ) ≤ LdX (xi, xj ) This objective function, though convex, scales poorly due to the order of n2 constraints. Empirically, we observe that (2.3) leads to post-processed outputs that are dissimilar to the ybi’s, leading to poor performance in practice. The goal of our method is to improve performance and scalability, while preserving the IF desiderata of treating similar individual similarly. 
Before presenting our method, we discuss other post-processing perspectives that differ in their applicability and input requirements.

2.1 Alternative Post-processing Formulations

We review three post-processing problem setups and the corresponding methods in the literature. First, one can fine-tune a model via an in-processing algorithm to reduce algorithmic biases. Yurochkin et al. [12] proposed an in-processing algorithm for IF and used it to train fair models for text classification using sentence BERT embeddings. This setting is the most demanding in terms of input and computational requirements: a user needs access to the original model parameters and the fair metric function, and must train a predictor, e.g., a moderately deep fully connected neural network, with a non-trivial fairness-promoting objective function.

Second, it is possible to post-process by training additional models to correct the initial model's behavior. For example, Kim et al. [20] propose a boosting-based method for group fairness post-processing. This perspective can be adapted to individual fairness; however, it implicitly assumes that we can train weak learners to boost. Lohia et al. [21], [22] propose to train a bias detector to post-process for group fairness and a special, group-based, notion of individual fairness. Such methods are challenging to apply to text data or other non-tabular data types.

The third perspective is the most generic: a user has access to original model outputs only, and minimal additional feedback guiding fairness constraints. Wei et al. [23] consider such a setting and propose a method to satisfy group fairness constraints; however, it is not applicable to individual fairness. Our problem formulation belongs to this post-processing setup. The main benefit of this approach is its broad applicability and ease of deployment.

3 Graph Laplacian Individual Fairness

To formulate our method, we cast IF post-processing as a graph smoothing problem. Using the fair metric or human annotations as discussed in Section 2, we obtain an n × n matrix W that we treat as an adjacency matrix. As elaborated earlier, the goal of post-processing is to obtain a model f that is individually fair and accurate. The accuracy is achieved by minimizing the distance between the outputs of f and h, a pre-trained model assumed to be accurate but possibly biased. Recall that we do not have access to the parameters of h, but can evaluate its predictions. Our method enforces fairness using a graph Laplacian quadratic form [24] regularizer:

f̂ = arg min_f g(f) = arg min_f ‖f − ŷ‖₂² + λ fᵀ L_n f,   (3.1)

where ŷ is the output of the model h, and f̂ is the vector of the post-processed outputs, i.e., f̂_i = f(x_i) for i = 1, ..., n. The matrix L_n ∈ R^{n×n} is called the graph Laplacian matrix and is a function of W. There are multiple versions of L_n popularized in the graph literature (see, e.g., [25] or [26]). To elucidate the connection to individual fairness, consider the unnormalized Laplacian L_un,n = D − W, where D_ii = Σ_{j=1}^n W_ij and D_ij = 0 for i ≠ j is the degree matrix corresponding to W. Then a known identity is:

fᵀ L_un,n f = (1/2) Σ_{i≠j} W_ij (f_i − f_j)².   (3.2)

Hence, the Laplacian regularizer is small if the post-processed model outputs f̂_i and f̂_j (i.e., treatment) are similar for large W_ij (i.e., for similar individuals i and j). This promotes the philosophy of individual fairness: “treat similar individuals similarly”.
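The unnormalized Laplacian and the identity (3.2) can be checked numerically with a few lines of NumPy. This is a small illustrative sketch, not part of the paper's implementation; the random W and f are only there to exercise the identity.

```python
import numpy as np

def unnormalized_laplacian(W):
    """L_un = D - W, with D the diagonal degree matrix of W."""
    D = np.diag(W.sum(axis=1))
    return D - W

# Sanity check of identity (3.2): f^T L f == 0.5 * sum_{i != j} W_ij (f_i - f_j)^2
rng = np.random.default_rng(0)
W = rng.random((5, 5))
W = (W + W.T) / 2            # symmetric weights
np.fill_diagonal(W, 0.0)     # no self-loops
f = rng.normal(size=5)
L = unnormalized_laplacian(W)
lhs = f @ L @ f
rhs = 0.5 * sum(W[i, j] * (f[i] - f[j]) ** 2
                for i in range(5) for j in range(5) if i != j)
assert np.isclose(lhs, rhs)
```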
This observation intuitively explains the motivation for minimizing the graph Laplacian quadratic form to achieve IF. In Section 4, we present a more formal discussion on the connections between graph Laplacian regularization and IF. Our post-processing problem (3.1) is easy to solve: setting the gradient of g to 0 implies that the optimal solution f̂ is:

f̂ = ( I + λ (L_n + L_nᵀ)/2 )^{−1} ŷ.   (3.3)

The Laplacian L_n is a positive semi-definite matrix, ensuring that (3.1) is strongly convex and that (3.3) is a global minimum. In comparison to the computationally expensive constrained optimization problem (2.3), this approach has a simple closed-form expression. Note that the symmetry of the unnormalized Laplacian L_un,n simplifies (3.3); however, there are also non-symmetric Laplacian variations. In this work, we also consider the normalized random walk Laplacian L_nrw,n = I − D̃^{−1} W̃, where W̃ = D^{−1/2} W D^{−1/2} is the normalized adjacency matrix and D̃ is its degree matrix. We discuss its properties in the context of IF in Section 4. Henceforth, we refer to our method as Graph Laplacian Individual Fairness (GLIF) when using the unnormalized Laplacian, and GLIF-NRW when using the normalized random walk Laplacian.

3.1 Prior Work on Graph Laplacians

Graph-based learning via a similarity matrix is prevalent in the statistics and ML literature, specifically in semi-supervised learning. The core idea is to gather information from similar unlabeled inputs to improve prediction accuracy (e.g., see [27], [28], [29] and references therein). Laplacian regularization is widely used in science and engineering. We refer to Chapelle [17] for a survey. We note that [30], [31] also use graph Laplacian regularizers to enforce individual fairness. Our work builds on their work by elucidating the key role played by the graph Laplacian in enforcing individual fairness. In particular, we clarify the connection between the choice of the graph Laplacian and the exact notion of individual fairness the corresponding graph Laplacian regularizer enforces.

3.2 Extensions of the Basic Method

In this subsection, we present four extensions of our method: multi-dimensional outputs, coordinate descent for large-scale data, an inductive setting, and alternative output space discrepancy measures.

3.2.1 Multi-dimensional Output

We presented our objective function (3.1) and post-processing procedure (3.3) for the case of univariate outputs. This covers regression and binary classification. Our method readily extends to multi-dimensional output spaces; for example, in classification, f_i, ŷ_i ∈ R^K can represent logits, i.e., softmax inputs, of the K classes. In this case, f and ŷ are n × K matrices, and the term fᵀ L_n f is a K × K matrix, whose trace we use as the regularizer. The optimization problem (3.1) then becomes:

f̂ = arg min_f g(f) = arg min_f ‖f − ŷ‖_F² + λ tr(fᵀ L_n f),   (3.4)

where ‖·‖_F is the Frobenius norm. Similar to the univariate output case, this yields:

f̂ = ( I + λ (L_n + L_nᵀ)/2 )^{−1} ŷ.   (3.5)

The solution is the same as (3.3); however, it now accounts for multi-dimensional outputs.
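A minimal sketch of the closed-form post-processing step (3.3)/(3.5) for both Laplacian choices is given below. It assumes every node has at least one neighbor and is meant as an illustration of the formula, not as the released implementation.

```python
import numpy as np

def glif_closed_form(y_hat, W, lam=1.0, normalized=False):
    """Closed-form post-processing, Eq. (3.3)/(3.5).

    y_hat      : (n,) or (n, K) array of pre-trained model outputs (e.g. logits).
    W          : (n, n) similarity graph from the fair metric or annotations.
    lam        : regularization strength lambda.
    normalized : if True, use the normalized random-walk Laplacian (GLIF-NRW),
                 otherwise the unnormalized Laplacian (GLIF).
    """
    n = W.shape[0]
    D = W.sum(axis=1)                                  # node degrees (assumed > 0)
    if normalized:
        W_tilde = W / np.sqrt(np.outer(D, D))          # D^{-1/2} W D^{-1/2}
        D_tilde = W_tilde.sum(axis=1)
        L = np.eye(n) - W_tilde / D_tilde[:, None]     # I - \tilde{D}^{-1} \tilde{W}
    else:
        L = np.diag(D) - W                             # D - W
    A = np.eye(n) + lam * (L + L.T) / 2                # symmetrized, as in (3.3)
    return np.linalg.solve(A, y_hat)
```

The same call handles univariate and multi-dimensional outputs, since the linear solve acts column-wise on y_hat.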
3.2.2 Coordinate Descent for Large-Scale Data

Although our method has a closed-form solution, it is not immediately scalable, as we have to invert an n × n matrix to obtain the optimal solution. We propose a coordinate descent variant of our method that readily scales to any data size. The idea stems primarily from the gradient of equation (3.4), where we solve:

f − ŷ + λ (L_n + L_nᵀ)/2 f = 0.   (3.6)

Fixing {f_j}_{j≠i}, we can solve (3.6) for f_i:

f_i ← ( ŷ_i − (λ/2) Σ_{j≠i} (L_n,ij + L_n,ji) f_j ) / ( 1 + λ L_n,ii ).   (3.7)

This gives rise to the coordinate descent algorithm. We perform asynchronous updates over randomly selected coordinate batches until convergence. We refer the reader to Wright [32] and the references therein for the convergence properties of (asynchronous) coordinate descent.

3.2.3 Extension to the Inductive Setting

This coordinate descent update is key to extending our approach to the inductive setting. To handle new unseen points, we assume we have a set of test points on which we have already post-processed the outputs of the ML model. To post-process new unseen points, we simply fix the outputs of the other test points and perform a single coordinate descent step with respect to the output of the new point. Similar strategies are often employed to extend transductive graph-based algorithms to the inductive setting [17].

3.2.4 Alternative Discrepancy Measures on the Output Space

So far, we have considered the squared Euclidean distance as a measure of discrepancy between outputs. This is a natural choice for post-processing models with continuous-valued outputs. For models that output a probability distribution over the possible classes, we consider alternative discrepancy measures on the output space. It is possible to replace the squared Euclidean distance with a Bregman divergence with very little change to the algorithm in the case of the unnormalized Laplacian. Below, we work through the details for the KL divergence as a demonstration of the idea. A result for the general Bregman divergence can be found in Appendix B.3 (see Theorem B.4).

Suppose the output of the pre-trained model h is ŷ_i ∈ Δ_K, where ŷ_i = { e^{o_i,j} / Σ_{k=1}^K e^{o_i,k} }_{j=1}^K is a K-dimensional probability vector corresponding to a K-class classification problem ({o_i,j} is the output of the penultimate layer of the pre-trained model, and ŷ_i is obtained by passing it through softmax), and Δ_K = { x ∈ R^K : x_i ≥ 0, Σ_{i=1}^K x_i = 1 } is the probability simplex in R^K. Let P_v denote the multinomial distribution with success probabilities v for any v ∈ Δ_K. Define η̂_i ∈ R^{K−1} (resp. η_i) as the natural parameter corresponding to ŷ_i (resp. f_i), i.e., η̂_i,j = log(ŷ_i,j / ŷ_i,K) = o_i,j − o_i,K for 1 ≤ j ≤ K − 1. The (unnormalized) Laplacian smoothing problem with the KL divergence is:

(ỹ_1, ..., ỹ_n) = arg min_{y_1,...,y_n ∈ Δ_K} Σ_i { KL(P_{y_i} ‖ P_{ŷ_i}) + (λ/2) Σ_{j=1, j≠i}^n W_ij KL(P_{y_i} ‖ P_{y_j}) }.   (3.8)

A coordinate descent approach for solving the above equation is:

ỹ_i = arg min_{y ∈ Δ_K} { KL(P_y ‖ P_{ŷ_i}) + (λ/2) Σ_{j=1, j≠i}^n W_ij KL(P_y ‖ P_{ỹ_j}) }.   (3.9)

The following theorem establishes that (3.5) solves the above problem in the logit space, or equivalently in the space of the corresponding natural parameters (see Appendix B for the proof):

Theorem 3.1. Consider the following optimization problem on the space of natural parameters:

η̃_i = arg min_η [ ‖η − η̂_i‖² + (λ/2) Σ_{j=1, j≠i}^n W_ij ‖η − η̃_j‖² ].   (3.10)

Then, the minimizer η̃_i of equation (3.10) is the natural parameter corresponding to the minimizer ỹ_i of (3.8).
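The coordinate update (3.7) and the single-step inductive extension of Section 3.2.3 can be sketched as follows. This is a hedged sketch under the unnormalized Laplacian; the update for a new point is derived from (3.7) using that the new node's Laplacian row has L_ii = Σ_j w_j and L_ij = −w_j, and all function and parameter names are illustrative.

```python
import numpy as np

def coordinate_descent_glif(y_hat, L, lam=1.0, n_epochs=50):
    """Coordinate descent for Eq. (3.7); avoids inverting the n x n matrix."""
    f = y_hat.astype(float).copy()
    n = len(f)
    S = L + L.T
    for _ in range(n_epochs):
        for i in np.random.permutation(n):
            off_diag = S[i] @ f - S[i, i] * f[i]   # sum_{j != i} (L_ij + L_ji) f_j
            f[i] = (y_hat[i] - 0.5 * lam * off_diag) / (1.0 + lam * L[i, i])
    return f

def post_process_new_point(y_hat_new, w_new, f_fixed, lam=1.0):
    """Inductive step (Sec. 3.2.3): one coordinate update for an unseen point.

    w_new   : (n,) similarities of the new point to already post-processed points.
    f_fixed : (n,) post-processed outputs of those points (held fixed).
    """
    L_ii = w_new.sum()                              # degree of the new node
    off_diag = -2.0 * w_new @ f_fixed               # sum_j (L_ij + L_ji) f_j for j != i
    return (y_hat_new - 0.5 * lam * off_diag) / (1.0 + lam * L_ii)
```

For the new point this reduces to a weighted average of its own prediction and its neighbors' post-processed outputs, which matches the intended "treat similar individuals similarly" behavior.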
4 Local IF and Graph Laplacian Regularization

In this section, we provide theoretical insights into why the graph Laplacian regularizer enforces individual fairness. As pointed out in Section 2, enforcing IF globally is expensive and often reduces a significant amount of accuracy of the final classifier. Here, we establish that solving (3.1) is tantamount to enforcing a localized version of individual fairness, namely Local Individual Fairness, which is defined below:

Definition 4.1 (Local Individual Fairness). An ML model h is said to be locally individually fair if it satisfies:

E_{x∼P} [ lim sup_{x′ : d_X(x, x′) ↓ 0} d_Y(h(x), h(x′)) / d_X(x, x′) ] ≤ L < ∞.   (4.1)

For practical purposes, this means that h is locally individually fair with constants ε and L if it satisfies

d_Y(h(x), h(x′)) ≤ L d_X(x, x′) for all x, x′ ∈ X with d_X(x, x′) ≤ ε,   (4.2)

in analogy to equation (2.1). Equation (4.2) is a relaxation of traditional IF, where we only care about the Lipschitz constraint for pairs of points with small fair distance, i.e., where it is less than some user-defined threshold ε.

Example 4.2. For our theoretical analysis, we need to specify a functional form of the fair metric. A popular choice is the Mahalanobis fair metric proposed by [19], which is defined as:

d_X²(x, x′) = (x − x′)ᵀ Σ (x − x′),   (4.3)

where Σ is a dispersion matrix that puts lower weight in the directions of sensitive attributes and higher weight in the directions of relevant attributes. [19] also proposed several algorithms to learn such a fair metric from the data. If we further assume d_Y(y_1, y_2) = |y_1 − y_2|, then a simple application of Lagrange's mean value theorem yields:

lim sup_{x′ : d_X(x, x′) ↓ 0} |h(x) − h(x′)| / d_X(x, x′) ≤ ‖Σ^{−1/2} ∇h(x)‖.   (4.4)

This immediately implies:

E_{x∼P} [ lim sup_{x′ : d_X(x, x′) ↓ 0} d_Y(h(x), h(x′)) / d_X(x, x′) ] ≤ E[ ‖Σ^{−1/2} ∇h(x)‖ ],   (4.5)

i.e., h satisfies the local individual fairness constraint as long as E[‖Σ^{−1/2} ∇h(x)‖] < ∞. On the other hand, the global IF constraint necessitates sup_{x∈X} ‖Σ^{−1/2} ∇h(x)‖ < ∞, i.e., h is Lipschitz continuous with respect to the Mahalanobis distance.

The main advantage of this local notion of IF over its global counterpart is that the local definition concentrates on the input pairs with smaller fair distance and ignores those with larger distance. For example, in Figure 1, the edge weights among Alice, Charlie, and Dave are much larger than among any other pairs (which have a weight of 0); therefore, our local notion enforces the fairness constraint on the corresponding similar pairs, while ignoring (or being less stringent on) others. This prevents over-smoothing and consequently preserves accuracy while enforcing fairness, as is evident from our real data experiments in Section 5.

We now present our main theorem, which establishes that, under certain assumptions on the underlying hypothesis class and the distribution of inputs, the graph Laplacian regularizers (both unnormalized and normalized random walk) enforce the local IF constraint (as defined in Definition 4.1) in the limit. For our theory, we work with d_X as the Mahalanobis distance introduced in Example 4.2 in equation (2.2), along with θ = 1/(2σ²) (σ is a bandwidth parameter which goes to 0 at an appropriate rate as n → ∞) and τ = ∞. All our results go through for any finite τ, but with more tedious technical analysis. Therefore, our weight matrix W becomes:

W_ij = ( |Σ|^{1/2} / ((2π)^{d/2} σ^d) ) exp( − (x_i − x_j)ᵀ Σ (x_i − x_j) / (2σ²) ).   (4.6)

The constant |Σ|^{1/2} / ((2π)^{d/2} σ^d) is for normalization purposes and can be absorbed into the penalty parameter λ.
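For illustration, the kernel weights (4.6) under the Mahalanobis fair metric (4.3) can be computed as in the sketch below; Sigma and sigma are user-supplied assumptions, and the normalizing constant is kept only for completeness since it can be absorbed into λ.

```python
import numpy as np

def mahalanobis_kernel_weights(X, Sigma, sigma):
    """Weight matrix of Eq. (4.6): Gaussian kernel under the Mahalanobis fair metric."""
    n, d = X.shape
    const = np.sqrt(np.linalg.det(Sigma)) / ((2 * np.pi) ** (d / 2) * sigma ** d)
    diffs = X[:, None, :] - X[None, :, :]                      # (n, n, d) pairwise differences
    sq_dist = np.einsum("ijk,kl,ijl->ij", diffs, Sigma, diffs)  # (x_i - x_j)^T Sigma (x_i - x_j)
    W = const * np.exp(-sq_dist / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # self-loops do not affect the unnormalized Laplacian
    return W
```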
We start by listing our assumptions:

Assumption 4.3 (Assumption on the domain). The domain of the inputs X is a compact subset of R^d, where d is the underlying dimension.

Assumption 4.4 (Assumption on the hypothesis). All functions f ∈ F of the hypothesis class satisfy the following: 1. The i-th derivative f^(i) is uniformly bounded over the domain X of inputs for i ∈ {0, 1, 2}. 2. f^(1)(x) = 0 for all x ∈ ∂X, where ∂X denotes the boundary of X.

Assumption 4.5 (Assumption on the density of inputs). The density p of the input random variable x on the domain X satisfies the following: 1. There exist p_max < ∞ and p_min > 0 such that, for all x ∈ X, we have p_min ≤ p(x) ≤ p_max. 2. The derivatives {p^(i)}_{i=0,1,2} of the density p are uniformly bounded on the domain X.

Discussion on the assumptions. Most of our assumptions (e.g., compactness of the domain, bounded derivatives of f or p) are for technical simplicity and are fairly common for the asymptotic analysis of graph regularization (see, e.g., Hein et al. [25], [33] and references therein). It is possible to relax some of the assumptions: for example, if the domain X of inputs is unbounded, then the target function f and the density p should decay at a certain rate so that observations far away will not be able to affect the convergence (e.g., sub-exponential tails). Part (2.) of Assumption 4.4 can be relaxed if we assume p(x) is 0 at the boundary. However, we do not pursue these extensions further in this manuscript, as they are purely technical and do not add anything of significance to the main intuition of the result.

Theorem 4.6. Under Assumptions 4.3–4.5, we have:

1. If the sequence of bandwidths σ ≡ σ_n ↓ 0 is such that n σ_n² → ∞ and L_un,n is the unnormalized Laplacian matrix, then

(2 / (n² σ²)) fᵀ L_un,n f  →_P  E_{x∼p}[ ∇f(x)ᵀ Σ^{−1} ∇f(x) p(x) ].   (4.7)

2. If the sequence of bandwidths σ ≡ σ_n ↓ 0 is such that (n σ^{d+4}) / log(1/σ) → ∞ and L_nrw,n is the normalized random walk Laplacian matrix, then:

(1 / (n σ²)) fᵀ L_nrw,n f  →_P  E_{x∼p}[ ∇f(x)ᵀ Σ^{−1} ∇f(x) ],   (4.8)

where f = {f(x_i)}_{i=1}^n. Consequently, both Laplacian regularizers asymptotically enforce local IF.

The proof of the above theorem can be found in Appendix B. When we use the normalized random walk graph Laplacian matrix L_nrw,n as a regularizer, the regularizer does (asymptotically) penalize E[∇f(x)ᵀ Σ^{−1} ∇f(x)] = E[‖Σ^{−1/2} ∇f(x)‖²], which, by Example 4.2, is equivalent to enforcing the local IF constraint. Similarly, the unnormalized Laplacian matrix L_un,n also enforces the same under Assumption 4.5, since:

E[ ‖Σ^{−1/2} ∇f(x)‖² ] ≤ (1 / p_min) E[ ∇f(x)ᵀ Σ^{−1} ∇f(x) p(x) ], where p_min = inf_{x∈X} p(x).   (4.9)

Although both Laplacian matrices enforce local IF, the primary difference between them is that the limit of the unnormalized Laplacian involves the density p(x), i.e., it upweights high-density regions (and hence imposes the fairness constraint more stringently there), whereas it down-weights underrepresented/low-density regions. On the other hand, the limit corresponding to the normalized random walk Laplacian matrix does not depend on p(x) and enforces the fairness constraint with equal intensity on the entire input space. We used both regularizers in our experiments, comparing and contrasting their performance on several practical ML problems.

5 Experiments

The goals of our experiments are threefold: 1. Exploring the trade-offs between post-processing for local IF with GLIF and post-processing with (global) IF constraints using our adaptation of the algorithm by Dwork et al. [6] described in (2.3). 2. Studying practical implications of theoretical differences between GLIF and GLIF-NRW, i.e., different graph Laplacians, presented in Section 4. 3.
Evaluating the effectiveness of GLIF in its main application, i.e., computationally light debiasing of large deep learning models such as BERT. The implementation of this work is available at github.com/Felix-Petersen/fairness-post-processing. 5.1 Comparing GLIF and Global IF-constraints For our first experiment, we consider the sentiment prediction task [34], where our goal is to classify words as having a positive or negative sentiment. The baseline model is a neural network trained with GloVe word embeddings [35]. Following Yurochkin et al. [11], we evaluate the model on a set of names and observe that it assigns varying sentiments to names. An individually fair model should assign similar sentiment scores to all names. Further, we observe that there is a gap between average sentiments of names typical for Caucasian and African-American ethnic groups [36], which is violating group fairness. Yurochkin et al. [11] propose a fair metric learning procedure for this task using a side data set of names, and an in-processing technique for achieving individual fairness. We use their method to obtain the fair metric and compare post-processing of the baseline model with GLIF, GLIF-NRW and the global IF-constraints method. The test set comprises 663 words from the original task and 94 names. For post-processing, no problem specific knowledge is used. The resulting post-processed predictions for the original test set are used to evaluate accuracy, and the predictions on the names are used for evaluating fairness metrics. Even for this small problem, the global IF-constraints method, i.e., a CVXPY [37] implementation of (2.3), takes 7 minutes to run. Due to the poor scalability of the global IF-constraints method, we can use it only for the study of this smaller data set and can not consider it for the large language model experiments in Section 5.2. For GLIF(-NRW), we implement the closed-form solution (3.3) that takes less than a tenth of a second to run. See Appendix A for additional experimental details and a runtime analysis. We evaluate the fairness-accuracy trade-offs for a range of threshold parameters τ (for GLIF and GLIF-NRW) and for a range of Lipschitz-constants L (for IF-constraints) in Figure 2. Figure 2 (left) shows the standard deviation of the post-processed outputs on all names as a function of test accuracy on the original sentiment task. Lower standard deviations imply that all names received similar predictions, which is the goal of individual fairness. Figure 2 (center) visualizes group fairness and accuracy, i.e., difference in average name sentiment scores for the two ethnic groups. In this problem, individual fairness is a stronger notion of fairness: achieving similar predictions for all names implies similar group averages, but not vice a versa. Therefore, for this task, post-processing for individual fairness also corrects group disparities. In both settings, GLIF and GLIF-NRW achieve substantially better fairness metrics for the same levels of test accuracy in comparison to the IF-constraints method. To understand the reason for this, we study which global IF constraints are violated after applying the GLIF method in Figure 2 (right). Corresponding to the unique pairs of words in our test set, there are n(n − 1)/2 unique constraints in (2.3), and the global IF-constraints method satisfies all of them by design. 
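The two fairness summaries just described for Figure 2 (left and center), the standard deviation of name predictions and the gap between group-average name sentiments, can be computed as in the following sketch. The array names name_scores and name_groups are placeholders for the post-processed name predictions and their group labels; this is an illustrative sketch, not the paper's evaluation code.

```python
import numpy as np

def name_fairness_metrics(name_scores, name_groups):
    """Individual- and group-fairness proxies used in Figure 2 (left, center).

    name_scores : (m,) predicted sentiment scores for the m names.
    name_groups : (m,) group label per name (e.g., 0/1 for the two name lists).
    """
    std_all = name_scores.std()                       # lower = more similar treatment of names
    groups = np.unique(name_groups)
    means = [name_scores[name_groups == g].mean() for g in groups]
    race_gap = abs(means[0] - means[1])                # difference of group-average sentiments
    return std_all, race_gap
```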
Each constraint (i.e., each pair of words) corresponds to a fair distance, which is small for (under the fair metric) similar words and large for dissimilar words. We bin the constraints by fair distance and present the proportion of global IF constraints violated after applying the GLIF method for each bin in the histogram in Figure 2 (right). Here, we set the Lipschitz-constant L in (2.3) to L = 2.25 corresponding to a 89.4% accuracy of the IF-constraints method and show global IF constraint violations of GLIF corresponding to 95% accuracy in blue. This means that we use strong global IF constraints and use a setting of the GLIF method which maintains most of the accuracy, which would not be possible using the IF-constraints method. GLIF does not violate any constraints corresponding to small fair distances, i.e., it satisfies IF on similar individuals, while violating many large fair distance constraints. This can be seen as basically all constraint violations (blue) are at large fair distances of greater or equal 6. This demonstrates the effect of enforcing local individual fairness from our theoretical analysis in Section 4. At the same time, we display frequency of constraints that correspond to pairs of names in orange, where we can see that almost all constraints corresponding to names occur at small fair distances of smaller or equal to 6. This is expected in this task because we consider all names similar, so fair distances between them should be small. We can see that the distributions of constraint violations after applying GLIF (blue, right) and names (orange, left) are almost disjoint. We mark all global IF constraint violations after applying GLIF that correspond to names in green, and observe that there are none. Summarizing, GLIF ignores unnecessary (in the context of this problem) constraints allowing it to achieve higher accuracy, while satisfying the more relevant local IF constraints leading to improved fairness. Regarding the practical differences between GLIF and GLIF-NRW, in Figure 2 (left) GLIF has smaller standard deviations on the name outputs, but in in Figure 2 (center) GLIF-NRW achieves lower race gap. In Theorem 4.6, we showed that GLIF penalizes fairness violations in high density data regions stronger. As a result, GLIF may favor enforcing similar outputs in the high density region causing lower standard deviation, while leaving outputs nearly unchanged in the lower density region, resulting in larger race gaps. GLIF-NRW weights all data density regions equally, i.e., it is less likely to miss a small subset of names, but is less stringent in the high density regions. 5.2 Post-processing for Debiasing Large Language Models Large language models have achieved impressive results on many tasks; however, there is also significant evidence demonstrating that they are prone to biases [4], [38], [39]. Debiasing these models remains largely an open problem: most in-processing algorithms are not applicable or computationally prohibitive due to large and highly complex model architectures, and challenges in handling text inputs. Even if an appropriate in-processing algorithm arises, significant environmental impact due to re-training is unavoidable [4], [16]. In our experiments, we evaluate effectiveness of GLIF as a simple post-processing technique to debias BERT-based models for text classification. Another possible solution is to fine-tune BERT with an in-processing technique as was done by Yurochkin et al. [12]. 
The two approaches are not directly comparable: fine-tuning with SenSeI [12] requires knowledge of the model parameters, alleviates only part of the computational burden, and has more stringent requirements on the fair metric, while post-processing with GLIF is transductive, i.e., it requires access to unlabeled test data (see extended discussion in Section 2.1).

Table 1: Results for the Bios task.
Method    | Test Acc.     | Pred. Consist.
Baseline  | 0.846 ± 0.003 | 0.942 ± 0.002
GLIF      | 0.830 ± 0.004 | 0.986 ± 0.002
GLIF-NRW  | 0.834 ± 0.003 | 0.988 ± 0.002
SenSeI    | 0.843 ± 0.003 | 0.977 ± 0.001

Table 2: Results for the Toxicity task.
Method    | Test Acc.     | Pred. Consist.
Baseline  | 0.809 ± 0.004 | 0.614 ± 0.013
GLIF      | 0.803 ± 0.003 | 0.835 ± 0.012
GLIF-NRW  | 0.803 ± 0.003 | 0.844 ± 0.013
SenSeI    | 0.791 ± 0.005 | 0.773 ± 0.043

We replicate the experiments of Yurochkin et al. [12] on the Bios [40] and Toxicity1 data sets. They use the approach of Mukherjee et al. [19] for fair metric learning, which we reproduce. We refer to Appendix B.1 of [12] for details. In both tasks, following [12], we quantify performance with balanced accuracy due to class imbalance, and measure individual fairness via prediction consistency, i.e., the fraction of test points where the prediction remains unchanged when performing task-specific input modifications. For implementation details, see Appendix A. In Appendix A.4, we analyze the runtime and distinguish between the closed-form and coordinate descent variants of GLIF.

In Bios, the goal is to predict the occupation of a person based on their textual biography. Such models can be useful for recruiting purposes. However, due to historical gender bias in some occupations, the baseline BERT model learns to associate gender pronouns and names with the corresponding occupations. Individual fairness is measured with prediction consistency with respect to gender pronoun and name alterations. A prediction is considered consistent if it is the same after swapping the gender pronouns and names. We present the fairness-accuracy trade-off in Figure 3 (left) for a range of threshold parameters τ, and compare performance based on hyperparameter values selected with validation data in Table 1. Both GLIF and GLIF-NRW noticeably improve individual fairness measured with prediction consistency, while retaining most of the accuracy.

In Toxicity, the task is to identify toxic comments, an important tool for facilitating inclusive discussions online. The baseline BERT model learns to associate certain identity words with toxicity (e.g., “gay”) because they are often abused in online conversations. The prediction consistency is measured with respect to changes to identity words in the inputs. There are 50 identity words, e.g., “gay”, “muslim”, “asian”, etc., and a prediction is considered consistent if it is the same for all 50 identities. We present the trade-off plots in Figure 3 (right) and compare performance in Table 2. Our methods reduce individual biases in BERT predictions. We note that in both Toxicity and Bios experiments, we observe no practical differences between GLIF and GLIF-NRW.
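The prediction consistency metric reported in Tables 1 and 2 can be sketched as follows. Both model_predict and perturb_fn are hypothetical placeholders (perturb_fn would swap gender pronouns and names for Bios, or substitute each of the 50 identity words for Toxicity); this is a hedged sketch of the metric, not the evaluation code used in the paper.

```python
def prediction_consistency(model_predict, texts, perturb_fn):
    """Fraction of test points whose prediction is unchanged under all
    task-specific input modifications.

    model_predict : callable mapping a list of strings to predicted labels.
    perturb_fn    : callable mapping a text to a list of its modified versions.
    """
    consistent = 0
    for text in texts:
        variants = perturb_fn(text)
        preds = model_predict([text] + variants)
        consistent += int(len(set(preds)) == 1)   # same label for original and all variants
    return consistent / len(texts)
```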
We also note that it is possible to use our objective for in-processing. We conclude with two warnings: First, enforcing any algorithmic fairness definition does not guarantee complete fairness from the perspective of the user. The problem-specific meaning of fairness is often hard to encode exactly with a mathematical fairness definition. Second, while local individual fairness is a reasonable choice in many applications, this choice should be understood and verified by the practitioner depending on the situation. 1Based on the Kaggle “Toxic Comment Classification Challenge”. Acknowledgments and Disclosure of Funding This note is based upon work supported by the National Science Foundation (NSF) under grants no. 1916271, 2027737, and 2113373 and supported by the German Research Foundation (DFG) under Germany’s Excellence Strategy EXC–2117–390829875. Any opinions, findings, and conclusions or recommendations expressed in this note are those of the authors and do not necessarily reflect the views of the NSF nor the DFG.
1. What is the focus of the paper regarding individual fairness, and how does the proposed post-processing algorithm aim to address this issue?
2. What are the strengths of the paper, particularly in its motivation and writing quality?
3. What are the weaknesses of the paper, especially regarding the effectiveness of the proposed method?
4. Do you have any concerns or questions about the method's ability to address bias in predictions, such as in the example given in Figure 1?
5. Are there any assumptions made in the paper regarding the bias of fine-tuned models like BERT, and how do these assumptions impact the experimental results?

Summary Of The Paper
The authors propose a post-processing algorithm for individual fairness. The algorithm has access to the predictions of the original model and a similarity graph between individuals, and solves the IF post-processing problem as a graph smoothing problem. The motivation to reduce bias of fine-tuned models is interesting, and the paper is well written. However, I have some concerns about the effectiveness of the proposed method. In Figure 1, the node signals of Alice and Bob are different: a cross for Alice and a tick for Bob. After connecting the nodes Alice and Bob, which share the same qualifications, their signals should be the same; I wonder what the signal should be. The experimental results in "Table 1: Results for the Bios task" and "Table 2: Results for the Toxicity task" show that the difference between the proposed method and the baseline is small in terms of test accuracy and prediction consistency. But if the algorithm's predictions are biased, as illustrated in Figure 1, the difference between the test accuracy of the proposed method and the baseline should be significant. If the algorithm is biased but the proposed method does not connect similar nodes in the graph, it tends to yield predictions similar to the baseline. In line 72, the experiments are conducted to reduce bias of fine-tuned BERT. Is there any assumption that the fine-tuned BERT is biased?

Review
See above.
Title Post-processing for Individual Fairness Abstract Post-processing in algorithmic fairness is a versatile approach for correcting bias in ML systems that are already used in production. The main appeal of postprocessing is that it avoids expensive retraining. In this work, we propose general post-processing algorithms for individual fairness (IF). We consider a setting where the learner only has access to the predictions of the original model and a similarity graph between individuals guiding the desired fairness constraints. We cast the IF post-processing problem as a graph smoothing problem corresponding to graph Laplacian regularization that preserves the desired “treat similar individuals similarly” interpretation. Our theoretical results demonstrate the connection of the new objective function to a local relaxation of the original individual fairness. Empirically, our post-processing algorithms correct individual biases in large-scale NLP models such as BERT, while preserving accuracy. 1 Introduction There are many instances of algorithmic bias in machine learning (ML) models [1]–[4], which has led to the development of methods for quantifying and correcting algorithmic bias. To quantify algorithmic bias, researchers have proposed numerous mathematical definitions of algorithmic fairness. Broadly speaking, these definitions fall into two categories: group fairness [5] and individual fairness [6]. The former formalizes the idea that ML system should treat certain groups of individuals similarly, e.g., requiring the average loan approval rate for applicants of different ethnicities be similar [7]. The latter asks for similar treatment of similar individuals, e.g., same outcome for applicants with resumes that differ only in names [8]. Researchers have also developed many ways of correcting algorithmic bias. These fairness interventions broadly fall into three categories: pre-processing the data, enforcing fairness during model training (also known as in-processing), and post-processing the outputs of a model. While both group and individual fairness (IF) definitions have their benefits and drawbacks [5], [6], [9], the existing suite of algorithmic fairness solutions mostly enforces group fairness. The few prior works on individual fairness are all in-processing methods [10]–[13]. Although in-processing is arguably the most-effective type of intervention, it has many practical limitations. For example, it requires training models from scratch. Nowadays, it is more common to fine-tune publicly available models (e.g., language models such as BERT [14] and GPT-3 [15]) than to train models afresh, as many practitioners do not have the necessary computational resources. Even with enough computational resources, training large deep learning models has a significant environmental impact [4], [16]. Equal Contribution. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Post-processing offers an easier path towards incorporating algorithmic fairness into deployed ML models, and has potential to reduce environmental harm from re-training with in-processing fairness techniques. IF, the graph needs to be smooth, i.e., the similar / connected candidates should have similar node signals, which can be accomplished by also offering the job to Alice. In contrast, directly enforcing IF constraints [6] requires a certain degree of output similarity on all pairs of candidates, and not just on those which are connected and thus similar. Our main contributions are summarized below. 1. 
We cast post-processing for individual fairness as a graph smoothing problem and propose a coordinate descent algorithm to scale the approach to large data sets. 2. We demonstrate theoretically and verify empirically that graph smoothing enforces individual fairness constraints locally, i.e., it guarantees similar treatment of similar individuals. 3. We empirically compare the Laplacian smoothing method to the post-processing adaptation of the algorithm by Dwork et al. [6] enforcing global Lipschitz continuity. The Laplacian smoothing method is not only computationally more efficient but is also more effective in reducing algorithmic bias while preserving accuracy of the original model. 4. We demonstrate the efficacy of Laplacian smoothing on two large-scale text data sets by reducing biases in fine-tuned BERT models. 2 Post-processing Problem Formulation Let X be the feature space, Y be the set of possible labels/targets, and h : X → Y be a (possibly unfair) ML model trained for the task. Our goal is to post-process the outputs of h so that they are individually fair. Formally, the post-processor is provided with a set of inputs {xi} n i=1 and the outputs of h on the inputs {ybi , h(xi)}in =1, and its goal is to produce {fb i}in =1 that is both individually fair and similar to the ybi’s. Recall that individual fairness of h is the Lipschitz continuity of h with respect to a fair metric dX on the input space: ′ dY (h(x), h(x ′ )) ≤ LdX (x, x ′ ) for all x, x ∈ X , (2.1) where L > 0 is a Lipschitz constant. The fair metric encodes problem-specific intuition of which samples should be treated similarly by the ML model. It is analogous to the knowledge of protected attributes in group fairness needed to define corresponding fairness constraints. Recent literature proposes several practical methods for learning fair metric from data [18], [19]. We assume the postprocessor is either given access to the fair metric (it can evaluate the fair distance on any pair of points in X ), or receives feedback on which inputs should be treated similarly. We encode this information in an adjacency matrix W ∈ Rn×n of a graph with individuals as nodes. If the post-processor is given the fair metric, then the entries of W are ˆ exp(−θdX (xi, xj) 2) dX (xi, xj) ≤ τ Wij = (2.2) 0 otherwise, where θ > 0 is a scale parameter and τ > 0 is a threshold parameter. If the post-processor is given an annotator’s feedback, then W is a binary matrix with Wij = 1 if i and j are considered to be treated similarly by the annotator and 0 otherwise. Extensions to multiple annotators are straightforward. We start with a simple post-processing adaptation of the algorithm by Dwork et al. [6] for enforcing individual fairness, that projects the (possibly unfair) outputs of h onto a constraint set to enforce (2.1). In other words, the post-processor seeks the closest set of outputs to the ybi’s that satisfies individual fairness: n 1 ( arg minf1,...,fn P dY (fi, ybi)2 ) i=1 2{fb i}ni=1 ∈ . (2.3) subject to dY (fi, fj ) ≤ LdX (xi, xj ) This objective function, though convex, scales poorly due to the order of n2 constraints. Empirically, we observe that (2.3) leads to post-processed outputs that are dissimilar to the ybi’s, leading to poor performance in practice. The goal of our method is to improve performance and scalability, while preserving the IF desiderata of treating similar individual similarly. 
Before presenting our method, we discuss other post-processing perspectives that differ in their applicability and input requirements. 2.1 Alternative Post-processing Formulations We review three post-processing problem setups and the corresponding methods in the literature. First, one can fine-tune a model via an in-processing algorithm to reduce algorithmic biases. Yurochkin et al. [12] proposed an in-processing algorithm for IF and used it to train fair models for text classification using sentence BERT embeddings. This setting is the most demanding in terms of input and computational requirements: a user needs access to the original model parameters, fair metric function, and train a predictor, e.g., a moderately deep fully connected neural network, with a non-trivial fairness-promoting objective function. Second, it is possible to post-process by training additional models to correct the initial model’s behavior. For example, Kim et al. [20] propose a boosting-based method for group fairness postprocessing. This perspective can be adapted to individual fairness; however, it implicitly assumes that we can train weak-learners to boost. Lohia et al. [21], [22] propose to train a bias detector to post-process for group fairness and a special, group based, notion of individual fairness. Such methods are challenging to apply to text data or other non-tabular data types. The third perspective is the most generic: a user has access to original model outputs only, and a minimal additional feedback guiding fairness constraints. Wei et al. [23] consider such setting and propose a method to satisfy group fairness constraints; however, it is not applicable to individual fairness. Our problem formulation belongs to this post-processing setup. The main benefit of this approach is its broad applicability and ease of deployment. 3 Graph Laplacian Individual Fairness To formulate our method, we cast IF post-processing as a graph smoothing problem. Using the fair metric or human annotations as discussed in Section 2, we obtain an n × n matrix W that we treat as an adjacency matrix. As elaborated earlier, the goal of post-processing is to obtain a model f that is individually fair and accurate. The accuracy is achieved by minimizing the distance between the outputs of f and h, a pre-trained model assumed to be accurate but possibly biased. Recall that we do not have access to the parameters of h, but can evaluate its predictions. Our method enforces fairness using a graph Laplacian quadratic form [24] regularizer: bf = arg min g (f ) = arg min kf − ŷk22 + λ f⊤Lnf , (3.1) f f where ŷ is the output of the model h, and bf is the vector of the post-processed outputs, i.e., fb i = f(xi) for i = 1, . . . , n. The matrix Ln ∈ Rn×n is called graph Laplacian matrix and is a function of W . There are multiple versions of Ln popularized in the graph literature (see, e.g., [25] or [26]). To elucidate the connection to individual fairness, consider the unnormalized Laplacian Lun,n = D −W ,Pn where Dii = j=1 Wij , Dij = 0 for i 6= j is the degree matrix corresponding to W . Then a known identity is: P 21f⊤Lun,nf = Wij (fi − fj ) . (3.2)2 i=6 j Hence, the Laplacian regularizer is small if the post-processed model outputs fb i and fb j (i.e., treatment) are similar for large Wij (i.e., for similar individuals i and j). This promotes the philosophy of individual fairness: “treat similar individuals similarly”. 
This observation intuitively explains the motivation for minimizing the graph Laplacian quadratic form to achieve IF. In Section 4, we present a more formal discussion on the connections between the graph Laplacian regularization and IF. Our post-processing problem (3.1) is easy to solve: setting the gradient of g to 0 implies that the optimal solution bf is: −1 b Ln + Ln ⊤ f = I + λ yb . (3.3) 2 The Laplacian Ln is a positive semi-definite matrix ensuring that (3.1) is strongly convex and that (3.3) is a global minimum. In comparison to the computationally expensive constraint optimization problem (2.3), this approach has a simple closed-form expression. Note that the symmetry of the unnormalized Laplacian Lun,n simplifies (3.3); however, there are also non-symmetric Laplacian variations. In this work, we also consider the normalized random walk Laplacian Lnrw,n = (I − De−1Wf), where Wf = D−1/2WD−1/2 is the normalized adjacency matrix and De is its degree matrix. We discuss its properties in the context of IF in Section 4. Henceforth, we refer to our method as Graph Laplacian Individual Fairness (GLIF) when using the unnormalized Laplacian, and GLIF-NRW when using Normalized Random Walk Laplacian. 3.1 Prior Work on Graph Laplacians Graph-based learning via a similarity matrix is prevalent in statistics and ML literature, specifically, in semi-supervised learning. The core idea is to gather information from similar unlabeled inputs to improve prediction accuracy (e.g., see [27], [28], [29] and references therein). Laplacian regularization is widely used in science engineering. We refer to Chapelle [17] for a survey. We note that [30], [31] also use graph Laplacian regularizers to enforce individual fairness. Our work builds on their work by elucidating the key role played by the graph Laplacian in enforcing individual fairness. In particular, we clarify the connection between the choice of the graph Laplacian and the exact notion of individual fairness the corresponding graph Laplacian regularizer enforces. 3.2 Extensions of the Basic Method In this subsection, we present four extensions of our method: multi-dimensional outputs, coordinate descent for large-scale data, an inductive setting, and alternative output space discrepancy measures. 3.2.1 Multi-dimensional Output We presented our objective function (3.1) and post-processing procedure (3.3) for the case of univariate outputs. This covers regression and binary classification. Our method readily extends to multi-dimensional output space, for example, in classification, fi, ybi ∈ RK can represent logits, i.e., softmax inputs, of the K classes. In this case, f and ŷ are n × K matrices, and the term f ⊤Lnf is a K × K matrix. We use the trace of it as a regularizer. The optimization problem (3.1) then becomes: � b yk2 f⊤f = arg min g (f) = arg min kf − ˆ F + λ tr Lnf , (3.4) f f where k · kF is the Frobenius norm. Similar to the univariate output case, this yields: −1 b Ln + Ln ⊤ f = I + λ yb . (3.5) 2 The solution is the same as (3.3); however, now it accounts for multi-dimensional outputs. 3.2.2 Coordinate Descent for Large-Scale Data Although our method has a closed form solution, it is not immediately scalable, as we have to invert a n × n matrix to obtain the optimal solution. We propose a coordinate descent variant of our method that readily scales to any data size. The idea stems primarily from the gradient of equation (3.4), where we solve: Ln + L ⊤ nf − yb + λ f = 0 . 
(3.6) 2 Fixing {fj }j 6=i, we can solve (3.6) for fi: P ŷi − 2 j=6 i(Ln,ij + Ln,ji)fjfi ← . (3.7) 1 + λLn,ii This gives rise to the coordinate descent algorithm. We perform asynchronous updates over randomly selected coordinate batches until convergence. We refer the reader to Wright [32] and the references therein for the convergence properties of (asynchronous) coordinate descent. 3.2.3 Extension to the Inductive Setting This coordinate descent update is key to extending our approach to the inductive setting. To handle new unseen points, we assume we have a set of test points on which we have already post-processed the outputs of the ML model. To post-process new unseen points, we simply fix the outputs of the other test points and perform a single coordinate descent step with respect to the output of the new point. Similar strategies are often employed to extend transductive graph-based algorithms to the inductive setting [17]. 3.2.4 Alternative Discrepancy Measures on the Output Space So far, we have considered the squared Euclidean distance as a measure of discrepancy between outputs. This is a natural choice for post-processing models with continuous-valued outputs. For models that output a probability distribution over the possible classes, we consider alternative discrepancy measures on the output space. It is possible to replace the squared Euclidean distance with a Bregman divergence with very little change to the algorithm in the case of the unnormalized Laplacian. Below, we work through the details for the KL divergence as a demonstration of the idea. A result for the general Bregman divergence can be found in Appendix B.3 (see Theorem B.4). PKoi,j / oi,k }KSuppose the output of the pre-trained model h is ŷi ∈ K , where ˆ = {e -yi k=1 e j=1 a K dimensional probability vector corresponding to a K class classification problem ({oi,j} is the output of the penultimate layer of the pre-trained model and ŷi is obtained by passing it through softmax) PK and K = {x ∈ RK : xi ≥ 0, = 1} is the probability simplex in RK . Let Pv denote the i=1 xi multinomial distribution with success probabilities v for any v ∈ k . Define η̂i ∈ RK−1 (resp. ηi) as the natural parameter corresponding to ŷi (resp. fi), i.e., η̂i,j = log (ŷi,j/ŷi,K ) = oi,j − oi,K for 1 ≤ j ≤ K − 1. The (unnormalized) Laplacian smoothing problem with the KL divergence is n λ n o X X (ỹ1, . . . , ỹn) = arg min ∈ K KL (Pyi ||Pŷi )+ Wij KL Pyi ||Pyj . (3.8)y1,...,yn 2 i j=1,j=6 i A coordinate descent approach for solving the above equation is: n P � o n ỹi = arg miny∈ k KL (Py||Pŷi ) + Wij KL Py||Pỹj . (3.9)2 j=1,j=6 i The following theorem establishes that (3.5) solves the above problem in the logit space, or equivalently in the space of the corresponding natural parameters (see Appendix B for the proof): Theorem 3.1. Consider the following optimization problem on the space of natural parameters: h iPn η̃i = arg min kη − η̂ik2 + ηjk2 . (3.10)2 j=1,j 6=i Wij kη − ˜ Then, the minimizer η̃i of equation (3.10) is the natural parameter corresponding to the minimizer ỹi of (3.8). 4 Local IF and Graph Laplacian Regularization In this section, we provide theoretical insights into why the graph Laplacian regularizer enforces individual fairness. As pointed out in Section 2, enforcing IF globally is expensive and often reduces a significant amount of accuracy of the final classifier. 
Here, we establish that solving (3.1) is tantamount to enforcing a localized version of individual fairness, namely Local Individual Fairness, which is defined below: Definition 4.1 (Local Individual Fairness). An ML model h is said to be locally individually fair if it satisfies: " # dY (h(x), h(x ′ )) Ex∼P lim sup ≤ L < ∞ . (4.1) x ′ :dX (x,x ′)↓0 dX (x, x ′) For practical purposes, this means that h is locally individually fair with constants ǫ and L if it satisfies ′ dY (h(x), h(x ′ )) ≤ LdX (x, x ′ ) for all x, x ∈ X where dX (x, x ′ ) ≤ ǫ (4.2) in analogy to equation (2.1). Equation (4.2) is a relaxation of traditional IF, where we only care about the Lipschitz-constraint for all pairs of points with small fair distances, i.e., where it is less than some user-defined threshold ǫ. Example 4.2. For our theoretical analysis, we need to specify a functional form of the fair metric. A popular choice is a Mahalanobis fair metric proposed by [19], which is defined as: d2 (x, x ′ ) = (x − x ′ )⊤ (x − x ′ ), (4.3)X where is a dispersion matrix that puts lower weight in the directions of sensitive attributes and higher weight in the directions of relevant attributes. [19] also proposed several algorithms to learn such a fair metric from the data. If we further assume dY (y1, y2) = |y1 − y2|, then a simple application of Lagrange’s mean value theorem yields: |h(x) − h(x ′ )| lim sup ≤ k −1/2∇h(x)k . (4.4) x ′ :dX (x,x ′ )↓0 dX (x, x ′) This immediately implies: " # dY (h(x), h(x ′ )) Ex∼P lim sup ≤ E[k −1/2∇h(x)k] , (4.5) x ′ :dX (x,x ′ )↓0 dX (x, x ′) i.e., h satisfies local individual fairness constraint as long as E[k −1/2∇h(x)k] < ∞. On the other hand, the global IF constraint necessitates sup k −1/2∇h(x)k < ∞, i.e., h is Lipschitz x∈X continuous with respect to the Mahalanobis distance. The main advantage of this local notion of IF over its global counterpart is that the local definition concentrates on the input pairs with smaller fair distance and ignores those with larger distance. For example, in Figure 1, the edge-weights among Alice, Charlie, and Dave are much larger than among any other pairs (which have a weight of 0); therefore, our local notion enforces fairness constraint on the corresponding similar pairs, while ignoring (or being less stringent on) others. This prevents over-smoothing and consequently preserves accuracy while enforcing fairness as is evident from our real data experiments in Section 5. We now present our main theorem, which establishes that, under certain assumptions on the underlying hypothesis class and the distribution of inputs, the graph Laplacian regularizers (both unnormalized and normalized random walk) enforce the local IF constraint (as defined in Definition 4.1) in the limit. For our theory, we work with dX as the Mahalanobis distance introduced in Example 4.2 in equation (2.2) along with θ = 1/(2σ2) (σ is a bandwidth parameter which goes to 0 at an appropriate rate as n →∞) and τ = ∞. All our results will be thorough for any finite τ but with more tedious technical analysis. Therefore, our weight matrix W becomes: | |1/2 1 Wij = exp − (xi − xj ) ⊤ (xi − xj ) . (4.6) (2π)d/2σd 2σ2 The constant | |1/2/((2π)d/2σd) is for the normalization purpose and can be absorbed into the penalty parameter λ. We start by listing our assumptions: Assumption 4.3 (Assumption on the domain). The domain of the inputs X is a compact subset of R d where d is the underlying dimension. Assumption 4.4 (Assumption on the hypothesis). 
All functions f ∈ F of the hypothesis class satisfy the following: 1. The ith derivative f (i) is uniformly bounded over the domain X of inputs for i ∈ {0, 1, 2}. 2. f (1)(x) = 0 for all x ∈ ∂X , where ∂X denotes the boundary of X . Assumption 4.5 (Assumption on the density of inputs). The density p of the input random variable x on the domain X satisfies the following: 1. There exists pmax < ∞ and pmin > 0 such that, for all x ∈ X , we have pmin ≤ p(x) ≤ pmax. 2. The derivatives {p(i)}i=0,1,2 of the density p are uniformly bounded on the domain X . Discussion on the assumptions Most of our assumptions (e.g., compactness of the domain, bounded derivatives of f or p) are for technical simplicity and are fairly common for the asymptotic analysis of graph regularization (see, e.g., Hein et al. [25], [33] and references therein). It is possible to relax some of the assumptions: for example, if the domain X of inputs is unbounded, then the target function f and the density p should decay at certain rate so that observations far away will not be able to affect the convergence (e.g., sub-exponential tails). Part (2.) of Assumption 4.4 can be relaxed if we assume p(x) is 0 at boundary. However, we do not pursue these extensions further in this manuscript, as they are purely technical and do not add anything of significance to the main intuition of the result. Theorem 4.6. Under Assumptions 4.3 - 4.5, we have: 1. If the sequence of bandwidths σ ≡ σn ↓ 0 such that nσn 2 → ∞ and Lun,n is unnormalized Laplacian matrix, then 2 P n2σ2 f⊤Lun,nf −→ Ex∼p ∇f(x) ⊤ −1∇f(x) p(x) . (4.7) 2. If the sequence of bandwidths σ ≡ σn ↓ 0 such that (nσd+4)/(log (1/σ)) → ∞ and Lnrw,n is the normalized random walk Laplacian matrix, then: 1 P f⊤Lnrw,nf −→ Ex∼p ∇f(x) ⊤ −1∇f(x) . (4.8) nσ2 where f = {f(xi)}ni=1. Consequently, both Laplacian regularizers asymptotically enforce local IF. The proof of the above theorem can be found in Appendix B. When we use a normalized random walk graph Laplacian matrix Lnrw,n as regularizer, the regularizer does (asymptotically) penalize E ∇f(x)⊤ −1∇f(x) = E k −1/2∇f(x)k2 , which, by Example 4.2, is equivalent to enforcing the local IF constraint. Similarly, the un-normalized Laplacian matrix Lun,n, also enforces the same under Assumption 4.5 as: h i 1 E k −1/2∇f(x)k2 ≤ E ∇f(x)⊤ −1∇f(x) p(x) , where pmin = inf p(x). (4.9) pmin x∈X Although both the Laplacian matrices enforce local IF, the primary difference between them is that the limit of the unnormalized Laplacian involves the density p(x), i.e., it upweights the high-density region (consequently stringent imposition of fairness constraint), whereas it down-weights the underrepresented/low-density region. On the other hand, the limit corresponding to the normalized random walk Laplacian matrix does not depend on p(x) and enforces fairness constraint with equal intensity on the entire input space. We used both regularizers in our experiments, comparing and contrasting their performance on several practical ML problems. 5 Experiments The goals of our experiments are threefold: 1. Exploring the trade-offs between post-processing for local IF with GLIF and post-processing with (global) IF constraints using our adaptation of the algorithm by Dwork et al. [6] described in (2.3). 2. Studying practical implications of theoretical differences between GLIF and GLIF-NRW, i.e., different graph Laplacians, presented in Section 4. 3. 
Evaluating the effectiveness of GLIF in its main application, i.e., computationally light debiasing of large deep learning models such as BERT. The implementation of this work is available at github.com/Felix-Petersen/fairness-post-processing. 5.1 Comparing GLIF and Global IF-constraints For our first experiment, we consider the sentiment prediction task [34], where our goal is to classify words as having a positive or negative sentiment. The baseline model is a neural network trained with GloVe word embeddings [35]. Following Yurochkin et al. [11], we evaluate the model on a set of names and observe that it assigns varying sentiments to names. An individually fair model should assign similar sentiment scores to all names. Further, we observe that there is a gap between average sentiments of names typical for Caucasian and African-American ethnic groups [36], which is violating group fairness. Yurochkin et al. [11] propose a fair metric learning procedure for this task using a side data set of names, and an in-processing technique for achieving individual fairness. We use their method to obtain the fair metric and compare post-processing of the baseline model with GLIF, GLIF-NRW and the global IF-constraints method. The test set comprises 663 words from the original task and 94 names. For post-processing, no problem specific knowledge is used. The resulting post-processed predictions for the original test set are used to evaluate accuracy, and the predictions on the names are used for evaluating fairness metrics. Even for this small problem, the global IF-constraints method, i.e., a CVXPY [37] implementation of (2.3), takes 7 minutes to run. Due to the poor scalability of the global IF-constraints method, we can use it only for the study of this smaller data set and can not consider it for the large language model experiments in Section 5.2. For GLIF(-NRW), we implement the closed-form solution (3.3) that takes less than a tenth of a second to run. See Appendix A for additional experimental details and a runtime analysis. We evaluate the fairness-accuracy trade-offs for a range of threshold parameters τ (for GLIF and GLIF-NRW) and for a range of Lipschitz-constants L (for IF-constraints) in Figure 2. Figure 2 (left) shows the standard deviation of the post-processed outputs on all names as a function of test accuracy on the original sentiment task. Lower standard deviations imply that all names received similar predictions, which is the goal of individual fairness. Figure 2 (center) visualizes group fairness and accuracy, i.e., difference in average name sentiment scores for the two ethnic groups. In this problem, individual fairness is a stronger notion of fairness: achieving similar predictions for all names implies similar group averages, but not vice a versa. Therefore, for this task, post-processing for individual fairness also corrects group disparities. In both settings, GLIF and GLIF-NRW achieve substantially better fairness metrics for the same levels of test accuracy in comparison to the IF-constraints method. To understand the reason for this, we study which global IF constraints are violated after applying the GLIF method in Figure 2 (right). Corresponding to the unique pairs of words in our test set, there are n(n − 1)/2 unique constraints in (2.3), and the global IF-constraints method satisfies all of them by design. 
Each constraint (i.e., each pair of words) corresponds to a fair distance, which is small for (under the fair metric) similar words and large for dissimilar words. We bin the constraints by fair distance and present the proportion of global IF constraints violated after applying the GLIF method for each bin in the histogram in Figure 2 (right). Here, we set the Lipschitz-constant L in (2.3) to L = 2.25 corresponding to a 89.4% accuracy of the IF-constraints method and show global IF constraint violations of GLIF corresponding to 95% accuracy in blue. This means that we use strong global IF constraints and use a setting of the GLIF method which maintains most of the accuracy, which would not be possible using the IF-constraints method. GLIF does not violate any constraints corresponding to small fair distances, i.e., it satisfies IF on similar individuals, while violating many large fair distance constraints. This can be seen as basically all constraint violations (blue) are at large fair distances of greater or equal 6. This demonstrates the effect of enforcing local individual fairness from our theoretical analysis in Section 4. At the same time, we display frequency of constraints that correspond to pairs of names in orange, where we can see that almost all constraints corresponding to names occur at small fair distances of smaller or equal to 6. This is expected in this task because we consider all names similar, so fair distances between them should be small. We can see that the distributions of constraint violations after applying GLIF (blue, right) and names (orange, left) are almost disjoint. We mark all global IF constraint violations after applying GLIF that correspond to names in green, and observe that there are none. Summarizing, GLIF ignores unnecessary (in the context of this problem) constraints allowing it to achieve higher accuracy, while satisfying the more relevant local IF constraints leading to improved fairness. Regarding the practical differences between GLIF and GLIF-NRW, in Figure 2 (left) GLIF has smaller standard deviations on the name outputs, but in in Figure 2 (center) GLIF-NRW achieves lower race gap. In Theorem 4.6, we showed that GLIF penalizes fairness violations in high density data regions stronger. As a result, GLIF may favor enforcing similar outputs in the high density region causing lower standard deviation, while leaving outputs nearly unchanged in the lower density region, resulting in larger race gaps. GLIF-NRW weights all data density regions equally, i.e., it is less likely to miss a small subset of names, but is less stringent in the high density regions. 5.2 Post-processing for Debiasing Large Language Models Large language models have achieved impressive results on many tasks; however, there is also significant evidence demonstrating that they are prone to biases [4], [38], [39]. Debiasing these models remains largely an open problem: most in-processing algorithms are not applicable or computationally prohibitive due to large and highly complex model architectures, and challenges in handling text inputs. Even if an appropriate in-processing algorithm arises, significant environmental impact due to re-training is unavoidable [4], [16]. In our experiments, we evaluate effectiveness of GLIF as a simple post-processing technique to debias BERT-based models for text classification. Another possible solution is to fine-tune BERT with an in-processing technique as was done by Yurochkin et al. [12]. 
The two approaches are not directly comparable: fine-tuning with SenSeI [12] requires knowledge of the model parameters, alleviates only part of the computational burden, and has more stringent requirements on the fair metric, while post-processing with GLIF is transductive, i.e., it requires access to unlabeled test data (see extended discussion in Section 2.1).

Table 1: Results for the Bios task.
Method      Test Acc.        Pred. Consist.
Baseline    0.846 ± 0.003    0.942 ± 0.002
GLIF        0.830 ± 0.004    0.986 ± 0.002
GLIF-NRW    0.834 ± 0.003    0.988 ± 0.002
SenSeI      0.843 ± 0.003    0.977 ± 0.001

Table 2: Results for the Toxicity task.
Method      Test Acc.        Pred. Consist.
Baseline    0.809 ± 0.004    0.614 ± 0.013
GLIF        0.803 ± 0.003    0.835 ± 0.012
GLIF-NRW    0.803 ± 0.003    0.844 ± 0.013
SenSeI      0.791 ± 0.005    0.773 ± 0.043

We replicate the experiments of Yurochkin et al. [12] on the Bios [40] and Toxicity1 data sets. They use the approach of Mukherjee et al. [19] for fair metric learning, which we reproduce. We refer to Appendix B.1 of [12] for details. In both tasks, following [12], we quantify performance with balanced accuracy due to class imbalance, and measure individual fairness via prediction consistency, i.e., the fraction of test points where the prediction remains unchanged when performing task-specific input modifications. For implementation details, see Appendix A. In Appendix A.4, we analyze the runtime and distinguish between the closed-form and coordinate descent variants of GLIF.

In Bios, the goal is to predict the occupation of a person based on their textual biography. Such models can be useful for recruiting purposes. However, due to historical gender bias in some occupations, the baseline BERT model learns to associate gender pronouns and names with the corresponding occupations. Individual fairness is measured with prediction consistency with respect to gender pronoun and name alterations. A prediction is considered consistent if it is the same after swapping the gender pronouns and names. We present the fairness-accuracy trade-off in Figure 3 (left) for a range of threshold parameters τ, and compare performance based on hyperparameter values selected with validation data in Table 1. Both GLIF and GLIF-NRW noticeably improve individual fairness measured with prediction consistency, while retaining most of the accuracy.

In Toxicity, the task is to identify toxic comments—an important tool for facilitating inclusive discussions online. The baseline BERT model learns to associate certain identity words with toxicity (e.g., “gay”) because they are often abused in online conversations. The prediction consistency is measured with respect to changes to identity words in the inputs. There are 50 identity words, e.g., “gay”, “muslim”, “asian”, etc., and a prediction is considered consistent if it is the same for all 50 identities. We present the trade-off plots in Figure 3 (right) and compare performance in Table 2. Our methods reduce individual biases in BERT predictions. We note that in both the Toxicity and Bios experiments, we observe no practical differences between GLIF and GLIF-NRW.

6 Summary and Discussion

We studied post-processing methods for enforcing individual fairness. The methods provably enforce a local form of IF and scale readily to large data sets. We hope this broadens the appeal of IF by (i) alleviating the computational costs of operationalizing IF and (ii) allowing practitioners to use off-the-shelf models for standard ML tasks.
We also note that it is possible to use our objective for in-processing. We conclude with two warnings: First, enforcing any algorithmic fairness definition does not guarantee complete fairness from the perspective of the user. The problem-specific meaning of fairness is often hard to encode exactly with a mathematical fairness definition. Second, while local individual fairness is a reasonable choice in many applications, this choice should be understood and verified by the practitioner depending on the situation. 1Based on the Kaggle “Toxic Comment Classification Challenge”. Acknowledgments and Disclosure of Funding This note is based upon work supported by the National Science Foundation (NSF) under grants no. 1916271, 2027737, and 2113373 and supported by the German Research Foundation (DFG) under Germany’s Excellence Strategy EXC–2117–390829875. Any opinions, findings, and conclusions or recommendations expressed in this note are those of the authors and do not necessarily reflect the views of the NSF nor the DFG.
1. What is the focus of the paper regarding post-processing methods for guaranteeing individual fairness properties? 2. What are the strengths and weaknesses of the proposed method based on graph Laplacian regularization? 3. How does the reviewer assess the novelty and significance of the main result, particularly its technical soundness and surprise? 4. What are the limitations of the provided empirical results and comparisons with other individual fairness definitions? 5. How does the reviewer evaluate the overall quality and impact of the paper despite some reservations about the proposed fairness definition?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a method for post-processing a model to guarantee individual fairness properties based on graph Laplacian regularization. Their main result is that this method achieves a weakened definition of individual fairness (under certain assumptions). The authors also evaluate this method on text datasets for fine-tuned BERT models. The method takes as input a graph consisting of pairs which are close according to the similarity metric and solves an optimization problem using a graph Laplacian quadratic form regularizer to achieve fairness conditions. The fairness guarantee is that, in expectation, the function is Lipschitz on sufficiently small distances (a sufficient condition for the Mahalanobis distance is that the derivative is bounded in expectation). Review The authors describe two problems with the classical post-processing approach (Dwork et al.): (1) it is computationally inefficient since it requires solving an optimization problem with O(n^2) constraints, and (2) global individual fairness can be in tension with accuracy. The idea of designing a post-processing method that achieves more desirable guarantees in practice is well-motivated and interesting. The paper is generally well-written. Although the main result (Theorem 4.6) appears technically sound, it does not seem particularly surprising that the graph Laplacian approach guarantees local individual fairness. In particular, the graph Laplacian regularization term can be written as the sum of the squares of distances between pairs weighted appropriately by the similarity metric distance (as shown in equation 3.2). Another weakness is that the authors provide a limited comparison of the local individual fairness definition and the typical individual fairness definition. First, the empirical results in Section 5.1 are fairly limited and do not offer a thorough analysis of the fairness-accuracy tradeoffs for these two definitions. Second, since local individual fairness is permitted to violate the Lipschitz constraint for large distances, it would be natural to also compare with global individual fairness with respect to an adapted metric d′ where d′(u, v) = ∞ if d(u, v) > τ and d′(u, v) = d(u, v) otherwise. Update: Thanks to the authors for their detailed responses (and for correcting the mistake in my definition of d′). However, I am still not entirely convinced by the proposed fairness definition, which seems to be a significant relaxation of IF. Thus, I have decided to maintain my score.
NIPS
Title Post-processing for Individual Fairness Abstract Post-processing in algorithmic fairness is a versatile approach for correcting bias in ML systems that are already used in production. The main appeal of postprocessing is that it avoids expensive retraining. In this work, we propose general post-processing algorithms for individual fairness (IF). We consider a setting where the learner only has access to the predictions of the original model and a similarity graph between individuals guiding the desired fairness constraints. We cast the IF post-processing problem as a graph smoothing problem corresponding to graph Laplacian regularization that preserves the desired “treat similar individuals similarly” interpretation. Our theoretical results demonstrate the connection of the new objective function to a local relaxation of the original individual fairness. Empirically, our post-processing algorithms correct individual biases in large-scale NLP models such as BERT, while preserving accuracy. 1 Introduction There are many instances of algorithmic bias in machine learning (ML) models [1]–[4], which has led to the development of methods for quantifying and correcting algorithmic bias. To quantify algorithmic bias, researchers have proposed numerous mathematical definitions of algorithmic fairness. Broadly speaking, these definitions fall into two categories: group fairness [5] and individual fairness [6]. The former formalizes the idea that ML system should treat certain groups of individuals similarly, e.g., requiring the average loan approval rate for applicants of different ethnicities be similar [7]. The latter asks for similar treatment of similar individuals, e.g., same outcome for applicants with resumes that differ only in names [8]. Researchers have also developed many ways of correcting algorithmic bias. These fairness interventions broadly fall into three categories: pre-processing the data, enforcing fairness during model training (also known as in-processing), and post-processing the outputs of a model. While both group and individual fairness (IF) definitions have their benefits and drawbacks [5], [6], [9], the existing suite of algorithmic fairness solutions mostly enforces group fairness. The few prior works on individual fairness are all in-processing methods [10]–[13]. Although in-processing is arguably the most-effective type of intervention, it has many practical limitations. For example, it requires training models from scratch. Nowadays, it is more common to fine-tune publicly available models (e.g., language models such as BERT [14] and GPT-3 [15]) than to train models afresh, as many practitioners do not have the necessary computational resources. Even with enough computational resources, training large deep learning models has a significant environmental impact [4], [16]. Equal Contribution. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Post-processing offers an easier path towards incorporating algorithmic fairness into deployed ML models, and has potential to reduce environmental harm from re-training with in-processing fairness techniques. IF, the graph needs to be smooth, i.e., the similar / connected candidates should have similar node signals, which can be accomplished by also offering the job to Alice. In contrast, directly enforcing IF constraints [6] requires a certain degree of output similarity on all pairs of candidates, and not just on those which are connected and thus similar. Our main contributions are summarized below. 1. 
We cast post-processing for individual fairness as a graph smoothing problem and propose a coordinate descent algorithm to scale the approach to large data sets. 2. We demonstrate theoretically and verify empirically that graph smoothing enforces individual fairness constraints locally, i.e., it guarantees similar treatment of similar individuals. 3. We empirically compare the Laplacian smoothing method to the post-processing adaptation of the algorithm by Dwork et al. [6] enforcing global Lipschitz continuity. The Laplacian smoothing method is not only computationally more efficient but is also more effective in reducing algorithmic bias while preserving accuracy of the original model. 4. We demonstrate the efficacy of Laplacian smoothing on two large-scale text data sets by reducing biases in fine-tuned BERT models. 2 Post-processing Problem Formulation Let X be the feature space, Y be the set of possible labels/targets, and h : X → Y be a (possibly unfair) ML model trained for the task. Our goal is to post-process the outputs of h so that they are individually fair. Formally, the post-processor is provided with a set of inputs {xi} n i=1 and the outputs of h on the inputs {ybi , h(xi)}in =1, and its goal is to produce {fb i}in =1 that is both individually fair and similar to the ybi’s. Recall that individual fairness of h is the Lipschitz continuity of h with respect to a fair metric dX on the input space: ′ dY (h(x), h(x ′ )) ≤ LdX (x, x ′ ) for all x, x ∈ X , (2.1) where L > 0 is a Lipschitz constant. The fair metric encodes problem-specific intuition of which samples should be treated similarly by the ML model. It is analogous to the knowledge of protected attributes in group fairness needed to define corresponding fairness constraints. Recent literature proposes several practical methods for learning fair metric from data [18], [19]. We assume the postprocessor is either given access to the fair metric (it can evaluate the fair distance on any pair of points in X ), or receives feedback on which inputs should be treated similarly. We encode this information in an adjacency matrix W ∈ Rn×n of a graph with individuals as nodes. If the post-processor is given the fair metric, then the entries of W are ˆ exp(−θdX (xi, xj) 2) dX (xi, xj) ≤ τ Wij = (2.2) 0 otherwise, where θ > 0 is a scale parameter and τ > 0 is a threshold parameter. If the post-processor is given an annotator’s feedback, then W is a binary matrix with Wij = 1 if i and j are considered to be treated similarly by the annotator and 0 otherwise. Extensions to multiple annotators are straightforward. We start with a simple post-processing adaptation of the algorithm by Dwork et al. [6] for enforcing individual fairness, that projects the (possibly unfair) outputs of h onto a constraint set to enforce (2.1). In other words, the post-processor seeks the closest set of outputs to the ybi’s that satisfies individual fairness: n 1 ( arg minf1,...,fn P dY (fi, ybi)2 ) i=1 2{fb i}ni=1 ∈ . (2.3) subject to dY (fi, fj ) ≤ LdX (xi, xj ) This objective function, though convex, scales poorly due to the order of n2 constraints. Empirically, we observe that (2.3) leads to post-processed outputs that are dissimilar to the ybi’s, leading to poor performance in practice. The goal of our method is to improve performance and scalability, while preserving the IF desiderata of treating similar individual similarly. 
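To make the baseline concrete, here is a minimal sketch (our own illustration, not the authors' released code; the function names and parameter defaults are ours) of the truncated-Gaussian similarity matrix in (2.2) and the global IF-constraints projection (2.3) for a univariate output, written with NumPy and CVXPY. The quadratic number of constraints in the second function is exactly what makes this baseline scale poorly.

```python
import numpy as np
import cvxpy as cp

def similarity_matrix(X, fair_dist, theta=1.0, tau=3.0):
    """Adjacency matrix W as in (2.2): a Gaussian kernel of the fair
    metric, truncated to zero beyond the threshold tau (zero diagonal)."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = fair_dist(X[i], X[j])
            if d <= tau:
                W[i, j] = W[j, i] = np.exp(-theta * d ** 2)
    return W

def global_if_postprocess(y_hat, D_fair, L=1.0):
    """Post-processing adaptation of Dwork et al. (2.3): project the
    (possibly unfair) outputs y_hat onto the set of outputs satisfying
    |f_i - f_j| <= L * d_fair(x_i, x_j) for every pair.
    Note the O(n^2) constraint list, which dominates the runtime."""
    n = len(y_hat)
    f = cp.Variable(n)
    constraints = [cp.abs(f[i] - f[j]) <= L * D_fair[i, j]
                   for i in range(n) for j in range(i + 1, n)]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(f - y_hat)), constraints)
    prob.solve()
    return f.value
```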
Before presenting our method, we discuss other post-processing perspectives that differ in their applicability and input requirements. 2.1 Alternative Post-processing Formulations We review three post-processing problem setups and the corresponding methods in the literature. First, one can fine-tune a model via an in-processing algorithm to reduce algorithmic biases. Yurochkin et al. [12] proposed an in-processing algorithm for IF and used it to train fair models for text classification using sentence BERT embeddings. This setting is the most demanding in terms of input and computational requirements: a user needs access to the original model parameters, fair metric function, and train a predictor, e.g., a moderately deep fully connected neural network, with a non-trivial fairness-promoting objective function. Second, it is possible to post-process by training additional models to correct the initial model’s behavior. For example, Kim et al. [20] propose a boosting-based method for group fairness postprocessing. This perspective can be adapted to individual fairness; however, it implicitly assumes that we can train weak-learners to boost. Lohia et al. [21], [22] propose to train a bias detector to post-process for group fairness and a special, group based, notion of individual fairness. Such methods are challenging to apply to text data or other non-tabular data types. The third perspective is the most generic: a user has access to original model outputs only, and a minimal additional feedback guiding fairness constraints. Wei et al. [23] consider such setting and propose a method to satisfy group fairness constraints; however, it is not applicable to individual fairness. Our problem formulation belongs to this post-processing setup. The main benefit of this approach is its broad applicability and ease of deployment. 3 Graph Laplacian Individual Fairness To formulate our method, we cast IF post-processing as a graph smoothing problem. Using the fair metric or human annotations as discussed in Section 2, we obtain an n × n matrix W that we treat as an adjacency matrix. As elaborated earlier, the goal of post-processing is to obtain a model f that is individually fair and accurate. The accuracy is achieved by minimizing the distance between the outputs of f and h, a pre-trained model assumed to be accurate but possibly biased. Recall that we do not have access to the parameters of h, but can evaluate its predictions. Our method enforces fairness using a graph Laplacian quadratic form [24] regularizer: bf = arg min g (f ) = arg min kf − ŷk22 + λ f⊤Lnf , (3.1) f f where ŷ is the output of the model h, and bf is the vector of the post-processed outputs, i.e., fb i = f(xi) for i = 1, . . . , n. The matrix Ln ∈ Rn×n is called graph Laplacian matrix and is a function of W . There are multiple versions of Ln popularized in the graph literature (see, e.g., [25] or [26]). To elucidate the connection to individual fairness, consider the unnormalized Laplacian Lun,n = D −W ,Pn where Dii = j=1 Wij , Dij = 0 for i 6= j is the degree matrix corresponding to W . Then a known identity is: P 21f⊤Lun,nf = Wij (fi − fj ) . (3.2)2 i=6 j Hence, the Laplacian regularizer is small if the post-processed model outputs fb i and fb j (i.e., treatment) are similar for large Wij (i.e., for similar individuals i and j). This promotes the philosophy of individual fairness: “treat similar individuals similarly”. 
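As a quick numerical check of the identity (3.2), the following small sketch (our own illustration under the stated notation) builds the unnormalized Laplacian from a symmetric weight matrix with zero diagonal and verifies that the quadratic form equals the similarity-weighted sum of squared output differences.

```python
import numpy as np

def unnormalized_laplacian(W):
    """L_un = D - W, where D is the diagonal degree matrix of W."""
    D = np.diag(W.sum(axis=1))
    return D - W

# Illustration of identity (3.2) on a random symmetric weight matrix.
rng = np.random.default_rng(0)
n = 5
W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
f = rng.standard_normal(n)

L_un = unnormalized_laplacian(W)
quad_form = f @ L_un @ f
pairwise = 0.5 * sum(W[i, j] * (f[i] - f[j]) ** 2
                     for i in range(n) for j in range(n) if i != j)
assert np.isclose(quad_form, pairwise)
```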
This observation intuitively explains the motivation for minimizing the graph Laplacian quadratic form to achieve IF. In Section 4, we present a more formal discussion on the connections between the graph Laplacian regularization and IF. Our post-processing problem (3.1) is easy to solve: setting the gradient of g to 0 implies that the optimal solution bf is: −1 b Ln + Ln ⊤ f = I + λ yb . (3.3) 2 The Laplacian Ln is a positive semi-definite matrix ensuring that (3.1) is strongly convex and that (3.3) is a global minimum. In comparison to the computationally expensive constraint optimization problem (2.3), this approach has a simple closed-form expression. Note that the symmetry of the unnormalized Laplacian Lun,n simplifies (3.3); however, there are also non-symmetric Laplacian variations. In this work, we also consider the normalized random walk Laplacian Lnrw,n = (I − De−1Wf), where Wf = D−1/2WD−1/2 is the normalized adjacency matrix and De is its degree matrix. We discuss its properties in the context of IF in Section 4. Henceforth, we refer to our method as Graph Laplacian Individual Fairness (GLIF) when using the unnormalized Laplacian, and GLIF-NRW when using Normalized Random Walk Laplacian. 3.1 Prior Work on Graph Laplacians Graph-based learning via a similarity matrix is prevalent in statistics and ML literature, specifically, in semi-supervised learning. The core idea is to gather information from similar unlabeled inputs to improve prediction accuracy (e.g., see [27], [28], [29] and references therein). Laplacian regularization is widely used in science engineering. We refer to Chapelle [17] for a survey. We note that [30], [31] also use graph Laplacian regularizers to enforce individual fairness. Our work builds on their work by elucidating the key role played by the graph Laplacian in enforcing individual fairness. In particular, we clarify the connection between the choice of the graph Laplacian and the exact notion of individual fairness the corresponding graph Laplacian regularizer enforces. 3.2 Extensions of the Basic Method In this subsection, we present four extensions of our method: multi-dimensional outputs, coordinate descent for large-scale data, an inductive setting, and alternative output space discrepancy measures. 3.2.1 Multi-dimensional Output We presented our objective function (3.1) and post-processing procedure (3.3) for the case of univariate outputs. This covers regression and binary classification. Our method readily extends to multi-dimensional output space, for example, in classification, fi, ybi ∈ RK can represent logits, i.e., softmax inputs, of the K classes. In this case, f and ŷ are n × K matrices, and the term f ⊤Lnf is a K × K matrix. We use the trace of it as a regularizer. The optimization problem (3.1) then becomes: � b yk2 f⊤f = arg min g (f) = arg min kf − ˆ F + λ tr Lnf , (3.4) f f where k · kF is the Frobenius norm. Similar to the univariate output case, this yields: −1 b Ln + Ln ⊤ f = I + λ yb . (3.5) 2 The solution is the same as (3.3); however, now it accounts for multi-dimensional outputs. 3.2.2 Coordinate Descent for Large-Scale Data Although our method has a closed form solution, it is not immediately scalable, as we have to invert a n × n matrix to obtain the optimal solution. We propose a coordinate descent variant of our method that readily scales to any data size. The idea stems primarily from the gradient of equation (3.4), where we solve: Ln + L ⊤ nf − yb + λ f = 0 . 
(3.6) 2 Fixing {fj }j 6=i, we can solve (3.6) for fi: P ŷi − 2 j=6 i(Ln,ij + Ln,ji)fjfi ← . (3.7) 1 + λLn,ii This gives rise to the coordinate descent algorithm. We perform asynchronous updates over randomly selected coordinate batches until convergence. We refer the reader to Wright [32] and the references therein for the convergence properties of (asynchronous) coordinate descent. 3.2.3 Extension to the Inductive Setting This coordinate descent update is key to extending our approach to the inductive setting. To handle new unseen points, we assume we have a set of test points on which we have already post-processed the outputs of the ML model. To post-process new unseen points, we simply fix the outputs of the other test points and perform a single coordinate descent step with respect to the output of the new point. Similar strategies are often employed to extend transductive graph-based algorithms to the inductive setting [17]. 3.2.4 Alternative Discrepancy Measures on the Output Space So far, we have considered the squared Euclidean distance as a measure of discrepancy between outputs. This is a natural choice for post-processing models with continuous-valued outputs. For models that output a probability distribution over the possible classes, we consider alternative discrepancy measures on the output space. It is possible to replace the squared Euclidean distance with a Bregman divergence with very little change to the algorithm in the case of the unnormalized Laplacian. Below, we work through the details for the KL divergence as a demonstration of the idea. A result for the general Bregman divergence can be found in Appendix B.3 (see Theorem B.4). PKoi,j / oi,k }KSuppose the output of the pre-trained model h is ŷi ∈ K , where ˆ = {e -yi k=1 e j=1 a K dimensional probability vector corresponding to a K class classification problem ({oi,j} is the output of the penultimate layer of the pre-trained model and ŷi is obtained by passing it through softmax) PK and K = {x ∈ RK : xi ≥ 0, = 1} is the probability simplex in RK . Let Pv denote the i=1 xi multinomial distribution with success probabilities v for any v ∈ k . Define η̂i ∈ RK−1 (resp. ηi) as the natural parameter corresponding to ŷi (resp. fi), i.e., η̂i,j = log (ŷi,j/ŷi,K ) = oi,j − oi,K for 1 ≤ j ≤ K − 1. The (unnormalized) Laplacian smoothing problem with the KL divergence is n λ n o X X (ỹ1, . . . , ỹn) = arg min ∈ K KL (Pyi ||Pŷi )+ Wij KL Pyi ||Pyj . (3.8)y1,...,yn 2 i j=1,j=6 i A coordinate descent approach for solving the above equation is: n P � o n ỹi = arg miny∈ k KL (Py||Pŷi ) + Wij KL Py||Pỹj . (3.9)2 j=1,j=6 i The following theorem establishes that (3.5) solves the above problem in the logit space, or equivalently in the space of the corresponding natural parameters (see Appendix B for the proof): Theorem 3.1. Consider the following optimization problem on the space of natural parameters: h iPn η̃i = arg min kη − η̂ik2 + ηjk2 . (3.10)2 j=1,j 6=i Wij kη − ˜ Then, the minimizer η̃i of equation (3.10) is the natural parameter corresponding to the minimizer ỹi of (3.8). 4 Local IF and Graph Laplacian Regularization In this section, we provide theoretical insights into why the graph Laplacian regularizer enforces individual fairness. As pointed out in Section 2, enforcing IF globally is expensive and often reduces a significant amount of accuracy of the final classifier. 
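Before turning to the analysis, it may help to see the two GLIF solvers from Section 3.2 spelled out in code. The sketch below is our own illustration (assuming a univariate or column-stacked multivariate output and a precomputed Laplacian L), covering the closed-form solve (3.3)/(3.5) and the coordinate-descent update (3.7) that avoids forming the n × n inverse.

```python
import numpy as np

def glif_closed_form(y_hat, L, lam):
    """Closed-form GLIF post-processing, (3.3)/(3.5):
    f = (I + lam * (L + L^T)/2)^{-1} y_hat.
    y_hat may be shape (n,) or (n, K) for K-class logits."""
    n = L.shape[0]
    L_sym = 0.5 * (L + L.T)
    return np.linalg.solve(np.eye(n) + lam * L_sym, y_hat)

def glif_coordinate_descent(y_hat, L, lam, n_epochs=50):
    """Coordinate-descent variant using update (3.7); each sweep costs
    O(n^2) but never forms or inverts the full n x n system."""
    f = y_hat.copy().astype(float)
    n = L.shape[0]
    for _ in range(n_epochs):
        for i in np.random.permutation(n):
            # 0.5 * sum_{j != i} (L_ij + L_ji) f_j
            cross = 0.5 * ((L[i, :] + L[:, i]) @ f - 2.0 * L[i, i] * f[i])
            f[i] = (y_hat[i] - lam * cross) / (1.0 + lam * L[i, i])
    return f
```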
Here, we establish that solving (3.1) is tantamount to enforcing a localized version of individual fairness, namely Local Individual Fairness, which is defined below: Definition 4.1 (Local Individual Fairness). An ML model h is said to be locally individually fair if it satisfies: " # dY (h(x), h(x ′ )) Ex∼P lim sup ≤ L < ∞ . (4.1) x ′ :dX (x,x ′)↓0 dX (x, x ′) For practical purposes, this means that h is locally individually fair with constants ǫ and L if it satisfies ′ dY (h(x), h(x ′ )) ≤ LdX (x, x ′ ) for all x, x ∈ X where dX (x, x ′ ) ≤ ǫ (4.2) in analogy to equation (2.1). Equation (4.2) is a relaxation of traditional IF, where we only care about the Lipschitz-constraint for all pairs of points with small fair distances, i.e., where it is less than some user-defined threshold ǫ. Example 4.2. For our theoretical analysis, we need to specify a functional form of the fair metric. A popular choice is a Mahalanobis fair metric proposed by [19], which is defined as: d2 (x, x ′ ) = (x − x ′ )⊤ (x − x ′ ), (4.3)X where is a dispersion matrix that puts lower weight in the directions of sensitive attributes and higher weight in the directions of relevant attributes. [19] also proposed several algorithms to learn such a fair metric from the data. If we further assume dY (y1, y2) = |y1 − y2|, then a simple application of Lagrange’s mean value theorem yields: |h(x) − h(x ′ )| lim sup ≤ k −1/2∇h(x)k . (4.4) x ′ :dX (x,x ′ )↓0 dX (x, x ′) This immediately implies: " # dY (h(x), h(x ′ )) Ex∼P lim sup ≤ E[k −1/2∇h(x)k] , (4.5) x ′ :dX (x,x ′ )↓0 dX (x, x ′) i.e., h satisfies local individual fairness constraint as long as E[k −1/2∇h(x)k] < ∞. On the other hand, the global IF constraint necessitates sup k −1/2∇h(x)k < ∞, i.e., h is Lipschitz x∈X continuous with respect to the Mahalanobis distance. The main advantage of this local notion of IF over its global counterpart is that the local definition concentrates on the input pairs with smaller fair distance and ignores those with larger distance. For example, in Figure 1, the edge-weights among Alice, Charlie, and Dave are much larger than among any other pairs (which have a weight of 0); therefore, our local notion enforces fairness constraint on the corresponding similar pairs, while ignoring (or being less stringent on) others. This prevents over-smoothing and consequently preserves accuracy while enforcing fairness as is evident from our real data experiments in Section 5. We now present our main theorem, which establishes that, under certain assumptions on the underlying hypothesis class and the distribution of inputs, the graph Laplacian regularizers (both unnormalized and normalized random walk) enforce the local IF constraint (as defined in Definition 4.1) in the limit. For our theory, we work with dX as the Mahalanobis distance introduced in Example 4.2 in equation (2.2) along with θ = 1/(2σ2) (σ is a bandwidth parameter which goes to 0 at an appropriate rate as n →∞) and τ = ∞. All our results will be thorough for any finite τ but with more tedious technical analysis. Therefore, our weight matrix W becomes: | |1/2 1 Wij = exp − (xi − xj ) ⊤ (xi − xj ) . (4.6) (2π)d/2σd 2σ2 The constant | |1/2/((2π)d/2σd) is for the normalization purpose and can be absorbed into the penalty parameter λ. We start by listing our assumptions: Assumption 4.3 (Assumption on the domain). The domain of the inputs X is a compact subset of R d where d is the underlying dimension. Assumption 4.4 (Assumption on the hypothesis). 
All functions f ∈ F of the hypothesis class satisfy the following: 1. The i-th derivative f⁽ⁱ⁾ is uniformly bounded over the domain X of inputs for i ∈ {0, 1, 2}. 2. f⁽¹⁾(x) = 0 for all x ∈ ∂X, where ∂X denotes the boundary of X.

Assumption 4.5 (Assumption on the density of inputs). The density p of the input random variable x on the domain X satisfies the following: 1. There exist pmax < ∞ and pmin > 0 such that, for all x ∈ X, we have pmin ≤ p(x) ≤ pmax. 2. The derivatives {p⁽ⁱ⁾}i=0,1,2 of the density p are uniformly bounded on the domain X.

Discussion on the assumptions. Most of our assumptions (e.g., compactness of the domain, bounded derivatives of f or p) are for technical simplicity and are fairly common for the asymptotic analysis of graph regularization (see, e.g., Hein et al. [25], [33] and references therein). It is possible to relax some of the assumptions: for example, if the domain X of inputs is unbounded, then the target function f and the density p should decay at a certain rate so that observations far away will not affect the convergence (e.g., sub-exponential tails). Part (2) of Assumption 4.4 can be relaxed if we assume p(x) is 0 at the boundary. However, we do not pursue these extensions further in this manuscript, as they are purely technical and do not add anything of significance to the main intuition of the result.

Theorem 4.6. Under Assumptions 4.3–4.5, we have:
1. If the sequence of bandwidths σ ≡ σn ↓ 0 such that nσn² → ∞ and Lun,n is the unnormalized Laplacian matrix, then
   (2/(n²σ²)) f⊤Lun,nf →P E_{x∼p}[∇f(x)⊤ Σ^{-1} ∇f(x) p(x)] . (4.7)
2. If the sequence of bandwidths σ ≡ σn ↓ 0 such that (nσ^{d+4})/log(1/σ) → ∞ and Lnrw,n is the normalized random walk Laplacian matrix, then
   (1/(nσ²)) f⊤Lnrw,nf →P E_{x∼p}[∇f(x)⊤ Σ^{-1} ∇f(x)] , (4.8)
where f = (f(x₁), …, f(xₙ)) and →P denotes convergence in probability. Consequently, both Laplacian regularizers asymptotically enforce local IF.

The proof of the above theorem can be found in Appendix B. When we use the normalized random walk graph Laplacian matrix Lnrw,n as regularizer, the regularizer (asymptotically) penalizes E[∇f(x)⊤ Σ^{-1} ∇f(x)] = E[‖Σ^{-1/2}∇f(x)‖²], which, by Example 4.2, is equivalent to enforcing the local IF constraint. Similarly, the unnormalized Laplacian matrix Lun,n also enforces it under Assumption 4.5, since
   E[‖Σ^{-1/2}∇f(x)‖²] ≤ (1/pmin) E[∇f(x)⊤ Σ^{-1} ∇f(x) p(x)] , where pmin = inf_{x∈X} p(x). (4.9)
Although both Laplacian matrices enforce local IF, the primary difference between them is that the limit of the unnormalized Laplacian involves the density p(x): it upweights the high-density region (resulting in a more stringent imposition of the fairness constraint there) and down-weights the underrepresented/low-density region. On the other hand, the limit corresponding to the normalized random walk Laplacian matrix does not depend on p(x) and enforces the fairness constraint with equal intensity on the entire input space. We used both regularizers in our experiments, comparing and contrasting their performance on several practical ML problems.

5 Experiments

The goals of our experiments are threefold: 1. Exploring the trade-offs between post-processing for local IF with GLIF and post-processing with (global) IF constraints using our adaptation of the algorithm by Dwork et al. [6] described in (2.3). 2. Studying practical implications of theoretical differences between GLIF and GLIF-NRW, i.e., different graph Laplacians, presented in Section 4. 3.
Evaluating the effectiveness of GLIF in its main application, i.e., computationally light debiasing of large deep learning models such as BERT. The implementation of this work is available at github.com/Felix-Petersen/fairness-post-processing. 5.1 Comparing GLIF and Global IF-constraints For our first experiment, we consider the sentiment prediction task [34], where our goal is to classify words as having a positive or negative sentiment. The baseline model is a neural network trained with GloVe word embeddings [35]. Following Yurochkin et al. [11], we evaluate the model on a set of names and observe that it assigns varying sentiments to names. An individually fair model should assign similar sentiment scores to all names. Further, we observe that there is a gap between average sentiments of names typical for Caucasian and African-American ethnic groups [36], which is violating group fairness. Yurochkin et al. [11] propose a fair metric learning procedure for this task using a side data set of names, and an in-processing technique for achieving individual fairness. We use their method to obtain the fair metric and compare post-processing of the baseline model with GLIF, GLIF-NRW and the global IF-constraints method. The test set comprises 663 words from the original task and 94 names. For post-processing, no problem specific knowledge is used. The resulting post-processed predictions for the original test set are used to evaluate accuracy, and the predictions on the names are used for evaluating fairness metrics. Even for this small problem, the global IF-constraints method, i.e., a CVXPY [37] implementation of (2.3), takes 7 minutes to run. Due to the poor scalability of the global IF-constraints method, we can use it only for the study of this smaller data set and can not consider it for the large language model experiments in Section 5.2. For GLIF(-NRW), we implement the closed-form solution (3.3) that takes less than a tenth of a second to run. See Appendix A for additional experimental details and a runtime analysis. We evaluate the fairness-accuracy trade-offs for a range of threshold parameters τ (for GLIF and GLIF-NRW) and for a range of Lipschitz-constants L (for IF-constraints) in Figure 2. Figure 2 (left) shows the standard deviation of the post-processed outputs on all names as a function of test accuracy on the original sentiment task. Lower standard deviations imply that all names received similar predictions, which is the goal of individual fairness. Figure 2 (center) visualizes group fairness and accuracy, i.e., difference in average name sentiment scores for the two ethnic groups. In this problem, individual fairness is a stronger notion of fairness: achieving similar predictions for all names implies similar group averages, but not vice a versa. Therefore, for this task, post-processing for individual fairness also corrects group disparities. In both settings, GLIF and GLIF-NRW achieve substantially better fairness metrics for the same levels of test accuracy in comparison to the IF-constraints method. To understand the reason for this, we study which global IF constraints are violated after applying the GLIF method in Figure 2 (right). Corresponding to the unique pairs of words in our test set, there are n(n − 1)/2 unique constraints in (2.3), and the global IF-constraints method satisfies all of them by design. 
Each constraint (i.e., each pair of words) corresponds to a fair distance, which is small for (under the fair metric) similar words and large for dissimilar words. We bin the constraints by fair distance and present the proportion of global IF constraints violated after applying the GLIF method for each bin in the histogram in Figure 2 (right). Here, we set the Lipschitz-constant L in (2.3) to L = 2.25 corresponding to a 89.4% accuracy of the IF-constraints method and show global IF constraint violations of GLIF corresponding to 95% accuracy in blue. This means that we use strong global IF constraints and use a setting of the GLIF method which maintains most of the accuracy, which would not be possible using the IF-constraints method. GLIF does not violate any constraints corresponding to small fair distances, i.e., it satisfies IF on similar individuals, while violating many large fair distance constraints. This can be seen as basically all constraint violations (blue) are at large fair distances of greater or equal 6. This demonstrates the effect of enforcing local individual fairness from our theoretical analysis in Section 4. At the same time, we display frequency of constraints that correspond to pairs of names in orange, where we can see that almost all constraints corresponding to names occur at small fair distances of smaller or equal to 6. This is expected in this task because we consider all names similar, so fair distances between them should be small. We can see that the distributions of constraint violations after applying GLIF (blue, right) and names (orange, left) are almost disjoint. We mark all global IF constraint violations after applying GLIF that correspond to names in green, and observe that there are none. Summarizing, GLIF ignores unnecessary (in the context of this problem) constraints allowing it to achieve higher accuracy, while satisfying the more relevant local IF constraints leading to improved fairness. Regarding the practical differences between GLIF and GLIF-NRW, in Figure 2 (left) GLIF has smaller standard deviations on the name outputs, but in in Figure 2 (center) GLIF-NRW achieves lower race gap. In Theorem 4.6, we showed that GLIF penalizes fairness violations in high density data regions stronger. As a result, GLIF may favor enforcing similar outputs in the high density region causing lower standard deviation, while leaving outputs nearly unchanged in the lower density region, resulting in larger race gaps. GLIF-NRW weights all data density regions equally, i.e., it is less likely to miss a small subset of names, but is less stringent in the high density regions. 5.2 Post-processing for Debiasing Large Language Models Large language models have achieved impressive results on many tasks; however, there is also significant evidence demonstrating that they are prone to biases [4], [38], [39]. Debiasing these models remains largely an open problem: most in-processing algorithms are not applicable or computationally prohibitive due to large and highly complex model architectures, and challenges in handling text inputs. Even if an appropriate in-processing algorithm arises, significant environmental impact due to re-training is unavoidable [4], [16]. In our experiments, we evaluate effectiveness of GLIF as a simple post-processing technique to debias BERT-based models for text classification. Another possible solution is to fine-tune BERT with an in-processing technique as was done by Yurochkin et al. [12]. 
The two approaches are not directly comparable: fine-tuning with SenSeI [12] requires knowledge of the model parameters, alleviates only part of the computational burden, and has more stringent requirements on the fair metric, while post-processing with GLIF is transductive, i.e., it requires access to unlabeled test data (see extended discussion in Section 2.1).

Table 1: Results for the Bios task.
Method      Test Acc.        Pred. Consist.
Baseline    0.846 ± 0.003    0.942 ± 0.002
GLIF        0.830 ± 0.004    0.986 ± 0.002
GLIF-NRW    0.834 ± 0.003    0.988 ± 0.002
SenSeI      0.843 ± 0.003    0.977 ± 0.001

Table 2: Results for the Toxicity task.
Method      Test Acc.        Pred. Consist.
Baseline    0.809 ± 0.004    0.614 ± 0.013
GLIF        0.803 ± 0.003    0.835 ± 0.012
GLIF-NRW    0.803 ± 0.003    0.844 ± 0.013
SenSeI      0.791 ± 0.005    0.773 ± 0.043

We replicate the experiments of Yurochkin et al. [12] on the Bios [40] and Toxicity1 data sets. They use the approach of Mukherjee et al. [19] for fair metric learning, which we reproduce. We refer to Appendix B.1 of [12] for details. In both tasks, following [12], we quantify performance with balanced accuracy due to class imbalance, and measure individual fairness via prediction consistency, i.e., the fraction of test points where the prediction remains unchanged when performing task-specific input modifications. For implementation details, see Appendix A. In Appendix A.4, we analyze the runtime and distinguish between the closed-form and coordinate descent variants of GLIF.

In Bios, the goal is to predict the occupation of a person based on their textual biography. Such models can be useful for recruiting purposes. However, due to historical gender bias in some occupations, the baseline BERT model learns to associate gender pronouns and names with the corresponding occupations. Individual fairness is measured with prediction consistency with respect to gender pronoun and name alterations. A prediction is considered consistent if it is the same after swapping the gender pronouns and names. We present the fairness-accuracy trade-off in Figure 3 (left) for a range of threshold parameters τ, and compare performance based on hyperparameter values selected with validation data in Table 1. Both GLIF and GLIF-NRW noticeably improve individual fairness measured with prediction consistency, while retaining most of the accuracy.

In Toxicity, the task is to identify toxic comments—an important tool for facilitating inclusive discussions online. The baseline BERT model learns to associate certain identity words with toxicity (e.g., “gay”) because they are often abused in online conversations. The prediction consistency is measured with respect to changes to identity words in the inputs. There are 50 identity words, e.g., “gay”, “muslim”, “asian”, etc., and a prediction is considered consistent if it is the same for all 50 identities. We present the trade-off plots in Figure 3 (right) and compare performance in Table 2. Our methods reduce individual biases in BERT predictions. We note that in both the Toxicity and Bios experiments, we observe no practical differences between GLIF and GLIF-NRW.

6 Summary and Discussion

We studied post-processing methods for enforcing individual fairness. The methods provably enforce a local form of IF and scale readily to large data sets. We hope this broadens the appeal of IF by (i) alleviating the computational costs of operationalizing IF and (ii) allowing practitioners to use off-the-shelf models for standard ML tasks.
We also note that it is possible to use our objective for in-processing. We conclude with two warnings: First, enforcing any algorithmic fairness definition does not guarantee complete fairness from the perspective of the user. The problem-specific meaning of fairness is often hard to encode exactly with a mathematical fairness definition. Second, while local individual fairness is a reasonable choice in many applications, this choice should be understood and verified by the practitioner depending on the situation. 1Based on the Kaggle “Toxic Comment Classification Challenge”. Acknowledgments and Disclosure of Funding This note is based upon work supported by the National Science Foundation (NSF) under grants no. 1916271, 2027737, and 2113373 and supported by the German Research Foundation (DFG) under Germany’s Excellence Strategy EXC–2117–390829875. Any opinions, findings, and conclusions or recommendations expressed in this note are those of the authors and do not necessarily reflect the views of the NSF nor the DFG.
1. What is the focus and contribution of the paper regarding individual fairness solution concepts? 2. What are the strengths of the proposed approach, particularly in its ability to provide a better tradeoff between fairness and accuracy? 3. What are some concerns or questions regarding the method's reliance on a graph Laplacian smoothing technique and its relationship to other proposals using a graph structure for individual fairness? 4. How do the parameters theta and tau affect the behavior of the method, and what is their significance in the context of individual fairness? 5. Can you provide more information or clarification on the conceptual relation between the Laplacian method and the alternate definition of individual fairness proposed in [36], as well as how it compares to other methods that have been proposed for instantiating individual fairness?
Summary Of The Paper Review
Summary Of The Paper This paper attempts to instantiate the individual fairness solution concept of Dwork et al in a manner which relaxes some of the stringency of a global Lipschitz constraint. The proposed formulation applies a graph Laplacian smoothing technique to post-process the predictions of an arbitrary model. The graph is chosen so that edge weights decay exponentially with decreasing similarity (wrt the "fairness" metric), with a cutoff to 0 at some point. Under some assumptions, this regularizer is shown to enforce a much more local version of individual fairness. Experimental results show that this results in a better tradeoff with accuracy than the original notion, and similar or better performance compared to a recently proposed method. Review The main strength of this paper is to propose an alternate way of instantiating individual fairness which is easy to apply and which appears to provide a better tradeoff between fairness and accuracy. The Laplacian method is well-motivated in terms of a connection to a local notion of fairness. This method does not address the (often prohibitive) challenge of identifying a suitable metric with which to define individual fairness. However, given that such a method can be identified, it proposes a nice way of enforcing it. A few questions: (1) What is the impact of the parameters theta and tau on the behavior of the method? (2) What is the conceptual relation of the Laplacian method to the alternate definition of individual fairness proposed in [36]? (3) How does the method relate to alternate proposals to use a graph structure to instantiate individual fairness such as http://www.vldb.org/pvldb/vol13/p506-lahoti.pdf or https://dl.acm.org/doi/abs/10.1145/3394486.3403080?
NIPS
Title Contextual Stochastic Block Models Abstract We provide the first information theoretic tight analysis for inference of latent community structure given a sparse graph along with high dimensional node covariates, correlated with the same latent communities. Our work bridges recent theoretical breakthroughs in the detection of latent community structure without nodes covariates and a large body of empirical work using diverse heuristics for combining node covariates with graphs for inference. The tightness of our analysis implies in particular, the information theoretical necessity of combining the different sources of information. Our analysis holds for networks of large degrees as well as for a Gaussian version of the model. 1 Introduction Data clustering is a widely used primitive in exploratory data analysis and summarization. These methods discover clusters or partitions that are assumed to reflect a latent partitioning of the data with semantic significance. In a machine learning pipeline, results of such a clustering may then be used for downstream supervised tasks, such as feature engineering, privacy-preserving classification or fair allocation [CMS11, KGB+12, CDPF+17]. At risk of over-simplification, there are two settings that are popular in literature. In graph clustering, the dataset of n objects is represented as a symmetric similarity matrix A = (Aij)1≤i,j≤n. For instance, A can be binary, where Aij = 1 (or 0) denotes that the two objects i, j are similar (or not). It is, then, natural to interpret A as the adjacency matrix of a graph. This can be carried over to non-binary settings by considering weighted graphs. On the other hand, in more traditional (binary) classification problems, the n objects are represented as p-dimensional feature or covariate vectors b1, b2, · · · , bn. This feature representation can be the input for a clustering method such as k-means, or instead used to construct a similarity matrix A, which in turn is used for clustering or partitioning. These two representations are often taken to be mutually exclusive and, in fact, interchangeable. Indeed, just as feature representations can be used to construct similarity matrices, popular spectral methods [NJW02, VL07] implicitly construct a low-dimensional feature representation from the similarity matrices. This paper is motivated by scenarios where the graph, or similarity, representation A ∈ Rn×n, and the feature representation B = [b1, b2, . . . , bn] ∈ Rp×n provide independent, or complementary, information on the latent clustering of the n objects. (Technically, we will assume that A and B are conditionally independent given the node labels.) We argue that in fact in almost all practical graph clustering problems, feature representations provide complementary information of the latent clustering. This is indeed the case in many social and biological networks, see e.g. [NC16] and references within. As an example, consider the ‘political blogs’ dataset [AG05]. This is a directed network of political blogs during the 2004 US presidential election, with a link between two blogs if one referred to the ∗Department of Mathematics, Massachusetts Institute of Technology †Departments of Electrical Engineering and Statistics, Stanford University ‡Department of Mathematics, Massachusetts Institute of Technology §Department of Mathematics, Massachusetts Institute of Technology 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. other. 
It is possible to just use the graph structure in order to identify political communities (as was done in [AG05]). Note however that much more data is available. For example we may consider an alternative feature representation of the blogs, wherein each blog is converted to a ‘bag-of words’ vector of its content. This gives a quite different, and complementary representation of blogs that plausibly reflects their political leaning. A number of approaches can be used for the simple task of predicting leaning from the graph information (or feature information) individually. However, given access to both sources, it is challenging to combine them in a principled fashion. In this context, we introduce a simple statistical model of complementary graph and high-dimensional covariate data that share latent cluster structure. This model is an intuitive combination of two well-studied models in machine learning and statistics: the stochastic block model and the spiked covariance model [Abb17, HLL83, JL04]. We focus on the task of uncovering this latent structure and make the following contributions: Sharp thresholds: We establish a sharp information-theoretic threshold for detecting the latent structure in this model. This threshold is based on non-rigorous, but powerful, techniques from statistical physics. Rigorous validation: We consider a certain ‘Gaussian’ limit of the statistical model, which is of independent interest. In this limit, we rigorously establish the correct information-theoretic threshold using novel Gaussian comparison inequalities. We further show convergence to the Gaussian limit predictions as the density of the graph diverges. Algorithm: We provide a simple, iterative algorithm for inference based on the belief propagation heuristic. For data generated from the model, we empirically demonstrate that the the algorithm achieves the conjectured information-theoretic threshold. The rest of the paper is organized as follows. The model and results are presented in Section 2. Further related work is discussed in Section 3. The prediction of the threshold from statistical physics techniques is presented in 4, along with the algorithm. While all proofs are presented in the appendix, we provide an overview of the proofs of our rigorous results in Section 5. Finally, we numerically validate the prediction in Section 6. 2 Model and main results We will focus on the simple case where the n objects form two latent clusters of approximately equal size, labeled + and −. Let v ∈ {±1}n be the vector encoding this partitioning. Then, the observed data is a pair of matrices (AG, B), where AG is the adjacency matrix of the graph G and B ∈ Rp×n is the matrix of covariate information. Each column bi, i ≤ n of matrix B contains the covariate information about vertex i. We use the following probabilistic model: conditional on v, and a latent vector u ∼ N(0, Ip/p): P(AGij = 1) = { cin/n with probability , cout/n otherwise. (1) bi = √ µ n viu+ Zi√ p , (2) where Zi ∈ Rp has independent standard normal entries. It is convenient to parametrize the edge probabilities by the average degree d and the normalized degree separation λ: cin = d+ λ √ d , cout = d− λ √ d . (3) Here d, λ, µ are parameters of the model which, for the sake of simplicity, we assume to be fixed and known. In other words, two objects i, j in the same cluster or community are slightly more likely to be connected than for objects i, j′ in different clusters. 
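For readers who want to experiment with the model, the following is a minimal sampler (our own sketch; the default parameter values are arbitrary) for Eqs. (1)–(3): it draws the labels v, the latent direction u, a sparse graph whose within-community edges are slightly more likely, and the covariate matrix B.

```python
import numpy as np

def sample_contextual_sbm(n=2000, p=1000, d=5.0, lam=1.0, mu=1.0, seed=0):
    """Draw (A, B, v) from the two-community contextual SBM:
    edges appear with probability c_in/n within a community and c_out/n
    across communities, and covariates b_i = sqrt(mu/n) v_i u + Z_i/sqrt(p)."""
    rng = np.random.default_rng(seed)
    v = rng.choice([-1.0, 1.0], size=n)                 # latent labels
    u = rng.standard_normal(p) / np.sqrt(p)             # latent direction
    c_in, c_out = d + lam * np.sqrt(d), d - lam * np.sqrt(d)
    same = np.outer(v, v) > 0
    probs = np.where(same, c_in / n, c_out / n)
    A = (rng.random((n, n)) < probs).astype(float)
    A = np.triu(A, 1); A = A + A.T                      # undirected, no self-loops
    Z = rng.standard_normal((p, n))
    B = np.sqrt(mu / n) * np.outer(u, v) + Z / np.sqrt(p)
    return A, B, v
```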
Similarly, according to (2), they have slightly positively correlated feature vectors bi, bj , while objects i, j′ in different clusters have negatively correlated covariates bi, bj′ . Note that this model is a combination of two observation models that have been extensively studied: the stochastic block model and the spiked covariance model. The stochastic block model has its roots in sociology literature [HLL83] and has witnessed a resurgence of interest from the computer science and statistics community since the work of Decelle et al. [DKMZ11]. This work focused on the sparse setting where the graph as O(n) edges and conjectured, using the non-rigorous cavity method, the following phase transition phenomenon. This was later established rigorously in a series of papers [MNS15, MNS13, Mas14]. Theorem 1 ([MNS15, MNS13, Mas14]). Suppose d > 1 is fixed. The graph G is distinguishable with high probability from an Erdös-Renyi random graph with average degree d if and only if λ ≥ 1. Moreover, if λ > 1, there exists a polynomial-time computable estimate v̂ = v̂(AG) ∈ {±1}n of the cluster assignment satisfying, almost surely: lim inf n→∞ |〈v̂, v〉| n ≥ ε(λ) > 0. (4) In other words, given the graph G, it is possible to non-trivially estimate the latent clustering v if, and only if, λ > 1. The covariate model (2) was proposed by Johnstone and Lu [JL04] and has been extensively studied in statistics and random matrix theory. The weak recovery threshold was characterized by a number of authors, including Baik et al [BBAP05], Paul [Pau07] and Onatski et al [OMH+13]. Theorem 2 ([BBAP05, Pau07, OMH+13]). Let v̂1 be the principal eigenvector of BTB, where v̂1 is normalized so that ‖v̂1‖2 = n. Suppose that p, n → ∞ with p/n → 1/γ ∈ (0,∞). Then lim infn→∞ |〈v̂1, v〉|/n > 0 if and only if µ > √ γ. Moreover, if µ < √ γ, no such estimator exists. In other words, this theorem shows that it is possible to estimate v solely from the covariates using, in fact, a spectral method if, and only if µ > √ γ. Our first result is the following prediction that establishes the analogous threshold prediction that smoothly interpolates between Theorems 1 and 2. Claim 3 (Cavity prediction). GivenAG, B as in Eqs.(1), (2), and assume that n, p→∞ with p/n→ 1/γ ∈ (0,∞). Then there exists an estimator v̂ = v̂(AG, B) ∈ {±1}n so that lim inf |〈v̂, v〉|/n is bounded away from 0 if and only if λ2 + µ2 γ > 1 . (5) We emphasize here that this claim is not rigorous; we obtain this prediction via the cavity method. The cavity method is a powerful technique from the statistical physics of mean field models [MM09]. Our instantiation of the cavity method is outlined in Section 4, along with Appendix B and D (see supplement). The cavity method is remarkably successful and a number of its predictions have been made rigorous [MM09, Tal10]. Consequently, we view Claim 3 as a conjecture, with strong positive evidence. Theorems 1 and 2 confirm the cavity prediction rigorously in the corner cases, in which either λ or µ vanishes, using intricate tools from random matrix theory and sparse random graphs. Our main result confirms rigorously Claim 3 in the limit of large degrees. Theorem 4. Suppose v is uniformly distributed in {±1}n and we observe AG, B as in (1), (2). Consider the limit p, n→∞ with p/n→ 1/γ. Then, for some ε(λ, µ) > 0 independent of d, lim inf n→∞ sup v̂( · ) |〈v̂(AG, B), v〉| n ≥ ε(λ, µ)− od(1) if λ2 + µ2/γ > 1, (6) lim sup n→∞ sup v̂( · ) |〈v̂(AG, B), v〉| n = od(1) if λ2 + µ2/γ < 1. 
Here the limits hold in probability, the supremum is over estimators v̂ : (A^G, B) ↦ v̂(A^G, B) ∈ R^n with ‖v̂(A^G, B)‖_2 = √n, and o_d(1) indicates a term independent of n which tends to zero as d → ∞.

In order to establish this result, we consider a modification of the original model in (1), (2), which is of independent interest. Suppose that, conditional on v ∈ {±1}^n and the latent vector u, we observe (A, B) as follows:

A_{ij} ∼ N(λ v_i v_j/n, 1/n) if i < j,  A_{ij} ∼ N(λ v_i v_j/n, 2/n) if i = j,   (8)

B_{ai} ∼ N(√µ v_i u_a/√n, 1/p).   (9)

This model differs from (1) in that the graph observation A^G is replaced by the observation A, which is equal to λ v v^T/n corrupted by Gaussian noise. This model generalizes so-called 'rank-one deformations' of random matrices [Péc06, KY13, BGN11], as well as the Z2 synchronization model [ABBS14, Cuc15].

Our main motivation for introducing the Gaussian observation model is that it captures the large-degree behavior of the original graph model. The next result formalizes this intuition: its proof is an immediate generalization of the Lindeberg interpolation method of [DAM16].

Theorem 5. Suppose v ∈ {±1}^n is uniformly random, and u is independent. We denote by I(v; A^G, B) the mutual information of the latent random variables v and the observable data A^G, B. For all λ, µ we have that

lim_{d→∞} lim sup_{n→∞} (1/n) |I(v; A^G, B) − I(v; A, B)| = 0,   (10)

lim_{d→∞} lim sup_{n→∞} | (1/n) dI(v; A^G, B)/d(λ²) − (1/4) MMSE(v; A^G, B) | = 0,   (11)

where MMSE(v; A^G, B) = n^{−2} E{‖v v^T − E{v v^T | A^G, B}‖_F²}.

For the Gaussian observation model (8), (9) we can establish a precise weak recovery threshold, which is the main technical novelty of this paper.

Theorem 6. Suppose v is uniformly distributed in {±1}^n and we observe A, B as in (8), (9). Consider the limit p, n → ∞ with p/n → 1/γ.

1. If λ² + µ²/γ < 1, then for any estimator v̂ : (A, B) ↦ v̂(A, B) with ‖v̂(A, B)‖_2 = √n, we have lim sup_{n→∞} |⟨v̂, v⟩|/n = 0.

2. If λ² + µ²/γ > 1, let v̂(A, B) be normalized so that ‖v̂(A, B)‖_2 = √n and proportional to the maximum eigenvector of the matrix M(ξ∗), where

M(ξ) = A + (2µ²/(λ²γ²ξ)) B^T B + (ξ/2) I_n,   (12)

and ξ∗ = arg min_{ξ>0} λ_max(M(ξ)). Then lim inf_{n→∞} |⟨v̂, v⟩|/n > 0 in probability.

Theorem 4 is proved by using this threshold result in conjunction with the universality result of Theorem 5.

3 Related work

The need to incorporate node information in graph clustering has long been recognized. To address the problem, diverse clustering methods have been introduced, e.g., those based on generative models [NC16, Hof03, ZVA10, YJCZ09, KL12, LM12, XKW+12, HL14, YML13], heuristic model-free approaches [BVR17, ZLZ+16, GVB12, ZCY09, NAJ03, GFRS13, DV12, CZY11, SMJZ12, SZLP16], Bayesian methods [CB10, BC11], etc. [BCMM15] surveys other clustering methods for graphs with node and edge attributes. Semisupervised graph clustering [Pee12, EM12, ZMZ14], where labels are available for a few vertices, is also somewhat related to our line of enquiry. The literature in this domain is quite vast and extremely diffuse, and thus we do not attempt to provide an exhaustive survey of all related attempts in this direction. In terms of rigorous results, [AJC14, LMX15] introduced and analyzed a model with informative edges, but they make the strong and unrealistic requirement that the label of individual edges and each of their endpoints are uncorrelated, and they are only able to prove one side of their conjectured threshold. The papers [BVR17, ZLZ+16], among others, rigorously analyze specific heuristics for clustering and provide some guarantees that ensure consistency.
However, these results are not optimal. Moreover, it is possible that they only hold in the regime where using either the node covariates or the graph alone suffices for inference. Several theoretical works [KMS16, MX16] analyze the performance of local algorithms in the semisupervised setting, i.e., where the true labels are given for a small fraction of nodes. In particular, [KMS16] establishes that for the two-community sparse stochastic block model, correlated recovery is impossible given the labels of any vanishing proportion of the nodes. Note that this is in stark contrast to Theorem 4 (and Claim 3 for the sparse graph model) above, which posits that high-dimensional covariate information actually shifts the information-theoretic threshold for detection and weak recovery. The analysis in [KMS16, MX16] is also local in nature, while our algorithms and their analysis go well beyond the diameter of the graph.

4 Belief propagation: algorithm and cavity prediction

Recall the model (1), (2), where we are given the data (A^G, B) and our task is to infer the latent community labels v. From a Bayesian perspective, a principled approach computes posterior expectations with respect to the conditional distribution P(v, u | A^G, B) = P(v, u, A^G, B)/P(A^G, B). This is, however, not computationally tractable because it requires marginalizing over v ∈ {+1, −1}^n and u ∈ R^p. At this point, it becomes necessary to choose an approximate inference procedure, such as variational inference or mean field approximations [WJ+08]. For Bayesian inference problems on locally tree-like graphs, belief propagation is optimal among local algorithms (see for instance [DM15] for an explanation of why this is the case).

The algorithm proceeds by computing, in an iterative fashion, vertex messages η^t_i, m^t_a for i ∈ [n], a ∈ [p] and edge messages η^t_{i→j} for all pairs (i, j) that are connected in the graph G. For a vertex i of G, we denote its neighborhood in G by ∂i. Starting from an initialization (η^{t0}, m^{t0})_{t0=−1,0}, we update the messages in the following linear fashion:

η^{t+1}_{i→j} = √(µ/γ) (B^T m^t)_i − (µ/γ) η^{t−1}_i + (λ/√d) Σ_{k∈∂i\j} η^t_{k→i} − (λ√d/n) Σ_{k∈[n]} η^t_k,   (13)

η^{t+1}_i = √(µ/γ) (B^T m^t)_i − (µ/γ) η^{t−1}_i + (λ/√d) Σ_{k∈∂i} η^t_{k→i} − (λ√d/n) Σ_{k∈[n]} η^t_k,   (14)

m^{t+1} = √(µ/γ) B η^t − µ m^{t−1}.   (15)

Here, and below, we use η^t = (η^t_i)_{i∈[n]} and m^t = (m^t_a)_{a∈[p]} to denote the vectors of vertex messages. After running the algorithm for some number of iterations t_max, we return, as an estimate, the sign of the vertex messages η^{t_max}_i, i.e.

v̂_i(A^G, B) = sgn(η^{t_max}_i).   (16)

These update equations have a number of intuitive features. First, in the case that µ = 0, i.e. we have no covariate information, the edge messages become

η^{t+1}_{i→j} = (λ/√d) Σ_{k∈∂i\j} η^t_{k→i} − (λ√d/n) Σ_{k∈[n]} η^t_k,   (17)

which corresponds closely to the spectral power method on the nonbacktracking walk matrix of G [KMM+13]. Conversely, when λ = 0, the update equations for m^t, η^t correspond closely to the usual power iteration to compute singular vectors of B.

We obtain this algorithm from belief propagation using two approximations. First, we linearize the belief propagation update equations around a certain 'zero information' fixed point. Second, we use an 'approximate message passing' version of the belief propagation updates, which results in the addition of the memory terms in Eqs. (13), (14), (15). The details of these approximations are quite standard and deferred to Appendix D.
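To make the iteration concrete, here is a minimal Python/numpy sketch of Eqs. (13)–(16). The function name, the dense adjacency input, and the explicit directed-edge bookkeeping are implementation conveniences of ours, not part of the algorithm's specification.

```python
import numpy as np

def linearized_bp(A, B, lam, mu, d, gamma, eta0, m0, T):
    """Linearized message passing of Eqs. (13)-(15); returns the vertex iterates (eta^T, m^T)."""
    n = A.shape[0]
    iu, ju = np.nonzero(np.triu(A, 1))
    e_src = np.concatenate([iu, ju])                     # directed edges i -> j
    e_dst = np.concatenate([ju, iu])
    ne = len(iu)
    rev = np.concatenate([np.arange(ne, 2 * ne), np.arange(ne)])   # index of the reverse edge j -> i

    eta_prev, eta = np.zeros(n), eta0.astype(float).copy()         # eta^{-1} = 0, eta^0 = eta0
    m_prev, m = np.zeros(B.shape[0]), m0.astype(float).copy()      # m^{-1} = 0, m^0 = m0
    eta_e = eta[e_src]                                             # edge messages eta^0_{i->j}

    for _ in range(T):
        S = np.zeros(n)
        np.add.at(S, e_dst, eta_e)                       # S[i] = sum over k in di of eta^t_{k->i}
        drive = np.sqrt(mu / gamma) * (B.T @ m) - (mu / gamma) * eta_prev
        glob = lam * np.sqrt(d) / n * eta.sum()
        eta_new = drive + lam / np.sqrt(d) * S - glob                                   # Eq. (14)
        eta_e_new = drive[e_src] + lam / np.sqrt(d) * (S[e_src] - eta_e[rev]) - glob    # Eq. (13)
        m_new = np.sqrt(mu / gamma) * (B @ eta) - mu * m_prev                           # Eq. (15)
        eta_prev, eta, eta_e, m_prev, m = eta, eta_new, eta_e_new, m, m_new
    return eta, m
```

The label estimate of Eq. (16) is then simply np.sign(eta).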
For a heuristic discussion, we refer the interested reader to the tutorials [Mon12, TKGM14] (for the Gaussian approximation) and the papers [DKMZ11, KMM+13] (for the linearization procedure). As with belief propagation, the behavior of this iterative algorithm in the limit p, n → ∞ can be tracked using a distributional recursion called density evolution.

Definition 1 (Density evolution). Let (m̄, U) and (η̄, V) be independent random vectors such that U ∼ N(0, 1), V ∼ Uniform({±1}), and m̄, η̄ have finite variance. Further assume that (η̄, V) =_d (−η̄, −V) and (m̄, U) =_d (−m̄, −U) (where =_d denotes equality in distribution). We then define new random pairs (m̄′, U′) and (η̄′, V′), where U′ ∼ N(0, 1), V′ ∼ Uniform({±1}), (η̄′, V′) =_d (−η̄′, −V′) and (m̄′, U′) =_d (−m̄′, −U′), via the following distributional equations:

m̄′ | U′  =_d  µ E{V η̄} U′ + (µ E{η̄²})^{1/2} ζ₁,   (18)

η̄′ | V′=+1  =_d  (λ/√d) [ Σ_{k=1}^{k₊} η̄_k|₊ + Σ_{k=1}^{k₋} η̄_k|₋ ] − λ√d E{η̄} + (µ/γ) E{U m̄} + ((µ/γ) E{m̄²})^{1/2} ζ₂.   (19)

Here we use the notation X | Y =_d Z to mean that the conditional distribution of X given Y is the same as the (unconditional) distribution of Z. Notice that the distribution of η̄′ | V′=−1 is determined by the last equation using the symmetry property. Further, η̄_k|₊ and η̄_k|₋ denote independent random variables distributed (respectively) as η̄ | V=+1 and η̄ | V=−1. Finally, k₊ ∼ Poiss(d/2 + λ√d/2), k₋ ∼ Poiss(d/2 − λ√d/2), ζ₁ ∼ N(0, 1) and ζ₂ ∼ N(0, 1) are mutually independent and independent from the previous random variables. The density evolution map, denoted by DE, is defined as the mapping from the law of (η̄, V, m̄, U) to the law of (η̄′, V′, m̄′, U′). With a slight abuse of notation, we will omit V, U, V′, U′, whose distributions are left unchanged, and write

(η̄′, m̄′) = DE(η̄, m̄).   (20)

The following claim is the core of the cavity prediction. It states that the density evolution recursion faithfully describes the distribution of the iterates η^t, m^t.

Claim 7. Let (η̄⁰, V), (m̄⁰, U) be random vectors satisfying the conditions of Definition 1. Define the density evolution sequence (η̄^t, m̄^t) = DE^t(η̄⁰, m̄⁰), i.e. the result of iteratively applying the mapping DE t times. Consider the linear message passing algorithm of Eqs. (13) to (15), with the following initialization. We set (m⁰_r)_{r∈[p]} conditionally independent given u, with conditional distribution m⁰_r | u =_d m̄⁰ |_{U = √p u_r}. Analogously, η⁰_i, η⁰_{i→j} are conditionally independent given v, with η⁰_i | v =_d η̄⁰ |_{V = v_i} and η⁰_{i→j} | v =_d η̄⁰ |_{V = v_i}. Finally, η^{−1}_i = η^{−1}_{i→j} = m^{−1}_r = 0 for all i, j, r. Then, as n, p → ∞ with p/n → 1/γ, the following holds for uniformly random indices i ∈ [n] and a ∈ [p]:

(m^t_a, u_a √p) ⇒_d (m̄^t, U),   (21)

(η^t_i, v_i) ⇒_d (η̄^t, V).   (22)

The following simple lemma shows the instability of the density evolution recursion.

Lemma 8. Let (η̄′, m̄′) = DE(η̄, m̄) be obtained from one step of the density evolution mapping. Let m and m′ denote the vectors of first and second moments of (η̄, V, m̄, U) and (η̄′, V′, m̄′, U′), defined as follows:

m = (E{V η̄}, E{U m̄}, E{η̄²}, E{m̄²}),   (23)

and similarly for m′. Then, for ‖m‖₂ → 0, we have

m′ = [ λ²   µ/γ   0    0
       µ    0     0    0
       0    0     λ²   µ/γ
       0    0     µ    0  ] m + O(‖m‖²).   (24)

In particular, the linearized map m ↦ m′ at m = 0 has spectral radius larger than one if and only if λ² + µ²/γ > 1.
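Lemma 8 is easy to check numerically, since the matrix in (24) is explicit: one can tabulate its spectral radius against the threshold. A minimal sketch (the value of γ and the grid of (λ, µ) values are arbitrary illustrative choices):

```python
import numpy as np

def de_jacobian(lam, mu, gamma):
    """Linearization of the density evolution map at m = 0, from Eq. (24)."""
    return np.array([
        [lam**2, mu / gamma, 0.0,    0.0],
        [mu,     0.0,        0.0,    0.0],
        [0.0,    0.0,        lam**2, mu / gamma],
        [0.0,    0.0,        mu,     0.0],
    ])

gamma = 1.5  # illustrative value of gamma = lim n/p
for lam in np.linspace(0.05, 1.2, 6):
    for mu in np.linspace(0.05, 1.5, 6):
        radius = np.max(np.abs(np.linalg.eigvals(de_jacobian(lam, mu, gamma))))
        print(f"lam={lam:.2f} mu={mu:.2f}: spectral radius {radius:.3f}, "
              f"lam^2 + mu^2/gamma = {lam**2 + mu**2 / gamma:.3f}")
```

On such a grid, the spectral radius exceeds one exactly at the points where λ² + µ²/γ > 1, as stated in the lemma.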
The interpretation of the cavity prediction and the instability lemma is as follows. If we choose an initialization (η̄⁰, V), (m̄⁰, U) with η̄⁰, m̄⁰ positively correlated with V and U, then this correlation increases exponentially over time if and only if λ² + µ²/γ > 1 (both the message variance E{η̄²} and the covariance with the ground truth E{η̄ V} increase, but so does the normalized correlation, i.e., the covariance divided by the standard deviation). In other words, a small initial correlation is amplified. While we do not have an initialization that is positively correlated with the true labels, a random initialization η⁰, m⁰ has a random correlation with v, u of order 1/√n. If λ² + µ²/γ > 1, this correlation is amplified over the iterations, yielding a nontrivial reconstruction of v. On the other hand, if λ² + µ²/γ < 1, then this correlation is expected to remain small, indicating that the algorithm does not yield a useful estimate.

5 Proof overview

As mentioned above, a key step of our analysis is provided by Theorem 6, which establishes a weak recovery threshold for the Gaussian observation model of Eqs. (8), (9). The proof proceeds in two steps: first, we prove that for λ² + µ²/γ < 1 it is impossible to distinguish between data A, B generated according to this model and data generated according to the null model µ = λ = 0. Denoting by P_{λ,µ} the law of the data A, B, this is proved via a standard second moment argument. Namely, we bound the chi-squared distance uniformly in n, p,

χ²(P_{λ,µ}, P_{0,0}) ≡ E_{0,0}{ (dP_{λ,µ}/dP_{0,0})² } − 1 ≤ C,   (25)

and then bound the total variation distance by the chi-squared distance, ‖P_{λ,µ} − P_{0,0}‖_TV ≤ 1 − (χ²(P_{λ,µ}, P_{0,0}) + 1)^{−1}. This in turn implies that no test can distinguish between the two hypotheses with probability approaching one as n, p → ∞. The chi-squared bound also allows us to show that weak recovery is impossible in the same regime.

In order to prove that weak recovery is possible for λ² + µ²/γ > 1, we consider the following optimization problem over x ∈ R^n, y ∈ R^p:

maximize ⟨x, Ax⟩ + b∗⟨x, B^T y⟩,   (26)
subject to ‖x‖₂ = ‖y‖₂ = 1,   (27)

where b∗ = 2µ/(λγ). Denoting the solution of this problem by (x̂, ŷ), we output the (soft) label estimates v̂ = √n x̂. This definition turns out to be equivalent to the spectral algorithm in the statement of Theorem 6, and is therefore efficiently computable. This optimization problem undergoes a phase transition exactly at the weak recovery threshold λ² + µ²/γ = 1, as stated below.

Lemma 9. Denote by T = T_{n,p}(A, B) the value of the optimization problem (26).

(i) If λ² + µ²/γ < 1, then, almost surely,

lim_{n,p→∞} T_{n,p}(A, B) = 2√(1 + b∗²γ/4) + b∗.   (28)

(ii) If λ, µ > 0 and λ² + µ²/γ > 1, then there exists δ = δ(λ, µ) > 0 such that, almost surely,

lim_{n,p→∞} T_{n,p}(A, B) = 2√(1 + b∗²γ/4) + b∗ + δ(λ, µ).   (29)

(iii) Further, define

T̃_{n,p}(δ̃; A, B) = sup_{‖x‖=‖y‖=1, |⟨x,v⟩| < δ̃√n} [ ⟨x, Ax⟩ + b∗⟨x, B^T y⟩ ].

Then for each δ > 0 there exists δ̃ > 0 sufficiently small such that, almost surely,

lim_{n,p→∞} T̃_{n,p}(δ̃; A, B) < 2√(1 + b∗²γ/4) + b∗ + δ/2.   (30)

The first two points imply that T_{n,p}(A, B) provides a statistic to distinguish between P_{0,0} and P_{λ,µ}, with probability of error vanishing as n, p → ∞, whenever λ² + µ²/γ > 1. The third point (in conjunction with the second one) guarantees that the maximizer x̂ is positively correlated with v, and hence implies weak recovery. In fact, we prove a stronger result that provides an asymptotic expression for the value T_{n,p}(A, B) for all λ, µ. We obtain the above phase-transition result by specializing the resulting formula in the two regimes λ² + µ²/γ < 1 and λ² + µ²/γ > 1.
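To make the equivalence with the spectral formulation of Theorem 6 concrete, here is a minimal sketch of the resulting estimator for data (A, B) from the Gaussian observation model (8), (9). It is a direct transcription of M(ξ) from Eq. (12); the function name, the bounded scalar search for ξ∗, and the search interval are illustrative assumptions of ours, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def spectral_estimate(A, B, lam, mu, gamma):
    """Top eigenvector of M(xi*) as in Theorem 6, rescaled to norm sqrt(n); A is symmetric."""
    n = A.shape[0]

    def M(xi):
        return A + (2 * mu**2) / (lam**2 * gamma**2 * xi) * (B.T @ B) + (xi / 2) * np.eye(n)

    def lambda_max(xi):
        return np.linalg.eigvalsh(M(xi)).max()

    # xi* = argmin over xi > 0 of lambda_max(M(xi)); a bounded search suffices for a sketch.
    xi_star = minimize_scalar(lambda_max, bounds=(1e-3, 10.0), method="bounded").x

    top_vec = np.linalg.eigh(M(xi_star))[1][:, -1]       # eigenvector of the largest eigenvalue
    return np.sqrt(n) * top_vec / np.linalg.norm(top_vec)
```

The overlap |⟨v̂, v⟩|/n of this estimate can then be compared against the predicted phase transition at λ² + µ²/γ = 1.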
We prove this asymptotic formula by a Gaussian process comparison, using the Sudakov-Fernique inequality. Namely, we compare the Gaussian process appearing in the optimization problem of Eq. (26) with the following ones:

X₁(x, y) = (λ/n)⟨x, v₀⟩² + ⟨x, g̃_x⟩ + b∗√(µ/n) ⟨x, v₀⟩⟨y, u₀⟩ + ⟨y, g̃_y⟩,   (31)

X₂(x, y) = (λ/n)⟨x, v₀⟩² + (1/2)⟨x, W̃_x x⟩ + b∗√(µ/n) ⟨x, v₀⟩⟨y, u₀⟩ + (1/2)⟨y, W̃_y y⟩,   (32)

where g̃_x, g̃_y are isotropic Gaussian vectors with suitably chosen variances, and W̃_x, W̃_y are GOE matrices, again with properly chosen variances. We prove that max_{x,y} X₁(x, y) yields an upper bound on T_{n,p}(A, B), and max_{x,y} X₂(x, y) yields a lower bound on the same quantity. Note that maximizing the first process X₁(x, y) essentially reduces to solving a separable problem over the coordinates of x and y, and hence to an explicit expression. On the other hand, maximizing the second process leads (after decoupling the term ⟨x, v₀⟩⟨y, u₀⟩) to two separate problems, one for the vector x and the other for y. Each of the two problems reduces to finding the maximum eigenvector of a rank-one deformation of a GOE matrix, a problem for which we can leverage a significant amount of information from random matrix theory. The resulting upper and lower bounds coincide asymptotically.

As is often the case with Gaussian comparison arguments, the proof is remarkably compact, and somewhat surprising (it is unclear a priori that the two bounds should coincide asymptotically). While upper bounds by processes of the type of X₁(x, y) are quite common in random matrix theory, we think that the lower bound by X₂(x, y) (which is crucial for proving our main theorem) is novel and might have interesting generalizations.

6 Experiments

We demonstrate the efficacy of the full belief propagation algorithm, restated below:

η^{t+1}_i = √(µ/γ) Σ_{q∈[p]} B_{qi} m^t_q − (µ/γ) ( Σ_{q∈[p]} B²_{qi} τ^t_q ) tanh(η^{t−1}_i) + Σ_{k∈∂i} f(η^t_{k→i}; ρ) − Σ_{k∈[n]} f(η^t_k; ρ_n),   (33)

η^{t+1}_{i→j} = √(µ/γ) Σ_{q∈[p]} B_{qi} m^t_q − (µ/γ) ( Σ_{q∈[p]} B²_{qi} τ^t_q ) tanh(η^{t−1}_i) + Σ_{k∈∂i\j} f(η^t_{k→i}; ρ) − Σ_{k∈[n]} f(η^t_k; ρ_n),   (34)

m^{t+1}_q = (√(µ/γ)/τ^{t+1}_q) Σ_{j∈[n]} B_{qj} tanh(η^t_j) − (µ/(γ τ^{t+1}_q)) ( Σ_{j∈[n]} B²_{qj} sech²(η^t_j) ) m^{t−1}_q,   (35)

τ^{t+1}_q = ( 1 + µ − (µ/γ) Σ_{j∈[n]} B²_{qj} sech²(η^t_j) )^{−1}.   (36)

Here the function f(·; ρ) and the parameters ρ, ρ_n are defined as

f(z; ρ) ≡ (1/2) log( cosh(z + ρ)/cosh(z − ρ) ),   (37)

ρ ≡ tanh^{−1}(λ/√d),   (38)

ρ_n ≡ tanh^{−1}( λ√d/(n − d) ).   (39)

We refer the reader to Appendix D for a derivation of the algorithm. As demonstrated in Appendix D, the BP algorithm of Section 4 is obtained by linearizing the above in η.

In our experiments, we perform 100 Monte Carlo runs of the following process:

1. Sample A^G, B from P_{λ,µ} with n = 800, p = 1000, d = 5.
2. Run the BP algorithm for T = 50 iterations with random initialization η⁰_i, η^{−1}_i, m⁰_a, m^{−1}_a ∼_iid N(0, 0.01), yielding vertex and covariate iterates η^T ∈ R^n, m^T ∈ R^p.
3. Reject the null hypothesis if ‖η^T‖₂ > ‖η⁰‖₂, else accept the null.
4. Return the estimates v̂^BP_i = sgn(η^T_i), û^BP_a = m^T_a/‖m^T‖₂.

Figure 1 (left) shows the empirical probability of rejecting the null for (λ, µ) ∈ [0, 1] × [0, √γ]. The next two plots display the mean overlaps |⟨v̂^BP, v⟩|/n and ⟨û^BP, u⟩/‖u‖ achieved by the BP estimates (lighter is higher overlap). Below the theoretical curve (red) λ² + µ²/γ = 1, the null hypothesis is accepted and the estimates show negligible correlation with the truth. These results are in excellent agreement with our theory. Importantly, while our rigorous result holds only in the limit of diverging d, the simulations show agreement already for d = 5.
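For concreteness, a minimal sketch of this Monte Carlo pipeline is given below. It reuses the sampler and the linearized message passing sketched earlier as a stand-in for the full updates (33)–(36), so it illustrates the workflow rather than reproducing Figure 1; the helper names, default parameters, and seeds are our own illustrative choices.

```python
import numpy as np

def run_trial(n=800, p=1000, d=5, lam=0.8, mu=0.8, T=50, rng=None):
    """One Monte Carlo run: sample (A^G, B), run message passing, test the null, and estimate v."""
    rng = np.random.default_rng(rng)
    gamma = n / p
    A, B, v, u = sample_csbm(n, p, d, lam, mu, rng)      # sketched after Eq. (3)
    eta0 = rng.normal(0.0, 0.1, size=n)                  # N(0, 0.01) variance, as in step 2
    m0 = rng.normal(0.0, 0.1, size=p)
    eta, m = linearized_bp(A, B, lam, mu, d, gamma, eta0, m0, T)   # sketched in Section 4
    reject_null = np.linalg.norm(eta) > np.linalg.norm(eta0)       # step 3
    v_hat = np.sign(eta)                                           # step 4
    overlap = abs(v_hat @ v) / n
    return reject_null, overlap

rejections, overlaps = zip(*(run_trial(rng=s) for s in range(100)))
print("rejection rate:", np.mean(rejections), " mean overlap:", np.mean(overlaps))
```

With (λ, µ) = (0.8, 0.8) and γ = 0.8 one has λ² + µ²/γ = 1.44 > 1, so the run sits above the conjectured threshold; repeating over a grid of (λ, µ) values would reproduce the qualitative pattern of Figure 1.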
These results lend further credence to the cavity prediction in Claim 3.

Acknowledgements A.M. was partially supported by grants NSF DMS-1613091, NSF CCF-1714305 and NSF IIS-1741162. E.M. was partially supported by grants NSF DMS-1737944 and ONR N00014-17-1-2598. Y.D. would like to acknowledge Nilesh Tripuraneni for discussions about this paper.
1. What is the focus of the paper in terms of community structure recovery?
2. What are the key contributions of the proposed statistical model?
3. Can you explain the non-rigorous cavity method used for detecting the latent structure?
4. How does the paper establish the validity of the threshold in a particular limiting regime?
5. Can you describe the provided algorithm for solving the problem?
6. Are there any limitations or potential improvements for the proposed approach?
Review
This paper considers the problem of community structure recovery (in the balanced binary case) when presented with informative node covariates. The authors propose a simple statistical model for this problem and conjecture a sharp threshold for detecting the latent structure in this model (using the non-rigorous cavity method). They then rigorously establish the validity of this threshold in a particular limiting regime. Finally, they provide an algorithm for solving this problem and demonstrate empirical support for the above claims.

The paper is very well written and is bound to be of interest to the NIPS community. I did not check all the proofs in detail, but I performed some sanity checks to verify their validity. I recommend this paper be accepted to the NIPS program.

I have one suggestion for the authors: please include an intuitive discussion about the threshold derived in Claim 3, right after the statement. This will be helpful for the reader.
1. What is the main contribution of the paper regarding clustering statistical entities?
2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to identify meaningful clusters?
3. Do you have any concerns about the novelty of the presented method, considering similar works in the field?
4. How does the reviewer assess the clarity and technical soundness of the paper's content?
5. What are the questions raised by the reviewer regarding the method's resemblance to Hidden Conditional Random Fields or Markov Random Fields?
6. How does the reviewer suggest improving the paper, such as comparing the method to state-of-the-art literature and expanding the scope beyond bi-partitioning?
Review
UPDATE after authors' rebuttal: I am quite happy about the scientific answers provided by the authors. Hence I upgrade my score from 6 to 7. Hoping it will be enough for an acceptance to NIPS (competition is harsh!), and if not, still hoping to read your work somewhere!

This paper is about the clustering of statistical entities for which network data and individual measurements are available. The article reads well and is technically sound. The numerical experiments are disappointing; I don't really understand how they illustrate the performance of the method. I would have expected some accuracy measure for the method's ability to identify meaningful clusters. Moreover, I challenge the novelty of the approach presented here. It was easy to find references like https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5890472/ (very similar problem with the same kind of data) or, less recently, https://ieeexplore.ieee.org/document/4359897/ (with the implementation the authors published in https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3051335/, you could well compare to your approach). I truly feel you should compare the method you present to the state-of-the-art literature. To me, your model resembles a Hidden Conditional Random Field (or a Markov Random Field) via Equations (1) and (2).

Details:
- l17: the A data is not necessarily a 'similarity' measure; it could just be a graph measure directly defined, e.g. regulatory relationships. A similarity measure is more a representation of similar measures on nodes, hence more a proxy for a B data in your settings.
- l65: you focus on bi-partitioning, which in graphs is clearly an easier (yet challenging) special case of the general framework. Why is that? How can your method be applied with K classes and K > 2?
- I quite like the 'In other words...' explanations of your theorems.
- What is a 'Claim' for you? A Proposition? A conjecture?
- Again, your algorithm (Section 4) could be compared to traditional approaches like Variational Inference and MCMC strategies.
- In the references, make the style uniform: with or without first names?
However, these results are not optimal. Moreover, it is possible that they only hold in the regime where using either the node covariates or the graph suffices for inference. Several theoretical works [KMS16, MX16] analyze the performance of local algorithms in the semisupervised setting, i.e., where the true labels are given for a small fraction of nodes. In particular [KMS16] establishes that for the two community sparse stochastic block model, correlated recovery is impossible given any vanishing proportion of nodes. Note that this is in stark contrast to Theorem 4 (and the Claim for the sparse graph model) above, which posits that given high dimensional covariate information actually shifts the information theoretic threshold for detection and weak recovery. The analysis in [KMS16, MX16] is also local in nature, while our algorithms and their analysis go well beyond the diameter of the graph. 4 Belief propagation: algorithm and cavity prediction Recall the model (1), (2), where we are given the data (AG, B) and our task is to infer the latent community labels v. From a Bayesian perspective, a principled approach computes posterior expectation with respect to the conditional distribution P(v, u|AG, B) = P(v, u,AG, B)/P(AG, B). This is, however, not computationally tractable because it requires to marginalize over v ∈ {+1,−1}n and u ∈ Rp. At this point, it becomes necessary to choose an approximate inference procedure, such as variational inference or mean field approximations [WJ+08]. In Bayes inference problem on locally-tree like graphs, belief propagation is optimal among local algorithms (see for instance [DM15] for an explanation of why this is the case). The algorithm proceeds by computing, in an iterative fashion vertex messages ηti ,m t a for i ∈ [n], a ∈ [p] and edge messages ηti→j for all pairs (i, j) that are connected in the graph G. For a vertex i of G, we denote its neighborhood in G by ∂i. Starting from an initialization (ηt0 ,mt0)t0=−1,0, we update the messages in the following linear fashion: ηt+1i→j = √ µ γ (BTmt)i − µ γ ηt−1i + λ√ d ∑ k∈∂i\j ηtk→i − λ √ d n ∑ k∈[n] ηtk, (13) ηt+1i = √ µ γ (BTmt)i − µ γ ηt−1i + λ√ d ∑ k∈∂i ηtk→i − λ √ d n ∑ k∈[n] ηtk, (14) mt+1 = √ µ γ Bηt − µmt−1. (15) Here, and below, we will use ηt = (ηti)i∈[n], m t = (mta)a∈[p] to denote the vectors of vertex messages. After running the algorithm for some number of iterations tmax, we return, as an estimate, the sign of the vertex messages ηtmaxi , i.e. v̂i(A G, B) = sgn(ηtmaxi ). (16) These update equations have a number of intuitive features. First, in the case that µ = 0, i.e. we have no covariate information, the edge messages become: ηt+1i→j = λ√ d ∑ k∈∂i\j ηtk→i − λ √ d n ∑ k∈[n] ηtk, (17) which corresponds closely to the spectral power method on the nonbacktracking walk matrix of G [KMM+13]. Conversely, when λ = 0, the updates equations on mt, ηt correspond closely to the usual power iteration to compute singular vectors of B. We obtain this algorithm from belief propagation using two approximations. First, we linearize the belief propagation update equations around a certain ‘zero information’ fixed point. Second, we use an ‘approximate message passing’ version of the belief propagation updates which results in the addition of the memory terms in Eqs. (13), (14), (15). The details of these approximations are quite standard and deferred to Appendix D. 
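As an illustration of the linearized updates in Eqs. (13)–(15) and of the estimate (16), one possible implementation is sketched below. This is written for this exposition and is not the authors' code: the variable names, the dictionary-based storage of edge messages, and the small random initialization are our choices, and no attempt is made at efficiency.

```python
import numpy as np

def linearized_message_passing(A, B, lam, mu, d, gamma, t_max=50, seed=0):
    """Linearized message passing of Eqs. (13)-(15); returns sgn(eta), Eq. (16).

    A : n x n adjacency matrix of G,  B : p x n covariate matrix.
    """
    rng = np.random.default_rng(seed)
    n, p = A.shape[0], B.shape[0]
    neighbors = [np.flatnonzero(A[i]) for i in range(n)]

    # vertex messages eta_i, m_a and edge messages eta_{i->j}
    eta = rng.normal(scale=0.1, size=n)
    m = rng.normal(scale=0.1, size=p)
    eta_prev = np.zeros(n)          # eta^{-1} = 0
    m_prev = np.zeros(p)            # m^{-1}  = 0
    eta_edge = {(i, j): rng.normal(scale=0.1) for i in range(n) for j in neighbors[i]}

    for _ in range(t_max):
        field = np.sqrt(mu / gamma) * (B.T @ m) - (mu / gamma) * eta_prev
        mean_term = (lam * np.sqrt(d) / n) * eta.sum()

        new_eta = np.empty(n)
        new_edge = {}
        for i in range(n):
            incoming = {k: eta_edge[(k, i)] for k in neighbors[i]}
            s = sum(incoming.values())
            new_eta[i] = field[i] + (lam / np.sqrt(d)) * s - mean_term      # Eq. (14)
            for j in neighbors[i]:
                new_edge[(i, j)] = (field[i]                                # Eq. (13)
                                    + (lam / np.sqrt(d)) * (s - incoming[j])
                                    - mean_term)
        new_m = np.sqrt(mu / gamma) * (B @ eta) - mu * m_prev               # Eq. (15)

        eta_prev, m_prev = eta, m
        eta, m, eta_edge = new_eta, new_m, new_edge

    return np.sign(eta)                                                     # Eq. (16)
```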
For a heuristic discussion, we refer the interested reader to the tutorials [Mon12, TKGM14] (for the Gaussian approximation) and the papers [DKMZ11, KMM+13] (for the linearization procedure). As with belief propagation, the behavior of this iterative algorithm in the limit p, n → ∞ can be tracked using a distributional recursion called density evolution.

Definition 1 (Density evolution). Let (m̄, U) and (η̄, V) be independent random vectors such that U ∼ N(0, 1), V ∼ Uniform({±1}), and m̄, η̄ have finite variance. Further assume that (η̄, V) =d (−η̄, −V) and (m̄, U) =d (−m̄, −U) (where =d denotes equality in distribution). We then define new random pairs (m̄′, U′) and (η̄′, V′), where U′ ∼ N(0, 1), V′ ∼ Uniform({±1}), via the following distributional equations:

m̄′ | U′ =d µ E{V η̄} U′ + (µ E{η̄²})^{1/2} ζ₁,   (18)

η̄′ | V′ = +1 =d (λ/√d) [ Σ_{k=1}^{k_+} η̄_k|_+ + Σ_{k=1}^{k_-} η̄_k|_- ] − λ√d E{η̄} + (µ/γ) E{U m̄} + ((µ/γ) E{m̄²})^{1/2} ζ₂.   (19)

Here we use the notation X | Y =d Z to mean that the conditional distribution of X given Y is the same as the (unconditional) distribution of Z. Notice that the distribution of η̄′ | V′ = − is determined by the last equation using the symmetry property. Further, η̄_k|_+ and η̄_k|_- denote independent random variables distributed (respectively) as η̄ | V = + and η̄ | V = −. Finally, k_+ ∼ Poiss(d/2 + λ√d/2), k_- ∼ Poiss(d/2 − λ√d/2), ζ₁ ∼ N(0, 1) and ζ₂ ∼ N(0, 1) are mutually independent and independent from the previous random variables. The density evolution map, denoted by DE, is defined as the mapping from the law of (η̄, V, m̄, U) to the law of (η̄′, V′, m̄′, U′). With a slight abuse of notation, we will omit V, U, V′, U′, whose distribution is left unchanged, and write

(η̄′, m̄′) = DE(η̄, m̄).   (20)

The following claim is the core of the cavity prediction. It states that the density evolution recursion faithfully describes the distribution of the iterates ηᵗ, mᵗ.

Claim 7. Let (η̄⁰, V), (m̄⁰, U) be random vectors satisfying the conditions of Definition 1. Define the density evolution sequence (η̄ᵗ, m̄ᵗ) = DEᵗ(η̄⁰, m̄⁰), i.e. the result of iteratively applying the mapping DE t times. Consider the linear message passing algorithm of Eqs. (13) to (15), with the following initialization. We set (m⁰_r)_{r∈[p]} conditionally independent given u, with conditional distribution m⁰_r | u =d m̄⁰ |_{U = √p u_r}. Analogously, η⁰_i, η⁰_{i→j} are conditionally independent given v, with η⁰_i | v =d η̄⁰ |_{V = v_i} and η⁰_{i→j} | v =d η̄⁰ |_{V = v_i}. Finally, η⁻¹_i = η⁻¹_{i→j} = m⁻¹_r = 0 for all i, j, r. Then, as n, p → ∞ with p/n → 1/γ, the following holds for uniformly random indices i ∈ [n] and a ∈ [p]:

(mᵗ_a, u_a √p) ⇒_d (m̄ᵗ, U),   (21)
(ηᵗ_i, v_i) ⇒_d (η̄ᵗ, V).   (22)

The following simple lemma shows the instability of the density evolution recursion.

Lemma 8. Under the density evolution mapping, we obtain the random variables (η̄′, m̄′) = DE(η̄, m̄). Let m and m′ denote the vectors of the first two moments of (η̄, V, m̄, U) and (η̄′, V′, m̄′, U′), defined as follows:

m = (E{V η̄}, E{U m̄}, E{η̄²}, E{m̄²}),   (23)

and similarly for m′. Then, for ‖m‖₂ → 0, we have

m′ = M m + O(‖m‖²), where M is the 4 × 4 matrix with rows (λ², µ/γ, 0, 0), (µ, 0, 0, 0), (0, 0, λ², µ/γ), (0, 0, µ, 0).   (24)

In particular, the linearized map m ↦ m′ at m = 0 has spectral radius larger than one if and only if λ² + µ²/γ > 1. The interpretation of the cavity prediction and the instability lemma is as follows.
If we choose an initialization (η̄0, V ), (m̄0, U) with η̄0, m̄0 positively correlated with V and U , then this correlation increases exponentially over time if and only if λ2 + µ2/γ > 15. In other words, a small initial correlation is amplified. While we do not have an initialization that is positively correlated with the true labels, a random initialization η0,m0 has a random correlation with v, u of order 1/ √ n. If λ2 + µ2/γ > 1, this correlation is amplified over iterations, yielding a nontrivial reconstruction of v. On the other hand, if λ2 + µ2/γ < 1 then this correlation is expected to remain small, indicating that the algorithm does not yield a useful estimate. 5 Proof overview As mentioned above, a key step of our analysis is provided by Theorem 6, which establishes a weak recovery threshold for the Gaussian observation model of Eqs. (8), (9). The proof proceeds in two steps: first, we prove that, for λ2 +µ2/γ < 1 it is impossible to distinguish between data A,B generated according to this model, and data generated according to the null model µ = λ = 0. Denoting by Pλ,µ the law of data A,B, this is proved via a standard second moment argument. Namely, we bound the chi square distance uniformly in n, p χ2(Pλ,µ,P0,0) ≡ E0,0 {( dPλ,µ dP0,0 )2} − 1 ≤ C , (25) and then bound the total variation distance by the chi-squared distance ‖Pλ,µ − P0,0‖TV ≤ 1 − (χ2(Pλ,µ,P0,0) + 1)−1. This in turn implies that no test can distinguish between the two hypotheses with probability approaching one as n, p→∞. The chi-squared bound also allows to show that weak recovery is impossible in the same regime. In order to prove that weak recovery is possible for λ2 + µ2/γ > 1, we consider the following optimization problem over x ∈ Rn, y ∈ Rp: maximize 〈x,Ax〉+ b∗〈x,By〉, (26) subject to ‖x‖2 = ‖y‖2 = 1 . (27) where b∗ = 2µλγ . Denoting solution of this problem by (x̂, ŷ), we output the (soft) label estimates v̂ = √ nx̂. This definition turns out to be equivalent to the spectral algorithm in the statement of Theorem 6, and is therefore efficiently computable. This optimization problem undergoes a phase transition exactly at the weak recovery threshold λ2 + µ2/γ = 1, as stated below. Lemma 9. Denote by T = Tn,p(A,B) the value of the optimization problem (26). (i) If λ2 + µ 2 γ < 1, then, almost surely lim n,p→∞ Tn,p(A,B) = 2 √ 1 + b2∗γ 4 + b∗ . (28) (ii) If λ, µ > 0, and λ2 + µ 2 γ > 1 then there exists δ = δ(λ, µ) > 0 such that, almost surely lim n,p→∞ Tn,p(A,B) = 2 √ 1 + b2∗γ 4 + b∗ + δ(λ, µ) . (29) (iii) Further, define T̃n,p(δ̃;A,B) = sup ‖x‖=‖y‖=1,|〈x,v〉|<δ̃ √ n [ 〈x,Ax〉+ b∗〈x,By〉 ] . 5Notice that both the messages variance E(η2) and covariance with the ground truth E(ηV ) increase, but the normalized correlation (correlation divided by standard deviation) increases. Then for each δ > 0, there exists δ̃ > 0 sufficiently small, such that, almost surely lim n,p→∞ T̃n,p(δ̃;A,B) < 2 √ 1 + b2∗γ 4 + b∗ + δ 2 . (30) The first two points imply that Tn,p(A,B) provide a statistic to distinguish between P0,0 and Pλ,µ with probability of error that vanishes as n, p→∞ if λ2 +µ2/γ > 1. The third point (in conjunction with the second one) guarantees that the maximizer x̂ is positively correlated with v, and hence implies weak recovery. In fact, we prove a stronger result that provides an asymptotic expression for the value Tn,p(A,B) for all λ, µ. We obtain the above phase-transition result by specializing the resulting formula in the two regimes λ2 + µ2/γ < 1 and λ2 + µ2/γ > 1. 
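For concreteness, the spectral estimator described in Theorem 6 (equivalently, a solver for the optimization problem (26)–(27)) can be sketched as follows. This is an illustrative implementation under our own assumptions: it takes the Gaussian-model observation A of Eq. (8) as input (applying it to the graph model would first require centering and rescaling the adjacency matrix), uses a bounded scalar minimization over an arbitrary search interval for ξ, and recomputes full eigendecompositions, so it is only intended for moderate n.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def spectral_estimate(A, B, lam, mu, gamma):
    """Theorem 6 estimator: minimize lambda_max(M(xi)) over xi > 0 and return
    the leading eigenvector of M(xi*), scaled so that its norm is sqrt(n)."""
    n = A.shape[0]
    BtB = B.T @ B
    eye = np.eye(n)

    def M(xi):
        # Eq. (12): M(xi) = A + (2 mu^2 / (lam^2 gamma^2 xi)) B^T B + (xi/2) I_n
        return A + (2.0 * mu**2 / (lam**2 * gamma**2 * xi)) * BtB + (xi / 2.0) * eye

    def lam_max(xi):
        return np.linalg.eigvalsh(M(xi))[-1]        # eigvalsh sorts ascending

    xi_star = minimize_scalar(lam_max, bounds=(1e-3, 1e3), method="bounded").x
    _, eigvecs = np.linalg.eigh(M(xi_star))
    x_hat = eigvecs[:, -1]
    return np.sqrt(n) * x_hat / np.linalg.norm(x_hat)

# Example: overlap with the ground-truth labels v
# overlap = abs(spectral_estimate(A, B, lam, mu, gamma) @ v) / len(v)
```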
We prove this asymptotic formula by Gaussian process comparison, using Sudakov-Fernique inequality. Namely, we compare the Gaussian process appearing in the optimization problem of Eq. (26) with the following ones: X1(x, y) = λ n 〈x, v0〉2 + 〈x, g̃x〉+ b∗ √ µ n 〈x, v0〉〈y, u0〉+ 〈y, g̃y〉 , (31) X2(x, y) = λ n 〈x, v0〉2 + 1 2 〈x, W̃xx〉+ b∗ √ µ n 〈x, v0〉〈y, u0〉+ 1 2 〈y, W̃yy〉 , (32) where g̃x, g̃y are isotropic Gaussian vectors, with suitably chosen variances, and W̃x, W̃y are GOE matrices, again with properly chosen variances. We prove that maxx,y X1(x, y) yields an upper bound on Tn,p(A,B), and maxx,y X2(x, y) yields a lower bound on the same quantity. Note that maximizing the first process X1(x, y) essentially reduces to solving a separable problem over the coordinates of x and y and hence to an explicit expression. On the other hand, maximizing the second process leads (after decoupling the term 〈x, v0〉〈y, u0〉) to two separate problems, one for the vector x, and the other for y. Each of the two problems reduce to finding the maximum eigenvector of a rank-one deformation of a GOE matrix, a problem for which we can leverage on significant amount of information from random matrix theory. The resulting upper and lower bound coincide asymptotically. As is often the case with Gaussian comparison arguments, the proof is remarkably compact, and somewhat surprising (it is unclear a priori that the two bounds should coincide asymptotically). While upper bounds by processes of the type of X1(x, y) are quite common in random matrix theory, we think that the lower bound by X2(x, y) (which is crucial for proving our main theorem) is novel and might have interesting generalizations. 6 Experiments We demonstrate the efficacy of the full belief propagation algorithm, restated below: ηt+1i = √ µ γ ∑ q∈[p] Bqim t q − µ γ ( ∑ q∈[p] B2qi τ tq ) tanh(ηt−1i ) + ∑ k∈∂i f(ηtk→i; ρ)− ∑ k∈[n] f(ηtk; ρn) , (33) ηt+1i→j = √ µ γ ∑ q∈[p] Bqim t q − µ γ ( ∑ q∈[p] B2qi τ tq ) tanh(ηt−1i ) + ∑ k∈∂i\j f(ηtk→i; ρ)− ∑ k∈[n] f(ηtk; ρn) , (34) mt+1q = √ µ/γ τ t+1q ∑ j∈[n] Bqj tanh(η t j)− µ γτ t+1q ( ∑ j∈[n] B2qjsech 2(ηtj) ) mt−1q (35) τ t+1q = 1 + µ− µ γ ∑ j∈[n] B2qjsech 2(ηtj) −1 . (36) Here the function f(; ρ) and the parameters ρ, ρn are defined as: f(z; ρ) ≡ 1 2 log (cosh(z + ρ) cosh(z − ρ) ) , (37) ρ ≡ tanh−1(λ/ √ d) , (38) ρn ≡ tanh−1 ( λ√d n− d ) . (39) We refer the reader to Appendix D for a derivation of the algorithm. As demonstrated in Appendix D, the BP algorithm in Section 4 is obtained by linearizing the above in η. In our experiments, we perform 100 Monte Carlo runs of the following process: 1. Sample AG, B from Pλ,µ with n = 800, p = 1000, d = 5. 2. Run BP algorithm for T = 50 iterations with random initialization η0i , η −1 i ,m 0 a,m −1 a ∼iid N(0, 0.01). yielding vertex and covariate iterates ηT ∈ Rn, mT ∈ Rp. 3. Reject the null hypothesis if ∥∥ηT∥∥ 2 > ∥∥η0∥∥ 2 , else accept the null. 4. Return estimates v̂BPi = sgn(η T i ), û BP a = m T a / ∥∥mT∥∥ 2 . Figure 1 (left) shows empirical probabilities of rejecting the null for (λ, µ) ∈ [0, 1]× [0,√γ]. The next two plots display the mean overlap |〈v̂BP, v〉/n| and 〈ûBP, u〉/ ‖u‖ achieved by the BP estimates (lighter is higher overlap). Below the theoretical curve (red) of λ2 + µ2/γ = 1, the null hypothesis is accepted and the estimates show negligible correlation with the truth. These results are in excellent agreement with our theory. Importantly, while our rigorous result holds only in the limit of diverging d, the simulations show agreement already for d = 5. 
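For reference, the nonlinear ingredients of the full BP iteration, Eqs. (37)–(39), and the rejection rule of Step 3 of the Monte Carlo protocol can be written compactly as below. This is a small helper sketch (the numerically stable log-cosh is our own addition), not the authors' implementation.

```python
import numpy as np

def log_cosh(x):
    # numerically stable log(cosh(x)); avoids overflow of cosh for large |x|
    x = np.abs(x)
    return x + np.log1p(np.exp(-2.0 * x)) - np.log(2.0)

def f(z, rho):
    # Eq. (37): f(z; rho) = (1/2) * log( cosh(z + rho) / cosh(z - rho) )
    return 0.5 * (log_cosh(z + rho) - log_cosh(z - rho))

def rho_edge(lam, d):
    # Eq. (38): rho = arctanh(lambda / sqrt(d))
    return np.arctanh(lam / np.sqrt(d))

def rho_nonedge(lam, d, n):
    # Eq. (39): rho_n = arctanh(lambda * sqrt(d) / (n - d))
    return np.arctanh(lam * np.sqrt(d) / (n - d))

def reject_null(eta_T, eta_0):
    # Step 3 of the Monte Carlo protocol: reject H0 when ||eta^T||_2 > ||eta^0||_2
    return np.linalg.norm(eta_T) > np.linalg.norm(eta_0)
```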
These simulation results lend further credence to the cavity prediction of Claim 3. Acknowledgements A.M. was partially supported by grants NSF DMS-1613091, NSF CCF-1714305 and NSF IIS-1741162. E.M. was partially supported by grants NSF DMS-1737944 and ONR N00014-17-1-2598. Y.D. would like to acknowledge Nilesh Tripuraneni for discussions about this paper.
1. What is the focus of the paper in terms of community detection?
2. What are the strengths of the proposed approach, particularly in its ability to handle both sparse graphs and high-dimensional node covariates?
3. What are the weaknesses of the paper, specifically regarding the problem's contrived nature?
4. Can you provide more details about the threshold prediction and how it relates to the two extreme cases?
5. How does the paper's main result confirm the threshold prediction, and what is the significance of this confirmation?
Review
Review
This paper considered the problem of detecting latent community structure given a sparse graph along with high-dimensional node covariates. The problems with either a sparse graph or high-dimensional node covariates alone have been rather extensively studied in the literature, so the modeling innovation is community detection based on both pieces of information correlated with the same latent structure. To me, this problem seems to be rather contrived. Not surprisingly, the threshold prediction is one (see equation (4)) that naturally bridges the two extreme cases where only one piece of information is available (see Theorems 1 and 2). From a technical viewpoint, the main result is a formal confirmation of the threshold prediction in the limit of large degrees via a Gaussian model. The proof is rather nontrivial and appears to be interesting.
NIPS
Title Second Order Optimality in Decentralized Non-Convex Optimization via Perturbed Gradient Tracking Abstract In this paper we study the problem of escaping from saddle points and achieving second-order optimality in a decentralized setting where a group of agents collaborate to minimize their aggregate objective function. We provide a non-asymptotic (finite-time) analysis and show that by following the idea of perturbed gradient descent, it is possible to converge to a second-order stationary point in a number of iterations which depends linearly on dimension and polynomially on the accuracy of second-order stationary point. Doing this in a communication-efficient manner requires overcoming several challenges, from identifying (first order) stationary points in a distributed manner, to adapting the perturbed gradient framework without prohibitive communication complexity. Our proposed Perturbed Decentralized Gradient Tracking (PDGT) method consists of two major stages: (i) a gradientbased step to find a first-order stationary point and (ii) a perturbed gradient descent step to escape from a first-order stationary point, if it is a saddle point with sufficient curvature. As a side benefit of our result, in the case that all saddle points are non-degenerate (strict), the proposed PDGT method finds a local minimum of the considered decentralized optimization problem in a finite number of iterations. 1 Introduction Recently, we have witnessed an unprecedented increase in the amount of data that is gathered in a distributed fashion and stored over multiple agents (machines). Moreover, the advances in data-driven systems such as Internet of Things, health-care, and multi-agent robotics demand for developing machine learning frameworks that can be implemented in a distributed manner. Simultaneously, convex formulations for training machine learning tasks have been replaced by nonconvex representations such as neural networks. These rapid changes call for the development of a class of communication-efficient algorithms to solve nonconvex decentralized learning problems. In this paper, we focus on a nonconvex decentralized optimization problem where a group of m agents collaborate to minimize their aggregate loss function, while they are allowed to exchange information only with their neighbors. To be more precise, the agents (nodes) aim to solve min x∈Rd f(x) = 1 m m∑ i=1 fi(x), (1) where fi : Rd → R is the objective function of node i which is possibly nonconvex. Finding the global minimizer of this problem, even in the centralized setting where all the functions are available at a single machine, is hard. Given this hardness result, we often settle for finding a stationary point of Problem (1). There have been several lines of work on finding an approximate first-order 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. stationary point of this distributed problem, i.e., finding a set of local solutions x̃1, . . . , x̃m where their average x̃avg has a small gradient norm ‖∇f(x̃avg)‖ and a small consensus error ∑m i=1 ‖x̃i− x̃avg‖. Achieving first-order optimality, however, in nonconvex settings may not lead to a satisfactory solution as it could be a poor saddle point. Therefore, finding a second-order stationary point could improve the quality of the solution. 
In fact, when all saddle points are non-degenerate finding a second-order stationary point implies convergence to a local-minimum, and in several problems including matrix completion [1], phase retrieval [2], and dictionary learning [3] local minima are global minima. While convergence to a second-order stationary point for the centralized setting has been extensively studied in the recent literature, the non-asymptotic complexity analysis of finding such a point for decentralized problems (under standard smoothness assumptions) has thus far evaded solution, in part because of significant additional challenges presented by communication limitations. A major difference between the centralized and the decentralized framework lies in the exchange of information between the nodes. Exchanging Hessian information is, of course, prohibitively expensive. Furthermore, turning to approximating schemes has the potential to create catastrophic problems for the algorithm, as small errors in approximation across the nodes could lead to inconsistent updates that could reverse progress made by prior steps. Moreover, escaping from first-order stationary points requires identifying that the algorithm has reached such a point, and accomplishing even this basic step in a communication-efficient manner presents challenges. Contributions. In this paper we develop a novel gradient-based method for escaping from saddle points in a decentralized setting and characterize its overall communication cost for achieving a second-order stationary point. The proposed Perturbed Decentralized Gradient Tracking (PDGT) algorithm consists of two major steps: (i) A local decentralized gradient tracking scheme to find a first-order stationary point, while maintaining consensus by averaging over neighboring iterates; (ii) A perturbed gradient tracking scheme to escape from saddle points that are non-degenerate. We show that to achieve an ( , γ, ρ)-second-order stationary point (see Definition 2) the proposed PDGT algorithm requires at most Θ̃ ( max { f(x0)−f∗ (1−σ)2 min{ 2,ρ2}γ3 , d γ6 }) rounds of communication, where d is dimension, f(x0) is the initial objective function value, f∗ is the optimal function value, and σ is the second largest eigenvalue of mixing matrix in terms of absolute norm which depends on the connectivity of the underlying graph. To the best of our knowledge, this result provides the first non-asymptotic guarantee for achieving second-order optimality in decentralized optimization under standard smoothness assumptions. 1.1 Related Work Centralized settings. Convergence to a first-order stationary point for centralized settings has been extensively studied in the nonconvex literature [4–13]. A recent line of work focuses on improving these guarantees and achieving second-order optimality in a finite number of iterations. These schemes can be divided into three categories: (i) fully gradient-based methods which use the perturbation idea for escaping from saddle points once iterates reach a point with small gradient norm [14–16]; (ii) methods which utilize the eigenvector corresponding to the smallest eigenvalue of the Hessian to find an escape direction [5, 6, 17–21]; and (iii) trust-region [22, 23] and cubic regularization algorithms [24–26] which require solving a quadratic or cubic subproblem, respectively, at each iteration. These methods, however, cannot be applied to decentralized settings directly as they require access to the gradient or Hessian of the global objective function. 
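To make the perturbation idea of category (i) concrete, a minimal centralized sketch in the spirit of [15] is given below; the step size, thresholds, perturbation radius, and stopping rule are simplified placeholders of our own choosing and do not reproduce the constants or guarantees of [15].

```python
import numpy as np

def perturbed_gd(grad, x0, eta=0.01, g_thresh=1e-3, radius=1e-3,
                 t_escape=100, max_iter=10_000, seed=0):
    """Minimal sketch of centralized perturbed gradient descent: take gradient
    steps, and when the gradient is small, add a uniform perturbation drawn
    from a small ball and continue, hoping to escape a strict saddle."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    last_perturb = float("-inf")
    for t in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= g_thresh and t - last_perturb > t_escape:
            xi = rng.normal(size=x.shape)
            xi *= radius * rng.random() ** (1.0 / x.size) / np.linalg.norm(xi)
            x = x + xi                  # uniform draw from a ball of the given radius
            last_perturb = t
        else:
            x = x - eta * g
    return x
```

Phase II of PDGT adapts this idea to the decentralized setting, where the additional difficulties include injecting the same noise at every node and deciding, without a central coordinator, whether the perturbation produced sufficient decrease.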
First-order optimality in decentralized settings. Recently, several iterative methods have been introduced and studied for achieving first-order optimality in decentralized settings. In particular, [27– 29] show convergence to a first-order stationary point by leveraging successive convex approximation techniques and using dynamic consensus protocols. Also, a similar guarantee has been established for several well-known decentralized algorithms including distributed gradient descent [30, 31], primaldual schemes [32–34], gradient tracking methods [35, 36], and decentralized alternating direction method of multipliers (ADMM) [37]. Second-order optimality in decentralized settings. Finding a second-order stationary point in a distributed setting has been studied by several works [38–41], but they all only provide asymptotic guarantees. The most related work to our submission is [42] which studies non-asymptotic convergence of stochastic gradient-based diffusion method for decentralized settings. However, the result of this work is obtained under two relatively less common assumptions. First, it requires a bounded gradient disagreement condition which ensures that the local gradients∇fi are not far from the global gradient∇f (Assumption 3 in [42]). Second, it assumes that the computed stochastic gradient near a saddle point is such that there is gradient noise present along some descent direction, spanned by the eigenvectors corresponding to the negative eigenvalues of the Hessian, i.e., stochastic gradient leads to an escape direction (Assumption 7 in [42]). Both these assumptions, and, in particular, the second one may not hold in general decentralized settings, and they both significantly simplify the analysis of escaping from saddle points. Unlike [42], the theoretical results presented here do not require assuming these restrictive conditions, and our paper provides the first non-asymptotic guarantee for achieving second-order optimality in decentralized settings, under standard smoothness assumptions. In fact, the conditions that we assume for proving our results are identical to the ones used in [15] for the analysis of perturbed gradient method in the centralized setting. 2 Preliminaries The problem in (1) is defined over a set of m connected agents (nodes) where each one has access to a component of the objective function. We denote the underlying undirected connectivity graph by G = {V,E}, where V = {1, . . . ,m} is the set of vertices (nodes) and E is the set of edges. As this graph is undirected, if node i can send information to node j, then the reverse communication is also possible. We call two nodes neighbors if there exists an edge between them. We further denote the neighborhood of node i by Ni, which also includes node i itself. Since the optimization variable x in (1) appears in each summand of the objective function, this problem is not decomposable into subproblems that can be solved simultaneously over nodes of the network. To make the objective function separable we introduce m local variables xi ∈ Rd, and instead of minimizing 1m ∑m i=1 fi(x) in (1), we minimize the objective function 1 m ∑m i=1 fi(xi). To ensure that these two problems are equivalent, we enforce the local decision variables to be equal to each other. Since the graph is connected, this condition can be replaced by consensus among neighboring nodes, and therefore the resulting problem can be written as min x=[x1;x2;...;xm]∈Rmd F (x) := 1 m m∑ i=1 fi(xi) s.t. xi = xj , ∀(i, j) ∈ E. 
(2) Note that in (2) we have introduced the notation x ∈ Rmd to indicate the concatenation of all local variables x := [x1;x2; ...;xm] and defined the function F : Rmd → R as F (x) := 1m ∑m i=1 fi(xi). It can be verified that x∗ is an optimal solution of Problem (1) if and only if x∗ := [x∗; . . . ;x∗] is an optimal solution of Problem (2). In the rest of the paper, therefore, we focus on solving Problem (2) as its objective function is node-separable. We should mention that solving this problem is still challenging as the constraints of this problem are coupled. In this paper, we only assume standard smoothness conditions for the local objective functions fi to establish our theoretical guarantees. Assumption 1. The local functions fi have Lipschitz continuous gradient with constant L1, i.e., for all i ∈ {1, . . . ,m} and any x ∈ Rd and x′ ∈ Rd we have ‖∇fi(x)−∇fi(x′)‖ ≤ L1 ‖x− x′‖. Assumption 2. The local functions fi have Lipschitz continuous Hessian with constant L2, i.e., for all i ∈ {1, . . . ,m} and any x ∈ Rd and x′ ∈ Rd we have ∥∥∇2fi(x)−∇2fi(x′)∥∥ ≤ L2 ‖x− x′‖. The gradient Lipschitz continuity condition in Assumption 1 is customary for the analysis of gradientbased methods. The condition in Assumption 2 is also required to ensure that the function is well-behaved near its saddle stationary points. Finding an optimal solution of (1) or (2) is hard since the local functions fi are nonconvex. Hence, we settle for finding a stationary point. In the centralized unconstrained case, a first-order stationary point of function f satisfies ‖∇f(x̂)‖ = 0, and an approximate -first-order stationary point is defined as ‖∇f(x̂)‖ ≤ . For the constrained decentralized problem in (2) the notion of first-order stationarity should address both stationarity and feasibility as we state in the following definition. Definition 1. A set of vectors {x̂i}mi=1 is an ( , ρ)-first-order stationary point of Problem (2) if∥∥∥∥ 1m m∑ i=1 ∇fi(x̂i) ∥∥∥∥ ≤ , 1m m∑ i=1 ∥∥∥∥x̂i − 1m m∑ j=1 x̂j ∥∥∥∥ ≤ ρ. (3) Algorithm 1: PDGT algorithm 1: Input: x0,∇f(x0), , γ, ρ, δ1, δ2 2: Set xi = x0, yi = ∇f(x0), T1 = Θ̃ ( f(x0)−f∗ (1−σ)2 min{ 2,ρ2} ) , T2 = Θ̃ ( d log(1/γδ2) γ3 ) , η1 = Θ̃ ( (1− σ)2 ) , η2 =Θ̃ ( γ2 d(1−σ) ) , R = Θ̃ ( γ 3 2 ) , B = Θ̃ ( γ3 ) ; 3: Call (x̃) = PDGT Phase I (x,y, η1, T1, δ1); 4: Call (x̂, ŷ, S) = PDGT Phase II (x̃, η2, T2,R, B); 5: if S = 1 then 6: Return x̂ as a second-order stationary point and stop; 7: else 8: Set x = x̂, y = ŷ and go to Step 3; 9: end if The first condition in the above definition ensures that the gradient norm is sufficiently small, while the second condition ensures that the iterates are close to their average. It can be shown that if [x̂1, . . . , x̂m] is an ( , ρ)-first-order stationary point of Problem (2), then their average x̂avg := 1 m ∑m i=1 x̂i is an ( +L1ρ)-first-order stationary point of Problem (1), i.e., ‖ 1 m ∑m i=1∇fi(x̂avg)‖ ≤ + L1ρ. The proof of this claim is available in the supplementary material. The same logic holds for second-order stationary points. In the centralized case, x is an ( , γ)-secondorder stationary point if ‖∇f(x̂)‖ ≤ and ∇2f(x̂) −γ I. Similarly, we define a second-order stationary point of Problem (2) with an extra condition that enforces consensus approximately. Definition 2. A set of vectors {x̂i}mi=1 is an ( , γ, ρ)-second-order stationary point of Problem (2) if∥∥∥∥ 1m m∑ i=1 ∇fi(x̂i) ∥∥∥∥ ≤ , 1m m∑ i=1 ∇2fi(x̂i) −γ I, 1 m m∑ i=1 ∥∥∥∥x̂i − 1m m∑ j=1 x̂j ∥∥∥∥ ≤ ρ. 
(4) Note that under Assumptions 1 and 2, it can be shown that if the local solutions [x̂1, . . . , x̂m] form an ( , γ, ρ)-second-order stationary point of Problem (2), then their average x̂avg := 1m ∑m i=1 x̂i is an ( + L1ρ, γ + L2ρ)-second-order stationary point of Problem (1), i.e., ‖ 1m ∑m i=1∇fi(x̂avg)‖ ≤ + L1ρ and 1m ∑m i=1∇2fi(x̂avg) −(γ + L2ρ) I. For proof check the supplementary material. 3 Perturbed Decentralized Gradient Tracking Algorithm We now present our proposed Perturbed Decentralized Gradient Tracking (PDGT) algorithm. The PDGT method presented in Algorithm 1 can be decomposed into two phases. Phase I of our method uses the gradient tracking ideas proposed in [35,36] to show convergence to some first-order stationary point. Using this scheme for our setup, however, requires overcoming the following hurdle: The nodes do not have access to the global gradient and thus even the task of realizing that they lie close to such a point is not trivial. Moreover, the consensus error is cumulative over the graph and tracking this quantity for each node is an additional challenge. In prior work, it has been shown that there exists an iterate that achieves first-order optimality without explicitly introducing a mechanism for identifying such an iterate. In this paper, we address this issue by utilizing an average consensus protocol as a subroutine of Phase I, which coordinates the nodes and finds with high probability and negligible communication overhead the correct index achieving first-order optimality. Phase II of PDGT utilizes ideas from centralized perturbed gradient descent developed in [15], in order to escape saddle points. Adapting these ideas to the decentralized setting poses several challenges. A naive use of an approximation scheme could produce further issues as the noise could lead different nodes to take different escaping directions, potentially canceling each other out. Further, in order to control the consensus error and the gradient tracking disagreement we adopt a significantly smaller step size than the one used in the centralized case. Finally, using a common potential function both for Phase I and Phase II derives an interesting tradeoff between the corresponding stepsizes. Taking into account all these challenges we design PDGT to guarantee escaping from strict saddle points. In particular, we show that at the end of the second phase, either a carefully chosen potential function decreases - PDGT escapes from a saddle point - and we go back to Phase I, or an approximate Algorithm 2: PDGT algorithm: Phase I 1: Input: x,y, η1, T1, δ1 2: Initialization: x0 = x, y0 = y; 3: for r = 1, . . . , T1 do 4: Compute xri = ∑ j∈Ni wijx r−1 j − η1y r−1 i ; ∀i = 1, . . . ,m 5: Compute yri = ∑ j∈Ni wijy r−1 j +∇fi(xri )−∇fi(x r−1 i ); ∀i = 1, . . . ,m 6: Exchange xri and y r i with neighboring nodes; ∀i = 1, . . . ,m 7: end for 8: for j = 1 : log( 1δ1 ) do 9: Choose index t̃j ∼ [0, T1] uniformly at random and run Consensus Protocol on t̃j to find first order stationary point x̃ with small gradient tracking disagreement; 10: end for Result: Returns first order stationary point x̃ with probability at least 1− δ1 second-order stationary point has been reached and the exact iterate is reported. Next, we present the details of both phases of PDGT. Phase I. Consider∇fi(xi), the local gradient of node i, and define yi ∈ Rd as the variable of node i which is designed to track the global average gradient 1m ∑m i=1∇fi(xi). 
The algorithm proceeds to update the iterates xi based on the directions of yi. More specifically, at each iteration r, each agent i first updates its local decision variable by averaging its local iterate with the iterates of its neighbors and descending along the negative direction of its gradient estimate yr−1i , i.e., xri = ∑ j∈Ni wijx r−1 j − η1y r−1 i , (5) where η1 is the stepsize and wij is the weight that node i assigns to the information that it receives from node j. We assume that wij > 0 only for the nodes j that are in the neighborhood of node i, which also includes node i itself. Further, the sum of these weights is 1, i.e., ∑ j∈Ni wij = 1. Once the local xi’s are updated, each agent i computes its local gradient ∇fi(xri ) evaluated at its current iterate xri . Then, the nodes use the gradient tracking variable y r−1 i received from their neighbors in the previous round to update their gradient tracking vector according to the update yri = ∑ j∈Ni wijy r−1 j +∇fi(x r i )−∇fi(xr−1i ), (6) Note that the update in (6) shows that node i computes its new global gradient estimate by combining its previous local estimate with the ones communicated by its neighbors as well as the difference of its two consecutive local gradients. Once the local gradient tracking variables are updated, nodes communicate their local models xri and local gradient tracking vectors y r i with their neighbors. After running the updates in (5) and (6) for T1 rounds, we can ensure that we have visited a set of points [x1, . . . ,xm] that construct a first-order stationary point of Problem (2) (see Theorem 1); however, nodes are oblivious to the time index of those iterates. To resolve this issue all nodes sample a common time index r ∈ {1, . . . , T1} and run an average consensus protocol among themselves to compute the expression ∥∥ 1 m ∑m i=1∇fi(x̃i) ∥∥2 + 1m∑mi=1 ‖x̃i − 1m∑mj=1 x̃j‖2 for that time index. By repeating this process at most log( 1δ1 ) times, the output of the process leads to a set of points satisfying first-order optimality with probability at least 1 − δ1 . The details of this procedure are provided in the appendix. Note that the consensus procedure is standard and known to be linearly convergent. Hence, the additional cost of running the consensus protocol log( 1δ1 ) times is negligible compared to T1; see Theorem 1 for more details. Phase II. In the second phase of PDGT we are given a set of variables denoted by x̃ = [x̃1, . . . , x̃m] which is a first-order stationary point. The goal is to escape from it, if it is a strict saddle, i.e., the smallest eigenvalue of the Hessian at this point is sufficiently negative. Initialized with a first-order stationary point x̃ the algorithm injects the same noise ξ picked uniformly from a ball of radius R = Õ(γ 32 ), to all the local iterates x̃i. Thus for all i we have x0i = x̃i + ξ. After initialization Algorithm 3: PDGT algorithm: Phase II 1: Input: x̃, η2, T2,R, B 2: All nodes sample a vector ξ ∼ uniform ball of radiusR using the same seed; 3: Set x0i = x̃i + ξ and run Average Consensus on ∇fi(x0i ) to set y0i = 1m m∑ i=1 ∇fi(x0i ); 4: for r = 1, . . . , T2 do 5: Compute xri = ∑ j∈Ni wijx r−1 j − η2y r−1 i ; ∀i = 1, . . . ,m 6: Compute yri = ∑ j∈Ni wijy r−1 j +∇fi(xri )−∇fi(x r−1 i ); ∀i = 1, . . . ,m 7: Exchange xri and y r i with neighboring nodes; ∀i = 1, . . . ,m 8: end for 9: Run Average Consensus Protocol for iterates xT2 and x̃; 10: if H(xT2 ,yT2)−H(x̃, ỹ) > −B then 11: Return approximate second-order stationary point x̃ = [x̃1, . . . 
, x̃m] and set S = 1; 12: else 13: Return xT2 = [xT21 , . . . ,x T2 m ], y T2 = [yT21 , . . . ,y T2 m ] and set S = 0; 14: end if all nodes follow the updates in (5) and (6) with stepsize η2, for T2 rounds. If the initial point was a strict saddle then at the end of this process the iterates escape from it; as a result our properly chosen potential function H (formally defined in (9) in Section 4) decreases substantially and then we revisit Phase I. If the potential function H does not decrease sufficiently, then we conclude that x̃ = [x̃1, . . . , x̃m] is a second-order stationary point of Problem (2). More precisely, choosing a proper stepsize η2 and running PDGT for T2 = Õ(dγ−3) iterations decreases the potential function H by at least B = Õ(γ3), with probability 1− δ2, where T2 has only a polylogarithmic dependence on δ2. If the potential function is not substantially decreased then we confidently report x̃ as an approximate second-order stationary point. Note that S is our indicator, tracking whether we have encountered some approximate second-order stationary point or not. Further, the average consensus protocol is utilized in the second phase both to initialize the gradient tracking variables and to evaluate the potential function H at the iterates xT2 and x̃. Since the communication cost of the average consensus protocol is logarithmic in γ−1, it is negligible compared to T2. Hence, the number of communication rounds for Phase II is Õ(dγ−3). Check Theorem 2 for more details. 4 Theoretical Results In this section, we study convergence properties of our proposed PDGT method. First, we characterize the number of rounds T1 required in Phase I of PDGT to find a set of first-order stationary points with high probability. Then, we establish an upper bound for T2, the number of communication rounds required in the second phase. We further show that each time the algorithm finishes Phase II, a potential function decreases at least by Θ̃(γ3). Finally, using these results, we characterize the overall communication rounds between nodes to find a second-order stationary point. Before stating our result, we first discuss some conditions required for the averaging weights used in (5) and (6). Consider the mixing matrix W ∈ Rm×m where the element of its i-th row and j-th column is wij . We assume W satisfies the following conditions. Assumption 3. The mixing matrix W ∈ Rm×m satisfies the following: W = W>, W1 = 1, σ := max{|λ2(W)|, |λm(W)|} < 1, (7) where λi(W) denotes the i-th largest eigenvalue of W. The first condition in Assumption 3 implies that the weight node i assigns to node j equals the weight node j assigns to node i. The second condition means W is row stochastic, and by symmetry, column stochastic. This condition ensures that the weights that each node i assigns to its neighbors and itself sum up to 1. Further note that the eigenvalues of W are real and in the interval [−1, 1]; in fact they can be sorted in a non-increasing order as 1 = λ1(W) ≥ λ2(W) ≥ · · · ≥ λm(W) ≥ −1. The last condition in Assumption 3 ensures that the maximum absolute value of all eigenvalues of W excluding λ1(W) is strictly smaller than 1. This is required since σ := max{|λ2(W)|, |λm(W)|} indicates the rate of information propagation. For highly connected graphs σ is close to zero, while for less connected graphs it is close to 1. A mixing matrix W satisfying Assumption 3 can be chosen based on local degrees in a variety of ways (e.g., [36]). Remark 1. In the appendix we report explicit expressions. 
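As an illustration of Assumption 3, one standard degree-based construction (Metropolis–Hastings weights) and the corresponding σ are sketched below; this is one convenient choice made for illustration, not necessarily the mixing matrix used in [36] or in our experiments, and it assumes an undirected graph without self-loops.

```python
import numpy as np

def metropolis_weights(adj):
    """Symmetric, doubly stochastic mixing matrix W from an undirected adjacency
    matrix, using Metropolis-Hastings weights based on local degrees:
    w_ij = 1 / (1 + max(deg_i, deg_j)) for edges, w_ii = 1 - sum_{j != i} w_ij."""
    adj = np.array(adj, dtype=bool)
    np.fill_diagonal(adj, False)            # ignore any self-loops
    m = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((m, m))
    for i in range(m):
        for j in np.flatnonzero(adj[i]):
            W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

def mixing_rate(W):
    """sigma = max(|lambda_2(W)|, |lambda_m(W)|) from Assumption 3."""
    eigs = np.sort(np.linalg.eigvalsh(W))[::-1]
    return max(abs(eigs[1]), abs(eigs[-1]))
```

For a connected graph this W is symmetric and doubly stochastic with strictly positive self-weights, so σ < 1 as Assumption 3 requires.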
To simplify the presentation in the main body, we turn to asymptotic notation and consider sufficiently small η and α, thus hiding constants but preserving the scaling with respect to quantities that capture important elements of our analysis. Next, we present our first result, which formally characterizes the choice of parameters for PDGT to find an ( , ρ)-first-order stationary point, as defined in (1), with probability 1− δ1. Theorem 1. Consider Phase I of PDGT presented in Algorithm 2. If Assumptions 1 and 3 hold, and we set η1 = Θ((1 − σ) √ α) where α = Θ((1 − σ)2), and the number of iterations satisfies T1 ≥ T = Θ ( f(x0)−f∗ η1 2 ) = Θ ( f(x0)−f∗√ α(1−σ) 2 ) , then w.p. at least 1 − δ1, the iterates x̃1, . . . , x̃m corresponding to one of the randomly selected time indices t̃1, .., t̃log( 1δ1 ) from [0 : T1], satisfy∥∥∥∥∥ 1m m∑ i=1 ∇fi(x̃i) ∥∥∥∥∥ 2 + 1 m m∑ i=1 ∥∥∥∥x̃i − 1m m∑ j=1 x̃j ∥∥∥∥2 ≤ 2. (8) Theorem 1 shows that after Θ ( f(x0)−f∗√ α(1−σ) 2 + 1 1−σ log( 1 δ1 ) log( 1 ) ) rounds of exchanging information with neighboring nodes the goal of Phase I is achieved and we obtain a set of first-order stationary points with small gradient tracking disagreement. Note that the second term 11−σ log( 1 δ1 ) log(1 ) corresponds to the cost of running the average consensus protocol to choose the appropriate iterate among time steps t̃1, t̃2, ..., t̃log( 1δ1 ) . This term is negligible compared to the first term. Next we present our result for Phase II of PDGT. In particular, we show that if the input of Phase II, which satisfies (8), is a strict saddle meaning it has sufficient negative curvature, then PDGT will escape from it and as a result the following Lyapunov function decreases: H(x,y) := 1 m m∑ i=1 fi(xavg) + 1 m m∑ i=1 ‖xi − xavg‖2 + α m m∑ i=1 ‖yi − yavg‖2, (9) where x := [x1; . . . ;xm], y := [y1; . . . ;ym], xavg = 1m ∑m j=1 xj and yavg = 1 m ∑m j=1 yj . Theorem 2. Consider Phase II of PDGT presented in Algorithm 3, and suppose Assumptions 1-3 hold. Further, suppose we set η2 = Θ̃ ( γ2 d(1−σ) ) and α = Θ̃ ( (1− σ)2 ) , and the local perturbed iterates are computed according to x0i = x̃i + ξ, where ξ is drawn from the uniform distribution over the ball of radius R = Θ̃(γ1.5). If the input of the second phase denoted by x̃1, . . . , x̃m satisfies λmin(∇2f(x̃avg)) ≤ −γ, ∥∥∥∥∥ 1m m∑ i=1 ∇fi(x̃i) ∥∥∥∥∥ 2 ≤ 21, 1 m m∑ i=1 ∥∥∥∥x̃i − 1m m∑ j=1 x̃j ∥∥∥∥2 ≤ 22, where 21 = Õ(γ3) and 22 = Õ( γ5 d ), then after T2 ≥ T = Θ̃ ( d log(1/γδ2) γ3 ) iterations with probability at least 1− δ2 we have H(xT2 ,yT2)−H(x̃, ỹ) = −Ω̃(γ3). The result in Theorem 2 shows that if the input of Phase II of PDGT is a first-order stationary point with sufficient negative curvature, then by following the update of PDGT for Θ̃(d log(1/γδ2)γ3 ) iterations with probability at least 1 − δ2 the Lyapunov function H decreases by Ω̃(γ3). Further in order for the nodes to verify whether enough progress has been made we include two calls on the average consensus protocol on iterates x̃ and xT2 with overall communication complexity O( 21−σ log( 1 min{ 1, 2} )), which is negligible compared to Θ̃( d log(1/γδ2) γ3 ) iterations. Combining the results of Theorems 1 and 2, and using the fact that the Lyapunov function H is non-increasing in the first phase (proof is available in section 9) we obtain that if the outcome of the first phase has sufficient negative curvature (i.e, is a strict saddle), then the Lyapunov function H after Phase I and Phase II decreases at least by Θ̃(γ3). 
Hence, after at most Θ̃(γ−3) calls to the first and second phase of PDGT, we will find a second-order stationary point of Problem (2). Theorem 3. Consider the PDGT method in Algorithm 1, and suppose Assumptions 1-3 hold. If we set the stepsizes as η1 = Θ̃ ( (1− σ)2 ) , η2 = Θ̃ ( γ2 d(1−σ) ) and the number of iterations as T1 = Θ̃ ( f(x0)−f∗ (1−σ)2 min{ 2,ρ2} ) and T2 = Θ̃ ( d γ3 ) , respectively, and we have 2 = Õ ( γ3 ) and ρ2 = Õ ( γ5/d ) , then after at most Θ̃ ( max { f(x0)−f∗ (1−σ)2 min{ 2,ρ2}γ3 , d γ6 }) communication rounds PDGT finds an ( , γ, ρ)-second-order stationary point of Problem (2), with high probability. A major difference between the analysis of PDGT and its centralized counterpart in [15] is that as the iterates move away from a first-order stationary point, the consensus error and the gradient tracking disagreement potentially increase exponentially fast blurring the escaping direction. Addressing this issue requires careful selection of the algorithm’s parameters and setting appropriate stepsizes finetuning the tradeoff on the number of iterations between the first and the second phase. The aforementioned hurdles and the lack of knowledge regarding when the algorithm iterates lie close to a stationary point lead to an overall slower convergence rate than the one shown in the centralized case. Recall that if the local solutions [x̂1, . . . , x̂m] form an ( , γ, ρ)-second-order stationary point of Problem (2), then their average x̂avg := 1m ∑m i=1 x̂i is an ( +L1ρ, γ+L2ρ)-second-order stationary point of Problem (1). Moreover, as discussed earlier, second order stationary points are of paramount importance because when all saddle points are strict, any second-order stationary point is a local minima. We formally state this condition in the following assumption and later show that under this assumption PDGT finds a local minima of Problem (1). Assumption 4. Function f(·) is (θ, ζ, ν)- strict saddle, when for any point x, if its gradient norm is smaller than θ, then its Hessian satisfies the condition λmin(∇2f(x)) ≤ −ζ, unless x is ν−close to the set of local minima. The strict saddle condition defined in Assumption 4 states that if a function is (θ, ζ, ν)- strict saddle then each point in Rd belongs to one of these regions: 1) a region where the gradient is large and it is not close to any stationary point; 2) a region where the gradient is small but the Hessian has a significant negative eigenvalue; and 3) the region close to some local minimum. Indeed, under the extra assumption of strict saddle property on function f , PDGT is able to find a local minima in a finite number of iterations as we state in the following corollary. Corollary 1. Consider the PDGT method presented in Algorithm 3 and suppose the conditions in Theorem 3 are satisfied. If in addition Assumption 4 holds and the objective function f is (θ, ζ, ν)strict saddle point, by setting + L1ρ ≤ θ and γ + L2ρ ≤ ζ, the PDGT will output a point ν−close to the set of local minima after Θ̃ ( max { f(x0)−f∗ (1−σ)2 min{ 2,ρ2}γ3 , d γ6 }) communication rounds. 5 Numerical Experiments In this section, we compare PDGT with a simple version of D-GET where each node has full knowledge of its local gradient. D-GET is a decentralized gradient tracking method that "does not use the perturbation idea" [36]. Our goal is to show that PDGT escapes quickly from saddle points. 
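Before describing the experimental setup, the core PDGT recursion used in these experiments can be sketched as follows. This is a simplified illustration of the updates in Eqs. (5) and (6) combined with the Phase II perturbation, written by us: the trigger for injecting noise, the direct computation of the average tracked gradient (which the paper performs via an average-consensus protocol), and all default parameters are our own simplifications rather than the tuned values analyzed in Section 4.

```python
import numpy as np

def pdgt_sketch(grads, W, x0, eta=0.05, n_rounds=500,
                noise_radius=0.1, grad_tol=1e-3, seed=0):
    """Gradient tracking (Eqs. (5)-(6)) with a shared random perturbation
    injected whenever the average tracked gradient becomes small.

    grads : list of callables, grads[i](x) = gradient of f_i at x in R^d
    W     : m x m mixing matrix satisfying Assumption 3
    x0    : common initial point in R^d
    """
    rng = np.random.default_rng(seed)
    m, d = len(grads), len(x0)
    X = np.tile(np.asarray(x0, dtype=float), (m, 1))      # row i holds x_i
    Y = np.vstack([grads[i](X[i]) for i in range(m)])      # gradient trackers y_i

    for _ in range(n_rounds):
        if np.linalg.norm(Y.mean(axis=0)) < grad_tol:
            # Phase II idea: every node adds the *same* noise from a small ball
            xi = rng.normal(size=d)
            xi *= noise_radius * rng.random() ** (1.0 / d) / np.linalg.norm(xi)
            X = X + xi
            # re-initialize trackers at the average gradient (obtained in the
            # paper through an average-consensus protocol)
            g_avg = np.mean([grads[i](X[i]) for i in range(m)], axis=0)
            Y = np.tile(g_avg, (m, 1))
        G_old = np.vstack([grads[i](X[i]) for i in range(m)])
        X_new = W @ X - eta * Y                            # Eq. (5)
        G_new = np.vstack([grads[i](X_new[i]) for i in range(m)])
        Y = W @ Y + G_new - G_old                          # Eq. (6)
        X = X_new
    return X.mean(axis=0)
```

A simple toy test could use quadratic local objectives whose sum has a negative eigenvalue at the origin, so that the origin is a strict saddle of the aggregate function.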
We focus on a matrix factorization problem for the MovieLens dataset, where the goal is to find a rank r approximation of a matrix M ∈ Ml×n, representing the ratings from 943 users to 1682 movies. Each user has rated at least 20 movies for a total of 9990 known ratings. This problem is given by: (U∗,V∗) := argmin U∈Ml×r,V∈Mn×r f(U,V) = argmin U∈Ml×r,V∈Mn×r ‖M−UV>‖2F . (10) We consider different values of target rank and number of nodes. Both methods are given the same randomly generated connected graph, mixing matrix, and step size. The graph is created using the G(n, p) model with p = log2(n)n−1 enforcing the path 1−2− ...− (n−1)−n to ensure the connectivity of the graph. Further we utilize the Maximum Degree Weight mixing matrix as is presented in (10) of [36]. The stepsize for D-GET and both phases of PDGT is 3. Finally both methods are initialized at the same point which lies in a carefully chosen neighborhood of a saddle point. Note that in this problem all saddles are escapable and each local min is a global min. Regarding the parameters of PDGT we set the number of rounds during phase I and II to be 1500 and 100, respectively. Further, we set the threshold before we add noise during phase I as presented in (8) to be 10−6 and the radius of the noise injected to be 4. In Fig. 1 the experiment is run for 10 nodes, and the target rank is 20. Initially both algorithms are stuck close to a saddle point and make very little progress. However, since the theoretical criterion for PDGT is satisfied in the very first rounds (small average gradient and consensus error) we have injection of noise. This nudge is sufficient to accelerate substantially the escape of PDGT. As we see in the plot, D-GET remains close to the saddle point at least until iteration 1400 where we can see the gradient increasing somewhat faster. At the same time PDGT escapes the saddle point, decreases the loss and approaches a local minimum. In Fig. 2, the experiment is run for 30 nodes and the target rank is 30. Similarly, PDGT escapes from the saddle point much faster and decreases the loss substantially before it reaches the local minimum. We observe that D-GET also escapes the saddle point eventually following a similar trace to PDGT after spending a lot longer at the saddle. Interestingly, for this experiment, we observed that some parameters such as the stepsize of the first and the second phase, the injected noise and the threshold before we inject noise can afford to be substantially greater than the theoretical propositions casting PDGT useful for a series of practical applications. 6 Conclusion and Future Work We proposed the Perturbed Decentralized Gradient Tracking (PDGT) algorithm that achieves secondorder stationarity in a finite number of iterations, under the assumptions that the objective function gradient and Hessian are Lipschitz. We showed that PDGT finds an ( , γ, ρ)-second-order stationary point, where and γ indicate the accuracy for first- and second-order optimality, respectively, and ρ shows the consensus error, after Θ̃ ( max { f(x0)−f∗ (1−σ)2 min{ 2,ρ2}γ3 , d γ6 }) communication rounds, where d is dimension, f(x0)− f∗ is the initial error, and 1− σ is related to graph connectivity. This paper is the first step towards achieving second-order optimality in decentralized settings under standard smoothness assumptions, and several research problems are still unanswered in this area. 
6 Conclusion and Future Work

We proposed the Perturbed Decentralized Gradient Tracking (PDGT) algorithm, which achieves second-order stationarity in a finite number of iterations under the assumptions that the objective function's gradient and Hessian are Lipschitz. We showed that PDGT finds an (ε, γ, ρ)-second-order stationary point, where ε and γ indicate the accuracy of first- and second-order optimality, respectively, and ρ bounds the consensus error, after Θ̃(max{(f(x_0)−f^*)/((1−σ)^2 min{ε^2, ρ^2} γ^3), d/γ^6}) communication rounds, where d is the dimension, f(x_0)−f^* is the initial error, and 1−σ is related to the graph connectivity. This paper is the first step towards achieving second-order optimality in decentralized settings under standard smoothness assumptions, and several research problems are still unanswered in this area. First, our complexity scales linearly with the dimension d, deviating from the poly-logarithmic dependence achieved for centralized perturbed gradient descent [15]. Closing this gap and developing an algorithm that obtains second-order optimality with a number of communication rounds that scales sublinearly, or even poly-logarithmically, with the dimension is a promising research direction that requires further investigation. Second, in the centralized setting it has been shown that by using gradient acceleration [16] it is possible to find a second-order stationary point faster than with perturbed gradient descent. It would be interesting to see whether the same conclusion also holds in decentralized settings. Last, extending the theory developed in this paper to the case where nodes only have access to a noisy estimate of their local gradients is another avenue of research that requires further study.

7 Broader Impact

Over the last couple of years we have witnessed an unprecedented increase in the amount of data collected and processed in order to tackle real-life problems. Advances in numerous data-driven systems such as the Internet of Things, health-care, and multi-agent robotics, wherein data are scattered across the agents (e.g., sensors, clouds, robots), together with the sheer volume and spatial/temporal disparity of the data, render centralized processing and storage infeasible or inefficient. Compared to the typical parameter-server type distributed system with a fusion center, decentralized optimization has unique advantages in preserving data privacy, enhancing network robustness, and improving computational efficiency. Furthermore, in many emerging applications such as collaborative filtering, federated learning, distributed beamforming, and dictionary learning, the data is naturally collected in a decentralized setting, and it is not possible to transfer the distributed data to a central location. Therefore, decentralized computation has sparked considerable interest in both academia and industry. At the same time, convex formulations for training machine learning tasks have been replaced by nonconvex representations such as neural networks, and a line of significant nonconvex problems is in the spotlight. Our paper contributes to this line of work and broadens the set of problems that can be successfully solved without the presence of a central coordinating authority in the aforementioned framework. The implications for the privacy of the agents are apparent, while rendering the presence of an authority unnecessary has political and economic implications. Furthermore, numerous applications stand to benefit from our result, impacting society in many different ways.

8 Acknowledgments and Disclosure of Funding

The research of I. Tziotis and A. Mokhtari is supported by NSF Award CCF-2007668. C. Caramanis is supported by NSF Awards 1704778, 1646522, and 1609279.
1. What is the focus and contribution of the paper on decentralized optimization? 2. What are the strengths of the proposed approach, particularly in terms of algorithm design and theoretical analysis? 3. What are the weaknesses of the paper, especially regarding the significance and novelty of the result? 4. Do you have any concerns about the relevance and applicability of the proposed method? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This manuscript proposed a Perturbed Decentralized Gradient Tracking (PDGT) algorithm that achieves second-order stationarity in polynomial time, assuming the objective function has first- and second-order smoothness.

Strengths
The algorithm design and theoretical analysis are new. The gradient tracking technique and the construction of the potential function $H$ are interesting for decentralized optimization.

Weaknesses
1. I am concerned about the importance of this result. Since [15] shows that perturbed gradient descent is able to find second-order stationary points with almost dimension-free (up to polylog factors of the dimension) polynomial iteration complexity, it is not surprising to me that a decentralized algorithm with occasionally added noise is able to escape saddle points in polynomial time. In addition, the iteration complexity is no longer dimension-free in Theorem 3 (there is a $d$ dependency instead of $\log d$).
2. There is no empirical study in this paper. The authors should have constructed some synthetic examples where we know the exact location of the saddle points and tried to verify the theoretical claims of the proposed algorithm. Furthermore, PGD [15] should also be compared. I know this is a theory paper, but given the presence of [15], the theoretical contribution is not strong enough from my perspective.
1. What is the focus and contribution of the paper on decentralized algorithms? 2. What are the strengths of the proposed approach, particularly in combining existing algorithmic ideas? 3. What are the weaknesses of the paper regarding the complexity of the proposed algorithm and the need for experimental validation? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper proposed a decentralized algorithm that provably converges to second-order stationary points in nonconvex optimization under standard assumptions.
===== Post-rebuttal edit: Thanks for the response. I like the newly added experiments; please incorporate them into the paper.

Strengths
The paper combines existing algorithmic ideas such as gradient tracking, perturbed gradient descent, and consensus to design an algorithm that converges first to a first-order stationary point and then to a second-order stationary point, with communication complexity bounds. Even though the separate algorithmic ideas are not novel, putting them together and carefully balancing the trade-offs is nontrivial, and this work seems to be a solid contribution in that respect.

Weaknesses
The proposed algorithm is quite complicated and has many design parameters. It is unclear which parts of the design are necessary in practice and which parts are only for theoretical purposes. Some numerical experiments would be helpful for verifying the theory and guiding practice.
Title Second Order Optimality in Decentralized Non-Convex Optimization via Perturbed Gradient Tracking Abstract In this paper we study the problem of escaping from saddle points and achieving second-order optimality in a decentralized setting where a group of agents collaborate to minimize their aggregate objective function. We provide a non-asymptotic (finite-time) analysis and show that by following the idea of perturbed gradient descent, it is possible to converge to a second-order stationary point in a number of iterations which depends linearly on dimension and polynomially on the accuracy of second-order stationary point. Doing this in a communication-efficient manner requires overcoming several challenges, from identifying (first order) stationary points in a distributed manner, to adapting the perturbed gradient framework without prohibitive communication complexity. Our proposed Perturbed Decentralized Gradient Tracking (PDGT) method consists of two major stages: (i) a gradientbased step to find a first-order stationary point and (ii) a perturbed gradient descent step to escape from a first-order stationary point, if it is a saddle point with sufficient curvature. As a side benefit of our result, in the case that all saddle points are non-degenerate (strict), the proposed PDGT method finds a local minimum of the considered decentralized optimization problem in a finite number of iterations. 1 Introduction Recently, we have witnessed an unprecedented increase in the amount of data that is gathered in a distributed fashion and stored over multiple agents (machines). Moreover, the advances in data-driven systems such as Internet of Things, health-care, and multi-agent robotics demand for developing machine learning frameworks that can be implemented in a distributed manner. Simultaneously, convex formulations for training machine learning tasks have been replaced by nonconvex representations such as neural networks. These rapid changes call for the development of a class of communication-efficient algorithms to solve nonconvex decentralized learning problems. In this paper, we focus on a nonconvex decentralized optimization problem where a group of m agents collaborate to minimize their aggregate loss function, while they are allowed to exchange information only with their neighbors. To be more precise, the agents (nodes) aim to solve min x∈Rd f(x) = 1 m m∑ i=1 fi(x), (1) where fi : Rd → R is the objective function of node i which is possibly nonconvex. Finding the global minimizer of this problem, even in the centralized setting where all the functions are available at a single machine, is hard. Given this hardness result, we often settle for finding a stationary point of Problem (1). There have been several lines of work on finding an approximate first-order 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. stationary point of this distributed problem, i.e., finding a set of local solutions x̃1, . . . , x̃m where their average x̃avg has a small gradient norm ‖∇f(x̃avg)‖ and a small consensus error ∑m i=1 ‖x̃i− x̃avg‖. Achieving first-order optimality, however, in nonconvex settings may not lead to a satisfactory solution as it could be a poor saddle point. Therefore, finding a second-order stationary point could improve the quality of the solution. 
In fact, when all saddle points are non-degenerate finding a second-order stationary point implies convergence to a local-minimum, and in several problems including matrix completion [1], phase retrieval [2], and dictionary learning [3] local minima are global minima. While convergence to a second-order stationary point for the centralized setting has been extensively studied in the recent literature, the non-asymptotic complexity analysis of finding such a point for decentralized problems (under standard smoothness assumptions) has thus far evaded solution, in part because of significant additional challenges presented by communication limitations. A major difference between the centralized and the decentralized framework lies in the exchange of information between the nodes. Exchanging Hessian information is, of course, prohibitively expensive. Furthermore, turning to approximating schemes has the potential to create catastrophic problems for the algorithm, as small errors in approximation across the nodes could lead to inconsistent updates that could reverse progress made by prior steps. Moreover, escaping from first-order stationary points requires identifying that the algorithm has reached such a point, and accomplishing even this basic step in a communication-efficient manner presents challenges. Contributions. In this paper we develop a novel gradient-based method for escaping from saddle points in a decentralized setting and characterize its overall communication cost for achieving a second-order stationary point. The proposed Perturbed Decentralized Gradient Tracking (PDGT) algorithm consists of two major steps: (i) A local decentralized gradient tracking scheme to find a first-order stationary point, while maintaining consensus by averaging over neighboring iterates; (ii) A perturbed gradient tracking scheme to escape from saddle points that are non-degenerate. We show that to achieve an ( , γ, ρ)-second-order stationary point (see Definition 2) the proposed PDGT algorithm requires at most Θ̃ ( max { f(x0)−f∗ (1−σ)2 min{ 2,ρ2}γ3 , d γ6 }) rounds of communication, where d is dimension, f(x0) is the initial objective function value, f∗ is the optimal function value, and σ is the second largest eigenvalue of mixing matrix in terms of absolute norm which depends on the connectivity of the underlying graph. To the best of our knowledge, this result provides the first non-asymptotic guarantee for achieving second-order optimality in decentralized optimization under standard smoothness assumptions. 1.1 Related Work Centralized settings. Convergence to a first-order stationary point for centralized settings has been extensively studied in the nonconvex literature [4–13]. A recent line of work focuses on improving these guarantees and achieving second-order optimality in a finite number of iterations. These schemes can be divided into three categories: (i) fully gradient-based methods which use the perturbation idea for escaping from saddle points once iterates reach a point with small gradient norm [14–16]; (ii) methods which utilize the eigenvector corresponding to the smallest eigenvalue of the Hessian to find an escape direction [5, 6, 17–21]; and (iii) trust-region [22, 23] and cubic regularization algorithms [24–26] which require solving a quadratic or cubic subproblem, respectively, at each iteration. These methods, however, cannot be applied to decentralized settings directly as they require access to the gradient or Hessian of the global objective function. 
First-order optimality in decentralized settings. Recently, several iterative methods have been introduced and studied for achieving first-order optimality in decentralized settings. In particular, [27– 29] show convergence to a first-order stationary point by leveraging successive convex approximation techniques and using dynamic consensus protocols. Also, a similar guarantee has been established for several well-known decentralized algorithms including distributed gradient descent [30, 31], primaldual schemes [32–34], gradient tracking methods [35, 36], and decentralized alternating direction method of multipliers (ADMM) [37]. Second-order optimality in decentralized settings. Finding a second-order stationary point in a distributed setting has been studied by several works [38–41], but they all only provide asymptotic guarantees. The most related work to our submission is [42] which studies non-asymptotic convergence of stochastic gradient-based diffusion method for decentralized settings. However, the result of this work is obtained under two relatively less common assumptions. First, it requires a bounded gradient disagreement condition which ensures that the local gradients∇fi are not far from the global gradient∇f (Assumption 3 in [42]). Second, it assumes that the computed stochastic gradient near a saddle point is such that there is gradient noise present along some descent direction, spanned by the eigenvectors corresponding to the negative eigenvalues of the Hessian, i.e., stochastic gradient leads to an escape direction (Assumption 7 in [42]). Both these assumptions, and, in particular, the second one may not hold in general decentralized settings, and they both significantly simplify the analysis of escaping from saddle points. Unlike [42], the theoretical results presented here do not require assuming these restrictive conditions, and our paper provides the first non-asymptotic guarantee for achieving second-order optimality in decentralized settings, under standard smoothness assumptions. In fact, the conditions that we assume for proving our results are identical to the ones used in [15] for the analysis of perturbed gradient method in the centralized setting. 2 Preliminaries The problem in (1) is defined over a set of m connected agents (nodes) where each one has access to a component of the objective function. We denote the underlying undirected connectivity graph by G = {V,E}, where V = {1, . . . ,m} is the set of vertices (nodes) and E is the set of edges. As this graph is undirected, if node i can send information to node j, then the reverse communication is also possible. We call two nodes neighbors if there exists an edge between them. We further denote the neighborhood of node i by Ni, which also includes node i itself. Since the optimization variable x in (1) appears in each summand of the objective function, this problem is not decomposable into subproblems that can be solved simultaneously over nodes of the network. To make the objective function separable we introduce m local variables xi ∈ Rd, and instead of minimizing 1m ∑m i=1 fi(x) in (1), we minimize the objective function 1 m ∑m i=1 fi(xi). To ensure that these two problems are equivalent, we enforce the local decision variables to be equal to each other. Since the graph is connected, this condition can be replaced by consensus among neighboring nodes, and therefore the resulting problem can be written as min x=[x1;x2;...;xm]∈Rmd F (x) := 1 m m∑ i=1 fi(xi) s.t. xi = xj , ∀(i, j) ∈ E. 
(2) Note that in (2) we have introduced the notation x ∈ Rmd to indicate the concatenation of all local variables x := [x1;x2; ...;xm] and defined the function F : Rmd → R as F (x) := 1m ∑m i=1 fi(xi). It can be verified that x∗ is an optimal solution of Problem (1) if and only if x∗ := [x∗; . . . ;x∗] is an optimal solution of Problem (2). In the rest of the paper, therefore, we focus on solving Problem (2) as its objective function is node-separable. We should mention that solving this problem is still challenging as the constraints of this problem are coupled. In this paper, we only assume standard smoothness conditions for the local objective functions fi to establish our theoretical guarantees. Assumption 1. The local functions fi have Lipschitz continuous gradient with constant L1, i.e., for all i ∈ {1, . . . ,m} and any x ∈ Rd and x′ ∈ Rd we have ‖∇fi(x)−∇fi(x′)‖ ≤ L1 ‖x− x′‖. Assumption 2. The local functions fi have Lipschitz continuous Hessian with constant L2, i.e., for all i ∈ {1, . . . ,m} and any x ∈ Rd and x′ ∈ Rd we have ∥∥∇2fi(x)−∇2fi(x′)∥∥ ≤ L2 ‖x− x′‖. The gradient Lipschitz continuity condition in Assumption 1 is customary for the analysis of gradientbased methods. The condition in Assumption 2 is also required to ensure that the function is well-behaved near its saddle stationary points. Finding an optimal solution of (1) or (2) is hard since the local functions fi are nonconvex. Hence, we settle for finding a stationary point. In the centralized unconstrained case, a first-order stationary point of function f satisfies ‖∇f(x̂)‖ = 0, and an approximate -first-order stationary point is defined as ‖∇f(x̂)‖ ≤ . For the constrained decentralized problem in (2) the notion of first-order stationarity should address both stationarity and feasibility as we state in the following definition. Definition 1. A set of vectors {x̂i}mi=1 is an ( , ρ)-first-order stationary point of Problem (2) if∥∥∥∥ 1m m∑ i=1 ∇fi(x̂i) ∥∥∥∥ ≤ , 1m m∑ i=1 ∥∥∥∥x̂i − 1m m∑ j=1 x̂j ∥∥∥∥ ≤ ρ. (3) Algorithm 1: PDGT algorithm 1: Input: x0,∇f(x0), , γ, ρ, δ1, δ2 2: Set xi = x0, yi = ∇f(x0), T1 = Θ̃ ( f(x0)−f∗ (1−σ)2 min{ 2,ρ2} ) , T2 = Θ̃ ( d log(1/γδ2) γ3 ) , η1 = Θ̃ ( (1− σ)2 ) , η2 =Θ̃ ( γ2 d(1−σ) ) , R = Θ̃ ( γ 3 2 ) , B = Θ̃ ( γ3 ) ; 3: Call (x̃) = PDGT Phase I (x,y, η1, T1, δ1); 4: Call (x̂, ŷ, S) = PDGT Phase II (x̃, η2, T2,R, B); 5: if S = 1 then 6: Return x̂ as a second-order stationary point and stop; 7: else 8: Set x = x̂, y = ŷ and go to Step 3; 9: end if The first condition in the above definition ensures that the gradient norm is sufficiently small, while the second condition ensures that the iterates are close to their average. It can be shown that if [x̂1, . . . , x̂m] is an ( , ρ)-first-order stationary point of Problem (2), then their average x̂avg := 1 m ∑m i=1 x̂i is an ( +L1ρ)-first-order stationary point of Problem (1), i.e., ‖ 1 m ∑m i=1∇fi(x̂avg)‖ ≤ + L1ρ. The proof of this claim is available in the supplementary material. The same logic holds for second-order stationary points. In the centralized case, x is an ( , γ)-secondorder stationary point if ‖∇f(x̂)‖ ≤ and ∇2f(x̂) −γ I. Similarly, we define a second-order stationary point of Problem (2) with an extra condition that enforces consensus approximately. Definition 2. A set of vectors {x̂i}mi=1 is an ( , γ, ρ)-second-order stationary point of Problem (2) if∥∥∥∥ 1m m∑ i=1 ∇fi(x̂i) ∥∥∥∥ ≤ , 1m m∑ i=1 ∇2fi(x̂i) −γ I, 1 m m∑ i=1 ∥∥∥∥x̂i − 1m m∑ j=1 x̂j ∥∥∥∥ ≤ ρ. 
(4) Note that under Assumptions 1 and 2, it can be shown that if the local solutions [x̂1, . . . , x̂m] form an ( , γ, ρ)-second-order stationary point of Problem (2), then their average x̂avg := 1m ∑m i=1 x̂i is an ( + L1ρ, γ + L2ρ)-second-order stationary point of Problem (1), i.e., ‖ 1m ∑m i=1∇fi(x̂avg)‖ ≤ + L1ρ and 1m ∑m i=1∇2fi(x̂avg) −(γ + L2ρ) I. For proof check the supplementary material. 3 Perturbed Decentralized Gradient Tracking Algorithm We now present our proposed Perturbed Decentralized Gradient Tracking (PDGT) algorithm. The PDGT method presented in Algorithm 1 can be decomposed into two phases. Phase I of our method uses the gradient tracking ideas proposed in [35,36] to show convergence to some first-order stationary point. Using this scheme for our setup, however, requires overcoming the following hurdle: The nodes do not have access to the global gradient and thus even the task of realizing that they lie close to such a point is not trivial. Moreover, the consensus error is cumulative over the graph and tracking this quantity for each node is an additional challenge. In prior work, it has been shown that there exists an iterate that achieves first-order optimality without explicitly introducing a mechanism for identifying such an iterate. In this paper, we address this issue by utilizing an average consensus protocol as a subroutine of Phase I, which coordinates the nodes and finds with high probability and negligible communication overhead the correct index achieving first-order optimality. Phase II of PDGT utilizes ideas from centralized perturbed gradient descent developed in [15], in order to escape saddle points. Adapting these ideas to the decentralized setting poses several challenges. A naive use of an approximation scheme could produce further issues as the noise could lead different nodes to take different escaping directions, potentially canceling each other out. Further, in order to control the consensus error and the gradient tracking disagreement we adopt a significantly smaller step size than the one used in the centralized case. Finally, using a common potential function both for Phase I and Phase II derives an interesting tradeoff between the corresponding stepsizes. Taking into account all these challenges we design PDGT to guarantee escaping from strict saddle points. In particular, we show that at the end of the second phase, either a carefully chosen potential function decreases - PDGT escapes from a saddle point - and we go back to Phase I, or an approximate Algorithm 2: PDGT algorithm: Phase I 1: Input: x,y, η1, T1, δ1 2: Initialization: x0 = x, y0 = y; 3: for r = 1, . . . , T1 do 4: Compute xri = ∑ j∈Ni wijx r−1 j − η1y r−1 i ; ∀i = 1, . . . ,m 5: Compute yri = ∑ j∈Ni wijy r−1 j +∇fi(xri )−∇fi(x r−1 i ); ∀i = 1, . . . ,m 6: Exchange xri and y r i with neighboring nodes; ∀i = 1, . . . ,m 7: end for 8: for j = 1 : log( 1δ1 ) do 9: Choose index t̃j ∼ [0, T1] uniformly at random and run Consensus Protocol on t̃j to find first order stationary point x̃ with small gradient tracking disagreement; 10: end for Result: Returns first order stationary point x̃ with probability at least 1− δ1 second-order stationary point has been reached and the exact iterate is reported. Next, we present the details of both phases of PDGT. Phase I. Consider∇fi(xi), the local gradient of node i, and define yi ∈ Rd as the variable of node i which is designed to track the global average gradient 1m ∑m i=1∇fi(xi). 
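The detailed form of these updates is described next. As a purely illustrative companion, the following minimal NumPy sketch runs the Phase I updates of Algorithm 2 on toy quadratic local objectives; the objectives, the ring mixing matrix, and all helper names are our own choices and not part of PDGT.

```python
import numpy as np

def phase_one(grad_fns, W, x0, eta1, T1):
    """Sketch of the Phase I updates of Algorithm 2 (steps 4-6).

    grad_fns: list of m local gradient oracles, grad_fns[i](x) -> gradient of f_i at x
    W:        m x m symmetric, doubly stochastic mixing matrix (Assumption 3)
    x0:       common initial point in R^d
    """
    m = len(grad_fns)
    x = np.tile(x0, (m, 1))                              # local iterates x_i
    grads = np.array([g(x0) for g in grad_fns])
    y = grads.copy()                                     # gradient-tracking variables y_i
    for _ in range(T1):
        x_new = W @ x - eta1 * y                         # consensus averaging + step along -y_i
        new_grads = np.array([g(xi) for g, xi in zip(grad_fns, x_new)])
        y = W @ y + new_grads - grads                    # gradient-tracking update
        x, grads = x_new, new_grads
    return x, y

# toy run: m quadratics f_i(x) = 0.5 * ||x - c_i||^2 on a 4-node ring
m, d = 4, 3
rng = np.random.default_rng(0)
centers = rng.normal(size=(m, d))
grad_fns = [lambda x, c=c: x - c for c in centers]
ring = np.roll(np.eye(m), 1, axis=0) + np.roll(np.eye(m), -1, axis=0)
W = 0.5 * np.eye(m) + 0.25 * ring
x, y = phase_one(grad_fns, W, np.zeros(d), eta1=0.1, T1=200)
avg_grad = np.mean([g(xi) for g, xi in zip(grad_fns, x)], axis=0)
print(np.linalg.norm(avg_grad))                          # small: close to first-order stationarity
```

In this sketch the trackers are initialized with the local gradients ∇fi(x⁰), whereas Algorithm 1 initializes them with the global gradient ∇f(x⁰); both choices preserve the property that the average of the yi equals the average of the current local gradients.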
The algorithm proceeds to update the iterates xi based on the directions of yi. More specifically, at each iteration r, each agent i first updates its local decision variable by averaging its local iterate with the iterates of its neighbors and descending along the negative direction of its gradient estimate yr−1i , i.e., xri = ∑ j∈Ni wijx r−1 j − η1y r−1 i , (5) where η1 is the stepsize and wij is the weight that node i assigns to the information that it receives from node j. We assume that wij > 0 only for the nodes j that are in the neighborhood of node i, which also includes node i itself. Further, the sum of these weights is 1, i.e., ∑ j∈Ni wij = 1. Once the local xi’s are updated, each agent i computes its local gradient ∇fi(xri ) evaluated at its current iterate xri . Then, the nodes use the gradient tracking variable y r−1 i received from their neighbors in the previous round to update their gradient tracking vector according to the update yri = ∑ j∈Ni wijy r−1 j +∇fi(x r i )−∇fi(xr−1i ), (6) Note that the update in (6) shows that node i computes its new global gradient estimate by combining its previous local estimate with the ones communicated by its neighbors as well as the difference of its two consecutive local gradients. Once the local gradient tracking variables are updated, nodes communicate their local models xri and local gradient tracking vectors y r i with their neighbors. After running the updates in (5) and (6) for T1 rounds, we can ensure that we have visited a set of points [x1, . . . ,xm] that construct a first-order stationary point of Problem (2) (see Theorem 1); however, nodes are oblivious to the time index of those iterates. To resolve this issue all nodes sample a common time index r ∈ {1, . . . , T1} and run an average consensus protocol among themselves to compute the expression ∥∥ 1 m ∑m i=1∇fi(x̃i) ∥∥2 + 1m∑mi=1 ‖x̃i − 1m∑mj=1 x̃j‖2 for that time index. By repeating this process at most log( 1δ1 ) times, the output of the process leads to a set of points satisfying first-order optimality with probability at least 1 − δ1 . The details of this procedure are provided in the appendix. Note that the consensus procedure is standard and known to be linearly convergent. Hence, the additional cost of running the consensus protocol log( 1δ1 ) times is negligible compared to T1; see Theorem 1 for more details. Phase II. In the second phase of PDGT we are given a set of variables denoted by x̃ = [x̃1, . . . , x̃m] which is a first-order stationary point. The goal is to escape from it, if it is a strict saddle, i.e., the smallest eigenvalue of the Hessian at this point is sufficiently negative. Initialized with a first-order stationary point x̃ the algorithm injects the same noise ξ picked uniformly from a ball of radius R = Õ(γ 32 ), to all the local iterates x̃i. Thus for all i we have x0i = x̃i + ξ. After initialization Algorithm 3: PDGT algorithm: Phase II 1: Input: x̃, η2, T2,R, B 2: All nodes sample a vector ξ ∼ uniform ball of radiusR using the same seed; 3: Set x0i = x̃i + ξ and run Average Consensus on ∇fi(x0i ) to set y0i = 1m m∑ i=1 ∇fi(x0i ); 4: for r = 1, . . . , T2 do 5: Compute xri = ∑ j∈Ni wijx r−1 j − η2y r−1 i ; ∀i = 1, . . . ,m 6: Compute yri = ∑ j∈Ni wijy r−1 j +∇fi(xri )−∇fi(x r−1 i ); ∀i = 1, . . . ,m 7: Exchange xri and y r i with neighboring nodes; ∀i = 1, . . . ,m 8: end for 9: Run Average Consensus Protocol for iterates xT2 and x̃; 10: if H(xT2 ,yT2)−H(x̃, ỹ) > −B then 11: Return approximate second-order stationary point x̃ = [x̃1, . . . 
, x̃m] and set S = 1; 12: else 13: Return xT2 = [xT21 , . . . ,x T2 m ], y T2 = [yT21 , . . . ,y T2 m ] and set S = 0; 14: end if all nodes follow the updates in (5) and (6) with stepsize η2, for T2 rounds. If the initial point was a strict saddle then at the end of this process the iterates escape from it; as a result our properly chosen potential function H (formally defined in (9) in Section 4) decreases substantially and then we revisit Phase I. If the potential function H does not decrease sufficiently, then we conclude that x̃ = [x̃1, . . . , x̃m] is a second-order stationary point of Problem (2). More precisely, choosing a proper stepsize η2 and running PDGT for T2 = Õ(dγ−3) iterations decreases the potential function H by at least B = Õ(γ3), with probability 1− δ2, where T2 has only a polylogarithmic dependence on δ2. If the potential function is not substantially decreased then we confidently report x̃ as an approximate second-order stationary point. Note that S is our indicator, tracking whether we have encountered some approximate second-order stationary point or not. Further, the average consensus protocol is utilized in the second phase both to initialize the gradient tracking variables and to evaluate the potential function H at the iterates xT2 and x̃. Since the communication cost of the average consensus protocol is logarithmic in γ−1, it is negligible compared to T2. Hence, the number of communication rounds for Phase II is Õ(dγ−3). Check Theorem 2 for more details. 4 Theoretical Results In this section, we study convergence properties of our proposed PDGT method. First, we characterize the number of rounds T1 required in Phase I of PDGT to find a set of first-order stationary points with high probability. Then, we establish an upper bound for T2, the number of communication rounds required in the second phase. We further show that each time the algorithm finishes Phase II, a potential function decreases at least by Θ̃(γ3). Finally, using these results, we characterize the overall communication rounds between nodes to find a second-order stationary point. Before stating our result, we first discuss some conditions required for the averaging weights used in (5) and (6). Consider the mixing matrix W ∈ Rm×m where the element of its i-th row and j-th column is wij . We assume W satisfies the following conditions. Assumption 3. The mixing matrix W ∈ Rm×m satisfies the following: W = W>, W1 = 1, σ := max{|λ2(W)|, |λm(W)|} < 1, (7) where λi(W) denotes the i-th largest eigenvalue of W. The first condition in Assumption 3 implies that the weight node i assigns to node j equals the weight node j assigns to node i. The second condition means W is row stochastic, and by symmetry, column stochastic. This condition ensures that the weights that each node i assigns to its neighbors and itself sum up to 1. Further note that the eigenvalues of W are real and in the interval [−1, 1]; in fact they can be sorted in a non-increasing order as 1 = λ1(W) ≥ λ2(W) ≥ · · · ≥ λm(W) ≥ −1. The last condition in Assumption 3 ensures that the maximum absolute value of all eigenvalues of W excluding λ1(W) is strictly smaller than 1. This is required since σ := max{|λ2(W)|, |λm(W)|} indicates the rate of information propagation. For highly connected graphs σ is close to zero, while for less connected graphs it is close to 1. A mixing matrix W satisfying Assumption 3 can be chosen based on local degrees in a variety of ways (e.g., [36]). Remark 1. In the appendix we report explicit expressions. 
To simplify the presentation in the main body, we turn to asymptotic notation and consider sufficiently small η and α, thus hiding constants but preserving the scaling with respect to quantities that capture important elements of our analysis. Next, we present our first result, which formally characterizes the choice of parameters for PDGT to find an (ε, ρ)-first-order stationary point, as defined in Definition 1, with probability 1 − δ1. Theorem 1. Consider Phase I of PDGT presented in Algorithm 2. If Assumptions 1 and 3 hold, and we set η1 = Θ((1 − σ)√α) where α = Θ((1 − σ)²), and the number of iterations satisfies T1 ≥ T = Θ((f(x⁰) − f*) / (η1 ε²)) = Θ((f(x⁰) − f*) / (√α (1 − σ) ε²)), then with probability at least 1 − δ1 the iterates x̃1, . . . , x̃m corresponding to one of the randomly selected time indices t̃1, . . . , t̃_{log(1/δ1)} from [0 : T1] satisfy ‖(1/m) ∑_{i=1}^m ∇fi(x̃i)‖² + (1/m) ∑_{i=1}^m ‖x̃i − (1/m) ∑_{j=1}^m x̃j‖² ≤ ε². (8) Theorem 1 shows that after Θ((f(x⁰) − f*) / (√α (1 − σ) ε²) + (1/(1 − σ)) log(1/δ1) log(1/ε)) rounds of exchanging information with neighboring nodes the goal of Phase I is achieved and we obtain a set of first-order stationary points with small gradient tracking disagreement. Note that the second term, (1/(1 − σ)) log(1/δ1) log(1/ε), corresponds to the cost of running the average consensus protocol to choose the appropriate iterate among the time steps t̃1, t̃2, . . . , t̃_{log(1/δ1)}. This term is negligible compared to the first term. Next we present our result for Phase II of PDGT. In particular, we show that if the input of Phase II, which satisfies (8), is a strict saddle, meaning it has sufficient negative curvature, then PDGT will escape from it and as a result the following Lyapunov function decreases: H(x, y) := (1/m) ∑_{i=1}^m fi(xavg) + (1/m) ∑_{i=1}^m ‖xi − xavg‖² + (α/m) ∑_{i=1}^m ‖yi − yavg‖², (9) where x := [x1; . . . ; xm], y := [y1; . . . ; ym], xavg = (1/m) ∑_{j=1}^m xj and yavg = (1/m) ∑_{j=1}^m yj. Theorem 2. Consider Phase II of PDGT presented in Algorithm 3, and suppose Assumptions 1-3 hold. Further, suppose we set η2 = Θ̃(γ² / (d(1 − σ))) and α = Θ̃((1 − σ)²), and the local perturbed iterates are computed according to x⁰i = x̃i + ξ, where ξ is drawn from the uniform distribution over the ball of radius R = Θ̃(γ^1.5). If the input of the second phase, denoted by x̃1, . . . , x̃m, satisfies λmin(∇²f(x̃avg)) ≤ −γ, ‖(1/m) ∑_{i=1}^m ∇fi(x̃i)‖² ≤ ε₁², and (1/m) ∑_{i=1}^m ‖x̃i − (1/m) ∑_{j=1}^m x̃j‖² ≤ ε₂², where ε₁² = Õ(γ³) and ε₂² = Õ(γ⁵/d), then after T2 ≥ T = Θ̃(d log(1/(γδ2)) / γ³) iterations, with probability at least 1 − δ2 we have H(x^{T2}, y^{T2}) − H(x̃, ỹ) = −Ω̃(γ³). The result in Theorem 2 shows that if the input of Phase II of PDGT is a first-order stationary point with sufficient negative curvature, then by following the update of PDGT for Θ̃(d log(1/(γδ2)) / γ³) iterations, with probability at least 1 − δ2 the Lyapunov function H decreases by Ω̃(γ³). Further, in order for the nodes to verify whether enough progress has been made, we include two calls to the average consensus protocol on the iterates x̃ and x^{T2}, with overall communication complexity O((2/(1 − σ)) log(1/min{ε₁, ε₂})), which is negligible compared to the Θ̃(d log(1/(γδ2)) / γ³) iterations. Combining the results of Theorems 1 and 2, and using the fact that the Lyapunov function H is non-increasing in the first phase (the proof is available in Section 9), we obtain that if the outcome of the first phase has sufficient negative curvature (i.e., is a strict saddle), then the Lyapunov function H after Phase I and Phase II decreases at least by Θ̃(γ³).
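For concreteness, the potential function in (9) and the progress test at the end of Phase II (step 10 of Algorithm 3) can be written as the short sketch below; the function handles and names are ours, and in the actual algorithm these quantities are evaluated through the average consensus protocol rather than by a central node.

```python
import numpy as np

def potential_H(x, y, f_locals, alpha):
    """Lyapunov function of Eq. (9); x and y are (m, d) stacks of local iterates and trackers."""
    x_avg, y_avg = x.mean(axis=0), y.mean(axis=0)
    objective = np.mean([f(x_avg) for f in f_locals])
    consensus_err = np.mean(np.sum((x - x_avg) ** 2, axis=1))
    tracking_err = alpha * np.mean(np.sum((y - y_avg) ** 2, axis=1))
    return objective + consensus_err + tracking_err

def phase_two_decision(x_T2, y_T2, x_tilde, y_tilde, f_locals, alpha, B):
    """Step 10 of Algorithm 3: return S = 1 (report x_tilde) unless H dropped by at least B."""
    drop = potential_H(x_T2, y_T2, f_locals, alpha) - potential_H(x_tilde, y_tilde, f_locals, alpha)
    return 1 if drop > -B else 0
```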
Hence, after at most Θ̃(γ−3) calls to the first and second phase of PDGT, we will find a second-order stationary point of Problem (2). Theorem 3. Consider the PDGT method in Algorithm 1, and suppose Assumptions 1-3 hold. If we set the stepsizes as η1 = Θ̃ ( (1− σ)2 ) , η2 = Θ̃ ( γ2 d(1−σ) ) and the number of iterations as T1 = Θ̃ ( f(x0)−f∗ (1−σ)2 min{ 2,ρ2} ) and T2 = Θ̃ ( d γ3 ) , respectively, and we have 2 = Õ ( γ3 ) and ρ2 = Õ ( γ5/d ) , then after at most Θ̃ ( max { f(x0)−f∗ (1−σ)2 min{ 2,ρ2}γ3 , d γ6 }) communication rounds PDGT finds an ( , γ, ρ)-second-order stationary point of Problem (2), with high probability. A major difference between the analysis of PDGT and its centralized counterpart in [15] is that as the iterates move away from a first-order stationary point, the consensus error and the gradient tracking disagreement potentially increase exponentially fast blurring the escaping direction. Addressing this issue requires careful selection of the algorithm’s parameters and setting appropriate stepsizes finetuning the tradeoff on the number of iterations between the first and the second phase. The aforementioned hurdles and the lack of knowledge regarding when the algorithm iterates lie close to a stationary point lead to an overall slower convergence rate than the one shown in the centralized case. Recall that if the local solutions [x̂1, . . . , x̂m] form an ( , γ, ρ)-second-order stationary point of Problem (2), then their average x̂avg := 1m ∑m i=1 x̂i is an ( +L1ρ, γ+L2ρ)-second-order stationary point of Problem (1). Moreover, as discussed earlier, second order stationary points are of paramount importance because when all saddle points are strict, any second-order stationary point is a local minima. We formally state this condition in the following assumption and later show that under this assumption PDGT finds a local minima of Problem (1). Assumption 4. Function f(·) is (θ, ζ, ν)- strict saddle, when for any point x, if its gradient norm is smaller than θ, then its Hessian satisfies the condition λmin(∇2f(x)) ≤ −ζ, unless x is ν−close to the set of local minima. The strict saddle condition defined in Assumption 4 states that if a function is (θ, ζ, ν)- strict saddle then each point in Rd belongs to one of these regions: 1) a region where the gradient is large and it is not close to any stationary point; 2) a region where the gradient is small but the Hessian has a significant negative eigenvalue; and 3) the region close to some local minimum. Indeed, under the extra assumption of strict saddle property on function f , PDGT is able to find a local minima in a finite number of iterations as we state in the following corollary. Corollary 1. Consider the PDGT method presented in Algorithm 3 and suppose the conditions in Theorem 3 are satisfied. If in addition Assumption 4 holds and the objective function f is (θ, ζ, ν)strict saddle point, by setting + L1ρ ≤ θ and γ + L2ρ ≤ ζ, the PDGT will output a point ν−close to the set of local minima after Θ̃ ( max { f(x0)−f∗ (1−σ)2 min{ 2,ρ2}γ3 , d γ6 }) communication rounds. 5 Numerical Experiments In this section, we compare PDGT with a simple version of D-GET where each node has full knowledge of its local gradient. D-GET is a decentralized gradient tracking method that "does not use the perturbation idea" [36]. Our goal is to show that PDGT escapes quickly from saddle points. 
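The experiments described next use a Maximum Degree Weight mixing matrix; as a hypothetical illustration of how such a W satisfying Assumption 3 can be built from local degrees, consider the following sketch (a common construction, which may differ in details from the one in [36]).

```python
import numpy as np

def max_degree_weights(adj):
    """Mixing matrix from maximum-degree weights for an undirected graph (0/1 adjacency matrix).

    W[i, j] = 1 / (d_max + 1) on every edge and W[i, i] = 1 - deg(i) / (d_max + 1),
    so W is symmetric and doubly stochastic, matching the first two conditions of Assumption 3.
    """
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)
    d_max = deg.max()
    W = adj / (d_max + 1.0)
    np.fill_diagonal(W, 1.0 - deg / (d_max + 1.0))
    eigs = np.sort(np.linalg.eigvalsh(W))
    sigma = max(abs(eigs[-2]), abs(eigs[0]))             # < 1 whenever the graph is connected
    return W, sigma

# example: a path graph on 4 nodes
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
W, sigma = max_degree_weights(adj)
print(W.sum(axis=1), sigma)                              # rows sum to 1; sigma is about 0.80 here
```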
We focus on a matrix factorization problem for the MovieLens dataset, where the goal is to find a rank r approximation of a matrix M ∈ Ml×n, representing the ratings from 943 users to 1682 movies. Each user has rated at least 20 movies for a total of 9990 known ratings. This problem is given by: (U∗,V∗) := argmin U∈Ml×r,V∈Mn×r f(U,V) = argmin U∈Ml×r,V∈Mn×r ‖M−UV>‖2F . (10) We consider different values of target rank and number of nodes. Both methods are given the same randomly generated connected graph, mixing matrix, and step size. The graph is created using the G(n, p) model with p = log2(n)n−1 enforcing the path 1−2− ...− (n−1)−n to ensure the connectivity of the graph. Further we utilize the Maximum Degree Weight mixing matrix as is presented in (10) of [36]. The stepsize for D-GET and both phases of PDGT is 3. Finally both methods are initialized at the same point which lies in a carefully chosen neighborhood of a saddle point. Note that in this problem all saddles are escapable and each local min is a global min. Regarding the parameters of PDGT we set the number of rounds during phase I and II to be 1500 and 100, respectively. Further, we set the threshold before we add noise during phase I as presented in (8) to be 10−6 and the radius of the noise injected to be 4. In Fig. 1 the experiment is run for 10 nodes, and the target rank is 20. Initially both algorithms are stuck close to a saddle point and make very little progress. However, since the theoretical criterion for PDGT is satisfied in the very first rounds (small average gradient and consensus error) we have injection of noise. This nudge is sufficient to accelerate substantially the escape of PDGT. As we see in the plot, D-GET remains close to the saddle point at least until iteration 1400 where we can see the gradient increasing somewhat faster. At the same time PDGT escapes the saddle point, decreases the loss and approaches a local minimum. In Fig. 2, the experiment is run for 30 nodes and the target rank is 30. Similarly, PDGT escapes from the saddle point much faster and decreases the loss substantially before it reaches the local minimum. We observe that D-GET also escapes the saddle point eventually following a similar trace to PDGT after spending a lot longer at the saddle. Interestingly, for this experiment, we observed that some parameters such as the stepsize of the first and the second phase, the injected noise and the threshold before we inject noise can afford to be substantially greater than the theoretical propositions casting PDGT useful for a series of practical applications. 6 Conclusion and Future Work We proposed the Perturbed Decentralized Gradient Tracking (PDGT) algorithm that achieves secondorder stationarity in a finite number of iterations, under the assumptions that the objective function gradient and Hessian are Lipschitz. We showed that PDGT finds an ( , γ, ρ)-second-order stationary point, where and γ indicate the accuracy for first- and second-order optimality, respectively, and ρ shows the consensus error, after Θ̃ ( max { f(x0)−f∗ (1−σ)2 min{ 2,ρ2}γ3 , d γ6 }) communication rounds, where d is dimension, f(x0)− f∗ is the initial error, and 1− σ is related to graph connectivity. This paper is the first step towards achieving second-order optimality in decentralized settings under standard smoothness assumptions, and several research problems are still unanswered in this area. 
First, our complexity scales linearly with dimension d, deviating from the poly-logarithmic dependence achieved for centralized perturbed gradient descent [15]. Closing this gap and developing an algorithm that obtains second-order optimality with communication rounds that scale sublinearly or even poly-logarithmically on the dimension is a promising research direction that requires further investigation. Second, in the centralized setting, it has been shown that by using gradient acceleration [16] it is possible to find a second-order stationary point faster than perturbed gradient descent. It would be interesting to see if the same conclusion also holds for decentralized settings. Last, extending the theory developed in this paper to the case that nodes only have access to a noisy estimate of their local gradients is another avenue of research that requires further study. 7 Broader Impact Over the last couple of years we have witnessed an unprecedented increase in the amount of data collected and processed in order to tackle real life problems. Advances in numerous data-driven system such as the Internet of Things, health-care, multi-agent robotics wherein data are scattered across the agents (e.g., sensors, clouds, robots), and the sheer volume and spatial/temporal disparity of data render centralized processing and storage infeasible or inefficient. Compared to the typical parameter-server type distributed system with a fusion center, decentralized optimization has its unique advantages in preserving data privacy, enhancing network robustness, and improving the computation efficiency. Furthermore, in many emerging applications such as collaborative filtering, federated learning, distributed beamforming and dictionary learning, the data is naturally collected in a decentralized setting, and it is not possible to transfer the distributed data to a central location. Therefore, decentralized computation has sparked considerable interest in both academia and industry. At the same time convex formulations for training machine learning tasks have been replaced by nonconvex representations such as neural networks and a line of significant non convex problems are on the spotlight. Our paper contributes to this line of work and broadens the set of problems that can be successfully solved without the presence of a central coordinating authority in the aforementioned framework. The implications on the privacy of the agents are apparent while rendering the presence of an authority unnecessary has political and economical extensions. Furthermore, numerous applications are going to benefit from our result impacting society in many different ways. 8 Acknowledgments and Disclosure of Funding The research of I. Tziotis and A. Mokhtari is supported by NSF Award CCF-2007668. C. Caramanis is supported by NSF Awards 1704778, 1646522, and 1609279.
1. What is the focus of the paper regarding non-convex objective functions? 2. What are the strengths of the proposed Perturbed Decentralized Gradient Tracking (PDGT) algorithm? 3. What are the weaknesses of the paper, particularly regarding experimental validation? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any questions or concerns regarding the theoretical analysis or the proposed algorithm?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper considers the problem of finding minima for a non-convex objective function in a decentralized setting. In particular, it considers the case where the objective function is a sum of objective functions local to each node and where only adjacent nodes can communicate with each other. To this end, the paper proposes the Perturbed Decentralized Gradient Tracking (PDGT) algorithm, which is comprised of 2 phases: i) a first-order stationary point is found whilst avoiding consensus error; ii) then, noise is used to check for (and escape from) saddle points. Under standard assumptions, a non-asymptotic guarantee on convergence to a second-order stationary point is given, along with an upper bound on the required communication between nodes. Although this problem has been studied before, the theoretical analysis has only focused on asymptotic analysis or non-asymptotic analysis with stronger conditions than are used in this paper. Strengths The paper was very thorough in its theoretical analysis. A sensible algorithm was proposed, convergence rates (present in most similar papers) were calculated, and communication requirement bounds were found (not so common in similar papers). The paper was also written very clearly. Finally, the approach was novel and clearly explained role previous literature had played in motivating the proposed algorithm. Weaknesses The lack of any experiments was problematic, especially since a new algorithm was proposed and because decentralized machine learning generally has more moving parts and places to go wrong. Indeed, given the care taken in the theoretical analysis, I was a little surprised by this. It’s for this reason that I’ve rated this paper as a borderline reject. This is a good paper that could be published at NeurIPS, but please run this algorithm on a specific problem (even if the problem is just a toy problem), make sure it works as expected, and then put the results in the supplement.
NIPS
Title Sample-Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion Abstract Integrating model-free and model-based approaches in reinforcement learning has the potential to achieve the high performance of model-free algorithms with low sample complexity. However, this is difficult because an imperfect dynamics model can degrade the performance of the learning algorithm, and in sufficiently complex environments, the dynamics model will almost always be imperfect. As a result, a key challenge is to combine model-based approaches with model-free learning in such a way that errors in the model do not degrade performance. We propose stochastic ensemble value expansion (STEVE), a novel model-based technique that addresses this issue. By dynamically interpolating between model rollouts of various horizon lengths for each individual example, STEVE ensures that the model is only utilized when doing so does not introduce significant errors. Our approach outperforms model-free baselines on challenging continuous control benchmarks with an order-of-magnitude increase in sample efficiency, and in contrast to previous model-based approaches, performance does not degrade in complex environments. 1 Introduction Deep model-free reinforcement learning has had great successes in recent years, notably in playing video games [23] and strategic board games [27]. However, training agents using these algorithms requires tens to hundreds of millions of samples, which makes many practical applications infeasible, particularly in real-world control problems (e.g., robotics) where data collection is expensive. Model-based approaches aim to reduce the number of samples required to learn a policy by modeling the dynamics of the environment. A dynamics model can be used to increase sample efficiency in various ways, including training the policy on rollouts from the dynamics model [28], using rollouts to improve targets for temporal difference (TD) learning [7], and using information gained from rollouts as inputs to the policy [31]. Model-based algorithms such as PILCO [4] have shown that it is possible to learn from orders-of-magnitude fewer samples. These successes have mostly been limited to environments where the dynamics are simple to model. In noisy, complex environments, it is difficult to learn an accurate model of the environment. When the model makes mistakes in this context, it can cause the wrong policy to be learned, hindering performance. Recent work has begun to address this issue. Kalweit and Boedecker [17] train a model-free algorithm on a mix of real and imagined data, adjusting the proportion in favor of real data as the Q-function becomes more confident. Kurutach et al. [20] train a model-free algorithm on purely imaginary data, but use an ensemble of environment models to avoid overfitting to errors made by any individual model. ∗This work was completed as part of the Google AI Residency program. 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. We propose stochastic ensemble value expansion (STEVE), an extension to model-based value expansion (MVE) proposed by Feinberg et al. [7]. Both techniques use a dynamics model to compute “rollouts” that are used to improve the targets for temporal difference learning. MVE rolls out a fixed length into the future, potentially accumulating model errors or increasing value estimation error along the way. 
In contrast, STEVE interpolates between many different horizon lengths, favoring those whose estimates have lower uncertainty, and thus lower error. To compute the interpolated target, we replace both the model and Q-function with ensembles, approximating the uncertainty of an estimate by computing its variance under samples from the ensemble. Through these uncertainty estimates, STEVE dynamically utilizes the model rollouts only when they do not introduce significant errors. For illustration, Figure 1 compares the sample efficiency of various algorithms on a tabular toy environment, which shows that STEVE significantly outperforms MVE and TD-learning baselines when the dynamics model is noisy. We systematically evaluate STEVE on several challenging continuous control benchmarks and demonstrate that STEVE significantly outperforms model-free baselines with an order-of-magnitude increase in sample efficiency. 2 Background Reinforcement learning aims to learn an agent policy that maximizes the expected (discounted) sum of rewards [29]. The agent starts at an initial state s0 ∼ p(s0), where p(s0) is the distribution of initial states of the environment. Then, the agent deterministically chooses an action at according to its policy πφ(st) with parameters φ, deterministically transitions to a subsequent state st+1 according to the Markovian dynamics T (st, at) of the environment, and receives a reward rt = r(st, at, st+1). This generates a trajectory of states, actions, and rewards τ = (s0, a0, r0, s1, a1, . . .). If a trajectory reaches a terminal state, it concludes without further transitions or rewards; however, this is optional, and trajectories may instead be infinite in length. We abbreviate the trajectory by τ . The goal is to maximize the expected discounted sum of rewards along sampled trajectories J(θ) = Es0 [ ∑∞ t=0 γ trt] where γ ∈ [0, 1) is a discount parameter. 2.1 Value Estimation with TD-learning The action-value functionQπ(s0, a0) = ∑∞ t=0 γ trt is a critical quantity to estimate for many learning algorithms. Using the fact that Qπ(s, a) satisfies a recursion relation Qπ(s, a) = r(s, a) + γ(1− d(s′))Qπ(s′, π(s′)), where s′ = T (s, a) and d(s′) is an indicator function which returns 1 when s′ is a terminal state and 0 otherwise. We can estimate Qπ(s, a) off-policy with collected transitions of the form (s, a, r, s′) sampled uniformly from a replay buffer [29]. We approximate Qπ(s, a) with a deep neural network, Q̂πθ (s, a). We learn parameters θ to minimize the mean squared error (MSE) between Q-value estimates of states and their corresponding TD targets: T TD(r, s′) = r + γ(1− d(s′))Q̂πθ−(s ′, π(s′)) (1) Lθ = E(s,a,r,s′) [ (Q̂πθ (s, a)− T TD(r, s′))2 ] (2) This expectation is taken with respect to transitions sampled from our replay buffer. Note that we use an older copy of the parameters, θ−, when computing targets [23]. Since we evaluate our method in a continuous action space, it is not possible to compute a policy from our Q-function by simply taking maxa Q̂πθ (s, a). Instead, we use a neural network to approximate this maximization function [21], by learning a parameterized function πφ to minimize the negative Q-value: Lφ = −Q̂πθ (s, πφ(s)). (3) In this work, we use DDPG as the base learning algorithm, but our technique is generally applicable to other methods that use TD objectives. 2.2 Model-Based Value Expansion (MVE) Recently, Feinberg et al. [7] showed that a learned dynamics model can be used to improve value estimation. 
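Before detailing MVE, it may help to have the model-free ingredients of Section 2.1 in code. The sketch below shows the one-step TD target of Equation 1 and the critic and actor losses of Equations 2 and 3; the callables standing in for the networks and the toy usage are our own placeholders, not the paper's implementation.

```python
import numpy as np

GAMMA = 0.99

def td_target(r, s_next, done, q_target, policy):
    """One-step TD target of Eq. (1), using the older (target) critic parameters."""
    return r + GAMMA * (1.0 - done) * q_target(s_next, policy(s_next))

def critic_loss(batch, q, q_target, policy):
    """Mean squared error between Q estimates and TD targets (Eq. 2)."""
    s, a, r, s_next, done = batch
    targets = np.array([td_target(ri, sj, dj, q_target, policy)
                        for ri, sj, dj in zip(r, s_next, done)])
    preds = np.array([q(si, ai) for si, ai in zip(s, a)])
    return np.mean((preds - targets) ** 2)

def actor_loss(states, q, policy):
    """Negative Q value of the policy's own actions (Eq. 3)."""
    return -np.mean([q(si, policy(si)) for si in states])

# toy usage with linear stand-ins for the networks
q_fn = q_tgt = lambda s, a: float(np.dot(s, a))
pi = lambda s: 0.1 * s
print(td_target(1.0, np.ones(3), 0.0, q_tgt, pi))        # 1 + 0.99 * 0.3
```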
MVE forms TD targets by combining a short term value estimate formed by unrolling the model dynamics and a long term value estimate using the learned Q̂πθ− function. When the model is accurate, this reduces the bias of the targets, leading to improved performance. The learned dynamics model consists of three learned functions: the transition function T̂ξ(s, a), which returns a successor state s′; a termination function d̂ξ(s), which returns the probability that s is a terminal state; and the reward function r̂ψ(s, a, s′), which returns a scalar reward. This model is trained to minimize Lξ,ψ = E(s,a,r,s′) [ ||T̂ξ(s, a)− s′||2 + H ( d(s′), d̂ξ(T̂ξ(s, a)) ) + (r̂ψ(s, a, s ′)− r)2 ] , (4) where the expectation is over collected transitions (s, a, r, s′), and H is the cross-entropy. In this work, we consider continuous environments; for discrete environments, the first term can be replaced by a cross-entropy loss term. To incorporate the model into value estimation, Feinberg et al. [7] replace the standard Q-learning target with an improved target, T MVEH , computed by rolling the learned model out for H steps. s′0 = s ′, a′i = πφ(s ′ i), s ′ i = T̂ξ(s ′ i−1, a ′ i−1), D i = d(s′) i∏ j=1 (1− d̂ξ(s′j)) (5) T MVEH (r, s′) = r + ( H∑ i=1 Diγir̂ψ(s ′ i−1, a ′ i−1, s ′ i) ) +DH+1γH+1Q̂πθ−(s ′ H , a ′ H). (6) To use this target, we substitute T MVEH in place of T TD when training θ using Equation 2.2 Note that when H = 0, MVE reduces to TD-learning (i.e., T TD = T MVE0 ). When the model is perfect and the learned Q-function has similar bias on all states and actions, Feinberg et al. [7] show that the MVE target with rollout horizon H will decrease the target error by a factor of γ2H . Errors in the learned model can lead to worse targets, so in practice, we must tune H to balance between the errors in the model and the Q-function estimates. An additional challenge is that the bias in the learned Q-function is not uniform across states and actions [7]. In particular, 2This formulation is a minor generalization of the original MVE objective in that we additionally model the reward function and termination function; Feinberg et al. [7] consider “fully observable” environments in which the reward function and termination condition were known, deterministic functions of the observations. Because we use a function approximator for the termination condition, we compute the accumulated probability of termination, Di, at every timestep, and use this value to discount future returns. they find that the bias in the Q-function on states sampled from the replay buffer is lower than when the Q-function is evaluated on states generated from model rollouts. They term this the distribution mismatch problem and propose the TD-k trick as a solution; see Appendix B for further discussion of this trick. While the results of Feinberg et al. [7] are promising, they rely on task-specific tuning of the rollout horizon H . This sensitivity arises from the difficulty of modeling the transition dynamics and the Qfunction, which are task-specific and may change throughout training as the policy explores different parts of the state space. Complex environments require much smaller rollout horizon H , which limits the effectiveness of the approach (e.g., Feinberg et al. [7] used H = 10 for HalfCheetah-v1, but had to reduce to H = 3 on Walker2d-v1). Motivated by this limitation, we propose an approach that balances model error and Q-function error by dynamically adjusting the rollout horizon. 
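For reference, the H-step target of Equations 5 and 6 can be written compactly as in the following sketch, using single (non-ensembled) learned components; the function handles are placeholders of ours.

```python
import numpy as np

GAMMA = 0.99

def mve_target(r, s_next, done, H, model, term_prob, reward_fn, q_target, policy):
    """H-step model-based value expansion target of Eqs. (5)-(6).

    model(s, a) -> next state; term_prob(s) -> predicted termination probability;
    reward_fn(s, a, s2) -> reward; q_target(s, a) -> critic with older parameters.
    """
    target = r
    D = 1.0 - done                       # accumulated probability of not having terminated
    s = s_next
    for i in range(1, H + 1):
        a = policy(s)
        s2 = model(s, a)
        D *= 1.0 - term_prob(s2)         # D^i
        target += D * GAMMA ** i * reward_fn(s, a, s2)
        s = s2
    a = policy(s)
    D *= 1.0 - term_prob(model(s, a))    # D^{H+1}
    target += D * GAMMA ** (H + 1) * q_target(s, a)
    return target
```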
3 Stochastic Ensemble Value Expansion From a single rollout ofH timesteps, we can computeH+1 distinct candidate targets by considering rollouts of various horizon lengths: T MVE0 ,T MVE1 ,T MVE2 ,...,T MVEH . Standard TD learning uses T MVE0 as the target, while MVE uses T MVEH as the target. We propose interpolating all of the candidate targets to produce a target which is better than any individual. Conventionally, one could average the candidate targets, or weight the candidate targets in an exponentially-decaying fashion, similar to TD(λ) [29]. However, we show that we can do still better by weighting the candidate targets in a way that balances errors in the learnedQ-function and errors from longer model rollouts. STEVE provides a computationally-tractable and theoretically-motivated algorithm for choosing these weights. We describe the algorithm for STEVE in Section 3.1, and justify it in Section 3.2. 3.1 Algorithm To estimate uncertainty in our learned estimators, we maintain an ensemble of parameters for our Q-function, reward function, and model: θ = {θ1, ..., θL}, ψ = {ψ1, ..., ψN}, and ξ = {ξ1, ..., ξM}, respectively. Each parameterization is initialized independently and trained on different subsets of the data in each minibatch. We roll out an H step trajectory with each of the M models, τ ξ1 , ..., τ ξM . Each trajectory consists of H + 1 states, τ ξm0 , ..., τ ξm H , which correspond to s ′ 0, ..., s ′ H in Equation 5 with the transition function parameterized by ξm. Similarly, we use the N reward functions and L Q-functions to evaluate Equation 6 for each τ ξm at every rollout-length 0 ≤ i ≤ H . This gives us M ·N · L different values of T MVEi for each rollout-length i. See Figure 2 for a visualization of this process. Using these values, we can compute the empirical mean T µi and variance T σ 2 i for each partial rollout of length i. In order to form a single target, we use an inverse variance weighting of the means: T STEVEH (r, s′) = H∑ i=0 w̃i∑ j w̃j T µi , w̃ −1 i = T σ2 i (7) To learn a value function with STEVE, we substitute in T STEVEH in place of T TD when training θ using Equation 2. 3.2 Derivation We wish to find weights wi, where ∑ i wi = 1 that minimize the mean-squared error between the weighted-average of candidate targets T MVE0 ,T MVE1 ,T MVE2 ,...,T MVEH and the true Q-value. E ( H∑ i=0 wiT MVEi −Qπ(s, a) )2 = Bias(∑ i wiT MVEi )2 + Var (∑ i wiT MVEi ) ≈ Bias (∑ i wiT MVEi )2 + ∑ i w2i Var(T MVEi ), where the expectation considers the candidate targets as random variables conditioned on the collected data and minibatch sampling noise, and the approximation is due to assuming the candidate targets are independent3. Our goal is to minimize this with respect to wi. We can estimate the variance terms using empirical variance estimates from the ensemble. Unfortunately, we could not devise a reliable estimator for the bias terms, and this is a limitation of our approach and an area for future work. In this work, we ignore the bias terms and minimize the weighted sum of variances∑ i w2i Var(T MVEi ). With this approximation, which is equivalent to in inverse-variance weighting [8], we achieve stateof-the-art results. Setting each wi equal to 1Var(T MVEi ) and normalizing yields the formula for T STEVEH given in Equation 7. 3.3 Note on ensembles This technique for calculating uncertainty estimates is applicable to any family of models from which we can sample. 
For example, we could train a Bayesian neural network for each model [22], or use dropout as a Bayesian approximation by resampling the dropout masks each time we wish to sample a new model [10]. These options could potentially give better diversity of various samples from the family, and thus better uncertainty estimates; exploring them further is a promising direction for future work. However, we found that these methods degraded the accuracy of the base models. An ensemble is far easier to train, and so we focus on that in this work. This is a common choice, as the use of ensembles in the context of uncertainty estimations for deep reinforcement learning has seen wide adoption in the literature. It was first proposed by Osband et al. [25] as a technique to improve exploration, and subsequent work showed that this approach gives a good estimate of the uncertainty of both value functions [17] and models [20]. 4 Experiments 4.1 Implementation We use DDPG [21] as our baseline model-free algorithm. We train two deep feedforward neural networks, a Q-function network Q̂πθ (s, a) and a policy network πφ(s), by minimizing the loss functions given in Equations 2 and 3. We also train another three deep feedforward networks to represent our world model, corresponding to function approximators for the transition T̂ξ(s, a), termination d̂ξ(t | s), and reward r̂ψ(s, a, s′), and minimize the loss function given in Equation 4. When collecting rollouts for evaluation, we simply take the action selected by the policy, πφ(s), at every state s. (Note that only the policy is required at test-time, not the ensembles of Q-functions, 3Initial experiments suggested that omitting the covariance cross terms provided significant computational speedups at the cost of a slight performance degradation. As a result, we omitted the terms in the rest of the experiments. dynamics models, or reward models.) Each run was evaluated after every 500 updates by computing the mean total episode reward (referred to as score) across many environment restarts. To produce the lines in Figures 3, 4, and 5, these evaluation results were downsampled by splitting the domain into non-overlapping regions and computing the mean score within each region across several runs. The shaded area shows one standard deviation of scores in the region as defined above. When collecting rollouts for our replay buffer, we do -greedy exploration: with probability , we select a random action by adding Gaussian noise to the pre-tanh policy action. All algorithms were implemented in Tensorflow [1]. We use a distributed implementation to parallelize computation. In the style of ApeX [16], IMPALA [6], and D4PG [2], we use a centralized learner with several agents operating in parallel. Each agent periodically loads the most recent policy, interacts with the environment, and sends its observations to the central learner. The learner stores received frames in a replay buffer, and continuously loads batches of frames from this buffer to use as training data for a model update. In the algorithms with a model-based component, there are two learners: a policy-learner and a model-learner. In these cases, the policy-learner periodically reloads the latest copy of the model. All baselines reported in this section were re-implementations of existing methods. This allowed us to ensure that the various methods compared were consistent with one another, and that the differences reported are fully attributable to the independent variables in question. 
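To make the reweighting of Section 3.1 concrete in this implementation, the sketch below assembles the STEVE target of Equation 7 from an array of candidate targets, one row per rollout length and one column per model/reward/Q-function combination, as visualized in Figure 2; the shapes, names, and toy numbers are our own illustration.

```python
import numpy as np

def steve_target(candidates, eps=1e-8):
    """Combine candidate targets into the STEVE target of Eq. (7).

    candidates: array of shape (H + 1, K) for a single transition, where K = M * N * L
    is the number of model/reward/Q-function combinations and row i holds the K
    estimates of T^MVE_i.  Returns the inverse-variance weighted average of the row means.
    """
    means = candidates.mean(axis=1)                  # empirical means  T^mu_i
    variances = candidates.var(axis=1) + eps         # empirical variances (eps avoids division by zero)
    weights = 1.0 / variances
    weights /= weights.sum()
    return float(weights @ means)

# toy usage: horizons 0..3 with 3 models x 2 reward functions x 2 Q-functions = 12 combinations
rng = np.random.default_rng(1)
cands = rng.normal(loc=[[1.0], [1.1], [1.3], [2.0]], scale=[[0.05], [0.1], [0.3], [1.0]], size=(4, 12))
print(steve_target(cands))                           # dominated by the low-variance short horizons
```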
Our baselines are competitive with state-of-the-art implementations of these algorithms [7, 14]. All MVE experiments utilize the TD-k trick. For hyperparameters and additional implementation details, please see Appendix C.4 4.2 Comparison of Performance We evaluated STEVE on a variety of continuous control tasks [3, 19]; we plot learning curves in Figure 3. We found that STEVE yields significant improvements in both performance and sample efficiency across a wide range of environments. Importantly, the gains are most substantial in the complex environments. On the most challenging environments: Humanoid-v1, RoboschoolHumanoid- 4Our code is available open-source at: https://github.com/tensorflow/models/tree/master/ research/steve v1, RoboschoolHumanoidFlagrun-v1, and BipedalWalkerHardcore-v2, STEVE is the only algorithm to show significant learning within 5M frames. 4.3 Ablation Study In order to verify that STEVE’s gains in sample efficiency are due to the reweighting, and not simply due to the additional parameters of the ensembles of its components, we examine several ablations. Ensemble MVE is the regular MVE algorithm, but the model and Q-functions are replaced with with ensembles. Mean-MVE uses the exact same architecture as STEVE, but uses a simple uniform weighting instead of the uncertainty-aware reweighting scheme. Similarly, TDL25 and TDL75 correspond to TD(λ) reweighting schemes with λ = 0.25 and λ = 0.75, respectively. COV-STEVE is a version of STEVE which includes the covariances between candidate targets when computing the weights (see Section 3.2). We also investigate the effect of the horizon parameter on the performance of both STEVE and MVE. These results are shown in Figure 4. All of these variants show the same trend: fast initial gains, which quickly taper off and are overtaken by the baseline. STEVE is the only variant to converge faster and higher than the baseline; this provides strong evidence that the gains come specifically from the uncertainty-aware reweighting of targets. Additionally, we find that increasing the rollout horizon increases the sample efficiency of STEVE, even though the dynamics model for Humanoid-v1 has high error. 4.4 Wall-Clock Comparison In the previous experiments, we synchronized data collection, policy updates, and model updates. However, when we run these steps asynchronously, we can reduce the wall-clock time at the risk of instability. To evaluate this configuration, we compare DDPG, MVE-DDPG, and STEVE-DPPG on Humanoid-v1 and RoboschoolHumanoidFlagrun-v1. Both were trained on a P100 GPU and had 8 CPUs collecting data; STEVE-DPPG additionally used a second P100 to learn a model in parallel. We plot reward as a function of wall-clock time for these tasks in Figure 5. STEVE-DDPG learns more quickly on both tasks, and it achieves a higher reward than DDPG and MVE-DDPG on Humanoid-v1 and performs comparably to DDPG on RoboschoolHumanoidFlagrun-v1. Moreover, in future work, STEVE could be accelerated by parallelizing training of each component of the ensemble. 5 Discussion Our primary experiments (Section 4.2) show that STEVE greatly increases sample efficiency relative to baselines, matching or out-performing both MVE-DDPG and DDPG baselines on every task. STEVE also outperforms other recently-published results on these tasks in terms of sample efficiency [13, 14, 26]. 
Our ablation studies (Section 4.3) support the hypothesis that the increased performance is due to the uncertainty-dependent reweighting of targets, as well as demonstrate that the performance of STEVE consistently increases with longer horizon lengths, even in complex environments. Finally, our wall-clock experiments (Section 4.4) demonstrate that in spite of the additional computation per epoch, the gains in sample efficiency are enough that it is competitive with model-free algorithms in terms of wall-clock time. The speed gains associated with improved sample efficiency will only be exacerbated as samples become more expensive to collect, making STEVE a promising choice for applications involving real-world interaction. Given that the improvements stem from the dynamic reweighting between horizon lengths, it may be interesting to examine the choices that the model makes about which candidate targets to favor most heavily. In Figure 6, we plot the average model usage over the course of training. Intriguingly, most of the lines seem to remain stable at around 50% usage, with two notable exceptions: Humanoid-v1, the most complex environment tested (with an observation-space of size 376); and Swimmer-v1, the least complex environment tested (with an observation-space of size 8). This supports the hypothesis that STEVE is trading off between Q-function bias and model bias; it chooses to ignore the model almost immediately when the environment is too complex to learn, and gradually ignores the model as the Q-function improves if an optimal environment model is learned quickly. 6 Related Work Sutton and Barto [29] describe TD(λ), a family of Q-learning variants in which targets from multiple timesteps are merged via exponentially decay. STEVE is similar in that it is also computing a weighted average between targets. However, our approach is significantly more powerful because it adapts the weights to the specific characteristics of each individual rollout, rather than being constant between examples and throughout training. Our approach can be thought of as a generalization of TD(λ), in that the two approaches are equivalent in the specific case where the overall uncertainty grows exponentially at rate λ at every timestep. Munos et al. [24] propose Retrace(λ), a low-variance method for off-policy Q-learning. Retrace(λ) is an off-policy correction method, so it learns from n-step off-policy data by multiplying each term of the loss by a correction coefficient, the trace, in order to re-weight the data distribution to look more like the on-policy distribution. Specifically, at each timestep, Retrace(λ) updates the coefficient for that term by multiplying it by λmin(1, π(as|xs)µ(as|xs) ). Similarly to TD(λ), the λ parameter corresponds to an exponential decay of the weighting of potential targets. STEVE approximates this weighting in a more complex way, and additionally learns a predictive model of the environment (under which on-policy rollouts are possible) instead of using off-policy correction terms to re-weight real off-policy rollouts. Heess et al. [15] describe stochastic value gradient (SVG) methods, which are a general family of hybrid model-based/model-free control algorithms. By re-parameterizing distributions to separate out the noise, SVG is able to learn stochastic continuous control policies in stochastic environments. STEVE currently operates only with deterministic policies and environments, but this is a promising direction for future work. Kurutach et al. 
[20] propose model-ensemble trust-region policy optimization (ME-TRPO), which is motivated similarly to this work in that they also propose an algorithm which uses an ensemble of models to mitigate the deleterious effects of model bias. However, the algorithm is quite different. METRPO is a purely model-based policy-gradient approach, and uses the ensemble to avoid overfitting to any one model. In contrast, STEVE interpolates between model-free and model-based estimates, uses a value-estimation approach, and uses the ensemble to explicitly estimate uncertainty. Kalweit and Boedecker [17] train on a mix of real and imagined rollouts, and adjust the ratio over the course of training by tying it to the variance of the Q-function. Similarly to our work, this variance is computed via an ensemble. However, they do not adapt to the uncertainty of individual estimates, only the overall ratio of real to imagined data. Additionally, they do not take into account model bias, or uncertainty in model predictions. Weber et al. [31] use rollouts generated by the dynamics model as inputs to the policy function, by “summarizing” the outputs of the rollouts with a deep neural network. This second network allows the algorithm to implicitly calculate uncertainty over various parts of the rollout and use that information when making its decision. However, I2A has only been evaluated on discrete domains. Additionally, the lack of explicit model use likely tempers the sample-efficiency benefits gained relative to more traditional model-based learning. Gal et al. [11] use a deep neural network in combination with the PILCO algorithm [4] to do sampleefficient reinforcement learning. They demonstrate good performance on the continuous-control task of cartpole swing-up. They model uncertainty in the learned neural dynamics function using dropout as a Bayesian approximation, and provide evidence that maintaining these uncertainty estimates is very important for model-based reinforcement learning. Depeweg et al. [5] use a Bayesian neural network as the environment model in a policy search setting, learning a policy purely from imagined rollouts. This work also demonstrates that modeling uncertainty is important for model-based reinforcement learning with neural network models, and that uncertainty-aware models can escape many common pitfalls. Gu et al. [12] propose a continuous variant of Q-learning known as normalized advantage functions (NAF), and show that learning using NAF can be accelerated by using a model-based component. They use a variant of Dyna-Q [28], augmenting the experience available to the model-free learner with imaginary on-policy data generated via environment rollouts. They use an iLQG controller and a learned locally-linear model to plan over small, easily-modelled regions of the environment, but find that using more complex neural network models of the environment can yield errors. Thomas et al. [30] define the Ω-return, an alternative to the λ-return that accounts for the variance of, and correlations between, predicted returns at multiple timesteps. Similarly to STEVE, the target used is an unbiased linear combination of returns with minimum variance. However, the Ω-return is not directly computable in non-tabular state spaces, and does n-step off-policy learning rather than learn a predictive model of the environment. Drawing a theoretical connection between the STEVE algorithm and the Ω-return is an interesting potential direction for future work. 
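Returning to the comparison with TD(λ) at the start of this section, one simple reading of that equivalence (our own simplification, not a claim from the paper) is that if the empirical variance of the horizon-i candidate scales like λ^{-i}, then the inverse-variance weights reduce to the exponential TD(λ) weights, as the short sketch below illustrates.

```python
import numpy as np

H, lam = 5, 0.75

# TD(lambda)-style weights: a fixed exponential decay over horizons 0..H
td_lambda_w = lam ** np.arange(H + 1)
td_lambda_w /= td_lambda_w.sum()

# STEVE-style weights: inverse of the empirical variance of each candidate target
variances = (1.0 / lam) ** np.arange(H + 1)          # variance growing geometrically with horizon
steve_w = 1.0 / variances
steve_w /= steve_w.sum()

print(np.allclose(td_lambda_w, steve_w))             # True: the two weightings coincide in this case
```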
7 Conclusion In this work, we demonstrated that STEVE, an uncertainty-aware approach for merging model-free and model-based reinforcement learning, outperforms model-free approaches while reducing sample complexity by an order magnitude on several challenging tasks. We believe that this is a strong step towards enabling RL for practical, real-world applications. Since submitting this manuscript for publication, we have further explored the relationship between STEVE and recent work on overestimation bias [9], and found evidence that STEVE may help to reduce this bias. Other future directions include exploring more complex worldmodels for various tasks, as well as comparing various techniques for calculating uncertainty and estimating bias. Acknowledgments The authors would like to thank the following individuals for their valuable insights and discussion: David Ha, Prajit Ramachandran, Tuomas Haarnoja, Dustin Tran, Matt Johnson, Matt Hoffman, Ishaan Gulrajani, and Sergey Levine. Also, we would like to thank Jascha Sohl-Dickstein, Joseph Antognini, Shane Gu, and Samy Bengio for their feedback during the writing process, and Erwin Coumans for his help on PyBullet enivronments. Finally, we would like to thank our anonymous reviewers for their insightful suggestions.
1. What is the main contribution of the paper regarding sample-efficient RL? 2. What are the strengths of the proposed method, particularly in terms of its ability to improve sample complexity? 3. Do you have any questions or concerns regarding the algorithm's idea of using simulated trajectories to look H steps ahead? 4. How does the proposed method differ from other related methods such as DDPG and MVE? 5. Can you provide more details about the experimental results, such as the high variance in Figure 2 and the performance of STEVE across different values of H? 6. How does the method handle training and test time differences? 7. Can you explain the TD estimate computation and the use of N models being rolled out and converted into the final T_i mean and variance estimates? 8. What is the significance of the ablation study description in Section 4.3, particularly regarding the "additional parameters of the ensembles"? 9. Can you discuss the results shown in Figure 5 and the distribution of weights typically across non-zero timesteps? 10. Are there any limitations or potential drawbacks to the proposed approach that should be considered?
Review
Review # Paper ID 5026 Sample-efficient RL with stochastic ensemble value expansion ## Summary The paper proposes a method to learn a memoryless policy in a sample efficient manner using an ensemble of learned MDP models, policies and Q functions. The main algorithmic idea is a weighted combination of H step temporal differences, estimated on H steps (and rolled out by a learned model of the environment). The underlying idea is to allow the learner to tradeoff between estimation errors in model and Q function in different parts of the state-action space during learning. The updated TD estimator is incorporated into the DDPG algorithm in a straightforward manner. The update is computationally more intensive but the result is improved sample complexity. The experimental results on a variety of continuous control tasks show significant improvement over the baseline DDPG and a related method (MVE) (which is the precursor to this work). Overall, the paper is well written. The empirical results are very promising. The analysis and discussion is a bit limited but is not a major drawback. Overall, there is much to like about the paper. Detailed comments follow. ## Detailed Comments - The main idea is to replace the vanilla temporal difference update with one that uses simulated trajectories to look H steps ahead (followed by the regular TD update on step H+1). The simulated trajectories are generated from an ensemble of M models. Additional ensembles of reward functions and Q functions are maintained during learning. - My understanding is that this additional overhead (M models, N reward functions, L Q-functions) is only required during training. At test time, only the final learned Q function (and policy network) is required (right?). The paper could perhaps better describe the difference between training and test. - In Section 3.1, I'm not sure I completely followed how the mean and variance of the TD estimates at each time step $i$ are computed. I believe the algorithm creates MNL samples on each (s, a) pair visited during training and uses the MNL samples to fit the T_i mean and variance *for that particular (s, a) pair (or (r, s') pair)*. Is this correct? I think Section 3.1 needs to be described much more clearly. A graphical visualization of the N models being rolled out and converted into the final T_i mean and variance estimates may make the learning algorithm easier to visualize. - Assuming I've understood correctly, the proposed algorithm (STEVE) boils down to using the learned models to generate TD estimates at every step $0 \leq i \leq H$. The TD estimates are noisy (due to model errors) so the final TD estimate is a weighted combination of estimated means of the $T_i$, where the weights are larger when the variance in $T_i$ is small. - Overall, the approach makes good intuitive sense to me. The authors provide some theoretical justification of the algorithm. A more rigorous analysis in Section 3.2 would strengthen the paper. - The modified TD update is plugged into the DDPG RL algorithm which uses a pair of neural networks for the Q function and the policy. The paper evaluates the algorithm on a number of continuous control tasks. - The experimental section shows that the results are quite promising. The results show the STEVE algorithm doing significantly better. The baselines are the vanilla DDPG and the MVE variant of DDPG. - In Figure 2, I'd be interested in a discussion of the high variance. Perhaps the caption could include more details about the averaging that was performed? 
Additional baselines (besides DDPG) would also be good to see on the same chart.
- The ablation study description in Section 4.3 was a bit confusing. Why would the "additional parameters of the ensembles" (line 181) matter to the test-time score in Figure 3? This seems to imply that the test-time score depends on the ensemble somehow, but I don't think that's happening here. What am I missing?
- I found Figure 3 to be very interesting. The performance of STEVE across horizons (H = 1/3/5) was particularly interesting, as was the uniform weighting scheme. These results seem somewhat central to the claims of the method. GCP budget permitting, I think the experimental section would be much improved if these graphs were generated for the other control tasks as well. Any preliminary results on these would be interesting to hear about.
- Figure 5 suggests something interesting is happening, but I'm not sure I fully understand the discussion in the paper. Why exactly does the model usage go to zero in the least complex AND the most complex environments? I think it might be useful to analyze and describe these three scenarios (going to zero in an easy environment, going to zero in a complex environment, hovering around 50%) in more detail. Finally, how are the weights typically distributed across non-zero timesteps?
- Overall, I liked the paper. Additional experiments and baselines, further experimental analysis, and a clearer description of the algorithm would significantly strengthen the paper.

Minor points:
- I think the ensemble size ought to be mentioned somewhere in the paper and not only in the appendix.

UPDATE
----------
After reading the other reviews and the author responses, I'm upgrading my evaluation. I congratulate the authors on their fine work.
NIPS
Title Sample-Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion

Abstract Integrating model-free and model-based approaches in reinforcement learning has the potential to achieve the high performance of model-free algorithms with low sample complexity. However, this is difficult because an imperfect dynamics model can degrade the performance of the learning algorithm, and in sufficiently complex environments, the dynamics model will almost always be imperfect. As a result, a key challenge is to combine model-based approaches with model-free learning in such a way that errors in the model do not degrade performance. We propose stochastic ensemble value expansion (STEVE), a novel model-based technique that addresses this issue. By dynamically interpolating between model rollouts of various horizon lengths for each individual example, STEVE ensures that the model is only utilized when doing so does not introduce significant errors. Our approach outperforms model-free baselines on challenging continuous control benchmarks with an order-of-magnitude increase in sample efficiency, and in contrast to previous model-based approaches, performance does not degrade in complex environments.

1 Introduction

Deep model-free reinforcement learning has had great successes in recent years, notably in playing video games [23] and strategic board games [27]. However, training agents using these algorithms requires tens to hundreds of millions of samples, which makes many practical applications infeasible, particularly in real-world control problems (e.g., robotics) where data collection is expensive.

Model-based approaches aim to reduce the number of samples required to learn a policy by modeling the dynamics of the environment. A dynamics model can be used to increase sample efficiency in various ways, including training the policy on rollouts from the dynamics model [28], using rollouts to improve targets for temporal difference (TD) learning [7], and using information gained from rollouts as inputs to the policy [31]. Model-based algorithms such as PILCO [4] have shown that it is possible to learn from orders-of-magnitude fewer samples. These successes have mostly been limited to environments where the dynamics are simple to model. In noisy, complex environments, it is difficult to learn an accurate model of the environment. When the model makes mistakes in this context, it can cause the wrong policy to be learned, hindering performance.

Recent work has begun to address this issue. Kalweit and Boedecker [17] train a model-free algorithm on a mix of real and imagined data, adjusting the proportion in favor of real data as the Q-function becomes more confident. Kurutach et al. [20] train a model-free algorithm on purely imaginary data, but use an ensemble of environment models to avoid overfitting to errors made by any individual model.

We propose stochastic ensemble value expansion (STEVE), an extension to model-based value expansion (MVE) proposed by Feinberg et al. [7]. Both techniques use a dynamics model to compute “rollouts” that are used to improve the targets for temporal difference learning. MVE rolls out a fixed length into the future, potentially accumulating model errors or increasing value estimation error along the way.

∗ This work was completed as part of the Google AI Residency program.
In contrast, STEVE interpolates between many different horizon lengths, favoring those whose estimates have lower uncertainty, and thus lower error. To compute the interpolated target, we replace both the model and Q-function with ensembles, approximating the uncertainty of an estimate by computing its variance under samples from the ensemble. Through these uncertainty estimates, STEVE dynamically utilizes the model rollouts only when they do not introduce significant errors. For illustration, Figure 1 compares the sample efficiency of various algorithms on a tabular toy environment, which shows that STEVE significantly outperforms MVE and TD-learning baselines when the dynamics model is noisy. We systematically evaluate STEVE on several challenging continuous control benchmarks and demonstrate that STEVE significantly outperforms model-free baselines with an order-of-magnitude increase in sample efficiency.

2 Background

Reinforcement learning aims to learn an agent policy that maximizes the expected (discounted) sum of rewards [29]. The agent starts at an initial state $s_0 \sim p(s_0)$, where $p(s_0)$ is the distribution of initial states of the environment. Then, the agent deterministically chooses an action $a_t$ according to its policy $\pi_\phi(s_t)$ with parameters $\phi$, deterministically transitions to a subsequent state $s_{t+1}$ according to the Markovian dynamics $T(s_t, a_t)$ of the environment, and receives a reward $r_t = r(s_t, a_t, s_{t+1})$. This generates a trajectory of states, actions, and rewards, which we abbreviate as $\tau = (s_0, a_0, r_0, s_1, a_1, \ldots)$. If a trajectory reaches a terminal state, it concludes without further transitions or rewards; however, this is optional, and trajectories may instead be infinite in length. The goal is to maximize the expected discounted sum of rewards along sampled trajectories, $J(\theta) = \mathbb{E}_{s_0}\left[\sum_{t=0}^{\infty} \gamma^t r_t\right]$, where $\gamma \in [0, 1)$ is a discount parameter.

2.1 Value Estimation with TD-learning

The action-value function $Q^\pi(s_0, a_0) = \sum_{t=0}^{\infty} \gamma^t r_t$ is a critical quantity to estimate for many learning algorithms. Using the fact that $Q^\pi(s, a)$ satisfies the recursion $Q^\pi(s, a) = r(s, a) + \gamma(1 - d(s'))\,Q^\pi(s', \pi(s'))$, where $s' = T(s, a)$ and $d(s')$ is an indicator function which returns 1 when $s'$ is a terminal state and 0 otherwise, we can estimate $Q^\pi(s, a)$ off-policy with collected transitions of the form $(s, a, r, s')$ sampled uniformly from a replay buffer [29]. We approximate $Q^\pi(s, a)$ with a deep neural network, $\hat{Q}^\pi_\theta(s, a)$, and learn parameters $\theta$ to minimize the mean squared error (MSE) between Q-value estimates of states and their corresponding TD targets:

$\mathcal{T}^{\mathrm{TD}}(r, s') = r + \gamma(1 - d(s'))\,\hat{Q}^\pi_{\theta^-}(s', \pi(s'))$   (1)

$\mathcal{L}_\theta = \mathbb{E}_{(s,a,r,s')}\left[\big(\hat{Q}^\pi_\theta(s, a) - \mathcal{T}^{\mathrm{TD}}(r, s')\big)^2\right]$   (2)

This expectation is taken with respect to transitions sampled from our replay buffer. Note that we use an older copy of the parameters, $\theta^-$, when computing targets [23]. Since we evaluate our method in a continuous action space, it is not possible to compute a policy from our Q-function by simply taking $\max_a \hat{Q}^\pi_\theta(s, a)$. Instead, we use a neural network to approximate this maximization [21], learning a parameterized function $\pi_\phi$ to minimize the negative Q-value:

$\mathcal{L}_\phi = -\hat{Q}^\pi_\theta(s, \pi_\phi(s)).$   (3)

In this work, we use DDPG as the base learning algorithm, but our technique is generally applicable to other methods that use TD objectives.
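To make Equations 1-3 concrete, the following is a minimal NumPy sketch of the one-step TD target in Equation 1 for a batch of transitions. The `q_target_fn` and `policy_fn` callables and the toy data are stand-ins introduced here for illustration; this is not the paper's TensorFlow implementation.

```python
import numpy as np

def td_target(rewards, next_states, terminals, q_target_fn, policy_fn, gamma=0.99):
    """One-step TD targets (Equation 1): r + gamma * (1 - d(s')) * Q_target(s', pi(s'))."""
    next_actions = policy_fn(next_states)                 # pi(s') for each transition
    bootstrap = q_target_fn(next_states, next_actions)    # Q_{theta^-}(s', pi(s'))
    return rewards + gamma * (1.0 - terminals) * bootstrap

# Toy usage with stand-in (hypothetical) function approximators:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    s_next = rng.normal(size=(4, 3))            # batch of 4 next-states, 3-dim observations
    r = rng.normal(size=4)
    d = np.array([0.0, 0.0, 1.0, 0.0])          # third transition ends the episode
    policy = lambda s: np.tanh(s @ rng.normal(size=(3, 2)))       # fake deterministic policy
    q_tgt = lambda s, a: (s.sum(axis=1) + a.sum(axis=1)) * 0.1    # fake target Q-network
    print(td_target(r, s_next, d, q_tgt, policy))
```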
2.2 Model-Based Value Expansion (MVE)

Recently, Feinberg et al. [7] showed that a learned dynamics model can be used to improve value estimation. MVE forms TD targets by combining a short-term value estimate formed by unrolling the model dynamics and a long-term value estimate using the learned $\hat{Q}^\pi_{\theta^-}$ function. When the model is accurate, this reduces the bias of the targets, leading to improved performance.

The learned dynamics model consists of three learned functions: the transition function $\hat{T}_\xi(s, a)$, which returns a successor state $s'$; a termination function $\hat{d}_\xi(s)$, which returns the probability that $s$ is a terminal state; and the reward function $\hat{r}_\psi(s, a, s')$, which returns a scalar reward. This model is trained to minimize

$\mathcal{L}_{\xi,\psi} = \mathbb{E}_{(s,a,r,s')}\left[\,\|\hat{T}_\xi(s, a) - s'\|^2 + \mathcal{H}\big(d(s'), \hat{d}_\xi(\hat{T}_\xi(s, a))\big) + \big(\hat{r}_\psi(s, a, s') - r\big)^2\,\right],$   (4)

where the expectation is over collected transitions $(s, a, r, s')$, and $\mathcal{H}$ is the cross-entropy. In this work, we consider continuous environments; for discrete environments, the first term can be replaced by a cross-entropy loss term.

To incorporate the model into value estimation, Feinberg et al. [7] replace the standard Q-learning target with an improved target, $\mathcal{T}^{\mathrm{MVE}}_H$, computed by rolling the learned model out for H steps:

$s'_0 = s', \qquad a'_i = \pi_\phi(s'_i), \qquad s'_i = \hat{T}_\xi(s'_{i-1}, a'_{i-1}), \qquad D^i = (1 - d(s'))\prod_{j=1}^{i-1}\big(1 - \hat{d}_\xi(s'_j)\big)$   (5)

$\mathcal{T}^{\mathrm{MVE}}_H(r, s') = r + \left(\sum_{i=1}^{H} D^i \gamma^i\, \hat{r}_\psi(s'_{i-1}, a'_{i-1}, s'_i)\right) + D^{H+1}\gamma^{H+1}\hat{Q}^\pi_{\theta^-}(s'_H, a'_H).$   (6)

To use this target, we substitute $\mathcal{T}^{\mathrm{MVE}}_H$ in place of $\mathcal{T}^{\mathrm{TD}}$ when training $\theta$ using Equation 2 (see footnote 2 below). Note that when H = 0, MVE reduces to TD-learning (i.e., $\mathcal{T}^{\mathrm{TD}} = \mathcal{T}^{\mathrm{MVE}}_0$). When the model is perfect and the learned Q-function has similar bias on all states and actions, Feinberg et al. [7] show that the MVE target with rollout horizon H will decrease the target error by a factor of $\gamma^{2H}$. Errors in the learned model can lead to worse targets, so in practice, we must tune H to balance between the errors in the model and the Q-function estimates.

An additional challenge is that the bias in the learned Q-function is not uniform across states and actions [7]. In particular, they find that the bias in the Q-function on states sampled from the replay buffer is lower than when the Q-function is evaluated on states generated from model rollouts. They term this the distribution mismatch problem and propose the TD-k trick as a solution; see Appendix B for further discussion of this trick.

While the results of Feinberg et al. [7] are promising, they rely on task-specific tuning of the rollout horizon H. This sensitivity arises from the difficulty of modeling the transition dynamics and the Q-function, which are task-specific and may change throughout training as the policy explores different parts of the state space. Complex environments require a much smaller rollout horizon H, which limits the effectiveness of the approach (e.g., Feinberg et al. [7] used H = 10 for HalfCheetah-v1, but had to reduce to H = 3 on Walker2d-v1). Motivated by this limitation, we propose an approach that balances model error and Q-function error by dynamically adjusting the rollout horizon.

Footnote 2: This formulation is a minor generalization of the original MVE objective in that we additionally model the reward function and termination function; Feinberg et al. [7] consider “fully observable” environments in which the reward function and termination condition were known, deterministic functions of the observations. Because we use a function approximator for the termination condition, we compute the accumulated probability of termination, $D^i$, at every timestep, and use this value to discount future returns.
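The H-step MVE target of Equations 5 and 6 can be sketched as a simple rollout loop. The sketch below is illustrative only: `model`, `term_fn`, `reward_fn`, `policy`, and `q_target` are hypothetical callables standing in for the learned networks, and the bookkeeping of the accumulated non-termination probability follows the reconstruction above.

```python
def mve_target(r, s_next, d_next, model, term_fn, reward_fn, policy, q_target, H, gamma=0.99):
    """H-step MVE target (Equations 5-6) for a single transition (s, a, r, s')."""
    states = [s_next]                       # s'_0 = s'
    rewards, not_term = [], []
    for i in range(1, H + 1):
        a_prev = policy(states[-1])         # a'_{i-1} = pi(s'_{i-1})
        s_i = model(states[-1], a_prev)     # s'_i = T_hat(s'_{i-1}, a'_{i-1})
        rewards.append(reward_fn(states[-1], a_prev, s_i))
        not_term.append(1.0 - term_fn(s_i))  # (1 - d_hat(s'_i))
        states.append(s_i)
    target = r
    D = 1.0 - d_next                        # D^1 = (1 - d(s'))
    for i in range(1, H + 1):
        target += D * gamma**i * rewards[i - 1]
        D *= not_term[i - 1]                # accumulate (1 - d_hat(s'_i)) to form D^{i+1}
    a_H = policy(states[H])
    target += D * gamma**(H + 1) * q_target(states[H], a_H)
    return target
```

With H = 0 the loops do not execute and the function returns r + gamma * (1 - d(s')) * Q(s', pi(s')), matching the statement that MVE with horizon zero reduces to the one-step TD target.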
3 Stochastic Ensemble Value Expansion

From a single rollout of H timesteps, we can compute H + 1 distinct candidate targets by considering rollouts of various horizon lengths: $\mathcal{T}^{\mathrm{MVE}}_0, \mathcal{T}^{\mathrm{MVE}}_1, \mathcal{T}^{\mathrm{MVE}}_2, \ldots, \mathcal{T}^{\mathrm{MVE}}_H$. Standard TD learning uses $\mathcal{T}^{\mathrm{MVE}}_0$ as the target, while MVE uses $\mathcal{T}^{\mathrm{MVE}}_H$ as the target. We propose interpolating all of the candidate targets to produce a target which is better than any individual. Conventionally, one could average the candidate targets, or weight the candidate targets in an exponentially-decaying fashion, similar to TD(λ) [29]. However, we show that we can do still better by weighting the candidate targets in a way that balances errors in the learned Q-function and errors from longer model rollouts. STEVE provides a computationally-tractable and theoretically-motivated algorithm for choosing these weights. We describe the algorithm for STEVE in Section 3.1, and justify it in Section 3.2.

3.1 Algorithm

To estimate uncertainty in our learned estimators, we maintain an ensemble of parameters for our Q-function, reward function, and model: $\theta = \{\theta_1, \ldots, \theta_L\}$, $\psi = \{\psi_1, \ldots, \psi_N\}$, and $\xi = \{\xi_1, \ldots, \xi_M\}$, respectively. Each parameterization is initialized independently and trained on different subsets of the data in each minibatch.

We roll out an H-step trajectory with each of the M models, $\tau^{\xi_1}, \ldots, \tau^{\xi_M}$. Each trajectory consists of H + 1 states, $\tau^{\xi_m}_0, \ldots, \tau^{\xi_m}_H$, which correspond to $s'_0, \ldots, s'_H$ in Equation 5 with the transition function parameterized by $\xi_m$. Similarly, we use the N reward functions and L Q-functions to evaluate Equation 6 for each $\tau^{\xi_m}$ at every rollout length $0 \leq i \leq H$. This gives us $M \cdot N \cdot L$ different values of $\mathcal{T}^{\mathrm{MVE}}_i$ for each rollout length i. See Figure 2 for a visualization of this process.

Using these values, we can compute the empirical mean $\mathcal{T}^\mu_i$ and variance $\mathcal{T}^{\sigma^2}_i$ for each partial rollout of length i. In order to form a single target, we use an inverse-variance weighting of the means:

$\mathcal{T}^{\mathrm{STEVE}}_H(r, s') = \sum_{i=0}^{H} \frac{\tilde{w}_i}{\sum_j \tilde{w}_j}\, \mathcal{T}^\mu_i, \qquad \tilde{w}_i^{-1} = \mathcal{T}^{\sigma^2}_i$   (7)

To learn a value function with STEVE, we substitute $\mathcal{T}^{\mathrm{STEVE}}_H$ in place of $\mathcal{T}^{\mathrm{TD}}$ when training $\theta$ using Equation 2.

3.2 Derivation

We wish to find weights $w_i$, where $\sum_i w_i = 1$, that minimize the mean-squared error between the weighted average of candidate targets $\mathcal{T}^{\mathrm{MVE}}_0, \mathcal{T}^{\mathrm{MVE}}_1, \mathcal{T}^{\mathrm{MVE}}_2, \ldots, \mathcal{T}^{\mathrm{MVE}}_H$ and the true Q-value:

$\mathbb{E}\left[\left(\sum_{i=0}^{H} w_i \mathcal{T}^{\mathrm{MVE}}_i - Q^\pi(s, a)\right)^2\right] = \mathrm{Bias}\left(\sum_i w_i \mathcal{T}^{\mathrm{MVE}}_i\right)^2 + \mathrm{Var}\left(\sum_i w_i \mathcal{T}^{\mathrm{MVE}}_i\right) \approx \mathrm{Bias}\left(\sum_i w_i \mathcal{T}^{\mathrm{MVE}}_i\right)^2 + \sum_i w_i^2\, \mathrm{Var}(\mathcal{T}^{\mathrm{MVE}}_i),$

where the expectation considers the candidate targets as random variables conditioned on the collected data and minibatch sampling noise, and the approximation is due to assuming the candidate targets are independent (see footnote 3 below). Our goal is to minimize this with respect to $w_i$.

We can estimate the variance terms using empirical variance estimates from the ensemble. Unfortunately, we could not devise a reliable estimator for the bias terms, and this is a limitation of our approach and an area for future work. In this work, we ignore the bias terms and minimize the weighted sum of variances $\sum_i w_i^2\, \mathrm{Var}(\mathcal{T}^{\mathrm{MVE}}_i)$. With this approximation, which is equivalent to inverse-variance weighting [8], we achieve state-of-the-art results. Setting each $w_i$ equal to $\frac{1}{\mathrm{Var}(\mathcal{T}^{\mathrm{MVE}}_i)}$ and normalizing yields the formula for $\mathcal{T}^{\mathrm{STEVE}}_H$ given in Equation 7.

Footnote 3: Initial experiments suggested that omitting the covariance cross terms provided significant computational speedups at the cost of a slight performance degradation. As a result, we omitted the terms in the rest of the experiments.
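A minimal sketch of the inverse-variance combination in Equation 7: given the M·N·L ensemble estimates of each candidate target for one transition, compute their empirical means and variances and form the weighted average. The small `eps` stabilizer is an assumption added here to avoid division by zero; it is not part of the equation.

```python
import numpy as np

def steve_target(candidate_targets, eps=1e-8):
    """Combine candidate MVE targets via inverse-variance weighting (Equation 7).

    candidate_targets: array of shape (n_ensemble_samples, H + 1), where column i
    holds the M*N*L ensemble estimates of T^MVE_i for one transition.
    """
    means = candidate_targets.mean(axis=0)          # T^mu_i
    variances = candidate_targets.var(axis=0)       # T^sigma^2_i
    w = 1.0 / (variances + eps)                     # w_i proportional to 1 / Var(T^MVE_i)
    return float(np.sum(w * means) / np.sum(w))

# Toy check: a low-variance long-horizon estimate should dominate a noisy short one.
samples = np.stack([
    np.random.default_rng(0).normal(5.0, 2.0, size=27),   # T^MVE_0: noisy
    np.random.default_rng(1).normal(6.0, 0.1, size=27),   # T^MVE_1: confident
], axis=1)
print(steve_target(samples))   # lands close to 6.0
```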
3.3 Note on ensembles

This technique for calculating uncertainty estimates is applicable to any family of models from which we can sample. For example, we could train a Bayesian neural network for each model [22], or use dropout as a Bayesian approximation by resampling the dropout masks each time we wish to sample a new model [10]. These options could potentially give better diversity of samples from the family, and thus better uncertainty estimates; exploring them further is a promising direction for future work. However, we found that these methods degraded the accuracy of the base models. An ensemble is far easier to train, and so we focus on that in this work. This is a common choice, as the use of ensembles for uncertainty estimation in deep reinforcement learning has seen wide adoption in the literature. It was first proposed by Osband et al. [25] as a technique to improve exploration, and subsequent work showed that this approach gives a good estimate of the uncertainty of both value functions [17] and models [20].

4 Experiments

4.1 Implementation

We use DDPG [21] as our baseline model-free algorithm. We train two deep feedforward neural networks, a Q-function network $\hat{Q}^\pi_\theta(s, a)$ and a policy network $\pi_\phi(s)$, by minimizing the loss functions given in Equations 2 and 3. We also train another three deep feedforward networks to represent our world model, corresponding to function approximators for the transition $\hat{T}_\xi(s, a)$, termination $\hat{d}_\xi(t \mid s)$, and reward $\hat{r}_\psi(s, a, s')$, and minimize the loss function given in Equation 4.

When collecting rollouts for evaluation, we simply take the action selected by the policy, $\pi_\phi(s)$, at every state s. (Note that only the policy is required at test time, not the ensembles of Q-functions, dynamics models, or reward models.) Each run was evaluated after every 500 updates by computing the mean total episode reward (referred to as score) across many environment restarts. To produce the lines in Figures 3, 4, and 5, these evaluation results were downsampled by splitting the domain into non-overlapping regions and computing the mean score within each region across several runs. The shaded area shows one standard deviation of scores in the region as defined above. When collecting rollouts for our replay buffer, we do ε-greedy exploration: with probability ε, we select a random action by adding Gaussian noise to the pre-tanh policy action.

All algorithms were implemented in TensorFlow [1]. We use a distributed implementation to parallelize computation. In the style of Ape-X [16], IMPALA [6], and D4PG [2], we use a centralized learner with several agents operating in parallel. Each agent periodically loads the most recent policy, interacts with the environment, and sends its observations to the central learner. The learner stores received frames in a replay buffer, and continuously loads batches of frames from this buffer to use as training data for a model update. In the algorithms with a model-based component, there are two learners: a policy-learner and a model-learner. In these cases, the policy-learner periodically reloads the latest copy of the model.
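A small sketch of the exploration rule described above, assuming the standard DDPG-style tanh squashing of the pre-tanh action; the `epsilon` and `noise_std` values here are illustrative, not the paper's hyperparameters.

```python
import numpy as np

def exploration_action(pre_tanh_action, epsilon=0.1, noise_std=0.3, rng=None):
    """With probability epsilon, perturb the pre-tanh policy action with Gaussian noise
    before squashing it into the valid action range (Section 4.1 exploration rule)."""
    rng = rng or np.random.default_rng()
    a = np.asarray(pre_tanh_action, dtype=float)
    if rng.random() < epsilon:
        a = a + rng.normal(scale=noise_std, size=a.shape)
    return np.tanh(a)   # assumed DDPG-style tanh output squashing

print(exploration_action([0.2, -1.5], epsilon=1.0, rng=np.random.default_rng(0)))
```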
All baselines reported in this section were re-implementations of existing methods. This allowed us to ensure that the various methods compared were consistent with one another, and that the differences reported are fully attributable to the independent variables in question. Our baselines are competitive with state-of-the-art implementations of these algorithms [7, 14]. All MVE experiments utilize the TD-k trick. For hyperparameters and additional implementation details, please see Appendix C (footnote 4).

4.2 Comparison of Performance

We evaluated STEVE on a variety of continuous control tasks [3, 19]; we plot learning curves in Figure 3. We found that STEVE yields significant improvements in both performance and sample efficiency across a wide range of environments. Importantly, the gains are most substantial in the complex environments. On the most challenging environments (Humanoid-v1, RoboschoolHumanoid-v1, RoboschoolHumanoidFlagrun-v1, and BipedalWalkerHardcore-v2), STEVE is the only algorithm to show significant learning within 5M frames.

4.3 Ablation Study

In order to verify that STEVE's gains in sample efficiency are due to the reweighting, and not simply due to the additional parameters of the ensembles of its components, we examine several ablations. Ensemble MVE is the regular MVE algorithm, but the model and Q-functions are replaced with ensembles. Mean-MVE uses the exact same architecture as STEVE, but uses a simple uniform weighting instead of the uncertainty-aware reweighting scheme. Similarly, TDL25 and TDL75 correspond to TD(λ) reweighting schemes with λ = 0.25 and λ = 0.75, respectively. COV-STEVE is a version of STEVE which includes the covariances between candidate targets when computing the weights (see Section 3.2). We also investigate the effect of the horizon parameter on the performance of both STEVE and MVE. These results are shown in Figure 4.

All of these variants show the same trend: fast initial gains, which quickly taper off and are overtaken by the baseline. STEVE is the only variant to converge faster and higher than the baseline; this provides strong evidence that the gains come specifically from the uncertainty-aware reweighting of targets. Additionally, we find that increasing the rollout horizon increases the sample efficiency of STEVE, even though the dynamics model for Humanoid-v1 has high error.

4.4 Wall-Clock Comparison

In the previous experiments, we synchronized data collection, policy updates, and model updates. However, when we run these steps asynchronously, we can reduce the wall-clock time at the risk of instability. To evaluate this configuration, we compare DDPG, MVE-DDPG, and STEVE-DDPG on Humanoid-v1 and RoboschoolHumanoidFlagrun-v1. Both were trained on a P100 GPU and had 8 CPUs collecting data; STEVE-DDPG additionally used a second P100 to learn a model in parallel. We plot reward as a function of wall-clock time for these tasks in Figure 5. STEVE-DDPG learns more quickly on both tasks; it achieves a higher reward than DDPG and MVE-DDPG on Humanoid-v1 and performs comparably to DDPG on RoboschoolHumanoidFlagrun-v1. Moreover, in future work, STEVE could be accelerated by parallelizing training of each component of the ensemble.

5 Discussion

Our primary experiments (Section 4.2) show that STEVE greatly increases sample efficiency relative to baselines, matching or outperforming both MVE-DDPG and DDPG baselines on every task. STEVE also outperforms other recently published results on these tasks in terms of sample efficiency [13, 14, 26].

Footnote 4: Our code is available open-source at: https://github.com/tensorflow/models/tree/master/research/steve
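For reference, the weighting schemes compared in the ablation (Section 4.3) can be sketched as follows. The exact normalization used for the TDL25/TDL75 variants is not spelled out in the text, so the exponentially decaying weights below are an illustrative assumption.

```python
import numpy as np

def uniform_weights(H):
    """Mean-MVE ablation: all H + 1 candidate targets weighted equally."""
    return np.full(H + 1, 1.0 / (H + 1))

def td_lambda_weights(H, lam):
    """TDL-style ablation: exponentially decaying weights over candidate targets."""
    w = lam ** np.arange(H + 1)
    return w / w.sum()

def inverse_variance_weights(variances, eps=1e-8):
    """STEVE: weights proportional to the inverse of each candidate target's variance."""
    w = 1.0 / (np.asarray(variances, dtype=float) + eps)
    return w / w.sum()

H = 3
print(uniform_weights(H))
print(td_lambda_weights(H, lam=0.25), td_lambda_weights(H, lam=0.75))
print(inverse_variance_weights([0.1, 0.5, 2.0, 8.0]))
```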
Our ablation studies (Section 4.3) support the hypothesis that the increased performance is due to the uncertainty-dependent reweighting of targets, as well as demonstrate that the performance of STEVE consistently increases with longer horizon lengths, even in complex environments. Finally, our wall-clock experiments (Section 4.4) demonstrate that in spite of the additional computation per epoch, the gains in sample efficiency are enough that it is competitive with model-free algorithms in terms of wall-clock time. The speed gains associated with improved sample efficiency will only be exacerbated as samples become more expensive to collect, making STEVE a promising choice for applications involving real-world interaction. Given that the improvements stem from the dynamic reweighting between horizon lengths, it may be interesting to examine the choices that the model makes about which candidate targets to favor most heavily. In Figure 6, we plot the average model usage over the course of training. Intriguingly, most of the lines seem to remain stable at around 50% usage, with two notable exceptions: Humanoid-v1, the most complex environment tested (with an observation-space of size 376); and Swimmer-v1, the least complex environment tested (with an observation-space of size 8). This supports the hypothesis that STEVE is trading off between Q-function bias and model bias; it chooses to ignore the model almost immediately when the environment is too complex to learn, and gradually ignores the model as the Q-function improves if an optimal environment model is learned quickly. 6 Related Work Sutton and Barto [29] describe TD(λ), a family of Q-learning variants in which targets from multiple timesteps are merged via exponentially decay. STEVE is similar in that it is also computing a weighted average between targets. However, our approach is significantly more powerful because it adapts the weights to the specific characteristics of each individual rollout, rather than being constant between examples and throughout training. Our approach can be thought of as a generalization of TD(λ), in that the two approaches are equivalent in the specific case where the overall uncertainty grows exponentially at rate λ at every timestep. Munos et al. [24] propose Retrace(λ), a low-variance method for off-policy Q-learning. Retrace(λ) is an off-policy correction method, so it learns from n-step off-policy data by multiplying each term of the loss by a correction coefficient, the trace, in order to re-weight the data distribution to look more like the on-policy distribution. Specifically, at each timestep, Retrace(λ) updates the coefficient for that term by multiplying it by λmin(1, π(as|xs)µ(as|xs) ). Similarly to TD(λ), the λ parameter corresponds to an exponential decay of the weighting of potential targets. STEVE approximates this weighting in a more complex way, and additionally learns a predictive model of the environment (under which on-policy rollouts are possible) instead of using off-policy correction terms to re-weight real off-policy rollouts. Heess et al. [15] describe stochastic value gradient (SVG) methods, which are a general family of hybrid model-based/model-free control algorithms. By re-parameterizing distributions to separate out the noise, SVG is able to learn stochastic continuous control policies in stochastic environments. STEVE currently operates only with deterministic policies and environments, but this is a promising direction for future work. Kurutach et al. 
[20] propose model-ensemble trust-region policy optimization (ME-TRPO), which is motivated similarly to this work in that they also propose an algorithm which uses an ensemble of models to mitigate the deleterious effects of model bias. However, the algorithm is quite different. METRPO is a purely model-based policy-gradient approach, and uses the ensemble to avoid overfitting to any one model. In contrast, STEVE interpolates between model-free and model-based estimates, uses a value-estimation approach, and uses the ensemble to explicitly estimate uncertainty. Kalweit and Boedecker [17] train on a mix of real and imagined rollouts, and adjust the ratio over the course of training by tying it to the variance of the Q-function. Similarly to our work, this variance is computed via an ensemble. However, they do not adapt to the uncertainty of individual estimates, only the overall ratio of real to imagined data. Additionally, they do not take into account model bias, or uncertainty in model predictions. Weber et al. [31] use rollouts generated by the dynamics model as inputs to the policy function, by “summarizing” the outputs of the rollouts with a deep neural network. This second network allows the algorithm to implicitly calculate uncertainty over various parts of the rollout and use that information when making its decision. However, I2A has only been evaluated on discrete domains. Additionally, the lack of explicit model use likely tempers the sample-efficiency benefits gained relative to more traditional model-based learning. Gal et al. [11] use a deep neural network in combination with the PILCO algorithm [4] to do sampleefficient reinforcement learning. They demonstrate good performance on the continuous-control task of cartpole swing-up. They model uncertainty in the learned neural dynamics function using dropout as a Bayesian approximation, and provide evidence that maintaining these uncertainty estimates is very important for model-based reinforcement learning. Depeweg et al. [5] use a Bayesian neural network as the environment model in a policy search setting, learning a policy purely from imagined rollouts. This work also demonstrates that modeling uncertainty is important for model-based reinforcement learning with neural network models, and that uncertainty-aware models can escape many common pitfalls. Gu et al. [12] propose a continuous variant of Q-learning known as normalized advantage functions (NAF), and show that learning using NAF can be accelerated by using a model-based component. They use a variant of Dyna-Q [28], augmenting the experience available to the model-free learner with imaginary on-policy data generated via environment rollouts. They use an iLQG controller and a learned locally-linear model to plan over small, easily-modelled regions of the environment, but find that using more complex neural network models of the environment can yield errors. Thomas et al. [30] define the Ω-return, an alternative to the λ-return that accounts for the variance of, and correlations between, predicted returns at multiple timesteps. Similarly to STEVE, the target used is an unbiased linear combination of returns with minimum variance. However, the Ω-return is not directly computable in non-tabular state spaces, and does n-step off-policy learning rather than learn a predictive model of the environment. Drawing a theoretical connection between the STEVE algorithm and the Ω-return is an interesting potential direction for future work. 
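The claim earlier in this section that TD(λ) is the special case in which the overall uncertainty grows exponentially at rate λ can be checked directly. The variance model below is an assumption introduced only for this illustration.

```latex
% Assume the candidate-target variance grows geometrically with rollout length:
%   Var(T^MVE_i) = c / lambda^i  for some constant c and lambda in (0, 1).
% Then STEVE's normalized inverse-variance weights reduce to exponential decay:
\[
w_i \;=\; \frac{1/\operatorname{Var}(\mathcal{T}^{\mathrm{MVE}}_i)}
               {\sum_{j=0}^{H} 1/\operatorname{Var}(\mathcal{T}^{\mathrm{MVE}}_j)}
     \;=\; \frac{\lambda^{i}}{\sum_{j=0}^{H} \lambda^{j}}
     \;=\; \frac{(1-\lambda)\,\lambda^{i}}{1-\lambda^{H+1}}
     \;\xrightarrow{\,H\to\infty\,}\; (1-\lambda)\,\lambda^{i},
\]
% which is exactly the exponentially decaying weighting pattern that TD(lambda)
% places on its candidate returns.
```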
7 Conclusion

In this work, we demonstrated that STEVE, an uncertainty-aware approach for merging model-free and model-based reinforcement learning, outperforms model-free approaches while reducing sample complexity by an order of magnitude on several challenging tasks. We believe that this is a strong step towards enabling RL for practical, real-world applications. Since submitting this manuscript for publication, we have further explored the relationship between STEVE and recent work on overestimation bias [9], and found evidence that STEVE may help to reduce this bias. Other future directions include exploring more complex world models for various tasks, as well as comparing various techniques for calculating uncertainty and estimating bias.

Acknowledgments

The authors would like to thank the following individuals for their valuable insights and discussion: David Ha, Prajit Ramachandran, Tuomas Haarnoja, Dustin Tran, Matt Johnson, Matt Hoffman, Ishaan Gulrajani, and Sergey Levine. Also, we would like to thank Jascha Sohl-Dickstein, Joseph Antognini, Shane Gu, and Samy Bengio for their feedback during the writing process, and Erwin Coumans for his help on PyBullet environments. Finally, we would like to thank our anonymous reviewers for their insightful suggestions.
1. What is the main contribution of the paper in reinforcement learning?
2. What are the strengths of the proposed algorithm, particularly in comparison to DDPG and MVE?
3. What are the weaknesses of the paper regarding theoretical justification and omitted factors?
4. How could the presentation be improved regarding explanations and clarity?
5. What are the limitations of the proposed method in terms of computational requirements?
Review
Review This paper presents an algorithm for reinforcement learning through an improved fusion of model-free and model-based RL. It builds on the model-based value expansion (MVE) algorithm by introducing dynamically-weighted rollout horizons and learned reward and termination functions. The algorithm improves significantly on DDPG in complex simulated environments with better sample efficiency.

The work is compelling, showing a marked, useful improvement over DDPG and MVE. The ablation study is good, although it would be even better to test more than one environment. It is somewhat concerning that the theoretical justification suggests both bias and variance should be taken into account, but bias is ignored because a good estimate was not found. The results with only variance are good enough to justify the omission, but it would be more reassuring if some sense of the magnitude of the bias were given.

The paper is well-organized. It builds up to STEVE in a logical sequence and explains STEVE itself quite clearly. A few places for improvement:
- In line 22, the abbreviation 'TD' is used without being defined as temporal difference.
- It is not explained until much later what 'H' means in the legend of figure 1.
- In line 50, what is 'p' in 'p(s_0)'?
- Before or after equations 3 and 4, it would be helpful to briefly explain in words the function of D.
- In figure 2, different algorithms have different numbers of steps for the same environment (the green DDPG line is longer than the red MVE line, which is longer than the blue STEVE line). It would be good to have them all the same length or to explain why they are not.
- Please avoid using red and green in plots, as they are difficult for colorblind people to distinguish.

Related work is thoroughly cited, and STEVE is unique in adjusting each estimate's weight based on uncertainty. STEVE shows significantly improved adaptability to different environments, as well as good sample efficiency and an improved learning rate for some environments. It appears to be a promising algorithm, although the amount of compute necessary may limit its audience.
NIPS
Title Sample-Efficient Reinforcement Learning with Stochastic Ensemble Value Expansion Abstract Integrating model-free and model-based approaches in reinforcement learning has the potential to achieve the high performance of model-free algorithms with low sample complexity. However, this is difficult because an imperfect dynamics model can degrade the performance of the learning algorithm, and in sufficiently complex environments, the dynamics model will almost always be imperfect. As a result, a key challenge is to combine model-based approaches with model-free learning in such a way that errors in the model do not degrade performance. We propose stochastic ensemble value expansion (STEVE), a novel model-based technique that addresses this issue. By dynamically interpolating between model rollouts of various horizon lengths for each individual example, STEVE ensures that the model is only utilized when doing so does not introduce significant errors. Our approach outperforms model-free baselines on challenging continuous control benchmarks with an order-of-magnitude increase in sample efficiency, and in contrast to previous model-based approaches, performance does not degrade in complex environments. 1 Introduction Deep model-free reinforcement learning has had great successes in recent years, notably in playing video games [23] and strategic board games [27]. However, training agents using these algorithms requires tens to hundreds of millions of samples, which makes many practical applications infeasible, particularly in real-world control problems (e.g., robotics) where data collection is expensive. Model-based approaches aim to reduce the number of samples required to learn a policy by modeling the dynamics of the environment. A dynamics model can be used to increase sample efficiency in various ways, including training the policy on rollouts from the dynamics model [28], using rollouts to improve targets for temporal difference (TD) learning [7], and using information gained from rollouts as inputs to the policy [31]. Model-based algorithms such as PILCO [4] have shown that it is possible to learn from orders-of-magnitude fewer samples. These successes have mostly been limited to environments where the dynamics are simple to model. In noisy, complex environments, it is difficult to learn an accurate model of the environment. When the model makes mistakes in this context, it can cause the wrong policy to be learned, hindering performance. Recent work has begun to address this issue. Kalweit and Boedecker [17] train a model-free algorithm on a mix of real and imagined data, adjusting the proportion in favor of real data as the Q-function becomes more confident. Kurutach et al. [20] train a model-free algorithm on purely imaginary data, but use an ensemble of environment models to avoid overfitting to errors made by any individual model. ∗This work was completed as part of the Google AI Residency program. 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), Montréal, Canada. We propose stochastic ensemble value expansion (STEVE), an extension to model-based value expansion (MVE) proposed by Feinberg et al. [7]. Both techniques use a dynamics model to compute “rollouts” that are used to improve the targets for temporal difference learning. MVE rolls out a fixed length into the future, potentially accumulating model errors or increasing value estimation error along the way. 
In contrast, STEVE interpolates between many different horizon lengths, favoring those whose estimates have lower uncertainty, and thus lower error. To compute the interpolated target, we replace both the model and Q-function with ensembles, approximating the uncertainty of an estimate by computing its variance under samples from the ensemble. Through these uncertainty estimates, STEVE dynamically utilizes the model rollouts only when they do not introduce significant errors. For illustration, Figure 1 compares the sample efficiency of various algorithms on a tabular toy environment, which shows that STEVE significantly outperforms MVE and TD-learning baselines when the dynamics model is noisy. We systematically evaluate STEVE on several challenging continuous control benchmarks and demonstrate that STEVE significantly outperforms model-free baselines with an order-of-magnitude increase in sample efficiency. 2 Background Reinforcement learning aims to learn an agent policy that maximizes the expected (discounted) sum of rewards [29]. The agent starts at an initial state s0 ∼ p(s0), where p(s0) is the distribution of initial states of the environment. Then, the agent deterministically chooses an action at according to its policy πφ(st) with parameters φ, deterministically transitions to a subsequent state st+1 according to the Markovian dynamics T (st, at) of the environment, and receives a reward rt = r(st, at, st+1). This generates a trajectory of states, actions, and rewards τ = (s0, a0, r0, s1, a1, . . .). If a trajectory reaches a terminal state, it concludes without further transitions or rewards; however, this is optional, and trajectories may instead be infinite in length. We abbreviate the trajectory by τ . The goal is to maximize the expected discounted sum of rewards along sampled trajectories J(θ) = Es0 [ ∑∞ t=0 γ trt] where γ ∈ [0, 1) is a discount parameter. 2.1 Value Estimation with TD-learning The action-value functionQπ(s0, a0) = ∑∞ t=0 γ trt is a critical quantity to estimate for many learning algorithms. Using the fact that Qπ(s, a) satisfies a recursion relation Qπ(s, a) = r(s, a) + γ(1− d(s′))Qπ(s′, π(s′)), where s′ = T (s, a) and d(s′) is an indicator function which returns 1 when s′ is a terminal state and 0 otherwise. We can estimate Qπ(s, a) off-policy with collected transitions of the form (s, a, r, s′) sampled uniformly from a replay buffer [29]. We approximate Qπ(s, a) with a deep neural network, Q̂πθ (s, a). We learn parameters θ to minimize the mean squared error (MSE) between Q-value estimates of states and their corresponding TD targets: T TD(r, s′) = r + γ(1− d(s′))Q̂πθ−(s ′, π(s′)) (1) Lθ = E(s,a,r,s′) [ (Q̂πθ (s, a)− T TD(r, s′))2 ] (2) This expectation is taken with respect to transitions sampled from our replay buffer. Note that we use an older copy of the parameters, θ−, when computing targets [23]. Since we evaluate our method in a continuous action space, it is not possible to compute a policy from our Q-function by simply taking maxa Q̂πθ (s, a). Instead, we use a neural network to approximate this maximization function [21], by learning a parameterized function πφ to minimize the negative Q-value: Lφ = −Q̂πθ (s, πφ(s)). (3) In this work, we use DDPG as the base learning algorithm, but our technique is generally applicable to other methods that use TD objectives. 2.2 Model-Based Value Expansion (MVE) Recently, Feinberg et al. [7] showed that a learned dynamics model can be used to improve value estimation. 
MVE forms TD targets by combining a short term value estimate formed by unrolling the model dynamics and a long term value estimate using the learned Q̂πθ− function. When the model is accurate, this reduces the bias of the targets, leading to improved performance. The learned dynamics model consists of three learned functions: the transition function T̂ξ(s, a), which returns a successor state s′; a termination function d̂ξ(s), which returns the probability that s is a terminal state; and the reward function r̂ψ(s, a, s′), which returns a scalar reward. This model is trained to minimize Lξ,ψ = E(s,a,r,s′) [ ||T̂ξ(s, a)− s′||2 + H ( d(s′), d̂ξ(T̂ξ(s, a)) ) + (r̂ψ(s, a, s ′)− r)2 ] , (4) where the expectation is over collected transitions (s, a, r, s′), and H is the cross-entropy. In this work, we consider continuous environments; for discrete environments, the first term can be replaced by a cross-entropy loss term. To incorporate the model into value estimation, Feinberg et al. [7] replace the standard Q-learning target with an improved target, T MVEH , computed by rolling the learned model out for H steps. s′0 = s ′, a′i = πφ(s ′ i), s ′ i = T̂ξ(s ′ i−1, a ′ i−1), D i = d(s′) i∏ j=1 (1− d̂ξ(s′j)) (5) T MVEH (r, s′) = r + ( H∑ i=1 Diγir̂ψ(s ′ i−1, a ′ i−1, s ′ i) ) +DH+1γH+1Q̂πθ−(s ′ H , a ′ H). (6) To use this target, we substitute T MVEH in place of T TD when training θ using Equation 2.2 Note that when H = 0, MVE reduces to TD-learning (i.e., T TD = T MVE0 ). When the model is perfect and the learned Q-function has similar bias on all states and actions, Feinberg et al. [7] show that the MVE target with rollout horizon H will decrease the target error by a factor of γ2H . Errors in the learned model can lead to worse targets, so in practice, we must tune H to balance between the errors in the model and the Q-function estimates. An additional challenge is that the bias in the learned Q-function is not uniform across states and actions [7]. In particular, 2This formulation is a minor generalization of the original MVE objective in that we additionally model the reward function and termination function; Feinberg et al. [7] consider “fully observable” environments in which the reward function and termination condition were known, deterministic functions of the observations. Because we use a function approximator for the termination condition, we compute the accumulated probability of termination, Di, at every timestep, and use this value to discount future returns. they find that the bias in the Q-function on states sampled from the replay buffer is lower than when the Q-function is evaluated on states generated from model rollouts. They term this the distribution mismatch problem and propose the TD-k trick as a solution; see Appendix B for further discussion of this trick. While the results of Feinberg et al. [7] are promising, they rely on task-specific tuning of the rollout horizon H . This sensitivity arises from the difficulty of modeling the transition dynamics and the Qfunction, which are task-specific and may change throughout training as the policy explores different parts of the state space. Complex environments require much smaller rollout horizon H , which limits the effectiveness of the approach (e.g., Feinberg et al. [7] used H = 10 for HalfCheetah-v1, but had to reduce to H = 3 on Walker2d-v1). Motivated by this limitation, we propose an approach that balances model error and Q-function error by dynamically adjusting the rollout horizon. 
3 Stochastic Ensemble Value Expansion From a single rollout ofH timesteps, we can computeH+1 distinct candidate targets by considering rollouts of various horizon lengths: T MVE0 ,T MVE1 ,T MVE2 ,...,T MVEH . Standard TD learning uses T MVE0 as the target, while MVE uses T MVEH as the target. We propose interpolating all of the candidate targets to produce a target which is better than any individual. Conventionally, one could average the candidate targets, or weight the candidate targets in an exponentially-decaying fashion, similar to TD(λ) [29]. However, we show that we can do still better by weighting the candidate targets in a way that balances errors in the learnedQ-function and errors from longer model rollouts. STEVE provides a computationally-tractable and theoretically-motivated algorithm for choosing these weights. We describe the algorithm for STEVE in Section 3.1, and justify it in Section 3.2. 3.1 Algorithm To estimate uncertainty in our learned estimators, we maintain an ensemble of parameters for our Q-function, reward function, and model: θ = {θ1, ..., θL}, ψ = {ψ1, ..., ψN}, and ξ = {ξ1, ..., ξM}, respectively. Each parameterization is initialized independently and trained on different subsets of the data in each minibatch. We roll out an H step trajectory with each of the M models, τ ξ1 , ..., τ ξM . Each trajectory consists of H + 1 states, τ ξm0 , ..., τ ξm H , which correspond to s ′ 0, ..., s ′ H in Equation 5 with the transition function parameterized by ξm. Similarly, we use the N reward functions and L Q-functions to evaluate Equation 6 for each τ ξm at every rollout-length 0 ≤ i ≤ H . This gives us M ·N · L different values of T MVEi for each rollout-length i. See Figure 2 for a visualization of this process. Using these values, we can compute the empirical mean T µi and variance T σ 2 i for each partial rollout of length i. In order to form a single target, we use an inverse variance weighting of the means: T STEVEH (r, s′) = H∑ i=0 w̃i∑ j w̃j T µi , w̃ −1 i = T σ2 i (7) To learn a value function with STEVE, we substitute in T STEVEH in place of T TD when training θ using Equation 2. 3.2 Derivation We wish to find weights wi, where ∑ i wi = 1 that minimize the mean-squared error between the weighted-average of candidate targets T MVE0 ,T MVE1 ,T MVE2 ,...,T MVEH and the true Q-value. E ( H∑ i=0 wiT MVEi −Qπ(s, a) )2 = Bias(∑ i wiT MVEi )2 + Var (∑ i wiT MVEi ) ≈ Bias (∑ i wiT MVEi )2 + ∑ i w2i Var(T MVEi ), where the expectation considers the candidate targets as random variables conditioned on the collected data and minibatch sampling noise, and the approximation is due to assuming the candidate targets are independent3. Our goal is to minimize this with respect to wi. We can estimate the variance terms using empirical variance estimates from the ensemble. Unfortunately, we could not devise a reliable estimator for the bias terms, and this is a limitation of our approach and an area for future work. In this work, we ignore the bias terms and minimize the weighted sum of variances∑ i w2i Var(T MVEi ). With this approximation, which is equivalent to in inverse-variance weighting [8], we achieve stateof-the-art results. Setting each wi equal to 1Var(T MVEi ) and normalizing yields the formula for T STEVEH given in Equation 7. 3.3 Note on ensembles This technique for calculating uncertainty estimates is applicable to any family of models from which we can sample. 
For example, we could train a Bayesian neural network for each model [22], or use dropout as a Bayesian approximation by resampling the dropout masks each time we wish to sample a new model [10]. These options could potentially give better diversity of various samples from the family, and thus better uncertainty estimates; exploring them further is a promising direction for future work. However, we found that these methods degraded the accuracy of the base models. An ensemble is far easier to train, and so we focus on that in this work. This is a common choice, as the use of ensembles in the context of uncertainty estimations for deep reinforcement learning has seen wide adoption in the literature. It was first proposed by Osband et al. [25] as a technique to improve exploration, and subsequent work showed that this approach gives a good estimate of the uncertainty of both value functions [17] and models [20]. 4 Experiments 4.1 Implementation We use DDPG [21] as our baseline model-free algorithm. We train two deep feedforward neural networks, a Q-function network Q̂πθ (s, a) and a policy network πφ(s), by minimizing the loss functions given in Equations 2 and 3. We also train another three deep feedforward networks to represent our world model, corresponding to function approximators for the transition T̂ξ(s, a), termination d̂ξ(t | s), and reward r̂ψ(s, a, s′), and minimize the loss function given in Equation 4. When collecting rollouts for evaluation, we simply take the action selected by the policy, πφ(s), at every state s. (Note that only the policy is required at test-time, not the ensembles of Q-functions, 3Initial experiments suggested that omitting the covariance cross terms provided significant computational speedups at the cost of a slight performance degradation. As a result, we omitted the terms in the rest of the experiments. dynamics models, or reward models.) Each run was evaluated after every 500 updates by computing the mean total episode reward (referred to as score) across many environment restarts. To produce the lines in Figures 3, 4, and 5, these evaluation results were downsampled by splitting the domain into non-overlapping regions and computing the mean score within each region across several runs. The shaded area shows one standard deviation of scores in the region as defined above. When collecting rollouts for our replay buffer, we do -greedy exploration: with probability , we select a random action by adding Gaussian noise to the pre-tanh policy action. All algorithms were implemented in Tensorflow [1]. We use a distributed implementation to parallelize computation. In the style of ApeX [16], IMPALA [6], and D4PG [2], we use a centralized learner with several agents operating in parallel. Each agent periodically loads the most recent policy, interacts with the environment, and sends its observations to the central learner. The learner stores received frames in a replay buffer, and continuously loads batches of frames from this buffer to use as training data for a model update. In the algorithms with a model-based component, there are two learners: a policy-learner and a model-learner. In these cases, the policy-learner periodically reloads the latest copy of the model. All baselines reported in this section were re-implementations of existing methods. This allowed us to ensure that the various methods compared were consistent with one another, and that the differences reported are fully attributable to the independent variables in question. 
Our baselines are competitive with state-of-the-art implementations of these algorithms [7, 14]. All MVE experiments utilize the TD-k trick. For hyperparameters and additional implementation details, please see Appendix C.4 4.2 Comparison of Performance We evaluated STEVE on a variety of continuous control tasks [3, 19]; we plot learning curves in Figure 3. We found that STEVE yields significant improvements in both performance and sample efficiency across a wide range of environments. Importantly, the gains are most substantial in the complex environments. On the most challenging environments: Humanoid-v1, RoboschoolHumanoid- 4Our code is available open-source at: https://github.com/tensorflow/models/tree/master/ research/steve v1, RoboschoolHumanoidFlagrun-v1, and BipedalWalkerHardcore-v2, STEVE is the only algorithm to show significant learning within 5M frames. 4.3 Ablation Study In order to verify that STEVE’s gains in sample efficiency are due to the reweighting, and not simply due to the additional parameters of the ensembles of its components, we examine several ablations. Ensemble MVE is the regular MVE algorithm, but the model and Q-functions are replaced with with ensembles. Mean-MVE uses the exact same architecture as STEVE, but uses a simple uniform weighting instead of the uncertainty-aware reweighting scheme. Similarly, TDL25 and TDL75 correspond to TD(λ) reweighting schemes with λ = 0.25 and λ = 0.75, respectively. COV-STEVE is a version of STEVE which includes the covariances between candidate targets when computing the weights (see Section 3.2). We also investigate the effect of the horizon parameter on the performance of both STEVE and MVE. These results are shown in Figure 4. All of these variants show the same trend: fast initial gains, which quickly taper off and are overtaken by the baseline. STEVE is the only variant to converge faster and higher than the baseline; this provides strong evidence that the gains come specifically from the uncertainty-aware reweighting of targets. Additionally, we find that increasing the rollout horizon increases the sample efficiency of STEVE, even though the dynamics model for Humanoid-v1 has high error. 4.4 Wall-Clock Comparison In the previous experiments, we synchronized data collection, policy updates, and model updates. However, when we run these steps asynchronously, we can reduce the wall-clock time at the risk of instability. To evaluate this configuration, we compare DDPG, MVE-DDPG, and STEVE-DPPG on Humanoid-v1 and RoboschoolHumanoidFlagrun-v1. Both were trained on a P100 GPU and had 8 CPUs collecting data; STEVE-DPPG additionally used a second P100 to learn a model in parallel. We plot reward as a function of wall-clock time for these tasks in Figure 5. STEVE-DDPG learns more quickly on both tasks, and it achieves a higher reward than DDPG and MVE-DDPG on Humanoid-v1 and performs comparably to DDPG on RoboschoolHumanoidFlagrun-v1. Moreover, in future work, STEVE could be accelerated by parallelizing training of each component of the ensemble. 5 Discussion Our primary experiments (Section 4.2) show that STEVE greatly increases sample efficiency relative to baselines, matching or out-performing both MVE-DDPG and DDPG baselines on every task. STEVE also outperforms other recently-published results on these tasks in terms of sample efficiency [13, 14, 26]. 
Our ablation studies (Section 4.3) support the hypothesis that the increased performance is due to the uncertainty-dependent reweighting of targets, as well as demonstrate that the performance of STEVE consistently increases with longer horizon lengths, even in complex environments. Finally, our wall-clock experiments (Section 4.4) demonstrate that in spite of the additional computation per epoch, the gains in sample efficiency are enough that it is competitive with model-free algorithms in terms of wall-clock time. The speed gains associated with improved sample efficiency will only be exacerbated as samples become more expensive to collect, making STEVE a promising choice for applications involving real-world interaction. Given that the improvements stem from the dynamic reweighting between horizon lengths, it may be interesting to examine the choices that the model makes about which candidate targets to favor most heavily. In Figure 6, we plot the average model usage over the course of training. Intriguingly, most of the lines seem to remain stable at around 50% usage, with two notable exceptions: Humanoid-v1, the most complex environment tested (with an observation-space of size 376); and Swimmer-v1, the least complex environment tested (with an observation-space of size 8). This supports the hypothesis that STEVE is trading off between Q-function bias and model bias; it chooses to ignore the model almost immediately when the environment is too complex to learn, and gradually ignores the model as the Q-function improves if an optimal environment model is learned quickly. 6 Related Work Sutton and Barto [29] describe TD(λ), a family of Q-learning variants in which targets from multiple timesteps are merged via exponentially decay. STEVE is similar in that it is also computing a weighted average between targets. However, our approach is significantly more powerful because it adapts the weights to the specific characteristics of each individual rollout, rather than being constant between examples and throughout training. Our approach can be thought of as a generalization of TD(λ), in that the two approaches are equivalent in the specific case where the overall uncertainty grows exponentially at rate λ at every timestep. Munos et al. [24] propose Retrace(λ), a low-variance method for off-policy Q-learning. Retrace(λ) is an off-policy correction method, so it learns from n-step off-policy data by multiplying each term of the loss by a correction coefficient, the trace, in order to re-weight the data distribution to look more like the on-policy distribution. Specifically, at each timestep, Retrace(λ) updates the coefficient for that term by multiplying it by λmin(1, π(as|xs)µ(as|xs) ). Similarly to TD(λ), the λ parameter corresponds to an exponential decay of the weighting of potential targets. STEVE approximates this weighting in a more complex way, and additionally learns a predictive model of the environment (under which on-policy rollouts are possible) instead of using off-policy correction terms to re-weight real off-policy rollouts. Heess et al. [15] describe stochastic value gradient (SVG) methods, which are a general family of hybrid model-based/model-free control algorithms. By re-parameterizing distributions to separate out the noise, SVG is able to learn stochastic continuous control policies in stochastic environments. STEVE currently operates only with deterministic policies and environments, but this is a promising direction for future work. Kurutach et al. 
[20] propose model-ensemble trust-region policy optimization (ME-TRPO), which is motivated similarly to this work in that they also propose an algorithm which uses an ensemble of models to mitigate the deleterious effects of model bias. However, the algorithm is quite different. ME-TRPO is a purely model-based policy-gradient approach, and uses the ensemble to avoid overfitting to any one model. In contrast, STEVE interpolates between model-free and model-based estimates, uses a value-estimation approach, and uses the ensemble to explicitly estimate uncertainty. Kalweit and Boedecker [17] train on a mix of real and imagined rollouts, and adjust the ratio over the course of training by tying it to the variance of the Q-function. Similarly to our work, this variance is computed via an ensemble. However, they do not adapt to the uncertainty of individual estimates, only the overall ratio of real to imagined data. Additionally, they do not take into account model bias, or uncertainty in model predictions. Weber et al. [31] (I2A) use rollouts generated by the dynamics model as inputs to the policy function, by “summarizing” the outputs of the rollouts with a deep neural network. This second network allows the algorithm to implicitly calculate uncertainty over various parts of the rollout and use that information when making its decision. However, I2A has only been evaluated on discrete domains. Additionally, the lack of explicit model use likely tempers the sample-efficiency benefits gained relative to more traditional model-based learning. Gal et al. [11] use a deep neural network in combination with the PILCO algorithm [4] to do sample-efficient reinforcement learning. They demonstrate good performance on the continuous-control task of cartpole swing-up. They model uncertainty in the learned neural dynamics function using dropout as a Bayesian approximation, and provide evidence that maintaining these uncertainty estimates is very important for model-based reinforcement learning. Depeweg et al. [5] use a Bayesian neural network as the environment model in a policy search setting, learning a policy purely from imagined rollouts. This work also demonstrates that modeling uncertainty is important for model-based reinforcement learning with neural network models, and that uncertainty-aware models can escape many common pitfalls. Gu et al. [12] propose a continuous variant of Q-learning known as normalized advantage functions (NAF), and show that learning using NAF can be accelerated by using a model-based component. They use a variant of Dyna-Q [28], augmenting the experience available to the model-free learner with imaginary on-policy data generated via environment rollouts. They use an iLQG controller and a learned locally-linear model to plan over small, easily-modelled regions of the environment, but find that using more complex neural network models of the environment can yield errors. Thomas et al. [30] define the Ω-return, an alternative to the λ-return that accounts for the variance of, and correlations between, predicted returns at multiple timesteps. Similarly to STEVE, the target used is an unbiased linear combination of returns with minimum variance. However, the Ω-return is not directly computable in non-tabular state spaces, and their method performs n-step off-policy learning rather than learning a predictive model of the environment. Drawing a theoretical connection between the STEVE algorithm and the Ω-return is an interesting potential direction for future work.
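As referenced above, a tiny illustrative sketch of the Retrace(λ) trace coefficients; the probabilities below are hypothetical and the snippet is not tied to any particular implementation.

```python
import numpy as np

def retrace_traces(pi_probs, mu_probs, lam):
    """Per-step traces c_s = lam * min(1, pi(a_s|x_s) / mu(a_s|x_s));
    the k-step term of the loss is weighted by the running product of the first k traces."""
    c = lam * np.minimum(1.0, np.asarray(pi_probs, dtype=float) / np.asarray(mu_probs, dtype=float))
    return np.cumprod(c)

# Hypothetical per-step action probabilities under the target (pi) and behaviour (mu) policies.
print(retrace_traces(pi_probs=[0.9, 0.2, 0.5], mu_probs=[0.5, 0.4, 0.5], lam=0.9))
```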
7 Conclusion
In this work, we demonstrated that STEVE, an uncertainty-aware approach for merging model-free and model-based reinforcement learning, outperforms model-free approaches while reducing sample complexity by an order of magnitude on several challenging tasks. We believe that this is a strong step towards enabling RL for practical, real-world applications. Since submitting this manuscript for publication, we have further explored the relationship between STEVE and recent work on overestimation bias [9], and found evidence that STEVE may help to reduce this bias. Other future directions include exploring more complex world models for various tasks, as well as comparing various techniques for calculating uncertainty and estimating bias.
Acknowledgments
The authors would like to thank the following individuals for their valuable insights and discussion: David Ha, Prajit Ramachandran, Tuomas Haarnoja, Dustin Tran, Matt Johnson, Matt Hoffman, Ishaan Gulrajani, and Sergey Levine. Also, we would like to thank Jascha Sohl-Dickstein, Joseph Antognini, Shane Gu, and Samy Bengio for their feedback during the writing process, and Erwin Coumans for his help on PyBullet environments. Finally, we would like to thank our anonymous reviewers for their insightful suggestions.
1. What is the main contribution of the paper, and how does it improve upon previous methods? 2. What are the strengths of the proposed approach, particularly in its ability to handle complex environments? 3. Are there any inconsistencies or contradictions in the results presented in the paper? If so, how might they be explained or resolved? 4. How does the paper address potential criticisms or limitations of the proposed method? 5. Are there any suggestions for further improvements or refinements to the proposed approach?
Review
Review
The authors propose an extension of the Model-based Value Expansion (MVE) method for model-augmented reinforcement learning. They observe that a fixed prediction horizon H causes performance to degrade in MVE. The interpretation is that errors due to uncertain predictions accumulate in long simulated trajectories, which restricts the algorithm to small H in complex environments. The authors propose Stochastic Ensemble Value Expansion (STEVE), which estimates the uncertainty of model and Q-function with an ensemble method. Instead of a fixed horizon, STEVE computes the targets as a sum of all horizon-targets, weighted by the normalized inverse variance of the ensemble estimate. This automatically adjusts the horizon for each trajectory and implicitly discards uncertain predictions. The algorithm is tested on a variety of continuous control tasks and ablation studies are provided for weighting and wall-clock time. The presented performance is significantly better than the model-free variant DDPG and the MVE variation thereof. The article is well written and the presented extension appears to significantly improve upon MVE. The authors discuss and answer many potential criticisms, for example, STEVE requires a large number of forward-passes through the Q-function but Figure 4 shows that the performance gain is significant in wall-clock time as well. While the amount of experiments is commendable, they also show some inconsistencies that could be better discussed, for example, in the majority of the tasks, the weighting in STEVE remained fairly "stable around 50%" (l.218 and Figure 5). This seems to contradict the bad performance of the ablation "Mean-MVE-DDPG" (and the TD-lambda variants) in Figure 3, which weights the horizon-targets equally. Another example is that in Humanoid-v1 of Figure 5, STEVE converges quickly to DDPG, but the performances of the two algorithms shown in Figures 2 and 3 differ significantly. The reviewer is aware that the contradicting results are not directly comparable, but this should be discussed in more depth. The definition of the TD-k trick in the appendix also appears to be wrong, which would be an alternative explanation for some of the discrepancies. The reviewer recommends accepting the paper with little reservation. The authors are encouraged to check the TD-k trick (and their implementation thereof) and to add some additional discussion of the points raised above.
COMMENTS:
eq.3: from your description, $D^i$ should depend on $\hat d_\xi(s'_j)$, not $d(s'_j)$.
l.80f: $T^{MVE}_0$ in eq.4 is not equivalent to eq.1, the difference is $D^{H+1}$. I recommend to introduce $d(s')$ earlier and update eq.1 accordingly.
l.116: clarify that (or if) for each value of $T^{MVE}_i$ the same reward function is used to compute the rewards of all time steps $j$ ($\hat r_\psi(s'_{j-1}, a'_{j-1}, s'_j)$).
l.132ff: the optimization problem for the weights $w_i$ can be much more elegantly solved with the Lagrange method (in literally three lines; ignore the implicit constraint $w_i \geq 0$). Interestingly, solving the original approximated problem (including the bias term) yields the same weights plus an additional term that is proportional to the true $Q^\pi(s,a)$. Approximating the true Q-value with the mean estimate could yield the improved weighting you are discussing in l.133.
fig.2: for the final version you should run all algorithms for the same number of environment steps.
l.170f: did you test the MVE ablations without the TD-k trick (see below why)?
appendix.B: the TD-k trick seems wrong. The reviewer is not familiar with the original definition, but in the presented equation the Q-values of all state-action pairs along the imagined trajectory are updated with the *same* target. Intuitively, the target should be $T^{MVE}_{H-i+1}(r_i, s'_{i+1})$ with appropriately defined $r_i$. This could be a simple typo, but if the implementation is wrong this could also explain why MVE (which you report uses TD-k) so often performs badly in comparison to STEVE (which does not).
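For completeness, here is a worked sketch of the Lagrange-method derivation alluded to in the comment on l.132ff, under the simplifying assumption that the candidate targets $T_i$ are unbiased and uncorrelated with variances $\sigma_i^2$ (i.e., the bias term is ignored). Minimizing the variance of the combined target $\sum_i w_i T_i$ subject to $\sum_i w_i = 1$ gives
$$\mathcal{J}(w, \eta) = \sum_i w_i^2 \sigma_i^2 + \eta \Big(1 - \sum_i w_i\Big), \qquad \frac{\partial \mathcal{J}}{\partial w_i} = 2 w_i \sigma_i^2 - \eta = 0 \;\Rightarrow\; w_i = \frac{\eta}{2 \sigma_i^2},$$
and enforcing the constraint yields $w_i = \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2}$, i.e., normalized inverse-variance weights. These are automatically non-negative, so the ignored constraint $w_i \geq 0$ is indeed satisfied.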
NIPS
Title Learning State-Aware Visual Representations from Audible Interactions
Abstract
We propose a self-supervised algorithm to learn representations from egocentric video data. Recently, significant efforts have been made to capture humans interacting with their own environments as they go about their daily activities. As a result, several large egocentric datasets of interaction-rich multi-modal data have emerged. However, learning representations from videos can be challenging. First, given the uncurated nature of long-form continuous videos, learning effective representations requires focusing on moments in time when interactions take place. Second, visual representations of daily activities should be sensitive to changes in the state of the environment. However, current successful multimodal learning frameworks encourage representation invariance over time. To address these challenges, we leverage audio signals to identify moments of likely interactions which are conducive to better learning. We also propose a novel self-supervised objective that learns from audible state changes caused by interactions. We validate these contributions extensively on two large-scale egocentric datasets, EPIC-Kitchens-100 and the recently released Ego4D, and show improvements on several downstream tasks, including action recognition, long-term action anticipation, and object state change classification. Code and pretrained model are available here: https://github.com/HimangiM/RepLAI
1 Introduction
Recent successes in self-supervised learning (SSL) [48, 10, 31, 28] have brought into question the need for human annotations in order to learn strong visual representations. However, current approaches are bottlenecked by the lack of rich data: they learn from static images which lack temporal information and restrict the ability to learn object deformations and state changes. It is clear that we need videos to learn rich representations in a self-supervised manner. Learning representations from videos is, however, quite challenging. The first challenge is choosing the right SSL loss.
Approaches such as [67, 54] have attempted to learn representations that are invariant to object deformations/viewpoints. However, many downstream tasks require representations that are sensitive to these deformations. Another alternative has been to use multi-modal data [3, 43, 57] and learn representations via audio. But again, most of these approaches seek to align audio and visual features in a common space, leading to invariant representations as well. The second challenge is dealing with the fact that current video-based SSL approaches exploit the curated nature of video datasets, such as Kinetics [9]. These approaches are designed to leverage carefully selected clips, displaying a single action or object interaction. This is in contrast to the predominantly untrimmed real-world data characteristic of large egocentric datasets of daily activities. Here, unlike action-centric datasets, the most 'interesting' or 'interaction-rich' clips have NOT been carefully selected by human annotators. Thus, learning from untrimmed video poses a major challenge, as a significant portion of the data does not focus on the concepts we want to learn. In this work, we ask the question, 'Can we learn meaningful representations from interaction-rich, multi-modal streams of egocentric data?' Learning from continuous streams of data requires focusing on the right moments when the actual interactions are likely to occur. Consider, for example, the acts of opening a fridge or placing a pan on the stove. Actions like these create clear and consistent sound signatures due to the physical interaction between objects. These moments can be easily detected from audio alone and can be used to target training on interesting portions of the untrimmed videos. We show that even a simple spectrogram-based handcrafted detector is sufficient to identify interesting moments in time, and that representation learning benefits substantially from using them to sample training clips. But what should the loss be? Prior work on audio-visual correspondence (AVC) [15, 4, 43] uses the natural co-occurrence of sounds and the visual manifestations of their sources as the source of supervision. However, since the AVC objective still favors invariance, the learned representations are not informative of the changes that happen over time (e.g., representations that can distinguish between a closed and an open fridge, or vegetables before and after chopping them). To better capture state changes, we introduce a novel audio-visual self-supervised objective, in which audio representations at key moments in time are required to be informative of the change in the corresponding visual representations over time. The intuition behind this objective is that transitions between object states are often marked by characteristic sounds. Thus, models optimized under this objective would associate the distinct sounds not only with the objects themselves (as accomplished with AVC), but also with the transition between two different states of the object. To this end, we introduce RepLAI (Representation Learning from Audible Interactions), a self-supervised algorithm for representation learning from videos of audible interactions. RepLAI uses the audio signals in two unique ways: (1) to identify moments in time that are conducive to better self-supervised learning and (2) to learn representations that focus on the visual state changes caused by audible interactions.
We validate these contributions extensively on two egocentric datasets, EPIC-Kitchens-100 [14] and the recently released Ego4D [27], where we demonstrate the benefits of RepLAI for several downstream tasks, including action recognition, long-term action anticipation, and object state change classification.
2 Related Work
Self-supervised learning. Self-supervised learning methods operate on an unlabeled dataset by explicitly defining pretext tasks such as solving jigsaw puzzles [47], patch location prediction [16], inpainting [50], and image rotation prediction [25]. Following these, the next wave of self-supervised methods has been based on contrastive learning that learns representations with the help of data augmentation and instance discrimination [10, 28, 48, 31, 8]. These methods have shown rapid progress in self-supervised learning for images. While these approaches explore the spatial information of images, RepLAI leverages the temporal information of videos.
Video representation learning. Relevant to our proposed approach is self-supervised representation learning for videos, where spatiotemporal pretext tasks are designed such as temporal order prediction [40, 70, 35, 69], predicting motion and appearance statistics [65], pace prediction [66], temporal cycle consistency [18, 68], and video colorization [64]. Contrastive learning has also been widely adopted in the domain of video [55, 29, 32, 57, 71, 30, 22] with impressive results on action recognition tasks. These methods, however, learn representations that are invariant to spatio-temporal augmentations, such as temporal jittering, and thus are incapable of representing object state changes. Closer to the objective of RepLAI, we include relevant literature on audio-visual representation learning from videos, where the audio stream is additionally utilized.
Audio-visual representation learning. Learning without additional supervision has also been explored in the context of the audio modality with the help of audio-visual correspondence (AVC) [4, 5]. Stated simply, AVC is the binary classification task of predicting if a video clip and a short audio clip correspond with each other or not (details in Sec. 3.4). Similar tasks, like temporal synchronization [36, 49] between audio and video, audio classification [6, 3, 11], spatial alignment prediction between audio and 360-degree videos [41], and optimal combination of self-supervised tasks [52], have been shown to be beneficial for learning effective multi-modal video representations. Other works explore contrastive learning for both the audio and video modalities [43, 51, 42] as a cross-modal instance discrimination task.
Fine-grained video understanding. Real-world videos are often untrimmed in nature and have multiple actions in a single video. Along this line, fine-grained analysis has been studied for videos in the form of a query-response temporal attention mechanism [72], bi-directional RNNs [58], and a semi-supervised learning problem [17]. While these works only utilize the visual modality, another line of work has also explored multi-modal fine-grained video understanding with a transformer-based model [34], by exploiting the correspondence between modalities [44], or by exploring how to best combine multiple modalities (audio, visual, and language) [2]. In our work, we try to conduct fine-grained video understanding in a self-supervised manner.
Egocentric datasets.
Egocentric datasets offer new opportunities to learn from a first-person point of view, where the world is seen through the eyes of an agent. Many egocentric datasets have been developed, such as EPIC-Kitchens [13, 14], which consists of daily activities performed in a kitchen environment, Activities of Daily Living [53], UT Ego [37, 60], the Disney Dataset [20], and the recently released large-scale Ego4D dataset [27], which consists of day-to-day life activities in multiple scenarios such as household, outdoor spaces, workplace, etc. Multiple challenges and downstream tasks have been explored for egocentric datasets, such as action recognition [34, 33, 38], action localization [56], action anticipation [26, 59, 39, 1, 23], human-object interactions [45, 12, 7], parsing social interactions [46], and domain adaptation [44]. In our work, we evaluate the efficacy of the representations learned by our self-supervised approach on the EPIC-Kitchens-100 and Ego4D datasets over multiple downstream tasks.
3 RepLAI
In this section, we detail our approach to learn audio-visual representations from and for interaction-rich egocentric data in a self-supervised manner, i.e., without relying on human-annotated labels. Sec. 3.1 provides an overview of RepLAI and motivates the two key contributions of this work: identifying 'moments of interaction' (MoI) and learning from 'audible visual state changes'. Sec. 3.2 details the proposed approach for MoI detection and Sec. 3.3 explains the proposed self-supervised objective for learning state-aware representations. Sec. 3.4 explains the objective of audio-visual correspondence learning used to train RepLAI. Sec. 3.5 brings both objectives together and includes necessary details for reproducibility.
3.1 Overview
Given a dataset $D = \{(v_i, a_i)\}_{i=1}^{N}$ containing $N$ long (untrimmed) audio-visual streams, our goal is to learn visual and audio encoders, denoted $f_V$ and $f_A$, that can effectively represent egocentric data. An overview of the proposed approach is depicted in Fig. 2. For each sample $(v, a) \in D$, we search for moments of interaction (MoI) using the audio stream, and extract short audio and visual clips around these MoI. These trimmed clips are then encoded into a vectorized representation using $f_V$ and $f_A$. The whole system is trained to optimize two self-supervised losses: an audio-visual correspondence loss $\mathcal{L}_{\text{AVC}}$, and a novel self-supervised loss that learns from audible state changes, $\mathcal{L}_{\text{AStC}}$.
Why detect moments of interaction (MoI)? Untrimmed video of daily activities often contains long periods without interactions, which aren't useful for training. Instead, we search for moments in time that are more likely to contain interactions, which we refer to as moments of interaction (MoI).
Why learn from audible state changes? Visual representations of daily activities should be informative of the state of the environment and/or objects being interacted with. Moreover, changes in the environment are usually caused by physical interactions, which produce distinct sound signatures. We hypothesize that state-aware representations can be obtained by learning to associate audio with the change of visual representation during a moment of interaction.
3.2 Audio-driven detection of moments of interaction
Audio signals are particularly informative of moments of interaction. To complete day-to-day activities, we physically interact with objects in our environments. These interactions typically produce distinct audio patterns: short bursts of energy that span all frequencies.
This is illustrated in Fig. 1, where we visualize the untrimmed visual and audio data of a person performing a series of actions in the kitchen. The audio data is represented as a log mel spectrogram, where the x-axis represents time and the y-axis the audio frequency on a log scale. As can be seen, moments of interaction appear in the spectrogram as vertical edges, which can be easily detected. Once detected, short clips around the moments of interaction are collected into a dataset $D_{\text{MoI}}$, and used for training. The remaining question is how to locate the timestamps of such vertical edges. Intuitively, we do this by finding robust local maxima in the total energy (summed over all frequencies) of the spectrogram. Concretely, let $M(t, \omega)$ be the value of the log mel spectrogram of an audio clip at time $t$ and frequency $\omega$. To remove the influence of background noise and overall audio intensity/volume, we compute the z-score normalization of the spectrogram for each frequency independently, $\bar{M}(t, \omega) = \frac{M(t, \omega) - \mu_\omega}{\sigma_\omega + \epsilon}$, where $\epsilon$ is a small constant for numerical stability. Here, $\mu_\omega$ and $\sigma_\omega$ are the mean and standard deviation of $M(t, \omega)$ over time, respectively (specifically, $\mu_\omega = \mathbb{E}_t[M(t, \omega)]$, $\sigma_\omega^2 = \mathbb{E}_t[(M(t, \omega) - \mu_\omega)^2]$, and $\epsilon = 10^{-5}$). Next, we define moments of interaction as the set of timestamps which are local maxima of $\sum_\omega \bar{M}(t, \omega)$ (or peaks for short). Moreover, to avoid weak local maxima that may be caused by the noisy nature of audio signals, we ignore peaks with small prominence (lower than 1), where the prominence of a peak is defined as the difference between the peak value and the minimum value in a small window around it. For further robustness, when multiple close peaks are found (less than 50 ms apart), only the highest-prominence peak is kept.
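As a minimal illustrative sketch (not the authors' code), the detection rule above can be written as follows, assuming a precomputed log mel spectrogram `M` of shape [frequencies, time] and a known hop duration `hop_sec` between spectrogram frames:

```python
import numpy as np
from scipy.signal import find_peaks

def detect_moments_of_interaction(M: np.ndarray, hop_sec: float,
                                  min_prominence: float = 1.0,
                                  min_separation_sec: float = 0.05) -> np.ndarray:
    """Return MoI timestamps (seconds) from a log mel spectrogram M of shape [freq, time]."""
    eps = 1e-5
    mu = M.mean(axis=1, keepdims=True)      # per-frequency mean over time
    sigma = M.std(axis=1, keepdims=True)    # per-frequency standard deviation over time
    M_bar = (M - mu) / (sigma + eps)        # z-score each frequency band independently
    energy = M_bar.sum(axis=0)              # total normalized energy per time frame
    # find_peaks' `distance` keeps the taller of two close peaks, which approximates
    # (but is not identical to) the highest-prominence rule described in the text.
    distance = max(1, int(round(min_separation_sec / hop_sec)))
    peaks, _ = find_peaks(energy, prominence=min_prominence, distance=distance)
    return peaks * hop_sec
```

In practice, the prominence and spacing thresholds would be tuned to whatever spectrogram settings are used during preprocessing.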
3.3 Learning from audible state changes
Physical interactions often cause both state changes in the environment and distinct audio signals. To leverage this natural co-occurrence, we propose a self-supervised task that seeks to associate the audio with changes in the visual state during a moment of interaction. The proposed task is optimized by minimizing a loss with two negative log-likelihood terms to: (1) increase the probability of associating the audio with the visual state change in the forward (i.e., correct) direction, and (2) decrease the probability of associating the audio with the visual state change in the backward (i.e., incorrect) direction. Consider, for example, the interaction of 'closing a fridge door'. To optimize for this task, the audio of closing the door should be (1) similar to the visual transition opened door → closed door and (2) dissimilar to the (backwards) transition closed → open. This encourages learning of representations that are informative of object states, making them useful for a variety of egocentric tasks. Specifically, the audible state change (AStC) loss is defined as
$$\mathcal{L}_{\text{AStC}} = \mathbb{E}_{v_t, a_t \in D_{\text{MoI}}}\left[ -\log\big(p_{\text{frwd}}(v_t, a_t)\big) - \log\big(1 - p_{\text{bkwd}}(v_t, a_t)\big) \right]. \quad (1)$$
The probabilities $(p_{\text{frwd}}, p_{\text{bkwd}})$ are computed from cross-modal similarities
$$p_{\text{frwd}}(v_t, a_t) = \sigma\big(\mathrm{sim}(\Delta v^{\text{frwd}}_t, a_t)/\tau\big), \quad (2)$$
$$p_{\text{bkwd}}(v_t, a_t) = \sigma\big(\mathrm{sim}(\Delta v^{\text{bkwd}}_t, a_t)/\tau\big), \quad (3)$$
where $\tau = 0.2$ is a temperature hyper-parameter, and $\sigma$ denotes the sigmoid function. For better readability, we absorb the notations for the audio projection MLP head $h^{\text{AStC}}_A$ and the state change projection MLP head $h^{\text{AStC}}_{\Delta V}$ within $\mathrm{sim}(\cdot, \cdot)$, but their usage is clearly illustrated in Fig. 3a. Audio representations $a_t$ are obtained by encoding the trimmed audio clips via the audio encoder $f_A$ (shared across all objectives). As explained above, $a_t$ is further projected via $h^{\text{AStC}}_A$ to a space where similarity to visual state changes is enforced. State change representations $(\Delta v^{\text{frwd}}_t, \Delta v^{\text{bkwd}}_t)$ are computed by considering two non-overlapping visual clips for each moment of interaction $t$, at timestamps $t - \delta$ and $t + \delta$. The two clips, $v_{t-\delta}$ and $v_{t+\delta}$, are encoded via the visual encoder $f_V$ (shared across all tasks) and a projection MLP head $h^{\text{AStC}}_V$ (specific to the AStC task). Specifically, we represent forward and backward state changes as
$$\Delta v^{\text{frwd}}_t = h^{\text{AStC}}_V \circ f_V(v_{t+\delta}) - h^{\text{AStC}}_V \circ f_V(v_{t-\delta}), \quad (4)$$
$$\Delta v^{\text{bkwd}}_t = h^{\text{AStC}}_V \circ f_V(v_{t-\delta}) - h^{\text{AStC}}_V \circ f_V(v_{t+\delta}). \quad (5)$$
In summary, optimizing the loss of Eq. 1 not only requires the audio representation $a_t$ to be aligned with the representation of the visual change $\Delta v^{\text{frwd}}_t$ that took place, but also to be different from the hypothetical backward state change $\Delta v^{\text{bkwd}}_t$.
3.4 Learning from audio-visual correspondences [15, 4, 43]
Audio-visual correspondence (AVC) is a well-studied self-supervised methodology for learning unimodal audio and visual encoders. The key idea is to bring visual and audio clips into a common feature space, where the representations of audio-visual pairs are aligned. Note that AVC differs from the proposed AStC task, as AVC seeks to associate the audio $a_t$ with the corresponding visual clips $v_t$, as opposed to the change in visual state $\Delta v_t$. As a result, visual representations learned through AVC are biased towards static concepts, while those learned through AStC are more sensitive to dynamic concepts. Since both types of representations can be useful for egocentric tasks, we further train the visual and audio encoders, $f_V$ and $f_A$, for the AVC task. Specifically, consider a dataset of audio-visual pairs $(v_i, a_i)$ with representations $\mathbf{v}_i = f_V(v_i)$ and $\mathbf{a}_i = f_A(a_i)$. In particular, we let $(v_i, a_i)$ be short clips extracted from sample $i$ around one of the detected moments of interaction. Then, following [43, 61], audio-visual correspondence is established by minimizing a cross-modal InfoNCE loss of the form
$$\mathcal{L}_{\text{AVC}} = \mathbb{E}_{v_i, a_i \sim D}\left[ -\log \frac{e^{\mathrm{sim}(\mathbf{v}_i, \mathbf{a}_i)/\tau}}{\sum_j e^{\mathrm{sim}(\mathbf{v}_i, \mathbf{a}_j)/\tau}} - \log \frac{e^{\mathrm{sim}(\mathbf{v}_i, \mathbf{a}_i)/\tau}}{\sum_j e^{\mathrm{sim}(\mathbf{v}_j, \mathbf{a}_i)/\tau}} \right], \quad (6)$$
where $\tau = 0.07$ is a temperature hyper-parameter and $\mathrm{sim}(\cdot, \cdot)$ denotes the cosine similarity. Both terms in Eq. 6 help bring $\mathbf{v}_i$ and $\mathbf{a}_i$ (i.e., the positives) together. The key difference is whether the negative set is composed of audio representations $\mathbf{a}_j$ or visual representations $\mathbf{v}_j$, where $j \neq i$. For readability of Eq. 6, we once again absorb the notation for the audio and visual projection MLP heads ($h^{\text{AVC}}_A$ and $h^{\text{AVC}}_V$) within $\mathrm{sim}(\cdot, \cdot)$, and illustrate their usage in Fig. 3b. Fig. 3b also shows that we apply the AVC loss twice to associate both the visual clips (extracted slightly before and after the moment of interaction $t$) to the corresponding audio.
3.5 Training
The audio-visual representation models $f_A$ and $f_V$ are trained to minimize both the AVC and AStC losses,
$$\mathcal{L} = \alpha \mathcal{L}_{\text{AVC}} + (1 - \alpha) \mathcal{L}_{\text{AStC}}, \quad (7)$$
where $\alpha$ is a weighting hyper-parameter between the two terms. While we experimented with different values of $\alpha$, we found that equal weighting produced the best results.
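A minimal PyTorch-style sketch of these training objectives (Eqs. 1-7); it is not the authors' implementation, assumes the encoder and projection-head outputs are already computed (the separate AVC/AStC projection heads are elided for brevity), and uses cosine similarity for sim:

```python
import torch
import torch.nn.functional as F

def astc_loss(v_before, v_after, a, tau=0.2):
    """Audible state change loss (Eqs. 1-5); inputs are [batch, dim] projected embeddings."""
    delta_frwd = v_after - v_before                                      # Eq. (4)
    delta_bkwd = v_before - v_after                                      # Eq. (5)
    p_frwd = torch.sigmoid(F.cosine_similarity(delta_frwd, a) / tau)     # Eq. (2)
    p_bkwd = torch.sigmoid(F.cosine_similarity(delta_bkwd, a) / tau)     # Eq. (3)
    return (-torch.log(p_frwd + 1e-8) - torch.log(1 - p_bkwd + 1e-8)).mean()  # Eq. (1)

def avc_loss(v, a, tau=0.07):
    """Cross-modal InfoNCE (Eq. 6) over a batch; diagonal pairs are the positives."""
    v = F.normalize(v, dim=-1)
    a = F.normalize(a, dim=-1)
    logits = v @ a.t() / tau                                             # pairwise cosine similarities
    targets = torch.arange(v.size(0), device=v.device)
    return F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)

def total_loss(v_before, v_after, a, alpha=0.5):
    """Eq. (7); the AVC loss is applied to both clips around the MoI (cf. Fig. 3b)."""
    l_avc = avc_loss(v_before, a) + avc_loss(v_after, a)
    return alpha * l_avc + (1 - alpha) * astc_loss(v_before, v_after, a)
```

Following the text, equal weighting (alpha = 0.5) is used by default; swapping in the real projection heads would only change how the embeddings passed to these functions are produced.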
Implementation details. We follow prior work on audio-visual correspondence [43], and use an R(2+1)D video encoder [62] with depth 18 and a 10-layer 2D CNN as the audio encoder. Two video clips are extracted around moments of interaction at a frame rate of 16 FPS, each with a duration of 0.5s, and separated by a gap of 0.2s. Video clips are augmented by random resizing, cropping, and horizontal flipping, resulting in clips of 8 frames at a resolution of 112×112. As for the audio, we extract clips of 2s at 44.1kHz and downsample them to 16kHz. If the audio is stereo, we average the two waveforms to downgrade to mono, and then convert the mono signal to a log mel spectrogram with 80 frequency bands and 128 temporal frames. Models are trained with stochastic gradient descent for 100 epochs with a batch size of 128 over 4 GTX 1080 Ti GPUs, a learning rate of 0.005, and a momentum of 0.9. For Ego4D, we use a batch size of 512 over 8 RTX 2080 Ti GPUs with a learning rate of 0.05. The two loss terms in Eq. 7 are equally weighted with α = 0.5.
4 Experiments
In this section, we demonstrate the benefits of identifying moments of interaction and learning state-aware representations through an audible state-change objective. We also show that, while large-scale audio-visual correspondence (AVC) is beneficial, it is not sufficient to learn the state-aware representations required for egocentric tasks. The setup used for our experiments is described in Sec. 4.1. Results and discussion of the main takeaways are presented in Sec. 4.2.
4.1 Experimental Setup
Datasets. We evaluate on two egocentric datasets: EPIC-Kitchens-100 [14] and Ego4D [27]. EPIC-Kitchens-100 contains 100 hours of activities in the kitchen. Ego4D contains 3670 hours of egocentric video covering daily activities in the home, workplace, social settings, etc. For experiments on Ego4D, we use all videos from the Forecasting and Hand-Object interaction subsets.
Baselines and ablations. We consider various baselines as well as ablated versions of RepLAI. Random represents an untrained (randomly initialized) model. AVID [43] and XDC [3] are two state-of-the-art models pre-trained on 2M audio-visual pairs from AudioSet [24] that only leverage audio-visual correspondence. For the full method RepLAI, we initialize the model weights from AVID before training on moments of interaction to minimize both the AVC and state change (AStC) losses. We also evaluate our method trained without AVID initialization (RepLAI from scratch), trained with only AVC (RepLAI w/o AStC), only the state change loss (RepLAI w/o AVC), and trained on random moments in time (RepLAI w/o MoI). Finally, we compare our approach with the fully supervised methods presented in Ego4D [27].
Downstream tasks. After self-supervised pre-training, the models are evaluated on a range of egocentric downstream tasks. This is done, as is standard, by appending a task-specific decoder to the backbone model, and training the decoder on a small annotated dataset. The tasks are:
• Video action recognition (AR) on EPIC-Kitchens-100 and Ego4D. Given a short video clip, the task is to classify the 'verb' and 'noun' of the action taking place. This is done using two separate linear classifiers trained for this task. We report the top-1 and top-5 accuracies, following [14] (Tab. 1) and [27] (Tab. 2). We also evaluate on the unseen participants, head classes, and tail classes of EPIC-Kitchens-100 in Tab. 3. Through this task, we assess the efficacy of the spatial-temporal representations learned by the model in differentiating among different verbs and nouns.
• Long-term action anticipation (LTA) on Ego4D.
Given a video, the task is to predict the camera wearer's future sequence of actions. For this task, the model is first presented with 4 consecutive clips of 2s, which are encoded using our visual backbone $f_V$. Following [27], the representations are concatenated and fed to 20 separate linear classification heads to predict the following 20 actions. Performance is measured using the edit distance metric ED@(Z=20) [27], where the edit distance measures the minimum number of operations required to convert the predicted sequence of actions to the ground truth; to account for the multi-modality of future actions, the model is allowed to make Z = 20 predictions and only the best one is scored. With the help of this task, we can evaluate if the representations learned by the model can be employed for long-horizon planning, where the actions can change and may be of arbitrary duration. Results are reported in Tab. 2.
• State change classification (StCC) on Ego4D. Given a video clip, the task is to classify if an object undergoes a state change or not. The video clip is encoded by $f_V$, and a state change classification head is used which performs global average pooling on the entire feature tensor and is followed by a classification layer. Performance is measured through the State Change Classification Accuracy (%), and reported in Tab. 2. This task is ideal for assessing the ability of the model to understand the temporal change happening in the state of an object.
4.2 Discussion of results
As can be seen in Tab. 1 and Tab. 2, RepLAI outperforms all other methods across all downstream tasks. Overall, this can be attributed to its ability to focus on interactions, both by detecting when they occur and by learning representations that are sensitive to interactions. A closer analysis of these results reveals several insights that we discuss next.
RepLAI enhances large-scale AVC-driven approaches. Prior work on self-supervised audio-visual learning has shown strong audio-visual representations for action recognition [43, 42]. One question that we seek to answer is: how useful are these representations for egocentric tasks, and what are their limitations? To answer this question, we compare our model trained from scratch, RepLAI (Scratch), with our model using the weights from AVID [43] as initialization for both the visual and audio encoders. We also compare our method to standalone AVID and XDC, i.e., without further self-supervised training. Comparing rows (2), (3) and (8) in Tab. 1 and Tab. 2, it is clear that RepLAI enhances large-scale AVC pre-training by significant margins, leading to absolute improvements of 5% in top-1 verb accuracy on EPIC-Kitchens-100 and 4.2% on Ego4D, a 5.2% increase in state-change classification accuracy, and a 5.6% reduction in the edit distance for long-term anticipation compared to AVID. Comparing rows (7) and (8), we also see that large-scale AVID pre-training enhances the representations learned by RepLAI significantly on EPIC-Kitchens-100 but only marginally on Ego4D. This is likely due to the significantly larger diversity of scenes in Ego4D. Thus, while relying on large-scale audio-visual pre-training (as with AVID) can help avoid overfitting on smaller egocentric datasets, this is less critical when training on larger and more diverse data.
Detecting moments of interaction (MoI) helps representation learning.
We hypothesize that to learn good representations for egocentric data of daily activities, self-supervised learning should focus on moments in time when interactions occur. To assess whether our audio-driven MoI detection algorithm helps representation learning, we compare RepLAI with an ablated version, RepLAI w/o MoI, where the model is trained on audio-visual clips extracted at random from the untrimmed videos. As can be seen by comparing rows (6) and (8) in Tab. 1 and Tab. 2, sampling clips around MoI leads to significantly better representations for all egocentric downstream tasks that we study. Moreover, even though RepLAI w/o MoI trains with AStC, it is unable to fully leverage the state change objective without the information about moments of interaction, which leads to worse performance. This suggests that an explicit state change objective and sampling video clips around moments of interaction (which are likely to be aligned with the actual state changes) together provide information-rich feedback that helps our model better understand how the state is changed by an interaction and how actions transition over time. These results also clearly show that the proposed MoI detection procedure is able to find moments in time that are especially useful for learning representations of daily activities. We emphasize the simplicity and effectiveness of our audio-driven detector, which shows how informative audio can be when searching for moments of interaction. In the future, we believe that learning-based approaches could further enhance MoI detection, and further improve the learned audio-visual representations. We also show several qualitative examples of detected MoI in the supplement.
AVC and AStC are complementary. To assess the impact of both terms in Eq. 7, we evaluate RepLAI trained without $\mathcal{L}_{\text{AVC}}$ and without $\mathcal{L}_{\text{AStC}}$. Comparing rows (4) and (5) to rows (2) and (3) in Tab. 1 and Tab. 2 shows that each term enhances the representations obtained through large-scale audio-visual pre-training (AVID). Furthermore, comparing the ablated models in rows (4) and (5) to the full model in row (8) shows that these two terms are complementary to each other. This is because the AVC and AStC tasks encourage learning of representations with different characteristics. AVC focuses on learning visual representations that are informative of what kinds of sounding objects are present in the video, while AStC forces the model to differentiate between visual representations that occur before and after state change interactions.
RepLAI encourages state-aware representation learning. To study the representations learned by our approach for different states, we generate a t-SNE plot [63] for RepLAI and AVID, as shown in Fig. 4. To generate a simpler visualization, a small dataset is prepared consisting of all the videos corresponding to a single participant, P01, in EPIC-Kitchens-100, split into clips of 0.5s. We can observe that there is a larger spread in the t-SNE plot for RepLAI than for AVID. A larger spread indicates that the representations of the various states are significantly different from each other and form more distant clusters, as shown by RepLAI. In contrast, if the state representations are similar to each other, they are clustered together and show less spread, as is the case for AVID. MoI are the key moments of interaction with an object in the environment where its state is changing.
AVID has no such information about the key moments and also does not have an explicit state change objective. Therefore, it is unable to discriminate between the before and after states of an action and has less effective state-aware information in its representations.
RepLAI representations are more generalizable and robust to the long tail. To assess RepLAI in a scenario with domain shift, we evaluate on unseen participants that were fully excluded from the pre-training of RepLAI. Tab. 3 shows that RepLAI significantly outperforms baselines and ablations, indicating that the representation learning of our model provides much better generalization. Moreover, the verb and noun classes in EPIC-Kitchens-100 exhibit a long-tailed distribution. When further compared on head and tail classes separately in Tab. 3, we can observe that RepLAI outperforms all other methods, highlighting its higher robustness to a long-tailed distribution.
Self-supervised vs. supervised representation learning. Tab. 2 also compares RepLAI to the fully supervised methods introduced in Ego4D [27] (rows S1, S2 and S3). We can observe that RepLAI can perform competitively with the fully supervised approaches when we have access to larger and more diverse data. With further focus on SSL for untrimmed datasets, SSL methods will be able to match supervised approaches, and our work takes a step in that direction.
5 Conclusion
In this work, we propose an audio-driven self-supervised method for learning representations of egocentric video of daily activities. We show that in order to learn strong representations for this domain, two important challenges need to be addressed. First, learning should focus on moments of interaction (MoI). Since these moments only occur sporadically in untrimmed video data, we show that MoI detection is an important component of representation learning in untrimmed datasets. Second, learning should focus on the consequences of interactions, i.e., changes in the state of an environment caused by agents interacting with the world. In particular, by seeking to identify visible state changes from the audio alone, we can learn representations that are potentially more aware of the state of the environment and hence particularly useful for egocentric downstream tasks.
Acknowledgements
We would like to thank DARPA MCS, ONR Young Investigator and DARPA SAIL-ON for the funding.
Broader impact
Deep learning models are capable of learning (and sometimes even amplifying) biases existing in datasets. While several steps have been taken in datasets like Ego4D to increase geographical diversity, we would like to encourage careful consideration of ethical implications when deploying these models. While public datasets are essential to make progress on how to represent visual egocentric data, premature deployment of our models is likely to have negative societal impact, as we did not check for the presence or absence of such biases.
1. What is the focus and contribution of the paper on audio-visual self-supervised learning? 2. What are the strengths of the proposed approach, particularly in utilizing moments of interaction? 3. What are the weaknesses of the paper, especially regarding comparisons with other works and evaluating the ability to detect moments of interaction? 4. Do you have any concerns or suggestions regarding the paper's content or presentation? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
In this paper, the authors introduce a self-supervised learning method for audio-visual data. Specifically, the method revolves around using moments of interaction to learn meaningful audio-visual relations. The authors split the learning into two losses: a standard audio-visual loss, and a loss around the audible change of state and its relation to the visual change of state. The authors evaluate their proposed method on EPIC-Kitchens and Ego4D.
Strengths And Weaknesses
Strengths:
S1. Audio-visual self-supervised learning is a powerful tool to learn representations. However, typical approaches do not exploit the semantic importance of the moments at which to sample the audio. This work is a step towards using the audio content in a more meaningful way.
S2. The results show how the method improves over the baseline with standard audio-visual learning.
S3. The authors evaluate on two very relevant, well-known egocentric benchmarks. I believe that is important, as it makes the paper stronger.
S4. Egocentric data is going to become more important in the near future. With the progress of recording devices and embodied research, works along the lines of this paper will become more relevant.
S5. I especially like the idea of detecting state changes through audio. According to the paper, the procedure is quite simple and the self-supervised method certainly benefits from seeing samples around those moments.
Weaknesses:
W1. The authors compare with a single baseline for audio-visual learning. I think other works such as MMV (Alayrac, NeurIPS 2020), Evolving Losses (Piergiovanni, CVPR 2020), Brave (Recasens, ICCV 2021), and XDC (Alwassel, NeurIPS 2020) are very relevant in the community. I understand the topic is slightly different and the authors cannot retrain with all the baselines, but I still believe that having a single baseline such as AVID is insufficient for a publication.
W2. The authors are missing citations for some of the papers mentioned in W1 (e.g. MMV, Evolving Losses). I think those works are important in the space of self-supervised audio-visual learning.
W3. I think the authors do not properly evaluate the ability of their model to detect moments of interaction. The description of the method is very complete and in Table 1 they ablate using the method, but I think it would be good to somehow evaluate whether the proposed detection works.
W4. I am missing some examples of moments of interaction. I understand the reasoning behind the methodology of using the reverse clip as a negative, but I believe the readers would benefit from a few visual examples to understand that.
Questions
My concerns are listed in Weaknesses. My main questions to the authors would be:
Have the authors considered adding additional baselines when comparing to the state of the art? If not, why not?
What evidence do the authors have that the detection of state changes works properly, given that they don't have any evaluation for that signal?
In general, could the authors show some more examples of moments of interaction (similar to the ones in the supplementary) in the main paper? I think it would be especially interesting to see examples where the detection of those moments works and others where it fails.
Limitations
The authors discuss the limitations of the method in Sec. 4.2. I would suggest expanding that a bit with the general negative societal impact that working with egocentric video can have. I believe egocentric video can be used to track people's everyday lives, which could have some negative impact.
NIPS
Title Learning State-Aware Visual Representations from Audible Interactions Abstract We propose a self-supervised algorithm to learn representations from egocentric video data. Recently, significant efforts have been made to capture humans interacting with their own environments as they go about their daily activities. In result, several large egocentric datasets of interaction-rich multi-modal data have emerged. However, learning representations from videos can be challenging. First, given the uncurated nature of long-form continuous videos, learning effective representations require focusing on moments in time when interactions take place. Second, visual representations of daily activities should be sensitive to changes in the state of the environment. However, current successful multimodal learning frameworks encourage representation invariance over time. To address these challenges, we leverage audio signals to identify moments of likely interactions which are conducive to better learning. We also propose a novel selfsupervised objective that learns from audible state changes caused by interactions. We validate these contributions extensively on two large-scale egocentric datasets, EPIC-Kitchens-100 and the recently released Ego4D, and show improvements on several downstream tasks, including action recognition, long-term action anticipation, and object state change classification. Code and pretrained model are available here: https://github.com/HimangiM/RepLAI N/A We propose a self-supervised algorithm to learn representations from egocentric video data. Recently, significant efforts have been made to capture humans interacting with their own environments as they go about their daily activities. In result, several large egocentric datasets of interaction-rich multi-modal data have emerged. However, learning representations from videos can be challenging. First, given the uncurated nature of long-form continuous videos, learning effective representations require focusing on moments in time when interactions take place. Second, visual representations of daily activities should be sensitive to changes in the state of the environment. However, current successful multimodal learning frameworks encourage representation invariance over time. To address these challenges, we leverage audio signals to identify moments of likely interactions which are conducive to better learning. We also propose a novel selfsupervised objective that learns from audible state changes caused by interactions. We validate these contributions extensively on two large-scale egocentric datasets, EPIC-Kitchens-100 and the recently released Ego4D, and show improvements on several downstream tasks, including action recognition, long-term action anticipation, and object state change classification. Code and pretrained model are available here: https://github.com/HimangiM/RepLAI 1 Introduction Recent successes in self-supervised learning (SSL) [48, 10, 31, 28] has brought into question the need for human annotations in order to learn strong visual representations. However, current approaches are bottlenecked by the lack of rich data – they learn from static images which lack temporal information and restrict the ability to learn object deformations and state changes. It is clear that we need videos to learn rich representations in self-supervised manner. Learning representations from videos is however quite challenging. The first challenge is choosing the right SSL loss. 
Approaches such as [67, 54] have attempted to learn representations that are invariant to object deformations/viewpoints. However, many downstream tasks require representations that are sensitive to these deformations. Another alternative has been to use the multi-modal data [3, 43, 57] and learn representations via audio. But again most of these approaches seek to align audio and visual features in a common space, leading to invariant representations as well. The second challenge is dealing with the fact that current video-based SSL approaches exploit the curated nature of video datasets, such as Kinetics [9]. These approaches are designed to leverage carefully selected clips, displaying a single action or object interaction. This is in contrast to the predominantly untrimmed real-world data characteristic of large egocentric datasets of daily activities. Here, unlike action centric datasets, the most ‘interesting‘ or ‘interaction-rich‘ clips have NOT been carefully selected by human annotators. Thus, learning from untrimmed video poses a major challenge, as a significant portion of the data does not focus on the concepts we want to learn. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). In this work, we ask the question, ‘Can we learn meaningful representations from interaction-rich, multi-modal streams of egocentric data?’ Learning from continuous streams of data requires focusing on the right moments when the actual interactions are likely to occur. Consider, for example, the acts of opening a fridge or placing a pan on the stove. Actions like these create clear and consistent sound signatures due to the physical interaction between objects. These moments can be easily detected from audio alone and can be used to target training on interesting portions of the untrimmed videos. We show that even a simple spectrogram-based handcrafted detector is sufficient to identify interesting moments in time, and that representation learning benefits substantially from using them to sample training clips. But what should the loss be? Prior work on audio-visual correspondence (AVC) [15, 4, 43] uses the natural co-occurrence of sounds and the visual manifestations of their sources as the source of supervision. However, since the AVC objective still favors invariance, the learned representations are not informative of the changes that happen over time (e.g., representations that can distinguish between closed and opened fridge, or vegetables before and after chopping them). To better capture state changes, we introduce a novel audio-visual self-supervised objective, in which audio representations at key moments in time are required to be informative of the change in the corresponding visual representations over time. The intuition behind this objective is that transitions between object states are often marked by characteristic sounds. Thus, models optimized under this objective would associate the distinct sounds not only with the objects themselves (as accomplished with AVC), but also with the transition between two different states of the object. To this end, we introduce RepLAI – Representation Learning from Audible Interactions, a selfsupervised algorithm for representation learning from videos of audible interactions. RepLAI uses the audio signals in two unique ways: (1) to identify moments in time that are conducive to better self-supervised learning and (2) to learn representations that focus on the visual state changes caused by audible interactions. 
We validate these contributions extensively on two egocentric datasets, EPIC-Kitchens-100 [14] and the recently released Ego4D [27], where we demonstrate the benefits of RepLAI for several downstream tasks, including action recognition, long term action anticipation, and object state change classification. 2 Related Work Self-supervised learning. Self-supervised learning methods operate on an unlabeled dataset by explicitly defining pretext tasks such as solving jigsaw puzzle [47], patch location prediction [16], inpainting [50], and image rotation [25] prediction. Following these, the next wave of self-supervised methods has been based on contrastive learning that learns representations with the help of data augmentation and instance discrimination [10, 28, 48, 31, 8]. These methods have shown rapid progress in self-supervised learning for images. While these approaches explore the spatial information of images, RepLAI leverages the temporal information of videos. Video representation learning. Relevant to our proposed approach is self-supervised representation learning for videos where the spatiotemporal pretext tasks are designed such as temporal order prediction [40, 70, 35, 69], predicting motion and appearance statistics [65], pace prediction [66], temporal cycle consistency [18, 68], and video colorization [64]. Contrastive learning has also been widely adopted in the domain of video [55, 29, 32, 57, 71, 30, 22] with impressive results on action recognition tasks. These methods however learn representations that are invariant to spatio-temporal augmentations, such as temporal jittering, and thus are incapable of representing object state changes. Closer to the objective of RepLAI, we include relevant literature on audio-visual representation learning from videos, where the audio stream is additionally utilized. Audio-visual representation learning. Learning without additional supervision has also been explored in the context of the audio modality with the help of audio-visual correspondence (AVC) [4, 5]. As stated simply, AVC is the binary classification task of predicting if a video clip and a short audio clip correspond with each other or not (details in Sec. 3.4). Similar tasks like temporal synchronization [36, 49] between audio and video, audio classification [6, 3, 11], spatial alignment prediction between audio and 360-degree videos [41], optimal combination of self-supervised tasks [52] have been shown beneficial for learning effective multi-modal video representations. Other works explore contrastive learning for both audio and video modality [43, 51, 42] as a cross-modal instance discrimination task. Fine-grained video understanding. Real-world videos are often untrimmed in nature and have multiple actions in a single video. Along this line, fine-grained analysis has been studied for videos in the form of a query-response temporal attention mechanism [72], bi-directional RNN s[58], and semi-supervised learning problem [17]. While these works only utilize the visual modality, another line of work has also explored multi-modal fine-grained video understanding as a transformer-based model [34], by exploiting the correspondence between modalities [44], or by exploring how to best combine multiple modalities - audio, visual, and language [2]. In our work, we try to conduct fine-grained video understanding in a self-supervised manner. Egocentric datasets. 
Egocentric datasets offer new opportunities to learn from a first-person point of view, where the world is seen through the eyes of an agent. Many egocentric datasets have been developed, such as EPIC-Kitchens [13, 14], which consists of daily activities performed in a kitchen environment, Activities of Daily Living [53], UT Ego [37, 60], the Disney Dataset [20], and the recently released large-scale Ego4D dataset [27], which consists of day-to-day activities in multiple scenarios such as household, outdoor spaces, and workplace settings. Multiple challenges and downstream tasks have been explored for egocentric datasets, such as action recognition [34, 33, 38], action localization [56], action anticipation [26, 59, 39, 1, 23], human-object interactions [45, 12, 7], parsing social interactions [46], and domain adaptation [44]. In our work, we evaluate the effectiveness of the representations learned by our self-supervised approach on the EPIC-Kitchens-100 and Ego4D datasets over multiple downstream tasks.

3 RepLAI
In this section, we detail our approach for learning audio-visual representations from and for interaction-rich egocentric data in a self-supervised manner, i.e., without relying on human-annotated labels. Sec. 3.1 provides an overview of RepLAI and motivates the two key contributions of this work: identifying 'moments of interaction' (MoI) and learning from 'audible visual state changes'. Sec. 3.2 details the proposed approach for MoI detection, and Sec. 3.3 explains the proposed self-supervised objective for learning state-aware representations. Sec. 3.4 explains the objective of audio-visual correspondence learning used to train RepLAI. Sec. 3.5 brings both objectives together and includes the necessary details for reproducibility.

3.1 Overview
Given a dataset $\mathcal{D} = \{(v_i, a_i)\}_{i=1}^N$ containing $N$ long (untrimmed) audio-visual streams, our goal is to learn visual and audio encoders, denoted $f_V$ and $f_A$, that can effectively represent egocentric data. An overview of the proposed approach is depicted in Fig. 2. For each sample $(v, a) \in \mathcal{D}$, we search for moments of interaction (MoI) using the audio stream, and extract short audio and visual clips around these MoI. These trimmed clips are then encoded into vectorized representations using $f_V$ and $f_A$. The whole system is trained to optimize two self-supervised losses: an audio-visual correspondence loss $\mathcal{L}_{AVC}$, and a novel self-supervised loss that learns from audible state changes, $\mathcal{L}_{AStC}$.
Why detect moments of interaction (MoI)? Untrimmed video of daily activities often contains long periods without interactions, which are not useful for training. Instead, we search for moments in time that are more likely to contain interactions, which we refer to as moments of interaction (MoI).
Why learn from audible state changes? Visual representations of daily activities should be informative of the state of the environment and/or the objects being interacted with. Moreover, changes in the environment are usually caused by physical interactions, which produce distinct sound signatures. We hypothesize that state-aware representations can be obtained by learning to associate audio with the change in visual representation during a moment of interaction.

3.2 Audio-driven detection of moments of interaction
Audio signals are particularly informative of moments of interaction. To complete day-to-day activities, we physically interact with objects in our environments. These interactions typically produce distinct audio patterns: short bursts of energy that span all frequencies.
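The detector described next operates on a log mel spectrogram of the audio stream. As a point of reference, such a spectrogram might be computed as in the following minimal sketch using torchaudio; the 16 kHz sampling rate and 80 mel bands follow the implementation details in Sec. 3.5, while the FFT and hop sizes are illustrative assumptions rather than the authors' exact settings.

```python
# Minimal sketch (not the authors' code): computing a log mel spectrogram.
# 16 kHz mono audio and 80 mel bands follow Sec. 3.5; n_fft/hop_length are assumptions.
import torch
import torchaudio

def log_mel_spectrogram(wav_path, sample_rate=16000, n_mels=80):
    wav, sr = torchaudio.load(wav_path)              # (channels, samples)
    wav = wav.mean(dim=0, keepdim=True)              # average stereo down to mono
    if sr != sample_rate:
        wav = torchaudio.functional.resample(wav, sr, sample_rate)
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate, n_fft=1024, hop_length=160, n_mels=n_mels
    )(wav)                                           # (1, n_mels, frames)
    return torch.log(mel + 1e-6).squeeze(0)          # (n_mels, frames), log-compressed
```

With a hop of 160 samples at 16 kHz, each spectrogram frame covers 10 ms, which is convenient for the 50 ms peak-spacing rule used below.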
This is illustrated in Fig. 1, where we visualize the untrimmed visual and audio data of a person performing a series of actions in the kitchen. The audio data is represented as a log mel spectrogram, where the x-axis represents time and the y-axis represents audio frequency on a log scale. As can be seen, moments of interaction appear in the spectrogram as vertical edges, which can be easily detected. Once detected, short clips around the moments of interaction are collected into a dataset $\mathcal{D}_{MoI}$ and used for training.
The remaining question is how to locate the timestamps of such vertical edges. Intuitively, we do this by finding robust local maxima in the total energy (summed over all frequencies) of the spectrogram. Concretely, let $M(t, \omega)$ be the value of the log mel spectrogram of an audio clip at time $t$ and frequency $\omega$. To remove the influence of background noise and overall audio intensity/volume, we compute the z-score normalization of the spectrogram for each frequency independently, $\bar{M}(t, \omega) = \frac{M(t, \omega) - \mu_\omega}{\sigma_\omega + \epsilon}$, where $\epsilon$ is a small constant for numerical stability. Here, $\mu_\omega$ and $\sigma_\omega$ are the mean and standard deviation of $M(t, \omega)$ over time, respectively; specifically, $\mu_\omega = \mathbb{E}_t[M(t, \omega)]$, $\sigma_\omega^2 = \mathbb{E}_t[(M(t, \omega) - \mu_\omega)^2]$, and $\epsilon = 10^{-5}$. Next, we define moments of interaction as the set of timestamps which are local maxima of $\sum_\omega \bar{M}(t, \omega)$ (or peaks for short). Moreover, to avoid weak local maxima that may be caused by the noisy nature of audio signals, we ignore peaks with small prominence (lower than 1), where the prominence of a peak is defined as the difference between the peak value and the minimum value in a small window around it. For further robustness, when multiple close peaks are found (less than 50ms apart), only the highest-prominence peak is kept.
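To make this detection procedure concrete, below is a minimal sketch (not the authors' released code) of the audio-driven MoI detector, operating on a log mel spectrogram such as the one computed above. The hop_sec frame period is an assumption, and scipy.signal.find_peaks is used to approximate the 50 ms rule: its distance argument keeps the taller of two nearby peaks, which may occasionally differ from keeping the highest-prominence one.

```python
# Minimal sketch (not the authors' code) of the MoI detector from Sec. 3.2:
# per-frequency z-scoring of the log mel spectrogram, summing over frequencies,
# and prominence-based peak picking. hop_sec (frame period) is an assumption.
import numpy as np
from scipy.signal import find_peaks

def detect_moi(log_mel, hop_sec=0.01, eps=1e-5):
    """log_mel: array of shape (n_mels, T). Returns MoI timestamps in seconds."""
    mu = log_mel.mean(axis=1, keepdims=True)        # per-frequency mean over time
    sigma = log_mel.std(axis=1, keepdims=True)      # per-frequency std over time
    z = (log_mel - mu) / (sigma + eps)              # z-scored spectrogram
    energy = z.sum(axis=0)                          # total energy across frequencies
    min_dist = max(1, int(round(0.05 / hop_sec)))   # keep peaks at least 50 ms apart
    peaks, _ = find_peaks(energy, prominence=1.0, distance=min_dist)
    return peaks * hop_sec                          # frame indices -> seconds
```

For instance, detect_moi(log_mel_spectrogram("video_audio.wav").numpy()) would return candidate timestamps around which short visual and audio clips can be extracted into $\mathcal{D}_{MoI}$ (both function names are from these sketches, not the released code).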
3.3 Learning from audible state changes
Physical interactions often cause both state changes in the environment and distinct audio signals. To leverage this natural co-occurrence, we propose a self-supervised task that seeks to associate the audio with the change in visual state during a moment of interaction. The proposed task is optimized by minimizing a loss with two negative log-likelihood terms that: (1) increase the probability of associating the audio with the visual state change in the forward (i.e., correct) direction, and (2) decrease the probability of associating the audio with the visual state change in the backward (i.e., incorrect) direction. Consider, for example, the interaction of 'closing a fridge door'. To optimize for this task, the audio of closing the door should be (1) similar to the visual transition opened door → closed door and (2) dissimilar to the (backwards) transition closed → open. This encourages learning of representations that are informative of object states, making them useful for a variety of egocentric tasks. Specifically, the audible state change (AStC) loss is defined as
$$\mathcal{L}_{AStC} = \mathbb{E}_{(v_t, a_t) \in \mathcal{D}_{MoI}}\left[-\log\left(p_{frwd}(v_t, a_t)\right) - \log\left(1 - p_{bkwd}(v_t, a_t)\right)\right]. \quad (1)$$
The probabilities $(p_{frwd}, p_{bkwd})$ are computed from cross-modal similarities
$$p_{frwd}(v_t, a_t) = \sigma\left(\mathrm{sim}\left(\Delta v_t^{frwd}, \mathbf{a}_t\right)/\tau\right), \quad (2)$$
$$p_{bkwd}(v_t, a_t) = \sigma\left(\mathrm{sim}\left(\Delta v_t^{bkwd}, \mathbf{a}_t\right)/\tau\right), \quad (3)$$
where $\tau = 0.2$ is a temperature hyper-parameter and $\sigma$ denotes the sigmoid function. For better readability, we absorb the notation for the audio projection MLP head $h_A^{AStC}$ and the state change projection MLP head $h_{\Delta V}^{AStC}$ within $\mathrm{sim}(\cdot, \cdot)$, but their usage is clearly illustrated in Fig. 3a.
Audio representations $\mathbf{a}_t$ are obtained by encoding the trimmed audio clips $a_t$ via the audio encoder $f_A$ (shared across all objectives). As explained above, $\mathbf{a}_t$ is further projected via $h_A^{AStC}$ to a space where similarity to visual state changes is enforced.
State change representations $(\Delta v_t^{frwd}, \Delta v_t^{bkwd})$ are computed by considering two non-overlapping visual clips for each moment of interaction $t$, at timestamps $t - \delta$ and $t + \delta$. The two clips, $v_{t-\delta}$ and $v_{t+\delta}$, are encoded via the visual encoder $f_V$ (shared across all tasks) and a projection MLP head $h_V^{AStC}$ (specific to the AStC task). Specifically, we represent the forward and backward state changes as
$$\Delta v_t^{frwd} = h_V^{AStC} \circ f_V(v_{t+\delta}) - h_V^{AStC} \circ f_V(v_{t-\delta}), \quad (4)$$
$$\Delta v_t^{bkwd} = h_V^{AStC} \circ f_V(v_{t-\delta}) - h_V^{AStC} \circ f_V(v_{t+\delta}). \quad (5)$$
In summary, optimizing the loss of Eq. 1 not only requires the audio representation $\mathbf{a}_t$ to be aligned with the representation of the visual change $\Delta v_t^{frwd}$ that took place, but also to be different from the hypothetical backward state change $\Delta v_t^{bkwd}$.

3.4 Learning from audio-visual correspondences [15, 4, 43]
Audio-visual correspondence (AVC) is a well-studied self-supervised methodology for learning unimodal audio and visual encoders. The key idea is to bring visual and audio clips into a common feature space, where the representations of audio-visual pairs are aligned. Note that AVC differs from the proposed AStC task, as AVC seeks to associate the audio $a_t$ with the corresponding visual clips $v_t$, as opposed to the change in visual state $\Delta v_t$. As a result, visual representations learned through AVC are biased towards static concepts, while those learned through AStC are more sensitive to dynamic concepts. Since both types of representations can be useful for egocentric tasks, we further train the visual and audio encoders, $f_V$ and $f_A$, for the AVC task. Specifically, consider a dataset of audio-visual pairs $(v_i, a_i)$ with representations $\mathbf{v}_i = f_V(v_i)$ and $\mathbf{a}_i = f_A(a_i)$. In particular, we let $(v_i, a_i)$ be short clips extracted from sample $i$ around one of the detected moments of interaction. Then, following [43, 61], audio-visual correspondence is established by minimizing a cross-modal InfoNCE loss of the form
$$\mathcal{L}_{AVC} = \mathbb{E}_{(v_i, a_i) \sim \mathcal{D}}\left[-\log \frac{e^{\mathrm{sim}(\mathbf{v}_i, \mathbf{a}_i)/\tau}}{\sum_j e^{\mathrm{sim}(\mathbf{v}_i, \mathbf{a}_j)/\tau}} - \log \frac{e^{\mathrm{sim}(\mathbf{v}_i, \mathbf{a}_i)/\tau}}{\sum_j e^{\mathrm{sim}(\mathbf{v}_j, \mathbf{a}_i)/\tau}}\right], \quad (6)$$
where $\tau = 0.07$ is a temperature hyper-parameter and $\mathrm{sim}(\cdot, \cdot)$ denotes the cosine similarity. Both terms in Eq. 6 help bring $\mathbf{v}_i$ and $\mathbf{a}_i$ (i.e., the positives) together. The key difference is whether the negative set is composed of audio representations $\mathbf{a}_j$ or visual representations $\mathbf{v}_j$, where $j \neq i$. For readability of Eq. 6, we once again absorb the notation for the audio and visual projection MLP heads ($h_A^{AVC}$ and $h_V^{AVC}$) within $\mathrm{sim}(\cdot, \cdot)$, and illustrate their usage in Fig. 3b. Fig. 3b also shows that we apply the AVC loss twice, to associate both visual clips (extracted slightly before and after the moment of interaction $t$) with the corresponding audio.

3.5 Training
The audio-visual representation models $f_A$ and $f_V$ are trained to minimize both the AVC and AStC losses,
$$\mathcal{L} = \alpha \mathcal{L}_{AVC} + (1 - \alpha)\mathcal{L}_{AStC}, \quad (7)$$
where $\alpha$ is a weighting hyper-parameter between the two terms. While we experimented with different values of $\alpha$, we found that equal weighting produced the best results.
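The following is a minimal PyTorch-style sketch of the objectives in Eqs. (1)-(7), not the authors' released implementation. The encoders and projection heads (f_v, f_a, h_v, h_a) are assumed to be user-provided nn.Modules, and the AVC embeddings are assumed to already be projected and L2-normalized, so that a dot product equals the cosine similarity.

```python
# Minimal sketch (not the authors' implementation) of the AStC and AVC losses.
# f_v, f_a: backbone encoders; h_v, h_a: AStC projection MLP heads (assumed modules).
import torch
import torch.nn.functional as F

def astc_loss(f_v, f_a, h_v, h_a, v_before, v_after, audio, tau=0.2):
    """Audible state change loss, Eq. (1), for a batch of MoI clips."""
    z_before = h_v(f_v(v_before))                  # projected features of v_{t-delta}
    z_after = h_v(f_v(v_after))                    # projected features of v_{t+delta}
    d_frwd = z_after - z_before                    # Eq. (4)
    d_bkwd = z_before - z_after                    # Eq. (5)
    a = h_a(f_a(audio))                            # projected audio features
    p_frwd = torch.sigmoid(F.cosine_similarity(d_frwd, a) / tau)   # Eq. (2)
    p_bkwd = torch.sigmoid(F.cosine_similarity(d_bkwd, a) / tau)   # Eq. (3)
    return (-torch.log(p_frwd + 1e-8) - torch.log(1 - p_bkwd + 1e-8)).mean()

def avc_loss(v_emb, a_emb, tau=0.07):
    """Cross-modal InfoNCE loss, Eq. (6); v_emb, a_emb are L2-normalized (B, D)."""
    logits = v_emb @ a_emb.t() / tau               # pairwise cosine similarities
    targets = torch.arange(v_emb.size(0), device=v_emb.device)
    return F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)

def replai_loss(l_avc, l_astc, alpha=0.5):
    """Total objective, Eq. (7), with equal weighting by default."""
    return alpha * l_avc + (1 - alpha) * l_astc
```

Note that, as written, d_bkwd is simply the negative of d_frwd; the two are kept explicit here only to mirror Eqs. (4) and (5).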
Implementation details. We follow prior work on audio-visual correspondence [43] and use an R(2+1)D video encoder [62] with depth 18 and a 10-layer 2D CNN as the audio encoder. Two video clips are extracted around each moment of interaction at a frame rate of 16 FPS, each with a duration of 0.5s and separated by a gap of 0.2s. Video clips are augmented by random resizing, cropping, and horizontal flipping, resulting in clips of 8 frames at a resolution of 112 × 112. As for the audio, we extract clips of 2s at 44.1kHz and downsample them to 16kHz. If the audio is stereo, we average the two waveforms to convert it to mono, and then convert the mono signal into a log mel spectrogram with 80 frequency bands and 128 temporal frames. Models are trained with stochastic gradient descent for 100 epochs with a batch size of 128 over 4 GTX 1080 Ti GPUs, a learning rate of 0.005, and a momentum of 0.9. For Ego4D, we use a batch size of 512 trained over 8 RTX 2080 Ti GPUs with a learning rate of 0.05. The two loss terms in Eq. 7 are equally weighted with α = 0.5.

4 Experiments
In this section, we demonstrate the benefits of identifying moments of interaction and learning state-aware representations through an audible state-change objective. We also show that, while large-scale audio-visual correspondence (AVC) is beneficial, it is not sufficient to learn the state-aware representations required for egocentric tasks. The setup used for our experiments is described in Sec. 4.1. Results and discussion of the main takeaways are presented in Sec. 4.2.

4.1 Experimental Setup
Datasets. We evaluate on two egocentric datasets: EPIC-Kitchens-100 [14] and Ego4D [27]. EPIC-Kitchens-100 contains 100 hours of activities in the kitchen. Ego4D contains 3670 hours of egocentric video covering daily activities in the home, workplace, social settings, etc. For experiments on Ego4D, we use all videos from the Forecasting and Hand-Object interaction subsets.
Baselines and ablations. We consider various baselines as well as ablated versions of RepLAI. Random represents an untrained (randomly initialized) model. AVID [43] and XDC [3] are two state-of-the-art models pre-trained on 2M audio-visual pairs from AudioSet [24] that only leverage audio-visual correspondence. For the full method RepLAI, we initialize the model weights from AVID before training on moments of interaction to minimize both the AVC and state change (AStC) losses. We also evaluate our method trained without AVID initialization (RepLAI from scratch), trained with only AVC (RepLAI w/o AStC), trained with only the state change loss (RepLAI w/o AVC), and trained on random moments in time (RepLAI w/o MoI). Finally, we compare our approach with the fully supervised methods presented in Ego4D [27].
Downstream tasks. After self-supervised pre-training, the models are evaluated on a range of egocentric downstream tasks. This is done, as is standard, by appending a task-specific decoder to the backbone model and training the decoder on a small annotated dataset. The tasks are:
• Video action recognition (AR) on EPIC-Kitchens-100 and Ego4D. Given a short video clip, the task is to classify the 'verb' and 'noun' of the action taking place. This is done using two separate linear classifiers trained for this task. We report the top-1 and top-5 accuracies, following [14] (Tab. 1) and [27] (Tab. 2). We also evaluate on the unseen participants, head classes, and tail classes of EPIC-Kitchens-100 in Tab. 3. Through this task, we assess the efficacy of the spatio-temporal representations learned by the model in differentiating among different verbs and nouns.
• Long-term action anticipation (LTA) on Ego4D.
Given a video, the task is to predict the camera wearer's future sequence of actions. For this task, the model is first presented with 4 consecutive clips of 2s, which are encoded using our visual backbone $f_V$. Following [27], the representations are concatenated and fed to 20 separate linear classification heads to predict the following 20 actions. Performance is measured using the edit distance metric ED@(Z=20) [27]. Edit distance measures the minimum number of operations required to convert the predicted sequence of actions into the ground truth; to account for the multi-modality of future actions, the model is allowed to make Z = 20 predictions, and only the best prediction is scored. With the help of this task, we can evaluate whether the representations learned by the model can be employed for long-horizon planning, where the actions can change and may be of arbitrary duration. Results are reported in Tab. 2.
• State change classification (StCC) on Ego4D. Given a video clip, the task is to classify whether an object undergoes a state change or not. The video clip is encoded by $f_V$, and a state change classification head performs global average pooling over the entire feature tensor, followed by a classification layer. Performance is measured through the State Change Classification Accuracy (%) and reported in Tab. 2. This task is ideal for assessing the ability of the model to understand the temporal change happening in the state of an object.

4.2 Discussion of results
As can be seen in Tab. 1 and Tab. 2, RepLAI outperforms all other methods across all downstream tasks. Overall, this can be attributed to its ability to focus on interactions, both by detecting when they occur and by learning representations that are sensitive to interactions. A closer analysis of these results reveals several insights that we discuss next.
RepLAI enhances large-scale AVC-driven approaches. Prior work on self-supervised audio-visual learning has shown strong audio-visual representations for action recognition [43, 42]. One question that we seek to answer is how useful these representations are for egocentric tasks, and what their limitations are. To answer this question, we compare our model trained from scratch, RepLAI (Scratch), with our model using the weights from AVID [43] as initialization for both the visual and audio encoders. We also compare our method to standalone AVID and XDC, i.e., without further self-supervised training. Comparing rows (2), (3), and (8) in Tab. 1 and Tab. 2, it is clear that RepLAI enhances large-scale AVC pre-training by significant margins, leading to absolute improvements of 5% in top-1 verb accuracy on EPIC-Kitchens-100 and 4.2% on Ego4D, a 5.2% increase in state-change classification accuracy, and a 5.6% reduction in the edit distance for long-term anticipation compared to AVID. Comparing rows (7) and (8), we also see that large-scale AVID pre-training enhances the representations learned by RepLAI significantly on EPIC-Kitchens-100 but only marginally on Ego4D. This is likely due to the significantly larger diversity of scenes in Ego4D. Thus, while relying on large-scale audio-visual pre-training (as with AVID) can help avoid overfitting on smaller egocentric datasets, this is less critical when training on larger and more diverse data.
Detecting moments of interaction (MoI) helps representation learning.
We hypothesize that, to learn good representations for egocentric data of daily activities, self-supervised learning should focus on the moments in time when interactions occur. To assess whether our audio-driven MoI detection algorithm helps representation learning, we compare RepLAI with an ablated version, RepLAI w/o MoI, where the model is trained on audio-visual clips extracted at random from the untrimmed videos. As can be seen by comparing rows (6) and (8) in Tab. 1 and Tab. 2, sampling clips around MoI leads to significantly better representations for all egocentric downstream tasks that we study. Moreover, even though RepLAI w/o MoI trains with AStC, it is unable to fully leverage the state change objective without the information provided by moments of interaction, which leads to worse performance. This suggests that an explicit state change objective and sampling video clips around moments of interaction (which are likely to be aligned with the actual state changes) together provide information-rich feedback that helps the model understand how an interaction changes the state and how actions transition over time. These results also clearly show that the proposed MoI detection procedure is able to find moments in time that are especially useful for learning representations of daily activities. We emphasize the simplicity and effectiveness of our audio-driven detector, which shows how informative audio can be when searching for moments of interaction. In the future, we believe that learning-based approaches could further enhance MoI detection, and thereby further improve the learned audio-visual representations. We also show several qualitative examples of detected MoI in the supplement.
AVC and AStC are complementary. To assess the impact of both terms in Eq. 7, we evaluate RepLAI trained without $\mathcal{L}_{AVC}$ and without $\mathcal{L}_{AStC}$. Comparing rows (4) and (5) to rows (2) and (3) in Tab. 1 and Tab. 2 shows that each term enhances the representations obtained through large-scale audio-visual pre-training (AVID). Furthermore, comparing the ablated models in rows (4) and (5) to the full model in row (8) shows that these two terms are complementary to each other. This is because the AVC and AStC tasks encourage learning of representations with different characteristics. AVC focuses on learning visual representations that are informative of the kinds of sounding objects present in the video, while AStC forces the model to differentiate between visual representations that occur before and after state change interactions.
RepLAI encourages state-aware representation learning. To study the representations learned by our approach for different states, we generate t-SNE plots [63] for RepLAI and AVID, as shown in Fig. 4. To simplify the visualization, a small dataset is prepared consisting of all the videos corresponding to a single participant, P01, in EPIC-Kitchens-100, split into clips of 0.5s. We observe a larger spread in the t-SNE plot for RepLAI than for AVID. A larger spread indicates that the representations of the various states differ substantially from each other and form more distant clusters, as seen for RepLAI; when state representations are similar to each other, they cluster together and show less spread, as seen for AVID. MoI are the key moments of interaction with an object in the environment, during which its state is changing.
AVID has no such information about these key moments and also lacks an explicit state change objective. Therefore, it is unable to discriminate between the before and after states of an action and carries less state-aware information in its representations.
RepLAI representations are more generalizable and robust to the long tail. To assess RepLAI in a scenario with domain shift, we evaluate on unseen participants that were fully excluded from the pre-training of RepLAI. Tab. 3 shows that RepLAI significantly outperforms the baselines and ablations, indicating that the representations learned by our model generalize much better. Moreover, the verb and noun classes in EPIC-Kitchens-100 exhibit a long-tailed distribution. When further compared on head and tail classes separately in Tab. 3, we observe that RepLAI outperforms all other methods, highlighting its robustness to the long-tailed distribution.
Self-supervised vs. supervised representation learning. Tab. 2 also compares RepLAI to the fully supervised methods introduced in Ego4D [27] (rows S1, S2, and S3). We observe that RepLAI performs competitively with the fully supervised approaches when we have access to larger and more diverse data. With further focus on SSL for untrimmed datasets, self-supervised methods may be able to match supervised approaches; our work takes a step in that direction.

5 Conclusion
In this work, we propose an audio-driven self-supervised method for learning representations of egocentric video of daily activities. We show that, in order to learn strong representations for this domain, two important challenges need to be addressed. First, learning should focus on moments of interaction (MoI). Since these moments occur only sporadically in untrimmed video data, we show that MoI detection is an important component of representation learning in untrimmed datasets. Second, learning should focus on the consequences of interactions, i.e., changes in the state of an environment caused by agents interacting with the world. In particular, by seeking to identify visible state changes from the audio alone, we can learn representations that are potentially more aware of the state of the environment and hence particularly useful for egocentric downstream tasks.

Acknowledgements
We would like to thank DARPA MCS, ONR Young Investigator, and DARPA SAIL-ON for funding this work.

Broader impact
Deep learning models are capable of learning (and sometimes even amplifying) biases existing in datasets. While several steps have been taken in datasets like Ego4D to increase geographical diversity, we would like to encourage careful consideration of the ethical implications when deploying these models. While public datasets are essential to make progress on how to represent visual egocentric data, premature deployment of our models is likely to have a negative societal impact, as we did not check for the presence or absence of such biases.
1. How does the proposed method handle audio diversity in semantically similar interactions?
2. Have you considered using a linear head for $h_{\Delta V}^{AStC}$, since the forward delta seems to be the negative of the backward one?
3. Why do the AVC and AStC objectives seem to be pulling in opposite directions, and would choosing the audio at $t - \delta$ and $t + \delta$ instead of $t$ make more sense?
4. Can you provide insights into why AVC seems to be doing most of the job in Table 1, especially for the "verb" category?
5. How does the proposed method handle changes in location or a varying environment that result in changes in audio, rather than just human-object interaction?
6. How does the model perform on datasets that do not contain the same types of interactions as those included in the training data?
Summary Of The Paper
This paper proposes leveraging audible state changes for self-supervised representation learning from long-form egocentric videos. The authors first detect the timestamps where interesting interactions occur (MoI) and then compute a cross-modal contrastive objective in which the natural transition (before, interaction, after) is more likely than the temporally reversed one (after, interaction, before).

Strengths And Weaknesses
The paper is clear in its presentation and provides an interesting view of self-supervised multi-modal representation learning in egocentric videos with audible state changes. The idea is quite close to "Actions ~ Transformations" (https://arxiv.org/pdf/1512.00795.pdf), which unfortunately is missing from the references. Below are my detailed comments.
Line 48-49: The authors argue that the AVC objective leads to representations that are not informative of changes over time. If indeed the sound of "opening fridge" is different from that of "closing fridge", AVC will encourage the video representation of "opening fridge" to be more similar to the audio representation of "opening fridge" than to that of "closing fridge", since the audio embedding of the latter will serve as a negative instance within the contrastive framework. Note that you do not have to sample after the fact, e.g., once the fridge is closed/opened; instead, you can sample during the action, e.g., while the fridge is being opened/closed, something that working with video in a cross-modal contrastive setup easily allows you to do. With that said, I would like to see the authors clarify their point on the lack of AVC's suitability for the task, especially given the ablation studies showing that dropping AVC vs. AStC does not make much of a difference.
On finding MoI: As the paper mentions, egocentric videos are naturally long-form, as they continuously capture daily activities. Hence, a person moving through an environment (i.e., a change in location), even without any interaction, can result in changes in the audio spectrogram. For example, a person is cooking some food in the kitchen, then walks to the living room to pick up a book. The fact that there is a fan or stereo playing in the living room will result in a change in audio (note that the sound of the fan or stereo is not necessarily audible in the kitchen); however, those are not MoI, since there has been no interaction with either the stereo or the fan, yet from Sec. 3.2 it seems to me that the proposed MoI approach would pick those up. In a nutshell, a change in audio is not only the result of human-object interaction; it can also be due to a change of location or a varying environment, and I cannot see how the proposed MoI detection method can work in a realistic environment.
I am not convinced that learning from audible state changes as described in Sec. 3.3 is generic enough. For example, the visual state is different after hearing the sound of the MoI for "opening fridge": while the before state shows a closed fridge, the after state depicts an open one. Also, due to the "distinct" sound of "opening fridge", the backward transition should be less likely. Now consider the example of "cutting cucumber": the backward transition almost never happens (humans usually don't stitch cucumber slices back together!), while it is reasonable to go from a closed fridge to an open one and vice versa. There are also cases where, despite an audible MoI, the visuals look almost identical before and after, such as "stirring a pot".
I suspect that the clearer performance gains seen on Ego4D versus EPIC-Kitchens are partly related to state change properties, which are more prominent in the former dataset. I would like to hear the authors' feedback on the different aforementioned types of interactions and why their proposed model should work in a self-supervised setup where we do not know which of these types of interactions are included in the training data.
Line 269: I do not think it is fair to compare rows 2 and 7, since row 2 has only been trained on AudioSet, which is very different from the evaluation egocentric datasets. To see the true additional value of your contributions (use of MoI and the AStC loss), AVID should be fine-tuned on the egocentric datasets (and equally for RepLAI w/o AStC and MoI).

Questions
How do you handle the audio diversity of semantically similar interactions? For example, "cutting cucumber" on a wooden board will sound different from doing it on a plastic board, and the sound of "sautéing" mushrooms/onions in a pan is meaningfully influenced by the oven/pan/oil temperature.
As part of the ablation studies, have you tried a linear head for $h_{\Delta V}^{AStC}$, since the forward delta seems to be the negative of the backward one?
The AVC loss shown in Fig. 3b encourages the embedding of $v_{t-\delta}$ to be close to that of $v_{t+\delta}$ by anchoring both on the audio embedding at $t$, while AStC encourages them to be different. These two objective functions, to the best of my understanding, are pulling in opposite directions! Would it not make more sense to choose the audio at $t - \delta$ and $t + \delta$, instead of $t$, when computing the two AVC losses?
In Table 1, top-5 accuracy: AVC seems to be doing most of the job, as without AStC and MoI, performance on "verb" is almost maintained (~73%), but the pattern is different for "noun". Any insights?

Limitations
The authors have addressed the limitations.
Approaches such as [67, 54] have attempted to learn representations that are invariant to object deformations/viewpoints. However, many downstream tasks require representations that are sensitive to these deformations. Another alternative has been to use the multi-modal data [3, 43, 57] and learn representations via audio. But again most of these approaches seek to align audio and visual features in a common space, leading to invariant representations as well. The second challenge is dealing with the fact that current video-based SSL approaches exploit the curated nature of video datasets, such as Kinetics [9]. These approaches are designed to leverage carefully selected clips, displaying a single action or object interaction. This is in contrast to the predominantly untrimmed real-world data characteristic of large egocentric datasets of daily activities. Here, unlike action centric datasets, the most ‘interesting‘ or ‘interaction-rich‘ clips have NOT been carefully selected by human annotators. Thus, learning from untrimmed video poses a major challenge, as a significant portion of the data does not focus on the concepts we want to learn. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). In this work, we ask the question, ‘Can we learn meaningful representations from interaction-rich, multi-modal streams of egocentric data?’ Learning from continuous streams of data requires focusing on the right moments when the actual interactions are likely to occur. Consider, for example, the acts of opening a fridge or placing a pan on the stove. Actions like these create clear and consistent sound signatures due to the physical interaction between objects. These moments can be easily detected from audio alone and can be used to target training on interesting portions of the untrimmed videos. We show that even a simple spectrogram-based handcrafted detector is sufficient to identify interesting moments in time, and that representation learning benefits substantially from using them to sample training clips. But what should the loss be? Prior work on audio-visual correspondence (AVC) [15, 4, 43] uses the natural co-occurrence of sounds and the visual manifestations of their sources as the source of supervision. However, since the AVC objective still favors invariance, the learned representations are not informative of the changes that happen over time (e.g., representations that can distinguish between closed and opened fridge, or vegetables before and after chopping them). To better capture state changes, we introduce a novel audio-visual self-supervised objective, in which audio representations at key moments in time are required to be informative of the change in the corresponding visual representations over time. The intuition behind this objective is that transitions between object states are often marked by characteristic sounds. Thus, models optimized under this objective would associate the distinct sounds not only with the objects themselves (as accomplished with AVC), but also with the transition between two different states of the object. To this end, we introduce RepLAI – Representation Learning from Audible Interactions, a selfsupervised algorithm for representation learning from videos of audible interactions. RepLAI uses the audio signals in two unique ways: (1) to identify moments in time that are conducive to better self-supervised learning and (2) to learn representations that focus on the visual state changes caused by audible interactions. 
We validate these contributions extensively on two egocentric datasets, EPIC-Kitchens-100 [14] and the recently released Ego4D [27], where we demonstrate the benefits of RepLAI for several downstream tasks, including action recognition, long term action anticipation, and object state change classification. 2 Related Work Self-supervised learning. Self-supervised learning methods operate on an unlabeled dataset by explicitly defining pretext tasks such as solving jigsaw puzzle [47], patch location prediction [16], inpainting [50], and image rotation [25] prediction. Following these, the next wave of self-supervised methods has been based on contrastive learning that learns representations with the help of data augmentation and instance discrimination [10, 28, 48, 31, 8]. These methods have shown rapid progress in self-supervised learning for images. While these approaches explore the spatial information of images, RepLAI leverages the temporal information of videos. Video representation learning. Relevant to our proposed approach is self-supervised representation learning for videos where the spatiotemporal pretext tasks are designed such as temporal order prediction [40, 70, 35, 69], predicting motion and appearance statistics [65], pace prediction [66], temporal cycle consistency [18, 68], and video colorization [64]. Contrastive learning has also been widely adopted in the domain of video [55, 29, 32, 57, 71, 30, 22] with impressive results on action recognition tasks. These methods however learn representations that are invariant to spatio-temporal augmentations, such as temporal jittering, and thus are incapable of representing object state changes. Closer to the objective of RepLAI, we include relevant literature on audio-visual representation learning from videos, where the audio stream is additionally utilized. Audio-visual representation learning. Learning without additional supervision has also been explored in the context of the audio modality with the help of audio-visual correspondence (AVC) [4, 5]. As stated simply, AVC is the binary classification task of predicting if a video clip and a short audio clip correspond with each other or not (details in Sec. 3.4). Similar tasks like temporal synchronization [36, 49] between audio and video, audio classification [6, 3, 11], spatial alignment prediction between audio and 360-degree videos [41], optimal combination of self-supervised tasks [52] have been shown beneficial for learning effective multi-modal video representations. Other works explore contrastive learning for both audio and video modality [43, 51, 42] as a cross-modal instance discrimination task. Fine-grained video understanding. Real-world videos are often untrimmed in nature and have multiple actions in a single video. Along this line, fine-grained analysis has been studied for videos in the form of a query-response temporal attention mechanism [72], bi-directional RNN s[58], and semi-supervised learning problem [17]. While these works only utilize the visual modality, another line of work has also explored multi-modal fine-grained video understanding as a transformer-based model [34], by exploiting the correspondence between modalities [44], or by exploring how to best combine multiple modalities - audio, visual, and language [2]. In our work, we try to conduct fine-grained video understanding in a self-supervised manner. Egocentric datasets. 
Egocentric datasets offer new opportunities to learn from a first-person point of view, where the world is seen through the eyes of an agent. Many egocentric datasets have been developed such as Epic-kitchens [13, 14] which consist of daily activities performed in a kitchen environment, Activities of Daily Living [53], UT Ego [37, 60], the Disney Dataset [20], and the recently released large-scale Ego4D dataset [27] which consists of day-to-day life activities in multiple scenarios such as household, outdoor spaces, workplace, etc. Multiple challenges and downstream tasks have explored for egocentric datasets like action recognition [34, 33, 38], action localization [56], action anticipation [26, 59, 39, 1, 23], human-object interactions [45, 12, 7], parsing social interactions [46], and domain adaptation [44]. In our work, we evaluate the efficiency of the representations learned by our self-supervised approach on the EPIC-Kitchens-100 and Ego4D datasets over multiple downstream tasks. 3 RepLAI In this section, we detail our approach to learn audio-visual representations from and for interactionrich egocentric data in a self-supervised manner, i.e., without relying on human annotated labels. Sec. 3.1 provides an overview of RepLAI and motivates the two key contributions of this work – identifying ‘moments of interaction’ (MoI) and learning from ‘audible visual state changes’. Sec. 3.2 details the proposed approach for MoI detection and section Sec. 3.3 explains the proposed selfsupervised objective for learning state-aware representations. Sec. 3.4 explains the objective of audio-visual correspondence learning used to train RepLAI. Sec. 3.5 brings both objectives together and includes necessary details for reproducibility. 3.1 Overview Given a dataset D = {(vi, ai)Ni=1} containing N long (untrimmed) audio-visual streams, our goal is to learn visual and audio encoders, denoted fV and fA, that can effectively represent egocentric data. An overview of the proposed approach is depicted in Fig. 2. For each sample (v, a) ∈ D, we search for moments of interaction (MoI) using the audio stream, and extract short audio and visual clips around these MoI. These trimmed clips are then encoded into a vectorized representation using fV and fA. The whole system is trained to optimize two self-supervised losses – an audio-visual correspondence loss LAVC, and a novel self-supervised loss that learns from audible state changes LAStC. Why detect moments of interaction (MoI)? Untrimmed video of daily activities often contains long periods without interactions, which aren’t useful for training. Instead, we search for moments in time that are more likely to contain interactions which we refer to as moments of interaction (MoI). Why learn from audible state changes? Visual representations of daily activities should be informative of the state of the environment and/or objects being interacted with. Moreover, changes in the environment are usually caused by physical interactions, which produce distinct sound signatures. We hypothesize that state-aware representations can be obtained by learning to associate audio with the change of visual representation during a moment of interaction. 3.2 Audio-driven detection of moments of interaction Audio signals are particularly informative of moments of interaction. To complete day-to-day activities, we physically interact with objects in our environments. These interactions typically produce distinct audio patterns - short bursts of energy that span all frequencies. 
This is illustrated in Fig. 1, where we visualize the untrimmed visual and audio data of a person performing a series of actions in the kitchen. The audio data is represented as a log mel spectrogram, where the x-axis represents time and y-axis the audio frequency in log-scale. As can be seen, moments of interaction appear in the spectrogram as vertical edges, which can be easily detected. Once detected, short clips around the moments of interaction are collected into a dataset DMoI, and used for training. The remaining question is how to locate the timestamp of such vertical edges? Intuitively, we do this by finding robust local maxima in the total energy (summed over all frequencies) of the spectrogram. Concretely, let M(t, ω) be the value of the log mel spectrogram of an audio clip at time t and frequency ω. To remove the influence of background noise and overall audio intensity/volume, we compute the z-score normalization of the spectrogram for each frequency independently M̄(t, ω) = s(t,ω)−µωσω+ϵ , where ϵ is small constant for numerical stability. Here, µω and σω are the mean and standard deviation of M(t, ω) over time, respectively.1 Next, we define moments of interaction as the set of timestamps which are local maxima of ∑ ω s̄(t, ω) (or peaks for short). Moreover, to avoid weak local maxima that may be caused by the noisy nature of audio signals, we ignore peaks with small prominence (lower than 1)2. For further robustness, when multiple close peaks are found (less than 50ms apart), only the highest prominence peak is kept. 3.3 Learning from audible state changes Physical interactions often cause both state changes in the environment and distinct audio signals. To leverage this natural co-occurrence, we propose a self-supervised task that seeks to associate the audio with changes in the visual state during a moment of interaction. 1Specifically, µω = Et[M(t, ω)], σ2ω = Et[(M(t, ω)− µω)2], and ϵ = 1e− 5. 2The prominence of a peak is defined as the difference between the peak value and the minimum value in a small window around it. The proposed task is optimized by minimizing a loss with two negative log-likelihood terms to: (1) increase the probability of associating the audio with the visual state change in the forward (i.e. correct) direction, (2) decrease the probability of associating the audio with the visual state change in the backward (i.e. incorrect) direction. Consider, for example, the interaction of ‘closing a fridge door’. To optimize for this task, the audio of closing the door should be (1) similar to the visual transition opened door → closed door and (2) dissimilar to the (backwards) transition closed → open. This encourages learning of representations that are informative of object states, making them useful for a variety of egocentric tasks. Specifically, the audible state change (AStC) loss is defined as LAStC = Evt,at∈DMoI [ − log ( pfrwd(vt, at) ) − log ( 1− pbkwd(vt, at) )] . (1) The probabilities (pfrwd, pbkwd) are computed from cross-modal similarities pfrwd(vt, at) = σ ( sim ( ∆vfrwdt ,at ) /τ ) , (2) pbkwd(vt, at) = σ ( sim ( ∆vbkwdt ,at ) /τ ) , (3) where τ = 0.2 is a temperature hyper-parameter, and σ denotes the sigmoid function. For better readability, we absorb the notations for the audio projection MLP head hAStCA and the state change projection MLP head hAStC∆V within sim(·, ·), but their usage is clearly illustrated in Fig. 3a. 
Audio representations (at) are obtained by encoding the trimmed audio clips at via the audio encoder fA (shared across all objectives). As explained above, at is further projected via hAStCA to a space where similarity to visual state changes is enforced. State change representations (∆vfrwdt , ∆vbkwdt ) are computed by considering two non-overlapping visual clips for each moment of interaction t, at timestamps t − δ and t + δ. The two clips, vt−δ and vt+δ , are encoded via the visual encoder fV (shared across all tasks) and a projection MLP head hAStCV (specific to the AStC task). Specifically, we represent forward and backward state changes as ∆vfrwdt = h AStC V ◦ fV (vt+δ)− hAStCV ◦ fV (vt−δ), (4) ∆vbkwdt = h AStC V ◦ fV (vt−δ)− hAStCV ◦ fV (vt+δ). (5) In summary, optimizing the loss of Eq. 1 not only requires the audio representation at to be aligned with representation of the visual change ∆vfrwdt that took place, but also to be different from the hypothetical backward state change ∆vbkwdt . 3.4 Learning from audio-visual correspondences [15, 4, 43] Audio-visual correspondence (AVC) is a well-studied self-supervised methodology for learning unimodal audio and visual encoders. The key idea is to bring visual and audio clips into a common feature space, where the representations of audio-visual pairs are aligned. Note that AVC differs from the proposed AStC task, as AVC seeks to associate the audio at with the corresponding visual clips vt, as opposed to the change in visual state ∆vt. As a result, visual representations learned through AVC are biased towards static concepts, while those learned through AStC are more sensitive to dynamic concepts. Since both types of representations can be useful for egocentric tasks, we further train the visual and audio encoders, fV and fA, for the AVC task. Specifically, consider a dataset of audio-visual pairs (vi, ai) with representations vi = fV (vi) and ai = fA(ai). In particular, we let (vi, ai) be short clips extracted from sample i around one of the detected moments of interest. Then, following [43, 61], audio-visual correspondence is established by minimizing a cross-modal InfoNCE loss of the form LAVC = Evi,ai∼D [ − log e sim(vi,ai)/τ∑ j e sim(vi,aj)/τ − log e sim(vi,ai)/τ∑ j e sim(vj ,ai)/τ ] , (6) where τ = 0.07 is a temperature hyper-parameter and sim(·, ·) denotes the cosine similarity. Both terms in Eq. 6 help bring vi and ai (i.e. the positives) together. The key difference is whether the negative set is composed of audio representations aj or visual representations vj where j ̸= i For readability of Eq. 6, we once again absorb the notation for the audio and visual projection MLP heads (hAVCA and h AVC V ) within sim(·, ·), and illustrate their usage in Fig. 3b. Fig. 3b also shows that we apply the AVC loss twice to associate both the visual clips (extracted slightly before and after the moment of interaction t) to the corresponding audio. 3.5 Training The audio-visual representation models fA and fV are trained to minimize both AVC and AStC losses L = αLAVC + (1− α)LAStC (7) where α is a weighting hyper-parameter between the two terms. While we experimented with different values of α, we found that equal weighting produced best results. Implementation details. We follow prior work on audio visual correspondence [43], and use an R(2+1)D video encoder [62] with depth 18 and a 10-layer 2D CNN as the audio encoder. 
Two video clips are extracted around moments of interaction at a frame rate of 16 FPS each with a duration of 0.5s, and separated by a gap of 0.2s. Video clips are augmented by random resizing, cropping, and horizontal flipping resulting in clips of 8 frames at a resolution of 112× 112,. As for the audio, we extract clips of 2s at 44.1kHz and downsample them to 16kHz. If the audio is stereo, we average the two waveforms to downgrade to mono, and then convert the mono signal to a log mel spectrogram with 80 frequency bands and 128 temporal frames. Models are trained with stochastic gradient descent for 100 epochs with a batch size of 128 trained over 4 GTX 1080 Ti GPUs, a learning rate of 0.005 and a momentum of 0.9. For Ego4D, we use a batch size of 512 trained over 8 RTX 2080 Ti GPUs with a learning rate of 0.05. The two loss terms in Eq. 7 are equally weighted with α = 0.5. 4 Experiments In this section, we demonstrate the benefits of identifying moments of interaction and learning state-aware representations through an audible state-change objective. We also show that, while large scale audio-visual correspondence (AVC) is beneficial, it is not sufficient to learn state-aware representations required for egocentric tasks. The setup used for our experiments is described in Sec. 4.1. Results and discussion of main takeaways are presented in Sec. 4.2. 4.1 Experimental Setup Datasets. We evaluate on two egocentric datasets: EPIC-Kitchens-100 [14] and Ego4D [27]. EPICKitchens-100 contains 100 hours of activities in the kitchen. Ego4D contains 3670 hours of egocentric video covering daily activities in the home, workplace, social settings, etc. For experiments on Ego4D, we use all videos from the Forecasting and Hand-Object interaction subsets. Baselines and ablations. We consider various baselines as well as ablated versions of RepLAI. Random represents an untrained (randomly initialized) model. AVID [43] and XDC [3] are two state-of-the-art models pre-trained on 2M audio-visual pairs from AudioSet [24] that only leverage audio-visual correspondence. For the full method RepLAI, we initialize the model weights from AVID before training on moments of interaction to minimize both AVC and state change loss, AStC. We also evaluate our method trained without AVID initialization (RepLAI from scratch), trained with only AVC (RepLAI w/o AStC ), only state change losses (RepLAI w/o AVC ), and trained on random moments in time (RepLAI w/o MoI). Finally, we compare our approach with the fully supervised methods presented in Ego4D [27]. Downstream tasks. After self-supervised pre-training, the models are evaluated on a range of egocentric downstream tasks. This is done, as is standard, by appending a task specific decoder to the backbone model, and training the decoder on a small annotated dataset. The tasks are: • Video action recognition (AR) on EPIC-Kitchens-100 and Ego4D. Given a short video clip, the task is to classify the ‘verb’ and ‘noun’ of the action taking place. This is done using two separate linear classifiers trained for this task. We report the top-1 and top-5 accuracies, following [14] (Tab. 1) and [27] (Tab. 2). We also evaluate on the unseen participants, head classes, and tail classes of EPIC-Kitchens-100 in Tab. 3. Through this task, we assess the efficacy of the spatial-temporal representations learned by the model in differentiating among different verbs and nouns. • Long-term action anticipation (LTA) on Ego4D. 
Given a video, the task is to predict the camera wearer’s future sequence of actions. For this task, the model is first presented with 4 consecutive clips of 2s, which are encoded using our visual backbone fV . Following [27], the representations are concatenated and fed to 20 separate linear classification heads to predict the following 20 actions. Performance is measured using the edit distance metric ED@(Z=20) [27].3 With the help of this task, we can evaluate if the representations learned by the model can be employed for long-horizon planning where the actions can change and may be of arbitrary duration. Results are reported in Tab. 2. • State change classification (StCC) on Ego4D. Given a video clip, the task is to classify if an object undergoes a state change or not. The video clip is encoded by fV and a state change classification head is used which performs global average pooling on the entire feature tensor and is followed by a classification layer. Performance is measured through the State Change Classification Accuracy (%), and reported in Tab. 2. This task is ideal for assessing the ability of the model in understanding the temporal change happening in the state of an object. 4.2 Discussion of results As can be seen in Tab. 1 and Tab. 2, RepLAI outperforms all other methods across all downstream tasks. Overall, this can be attributed to its ability to focus on interactions, both by detecting when they occur and by learning representations that are sensitive to interactions. A closer analysis of these results reveals several insights that we discuss next. RepLAI enhances large-scale AVC driven approaches. Prior work on self-supervised audio-visual learning has shown strong audio-visual representations for action recognition [43, 42]. One question that we seek to answer is, how useful these are representations to egocentric tasks and what are their limitations? To answer this question, we compare our model trained from scratch, RepLAI (Scratch), 3Edit distance measures the minimum number of operations required to convert the predicted sequence of actions to ground truth. To account for multi-modality of future actions, it also allows the model to make Z = 20 predictions, and only accounts for the best prediction. with our model using the weights from AVID [43] as initialization for both the visual and audio encoders. We also compare our method to standalone AVID and XDC i.e. without further selfsupervised training. Comparing rows (2), (3) and (8) in Tab. 1 and Tab. 2, it is clear that RepLAI enhances large-scale AVC pre-training by significant margins, leading to absolute improvements of 5% in top-1 verb accuracy on EPIC-Kitchens-100, 4.2% on Ego4D, 5.2% increase in state-change classification accuracy, and 5.6% reduction on the edit distance for long-term anticipation compared to AVID. Comparing rows (7) and (8), we also see that large-scale AVID pre-training enhances the representations learned by RepLAI on EPIC-Kitchens-100 significantly but only marginally on Ego-4D. This is likely due to the significantly large diversity of scenes in Ego4D. Thus, while relying on large-scale audio-visual pre-training (as with AVID) can help avoid overfitting on smaller egocentric datasets, this is less critical when training on larger and more diverse data. Detecting moments of interaction (MoI) helps representation learning. 
4.2 Discussion of results

As can be seen in Tab. 1 and Tab. 2, RepLAI outperforms all other methods across all downstream tasks. Overall, this can be attributed to its ability to focus on interactions, both by detecting when they occur and by learning representations that are sensitive to interactions. A closer analysis of these results reveals several insights that we discuss next.

RepLAI enhances large-scale AVC-driven approaches. Prior work on self-supervised audio-visual learning has shown strong audio-visual representations for action recognition [43, 42]. One question that we seek to answer is: how useful are these representations for egocentric tasks, and what are their limitations? To answer this question, we compare our model trained from scratch, RepLAI (Scratch), with our model using the weights from AVID [43] as initialization for both the visual and audio encoders. We also compare our method to standalone AVID and XDC, i.e., without further self-supervised training. Comparing rows (2), (3) and (8) in Tab. 1 and Tab. 2, it is clear that RepLAI enhances large-scale AVC pre-training by significant margins, leading to absolute improvements of 5% in top-1 verb accuracy on EPIC-Kitchens-100 and 4.2% on Ego4D, a 5.2% increase in state-change classification accuracy, and a 5.6% reduction in the edit distance for long-term anticipation compared to AVID. Comparing rows (7) and (8), we also see that large-scale AVID pre-training enhances the representations learned by RepLAI significantly on EPIC-Kitchens-100 but only marginally on Ego4D. This is likely due to the significantly larger diversity of scenes in Ego4D. Thus, while relying on large-scale audio-visual pre-training (as with AVID) can help avoid overfitting on smaller egocentric datasets, this is less critical when training on larger and more diverse data.

Detecting moments of interaction (MoI) helps representation learning. We hypothesize that to learn good representations for egocentric data of daily activities, self-supervised learning should focus on moments in time when interactions occur. To assess whether our audio-driven MoI detection algorithm helps representation learning, we compare RepLAI with an ablated version, RepLAI w/o MoI, where the model is trained on audio-visual clips extracted at random from the untrimmed videos. As can be seen by comparing rows (6) and (8) in Tab. 1 and Tab. 2, sampling clips around MoI leads to significantly better representations for all egocentric downstream tasks that we study. Moreover, even though RepLAI w/o MoI trains with AStC, it is unable to fully leverage the state-change objective without information about moments of interaction, which leads to worse performance. This suggests that an explicit state-change objective and sampling video clips around moments of interaction (which are likely to be aligned with the actual state changes) together provide information-rich feedback that helps our model understand how an interaction changes the state and how actions transition over time. These results also clearly show that the proposed MoI detection procedure is able to find moments in time that are especially useful for learning representations of daily activities. We emphasize the simplicity and effectiveness of our audio-driven detector, which shows how informative audio can be when searching for moments of interaction. In the future, we believe that learning-based approaches could further enhance MoI detection, and further improve the learned audio-visual representations. We also show several qualitative examples of detected MoI in the supplement.

AVC and AStC are complementary. To assess the impact of both terms in Eq. 7, we evaluate RepLAI trained without the AVC loss and without the AStC loss. Comparing rows (4) and (5) to rows (2) and (3) in Tab. 1 and Tab. 2 shows that each term enhances the representations obtained through large-scale audio-visual pre-training (AVID). Furthermore, comparing the ablated models in rows (4) and (5) to the full model in row (8) shows that these two terms are complementary to each other. This is because the AVC and AStC tasks encourage learning of representations with different characteristics. AVC focuses on learning visual representations that are informative of what kind of sounding objects are present in the video, while AStC forces the model to differentiate between visual representations that occur before and after state-change interactions.
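To illustrate how the two terms interact, the sketch below shows one way such a combined objective could be written. This is PyTorch-style pseudocode of our own; the contrastive form of each term, the use of a feature difference as the "transition" embedding, and all function names are assumptions, since Eq. 7 only specifies that the two losses are combined with equal weight α = 0.5.

```python
import torch
import torch.nn.functional as F

def pairwise_contrast(query, positive, negative, tau=0.07):
    """Pull query toward the positive embedding and away from the negative one."""
    q = F.normalize(query, dim=-1)
    pos = (q * F.normalize(positive, dim=-1)).sum(-1) / tau
    neg = (q * F.normalize(negative, dim=-1)).sum(-1) / tau
    return -torch.log(torch.sigmoid(pos - neg)).mean()

def combined_loss(audio_emb, vis_before, vis_after, alpha=0.5):
    # AVC term: the audio should agree with the co-occurring visual clip and
    # disagree with clips from other samples in the batch (negatives via roll).
    vis_clip = 0.5 * (vis_before + vis_after)
    l_avc = pairwise_contrast(audio_emb, vis_clip, vis_clip.roll(1, dims=0))

    # AStC term: the audio of the state change should match the forward transition
    # (before -> after) and not the backward one (after -> before).
    forward = vis_after - vis_before          # our stand-in for a transition embedding
    backward = vis_before - vis_after
    l_astc = pairwise_contrast(audio_emb, forward, backward)

    return alpha * l_avc + (1 - alpha) * l_astc
```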
RepLAI encourages state-aware representation learning. To study the representations learned by our approach for different states, we generate a t-SNE plot [63] for RepLAI and AVID, as shown in Fig. 4. To obtain a simpler visualization, a small dataset is prepared consisting of all the videos corresponding to a single participant, P01, in EPIC-Kitchens-100, split into clips of 0.5s. We observe that there is a larger spread in the t-SNE plot for RepLAI than for AVID. A larger spread indicates that the representations of the various states are significantly different from each other and form more distant clusters, as is the case for RepLAI. In contrast, when state representations are similar to each other, they cluster together and show less spread, as is the case for AVID. MoI are the key moments of interaction with an object in an environment where its state is changing. AVID has no such information about the key moments and also does not have an explicit state-change objective. Therefore, it is unable to discriminate between the before and after states of an action and carries less effective state-aware information in its representations.

RepLAI representations are more generalizable and robust to the long tail. To assess RepLAI in a scenario with domain shift, we evaluate on unseen participants that were fully excluded from the pre-training of RepLAI. Tab. 3 shows that RepLAI significantly outperforms baselines and ablations, indicating that the representations learned by our model generalize much better. Moreover, the verb and noun classes in EPIC-Kitchens-100 exhibit a long-tailed distribution. When further compared on head and tail classes separately in Tab. 3, we observe that RepLAI outperforms all other methods, highlighting its higher robustness to a long-tailed distribution.

Self-supervised vs. supervised representation learning. Tab. 2 also compares RepLAI to the fully supervised methods introduced in Ego4D [27] (rows S1, S2 and S3). We observe that RepLAI performs competitively with the fully supervised approaches when we have access to larger and more diverse data. With further focus on SSL for untrimmed datasets, SSL methods will be able to match supervised approaches, and our work takes a step in that direction.

5 Conclusion

In this work, we propose an audio-driven self-supervised method for learning representations of egocentric video of daily activities. We show that, in order to learn strong representations for this domain, two important challenges need to be addressed. First, learning should focus on moments of interaction (MoI). Since these moments only occur sporadically in untrimmed video data, we show that MoI detection is an important component of representation learning in untrimmed datasets. Second, learning should focus on the consequences of interactions, i.e., changes in the state of an environment caused by agents interacting with the world. In particular, by seeking to identify visible state changes from the audio alone, we can learn representations that are potentially more aware of the state of the environment and, hence, particularly useful for egocentric downstream tasks.

Acknowledgements We would like to thank DARPA MCS, ONR Young Investigator and DARPA SAIL-ON for the funding.

Broader impact Deep learning models are capable of learning (and sometimes even amplifying) biases existing in datasets. While several steps have been taken in datasets like Ego4D to increase geographical diversity, we would like to encourage careful consideration of ethical implications when deploying these models. While public datasets are essential to make progress on how to represent visual egocentric data, premature deployment of our models is likely to have negative societal impact, as we did not check for the presence or absence of such biases.
1. What is the focus and contribution of the paper on self-supervised learning with egocentric videos? 2. What are the strengths of the proposed approach, particularly in its use of audio signals? 3. What are the weaknesses of the paper regarding its examples and experimental comparisons? 4. Do you have any concerns about the generalization ability of the proposed method for different activities? 5. What are the limitations and potential negative impacts of the method, as discussed in the supplementary material?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper proposes a new way for self-supervised learning with egocentric videos to learn from audible interactions. Specifically, it uses audio signals to identify the state changes and uses these transition states to learn the audio-visual correlations. The proposed method performs much better than a recent method, AVID [42], and the ablation study shows the effectiveness of each model component.

Strengths And Weaknesses
Strengths
The idea of using audio to identify the state changes for learning audio-visual correlation is novel and seems effective.
The paper is well written and easy to read.
The performance of this method is good.
Weaknesses
The example used for illustrating Eq. 1 is not convincing. The given example is that the audio of closing the door should be (1) similar to the visual transition opened door → closed door and (2) dissimilar to the (backwards) transition closed → open. This activity has a strong temporal order. But how to deal with activities like stirring and washing?
The experiments lack a fully-supervised performance comparison.

Questions
My main question is about the generalization ability of the proposed method, i.e., increasing the probability of associating the audio with the visual state change in the forward direction while decreasing the probability in the backward direction. It can intuitively work well for activities like opening/closing the door. However, how can it work for others like stirring and washing?

Limitations
The limitations and potential negative impact are discussed in the supplementary.
NIPS
Title Pre-activation Distributions Expose Backdoor Neurons

Abstract Convolutional neural networks (CNN) can be manipulated to perform specific behaviors when encountering a particular trigger pattern without affecting the performance on normal samples, which is referred to as a backdoor attack. The backdoor attack is usually achieved by injecting a small proportion of poisoned samples into the training set, through which the victim trains a model embedded with the designated backdoor. In this work, we demonstrate that backdoor neurons are exposed by their pre-activation distributions, where populations from benign data and poisoned data show significantly different moments. This property is shown to be attack-invariant and allows us to efficiently locate backdoor neurons. On this basis, we make several proper assumptions on the neuron activation distributions, and propose two backdoor neuron detection strategies based on (1) the differential entropy of the neurons, and (2) the Kullback-Leibler divergence between the benign sample distribution and a poisoned-statistics-based hypothetical distribution. Experimental results show that our proposed defense strategies are both efficient and effective against various backdoor attacks. Source code is available here.

1 Introduction

Convolutional neural networks (CNNs) have achieved tremendous success during the past few years in a wide range of areas. However, training a CNN from scratch involves a large amount of data and expensive computational costs, which is sometimes infeasible. A more practical strategy is to obtain pretrained models or utilize public datasets from a third party, which brings convenience but also raises severe security concerns in the deployment of models. For example, a malicious third party may provide pretrained models embedded with a designated backdoor, such that the model will have a predefined response to some specific pattern, which is also called the trigger. More realistically, the attacker can inject only a small proportion of malicious data into the public dataset to mislead the trained model, which is referred to as a backdoor poisoning attack [24]. For instance, the malicious data can be created by patching a particular pattern into the benign data and changing the label to the desired target. The correlation between the trigger and the specified target label will be learned by the model during training. In this way, the infected model will misclassify the input to the attack target when the pattern is patched, while behaving normally otherwise, as shown in Figure 1. According to previous studies, it was empirically found that an infected model always possesses one or more neurons that are highly correlated with the trigger activation, and pruning these neurons can significantly alleviate the backdoor behaviors while retaining the model performance [41, 27, 7]. Nevertheless, how to precisely find these backdoor neurons in an infected model is still a challenging problem, and has attracted a lot of attention from the community. In this work, we inspect the pre-activation distributions of infected models at each layer. In general, the pre-activations in each neuron follow a unimodal distribution that can be approximated by a Gaussian distribution. We demonstrate that backdoor neurons do not share this property.
Instead, in a typical backdoor neuron, the pre-activation distributions of benign data and poisoned data present significantly different moments, and can be approximated by a mixture of two Gaussian distributions. This property allows us to locate potential backdoor neurons through simple statistical analysis of the pre-activations. Specifically, in case the defender has access to the poisoned dataset, the abnormal pre-activation distributions can be directly observed by forward propagating the data. The mixture of benign and poisoned data, in which a small proportion of poisoned points lies away from the benign mean, leads to a skewed, or even bimodal, distribution. Based on the maximum entropy property of the Gaussian distribution, the skewness will cause a reduction in the differential entropy compared with a single Gaussian distribution with the same variance. Hence, after standardizing the pre-activation distribution to be unit variance, the abnormal distributions should have lower differential entropy. Those neurons could potentially separate benign and poisoned data. As for the other defense setting, in which only an infected model and a set of benign data are provided, we are not able to observe the bimodal distributions since the poisoned data is not available. In this case, we propose to rely on the recorded statistics in the Batch Normalization (BN) layers. Specifically, if the infected model is trained on poisoned data, the population statistics in backdoor neurons recorded in the BN layer will be different from those of only benign data. More importantly, benign neurons will not exhibit such a mismatch in statistics, allowing for differentiation between benign and backdoor neurons. Based on the differential entropy and the statistics discrepancy, we are able to locate and prune potential backdoor neurons to recover the model flexibly under two defense settings.

In summary, our contributions include:
1. We take a deep look into the infected model and characterize the pre-activation distributions on the poisoned dataset. We find that (1) the standardized entropy of backdoor neurons can be significantly lower than that of benign neurons, and (2) the BN statistics in the infected model are mismatched with the benign sample statistics.
2. We propose to prune potential backdoor neurons based on either the differential entropy of the pre-activation distribution or the statistics discrepancy, depending on the defense setting. Under certain assumptions, we claim that both proposed indices can successfully separate benign neurons and backdoor neurons with an appropriate threshold.
3. We conduct extensive experiments to verify our assumptions and evaluate our proposed methods, and achieve state-of-the-art results under two different defense settings.

2 Related work

In this section, we briefly discuss recent works on backdoor attack and defense, and a specific branch of related studies on distributional properties of poisoned features.

2.1 Backdoor attacks

The most famous backdoor attack is introduced in [14], where the adversary injects a small set of targeted label-flipped data with a specific trigger into the training set, leading to a misclassification when predicting samples with such a trigger. To make the trigger pattern even more invisible to human beings, a blending strategy is used in [6] to generate poisoned images, while the form of natural reflection is utilized in the trigger design of [30].
The input image is perturbed in [39] to keep its content consistent with the target label, so that the model better memorizes the trigger pattern while the trigger remains imperceptible to human beings. Moreover, multi-target and multi-trigger attacks are proposed in [42, 32], making the attack more flexible and covert. Recently, sample-specific trigger design strategies [26] have been proposed, making the defense against such backdoor attacks much harder. Generally, the above attacks can be referred to as poisoning-based backdoor attacks. Under some settings, the attackers can control the training process to inject the backdoor without modifying the training data, which is referred to as a non-poisoning-based backdoor attack. This is achieved in [29, 35, 5] through targeted modification of the neurons’ weights in a network. Such attacks are not evaluated in our work due to their strong attack setting.

2.2 Backdoor defenses

Training stage defense. Under this setting, the defender has access to the training process, so that they can detect and filter the poisoned data or add some restrictions to suppress the backdoor effect during training. Since the poisoned data can be regarded as outliers, different strategies are applied in [10, 12, 1, 38, 15], such as robust statistics in feature space and input perturbation techniques, to filter them out of the training data. Other methods aim at suppressing the backdoor effect during the training phase with strong data augmentation methods [2, 25, 34, 31], including CutMix [43], Flip, ShrinkPad [25], CutOut and MaxUp [13], or with differential privacy constraints [11, 18].

Model post-processing defense. Under some specific scenarios, the defenders are only given a suspicious DNN model without access to the training process or the full training set. Therefore, they must eliminate the backdoor threat with limited resources, such as a small set of clean data. A straightforward way is to reconstruct the trigger, and then mitigate the model with the knowledge of the reversed trigger [40]. Some try to find the relationship between backdoor behaviors and the neurons in a DNN model. Different levels of stimulation to a neuron are introduced in [28] to see how the output activation changes if the model is attacked. Simple neuron pruning strategies are applied in [7] to repair the model, while redundant neuron pruning and fine-tuning are combined in [27] to erase the backdoor effect. Adversarial perturbations are added to the neurons in [41], which precisely prunes the easily perturbed neurons with a smaller clean-data requirement and better performance. There are other fine-tuning-based methods built on knowledge distillation [23, 17]. The mode connectivity repair technique [44] has also been explored to mitigate the backdoored model. Recently, K-Arm optimization [37] has been applied to backdoor detection, helping curtail the threat of backdoor attacks.

2.3 Distributional properties in poisoned datasets

One branch of research on backdoor learning focuses specifically on using statistical differences in the distributions of benign and poisoned samples to filter out malicious data. Activation clustering [4] uses K-means to separate benign and poisoned samples in feature space. Spectral signatures [38] reveal that feature vectors tend to leave strong signals in the top eigenvectors of their covariance matrix.
SPECTRE (Spectral Poison Excision Through Robust Estimation) [15] utilizes tools from robust statistics to amplify the spectral signature via outlier-robust data whitening. Our work differs from the above works in the following three aspects: 1) the above works focus on the penultimate-layer feature representation, while our work inspects each layer; 2) the above works take the representation space of all neurons as a whole, while our finding indicates that the distributional difference only exists in some neurons; 3) the above works aim at filtering out poisoned samples for retraining, while our methods directly repair the trained neural network by pruning potential backdoor neurons.

3 Preliminaries

3.1 Notations. Consider a multi-class classification problem with $C$ classes. Let the original training set $D = \{(x_i, y_i)\}_{i=1}^{N}$ contain $N$ i.i.d. sample images $x_i \in \mathbb{R}^{d_c \times d_h \times d_w}$ and the corresponding labels $y_i \in \{1, 2, \ldots, C\}$ drawn from $\mathcal{X} \times \mathcal{Y}$. Here, we denote by $d_c$, $d_h$ and $d_w$ the number of channels, the height and the width of images, respectively. In particular, we have $d_c = 3$ for RGB images. As in Section 2.1, the backdoor poisoning attack involves changes to the input images and the corresponding labels on a subset $D_p \subseteq D$. In this work, we define the ratio $\rho = |D_p| / |D|$ as the poisoning rate. We denote the poisoning function applied to the input images as $\delta(x)$. A dataset $D$ is said to be $\rho$-poisoned if its poisoning rate is $\rho$. Consider a neural network $F(x; \theta)$ with $L$ layers. Denote $F^{(l)} = f^{(l)} \circ \phi \circ f^{(l-1)} \circ \phi \circ \cdots \circ \phi \circ f^{(1)}$ for $1 \le l \le L$, where $f^{(l)}$ is a linear function (e.g., convolution) in the $l$-th layer, and $\phi$ is a nonlinear activation function applied element-wise. In this work, we may denote $F(x; \theta)$ as $F(x)$ or $F$ for simplicity. We denote by $W^{(l)} \in \mathbb{R}^{d_{c'} \times d_c \times d_h \times d_w}$ the weight tensor of a convolutional layer. To do pruning, we apply a mask $M^{(l)} \in \{0, 1\}^{d_{c'} \times d_c \times d_h \times d_w}$, starting with $M^{(l)} = \mathbf{1}_{d_{c'} \times d_c \times d_h \times d_w}$ in each layer. Pruning neurons of the network refers to selecting a collection of indices $I = \{(l, k)_i\}_{i=1}^{|I|}$ and setting $M^{(l)}_k = \mathbf{0}_{d_c \times d_h \times d_w}$ if $(l, k) \in I$. The pruned network $F_{-I}$ has the same architecture as $F$ but with all the weight tensors of convolutional layers set to $W^{(l)} \odot M^{(l)}$, where $\odot$ denotes the Hadamard product.

3.2 Differential entropy. To measure the uncertainty of a discrete random variable $Z$, the entropy [36, 8] is defined as $H(Z) = -\sum_{z \in \mathcal{Z}} p(z) \log p(z)$. As an extension of entropy, the differential entropy is defined for a continuous random variable. More concretely, if $Z$ is a continuous random variable, it is defined as
$$h(Z) = -\int_{\mathcal{Z}} p(z) \log p(z)\, dz. \quad (1)$$
An important fact about the differential entropy is that, among all real-valued distributions supported on $(-\infty, \infty)$ with a specified finite variance, the Gaussian distribution maximizes the differential entropy [8]. In this work, the differential entropy (1) will be utilized to identify distributions that are far from Gaussian.
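As an illustration of how the differential entropy in (1) can be estimated from a finite sample of pre-activations, here is a minimal histogram plug-in estimator. This is our own sketch; the paper does not prescribe a particular estimator.

```python
import numpy as np

def differential_entropy(samples: np.ndarray, bins: int = 64) -> float:
    """Histogram plug-in estimate of h(Z) = -∫ p(z) log p(z) dz for 1-D samples."""
    hist, edges = np.histogram(samples, bins=bins, density=True)
    widths = np.diff(edges)
    p, w = hist[hist > 0], widths[hist > 0]
    return float(-np.sum(p * np.log(p) * w))

# Example: for a standard Gaussian, h(Z) = 0.5 * log(2 * pi * e) ≈ 1.419
z = np.random.randn(100_000)
print(differential_entropy(z))   # close to 1.42, up to estimation error
```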
3.3 Backdoor neurons. It was found that there exist one or more neurons that contribute the most to the backdoor behaviors in an infected model [41, 27]. If some or all of these neurons are pruned, the attack success rate is reduced greatly [41]. In this work, to better quantify the importance of neurons to backdoor behaviors, we introduce the sensitivity of neurons to the backdoor. We first define the backdoor loss for a specific poisoning function.

Definition 1. Given a model $F$ and a poisoning function $\delta$, the backdoor loss on a dataset $D$ is defined as $L_{bd}(F) = \mathbb{E}_{(x,y)\sim D}\left[D_{CE}(y, F(\delta(x)))\right]$, where $D_{CE}$ denotes the cross-entropy loss.

Definition 2. Given a model $F$, the index of a neuron $(l, k)$ and the backdoor loss $L_{bd}$, the sensitivity of that neuron to the backdoor is defined as
$$\alpha(F, l, k) = L_{bd}(F) - L_{bd}(F_{-\{(l,k)\}}), \quad (2)$$
where $F_{-\{(l,k)\}}$ is the network after pruning the $k$-th neuron of the $l$-th layer. The backdoor loss is high when the model is infected, and is reduced when the backdoor effect is alleviated. Using the quantity defined in (2), we are now able to find the neurons that are most correlated with the backdoor behaviors.

Definition 3. Given a model $F$ and a threshold $\tau > 0$, the set of backdoor neurons is defined as
$$B_{F,\tau} = \{(l, k) : \alpha(F, l, k) > \tau\}. \quad (3)$$

3.4 Pre-activation distribution. During the forward propagation of an input $x$, we denote by $x^{(l)} = F^{(l)}(x) \in \mathbb{R}^{d^{(l)}_c \times d^{(l)}_h \times d^{(l)}_w}$ the output of the $l$-th layer. For the $k$-th neuron of the $l$-th layer, the pre-activation $\phi^{(l)}_k = \phi(x^{(l)}_k)$ is defined as the maximum value of the $k$-th slice matrix of dimension $d^{(l)}_h \times d^{(l)}_w$ in $x^{(l)}$. The reason we choose pre-activations instead of activations is that the distribution of activations after the non-linear function might be distorted; for example, ReLU (rectified linear unit) cuts out all negative values. It is a common assumption that, for every neuron, the pre-activations before the non-linear function follow a Gaussian distribution if the network is randomly initialized and the number of neurons is large enough [20]. This assumption is based on the central limit theorem under weak dependence [3]. In a trained network, although this assumption may not strictly hold, the pre-activation of every neuron can still be regarded as approximately following a Gaussian distribution. However, in this work, for the first time, we observe a bimodal pre-activation distribution in backdoor neurons, formed by the benign data and poisoned data. This phenomenon is shown in Figure 2, where a typical backdoor neuron is compared with benign neurons. It can be seen that, after the model is infected, the pre-activation distributions of benign neurons hardly change when the data is poisoned, while the pre-activation distributions of backdoor neurons become significantly different.
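The bimodal phenomenon described above is easy to reproduce synthetically. The sketch below is our own illustration (not the paper's code): it draws pre-activations from a benign Gaussian and from a shifted "poisoned" Gaussian with poisoning rate ρ, and shows that the standardized mixture has lower differential entropy than a standard normal, as expected from the maximum entropy property.

```python
import numpy as np
from scipy.stats import differential_entropy

rng = np.random.default_rng(0)
rho, n = 0.10, 50_000                          # poisoning rate and sample size

# Benign pre-activations ~ N(0, 1); poisoned ones are shifted away from the benign mean.
benign = rng.normal(0.0, 1.0, size=int((1 - rho) * n))
poisoned = rng.normal(4.0, 1.0, size=int(rho * n))   # the shift of 4.0 is an arbitrary example
mixture = np.concatenate([benign, poisoned])

# Standardize to zero mean / unit variance before comparing entropies.
standardized = (mixture - mixture.mean()) / mixture.std()

h_mixture = differential_entropy(standardized)
h_gauss = 0.5 * np.log(2 * np.pi * np.e)             # entropy of N(0, 1) ≈ 1.419
print(h_mixture, h_gauss)                            # the mixture's entropy is smaller
```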
4 Methodology

4.1 Basic assumptions. We now introduce two preliminary assumptions on the pre-activation distribution of an infected neural network.

Assumption 1. Given an infected model $F$, we have $|B_{F,\tau}| > 0$ for some threshold $\tau > 0$.

This is a primary assumption that guarantees that proper pruning of neurons can correct the network’s predictions on poisoned samples to some extent. Hence, it is a prerequisite of good performance for all pruning-based defense methods. The next assumption provides a precondition for our methods.

Assumption 2. Given a model $F$ infected by a poisoning function $\delta$ with a $\rho$-poisoned dataset $D$, the pre-activations of samples from $D$ on each single neuron of $F$ follow a Gaussian mixture distribution, that is,
$$\phi^{(l)}_k \sim (1-\rho)\,\mathcal{N}(\mu^{(l)}_k, \sigma^{(l)2}_k) + \rho\,\mathcal{N}(\hat{\mu}^{(l)}_k, \hat{\sigma}^{(l)2}_k),$$
with
$$|\mu^{(l)}_k - \hat{\mu}^{(l)}_k| \begin{cases} < \epsilon, & \text{if } (l,k) \notin B_{F,\tau}, \\ \gg \epsilon, & \text{if } (l,k) \in B_{F,\tau}, \end{cases} \qquad \text{and} \qquad |\sigma^{(l)2}_k - \hat{\sigma}^{(l)2}_k| < \epsilon, \ \forall k \notin B_{F,\tau},$$
where $\epsilon > 0$ is a small enough value, and $\mu^{(l)}_k, \sigma^{(l)2}_k$ and $\hat{\mu}^{(l)}_k, \hat{\sigma}^{(l)2}_k$ are the mean and variance of $\{\phi(F^{(l)}(x)_k) : x \sim \mathcal{X}\}$ and $\{\phi(F^{(l)}(\delta(x))_k) : x \sim \mathcal{X}\}$, respectively.

This assumes that the means of the pre-activation distributions of benign and poisoned samples differ significantly only in backdoor neurons. This assumption is based on empirical observation, and our methods work only when it holds.

4.2 Entropy-based pruning (EP). Based on the given assumptions, standardizing the pre-activation distributions (by subtracting the mean and dividing by the standard deviation) maximizes the differential entropy in benign neurons, which approximately follow a standard Gaussian distribution $\mathcal{N}(0, 1)$. However, in backdoor neurons, the mixture distributions resulting from differences in the moments of the Gaussian components cannot be Gaussian, leading to a smaller differential entropy than that of a standardized Gaussian distribution.

Corollary 1. Let $\dot{\phi}^{(l)}_k = (\phi^{(l)}_k - \mu^{(l)}_k)/\sigma^{(l)}_k$ be the standardized pre-activations; then the following inequality is satisfied:
$$h(\dot{\phi}^{(l)}_k) < h(\dot{\phi}^{(l)}_{k'}) \le h(Z), \quad \forall k \in B_{F,\tau},\ k' \notin B_{F,\tau},$$
where $Z \sim \mathcal{N}(0, 1)$ is the standard Gaussian distribution. This inequality guarantees that, with an appropriately chosen threshold, the backdoor neurons can be well separated from the benign neurons.

4.3 BN statistics-based pruning (BNP). A BN layer uses the statistics of a mini-batch to normalize the data in each layer for each neuron. It is known to smooth the optimization landscape, and has gradually become a common component of neural networks [19]. During inference, BN uses fixed statistics obtained by averaging the sample statistics of mini-batches during training, including the mean and the variance. If the model is trained on a poisoned dataset, BN will record the mean and the variance of the poison-benign mixed data. Note that the mean and variance here are not defined on the pre-activations $\phi^{(l)}_k$, but on $x^{(l)}_k$. Based on the above discussion, we know that the poisoned pre-activations in backdoor neurons follow a different distribution from the benign samples. The statistics recorded during training are actually those of the mixture distribution. Hence, we can expect the BN statistics of a trained backdoor neural network to be biased. If we are able to access a small set of benign data, we can calculate an approximation of the true statistics on benign data. Then we calculate the Kullback-Leibler (KL) divergence [9] between the sample distribution and the BN-induced distribution as a measurement of the bias. By assuming both distributions are Gaussian, we have a closed-form solution:
$$D_{KL}(\mathcal{N}^{(l)}_{\mathrm{sample}}, \mathcal{N}^{(l)}_{\mathrm{BN}})_k = \log\frac{\tilde{\sigma}^{(l)}_k}{\sigma^{(l)}_k} + \frac{\sigma^{(l)2}_k + (\mu^{(l)}_k - \tilde{\mu}^{(l)}_k)^2}{2\tilde{\sigma}^{(l)2}_k} - \frac{1}{2}, \quad (4)$$
where $\mathcal{N}^{(l)}_{\mathrm{sample}} = \mathcal{N}(\mu^{(l)}_k, \sigma^{(l)2}_k)$, $\mathcal{N}^{(l)}_{\mathrm{BN}} = \mathcal{N}(\tilde{\mu}^{(l)}_k, \tilde{\sigma}^{(l)2}_k)$, $\mu^{(l)}_k$ and $\sigma^{(l)2}_k$ are the statistics obtained from benign samples, and $\tilde{\mu}^{(l)}_k$ and $\tilde{\sigma}^{(l)2}_k$ are the BN statistics. Note that the BN statistics are the mixture statistics of the benign and poisoned distributions. Thus, we have the following corollary:

Corollary 2. According to Assumption 2, when $\epsilon \to 0$, the following inequality is satisfied:
$$D_{KL}(\mathcal{N}^{(l)}_{\mathrm{sample}}, \mathcal{N}^{(l)}_{\mathrm{BN}})_k > D_{KL}(\mathcal{N}^{(l)}_{\mathrm{sample}}, \mathcal{N}^{(l)}_{\mathrm{BN}})_{k'} = 0, \quad \forall k \in B_{F,\tau},\ k' \notin B_{F,\tau}.$$
The corollary indicates that backdoor neurons should have larger KL divergences than benign neurons, as illustrated in Figure 2(c).
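A minimal PyTorch sketch of the per-channel BNP score in Eq. (4) is shown below. This is our own illustration; variable names are ours, and we assume the benign features fed to the BN layer are available as a single tensor.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def bnp_scores(bn: nn.BatchNorm2d, benign_features: torch.Tensor) -> torch.Tensor:
    """KL divergence between benign-sample statistics and the BN running statistics,
    one score per channel, using the closed-form Gaussian KL of Eq. (4).
    benign_features: (N, C, H, W) inputs to this BN layer from a small clean set."""
    mu = benign_features.mean(dim=(0, 2, 3))          # benign sample mean per channel
    var = benign_features.var(dim=(0, 2, 3))          # benign sample variance per channel
    mu_bn, var_bn = bn.running_mean, bn.running_var   # statistics recorded during training
    eps = 1e-8
    kl = (torch.log(torch.sqrt(var_bn + eps) / torch.sqrt(var + eps))
          + (var + (mu - mu_bn) ** 2) / (2 * (var_bn + eps))
          - 0.5)
    return kl                                         # backdoor channels should score high
```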
4.4 Overview of the two pruning strategies. In Section 3.4, we reveal the discrepancy between the pre-activation distributions in backdoor neurons and those in benign neurons. This enables fast detection of the neurons that are most related to the backdoor behaviours. The index we use to detect abnormal neurons depends on what kind of data we are able to access.

Mixture training data. In this case, the victim is given a poisoned training dataset with a specified poisoning rate $\rho$. Our goal is to obtain a benign model based on the poisoned dataset. To achieve this, we first train an infected model $F$ on the poisoned dataset. The resulting model should have a certain number of backdoor neurons, based on empirical observation and assumption. Since $\rho > 0$ for the dataset, all the neurons follow Gaussian mixture distributions, and we have $h(\dot{x}^{(l)}_k) < h(\dot{x}^{(l)}_{k'})$ for all $k \in B_{F,\tau}$, $k' \notin B_{F,\tau}$. This implies that with an appropriate threshold $\tau^*_h$, we can perfectly separate the benign neurons and backdoor neurons, which can be formulated as
$$\exists\, \tau^*_h:\quad h(\dot{x}^{(l)}_k) < \tau^*_h\ \ \forall k \in B_{F,\tau}, \qquad h(\dot{x}^{(l)}_{k'}) > \tau^*_h\ \ \forall k' \notin B_{F,\tau}.$$
Setting the threshold $\tau^*_h$ is crucial to the solution, and it is a trade-off between the accuracy on benign samples and that on backdoored samples. Note that $|B^{(l)}_{F,\tau}| \ll d^{(l)}_c$. We can therefore treat the low-entropy neurons as outliers in each layer, and set different thresholds for different layers. Specifically, let $h^{(l)} = [h(x^{(l)}_1), h(x^{(l)}_2), \ldots, h(x^{(l)}_{d^{(l)}_c})]^T \in \mathbb{R}^{d^{(l)}_c}$ be the vector of differential entropies of the $l$-th layer calculated from the poisoned dataset. Then we set $\tau^{(l)}_h = \bar{h}^{(l)} - u_h \cdot s^{(l)}_h$, where $\bar{h}^{(l)} = \frac{1}{d^{(l)}_c}\sum_{k=1}^{d^{(l)}_c} h^{(l)}_k$ and $s^{(l)}_h = \sqrt{\frac{1}{d^{(l)}_c}\sum_{k=1}^{d^{(l)}_c}(h^{(l)}_k - \bar{h}^{(l)})^2}$ are the mean and standard deviation of $h^{(l)}$, and $u_h$ is a hyperparameter controlling how low the threshold is. We then obtain a set of indices of potential backdoor neurons $I_h = \{(l, k) : h^{(l)}_k < \tau^{(l)}_h\}$. Finally, we prune the infected model $F$ using $I_h$, which results in a final model $F_{-I_h}$.

Benign training data. This is the case in which the victim is given a trained poisoned model $F$ together with a small set of benign data. Our goal is to utilize the benign data to clean up the poisoned model and eliminate the backdoor threat. Similar to the pruning process based on differential entropy, we first construct a vector of KL divergences of all neurons for each layer, $K^{(l)} = [K^{(l)}_1, K^{(l)}_2, \ldots, K^{(l)}_{d^{(l)}_c}]^T \in \mathbb{R}^{d^{(l)}_c}$, according to equation (4). We set $\tau^{(l)}_K = \bar{K}^{(l)} + u_K \cdot s^{(l)}_K$, where $\bar{K}^{(l)} = \frac{1}{d^{(l)}_c}\sum_{k=1}^{d^{(l)}_c} K^{(l)}_k$ and $s^{(l)}_K = \sqrt{\frac{1}{d^{(l)}_c}\sum_{k=1}^{d^{(l)}_c}(K^{(l)}_k - \bar{K}^{(l)})^2}$ are the mean and standard deviation of $K^{(l)}$, and $u_K$ is a hyperparameter. The set of selected neurons is $I_K = \{(l, k) : K^{(l)}_k > \tau^{(l)}_K\}$, and the pruned model can be represented as $F_{-I_K}$.
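The per-layer outlier thresholding shared by EP and BNP can be sketched as follows. This is a minimal NumPy sketch of our own; it only illustrates the mean-minus/plus-u-standard-deviations rule described above and the fact that pruning amounts to zeroing the selected channels.

```python
import numpy as np

def select_outlier_neurons(scores_per_layer, u=3.0, low_is_suspicious=True):
    """scores_per_layer: list of 1-D arrays, one score per neuron in each layer
    (differential entropies for EP, KL divergences for BNP).
    Returns a list of (layer, neuron) indices to prune."""
    to_prune = []
    for l, scores in enumerate(scores_per_layer):
        mean, std = scores.mean(), scores.std()
        if low_is_suspicious:            # EP: prune neurons with h < mean - u * std
            mask = scores < mean - u * std
        else:                            # BNP: prune neurons with KL > mean + u * std
            mask = scores > mean + u * std
        to_prune.extend((l, int(k)) for k in np.where(mask)[0])
    return to_prune

# Pruning then amounts to zeroing the selected channels' weights,
# e.g. conv.weight.data[k] = 0 for every selected neuron (l, k) of layer l.
```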
5 Experiments

5.1 Implementation details

Datasets. The experiments in this section are conducted on two influential benchmarks, CIFAR-10 [21] and Tiny-ImageNet [22]. We use 90% of the dataset for training; the rest is used for validating or recovering the poisoned model.

Models. We use ResNet-18 [16] as the baseline model to evaluate our proposed method and compare it with other methods. We train the network for 150 epochs on CIFAR-10 and 100 epochs on Tiny-ImageNet with the SGD optimizer. The initial learning rate is set to 0.1 and the momentum is set to 0.9. We adopt a cosine learning rate scheduler to adjust the learning rate. The batch size is set to 128 by default.

Attacks. Our experiments are based on both classical and the most advanced attack strategies, including BadNets [14], Clean Label Attack (CLA) [39], Reflection Backdoor (Refool) [30], Warping-based poisoned Networks (WaNet) [33], Blended backdoor attack (Blended) [6], Input-aware backdoor attack (IAB) [32] and Sample Specific Backdoor Attack (SSBA) [26]. For BadNets, we test both the All-to-All (A2A) attack and the All-to-One (A2O) attack, i.e., the attack target labels are set to $y_t = (y + 1) \bmod C$ or to one particular label $y_t = C_t$, respectively. The target for A2O attacks of all attack strategies is set to class 0. The triggers for BadNets and CLA are set to randomly generated patterns of size 3×3 for CIFAR-10 and 5×5 for Tiny-ImageNet. The poisoning rate is set to 10% by default. Note that, due to the image size constraint, SSBA is only performed on Tiny-ImageNet.

Defenses. We conduct experiments under two defense settings, one of which allows the defender to access the poisoned training set, while the other only provides a small clean dataset. In both settings, the defense goal is to obtain a clean model without backdoor behaviors. We compare our approaches with $\ell_\infty$ pruning [7], fine-tuning (FT), fine-pruning (FP) [27], neural attention distillation (NAD) [23] and adversarial neuron pruning (ANP) [41]. The number of benign samples the defender is allowed to access is set to 500 (1%) for CIFAR-10 and 5000 (5%) for Tiny-ImageNet by default. We set the threshold hyperparameter $u_h$/$u_K$ to 3 on CIFAR-10 and 4 on Tiny-ImageNet for all tested attacks by default.

Evaluation metrics. In this work, we use the clean accuracy (ACC) and the attack success rate (ASR) to evaluate the effectiveness of different methods. The ACC for a given model $F$ is defined as
$$\mathrm{ACC}(F, D_{test}) = \sum_{(x,y)\in D_{test}} \mathbb{I}\{\arg\max(F(x)) = y\},$$
where $\mathbb{I}$ is the indicator function. The ASR is defined as
$$\mathrm{ASR}(F, D_{test}) = \sum_{(x,y)\in D_{test},\, y \neq y_t} \mathbb{I}\{\arg\max(F(\delta(x))) = y_t\},$$
where $y_t$ is the attack target label. The ACC measures the model performance on benign samples, while the ASR reflects the degree to which backdoor behavior is retained in the model. Given an infected model, our goal is to reduce the ASR while keeping the ACC from dropping too much.
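For completeness, the two evaluation metrics can be computed as in the following PyTorch sketch. This is our own illustration (names are ours); we report both quantities as fractions of the relevant samples, whereas the sums above count raw hits.

```python
import torch

@torch.no_grad()
def acc_and_asr(model, loader, poison_fn, target_label):
    """Clean accuracy and attack success rate over a test loader.
    poison_fn applies the trigger δ(x); target_label is y_t."""
    model.eval()
    correct, total, hit, attacked = 0, 0, 0, 0
    for x, y in loader:
        pred = model(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
        keep = y != target_label                      # ASR is computed on non-target samples
        if keep.any():
            pred_p = model(poison_fn(x[keep])).argmax(dim=1)
            hit += (pred_p == target_label).sum().item()
            attacked += keep.sum().item()
    return correct / total, hit / max(attacked, 1)
```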
5.2 Experimental results

CIFAR-10. We show the results on CIFAR-10 in Table 1. The recently proposed NAD and ANP perform significantly better than other defense methods, reducing the ASR to a very low level with a slight drop in ACC. However, they also show a significant drop (3-4%) in ACC when defending against CLA, which is the most robust backdoor attack in our experiments, and ANP even fails when defending against BadNets (A2A). Nevertheless, both of our methods successfully eliminate the backdoor (ASR < 1%) with negligible loss in ACC. We even observe a small rise in ACC when defending against BadNets with EP. This phenomenon demonstrates that backdoor neurons may hurt the ACC in some way, so the ACC rises when the backdoor neurons are precisely pruned. Overall, our methods achieve the best defense results.

Tiny-ImageNet. Tiny-ImageNet is a larger-scale dataset with higher-resolution images, and it is harder to defend against the attacks performed on it. Note that the A2A attack is absent, since we cannot successfully perform this attack due to the large number of classes (200). Our experimental results show that all the defense methods suffer from performance degradation compared with the results on CIFAR-10, and they fail to defend against WaNet, with a large drop in ACC while the ASR remains essentially unchanged, especially for the ANP and $\ell_\infty$ defenses. This phenomenon shows that the principles used by ANP and $\ell_\infty$ for finding backdoor neurons do not work in this case. Nevertheless, our methods completely remove the backdoor and the ACC is barely affected, which indicates that our methods can precisely locate the backdoor neurons even on such a large-scale dataset.

5.3 Ablation study

To be fair, we compare BNP with other re-training based methods using 500 benign samples in Section 5.2. However, BNP does not require re-training the model, and the samples are only used for detecting the distribution discrepancy. As the statistical differences may be detected with far fewer samples, we now study how the number of samples affects the effectiveness of BNP. We train BadNets, CLA, Refool and Blended on CIFAR-10 with ρ = 10%, and use 10 to 500 benign samples to recover the model using BNP. We record the changes of ACC and ASR with respect to the number of benign samples. The results are shown in Section 5.3. The influence of the number of samples on our methods comes from the randomness in estimating moments. As the number of samples grows, the randomness is reduced and BNP has more stable performance, but the average performance is not improved, except for Refool. Compared with other attacks, Refool clearly needs more samples to reduce the ASR. A possible reason is that the mixture distribution in Refool has closer moments and is harder to distinguish. Besides, we surprisingly find that BNP can recover BadNets, CLA and Blended using only 10 benign samples. We also conduct experiments to show the high correlation between the backdoor neurons and our proposed evaluation metrics, and the results are shown in Appendix D.

6 Discussion

The proposed methods are superior to other existing defense methods in the following three aspects.

Better performance. As demonstrated in Section 5, both proposed methods achieve state-of-the-art results. Moreover, according to the ablation study, the proposed BNP can successfully defend against most of the attacks with as few as 10 benign samples, which demonstrates the remarkable effectiveness of our proposed methods.

Higher efficiency. The proposed methods are highly efficient. We record the running time of several defense methods on 500 CIFAR-10 images with ResNet-18, and show the results in Table 3. It can be seen that both proposed methods require less time than the baseline defense methods. Since both methods only require scanning each neuron once, the computational complexity scales linearly with the number of neurons in the network. Therefore, the efficiency of our methods is guaranteed.

More robust to hyperparameter choice. One of the most common problems in backdoor defense is the choice of hyperparameters. Under realistic settings, defenders can only perform defenses without any prior knowledge about the poisoned data, including the poisoning rate and examples of poisoned samples. Defenders therefore have to tune hyperparameters carefully, or the ACC and ASR may change abruptly even under small fluctuations of those hyperparameters. In comparison, both of the proposed pruning strategies only require one universal hyperparameter $u$. Moreover, they show reliable consistency across different attacks on the same dataset and only vary across datasets, which is inevitable. Besides, a wide range of parameter values works well, so that the ACC remains high while the ASR is kept very small, as shown in Section 5.3.
7 Conclusion

In this work, we inspect the characteristics of an infected model and find that backdoor-sensitive neurons are distinguishable by their pre-activations on the poisoned dataset. Specifically, in backdoor neurons, pre-activations from benign data and poisoned data form distributions with markedly different moments. This property makes it possible for defenders to efficiently locate potential backdoor neurons based on the distributional properties of pre-activations. When direct access to the poisoned dataset is available, we propose to measure the mixture property of pre-activations via differential entropy to detect potential backdoor neurons. In the other case, where defenders only have access to a benign dataset, we propose to check the abnormality of the pre-activation distributions based on the inconsistency between the recorded BN statistics and the sample statistics on the given benign dataset. We then prune the potential backdoor neurons to recover the model. Experiments show that the proposed defense strategies can efficiently locate the backdoor neurons, and greatly reduce the backdoor threat with negligible loss of clean accuracy. Our approaches achieve superior results compared with all other defense methods under various attacks on the tested datasets. The results shed light on the field of backdoor defense, and can serve as guidance for designing more robust backdoor attacks.

8 Acknowledgement

This work is supported by the National Natural Science Foundation of China (No. 62101351), the GuangDong Basic and Applied Basic Research Foundation (No. 2020A1515110376), the Shenzhen Outstanding Scientific and Technological Innovation Talents PhD Startup Project (No. RCBS20210609104447108), the Key-Area Research and Development Program of Guangdong Province (2020B0101350001), and the Chinese University of Hong Kong (Shenzhen).
1. What is the focus and contribution of the paper regarding backdoor robustness? 2. What are the strengths of the proposed approach, particularly in terms of its intuitive correctness and effectiveness? 3. What are the weaknesses of the paper, especially concerning generalizability and hyperparameter tuning? 4. Do you have any concerns about the experimental setup or results, such as the choice of hyperparameters or the visualization of the blend attack? 5. How do you assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
Based on the observation that clean and backdoor training samples have different feature statistics, the paper proposes to detect and remove the neurons that are most affected by backdoor training samples. Empirical results show the effectiveness of the proposed method under certain scenarios.

Strengths And Weaknesses
Strength:
The proposed method is intuitively correct: clean samples and backdoor samples should have somewhat different feature statistics. This is because the model overfits a superficial correlation on backdoor samples while it learns semantic information on clean samples. I don't doubt the proposed method is effective in improving backdoor robustness.
The paper is well-written and easy to follow. All notations and terms are defined clearly without ambiguity.
The proposed method can defend against some backdoor attacks when few clean samples are available.
Weakness:
My main concern is whether the good performance reported in this paper generalizes to different settings.
1.1 In practice, we don't know what type of backdoor attacks are used. So, it is very important for the defense method to be generally robust against multiple different attacks using the same hyper-parameters. However, in this paper, the authors only mention that the only hyperparameter u_k "is usually set to 3". The use of the word "usually" makes me skeptical about the results. The readers need to know what value of u_k is used in all cases (on all datasets and against all attacks). The ideal case is a value that works well for all attacks, even under the mixture attack setting (i.e., when multiple different attacks simultaneously exist in the training set). But that would be challenging, in my view, since the distributions of backdoor samples from different attacks can be very different. So what are the "unusual" cases in this paper? If different hyper-parameter values are used against different attacks in the proposed method, then it is an unfair comparison with the baselines. As far as I know, NAD can achieve better performance if the hyper-parameters are tuned for different attacks. Again, I don't doubt for a second that the proposed method can help defend against backdoor attacks. I just question whether it can achieve such good performance across all reported attacks using the same hyper-parameter setting. It won't harm the contribution of this paper if the performance drops a little when the same u_k value is used for different attacks.
1.2 Please consider more different attack settings. For example, what happens when a different poisoning ratio (e.g., 5%, 20%, 30%, ...) is used? Does the proposed method need a different u_k value under different poisoning ratios to achieve good performance?
1.3 Please provide ablation study results on different u_k values.
In Figure 5 (c) in the appendix, the visualization of the Blend attack looks a bit weird to me. I can't see any effect of the blend image (which is usually a Hello Kitty image or strong random noise). Please see Figure 5 in [1] or the original Blend attack paper for example. May I ask what blending ratio and blend image/pattern you are using?
I will improve my score if the concerns are addressed.

Questions
The title in the submitted PDF is different from that in OpenReview.

Limitations
Please see above.
NIPS
Title Pre-activation Distributions Expose Backdoor Neurons Abstract Convolutional neural networks (CNN) can be manipulated to perform specific behaviors when encountering a particular trigger pattern without affecting the performance on normal samples, which is referred to as backdoor attack. The backdoor attack is usually achieved by injecting a small proportion of poisoned samples into the training set, through which the victim trains a model embedded with the designated backdoor. In this work, we demonstrate that backdoor neurons are exposed by their pre-activation distributions, where populations from benign data and poisoned data show significantly different moments. This property is shown to be attack-invariant and allows us to efficiently locate backdoor neurons. On this basis, we make several proper assumptions on the neuron activation distributions, and propose two backdoor neuron detection strategies based on (1) the differential entropy of the neurons, and (2) the Kullback-Leibler divergence between the benign sample distribution and a poisoned statistics based hypothetical distribution. Experimental results show that our proposed defense strategies are both efficient and effective against various backdoor attacks. Source code is available here. 1 Introduction Convolutional neural networks (CNNs) have achieved tremendous success during the past few years in a wide range of areas. However, training a CNN from scratch involves a large amount of data and expensive computational costs, which is sometimes infeasible. A more practical strategy is to obtain pretrained models or utilize public datasets from a third party, which brings convenience but also raises severe security problems into the deployment of models. For example, a malicious third party may provide pretrained models embedded with a designated backdoor, such that the model will have a predefined response to some specific pattern, which is also called the trigger. More realistically, the attacker can inject only a small proportion of malicious data into the public dataset to mislead the trained model, which is referred to as backdoor poisoning attacks [24]. For instance, the malicious data can be created by patching a particular pattern into the benign data and changing the label to the desired target. The correlation between the trigger and the specified target label will be learned by the models during the training time. In this way, the infected model will misclassify the input to the attack target when the pattern is patched, while behaving normally otherwise, as shown in Figure 1. According to previous studies, it was empirically found that an infected model always possesses one or more neurons that have high correlation with the trigger activation, and pruning these neurons can ∗Eual Contribution †Corresponding Author 36th Conference on Neural Information Processing Systems (NeurIPS 2022). significantly alleviate the backdoor behaviors, while retaining the model performance [41, 27, 7]. Nevertheless, how to precisely find out these backdoor neurons in an infected model is still a challenging problem, and has attracted a lot of attentions from the community. In this work, we take an inspection on the pre-activation distributions of infected models on each layer. In general, the pre-activations in each neuron follow an unimodal distribution that can be approximated by a Gaussian distribution. We demonstrate that backdoor neurons do not hold such property. 
Instead, in a typical backdoor neuron, the pre-activation distributions of benign data and poisoned data present significant different moments, and can be approximated by a mixture of two Gaussian distributions. This property allows us to locate potential backdoor neurons through simple statistical analysis on pre-activations. Specifically, in case the defender has access to the poisoned dataset, the abnormal pre-activation distributions can be directly observed by forward propagating the data. The mixture of benign and poisoned data, where the small proportion of poisoned points being away from the benign mean, leads to a skewed, and even a bimodal distribution. Based on the maximum entropy property of the Gaussian distribution, the skewness will cause a reduction on the differential entropy, compared with a single Gaussian distribution with the same variance. Hence, after standardizing the pre-activation distribution to be unit variance, the abnormal distributions should have lower differential entropy. Those neurons could potentially separate benign and poisoned data. As for another defense setting, in which only an infected model and a set of benign data are provided, we are not able to observe the bimodal distributions since the poisoned data is not available. In this case, we propose to rely on the recorded statistics in Batch Normalization (BN) layers. Specifically, if the infected model is trained on poisoned data, the population statistics in backdoor neurons recorded in the BN layer will be different from those of only benign data. More importantly, benign neurons will not exhibit such mismatch in statistics, allowing for differentiation between benign and backdoor neurons. Based on the differential entropy and the statistics discrepancy, we are able to locate and prune potential backdoor neurons to recover the model flexibly under two defense settings. In summary, our contributions include: 1. We take a deep inspection on the infected model, and summarize the law of pre-activation distributions on poisoned dataset. We find that (1) the standardized entropy of backdoor neurons can be significantly lower than benign neurons, and (2) the BN statistics in infected model are mismatched with the benign sample statistics. 2. We propose to prune potential backdoor neurons based on either the differential entropy of pre-activation distribution or the statistics discrepancy, depending on the defense settings. Under certain assumptions, we claim that both the proposed indices can successfully separate the benign neurons and backdoor neurons by an appropriate threshold. 3. We conduct extensive experiments to verify our assumptions and evaluate our proposed methods, and achieve the state-of-the-art results under two different defense settings. 2 Related work In this section, we briefly discuss recent works in backdoor attack and defense, and a specific branch of related studies on distributional properties of poisoned features. 2.1 Backdoor attacks The most famous backdoor attack is introduced in [14], where the adversary injects a small set of targeted label-flipped data with a specific trigger into the training set, leading to a misclassification when predicting the samples with such trigger. To make the trigger pattern even more invisible to human beings, the blending strategy is used in [6] to generate poison images, while the form of natural reflection is utilized in trigger design in [30]. 
The input image is perturbed in [39] to keep its content consistent with the target label such that the model better memorizes the trigger pattern, and keep it imperceptible to human beings. Moreover, the multi-target and multi-trigger attacks are proposed in [42, 32], and make the attack more flexible and covert. Recently, some sample-specific trigger design strategies [26] are proposed, making the defense against such backdoor attack much harder. Generally, the above attacks can be referred to as the poisoning based backdoor attacks. Under some settings, the attackers can control the training process to inject the backdoor without modifying the training data, referred as the non-poisoning based backdoor attacks. This is achieved in [29, 35, 5] through targeted modification of the neurons’ weight in a network. Such attacks will not be evaluated in our work due to its strong attack setting. 2.2 Backdoor defenses Training stage defense. Under such setting, the defender has access to the training process, so that they can detect and filter the poisoned data or add some restrictions to suppress the backdoor effect in training. Since the poisoned data can be regarded as outliers, different strategies are applied in [10, 12, 1, 38, 15], such as the robust statistics in feature space and input perturbation techniques to filter them out of training data. Other methods aim at suppressing the backdoor effect during training phase with strong data augmentation methods [2] [25] [34] [31] including CutMix [43], Flip, ShrinkPad [25], CutOut and MaxUp [13], or differential privacy constraints [11, 18]. Model post-processing defense. Under some specific scenarios, the defenders are only given a suspicious DNN model without access to the training process or the full training set. Therefore, they must eliminate the backdoor threat with limited resources, such as a small set of clean data. A straightforward way is to reconstruct the trigger, and then mitigate the model with the knowledge of the reversed trigger [40]. Some try to find the relationship between backdoor behaviors and the neurons in a DNN model. Different levels of stimulation to a neuron are introduced in [28] to see how to determine the output activation change, if the model is attacked. Simple neuron pruning strategies are applied in [7] to repair the model, while redundant neuron pruning and fine-tuning are combined in [27] to erase the backdoor effect. Adversarial perturbations are added to the neurons in [41] and precisely prunes the easily-perturbed neurons with more limited clean data requirement and better performance. There are other fine-tuning based methods with the implementation of knowledge distillation [23, 17]. Mode connectivity repair technique [44] is also explored to mitigate the backdoored model. Recently, the K-Arm optimization [37] is applied in backdoor detection, helping curtail the threat of backdoor attack. 2.3 Distributional properties in poisoned dataset One branch of research on backdoor learning focuses specifically on using statistical differences in the distribution of benign and poisoned samples to filter out malicious data. Activation clustering [4] uses K-means to separate benign and poisoned samples in feature space. Spectral signatures [38] reveal that feature vectors tend to leave strong signals in the top eigenvectors of their covariance matrix. 
SPECTRE (Spectral Poison Excision Through Robust Estimation)[15] utilizes tools from robust statistics to amplify the spectral signature by outlier-robust data whitening. Our work differentiated from the above works from the following three aspects: 1) the above works focus on the penultimate layer feature representation, while our work inspects deep into each layer 2) the above works take the representation space from all neurons as a whole, while our finding indicates that the distributional difference only exists in some neurons 3) the above works aim at filtering out poisoned samples for retraining, while our methods directly repair the trained neural network by pruning potential backdoor neurons. 3 Preliminaries 3.1 Notations Consider a multi-class classification problem with C classes. Let the original training set D = {(xi, yi)}Ni=1 contains N i.i.d. sample images xi ∈ Rdc×dh×dw and the corresponding labels yi ∈ {1, 2, ..., C} drawn from X × Y . Here, we denote by dc, dh and dw the number of channels, the height and the width of images, respectively. In particular, we have dc = 3 for RGB images. As in Section 2.1, the backdoor poisoning attack involves changes to the input images and the corresponding labels on a subset Dp ⊆ D. In this work, we define the ratio ρ = |Dp||D| as the poisoning rate. We denote the poisoning function to the input images as δ(x). A dataset D is said to be ρ-poisoned if the poisoning rate of the dataset is ρ. Consider a neural network F (x; θ) with L layers. Denote F (l) = f (l) ◦ ϕ ◦ f (l−1) ◦ ϕ ◦ · · · ◦ ϕ ◦ f (1), for 1 ≤ l ≤ L, where f (l) is a linear function (e.g., convolution) in the l-th layer, and ϕ is a nonlinear activation function applied element wise. In this work, we may denote F (x; θ) as F (x) or F for simplicity. We denote by W(l) ∈ Rdc′×dc×dh×dw the weight tensor of a convolutional layer. To do pruning, we apply a mask M(l) ∈ {0, 1}dc′×dc×dh×dw starting with M(l) = 1dc′×dc×dh×dw in each layer. Pruning neurons on the network refers to getting a collection of indices I = {(l, k)i}Ii=1 and setting M(l)k = 0dc×dh×dw if (l, k) ∈ I. The pruned network F−I has the same architecture as F but with all the weight matrices of convolutional layers set to W(l) ⊙M(l), where ⊙ denotes the Hadamard product. 3.2 Differential entropy To measure the uncertainty of a discrete random variable Z, the entropy [36, 8] was defined as H(Z) = − ∑ z∈Z p(z) log p(z). At the same time, as an extension of entropy, the differential entropy was also introduced for a continuous random variable. More concretely, if Z is a continuous random variable, then it was defined as h(Z) = − ∫ Z p(z) log p(z)dz. (1) An important fact about the differential entropy is that, among all the real-valued distributions supported on (−∞,∞) with a specified finite variance, the Gaussian distribution maximizes the differential entropy [8]. In this work, the differential entropy (1) will be utilized to identify the distributions that are far different from a Gaussian distribution. 3.3 Backdoor neurons It was found that there exist one or more neurons that contribute the most to the backdoor behaviors in a infected model [41, 27]. If some of or all of these neurons are pruned, the attack success rate will be reduced greatly [41]. In this work, to better quantify the importance of neurons to backdoor behaviors, we would like to introduce the sensitivity of neurons to the backdoor. We first introduce the definition of backdoor loss on a specific poisoning function: Definition 1. 
Given a model F and a poisoning function δ, the backdoor loss on a dataset D is defined as: L_bd(F) = E_{(x,y)∼D}[D_CE(y, F(δ(x)))], where D_CE denotes the cross-entropy loss. Then: Definition 2. Given a model F, the index of a neuron (l, k) and the backdoor loss L_bd, the sensitivity of that neuron to the backdoor is defined as: α(F, l, k) = L_bd(F) − L_bd(F_{−{(l,k)}}), (2) where F_{−{(l,k)}} is the network after pruning the k-th neuron of the l-th layer. The backdoor loss is high when the model is infected, and is reduced when the backdoor effect is alleviated. Using the quantity defined in (2), we can find the neurons most correlated with the backdoor behaviors: Definition 3. Given a model F and a threshold τ > 0, the set of backdoor neurons is defined as: B_{F,τ} = {(l, k) : α(F, l, k) > τ}. (3) 3.4 Pre-activation distribution During the forward propagation of an input x, we denote by x^(l) = F^(l)(x) ∈ R^(d_c^(l)×d_h^(l)×d_w^(l)) the output of the l-th layer. For the k-th neuron of the l-th layer, the pre-activation ϕ_k^(l) = ϕ(x_k^(l)) is defined as the maximum value of the k-th slice matrix of dimension d_h^(l)×d_w^(l) in x^(l). The reason we choose pre-activations instead of activations is that the distribution of activations after the nonlinear function might be distorted; for example, ReLU (rectified linear unit) cuts off all negative values. It is a common assumption that, for every neuron, the pre-activations before the nonlinear function follow a Gaussian distribution if the network is randomly initialized and the number of neurons is large enough [20]. This assumption is based on the central limit theorem under weak dependence [3]. In a trained network, although this assumption may not strictly hold, the pre-activations of every neuron can still be regarded as approximately following a Gaussian distribution. However, in this work, for the first time, we observe a bimodal pre-activation distribution in backdoor neurons, formed by the benign data and the poisoned data. This phenomenon is shown in Figure 2, where a typical backdoor neuron is compared with benign neurons. It can be seen that, after the model is infected, the pre-activation distributions of benign neurons hardly change when the data is poisoned, while the pre-activation distributions of backdoor neurons become significantly different.
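As a practical aside, the per-neuron pre-activations defined above (the spatial maximum of each channel of a layer's output) can be collected with standard forward hooks. The following is a minimal PyTorch sketch under the assumption that the relevant layer outputs are the outputs of the Conv2d modules; the function name and the hook placement are illustrative choices, not the paper's released implementation.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def collect_preactivations(model: nn.Module, loader, device="cpu"):
    """Return {layer_idx: (num_samples, num_channels) tensor} holding the
    per-channel spatial maxima of each convolutional layer's output."""
    model.eval().to(device)
    convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
    records = {l: [] for l in range(len(convs))}
    hooks = []
    for l, conv in enumerate(convs):
        def hook(module, inputs, output, l=l):
            # output has shape (batch, channels, H, W); take the max over H and W
            records[l].append(output.amax(dim=(2, 3)).cpu())
        hooks.append(conv.register_forward_hook(hook))
    for x, _ in loader:
        model(x.to(device))
    for h in hooks:
        h.remove()
    return {l: torch.cat(v, dim=0) for l, v in records.items()}
```

Feeding the poisoned training set (or, later, a small benign set) through such a routine yields the per-neuron samples on which the entropy and KL statistics of the next section are computed.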
4 Methodology 4.1 Basic assumptions We now introduce two preliminary assumptions on the pre-activation distributions of an infected neural network. Assumption 1. Given an infected model F, we have |B_{F,τ}| > 0 for some threshold τ > 0. This is a primary assumption guaranteeing that proper pruning of neurons can correct the network's predictions on poisoned samples to some extent. Hence, it is a prerequisite for the good performance of all pruning-based defense methods. The next assumption provides a precondition for our methods: Assumption 2. Given a model F infected by a poisoning function δ with a ρ-poisoned dataset D, the pre-activations of samples from D on each single neuron of F follow a Gaussian mixture distribution, that is, ϕ_k^(l) ∼ (1−ρ)·N(μ_k^(l), σ_k^(l)²) + ρ·N(μ̂_k^(l), σ̂_k^(l)²), with |μ_k^(l) − μ̂_k^(l)| < ϵ if (l, k) ∉ B_{F,τ} and |μ_k^(l) − μ̂_k^(l)| ≫ ϵ if (l, k) ∈ B_{F,τ}, and |σ_k^(l)² − σ̂_k^(l)²| < ϵ for all (l, k) ∉ B_{F,τ}, where ϵ > 0 is a small enough value and μ_k^(l), σ_k^(l)² and μ̂_k^(l), σ̂_k^(l)² are the means and variances of {ϕ(F^(l)(x)_k) : x ∼ X} and {ϕ(F^(l)(δ(x))_k) : x ∼ X}, respectively. In words, Assumption 2 states that the means of the pre-activation distributions of benign and poisoned samples differ significantly only in backdoor neurons. This assumption is made based on empirical observation, and our methods work only when it holds. 4.2 Entropy-based pruning (EP) Under the above assumptions, after standardizing the pre-activation distributions (by subtracting the mean and dividing by the standard deviation), benign neurons approximately follow a standard Gaussian distribution N(0, 1) and therefore attain the maximum differential entropy among unit-variance distributions. In backdoor neurons, however, the mixture distributions resulting from differences in the moments of the Gaussian components cannot be Gaussian, leading to a smaller differential entropy than that of a standardized Gaussian distribution. Corollary 1. Let ϕ̇_k^(l) = (ϕ_k^(l) − μ_k^(l)) / σ_k^(l) be the standardized pre-activations; then the following inequality is satisfied: h(ϕ̇_k^(l)) < h(ϕ̇_{k′}^(l)) ≤ h(Z), ∀k ∈ B_{F,τ}, k′ ∉ B_{F,τ}, where Z ∼ N(0, 1) is the standard Gaussian distribution. This inequality guarantees that, with an appropriately chosen threshold, the backdoor neurons can be well separated from the benign neurons. 4.3 BN statistics-based pruning (BNP) A BN layer uses the statistics of a mini-batch to normalize the data in each layer for each neuron. It is known to smooth the optimization landscape and has gradually become a standard component of neural networks [19]. During inference, BN uses fixed statistics, including the mean and the variance, obtained by averaging the sample statistics of mini-batches during training. If the model is trained on a poisoned dataset, BN records the mean and the variance of the poison-benign mixed data. Note that the mean and variance here are not defined on the pre-activations ϕ_k^(l), but on x_k^(l). Based on the above discussion, we know that the poisoned pre-activations in backdoor neurons follow a distribution different from that of the benign samples. The statistics recorded during training are actually those of the mixture distribution. Hence, we can expect the BN statistics of a trained backdoored neural network to be biased. If we are able to access a small set of benign data, we can compute an approximation of the true statistics on benign data. We then calculate the Kullback-Leibler (KL) divergence [9] between the sample distribution and the BN-induced distribution as a measurement of the bias. By assuming that both distributions are Gaussian, we have the closed-form solution: D_KL(N_sample^(l), N_BN^(l))_k = log(σ̃_k^(l)/σ_k^(l)) + (σ_k^(l)² + (μ_k^(l) − μ̃_k^(l))²) / (2·σ̃_k^(l)²) − 1/2, (4) where N_sample^(l) = N(μ_k^(l), σ_k^(l)²), N_BN^(l) = N(μ̃_k^(l), σ̃_k^(l)²), μ_k^(l) and σ_k^(l)² are the statistics obtained from benign samples, and μ̃_k^(l) and σ̃_k^(l)² are the BN statistics. Note that the BN statistics are the mixture statistics of the benign and poisoned distributions. Thus, we have the following corollary: Corollary 2. Under Assumption 2, when ϵ → 0, the following inequality is satisfied: D_KL(N_sample^(l), N_BN^(l))_k > D_KL(N_sample^(l), N_BN^(l))_{k′} = 0, ∀k ∈ B_{F,τ}, k′ ∉ B_{F,τ}. The corollary indicates that backdoor neurons should have larger KL divergences than benign neurons, as illustrated in Figure 2(c).
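The closed-form KL divergence in equation (4) reduces to a few tensor operations per BatchNorm layer once the benign-sample statistics are available. Below is an illustrative sketch, not the paper's code: it assumes a standard PyTorch model whose nn.BatchNorm2d modules track running_mean and running_var, and it measures the benign statistics on the inputs of those BN layers (i.e., the corresponding convolution outputs).

```python
import torch
import torch.nn as nn

@torch.no_grad()
def bn_kl_divergence(model: nn.Module, benign_batch: torch.Tensor):
    """Per-channel KL(N_sample || N_BN) for every BatchNorm2d layer, comparing
    statistics of a small benign batch with the recorded running statistics."""
    model.eval()
    bns = [(name, m) for name, m in model.named_modules()
           if isinstance(m, nn.BatchNorm2d)]
    feats, hooks = {}, []
    for name, bn in bns:
        def hook(module, inputs, output, name=name):
            feats[name] = inputs[0]          # input of BN, i.e. x^(l) for that layer
        hooks.append(bn.register_forward_hook(hook))
    model(benign_batch)
    for h in hooks:
        h.remove()
    kls = {}
    for name, bn in bns:
        x = feats[name]                                        # (B, C, H, W)
        mu = x.mean(dim=(0, 2, 3))                             # benign sample mean
        var = x.var(dim=(0, 2, 3), unbiased=False) + 1e-12     # benign sample variance
        mu_bn, var_bn = bn.running_mean, bn.running_var        # BN (mixture) statistics
        kls[name] = (0.5 * torch.log(var_bn / var)
                     + (var + (mu - mu_bn) ** 2) / (2.0 * var_bn) - 0.5)
    return kls
```

Channels whose KL value stands out within a layer are the BNP candidates selected in the overview that follows.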
4.4 Overview of the two pruning strategies In Section 3.4, we revealed the discrepancy between the pre-activation distributions in backdoor neurons and those in benign neurons. This discrepancy enables fast detection of the neurons most related to the backdoor behaviors. The index we choose to detect the abnormal neurons depends on what kind of data we are able to access. Mixture training data In this case, the victim is given a poisoned training dataset with a specified poisoning rate ρ. Our goal is to obtain a benign model based on the poisoned dataset. To achieve this, we first train an infected model F on the poisoned dataset. The resulting model should have a certain number of backdoor neurons, based on empirical observation and Assumption 1. Since ρ > 0 for this dataset, all the neurons follow Gaussian mixture distributions, and we have h(ẋ_k^(l)) < h(ẋ_{k′}^(l)) for all k ∈ B_{F,τ}, k′ ∉ B_{F,τ}. This implies that, with an appropriate threshold τ_h^*, we can perfectly separate the benign neurons and the backdoor neurons, which can be formulated as: ∃ τ_h^* such that h(ẋ_k^(l)) < τ_h^* for all k ∈ B_{F,τ}, and h(ẋ_{k′}^(l)) > τ_h^* for all k′ ∉ B_{F,τ}. Setting the threshold τ_h^* is crucial to the solution, and it is a trade-off between the accuracy on benign samples and that on the backdoored samples. Note that |B_{F,τ}^(l)| ≪ d_c^(l). We can therefore treat the low-entropy neurons as outliers in each layer and set a different threshold for each layer. Specifically, let h^(l) = [h(ẋ_1^(l)), h(ẋ_2^(l)), ..., h(ẋ_{d_c^(l)}^(l))]^T ∈ R^(d_c^(l)) be the vector of differential entropies of the l-th layer calculated from the poisoned dataset. We then set τ_h^(l) = h̄^(l) − u_h·s_h^(l), where h̄^(l) = (1/d_c^(l)) ∑_{k=1}^{d_c^(l)} h_k^(l) and s_h^(l) = √((1/d_c^(l)) ∑_{k=1}^{d_c^(l)} (h_k^(l) − h̄^(l))²) are the mean and standard deviation of h^(l), and u_h is a hyperparameter controlling how low the threshold is. We then obtain a set of indices of potential backdoor neurons I_h = {(l, k) : h_k^(l) < τ_h^(l)}. Finally, we prune the infected model F using I_h, which results in a final model F_{−I_h}. Benign training data This is the case where the victim is given a trained poisoned model F together with a small set of benign data. Our goal is to utilize the benign data to clean up the poisoned model and eliminate the backdoor threat. Similar to the pruning process based on differential entropy, we first construct a vector of KL divergences of all neurons for each layer, K^(l) = [K_1^(l), K_2^(l), ..., K_{d_c^(l)}^(l)]^T ∈ R^(d_c^(l)), according to equation (4). We set τ_K^(l) = K̄^(l) + u_K·s_K^(l), where K̄^(l) = (1/d_c^(l)) ∑_{k=1}^{d_c^(l)} K_k^(l) and s_K^(l) = √((1/d_c^(l)) ∑_{k=1}^{d_c^(l)} (K_k^(l) − K̄^(l))²) are the mean and standard deviation of K^(l), and u_K is a hyperparameter. The set of selected neurons is I_K = {(l, k) : K_k^(l) > τ_K^(l)}, and the pruned model can be represented as F_{−I_K}.
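To make the mixture-data branch concrete, here is a small NumPy sketch of the per-layer selection rule τ_h^(l) = h̄^(l) − u_h·s_h^(l). It uses a simple histogram plug-in estimator for the differential entropy of the standardized pre-activations; the estimator, the bin count, and the helper names are illustrative assumptions rather than the exact implementation used in the paper.

```python
import numpy as np

def standardized_entropy(samples: np.ndarray, bins: int = 64) -> float:
    """Histogram plug-in estimate of the differential entropy (in nats) of the
    standardized (zero-mean, unit-variance) samples."""
    z = (samples - samples.mean()) / (samples.std() + 1e-12)
    density, edges = np.histogram(z, bins=bins, density=True)
    width = edges[1] - edges[0]
    nonzero = density[density > 0]
    return float(-np.sum(nonzero * np.log(nonzero)) * width)

def select_low_entropy_neurons(preacts: np.ndarray, u_h: float = 3.0) -> np.ndarray:
    """preacts: (num_samples, num_channels) pre-activations of one layer on the
    poisoned set. Returns channel indices with entropy below h_bar - u_h * s_h."""
    h = np.array([standardized_entropy(preacts[:, k])
                  for k in range(preacts.shape[1])])
    tau = h.mean() - u_h * h.std()
    return np.where(h < tau)[0]
```

The benign-data branch is symmetric: replace the entropy vector with the per-layer KL vector of equation (4) and keep the channels above K̄^(l) + u_K·s_K^(l). The selected (layer, channel) pairs are then pruned by zeroing the corresponding convolutional filters, which realizes the mask W^(l) ⊙ M^(l) of Section 3.1.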
5 Experiments 5.1 Implementation details Datasets The experiments in this section are conducted on two influential benchmarks, CIFAR-10 [21] and Tiny-ImageNet [22]. We use 90% of each dataset for training; the rest of the data is used for validating or recovering the poisoned model. Models We use ResNet-18 [16] as the baseline model to evaluate our proposed methods and compare them with other methods. We train the network for 150 epochs on CIFAR-10 and 100 epochs on Tiny-ImageNet with the SGD optimizer. The initial learning rate is set to 0.1 and the momentum to 0.9. We adopt a cosine learning rate scheduler to adjust the learning rate. The batch size is set to 128 by default. Attacks Our experiments are based on both classical and state-of-the-art attack strategies, including BadNets [14], Clean Label Attack (CLA) [39], Reflection Backdoor (Refool) [30], Warping-based poisoned Networks (WaNet) [33], Blended backdoor attack (Blended) [6], Input-aware backdoor attack (IAB) [32] and Sample Specific Backdoor Attack (SSBA) [26]. For BadNets, we test both the All-to-All (A2A) and the All-to-One (A2O) attack, i.e., the attack target labels are set to y_t = (y + 1) mod C, or to one particular label y_t = C_t, respectively. The target for the A2O attacks of all attack strategies is set to class 0. The triggers for BadNets and CLA are randomly generated patterns of size 3×3 for CIFAR-10 and 5×5 for Tiny-ImageNet. The poisoning rate is set to 10% by default. Note that, due to the image size constraint, SSBA is only performed on Tiny-ImageNet. Defenses We conduct experiments under two defense settings: one allows the defender to access the poisoned training set, while the other only provides a small clean data set. In both settings, the defense goal is to obtain a clean model without backdoor behaviors. We compare our approaches with l∞ pruning [7], fine-tuning (FT), fine-pruning (FP) [27], neural attention distillation (NAD) [23] and adversarial neuron pruning (ANP) [41]. The number of benign samples the defender is allowed to access is set to 500 (1%) for CIFAR-10 and 5000 (5%) for Tiny-ImageNet by default. We set the threshold hyperparameter u_h/u_K to 3 on CIFAR-10 and 4 on Tiny-ImageNet for all tested attacks by default. Evaluation metrics In this work, we use the clean accuracy (ACC) and the attack success rate (ASR) to evaluate the effectiveness of different methods. The ACC for a given model F is defined as: ACC(F, D_test) = (1/|D_test|) ∑_{(x,y)∈D_test} I{argmax(F(x)) = y}, where I is the indicator function. The ASR is defined as: ASR(F, D_test) = ∑_{(x,y)∈D_test, y≠y_t} I{argmax(F(δ(x))) = y_t} / |{(x, y) ∈ D_test : y ≠ y_t}|, where y_t is the attack target label. The ACC measures the model's performance on benign samples, while the ASR reflects the degree to which backdoor behavior is retained in the model. Given an infected model, our goal is to reduce the ASR while keeping the ACC from dropping too much.
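The two metrics translate directly into a short evaluation loop. The sketch below is a generic PyTorch illustration; poison_fn stands in for the poisoning function δ(·) of whichever attack is being evaluated and is a placeholder, not an API from the paper.

```python
import torch

@torch.no_grad()
def evaluate(model, loader, poison_fn, target_label: int, device="cpu"):
    """Return (ACC, ASR) as fractions over the test set."""
    model.eval().to(device)
    correct = total = hits = candidates = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        pred = model(x).argmax(dim=1)                    # clean accuracy
        correct += (pred == y).sum().item()
        total += y.numel()
        mask = y != target_label                         # ASR counts only non-target samples
        if mask.any():
            pred_p = model(poison_fn(x[mask])).argmax(dim=1)
            hits += (pred_p == target_label).sum().item()
            candidates += mask.sum().item()
    return correct / total, hits / max(candidates, 1)
```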
5.2 Experimental results CIFAR-10 We show the results on CIFAR-10 in Table 1. The recently proposed NAD and ANP perform significantly better than the other defense methods, reducing the ASR to a very low level with only a slight drop in ACC. However, they also suffer a significant drop (3∼4%) in ACC when defending against CLA, which is the most robust backdoor attack in our experiments, and ANP even fails when defending against BadNets (A2A). Nevertheless, both of our methods successfully eliminate the backdoor (ASR < 1%) with negligible loss in ACC. We even observe a small rise in ACC when defending against BadNets with EP. This phenomenon suggests that backdoor neurons may hurt the ACC in some way, so the ACC can rise when the backdoor neurons are precisely pruned. Overall, our methods achieve the best defense results. Tiny-ImageNet Tiny-ImageNet is a larger-scale dataset with higher-resolution images, and it is harder to defend against attacks performed on it. Note that the A2A attack is absent, since we could not successfully perform this attack due to the large number of classes (up to 200). Our experimental results show that all the defense methods suffer from performance degradation compared with the results on CIFAR-10, and they fail to defend against WaNet, with a large ACC drop but an essentially unchanged ASR; this is especially true for ANP and the l∞ defense. This failure shows that the principles for finding backdoor neurons used by ANP and l∞ do not work in this case. Nevertheless, our methods completely remove the backdoor and the ACC is barely affected, which indicates that our methods can precisely locate the backdoor neurons even on such a large-scale dataset. 5.3 Ablation study For a fair comparison, we compared BNP with other re-training based methods using 500 benign samples in Section 5.2. However, BNP does not require re-training the model, and the samples are only used for detecting the distribution discrepancy. As the statistical differences may be detectable with far fewer samples, we now study how the number of samples affects the effectiveness of BNP. We train BadNets, CLA, Refool and Blended on CIFAR-10 with ρ = 10%, and use 10 to 500 benign samples to recover the model using BNP. We record the changes in ACC and ASR with respect to the number of benign samples; the results are reported below. The influence of the number of samples on our method comes from the randomness in estimating moments. As the number of samples grows, the randomness is reduced and BNP performs more stably, but the average performance does not improve, except for Refool. Compared with the other attacks, Refool clearly needs more samples to reduce the ASR. A possible reason is that the mixture distribution under Refool has closer moments and is harder to distinguish. Besides, we find, somewhat surprisingly, that BNP can recover BadNets, CLA and Blended using only 10 benign samples. We also conduct experiments to show the high correlation between the backdoor neurons and our proposed detection indices; the results are shown in Appendix D. 6 Discussion The proposed methods are superior to other existing defense methods in the following three aspects: Better performance As demonstrated in Section 5, both of the proposed methods achieve state-of-the-art results. Moreover, according to the ablation study, the proposed BNP can successfully defend against most of the attacks with as few as 10 benign samples, which shows the remarkable effectiveness of our proposed methods. Higher efficiency The proposed methods are highly efficient. We record the running time of several defense methods on 500 CIFAR-10 images with ResNet-18 and show the results in Table 3. It can be seen that both of the proposed methods require less time than the baseline defense methods. Since both methods only require scanning each neuron once, the computational complexity scales linearly with the number of neurons in the neural network. Therefore, the efficiency of our methods is guaranteed. More robust to hyperparameter choice One of the most general problems in backdoor defense is the choice of hyperparameters. Under realistic settings, the defenders must perform defenses without any prior knowledge about the poisoned data, including the poisoning rate and examples of poisoned data. The defenders therefore have to tune the hyperparameters carefully, or the ACC and ASR can change abruptly even under small fluctuations of those hyperparameters. In comparison, both of the proposed pruning strategies require only one universal hyperparameter u. Moreover, they show reliable consistency across different attacks on the same dataset, varying only across datasets, which is inevitable. Besides, a wide range of parameter values keeps the ACC high while controlling the ASR to a very small number, as shown in Section 5.3.
7 Conclusion In this work, we inspect the characteristics of an infected model and find that backdoor-sensitive neurons are distinguishable by their pre-activations on the poisoned dataset. Specifically, in backdoor neurons, the pre-activations from benign data and from poisoned data form distributions with extraordinarily different moments. This property makes it possible for defenders to efficiently locate potential backdoor neurons based on the distributional properties of the pre-activations. When direct access to the poisoned dataset is available, we propose to measure the mixture property of the pre-activations via differential entropy to detect potential backdoor neurons. In the other case, where defenders only have access to a benign dataset, we propose to check the abnormality of the pre-activation distribution based on the inconsistency between the recorded BN statistics and the sample statistics on the given benign dataset. We then prune the potential backdoor neurons to recover the model. Experiments show that the proposed defense strategies can efficiently locate the backdoor neurons and greatly reduce the backdoor threat with negligible loss of clean accuracy. Our approaches achieve superior results compared with all other defense methods under various attacks on the tested datasets. The results shed light on the field of backdoor defense and can serve as guidance for designing more robust backdoor attacks. 8 Acknowledgement This work is supported by the National Natural Science Foundation of China (No. 62101351), the Guangdong Basic and Applied Basic Research Foundation (No. 2020A1515110376), the Shenzhen Outstanding Scientific and Technological Innovation Talents PhD Startup Project (No. RCBS20210609104447108), the Key-Area Research and Development Program of Guangdong Province (2020B0101350001), and the Chinese University of Hong Kong (Shenzhen).
1. What is the focus and contribution of the paper regarding backdoor attacks? 2. What are the strengths of the proposed approach, particularly in terms of its effectiveness and simplicity? 3. Are there any concerns or limitations regarding the applicability of the method in different settings? 4. How does the reviewer assess the clarity and presentation of the paper's content? 5. What are the experiments conducted to validate the effectiveness of the proposed approach?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper In this paper, the authors study how to better defend models from backdoor attacks. Specifically, they identified that only a subset of neurons is responsible for the poisoned behaviors. They found that if one prunes these poisoned neurons, the backdoor behavior can be effectively stopped. The authors found a simple way of identifying such neurons through two measurements: the discrepancy in differential entropy and the mismatched batch-norm statistics. By using these two measurements, they are able to stop the backdoor behavior more effectively while requiring very little additional compute. They can also identify such backdoor neurons with as few as 10 benign examples. Strengths And Weaknesses Strengths The biggest strength of the paper is the method's effectiveness as well as its simplicity. The presentation of the paper is very clear and easy to follow. The experiments are thorough and convincingly show the benefit of the proposed approach. Weaknesses I do not find any obvious weakness in the paper. The wording could be improved slightly, but this does not impact the message of the paper very much. Given its superior effectiveness in defending against standard backdoor attacks in standard settings, I am curious whether it can also help stop backdoor attacks in self-supervised or unsupervised settings. I think the contribution of the paper is already solid; I am simply interested in learning more about its implications for other backdoor settings. Overall, I really enjoyed reading the paper and recommend that it be accepted. Questions I don't have any questions about the paper as of now. Limitations The authors have adequately addressed the limitations of their approach and the societal impact of their work.
NIPS
Title Pre-activation Distributions Expose Backdoor Neurons Abstract Convolutional neural networks (CNNs) can be manipulated to perform specific behaviors when encountering a particular trigger pattern without affecting the performance on normal samples, which is referred to as a backdoor attack. The backdoor attack is usually achieved by injecting a small proportion of poisoned samples into the training set, through which the victim trains a model embedded with the designated backdoor. In this work, we demonstrate that backdoor neurons are exposed by their pre-activation distributions, where the populations from benign data and poisoned data show significantly different moments. This property is shown to be attack-invariant and allows us to efficiently locate backdoor neurons. On this basis, we make several reasonable assumptions on the neuron activation distributions and propose two backdoor neuron detection strategies based on (1) the differential entropy of the neurons, and (2) the Kullback-Leibler divergence between the benign sample distribution and a hypothetical distribution based on the poisoned statistics. Experimental results show that our proposed defense strategies are both efficient and effective against various backdoor attacks. Source code is available here. 1 Introduction Convolutional neural networks (CNNs) have achieved tremendous success during the past few years in a wide range of areas. However, training a CNN from scratch involves a large amount of data and expensive computational costs, which is sometimes infeasible. A more practical strategy is to obtain pretrained models or utilize public datasets from a third party, which brings convenience but also introduces severe security risks into the deployment of models. For example, a malicious third party may provide pretrained models embedded with a designated backdoor, such that the model will have a predefined response to some specific pattern, which is also called the trigger. More realistically, the attacker can inject only a small proportion of malicious data into the public dataset to mislead the trained model, which is referred to as a backdoor poisoning attack [24]. For instance, the malicious data can be created by patching a particular pattern into the benign data and changing the label to the desired target. The correlation between the trigger and the specified target label will be learned by the model during training. In this way, the infected model will misclassify the input to the attack target when the pattern is patched, while behaving normally otherwise, as shown in Figure 1. According to previous studies, it was empirically found that an infected model always possesses one or more neurons that have a high correlation with the trigger activation, and pruning these neurons can significantly alleviate the backdoor behaviors while retaining the model performance [41, 27, 7]. Nevertheless, how to precisely identify these backdoor neurons in an infected model is still a challenging problem and has attracted a lot of attention from the community. In this work, we inspect the pre-activation distributions of infected models at each layer. In general, the pre-activations of each neuron follow a unimodal distribution that can be approximated by a Gaussian distribution. We demonstrate that backdoor neurons do not have this property.
Instead, in a typical backdoor neuron, the pre-activation distributions of benign data and poisoned data present significantly different moments and can be approximated by a mixture of two Gaussian distributions. This property allows us to locate potential backdoor neurons through simple statistical analysis of the pre-activations. Specifically, in the case where the defender has access to the poisoned dataset, the abnormal pre-activation distributions can be directly observed by forward propagating the data. The mixture of benign and poisoned data, in which a small proportion of poisoned points lies away from the benign mean, leads to a skewed, or even bimodal, distribution. Based on the maximum entropy property of the Gaussian distribution, the skewness causes a reduction in the differential entropy compared with a single Gaussian distribution with the same variance. Hence, after standardizing the pre-activation distribution to unit variance, the abnormal distributions should have lower differential entropy. Such neurons could potentially separate benign and poisoned data. As for the other defense setting, in which only an infected model and a set of benign data are provided, we cannot observe the bimodal distributions since the poisoned data is not available. In this case, we propose to rely on the statistics recorded in the Batch Normalization (BN) layers. Specifically, if the infected model is trained on poisoned data, the population statistics of backdoor neurons recorded in the BN layer will be different from those of benign data alone. More importantly, benign neurons will not exhibit such a mismatch in statistics, allowing for differentiation between benign and backdoor neurons. Based on the differential entropy and the statistics discrepancy, we are able to locate and prune potential backdoor neurons to recover the model flexibly under two defense settings. In summary, our contributions include: 1. We closely inspect the infected model and characterize the pre-activation distributions on the poisoned dataset. We find that (1) the standardized entropy of backdoor neurons can be significantly lower than that of benign neurons, and (2) the BN statistics of an infected model are mismatched with the benign sample statistics. 2. We propose to prune potential backdoor neurons based on either the differential entropy of the pre-activation distribution or the statistics discrepancy, depending on the defense setting. Under certain assumptions, we claim that both proposed indices can successfully separate the benign neurons and backdoor neurons with an appropriate threshold. 3. We conduct extensive experiments to verify our assumptions and evaluate our proposed methods, and achieve state-of-the-art results under two different defense settings. 2 Related work In this section, we briefly discuss recent works on backdoor attacks and defenses, and a specific branch of related studies on the distributional properties of poisoned features. 2.1 Backdoor attacks The most famous backdoor attack is introduced in [14], where the adversary injects a small set of label-flipped data carrying a specific trigger into the training set, leading to misclassification when predicting samples with such a trigger. To make the trigger pattern even more invisible to human beings, a blending strategy is used in [6] to generate poisoned images, while the form of natural reflection is utilized for trigger design in [30].
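The maximum-entropy argument sketched in the introduction above, namely that a standardized two-component Gaussian mixture necessarily has lower differential entropy than N(0, 1), is easy to check numerically. The following self-contained Monte Carlo snippet (an illustration, not part of the paper) estimates the entropy of a 10%-poisoned mixture with a shifted component and compares it against the Gaussian maximum 0.5·log(2πe) ≈ 1.419 nats.

```python
import numpy as np

def mixture_entropy_mc(rho=0.1, shift=4.0, n=200_000, seed=0):
    """Monte Carlo estimate of the differential entropy (in nats) of the
    standardized mixture (1-rho)*N(0,1) + rho*N(shift,1), via -E[log p(Z)]."""
    rng = np.random.default_rng(seed)
    poisoned = rng.random(n) < rho
    x = rng.normal(0.0, 1.0, n) + np.where(poisoned, shift, 0.0)
    mu = rho * shift                                      # mixture mean
    sd = np.sqrt(1.0 + rho * (1.0 - rho) * shift ** 2)    # mixture std
    z = (x - mu) / sd                                     # standardized samples
    def p_z(t):                                           # density of the standardized mixture
        u = t * sd + mu
        gauss = lambda m: np.exp(-0.5 * (u - m) ** 2) / np.sqrt(2.0 * np.pi)
        return sd * ((1.0 - rho) * gauss(0.0) + rho * gauss(shift))
    return float(-np.mean(np.log(p_z(z))))

gaussian_max = 0.5 * np.log(2.0 * np.pi * np.e)           # ~ 1.4189 nats
print(mixture_entropy_mc(), "<", gaussian_max)            # the mixture falls below the Gaussian bound
```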
1. What is the focus and contribution of the paper regarding backdoor attacks on CNNs? 2. What are the strengths of the proposed approach, particularly in its comprehensive evaluation and consideration of adaptive attacks? 3. What are the weaknesses of the paper, especially regarding its novelty and limitations in practical scenarios? 4. Do you have any concerns about the effectiveness of the approach in multiple-label scenarios or with multiple backdoors? 5. What device was used to test the running time in the experiments, and how does it affect the results? 6. How does the requirement for clean validation data impact the practicality of the approach?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper In this paper, the authors propose a defense approach to mitigate backdoor attacks on convolutional neural networks (CNNs). Specifically, the authors first observe that the neuron activation distributions within a backdoor-infected CNN differ between benign and backdoored samples. Based on this observation, the authors further propose to calculate the differential entropy to distinguish the compromised neurons. In the experiment section, the authors evaluate the proposed approach on the CIFAR-10 and Tiny-ImageNet benchmarks under different settings (e.g., training phase, post-training phase, etc.). Moreover, the authors also test the robustness of the proposed approach against two adaptive attacks. The results demonstrate that the proposed method can significantly outperform comparison work in the dimensions of robustness, efficacy, and efficiency. Strengths And Weaknesses +++++++Strengths+++++++ Well-structured presentation and easy to follow. Comprehensive evaluation. Considers two adaptive attacks. -------------Weaknesses------------ The novelty is somewhat limited. Since inspecting compromised neurons for defending against backdoor attacks has been widely adopted by previous work, this work may not seem that novel. I think more previous unlearning-based work or training-stage defenses should also be considered for comparison, for example, DBD [1] and the most recent approach [2]. Missing evaluation on several practical scenarios. I think the proposed approach should also be evaluated under some rather practical scenarios, for example, multiple-label scenarios and multiple backdoors within the same infected label, which previous work (Neural Cleanse) has discussed. [1] https://openreview.net/forum?id=TySnJ-0RdKI [2] ADVERSARIAL UNLEARNING OF BACKDOORS VIA IMPLICIT HYPERGRADIENT Questions Can this work still perform effectively under multiple-label and other practical scenarios? What device was used to test the running time in the experiment section? Limitations Requiring clean validation data makes the approach somewhat impractical.
NIPS
Title Pre-activation Distributions Expose Backdoor Neurons Abstract Convolutional neural networks (CNNs) can be manipulated to perform specific behaviors when encountering a particular trigger pattern without affecting the performance on normal samples, which is referred to as a backdoor attack. The backdoor attack is usually achieved by injecting a small proportion of poisoned samples into the training set, through which the victim trains a model embedded with the designated backdoor. In this work, we demonstrate that backdoor neurons are exposed by their pre-activation distributions, where populations from benign data and poisoned data show significantly different moments. This property is shown to be attack-invariant and allows us to efficiently locate backdoor neurons. On this basis, we make several proper assumptions on the neuron activation distributions, and propose two backdoor neuron detection strategies based on (1) the differential entropy of the neurons, and (2) the Kullback-Leibler divergence between the benign sample distribution and a hypothetical distribution based on the poisoned statistics. Experimental results show that our proposed defense strategies are both efficient and effective against various backdoor attacks. Source code is available here. 1 Introduction Convolutional neural networks (CNNs) have achieved tremendous success during the past few years in a wide range of areas. However, training a CNN from scratch involves a large amount of data and expensive computational costs, which is sometimes infeasible. A more practical strategy is to obtain pretrained models or utilize public datasets from a third party, which brings convenience but also raises severe security problems in the deployment of models. For example, a malicious third party may provide pretrained models embedded with a designated backdoor, such that the model will have a predefined response to some specific pattern, which is also called the trigger. More realistically, the attacker can inject only a small proportion of malicious data into the public dataset to mislead the trained model, which is referred to as a backdoor poisoning attack [24]. For instance, the malicious data can be created by patching a particular pattern into the benign data and changing the label to the desired target. The correlation between the trigger and the specified target label will be learned by the model during training. In this way, the infected model will misclassify the input to the attack target when the pattern is patched, while behaving normally otherwise, as shown in Figure 1. (∗Equal Contribution; †Corresponding Author.) According to previous studies, it was empirically found that an infected model always possesses one or more neurons that have a high correlation with the trigger activation, and pruning these neurons can significantly alleviate the backdoor behaviors while retaining the model performance [41, 27, 7]. Nevertheless, how to precisely identify these backdoor neurons in an infected model is still a challenging problem, and it has attracted a lot of attention from the community. In this work, we inspect the pre-activation distributions of infected models at each layer. In general, the pre-activations of each neuron follow a unimodal distribution that can be approximated by a Gaussian distribution. We demonstrate that backdoor neurons do not have this property.
Instead, in a typical backdoor neuron, the pre-activation distributions of benign data and poisoned data present significantly different moments, and can be approximated by a mixture of two Gaussian distributions. This property allows us to locate potential backdoor neurons through simple statistical analysis of the pre-activations. Specifically, in case the defender has access to the poisoned dataset, the abnormal pre-activation distributions can be directly observed by forward propagating the data. The mixture of benign and poisoned data, in which a small proportion of poisoned points lie away from the benign mean, leads to a skewed or even bimodal distribution. Based on the maximum entropy property of the Gaussian distribution, this skewness causes a reduction in the differential entropy compared with a single Gaussian distribution with the same variance. Hence, after standardizing the pre-activation distribution to unit variance, the abnormal distributions should have lower differential entropy. Those neurons could potentially separate benign and poisoned data. In the other defense setting, in which only an infected model and a set of benign data are provided, we are not able to observe the bimodal distributions since the poisoned data is not available. In this case, we propose to rely on the recorded statistics in Batch Normalization (BN) layers. Specifically, if the infected model is trained on poisoned data, the population statistics in backdoor neurons recorded in the BN layer will be different from those of only benign data. More importantly, benign neurons will not exhibit such a mismatch in statistics, allowing for differentiation between benign and backdoor neurons. Based on the differential entropy and the statistics discrepancy, we are able to locate and prune potential backdoor neurons to recover the model flexibly under the two defense settings. In summary, our contributions include: 1. We closely inspect infected models and characterize the pre-activation distributions on the poisoned dataset. We find that (1) the standardized entropy of backdoor neurons can be significantly lower than that of benign neurons, and (2) the BN statistics of an infected model are mismatched with the benign sample statistics. 2. We propose to prune potential backdoor neurons based on either the differential entropy of the pre-activation distribution or the statistics discrepancy, depending on the defense setting. Under certain assumptions, we claim that both of the proposed indices can successfully separate the benign neurons and backdoor neurons with an appropriate threshold. 3. We conduct extensive experiments to verify our assumptions and evaluate our proposed methods, and achieve state-of-the-art results under two different defense settings. 2 Related work In this section, we briefly discuss recent work on backdoor attacks and defenses, and a specific branch of related studies on distributional properties of poisoned features. 2.1 Backdoor attacks The best-known backdoor attack is introduced in [14], where the adversary injects a small set of targeted label-flipped data with a specific trigger into the training set, leading to misclassification when predicting samples with such a trigger. To make the trigger pattern even less visible to human beings, a blending strategy is used in [6] to generate poisoned images, while natural reflections are utilized for trigger design in [30].
The input image is perturbed in [39] to keep its content consistent with the target label such that the model better memorizes the trigger pattern, and keep it imperceptible to human beings. Moreover, the multi-target and multi-trigger attacks are proposed in [42, 32], and make the attack more flexible and covert. Recently, some sample-specific trigger design strategies [26] are proposed, making the defense against such backdoor attack much harder. Generally, the above attacks can be referred to as the poisoning based backdoor attacks. Under some settings, the attackers can control the training process to inject the backdoor without modifying the training data, referred as the non-poisoning based backdoor attacks. This is achieved in [29, 35, 5] through targeted modification of the neurons’ weight in a network. Such attacks will not be evaluated in our work due to its strong attack setting. 2.2 Backdoor defenses Training stage defense. Under such setting, the defender has access to the training process, so that they can detect and filter the poisoned data or add some restrictions to suppress the backdoor effect in training. Since the poisoned data can be regarded as outliers, different strategies are applied in [10, 12, 1, 38, 15], such as the robust statistics in feature space and input perturbation techniques to filter them out of training data. Other methods aim at suppressing the backdoor effect during training phase with strong data augmentation methods [2] [25] [34] [31] including CutMix [43], Flip, ShrinkPad [25], CutOut and MaxUp [13], or differential privacy constraints [11, 18]. Model post-processing defense. Under some specific scenarios, the defenders are only given a suspicious DNN model without access to the training process or the full training set. Therefore, they must eliminate the backdoor threat with limited resources, such as a small set of clean data. A straightforward way is to reconstruct the trigger, and then mitigate the model with the knowledge of the reversed trigger [40]. Some try to find the relationship between backdoor behaviors and the neurons in a DNN model. Different levels of stimulation to a neuron are introduced in [28] to see how to determine the output activation change, if the model is attacked. Simple neuron pruning strategies are applied in [7] to repair the model, while redundant neuron pruning and fine-tuning are combined in [27] to erase the backdoor effect. Adversarial perturbations are added to the neurons in [41] and precisely prunes the easily-perturbed neurons with more limited clean data requirement and better performance. There are other fine-tuning based methods with the implementation of knowledge distillation [23, 17]. Mode connectivity repair technique [44] is also explored to mitigate the backdoored model. Recently, the K-Arm optimization [37] is applied in backdoor detection, helping curtail the threat of backdoor attack. 2.3 Distributional properties in poisoned dataset One branch of research on backdoor learning focuses specifically on using statistical differences in the distribution of benign and poisoned samples to filter out malicious data. Activation clustering [4] uses K-means to separate benign and poisoned samples in feature space. Spectral signatures [38] reveal that feature vectors tend to leave strong signals in the top eigenvectors of their covariance matrix. 
SPECTRE (Spectral Poison Excision Through Robust Estimation)[15] utilizes tools from robust statistics to amplify the spectral signature by outlier-robust data whitening. Our work differentiated from the above works from the following three aspects: 1) the above works focus on the penultimate layer feature representation, while our work inspects deep into each layer 2) the above works take the representation space from all neurons as a whole, while our finding indicates that the distributional difference only exists in some neurons 3) the above works aim at filtering out poisoned samples for retraining, while our methods directly repair the trained neural network by pruning potential backdoor neurons. 3 Preliminaries 3.1 Notations Consider a multi-class classification problem with C classes. Let the original training set D = {(xi, yi)}Ni=1 contains N i.i.d. sample images xi ∈ Rdc×dh×dw and the corresponding labels yi ∈ {1, 2, ..., C} drawn from X × Y . Here, we denote by dc, dh and dw the number of channels, the height and the width of images, respectively. In particular, we have dc = 3 for RGB images. As in Section 2.1, the backdoor poisoning attack involves changes to the input images and the corresponding labels on a subset Dp ⊆ D. In this work, we define the ratio ρ = |Dp||D| as the poisoning rate. We denote the poisoning function to the input images as δ(x). A dataset D is said to be ρ-poisoned if the poisoning rate of the dataset is ρ. Consider a neural network F (x; θ) with L layers. Denote F (l) = f (l) ◦ ϕ ◦ f (l−1) ◦ ϕ ◦ · · · ◦ ϕ ◦ f (1), for 1 ≤ l ≤ L, where f (l) is a linear function (e.g., convolution) in the l-th layer, and ϕ is a nonlinear activation function applied element wise. In this work, we may denote F (x; θ) as F (x) or F for simplicity. We denote by W(l) ∈ Rdc′×dc×dh×dw the weight tensor of a convolutional layer. To do pruning, we apply a mask M(l) ∈ {0, 1}dc′×dc×dh×dw starting with M(l) = 1dc′×dc×dh×dw in each layer. Pruning neurons on the network refers to getting a collection of indices I = {(l, k)i}Ii=1 and setting M(l)k = 0dc×dh×dw if (l, k) ∈ I. The pruned network F−I has the same architecture as F but with all the weight matrices of convolutional layers set to W(l) ⊙M(l), where ⊙ denotes the Hadamard product. 3.2 Differential entropy To measure the uncertainty of a discrete random variable Z, the entropy [36, 8] was defined as H(Z) = − ∑ z∈Z p(z) log p(z). At the same time, as an extension of entropy, the differential entropy was also introduced for a continuous random variable. More concretely, if Z is a continuous random variable, then it was defined as h(Z) = − ∫ Z p(z) log p(z)dz. (1) An important fact about the differential entropy is that, among all the real-valued distributions supported on (−∞,∞) with a specified finite variance, the Gaussian distribution maximizes the differential entropy [8]. In this work, the differential entropy (1) will be utilized to identify the distributions that are far different from a Gaussian distribution. 3.3 Backdoor neurons It was found that there exist one or more neurons that contribute the most to the backdoor behaviors in a infected model [41, 27]. If some of or all of these neurons are pruned, the attack success rate will be reduced greatly [41]. In this work, to better quantify the importance of neurons to backdoor behaviors, we would like to introduce the sensitivity of neurons to the backdoor. We first introduce the definition of backdoor loss on a specific poisoning function: Definition 1. 
Given a model F and a poisoning function δ, the backdoor loss on a dataset D is defined as: Lbd(f) = E(x,y)∼D[DCE(y, f(δ(x))], where DCE denotes the cross entropy loss. Then: Definition 2. Given a model F , the index of a neuron (l, k) and the backdoor loss Lbd, the sensitivity of that neuron to the backdoor is defined as: α(F, l, k) = Lbd(F )− Lbd(F−{(l,k)}), (2) where F−{(l,k)} is the network after pruning the k-th neuron of the l-th layer. The backdoor loss is high when the model is infected, and will be reduced when the backdoor effect is alleviated. Using the quantity defined in (2), we are now able to find the neurons that are mostly correlated with the backdoor behaviors: Definition 3. Given a model F and a threshold τ > 0, the set of backdoor neurons are defined as: BF,τ = {(l, k) : α(F, l, k) > τ}. (3) 3.4 Pre-activation distribution During the forward propagation of an input x, we denote x(l) = F (l)(x) ∈ Rd(l)c ×d (l) h ×d (l) w as the output of the l-th layer. For the k-th neuron of the l-th layer, the pre-activation ϕ(l)k = ϕ(x (l) k ) is defined as the maximum value of the k-th slice matrix of dimension d(l)h × d (l) w in x(l). The reason we choose pre-activations instead of activations is that the distribution of activations after non-linear function might be distorted. For example, ReLU (rectified linear unit) will cut out all negative values. It is a common assumption that, for every neuron, the pre-activations before the non-linear function follow a Gaussian distribution, if the network is randomly initialized and the number of neurons is large enough [20]. This assumption is based on the central limit theorem under weak dependence [3]. In a trained network, although this assumption may not strictly hold, the pre-activation of every neuron can be still regarded as approximately following a Gaussian distribution. However, in this work, for the first time, we observe a bimodal pre-activation distribution in backdoor neurons formed by the benign data and poisoned data. This phenomenon is shown in Figure 2, where a typical backdoor neuron is compared with the benign neurons. It can be seen that, after the model is infected, the pre-activation distributions of benign neurons hardly change when the data is poisoned, while the pre-activation distributions of backdoor neurons become significantly different. 4 Methodology 4.1 Basic assumptions We now introduce two preliminarily assumptions on the pre-activation distribution of an infected neural network. Assumption 1. Given an infected model F , we have |BF,τ | > 0 for some threshold τ > 0. This is a primary assumption that guarantees proper pruning of neurons can correct the network’s predictions on poisoned samples to some extent. Hence, it is a prerequisite of good performance of all the pruning-based defense methods. The next assumption provides a precondition for our methods: Assumption 2. Given a model F infected by a poisoning function δ with a ρ-poisoned dataset D, the pre-activation of sample from D on each single neuron of F follow a Gaussian mixture distribution, that is: ϕ (l) k ∼ (1− ρ)N (µ (l) k , σ (l)2 k ) + ρN (µ̂ (l) k , σ̂ (l)2 k ), with |µ(l)k − µ̂ (l) k | { < ϵ, if (l, k) /∈ BF,τ , ≫ ϵ, if (l, k) ∈ BF,τ , and |σ(l)2k − σ̂ (l)2 k | < ϵ, ∀k /∈ BF,τ where ϵ > 0 is a small enough value, µ(l)k and σ (l)2 k , µ̂ (l) k and σ̂ (l)2 k are the mean and variance of {ϕ(F (l)(x)k) : x ∼ X}, {ϕ(F (l)(δ(x))k) : x ∼ X}, respectively. 
This assumes that the mean value of pre-activation distribution of benign and poisoned samples only significantly differentiated in backdoor neurons. This assumption is made based on empirical observation, and our methods work only when this assumption holds. 4.2 Entropy-based pruning (EP) Based on the given assumptions, standardizing the pre-activation distributions (by subtracting the mean and dividing by the standard deviation) will maximize the differential entropy in benign neurons, which approximately follow a standard Gaussian distribution (N (0, 1)). However, in backdoor neurons, the mixture distributions resulting from differences in the moments of Gaussian components cannot be Gaussian distributions, leading to a smaller differential entropy than that of a standardized Gaussian distribution. Corollary 1. Let ϕ̇(l)k = ϕ (l) k −µ (l) k σ (l) k be the standardized pre-activations, then the following inequality is satisfied: h(ϕ̇ (l) k ) < h(ϕ̇ (l) k′ ) ≤ h(Z), ∀k ∈ BF,τ , k ′ /∈ BF,τ , where Z ∼ N (0, 1) is the standardized Gaussian distribution. This inequality gives a guarantee that with an appropriately chosen threshold, the backdoor neurons can be well separated with the benign neurons. 4.3 BN statistics-based pruning (BNP) BN layer involves using the statistics of a mini-batch to normalize the data in each layer for each neuron. It is known to be able to smooth the optimization landscape, and has gradually become a common setting of neural networks [19]. During inference, BN uses the fixed statistics obtained by averaging the sample statistics of mini-batches during training time, including the mean and the variance. If the model is trained on a poisoned dataset, BN will record the mean and the variance of the poison-benign mixed data. Note that the mean and variance here are not defined on the pre-activations ϕ(l)k , but on x (l) k . Based on the above discussions, we know that the poisoned preactivations in backdoor neurons follow a different distribution from the benign samples. The recorded statistics during training are actually that of the mixture distribution. Hence, we can expect that the BN statistics of a trained backdoor neural network are biased. If we are able to access a small set of benign data, we can calculate an approximation of the true statistics on benign data. Then we calculate the Kullback-Leibler (KL) divergence [9] between the sample distribution and the BN induced distribution as the measurement of the bias. By assuming both of the distributions follow Gaussian distributions, we have a closed-form solution: DKL(N (l)sample,N (l) BN)k = log σ̃ (l) k σ (l) k + σ (l)2 k + (µ (l) k − µ̃ (l) k ) 2 2σ̃ (l)2 k − 1 2 , where N (l)sample = N (µ (l) k , σ (l)2 k ), N (l) BN = N (µ̃ (l) k , σ̃ (l)2 k ), µ (l) k and σ (l)2 k are the statistics obtained from benign samples, µ̃(l)2k and σ̃ (l) k are the BN statistics. Note that BN statistics is the mixture statistics of benign and poisoned distributions. Thus, we have the following corollary: Corollary 2. According to Assumption 2, when ϵ → 0, the following inequality is satisfied: DKL(N (l)sample,N (l) BN)k > DKL(N (l) sample,N (l) BN)k′ = 0, ∀k ∈ BF,τ , k ′ /∈ BF,τ , The corollary indicates that backdoor neurons should have larger KL divergences than benign neurons, as illustrated in Figure 2(c). 4.4 Overview of the two pruning strategies In Section 3.4, we reveal the discrepancy between the pre-activation distributions in backdoor neurons and that in the benign neurons. 
This enables fast detecting the neurons that are more related to the backdoor behaviours. The index we choose to detect the abnormal neurons depends on what kind of data we are able to access. Mixture training data In this case, the victim is given a poisoned training dataset with a specified poisoning rate ρ. Our goal is to obtain a benign model based on the poisoned dataset. To achieve this, we first train an infected model F on the poisoned dataset. The resulting model should have a certain number of backdoor neurons based on empirical observation and assumption. Since ρ > 0 for the dataset, all the neurons follow Gaussian mixture distributions, and we have h(ẋ(l)k ) < h(ẋ (l) k′ ) for all k ∈ BF,τ , k′ /∈ BF,τ . This implies that with an appropriate threshold τ∗h , we can perfectly separate the benign neurons and backdoor neurons, which can be formulated as: ∃τ∗h , h(ẋ (l) k ) < τ ∗ h , ∀k ∈ BF,τ , h(ẋ (l) k′ ) > τ ∗ h , ∀k′ /∈ BF,τ . Setting the threshold τ∗h is crucial to the solution, and it is a trade-off between the accuracy on benign samples and that on the backdoored samples. Note that |B(l)F,τ | << d (l) c . We can treat the low entropy neurons as outliers in each layer, and set different thresholds for different layers. Specifically, let h(l) = [h(x(l)1 ), h(x (l) 2 ), . . . , h(x (l) d (l) c )]T ∈ Rd(l)c be a vector of differential entropy of the l-th layer calculated from the poisoned dataset. Then we set τ (l)h = h̄ (l) − uh · s(l)h , where h̄(l) = 1 d (l) c ∑d(l)c k=1 h (l) k and s (l) h = √ 1 d (l) c ∑d(l)c k=1(h (l) k − h̄(l))2 are the mean and standard deviation of h(l), uh is a hyperparameter controlling how low the threshold is. Then we have a set of indices of potential backdoor neurons Ih = {(l, k) : h(l)k < τ (l) h }. Finally, we prune the infected model F using Ih, which results in a final model F−Ih . Benign training data This is the case that the victim is given a trained poisoned model F with a small set of benign data. Our goal is to utilize the benign data to clean up the poisoned model and eliminate the backdoor threat. Similar to the pruning process based on differential entropy, we first construct a vector of KL divergences of all neurons for each layer K(l) = [K(l)1 ,K (l) 2 , . . . ,K (l) d (l) c ]T ∈ Rd(l)c according to equation (4). We set τ (l)K = K̄(l) + uK · s (l) K , where K̄ (l) = 1 d (l) c ∑d(l)c k=1 K (l) k and s(l)K = √ 1 d (l) c ∑d(l)c k=1(K (l) k − K̄(l))2 are the mean and standard deviation of K(l), uK is a hyperparameter. The set of selected neurons is IK = {(l, k) : K(l)k > τ (l) K } and the pruned model can be represented as F−IK . 5 Experiments 5.1 Implementation details Datasets In this section, the experiments are conducted on two influential benchmarks, CIFAR-10 [21] and Tiny-ImageNet [22]. We use 90% of the data set for training, the rest of the data is used for validating or recovering the poisoned model. Models We use ResNet-18 [16] as the baseline model to evaluate our proposed method, and compare it with other methods. We train the network for 150 epochs on CIFAR-10 and 100 epochs on Tiny-ImageNet with SGD optimizer. The initial learning rate is set to 0.1 and the momentum is set to 0.9. We adopt the cosine learning rate scheduler to adjust the learning rate. The batch size is set to 128 by default. 
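To make the two detection rules from Section 4 concrete, the following is a minimal sketch in PyTorch of the per-neuron indices and thresholds described above (the standardized differential entropy with threshold h̄ − u_h·s_h for EP, and the sample-vs-BN KL divergence with threshold K̄ + u_K·s_K for BNP). It is an illustration under the stated assumptions rather than the exact implementation; how the pre-activations and benign features are collected (here passed in as plain tensors) is a simplification.

```python
import torch

def standardized_entropy(z, bins=512, lim=10.0):
    # Histogram estimate of the differential entropy of the standardized samples z.
    z = (z - z.mean()) / (z.std() + 1e-12)
    hist = torch.histc(z, bins=bins, min=-lim, max=lim)
    p = hist / hist.sum()
    width = 2 * lim / bins
    nz = p > 0
    return -(p[nz] * torch.log(p[nz] / width)).sum()

def ep_select(preacts, u_h=3.0):
    # preacts: [num_samples, num_neurons] pre-activations of one layer on the
    # (possibly poisoned) training set. Low-entropy neurons are flagged.
    h = torch.stack([standardized_entropy(preacts[:, k]) for k in range(preacts.shape[1])])
    tau = h.mean() - u_h * h.std()
    return (h < tau).nonzero(as_tuple=True)[0]

def bnp_select(bn, feats, u_k=3.0):
    # bn: a trained nn.BatchNorm2d layer; feats: benign inputs to it, [N, C, H, W].
    # Per-channel KL(N(mu, var) || N(mu_bn, var_bn)); large values are flagged.
    mu, var = feats.mean(dim=(0, 2, 3)), feats.var(dim=(0, 2, 3))
    mu_bn, var_bn = bn.running_mean, bn.running_var
    kl = 0.5 * torch.log(var_bn / var) + (var + (mu - mu_bn) ** 2) / (2 * var_bn) - 0.5
    tau = kl.mean() + u_k * kl.std()
    return (kl > tau).nonzero(as_tuple=True)[0]
```

The flagged indices can then be pruned by zeroing the corresponding channel weights, as described in Section 3.1.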
Attacks Our experiments are based on both the classical and the most advanced attack strategies, including the BadNet [14], Clean Label Attack (CLA) [39], Reflection Backdoor (Refool) [30], Warping-based poisoned Networks [33], Blended backdoor attack (Blended) [6], Input-aware backdoor attack (IAB) [32] and Sample Specific Backdoor Attack (SSBA) [26]. For BadNets, we test both the All-to-All (A2A) attack and All-to-One (A2O) attack, i.e., the attack target labels are set to yt = (y + 1) mod C, or one particular label yt = Ct, respectively. The target for A2O attacks of all the attack strategies is set to class 0. The triggers for BadNets and CLA are set to randomly generated patterns with size 3×3 for CIFAR-10 and 5×5 for Tiny-ImageNet. The poisoning rate is set to 10% by default. Note that, due to the image size restraint, SSBA is only performed on Tiny-Imagenet. Defenses We conduct experiments under two defense settings, one of which allows the defender to access the poisoned training set, while the other only has a small clean data set. Both the defense goals are to obtain a clean model without backdoor behaviors. We compare our approaches with the l∞ pruning [7], fine-tuning (FT), fine-pruning (FP) [27], neural attention distillation (NAD) [23] and adversarial neuron pruning (ANP) [41]. The number of benign samples allowed to access is set to 500 (1%) for CIFAR-10 and 5000 (5%) for Tiny-ImageNet by default. We set the threshold hyperparameter uh/uk to 3 on CIFAR-10 and 4 on Tiny-ImageNet of all tested attacks by default. Evaluation metrics In this work, we use the clean accuracy (ACC) and attack success rate (ASR) to evaluate the effectiveness of different methods. The ACC for a given model F is defined as: ACC(F,Dtest) = ∑ (x,y)∈Dtest I{argmax(F (x)) = y}, where I is the indicator function. The ASR is defined as: ASR(F,Dtest) = ∑ (x,y)∈Dtest,y ̸=yt I{argmax(F (δ(x))) = yt}, where yt is the attack target label. The ACC measures the model performance on benign samples, while the ASR reflects the degree of backdoor behavior retainment in the model. Given an infected model, our goal is to reduce the ASR, while keeping the ACC from dropping too much. 5.2 Experimental results CIFAR-10 We show the results on CIFAR-10 in Table 1. The recently proposed NAD and ANP perform significantly better than other defense methods, reducing the ASR to a very low level with a slight drop on ACC. However, they also have a significant drop (3 ∼ 4%) on ACC when defending CLA, which is the most robust backdoor attack in our experiments, and ANP even failed when defending BadNets(A2A). Nevertheless, both of our methods successfully eliminate the backdoor (ASR < 1%) with negligible loss on ACC. We even observe a little rise on ACC when defending BadNets by EP. This phenomenon demonstrates that backdoor neurons may hurt the ACC in some way, and thus the ACC will rise when the backdoor neurons are precisely pruned. Overall, our methods achieve the most advanced defense results. Tiny-ImageNet Tiny-ImageNet is a larger scale dataset with higher resolution images, and it is harder to defend against the attacks performed on it. Note that the A2A attack is absent, since we cannot successfully perform the attack due to the large number of its classes (up to 200). Our experimental results show that all the defense methods suffer from the performance degradation compared with the results in CIFAR-10, and they fail to defend against WaNet with a large ACC drop but even unchanged ASR, especially the ANP and l∞ defense. 
This phenomenon shows that the principles for finding backdoor neurons in both ANP and l∞ do not work in this case. Nevertheless, our methods completely remove the backdoor and the ACC is not affected at all, which indicates that our methods can precisely locate the backdoor neurons even on such a large-scale dataset. 5.3 Ablation study To be fair, we compare BNP with other re-training based methods using 500 benign samples in Section 5.2. However, BNP does not require re-training the model, and the samples are only used for detecting the distribution discrepancy. As the statistical differences may be detected with far fewer samples, we now study how the number of samples affects the effectiveness of BNP. We train BadNets, CLA, Refool and Blended on CIFAR-10 with ρ = 10%, and use 10 to 500 benign samples to recover the model using BNP. We record the changes of ACC and ASR with respect to the number of benign samples. The results are shown in Section 5.3. The influence of the number of samples on our methods comes from the randomness in estimating the moments. As the number of samples grows, the randomness is reduced and BNP has more stable performance, but the average performance is not improved, except for Refool. Compared with the other attacks, Refool clearly needs more samples to reduce the ASR. A possible reason is that the mixture distribution under Refool has closer moments and is harder to distinguish. Besides, we find, somewhat surprisingly, that BNP can recover BadNets, CLA and Blended using only 10 benign samples. We also conduct experiments to show the high correlation between the backdoor neurons and our proposed evaluation metrics, and the results are shown in Appendix D. 6 Discussion The proposed methods are superior to other existing defense methods in the following three aspects: Better performance As demonstrated in Section 5, both of the proposed methods achieve state-of-the-art results. Moreover, according to the ablation study, the proposed BNP can successfully defend against most of the attacks with as few as 10 benign samples, which demonstrates the remarkable effectiveness of our proposed methods. Higher efficiency The proposed methods are highly efficient. We record the running time of several defense methods on 500 CIFAR-10 images with ResNet-18, and show the results in Table 3. It can be seen that both of the proposed methods require less time than the baseline defense methods. Since both methods only need to scan each neuron once, the computational complexity scales linearly with the number of neurons in the neural network, so the efficiency of our methods is guaranteed. More robust to hyperparameter choice One of the most general problems in backdoor defense is the choice of hyperparameters. Under realistic settings, the defenders can only perform defenses without any prior knowledge about the poisoned data, including the poisoning rate and examples of poisoned data. The defenders therefore have to tune the hyperparameters carefully, or the ACC and ASR can change abruptly even under small fluctuations of those hyperparameters. In comparison, both of the proposed pruning strategies only require one universal hyperparameter u. Moreover, they show reliable consistency across different attacks on the same dataset and only vary across datasets, which is inevitable. Besides, a wide range of parameter values keeps the ACC high while controlling the ASR to a very small number, as shown in Section 5.3.
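For reference, the ACC and ASR metrics used throughout Section 5 can be computed as in the following minimal sketch; apply_trigger is a hypothetical placeholder for the attack-specific poisoning function δ, and the two quantities are reported in their usual normalized form.

```python
import torch

@torch.no_grad()
def acc_and_asr(model, loader, apply_trigger, target_label, device="cuda"):
    model.eval()
    n, n_correct, n_bd, n_hit = 0, 0, 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        n_correct += (model(x).argmax(dim=1) == y).sum().item()  # clean predictions
        n += y.numel()
        keep = y != target_label  # ASR is measured on samples not already of the target class
        if keep.any():
            pred_bd = model(apply_trigger(x[keep])).argmax(dim=1)
            n_hit += (pred_bd == target_label).sum().item()
            n_bd += int(keep.sum())
    return n_correct / n, n_hit / max(n_bd, 1)  # (ACC, ASR)
```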
7 Conclusion In this work, we inspect the characteristics of an infected model and find that backdoor-sensitive neurons are distinguishable by their pre-activations on the poisoned dataset. Specifically, in backdoor neurons, the pre-activations from benign data and poisoned data form distributions with markedly different moments. This property makes it possible for defenders to efficiently locate potential backdoor neurons based on the distributional properties of the pre-activations. When direct access to the poisoned dataset is available, we propose to measure the mixture property of the pre-activations via differential entropy to detect potential backdoor neurons. In the other case, where defenders only have access to a benign dataset, we propose to check the abnormality of the pre-activation distribution based on the inconsistency between the recorded BN statistics and the sample statistics on the given benign dataset. We then prune the potential backdoor neurons to recover the model. Experiments show that the proposed defense strategies can efficiently locate the backdoor neurons and greatly reduce the backdoor threat with negligible loss of clean accuracy. Our approaches achieve superior results compared with all other defense methods under various attacks on the tested datasets. The results shed light on the field of backdoor defense and can serve as guidance for designing more robust backdoor attacks. 8 Acknowledgement This work is supported by the National Natural Science Foundation of China (No. 62101351), the GuangDong Basic and Applied Basic Research Foundation (No. 2020A1515110376), the Shenzhen Outstanding Scientific and Technological Innovation Talents PhD Startup Project (No. RCBS20210609104447108), the Key-Area Research and Development Program of Guangdong Province (2020B0101350001), and the Chinese University of Hong Kong (Shenzhen).
1. What are the strengths and weaknesses of the paper regarding its claims and contributions to the field of backdoor defense? 2. Are there any over-claims or missing related works in the paper that need to be addressed? 3. How does the reviewer assess the experiment parts of the paper, particularly in terms of hyperparameter selection and fair comparison among pruning-based defenses? 4. Does the paper adequately discuss the resistance of the proposed methods to adaptive attacks and their potential limitations? 5. Are there any minor comments or suggestions for improvement that the reviewer has regarding the paper's content or presentation?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper reveals that poisoned neurons and benign neurons have different behaviors, measured by the entropy of neuron activations and the BN statistics. Motivated by these understandings, the authors propose DDE and MBNS targeting the defense of poison-only backdoor attacks and attacked pre-trained models, respectively. The proposed methods are evaluated on both CIFAR-10 and Tiny-ImageNet datasets under seven attacks. Strengths And Weaknesses Pros The topic is of sufficient significance and interest to NeurIPS audiences. The authors discuss the resistance to potential adaptive attacks, which should be encouraged. In general, the idea is easy to follow. The authors discuss the efficiency of different defenses, which should be encouraged. In general, I enjoy reading this paper. However, I still have some concerns about this paper. I will increase my scores if the authors can (partly) address my concerns. The detailed comments are as follows: Cons Some statements need further support or modifications to avoid over-claim. ‘we claim that both the proposed indices can perfectly separate...’ (Line 60-62, Page 2): This statement should be modified. This claim can be made only if your methods have solid theoretical foundations. ‘there was no such quantity defined in the literature to measure...’ (Line 139-140, Page 4): This is an over-claim. Specifically, all existing pruning-based defenses provided a quantity (e.g., activation values on benign samples) to measure it. Missing some important related works. No discussion about pre-processing-based backdoor defenses (e.g., [1-3]) Missing two poison-suppression-based backdoor defenses [4, 5]. My biggest concern is about the experiment parts. How to select the hyper-parameter u? Do you use the same u under different attacks on all datasets? To ensure a fair comparison among all pruning-based defenses, I would like to see the results (i.e., ACC and ASR) with respect to the pruning ratio. It is necessary to verify whether the proposed methods are truly better in finding malicious neurons. I am interested in whether the proposed defenses are truly resistant to adaptive methods since only poison-only backdoor attacks are used in the evaluations. The author should also test their methods on some training-controlled attacks (e.g., [6, 7]), which aim to reduce the difference between poisoned samples and benign samples in the hidden feature space. Minor Comments The ‘we will’ should be ‘we’ in Line 36 (Page 1). There are two typos in Table 2: ‘0.1’-->‘0.10’, ‘64.2’-->‘64.20’. Please recheck and cite the official version of all references (e.g., [8-9]). References DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation. AsiaCCS, 2021. Backdoor Attack in the Physical World. ICLR Workshop, 2021. Neural Trojans. ICCD, 2017. Backdoor Defense via Decoupling the Training Process. ICLR, 2022. Anti-Backdoor Learning: Training Clean Models on Poisoned Data. NeurIPS, 2021. Backdoor Attack with Imperceptible Input and Latent Modification. NeurIPS, 2021. Bypassing Backdoor Detection Algorithms in Deep Learning. EuroS&P, 2020. Backdoor Learning: A Survey. IEEE TNNLS, 2022. Backdoor Scanning for Deep Neural Networks through K-Arm Optimization. ICML, 2021. Questions Cons Some statements need further support or modifications to avoid over-claim. ‘we claim that both the proposed indices can perfectly separate...’ (Line 60-62, Page 2): This statement should be modified. 
This claim can be made only if your methods have solid theoretical foundations. ‘there was no such quantity defined in the literature to measure...’ (Line 139-140, Page 4): This is an over-claim. Specifically, all existing pruning-based defenses provided a quantity (e.g., activation values on benign samples) to measure it. Missing some important related works. No discussion about pre-processing-based backdoor defenses (e.g., [1-3]) Missing two poison-suppression-based backdoor defenses [4, 5]. My biggest concern is about the experiment parts. How to select the hyper-parameter u? Do you use the same u under different attacks on all datasets? To ensure a fair comparison among all pruning-based defenses, I would like to see the results (i.e., ACC and ASR) with respect to the pruning ratio. It is necessary to verify whether the proposed methods are truly better in finding malicious neurons. I am interested in whether the proposed defenses are truly resistant to adaptive methods since only poison-only backdoor attacks are used in the evaluations. The author should also test their methods on some training-controlled attacks (e.g., [6, 7]), which aim to reduce the difference between poisoned samples and benign samples in the hidden feature space. Minor Comments The ‘we will’ should be ‘we’ in Line 36 (Page 1). There are two typos in Table 2: ‘0.1’-->‘0.10’, ‘64.2’-->‘64.20’. Please recheck and cite the official version of all references (e.g., [8-9]). Limitations The authors fail to adequately mention the limitations and potential negative societal impact of their work. Please discuss them explicitly.
NIPS
Title Do Neural Optimal Transport Solvers Work? A Continuous Wasserstein-2 Benchmark Abstract Despite the recent popularity of neural network-based solvers for optimal transport (OT), there is no standard quantitative way to evaluate their performance. In this paper, we address this issue for quadratic-cost transport—specifically, computation of the Wasserstein-2 distance, a commonly-used formulation of optimal transport in machine learning. To overcome the challenge of computing ground truth transport maps between continuous measures needed to assess these solvers, we use input-convex neural networks (ICNN) to construct pairs of measures whose ground truth OT maps can be obtained analytically. This strategy yields pairs of continuous benchmark measures in high-dimensional spaces such as spaces of images. We thoroughly evaluate existing optimal transport solvers using these benchmark measures. Even though these solvers perform well in downstream tasks, many do not faithfully recover optimal transport maps. To investigate the cause of this discrepancy, we further test the solvers in a setting of image generation. Our study reveals crucial limitations of existing solvers and shows that increased OT accuracy does not necessarily correlate to better results downstream. Solving optimal transport (OT) with continuous methods has become widespread in machine learning, including methods for large-scale OT [11, 36] and the popular Wasserstein Generative Adversarial Network (W-GAN) [3, 12]. Rather than discretizing the problem [31], continuous OT algorithms use neural networks or kernel expansions to estimate transport maps or dual solutions. This helps scale OT to large-scale and higher-dimensional problems not handled by discrete methods. Notable successes of continuous OT are in generative modeling [42, 20, 19, 7] and domain adaptation [43, 37, 25]. In these applications, OT is typically incorporated as part of the loss terms for a neural network model. For example, in W-GANs, the OT cost is used as a loss function for the generator; the model incorporates a neural network-based OT solver to estimate the loss. Although recent W-GANs provide state-of-the-art generative performance, however, it remains unclear to which extent this
success is connected to OT. For example, [28, 32, 38] show that popular solvers for the Wasserstein-1 (W1) distance in GANs fail to estimate W1 accurately. While W-GANs were initially introduced with W1 in [3], state-of-the art solvers now use both W1 and W2 (the Wasserstein-2 distance, i.e., OT with the quadratic cost). While their experimental performance on GANs is similar, W2 solvers tend to converge faster (see [19, Table 4]) with better theoretical guarantees [19, 26, 16]. Contributions. In this paper, we develop a generic methodology for evaluating continuous quadraticcost OT solvers (W2). Our main contributions are as follows: • We use input-convex neural networks (ICNNs [2]) to construct pairs of continuous measures that we use as a benchmark with analytically-known solutions for quadratic-cost OT (M3, M4.1). • We use these benchmark measures to evaluate popular quadratic-cost OT solvers in highdimensional spaces (M4.3), including the image space of 64ˆ 64 CelebA faces (M4.4). • We evaluate the performance of these OT solvers as a loss in generative modeling of images (M4.5). Our experiments show that some OT solvers exhibit moderate error even in small dimensions (M4.3), performing similarly to trivial baselines (M4.2). The most successful solvers are those using parametrization via ICNNs. Surprisingly, however, solvers that faithfully recover W2 maps across dimensions struggle to achieve state-of-the-art performance in generative modeling. Our benchmark measures can be used to evaluate future W2 solvers in high-dimensional spaces, a crucial step to improve the transparency and replicability of continuous OT research. Note the benchmark from [35] does not fulfill this purpose, since it is designed to test discrete OT methods and uses discrete low-dimensional measures with limited support. Notation. We use P2pRDq to denote the set of Borel probability measures on RD with finite second moment and P2,acpRDq to denote its subset of absolutely continuous probability measures. We denote by ΠpP,Qq the set of the set of probability measures on RD ˆ RD with marginals P and Q. For some measurable map T : RD Ñ RD, we denote by T 7 the associated push-forward operator. For φ : RD Ñ R, we denote by φ its Legendre-Fenchel transform [10] defined by φpyq “ maxxPRD rxx, yy ´ φpxqs. Recall that φ is a convex function, even when φ is not. 1 Background on Optimal Transport We start by stating the definition and some properties of optimal transport with quadratic cost. We refer the reader to [34, Chapter 1] for formal statements and proofs. Primal formulation. For P,Q P P2pRDq, Monge’s primal formulation of the squared Wasserstein-2 distance, i.e., OT with quadratic cost, is given by W22pP,Qq def“ min T 7P“Q ż RD }x´ T pxq}2 2 dPpxq, (1) where the minimum is taken over measurable functions (transport maps) T : RD Ñ RD mapping P to Q. The optimal T˚ is called the optimal transport map (OT map). Note that (1) is not symmetric, and this formulation does not allow for mass splitting, i.e., for some P,Q P P2pRDq, there is no map T that satisfies T 7P “ Q. Thus, Kantorovich proposed the following relaxation [14]: W22pP,Qq def“ min πPΠpP,Qq ż RDˆRD }x´ y}2 2 dπpx, yq, (2) where the minimum is taken over all transport plans π, i.e., measures on RD ˆ RD whose marginals are P and Q. The optimal π˚ P ΠpP,Qq is called the optimal transport plan (OT plan). If π˚ is of the form ridRD , T˚s7P P ΠpP,Qq for some T˚, then T˚ is the minimizer of (1). Dual formulation. 
For P,Q P P2pRDq, the dual formulation of W22 is given by [40]: W22pP,Qq “ max f‘gď 12 }¨}2 „ ż RD fpxqdPpxq ` ż RD gpyqdQpyq , (3) where the maximum is taken over all f P L1pP,RD Ñ Rq and g P L1pQ,RD Ñ Rq satisfying fpxq ` gpyq ď 12}x´ y} 2 for all x, y P RD. From the optimal dual potential f˚, we can recover the optimal transport plan T˚pxq “ x´∇f˚pxq [34, Theorem 1.17]. The optimal f˚, g˚ satisfy pf˚qc “ g˚ and pg˚qc “ f˚, where uc : RD Ñ R is the c´transform of u defined by ucpyq “ minxPRD “ 1{2}x´ y}2 ´ upxq ‰ . We can rewrite (3) as W22pP,Qq “ max f „ ż RD fpxqdPpxq ` ż RD f cpyqdQpyq , (4) where the maximum is taken over all f P L1pP,RD Ñ Rq. Since f˚ and g˚ are each other’s c-transforms, they are both c-concave [34, M1.6], which is equivalent to saying that functions ψ˚ : x ÞÑ 12}x} 2 ´ f˚pxq and φ˚ : x ÞÑ 12}x} 2 ´ g˚pxq are convex [34, Proposition 1.21]. In particular, ψ˚ “ φ˚ and φ˚ “ ψ˚. Since T˚pxq “ x´∇f˚pxq “ ∇ ˆ }x}2 2 ´ f˚pxq ˙ “ ∇ψ˚, (5) we see that the OT maps are gradients of convex functions, a fact known as Brenier’s theorem [6]. “Solving” optimal transport problems. In applications, for given P,Q P P2pRDq, the W2 optimal transport problem is typically considered in the following three similar but not equivalent tasks: • Evaluating W22pP,Qq. The Wasserstein-2 distance is a geometrically meaningful way to compare probability measures, providing a metric on P2pRDq. • Computing the optimal map T˚ or plan π˚. The map T˚ provides an intuitive way to interpolate between measures. It is often used as a generative map between measures in problems like domain adaptation [36, 43] and image style transfer [16]. • Using the gradient BW22pPα,Qq{Bα to update generative models. Derivatives of W22 are used implicitly in generative modeling that incorporates W2 loss [19, 33], in which case P “ Pα is a parametric measure and Q is the data measure. Typically, Pα “ Gα7S is the measure generated from a fixed latent measure S by a parameterized function Gα, e.g., a neural network. The goal is to find parameters α that minimize W22pPα,Qq via gradient descent. In the generative model setting, by definition of the pushforward Pα “ Gα7S, we have W22pPα,Qq “ ż z f˚pGαpzqqdSpzq ` ż RD g˚pyqdQpyq, where f˚ and g˚ are the optimal dual potentials. At each generator training step, f˚ and g˚ are fixed so that when we take the gradient with respect to α, by applying the chain rule we have: BW22pPα,Qq Bα “ ż z JαGαpzqT∇f˚ ` Gαpzq ˘ dSpzq, (6) where JαGαpzqT is the transpose of the Jacobian matrix of Gαpzq w.r.t. parameters α. This result still holds without assuming the potentials are fixed by the envelope theorem [29]. To capture the gradient, we need a good estimate of ∇f˚ “ idRD ´T˚ by (5). This task is somewhat different from computing the OT map T˚: since the estimate of ∇f˚ is only involved in the gradient update for the generator, it is allowed to differ while still resulting in a good generative model. We will use the generic phrase OT solver to refer to a method for solving any of the tasks above. Quantitative evaluation of OT solvers. For discrete OT methods, a benchmark dataset [35] exists but the mechanism for producing the dataset does not extend to continuous OT. Existing continuous solvers are typically evaluated on a set of self-generated examples or tested in generative models without evaluating its actual OT performance. 
Two kinds of metrics are often used: Direct metrics compare the computed transport map T̂ with the true one T˚, e.g., by using L2 Unexplained Variance Percentage (L2-UVP) metric [16, M5.1], [17, M5]. There are relatively few direct metrics available, since the number of examples of P,Q with known ground truth T˚ is small: it is known that T˚ can be analytically derived or explicitly computed in the discrete case [31, M3], 1-dimensional case [31, M2.6], and Gaussian/location-scatter cases [1]. Indirect metrics use an OT solver as a component in a larger pipeline, using end-to-end performance as a proxy for solver quality. For example, in generative modeling where OT is used as the generator loss [19, 27], the quality of the generator can be assessed through metrics for GANs, such as the Fréchet Inception distance (FID) [13]. Indirect metrics do not provide clear understanding about the quality of the solver itself, since they depend on components of the model that are not related to OT. 2 Continuous Dual Solvers for Quadratic Cost Transport While our benchmark might be used to test any continuous solver which computes map T˚ or gradient ∇f˚, in this paper, we perform evaluation only on dual-form continuous solvers based on (3) or (4). Such solvers have straightforward optimization procedures and can be adapted to various datasets without extensive hyperparameter search. In contrast, primal-form solvers based on (1), e.g., [18, 43, 21, 23], typically parameterize T˚ using complicated generative modeling techniques that depend on careful hyperparameter search and complex optimization procedures [24]. We summarize existing continuous dual form solvers in Table 1. These fit a parametric function fθ (or ψθ) to approximate f˚ (or ψ˚ “ idRD ´ f˚). The resulting fθ produces an approximate OT map idRD´∇fθ“∇ψθ « T˚ and derivative ∇fθ“ idRD´∇ψθ needed to update generative models (6). To our knowledge, none of these solvers has been quantitatively evaluated in a non-Gaussian setting. For tMMs, tMM-Bs, and tQCs, the quality of the recovered derivatives ∇f˚ for BW22pPα,Qq{Bα has only been evaluated implicitly through GAN metrics. Moreover, these three solvers have not been quantitatively evaluated on solving OT tasks. We now overview each solver from Table 1. tLSs optimizes an unconstrained regularized dual form of (3) [36]: max f,g „ ż RD fpxqdPpxq ` ż RD gpyqdQpyq ´Rpf, gq. (7) The entropic or quadratic regularizer R penalizes potentials f, g for violating the constraint f ‘ g ď 12} ¨ } 2 [36, M3]. In practice, f “ fθ and g “ gω are linear combinations of kernel functions [11] or neural networks [36]. The parameters θ, ω are obtained by applying stochastic gradient ascent (SGA) over random mini-batches sampled from P,Q. Most other solvers are based on an expansion of (4): W22pP,Qq “ max f ż RD fpxqdPpxq ` ż RD “fcpyq hkkkkkkkkkkkkkkkikkkkkkkkkkkkkkkj min xPRD „ 1 2 }x´ y}2 ´ fpxq dQpyq. (8) The challenge of (8) is the inner minimization over x P RD, i.e., evaluating f cpyq. The main difference between existing solvers is the procedure used to solve this inner problem. tMM-Bs uses a neural network fθ as the potential trained using mini-batch SGA [27]. To solve the inner problem, the authors restrict the minimization of x to the current mini-batch from P instead of RD. The strategy is fast but leads to overestimation of the inner problem’s solution since the minimum is taken over a restricted subset. tMM-v1s exploits the property that f˚ “ 12} ¨ } 2 ´ ψ˚, where ψ˚ is convex [39]. 
The authors parametrize fθ “ 12} ¨ } 2 ´ ψθ, where ψθ is an input convex neural network (ICNN) [2]. Hence, for every y P RD, the inner problem of (8) becomes convex in x. This problem can be solved using SGA to high precision, but doing so is computationally costly [16, MC.4]. tMMs uses a formulation equivalent to (8) [30]: W22pP,Qq “ max f ż RD fpxqdPpxq ` ż RD min H „ 1 2 }Hpyq ´ y}2 ´ fpHpyqq dQpyq, (9) where the minimization is performed over functions H : RD Ñ RD. The authors use neural networks fθ and Hω to parametrize the potential and the minimizer of the inner problem. To train θ, ω, the authors apply stochastic gradient ascent/descent (SGAD) over mini-batches from P,Q. tMMs is generic and can be modified to compute arbitrary transport costs and derivatives, not just W22, although the authors have tested only on the Wasserstein-1 (W1) distance. Similarly to tMMv1s, tMMv2s parametrizes fθ “ 12} ¨ } 2 ´ ψθ, where ψθ is an ICNN [26]. For a fixed fθ, the optimal solution H is given by H “ p∇ψθq´1 which is an inverse gradient of a convex function, so it is also a gradient of a convex function. Hence, the authors parametrize Hω “ ∇φω, where φω is an ICNN, and use tMMs to fit θ, ω. tW2s uses the same ICNN parametrization as [26] but introduces cycle-consistency regularization to avoid solving a maximin problem [16, M4]. Finally, we highlight the solver tQCs [19]. Similarly to tMM-Bs, a neural network fθ is used as the potential. When each pair of mini-batches txnu, tynu from P,Q is sampled, the authors solve a discrete OT problem to obtain dual variables tf˚n u, tg˚nu, which are used to regress fθpxnq onto f˚n . Gradient deviation. The solvers above optimize for potentials like fθ (or ψθ), but it is the gradient of fθ (or ψθ) that is used to recover the OT map via T “ x´∇fθ. Even if }f ´ f˚}2L2pPq is small, the difference }∇fθ ´∇f˚}2L2pPq may be arbitrarily large since ∇fθ is not directly involved in optimization process. We call this issue gradient deviation. This issue is only addressed formally for ICNN-based solvers tMMv1s, tMMv2s, tW2s [16, Theorem 4.1], [26, Theorem 3.6]. Reversed solvers. tMMs, tMMv2s, tW2s recover not only the forward OT map ∇ψθ « ∇ψ˚ “ T˚, but also the inverse, given by Hω « pT˚q´1 “ p∇ψ˚q´1 “ ∇ψ˚, see [26, M3] or [16, M4.1]. These solvers are asymmetric in P,Q and an alternative is to swap P and Q during training. We denote such reversed solvers by tMM:Rs, tMMv2:Rs, tW2:Rs. In M4 we show that surprisingly tMM:Rs works better in generative modeling than tMMs. 3 Benchmarking OT Solvers In this section, we develop a generic method to produce benchmark pairs, i.e., measures pP,Qq such that Q “ T 7P with sample access and an analytically known OT solution T˚ between them. Key idea. Our method is based on the fact that for a differentiable convex function ψ : RD Ñ R, its gradient ∇ψ is an optimal transport map between any P P P2,acpRDq and its pushforward ∇ψ7P by ∇ψ : RD Ñ RD. This follows from Brenier’s theorem [6], [41, Theorem 2.12]. Thus, for a continuous measure P with sample access and a known convex ψ, pP,∇ψ7Pq can be used as a benchmark pair. We sample from ∇ψ7P by drawing samples from P and pushing forward by ∇ψ. Arbitrary pairs pP,Qq. It is difficult to compute the exact continuous OT solution for an arbitrary pair pP,Qq. As a compromise, we compute an approximate transport map as the gradient of an ICNN using tW2s. That is, we find ψθ parameterized as an ICNN such that ∇ψθ7P « Q. Then, the modified pair pP,∇ψθ7Pq can be used to benchmark OT solvers. 
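To illustrate the key idea above — the gradient of a convex potential is, by Brenier's theorem, the exact OT map from P to its pushforward — the following sketch builds a toy benchmark pair. The hand-written convex potential psi is only a stand-in for the trained ICNNs (DenseICNN / ConvICNN64) used in the actual benchmark.

```python
import torch
import torch.nn.functional as F

D = 8
A = torch.randn(D, 16) / D ** 0.5  # fixed weights of a toy convex potential

def psi(x):
    # psi(x) = 0.5*||x||^2 + sum softplus(x A): convex, since softplus is convex and increasing.
    return 0.5 * (x ** 2).sum(dim=1) + F.softplus(x @ A).sum(dim=1)

def grad_psi(x):
    # T*(x) = grad psi(x) is the optimal (Brenier) map between P and (grad psi)#P.
    x = x.clone().requires_grad_(True)
    psi(x).sum().backward()
    return x.grad.detach()

x = torch.randn(4096, D)   # samples from P (here a standard Gaussian)
y = grad_psi(x)            # samples from Q = (grad psi)#P, whose OT map from P is known
```

A solver under evaluation is then given independent samples from P and Q, and the map it recovers can be compared against grad_psi on held-out samples from P.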
We choose tW2s because it exhibits good performance in higher dimensions, but other solvers can also be used so long as ψθ is convex. Because of the choice of tW2s, subsequent evaluation might slightly favor ICNN-based methods. Extensions. Convex functions can be modified to produce more benchmark pairs. If ψ1, . . . , ψN are convex, then σpψ1, . . . , ψN q is convex when σ : RN Ñ R is convex and monotone. For example, c ¨ ψ1 (c ě 0), ř n ψn, maxn ψn are convex, and their gradients produce new benchmark pairs. Inversion. If ∇ψθ is bijective, then the inverse transport map for pP,∇ψθ7Pq exists and is given by p∇ψθq´1. For each y P RD, the value p∇ψθq´1pyq can be obtained by solving a convex problem [39, M6], [16, M3]. All ICNNs ψθ we use have bijective gradients ∇ψθ, as detailed in Appendix B.1. 4 Benchmark Details and Results We implement our benchmark in PyTorch and provide the pre-trained transport maps for all the benchmark pairs. The code is publicly available at https://github.com/iamalexkorotin/Wasserstein2Benchmark. The experiments are conducted on 4 GTX 1080ti GPUs and require about 100 hours of computation (per GPU). We provide implementation details in Appendix B. 4.1 Datasets High-dimensional measures. We develop benchmark pairs to test whether the OT solvers can redistribute mass among modes of measures. For this purpose, we use Gaussian mixtures in dimensions D “ 2^1, 2^2, . . . , 2^8. In each dimension D, we consider a random mixture P of 3 Gaussians and two random mixtures Q1,Q2 of 10 Gaussians. We train approximate transport maps ∇ψi7P « Qi (i “ 1, 2) using the tW2s solver. Each potential is an ICNN with DenseICNN architecture [16, MB.2]. We create a benchmark pair via the half-sum of computed potentials pP, 12 p∇ψ1 `∇ψ2q7Pq. The first measure P is a mixture of 3 Gaussians and the second is obtained by averaging the potentials, which transforms it into an approximate mixture of 10 Gaussians. See Appendix A.1 and Figure 1 for details. Images. We use the aligned images of the CelebA64 faces dataset [22] (http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) to produce additional benchmark pairs. First, we fit 3 generative models (WGAN-QC [19]) on the dataset and pick intermediate training checkpoints to produce continuous measures QkEarly,QkMid,QkLate for the first 2 models (k “ 1, 2) and the final checkpoint of the third model (k “ 3) to produce measure P3Final. To make measures absolutely continuous, we add small Gaussian noise to the generator’s output. Each checkpoint (Early, Mid, Late, Final) represents images of faces of a particular quality. Next, for k P t1, 2u and Cpkt P tEarly, Mid, Lateu, we use the tW2s solver to fit an approximate transport map ∇ψkCpkt for the pair pP3Final,QkCpktq, i.e., ∇ψkCpkt7P3Final « QkCpkt. The potential ψkCpkt is a convolutional ICNN with ConvICNN64 architecture (MB.1). For each Cpkt, we define a benchmark pair pPCelebA,QCpktq def“pP3Final, rp∇ψ1Cpkt `∇ψ2Cpktq{2s7P3Finalq. See Appendix A.2 and Figure 2 for details. 4.2 Metrics and Baselines Baselines. We propose three baseline methods: identity tIDs, constant tCs and linear tLs. The identity solver outputs T id “ idRD as the transport map. The constant solver outputs the mean value of Q, i.e., T 0 ” EQrys ” µQ. The linear solver outputs T 1pxq “ Σ_P^{-1/2} pΣ_P^{1/2} Σ_Q Σ_P^{1/2}q^{1/2} Σ_P^{-1/2} px ´ µPq ` µQ, i.e., the OT map between the measures coarsened to Gaussians [1, Theorem 2.3]. Metrics. To assess the quality of the recovered transport map T̂ : RD Ñ RD from P to Q, we use the unexplained variance percentage (UVP) [16]: L2-UVPpT̂q def“ 100 ¨ }T̂ ´ T˚}^2_{L2pPq} { VarpQq %. Here T˚ is the OT map. For values « 0%, T̂ approximates T˚ well. For values ě 100%, the map T̂ is far from optimal. The constant baseline provides L2-UVPpT 0q “ 100%. To measure the quality of approximation of the derivative of the potential ridRD ´ T̂s « ∇f˚ that is used to update generative models (6), we use the cosine similarity (cos): cospid ´ T̂, id ´ T˚q def“ xT̂ ´ id, T˚ ´ idy_{L2pPq} { p}T˚ ´ id}_{L2pPq} ¨ }T̂ ´ id}_{L2pPq}q P r´1, 1s. To estimate the L2-UVP and cos metrics, we use 2^14 random samples from P.
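The two metrics above are simple Monte-Carlo averages over samples from P; the following sketch (ours, with illustrative names) shows how L2-UVP and cos can be estimated given a fitted map T̂, the ground-truth map T˚, and the total variance of Q.

```python
import torch

def l2_uvp(T_hat, T_star, x, var_q):
    """L2-UVP = 100 * E_P ||T_hat(x) - T_star(x)||^2 / Var(Q), in percent."""
    err = (T_hat(x) - T_star(x)).pow(2).sum(dim=1).mean()
    return 100.0 * err / var_q

def cos_similarity(T_hat, T_star, x):
    """cos(id - T_hat, id - T_star) in L2(P): normalized inner product of the
    two displacement fields."""
    a = T_hat(x) - x                              # displacements of the fitted map
    b = T_star(x) - x                             # displacements of the true map
    inner = (a * b).sum(dim=1).mean()
    norm_a = a.pow(2).sum(dim=1).mean().sqrt()
    norm_b = b.pow(2).sum(dim=1).mean().sqrt()
    return inner / (norm_a * norm_b)

# x would hold e.g. 2^14 samples from P; Var(Q) is the total variance of Q
# (sum of per-coordinate variances), itself estimated from samples of Q.
```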
4.3 Evaluation of Solvers on High-dimensional Benchmark Pairs We evaluate the solvers on the benchmark and report the computed metric values for the fitted transport map. For fair comparison, in each method the potential f and the map H (where applicable) are parametrized as fθ “ 12} ¨ } 2 ´ ψθ and Hω “ ∇φω respectively, where ψθ, φω use DenseICNN architectures [16, MB.2]. In the solvers tQCs, tLSs, tMM-Bs, tMMs we do not impose any restrictions on the weights θ, ω, i.e., ψθ, φω are usual fully connected nets with additional skip connections. We provide the computed metric values in Table 2 and visualize fitted maps (for D “ 64) in Figure 3. All the solvers perform well (L2-UVP « 0, cos « 1) in dimension D “ 2. In higher dimensions, only tMMv1s, tMMs, tMMv2s, tW2s and their reversed versions produce reasonable results. However, the tMMv1s solver is slow since each optimization step solves a hard subproblem for computing f c. Maximin solvers tMMs, tMMv2s, tMM:Rs are also hard to optimize: they either diverge from the start or diverge after converging to a nearly-optimal saddle point. This behavior is typical for maximin optimization and possibly can be avoided by a more careful choice of hyperparameters. For tQCs, tLSs, tMM-Bs, as the dimension increases, the L2-UVP grows drastically. Only tMM-Bs notably outperforms the trivial tLs baseline. The error of tMM-Bs is explained by the overestimation of the inner problem in (8), yielding biased optimal potentials. The error of tLSs comes from the bias introduced by regularization [36]. In tQCs, error arises because a discrete OT problem solved on sampled mini-batches, which is typically biased [5, Theorem 1], is used to update fθ. Interestingly, although tQCs, tLSs are imprecise in terms of L2-UVP, they provide a high cos metric. Due to optimization issues and performance differences, wall-clock times for convergence are not representative. All solvers except tMMv1s converged in several hours. Among the solvers that substantially outperform the linear baseline, i.e., tMMs, tMMv1s, tMMv2s, tW2s, tMM-Bs, the fastest converging one is tMM-Bs, but it is biased. tMMs, tMMv2s, tW2s require more time. 4.4 Evaluation of Solvers in CelebA 64ˆ 64 Images Benchmark Pairs For evaluation on the CelebA benchmark, we excluded tLSs and tMMv1s: the first is unstable in high dimensions [33], and the second takes too long to converge. ICNN-based solvers tMMv2s, tW2s and their reversed versions perform roughly the same in this experiment. For simplicity, we treat them as one solver tW2s. In tW2s, we parametrize fθ “ 12} ¨ } 2 ´ ψθ and Hω “ ∇φω, where ψθ, φω are input-convex neural nets with ConvICNN64 architecture (MB.1). All the other solvers are designed in the generative modeling setting to work with convolutional architectures for images. Thus, in tMMs, tQCs, tMM-Bs we parametrize the networks fθ as ResNet and Hω as U-Net (in tMMs).
In turn, in tMM:Rs we parametrize Tθ by U-Net and gω by ResNet. We compute the transport map QCpkt Ñ PCelebA for each solver on three image benchmarks. The results are in Figure 4 and Table 3 and echo the patterns observed on the high-dimensional problems (M4.3). tQCs, tMM-Bs suffer from extreme bias due to the high dimension of images, and the derivative of W22 computed by these solvers is almost orthogonal to the true derivative (cos « 0). This means that these solvers do not extract W22. tMMs, tMM:Rs, tW2s recover the transport maps well. tMMs’s map is slightly noisier than the one by tMM:Rs, a minor example of gradient deviation. 4.5 Evaluation of Solvers in Generative Modeling of CelebA 64ˆ 64 Faces Based on our previous evaluation, many existing neural OT solvers are notably imprecise. This leads us to ask: To what extent does solver quality matter in real-world applications? To address this question, we evaluate the most promising solvers in the task of generative modeling for CelebA 64 ˆ 64 images of faces. For comparison, we add tQCs, which has good generative performance [19]. For each solver, we train a generative network Gα with the ResNet architecture from [19] to map a 128-dimensional normal distribution S to the data distribution Q. As the loss function for the generator, we use W22pPα,Qq “W22pGα7S,Qq estimated by each solver. We perform GAN-style training, where gradient updates of the generator alternate with gradient steps of the OT solver (discriminator) (MB.2.3). We show sample generated images in the top row of each subplot of Figure 5 and report FID [13]. On the bottom row, we show the pushforward of the OT map from Pα “ Gα7S to Q extracted from the OT solver. Since the model converged (Pα « Q), the map should be nearly equal to the identity. tW2s provides the lowest quality (Figure 5a). This can be explained by the use of ConvICNN: the other solvers use convolutional architectures and work better. In general, the applicability of ICNNs to image-based tasks is questionable [16, M5.3], which might be a serious practical limitation. tQCs has strong generative performance (Figure 5b). However, as in M4.3-4.4, the recovered map is far from the identity. We suspect this solver has decent generative performance because it approximates some non-W22 dissimilarity measure in practice. tMMs results in a generative model that produces blurry images (Figure 5c). The computed transport map idRD ´∇fθ is too far from the identity due to the gradient deviation. This leads to inaccurate gradient computation used to update the generator and explains why the generator struggles to improve. We emphasize that in M4.4 tMMs does not notably suffer from the gradient deviation. This is probably because those measures are absolutely continuous and supported on the entire RD. This is not the case in our generative modeling setup, where the generated and data measures are supported on low-dimensional manifolds in RD. Reversed tMM:Rs overcomes the gradient deviation problem of tMMs but still leads to blurry images (Figure 5d). Interestingly, the fitted transport map Tθ significantly improves the quality, and the images Tθ ˝Gαpzq are comparable to the ones obtained with the tQCs solver (Figure 5b). We emphasize that the formulations of the tMMs, tMM:Rs solvers are maximin: using them in GANs requires solving a challenging min-max-min optimization problem. To handle this, we use three nested loops and stochastic gradient descent-ascent-descent.
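For concreteness, the following is a heavily simplified, hypothetical skeleton (not the authors' code) of one outer step of this scheme: the two inner loops fit the potential fθ and the inner-minimizer Hω of the maximin solver for the current generator, and the outer step updates the generator with the potential held fixed, as in (6). All networks, samplers, optimizers and loop lengths are placeholders; f maps batches to scalars per sample, H maps batches to batches.

```python
import torch

def training_step(G, f, H, opt_G, opt_f, opt_H, sample_z, sample_data,
                  n_f_steps=5, n_H_steps=3):
    """One generator update of GAN-style training with a maximin W2 solver.
    The three nested loops are descent over H, ascent over f, descent over G."""
    for _ in range(n_f_steps):
        # innermost loop: fit H(y) ~ argmin_x [0.5*||x - y||^2 - f(x)], cf. (9)
        for _ in range(n_H_steps):
            y = sample_data()
            loss_H = (0.5 * (H(y) - y).pow(2).sum(dim=1) - f(H(y))).mean()
            opt_H.zero_grad(); loss_H.backward(); opt_H.step()

        # middle loop: ascend the dual value with respect to the potential f
        x_fake = G(sample_z()).detach()          # generator frozen here
        y = sample_data()
        with torch.no_grad():
            Hy = H(y)                            # H frozen here
        dual_value = f(x_fake).mean() + (0.5 * (Hy - y).pow(2).sum(dim=1)
                                         - f(Hy)).mean()
        loss_f = -dual_value
        opt_f.zero_grad(); loss_f.backward(); opt_f.step()

    # outer step: update the generator with f fixed; the gradient reaching the
    # generator parameters is the chain-rule expression in (6)
    loss_G = f(G(sample_z())).mean()
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_G.item()
```

In practice the loops are scheduled much more carefully (and the reversed tMM:Rs variant simply swaps the roles of the two measures during training), but the skeleton makes it clear why the procedure is delicate: each level optimizes against a moving target.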
In our experiments, the training was not stable and often diverged: the reported results use the best hyperparameters we found, although there may exist better ones. The difficulty in selecting hyperparameters and the unstable training process are limitations of these solvers that need to be addressed before using in practice. 5 Conclusion Our methodology creates pairs of continuous measures with ground truth quadratic-cost optimal transport maps, filling the missing gap of benchmarking continuous OT solvers. This development allows us to evaluate the performance of quadratic-cost OT solvers in OT-related tasks. Beyond benchmarking the basic transport problem, our study of generative modeling reveals surprising patterns: bad OT solvers can yield good generative performance, and simply reversing asymmetric solvers can affect performance dramatically. Limitations. We rely on ICNN gradients as W2 optimal transport maps to generate pairs of benchmark measures. It is unclear whether analogous constructions can be used for other costs such as W1. We also limit our benchmark pairs to be absolutely continuous measures while limiting the ground truth transport maps to be gradients of ICNNs, which may not have enough representational power. While we reveal a discrepancy between performance in OT-related tasks and performance in generative modeling, in-depth study is needed to answer questions such as what exact dissimilarity metric tQCs implies that explains its generative performance while poorly approximating W2. Potential impact. We expect our benchmark to become a standard benchmark for continuous optimal transport as part of the ongoing effort of advancing computational OT, in particular, in its application to generative modeling. As a result, we hope our work can improve the quality and reusability of OT-related research. One potential negative is that our benchmark might narrow the evaluation of future OT solvers to the datasets of our benchmark. To avoid this, besides particular benchmark datasets, in M3 we describe a generic method to produce new benchmark pairs. ACKNOWLEDGEMENTS. The problem statement was developed in the framework of Skoltech-MIT NGP program. The work of Evgeny Burnaev was supported by the Ministry of Science and Higher Education of the Russian Federation grant No. 075-10-2021-068. The MIT Geometric Data Processing group acknowledges the generous support of Army Research Office grants W911NF2010168 and W911NF2110293, of Air Force Office of Scientific Research award FA9550-19-1-031, of National Science Foundation grants IIS-1838071 and CHS-1955697, from the CSAIL Systems that Learn program, from the MIT–IBM Watson AI Laboratory, from the Toyota–CSAIL Joint Research Center, from a gift from Adobe Systems, from an MIT.nano Immersion Lab/NCSOFT Gaming Program seed grant, and from the Skoltech–MIT Next Generation Program.
1. What is the focus and contribution of the paper regarding neural OT solvers? 2. What are the strengths of the proposed benchmark, particularly in evaluating different OT solvers? 3. What are the weaknesses of the paper, especially regarding theoretical problems? 4. How does the reviewer assess the sample complexity of estimating certain metrics in the paper? 5. Does the reviewer have any concerns about the construction of the benchmark pair and the ground truth transport map?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a continuous Wasserstein-2 benchmark to evaluate different neural OT solvers including WGAN variants. Extensive experiments are performed on toy datasets and a real face dataset. Review Strengths: It's good to see a benchmark that evaluates different neural OT solvers. This could benefit future works that apply OT and need to choose the best OT solver. A comprehensive set of OT solvers is investigated, including some WGAN variants that can potentially be used as OT solvers. Various metrics such as L^2-UVP, cos, and FID are used to evaluate OT solvers. Weaknesses: I think two important theoretical problems remain unsolved in this paper: In line 216, the authors mentioned that to compute the L^2-UVP and the cos metrics, they use 2^14 random samples from P. They actually computed the estimated values for the two metrics. I would like to know the sample complexity of estimating the two metrics, i.e., the estimation error of L^2-UVP and cos in terms of the number of samples required. Is there a curse of dimensionality problem in estimating the L^2-UVP and cos, just as in computing the Wasserstein distance for two continuous distributions using sampling methods? This is important because if the number of required samples grows exponentially as the dimensionality increases, then the metrics may not be accurate in evaluating OT solvers in the 64d space but only with 2^14 samples. In line 175, the authors construct a benchmark pair (P, 1/2 (\nabla \psi_1 + \nabla \psi_2)#P), and the 1/2 (\nabla \psi_1 + \nabla \psi_2) is used as the ground truth transport map if I understand correctly. My concern is that "is 1/2 (\nabla \psi_1 + \nabla \psi_2) really the optimal transport map between P and 1/2 (\nabla \psi_1 + \nabla \psi_2)#P"? Remember that \psi_1 and \psi_2 are approximated by the W2 solver. I think this paper should theoretically show "how far is 1/2 (\nabla \psi_1 + \nabla \psi_2) away from the real optimal transport map between P and 1/2 (\nabla \psi_1 + \nabla \psi_2)#P", and show how this gap affects the metrics that are used to evaluate OT solvers.
NIPS
Title Do Neural Optimal Transport Solvers Work? A Continuous Wasserstein-2 Benchmark Abstract Despite the recent popularity of neural network-based solvers for optimal transport (OT), there is no standard quantitative way to evaluate their performance. In this paper, we address this issue for quadratic-cost transport—specifically, computation of the Wasserstein-2 distance, a commonly-used formulation of optimal transport in machine learning. To overcome the challenge of computing ground truth transport maps between continuous measures needed to assess these solvers, we use inputconvex neural networks (ICNN) to construct pairs of measures whose ground truth OT maps can be obtained analytically. This strategy yields pairs of continuous benchmark measures in high-dimensional spaces such as spaces of images. We thoroughly evaluate existing optimal transport solvers using these benchmark measures. Even though these solvers perform well in downstream tasks, many do not faithfully recover optimal transport maps. To investigate the cause of this discrepancy, we further test the solvers in a setting of image generation. Our study reveals crucial limitations of existing solvers and shows that increased OT accuracy does not necessarily correlate to better results downstream. Solving optimal transport (OT) with continuous methods has become widespread in machine learning, including methods for large-scale OT [11, 36] and the popular Wasserstein Generative Adversarial Network (W-GAN) [3, 12]. Rather than discretizing the problem [31], continuous OT algorithms use neural networks or kernel expansions to estimate transport maps or dual solutions. This helps scale OT to large-scale and higher-dimensional problems not handled by discrete methods. Notable successes of continuous OT are in generative modeling [42, 20, 19, 7] and domain adaptation [43, 37, 25]. In these applications, OT is typically incorporated as part of the loss terms for a neural network model. For example, in W-GANs, the OT cost is used as a loss function for the generator; the model incorporates a neural network-based OT solver to estimate the loss. Although recent W-GANs provide state-of-the-art generative performance, however, it remains unclear to which extent this 35th Conference on Neural Information Processing Systems (NeurIPS 2021). success is connected to OT. For example, [28, 32, 38] show that popular solvers for the Wasserstein-1 (W1) distance in GANs fail to estimate W1 accurately. While W-GANs were initially introduced with W1 in [3], state-of-the art solvers now use both W1 and W2 (the Wasserstein-2 distance, i.e., OT with the quadratic cost). While their experimental performance on GANs is similar, W2 solvers tend to converge faster (see [19, Table 4]) with better theoretical guarantees [19, 26, 16]. Contributions. In this paper, we develop a generic methodology for evaluating continuous quadraticcost OT solvers (W2). Our main contributions are as follows: • We use input-convex neural networks (ICNNs [2]) to construct pairs of continuous measures that we use as a benchmark with analytically-known solutions for quadratic-cost OT (M3, M4.1). • We use these benchmark measures to evaluate popular quadratic-cost OT solvers in highdimensional spaces (M4.3), including the image space of 64ˆ 64 CelebA faces (M4.4). • We evaluate the performance of these OT solvers as a loss in generative modeling of images (M4.5). 
Our experiments show that some OT solvers exhibit moderate error even in small dimensions (M4.3), performing similarly to trivial baselines (M4.2). The most successful solvers are those using parametrization via ICNNs. Surprisingly, however, solvers that faithfully recover W2 maps across dimensions struggle to achieve state-of-the-art performance in generative modeling. Our benchmark measures can be used to evaluate future W2 solvers in high-dimensional spaces, a crucial step to improve the transparency and replicability of continuous OT research. Note the benchmark from [35] does not fulfill this purpose, since it is designed to test discrete OT methods and uses discrete low-dimensional measures with limited support.
Notation. We use P2pRDq to denote the set of Borel probability measures on RD with finite second moment and P2,acpRDq to denote its subset of absolutely continuous probability measures. We denote by ΠpP,Qq the set of probability measures on RD ˆ RD with marginals P and Q. For some measurable map T : RD Ñ RD, we denote by T 7 the associated push-forward operator. For φ : RD Ñ R, we denote by φ its Legendre-Fenchel transform [10] defined by φpyq “ maxxPRD rxx, yy ´ φpxqs. Recall that φ is a convex function, even when φ is not. 1 Background on Optimal Transport We start by stating the definition and some properties of optimal transport with quadratic cost. We refer the reader to [34, Chapter 1] for formal statements and proofs. Primal formulation. For P,Q P P2pRDq, Monge’s primal formulation of the squared Wasserstein-2 distance, i.e., OT with quadratic cost, is given by W22pP,Qq def“ min T 7P“Q ż RD }x´ T pxq}2 2 dPpxq, (1) where the minimum is taken over measurable functions (transport maps) T : RD Ñ RD mapping P to Q. The optimal T˚ is called the optimal transport map (OT map). Note that (1) is not symmetric, and this formulation does not allow for mass splitting, i.e., for some P,Q P P2pRDq, there is no map T that satisfies T 7P “ Q. Thus, Kantorovich proposed the following relaxation [14]: W22pP,Qq def“ min πPΠpP,Qq ż RDˆRD }x´ y}2 2 dπpx, yq, (2) where the minimum is taken over all transport plans π, i.e., measures on RD ˆ RD whose marginals are P and Q. The optimal π˚ P ΠpP,Qq is called the optimal transport plan (OT plan). If π˚ is of the form ridRD , T˚s7P P ΠpP,Qq for some T˚, then T˚ is the minimizer of (1). Dual formulation.
For P,Q P P2pRDq, the dual formulation of W22 is given by [40]: W22pP,Qq “ max f‘gď 12 }¨}2 „ ż RD fpxqdPpxq ` ż RD gpyqdQpyq , (3) where the maximum is taken over all f P L1pP,RD Ñ Rq and g P L1pQ,RD Ñ Rq satisfying fpxq ` gpyq ď 12}x´ y} 2 for all x, y P RD. From the optimal dual potential f˚, we can recover the optimal transport plan T˚pxq “ x´∇f˚pxq [34, Theorem 1.17]. The optimal f˚, g˚ satisfy pf˚qc “ g˚ and pg˚qc “ f˚, where uc : RD Ñ R is the c´transform of u defined by ucpyq “ minxPRD “ 1{2}x´ y}2 ´ upxq ‰ . We can rewrite (3) as W22pP,Qq “ max f „ ż RD fpxqdPpxq ` ż RD f cpyqdQpyq , (4) where the maximum is taken over all f P L1pP,RD Ñ Rq. Since f˚ and g˚ are each other’s c-transforms, they are both c-concave [34, M1.6], which is equivalent to saying that functions ψ˚ : x ÞÑ 12}x} 2 ´ f˚pxq and φ˚ : x ÞÑ 12}x} 2 ´ g˚pxq are convex [34, Proposition 1.21]. In particular, ψ˚ “ φ˚ and φ˚ “ ψ˚. Since T˚pxq “ x´∇f˚pxq “ ∇ ˆ }x}2 2 ´ f˚pxq ˙ “ ∇ψ˚, (5) we see that the OT maps are gradients of convex functions, a fact known as Brenier’s theorem [6]. “Solving” optimal transport problems. In applications, for given P,Q P P2pRDq, the W2 optimal transport problem is typically considered in the following three similar but not equivalent tasks: • Evaluating W22pP,Qq. The Wasserstein-2 distance is a geometrically meaningful way to compare probability measures, providing a metric on P2pRDq. • Computing the optimal map T˚ or plan π˚. The map T˚ provides an intuitive way to interpolate between measures. It is often used as a generative map between measures in problems like domain adaptation [36, 43] and image style transfer [16]. • Using the gradient BW22pPα,Qq{Bα to update generative models. Derivatives of W22 are used implicitly in generative modeling that incorporates W2 loss [19, 33], in which case P “ Pα is a parametric measure and Q is the data measure. Typically, Pα “ Gα7S is the measure generated from a fixed latent measure S by a parameterized function Gα, e.g., a neural network. The goal is to find parameters α that minimize W22pPα,Qq via gradient descent. In the generative model setting, by definition of the pushforward Pα “ Gα7S, we have W22pPα,Qq “ ż z f˚pGαpzqqdSpzq ` ż RD g˚pyqdQpyq, where f˚ and g˚ are the optimal dual potentials. At each generator training step, f˚ and g˚ are fixed so that when we take the gradient with respect to α, by applying the chain rule we have: BW22pPα,Qq Bα “ ż z JαGαpzqT∇f˚ ` Gαpzq ˘ dSpzq, (6) where JαGαpzqT is the transpose of the Jacobian matrix of Gαpzq w.r.t. parameters α. This result still holds without assuming the potentials are fixed by the envelope theorem [29]. To capture the gradient, we need a good estimate of ∇f˚ “ idRD ´T˚ by (5). This task is somewhat different from computing the OT map T˚: since the estimate of ∇f˚ is only involved in the gradient update for the generator, it is allowed to differ while still resulting in a good generative model. We will use the generic phrase OT solver to refer to a method for solving any of the tasks above. Quantitative evaluation of OT solvers. For discrete OT methods, a benchmark dataset [35] exists but the mechanism for producing the dataset does not extend to continuous OT. Existing continuous solvers are typically evaluated on a set of self-generated examples or tested in generative models without evaluating its actual OT performance. 
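One of the few settings in which the ground-truth map is available in closed form is the Gaussian (location-scatter) case, which is also what the linear baseline in M4.2 uses. The snippet below (ours; the function name and the tiny sanity check are illustrative) computes that map with NumPy/SciPy.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_ot_map(mu_p, cov_p, mu_q, cov_q):
    """Closed-form W2-optimal map between N(mu_p, cov_p) and N(mu_q, cov_q):
    T(x) = mu_q + A (x - mu_p), A = S^-1 (S cov_q S)^{1/2} S^-1, S = cov_p^{1/2}.
    A is symmetric positive semi-definite, so T is the gradient of a convex quadratic."""
    s = np.real(sqrtm(cov_p))
    s_inv = np.linalg.inv(s)
    A = s_inv @ np.real(sqrtm(s @ cov_q @ s)) @ s_inv
    return lambda x: mu_q + (x - mu_p) @ A.T

# tiny sanity check: the pushforward of P-samples should match the moments of Q
rng = np.random.default_rng(0)
mu_p, cov_p = np.zeros(2), np.array([[2.0, 0.5], [0.5, 1.0]])
mu_q, cov_q = np.ones(2), np.array([[1.0, -0.3], [-0.3, 0.5]])
T = gaussian_ot_map(mu_p, cov_p, mu_q, cov_q)
y = T(rng.multivariate_normal(mu_p, cov_p, size=50_000))
print(np.allclose(y.mean(0), mu_q, atol=0.05),
      np.allclose(np.cov(y.T), cov_q, atol=0.05))
```

Such closed-form cases are useful sanity checks, but they do not probe the multi-modal, high-dimensional behavior that the benchmark pairs in M4.1 target.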
1. What is the focus and contribution of the paper regarding continuous Wasserstein-2 benchmarking? 2. What are the strengths of the proposed approach, particularly in using input convex neural networks? 3. What are the weaknesses and limitations of the paper, especially in terms of bias and evaluation? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any suggestions or recommendations for improving the paper, such as emphasizing the construction of the ground truth transport maps or providing a clearer discussion of the use of DenseICNN?
Summary Of The Paper Review
Summary Of The Paper The authors present a continuous Wasserstein-2 benchmark using input convex neural networks to obtain ground truth continuous OT maps. They evaluate continuous OT solvers based on how well they reproduce the ground truth transport map, how well they approximate the derivative of the potential, and how well they perform in a GAN framework. This provides a basis to evaluate current and future continuous 2-Wasserstein solvers. Review Overall this was an excellent read and I found the use of the ICNN to generate ground truth OT maps novel and clever. The authors mention "Because of the choice of [W2], subsequent evaluation might slightly favor ICNN-based methods" (Line 177) and elaborate in the limitations section. This seems like a major limitation: the transport maps are not based on any real distribution and are in fact sampled from the set of potentials parameterized by ICNNs, which is highly biased. Would it be possible to examine this bias in some way? Perhaps some P, Q pairs with tractable transport maps that do not require ICNNs? The evaluation of W2 solvers in generative modeling is also extremely useful. While this study has a number of limitations, it is well posed and a useful benchmark for future work in W_2 solvers. I found this a fascinating read and applaud the authors for their efforts. Edit post response: I am satisfied by the authors' response. I keep my score. I initially had the same understanding as reviewer oJwC made in his point 2. Given that this is mainly a benchmark paper, the construction of the benchmark is of utmost importance. It would improve the paper to further emphasize the construction of the ground truth based on maps model-able by ICNNs, which is the most significant limitation of this work. Having a clear discussion of the use of DenseICNN to parameterize the ground truth transport maps earlier in this work, perhaps in section 3 (rather than just in the limitations paragraph at the end), would both make it clearer how the benchmarks are constructed and what the limitations are. The paragraph titled "Arbitrary pairs" on Line 172 is misleading as a ground truth map is not constructed between arbitrary pairs, but between P and something close to Q based on how well the ICNN is fit; I suggest "Approximating arbitrary pairs" or similar would be clearer. While I agree other convex parameterizations could be used, this is an important point and deserves further clarification.
NIPS
Title Do Neural Optimal Transport Solvers Work? A Continuous Wasserstein-2 Benchmark Abstract Despite the recent popularity of neural network-based solvers for optimal transport (OT), there is no standard quantitative way to evaluate their performance. In this paper, we address this issue for quadratic-cost transport—specifically, computation of the Wasserstein-2 distance, a commonly-used formulation of optimal transport in machine learning. To overcome the challenge of computing ground truth transport maps between continuous measures needed to assess these solvers, we use inputconvex neural networks (ICNN) to construct pairs of measures whose ground truth OT maps can be obtained analytically. This strategy yields pairs of continuous benchmark measures in high-dimensional spaces such as spaces of images. We thoroughly evaluate existing optimal transport solvers using these benchmark measures. Even though these solvers perform well in downstream tasks, many do not faithfully recover optimal transport maps. To investigate the cause of this discrepancy, we further test the solvers in a setting of image generation. Our study reveals crucial limitations of existing solvers and shows that increased OT accuracy does not necessarily correlate to better results downstream. Solving optimal transport (OT) with continuous methods has become widespread in machine learning, including methods for large-scale OT [11, 36] and the popular Wasserstein Generative Adversarial Network (W-GAN) [3, 12]. Rather than discretizing the problem [31], continuous OT algorithms use neural networks or kernel expansions to estimate transport maps or dual solutions. This helps scale OT to large-scale and higher-dimensional problems not handled by discrete methods. Notable successes of continuous OT are in generative modeling [42, 20, 19, 7] and domain adaptation [43, 37, 25]. In these applications, OT is typically incorporated as part of the loss terms for a neural network model. For example, in W-GANs, the OT cost is used as a loss function for the generator; the model incorporates a neural network-based OT solver to estimate the loss. Although recent W-GANs provide state-of-the-art generative performance, however, it remains unclear to which extent this 35th Conference on Neural Information Processing Systems (NeurIPS 2021). success is connected to OT. For example, [28, 32, 38] show that popular solvers for the Wasserstein-1 (W1) distance in GANs fail to estimate W1 accurately. While W-GANs were initially introduced with W1 in [3], state-of-the art solvers now use both W1 and W2 (the Wasserstein-2 distance, i.e., OT with the quadratic cost). While their experimental performance on GANs is similar, W2 solvers tend to converge faster (see [19, Table 4]) with better theoretical guarantees [19, 26, 16]. Contributions. In this paper, we develop a generic methodology for evaluating continuous quadraticcost OT solvers (W2). Our main contributions are as follows: • We use input-convex neural networks (ICNNs [2]) to construct pairs of continuous measures that we use as a benchmark with analytically-known solutions for quadratic-cost OT (M3, M4.1). • We use these benchmark measures to evaluate popular quadratic-cost OT solvers in highdimensional spaces (M4.3), including the image space of 64ˆ 64 CelebA faces (M4.4). • We evaluate the performance of these OT solvers as a loss in generative modeling of images (M4.5). 
Our experiments show that some OT solvers exhibit moderate error even in small dimensions (§4.3), performing similarly to trivial baselines (§4.2). The most successful solvers are those using parametrization via ICNNs. Surprisingly, however, solvers that faithfully recover W2 maps across dimensions struggle to achieve state-of-the-art performance in generative modeling. Our benchmark measures can be used to evaluate future W2 solvers in high-dimensional spaces, a crucial step to improve the transparency and replicability of continuous OT research. Note that the benchmark from [35] does not fulfill this purpose, since it is designed to test discrete OT methods and uses discrete low-dimensional measures with limited support.
Notation. We use P2(R^D) to denote the set of Borel probability measures on R^D with finite second moment and P2,ac(R^D) to denote its subset of absolutely continuous probability measures. We denote by Π(P, Q) the set of probability measures on R^D × R^D with marginals P and Q. For a measurable map T : R^D → R^D, we denote by T# the associated push-forward operator. For φ : R^D → R, we denote by φ̄ its Legendre–Fenchel transform [10], defined by φ̄(y) = max_{x∈R^D} [⟨x, y⟩ − φ(x)]. Recall that φ̄ is a convex function, even when φ is not.
1 Background on Optimal Transport
We start by stating the definition and some properties of optimal transport with quadratic cost. We refer the reader to [34, Chapter 1] for formal statements and proofs.
Primal formulation. For P, Q ∈ P2(R^D), Monge's primal formulation of the squared Wasserstein-2 distance, i.e., OT with quadratic cost, is given by
W2²(P, Q) := min_{T#P=Q} ∫_{R^D} ‖x − T(x)‖²/2 dP(x),  (1)
where the minimum is taken over measurable functions (transport maps) T : R^D → R^D mapping P to Q. The optimal T* is called the optimal transport map (OT map). Note that (1) is not symmetric, and this formulation does not allow for mass splitting, i.e., for some P, Q ∈ P2(R^D), there is no map T that satisfies T#P = Q. Thus, Kantorovich proposed the following relaxation [14]:
W2²(P, Q) := min_{π∈Π(P,Q)} ∫_{R^D×R^D} ‖x − y‖²/2 dπ(x, y),  (2)
where the minimum is taken over all transport plans π, i.e., measures on R^D × R^D whose marginals are P and Q. The optimal π* ∈ Π(P, Q) is called the optimal transport plan (OT plan). If π* is of the form [id_{R^D}, T*]#P ∈ Π(P, Q) for some T*, then T* is the minimizer of (1).
Dual formulation.
For P, Q ∈ P2(R^D), the dual formulation of W2² is given by [40]:
W2²(P, Q) = max_{f⊕g ≤ ½‖·‖²} [ ∫_{R^D} f(x) dP(x) + ∫_{R^D} g(y) dQ(y) ],  (3)
where the maximum is taken over all f ∈ L¹(P, R^D → R) and g ∈ L¹(Q, R^D → R) satisfying f(x) + g(y) ≤ ½‖x − y‖² for all x, y ∈ R^D. From the optimal dual potential f*, we can recover the optimal transport map via T*(x) = x − ∇f*(x) [34, Theorem 1.17]. The optimal f*, g* satisfy (f*)^c = g* and (g*)^c = f*, where u^c : R^D → R is the c-transform of u defined by u^c(y) = min_{x∈R^D} [½‖x − y‖² − u(x)]. We can rewrite (3) as
W2²(P, Q) = max_f [ ∫_{R^D} f(x) dP(x) + ∫_{R^D} f^c(y) dQ(y) ],  (4)
where the maximum is taken over all f ∈ L¹(P, R^D → R). Since f* and g* are each other's c-transforms, they are both c-concave [34, §1.6], which is equivalent to saying that the functions ψ* : x ↦ ½‖x‖² − f*(x) and φ* : x ↦ ½‖x‖² − g*(x) are convex [34, Proposition 1.21]. In particular, ψ̄* = φ* and φ̄* = ψ*. Since
T*(x) = x − ∇f*(x) = ∇(½‖x‖² − f*(x)) = ∇ψ*(x),  (5)
we see that OT maps are gradients of convex functions, a fact known as Brenier's theorem [6].
"Solving" optimal transport problems. In applications, for given P, Q ∈ P2(R^D), the W2 optimal transport problem is typically considered in the following three similar but not equivalent tasks:
• Evaluating W2²(P, Q). The Wasserstein-2 distance is a geometrically meaningful way to compare probability measures, providing a metric on P2(R^D).
• Computing the optimal map T* or plan π*. The map T* provides an intuitive way to interpolate between measures. It is often used as a generative map between measures in problems like domain adaptation [36, 43] and image style transfer [16].
• Using the gradient ∂W2²(Pα, Q)/∂α to update generative models. Derivatives of W2² are used implicitly in generative modeling that incorporates a W2 loss [19, 33], in which case P = Pα is a parametric measure and Q is the data measure. Typically, Pα = Gα#S is the measure generated from a fixed latent measure S by a parameterized function Gα, e.g., a neural network. The goal is to find parameters α that minimize W2²(Pα, Q) via gradient descent.
In the generative model setting, by definition of the pushforward Pα = Gα#S, we have W2²(Pα, Q) = ∫ f*(Gα(z)) dS(z) + ∫_{R^D} g*(y) dQ(y), where f* and g* are the optimal dual potentials. At each generator training step, f* and g* are fixed, so that when we take the gradient with respect to α, by applying the chain rule we have:
∂W2²(Pα, Q)/∂α = ∫ JαGα(z)^T ∇f*(Gα(z)) dS(z),  (6)
where JαGα(z)^T is the transpose of the Jacobian matrix of Gα(z) w.r.t. the parameters α. By the envelope theorem [29], this result still holds without assuming the potentials are fixed. To capture the gradient, we need a good estimate of ∇f* = id_{R^D} − T* by (5). This task is somewhat different from computing the OT map T*: since the estimate of ∇f* is only involved in the gradient update for the generator, it is allowed to differ while still resulting in a good generative model. We will use the generic phrase OT solver to refer to a method for solving any of the tasks above.
Quantitative evaluation of OT solvers. For discrete OT methods, a benchmark dataset [35] exists, but the mechanism for producing the dataset does not extend to continuous OT. Existing continuous solvers are typically evaluated on a set of self-generated examples or tested in generative models without evaluating their actual OT performance.
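As a concrete point of reference, one of the few settings where the ground-truth OT map is available in closed form is the Gaussian (location-scatter) case: for P = N(µP, ΣP) and Q = N(µQ, ΣQ), the Brenier map is affine, T*(x) = µQ + A(x − µP) with A = ΣP^{-1/2}(ΣP^{1/2} ΣQ ΣP^{1/2})^{1/2} ΣP^{-1/2}. The following NumPy sketch is ours (not part of the paper's released code) and simply computes this map and checks the pushforward numerically:

import numpy as np
from scipy.linalg import sqrtm

def gaussian_ot_map(mu_p, cov_p, mu_q, cov_q):
    """Closed-form Brenier map between N(mu_p, cov_p) and N(mu_q, cov_q)."""
    cov_p_half = np.real(sqrtm(cov_p))
    cov_p_half_inv = np.linalg.inv(cov_p_half)
    middle = np.real(sqrtm(cov_p_half @ cov_q @ cov_p_half))
    A = cov_p_half_inv @ middle @ cov_p_half_inv   # symmetric positive definite
    return lambda x: mu_q + (x - mu_p) @ A.T       # T*(x) = mu_q + A (x - mu_p)

rng = np.random.default_rng(0)
D = 4
mu_p, mu_q = np.zeros(D), np.ones(D)
B1, B2 = rng.normal(size=(D, D)), rng.normal(size=(D, D))
cov_p, cov_q = B1 @ B1.T + np.eye(D), B2 @ B2.T + np.eye(D)

T_star = gaussian_ot_map(mu_p, cov_p, mu_q, cov_q)
x = rng.multivariate_normal(mu_p, cov_p, size=200_000)
y = T_star(x)
# The pushforward T*#P should match Q: compare empirical moments of y with (mu_q, cov_q).
print(np.abs(y.mean(axis=0) - mu_q).max(), np.abs(np.cov(y.T) - cov_q).max())

This Gaussian case is also what the "linear" baseline in Section 4.2 uses, and it is one of the analytic cases mentioned next.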
Two kinds of metrics are often used: Direct metrics compare the computed transport map T̂ with the true one T˚, e.g., by using L2 Unexplained Variance Percentage (L2-UVP) metric [16, M5.1], [17, M5]. There are relatively few direct metrics available, since the number of examples of P,Q with known ground truth T˚ is small: it is known that T˚ can be analytically derived or explicitly computed in the discrete case [31, M3], 1-dimensional case [31, M2.6], and Gaussian/location-scatter cases [1]. Indirect metrics use an OT solver as a component in a larger pipeline, using end-to-end performance as a proxy for solver quality. For example, in generative modeling where OT is used as the generator loss [19, 27], the quality of the generator can be assessed through metrics for GANs, such as the Fréchet Inception distance (FID) [13]. Indirect metrics do not provide clear understanding about the quality of the solver itself, since they depend on components of the model that are not related to OT. 2 Continuous Dual Solvers for Quadratic Cost Transport While our benchmark might be used to test any continuous solver which computes map T˚ or gradient ∇f˚, in this paper, we perform evaluation only on dual-form continuous solvers based on (3) or (4). Such solvers have straightforward optimization procedures and can be adapted to various datasets without extensive hyperparameter search. In contrast, primal-form solvers based on (1), e.g., [18, 43, 21, 23], typically parameterize T˚ using complicated generative modeling techniques that depend on careful hyperparameter search and complex optimization procedures [24]. We summarize existing continuous dual form solvers in Table 1. These fit a parametric function fθ (or ψθ) to approximate f˚ (or ψ˚ “ idRD ´ f˚). The resulting fθ produces an approximate OT map idRD´∇fθ“∇ψθ « T˚ and derivative ∇fθ“ idRD´∇ψθ needed to update generative models (6). To our knowledge, none of these solvers has been quantitatively evaluated in a non-Gaussian setting. For tMMs, tMM-Bs, and tQCs, the quality of the recovered derivatives ∇f˚ for BW22pPα,Qq{Bα has only been evaluated implicitly through GAN metrics. Moreover, these three solvers have not been quantitatively evaluated on solving OT tasks. We now overview each solver from Table 1. tLSs optimizes an unconstrained regularized dual form of (3) [36]: max f,g „ ż RD fpxqdPpxq ` ż RD gpyqdQpyq ´Rpf, gq. (7) The entropic or quadratic regularizer R penalizes potentials f, g for violating the constraint f ‘ g ď 12} ¨ } 2 [36, M3]. In practice, f “ fθ and g “ gω are linear combinations of kernel functions [11] or neural networks [36]. The parameters θ, ω are obtained by applying stochastic gradient ascent (SGA) over random mini-batches sampled from P,Q. Most other solvers are based on an expansion of (4): W22pP,Qq “ max f ż RD fpxqdPpxq ` ż RD “fcpyq hkkkkkkkkkkkkkkkikkkkkkkkkkkkkkkj min xPRD „ 1 2 }x´ y}2 ´ fpxq dQpyq. (8) The challenge of (8) is the inner minimization over x P RD, i.e., evaluating f cpyq. The main difference between existing solvers is the procedure used to solve this inner problem. tMM-Bs uses a neural network fθ as the potential trained using mini-batch SGA [27]. To solve the inner problem, the authors restrict the minimization of x to the current mini-batch from P instead of RD. The strategy is fast but leads to overestimation of the inner problem’s solution since the minimum is taken over a restricted subset. tMM-v1s exploits the property that f˚ “ 12} ¨ } 2 ´ ψ˚, where ψ˚ is convex [39]. 
The authors parametrize fθ “ 12} ¨ } 2 ´ ψθ, where ψθ is an input convex neural network (ICNN) [2]. Hence, for every y P RD, the inner problem of (8) becomes convex in x. This problem can be solved using SGA to high precision, but doing so is computationally costly [16, MC.4]. tMMs uses a formulation equivalent to (8) [30]: W22pP,Qq “ max f ż RD fpxqdPpxq ` ż RD min H „ 1 2 }Hpyq ´ y}2 ´ fpHpyqq dQpyq, (9) where the minimization is performed over functions H : RD Ñ RD. The authors use neural networks fθ and Hω to parametrize the potential and the minimizer of the inner problem. To train θ, ω, the authors apply stochastic gradient ascent/descent (SGAD) over mini-batches from P,Q. tMMs is generic and can be modified to compute arbitrary transport costs and derivatives, not just W22, although the authors have tested only on the Wasserstein-1 (W1) distance. Similarly to tMMv1s, tMMv2s parametrizes fθ “ 12} ¨ } 2 ´ ψθ, where ψθ is an ICNN [26]. For a fixed fθ, the optimal solution H is given by H “ p∇ψθq´1 which is an inverse gradient of a convex function, so it is also a gradient of a convex function. Hence, the authors parametrize Hω “ ∇φω, where φω is an ICNN, and use tMMs to fit θ, ω. tW2s uses the same ICNN parametrization as [26] but introduces cycle-consistency regularization to avoid solving a maximin problem [16, M4]. Finally, we highlight the solver tQCs [19]. Similarly to tMM-Bs, a neural network fθ is used as the potential. When each pair of mini-batches txnu, tynu from P,Q is sampled, the authors solve a discrete OT problem to obtain dual variables tf˚n u, tg˚nu, which are used to regress fθpxnq onto f˚n . Gradient deviation. The solvers above optimize for potentials like fθ (or ψθ), but it is the gradient of fθ (or ψθ) that is used to recover the OT map via T “ x´∇fθ. Even if }f ´ f˚}2L2pPq is small, the difference }∇fθ ´∇f˚}2L2pPq may be arbitrarily large since ∇fθ is not directly involved in optimization process. We call this issue gradient deviation. This issue is only addressed formally for ICNN-based solvers tMMv1s, tMMv2s, tW2s [16, Theorem 4.1], [26, Theorem 3.6]. Reversed solvers. tMMs, tMMv2s, tW2s recover not only the forward OT map ∇ψθ « ∇ψ˚ “ T˚, but also the inverse, given by Hω « pT˚q´1 “ p∇ψ˚q´1 “ ∇ψ˚, see [26, M3] or [16, M4.1]. These solvers are asymmetric in P,Q and an alternative is to swap P and Q during training. We denote such reversed solvers by tMM:Rs, tMMv2:Rs, tW2:Rs. In M4 we show that surprisingly tMM:Rs works better in generative modeling than tMMs. 3 Benchmarking OT Solvers In this section, we develop a generic method to produce benchmark pairs, i.e., measures pP,Qq such that Q “ T 7P with sample access and an analytically known OT solution T˚ between them. Key idea. Our method is based on the fact that for a differentiable convex function ψ : RD Ñ R, its gradient ∇ψ is an optimal transport map between any P P P2,acpRDq and its pushforward ∇ψ7P by ∇ψ : RD Ñ RD. This follows from Brenier’s theorem [6], [41, Theorem 2.12]. Thus, for a continuous measure P with sample access and a known convex ψ, pP,∇ψ7Pq can be used as a benchmark pair. We sample from ∇ψ7P by drawing samples from P and pushing forward by ∇ψ. Arbitrary pairs pP,Qq. It is difficult to compute the exact continuous OT solution for an arbitrary pair pP,Qq. As a compromise, we compute an approximate transport map as the gradient of an ICNN using tW2s. That is, we find ψθ parameterized as an ICNN such that ∇ψθ7P « Q. Then, the modified pair pP,∇ψθ7Pq can be used to benchmark OT solvers. 
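To make the key idea above concrete, the following PyTorch sketch builds a toy benchmark pair (P, ∇ψ#P) from a hand-written convex potential ψ. In the paper the potentials are fitted DenseICNNs, so the specific ψ, names, and dimensions here are purely illustrative:

import torch

def psi(x):
    """A hand-written convex potential psi: R^2 -> R (convex because it is a sum of a
    positive-definite quadratic form and a log-sum-exp of affine functions of x)."""
    A = torch.tensor([[1.5, 0.3], [0.3, 0.8]])                    # positive definite
    quad = 0.5 * torch.einsum('bi,ij,bj->b', x, A, x)
    lse = torch.logsumexp(x @ torch.tensor([[1., -1.], [2., 0.5]]), dim=1)
    return quad + lse

def brenier_map(x):
    """T*(x) = grad psi(x): by Brenier's theorem, the exact OT map from P to Q = grad psi # P."""
    x = x.clone().requires_grad_(True)
    (grad,) = torch.autograd.grad(psi(x).sum(), x)
    return grad

# Benchmark pair: P is any absolutely continuous measure we can sample from;
# Q is obtained by pushing samples of P through the gradient of psi.
x_p = torch.randn(4096, 2)     # samples from P
y_q = brenier_map(x_p)         # samples from Q = grad psi # P
# A candidate solver's map T_hat can now be scored against the known T* = brenier_map,
# e.g., with the L2-UVP metric defined in Section 4.2.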
We choose tW2s because it exhibits good performance in higher dimensions, but other solvers can also be used so long as ψθ is convex. Because of the choice of tW2s, subsequent evaluation might slightly favor ICNN-based methods. Extensions. Convex functions can be modified to produce more benchmark pairs. If ψ1, . . . , ψN are convex, then σpψ1, . . . , ψN q is convex when σ : RN Ñ R is convex and monotone. For example, c ¨ ψ1 (c ě 0q, ř n ψn, maxn ψn are convex, and their gradients produce new benchmark pairs. Inversion. If ∇ψθ is bijective, then the inverse transport map for pP,∇ψθ7Pq exists and is given by p∇ψθq´1. For each y P RD, the value p∇ψθq´1pyq can be obtained by solving a convex problem [39, M6], [16, M3]. All ICNNs ψθ we use have bijective gradients ∇ψθ, as detailed in Appendix B.1. 4 Benchmark Details and Results We implement our benchmark in PyTorch and provide the pre-trained transport maps for all the benchmark pairs. The code is publicly available at https://github.com/iamalexkorotin/Wasserstein2Benchmark The experiments are conducted on 4 GTX 1080ti GPUs and require about 100 hours of computation (per GPU). We provide implementation details in Appendix B. 4.1 Datasets High-dimensional measures. We develop benchmark pairs to test whether the OT solvers can redistribute mass among modes of measures. For this purpose, we use Gaussian mixtures in dimensions D “ 21, 22, . . . , 28. In each dimension D, we consider a random mixture P of 3 Gaussians and two random mixtures Q1,Q2 of 10 Gaussians. We train approximate transport maps ∇ψi7P « Qi (i “ 1, 2) using the tW2s solver. Each potential is an ICNN with DenseICNN architecture [16, MB.2]. We create a benchmark pair via the half-sum of computed potentials pP, 12 p∇ψ1 `∇ψ2q7Pq. The first measure P is a mixture of 3 Gaussians and the second is obtained by averaging potentials, which transforms it to approximate mixtures of 10 Gaussians. See Appendix A.1 and Figure 1 for details. Images. We use the aligned images of CelebA64 faces dataset1 [22] to produce additional benchmark pairs. First, we fit 3 generative models (WGAN-QC [19]) on the dataset and pick intermediate training checkpoints to produce continuous measures QkEarly,QkMid,QkLate for the first 2 models (k “ 1, 2) and the final checkpoint of the third model (k “ 3) to produce measure P3Final. To make measures absolutely continuous, we add small Gaussian noise to the generator’s output. Each checkpoint (Early, Mid, Late, Final) represents images of faces of a particular quality. Next, for k P t1, 2u and Cpkt P tEarly, Mid, Lateu, we use tW2s solver to fit an approximate transport map ∇ψkCpkt for the pair pP3Final,QkCpktq, i.e., ∇ψkCpkt7P3Final « QkCpkt. The potential ψkCpkt is a convolutional ICNN with ConvICNN64 architecture (MB.1). For each Cpkt, we define a benchmark pair pPCelebA,QCpktq def“pP3Final, rp∇ψ1Cpkt `∇ψ2Cpktq{2s7P3Finalq. See Appendix A.2 and Figure 2 for details. 4.2 Metrics and Baselines Baselines. We propose three baseline methods: identity tIDs, constant tCs and linear tLs. The identity solver outputs T id “ idRD as the transport map. The constant solver outputs the mean value of Q, i.e., T 0 ” EQrys ” µQ. The linear solver outputs T 1pxq “ Σ ´ 12 P ` Σ 1 2 P ΣQΣ 1 2 P ˘ 1 2 Σ ´ 12 P px´ µPq ` µQ, i.e., the OT map between measures coarsened to Gaussians [1, Theorem 2.3]. Metrics. To assess the quality of the recovered transport map T̂ : RD Ñ RD from P to Q, we use unexplained variance percentage (UVP) [16]: L2-UVPpT̂ q def“ 100 ¨ }T̂ ´ T˚}2L2pPq{VarpQq%. 
Here T* is the OT map. For values ≈ 0%, T̂ approximates T* well. For values ≥ 100%, the map T̂ is far from optimal. The constant baseline provides L2-UVP(T⁰) = 100%.
¹http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
Figure caption (CelebA benchmark pairs): (P_CelebA, Q_Cpkt) := (P³_Final, ½(∇ψ¹_Cpkt + ∇ψ²_Cpkt)#P³_Final). In the visualization, Cpkt is Early.
To measure the quality of approximation of the derivative of the potential [id_{R^D} − T̂] ≈ ∇f* that is used to update generative models (6), we use cosine similarity (cos):
cos(id − T̂, id − T*) := ⟨T̂ − id, ∇ψ* − id⟩_{L2(P)} / (‖T* − id‖_{L2(P)} · ‖T̂ − id‖_{L2(P)}) ∈ [−1, 1].
To estimate the L2-UVP and cos metrics, we use 2^14 random samples from P.
4.3 Evaluation of Solvers on High-dimensional Benchmark Pairs
We evaluate the solvers on the benchmark and report the computed metric values for the fitted transport maps. For a fair comparison, in each method the potential f and the map H (where applicable) are parametrized as fθ = ½‖·‖² − ψθ and Hω = ∇φω respectively, where ψθ, φω use DenseICNN architectures [16, §B.2]. In the solvers tQCs, tLSs, tMM-Bs, tMMs we do not impose any restrictions on the weights θ, ω, i.e., ψθ, φω are usual fully connected nets with additional skip connections. We provide the computed metric values in Table 2 and visualize the fitted maps (for D = 64) in Figure 3. All the solvers perform well (L2-UVP ≈ 0, cos ≈ 1) in dimension D = 2. In higher dimensions, only tMMv1s, tMMs, tMMv2s, tW2s and their reversed versions produce reasonable results. However, the tMMv1s solver is slow since each optimization step solves a hard subproblem for computing f^c. The maximin solvers tMMs, tMMv2s, tMM:Rs are also hard to optimize: they either diverge from the start or diverge after converging to a nearly-optimal saddle point. This behavior is typical for maximin optimization and possibly can be avoided by a more careful choice of hyperparameters. For tQCs, tLSs, tMM-Bs, as the dimension increases, the L2-UVP grows drastically. Only tMM-Bs notably outperforms the trivial tLs baseline. The error of tMM-Bs is explained by the overestimation of the inner problem in (8), yielding biased optimal potentials. The error of tLSs comes from the bias introduced by regularization [36]. In tQCs, the error arises because a discrete OT problem solved on sampled mini-batches, which is typically biased [5, Theorem 1], is used to update fθ. Interestingly, although tQCs, tLSs are imprecise in terms of L2-UVP, they provide a high cos metric. Due to optimization issues and performance differences, wall-clock times for convergence are not representative. All solvers except tMMv1s converged in several hours. Among the solvers that substantially outperform the linear baseline, i.e. tMMs, tMMv1s, tMMv2s, tW2s, tMM-Bs, the fastest converging one is tMM-Bs, but it is biased. tMMs, tMMv2s, tW2s require more time.
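For reference, the two metrics used above (L2-UVP and cos) can be estimated directly from samples of P. A minimal sketch with our own helper names, assuming T_hat and T_star act on a batch of points:

import torch

def l2_uvp(T_hat, T_star, x_p):
    """L2-UVP(T_hat) = 100 * ||T_hat - T*||^2_{L2(P)} / Var(Q), estimated on samples x_p ~ P."""
    y_hat, y_star = T_hat(x_p), T_star(x_p)
    var_q = y_star.var(dim=0).sum()                       # total variance of Q = T* # P
    return 100.0 * ((y_hat - y_star) ** 2).sum(dim=1).mean() / var_q

def cos_metric(T_hat, T_star, x_p):
    """cos(id - T_hat, id - T*) in [-1, 1]; values near 1 mean the estimated
    derivative of the potential points in the right direction."""
    d_hat, d_star = T_hat(x_p) - x_p, T_star(x_p) - x_p
    inner = (d_hat * d_star).sum(dim=1).mean()
    norm = d_hat.pow(2).sum(dim=1).mean().sqrt() * d_star.pow(2).sum(dim=1).mean().sqrt()
    return inner / norm

# Sanity check on 2^14 samples, as in the paper: the true map scores 0% / 1.0.
x_p = torch.randn(2 ** 14, 2)
T_star = lambda x: 2.0 * x + 1.0                          # a toy "ground-truth" map
print(float(l2_uvp(T_star, T_star, x_p)))                 # -> 0.0
print(float(cos_metric(lambda x: 1.5 * x, T_star, x_p)))  # strictly between 0 and 1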
In turn, in tMM:Rs we parametrize Tθ by UNet and gω by ResNet. We compute the transport map QCpkt Ñ PCelebA for each solver on three image benchmarks. The results are in Figure 4 and Table 3 and echo patterns observed on high-dimensional problems (M4.3). tQCs, tMM-Bs suffer from extreme bias thanks to the high dimension of images, and the derivative of W22 computed by these solvers is almost orthogonal to the true derivative (cos « 0). This means that these solvers do not extract W22. tMMs, tMM:Rs, tW2s recover the transport maps well. tMMs’s map is slightly noisier than the one by tMM:Rs, a minor example of gradient deviation. 4.5 Evaluation of Solvers in Generative Modeling of CelebA 64ˆ 64 Faces Based on our previous evaluation, many existing neural OT solvers are notably imprecise. This leads us to ask: To what extent does solver quality matter in real-world applications? To address this question, we evaluate the most promising solvers in the task of generative modeling for CelebA 64 ˆ 64 images of faces. For comparison, we add tQCs, which has good generative performance [19]. For each solver, we train a generative network Gα with ResNet architecture from [19] to map a 128-dimensional normal distribution S to the data distribution Q. As the loss function for generator, we use W22pPα,Qq “W22pGα7S,Qq estimated by each solver. We perform GAN-style training, where gradient updates of the generator alternate with gradient steps of OT solver (discriminator) (MB.2.3). We show sample generated images in the top row of each subplot of Figure 5 and report FID [13]. On the bottom row, we show the pushforward of the OT map from Pα “ Gα7S to Q extracted from the OT solver. Since the model converged (Pα « Q), the map should be nearly equal to the identity. tW2s provides the least quality (Figure 5a). This can be explained by the use of ConvICNN: the other solvers use convolutional architectures and work better. In general, the applicability of ICNNs to image-based tasks is questionable [16, M5.3] which might be a serious practical limitation. tQCs has strong generative performance (Figure 5b). However, as in M4.3-4.4, the recovered map is far from the identity. We suspect this solver has decent generative performance because it approximates some non-W22 dissimilarity measure in practice. tMMs results in a generative model that produces blurry images (Figure 5c). The computed transport map idRD ´∇fθ is too far from the identity due to the gradient deviation. This leads to inaccurate gradient computation used to update the generator and explains why the generator struggles to improve. We emphasize that in M4.4 tMMs does not notably suffer from the gradient deviation. Probably, this is due to measures being absolutely continuous and supported on the entire RD. This is not the case in our generative modeling setup, where generated and data measures are supported on low-dimensional manifolds in RD. Reversed tMM:Rs overcomes the problem of tMMs with the gradient deviation but still leads to blurry images (Figure 5d). Interestingly, the fitted transport map Tθ significantly improves the quality and images Tθ ˝Gαpzq are comparable to the ones with tQCs solver (Figure 5b). We emphasize that formulations from tMMs, tMM:Rs solvers are maximin: using them in GANs requires solving a challenging min-max-min optimization problem. To handle this, we use three nested loops and stochastic gradient descent-ascent-descent. 
In our experiments, the training was not stable and often diverged: the reported results use the best hyperparameters we found, although there may exist better ones. The difficulty in selecting hyperparameters and the unstable training process are limitations of these solvers that need to be addressed before using in practice. 5 Conclusion Our methodology creates pairs of continuous measures with ground truth quadratic-cost optimal transport maps, filling the missing gap of benchmarking continuous OT solvers. This development allows us to evaluate the performance of quadratic-cost OT solvers in OT-related tasks. Beyond benchmarking the basic transport problem, our study of generative modeling reveals surprising patterns: bad OT solvers can yield good generative performance, and simply reversing asymmetric solvers can affect performance dramatically. Limitations. We rely on ICNN gradients as W2 optimal transport maps to generate pairs of benchmark measures. It is unclear whether analogous constructions can be used for other costs such as W1. We also limit our benchmark pairs to be absolutely continuous measures while limiting the ground truth transport maps to be gradients of ICNNs, which may not have enough representational power. While we reveal a discrepancy between performance in OT-related tasks and performance in generative modeling, in-depth study is needed to answer questions such as what exact dissimilarity metric tQCs implies that explains its generative performance while poorly approximating W2. Potential impact. We expect our benchmark to become a standard benchmark for continuous optimal transport as part of the ongoing effort of advancing computational OT, in particular, in its application to generative modeling. As a result, we hope our work can improve the quality and reusability of OT-related research. One potential negative is that our benchmark might narrow the evaluation of future OT solvers to the datasets of our benchmark. To avoid this, besides particular benchmark datasets, in M3 we describe a generic method to produce new benchmark pairs. ACKNOWLEDGEMENTS. The problem statement was developed in the framework of Skoltech-MIT NGP program. The work of Evgeny Burnaev was supported by the Ministry of Science and Higher Education of the Russian Federation grant No. 075-10-2021-068. The MIT Geometric Data Processing group acknowledges the generous support of Army Research Office grants W911NF2010168 and W911NF2110293, of Air Force Office of Scientific Research award FA9550-19-1-031, of National Science Foundation grants IIS-1838071 and CHS-1955697, from the CSAIL Systems that Learn program, from the MIT–IBM Watson AI Laboratory, from the Toyota–CSAIL Joint Research Center, from a gift from Adobe Systems, from an MIT.nano Immersion Lab/NCSOFT Gaming Program seed grant, and from the Skoltech–MIT Next Generation Program.
1. What is the main contribution of the paper regarding neural optimal transport? 2. How does the proposed benchmark differ from existing methods in assessing optimal transport solvers? 3. What are the strengths and weaknesses of the paper regarding its theoretical foundation and background information? 4. How does the reviewer suggest improving the benchmark, particularly in considering the boundary conditions and regularity of the OT maps? 5. Are there any concerns or suggestions regarding the experimental results and their presentation?
Summary Of The Paper Review
Summary Of The Paper This work introduces a continuous Wasserstein-2 benchmark to assess the qualities of different neural optimal transport solvers. The work uses input convex neural networks (ICNN) to construct pairs of measures whose ground truth OT maps can be obtained analytically. This strategy yields pairs of continuous benchmark measures in high-dimensional spaces such as spaces of images. The authors thoroughly evaluate existing optimal transport solvers. The study reveals crucial limitations of existing solvers. Review Optimal transportation maps are becoming more and more important and popular in the deep learning field. Due to their highly non-linear nature, reliable computation with high precision is challenging. So far, there is no solid benchmark to assess different solvers. This work proposes a benchmark to tackle this problem; hence it is important for the deep learning field. The manuscript is well written. The introduction is well motivated and encouraging. The theoretical foundation part is clearly written and easy to follow, but can be further improved. The background on optimal transport is insufficient. There are many existing methods based on Brenier's theorem and on solving Monge–Ampère equations. For example, Benamou–Brenier used a fluid-dynamics formulation to compute the L2 optimal transport map; Yau et al. establish the intrinsic connection between convex geometry and optimal transport, and solve the OT map problem using the Alexandrov formulation; see the work of Lei et al., "A Geometric Understanding of Deep Learning", Engineering, Volume 6, Issue 3, March 2020, Pages 361-374. The algorithm there has a theoretical foundation. The benchmark ignores one important aspect of the optimal transportation map: the boundary condition. In theory, the image of the OT map of the source domain should equal the target domain; this is the most difficult constraint to satisfy in practice and it crucially affects the computational result. According to Figalli's works, if the target domain is non-convex, then the OT map itself may not be continuous and therefore cannot be represented by deep neural networks. Even for the benchmark itself, the ground truth map may not be representable by the network. According to Yau's works, this is the reason for mode collapse. Hence this needs to be emphasized and carefully designed in the benchmark. The experimental results are convincing; different solvers are compared. It would be more helpful to add geometry-based approaches to the test. It would be more convincing if the convergence error were also reported. According to the theoretical analysis, the Brenier potential error is O(h^2), where h is the diameter of the cells in the tessellation. In general, the idea of a benchmark is important; it will make the whole field more rigorous. The current design can be further improved by emphasizing the second boundary condition and considering the regularity of the OT maps.
NIPS
Title Improving Barely Supervised Learning by Discriminating Unlabeled Samples with Super-Class Abstract In semi-supervised learning (SSL), a common practice is to learn consistent information from unlabeled data and discriminative information from labeled data to ensure both the immutability and the separability of the classification model. Existing SSL methods suffer from failures in barely-supervised learning (BSL), where only one or two labels per class are available, as the insufficient labels cause the discriminative information to be difficult or even infeasible to learn. To bridge this gap, we investigate a simple yet effective way to leverage unlabeled data for discriminative learning, and propose a novel discriminative information learning module to benefit model training. Specifically, we formulate the learning objective of discriminative information at the super-class level and dynamically assign different categories into different super-classes based on model performance improvement. On top of this on-the-fly process, we further propose a distribution-based loss to learn discriminative information by utilizing the similarity between samples and super-classes. It encourages the unlabeled data to stay closer to the distribution of their corresponding super-class than to those of others. Such a constraint is softer than the direct assignment of pseudo labels, whereas the latter can be very noisy in BSL. We compare our method with state-of-the-art SSL and BSL methods through extensive experiments on standard SSL benchmarks. Our method can achieve superior results, e.g., an average accuracy of 76.76% on CIFAR-10 with merely 1 label per class. The code is available at https://github.com/GuanGui-nju/SCMatch.
1 Introduction
As a paradigm to reduce the dependency on a large amount of labeled data, semi-supervised learning (SSL) has received wide attention and use [1, 2]. Although existing advanced SSL methods [3, 4, 5, 6] can achieve outstanding performance even with fewer than 1% of labels on several datasets (e.g., CIFAR-10), the labeling process can still be lengthy, especially when there is a large number of object categories, which may preclude the deployment of SSL models in those applications. To tackle this challenge, barely-supervised learning (BSL), a novel paradigm with rising interest [3, 7], has been proposed recently to explore whether the model can be trained with extremely scarce labels, e.g., only one label per class.
∗Corresponding author. †Guan Gui and Yinghuan Shi are with the National Key Laboratory for Novel Software Technology and the National Institute of Healthcare Data Science, Nanjing University.
Unfortunately, current state-of-the-art SSL methods cannot adequately address the label-scarce challenge in BSL. As shown in Figure 1(a), many recent SSL methods can achieve promising performance when sufficient labeled data are provided, e.g., higher than 90% accuracy with more than 40 labels on CIFAR-10. However, these methods suffer severe performance degradation when the number of labels is reduced. For example, when only 10 labels are available on CIFAR-10, the test accuracy of FixMatch drops sharply by more than 45% compared to that with 40 labels. To explore the reasons for this performance drop, we then track the predicted class distribution of FixMatch during the training process.
As shown in Figure 1(b), FixMatch will end with the model collapse after training for 100 epochs, i.e., the model completely cannot distinguish different samples, and all samples are predicted as a single class. To analyze the reasons for above phenomenon, we prefer to investigate classification models in term of separability and immutability. Here immutability refers to the capacity of the model to be robust to perturbations. It can be mathematically expressed as argmax pm(y|ui) = argmax pm(y|α(ui)), where ui is a sample, y is the model prediction, and α(·) is a random perturbation. Separability, on the other hand, refers to the capacity of the model to differentiate two different categories of samples, i.e., argmax pm(y|ui) ̸= argmax pm(y|uj), where ui, uj are sampled from different categories. Figure 2 shows a graphic explanation of these two properties. For SSL classification models, in common practice, the immutability is often achieved by learning the consistent information of the unlabeled data, and the separability is achieved by learning the discriminative information of the labeled data. To achieve good performance, SSL models need to well balance their immutability and separability. However, such a balance is destroyed in BSL. The insufficient supervision information from the extremely scarce labels significantly damages the learning for separability, so that the model performance is dominated by the learning towards immutability. That’s why the model collapse could be observed when all samples are predicted as one same class. Motivated by these observations, in this paper, we aim to enhance the discriminative learning for the model’s separability under BSL. Since the labeled data are very limited, we explore how to mine additional discriminative supervision from unlabeled data. Although without label information, the unlabeled data could still provide some “latent guidance" to complement the process of learning only from labeled data. We hereby propose a novel module to dynamically form super-classes to “roughly categorize" unlabeled samples, then the discriminative information is learned by measuring the similarity between samples and super-classes, which is realized by our newly proposed loss function on the distribution level. Furthermore, with the improvement of the model, we gradually form more super-classes for finer categorization of unlabeled samples, aiming to provide more fine-grained discriminative information to guide the model training. In a nutshell, our proposed method is a simple yet effective way to mine discriminative information from unlabeled data. Compared to directly assigning pseudo labels to each sample [3, 4], which could be very noisy in BSL, learning the similarity between super-classes and samples is the softer guidance, thus reducing the error risk of pseudo labels. We evaluate our method on CIFAR-10, CIFAR-100, and STL-10, showing that our method outperforms all other SSL and BSL methods by a large margin. For example, only using one label per class on CIFAR-10, our method successfully avoided the occurrence of model collapse and achieved an accuracy of 76.76% with a variance of 6.78%. 2 Method Similar to the setting of SSL, a labeled set X and an unlabeled set U are also given in BSL. X = {(x1, y1), (x2, y2), . . . , (xn, yn)}, where yi denotes the label of the i-th labeled sample xi. Each sample is classified into one of nk classes denoted as {c1, c2, . . . , cnk}. U = {u1, u2, . . . 
, un}, where ui denotes the i-th unlabeled sample, and typically |X| ≪ |U|. In BSL, a more challenging setting is considered, |X| < 4nk, where only a few labeled samples are available. In the implementation, the samples are provided on a per-batch basis, with a batch of labeled data Bx and a batch of unlabeled data Bu. As discussed before, the key to BSL lies in training a robust and stable model by efficiently leveraging the unlabeled data together with such scarce labeled data. Unlike recent state-of-the-art SSL methods that only encourage consistency regularization on unlabeled data, our method aims to learn consistent and discriminative information from the unlabeled data simultaneously. As shown in Figure 3, we construct two modules to leverage unlabeled data accordingly, i.e., the consistent information learning module and the discriminative information learning module. In the consistent information learning module, we learn from the samples and their corresponding augmented versions, as in [3]. In the discriminative information learning module, we develop super-class distributions by clustering unlabeled samples within a mini-batch and then use them to minimize a novel distribution loss on unlabeled samples.
2.1 Consistent information learning module
Like most consistency-based SSL methods, we encourage the model to output the same predictions on two differently-augmented versions of the same sample. Specifically, we produce pseudo labels on weakly-augmented samples and use them as training targets for their corresponding strongly-augmented variants. Of them, the weak augmentation α(·) includes standard flip and shift operations, while the strong augmentation strategy A(·) consists of RandAugment [8] and CutOut [9]. Formally, this consistency-based unsupervised loss Lcon is defined as
Lcon = (1/|Bu|) Σ_{i=1}^{|Bu|} 1(max(pm(y|α(ui))) ≥ τ1) · H(pm(y|α(ui)), pm(y|A(ui))),  (1)
where H(p1, p2) denotes the standard cross entropy between p1 and p2, and τ1 is a pre-defined threshold to retain only high-confidence pseudo labels. As discussed in [3], τ1 is commonly set to a high value to alleviate confirmation bias in SSL.
2.2 Discriminative information learning module
In addition to relying on labeled data to learn discriminative information, we propose a novel module, an on-the-fly learning process that first forms super-classes and then exploits the similarity between super-classes and samples to improve the model's separability. One of the most intuitive ways to explore discriminative information is to generate class information for unlabeled data by clustering in the feature space. Ideally, samples of the same category will form a separate cluster so that the model can discriminate them from the samples in all other nk − 1 clusters. However, forming such fine-grained clusters carries a considerable risk of errors, especially for tasks with a large number of object categories. What is worse, in the early training stage, due to the weak feature extraction ability, the model inevitably produces wrong discriminative information, resulting in severe accumulated errors. To properly explore the discriminative information in unlabeled data, we propose the following designs:
• First, instead of fine-grained clusters, we simplify the clustering task by allowing a cluster to contain multiple categories, i.e., a super-class cluster. In this way, the discriminative information is relatively weakened but more robust to clustering errors.
• Second, it can still be noisy to adopt super-class labels as training targets for unlabeled data. Therefore, we tend to utilize the similarity between each sample and the super-classes rather than explicitly assign training targets for unlabeled data. Concretely, our method encourages the unlabeled samples to stay closer to the predicted class probability distribution of their corresponding super-class than to those of others. Such a smooth constraint can better tolerate the inaccurate prediction of a single sample as well as potential clustering errors.
• Third, although the discriminative information provided by the coarse-grained clusters is robust, it becomes insufficient as the model's separability improves. Thus we propose the progressive construction of super-classes to gradually increase the clustering number so that our discriminative information learning module can adapt to the model evolution during the training process. When the cluster number is small, each super-class provides more moderate discriminative information, and we call it a low-level super-class. In contrast, a large cluster number forces each super-class to abstract more concrete information, and we call it a high-level super-class.
Super-class representation
As shown in Figure 3, we employ standard K-Means on the features z_i^w of weakly-augmented samples within a mini-batch. With a given target number K of super-classes, these features are gathered into K clusters, and each cluster is denoted by Ck, k = 1, 2, . . . , K. Each super-class can then be represented by the mean distribution of all the samples it contains. Given an unlabeled sample ui and its predicted class probability distributions pm(y|α(ui)) and pm(y|A(ui)), the super-class distribution qk for each super-class Ck can be calculated by
qk = (1/|Ck|) Σ_{i=1}^{|Ck|} pm(y|α(ui)),  with ui ∈ Ck.  (2)
In this way, the super-class distribution represents the distribution characteristics of the categories it contains, so that it can be well discriminated from other super-classes. As shown in the lower half of Figure 3, for the automobile-and-airplane super-class, it may not be easy to determine the exact category of a single sample. However, a sample in this super-class should be closer to the super-class distribution of the automobile-and-airplane super-class than to those of other super-classes. Additionally, the super-class is more robust to noisy samples. For samples likely to be misclassified (e.g., samples inside the dashed box), their negative impact on the super-class distribution is well suppressed by the other, correctly classified samples.
Discriminative distribution loss
To be distinguished from other super-classes, a sample is supposed to be more similar, in distribution, to its corresponding super-class. Inspired by [10, 11], we design a contrastive-like distribution loss to distinguish the sample from other super-classes. Formally, this auxiliary distribution loss is
Ldis = −(1/|Bu|) Σ_{i=1}^{|Bu|} 1(max(pm(y|α(ui))) ≥ τ2) · log [ exp(pm(y|A(ui)) · qk / T) / Σ_{j=1}^{K} exp(pm(y|A(ui)) · qj / T) ],  (3)
where T is a common temperature parameter and ui is assigned to super-class Ck. As in Equation 1, we adopt a threshold τ2 to control which unlabeled samples are learned from. As mentioned before, the similarity between samples and super-classes is a weak constraint, so we can afford to use a lower threshold and learn from more samples. We provide an empirical value via extensive ablation studies.
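A compact PyTorch-style sketch of Eqs. (2)–(3) follows; the shapes, helper name, and default values are ours for illustration and this is not the authors' released implementation (here `temp` plays the role of the temperature T):

import torch
import torch.nn.functional as F

def superclass_distribution_loss(p_weak, p_strong, cluster_ids, K, tau2=0.8, temp=1.0):
    """p_weak, p_strong: (B, n_k) predicted class probabilities for the weak/strong views.
    cluster_ids: (B,) long tensor of super-class assignments from K-Means on weak-view features.
    Implements Eq. (3): each confident sample should be more similar, in distribution,
    to its own super-class distribution q_k (Eq. (2)) than to the other super-classes."""
    # Eq. (2): q_k is the mean weak-view distribution over the samples assigned to C_k.
    q = torch.zeros(K, p_weak.size(1), device=p_weak.device)
    q.index_add_(0, cluster_ids, p_weak)
    counts = torch.bincount(cluster_ids, minlength=K).clamp(min=1).unsqueeze(1)
    q = (q / counts).detach()                    # gradients flow only through the strong view

    sims = p_strong @ q.t() / temp               # (B, K): p_m(y|A(u_i)) . q_j / T
    log_prob = F.log_softmax(sims, dim=1)

    mask = (p_weak.max(dim=1).values >= tau2).float()       # indicator from Eq. (3)
    own = log_prob.gather(1, cluster_ids.unsqueeze(1)).squeeze(1)
    return -(mask * own).sum() / p_weak.size(0)

During training this term is added to the supervised and consistency losses with weight λdis, as in the total loss described in Section 2.3.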
Notice that we compute gradients only on strongly-augmented samples.
Progressive super-class construction
Although a small K reduces the clustering error, it comes at the cost that the learned discriminative information is limited. In the extreme case K = 1, the amount of information is zero: all samples belong to one super-class, and the model does not discriminate between any samples. From this point of view, we propose the progressive construction of super-classes to adapt to the model evolution during training. That is, when the model is not well trained at the beginning, we use a small K to form coarser super-classes to ease the clustering task and thus attain relatively reliable discriminative guidance. When the model is better trained, to avoid training stagnating due to the limitation of discriminative information, we gradually increase K to provide enhanced discriminative guidance. In practice, a daunting challenge is that, without prior knowledge, we do not know the most appropriate number of super-classes for the training samples. To this end, we design a dichotomous method and set the value range of K by
Ki ∈ {2, . . . , ⌈nk/4⌉, ⌈nk/2⌉, nk}.  (4)
The above formula restricts the value of K based on the principle of dichotomy so that frequent changes of K can be avoided. Especially when there are many classes in the sample set (e.g., 100 classes on CIFAR-100), it would be tedious and pointless to learn all K values. Furthermore, to ensure that the clustering task with each K is performed for a certain period, we adopt a linear-step growth strategy to adjust K dynamically:
K = Ki,  if Ki ≤ t/(α · ts) < Ki+1,  (5)
where t and ts denote the current iteration and the total number of iterations, respectively, and α ∈ (0, 1) controls the growth rate of K. With this clustering task, K super-classes are dynamically formed at each iteration.
Algorithm 1 Algorithm of our method
Input: Labeled batch Bx = {(xi, yi)}, unlabeled batch Bu = {ui}, weak augmentation strategy α(·), strong augmentation strategy A(·)
Parameters: thresholds τ1, τ2, temperature T, loss weights λcon, λdis
1: compute Lsup = (1/|Bx|) Σ_{i=1}^{|Bx|} H(pm(y|α(xi)), yi)
2: for t ← 1 to ts do
3:   for ui ∈ Bu do
4:     z_i^w = Encoder(α(ui))  // record features of weakly-augmented samples
5:     compute pm(y|α(ui)) and pm(y|A(ui))  // predictions for α(ui) and A(ui)
6:   end for
7:   Lcon = (1/|Bu|) Σ_{i=1}^{|Bu|} 1(max(pm(y|α(ui))) ≥ τ1) H(pm(y|α(ui)), pm(y|A(ui)))
8:   update K
9:   form super-classes by K-Means(K, z_i^w)
10:  qk = (1/|Ck|) Σ_{i=1}^{|Ck|} pm(y|α(ui)), ∀ui ∈ Ck
11:  Ldis = −(1/|Bu|) Σ_{i=1}^{|Bu|} 1(max(pm(y|α(ui))) ≥ τ2) log [exp(pm(y|A(ui)) · qk/T) / Σ_{j=1}^{K} exp(pm(y|A(ui)) · qj/T)]
12:  minimize the total loss L = Lsup + λcon·Lcon + λdis·Ldis
13: end for
2.3 Total Loss
Similar to most SSL methods, the supervised loss for a batch of labeled data Bx is obtained by a standard cross-entropy loss,
Lsup = (1/|Bx|) Σ_{i=1}^{|Bx|} H(pm(y|α(xi)), yi).  (6)
In summary, the total loss in our method is
L = Lsup + λcon·Lcon + λdis·Ldis,  (7)
where λcon and λdis are the weights of Lcon and Ldis, respectively. The full algorithm is provided in Algorithm 1.
3 Experiments
In this section, we validate the effectiveness of our proposed method by conducting experiments on widely-used SSL benchmark datasets: CIFAR-10, CIFAR-100 [12], and STL-10 [13].
3.1 Implementation details
To face the challenge of BSL, we randomly sample 1 or 2 labels for each class on these datasets.
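One implementation detail worth making concrete is the progressive schedule for K. The sketch below shows one plausible reading of the linear-step growth in Eq. (5); the helper name and exact thresholding are ours, and the released code may differ:

def current_K(t, t_s, K_values, alpha=0.3):
    """Linear-step growth of the number of super-classes K (one reading of Eq. (5)).
    t: current iteration, t_s: total iterations, K_values: sorted candidate K's
    from Eq. (4), alpha: fraction of training during which K keeps growing."""
    progress = min(t / (alpha * t_s), 1.0)            # reaches 1 after the first alpha*t_s steps
    idx = min(int(progress * len(K_values)), len(K_values) - 1)
    return K_values[idx]

# Example: CIFAR-10 setting with K in {3, 5, 10}, K grows during the first 30% of training.
K_values = [3, 5, 10]
t_s = 2 ** 20
for t in [0, int(0.1 * t_s), int(0.2 * t_s), int(0.5 * t_s)]:
    print(t, current_K(t, t_s, K_values))

With K_values = {3, 5, 10} and α = 0.3, each K value is held for roughly 10% of training before the schedule settles at nk, which matches the linear-step behavior described above.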
We adopt "WideResNet-28-2" and "WideResNet-28-8" [14] as the backbone for CIFAR-10 and CIFAR-100, respectively, while using "ResNet18" [15] for STL-10. For the consistent information learned module, we follow the same setting with [3], where τ1 = 0.95, |Bx| = 64, |Bu| = 7|Bx|. And for the discriminative information learned module, we set T = 1, τ2 = 0.8. In addition, Since the essence of the three losses of Lsup, Lcon, Ldis is in the form of cross entropy, it’s prefer to set λcon = λdis = 1 to further reduce of hyperparameters. For CIFAR-10 and STL-10 task, we set the K ∈ {⌊nk/3⌋, nk/2, nk} = {3, 5, 10}. For CIFAR-100, considering that the samples of each cluster should be sufficient, we set the K ∈ {nk/20, nk/10, nk/5} = {5, 10, 20}. In fact, the specific numerical setting of K has little effect on the model performance, and more analysis and experiments about K will be discussed in the later ablation experiments. Mean-Teacher 15.48 ± 3.19 17.50 ± 1.16 5.17 ± 2.52 8.26 ± 3.43 11.05 ± 6.45 15.99 ± 6.45 MixMatch 17.18 ± 4.45 26.45 ± 8.17 12.85 ± 2.21 21.56 ± 4.84 10.94 ± 5.18 21.48 ± 3.17 ReMixMatch 60.29 ± 15.20 78.56 ± 9.63 26.18 ± 3.79 35.90 ± 3.66 30.86 ± 10.80 45.58 ± 8.36 FixMatch 44.47 ± 24.99 80.46 ± 5.15 25.49 ± 4.37 35.55 ± 1.59 25.75 ± 8.99 48.98 ± 6.46 FixMatch (w/DA) 67.79 ± 15.42 84.16 ± 9.27 31.10 ± 2.29 43.22 ± 1.87 42.08 ± 6.24 54.76 ± 5.44 CoMatch 60.79 ± 12.42 81.19 ± 8.55 27.54 ± 4.25 36.98 ± 2.17 29.11 ± 9.31 50.20 ± 7.57 FlexMatch 66.07 ± 10.58 85.69 ± 6.24 31.50 ± 3.61 38.05 ± 2.66 41.17 ± 6.20 54.30 ± 5.65 SLA 65.87 ± 10.83 81.89 ± 6.77 28.45 ± 2.16 38.65 ± 2.67 32.38 ± 8.32 47.50 ± 6.38 LESS 64.40 ± 10.90 81.20 ± 5.60 28.20 ± 3.00 42.50 ± 3.20 34.25 ± 7.19 48.98 ± 5.19 our method 76.76 ± 6.78 88.49 ± 3.26 37.50 ± 1.72 45.62 ± 1.39 52.51 ± 3.20 57.98 ± 3.18 The model is trained with a total of 220 iterations, and the K increased in the first 30% iterations. We use an exponential moving average with a decay rate of 0.999 to test our model and repeat the same experiment for five runs with different seeds to report the mean accuracy. 3.2 Baseline methods First, FixMatch [3], Dash [16], CoMatch [5], FlexMatch [4] are the advanced semi-supervised models in recent years, and we compare these methods under the challenge of barely-supervised learning. We also use FixMatch with the distribution alignment (DA). SLA [6] and LESS [7] are the latest models on BSL, and we also use them as our comparison method. In addition to this, we also select some classical semi-supervised methods such as MeanTeacher [17], MixMatch [18] and ReMixMatch [19] for comparison. 3.3 Experimental results Performance comparisons. In Table 1, we compare the test accuracy of our proposed method against recent SSL and BSL methods. It can be seen that our results are state-of-the-art in all settings. Especially when there is only one label per class, our method compensates for the shortage of labeled data by mining latent discriminative information from unlabeled data, thus showing enormous superiority. LESS [7], recent work on BSL, since it generates predictions for samples with low confidence and then learns more consistent information, still ignores the learning of discriminative information, it cannot solve the challenges in BSL. On CIFAR-10 task with 10 labels, our method achieves the mean accuracy of 76.76%, which outperforms other methods by 10%. 
On the STL-10 task with 10 labels, the recent BSL methods LESS and SLA achieve accuracies of 34.25% and 32.38%, respectively, while our method achieves a mean accuracy of 52.51%, an improvement of nearly 20%. For the larger CIFAR-100 dataset, our method also outperforms the other methods by at least 6% when there is 1 label per class. Besides, we can see that, regardless of the dataset, the performance of our method when using only 1 label per class is close to or even exceeds the performance of other methods when using 2 labels per class. On the CIFAR-100 task, LESS and SLA achieve mean accuracies of 42.50% and 38.65% with 200 labels, while our method achieves a mean accuracy of 37.50% with half of the labels they use. As mentioned by [3], the quality of very few labeled data will significantly affect the performance of the model. Taking the CIFAR-10 task with 10 labels as an example, the variances of the advanced SSL methods and BSL methods are all more than 10%, while the variance of our method is only 6.78%. These results further illustrate that our method can alleviate the dependence on labeled data by learning discriminative information from unlabeled data. We also find that distribution alignment, which forces the alignment of probability distributions, is still an effective technique under BSL. From the results of FixMatch (w/DA), we can see that DA successfully helps FixMatch improve its performance significantly. However, DA is a technique that relies on prior information, while our method does not rely on any prior information and can still achieve better performance.
Stability of the model. First of all, we discuss the phenomenon of model collapse under the BSL challenge. For a fair comparison, we use the same random seed in each trial for FixMatch and our method. Since only 1 label per class is available, an SSL method that depends on labeled data to learn discriminative information is volatile. As shown in Table 2, the performance of FixMatch is extremely unstable: it can achieve a very high accuracy of 85.11% when seed = 3 but an extremely low accuracy of 17.09% when seed = 4. In contrast, by integrating the proposed super-class distributions to provide more discriminative information, our method can successfully alleviate model collapse: the accuracy exceeded 70% in all experiments and also exceeded 80% in some of them.
Performance under SSL settings. We also analyze our method in standard SSL settings where sufficient labeled data are provided. As shown in Table 3, we test our method on CIFAR-10 with 40, 250, and 4000 labels. It can be seen that when the number of labels increases, our method is not state-of-the-art, but the gap with other methods is within 1%. This can be attributed to these methods using other advanced techniques in the consistent information learning process. For example, FlexMatch [4] and SLA [6] both leverage prior knowledge of class proportions, and CoMatch [5] leverages graph-based contrastive learning, etc. In contrast, we study an independent module for solving BSL, so when labels are plentiful, our method does not prevail. However, even though our implementation is based on FixMatch [3], the results show that our method can still improve slightly when the number of labels is large.
3.4 Ablation study
Performance under different strategies of K. We explore the effect of different strategies for K on the model: (1) fix-mode, where the number of super-classes remains constant during model training.
(2) linear-mode, the number of super-classes increases linearly during model training. (3) exp-mode, the number of super-classes increases exponentially. (4) step-mode, a step-by-step jump growth based on linear-mode. For the fix-mode strategy, we run experiments with different fixed values of K. As shown in Figure 4(a), when K is small, the model tends to outperform the variants with larger K. When K is large, the model in the early stage cannot provide high-quality features for forming high-level super-classes, so the learned discriminative information carries an extremely high risk of error, leading to the model's failure. On the other hand, when K is small, our method can learn effective discriminative information from these low-level super-classes. However, as the model improves, the limited discriminative information provided by low-level super-classes can no longer help the model learn continuously, so its performance stagnates. It is worth noting that even with a fixed K, the model can learn a certain degree of discriminative information from the super-classes to face the challenge of BSL, and its performance still exceeds that of the other methods. Linear-mode, exp-mode, and step-mode can all solve the problem of fix-mode described above, at the cost of an additional hyperparameter that controls the growth rate of K. As shown in Figure 4(b), we conduct experiments with different growth rates, and it turns out that the growth rate of K does not affect the performance of the model much. In addition, since these modes can explore different levels of discriminative information, their performance is significantly better than that of fix-mode. Although the performance of these modes is exceptionally close, we prefer step-mode as it is more suitable for large datasets; e.g., it is impractical to increase K from 3 to 100 sequentially when we test on CIFAR-100.

Performance under different τ2. We investigate 5 different τ2 values on the CIFAR-10 dataset with 10 labels. As shown in Figure 4(c), the test performance is best when τ2 = 0.8. This shows that appropriately lowering the threshold allows the model to learn discriminative information from more samples, thereby helping training. However, if the threshold τ2 is too low, the noise among the selected samples increases, which is not conducive to training.

4 Related Work
Recent popular semi-supervised learning studies can be classified into entropy minimization (EM) based methods and consistency regularization (CR) based methods. Self-training is the typical representative of EM-based methods. In these methods [1], the model is first trained on the provided labeled data and then used to generate pseudo-labels for unlabeled data. After that, such methods add the unlabeled data with high-confidence predictions into the labeled set to retrain the model, repeating this process until all unlabeled data are involved [20]. Recent studies tend to involve more advanced techniques in this framework to enhance SSL performance. [21] introduces multiple views to provide more robust pseudo-labels. LaSSL [22] and Curriculum Labeling [23] integrate contrastive learning and curriculum learning techniques to further improve the accuracy of pseudo-labels. As the most widely used and successful technique in recent SSL methods, CR assumes that the semantics of a sample should remain consistent after data perturbations [24, 17, 18, 25, 26].
FixMatch [3] combines strong augmentation techniques [8, 9] with pseudo-labeling: the high-confidence labels of weakly augmented samples are used to guide the learning of strongly augmented samples. Although it represents a major breakthrough in conventional semi-supervised learning, it still cannot avoid model collapse. [16, 4] further dynamically adjust the confidence threshold based on FixMatch. Although this lets the model learn from more low-confidence samples and improves performance, it cannot cope with the lack of discriminative information under BSL. In the literature, there have been few works on barely-supervised learning. FixMatch [3] initially proposed the concept of BSL and emphasized that the quality of the labeled data plays a crucial role in test performance. Our experimental results also demonstrate that its test accuracy has very high variance under BSL settings. The recent SLA [6] also achieved better performance in BSL by formulating an optimal transport problem between samples and labels; it introduced many extra hyper-parameters and adopted the Sinkhorn-Knopp algorithm to solve the optimization problem approximately. In contrast, our method avoids such complicated operations while effectively improving BSL performance. Another work [7] argues that the dilemma in BSL is that no pseudo-labels can be predicted with high confidence, so online deep clustering is used to supplement the pseudo-labels predicted by the model. Although more pseudo-labels can be used, it still only learns consistent information. [27] is a very recent work that uses coarse-grained class labels to guide the SSL model. However, it requires strong prior knowledge about the class hierarchy in advance, whereas we face BSL scenarios without any prior knowledge, where such a hierarchy may not even exist.

5 Conclusion
In this paper, we attribute the failure of SSL methods in the face of BSL to insufficient discriminative information learning. To tackle this problem, we design a discriminative learning module that leverages unlabeled data for additional discriminative supervision. In this module, super-classes are dynamically re-formed as the model trains, and the discriminative information is learned by measuring the similarity between samples and super-classes. We evaluate our method on several SSL benchmarks, and the results show that it outperforms other methods in BSL.

6 Acknowledgment
This work was supported by the Science and Technology Innovation 2030 New Generation Artificial Intelligence Major Project (2021ZD0113303), the NSFC Program (62222604, 62206052, 62192783), the CAAI-Huawei MindSpore Project (CAAIXSJLJJ-2021-042A), the China Postdoctoral Science Foundation Project (2021M690609), the Jiangsu Natural Science Foundation Project (BK20210224), and the CCF-Lenovo Blue Ocean Research Fund.
1. What is the focus and contribution of the paper on semi-supervised learning? 2. What are the strengths of the proposed approach, particularly in terms of its ability to improve discriminability? 3. What are the weaknesses of the paper, especially regarding its effectiveness in certain scenarios? 4. Do you have any concerns or questions about the method's ability to handle imbalanced data? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
Building on the notion of barely supervised learning and empirical analyses of it, the paper introduces a super-class-based regularization that improves the discriminability of unlabeled data instances. It largely improves performance on various benchmark datasets with high confidence. The introduction of clustered super-classes is somewhat novel in semi-supervised learning.

Strengths And Weaknesses
Strengths: The introduction of super-classes is the main novelty of this paper. It mitigates the imbalance in barely supervised learning between immutability and separability, which is sound.
Weaknesses: Applying the method to semi-supervised learning with more labels yields ineffective performance, which is not explainable from the view of immutability and separability. If this really matters, the performance could be calibrated by tuning the hyper-parameters that balance the consistency learning and the introduced discriminative learning.

Questions
Stated in the weaknesses section.

Limitations
Stated in the weaknesses section.
NIPS
Title
Improving Barely Supervised Learning by Discriminating Unlabeled Samples with Super-Class

Abstract
In semi-supervised learning (SSL), a common practice is to learn consistent information from unlabeled data and discriminative information from labeled data to ensure both the immutability and the separability of the classification model. Existing SSL methods suffer from failures in barely-supervised learning (BSL), where only one or two labels per class are available, as the insufficient labels cause the discriminative information to be difficult or even infeasible to learn. To bridge this gap, we investigate a simple yet effective way to leverage unlabeled data for discriminative learning, and propose a novel discriminative information learning module to benefit model training. Specifically, we formulate the learning objective of discriminative information at the super-class level and dynamically assign different categories into different super-classes based on model performance improvement. On top of this on-the-fly process, we further propose a distribution-based loss to learn discriminative information by utilizing the similarity between samples and super-classes. It encourages the unlabeled data to stay closer to the distribution of their corresponding super-class than those of others. Such a constraint is softer than the direct assignment of pseudo labels, while the latter could be very noisy in BSL. We compare our method with state-of-the-art SSL and BSL methods through extensive experiments on standard SSL benchmarks. Our method can achieve superior results, e.g., an average accuracy of 76.76% on CIFAR-10 with merely 1 label per class. The code is available at https://github.com/GuanGui-nju/SCMatch.

1 Introduction
As a paradigm to reduce the dependency on a large amount of labeled data, semi-supervised learning (SSL) has been widely studied and utilized [1, 2]. Although existing advanced SSL methods [3, 4, 5, 6] can achieve outstanding performance even with less than 1% labels on several datasets (e.g., CIFAR-10), the labeling process can still be lengthy, especially when there is a large number of object categories, which may preclude the deployment of SSL models in those applications. To tackle this challenge, barely-supervised learning (BSL), a novel paradigm with rising interest [3, 7], has been proposed recently to explore whether a model can be trained with extremely scarce labels, e.g., only one label per class.

∗Corresponding author. †Guan Gui and Yinghuan Shi are with the National Key Laboratory for Novel Software Technology and the National Institute of Healthcare Data Science, Nanjing University. 36th Conference on Neural Information Processing Systems (NeurIPS 2022).

Unfortunately, current state-of-the-art SSL methods cannot adequately address the label-scarce challenge in BSL. As shown in Figure 1(a), many recent SSL methods can achieve promising performance when sufficient labeled data are provided, e.g., higher than 90% accuracy with more than 40 labels on CIFAR-10. However, these methods suffer severe performance degradation when the label amount is reduced. For example, when only 10 labels are available on CIFAR-10, the test accuracy of FixMatch drops sharply by more than 45% compared to that with 40 labels. To explore the reasons for this performance drop, we track the predicted class distribution of FixMatch during the training process.
As shown in Figure 1(b), FixMatch ends in model collapse after training for 100 epochs, i.e., the model completely fails to distinguish different samples, and all samples are predicted as a single class. To analyze the reasons for the above phenomenon, we investigate classification models in terms of separability and immutability. Here, immutability refers to the capacity of the model to be robust to perturbations. It can be mathematically expressed as argmax pm(y|ui) = argmax pm(y|α(ui)), where ui is a sample, y is the model prediction, and α(·) is a random perturbation. Separability, on the other hand, refers to the capacity of the model to differentiate samples from two different categories, i.e., argmax pm(y|ui) ≠ argmax pm(y|uj), where ui and uj are sampled from different categories. Figure 2 shows a graphic explanation of these two properties. For SSL classification models, in common practice, immutability is achieved by learning the consistent information of the unlabeled data, and separability is achieved by learning the discriminative information of the labeled data. To achieve good performance, SSL models need to balance their immutability and separability well. However, such a balance is destroyed in BSL. The insufficient supervision from the extremely scarce labels significantly damages the learning of separability, so that the model performance is dominated by the learning towards immutability. That is why model collapse is observed, with all samples predicted as one and the same class.

Motivated by these observations, in this paper we aim to enhance the discriminative learning for the model's separability under BSL. Since the labeled data are very limited, we explore how to mine additional discriminative supervision from unlabeled data. Although without label information, the unlabeled data can still provide some "latent guidance" to complement the process of learning only from labeled data. We hereby propose a novel module to dynamically form super-classes to "roughly categorize" unlabeled samples; the discriminative information is then learned by measuring the similarity between samples and super-classes, which is realized by our newly proposed loss function at the distribution level. Furthermore, with the improvement of the model, we gradually form more super-classes for finer categorization of unlabeled samples, aiming to provide more fine-grained discriminative information to guide the model training. In a nutshell, our proposed method is a simple yet effective way to mine discriminative information from unlabeled data. Compared to directly assigning pseudo labels to each sample [3, 4], which could be very noisy in BSL, learning the similarity between super-classes and samples is softer guidance, thus reducing the risk of erroneous pseudo labels. We evaluate our method on CIFAR-10, CIFAR-100, and STL-10, showing that our method outperforms all other SSL and BSL methods by a large margin. For example, using only one label per class on CIFAR-10, our method successfully avoids model collapse and achieves an accuracy of 76.76% with a standard deviation of 6.78%.

2 Method
Similar to the setting of SSL, a labeled set X and an unlabeled set U are given in BSL: X = {(x1, y1), (x2, y2), . . . , (xn, yn)}, where yi denotes the label of the i-th labeled sample xi. Each sample is classified into one of nk classes denoted as {c1, c2, . . . , cnk}. U = {u1, u2, . . .
, un}, where ui denotes the i-th unlabeled sample, and typically |X| ≪ |U|. In BSL, a more challenging setting is considered, |X| < 4nk, where only a few labeled data are available. In the implementation, the samples are provided on a per-batch basis, with a batch of labeled data Bx and a batch of unlabeled data Bu. As discussed before, the key to BSL lies in training a robust and stable model by efficiently leveraging the unlabeled data together with such scarce labeled data. Unlike recent state-of-the-art SSL methods that only encourage consistency regularization on unlabeled data, our method aims to learn consistent and discriminative information from the unlabeled data simultaneously. As shown in Figure 3, we construct two modules to leverage unlabeled data accordingly, i.e., the consistent information learning module and the discriminative information learning module. In the consistent information learning module, we learn from the samples and their corresponding augmented versions, as in [3]. In the discriminative information learning module, we develop super-class distributions by clustering unlabeled samples within a mini-batch and then use them to minimize a novel distribution loss on unlabeled samples.

2.1 Consistent information learning module
Like most consistency-based SSL methods, we encourage the model to output the same predictions on two differently-augmented versions of the same sample. Specifically, we produce pseudo labels on weakly-augmented samples and use them as training targets for their corresponding strongly-augmented variants. The weak augmentation α(·) includes standard flip and shift operations, while the strong augmentation strategy A(·) consists of RandAugment [8] and CutOut [9]. Formally, this consistency-based unsupervised loss Lcon is defined as

L_{con} = \frac{1}{|B_u|} \sum_{i=1}^{|B_u|} \mathbf{1}\big(\max(p_m(y\,|\,\alpha(u_i))) \ge \tau_1\big)\, H\big(p_m(y\,|\,\alpha(u_i)),\ p_m(y\,|\,A(u_i))\big)    (1)

where H(p1, p2) denotes the standard cross entropy between p1 and p2, and τ1 is a pre-defined threshold that retains only high-confidence pseudo labels. As discussed in [3], τ1 is commonly set to a high value to alleviate confirmation bias in SSL.

2.2 Discriminative information learning module
In addition to relying on labeled data to learn discriminative information, we propose a novel module, an on-the-fly learning process that first forms super-classes and then exploits the similarity between super-classes and samples to improve the model's separability. One of the most intuitive ways to explore discriminative information is to generate class information for unlabeled data by clustering in the feature space. Ideally, samples of the same category would form a separate cluster so that the model can discriminate them from the samples of all other nk − 1 clusters. However, forming such fine-grained clusters carries a considerable risk of errors, especially for tasks with a large number of object categories. Worse still, in the early training stage, due to the weak feature extraction ability, the model inevitably produces wrong discriminative information, resulting in severe accumulated errors. To properly explore the discriminative information for unlabeled data, we propose the following designs:
• First, instead of fine-grained clusters, we simplify the clustering task by allowing a cluster to contain multiple categories, i.e., a super-class cluster. In this way, the discriminative information is relatively weakened but more robust to clustering errors.
• Second, it can still be noisy to adopt the super-class label as the training target for unlabeled data. Therefore, we utilize the similarity between each sample and the super-classes rather than explicitly assigning training targets to unlabeled data. Concretely, our method encourages each unlabeled sample to stay closer to the predicted class probability distribution of its corresponding super-class than to those of the others. Such a smoother form of guidance can better tolerate the inaccurate prediction of a single sample as well as potential clustering errors.
• Third, although the discriminative information provided by coarse-grained clusters is robust, it becomes insufficient as the model's separability improves. Thus we propose the progressive construction of super-classes, gradually increasing the number of clusters so that our discriminative information learning module can adapt to the model evolution during the training process. When the cluster number is small, each super-class provides more moderate discriminative information, and we call it a low-level super-class. In contrast, a large cluster number forces each super-class to abstract more concrete information, and we call it a high-level super-class.

Super-class representation. As shown in Figure 3, we employ standard K-Means on the features zwi of the weakly-augmented samples within a mini-batch. With a given target number K of super-classes, these features are gathered into K clusters, and each cluster is denoted by Ck, k = 1, 2, . . . , K. Each super-class can then be represented by the mean distribution of all the samples it contains. Given an unlabeled sample ui and its predicted class probability distributions pm(y|α(ui)) and pm(y|A(ui)), the super-class distribution qk for each super-class Ck is calculated by

q_k = \frac{1}{|C_k|} \sum_{u_i \in C_k} p_m(y\,|\,\alpha(u_i))    (2)

In this way, the super-class distribution can represent the distribution characteristics of the categories it contains, so that it can be well discriminated from other super-classes. As shown in the lower half of Figure 3, in the automobile-and-airplane super-class, it may not be easy to determine the exact category of a single sample. However, the samples in this super-class should be closer to the distribution of the automobile-and-airplane super-class than to those of other super-classes. Additionally, the super-class is more robust to noisy samples. For samples likely to be misclassified (e.g., samples inside the dashed box), their negative impact on the super-class distribution is well suppressed by the other, correctly classified samples.

Discriminative distribution loss. To distinguish a sample from other super-classes, the sample is supposed to be more similar, in distribution, to its corresponding super-class. Inspired by [10, 11], we design a contrastive-like distribution loss to distinguish the sample from other super-classes. Formally, this auxiliary distribution loss is

L_{dis} = -\frac{1}{|B_u|} \sum_{i=1}^{|B_u|} \mathbf{1}\big(\max(p_m(y\,|\,\alpha(u_i))) \ge \tau_2\big) \log \frac{\exp\big(p_m(y\,|\,A(u_i)) \cdot q_k / T\big)}{\sum_{j=1}^{K} \exp\big(p_m(y\,|\,A(u_i)) \cdot q_j / T\big)}    (3)

where T is a common temperature parameter and ui is assigned to super-class Ck. As in Equation 1, we adopt a threshold τ2 to control which unlabeled samples are learned from. As mentioned before, the similarity between samples and super-classes is a weak constraint, so we can afford to use a lower threshold and learn from more samples. We provide an empirical value via extensive ablation studies.
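To make the above losses concrete, the sketch below computes the consistency loss of Eq. (1) and the super-class distributions and discriminative distribution loss of Eqs. (2)–(3) for a single mini-batch. It is an illustration only: the array names, the use of scikit-learn's vanilla K-Means, and the hard pseudo-labels in the consistency term are our assumptions rather than details taken from the paper, and in actual training these quantities would be computed in an autodiff framework with gradients flowing only through the strong-view predictions.

```python
import numpy as np
from sklearn.cluster import KMeans


def consistency_loss(p_weak, p_strong, tau1=0.95):
    """Sketch of Eq. (1): cross entropy between the hard pseudo-label of the weak
    view and the strong-view prediction, kept only for confident weak views."""
    mask = p_weak.max(axis=1) >= tau1                       # indicator 1(max >= tau_1)
    pseudo = p_weak.argmax(axis=1)                          # hard pseudo-labels
    ce = -np.log(p_strong[np.arange(len(pseudo)), pseudo] + 1e-12)
    return (ce * mask).sum() / len(pseudo)


def discriminative_distribution_loss(feat_weak, p_weak, p_strong, K, tau2=0.8, T=1.0):
    """Sketch of Eqs. (2)-(3) for one unlabeled mini-batch.

    feat_weak: (B, d) features of the weakly augmented samples
    p_weak:    (B, C) predicted class probabilities for the weak views
    p_strong:  (B, C) predicted class probabilities for the strong views
    """
    # Form K super-classes by clustering the weak-view features (Figure 3).
    assign = KMeans(n_clusters=K, n_init=10).fit_predict(feat_weak)          # (B,)

    # Super-class distribution q_k: mean weak-view prediction of its members (Eq. 2).
    q = np.stack([p_weak[assign == k].mean(axis=0) for k in range(K)])       # (K, C)

    # Dot-product similarity between each strong-view prediction and every q_k.
    logits = p_strong @ q.T / T                                              # (B, K)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))    # log-softmax over super-classes

    # Keep only samples whose weak-view confidence exceeds tau_2 (indicator in Eq. 3).
    mask = p_weak.max(axis=1) >= tau2                                        # (B,)
    per_sample = -log_prob[np.arange(len(assign)), assign]                   # -log p(assigned super-class)
    return (per_sample * mask).sum() / len(assign)                           # average over the whole batch
```

In the full objective these two terms would be combined with the supervised loss as in Eq. (7), L = Lsup + λcon·Lcon + λdis·Ldis, with λcon = λdis = 1 as reported in Section 3.1.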
Notice that we compute gradients only on strongly-augmented samples.

Progressive super-class construction. Although a small K reduces the clustering error, it comes at the cost that the learned discriminative information is limited. In the extreme case of K = 1, the amount of information is zero: all samples belong to one super-class, and the model does not discriminate between any samples. From this point of view, we propose the progressive construction of super-classes to adapt to the model evolution during training. That is, when the model is not well trained at the beginning, we use a small K to form coarser super-classes, which eases the clustering task and thus yields relatively reliable discriminative guidance. When the model is better trained, to avoid stagnation caused by the limited discriminative information, we gradually increase K to provide stronger discriminative guidance. In practice, a daunting challenge is that we do not know the most appropriate number of super-classes for the training samples without prior knowledge. To this end, we design a dichotomous scheme and restrict the value range of K to

K_i \in \{2, \ldots, \lceil n_k/4 \rceil, \lceil n_k/2 \rceil, n_k\}    (4)

The above formula restricts the values of K based on the principle of dichotomy so that frequent changes of K can be avoided. Especially when there are many classes in the sample set (e.g., 100 classes on CIFAR-100), it would be tedious and pointless to go through every possible value of K. Furthermore, to ensure that the clustering task with each K is performed for a certain period, we adopt a linear-step growth strategy to adjust K dynamically:

K = K_i, \quad \text{if } K_i \le \frac{t}{\alpha \cdot t_s} < K_{i+1}    (5)

where t and ts denote the current iteration and the total number of iterations, respectively, and α ∈ (0, 1) controls the growth rate of K. With this clustering task, K super-classes are dynamically formed at each iteration (one possible schedule is sketched below).

Algorithm 1 Algorithm of our method
Input: labeled batch Bx = {(xi, yi)}, unlabeled batch Bu = {ui}, weak augmentation strategy α(·), strong augmentation strategy A(·)
Parameters: thresholds τ1, τ2, temperature T, loss weights λcon, λdis
1: compute Lsup = (1/|Bx|) Σ_{i=1}^{|Bx|} H(pm(y|α(xi)), yi)
2: for t ← 1 to ts do
3:   for ui ∈ Bu do
4:     zwi = Encoder(α(ui))  // record features of weakly augmented samples
5:     compute pm(y|α(ui)) and pm(y|A(ui))  // predictions for the weak and strong views
6:   end for
7:   Lcon = (1/|Bu|) Σ_{i=1}^{|Bu|} 1(max(pm(y|α(ui))) ≥ τ1) H(pm(y|α(ui)), pm(y|A(ui)))
8:   update K according to Eq. (5)
9:   form super-classes C1, . . . , CK by K-Means(K, {zwi})
10:  qk = (1/|Ck|) Σ_{ui ∈ Ck} pm(y|α(ui)), for each super-class Ck
11:  Ldis = −(1/|Bu|) Σ_{i=1}^{|Bu|} 1(max(pm(y|α(ui))) ≥ τ2) log [exp(pm(y|A(ui)) · qk / T) / Σ_{j=1}^{K} exp(pm(y|A(ui)) · qj / T)]
12:  minimize the total loss L = Lsup + λcon Lcon + λdis Ldis
13: end for

2.3 Total Loss
Similar to most SSL methods, the supervised loss for a batch of labeled data Bx is the standard cross-entropy loss,

L_{sup} = \frac{1}{|B_x|} \sum_{i=1}^{|B_x|} H\big(p_m(y\,|\,\alpha(x_i)),\ y_i\big)    (6)

In summary, the total loss in our method is

L = L_{sup} + \lambda_{con} L_{con} + \lambda_{dis} L_{dis}    (7)

where λcon and λdis are the weights of Lcon and Ldis, respectively. The full algorithm is provided in Algorithm 1.

3 Experiments
In this section, we validate the effectiveness of our proposed method by conducting experiments on widely-used SSL benchmark datasets: CIFAR-10, CIFAR-100 [12], and STL-10 [13].

3.1 Implementation details
To face the challenge of BSL, we randomly sample 1 or 2 labels for each class on these datasets.
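The following sketch shows one plausible implementation of the progressive super-class schedule described by Eqs. (4)–(5), together with the settings reported in Section 3.1 (K grows only during roughly the first 30% of iterations). The candidate-set construction and the exact stepping rule are our reading of the text rather than the authors' code; in the reported experiments the candidate sets were chosen per dataset (e.g., {3, 5, 10} for CIFAR-10 and STL-10, {5, 10, 20} for CIFAR-100), which the optional argument below allows.

```python
import math


def super_class_schedule(t, t_s, n_k, alpha=0.3, candidates=None):
    """Return the number of super-classes K at iteration t (a sketch of Eqs. 4-5)."""
    if candidates is None:
        # Dichotomous candidate set, coarse to fine (Eq. 4).
        candidates = sorted({2, math.ceil(n_k / 4), math.ceil(n_k / 2), n_k})
    # Fraction of the growth phase completed; K stops growing after alpha * t_s iterations.
    progress = min(t / (alpha * t_s), 1.0)
    # Step through the candidates as training progresses (step-mode growth).
    idx = min(int(progress * len(candidates)), len(candidates) - 1)
    return candidates[idx]


# Example: CIFAR-10 (n_k = 10) with 2**20 training iterations and the reported candidates.
for step in (0, 2**17, 2**18, 2**19):
    print(step, super_class_schedule(step, t_s=2**20, n_k=10, candidates=[3, 5, 10]))
```

Other growth curves (the linear or exponential modes compared in the ablation study) could be obtained by changing how the training progress is mapped to an index into the candidate list.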
We adopt "WideResNet-28-2" and "WideResNet-28-8" [14] as the backbone for CIFAR-10 and CIFAR-100, respectively, while using "ResNet18" [15] for STL-10. For the consistent information learned module, we follow the same setting with [3], where τ1 = 0.95, |Bx| = 64, |Bu| = 7|Bx|. And for the discriminative information learned module, we set T = 1, τ2 = 0.8. In addition, Since the essence of the three losses of Lsup, Lcon, Ldis is in the form of cross entropy, it’s prefer to set λcon = λdis = 1 to further reduce of hyperparameters. For CIFAR-10 and STL-10 task, we set the K ∈ {⌊nk/3⌋, nk/2, nk} = {3, 5, 10}. For CIFAR-100, considering that the samples of each cluster should be sufficient, we set the K ∈ {nk/20, nk/10, nk/5} = {5, 10, 20}. In fact, the specific numerical setting of K has little effect on the model performance, and more analysis and experiments about K will be discussed in the later ablation experiments. Mean-Teacher 15.48 ± 3.19 17.50 ± 1.16 5.17 ± 2.52 8.26 ± 3.43 11.05 ± 6.45 15.99 ± 6.45 MixMatch 17.18 ± 4.45 26.45 ± 8.17 12.85 ± 2.21 21.56 ± 4.84 10.94 ± 5.18 21.48 ± 3.17 ReMixMatch 60.29 ± 15.20 78.56 ± 9.63 26.18 ± 3.79 35.90 ± 3.66 30.86 ± 10.80 45.58 ± 8.36 FixMatch 44.47 ± 24.99 80.46 ± 5.15 25.49 ± 4.37 35.55 ± 1.59 25.75 ± 8.99 48.98 ± 6.46 FixMatch (w/DA) 67.79 ± 15.42 84.16 ± 9.27 31.10 ± 2.29 43.22 ± 1.87 42.08 ± 6.24 54.76 ± 5.44 CoMatch 60.79 ± 12.42 81.19 ± 8.55 27.54 ± 4.25 36.98 ± 2.17 29.11 ± 9.31 50.20 ± 7.57 FlexMatch 66.07 ± 10.58 85.69 ± 6.24 31.50 ± 3.61 38.05 ± 2.66 41.17 ± 6.20 54.30 ± 5.65 SLA 65.87 ± 10.83 81.89 ± 6.77 28.45 ± 2.16 38.65 ± 2.67 32.38 ± 8.32 47.50 ± 6.38 LESS 64.40 ± 10.90 81.20 ± 5.60 28.20 ± 3.00 42.50 ± 3.20 34.25 ± 7.19 48.98 ± 5.19 our method 76.76 ± 6.78 88.49 ± 3.26 37.50 ± 1.72 45.62 ± 1.39 52.51 ± 3.20 57.98 ± 3.18 The model is trained with a total of 220 iterations, and the K increased in the first 30% iterations. We use an exponential moving average with a decay rate of 0.999 to test our model and repeat the same experiment for five runs with different seeds to report the mean accuracy. 3.2 Baseline methods First, FixMatch [3], Dash [16], CoMatch [5], FlexMatch [4] are the advanced semi-supervised models in recent years, and we compare these methods under the challenge of barely-supervised learning. We also use FixMatch with the distribution alignment (DA). SLA [6] and LESS [7] are the latest models on BSL, and we also use them as our comparison method. In addition to this, we also select some classical semi-supervised methods such as MeanTeacher [17], MixMatch [18] and ReMixMatch [19] for comparison. 3.3 Experimental results Performance comparisons. In Table 1, we compare the test accuracy of our proposed method against recent SSL and BSL methods. It can be seen that our results are state-of-the-art in all settings. Especially when there is only one label per class, our method compensates for the shortage of labeled data by mining latent discriminative information from unlabeled data, thus showing enormous superiority. LESS [7], recent work on BSL, since it generates predictions for samples with low confidence and then learns more consistent information, still ignores the learning of discriminative information, it cannot solve the challenges in BSL. On CIFAR-10 task with 10 labels, our method achieves the mean accuracy of 76.76%, which outperforms other methods by 10%. 
On STL-10 task with 10 labels, the recent BSL methods LESS and SLA achieve the accuracy of 34.25% and 32.38%, respectively, while our method achieves the mean accuracy of 52.51%, which improved nearly by 20%. For larger dataset CIFAR-100, our method also outperforms other methods by at least 6% when there is 1 label per class. Besides, we can see that, regardless of the dataset, the performance of our method when using only 1 label per class is close to or even exceeds the performance of other methods when using 2 labels per class. On CIFAR-100 task, LESS and SLA achieve the mean accuracy of 42.50% and 38.65% with 100 labels, while our method achieves the mean accuracy of 37.50% with half of the labels they use. As mentioned by [3], the quality of very few labeled data will significantly affect the performance of the model. Taking the CIFAR-10 task with 10 labels as an example, the variance of advanced SSL methods and BSL methods are all more than 10%, while the variance of our method is only 6.78%. These results further illustrate that our method can alleviate the dependence on labeled data by learning discriminative information from unlabeled data. We also find that the technique of distribution alignment, which forces the alignment of probability distributions is still an effective technique under BSL. Through the result of FixMatch (w/DA), we can see that DA successfully helped FixMatch improve its performance significantly. However, DA is a technique that relies on prior information, and our method does not rely on any prior information and can achieve better performance than it. Stability of the model. First of all, we discuss the phenomenon of model collapse under the BSL challenge. For a fair comparison, we use the same random seed in each trial for FixMatch and our method. Since only 1 label per class is available, this SSL method that depends on labeled data to learn discriminative information would be volatile. As shown in table 2, we can see that the per- formance of FixMatch is extremely unstable, where it can achieve very high accuracy of 85.11% when seed= 3 but obtain an extremely low accuracy of 17.09% when seed= 4. Differently, integrating the proposed super-class distribution to provide more discriminative information, our method can successfully alleviate the model collapse: the accuracy exceeded 70% in all experiments and also exceeded 80% sometimes. Performance under SSL settings. We also analyze our method in standard SSL settings where sufficient labeled data are provided. As shown in Table 3, we test our method on CIFAR-10 with 40, 250, and 4000 labels. It can be seen that when the number of labels increases, our method is not SOTA, but the gap with other methods is within 1%. It can be interpreted as these methods using other advanced techniques in the learning consistent information pro- cess. For example, FlexMatch [4] and SLA [6] both leverage prior knowledge of class proportions, and CoMatch [5] leverages graph-based contrastive learning, etc.While we study an independent module for solving BSL, so when the labels are enough, our method does not prevail. However, though our implementation is based on FixMatch [3], the results show that our method can still improve slightly when the number of labels is large. 3.4 Ablation study Performance under different strategies of K. We explore the effect of K for different strategies on the model: (1) fix-mode, the number of super-class remains constant during model training. 
(2) linear-mode, the number of super-class increases linearly during model training. (3) exp-mode, the number of super-class increases exponentially. (4) step-mode, a step-by-step jump growth based on linear-mode. We fixed different K values for experiments in terms of the fixed strategy. As shown in Figure 4(a), when K is small, the model tends to outperform the larger K. When K is large, the model in the early stage does not provide high-quality features to perform the formation of high-level superclasses, so the discriminative information learned has an extremely high risk of error, leading to the model’s failure. On the other hand, when K is small, our method can learn effective discriminative information from these low-level super-classes. However, as the model performance improves, this limited discriminative information provided by low-level super-classes can no longer help the model learn continuously, so the model’s performance will stagnate. It is worth noting that even if we adopt the fix-mode with K, the model can learn a certain degree of discriminative from the super-class to face the challenge of BSL, and its performance also exceeds other methods. Linear-mode, exp-mode, step-mode can all work well to solve the problem in the fix-mode above, while there is an additional hyperparameter to control the rate of K growth in these modes. As shown in Figure 4(b), we conduct experiments for different growth rates, and it turns out that the growth rate of K does not affect the performance of the model too much. In addition, since this mode can explore different levels of discriminative information, the performance is significantly better than that in the fix-mode. Although the performance of these modes is exceptionally close, we prefer step-mode as it can be more suitable for large data sets, e.g., it is impractical to increase K from 3 to 100 sequentially when we test on CIFAR-100. Performance under different τ2. We investigate 5 different τ2 values on CIFAR-10 datasets with 10 labels. As shown in Figure 4(c), the test performance achieve the best when τ2 = 0.8. It shows that appropriately lowering the threshold can learn discriminative information from more samples, thereby helping model training. However, if the threshold τ2 is too low, the noise of the sample will increase, which is not conducive to the training of the model. 4 Related Work Recent popular semi-supervised learning studies can be classified into entropy minimization (ER) based methods and consistency regularization (CR) based methods. Self training is the typical representative of ER-based methods. In these methods [1], the model is first trained on the provided labeled data and then used to generate pseudo-labels for unlabeled data. After that, such methods add these unlabeled data with high-confidence predictions into the labeled set to retrain the model, repeating this process until all unlabeled data are involved [20]. Recent studies tend to involve more advanced techniques in this framework to enhance the SSL performance. [21] introduces multiple views to provide more robust pseudo-labels. LaSSL [22] and Curriculum Labeling [23] integrate the contrastive learning and curriculum learning techniques to improve the accuracy of pseudo-labels further. As the most widely-used and successful technique in recent SSL methods, CR is the semantics of a sample should be consistent after data perturbations [24, 17, 18, 25, 26]. 
FixMatch [3] combines strong augmentation technology [8, 9] and the labels of weakly augmented samples with high confidence are used to guide the learning of strong augmented samples. Although a major breakthrough has been made in conventional semi-supervised learning, it still cannot avoid model collapse. [16, 4] further dynamically adjusts the confidence threshold based on FixMatch. Although it can learn more low-confidence samples to improve the performance of the model, it cannot cope with the challenge of lack of discriminative information under BSL. In the literature, there have been few works on barely-supervised learning. FixMatch [3] initially came up with the concept of BSL and emphasized that the quality of the labeled data played a crucial role in the test performance. Our experiment results also demonstrate that its testing results have a very high variance under the BSL settings. Recent SLA [6] also achieved better performance in BSL by formulating an optimal transportation problem between samples and labels. It introduced many extra hyper-parameters and adopted the Sinkhorn-Knopp algorithm to solve the optimization problem approximately. Differently, our method gets rid of complicated operations but can effectively improve the BSL performance. Another work [7] argues that the dilemma in BSL is that there are no pseudo-labels that can be predicted with high confidence, so online deep clustering is used to supplement the pseudo-labels predicted by the model. Although more pseudo-labels can be used, it still learns consistent information. [27] is a very recent work that uses the coarse-grained class labels to guide the SSL model. However, it requires strong prior knowledge about the class hierarchy structure in advance; while we are faced with BSL scenarios without any prior knowledge, even the hierarchy structure may not exist. 5 Conclusion In this paper, we analyze the failure of SSL methods in the face of BSL as insufficient discriminative information learning. To tackle this problem, we design a discriminative learning module to leverage unlabeled data for additional discriminative supervision. In this module, super-classes are dynamically reformed with the model training, and then the discriminative information is learned by measuring the similarity between samples and super-classes. We conduct our methods on several SSL benchmarks, and it shows that our method outperforms other methods in BSL. 6 Acknowledgment This work was supported by the Science and Technology Innovation 2030 New Generation Artificial Intelligence Major Project (2021ZD0113303), the NSFC Program (62222604, 62206052, 62192783), CAAI-Huawei MindSpore Project (CAAIXSJLJJ-2021-042A), China Postdoctoral Science Foundation Project (2021M690609), Jiangsu Natural Science Foundation Project (BK20210224), and CCF-Lenovo Bule Ocean Research Fund.
1. What is the focus and contribution of the paper regarding barely-supervised learning? 2. What are the strengths of the proposed approach, particularly in extending FixMatch with a discriminative information learning module? 3. What are the weaknesses of the paper, especially concerning the choice of K and its impact on performance? 4. Do you have any concerns about the method's reliance on K-means clustering? 5. How does the algorithm perform on an unbalanced dataset? 6. Can you compare the accuracy of pseudo-labels of FixMatch with the quality of generated clusters in the proposed approach? 7. Are there any limitations to the approach that the authors should discuss?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
The paper tackles the problem of barely-supervised learning – an SSL setting in which only a few labels per class are annotated. The authors propose a novel approach that extends FixMatch with a discriminative information learning module. The key idea of this module is to form super-classes by clustering the data using K-means clustering and then rely on the similarity between super-classes and samples to guide the training. The number of clusters is gradually increased during the training. In that way, super-classes are dynamically formed and gradually become more fine-grained. The final objective function consists of a standard cross-entropy loss on the labeled examples, a standard consistency loss based on the weakly and strongly augmented versions of the data, and the newly introduced discriminative distribution loss based on the formed super-classes. The performance of the approach is compared to state-of-the-art SSL and barely-supervised methods on the CIFAR-10, CIFAR-100 and STL-10 datasets.

Strengths And Weaknesses
Strengths:
- The paper is well written, well structured and it was a pleasure to read it. The explanation with immutability and separability motivates the paper and the proposed approach really well.
- Barely-supervised learning is a challenging setting, and the proposed method solves the challenge in a simple, elegant and effective way.
- The proposed method achieves state-of-the-art performance in the barely-supervised learning setting on three datasets, and comparable performance in the standard SSL setting.
Weaknesses:
- (Transductive) few-shot learning should be discussed in the related work and in the introduction, and the differences between the settings need to be explained. Barely supervised learning should be better defined in the introduction and put in relation with the more studied problem of few-shot learning.
- The performance does depend on K and it is not clear how to set it. For example, it is not clear why for CIFAR-100 K was set in the range {5, 10, 20} and differently than for the CIFAR-10 and STL-10 datasets. How does setting K differently affect performance on the more challenging CIFAR-100 dataset? The analysis is only done on the CIFAR-10 dataset.
- Reliance on K-means clustering. Clusters can only be spherical in shape and the method may fail for unbalanced datasets.

Questions
- Why K ∈ {5, 10, 20} for CIFAR-100? What is the performance on CIFAR-100 when K goes to the maximum of 100 classes?
- Why is K increased only in the first 30% of iterations? How is that determined, and how does a different setup affect the performance?
- How does the algorithm compare to others on the SVHN benchmark dataset in the barely-supervised learning as well as the standard SSL setting?
- How does the algorithm perform on an unbalanced dataset?
- The authors motivate the super-class idea by the fact that the pseudo-labelling approach can be very noisy. But how accurate are the resulting clusters (super-classes) during model training? It would be beneficial to compare the accuracy of the pseudo-labels of FixMatch with the quality of the generated clusters in the proposed approach (e.g., whether examples that belong to the same ground-truth class are assigned to the same super-class during training).
- Minor comments: Line 194: it seems it should be \lambda_con=\lambda_dis=1, not L. Line 294: sentence is not finished.

Limitations
No. I suggest the authors comment on the limitations of their reliance on K-means clustering.
Title Improving Barely Supervised Learning by Discriminating Unlabeled Samples with Super-Class Abstract In semi-supervised learning (SSL), a common practice is to learn consistent information from unlabeled data and discriminative information from labeled data to ensure both the immutability and the separability of the classification model. Existing SSL methods suffer from failures in barely-supervised learning (BSL), where only one or two labels per class are available, as the insufficient labels cause the discriminative information to be difficult or even infeasible to learn. To bridge this gap, we investigate a simple yet effective way to leverage unlabeled data for discriminative learning, and propose a novel discriminative information learning module to benefit model training. Specifically, we formulate the learning objective of discriminative information at the super-class level and dynamically assign different categories into different super-classes based on model performance improvement. On top of this on-the-fly process, we further propose a distribution-based loss to learn discriminative information by utilizing the similarity between samples and super-classes. It encourages the unlabeled data to stay closer to the distribution of their corresponding super-class than those of others. Such a constraint is softer than the direct assignment of pseudo labels, while the latter could be very noisy in BSL. We compare our method with state-of-the-art SSL and BSL methods through extensive experiments on standard SSL benchmarks. Our method can achieve superior results, e.g., an average accuracy of 76.76% on CIFAR-10 with merely 1 label per class. The code is available at https://github.com/GuanGui-nju/SCMatch. 1 Introduction As a paradigm to reduce the dependency on a large amount of labeled data, semi-supervised learning (SSL) has been widely concerned and utilized [1, 2]. Although existing advanced SSL methods [3, 4, 5, 6] could achieve outstanding performance even with less than 1% labels on several datasets (e.g., CIFAR-10), the labeling process could still be lengthy, especially when there is a large number of object categories, which may preclude the deployment of SSL model in those applications. To tackle this challenge, barely-supervised learning (BSL), a novel paradigm with rising interest [3, 7], has been proposed recently to explore whether the model can be trained with the extremely scarce label, e.g., only one label per class. ∗Corresponding author. †Guan Gui, Yinghuan Shi are with the National Key Laboratory for Novel Software Technology and the National Institute of Healthcare Data Science, Nanjing University. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Unfortunately, current state-of-the-art SSL methods cannot well address the label-scarce challenge in BSL. As shown in Figure 1(a), many recent SSL methods can achieve auspicious performance when sufficient labeled data are provided, e.g., higher than 90% accuracy with more than 40 labels on CIFAR-10. However, these methods will suffer severe performance degeneration when the label amount is reduced. For example, when only 10 labels are available on CIFAR-10, the test accuracy of FixMatch will drop sharply by more than 45% compared to that of 40 labels. In order to explore the reasons for such performance dropping, we then track the predicted class distribution of FixMatch during the training process. 
As shown in Figure 1(b), FixMatch will end with the model collapse after training for 100 epochs, i.e., the model completely cannot distinguish different samples, and all samples are predicted as a single class. To analyze the reasons for above phenomenon, we prefer to investigate classification models in term of separability and immutability. Here immutability refers to the capacity of the model to be robust to perturbations. It can be mathematically expressed as argmax pm(y|ui) = argmax pm(y|α(ui)), where ui is a sample, y is the model prediction, and α(·) is a random perturbation. Separability, on the other hand, refers to the capacity of the model to differentiate two different categories of samples, i.e., argmax pm(y|ui) ̸= argmax pm(y|uj), where ui, uj are sampled from different categories. Figure 2 shows a graphic explanation of these two properties. For SSL classification models, in common practice, the immutability is often achieved by learning the consistent information of the unlabeled data, and the separability is achieved by learning the discriminative information of the labeled data. To achieve good performance, SSL models need to well balance their immutability and separability. However, such a balance is destroyed in BSL. The insufficient supervision information from the extremely scarce labels significantly damages the learning for separability, so that the model performance is dominated by the learning towards immutability. That’s why the model collapse could be observed when all samples are predicted as one same class. Motivated by these observations, in this paper, we aim to enhance the discriminative learning for the model’s separability under BSL. Since the labeled data are very limited, we explore how to mine additional discriminative supervision from unlabeled data. Although without label information, the unlabeled data could still provide some “latent guidance" to complement the process of learning only from labeled data. We hereby propose a novel module to dynamically form super-classes to “roughly categorize" unlabeled samples, then the discriminative information is learned by measuring the similarity between samples and super-classes, which is realized by our newly proposed loss function on the distribution level. Furthermore, with the improvement of the model, we gradually form more super-classes for finer categorization of unlabeled samples, aiming to provide more fine-grained discriminative information to guide the model training. In a nutshell, our proposed method is a simple yet effective way to mine discriminative information from unlabeled data. Compared to directly assigning pseudo labels to each sample [3, 4], which could be very noisy in BSL, learning the similarity between super-classes and samples is the softer guidance, thus reducing the error risk of pseudo labels. We evaluate our method on CIFAR-10, CIFAR-100, and STL-10, showing that our method outperforms all other SSL and BSL methods by a large margin. For example, only using one label per class on CIFAR-10, our method successfully avoided the occurrence of model collapse and achieved an accuracy of 76.76% with a variance of 6.78%. 2 Method Similar to the setting of SSL, a labeled set X and an unlabeled set U are also given in BSL. X = {(x1, y1), (x2, y2), . . . , (xn, yn)}, where yi denotes the label of the i-th labeled sample xi. Each sample is classified into one of nk classes denoted as {c1, c2, . . . , cnk}. U = {u1, u2, . . . 
, un}, where ui denotes i-th unlabeled sample, and typically |X | ≪ |U|. In BSL, a more challenging setting is considered, |X | < 4nk, where only few labeled data are available. In the implementation, the samples are provided on a per batch basis, with a batch of labeled data Bx and unlabeled data Bu. As discussed before, the key to BSL lies in training a robust and stable model by efficiently leveraging the unlabeled data together with such scarce labeled data. Unlike recent state-of-the-art SSL methods that only encourage consistency regularization on unlabeled data, our method aims to learn consistent and discriminative information from the unlabeled data simultaneously. As shown in Figure 3, we construct two modules to leverage unlabeled data accordingly, i.e., the consistent information learning module and the discriminative information learning module. In the consistent information learning module, we learn the information from the samples and their corresponding augmented versions, like [3]. While in the discriminative information learning module, we develops the super-class distributions by clustering unlabeled samples within a mini-batch and then uses them to minimize a novel distribution loss on unlabeled samples. 2.1 Consistent information learning module Like most consistency-based SSL methods, we encourage the model to output the same predictions on two differently-augmented versions of the same sample. Specifically, we produce pseudo labels on weakly-augmented samples and use them as training targets for their corresponding stronglyaugmented variants. Of them, the weak augmentation α(·) includes standard flip and shift operations, while the strong augmentation strategy A(·) consists of RandAugment [8] and CutOut [9]. Formally, this consistency-based unsupervised loss Lcon is defined as, Lcon = 1 |Bu| |Bu|∑ i=1 1(max(pm(y|α(ui))) ≥ τ1)H(pm(y|α(ui)), pm(y|A(ui))) (1) where H(p1, p2) denotes the standard cross entropy between p1 and p2, and τ1 is a pre-defined threshold to retain only high-confidence pseudo labels. As discussed in [3], τ1 is commonly set as a high value to alleviate the confirmation bias in SSL. 2.2 Discriminative information learning module In addition to relying on labeled data to learn discriminative information, we propose a novel module, an on-the-fly learning process to first form super-classes and then exploit the similarity between super-classes and samples to improve the model’s separability. One of the most intuitive ways to explore discriminative information is to generate class information for unlabeled data by clustering in the feature space. Ideally, samples of the same category will form a separate cluster so that the model can discriminate the samples from all other nk − 1 clusters of samples. However, forming such fine-grained clusters carries a considerable risk of errors, especially for tasks with a large number of object categories. What’s worse, in the early training stage, due to the weak feature extraction ability, the model inevitably produces wrong discriminative information, resulting in severe accumulated errors. To properly explore the discriminative information for unlabeled data, we propose the following designs, • First, instead of fine-grained clusters, we simplify the clustering task by allowing a cluster to contain multiple categories, i.e., a super-class cluster. In this way, the discriminative information is relatively weakened but more robust to clustering errors. 
• Second, it can still be noisy to adopt the super-class label as training targets for unlabeled data. Therefore, we tend to utilize the similarity between each sample and the super-classes rather than explicitly assign the training targets for unlabeled data. Concretely, our method encourages the unlabeled samples to stay closer to the predicted class probability distribution of their corresponding super-class than those of others. Such a smoothing way can better tolerate the inaccurate prediction of a single sample as well as potential clustering errors. • Third, although the discriminative information provided by the coarse-grained clusters is robust, it will be insufficient when the model’s separability is improved. Thus we propose the progressive construction of super-classes to gradually increase the clustering number so that our discriminative information learning module can adapt to the model evolution during the training process. When the cluster number is small, each super-class provides more moderate discriminative information, called a low-level super-class. In contrast, a large cluster number can enforce each super-class to abstract more concrete information, and we call it a high-level super-class. Super-class representation As shown in Figure 3, we employ standard K-Means on these features zwi of weakly-augmented samples within a mini-batch. With a given target number K of super-classes, these features are gathered into K clusters, and each cluster is denoted by Ck, k = 1, 2, . . . ,K. Each super-class can then be represented by the mean distribution of all the samples it contains. Given unlabeled sample ui, and its predicted class probability distribution pm(y|α(ui)) and pm(y|A(ui)), the super-class distribution qk for each super-class Ck can be calculated by, qk = 1 |Ck| |Ck|∑ i=1 pm(y|α(ui)), withui ∈ Ck (2) In this way, the super-class distribution can represent the distribution characteristics of the categories it contains so that it can be well discriminated from other super-classes. As shown in the lower half of Figure 3, in the automobile-and-airplane super-class, it is possibly not easy to determine the exact category for a single sample. However, we can find that the sample in this super-class should be closer to the super-class distribution of the automobile-and-airplane super-class compared to those of other super-classes. Additionally, the super-class is more robust to the noisy samples. For samples likely to be misclassified (e.g., samples inside the dashed box), their negative impact on the super-class distribution is well suppressed by other correctly classified samples. Discriminative distribution loss To distinguish the sample from other super-classes, this sample is supposed to be more similar to its corresponding super-class on distribution. Inspired by [10, 11], we design a contrastivelike distribution loss to distinguish the sample from other super-classes. Formally, this auxiliary distribution loss is, Ldis = − 1 |Bu| |Bu|∑ i=1 1(max(pm(y|α(ui))) ≥ τ2) log exp(pm(y|A(ui)) · qk/T )∑K j=1 exp(pm(y|A(ui)) · qj/T ) (3) where T is a common temperature parameter, and ui is assigned to super-class Ck. Like in Equation 1, we adopt a parameter τ2 to control the learned unlabeled data. As mentioned before, the similarity between samples and super-classes is a weak constraint, so we are conditioned to use a lower threshold to learn more samples. We provide an empirical value via extensive ablation studies. 
Notice that we compute gradients only on strongly-augmented samples. Progressive super-class construction Although small K reduces the clustering error, it comes at the cost that the learned discriminant information is limited. Assuming the extreme case, when K = 1, the amount of information is 0 because all samples belong to one super-class, the model will not discriminate against any samples. With this point of view, we propose the progressive construction of super-class to adapt to the model evolution during training. That is, when the model is not well trained at the beginning, we use a small K to form the coarser super-classes to ease the clustering task and thus attain relatively reliable discriminative guidance. When the model is better trained, to avoid the training of the model being stagnant due to the limitation of discriminative information, we gradually increase K to provide enhanced discriminative guidance. In practice, a daunting challenge is that we do not know the most appropriate number of super-class for the training samples without prior knowledge. To this end, we design a dichotomous method and set the value range of K by: Ki ∈ {2, . . . , ⌈nk/4⌉, ⌈nk/2⌉, nk} (4) The above formula restricts the value of K based on the principle of dichotomy so that frequent changes of K can be avoided. Especially when there are many classes in the sample set (e.g.,100 classes on CIFAR-100), it would be tedious and pointless to learn all K values. Furthermore, to ensure that the clustering task with different K can be performed for a certain period, we adopt a linear-step growth strategy to adjust K dynamically: K = Ki, if Ki ≤ t α ∗ ts < Ki+1, (5) where t and ts denote the value of the current iteration and the total number of iterations, respectively. α ∈ (0, 1) and it controls the growth rate of K. With this clustering task, K super-classes are dynamically formed at each iteration. Algorithm 1 Algorithm of our method Input: Labeled batch Bx = {(xi, yi)}, unlabeled batch Bu = {ui}, weak augmentation strategy α(·), strong augmentation strategy A(·) Parameter: threshold τ1, τ2, temperature T , loss weight λcon, λdis 1: compute Lsup = 1 |Bx| ∑|Bx| i=1 H(pm(y|α(ui)), yi): 2: for t← 1 to ts do 3: for ui ∈ Bu do 4: zwi = Encoder(α(ui)) // record features of weakly augmented samples. 5: pm(y|α(ui)), pm(y|A(ui)) // compute prediction of α(ui) and A(ui). 6: end for 7: Lcon = 1 |Bu| ∑|Bu| i=1 1(max(pm(y|α(ui))) ≥ τ1)H(pm(y|α(ui)), pm(y|A(ui))) 8: update K. 9: form super-classes by K-Means(K, zwi ). 10: qk = 1/|Ck| ∑|Ck| i=1 pm(y|α(ui)), ∀ui ∈ Ck 11: Ldis = − 1 |Bu| ∑|Bu| i=1 1(max(pm(y|α(ui))) ≥ τ2) log exp(pm(y|A(ui)) · qk/T )∑K j=1 exp(pm(y|A(ui)) · qj/T ) 12: minimizing the total loss L = Lsup + λconLcon + λdisLdis. 13: end for 2.3 Total Loss Similar to most SSL methods, the supervised loss for a batch of labeled data Bx is obtained by a standard cross-entropy loss, Lsup = 1 |Bx| |Bx|∑ i=1 H(pm(y|α(ui)), yi) (6) In summary, the total loss in our method is, L = Lsup + λconLcon + λdisLdis (7) where λcon and λdis are the weights of Lcon and Ldis, respectively. The full algorithm is provided in Algorithm 1. 3 Experiments In this section, we validate the effectiveness of our proposed method by conducting experiments on widely-used SSL benchmark datasets: CIFAR-10, CIFAR-100 [12], and STL-10 [13]. 3.1 Implementation details To face the challenge of BSL, we randomly sample 1 or 2 labels for each class on these data sets. 
We adopt "WideResNet-28-2" and "WideResNet-28-8" [14] as the backbone for CIFAR-10 and CIFAR-100, respectively, while using "ResNet18" [15] for STL-10. For the consistent information learned module, we follow the same setting with [3], where τ1 = 0.95, |Bx| = 64, |Bu| = 7|Bx|. And for the discriminative information learned module, we set T = 1, τ2 = 0.8. In addition, Since the essence of the three losses of Lsup, Lcon, Ldis is in the form of cross entropy, it’s prefer to set λcon = λdis = 1 to further reduce of hyperparameters. For CIFAR-10 and STL-10 task, we set the K ∈ {⌊nk/3⌋, nk/2, nk} = {3, 5, 10}. For CIFAR-100, considering that the samples of each cluster should be sufficient, we set the K ∈ {nk/20, nk/10, nk/5} = {5, 10, 20}. In fact, the specific numerical setting of K has little effect on the model performance, and more analysis and experiments about K will be discussed in the later ablation experiments. Mean-Teacher 15.48 ± 3.19 17.50 ± 1.16 5.17 ± 2.52 8.26 ± 3.43 11.05 ± 6.45 15.99 ± 6.45 MixMatch 17.18 ± 4.45 26.45 ± 8.17 12.85 ± 2.21 21.56 ± 4.84 10.94 ± 5.18 21.48 ± 3.17 ReMixMatch 60.29 ± 15.20 78.56 ± 9.63 26.18 ± 3.79 35.90 ± 3.66 30.86 ± 10.80 45.58 ± 8.36 FixMatch 44.47 ± 24.99 80.46 ± 5.15 25.49 ± 4.37 35.55 ± 1.59 25.75 ± 8.99 48.98 ± 6.46 FixMatch (w/DA) 67.79 ± 15.42 84.16 ± 9.27 31.10 ± 2.29 43.22 ± 1.87 42.08 ± 6.24 54.76 ± 5.44 CoMatch 60.79 ± 12.42 81.19 ± 8.55 27.54 ± 4.25 36.98 ± 2.17 29.11 ± 9.31 50.20 ± 7.57 FlexMatch 66.07 ± 10.58 85.69 ± 6.24 31.50 ± 3.61 38.05 ± 2.66 41.17 ± 6.20 54.30 ± 5.65 SLA 65.87 ± 10.83 81.89 ± 6.77 28.45 ± 2.16 38.65 ± 2.67 32.38 ± 8.32 47.50 ± 6.38 LESS 64.40 ± 10.90 81.20 ± 5.60 28.20 ± 3.00 42.50 ± 3.20 34.25 ± 7.19 48.98 ± 5.19 our method 76.76 ± 6.78 88.49 ± 3.26 37.50 ± 1.72 45.62 ± 1.39 52.51 ± 3.20 57.98 ± 3.18 The model is trained with a total of 220 iterations, and the K increased in the first 30% iterations. We use an exponential moving average with a decay rate of 0.999 to test our model and repeat the same experiment for five runs with different seeds to report the mean accuracy. 3.2 Baseline methods First, FixMatch [3], Dash [16], CoMatch [5], FlexMatch [4] are the advanced semi-supervised models in recent years, and we compare these methods under the challenge of barely-supervised learning. We also use FixMatch with the distribution alignment (DA). SLA [6] and LESS [7] are the latest models on BSL, and we also use them as our comparison method. In addition to this, we also select some classical semi-supervised methods such as MeanTeacher [17], MixMatch [18] and ReMixMatch [19] for comparison. 3.3 Experimental results Performance comparisons. In Table 1, we compare the test accuracy of our proposed method against recent SSL and BSL methods. It can be seen that our results are state-of-the-art in all settings. Especially when there is only one label per class, our method compensates for the shortage of labeled data by mining latent discriminative information from unlabeled data, thus showing enormous superiority. LESS [7], recent work on BSL, since it generates predictions for samples with low confidence and then learns more consistent information, still ignores the learning of discriminative information, it cannot solve the challenges in BSL. On CIFAR-10 task with 10 labels, our method achieves the mean accuracy of 76.76%, which outperforms other methods by 10%. 
On the STL-10 task with 10 labels, the recent BSL methods LESS and SLA achieve accuracies of 34.25% and 32.38%, respectively, while our method achieves a mean accuracy of 52.51%, an improvement of nearly 20%. On the larger CIFAR-100 dataset, our method also outperforms the other methods by at least 6% when there is 1 label per class. Moreover, regardless of the dataset, the performance of our method with only 1 label per class is close to, or even exceeds, the performance of the other methods with 2 labels per class. On CIFAR-100, LESS and SLA achieve mean accuracies of 42.50% and 38.65% with 200 labels, while our method achieves a mean accuracy of 37.50% with only half as many labels. As mentioned in [3], the quality of the very few labeled samples significantly affects model performance. Taking the CIFAR-10 task with 10 labels as an example, the standard deviations of the advanced SSL and BSL methods all exceed 10%, while that of our method is only 6.78%. These results further illustrate that our method alleviates the dependence on labeled data by learning discriminative information from unlabeled data. We also find that distribution alignment, which forces the predicted class distribution to match a prior distribution, remains an effective technique under BSL: the results of FixMatch (w/ DA) show that DA helps FixMatch improve its performance significantly. However, DA relies on prior information, whereas our method does not rely on any prior information and still achieves better performance.

Stability of the model. First, we discuss the phenomenon of model collapse under the BSL challenge. For a fair comparison, we use the same random seed in each trial for FixMatch and our method. Since only 1 label per class is available, an SSL method that depends on labeled data to learn discriminative information is bound to be volatile. As shown in Table 2, the performance of FixMatch is extremely unstable: it achieves a very high accuracy of 85.11% when seed = 3 but an extremely low accuracy of 17.09% when seed = 4. In contrast, by integrating the proposed super-class distributions to provide additional discriminative information, our method successfully alleviates model collapse: the accuracy exceeds 70% in all runs and sometimes exceeds 80%.

Performance under SSL settings. We also analyze our method in standard SSL settings where sufficient labeled data are provided. As shown in Table 3, we test our method on CIFAR-10 with 40, 250, and 4000 labels. When the number of labels increases, our method is no longer state-of-the-art, but the gap to the other methods is within 1%. This can be attributed to those methods using additional advanced techniques in the consistency-learning process; for example, FlexMatch [4] and SLA [6] both leverage prior knowledge of class proportions, and CoMatch [5] leverages graph-based contrastive learning. In contrast, we study an independent module for solving BSL, so when labels are plentiful our method does not have an advantage. Nevertheless, although our implementation is based on FixMatch [3], the results show that our method still improves slightly even when the number of labels is large.
(2) linear-mode, where the number of super-classes increases linearly during training; (3) exp-mode, where the number of super-classes increases exponentially; and (4) step-mode, a step-by-step jump growth based on linear-mode. For the fix-mode strategy, we run experiments with several fixed values of K. As shown in Figure 4(a), a small K tends to outperform a larger K. When K is large, the model in the early stage does not provide high-quality features for forming high-level super-classes, so the learned discriminative information carries an extremely high risk of error, leading to the model's failure. On the other hand, when K is small, our method can learn effective discriminative information from these low-level super-classes; however, as the model improves, the limited discriminative information provided by low-level super-classes can no longer help the model keep learning, so performance stagnates. It is worth noting that even with a fixed K, the model can learn a certain degree of discriminative information from the super-classes to face the challenge of BSL, and its performance still exceeds the other methods. Linear-mode, exp-mode, and step-mode all resolve the problem of fix-mode described above, at the cost of an additional hyperparameter that controls the growth rate of K. As shown in Figure 4(b), we conduct experiments with different growth rates, and the growth rate of K does not affect the performance of the model much. In addition, since these modes can explore different levels of discriminative information, their performance is significantly better than that of fix-mode. Although the performance of these modes is exceptionally close, we prefer step-mode because it is more suitable for large datasets; e.g., it is impractical to increase K from 3 to 100 sequentially on CIFAR-100.

Performance under different τ_2. We investigate 5 different τ_2 values on CIFAR-10 with 10 labels. As shown in Figure 4(c), the test performance is best when τ_2 = 0.8. This shows that appropriately lowering the threshold allows discriminative information to be learned from more samples, thereby helping model training. However, if the threshold τ_2 is too low, sample noise increases, which is not conducive to training.

4 Related Work

Recent popular semi-supervised learning studies can be classified into entropy minimization (EM) based methods and consistency regularization (CR) based methods. Self-training is the typical representative of EM-based methods. In these methods [1], the model is first trained on the provided labeled data and then used to generate pseudo-labels for unlabeled data. Such methods then add the unlabeled data with high-confidence predictions into the labeled set to retrain the model, repeating this process until all unlabeled data are involved [20]. Recent studies tend to introduce more advanced techniques into this framework to enhance SSL performance: [21] introduces multiple views to provide more robust pseudo-labels, while LaSSL [22] and Curriculum Labeling [23] integrate contrastive learning and curriculum learning to further improve the accuracy of pseudo-labels. As the most widely-used and successful technique in recent SSL methods, CR is based on the idea that the semantics of a sample should remain consistent under data perturbations [24, 17, 18, 25, 26].
FixMatch [3] combines strong augmentation techniques [8, 9] with pseudo-labeling: the high-confidence labels of weakly-augmented samples are used to guide the learning of their strongly-augmented counterparts. Although it marks a major breakthrough in conventional semi-supervised learning, it still cannot avoid model collapse. [16, 4] further adjust the confidence threshold dynamically on top of FixMatch. Although this lets the model learn from more low-confidence samples and improves performance, it cannot cope with the lack of discriminative information under BSL.

In the literature, there have been few works on barely-supervised learning. FixMatch [3] first introduced the concept of BSL and emphasized that the quality of the labeled data plays a crucial role in test performance; our experimental results also demonstrate that its test accuracy has a very high variance under BSL settings. The recent SLA [6] achieved better BSL performance by formulating an optimal transport problem between samples and labels; it introduces many extra hyperparameters and adopts the Sinkhorn-Knopp algorithm to solve the optimization problem approximately. In contrast, our method avoids such complicated operations yet effectively improves BSL performance. Another work [7] argues that the dilemma in BSL is that no pseudo-labels can be predicted with high confidence, so online deep clustering is used to supplement the pseudo-labels predicted by the model. Although more pseudo-labels can then be used, it still learns only consistency information. [27] is a very recent work that uses coarse-grained class labels to guide the SSL model; however, it requires strong prior knowledge of the class hierarchy in advance, while we face BSL scenarios without any prior knowledge, where such a hierarchy may not even exist.

5 Conclusion

In this paper, we attribute the failure of SSL methods under BSL to insufficient discriminative information learning. To tackle this problem, we design a discriminative learning module that leverages unlabeled data for additional discriminative supervision. In this module, super-classes are dynamically re-formed as the model trains, and discriminative information is then learned by measuring the similarity between samples and super-classes. We evaluate our method on several SSL benchmarks, and the results show that it outperforms other methods in BSL.

6 Acknowledgment

This work was supported by the Science and Technology Innovation 2030 New Generation Artificial Intelligence Major Project (2021ZD0113303), the NSFC Program (62222604, 62206052, 62192783), CAAI-Huawei MindSpore Project (CAAIXSJLJJ-2021-042A), China Postdoctoral Science Foundation Project (2021M690609), Jiangsu Natural Science Foundation Project (BK20210224), and the CCF-Lenovo Blue Ocean Research Fund.
1. What is the focus of the paper regarding Barely Supervised Learning? 2. What are the strengths of the proposed approach, particularly in addressing the problem with few labeled samples? 3. What are the concerns or questions raised by the reviewer regarding the methodology and results? 4. How does the reviewer assess the novelty and significance of the proposed method compared to existing works? 5. Are there any typos or minor issues in the paper that need to be addressed?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper addresses Barely Supervised Learning (BSL), an SSL problem in which only a few labeled samples are available. The key is the introduction of a new loss function into the general consistency-based framework of SSL; the new loss measures the consistency between unlabeled samples and the centroids of "super-classes," which are obtained by applying K-means to the unlabeled data with K <= # classes. K is gradually increased as learning progresses, akin to a "curriculum learning" idea. Experiments on CIFAR-10, CIFAR-100, and STL-10 show that the proposed method outperforms existing SoTA SSL/BSL methods under BSL settings.

Strengths And Weaknesses
-------------------------------------- Pros --------------------------------------
P1. The paper addresses BSL, an SSL problem with only a few labels. While it is important and practical, few studies have been reported in the literature.
P2. This paper provides a good insight that BSL can be improved by the very simple method of adding a consistency loss with K-means centroids.
P3. The method outperforms SoTA SSL/BSL methods.
P4. An ablation study on K is reported.
-------------------------------------- Cons --------------------------------------
C1. Batch size and K: Algorithm 1 says that K-means is performed batch by batch. Just to make sure, does this mean that K-means is applied to the samples within a batch, i.e., 64 samples? If so, I wonder whether the resulting clusters will be quite different between iterations and may not be meaningful when the number of clusters is large.
C2. Computation time: Since K-means is performed at every iteration, the training time may increase. There is no discussion on this point.
C3. Results: I have some questions about the results. For example, in the CIFAR-100 100-label setting in Table 1, the score of LESS [4] is 28.2 ± 3.0, which is consistent with the number reported in [4]. However, in the 200-label setting, there is a gap between the number reported in this paper (39.5 ± 3.2) and that given in the original [4] (42.5 ± 3.2). It is unclear what causes this difference. Such inconsistencies are also observed in Table 3.
C4. Analysis: An ablation study on the proposed loss term L_dis would be essential, as it is the main idea of this paper. Some sensitivity analysis and discussion of the weight for L_dis, λ_dis, would be necessary.
C5. Novelty: The method looks a little incremental. The proposed method only adds a consistency loss with K-means centroids to the typical SSL framework (i.e., consistency between weakly and strongly augmented samples).
C6. The discussion of immutability and separability given in the introduction is interesting, but its significance is not clear. The relationship to the idea of the proposed method seems somewhat tenuous, and these measures are not used in the evaluations.
C7. Typos:
L37 and others: "model collapse" -> "mode collapse" would be better.
L49: "two different samples" -> "two samples from different classes" would be what the authors want to mean.
L140 and others: "super-class super-class" -> "super-class"
L190: "the consistent information learned model" -> "consistent information learning module" (the same applies to L192: "the discriminative information learned model")
L221: "52.52%" -> "76.76%"

Questions
There are several unclear points. In particular, I would like the authors to clarify C1-C3.

Limitations
I did not find any discussion on limitations.
Related to C2 above, if the training time increases when running K-means on a batch basis, it would be a clear weakness.
NIPS
Title
Improving Barely Supervised Learning by Discriminating Unlabeled Samples with Super-Class

Abstract
In semi-supervised learning (SSL), a common practice is to learn consistent information from unlabeled data and discriminative information from labeled data to ensure both the immutability and the separability of the classification model. Existing SSL methods suffer from failures in barely-supervised learning (BSL), where only one or two labels per class are available, as the insufficient labels make the discriminative information difficult or even infeasible to learn. To bridge this gap, we investigate a simple yet effective way to leverage unlabeled data for discriminative learning, and propose a novel discriminative information learning module to benefit model training. Specifically, we formulate the learning objective of discriminative information at the super-class level and dynamically assign different categories to different super-classes as the model improves. On top of this on-the-fly process, we further propose a distribution-based loss to learn discriminative information by utilizing the similarity between samples and super-classes. It encourages the unlabeled data to stay closer to the distribution of their corresponding super-class than to those of others. Such a constraint is softer than the direct assignment of pseudo labels, which can be very noisy in BSL. We compare our method with state-of-the-art SSL and BSL methods through extensive experiments on standard SSL benchmarks. Our method achieves superior results, e.g., an average accuracy of 76.76% on CIFAR-10 with merely 1 label per class. The code is available at https://github.com/GuanGui-nju/SCMatch.

1 Introduction
As a paradigm to reduce the dependency on large amounts of labeled data, semi-supervised learning (SSL) has attracted wide attention and use [1, 2]. Although existing advanced SSL methods [3, 4, 5, 6] can achieve outstanding performance even with less than 1% of labels on several datasets (e.g., CIFAR-10), the labeling process can still be lengthy, especially when there is a large number of object categories, which may preclude the deployment of SSL models in those applications. To tackle this challenge, barely-supervised learning (BSL), a novel paradigm with rising interest [3, 7], has been proposed recently to explore whether the model can be trained with extremely scarce labels, e.g., only one label per class.
∗Corresponding author. †Guan Gui and Yinghuan Shi are with the National Key Laboratory for Novel Software Technology and the National Institute of Healthcare Data Science, Nanjing University.
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
Unfortunately, current state-of-the-art SSL methods cannot adequately address the label-scarce challenge in BSL. As shown in Figure 1(a), many recent SSL methods achieve promising performance when sufficient labeled data are provided, e.g., higher than 90% accuracy with more than 40 labels on CIFAR-10. However, these methods suffer severe performance degradation when the number of labels is reduced. For example, when only 10 labels are available on CIFAR-10, the test accuracy of FixMatch drops sharply by more than 45% compared to that with 40 labels. To explore the reasons for this performance drop, we track the predicted class distribution of FixMatch during the training process.
As shown in Figure 1(b), FixMatch ends in model collapse after training for 100 epochs, i.e., the model cannot distinguish different samples at all, and all samples are predicted as a single class. To analyze the reasons for the above phenomenon, we investigate classification models in terms of separability and immutability. Here immutability refers to the capacity of the model to be robust to perturbations. It can be mathematically expressed as argmax p_m(y|u_i) = argmax p_m(y|α(u_i)), where u_i is a sample, y is the model prediction, and α(·) is a random perturbation. Separability, on the other hand, refers to the capacity of the model to differentiate samples of two different categories, i.e., argmax p_m(y|u_i) ≠ argmax p_m(y|u_j), where u_i, u_j are sampled from different categories. Figure 2 shows a graphic explanation of these two properties. For SSL classification models, in common practice, immutability is achieved by learning consistent information from the unlabeled data, and separability is achieved by learning discriminative information from the labeled data. To achieve good performance, SSL models need to balance their immutability and separability well. However, this balance is destroyed in BSL. The insufficient supervision from extremely scarce labels significantly damages the learning of separability, so that model performance becomes dominated by the learning of immutability. That is why model collapse is observed, with all samples predicted as the same class.

Motivated by these observations, in this paper we aim to enhance discriminative learning for the model's separability under BSL. Since the labeled data are very limited, we explore how to mine additional discriminative supervision from unlabeled data. Although they carry no label information, the unlabeled data can still provide some "latent guidance" to complement the process of learning only from labeled data. We hereby propose a novel module that dynamically forms super-classes to "roughly categorize" unlabeled samples; the discriminative information is then learned by measuring the similarity between samples and super-classes, realized by our newly proposed distribution-level loss function. Furthermore, as the model improves, we gradually form more super-classes for a finer categorization of unlabeled samples, aiming to provide more fine-grained discriminative information to guide training. In a nutshell, our proposed method is a simple yet effective way to mine discriminative information from unlabeled data. Compared to directly assigning pseudo labels to each sample [3, 4], which can be very noisy in BSL, learning the similarity between super-classes and samples is a softer form of guidance, thus reducing the risk of pseudo-label errors. We evaluate our method on CIFAR-10, CIFAR-100, and STL-10, showing that it outperforms all other SSL and BSL methods by a large margin. For example, using only one label per class on CIFAR-10, our method successfully avoids model collapse and achieves an accuracy of 76.76% with a standard deviation of 6.78%.

2 Method
As in the standard SSL setting, a labeled set X and an unlabeled set U are given in BSL. X = {(x_1, y_1), (x_2, y_2), . . . , (x_n, y_n)}, where y_i denotes the label of the i-th labeled sample x_i. Each sample is classified into one of n_k classes denoted as {c_1, c_2, . . . , c_{n_k}}. U = {u_1, u_2, . . . , u_n},
where u_i denotes the i-th unlabeled sample, and typically |X| ≪ |U|. In BSL, a more challenging setting is considered, |X| < 4 n_k, where only very few labeled data are available. In the implementation, the samples are provided on a per-batch basis, with a batch of labeled data B_x and a batch of unlabeled data B_u.

As discussed before, the key to BSL lies in training a robust and stable model by efficiently leveraging the unlabeled data together with such scarce labeled data. Unlike recent state-of-the-art SSL methods that only encourage consistency regularization on unlabeled data, our method aims to learn consistent and discriminative information from the unlabeled data simultaneously. As shown in Figure 3, we construct two modules to leverage unlabeled data accordingly, i.e., the consistent information learning module and the discriminative information learning module. In the consistent information learning module, we learn from the samples and their corresponding augmented versions, as in [3]. In the discriminative information learning module, we develop the super-class distributions by clustering unlabeled samples within a mini-batch and then use them to minimize a novel distribution loss on unlabeled samples.

2.1 Consistent information learning module
Like most consistency-based SSL methods, we encourage the model to output the same predictions on two differently-augmented versions of the same sample. Specifically, we produce pseudo labels on weakly-augmented samples and use them as training targets for their corresponding strongly-augmented variants. Here, the weak augmentation α(·) includes standard flip and shift operations, while the strong augmentation strategy A(·) consists of RandAugment [8] and CutOut [9]. Formally, this consistency-based unsupervised loss L_con is defined as

L_con = \frac{1}{|B_u|} \sum_{i=1}^{|B_u|} \mathbb{1}\big(\max(p_m(y \mid α(u_i))) \ge τ_1\big) H\big(p_m(y \mid α(u_i)), p_m(y \mid A(u_i))\big)    (1)

where H(p_1, p_2) denotes the standard cross entropy between p_1 and p_2, and τ_1 is a pre-defined threshold to retain only high-confidence pseudo labels. As discussed in [3], τ_1 is commonly set to a high value to alleviate confirmation bias in SSL.

2.2 Discriminative information learning module
In addition to relying on labeled data to learn discriminative information, we propose a novel module: an on-the-fly learning process that first forms super-classes and then exploits the similarity between super-classes and samples to improve the model's separability. One of the most intuitive ways to explore discriminative information is to generate class information for unlabeled data by clustering in the feature space. Ideally, samples of the same category form a separate cluster so that the model can discriminate them from the samples in the other n_k − 1 clusters. However, forming such fine-grained clusters carries a considerable risk of errors, especially for tasks with a large number of object categories. What is worse, in the early training stage, due to its weak feature extraction ability, the model inevitably produces wrong discriminative information, resulting in severe accumulated errors. To properly explore the discriminative information in unlabeled data, we propose the following designs:

• First, instead of fine-grained clusters, we simplify the clustering task by allowing a cluster to contain multiple categories, i.e., a super-class cluster. In this way, the discriminative information is relatively weakened but more robust to clustering errors.
1. What is the focus and contribution of the paper on semi-supervised learning? 2. What are the strengths of the proposed approach, particularly in terms of discriminative information learning module? 3. What are the weaknesses of the paper, especially regarding the design choice on auxiliary distribution loss? 4. Do you have any concerns about the training speed and random seed selection in the experiments? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 6. Are there any limitations or shortcomings in the paper that the reviewer wants to highlight?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
This paper presents a semi-supervised learning method for the barely supervised setting. The core contribution is the design of a discriminative information learning module with clustered super-classes. The experiments in the barely supervised setting demonstrate consistently improved results on small datasets.

Strengths And Weaknesses
Strengths:
The paper is well-written and organized, and easy to follow.
The improvement on the studied settings is significant and consistent.
Weaknesses:
The design choice for the auxiliary distribution loss (eq. 3) is not justified.
There is a mismatch in the EMA decay of the model parameters: 0.999 in the main paper (L201) versus 0.99 in the appendix (Table 1).

Questions
For eq. 3, why is a contrastive-like loss exploited? Does a simple CE loss on the super-classes not work? Cosine similarity is adopted in eq. 3, but on softmax probabilities.
It is important to show the training speed with the proposed discriminative information learning module. Would running K-means at each training iteration slow down training? A comparison of training speed would be expected.
For the barely supervised setting, the random seed often plays an important role in the distribution of the labeled data. Can you provide more details on how the random seed is set? Is it randomly selected, or chosen with a more sophisticated selection? For example, in FixMatch, they explain how they choose the 10 labels for CIFAR-10.
All experiments are run on small datasets. Also, the challenging dataset SVHN is not shown. SVHN is more challenging because of its noisy background. It would be interesting to see the performance on SVHN and larger datasets such as (Mini-)ImageNet.
To my understanding, for the barely supervised setting, the quantity of samples accepted for calculating the loss is especially important for early training. If a high threshold (tau_1, tau_2) is employed, SSL algorithms collapse to one class, as shown in Fig. 1, because not enough diverse samples are enrolled during training. This is also demonstrated in Fig. 4(c): when tau_2 is higher, the performance drops more significantly than when tau_2 is lower. In that case, comparing the enrollment rate of samples and the pseudo-label accuracy with FixMatch and FlexMatch is important to demonstrate the effectiveness of the proposed method.
The super-class concept is very similar to SimMatch (https://arxiv.org/abs/2203.06915), yet no discussion and comparison is provided.

Limitations
See above.
NIPS
Title
Unsupervised Learning of Lagrangian Dynamics from Images for Prediction and Control

Abstract
Recent approaches for modelling the dynamics of physical systems with neural networks enforce Lagrangian or Hamiltonian structure to improve prediction and generalization. However, when coordinates are embedded in high-dimensional data such as images, these approaches either lose interpretability or can only be applied to one particular example. We introduce a new unsupervised neural network model that learns Lagrangian dynamics from images, with interpretability that benefits prediction and control. The model infers Lagrangian dynamics on generalized coordinates that are simultaneously learned with a coordinate-aware variational autoencoder (VAE). The VAE is designed to account for the geometry of physical systems composed of multiple rigid bodies in the plane. By inferring interpretable Lagrangian dynamics, the model learns physical system properties, such as kinetic and potential energy, which enables long-term prediction of dynamics in the image space and synthesis of energy-based controllers.

1 Introduction
Humans can learn to predict the trajectories of mechanical systems, e.g., a basketball or a drone, from high-dimensional visual input, and learn to control the system, e.g., catch a ball or maneuver a drone, after a small number of interactions with those systems. We hypothesize that humans use domain-specific knowledge, e.g., the laws of physics, to achieve efficient learning. Motivated by this hypothesis, in this work we propose incorporating physics priors to learn and control dynamics from image data, aiming to gain interpretability and data efficiency. Specifically, we incorporate Lagrangian dynamics as the physics prior, which enables us to represent a broad class of physical systems. Recently, an increasing number of works [1, 2, 3, 4, 5] have incorporated Lagrangian/Hamiltonian dynamics into learning dynamical systems from coordinate data, to improve prediction and generalization. These approaches, however, require coordinate data, which are not always available in real-world applications. Hamiltonian Neural Network (HNN) [3] provides a single experiment with image observations, which requires a modification of the model; this modification is hard to generalize to systems with multiple rigid bodies. Hamiltonian Generative Network (HGN) [6] learns Hamiltonian dynamics from image sequences. However, the dimension of the latent generalized coordinates is 4×4×16 = 256, making interpretation difficult. Moreover, both HNN and HGN focus on prediction and include no controller design. Another class of approaches learns physical models from images, by either learning the map from images to coordinates with supervision on coordinate data [7] or learning the coordinates in an unsupervised way but only with translational coordinates [8, 9]. The unsupervised learning of rotational coordinates such as angles is under-explored in the literature. In this work, we propose an unsupervised neural network model that learns coordinates and Lagrangian dynamics on those coordinates from images of physical systems in motion in the plane. The latent dynamical model enforces Lagrangian dynamics, which benefits long-term prediction of the system.
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
As Lagrangian dynamics commonly involve rotational coordinates to describe the changing
configurations of objects in the system, we propose a coordinate-aware variational autoencoder (VAE) that can infer interpretable rotational and translational coordinates from images without supervision. The interpretable coordinates, together with the interpretable Lagrangian dynamics, pave the way for introducing energy-based controllers for the learned dynamics.

1.1 Related work

Lagrangian/Hamiltonian prior in learning dynamics
To improve prediction and generalization in physical system modelling, a class of approaches has incorporated the physics prior of Hamiltonian or Lagrangian dynamics into deep learning. Deep Lagrangian Network [1] and Lagrangian Neural Network [2] learn Lagrangian dynamics from position, velocity, and acceleration data. Hamiltonian Neural Networks [3] learn Hamiltonian dynamics from position, velocity, and acceleration data. By leveraging ODE integrators, Hamiltonian Graph Networks [10] and Symplectic ODE-Net [4] learn Hamiltonian dynamics from only position and velocity data. All of these works (except one particular experiment in HNN [3]) require direct observation of low-dimensional position and velocity data. Hamiltonian Generative Network [6] learns Hamiltonian dynamics from images.

Unsupervised learning of dynamics
We assume we are given no coordinate data and aim to learn coordinates and dynamics in an unsupervised way. With little position and velocity data, Belbute-Peres et al. [11] learn underlying dynamics; however, the authors observe that their model fails to learn meaningful dynamics when there is no supervision on position and velocity data at all. Without supervision, Watter et al. [12] and Levine et al. [13] learn locally linear dynamics, and Jaques et al. [14] learn unknown parameters in latent dynamics of a given form. Kossen et al. [15] extract the position and velocity of each object from videos and learn the underlying dynamics. Watters et al. [16] adopt an object-oriented design to gain data efficiency and robustness. Battaglia et al. [17], Sanchez-Gonzalez et al. [18], and Watters et al. [7] learn dynamics with supervision by taking into account priors on objects and their relations. These object-oriented designs focus little on rotational coordinates. Variational Integrator Network [19] considers rotational coordinates but cannot handle systems with multiple rotational coordinates.

Neural visual control
Besides learning dynamics for prediction, we would like to learn how control input influences the dynamics and to design control laws based on the learned model. This goal is relevant to neural motion planning and model-based reinforcement learning from images. PlaNet [20] learns latent dynamical models from images and designs control inputs by fast planning in the latent space. Kalman VAE [21] can potentially learn locally linear dynamics and control from images, although no control results have been shown. Dreamer [22] is a scalable reinforcement learning agent that learns from images using a world model. Ebert et al. [23] propose a self-supervised model-based method for robotic control. We leave the comparison of our energy-based control methods with these model-based control methods in the literature to future work.

1.2 Contribution
The main contribution of this work is two-fold. First, we introduce an unsupervised learning framework to learn Lagrangian dynamics from image data for prediction. The Lagrangian prior conserves energy when no control is applied; this helps learn more accurate dynamics as compared to an MLP dynamical model.
Moreover, the coordinate-aware VAE in the proposed learning framework infers interpretable latent rigid body coordinates in the sense that a coordinate encoded from an image of a system in a position with high potential energy has a high learned potential energy, and vice versa. This interpretability enables us to design energy-based controllers to control physical systems to target positions. We implement this work with PyTorch [24] and refactor our code into PyTorch Lightning format [25], which makes our code easy to read and our results easy to reproduce. The code for all experiments is available at https://github.com/DesmondZhong/Lagrangian_caVAE.

2 Preliminary concepts

2.1 Lagrangian dynamics

Lagrangian dynamics are a reformulation of Newton’s second law of motion. The configuration of a system in motion at time t is described by generalized coordinates q(t) = (q1(t), q2(t), ..., qm(t)), where m is the number of degrees of freedom (DOF) of the system. For planar rigid body systems with n rigid bodies and k holonomic constraints, the DOF is m = 3n − k. From D’Alembert’s principle, the equations of motion of the system, also known as the Euler-Lagrange equations, are

d/dt (∂L/∂q̇) − ∂L/∂q = Qnc,   (1)

where the scalar function L(q, q̇) is the Lagrangian, q̇ = dq/dt, and Qnc is a vector of non-conservative generalized forces. The Lagrangian L(q, q̇) is the difference between kinetic energy T(q, q̇) and potential energy V(q). For rigid body systems, the Lagrangian is

L(q, q̇) = T(q, q̇) − V(q) = (1/2) q̇ᵀ M(q) q̇ − V(q),   (2)

where M(q) is the mass matrix. In this work, we assume that the control inputs are the only non-conservative generalized forces, i.e., Qnc = g(q)u, where g(q) is the input matrix and u is a vector of control inputs such as forces or torques. Substituting Qnc = g(q)u and L(q, q̇) from (2) into (1), we get the equations of motion in the form of m second-order ordinary differential equations (ODE):

q̈ = M⁻¹(q) ( −(1/2) (dM(q)/dt) q̇ − dV(q)/dq + g(q)u ).   (3)

2.2 Control via energy shaping

Our goal is to control the system to a reference configuration q?, inferred from a goal image x?, based on the learned dynamics. As we are essentially learning the kinetic and potential energy associated with the system, we can leverage the learned energy for control by energy shaping [26, 27]. If rank(g(q)) = m, we have control over every DOF and the system is fully actuated. For such systems, control to the reference configuration q? can be achieved with the control law u(q, q̇) = β(q) + v(q̇), where β(q) is the potential energy shaping and v(q̇) is the damping injection. The goal of potential energy shaping is to let the system behave as if it is governed by a desired Lagrangian Ld with no non-conservative generalized forces:

d/dt (∂L/∂q̇) − ∂L/∂q = g(q)β(q)  ⇐⇒  d/dt (∂Ld/∂q̇) − ∂Ld/∂q = 0,   (4)

where the desired Lagrangian has desired potential energy Vd(q):

Ld(q, q̇) = T(q, q̇) − Vd(q) = (1/2) q̇ᵀ M(q) q̇ − Vd(q).   (5)

The difference between Ld and L is the difference between V and Vd, which explains the name potential energy shaping: β(q) shapes the potential energy V of the original system into a desired potential energy Vd. The potential energy Vd is designed to have a global minimum at q?. By the equivalence (4), we get

β(q) = gᵀ(ggᵀ)⁻¹ (∂V/∂q − ∂Vd/∂q).   (6)

With only potential energy shaping, the system dynamics will oscillate around q?.1 The purpose of damping injection v(q̇) is to impose convergence, exponentially in time, to q?.
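As a concrete illustration of how the learned quantities enter the forward dynamics (3), the following is a minimal PyTorch-style sketch, not the authors' implementation; the names M_net, V_net and g_net, and the assumed shapes, are hypothetical stand-ins for the learned mass matrix, potential energy and input matrix.

```python
import torch

def lagrangian_accel(q, q_dot, u, M_net, V_net, g_net):
    """Evaluate Eq. (3): q_ddot = M(q)^{-1} ( -1/2 dM/dt q_dot - dV/dq + g(q) u ).

    Assumed shapes: q, q_dot of size (m,), u of size (d,),
    M_net(q) -> (m, m), V_net(q) -> scalar, g_net(q) -> (m, d).
    """
    q = q.detach().requires_grad_(True)

    # dV/dq via automatic differentiation of the learned potential energy
    dV_dq = torch.autograd.grad(V_net(q), q, create_graph=True)[0]

    # dM/dt = (dM/dq) q_dot, computed as a Jacobian-vector product through q -> M(q)
    M, dM_dt = torch.autograd.functional.jvp(M_net, q, q_dot, create_graph=True)

    rhs = -0.5 * (dM_dt @ q_dot) - dV_dq + g_net(q) @ u
    return torch.linalg.solve(M, rhs)
```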
The damping injection has the form

v(q̇) = −gᵀ(ggᵀ)⁻¹ Kd q̇.   (7)

For underactuated systems, however, this controller design is not valid since ggᵀ will not be invertible. In general, we also need kinetic energy shaping [27] to achieve a control goal.

Remark The design parameters here are Vd and Kd. A quadratic desired potential energy

Vd(q) = (1/2) (q − q?)ᵀ Kp (q − q?),   (8)

results in the controller design

u(q, q̇) = gᵀ(ggᵀ)⁻¹ ( ∂V/∂q − Kp(q − q?) − Kd q̇ ).   (9)

This can be interpreted as a proportional-derivative (PD) controller with energy compensation.

(Footnote 1: Please see the Supplementary Materials for more details.)

2.3 Training Neural ODE with constant control

The Lagrangian dynamics can be formulated as a set of first-order ODEs

ṡ = f(s, u),   (10)

where s is a state vector and the unknown vector field f, a vector-valued function, can be parameterized with a neural network fψ. We leverage Neural ODE, proposed by Chen et al. [28], to learn the function f that explains the trajectory data of s. The idea is to predict future states from an initial state by integrating the ODE with an ODE solver. As all the operations in the ODE solver are differentiable, fψ can be updated by back-propagating through the ODE solver, thereby approximating the true f. However, Neural ODE cannot be applied to (10) directly since the input dimension and the output dimension of f are not the same. Zhong et al. [4] showed that if the control remains constant for each trajectory in the training data, Neural ODE can be applied to the following augmented ODE:

(ṡ, u̇) = (fψ(s, u), 0) = f̃ψ(s, u).   (11)

With a learned fψ, we can apply a controller design u = u(s) that is not constant, e.g., an energy-based controller, by integrating the ODE ṡ = f(s, u(s)).

3 Model architecture

Let X = ((x0, uc), (x1, uc), ..., (xTpred, uc)) be a given sequence of image and control pairs, where xτ, τ = 0, 1, ..., Tpred, is the image of the trajectory of a rigid-body system under constant control uc at time t = τ∆t. From X we want to learn a state-space model (10) that governs the time evolution of the rigid-body system dynamics. We assume the number of rigid bodies n is known and the segmentation of each object in the image is given. Each image can be written as xτ = (xτ1, ..., xτn), where xτi ∈ Rnx contains visual information about the ith rigid body at t = τ∆t and nx is the dimension of the image space. In Section 3.1, we parameterize f(s, u) with a neural network and design the architecture of the neural network such that (10) is constrained to follow Lagrangian dynamics, where physical properties such as mass and potential energy are learned from data. Since we have no access to state data, we need to infer the states s, i.e., generalized coordinates and velocities, from image data. Sections 3.2 and 3.4 introduce an inference model (encoder) and a generative model (decoder) pair. Together they make up a variational autoencoder (VAE) [29] to infer the generalized coordinates in an unsupervised way. Section 3.3 introduces a simple estimator of velocity from learned generalized coordinates. The VAE and the state-space model are trained together, as described in Section 3.5. The model architecture is shown in Figure 1.

3.1 Latent Lagrangian dynamics

The Lagrangian dynamics (3) yield a second-order ODE. From a model-based perspective, they can be re-written as a first-order ODE (10) by choosing the state as s = (q, q̇). However, from a data-driven perspective, this choice of state is problematic when the generalized coordinates involve angles.
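A minimal sketch of the energy-shaping controller in (9) for the fully actuated case of Section 2.2 is given below; it is only an illustration under stated assumptions, with V_net and g_net standing in for the learned potential energy and input matrix, Kp and Kd for user-chosen gain matrices, and q_star for the goal configuration q?.

```python
import torch

def energy_shaping_control(q, q_dot, q_star, V_net, g_net, Kp, Kd):
    """Eq. (9): u = g^T (g g^T)^{-1} ( dV/dq - Kp (q - q*) - Kd q_dot ), fully actuated case."""
    q = q.detach().requires_grad_(True)
    dV_dq = torch.autograd.grad(V_net(q), q)[0]      # gradient of the learned potential energy

    g = g_net(q)                                     # (m, d) input matrix, rank m assumed
    desired_force = dV_dq - Kp @ (q - q_star) - Kd @ q_dot
    # Right pseudo-inverse g^T (g g^T)^{-1} maps the desired generalized force to a control input
    u = g.T @ torch.linalg.solve(g @ g.T, desired_force)
    return u.detach()
```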
Consider the pendulum task in Figure 2 as an example where we want to infer the generalized coordinate, i.e., the angle of the pendulum φ, from an image of the pendulum. The map from the image to the angle φ should be bijective. However, if we choose the state as s = (φ, φ̇), the map is not bijective, since φ and φ + 2π map to the same image. If we restrict φ ∈ [−π, π), then the dynamics are not continuous when the pendulum moves around the inverted position. Inspired by Zhong et al. [4], we solve this issue by proposing the state as s = (cosφ, sinφ, φ̇), such that the mapping from the pendulum image to (cosφ, sinφ) is bijective. In general, for a planar rigid-body system with q = (r, φ), where r ∈ R^mR are translational generalized coordinates and φ ∈ T^mT are rotational generalized coordinates, the proposed state is s = (s1, s2, s3, s4, s5) = (r, cosφ, sinφ, ṙ, φ̇), where cos and sin are applied element-wise to φ. To enforce Lagrangian dynamics in the state-space model, we take the derivative of s with respect to t and substitute in (3) to get

ṡ1 = s4,
ṡ2 = −s3 ◦ s5,
ṡ3 = s2 ◦ s5,
(ṡ4, ṡ5) = M⁻¹(s1, s2, s3) ( −(1/2) (dM(s1, s2, s3)/dt) (s4, s5) + ( −∂V(s1, s2, s3)/∂s1, ∂V(s1, s2, s3)/∂s2 ◦ s3 − ∂V(s1, s2, s3)/∂s3 ◦ s2 ) + g(s1, s2, s3) u ),   (12)

where ◦ is the element-wise product. We use three neural networks, Mψ1(s1, s2, s3), Vψ2(s1, s2, s3), and gψ3(s1, s2, s3), to approximate the mass matrix, the potential energy and the input matrix, respectively. Equation (12) is then a state-space model parameterized by a neural network ṡ = fψ(s, u). It can be trained as stated in Section 2.3 given the initial condition s0 = (r0, cosφ0, sinφ0, ṙ0, φ̇0) and uc. Next, we present the means to infer s0 from the given images.

3.2 Coordinate-aware encoder

From a latent variable modelling perspective, an image x of a rigid-body system can be generated by first specifying the values of the generalized coordinates and then assigning values to pixels based on the generalized coordinates with a generative model - the decoder. In order to infer those generalized coordinates from images, we need an inference model - the encoder. We perform variational inference with a coordinate-aware VAE. The coordinate-aware encoder infers a distribution on the generalized coordinates. The Gaussian distribution is the default for modelling latent variables in a VAE. This is appropriate for modelling a translational generalized coordinate r since r resides in R1. However, this is not appropriate for modelling a rotational generalized coordinate φ since a Gaussian distribution is not a distribution on S1. If we use a Gaussian distribution to model hyperspherical latent variables, the VAE performs worse than a traditional autoencoder [30]. Thus, to model φ, we use the von Mises (vM) distribution, a family of distributions on S1. Analogous to a Gaussian distribution, a von Mises distribution is characterized by two parameters: µ ∈ R2 with ||µ||2 = 1 is the mean, and κ ∈ R≥0 is the concentration around µ. The von Mises distribution reduces to a uniform distribution when κ = 0. In our model, for a rotational generalized coordinate φ, we assume a posterior distribution Q(φ|x) = vM((cosφm, sinφm), φκ) with prior P(φ) = vM(·, 0) = U(S1). For a translational generalized coordinate r, we assume a posterior distribution Q(r|x) = N(rm, rvar) with prior N(0, 1). We denote the joint posterior distribution as Q(q|x) and the joint prior distribution as P(q). The encoder is a neural network that takes an image as input and provides the parameters of the distributions as output.
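To connect the latent dynamics (12) with the constant-control training scheme of Section 2.3, here is a hypothetical sketch using the torchdiffeq package; f_psi stands for a network implementing (12), and all names and dimensions are assumptions for illustration rather than details taken from the paper.

```python
import torch
from torchdiffeq import odeint  # differentiable ODE solver (Chen et al. [28])

class AugmentedDynamics(torch.nn.Module):
    """Eq. (11): integrate (s, u) jointly while keeping the control constant (du/dt = 0)."""
    def __init__(self, f_psi, state_dim):
        super().__init__()
        self.f_psi = f_psi          # latent Lagrangian dynamics, Eq. (12): (s, u) -> ds/dt
        self.state_dim = state_dim

    def forward(self, t, s_aug):
        s, u = s_aug[..., :self.state_dim], s_aug[..., self.state_dim:]
        ds = self.f_psi(s, u)
        du = torch.zeros_like(u)    # constant control within each training trajectory
        return torch.cat([ds, du], dim=-1)

# Hypothetical usage: roll the latent state forward from (s0, uc) over Tpred steps,
# then decode the predicted states into images for the prediction loss.
# dyn = AugmentedDynamics(f_psi, state_dim=s0.shape[-1])
# t = torch.arange(T_pred + 1, dtype=torch.float32) * dt
# s_traj = odeint(dyn, torch.cat([s0, uc], dim=-1), t, method='rk4')[..., :s0.shape[-1]]
```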
A black-box neural network encoder would not be able to learn interpretable generalized coordinates for a system in motion described by Lagrangian dynamics. Instead, we propose a coordinate-aware encoder by designing the architecture of the neural network to account for the geometry of the system. This is the key to interpretable encoding of generalized coordinates. Recall that each generalized coordinate qj specifies the position/rotation of a rigid body ij in the system. In principle, the coordinate can be learned from the image segmentation of ij. However, the reference frame of a generalized coordinate might depend on other generalized coordinates and change across images.

Take the CartPole example in Figure 2 as motivation. The system has two DOF and natural choices of generalized coordinates are the horizontal position of the cart q1 = r and the angle of the pole q2 = φ. The origin of the reference frame of r is the center of the image, which is the same across all images. The origin of the reference frame of φ, however, is the center of the cart, which is not the same across all the images since the cart can move. In order to learn the angle of the pole, we can either use a translation-invariant architecture such as Convolutional Neural Networks (CNN) or place the center of the encoding attention window of the pole segmentation image at the center of the cart. The former approach does not work well in extracting generalized coordinates.2 Thus, we adopt the latter approach, where we shift our encoding attention window horizontally with direction and magnitude given by generalized coordinate r, before feeding it into a neural network to learn φ. In this way we exploit the geometry of the system in the encoder.

(Footnote 2: Here we expect to encode the angle of the pole from a pole image regardless of where it appears in the image. As the translation invariance of CNNs is shown by Kauderer-Abrams [31] to be primarily dependent on data augmentation, the encoding of generalized coordinates might not generalize well to unseen trajectories. Also, in general we need both translation invariance and rotation invariance, a property that CNNs do not have.)

The default attention window is the image grid and corresponds to the default reference frame, where the origin is at the center of the image with horizontal and vertical axes. The above encoding attention window mechanism for a general system can be formalized by considering the transformation from the default reference frame to the reference frame of each generalized coordinate. The transformation of a point (xd, yd) in the default reference frame to a point (xt, yt) in the target reference frame is captured by the transformation T(x, y, θ), corresponding to translation by (x, y) and rotation by θ, as follows:

(xt, yt, 1)ᵀ = T(x, y, θ) (xd, yd, 1)ᵀ, where T(x, y, θ) is the 3×3 matrix with rows (cos θ, sin θ, x), (−sin θ, cos θ, y), (0, 0, 1).   (13)

Let T((x, y, θ)_j^enc) be the transformation from the default frame to the reference frame of generalized coordinate qj. This transformation might depend on constant parameters c associated with the shape and size of the rigid bodies and on the generalized coordinates q_{−j}, which denotes the vector of generalized coordinates with qj removed. Let (x, y, θ)_j^enc = T_j^enc(q_{−j}, c). Both q_{−j} and c are learned from images. However, the function T_j^enc is specified by leveraging the geometry of the system. In the CartPole example, (q1, q2) = (r, φ), and T_1^enc ≡ (0, 0, 0) and T_2^enc(q1) = (q1, 0, 0). In the Acrobot example, (q1, q2) = (φ1, φ2), and T_1^enc ≡ (0, 0, 0) and T_2^enc(q1, l1) = (l1 sin q1, l1 cos q1, 0). The shift of the attention window can be implemented with a spatial transformer network (STN) [32], which generates a transformed image x̃_{ij} from x_{ij}, i.e., x̃_{ij} = STN(x_{ij}, T(T_j^enc(q_{−j}, c))). In general, to encode qj, we use a multilayer perceptron (MLP) that takes x̃_{ij} as input and provides the parameters of the qj distribution as output.
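One way to realize the attention-window shift described above is as a differentiable image warp, in the spirit of the STN of [32]; the sketch below uses torch.nn.functional.affine_grid and grid_sample, with the mapping to Eq. (13) and the coordinate normalization simplified, and all names hypothetical.

```python
import torch
import torch.nn.functional as F

def shift_attention_window(img, tx, ty, theta):
    """Warp a batch of images (N, C, H, W) so that a frame translated by (tx, ty) and
    rotated by theta becomes the new sampling grid (cf. the transformation T in Eq. (13)).
    tx, ty are assumed to already be expressed in normalized [-1, 1] grid coordinates."""
    cos, sin = torch.cos(theta), torch.sin(theta)
    # One 2x3 affine matrix per batch element, rows (cos, sin, tx) and (-sin, cos, ty)
    A = torch.stack([
        torch.stack([cos, sin, tx], dim=-1),
        torch.stack([-sin, cos, ty], dim=-1),
    ], dim=-2)
    grid = F.affine_grid(A, list(img.shape), align_corners=False)
    return F.grid_sample(img, grid, align_corners=False)

# Hypothetical CartPole usage: re-center the pole segment on the cart before encoding phi,
# using the already-encoded cart position r (rescaled to grid coordinates) as the shift.
# x_tilde_pole = shift_attention_window(x_pole, tx=r_hat, ty=torch.zeros_like(r_hat),
#                                       theta=torch.zeros_like(r_hat))
```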
For a translational coordinate qj, we have (q_j^m, log q_j^var) = MLP_j^enc(x̃_{ij}). For a rotational coordinate qj, we have (αj, βj, log q_j^κ) = MLP_j^enc(x̃_{ij}), where the mean of the von Mises distribution is computed as (cos q_j^m, sin q_j^m) = (αj, βj)/√(αj² + βj²). We then take a sample from the qj distribution.3 Doing this for every generalized coordinate qj, we can get (rτ, cosφτ, sinφτ) from xτ for any τ.4 We will use (r0, cosφ0, sinφ0) and (r1, cosφ1, sinφ1).

(Footnote 3: We use the reparametrization trick proposed by Davidson et al. [30] to sample from a von Mises distribution.)

(Footnote 4: For a transformation that depends on one or more generalized coordinates, those generalized coordinates must be encoded before the transformation can be applied. In the CartPole example, we need to encode r before applying the transformation that centers the attention window on the cart to encode φ. We use the mean of the distribution, i.e., q_j^m or (cos q_j^m, sin q_j^m), for those transformations that depend on qj.)

3.3 Velocity estimator

To integrate Equation (12), we also need to infer (ṙ0, φ̇0), the initial velocity. We can estimate the initial velocity from the encoded generalized coordinates by finite difference. We use the following simple first-order finite difference estimator:

ṙ0 = (r^{m1} − r^{m0})/∆t,   (14)
φ̇0 = ((sinφ^{m1} − sinφ^{m0}) ◦ cosφ^{m0} − (cosφ^{m1} − cosφ^{m0}) ◦ sinφ^{m0})/∆t,   (15)

where (r^{m0}, cosφ^{m0}, sinφ^{m0}) and (r^{m1}, cosφ^{m1}, sinφ^{m1}) are the means of the generalized coordinates encoded from the images at time t = 0 and t = ∆t, respectively. Jaques et al. [14] proposed to use a neural network to estimate velocity. In our experiments, our simple estimator works better than a neural network estimator.

3.4 Coordinate-aware decoder

The decoder provides a distribution P(x|q) = N(x̂, I) as output, given a generalized coordinate q as input, where the mean x̂ is the reconstruction of the image data x. Instead of using a black-box decoder, we propose a coordinate-aware decoder. The coordinate-aware decoder first generates a static image x_i^c of every rigid body i in the system, at a default position and orientation, using an MLP with a constant input, i.e., x_i^c = MLP_i^dec(1). The coordinate-aware decoder then determines x̂_i, the image of rigid body i positioned and oriented on the image plane according to the generalized coordinates. The proposed decoder is inspired by the coordinate-consistent decoder by Jaques et al. [14]. However, the decoder of [14] cannot handle a system of multiple rigid bodies with constraints, such as the Acrobot and the CartPole, whereas our coordinate-aware decoder can. As in Jaques et al. [14], to find x̂_i we use the inverse transformation matrix T⁻¹((x, y, θ)_i^dec), where T is given by (13) and (x, y, θ)_i^dec = T_i^dec(q, c). In the CartPole example, (q1, q2) = (r, φ), and T_1^dec(r) = (r, 0, 0) and T_2^dec(r, φ) = (r, 0, φ). In the Acrobot example, (q1, q2) = (φ1, φ2), and T_1^dec(φ1) = (0, 0, φ1) and T_2^dec(φ1, φ2) = (l1 sinφ1, l1 cosφ1, φ2). The reconstruction image is then x̂ = (x̂_1, ..., x̂_n), where x̂_i = STN(x_i^c, T⁻¹(T_i^dec(q, c))).

3.5 Loss function

The loss L(X) is the sum of three terms:

L(X) = −E_{q0∼Q}[log P(x0|q0)] + KL(Q(q0|x0) || P(q0))   [VAE loss]
     + Σ_{τ=1..Tpred} ||x̂τ − xτ||²   [prediction loss]
     + λ Σ_j √(αj² + βj²)   [vM regularization].   (16)

The VAE loss is a variational bound on the marginal log-likelihood of the initial data P(x0). The prediction loss penalizes inaccurate predictions of the latent Lagrangian dynamics. The vM regularization with weight λ penalizes large norms of the vectors (αj, βj), preventing them from blowing up.
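To summarize how the three terms in (16) might be combined in code, here is a minimal sketch; the KL term is passed in precomputed since it depends on the von Mises reparameterization of [30], the squared-error reconstruction term stands in for −log P(x0|q0) up to constants, and all names are hypothetical.

```python
import torch

def total_loss(x0, x0_hat, kl_q0, x_pred, x_true, alpha, beta, lam):
    """Eq. (16): VAE loss on the first frame + multi-step prediction loss + vM regularization.

    kl_q0: KL(Q(q0|x0) || P(q0)), precomputed for the Gaussian / von Mises posteriors
    x_pred, x_true: (T_pred, ...) predicted and observed image sequences
    alpha, beta: stacked (alpha_j, beta_j) outputs of the rotational encoder heads
    """
    recon = torch.sum((x0_hat - x0) ** 2)          # -E[log P(x0|q0)] up to constants and scale
    vae_loss = recon + kl_q0

    prediction_loss = torch.sum((x_pred - x_true) ** 2)
    vm_regularization = lam * torch.sum(torch.sqrt(alpha ** 2 + beta ** 2))

    return vae_loss + prediction_loss + vm_regularization
```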
4 Results

We train our model on three systems: the Pendulum, the fully-actuated CartPole and the fully-actuated Acrobot. The training images are generated by the OpenAI Gym simulator [33]. The training setup is detailed in the Supplementary Materials. As the mean square error in the image space is not a good metric of long-term prediction accuracy [8], we report the predicted image sequences from a previously unseen initial condition and highlight the interpretability of our model.

Lagrangian dynamics and coordinate-aware VAE improve prediction. As the Acrobot is a chaotic system, accurate long-term prediction is impossible. Figure 3 shows the prediction sequences of images up to 48 time steps for the Pendulum and CartPole experiments with models trained with Tpred = 4. We compare the prediction results of our model (labelled Lagrangian+caVAE) with two model variants: MLPdyn+caVAE, which replaces the Lagrangian latent dynamics with MLP latent dynamics, and Lagrangian+VAE, which replaces the coordinate-aware VAE with a traditional VAE. The traditional VAE fails to reconstruct meaningful images for CartPole, although it works well in the simpler Pendulum system. With well-learned coordinates, models that enforce Lagrangian dynamics result in better long-term prediction, e.g., as compared to MLPdyn+caVAE, since Lagrangian dynamics with zero control preserve energy (see Supplementary Materials).

Learned potential energy enables energy-based control. Figure 4 shows the learned potential energy of the three systems and reconstruction images at selected coordinates with Lagrangian+caVAE. The learned potential energy is consistent with the true potential energy of those systems, e.g., the pendulum at the upward position has the highest potential energy while the pendulum at the downward position has the lowest potential energy. Figure 4 also visualizes the learned coordinates. Learning interpretable coordinates and potential energy enables energy-based controllers. Based on the learned encoding and dynamics, we are able to control the Pendulum and the fully-actuated Acrobot to the inverted position, and the fully-actuated CartPole to a position where the pole points upward. The sequences of images of controlled trajectories shown in Figure 3 are generated from the learned dynamics and encoding with Lagrangian+caVAE as follows. We first encode an image of the goal position x? into the goal generalized coordinates q?. At each time step, the OpenAI Gym simulator of a system takes a control input, integrates one time step forward, and outputs an image of the system at the next time step. The control input to the simulator is u(q, q̇) = β(q) + v(q̇), which is designed as in Section 2.2 with the learned potential energy, input matrix, coordinates encoded from the output images, and q?.

Baselines We set up two baseline models: HGN [6] and PixelHNN [3]. Neither model considers control input, so we only use the trajectories with zero control in our dataset to train the models.
Because of this, it is not fair to compare the pixel MSE of HGN in Table 1 with those of other models. We implemented HGN based on the architecture described in the paper and used the official code for PixelHNN. From Figure 3, we can see that HGN makes good predictions for the pendulum up to the training sequence length (τ = 4), but makes blurry long term predictions. HGN fails to generalize to the test dataset for the CartPole task. This is probably because HGN is not data efficient. In the original HGN experiment [6], 30 × 50K = 1.5M training images are used in the pendulum task, while here we use 20× 256 = 5120 training images (with zero control). Moreover, the dimension of latent coordinate q is 4 × 4 × 16 = 256. With such a fixed high dimension (for various tasks), HGN does not need to assume degrees of freedom. However, it might not be easy to interpret the learned q, whereas the learned coordinates in our model are interpretable. PixelHNN does not use an integrator and requires a special term in the loss function. PixelHNN does not account for the rotational nature of coordinate q, so the reconstruction images around the unstable equilibrium point are blurry and the learned coordinates are not easy to interpret (see Supplementary Material). In the original implementation of PixelHNN [3], the angle of the pendulum is constrained to be from −π/6 to π/6, where a linear approximation of the nonlinear dynamics is learned, which makes the learned coordinates easy to interpret. This constraint does not hold for more challenging control tasks. Ablation study To understand which component in our model contributes to learning interpretable generalized coordinates the most, we also report results of four ablations, which are obtained by (a) replacing the coordinate-aware encoder with a black-box MLP, (b) replacing the coordinate-aware decoder with a black-box MLP, (c) replacing the coordinate-aware VAE with a coordinate-aware AE, and (d) a Physics-as-inverse-graphics (PAIG) model [14]. We observe that the coordinate-aware decoder makes the primary contribution to learning interpretable coordinates, and the coordinateaware encoder makes a secondary contribution. The coordinate-aware AE succeeds in Pendulum and Acrobot tasks but fails in the CartPole task. PAIG uses AE with a neural network velocity estimator. We find that PAIG’s velocity estimator overfits the training data, which results in inaccurate long term prediction. Please see Supplementary Materials for prediction sequences of the ablation study. 5 Conclusion We propose an unsupervised model that learns planar rigid-body dynamics from images in an explainable and transparent way by incorporating the physics prior of Lagrangian dynamics and a coordinate-aware VAE, both of which we show are important for accurate prediction in the image space. The interpretability of the model allows for synthesis of model-based controllers. Broader Impact We focus on the impact of using our model to provide explanations for physical system modelling. Our model could be used to provide explanations regarding the underlying symmetries, i.e., conservation laws, of physical systems. Further, the incorporation of the physics prior of Lagrangian dynamics improves robustness and generalizability for both prediction and control applications. We see opportunities for research applying our model to improve transparency and explanability in reinforcement learning, which is typically solved with low-dimensional observation data instead of image data. 
Our work also enables future research on vision-based controllers. The limitations of our work will also motivate research on unsupervised segmentation of images of physical systems. Acknowledgments and Disclosure of Funding This research has been supported in part by ONR grant #N00014-18-1-2873 and by the School of Engineering and Applied Science at Princeton University through the generosity of William Addy ’82. Yaofeng Desmond Zhong would like to thank Christine Allen-Blanchette, Shinkyu Park, Sushant Veer and Anirudha Majumdar for helpful discussions. We also thank the anonymous reviewers for providing detailed and helpful feedback.
1. What is the main contribution of the paper, and how does it differ from other works in the field? 2. What are the strengths of the proposed approach, particularly in terms of its ability to learn physical dynamics from image/control pairs? 3. What are the weaknesses of the paper, especially regarding the assumptions made about the number of objects and their segmentation? 4. How do the experimental results support the claims made in the paper, and what do they reveal about the effectiveness of the proposed method? 5. Are there any limitations or potential drawbacks to the approach proposed in the paper, and how might they be addressed in future work?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The authors propose a coordinate-aware VAE to learn Lagrangian dynamics. The architecture assumes that the number of (rigid) objects in an image is given and that the objects are segmented. The encoder learns generalized coordinates from image/control pairs. The latent coordinates are modeled using Gaussian (for translational) and von Mises (for rotational) distributions. The approach was tested on 3 types of dynamics: a pendulum, a cart-pole and an acrobot. Strengths The problem setup is well done. Learning physical dynamics from image/control pairs is novel and interesting. It uses first principles and hence the resulting architecture is explainable. The experiments include ablation studies showing the contribution of the modules of the model. Weaknesses It does presume that the number of objects is given and that they have been segmented; the examples are of simple dynamics. Experiments are qualitative rather than quantitative. Figure 3, second row, shows that even for the simplest case (pendulum) the synthesized frames are not quite what they should be (frames 3, 4, 5, 8, 9, and 10 are noticeably different from the true sequence frames).
NIPS
1. What is the main contribution of the paper regarding Lagrangian dynamics learning?
2. What are the strengths and weaknesses of the proposed NeuralODE approach?
3. How does the reviewer assess the novelty and significance of the paper's content compared to prior works?
4. What are the questions raised by the reviewer regarding the paper's positioning and focus?
5. How can the authors improve the paper's clarity and impact?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper presents a way of learning the Lagrangian dynamics of rigid body systems from images. Similarly to recent work (Greydanus'19, Toth'20), this is done with a NeuralODE approach wherein the state of the system (in generalized coordinates) is inferred from an image with a neural network, and this representation is learned end-to-end with a prediction loss. It is shown that the resulting system learns interpretable potential energies and can be used for prediction and control in continuous dynamics settings.
--- Decision --- The paper presents an interesting and potentially promising method for learning Lagrangian dynamics from images. However, prior work exists that already solves a very similar problem for Hamiltonian dynamics, some of which is not referenced in the paper (Toth'20). These baselines are not evaluated. Moreover, the paper specifies neither the contributions nor the motivation behind the work. Unfortunately, I am not able to accept this paper since the importance of solving this problem for Lagrangian rather than Hamiltonian networks is not explained and the paper does not compare to prior work. Toth'20, HAMILTONIAN GENERATIVE NETWORKS
---- Update ---- The authors' response and the internal discussion have clarified many of my questions about the paper, and I raise my rating to borderline. Further, I believe the new changes will improve the paper significantly after they are incorporated. However, I believe that since significant changes in writing and more experiments are needed to correctly position the paper with respect to the prior work, the paper should undergo another review round after the updates. I provide my detailed reasoning and feedback below.
---- Positioning I believe that the paper could have the following key contributions: image-based Lagrangian dynamics for prediction, interpretability of the latent rigid body coordinates, and image-based Lagrangian dynamics for control. Unfortunately, the current paper does not specify which contribution it is claiming, and does not convincingly defend any of these contributions. The authors may wish to focus on one or more of these contributions and defend them with proper theoretical and empirical comparison. Given the promising results in the rebuttal, I suspect that the next iteration of the paper can successfully argue for the first contribution by discussing and comparing to the baselines discussed in the rebuttal. Alternatively, the authors may wish to focus on the second contribution, in which case it is crucial to extensively discuss how prior work relates to the method, provide baseline comparisons beyond ablations, and provide quantitative metrics that show that the method outperforms the baselines and ablations. Should the authors wish to focus on the third contribution, I would expect a discussion of why energy-shaping control is beneficial compared to mainstream visual control methods.
---- Additional comments The abstract and perhaps the title need to clarify that the Lagrangian of a rigid body system is learned. This is not obvious since recent work has applied Hamiltonian/Lagrangian structure to general latent spaces as opposed to rigid body systems. The introduction talks about learning physical dynamics with neural networks. Rather confusingly, it entirely omits references to the mainstream methods that do so, such as Ebert'18 or Hafner'19.
It is crucial to clarify in the introduction that this paper focuses on interpretable dynamics models for rigid body systems instead of general learned physics. Re motivation: I am glad the authors have added some motivation referencing the results in the rebuttal. I am hopeful that once incorporated in the main paper, this motivation will be clear to the future reader.
Strengths The paper proposes an interesting approach to learning Lagrangian dynamics for rigid body systems from images.
Weaknesses The paper does not specify its contributions and there is no motivation provided. The introduction mentions that prior Hamiltonian neural networks have not been applied to image data, but this is not true (see [6] or Toth'20). These baseline comparisons are missing. The experimental section does not explain why the experiments were performed, and the conclusion does not discuss the impact of the paper.
NIPS
Title Unsupervised Learning of Lagrangian Dynamics from Images for Prediction and Control Abstract Recent approaches for modelling dynamics of physical systems with neural networks enforce Lagrangian or Hamiltonian structure to improve prediction and generalization. However, when coordinates are embedded in high-dimensional data such as images, these approaches either lose interpretability or can only be applied to one particular example. We introduce a new unsupervised neural network model that learns Lagrangian dynamics from images, with interpretability that benefits prediction and control. The model infers Lagrangian dynamics on generalized coordinates that are simultaneously learned with a coordinate-aware variational autoencoder (VAE). The VAE is designed to account for the geometry of physical systems composed of multiple rigid bodies in the plane. By inferring interpretable Lagrangian dynamics, the model learns physical system properties, such as kinetic and potential energy, which enables long-term prediction of dynamics in the image space and synthesis of energy-based controllers.
1 Introduction Humans can learn to predict the trajectories of mechanical systems, e.g., a basketball or a drone, from high-dimensional visual input, and learn to control the system, e.g., catch a ball or maneuver a drone, after a small number of interactions with those systems. We hypothesize that humans use domain-specific knowledge, e.g., physics laws, to achieve efficient learning. Motivated by this hypothesis, in this work, we propose the incorporation of physics priors to learn and control dynamics from image data, aiming to gain interpretability and data efficiency. Specifically, we incorporate Lagrangian dynamics as the physics prior, which enables us to represent a broad class of physical systems. Recently, an increasing number of works [1, 2, 3, 4, 5] have incorporated Lagrangian/Hamiltonian dynamics into learning dynamical systems from coordinate data, to improve prediction and generalization. These approaches, however, require coordinate data, which are not always available in real-world applications. Hamiltonian Neural Network (HNN) [3] provides a single experiment with image observations, which requires a modification of the model. This modification is hard to generalize to systems with multiple rigid bodies. Hamiltonian Generative Network (HGN) [6] learns Hamiltonian dynamics from image sequences. However, the dimension of its latent generalized coordinates is 4 × 4 × 16 = 256, making interpretation difficult. Moreover, both HNN and HGN focus on prediction and do not consider control design. Another class of approaches learns physical models from images, by either learning the map from images to coordinates with supervision on coordinate data [7] or learning the coordinates in an unsupervised way but only with translational coordinates [8, 9]. The unsupervised learning of rotational coordinates such as angles is under-explored in the literature. In this work, we propose an unsupervised neural network model that learns coordinates and Lagrangian dynamics on those coordinates from images of physical systems in motion in the plane. The latent dynamical model enforces Lagrangian dynamics, which benefits long-term prediction of the system. As Lagrangian dynamics commonly involve rotational coordinates to describe the changing
configurations of objects in the system, we propose a coordinate-aware variational autoencoder (VAE) that can infer interpretable rotational and translational coordinates from images without supervision. The interpretable coordinates together with the interpretable Lagrangian dynamics pave the way for introducing energy-based controllers of the learned dynamics.
1.1 Related work Lagrangian/Hamiltonian prior in learning dynamics To improve prediction and generalization of physical system modelling, a class of approaches has incorporated the physics prior of Hamiltonian or Lagrangian dynamics into deep learning. Deep Lagrangian Network [1] and Lagrangian Neural Network [2] learn Lagrangian dynamics from position, velocity and acceleration data. Hamiltonian Neural Networks [3] learn Hamiltonian dynamics from position, velocity and acceleration data. By leveraging ODE integrators, Hamiltonian Graph Networks [10] and Symplectic ODE-Net [4] learn Hamiltonian dynamics from only position and velocity data. All of these works (except one particular experiment in HNN [3]) require direct observation of low-dimensional position and velocity data. Hamiltonian Generative Network [6] learns Hamiltonian dynamics from images.
Unsupervised learning of dynamics We assume we are given no coordinate data and aim to learn coordinates and dynamics in an unsupervised way. With a small amount of position and velocity data, Belbute-Peres et al. [11] learn underlying dynamics. However, the authors observe that their model fails to learn meaningful dynamics when there is no supervision on position and velocity data at all. Without supervision, Watter et al. [12] and Levine et al. [13] learn locally linear dynamics, and Jaques et al. [14] learn unknown parameters of latent dynamics with a given form. Kossen et al. [15] extract the position and velocity of each object from videos and learn the underlying dynamics. Watters et al. [16] adopt an object-oriented design to gain data efficiency and robustness. Battaglia et al. [17], Sanchez-Gonzalez et al. [18] and Watters et al. [7] learn dynamics with supervision by taking into account the prior of objects and their relations. These object-oriented designs focus little on rotational coordinates. Variational Integrator Network [19] considers rotational coordinates but cannot handle systems with multiple rotational coordinates.
Neural visual control Besides learning dynamics for prediction, we would like to learn how control input influences the dynamics and to design control laws based on the learned model. This goal is relevant to neural motion planning and model-based reinforcement learning from images. PlaNet [20] learns latent dynamical models from images and designs control input by fast planning in the latent space. Kalman VAE [21] can potentially learn locally linear dynamics and control from images, although no control result has been shown. Dreamer [22] is a scalable reinforcement learning agent which learns from images using a world model. Ebert et al. [23] propose a self-supervised model-based method for robotic control. We leave the comparison of our energy-based control method with these model-based control methods in the literature to future work.
1.2 Contribution The main contribution of this work is two-fold. First, we introduce an unsupervised learning framework to learn Lagrangian dynamics from image data for prediction. The Lagrangian prior conserves energy when no control is applied; this helps learn more accurate dynamics as compared to an MLP dynamical model.
Moreover, the coordinate-aware VAE in the proposed learning framework infers interpretable latent rigid body coordinates, in the sense that a coordinate encoded from an image of a system in a position with high potential energy has a high learned potential energy, and vice versa. This interpretability enables us to design energy-based controllers to control physical systems to target positions. We implement this work with PyTorch [24] and refactor our code into the PyTorch Lightning format [25], which makes our code easy to read and our results easy to reproduce. The code for all experiments is available at https://github.com/DesmondZhong/Lagrangian_caVAE.
2 Preliminary concepts
2.1 Lagrangian dynamics Lagrangian dynamics are a reformulation of Newton's second law of motion. The configuration of a system in motion at time t is described by generalized coordinates q(t) = (q1(t), q2(t), ..., qm(t)), where m is the number of degrees of freedom (DOF) of the system. For planar rigid body systems with n rigid bodies and k holonomic constraints, the DOF is m = 3n − k. From D'Alembert's principle, the equations of motion of the system, also known as the Euler-Lagrange equation, are
\frac{d}{dt}\Big(\frac{\partial L}{\partial \dot{q}}\Big) - \frac{\partial L}{\partial q} = Q_{nc},  (1)
where the scalar function L(q, q̇) is the Lagrangian, q̇ = dq/dt, and Q_nc is a vector of non-conservative generalized forces. The Lagrangian L(q, q̇) is the difference between the kinetic energy T(q, q̇) and the potential energy V(q). For rigid body systems, the Lagrangian is
L(q, \dot{q}) = T(q, \dot{q}) - V(q) = \tfrac{1}{2}\dot{q}^T M(q)\dot{q} - V(q),  (2)
where M(q) is the mass matrix. In this work, we assume that the control inputs are the only non-conservative generalized forces, i.e., Q_nc = g(q)u, where g(q) is the input matrix and u is a vector of control inputs such as forces or torques. Substituting Q_nc = g(q)u and L(q, q̇) from (2) into (1), we get the equations of motion in the form of m second-order ordinary differential equations (ODEs):
\ddot{q} = M^{-1}(q)\Big(-\tfrac{1}{2}\frac{dM(q)}{dt}\dot{q} - \frac{dV(q)}{dq} + g(q)u\Big).  (3)
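As a concrete illustration of how (3) can be evaluated with learned components, the sketch below computes q̈ with automatic differentiation. It is a minimal sketch under stated assumptions: M_net, V_net, and g_net are placeholder networks standing in for the mass matrix, potential energy, and input matrix, and the use of PyTorch autograd here is illustrative rather than the authors' exact implementation.

```python
import torch
from torch.autograd.functional import jacobian

def lagrangian_acceleration(q, q_dot, u, M_net, V_net, g_net):
    # Evaluate eq. (3): q_ddot = M^{-1}(q) ( -1/2 (dM/dt) q_dot - dV/dq + g(q) u ).
    # q and q_dot are 1-D tensors of length m; u is the control vector.
    q = q.detach().requires_grad_(True)
    V = V_net(q).sum()                                # scalar potential energy V(q)
    dVdq = torch.autograd.grad(V, q)[0]               # dV/dq

    # dM/dt = sum_k (dM/dq_k) * q_dot_k, via the Jacobian of the mass-matrix network.
    dM_dq = jacobian(M_net, q.detach())               # shape (m, m, m)
    dM_dt = torch.einsum('ijk,k->ij', dM_dq, q_dot)

    M = M_net(q.detach())
    rhs = -0.5 * dM_dt @ q_dot - dVdq + g_net(q.detach()) @ u
    return torch.linalg.solve(M, rhs)                 # solves M x = rhs without forming M^{-1}
```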
2.2 Control via energy shaping Our goal is to control the system to a reference configuration q⋆, inferred from a goal image x⋆, based on the learned dynamics. As we are essentially learning the kinetic and potential energy associated with the system, we can leverage the learned energy for control by energy shaping [26, 27]. If rank(g(q)) = m, we have control over every DOF and the system is fully actuated. For such systems, control to the reference configuration q⋆ can be achieved with the control law u(q, q̇) = β(q) + v(q̇), where β(q) is the potential energy shaping and v(q̇) is the damping injection. The goal of potential energy shaping is to let the system behave as if it were governed by a desired Lagrangian Ld with no non-conservative generalized forces:
\frac{d}{dt}\Big(\frac{\partial L}{\partial \dot{q}}\Big) - \frac{\partial L}{\partial q} = g(q)\beta(q) \iff \frac{d}{dt}\Big(\frac{\partial L_d}{\partial \dot{q}}\Big) - \frac{\partial L_d}{\partial q} = 0,  (4)
where the desired Lagrangian has a desired potential energy Vd(q):
L_d(q, \dot{q}) = T(q, \dot{q}) - V_d(q) = \tfrac{1}{2}\dot{q}^T M(q)\dot{q} - V_d(q).  (5)
The difference between Ld and L is the difference between V and Vd, which explains the name potential energy shaping: β(q) shapes the potential energy V of the original system into a desired potential energy Vd. The potential energy Vd is designed to have a global minimum at q⋆. By the equivalence (4), we get
\beta(q) = g^T(gg^T)^{-1}\Big(\frac{\partial V}{\partial q} - \frac{\partial V_d}{\partial q}\Big).  (6)
With only potential energy shaping, the system dynamics will oscillate around q⋆.¹ The purpose of damping injection v(q̇) is to impose convergence, exponentially in time, to q⋆. The damping injection has the form
v(\dot{q}) = -g^T(gg^T)^{-1}(K_d\dot{q}).  (7)
For underactuated systems, however, this controller design is not valid since ggᵀ will not be invertible. In general, we also need kinetic energy shaping [27] to achieve a control goal.
Remark The design parameters here are Vd and Kd. A quadratic desired potential energy
V_d(q) = \tfrac{1}{2}(q - q^\star)^T K_p (q - q^\star)  (8)
results in the controller design
u(q, \dot{q}) = g^T(gg^T)^{-1}\Big(\frac{\partial V}{\partial q} - K_p(q - q^\star) - K_d\dot{q}\Big).  (9)
This can be interpreted as a proportional-derivative (PD) controller with energy compensation.
¹Please see the Supplementary Materials for more details.
2.3 Training Neural ODE with constant control The Lagrangian dynamics can be formulated as a set of first-order ODEs
\dot{s} = f(s, u),  (10)
where s is the state vector and f is an unknown vector field (a vector-valued function) that can be parameterized with a neural network fψ. We leverage Neural ODE, proposed by Chen et al. [28], to learn the function f that explains the trajectory data of s. The idea is to predict future states from an initial state by integrating the ODE with an ODE solver. As all the operations in the ODE solver are differentiable, fψ can be updated by back-propagating through the ODE solver, thereby approximating the true f. However, Neural ODE cannot be applied to (10) directly since the input dimension and the output dimension of f are not the same. Zhong et al. [4] showed that if the control remains constant for each trajectory in the training data, Neural ODE can be applied to the following augmented ODE:
\begin{pmatrix} \dot{s} \\ \dot{u} \end{pmatrix} = \begin{pmatrix} f_\psi(s, u) \\ 0 \end{pmatrix} = \tilde{f}_\psi(s, u).  (11)
With a learned fψ, we can apply a controller design u = u(s) that is not constant, e.g., an energy-based controller, by integrating the ODE ṡ = f(s, u(s)).
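One way to realize the constant-control augmentation in (11) is with an off-the-shelf differentiable ODE solver. The sketch below uses the torchdiffeq package and a generic MLP purely for illustration; the architecture, dimensions, and the choice to penalize latent states directly (rather than decoded images, as the full model does through the loss in Section 3.5) are simplifying assumptions, not the authors' settings.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # differentiable ODE solver in the spirit of Chen et al. [28]

class AugmentedDynamics(nn.Module):
    # f_tilde of eq. (11): the state s evolves by f_psi(s, u); the constant control has du/dt = 0.
    def __init__(self, state_dim, control_dim, hidden=128):
        super().__init__()
        self.state_dim = state_dim
        self.f_psi = nn.Sequential(
            nn.Linear(state_dim + control_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, t, aug):
        s, u = aug[..., :self.state_dim], aug[..., self.state_dim:]
        ds = self.f_psi(torch.cat([s, u], dim=-1))
        return torch.cat([ds, torch.zeros_like(u)], dim=-1)

def rollout_loss(model, s0, u_const, s_targets, dt):
    # Integrate the augmented ODE from (s0, u_const) and penalize the predicted states.
    aug0 = torch.cat([s0, u_const], dim=-1)
    t = torch.arange(s_targets.shape[0] + 1, dtype=torch.float32) * dt
    aug_traj = odeint(model, aug0, t)                 # (T+1, batch, state_dim + control_dim)
    s_pred = aug_traj[1:, ..., :model.state_dim]
    return ((s_pred - s_targets) ** 2).mean()
```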
3 Model architecture Let X = ((x0, uc), (x1, uc), ..., (xTpred, uc)) be a given sequence of image and control pairs, where xτ, τ = 0, 1, ..., Tpred, is the image of the trajectory of a rigid-body system under constant control uc at time t = τ∆t. From X we want to learn a state-space model (10) that governs the time evolution of the rigid-body system dynamics. We assume the number of rigid bodies n is known and the segmentation of each object in the image is given. Each image can be written as xτ = (xτ1, ..., xτn), where xτi ∈ R^{nx} contains visual information about the ith rigid body at t = τ∆t and nx is the dimension of the image space. In Section 3.1, we parameterize f(s, u) with a neural network and design the architecture of the neural network such that (10) is constrained to follow Lagrangian dynamics, where physical properties such as mass and potential energy are learned from data. Since we have no access to state data, we need to infer the states s, i.e., the generalized coordinates and velocities, from image data. Sections 3.2 and 3.4 introduce an inference model (encoder) and a generative model (decoder) pair. Together they make up a variational autoencoder (VAE) [29] to infer the generalized coordinates in an unsupervised way. Section 3.3 introduces a simple estimator of velocity from the learned generalized coordinates. The VAE and the state-space model are trained together, as described in Section 3.5. The model architecture is shown in Figure 1.
3.1 Latent Lagrangian dynamics The Lagrangian dynamics (3) yield a second-order ODE. From a model-based perspective, they can be re-written as a first-order ODE (10) by choosing the state as s = (q, q̇). However, from a data-driven perspective, this choice of state is problematic when the generalized coordinates involve angles. Consider the pendulum task in Figure 2 as an example, where we want to infer the generalized coordinate, i.e., the angle of the pendulum φ, from an image of the pendulum. The map from the image to the angle φ should be bijective. However, if we choose the state as s = (φ, φ̇), the map is not bijective, since φ and φ + 2π map to the same image. If we restrict φ ∈ [−π, π), then the dynamics are not continuous when the pendulum moves around the inverted position. Inspired by Zhong et al. [4], we solve this issue by proposing the state as s = (cosφ, sinφ, φ̇), such that the mapping from the pendulum image to (cosφ, sinφ) is bijective. In general, for a planar rigid-body system with q = (r, φ), where r ∈ R^{mR} are translational generalized coordinates and φ ∈ T^{mT} are rotational generalized coordinates, the proposed state is s = (s1, s2, s3, s4, s5) = (r, cosφ, sinφ, ṙ, φ̇), where cos and sin are applied element-wise to φ. To enforce Lagrangian dynamics in the state-space model, we take the derivative of s with respect to t and substitute in (3) to get
\dot{s} = \begin{pmatrix} s_4 \\ -s_3 \circ s_5 \\ s_2 \circ s_5 \\ M^{-1}(s_1, s_2, s_3)\Big(-\tfrac{1}{2}\frac{dM(s_1, s_2, s_3)}{dt}\begin{pmatrix} s_4 \\ s_5 \end{pmatrix} + \begin{pmatrix} -\frac{\partial V(s_1, s_2, s_3)}{\partial s_1} \\ \frac{\partial V(s_1, s_2, s_3)}{\partial s_2} \circ s_3 - \frac{\partial V(s_1, s_2, s_3)}{\partial s_3} \circ s_2 \end{pmatrix} + g(s_1, s_2, s_3)u\Big) \end{pmatrix},  (12)
where ◦ is the element-wise product. We use three neural networks, Mψ1(s1, s2, s3), Vψ2(s1, s2, s3), and gψ3(s1, s2, s3), to approximate the mass matrix, the potential energy and the input matrix, respectively. Equation (12) is then a state-space model parameterized by a neural network, ṡ = fψ(s, u). It can be trained as stated in Section 2.3 given the initial condition s0 = (r0, cosφ0, sinφ0, ṙ0, φ̇0) and uc. Next, we present the means to infer s0 from the given images.
3.2 Coordinate-aware encoder From a latent variable modelling perspective, an image x of a rigid-body system can be generated by first specifying the values of the generalized coordinates and then assigning values to pixels based on the generalized coordinates with a generative model - the decoder. In order to infer those generalized coordinates from images, we need an inference model - the encoder. We perform variational inference with a coordinate-aware VAE. The coordinate-aware encoder infers a distribution on the generalized coordinates. The Gaussian distribution is the default for modelling latent variables in a VAE. This is appropriate for modelling a translational generalized coordinate r since r resides in R1. However, this is not appropriate for modelling a rotational generalized coordinate φ since a Gaussian distribution is not a distribution on S1. If we use a Gaussian distribution to model hyperspherical latent variables, the VAE performs worse than a traditional autoencoder [30]. Thus, to model φ, we use the von Mises (vM) distribution, a family of distributions on S1. Analogous to a Gaussian distribution, a von Mises distribution is characterized by two parameters: the mean µ ∈ R2 with ||µ||2 = 1, and the concentration κ ∈ R≥0 around µ. The von Mises distribution reduces to a uniform distribution when κ = 0. In our model, for a rotational generalized coordinate φ, we assume a posterior distribution Q(φ|x) = vM((cosφm, sinφm), φκ) with prior P(φ) = vM(·, 0) = U(S1). For a translational generalized coordinate r, we assume a posterior distribution Q(r|x) = N(rm, rvar) with prior N(0, 1). We denote the joint posterior distribution as Q(q|x) and the joint prior distribution as P(q). The encoder is a neural network that takes an image as input and provides the parameters of the distributions as output.
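To illustrate these distribution choices, the snippet below constructs the per-coordinate posteriors with torch.distributions. Parameterizing the von Mises mean through atan2(β, α) is one convenient way to realize the normalized mean direction; the function and variable names are placeholders rather than the authors' code, and the rejection-based reparameterization of Davidson et al. [30] used in the paper is not provided by torch's built-in VonMises.

```python
import torch
from torch.distributions import Normal, VonMises

def coordinate_posteriors(r_mean, r_logvar, alpha, beta, log_kappa):
    # Q(r|x) = N(r_m, r_var) for a translational coordinate and
    # Q(phi|x) = vM(phi_m, kappa) for a rotational coordinate (a sketch, not the authors' API).
    q_r = Normal(r_mean, torch.exp(0.5 * r_logvar))
    phi_mean = torch.atan2(beta, alpha)      # angle whose direction is (alpha, beta) / ||(alpha, beta)||
    q_phi = VonMises(phi_mean, torch.exp(log_kappa))
    return q_r, q_phi

# Example priors matching the text: N(0, 1) for r and an (approximately) uniform
# distribution on S^1 for phi, i.e., a von Mises with concentration close to zero.
p_r = Normal(torch.zeros(1), torch.ones(1))
p_phi = VonMises(torch.zeros(1), torch.full((1,), 1e-6))
```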
A black-box neural network encoder would not be able to learn interpretable generalized coordinates for a system in motion described by Lagrangian dynamics. Instead, we propose a coordinate-aware encoder by designing the architecture of the neural network to account for the geometry of the system. This is the key to interpretable encoding of generalized coordinates. Recall that each generalized coordinate qj specifies the position/rotation of a rigid body ij in the system. In principle, the coordinate can be learned from the image segmentation of ij. However, the reference frame of a generalized coordinate might depend on other generalized coordinates and change across images. Take the CartPole example in Figure 2 as motivation. The system has two DOF, and natural choices of generalized coordinates are the horizontal position of the cart, q1 = r, and the angle of the pole, q2 = φ. The origin of the reference frame of r is the center of the image, which is the same across all images. The origin of the reference frame of φ, however, is the center of the cart, which is not the same across all the images since the cart can move. In order to learn the angle of the pole, we can either use a translation-invariant architecture such as a Convolutional Neural Network (CNN) or place the center of the encoding attention window of the pole segmentation image at the center of the cart. The former approach does not work well in extracting generalized coordinates.² Thus, we adopt the latter approach, where we shift our encoding attention window horizontally, with direction and magnitude given by the generalized coordinate r, before feeding it into a neural network to learn φ. In this way we exploit the geometry of the system in the encoder. The default attention window is the image grid and corresponds to the default reference frame, where the origin is at the center of the image with horizontal and vertical axes.
The above encoding attention window mechanism for a general system can be formalized by considering the transformation from the default reference frame to the reference frame of each generalized coordinate. The transformation of a point (xd, yd) in the default reference frame to a point (xt, yt) in the target reference frame is captured by the transformation T(x, y, θ), corresponding to translation by (x, y) and rotation by θ, as follows:
\begin{pmatrix} x_t \\ y_t \\ 1 \end{pmatrix} = T(x, y, \theta) \begin{pmatrix} x_d \\ y_d \\ 1 \end{pmatrix}, \quad \text{where} \quad T(x, y, \theta) = \begin{pmatrix} \cos\theta & \sin\theta & x \\ -\sin\theta & \cos\theta & y \\ 0 & 0 & 1 \end{pmatrix}.  (13)
Let T((x, y, θ)^enc_j) be the transformation from the default frame to the reference frame of generalized coordinate qj. This transformation might depend on constant parameters c, associated with the shape and size of the rigid bodies, and on the generalized coordinates q−j, which denotes the vector of generalized coordinates with qj removed. Let (x, y, θ)^enc_j = T^enc_j(q−j, c). Both q−j and c are learned from images. However, the function T^enc_j is specified by leveraging the geometry of the system. In the CartPole example, (q1, q2) = (r, φ), and T^enc_1 ≡ (0, 0, 0) and T^enc_2(q1) = (q1, 0, 0). In the Acrobot example, (q1, q2) = (φ1, φ2), and T^enc_1 ≡ (0, 0, 0) and T^enc_2(q1, l1) = (l1 sin q1, l1 cos q1, 0). The shift of the attention window can be implemented with a spatial transformer network (STN) [32], which generates a transformed image x̃ij from xij, i.e., x̃ij = STN(xij, T(T^enc_j(q−j, c))). In general, to encode qj, we use a multilayer perceptron (MLP) that takes x̃ij as input and provides the parameters of the qj distribution as output.
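The attention-window shift described above maps naturally onto the affine grid-sampling operations commonly used for spatial transformer networks. The helper below is an illustrative stand-in for the STN step, assuming image coordinates normalized to [-1, 1] as torch's grid_sample expects; the exact sign and normalization conventions of the authors' implementation may differ.

```python
import torch
import torch.nn.functional as F

def attention_window(image, x, y, theta):
    # Resample `image` (B, C, H, W) in a frame translated by (x, y) and rotated by theta,
    # a stand-in for x_tilde = STN(x, T(T_enc_j(q_-j, c))). Coordinates are in [-1, 1].
    cos_t, sin_t = torch.cos(theta), torch.sin(theta)
    # 2x3 affine matrix per batch element, mirroring T(x, y, theta) of eq. (13) without the last row.
    affine = torch.stack([
        torch.stack([cos_t, sin_t, x], dim=-1),
        torch.stack([-sin_t, cos_t, y], dim=-1),
    ], dim=-2)                                            # (B, 2, 3)
    grid = F.affine_grid(affine, image.shape, align_corners=False)
    return F.grid_sample(image, grid, align_corners=False)

# Hypothetical CartPole usage: center the pole window at the cart before encoding phi.
# x_pole = attention_window(pole_segmentation, x=r, y=torch.zeros_like(r), theta=torch.zeros_like(r))
```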
For a translational coordinate qj, we have (q^m_j, log q^var_j) = MLP^enc_j(x̃ij). For a rotational coordinate qj, we have (αj, βj, log q^κ_j) = MLP^enc_j(x̃ij), where the mean of the von Mises distribution is computed as (cos q^m_j, sin q^m_j) = (αj, βj)/√(α²_j + β²_j). We then take a sample from the qj distribution.³ Doing this for every generalized coordinate qj, we can get (rτ, cosφτ, sinφτ) from xτ for any τ.⁴ We will use (r0, cosφ0, sinφ0) and (r1, cosφ1, sinφ1).
²Here we expect to encode the angle of the pole from a pole image regardless of where it appears in the image. As the translation invariance of CNNs is shown by Kauderer-Abrams [31] to depend primarily on data augmentation, the encoding of generalized coordinates might not generalize well to unseen trajectories. Also, in general we need both translation invariance and rotation invariance, a property that CNNs do not have.
³We use the reparametrization trick proposed by Davidson et al. [30] to sample from a von Mises distribution.
⁴For a transformation that depends on one or more generalized coordinates, those generalized coordinates must be encoded before the transformation can be applied. In the CartPole example, we need to encode r before applying the transformation that centers the attention window at the cart in order to encode φ. We use the mean of the distribution, i.e., q^m_j or (cos q^m_j, sin q^m_j), for those transformations that depend on qj.
3.3 Velocity estimator To integrate Equation (12), we also need to infer (ṙ0, φ̇0), the initial velocity. We can estimate the initial velocity from the encoded generalized coordinates by finite differences. We use the following simple first-order finite-difference estimator:
\dot{r}_0 = (r_{m1} - r_{m0})/\Delta t,  (14)
\dot{\phi}_0 = \big((\sin\phi_{m1} - \sin\phi_{m0}) \circ \cos\phi_{m0} - (\cos\phi_{m1} - \cos\phi_{m0}) \circ \sin\phi_{m0}\big)/\Delta t,  (15)
where (r_{m0}, cosφ_{m0}, sinφ_{m0}) and (r_{m1}, cosφ_{m1}, sinφ_{m1}) are the means of the generalized coordinates encoded from the images at time t = 0 and t = ∆t, respectively. Jaques et al. [14] proposed to use a neural network to estimate velocity. In our experiments, our simple estimator works better than a neural network estimator.
3.4 Coordinate-aware decoder The decoder provides a distribution P(x|q) = N(x̂, I) as output, given a generalized coordinate q as input, where the mean x̂ is the reconstruction of the image data x. Instead of using a black-box decoder, we propose a coordinate-aware decoder. The coordinate-aware decoder first generates a static image x^c_i of every rigid body i in the system, at a default position and orientation, using an MLP with a constant input, i.e., x^c_i = MLP^dec_i(1). The coordinate-aware decoder then determines x̂i, the image of rigid body i positioned and oriented on the image plane according to the generalized coordinates. The proposed decoder is inspired by the coordinate-consistent decoder of Jaques et al. [14]. However, the decoder of [14] cannot handle a system of multiple rigid bodies with constraints, such as the Acrobot and the CartPole, whereas our coordinate-aware decoder can. As in Jaques et al. [14], to find x̂i we use the inverse transformation matrix T⁻¹((x, y, θ)^dec_i), where T is given by (13) and (x, y, θ)^dec_i = T^dec_i(q, c). In the CartPole example, (q1, q2) = (r, φ), and T^dec_1(r) = (r, 0, 0) and T^dec_2(r, φ) = (r, 0, φ). In the Acrobot example, (q1, q2) = (φ1, φ2), and T^dec_1(φ1) = (0, 0, φ1) and T^dec_2(φ1, φ2) = (l1 sinφ1, l1 cosφ1, φ2). The reconstruction image is then x̂ = (x̂1, ..., x̂n), where x̂i = STN(x^c_i, T⁻¹(T^dec_i(q, c))).
3.5 Loss function The loss L(X) is the sum of three terms:
L(X) = \underbrace{-\mathbb{E}_{q_0 \sim Q}[\log P(x_0|q_0)] + \mathrm{KL}\big(Q(q_0|x_0)\,\|\,P(q_0)\big)}_{\text{VAE loss}} + \underbrace{\sum_{\tau=1}^{T_{pred}} \|\hat{x}_\tau - x_\tau\|_2^2}_{\text{prediction loss}} + \underbrace{\lambda \sum_j \sqrt{\alpha_j^2 + \beta_j^2}}_{\text{vM regularization}}.  (16)
The VAE loss is a variational bound on the marginal log-likelihood of the initial data P(x0). The prediction loss captures inaccurate predictions of the latent Lagrangian dynamics. The vM regularization with weight λ penalizes large norms of the vectors (αj, βj), preventing them from blowing up.
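As a sketch of how the three terms of (16) combine, the snippet below assembles a total loss from precomputed pieces. It is illustrative only: the argument names, the Gaussian pixel reconstruction terms, and the assumption that the posterior/prior pair supports a closed-form KL (which torch does not provide for every von Mises pairing) are placeholder choices, not the authors' implementation.

```python
import torch

def total_loss(x_seq, x_hat_seq, x0_recon, q0_posterior, q0_prior, alpha_beta, lam=1e-2):
    # Eq. (16): VAE loss + multi-step prediction loss + von Mises regularization.
    # x_seq: images x_0..x_T; x_hat_seq: predicted images x_hat_1..x_hat_T;
    # x0_recon: reconstruction of x_0; alpha_beta: (alpha_j, beta_j) pairs from the encoder.
    recon = ((x0_recon - x_seq[0]) ** 2).sum()        # -log P(x_0 | q_0) up to constants
    kl = torch.distributions.kl_divergence(q0_posterior, q0_prior).sum()

    pred = sum(((x_hat - x) ** 2).sum() for x_hat, x in zip(x_hat_seq, x_seq[1:]))

    vm_reg = sum(torch.sqrt(a ** 2 + b ** 2).sum() for a, b in alpha_beta)

    return recon + kl + pred + lam * vm_reg
```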
1. What is the main contribution of the paper regarding Lagrangian and Hamiltonian dynamics?
2. What are the strengths of the proposed method, particularly in its application to control problems?
3. What are the weaknesses of the paper, especially regarding its assumptions and generalizability?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any relevant works that the authors have not acknowledged or discussed in their paper?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper builds on a recent line of work that enforces Lagrangian and Hamiltonian dynamics on neural network-based models of dynamical systems. The authors consider the case where coordinates are embedded in high-dimensional data such as images, show how to learn a Lagrangian for such systems, and then use it to solve control problems. Their empirical results represent a significant contribution. The paper is candidly written, has good ablation studies, and is easy to follow. ### Note: I have updated this review to take author feedback and reviewer discussion into account. See "Additional feedback" for details.
Strengths
- The writing, equations, and figures are clear and coherent.
- The empirical tasks are challenging and relevant to the NeurIPS community.
- The authors tackle a significant problem which is underexplored by the community.
- The method is sound and general; it incorporates strong physics priors but is still flexible enough to do well in three disparate domains (pendulum, cartpole, acrobot).
- The authors do a good job of placing this paper in the broader context of recent work on learning Lagrangians and Hamiltonians from data (with one exception, see next section).
Weaknesses
- The authors do not acknowledge work by Toth et al. 2019 (arxiv.org/abs/1909.13789) which learns dynamics from pixels (using Hamiltonians rather than Lagrangians). Seems extremely relevant. To be clear: I am not one of the authors on that paper; I am making this note because I think it's a potential blind spot of this paper.
- Equation 2 makes a simplifying assumption such that this approach works only for rigid body systems.
- It's unclear how well the coordinate-aware encoder will generalize to real-world systems. Although showing such applications is beyond the scope of this paper, I would like to see some discussion from the authors on how they believe this might take place. To the authors: in what ways do you see this new technique - the ability to learn Lagrangians from pixels - being helpful in applied settings?
- There are several important assumptions embedded in this model. The first is that the coordinates are rotational (there are periodic activation functions on them). The second is the coordinate transformation function learned by the STN, which one can restrict or relax according to the problem in question (lines 194-200). The third is that the Lagrangian is that of a rigid body system (line 65). These assumptions appear to be sufficiently general. However, I'd like to hear the authors discuss these assumptions, and add any that I've missed here (To the authors and other reviewers: are these assumptions sufficiently general?). Explicitly stating all physical priors/assumptions embedded into this model in one place would improve this work.
Title Unsupervised Learning of Lagrangian Dynamics from Images for Prediction and Control Abstract Recent approaches for modelling dynamics of physical systems with neural networks enforce Lagrangian or Hamiltonian structure to improve prediction and generalization. However, when coordinates are embedded in high-dimensional data such as images, these approaches either lose interpretability or can only be applied to one particular example. We introduce a new unsupervised neural network model that learns Lagrangian dynamics from images, with interpretability that benefits prediction and control. The model infers Lagrangian dynamics on generalized coordinates that are simultaneously learned with a coordinate-aware variational autoencoder (VAE). The VAE is designed to account for the geometry of physical systems composed of multiple rigid bodies in the plane. By inferring interpretable Lagrangian dynamics, the model learns physical system properties, such as kinetic and potential energy, which enables long-term prediction of dynamics in the image space and synthesis of energy-based controllers. 1 Introduction Humans can learn to predict the trajectories of mechanical systems, e.g., a basketball or a drone, from high-dimensional visual input, and learn to control the system, e.g., catch a ball or maneuver a drone, after a small number of interactions with those systems. We hypothesize that humans use domainspecific knowledge, e.g., physics laws, to achieve efficient learning. Motivated by this hypothesis, in this work, we propose incorporation of physics priors to learn and control dynamics from image data, aiming to gain interpretability and data efficiency. Specifically, we incorporate Lagrangian dynamics as the physic prior, which enables us to represent a broad class of physical systems. Recently, an increasing number of works [1, 2, 3, 4, 5] have incorporated Lagrangian/Hamiltonian dynamics into learning dynamical systems from coordinate data, to improve prediction and generalization. These approaches, however, require coordinate data, which are not always available in real-world applications. Hamiltonian Neural Network (HNN) [3] provides a single experiment with image observations, which requires a modification in the model. This modification is hard to generalize to systems with multiple rigid bodies. Hamiltonian Generative Network (HGN) [6] learns Hamiltonian dynamics from image sequences. However, the dimension of latent generalized coordinates is 4×4×16 = 256, making interpretation difficult. Moreover, both HNN and HGN focus on prediction and have no design of control. Another class of approaches learn physical models from images, by either learning the map from images to coordinates with supervision on coordinate data [7] or learning the coordinates in an unsupervised way but only with translational coordinates [8, 9]. The unsupervised learning of rotational coordinates such as angles are under-explored in the literature. In this work, we propose an unsupervised neural network model that learns coordinates and Lagrangian dynamics on those coordinates from images of physical systems in motion in the plane. The latent dynamical model enforces Lagrangian dynamics, which benefits long term prediction of the system. As Lagrangian dynamics commonly involve rotational coordinates to describe the changing 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. 
configurations of objects in the system, we propose a coordinate-aware variational autoencoder (VAE) that can infer interpretable rotational and translational coordinates from images without supervision. The interpretable coordinates together with the interpretable Lagrangian dynamics pave the way for introducing energy-based controllers of the learned dynamics. 1.1 Related work Lagrangian/Hamiltonian prior in learning dynamics To improve prediction and generalization of physical system modelling, a class of approaches has incorporated the physics prior of Hamiltonian or Lagrangian dynamics into deep learning. Deep Lagrangian Network [1] and Lagrangian Neural Network [2] learn Lagrangian dynamics from position, velocity and acceleration data. Hamiltonian Neural Networks [3] learn Hamiltonian dynamics from position, velocity and acceleration data. By leveraging ODE integrators, Hamiltonian Graph Networks [10] and Symplectic ODE-Net [4] learn Hamiltonian dynamics from only position and velocity data. All of these works (except one particular experiment in HNN [3]) require direct observation of low dimensional position and velocity data. Hamiltonian Generative Network [6] learns Hamiltonian dynamics from images. Unsupervised learning of dynamics We assume we are given no coordinate data and aim to learn coordinates and dynamics in an unsupervised way. With little position and velocity data, BelbutePeres et al. [11] learn underlying dynamics. However, the authors observe that their model fails to learn meaningful dynamics when there is no supervision on position and velocity data at all. Without supervision, Watter et al. [12] and Levine et al. [13] learn locally linear dynamics and Jaques et al. [14] learns unknown parameters in latent dynamics with a given form. Kossen et al. [15] extracts position and velocity of each object from videos and learns the underlying dynamics. Watters et al. [16] adopts an object-oriented design to gain data efficiency and robustness. Battaglia et al. [17], Sanchez-Gonzalez et al. [18] and Watters et al. [7] learn dynamics with supervision by taking into account the prior of objects and their relations. These object-oriented designs focus little on rotational coordinates. Variational Integrator Network [19] considers rotational coordinates but cannot handle systems with multiple rotational coordinates. Neural visual control Besides learning dynamics for prediction, we would like to learn how control input influences the dynamics and to design control laws based on the learned model. This goal is relevant to neural motion planning and model-based reinforcement learning from images. PlaNet [20] learns latent dynamical models from images and designs control input by fast planning in the latent space. Kalman VAE [21] can potentially learn locally linear dynamics and control from images, although no control result has been shown. Dreamer [22] is a scalable reinforcement learning agent which learns from images using a world model. Ebert et al. [23] propose a self-supervised model-based method for robotic control. We leave the comparison of our energy-based control methods and these model-based control methods in the literature to future work. 1.2 Contribution The main contribution of this work is two-fold. First, we introduce an unsupervised learning framework to learn Lagrangian dynamics from image data for prediction. The Lagrangian prior conserves energy with no control applied; this helps learn more accurate dynamics as compared to a MLP dynamical model. 
Moreover, the coordinate-aware VAE in the proposed learning framework infers interpretable latent rigid body coordinates in the sense that a coordinate encoded from an image of a system in a position with high potential energy has a high learned potential energy, and vice versa. This interpretability enables us to design energy-based controllers to control physical systems to target positions. We implement this work with PyTorch [24] and refactor our code into PyTorch Lightning format [25], which makes our code easy to read and our results easy to reproduce. The code for all experiments is available at https://github.com/DesmondZhong/Lagrangian_caVAE. 2 Preliminary concepts 2.1 Lagrangian dynamics Lagrangian dynamics are a reformulation of Newton’s second law of motion. The configuration of a system in motion at time t is described by generalized coordinates q(t) = (q1(t), q2(t), ..., qm(t)), where m is the number of degrees of freedom (DOF) of the system. For planar rigid body systems with n rigid bodies and k holonomic constraints, the DOF is m = 3n − k. From D’Alembert’s principle, the equations of motion of the system, also known as the Euler-Lagrange equation, are d dt (∂L ∂q̇ ) − ∂L ∂q = Qnc, (1) where the scalar function L(q, q̇) is the Lagrangian, q̇ = dq/dt, and Qnc is a vector of nonconservative generalized forces. The Lagrangian L(q, q̇) is the difference between kinetic energy T (q, q̇) and potential energy V (q). For rigid body systems, the Lagrangian is L(q, q̇) = T (q, q̇)− V (q) = 1 2 q̇TM(q)q̇− V (q), (2) where M(q) is the mass matrix. In this work, we assume that the control inputs are the only nonconservative generalized forces, i.e., Qnc = g(q)u, where g(q) is the input matrix and u is a vector of control inputs such as forces or torques. Substituting Qnc = g(q)u and L(q, q̇) from (2) into (1), we get the equations of motion in the form of m second-order ordinary differential equations (ODE): q̈ = M−1(q) ( − 1 2 dM(q) dt q̇− dV (q) dq + g(q)u ) . (3) 2.2 Control via energy shaping Our goal is to control the system to a reference configuration q?, inferred from a goal image x?, based on the learned dynamics. As we are essentially learning the kinetic and potential energy associated with the system, we can leverage the learned energy for control by energy shaping [26, 27]. If rank(g(q)) = m, we have control over every DOF and the system is fully actuated. For such systems, control to the reference configuration q? can be achieved with the control law u(q, q̇) = β(q)+v(q̇), where β(q) is the potential energy shaping and v(q̇) is the damping injection. The goal of potential energy shaping is to let the system behave as if it is governed by a desired Lagrangian Ld with no non-conservative generalized forces. d dt (∂L ∂q̇ ) − ∂L ∂q = g(q)β(q) ⇐⇒ d dt (∂Ld ∂q̇ ) − ∂Ld ∂q = 0, (4) where the desired Lagrangian has desired potential energy Vd(q): Ld(q, q̇) = T (q, q̇)− Vd(q) = 1 2 q̇TM(q)q̇− Vd(q). (5) The difference between Ld and L is the difference between V and Vd, which explains the name potential energy shaping: β(q) shapes the potential energy V of the original system into a desired potential energy Vd. The potential energy Vd is designed to have a global minimum at q?. By the equivalence (4), we get β(q) = gT (ggT )−1 (∂V ∂q − ∂Vd ∂q ) . (6) With only potential energy shaping, the system dynamics will oscillate around q?.1 The purpose of damping injection v(q̇) is to impose convergence, exponentially in time, to q?. 
The damping injection has the form v(q̇) = −gT (ggT )−1(Kdq̇). (7) For underactuated systems, however, this controller design is not valid since ggT will not be invertible. In general, we also need kinetic energy shaping [27] to achieve a control goal. Remark The design parameters here are Vd and Kd. A quadratic desired potential energy Vd(q) = 1 2 (q− q?)TKp(q− q?), (8) results in a controller design u(q, q̇) = gT (ggT )−1 ( ∂V ∂q −Kp(q− q?)−Kdq̇ ) . (9) This can be interpreted as a proportional-derivative (PD) controller with energy compensation. 1Please see Supplementary Materials for more details. 2.3 Training Neural ODE with constant control The Lagrangian dynamics can be formulated as a set of first-order ODE ṡ = f(s,u), (10) where s is a state vector and unknown vector field f , which is a vector-valued function, can be parameterized with a neural network fψ . We leverage Neural ODE, proposed by Chen et al. [28], to learn the function f that explains the trajectory data of s. The idea is to predict future states from an initial state by integrating the ODE with an ODE solver. As all the operations in the ODE solver are differentiable, fψ can be updated by back-propagating through the ODE solver and approximating the true f . However, Neural ODE cannot be applied to (10) directly since the input dimension and the output dimension of f are not the same. Zhong et al. [4] showed that if the control remains constant for each trajectory in the training data, Neural ODE can be applied to the following augmented ODE:( ṡ u̇ ) = ( fψ(s,u) 0 ) = f̃ψ(s,u). (11) With a learned fψ , we can apply a controller design u = u(s) that is not constant, e.g., an energy-based controller, by integrating the ODE ṡ = f(s,u(s)). 3 Model architecture Let X = ((x0,uc), (x1,uc)), ..., (xTpred ,uc)) be a given sequence of image and control pairs, where xτ , τ = 0, 1, . . . , Tpred, is the image of the trajectory of a rigid-body system under constant control uc at time t = τ∆t. From X we want to learn a state-space model (10) that governs the time evolution of the rigid-body system dynamics. We assume the number of rigid bodies n is known and the segmentation of each object in the image is given. Each image can be written as xτ = (xτ1 , ...,x τ n), where xτi ∈ Rnx contains visual information about the ith rigid body at t = τ∆t and nx is the dimension of the image space. In Section 3.1, we parameterize f(s,u) with a neural network and design the architecture of the neural network such that (10) is constrained to follow Lagrangian dynamics, where the physical properties such as mass and potential energy are learned from data. Since we have no access to state data, we need to infer states s, i.e., generalized coordinates and velocities from image data. Sections 3.2 and 3.4 introduce an inference model (encoder) and a generative model (decoder) pair. Together they make up a variational autoencoder (VAE) [29] to infer the generalized coordinates in an unsupervised way. Section 3.3 introduces a simple estimator of velocity from learned generalized coordinates. The VAE and the state-space model are trained together, as described in Section 3.5. The model architecture is shown in Figure 1. 3.1 Latent Lagrangian dynamics The Lagrangian dynamics (3) yield a second-order ODE. From a model-based perspective, they can be re-written as a first-order ODE (10) by choosing the state as s = (q, q̇). However, from a data-driven perspective, this choice of state is problematic when the generalized coordinates involve angles. 
Consider the pendulum task in Figure 2 as an example where we want to infer the generalized coordinate, i.e., the angle of the pendulum φ, from an image of the pendulum. The map from the image to the angle φ should be bijective. However, if we choose the state as s = (φ, φ̇), the map is not bijective, since φ and φ + 2π map to the same image. If we restrict φ ∈ [−π, π), then the dynamics are not continuous when the pendulum moves around the inverted position. Inspired by Zhong et al. [4], we solve this issue by proposing the state as s = (cosφ, sinφ, φ̇), such that the mapping from the pendulum image to (cosφ, sinφ) is bijective. In general, for a planar rigid-body system with q = (r,φ), where r ∈ RmR are translational generalized coordinates and φ ∈ TmT are rotational generalized coordinates , the proposed state is s = (s1, s2, s3, s4, s5) = (r, cosφ, sinφ, ṙ, φ̇), where cos and sin are applied element-wise to φ. To enforce Lagrangian dynamics in the state-space model, we take the derivative of s with respect to t and substitute in (3) to get ṡ= s4 −s3 ◦ s5 s2 ◦ s5 M−1(s1,s2,s3) ( − 12 dM(s1,s2,,s3) dt ( s4 s5 ) + ( −∂V (s1,s2,s3)∂s1 ∂V (s1,s2,s3) ∂s2 s3− ∂V (s1,s2,s3)∂s3 s2 ) +g(s1,s2,s3)u ) (12) where ◦ is the element-wise product. We use three neural networks, Mψ1(s1, s2, s3), Vψ2(s1, s2, s3), and gψ3(s1, s2, s3), to approximate the mass matrix, the potential energy and the input matrix, respectively. Equation (12) is then a state-space model parameterized by a neural network ṡ = fψ(s,u). It can be trained as stated in Section 2.3 given the initial condition s0 = (r0, cosφ0, sinφ0, ṙ0, φ̇0) and uc. Next, we present the means to infer s0 from the given images. 3.2 Coordinate-aware encoder From a latent variable modelling perspective, an image x of a rigid-body system can be generated by first specifying the values of the generalized coordinates and then assigning values to pixels based on the generalized coordinates with a generative model - the decoder. In order to infer those generalized coordinates from images, we need an inference model - the encoder. We perform variational inference with a coordinate-aware VAE. The coordinate-aware encoder infers a distribution on the generalized coordinates. The Gaussian distribution is the default for modelling latent variables in VAE. This is appropriate for modelling a translational generalized coordinate r since r resides in R1. However, this is not appropriate for modelling a rotational generalized coordinate φ since a Gaussian distribution is not a distribution on S1. If we use a Gaussian distribution to model hyperspherical latent variables, the VAE performs worse than a traditional autoencoder [30]. Thus, to model φ, we use the von Mises (vM) distribution, a family of distributions on S1. Analogous to a Gaussian distribution, a von Mises distribution is characterized by two parameters: µ ∈ R2, ||µ||2 = 1 is the mean, and κ ∈ R≥0 is the concentration around µ. The von Mises distribution reduces to a uniform distribution when κ = 0. In our model, for a rotational generalized coordinate φ, we assume a posterior distribution Q(φ|x) = vM((cosφm, sinφm), φκ) with prior P (φ) = vM(·, 0) = U(S1). For a translational generalized coordinate r, we assume a posterior distribution Q(r|x) = N (rm, rvar) with prior N (0, 1). We denote the joint posterior distribution as Q(q|x) and joint prior distribution as P (q). The encoder is a neural network that takes an image as input and provides the parameters of the distributions as output. 
A black-box neural network encoder would not be able to learn interpretable generalized coordinates for a system in motion described by Lagrangian dynamics. Instead, we propose a coordinate-aware encoder by designing the architecture of the neural network to account for the geometry of the system. This is the key to interpretable encoding of generalized coordinates. Recall that each generalized coordinate qj specifies the position/rotation of a rigid body ij in the system. In principle, the coordinate can be learned from the image segmentation of ij. However, the reference frame of a generalized coordinate might depend on other generalized coordinates and change across images. Take the CartPole example in Figure 2 as motivation. The system has two DOF and natural choices of generalized coordinates are the horizontal position of the cart q1 = r and the angle of the pole q2 = φ. The origin of the reference frame of r is the center of the image, which is the same across all images. The origin of the reference frame of φ, however, is the center of the cart, which is not the same across all the images since the cart can move. In order to learn the angle of the pole, we can either use a translation-invariant architecture such as Convolutional Neural Networks (CNNs) or place the center of the encoding attention window of the pole segmentation image at the center of the cart. The former approach does not work well in extracting generalized coordinates.² Thus, we adopt the latter approach, where we shift our encoding attention window horizontally with direction and magnitude given by the generalized coordinate r, before feeding it into a neural network to learn φ. In this way we exploit the geometry of the system in the encoder. The default attention window is the image grid and corresponds to the default reference frame, where the origin is at the center of the image with horizontal and vertical axes.

The above encoding attention window mechanism for a general system can be formalized by considering the transformation from the default reference frame to the reference frame of each generalized coordinate. The transformation of a point (xd, yd) in the default reference frame to a point (xt, yt) in the target reference frame is captured by the transformation T(x, y, θ), corresponding to translation by (x, y) and rotation by θ, as follows:

$$\begin{pmatrix} x_t \\ y_t \\ 1 \end{pmatrix} = T(x, y, \theta) \begin{pmatrix} x_d \\ y_d \\ 1 \end{pmatrix}, \quad \text{where} \quad T(x, y, \theta) = \begin{pmatrix} \cos\theta & \sin\theta & x \\ -\sin\theta & \cos\theta & y \\ 0 & 0 & 1 \end{pmatrix}. \quad (13)$$

Let T((x, y, θ)^enc_j) be the transformation from the default frame to the reference frame of generalized coordinate qj. This transformation might depend on constant parameters c associated with the shape and size of the rigid bodies and on the generalized coordinates q−j, which denotes the vector of generalized coordinates with qj removed. Let (x, y, θ)^enc_j = T^enc_j(q−j, c). Both q−j and c are learned from images. However, the function T^enc_j is specified by leveraging the geometry of the system. In the CartPole example, (q1, q2) = (r, φ), and T^enc_1 ≡ (0, 0, 0) and T^enc_2(q1) = (q1, 0, 0). In the Acrobot example, (q1, q2) = (φ1, φ2), and T^enc_1 ≡ (0, 0, 0) and T^enc_2(q1, l1) = (l1 sin q1, l1 cos q1, 0). The shift of the attention window can be implemented with a spatial transformer network (STN) [32], which generates a transformed image x̃ij from xij, i.e., x̃ij = STN(xij, T(T^enc_j(q−j, c))). In general, to encode qj, we use a multilayer perceptron (MLP) that takes x̃ij as input and provides the parameters of the qj distribution as output.
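For concreteness, here is a small sketch of how the attention-window shift of Eq. (13) can be realized with a spatial transformer in PyTorch. The helpers below are hypothetical (not the paper's implementation), and they assume the translation (x, y) is already expressed in the normalized [-1, 1] image coordinates used by affine_grid.

```python
# Sketch of the coordinate-aware attention-window shift: x_tilde = STN(x, T(T_enc_j(q_-j, c))).
import torch
import torch.nn.functional as F


def transformation_matrix(x, y, theta):
    """Batched 2x3 affine matrices for translation (x, y) and rotation theta,
    in the normalized [-1, 1] coordinates expected by F.affine_grid."""
    cos, sin = torch.cos(theta), torch.sin(theta)
    row0 = torch.stack([cos, sin, x], dim=-1)
    row1 = torch.stack([-sin, cos, y], dim=-1)
    return torch.stack([row0, row1], dim=-2)      # shape (batch, 2, 3)


def stn(image, affine_2x3):
    """Resample `image` (batch, C, H, W) on the grid defined by the affine transform."""
    grid = F.affine_grid(affine_2x3, image.shape, align_corners=False)
    return F.grid_sample(image, grid, align_corners=False)


# CartPole example: to encode the pole angle, shift the window horizontally by the cart position r
# (r is assumed to already be in normalized image coordinates in this sketch):
# x_pole_centered = stn(x_pole, transformation_matrix(r, torch.zeros_like(r), torch.zeros_like(r)))
```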
For a translational coordinate qj, we have $(q^m_j, \log q^{var}_j) = \mathrm{MLP}^{enc}_j(\tilde{x}_{i_j})$. For a rotational coordinate qj, we have $(\alpha_j, \beta_j, \log q^{\kappa}_j) = \mathrm{MLP}^{enc}_j(\tilde{x}_{i_j})$, where the mean of the von Mises distribution is computed as $(\cos q^m_j, \sin q^m_j) = (\alpha_j, \beta_j)/\sqrt{\alpha_j^2 + \beta_j^2}$. We then take a sample from the qj distribution.³ Doing this for every generalized coordinate qj, we can get (rτ, cosφτ, sinφτ) from xτ for any τ.⁴ We will use (r0, cosφ0, sinφ0) and (r1, cosφ1, sinφ1).

3.3 Velocity estimator

To integrate Equation (12), we also need to infer (ṙ0, φ̇0), the initial velocity. We can estimate the initial velocity from the encoded generalized coordinates by finite differences. We use the following simple first-order finite-difference estimator:

$$\dot{r}^0 = (r^{m1} - r^{m0})/\Delta t, \quad (14)$$
$$\dot{\phi}^0 = \left( (\sin\phi^{m1} - \sin\phi^{m0}) \circ \cos\phi^{m0} - (\cos\phi^{m1} - \cos\phi^{m0}) \circ \sin\phi^{m0} \right)/\Delta t, \quad (15)$$

where (r^{m0}, cosφ^{m0}, sinφ^{m0}) and (r^{m1}, cosφ^{m1}, sinφ^{m1}) are the means of the generalized coordinates encoded from the images at time t = 0 and t = ∆t, respectively. Jaques et al. [14] proposed to use a neural network to estimate velocity. In our experiments, our simple estimator works better than a neural network estimator.

3.4 Coordinate-aware decoder

The decoder provides a distribution P(x|q) = N(x̂, I) as output, given a generalized coordinate q as input, where the mean x̂ is the reconstruction of the image data x. Instead of using a black-box decoder, we propose a coordinate-aware decoder. The coordinate-aware decoder first generates a static image x^c_i of every rigid body i in the system, at a default position and orientation, using an MLP with a constant input, i.e., $x^c_i = \mathrm{MLP}^{dec}_i(1)$. The coordinate-aware decoder then determines x̂i, the image of rigid body i positioned and oriented on the image plane according to the generalized coordinates. The proposed decoder is inspired by the coordinate-consistent decoder by Jaques et al. [14]. However, the decoder of [14] cannot handle a system of multiple rigid bodies with constraints, such as the Acrobot and the CartPole, whereas our coordinate-aware decoder can. As in Jaques et al. [14], to find x̂i we use the inverse transformation matrix T^{-1}((x, y, θ)^dec_i), where T is given by (13) and (x, y, θ)^dec_i = T^dec_i(q, c). In the CartPole example, (q1, q2) = (r, φ), and T^dec_1(r) = (r, 0, 0) and T^dec_2(r, φ) = (r, 0, φ). In the Acrobot example, (q1, q2) = (φ1, φ2), and T^dec_1(φ1) = (0, 0, φ1) and T^dec_2(φ1, φ2) = (l1 sinφ1, l1 cosφ1, φ2). The reconstruction image is then x̂ = (x̂1, ..., x̂n), where x̂i = STN(x^c_i, T^{-1}(T^dec_i(q, c))).

3.5 Loss function

The loss L(X) consists of the sum of three terms:

$$L(X) = \underbrace{-\mathbb{E}_{q^0 \sim Q}[\log P(x^0 \mid q^0)] + \mathrm{KL}\big(Q(q^0 \mid x^0)\,\|\,P(q^0)\big)}_{\text{VAE loss}} + \underbrace{\sum_{\tau=1}^{T_{pred}} \|\hat{x}^\tau - x^\tau\|_2^2}_{\text{prediction loss}} + \underbrace{\lambda \sum_j \sqrt{\alpha_j^2 + \beta_j^2}}_{\text{vM regularization}}. \quad (16)$$

The VAE loss is a variational bound on the marginal log-likelihood of the initial data P(x0). The prediction loss captures inaccurate predictions of the latent Lagrangian dynamics. The vM regularization with weight λ penalizes large norms of the vectors (αj, βj), preventing them from blowing up.

Footnote 2: Here we expect to encode the angle of the pole from a pole image regardless of where it appears in the image. As the translation invariance of CNNs is shown by Kauderer-Abrams [31] to be primarily dependent on data augmentation, the encoding of generalized coordinates might not generalize well to unseen trajectories. Also, in general we need both translation invariance and rotation invariance, a property that CNNs do not have.
Footnote 3: We use the reparametrization trick proposed by Davidson et al. [30] to sample from a von Mises distribution.
Footnote 4: For a transformation that depends on one or more generalized coordinates, those generalized coordinates must be encoded before the transformation can be applied. In the CartPole example, we need to encode r before applying the transformation that centers the attention window at the cart in order to encode φ. We use the mean of the distribution, i.e., $q^m_j$ or $(\cos q^m_j, \sin q^m_j)$, for those transformations that depend on qj.
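The finite-difference velocity estimator of Eqs. (14)-(15) above is simple enough to transcribe directly; the tensor names below are placeholders for the encoded means at t = 0 and t = ∆t, with element-wise operations over the rotational coordinates.

```python
# Direct sketch of the velocity estimator in Eqs. (14)-(15); tensor names are placeholders.
import torch


def estimate_initial_velocity(r_m0, r_m1, cos_phi_m0, sin_phi_m0, cos_phi_m1, sin_phi_m1, dt):
    r_dot0 = (r_m1 - r_m0) / dt                                        # Eq. (14)
    phi_dot0 = ((sin_phi_m1 - sin_phi_m0) * cos_phi_m0
                - (cos_phi_m1 - cos_phi_m0) * sin_phi_m0) / dt          # Eq. (15)
    return r_dot0, phi_dot0
```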
4 Results

We train our model on three systems: the Pendulum, the fully-actuated CartPole and the fully-actuated Acrobot. The training images are generated by the OpenAI Gym simulator [33]. The training setup is detailed in Supplementary Materials. As the mean square error in the image space is not a good metric of long term prediction accuracy [8], we report on the prediction image sequences of a previously unseen initial condition and highlight the interpretability of our model.

Lagrangian dynamics and coordinate-aware VAE improve prediction. As the Acrobot is a chaotic system, accurate long term prediction is impossible. Figure 3 shows the prediction sequences of images up to 48 time steps of the Pendulum and CartPole experiments with models trained with Tpred = 4. We compare the prediction results of our model (labelled as Lagrangian+caVAE) with two model variants: MLPdyn+caVAE, which replaces the Lagrangian latent dynamics with MLP latent dynamics, and Lagrangian+VAE, which replaces the coordinate-aware VAE with a traditional VAE. The traditional VAE fails to reconstruct meaningful images for CartPole, although it works well in the simpler Pendulum system. With well-learned coordinates, models that enforce Lagrangian dynamics result in better long term prediction, e.g., as compared to MLPdyn+caVAE, since Lagrangian dynamics with zero control preserves energy (see Supplementary Materials).

Learned potential energy enables energy-based control. Figure 4 shows the learned potential energy of the three systems and reconstruction images at selected coordinates with Lagrangian+caVAE. The learned potential energy is consistent with the true potential energy of those systems, e.g., the pendulum at the upward position has the highest potential energy while the pendulum at the downward position has the lowest potential energy. Figure 4 also visualizes the learned coordinates. Learning interpretable coordinates and potential energy enables energy-based controllers. Based on the learned encoding and dynamics, we are able to control the Pendulum and the fully-actuated Acrobot to the inverted position, and the fully-actuated CartPole to a position where the pole points upward. The sequences of images of controlled trajectories shown in Figure 3 are generated based on learned dynamics and encoding with Lagrangian+caVAE as follows. We first encode an image of the goal position x⋆ to the goal generalized coordinates q⋆. At each time step, the OpenAI Gym simulator of a system can take a control input, integrate one time step forward, and output an image of the system at the next time step. The control input to the simulator is u(q, q̇) = β(q) + v(q̇), which is designed as in Section 2.2 with the learned potential energy, input matrix, coordinates encoded from the output images, and q⋆.

Baselines. We set up two baseline models: HGN [6] and PixelHNN [3]. Neither model considers control input, so we only use the trajectories with zero control in our dataset to train the models.
Because of this, it is not fair to compare the pixel MSE of HGN in Table 1 with those of other models. We implemented HGN based on the architecture described in the paper and used the official code for PixelHNN. From Figure 3, we can see that HGN makes good predictions for the pendulum up to the training sequence length (τ = 4), but makes blurry long term predictions. HGN fails to generalize to the test dataset for the CartPole task. This is probably because HGN is not data efficient. In the original HGN experiment [6], 30 × 50K = 1.5M training images are used in the pendulum task, while here we use 20× 256 = 5120 training images (with zero control). Moreover, the dimension of latent coordinate q is 4 × 4 × 16 = 256. With such a fixed high dimension (for various tasks), HGN does not need to assume degrees of freedom. However, it might not be easy to interpret the learned q, whereas the learned coordinates in our model are interpretable. PixelHNN does not use an integrator and requires a special term in the loss function. PixelHNN does not account for the rotational nature of coordinate q, so the reconstruction images around the unstable equilibrium point are blurry and the learned coordinates are not easy to interpret (see Supplementary Material). In the original implementation of PixelHNN [3], the angle of the pendulum is constrained to be from −π/6 to π/6, where a linear approximation of the nonlinear dynamics is learned, which makes the learned coordinates easy to interpret. This constraint does not hold for more challenging control tasks. Ablation study To understand which component in our model contributes to learning interpretable generalized coordinates the most, we also report results of four ablations, which are obtained by (a) replacing the coordinate-aware encoder with a black-box MLP, (b) replacing the coordinate-aware decoder with a black-box MLP, (c) replacing the coordinate-aware VAE with a coordinate-aware AE, and (d) a Physics-as-inverse-graphics (PAIG) model [14]. We observe that the coordinate-aware decoder makes the primary contribution to learning interpretable coordinates, and the coordinateaware encoder makes a secondary contribution. The coordinate-aware AE succeeds in Pendulum and Acrobot tasks but fails in the CartPole task. PAIG uses AE with a neural network velocity estimator. We find that PAIG’s velocity estimator overfits the training data, which results in inaccurate long term prediction. Please see Supplementary Materials for prediction sequences of the ablation study. 5 Conclusion We propose an unsupervised model that learns planar rigid-body dynamics from images in an explainable and transparent way by incorporating the physics prior of Lagrangian dynamics and a coordinate-aware VAE, both of which we show are important for accurate prediction in the image space. The interpretability of the model allows for synthesis of model-based controllers. Broader Impact We focus on the impact of using our model to provide explanations for physical system modelling. Our model could be used to provide explanations regarding the underlying symmetries, i.e., conservation laws, of physical systems. Further, the incorporation of the physics prior of Lagrangian dynamics improves robustness and generalizability for both prediction and control applications. We see opportunities for research applying our model to improve transparency and explanability in reinforcement learning, which is typically solved with low-dimensional observation data instead of image data. 
Our work also enables future research on vision-based controllers. The limitations of our work will also motivate research on unsupervised segmentation of images of physical systems. Acknowledgments and Disclosure of Funding This research has been supported in part by ONR grant #N00014-18-1-2873 and by the School of Engineering and Applied Science at Princeton University through the generosity of William Addy ’82. Yaofeng Desmond Zhong would like to thank Christine Allen-Blanchette, Shinkyu Park, Sushant Veer and Anirudha Majumdar for helpful discussions. We also thank the anonymous reviewers for providing detailed and helpful feedback.
1. What is the main contribution of the paper regarding motion prediction and control? 2. What are the strengths of the proposed model, particularly in its technical aspects and ability to encode important system properties? 3. What are the weaknesses of the paper regarding its comparisons with other works and experimental details? 4. How can the authors improve their evaluation methods and provide more convincing results?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This work incorporates Lagrangian dynamics as a physical prior into neural networks for motion prediction and control. Variational autoencoders are used to infer rotational and translational coordinates from images. Strengths 1. The proposed model is technically sound. Their framework mainly consists of a VAE to extract coordinates and a learned neural ODE that constrains the states to follow Lagrangian dynamics. This model can operate on raw images, compared with other works that require low-dimensional states. 2. Their model can encode important system properties such as potential energy, which further allows energy-based control for the Pendulum and Acrobot. This point is very interesting and useful. 3. Their model enables long-term prediction up to 48 time steps. Weaknesses 1. In the experiments, they only compare with baselines they created, but there is a lack of comparison with either an existing Lagrangian dynamics network or a VAE-based prediction network. For example, Kalman VAE combines Kalman filtering with a VAE for prediction and control, which performs well in pendulum control and trajectory prediction from images. Its code is open-sourced. I would suggest comparing your method with it for a more convincing evaluation. 2. Some important experimental details are missing, such as the number of training/validation/testing samples, batch size, learning rate, and hidden states inside the framework. Also, the authors did not report quantitative results. It is hard to be convinced with only two testing trajectories provided for prediction and one testing trajectory for control.
NIPS
Title Sparse Training via Boosting Pruning Plasticity with Neuroregeneration Abstract Works on lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) have raised a lot of attention currently on post-training pruning (iterative magnitude pruning), and before-training pruning (pruning at initialization). The former method suffers from an extremely large computation cost and the latter usually struggles with insufficient performance. In comparison, during-training pruning, a class of pruning methods that simultaneously enjoys the training/inference efficiency and the comparable performance, temporarily, has been less explored. To better understand during-training pruning, we quantitatively study the effect of pruning throughout training from the perspective of pruning plasticity (the ability of the pruned networks to recover the original performance). Pruning plasticity can help explain several other empirical observations about neural network pruning in literature. We further find that pruning plasticity can be substantially improved by injecting a brain-inspired mechanism called neuroregeneration, i.e., to regenerate the same number of connections as pruned. We design a novel gradual magnitude pruning (GMP) method, named gradual pruning with zerocost neuroregeneration (GraNet), that advances state of the art. Perhaps most impressively, its sparse-to-sparse version for the first time boosts the sparse-tosparse training performance over various dense-to-sparse methods with ResNet50 on ImageNet without extending the training time. We release all codes in https://github.com/Shiweiliuiiiiiii/GraNet. 1 Introduction Neural network pruning is the most common technique to reduce the parameter count, storage requirements, and computational costs of modern neural network architectures. Recently, posttraining pruning [49, 29, 18, 47, 10, 54, 74, 5, 57, 75] and before-training pruning [31, 30, 67, 63, 6, 11] have been two fast-rising fields, boosted by lottery tickets hypothesis (LTH) [10] and singleshot network pruning (SNIP) [31]. The process of post-training pruning typically involves fully pre-training a dense network as well as many cycles of retraining (either fine-tuning [18, 17, 39] or rewinding [12, 54]). As the training costs of the state-of-the-art models, e.g., GPT-3 [4] and FixEfficientNet-L2 [64] have exploded, this process can lead to a large amount of overhead cost. Recently emerged methods for pruning at initialization significantly reduce the training cost by identifying a trainable sub-network before the main training process. While promising, the existing methods fail to match the performance achieved by the magnitude pruning after training [11]. ∗Partial of this work have been done when Shiwei Liu worked as an intern at JD Explore Academy. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Compared with the above-mentioned two classes of pruning, during-training pruning is a class of methods that reap the acceleration benefits of sparsity early on the training and meanwhile achieve promising performance by consulting the information obtained during training. There are some works [77, 13, 33] attempting to gradually prune the network to the desired sparsity during training, while they mainly focus on the performance improvement. Up to now, the understanding of duringtraining pruning has been less explored due to its more complicated dynamical process, and the performance gap still exists between pruning during training and full dense training. 
To better understand the effect of pruning during the optimization process (not at inference), we study the ability of the pruned models to recover the original performance after a short continued training with the current learning rate, which we call pruning plasticity (see Section 3.1 for a more formal definition). Inspired by the neuroregeneration mechanism in the nervous system where new neurons and connections are synthesized to recover the damage in the nervous system [26, 41, 73], we examine if allowing the pruned network to regenerate new connections can improve pruning plasticity, and hence contribute to pruning during training. We consequently propose a parameter-efficient method to regenerate new connections during the gradual pruning process. Different from the existing works for pruning understanding which mainly focus on dense-to-sparse training [42] (training a dense model and prune it to the target sparsity), we also consider sparse-to-sparse training (training a sparse model yet adaptively re-creating the sparsity pattern) which recently has received an upsurge of interest in machine learning [44, 3, 9, 48, 8, 37, 36]. In short, we have the following main findings during the course of the study: #1. Both pruning rate and learning rate matter for pruning plasticity. When pruned with low pruning rates (e.g., 0.2), both dense-to-sparse training and sparse-to-sparse training can easily recover from pruning. On the contrary, if too many parameters are removed at one time, almost all models suffer from accuracy drops. This finding makes a connection to the success of the iterative magnitude pruning [10, 54, 5, 6, 65], where usually a pruning process with a small pruning rate (e.g., 0.2) needs to be iteratively repeated for good performance. Pruning plasticity also gradually decreases as the learning rate drops. When pruning happens during the training phase with large learning rates, models can easily recover from pruning (up to a certain level). However, pruning plasticity drops significantly after the second learning rate decay, leading to a situation where the pruned networks can not recover with continued training. This finding helps to explain several observations (1) for gradual magnitude pruning (GMP), it is always optimal to end pruning before the second learning rate drop [77, 13]; (2) dynamic sparse training (DST) benefits from a monotonically decreasing pruning rate with cosine or linear update schedule [8, 9]; (3) rewinding techniques [12, 54] outperform fine-tuning as rewinding retrains subnetworks with the original learning rate schedule whereas fine-tuning often retrains with the smallest learning rate. #2. Neuroregeneration improves pruning plasticity. Neuroregeneration [41, 73] refers to the regrowth or repair of nervous tissues, cells, or cell products. Conceptually, it involves synthesizing new neurons, glia, axons, myelin, or synapses, providing extra resources in the long term to replace those damaged by the injury, and achieving a lasting functional recovery. Such mechanism is closely related to the brain plasticity [51], and we borrow this concept to developing a computational regime. We show that, while regenerating the same number of connections as pruned, the pruning plasticity is observed to improve remarkably, indicating a more neuroplastic model being developed. However, it increases memory and computational overheads and seems to contradict the benefits of pruningduring-training. 
This however raises the question: can we achieve efficient neuroregeneration during training with no extra costs? We provide an affirmative answer to this question. #3. Pruning plasticity with neuroregeneration can be leveraged to substantially boost sparse training performance. The above-mentioned findings of pruning plasticity can generalize to the final performance level under a full continued training to the end. Imitating the neuroregeneration behavior [41, 73], we propose a new sparse training method – gradual pruning with zero-cost neuroregeneration (GraNet), which is capable of performing regeneration without increasing the parameter count. In experiments, GraNet establishes the new state-of-the-art performance bar for dense-to-sparse training and sparse-to-sparse training, respectively. Particularly, the latter for the first time boosts the sparse-to-sparse training performance over various dense-to-sparse methods by a large margin without extending the training time, with ResNet-50 on ImageNet. Besides the consistent performance improvement, we find the subnetworks that GraNet learns are more accurate than the ones learned by the existing gradual pruning method, providing explanations for the success of GraNet. 2 Related Work Post-Training Pruning. Methods that yield a sparse neural network from a pre-trained network by pruning the unimportant weights or neurons, to the best of our knowledge, were proposed in [24] and [50]. After that, various pruning methods have emerged to provide increasingly efficient methods to identify sparse neural networks for inference. The pruning criterion includes weight magnitude [18, 10], gradient [61] Hessian [29, 19, 59], Taylor expansion [47, 46], etc. Low-rank decomposition [7, 23, 17, 71] are also used to induce structured sparsity in terms of channels or filters. Most of the above-mentioned pruning methods require many pruning and re-training cycles to achieve the desired performance. During-Training Pruning. Instead of inheriting weights from a pre-trained model, some works attempt to discover well-performing sparse neural networks with one single training process. Gradual Magnitude Pruning (GMP), introduced in [77] and studied further in [13], gradually sparsifies the neural network during the training process until the desired sparsity is reached. Besides, [40] and [68] are prior works that enforce the network to sparse during training via L0 and L1 regularization, respectively. [60, 34, 55, 70, 28] moved further by introducing trainable sparsity heuristics to learn the sparse masks and weights simultaneously. These methods are all classified as dense-to-sparse training as they start from a dense network. Dynamic Sparse Training (DST) [44, 3, 48, 8, 9, 36, 35, 25] is another class of methods that prune models during training. The key factor of DST is that it starts from a random initialized sparse network and optimizes the sparse topology as well as the weights simultaneously during training (sparse-to-sparse training). Without an extended training time [37], sparse-to-sparse training usually falls short of dense-to-sparse training in terms of the prediction accuracy. For further details, see the survey of [43, 21]. Before-Training Pruning. Motivated by SNIP [31], many works [67, 63, 6] have emerged recently to explore the possibility of obtaining a trainable sparse neural network before the main training process. 
[11] demonstrates that the existing methods for pruning at initialization perform equally well when the unpruned weights are randomly shuffled, which reveals that what these methods discover is the layer-wise sparsity ratio, rather than the indispensable weight values and positions. Our analysis shows that both the mask positions and the weight values are crucial for GraNet.

3 Methodology for Pruning Plasticity

The primary goal of this paper is to study the effect of pruning as well as neuroregeneration on neural networks during the standard training process. Therefore, we do not consider post-training pruning and before-training pruning. Below, we introduce in detail the definition of pruning plasticity and the experimental design that we used to study pruning plasticity.

3.1 Metrics

Let us denote $W_t \in \mathbb{R}^d$ as the weights of the network and $m_t \in \{0, 1\}^d$ as the binary mask yielded by the pruning method at epoch t. Thus, the pruned network can be denoted as $W_t \odot m_t$. Let T be the total number of epochs the model should be trained. Let $\mathrm{CONTRAIN}_k(W_t \odot m_t, a)$ refer to the function that continues to train the pruned model for k epochs with the learning rate schedule a.

Definition of pruning plasticity. We define pruning plasticity as $t_{\mathrm{CONTRAIN}_k(W_t \odot m_t,\, a_t)} - t_{\mathrm{PRE}}$, where $t_{\mathrm{PRE}}$ is the test accuracy measured before pruning and $t_{\mathrm{CONTRAIN}_k(W_t \odot m_t,\, a_t)}$ is the test accuracy measured after k epochs of continued training $\mathrm{CONTRAIN}_k(W_t \odot m_t, a_t)$. Specifically, to better understand the effect of pruning on the current model status and to avoid the effect of learning rate decay, we fix the learning rate as the one used when the model is pruned, i.e., $a_t$. This setting is also appealing for GMP [77, 13] and DST [44, 9, 48, 37], in which most of the pruned models are continually trained with the current learning rate for some time.

Final performance gap. Nevertheless, we also investigate the effect of pruning on the final performance, that is, continually training the pruned networks to the end with the remaining learning rate schedule $\mathrm{CONTRAIN}_{T-t}(W_t \odot m_t, a_{[t+1:T]})$. In this case, we report $t_{\mathrm{CONTRAIN}_{T-t}(W_t \odot m_t,\, a_{[t+1:T]})} - t_{\mathrm{FINAL}}$, where $t_{\mathrm{FINAL}}$ is the final test accuracy of the unpruned models.

3.2 Architectures and Datasets

We choose two commonly used architectures to study pruning plasticity, VGG-19 [58] with batch normalization on CIFAR-10 [27], and ResNet-20 [20] on CIFAR-10. We share the summary of the networks, data, and hyperparameters of dense-to-sparse training in Table 1. We use standard implementations and hyperparameters available online, with the exception of the small batch size for ResNet-50 on ImageNet due to the limited hardware resources (2× Tesla V100). All accuracies are in line with the baselines reported in the references [8, 11, 67, 9, 37].

3.3 How to Prune, and How to Regenerate

Structured and Unstructured Pruning. We consider unstructured and structured pruning in this paper. Structured pruning prunes weights in groups, or removes entire neurons, convolutional filters, or channels, enabling acceleration with off-the-shelf hardware. In particular, we choose the filter pruning method used in Li et al. [32]. Unstructured sparsity is a more promising direction not only due to its outstanding performance at extreme sparsities but also due to the increasing support for sparse operations in practical hardware [35, 14, 52, 76, 22]. For example, Liu et al. [35] illustrated for the first time the true potential of DST, demonstrating significant training/inference efficiency improvement over dense training.
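As a schematic illustration of the metric defined in Section 3.1 above, the following sketch measures pruning plasticity for one pruning event. The helpers train_epochs, evaluate, and apply_mask are hypothetical stand-ins for a full training pipeline, and the learning rate is held fixed at the value used when the model was pruned.

```python
# Schematic sketch of the pruning-plasticity measurement; helper functions are assumptions.
import copy


def pruning_plasticity(model, apply_mask, train_epochs, evaluate, lr_at_pruning, k=30):
    """Accuracy after k epochs of continued training minus accuracy right before pruning."""
    acc_pre = evaluate(model)                          # t_PRE
    pruned = copy.deepcopy(model)
    apply_mask(pruned)                                 # W_t ⊙ m_t: pruned weights are set to zero
    train_epochs(pruned, epochs=k, lr=lr_at_pruning)   # CONTRAIN_k with the learning rate fixed at a_t
    return evaluate(pruned) - acc_pre                  # t_CONTRAIN_k - t_PRE
```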
Different from prior conventions [77, 13, 33, 2] where values of the pruned weights are kept, we set the pruned weights to zero to eliminate the historical information for all implementations in this paper. Magnitude pruning. We prune the weights with the smallest magnitude, as it has evolved as the standard method when pruning happens during training, e.g., GMP [77, 13] and DST [44, 9, 37]. We are also aware of other pruning criteria including but not limited to Hessian [29, 19, 59], Taylor expansion [47, 46], connection sensitivity [31], Gradient Flow [67], Neural Tangent Kernel [38, 16]. One-shot pruning. To isolate the pruning effect at different training stages and to avoid the interaction between two iterations of pruning, we focus on one-shot pruning. Please note that iterative pruning can also be generalized in our setting, as our experimental design includes neural networks trained at various sparsities and each of them is further pruned with various pruning rates. Layer-wise pruning and global pruning. We study both the layer-wise magnitude pruning and global magnitude pruning for pruning plasticity. Global magnitude pruning prunes different layers together and leads to non-uniform sparsity distributions; layer-wise pruning operates layer by layer, resulting in uniform distributions. Gradient-based regeneration. The simplest regeneration scheme is to randomly activate new connections [3, 44]. However, it would take a lot of time for random regeneration to discover the important connections, especially for the very extreme sparsities. Alternatively, gradients, including those for the connections with zero weights, provide good indicators for the connection importance. For this reason, we focus on gradient-based regeneration proposed in Rigged Lottery ( RigL) [9], i.e., regenerating the same number of connections as pruned with the largest gradient magnitude. 3.4 Experimental Results We study pruning plasticity during training with/without regeneration, for both dense training and sparse training. We report the results of ResNet-20 on CIFAR-10 with unstructured global pruning in the main body of the paper. The rest of the experiments are given in Appendix A. Unless otherwise stated, results are qualitatively similar across all networks. Concretely, we first pre-train networks at four sparsity levels, including 0, 0.5, 0.9, and 0.98. The sparse neural networks are trained with uniform distribution (i.e., all layers have the same sparsity). We further choose four pruning rates, e.g., 0.2, 0.5, 0.9, and 0.98, to measure the corresponding pruning plasticity of the pre-trained networks. Pruning plasticity. We continue to train the pruned model for 30 epochs and report pruning plasticity in Figure 2. Overall, the learning rate schedule, the pruning rate, and the sparsity of the original models all have a big impact on pruning plasticity. Pruning plasticity decreases as the learning rate decays for all models with different sparsity levels. The models trained with a large learning rate 0.1 can easily recover, or exceed the original performance except for the extremely large pruning rate 0.98. However, the models obtained during the later training phases can recover only with the mild pruning rate choices, e.g., 0.2 (orange lines) and 0.5 (green lines). We next demonstrate the effect of connection regeneration on pruning plasticity in the bottom row of Figure 2. 
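The global magnitude pruning used in these experiments can be sketched as below: all layers are ranked together by weight magnitude, which typically produces a non-uniform layer-wise sparsity distribution (layer-wise pruning would instead apply the same routine separately per layer). The helper is an assumption for illustration, not the paper's code.

```python
# Sketch of one-shot global magnitude pruning over all layers at once.
import torch


def global_magnitude_masks(named_weights, sparsity):
    """named_weights: dict of layer name -> weight tensor; returns dict of layer name -> binary mask."""
    scores = torch.cat([w.abs().flatten() for w in named_weights.values()])
    k = int(sparsity * scores.numel())
    # the k-th smallest magnitude across all layers serves as the global pruning threshold
    threshold = torch.kthvalue(scores, k).values if k > 0 else scores.min() - 1.0
    return {name: (w.abs() > threshold).float() for name, w in named_weights.items()}
```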
It is clear that connection regeneration significantly improves pruning plasticity in all cases, especially for the models that are over-pruned (purple lines). Still, even with connection regeneration, pruning plasticity suffers from performance degradation when pruning occurs after the learning rate drops.

Final performance gap. Compared with the current model status, people might be more interested in the effect of pruning on the final performance. We further measure the performance gap between the original test accuracy of the unpruned models and the final test accuracy of the pruned model under a full continued training $\mathrm{CONTRAIN}_{T-t}(W_t \odot m_t, a_{[t+1:T]})$ in Figure 3. We observe that, in this case, large learning rates do not enjoy a large performance improvement, but still, the performance gap increases as the learning rate drops. It is reasonable to conjecture that the accuracy improvement of pruning plasticity with the large learning rate, 0.1, is due to the unconverged performance during the early phase of training. Besides, it is surprising to find that the final performance of extremely sparse networks (e.g., the third column and the fourth column) significantly benefits from mild pruning. Again, the ability of the pruned model to recover from pruning remarkably improves after regenerating the connections back.

4 Gradual Pruning with Zero-Cost Neuroregeneration

So far, we have seen that regenerating important connections in the pruned models during training substantially improves pruning plasticity as well as the final performance. However, naively regenerating extra connections increases the parameter count and conflicts with the motivation of gradual pruning. Inspired by the mechanism of neuroregeneration in the nervous system, we propose a novel sparse training method which we call gradual pruning with zero-cost neuroregeneration (GraNet). GraNet consults the information produced throughout training and regenerates important connections during training in a parameter-efficient fashion. See Appendix B.1 for the pseudocode of GraNet. We introduce the main components of GraNet below.

4.1 Gradual Pruning

We follow the gradual pruning scheme used in [77] and gradually sparsify the dense network to the target sparsity level over n pruning iterations. Let $s_i$ be the initial sparsity, $s_f$ the target sparsity, $t_0$ the starting epoch of gradual pruning, $t_f$ the end epoch of gradual pruning, and $\Delta t$ the pruning frequency. The sparsity at each pruning iteration is:

$$s_t = s_f + (s_i - s_f)\left(1 - \frac{t - t_0}{n \Delta t}\right)^3, \quad t \in \{t_0, t_0 + \Delta t, \dots, t_0 + n\Delta t\}. \quad (1)$$

We choose global pruning for our method as it generally achieves better performance than uniform pruning. We also report the performance of the uniform sparsity as used in [13] in Appendix C.3. The conventional gradual pruning methods [77, 13] change the mask (not the weight values) to fulfill the pruning operation, so that the pruned connections have the possibility to be reactivated in the later training phases. Despite this, since the weights of the pruned connections are not updated, they have a small chance to receive sufficient updates to exceed the pruning threshold. This hinders the regeneration of the important connections.

4.2 Zero-Cost Neuroregeneration

The main difference between GraNet and the conventional GMP methods [77, 13] is the Zero-Cost Neuroregeneration, detailed next.
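Before turning to the regeneration step, a small sketch of the cubic schedule in Eq. (1) may help; it simply returns the target sparsity at a given pruning step, and the step indexing is an assumption of this illustration.

```python
# Sketch of the cubic gradual-pruning schedule of Eq. (1).
def sparsity_at(t, s_i, s_f, t_0, delta_t, n):
    """Target sparsity at pruning step t in {t_0, t_0 + delta_t, ..., t_0 + n * delta_t}."""
    frac = (t - t_0) / (n * delta_t)
    return s_f + (s_i - s_f) * (1.0 - frac) ** 3
```

With s_i = 0 this gives the dense-to-sparse schedule; with s_i = 0.5 it gives the sparse-to-sparse variant used by GraNet.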
Imitating the neuroregeneration of the peripheral nervous system [41, 73], where new neurons and connections are synthesized to replace the damaged ones, we first detect and eliminate the "damaged" connections, and then regenerate the same number of new connections. By doing this, we can achieve connection regeneration without increasing the number of connections. Concretely, we identify the "damaged" connections as the ones with the smallest weight magnitudes. A small magnitude indicates that either the weight's gradient is small or the gradient direction oscillates frequently. Therefore, these weights have a small contribution to the training loss and can be removed. Again, we use the gradient as the importance score for regeneration, the same as the regrowth method used in RigL [9].

Why do we call it "Zero-Cost Neuroregeneration"? In addition to not increasing the connection (parameter) count, the backward pass of our method is sparse most of the time even though our regeneration utilizes the dense gradient to identify the important connections. We perform neuroregeneration immediately after each gradual pruning step, meaning that the regeneration occurs only once every several thousand iterations. The extra overhead to calculate the dense gradient can be amortized over the whole training cost. Compared with the methods [33, 69] that require updating all the weights in the backward pass, our method is much more training efficient, as around 2/3 of the training FLOPs are due to the backward pass [9, 72].

Let us denote r as the ratio of the number of regenerated connections to the total number of connections, and W as the network weights. We first remove the r proportion of "damaged" weights with the smallest magnitude by:

$$W' = \mathrm{TopK}(|W|, 1 - r). \quad (2)$$

Here $\mathrm{TopK}(v, k)$ returns the weight tensor retaining the top k-proportion of elements from v. Immediately after that, we regenerate the r proportion of new connections based on the gradient magnitude:

$$W = W' + \mathrm{TopK}(|g_{i \notin W'}|, r), \quad (3)$$

where $|g_{i \notin W'}|$ denotes the gradient magnitudes of the zero weights. We perform Zero-Cost Neuroregeneration layer by layer from the beginning of the training to the end. GraNet can naturally generalize to the dense-to-sparse training scenario and the sparse-to-sparse training scenario by setting the initial sparsity level $s_i = 0$ and $s_i > 0$ in Eq. (1), respectively. For simplicity, we set $s_i = 0.5$, $t_0 = 0$, and $t_f$ as the epoch of the first learning rate decay for sparse-to-sparse training. Different from the existing sparse-to-sparse training methods, i.e., SET [44], RigL [9], and ITOP [37], in which the sparsity is fixed throughout training, GraNet starts from a denser yet still sparse model and gradually prunes the sparse model to the desired sparsity. Although starting with more parameters, the global pruning technique of gradual pruning helps GraNet quickly evolve to a better sparsity distribution than RigL, with lower feedforward FLOPs and higher test accuracy. What's more, GraNet sparsifies all layers, including the first convolutional layer and the last fully-connected layer.
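To make the prune-and-regenerate step of Eqs. (2)-(3) concrete, here is a layer-wise sketch operating on a single weight tensor and its dense gradient. It is a simplified illustration, not the released GraNet code; in particular, initializing regenerated weights at zero follows the RigL-style regrowth mentioned above and is an assumption here.

```python
# Layer-wise sketch of Zero-Cost Neuroregeneration (Eqs. (2)-(3)); a simplified illustration.
import torch


def zero_cost_neuroregeneration(weight, grad, r):
    """One prune-and-regenerate step for a single layer: `weight` holds zeros at pruned positions,
    `grad` is the dense gradient of this layer, and r is the fraction of active connections to replace."""
    flat_w, flat_g = weight.flatten().clone(), grad.flatten()
    active = flat_w != 0
    n_active = int(active.sum())
    n_regen = int(r * n_active)

    # Eq. (2): among active connections, keep the (1 - r) proportion with the largest magnitude;
    # the remaining "damaged" connections are removed.
    keep_score = torch.where(active, flat_w.abs(), torch.full_like(flat_w, float("-inf")))
    keep_idx = torch.topk(keep_score, n_active - n_regen, largest=True).indices
    new_w = torch.zeros_like(flat_w)
    new_w[keep_idx] = flat_w[keep_idx]
    mask = torch.zeros_like(flat_w)
    mask[keep_idx] = 1.0

    # Eq. (3): regenerate the same number of connections among the inactive positions, picking the
    # largest gradient magnitudes; regenerated weights start from zero (RigL-style regrowth).
    regen_score = torch.where(mask == 0, flat_g.abs(), torch.full_like(flat_g, float("-inf")))
    regen_idx = torch.topk(regen_score, n_regen, largest=True).indices
    mask[regen_idx] = 1.0
    return new_w.view_as(weight), mask.view_as(weight)
```

Because this step runs only immediately after each gradual pruning step, the dense gradient it consults is needed just once every few thousand iterations, keeping the backward pass sparse the rest of the time.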
For each model, we divide the results into three groups from top to bottom: pruning at initialization, dynamic sparse training and dense-to-sparse methods. See Appendix B for more implementation details used in the experiments. GraNet (si = 0.5) refers to the sparse-to-sparse version and the and GraNet (si = 0) refers to the dense-to-sparse version. CIFAR-10/100. The results of CIFAR-10/100 are shared in Table 2. We can observe that performance differences among different methods on CIFAR-10 are generally small, but still, GraNet (si = 0) consistently improves the performance over GMP except for the sparsity 95%, and achieves the highest accuracy in 4 out of 6 cases. In terms of the more complex data CIFAR-100, the performance differences between the during-training pruning methods and before-training pruning methods are much larger. GraNet (si = 0) again consistently outperforms GMP with all sparsities, highlighting the benefits of Zero-Cost Neuroregeneration. It is maybe more interesting that GraNet (si = 0) even outperforms the post-training method, subdifferential inclusion for sparsity (SIS), by a large margin. In terms of sparse-to-sparse training, our proposed GraNet (si = 0.5) has a dominant performance over other methods. Especially at the very extreme sparsity 0.98, our method outperforms RigL by 1.40% and 2.22% with VGG-19 on CIFAR-10 and CIFAR-100, respectively. ImageNet. Due to the small data size, the experiments with CIFAR-10/100 may not be sufficient to draw a solid conclusion. We further evaluate our method with ResNet-50 on ImageNet in Table 3. We only run this experiment once due to the limited resources. We set t0 = 0 and tf = 30 for both GraNet (si = 0) and GraNet (si = 0.5) on ImageNet. Again, GraNet (si = 0) outperforms GMP consistently with only half training FLOPs and achieves the highest accuracy among all the dense-to-sparse methods at sparsity of 0.9. Surprisingly, GraNet (si = 0.5) significantly boosts the sparse-to-sparse training performance, even over the dense-to-sparse training. Concretely, GraNet (si = 0.5) outperforms RigL by 0.9% and 1.5% at sparsity 0.8 and 0.9, respectively. To the best of our knowledge, this is the first time in the literature that sparse-to-sparse training reaches a test accuracy of 76% with ResNet-50 on ImageNet at sparsity 0.8, without extension of training time. It is reasonable for GraNet (si = 0.5) to achieve better accuracy than RigL, since the denser models at the beginning help GraNet explore more the parameter space. According to the In-Time Over-Parameterization hypothesis [37], the performance of sparse training methods is highly correlated with the total number of parameters that the sparse model has visited. We further report the training/inference FLOPs required by all pruning methods. Compared with other dense-to-sparse methods, the final networks learned by GraNet (si = 0) require more FLOPs to test, whereas the overall training FLOPs required by GraNet (si = 0) are smaller than others. Even though starting from a denser model, GraNet (si = 0.5) requires less training and inference FLOPs than the state-of-the-art method, i.e., RigL. The sparsity budgets for 0.9 sparse ResNet-50 on ImageNet-1K learned by our methods are reported in Appendix D. We also report how FLOPs of the pruned ResNet-50 evolve during the course of training in Appendix E. 4.4 Effect of the Initial Sparsity As we mentioned earlier, the denser initial network is the key factor in the success of GraNet. 
We conducted experiments to study the effect of the initial sparsity on GraNet with ResNet-50 on ImageNet. The initial sparsity is chosen from [0.0, 0.5, 0.6, 0.7, 0.8, 0.9] and the final sparsity is fixed at 0.9. The results are shared in Table 4. We can see that the training FLOPs of GraNet are quite robust to the initial sparsity. Surprisingly yet reasonably, it seems that the smaller the initial sparsity (down to 0.5), the better the final sparsity distribution GraNet finds, with higher test accuracy and fewer feedforward FLOPs. The lower feedforward FLOPs of the final network perfectly balance the overhead caused by the denser initial network.

4.5 Performance of GraNet at Extreme Sparsities

In this section, we share the results of GraNet and RigL at extreme sparsities. The initial sparsity is set as 0.5. When the final sparsity is relatively small (e.g., 0.8, 0.9), GraNet requires a lower (or the same) number of training FLOPs than RigL, whereas GraNet requires more training FLOPs than RigL when the final sparsity is extremely high (e.g., 0.95, 0.965). This makes sense since, when the sparsity is extremely high, the FLOPs saved by the distribution discovered by GraNet are too small to amortize the overhead caused by the denser initial model. Yet, the increased number of training FLOPs of GraNet leads to a substantial accuracy improvement (> 2%) over RigL. The efficiency of GraNet (si = 0.5) comes from two important technical differences compared with RigL: (1) a better final sparse distribution discovered by global pruning; (2) a shorter period of gradual pruning (the first 30 epochs for ResNet-50 on ImageNet). Although starting with more parameters, global pruning enables GraNet to quickly (within the first 30 epochs) evolve to a better sparsity distribution with lower test FLOPs than ERK. After 30 epochs of gradual pruning, the network continues to be trained with this better distribution for 70 epochs, so that the overhead of the early training phase with larger training FLOPs is amortized by the later and longer training phase with fewer training FLOPs.

4.6 Ablation Study of Random Reinitialization

Next, we ask whether what GraNet learned is the specific sparse connectivity alone or the sparse connectivity together with the weight values. We randomly reinitialize the pruned network with the same mask and retrain it. The results are given in Figure 4. The performance of the reinitialized networks falls significantly short of the performance achieved by GraNet (si = 0), indicating that what was learned by GraNet is the sparse connectivity together with the weight values. Besides, we find that the retraining performance of GraNet is higher than that of GMP. This further confirms that Zero-Cost Neuroregeneration helps the gradual pruning find more accurate mask positions.

Figure 4: Reinitialization ablation on subnetworks discovered by GMP and GraNet (si = 0). (Panels: test accuracy vs. sparsity for VGG-19 on CIFAR-10 and CIFAR-100; curves compare GraNet, GMP, and reinitialization with the GraNet and GMP masks.)

4.7 Comparison between Re-training and Extended Training

In this section, we study whether re-training techniques can further improve the performance of the subnetworks discovered by GraNet.
The authors of Lottery Ticket Hypothesis (LTH) [10] introduced a retraining technique, even if they did not evaluate it as such, where the subnetworks discovered by iterative magnitude pruning can be re-trained in isolation to full accuracy with the original initializations. Later on, learning rate rewinding (LRR) [54] was proposed further to improve the re-training performance by only rewinding the learning rate. Since GraNet also utilizes magnitude pruning to discover subnetworks, it is natural to test if these re-training techniques can bring benefits to GraNet. As shown in Table 6, both re-training techniques do not bring benefits to GraNet. Instead of re-training the subnetworks, we find that simply extending the training time significantly boosts the performance of GraNet with similar computational costs. 5 Conclusion, and Reflection of Broader Impacts In this paper, we re-emphasize the merit of during-training pruning. Compared with the recently proposed works, i.e., LTH and SNIP, during-training pruning is an efficient yet performant class of pruning methods that have received much less attention. We quantitatively study pruning during training from the perspective of pruning plasticity. Inspired by the findings from pruning plasticity and the mechanism of neuroregeneration in the nervous system, we further proposed a novel sparse training method, GraNet, that performs the cost-free connection regeneration during training. GraNet advances the state of the art in both dense-to-sparse training and sparse-to-sparse training. Our paper re-emphasizes the great potential of during-training pruning in reducing the training/inference resources required by ML models without sacrificing accuracy. It has a significant environmental impact on reducing the energy cost of the ML models and CO2 emissions [1, 53, 15, 56, 62]. 6 Acknowledgement This project is partially financed by the Dutch Research Council (NWO). We thank the reviewers for the constructive comments and questions, which improved the quality of our paper.
1. What is the focus of the paper regarding training plasticity and pruning? 2. What are the strengths of the proposed approach, particularly in terms of neuroregeneration? 3. What are the weaknesses of the paper, especially in comparison with other works like STR? 4. How does the reviewer assess the effectiveness of the proposed method, GraNet, and its dynamic sparse training variant? 5. Do you have any concerns or questions about the training plasticity aspect?
Summary Of The Paper Review
Summary Of The Paper This paper studies during-training pruning from the perspective of pruning plasticity, which is the ability of the pruned networks to recover the original performance. A technique named neuroregeneration is further shown to improve pruning plasticity. Based on these findings, a gradual magnitude pruning (GMP) method with neuroregeneration (GraNet), and its dynamic sparse training variant, are proposed. Experiments show the effectiveness of GraNet and its sparse-to-sparse variant. Review Pros The paper is well-written. The contributions are clearly presented. Model compression is an important topic in scaling deep models on mobile devices. It is essential to design effective pruning methods. The key idea of neuroregeneration is well-motivated. Extensive experiments are conducted to show the superiority of GraNet. Cons It seems that GraNet achieves marginal improvements over STR on the ImageNet dataset. Does neuroregeneration work in a structured pruning framework? Pruning plasticity is sensitive to hyper-parameters. It would be better to provide experiments showing how neuroregeneration helps pruning plasticity.
NIPS
Title Sparse Training via Boosting Pruning Plasticity with Neuroregeneration Abstract Works on lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) have raised a lot of attention currently on post-training pruning (iterative magnitude pruning), and before-training pruning (pruning at initialization). The former method suffers from an extremely large computation cost and the latter usually struggles with insufficient performance. In comparison, during-training pruning, a class of pruning methods that simultaneously enjoys the training/inference efficiency and the comparable performance, temporarily, has been less explored. To better understand during-training pruning, we quantitatively study the effect of pruning throughout training from the perspective of pruning plasticity (the ability of the pruned networks to recover the original performance). Pruning plasticity can help explain several other empirical observations about neural network pruning in literature. We further find that pruning plasticity can be substantially improved by injecting a brain-inspired mechanism called neuroregeneration, i.e., to regenerate the same number of connections as pruned. We design a novel gradual magnitude pruning (GMP) method, named gradual pruning with zerocost neuroregeneration (GraNet), that advances state of the art. Perhaps most impressively, its sparse-to-sparse version for the first time boosts the sparse-tosparse training performance over various dense-to-sparse methods with ResNet50 on ImageNet without extending the training time. We release all codes in https://github.com/Shiweiliuiiiiiii/GraNet. 1 Introduction Neural network pruning is the most common technique to reduce the parameter count, storage requirements, and computational costs of modern neural network architectures. Recently, posttraining pruning [49, 29, 18, 47, 10, 54, 74, 5, 57, 75] and before-training pruning [31, 30, 67, 63, 6, 11] have been two fast-rising fields, boosted by lottery tickets hypothesis (LTH) [10] and singleshot network pruning (SNIP) [31]. The process of post-training pruning typically involves fully pre-training a dense network as well as many cycles of retraining (either fine-tuning [18, 17, 39] or rewinding [12, 54]). As the training costs of the state-of-the-art models, e.g., GPT-3 [4] and FixEfficientNet-L2 [64] have exploded, this process can lead to a large amount of overhead cost. Recently emerged methods for pruning at initialization significantly reduce the training cost by identifying a trainable sub-network before the main training process. While promising, the existing methods fail to match the performance achieved by the magnitude pruning after training [11]. ∗Partial of this work have been done when Shiwei Liu worked as an intern at JD Explore Academy. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Compared with the above-mentioned two classes of pruning, during-training pruning is a class of methods that reap the acceleration benefits of sparsity early on the training and meanwhile achieve promising performance by consulting the information obtained during training. There are some works [77, 13, 33] attempting to gradually prune the network to the desired sparsity during training, while they mainly focus on the performance improvement. Up to now, the understanding of duringtraining pruning has been less explored due to its more complicated dynamical process, and the performance gap still exists between pruning during training and full dense training. 
To better understand the effect of pruning during the optimization process (not at inference), we study the ability of the pruned models to recover the original performance after a short continued training with the current learning rate, which we call pruning plasticity (see Section 3.1 for a more formal definition). Inspired by the neuroregeneration mechanism in the nervous system where new neurons and connections are synthesized to recover the damage in the nervous system [26, 41, 73], we examine if allowing the pruned network to regenerate new connections can improve pruning plasticity, and hence contribute to pruning during training. We consequently propose a parameter-efficient method to regenerate new connections during the gradual pruning process. Different from the existing works for pruning understanding which mainly focus on dense-to-sparse training [42] (training a dense model and prune it to the target sparsity), we also consider sparse-to-sparse training (training a sparse model yet adaptively re-creating the sparsity pattern) which recently has received an upsurge of interest in machine learning [44, 3, 9, 48, 8, 37, 36]. In short, we have the following main findings during the course of the study: #1. Both pruning rate and learning rate matter for pruning plasticity. When pruned with low pruning rates (e.g., 0.2), both dense-to-sparse training and sparse-to-sparse training can easily recover from pruning. On the contrary, if too many parameters are removed at one time, almost all models suffer from accuracy drops. This finding makes a connection to the success of the iterative magnitude pruning [10, 54, 5, 6, 65], where usually a pruning process with a small pruning rate (e.g., 0.2) needs to be iteratively repeated for good performance. Pruning plasticity also gradually decreases as the learning rate drops. When pruning happens during the training phase with large learning rates, models can easily recover from pruning (up to a certain level). However, pruning plasticity drops significantly after the second learning rate decay, leading to a situation where the pruned networks can not recover with continued training. This finding helps to explain several observations (1) for gradual magnitude pruning (GMP), it is always optimal to end pruning before the second learning rate drop [77, 13]; (2) dynamic sparse training (DST) benefits from a monotonically decreasing pruning rate with cosine or linear update schedule [8, 9]; (3) rewinding techniques [12, 54] outperform fine-tuning as rewinding retrains subnetworks with the original learning rate schedule whereas fine-tuning often retrains with the smallest learning rate. #2. Neuroregeneration improves pruning plasticity. Neuroregeneration [41, 73] refers to the regrowth or repair of nervous tissues, cells, or cell products. Conceptually, it involves synthesizing new neurons, glia, axons, myelin, or synapses, providing extra resources in the long term to replace those damaged by the injury, and achieving a lasting functional recovery. Such mechanism is closely related to the brain plasticity [51], and we borrow this concept to developing a computational regime. We show that, while regenerating the same number of connections as pruned, the pruning plasticity is observed to improve remarkably, indicating a more neuroplastic model being developed. However, it increases memory and computational overheads and seems to contradict the benefits of pruningduring-training. 
This, however, raises the question: can we achieve efficient neuroregeneration during training with no extra cost? We provide an affirmative answer to this question. #3. Pruning plasticity with neuroregeneration can be leveraged to substantially boost sparse training performance. The above-mentioned findings on pruning plasticity generalize to the final performance obtained when continued training is run to the end. Imitating the neuroregeneration behavior [41, 73], we propose a new sparse training method – gradual pruning with zero-cost neuroregeneration (GraNet) – which is capable of performing regeneration without increasing the parameter count. In experiments, GraNet establishes a new state-of-the-art performance bar for both dense-to-sparse training and sparse-to-sparse training. In particular, the latter for the first time boosts sparse-to-sparse training performance over various dense-to-sparse methods by a large margin, without extending the training time, with ResNet-50 on ImageNet. Besides the consistent performance improvement, we find that the subnetworks GraNet learns are more accurate than those learned by the existing gradual pruning method, which helps explain the success of GraNet. 2 Related Work Post-Training Pruning. Methods that yield a sparse neural network from a pre-trained network by pruning unimportant weights or neurons were, to the best of our knowledge, first proposed in [24] and [50]. Since then, various pruning methods have emerged that identify sparse neural networks for inference increasingly efficiently. Pruning criteria include weight magnitude [18, 10], gradient [61], Hessian [29, 19, 59], Taylor expansion [47, 46], etc. Low-rank decomposition [7, 23, 17, 71] is also used to induce structured sparsity in terms of channels or filters. Most of the above-mentioned pruning methods require many pruning and re-training cycles to achieve the desired performance. During-Training Pruning. Instead of inheriting weights from a pre-trained model, some works attempt to discover well-performing sparse neural networks within a single training process. Gradual Magnitude Pruning (GMP), introduced in [77] and studied further in [13], gradually sparsifies the neural network during the training process until the desired sparsity is reached. Besides, [40] and [68] are prior works that enforce sparsity in the network during training via L0 and L1 regularization, respectively. [60, 34, 55, 70, 28] went further by introducing trainable sparsity heuristics to learn the sparse masks and weights simultaneously. These methods are all classified as dense-to-sparse training, as they start from a dense network. Dynamic Sparse Training (DST) [44, 3, 48, 8, 9, 36, 35, 25] is another class of methods that prune models during training. The key feature of DST is that it starts from a randomly initialized sparse network and optimizes the sparse topology as well as the weights simultaneously during training (sparse-to-sparse training). Without extended training time [37], sparse-to-sparse training usually falls short of dense-to-sparse training in terms of prediction accuracy. For further details, see the surveys [43, 21]. Before-Training Pruning. Motivated by SNIP [31], many works [67, 63, 6] have emerged recently to explore the possibility of obtaining a trainable sparse neural network before the main training process.
[11] demonstrates that the existing methods for pruning at initialization perform equally well when the unpruned weights are randomly shuffled, which reveals that what these methods discover is the layer-wise sparsity ratio rather than the indispensable weight values and positions. Our analysis shows that both the mask positions and the weight values are crucial for GraNet. 3 Methodology for Pruning Plasticity The primary goal of this paper is to study the effect of pruning as well as neuroregeneration on neural networks during the standard training process. Therefore, we do not consider post-training pruning or before-training pruning. Below, we introduce in detail the definition of pruning plasticity and the experimental design used to study it. 3.1 Metrics Let us denote Wt ∈ Rd as the weights of the network and mt ∈ {0, 1}d as the binary mask yielded by the pruning method at epoch t. Thus, the pruned network can be denoted as Wt ⊙ mt. Let T be the total number of epochs for which the model is trained, and let CONTRAIN_k(Wt ⊙ mt, a) denote the operation that continues to train the pruned model for k epochs with learning rate schedule a. Definition of pruning plasticity. We define pruning plasticity as t_CONTRAIN_k(Wt ⊙ mt, at) − t_PRE, where t_PRE is the test accuracy measured before pruning and t_CONTRAIN_k(Wt ⊙ mt, at) is the test accuracy measured after k epochs of continued training CONTRAIN_k(Wt ⊙ mt, at). Specifically, to better understand the effect of pruning on the current model status and to avoid the effect of learning rate decay, we fix the learning rate to the one in use when the model is pruned, i.e., at. This setting is also relevant to GMP [77, 13] and DST [44, 9, 48, 37], in which most pruned models are continually trained with the current learning rate for some time. Final performance gap. We also investigate the effect of pruning on the final performance, that is, continually training the pruned networks to the end with the remaining learning rate schedule CONTRAIN_{T−t}(Wt ⊙ mt, a[t+1:T]). In this case, we report t_CONTRAIN_{T−t}(Wt ⊙ mt, a[t+1:T]) − t_FINAL, where t_FINAL is the final test accuracy of the unpruned models. 3.2 Architectures and Datasets We choose two commonly used architectures to study pruning plasticity: VGG-19 [58] with batch normalization on CIFAR-10 [27], and ResNet-20 [20] on CIFAR-10. We summarize the networks, data, and hyperparameters of dense-to-sparse training in Table 1. We use standard implementations and hyperparameters available online, with the exception of a smaller batch size for ResNet-50 on ImageNet due to limited hardware resources (2× Tesla V100). All accuracies are in line with the baselines reported in the references [8, 11, 67, 9, 37]. 3.3 How to Prune, and How to Regenerate Structured and Unstructured Pruning. We consider both unstructured and structured pruning in this paper. Structured pruning prunes weights in groups, or removes entire neurons, convolutional filters, or channels, enabling acceleration with off-the-shelf hardware. In particular, we choose the filter pruning method used in Li et al. [32]. Unstructured sparsity is a promising direction not only due to its outstanding performance at extreme sparsities but also due to the increasing support for sparse operations in practical hardware [35, 14, 52, 76, 22]. For example, Liu et al. [35] illustrated for the first time the true potential of DST, demonstrating significant training/inference efficiency improvements over dense training.
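To make the measurement protocol of Sections 3.1–3.3 concrete, the following is a minimal sketch, our own illustration rather than the authors' released code, of one-shot global magnitude pruning followed by the pruning-plasticity measurement. It assumes a PyTorch model and two user-supplied, hypothetical helpers: contrain (one epoch of masked training at a fixed learning rate) and evaluate (test accuracy).

```python
import torch
import torch.nn as nn

def global_magnitude_masks(model: nn.Module, prune_rate: float):
    """One-shot global magnitude pruning: keep the (1 - prune_rate) fraction of
    weights with the largest magnitude, pooled across all prunable layers."""
    weights = [m.weight.data for m in model.modules()
               if isinstance(m, (nn.Linear, nn.Conv2d))]
    scores = torch.cat([w.abs().flatten() for w in weights])
    k = max(int(prune_rate * scores.numel()), 1)
    threshold = torch.kthvalue(scores, k).values      # k-th smallest magnitude
    return [(w.abs() > threshold).float() for w in weights]

def apply_masks(model: nn.Module, masks):
    """Zero out pruned weights; historical values are discarded (Section 3.3)."""
    prunable = [m for m in model.modules() if isinstance(m, (nn.Linear, nn.Conv2d))]
    for layer, mask in zip(prunable, masks):
        layer.weight.data.mul_(mask)

def pruning_plasticity(model, masks, contrain, evaluate, k_epochs: int, lr: float):
    """Pruning plasticity (Section 3.1): test accuracy after k epochs of continued
    training at the current learning rate, minus the accuracy before pruning."""
    acc_pre = evaluate(model)                    # t_PRE
    apply_masks(model, masks)                    # W_t (.) m_t
    for _ in range(k_epochs):                    # CONTRAIN_k(W_t (.) m_t, a_t)
        contrain(model, lr=lr, masks=masks)      # one epoch; masks re-applied inside
    return evaluate(model) - acc_pre
```

Layer-wise pruning would simply compute a separate threshold per layer instead of pooling the magnitude scores globally.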
Different from prior conventions [77, 13, 33, 2], where the values of the pruned weights are kept, we set the pruned weights to zero in all implementations in this paper, to eliminate the historical information. Magnitude pruning. We prune the weights with the smallest magnitude, as this has become the standard criterion when pruning happens during training, e.g., in GMP [77, 13] and DST [44, 9, 37]. We are aware of other pruning criteria, including but not limited to the Hessian [29, 19, 59], Taylor expansion [47, 46], connection sensitivity [31], gradient flow [67], and the Neural Tangent Kernel [38, 16]. One-shot pruning. To isolate the pruning effect at different training stages and to avoid interactions between two iterations of pruning, we focus on one-shot pruning. Note that our setting also generalizes to iterative pruning, as our experimental design includes neural networks trained at various sparsities, each of which is further pruned with various pruning rates. Layer-wise pruning and global pruning. We study both layer-wise magnitude pruning and global magnitude pruning for pruning plasticity. Global magnitude pruning prunes the layers jointly and leads to non-uniform sparsity distributions; layer-wise pruning operates layer by layer, resulting in uniform distributions. Gradient-based regeneration. The simplest regeneration scheme is to randomly activate new connections [3, 44]. However, random regeneration takes a long time to discover the important connections, especially at very extreme sparsities. Alternatively, gradients, including those of connections with zero weights, provide good indicators of connection importance. For this reason, we focus on the gradient-based regeneration proposed in the Rigged Lottery (RigL) [9], i.e., regenerating the same number of connections as were pruned, chosen by largest gradient magnitude. 3.4 Experimental Results We study pruning plasticity during training with and without regeneration, for both dense training and sparse training. We report the results of ResNet-20 on CIFAR-10 with unstructured global pruning in the main body of the paper. The rest of the experiments are given in Appendix A. Unless otherwise stated, results are qualitatively similar across all networks. Concretely, we first pre-train networks at four sparsity levels: 0, 0.5, 0.9, and 0.98. The sparse neural networks are trained with a uniform distribution (i.e., all layers have the same sparsity). We further choose four pruning rates, namely 0.2, 0.5, 0.9, and 0.98, to measure the corresponding pruning plasticity of the pre-trained networks. Pruning plasticity. We continue to train the pruned model for 30 epochs and report pruning plasticity in Figure 2. Overall, the learning rate schedule, the pruning rate, and the sparsity of the original models all have a big impact on pruning plasticity. Pruning plasticity decreases as the learning rate decays for all models with different sparsity levels. The models trained with a large learning rate of 0.1 can easily recover or even exceed the original performance, except at the extremely large pruning rate of 0.98. However, the models obtained during the later training phases can recover only for mild pruning rates, e.g., 0.2 (orange lines) and 0.5 (green lines). We next demonstrate the effect of connection regeneration on pruning plasticity in the bottom row of Figure 2.
It is clear that connection regeneration significantly improves pruning plasticity in all cases, especially for the models that are over-pruned (purple lines). Still, even with connection regeneration, pruning plasticity suffers from performance degradation when pruning occurs after the learning rate drops. Final performance gap. Compared with the current model status, one might be more interested in the effect of pruning on the final performance. We therefore measure, in Figure 3, the performance gap between the original test accuracy of the unpruned models and the final test accuracy of the pruned model under full continued training CONTRAIN_{T−t}(Wt ⊙ mt, a[t+1:T]). We observe that, in this case, large learning rates do not enjoy a large performance improvement, but still, the performance gap increases as the learning rate drops. It is reasonable to conjecture that the accuracy improvement in pruning plasticity at the large learning rate of 0.1 is due to the unconverged performance during the early phase of training. Besides, it is surprising to find that the final performance of extremely sparse networks (e.g., the third and fourth columns) significantly benefits from mild pruning. Again, the ability of the pruned model to recover from pruning improves remarkably after connections are regenerated. 4 Gradual Pruning with Zero-Cost Neuroregeneration So far, we have seen that regenerating important connections in the pruned models during training substantially improves pruning plasticity as well as the final performance. However, naively regenerating extra connections increases the parameter count and conflicts with the motivation of gradual pruning. Inspired by the mechanism of neuroregeneration in the nervous system, we propose a novel sparse training method which we call gradual pruning with zero-cost neuroregeneration (GraNet). GraNet consults the information produced throughout training and regenerates important connections during training in a parameter-efficient fashion. See Appendix B.1 for the pseudocode of GraNet. We introduce the main components of GraNet below. 4.1 Gradual Pruning We follow the gradual pruning scheme used in [77] and gradually sparsify the dense network to the target sparsity level over n pruning iterations. Let si be the initial sparsity, sf the target sparsity, t0 the starting epoch of gradual pruning, tf the end epoch of gradual pruning, and ∆t the pruning frequency. The sparsity at each pruning iteration is: st = sf + (si − sf)(1 − (t − t0)/(n∆t))³, t ∈ {t0, t0 + ∆t, ..., t0 + n∆t}. (1) We choose global pruning for our method, as it generally achieves better performance than uniform pruning. We also report the performance with the uniform sparsity used in [13] in Appendix C.3. Conventional gradual pruning methods [77, 13] implement the pruning operation by changing the mask (not the weight values), so that pruned connections can in principle be reactivated in later training phases. However, since the weights of the pruned connections are not updated, they have little chance of receiving sufficient updates to exceed the pruning threshold. This hinders the regeneration of important connections. 4.2 Zero-Cost Neuroregeneration The main difference between GraNet and the conventional GMP methods [77, 13] is the Zero-Cost Neuroregeneration.
Imitating the neuroregeneration of the peripheral nervous system [41, 73], where new neurons and connections are synthesized to replace damaged ones, we first detect and eliminate the “damaged” connections, and then regenerate the same number of new connections. By doing so, we achieve connection regeneration without increasing the number of connections. Concretely, we identify the “damaged” connections as the ones with the smallest weight magnitudes. A small magnitude indicates that either the weight’s gradient is small or the gradient direction oscillates frequently; therefore, these weights contribute little to reducing the training loss and can be removed. Again, we use the gradient as the importance score for regeneration, the same as the regrowth method used in RigL [9]. Why do we call it “Zero-Cost Neuroregeneration”? In addition to not increasing the connection (parameter) count, the backward pass of our method is sparse most of the time, even though our regeneration utilizes the dense gradient to identify important connections. We perform neuroregeneration immediately after each gradual pruning step, meaning that regeneration occurs only once every several thousand iterations. The extra overhead of calculating the dense gradient is amortized over the whole training cost. Compared with methods [33, 69] that require updating all the weights in the backward pass, our method is much more training-efficient, as around 2/3 of the training FLOPs come from the backward pass [9, 72]. Let r denote the ratio of the number of regenerated connections to the total number of connections, and let W be the network weights. We first remove the proportion r of “damaged” weights with the smallest magnitude: W′ = TopK(|W|, 1 − r). (2) Here TopK(v, k) returns the weight tensor retaining the top-k proportion of elements from v. Immediately after that, we regenerate the proportion r of new connections based on the gradient magnitude: W = W′ + TopK(|g_{i∉W′}|, r), (3) where |g_{i∉W′}| are the gradient magnitudes of the zero weights. We perform Zero-Cost Neuroregeneration layer by layer from the beginning of training to the end. GraNet naturally generalizes to the dense-to-sparse and sparse-to-sparse training scenarios by setting the initial sparsity level si = 0 or si > 0 in Eq. (1), respectively. For sparse-to-sparse training, we simply set si = 0.5, t0 = 0, and tf to the epoch of the first learning rate decay. Different from existing sparse-to-sparse training methods such as SET [44], RigL [9], and ITOP [37], in which the sparsity is fixed throughout training, GraNet starts from a denser yet still sparse model and gradually prunes the sparse model to the desired sparsity. Although it starts with more parameters, the global pruning technique of gradual pruning helps GraNet quickly evolve to a better sparsity distribution than RigL, with lower feedforward FLOPs and higher test accuracy. Moreover, GraNet sparsifies all layers, including the first convolutional layer and the last fully-connected layer. 4.3 Experimental Results We conduct various experiments to evaluate the effectiveness of GraNet, comparing it with various dense-to-sparse and sparse-to-sparse methods. The results of the Rigged Lottery (RigL) and GMP on CIFAR-10/100 were reproduced with our PyTorch implementation, so that the only difference between GraNet and GMP is the Zero-Cost Neuroregeneration.
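As a concrete reference for Sections 4.1 and 4.2, the sketch below illustrates the gradual sparsity schedule of Eq. (1) and a single layer-wise Zero-Cost Neuroregeneration step implementing Eqs. (2)–(3). It is a simplified illustration under our own assumptions (one weight tensor, dense gradient available at the regeneration step), not the released GraNet implementation; see the repository linked in the abstract for that.

```python
import torch

def sparsity_at(t: int, s_i: float, s_f: float, t0: int, n: int, dt: int) -> float:
    """Eq. (1): target sparsity at pruning step t in {t0, t0 + dt, ..., t0 + n*dt}."""
    frac = (t - t0) / (n * dt)
    return s_f + (s_i - s_f) * (1.0 - frac) ** 3

def zero_cost_neuroregeneration(weight: torch.Tensor, grad: torch.Tensor,
                                mask: torch.Tensor, r: float) -> torch.Tensor:
    """Eqs. (2)-(3) for one layer: drop the fraction r of active connections with
    the smallest weight magnitude, then regrow the same number of connections at
    zero positions with the largest gradient magnitude, so the number of active
    connections (and hence the parameter count) is unchanged."""
    flat_w = weight.flatten()
    flat_g = grad.flatten()
    new_mask = mask.flatten().clone()
    n_regen = int(r * new_mask.sum().item())
    if n_regen == 0:
        return mask
    # Eq. (2): "damaged" connections = active weights with the smallest |w|
    active_mag = torch.where(new_mask.bool(), flat_w.abs(),
                             torch.full_like(flat_w, float("inf")))
    drop_idx = torch.topk(active_mag, n_regen, largest=False).indices
    new_mask[drop_idx] = 0.0
    # Eq. (3): regenerate at zero positions with the largest |g|
    zero_grad_mag = torch.where(new_mask.bool(),
                                torch.full_like(flat_g, float("-inf")), flat_g.abs())
    grow_idx = torch.topk(zero_grad_mag, n_regen, largest=True).indices
    new_mask[grow_idx] = 1.0
    # The caller then applies weight.data.mul_(new_mask.view_as(weight)), so
    # dropped weights are zeroed and regrown weights start from zero.
    return new_mask.view_as(mask)
```

In GraNet, this step is applied layer by layer immediately after each gradual pruning step of Eq. (1), so the backward pass remains sparse for the vast majority of iterations.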
For each model, we divide the results into three groups from top to bottom: pruning at initialization, dynamic sparse training, and dense-to-sparse methods. See Appendix B for more implementation details used in the experiments. GraNet (si = 0.5) refers to the sparse-to-sparse version, and GraNet (si = 0) refers to the dense-to-sparse version. CIFAR-10/100. The results on CIFAR-10/100 are reported in Table 2. We observe that the performance differences among methods on CIFAR-10 are generally small; still, GraNet (si = 0) consistently improves over GMP except at 95% sparsity, and achieves the highest accuracy in 4 out of 6 cases. On the more complex CIFAR-100 data, the performance differences between during-training pruning methods and before-training pruning methods are much larger. GraNet (si = 0) again consistently outperforms GMP at all sparsities, highlighting the benefit of Zero-Cost Neuroregeneration. Perhaps more interestingly, GraNet (si = 0) even outperforms the post-training method, subdifferential inclusion for sparsity (SIS), by a large margin. In terms of sparse-to-sparse training, our proposed GraNet (si = 0.5) dominates the other methods. Especially at the very extreme sparsity of 0.98, our method outperforms RigL by 1.40% and 2.22% with VGG-19 on CIFAR-10 and CIFAR-100, respectively. ImageNet. Due to the small dataset size, the experiments on CIFAR-10/100 may not be sufficient to draw solid conclusions. We therefore further evaluate our method with ResNet-50 on ImageNet in Table 3. We run this experiment only once due to limited resources. We set t0 = 0 and tf = 30 for both GraNet (si = 0) and GraNet (si = 0.5) on ImageNet. Again, GraNet (si = 0) consistently outperforms GMP with only half the training FLOPs and achieves the highest accuracy among all dense-to-sparse methods at a sparsity of 0.9. Surprisingly, GraNet (si = 0.5) significantly boosts sparse-to-sparse training performance, even above dense-to-sparse training. Concretely, GraNet (si = 0.5) outperforms RigL by 0.9% and 1.5% at sparsities of 0.8 and 0.9, respectively. To the best of our knowledge, this is the first time in the literature that sparse-to-sparse training reaches a test accuracy of 76% with ResNet-50 on ImageNet at sparsity 0.8 without extending the training time. It is reasonable for GraNet (si = 0.5) to achieve better accuracy than RigL, since the denser model at the beginning helps GraNet explore more of the parameter space. According to the In-Time Over-Parameterization hypothesis [37], the performance of sparse training methods is highly correlated with the total number of parameters that the sparse model has visited. We further report the training/inference FLOPs required by all pruning methods. Compared with other dense-to-sparse methods, the final networks learned by GraNet (si = 0) require more FLOPs at test time, whereas the overall training FLOPs required by GraNet (si = 0) are smaller than those of the others. Even though it starts from a denser model, GraNet (si = 0.5) requires fewer training and inference FLOPs than the state-of-the-art method, RigL. The sparsity budgets learned by our methods for 0.9-sparse ResNet-50 on ImageNet-1K are reported in Appendix D. We also report how the FLOPs of the pruned ResNet-50 evolve during the course of training in Appendix E. 4.4 Effect of the Initial Sparsity As we mentioned earlier, the denser initial network is the key factor in the success of GraNet.
We conducted experiments to study the effect of the initial sparsity on GraNet with ResNet-50 on ImageNet. The initial sparsity is chosen from {0.0, 0.5, 0.6, 0.7, 0.8, 0.9} and the final sparsity is fixed at 0.9. The results are reported in Table 4. We can see that the training FLOPs of GraNet are quite robust to the initial sparsity. Surprisingly yet reasonably, it seems that the smaller the initial sparsity (down to 0.5), the better the final sparsity distribution GraNet finds, with higher test accuracy and fewer feedforward FLOPs. The lower feedforward FLOPs of the final network balance out the overhead caused by the denser initial network. 4.5 Performance of GraNet at Extreme Sparsities In this section, we report the results of GraNet and RigL at extreme sparsities. The initial sparsity is set to 0.5. When the final sparsity is relatively small (e.g., 0.8, 0.9), GraNet requires fewer (or the same) training FLOPs than RigL, whereas it requires more training FLOPs than RigL when the final sparsity is extremely high (e.g., 0.95, 0.965). This makes sense: when the sparsity is extremely high, the FLOPs saved by the distribution discovered by GraNet are too small to amortize the overhead caused by the denser initial model. Yet, the increased training FLOPs of GraNet lead to a substantial accuracy improvement (> 2%) over RigL. The efficiency of GraNet (si = 0.5) comes from two important technical differences compared with RigL: (1) a better final sparsity distribution discovered by global pruning; (2) a shorter gradual pruning period (the first 30 epochs for ResNet-50 on ImageNet). Although GraNet starts with more parameters, global pruning enables it to quickly (within the first 30 epochs) evolve to a better sparsity distribution with lower test FLOPs than ERK. After 30 epochs of gradual pruning, the network continues to be trained with this better distribution for 70 epochs, so the overhead of the early, FLOPs-heavier training phase is amortized by the later and longer training phase with fewer training FLOPs. 4.6 Ablation Study of Random Reinitialization Next, we ask whether what GraNet learns is the specific sparse connectivity alone or the sparse connectivity together with the weight values. We randomly reinitialize the pruned network with the same mask and retrain it. The results are given in Figure 4. The performance of the reinitialized networks falls significantly short of the performance achieved by GraNet (si = 0), indicating that what GraNet learns is the sparse connectivity together with the weight values. Besides, we find that the retraining performance with the GraNet mask is higher than with the GMP mask. This further confirms that Zero-Cost Neuroregeneration helps gradual pruning find more accurate mask positions. [Figure 4: Reinitialization ablation on subnetworks discovered by GMP and GraNet (si = 0). Two panels (VGG-19 on CIFAR-10 and VGG-19 on CIFAR-100) plot test accuracy [%] against sparsity for GraNet, GMP, reinitialization with the GDP mask, and reinitialization with the GMP mask.] 4.7 Comparison between Re-training and Extended Training In this section, we study whether re-training techniques can further improve the performance of the subnetworks discovered by GraNet.
The authors of the Lottery Ticket Hypothesis (LTH) [10] introduced a re-training technique, even if they did not evaluate it as such, in which the subnetworks discovered by iterative magnitude pruning are re-trained in isolation to full accuracy from the original initializations. Later on, learning rate rewinding (LRR) [54] was proposed to further improve re-training performance by rewinding only the learning rate. Since GraNet also utilizes magnitude pruning to discover subnetworks, it is natural to test whether these re-training techniques can benefit GraNet. As shown in Table 6, neither re-training technique brings benefits to GraNet. Instead of re-training the subnetworks, we find that simply extending the training time significantly boosts the performance of GraNet at similar computational cost. 5 Conclusion, and Reflection on Broader Impacts In this paper, we re-emphasize the merit of during-training pruning. Compared with recently proposed directions such as LTH and SNIP, during-training pruning is an efficient yet performant class of pruning methods that has received much less attention. We quantitatively study pruning during training from the perspective of pruning plasticity. Inspired by the findings on pruning plasticity and the mechanism of neuroregeneration in the nervous system, we further propose a novel sparse training method, GraNet, that performs cost-free connection regeneration during training. GraNet advances the state of the art in both dense-to-sparse training and sparse-to-sparse training. Our paper re-emphasizes the great potential of during-training pruning for reducing the training/inference resources required by ML models without sacrificing accuracy. This can have a significant positive environmental impact by reducing the energy cost and CO2 emissions of ML models [1, 53, 15, 56, 62]. 6 Acknowledgements This project is partially financed by the Dutch Research Council (NWO). We thank the reviewers for their constructive comments and questions, which improved the quality of our paper.
1. What is the focus of the paper regarding pruning during training? 2. What are the strengths of the proposed approach, particularly in its connections with other phenomena in the literature? 3. How does the reviewer assess the originality and significance of the work? 4. What are the weaknesses or areas for improvement in the paper, such as considering structured pruning or providing more emphasis on dynamic sparse training? 5. Do you have any questions regarding the paper's content or suggestions for further research?
Summary Of The Paper Review
Summary Of The Paper This paper proposes the concept of pruning plasticity to study the effect of pruning during training. Using the insights obtained from pruning plasticity, the authors improve the state-of-the-art sparse training performance. Since a large body of recent works has emerged to accelerate the training process, this work is helpful for this area. The findings are interesting and make good connections with separately reported phenomena in the literature. Based on the insights from pruning plasticity, the authors further propose two gradual pruning methods which improve the performance over the existing methods. Review Originality: (1) The paper does a thorough study of pruning during training, which has indeed been relatively neglected by researchers recently compared with after-training pruning and before-training pruning. The recently emerged before-training pruning (pruning at initialization) has certainly received a lot of attention. This work, however, doesn't follow the crowd and highlights the value of an existing technique - pruning during training. Given the failure of the existing before-training pruning methods to capture the important masks, as pointed out by [1], discovering either early-existing matching subnetworks or more accurate subnetworks during training has great value for the community. (2) The observations associated with pruning plasticity make good connections with several existing phenomena in the pruning literature, providing valuable insights to people who want to use these techniques in practice. (3) With the insights provided by pruning plasticity and the neuroregeneration mechanism of the nervous system, the authors modify the existing gradual magnitude pruning methods and demonstrate state-of-the-art sparse training performance as well as gradual pruning performance. Quality: (1) The paper is clearly written and easy to follow. The authors provide an adequate related work review. (2) I like the random initialization analysis of the masks discovered by GMP and GraNet, which makes the overall paper more robust. (3) The claims are well supported by extensive experiments. Significance: (1) While the idea of during-training pruning is not new, it indeed achieves the best performance-efficiency tradeoff compared with after-training pruning and before-training pruning. Works like this paper, which provide a better understanding of and performance improvements for pruning during training, are important for the ML community and provide good motivation for future hardware design. (2) The proposed methods are sound and performant. In particular, the 0.9% and 1.5% accuracy improvements achieved by GraNet-ST over RigL with ResNet-50 on ImageNet at sparsities of 80% and 90%, respectively, are quite impressive. It is, to my knowledge, the first study that consistently shows dynamic sparse training can outperform dense-to-sparse training with even fewer training FLOPs. Improvements: (1) This paper only considers unstructured pruning. I suppose that the findings on pruning plasticity can also generalize to the structured setting. It would be nice to test pruning plasticity with structured pruning. (2) The authors should cite [2,3,4] for the zero-cost neuroregeneration. (3) It is unclear to me whether the pruned weights are still updated during the backward pass. This makes a significant computational difference. It would be nice to make this point clearer.
(4) I suggest the authors put more emphasis on the dynamic sparse training part (GraNet-ST) rather than on GraNet, as it is more promising in terms of the accuracy-efficiency trade-off. The description of GraNet-ST in the current version is somewhat unclear, especially for readers who are not familiar with sparse training. Reference: [1] Frankle, Jonathan, et al. "Pruning neural networks at initialization: Why are we missing the mark?." arXiv preprint arXiv:2009.08576 (2020). [2] Mocanu, Decebal Constantin, et al. "Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science." Nature Communications 9.1 (2018): 1-12. [3] Evci, Utku, et al. "Rigging the lottery: Making all tickets winners." International Conference on Machine Learning. PMLR, 2020. [4] Dettmers, Tim, and Luke Zettlemoyer. "Sparse networks from scratch: Faster training without losing performance." arXiv preprint arXiv:1907.04840 (2019).
NIPS
Title Sparse Training via Boosting Pruning Plasticity with Neuroregeneration Abstract Works on lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) have raised a lot of attention currently on post-training pruning (iterative magnitude pruning), and before-training pruning (pruning at initialization). The former method suffers from an extremely large computation cost and the latter usually struggles with insufficient performance. In comparison, during-training pruning, a class of pruning methods that simultaneously enjoys the training/inference efficiency and the comparable performance, temporarily, has been less explored. To better understand during-training pruning, we quantitatively study the effect of pruning throughout training from the perspective of pruning plasticity (the ability of the pruned networks to recover the original performance). Pruning plasticity can help explain several other empirical observations about neural network pruning in literature. We further find that pruning plasticity can be substantially improved by injecting a brain-inspired mechanism called neuroregeneration, i.e., to regenerate the same number of connections as pruned. We design a novel gradual magnitude pruning (GMP) method, named gradual pruning with zerocost neuroregeneration (GraNet), that advances state of the art. Perhaps most impressively, its sparse-to-sparse version for the first time boosts the sparse-tosparse training performance over various dense-to-sparse methods with ResNet50 on ImageNet without extending the training time. We release all codes in https://github.com/Shiweiliuiiiiiii/GraNet. 1 Introduction Neural network pruning is the most common technique to reduce the parameter count, storage requirements, and computational costs of modern neural network architectures. Recently, posttraining pruning [49, 29, 18, 47, 10, 54, 74, 5, 57, 75] and before-training pruning [31, 30, 67, 63, 6, 11] have been two fast-rising fields, boosted by lottery tickets hypothesis (LTH) [10] and singleshot network pruning (SNIP) [31]. The process of post-training pruning typically involves fully pre-training a dense network as well as many cycles of retraining (either fine-tuning [18, 17, 39] or rewinding [12, 54]). As the training costs of the state-of-the-art models, e.g., GPT-3 [4] and FixEfficientNet-L2 [64] have exploded, this process can lead to a large amount of overhead cost. Recently emerged methods for pruning at initialization significantly reduce the training cost by identifying a trainable sub-network before the main training process. While promising, the existing methods fail to match the performance achieved by the magnitude pruning after training [11]. ∗Partial of this work have been done when Shiwei Liu worked as an intern at JD Explore Academy. 35th Conference on Neural Information Processing Systems (NeurIPS 2021). Compared with the above-mentioned two classes of pruning, during-training pruning is a class of methods that reap the acceleration benefits of sparsity early on the training and meanwhile achieve promising performance by consulting the information obtained during training. There are some works [77, 13, 33] attempting to gradually prune the network to the desired sparsity during training, while they mainly focus on the performance improvement. Up to now, the understanding of duringtraining pruning has been less explored due to its more complicated dynamical process, and the performance gap still exists between pruning during training and full dense training. 
To better understand the effect of pruning during the optimization process (not at inference), we study the ability of the pruned models to recover the original performance after a short continued training with the current learning rate, which we call pruning plasticity (see Section 3.1 for a more formal definition). Inspired by the neuroregeneration mechanism in the nervous system where new neurons and connections are synthesized to recover the damage in the nervous system [26, 41, 73], we examine if allowing the pruned network to regenerate new connections can improve pruning plasticity, and hence contribute to pruning during training. We consequently propose a parameter-efficient method to regenerate new connections during the gradual pruning process. Different from the existing works for pruning understanding which mainly focus on dense-to-sparse training [42] (training a dense model and prune it to the target sparsity), we also consider sparse-to-sparse training (training a sparse model yet adaptively re-creating the sparsity pattern) which recently has received an upsurge of interest in machine learning [44, 3, 9, 48, 8, 37, 36]. In short, we have the following main findings during the course of the study: #1. Both pruning rate and learning rate matter for pruning plasticity. When pruned with low pruning rates (e.g., 0.2), both dense-to-sparse training and sparse-to-sparse training can easily recover from pruning. On the contrary, if too many parameters are removed at one time, almost all models suffer from accuracy drops. This finding makes a connection to the success of the iterative magnitude pruning [10, 54, 5, 6, 65], where usually a pruning process with a small pruning rate (e.g., 0.2) needs to be iteratively repeated for good performance. Pruning plasticity also gradually decreases as the learning rate drops. When pruning happens during the training phase with large learning rates, models can easily recover from pruning (up to a certain level). However, pruning plasticity drops significantly after the second learning rate decay, leading to a situation where the pruned networks can not recover with continued training. This finding helps to explain several observations (1) for gradual magnitude pruning (GMP), it is always optimal to end pruning before the second learning rate drop [77, 13]; (2) dynamic sparse training (DST) benefits from a monotonically decreasing pruning rate with cosine or linear update schedule [8, 9]; (3) rewinding techniques [12, 54] outperform fine-tuning as rewinding retrains subnetworks with the original learning rate schedule whereas fine-tuning often retrains with the smallest learning rate. #2. Neuroregeneration improves pruning plasticity. Neuroregeneration [41, 73] refers to the regrowth or repair of nervous tissues, cells, or cell products. Conceptually, it involves synthesizing new neurons, glia, axons, myelin, or synapses, providing extra resources in the long term to replace those damaged by the injury, and achieving a lasting functional recovery. Such mechanism is closely related to the brain plasticity [51], and we borrow this concept to developing a computational regime. We show that, while regenerating the same number of connections as pruned, the pruning plasticity is observed to improve remarkably, indicating a more neuroplastic model being developed. However, it increases memory and computational overheads and seems to contradict the benefits of pruningduring-training. 
This however raises the question: can we achieve efficient neuroregeneration during training with no extra costs? We provide an affirmative answer to this question. #3. Pruning plasticity with neuroregeneration can be leveraged to substantially boost sparse training performance. The above-mentioned findings of pruning plasticity can generalize to the final performance level under a full continued training to the end. Imitating the neuroregeneration behavior [41, 73], we propose a new sparse training method – gradual pruning with zero-cost neuroregeneration (GraNet), which is capable of performing regeneration without increasing the parameter count. In experiments, GraNet establishes the new state-of-the-art performance bar for dense-to-sparse training and sparse-to-sparse training, respectively. Particularly, the latter for the first time boosts the sparse-to-sparse training performance over various dense-to-sparse methods by a large margin without extending the training time, with ResNet-50 on ImageNet. Besides the consistent performance improvement, we find the subnetworks that GraNet learns are more accurate than the ones learned by the existing gradual pruning method, providing explanations for the success of GraNet. 2 Related Work Post-Training Pruning. Methods that yield a sparse neural network from a pre-trained network by pruning the unimportant weights or neurons, to the best of our knowledge, were proposed in [24] and [50]. After that, various pruning methods have emerged to provide increasingly efficient methods to identify sparse neural networks for inference. The pruning criterion includes weight magnitude [18, 10], gradient [61] Hessian [29, 19, 59], Taylor expansion [47, 46], etc. Low-rank decomposition [7, 23, 17, 71] are also used to induce structured sparsity in terms of channels or filters. Most of the above-mentioned pruning methods require many pruning and re-training cycles to achieve the desired performance. During-Training Pruning. Instead of inheriting weights from a pre-trained model, some works attempt to discover well-performing sparse neural networks with one single training process. Gradual Magnitude Pruning (GMP), introduced in [77] and studied further in [13], gradually sparsifies the neural network during the training process until the desired sparsity is reached. Besides, [40] and [68] are prior works that enforce the network to sparse during training via L0 and L1 regularization, respectively. [60, 34, 55, 70, 28] moved further by introducing trainable sparsity heuristics to learn the sparse masks and weights simultaneously. These methods are all classified as dense-to-sparse training as they start from a dense network. Dynamic Sparse Training (DST) [44, 3, 48, 8, 9, 36, 35, 25] is another class of methods that prune models during training. The key factor of DST is that it starts from a random initialized sparse network and optimizes the sparse topology as well as the weights simultaneously during training (sparse-to-sparse training). Without an extended training time [37], sparse-to-sparse training usually falls short of dense-to-sparse training in terms of the prediction accuracy. For further details, see the survey of [43, 21]. Before-Training Pruning. Motivated by SNIP [31], many works [67, 63, 6] have emerged recently to explore the possibility of obtaining a trainable sparse neural network before the main training process. 
[11] demonstrates that the existing methods for pruning at initialization perform equally well when the unpruned weights are randomly shuffled, which reveals that what these methods discover is the layer-wise sparsity ratio, rather than the indispensable weight values and positions. Our analysis shows that both the mask positions and weight values are crucial for GraNet. 3 Methodology for Pruning Plasticity The primary goal of this paper is to study the effect of pruning as well as neuroregeneration on neural networks during the standard training process. Therefore, we do not consider post-training pruning and before-training pruning. Below, we introduce in detail the definition of pruning plasticity and the experimental design that we used to study pruning plasticity. 3.1 Metrics Let us denote Wt ∈ Rd as the weights of the network and mt ∈ {0, 1}d as the binary mask yielded from the pruning method at epoch t. Thus, the pruned network can be denoted as Wt mt. Let T be the total number of epochs the model should be trained. Let CONTRAINk(Wt mt, a) refers to the function that continues to train the pruned model for k epochs with the learning rate schedule a. Definition of Pruning plasticity. We define pruning plasticity as tCONTRAINk(Wt mt,at)− tPRE, where tPRE is the test accuracy measured before pruning and tCONTRAINk(Wt mt,at) is the test accuracy measured after k epoch of continued training CONTRAINk(Wt mt, at). Specifically, to better understand the effect of pruning on the current model status and to avoid the effect of learning rate decay, we fix the learning rate as the one when the model is pruned, i.e, at. This setting is also appealing to GMP [77, 13] and DST [44, 9, 48, 37] in which most of the pruned models are continually trained with the current learning rate for some time. Final performance gap. Nevertheless, we also investigate the effect of pruning on the final performance, that is, continually training the pruned networks to the end with the remaining learning rate schedule CONTRAINT−t(Wt mt, a[t+1:T ]). In this case, we report tCONTRAINT−t(Wt mt,a[t+1:T ]) − tFINAL, where tFINAL is the final test accuracy of the unpruned models. 3.2 Architectures and Datasets We choose two commonly used architectures to study pruning plasticity, VGG-19 [58] with batch normalization on CIFAR-10 [27], and ResNet-20 [20] on CIFAR-10. We share the summary of the networks, data, and hyperparameters of dense-to-sparse training in Table 1. We use standard implementations and hyperparameters available online, with the exception of the small batch size for the ResNet-50 on ImageNet due to the limited hardware resources (2× Tesla V100). All accuracies are in line with the baselines reported in the references [8, 11, 67, 9, 37]. 3.3 How to Prune, and How to Regenerate Structured and Unstructured Pruning. We consider unstructured and structured pruning in this paper. Structured pruning prunes weights in groups, or removes the entire neurons, convolutional filters, or channels, enabling acceleration with the off-the-shelf hardware. In particular, we choose the filter pruning method used in Li et al. [32]. Unstructured sparsity is a more promising direction not only due to its outstanding performance at extreme sparsities but the increasing support for sparse operation in the practical hardware [35, 14, 52, 76, 22]. For example, Liu et al. [35] illustrated for the first time the true potential of DST, demonstrating significant training/inference efficiency improvement over the dense training. 
Different from prior conventions [77, 13, 33, 2] where values of the pruned weights are kept, we set the pruned weights to zero to eliminate the historical information for all implementations in this paper. Magnitude pruning. We prune the weights with the smallest magnitude, as it has evolved as the standard method when pruning happens during training, e.g., GMP [77, 13] and DST [44, 9, 37]. We are also aware of other pruning criteria including but not limited to Hessian [29, 19, 59], Taylor expansion [47, 46], connection sensitivity [31], Gradient Flow [67], Neural Tangent Kernel [38, 16]. One-shot pruning. To isolate the pruning effect at different training stages and to avoid the interaction between two iterations of pruning, we focus on one-shot pruning. Please note that iterative pruning can also be generalized in our setting, as our experimental design includes neural networks trained at various sparsities and each of them is further pruned with various pruning rates. Layer-wise pruning and global pruning. We study both the layer-wise magnitude pruning and global magnitude pruning for pruning plasticity. Global magnitude pruning prunes different layers together and leads to non-uniform sparsity distributions; layer-wise pruning operates layer by layer, resulting in uniform distributions. Gradient-based regeneration. The simplest regeneration scheme is to randomly activate new connections [3, 44]. However, it would take a lot of time for random regeneration to discover the important connections, especially for the very extreme sparsities. Alternatively, gradients, including those for the connections with zero weights, provide good indicators for the connection importance. For this reason, we focus on gradient-based regeneration proposed in Rigged Lottery ( RigL) [9], i.e., regenerating the same number of connections as pruned with the largest gradient magnitude. 3.4 Experimental Results We study pruning plasticity during training with/without regeneration, for both dense training and sparse training. We report the results of ResNet-20 on CIFAR-10 with unstructured global pruning in the main body of the paper. The rest of the experiments are given in Appendix A. Unless otherwise stated, results are qualitatively similar across all networks. Concretely, we first pre-train networks at four sparsity levels, including 0, 0.5, 0.9, and 0.98. The sparse neural networks are trained with uniform distribution (i.e., all layers have the same sparsity). We further choose four pruning rates, e.g., 0.2, 0.5, 0.9, and 0.98, to measure the corresponding pruning plasticity of the pre-trained networks. Pruning plasticity. We continue to train the pruned model for 30 epochs and report pruning plasticity in Figure 2. Overall, the learning rate schedule, the pruning rate, and the sparsity of the original models all have a big impact on pruning plasticity. Pruning plasticity decreases as the learning rate decays for all models with different sparsity levels. The models trained with a large learning rate 0.1 can easily recover, or exceed the original performance except for the extremely large pruning rate 0.98. However, the models obtained during the later training phases can recover only with the mild pruning rate choices, e.g., 0.2 (orange lines) and 0.5 (green lines). We next demonstrate the effect of connection regeneration on pruning plasticity in the bottom row of Figure 2. 
It is clear to see that connection regeneration significantly improves pruning plasticity of all the cases, especially for the models that are over-pruned (purple lines). Still, even with connection regeneration, pruning plasticity suffers from performance degradation when pruning occurs after the learning rate drops. Final performance gap. Compared with the current model status, people might be more interested in the effect of pruning on the final performance. We further measure the performance gap between the original test accuracy of the unpruned models and the final test accuracy of the pruned model under a full continued training CONTRAINT−t(Wt mt, a[t+1:T ]) in Figure 3. We observe that, in this case, large learning rates do not enjoy large performance improvement, but still, the performance gap increases as the learning rate drops. It is reasonable to conjecture that the accuracy improvement of pruning plasticity with the large learning rate, 0.1, is due to the unconverged performance during the early phase of training. Besides, it is surprising to find that the final performance of extreme sparse networks (e.g., the third column and the fourth column) significantly benefits from mild pruning. Again, the ability of the pruned model to recover from pruning remarkably improves after regenerating the connections back. 4 Gradual Pruning with Zero-Cost Neuroregeneration So far, we have known that regenerating the important connections to the pruned models during training substantially improves pruning plasticity as well as the final performance. However, naively regenerating extra connections increases the parameter count and conflicts with the motivation of gradual pruning. Inspired by the mechanism of neuroregeneration in the nervous system, we propose a novel sparse training method which we call gradual pruning with zero-cost neuroregeneration (GraNet). GraNet consults the information produced throughout training and regenerates important connections during training in a parameter-efficient fashion. See Appendix B.1 for the pseudocode of GraNet. We introduce the main components of GraNet below. 4.1 Gradual Pruning We follow the gradual pruning scheme used in [77] and gradually sparsifies the dense network to the target sparsity level over n pruning iterations. Let us define si is the initial sparsity, sf is the target sparsity, t0 is is the starting epoch of gradual pruning, tf is the end epoch of gradual pruning, and ∆t is the pruning frequency. The pruning rate of each pruning iteration is: st = sf + (si − sf ) ( 1− t− t0 n∆t )3 , t ∈ {t0, t0 + ∆t, ..., t0 + n∆t} . (1) We choose global pruning for our method as it generally achieves better performance than uniform pruning. We also report the performance of the uniform sparsity as used in [13] in Appendix C.3. The conventional gradual pruning methods [77, 13] change the mask (not the weight values) to fulfill the pruning operation, so that the pruned connections have the possibility to be reactivated in the later training phases. Despite this, since the weights of the pruned connections are not updated, they have a small chance to receive sufficient updates to exceed the pruning threshold. This hinders the regeneration of the important connections. 4.2 Zero-Cost Neuroregeneration The main difference between GraNet and the conventional GMP methods [77, 13] is the Zero-Cost Neuroregeneration. 
Imitating the neuroregeneration of the peripheral nervous system [41, 73] where new neurons and connections are synthesized to replace the damaged ones, we first detect and eliminate the “damaged” connections, and then regenerate the same number of new connections. By doing this, we can achieve connection regeneration without increasing the number of connections. Concretely, we identify the “damaged” connections as the ones with the smallest weight magnitudes. Small magnitude indicates that either the weight’s gradient is small or a large number of oscillations occur to the gradient direction. Therefore, these weights have a small contribution to the training loss and can be removed. Again, we use the gradient as the importance score for regeneration, same as the regrow method as used in RigL [9]. Why we call it “Zero-Cost Neuroregeneration"? In addition to not increasing the connection (parameter) count, the backward pass of our method is sparse most of the time even though our regeneration utilizes the dense gradient to identify the important connections. We perform neuroregeneration immediately after each gradual pruning step, meaning that the regeneration occurs only once every several thousand iterations. The extra overhead to calculate the dense gradient can be amortized compared with the whole training costs. Compared with the methods [33, 69] that require updating all the weights in the backward pass, our method is much more training efficient, as around 2/3 of the training FLOPs is owing to the backward pass [9, 72]. Let us denote r as the ratio of the number of the regenerated connections to the total number of connections; W is the network weight. We first remove r proportion of “damaged” weights with the smallest magnitude by: W ′ = TopK (|W |, 1− r) . (2) Here TopK(v, k) returns the weight tensor retaining the top k-proportion of elements from v. Immediately after that, we regenerate r proportion of new connections based on the gradient magnitude: W = W ′ + TopK (|gi/∈W ′ |, r) , (3) where |gi/∈W ′ | are the gradient magnitude of the zero weights. We perform Zero-Cost Neuroregeneration layer by layer from the beginning of the training to the end. GraNet can naturally generalize to the dense-to-sparse training scenario and the sparse-to-sparse training scenario by setting the initial sparsity level si = 0 and si > 0 in Eq. (1), respectively. For simplicity, we set si = 0.5, t0 = 0, and tf as the epoch when performing the first learning rate decay for the sparse-to-sparse training. Different from the existing sparse-to-sparse training methods, i.e., SET [44], RigL [9], and ITOP [37], in which the sparsity is fixed throughout training, GraNet starts from a denser yet still sparse model and gradually prunes the sparse model to the desired sparsity. Although starting with more parameters, the global pruning technique of gradual pruning helps GraNet quickly evolve to a better sparsity distribution than RigL with lower feedforward FLOPs and higher test accuracy. What’s more, GraNet sparsifies all layers including the first convolutional layer and the last fully-connected layer. 4.3 Experimental Results We conduct various experiments to evaluate the effectiveness of GraNet. We compare GraNet with various dense-to-sparse methods and sparse-to-sparse methods. The results of Rigged Lottery (RigL) and GMP with CIFAR-10/100 were reproduced by our implementation with PyTorch so that the only difference between GraNet and GMP is the Zero-Cost Neuroregeneration. 
For each model, we divide the results into three groups from top to bottom: pruning at initialization, dynamic sparse training and dense-to-sparse methods. See Appendix B for more implementation details used in the experiments. GraNet (si = 0.5) refers to the sparse-to-sparse version and the and GraNet (si = 0) refers to the dense-to-sparse version. CIFAR-10/100. The results of CIFAR-10/100 are shared in Table 2. We can observe that performance differences among different methods on CIFAR-10 are generally small, but still, GraNet (si = 0) consistently improves the performance over GMP except for the sparsity 95%, and achieves the highest accuracy in 4 out of 6 cases. In terms of the more complex data CIFAR-100, the performance differences between the during-training pruning methods and before-training pruning methods are much larger. GraNet (si = 0) again consistently outperforms GMP with all sparsities, highlighting the benefits of Zero-Cost Neuroregeneration. It is maybe more interesting that GraNet (si = 0) even outperforms the post-training method, subdifferential inclusion for sparsity (SIS), by a large margin. In terms of sparse-to-sparse training, our proposed GraNet (si = 0.5) has a dominant performance over other methods. Especially at the very extreme sparsity 0.98, our method outperforms RigL by 1.40% and 2.22% with VGG-19 on CIFAR-10 and CIFAR-100, respectively. ImageNet. Due to the small data size, the experiments with CIFAR-10/100 may not be sufficient to draw a solid conclusion. We further evaluate our method with ResNet-50 on ImageNet in Table 3. We only run this experiment once due to the limited resources. We set t0 = 0 and tf = 30 for both GraNet (si = 0) and GraNet (si = 0.5) on ImageNet. Again, GraNet (si = 0) outperforms GMP consistently with only half training FLOPs and achieves the highest accuracy among all the dense-to-sparse methods at sparsity of 0.9. Surprisingly, GraNet (si = 0.5) significantly boosts the sparse-to-sparse training performance, even over the dense-to-sparse training. Concretely, GraNet (si = 0.5) outperforms RigL by 0.9% and 1.5% at sparsity 0.8 and 0.9, respectively. To the best of our knowledge, this is the first time in the literature that sparse-to-sparse training reaches a test accuracy of 76% with ResNet-50 on ImageNet at sparsity 0.8, without extension of training time. It is reasonable for GraNet (si = 0.5) to achieve better accuracy than RigL, since the denser models at the beginning help GraNet explore more the parameter space. According to the In-Time Over-Parameterization hypothesis [37], the performance of sparse training methods is highly correlated with the total number of parameters that the sparse model has visited. We further report the training/inference FLOPs required by all pruning methods. Compared with other dense-to-sparse methods, the final networks learned by GraNet (si = 0) require more FLOPs to test, whereas the overall training FLOPs required by GraNet (si = 0) are smaller than others. Even though starting from a denser model, GraNet (si = 0.5) requires less training and inference FLOPs than the state-of-the-art method, i.e., RigL. The sparsity budgets for 0.9 sparse ResNet-50 on ImageNet-1K learned by our methods are reported in Appendix D. We also report how FLOPs of the pruned ResNet-50 evolve during the course of training in Appendix E. 4.4 Effect of the Initial Sparsity As we mentioned earlier, the denser initial network is the key factor in the success of GraNet. 
We conducted experiments to study the effect of the initial sparsity on GraNet with ResNet-50 on ImageNet. The initial sparsity is chosen from [0.0, 0.5, 0.6, 0.7, 0.8, 0.9] and the final sparsity is fixed at 0.9. The results are reported in Table 4. We can see that the training FLOPs of GraNet are quite robust to the initial sparsity. Surprisingly yet reasonably, it seems that the smaller the initial sparsity is (up to 0.5), the better the final sparsity distribution GraNet finds, with higher test accuracy and fewer feedforward FLOPs. The lower feedforward FLOPs of the final network perfectly balance the overhead caused by the denser initial network.

4.5 Performance of GraNet at Extreme Sparsities

In this section, we report the results of GraNet and RigL at extreme sparsities. The initial sparsity is set to 0.5. When the final sparsity is relatively small (e.g., 0.8, 0.9), GraNet requires fewer (or the same number of) training FLOPs than RigL, whereas GraNet requires more training FLOPs than RigL when the final sparsity is extremely high (e.g., 0.95, 0.965). This makes sense: when the sparsity is extremely high, the FLOPs saved by the distribution discovered by GraNet are too small to amortize the overhead caused by the denser initial model. Yet, the increased number of training FLOPs of GraNet leads to a substantial accuracy improvement (> 2%) over RigL. The efficiency of GraNet ($s_i = 0.5$) comes from two important technical differences compared with RigL: (1) a better final sparsity distribution discovered by global pruning; (2) a shorter gradual pruning period (the first 30 epochs for ResNet-50 on ImageNet). Although it starts with more parameters, global pruning enables GraNet to quickly (within the first 30 epochs) evolve to a better sparsity distribution with lower test FLOPs than ERK. After 30 epochs of gradual pruning, the network continues to be trained with this better distribution for 70 epochs, so that the overhead of the early training phase with larger training FLOPs is amortized by the later and longer training phase with fewer training FLOPs.

4.6 Ablation Study of Random Reinitialization

Next, we ask whether what GraNet learns is the specific sparse connectivity alone or the sparse connectivity together with the weight values. We randomly reinitialize the pruned network with the same mask and retrain it. The results are given in Figure 4. The performance of the reinitialized networks falls significantly short of the performance achieved by GraNet ($s_i = 0$), indicating that what GraNet learns is the sparse connectivity together with the weight values. Besides, we find that the retraining performance of GraNet is higher than that of GMP. This further confirms that Zero-Cost Neuroregeneration helps gradual pruning find more accurate mask positions.

[Figure 4: Reinitialization ablation on subnetworks discovered by GMP and GraNet ($s_i = 0$); test accuracy vs. sparsity for VGG-19 on CIFAR-10 (left) and CIFAR-100 (right); curves: GraNet, GMP, reinitialization with GDP mask, reinitialization with GMP mask.]

4.7 Comparison between Re-training and Extended Training

In this section, we study whether re-training techniques can further improve the performance of the subnetworks discovered by GraNet.
The authors of the Lottery Ticket Hypothesis (LTH) [10] introduced a re-training technique, even if they did not present it as such, in which the subnetworks discovered by iterative magnitude pruning can be re-trained in isolation to full accuracy with the original initializations. Later on, learning rate rewinding (LRR) [54] was proposed to further improve the re-training performance by rewinding only the learning rate. Since GraNet also utilizes magnitude pruning to discover subnetworks, it is natural to test whether these re-training techniques can bring benefits to GraNet. As shown in Table 6, neither re-training technique brings benefits to GraNet. Instead of re-training the subnetworks, we find that simply extending the training time significantly boosts the performance of GraNet at similar computational cost.

5 Conclusion and Reflection on Broader Impacts

In this paper, we re-emphasize the merit of during-training pruning. Compared with recently proposed approaches such as LTH and SNIP, during-training pruning is an efficient yet performant class of pruning methods that has received much less attention. We quantitatively study pruning during training from the perspective of pruning plasticity. Inspired by the findings on pruning plasticity and the mechanism of neuroregeneration in the nervous system, we further propose a novel sparse training method, GraNet, which performs cost-free connection regeneration during training. GraNet advances the state of the art in both dense-to-sparse training and sparse-to-sparse training. Our paper re-emphasizes the great potential of during-training pruning in reducing the training/inference resources required by ML models without sacrificing accuracy. This can have a significant environmental impact by reducing the energy cost of ML models and their CO2 emissions [1, 53, 15, 56, 62].

6 Acknowledgement

This project is partially financed by the Dutch Research Council (NWO). We thank the reviewers for the constructive comments and questions, which improved the quality of our paper.
1. What is the main contribution of the paper regarding dynamic sparse training? 2. What are the strengths of the proposed approach compared to other methods? 3. Are there any similarities or differences between the proposed method and previous works such as RigL? 4. How does the reviewer assess the novelty and impact of the paper's content? 5. What are the weaknesses or areas for improvement in the paper, particularly regarding experimental clarification and comparisons with other works?
Summary Of The Paper Review
Summary Of The Paper
This work aims to combine ideas presented in recent dynamic sparse training (DST) methods with gradual magnitude pruning. Additionally, the paper has some novel experiments that confirm previous work on when it is best to prune during training. Although an interesting and potentially impactful idea, the story and results need a bit more work.

Review
Pros
Results in Fig:2 are quite interesting, though some clarification is needed (see below). The idea of combining pruning and re-generation is novel and the authors do a good job of comparing with a large set of methods.

Cons
"For this reason, we determine to focus on gradient-based regeneration, i.e., regenerating the same number of connections as pruned with the largest gradient magnitude." This sounds like what [rigl] does for training sparse networks. Is the method proposed for regeneration practically the same as rigl? If so, it would be nice to call it so. If not, it would be nice to see a discussion on similarities and differences. My understanding is GraNet=GMP+RigL. It would be nice to make this clear.
It needs to be clear which experiments use global pruning (and thus non-uniform sparsity). For example, on Table-3, GraNet-ST (ERK) requires 0.37× training flops, which is less than rigl (erk). Shouldn't the training flops of GraNet-ST always be larger than rigl's, since it starts with more parameters? I am assuming global pruning is not used here; if it is, then the resulting distribution is not ERK anymore and this should be made clear.
It should be made clear that GraNet-ST is different from other sparse training methods like set and rigl in the sense that it requires more density/memory to begin with. I couldn't see any ablations on the initial sparsity of the GraNet-ST method. I think the value 0.5 is picked, which is a significant increase in parameter count if the final sparsity is >90%; it would be nice to see the effect of the initial sparsity.
It might be better to use the term 'sparsity' instead of 'density' in Figure 2. More importantly, I am confused about what "pruning rate" is referring to. Reading the text, it seems to refer to the sparsity of the network after pruning. Then it is not clear how you prune a 98% sparse (2% dense) network to 50% sparsity. Do you only grow? This needs to be clarified, I think. Experiments in Figure:2 look quite interesting otherwise.

Minor
"To better understand the effect of pruning during the optimization proces" [zhu2017] is a relevant work that studies the same.
"To the best of our knowledge, this is the first time in the literature that sparse-to-sparse training reaches a test accuracy of 76% with ResNet-50 on ImageNet at sparsity 0.8." The [rigl] paper, I think, has 5x training runs with such results. If extended training runs are not considered, this should be explained. I think a better way to compare dense-to-sparse methods with DST methods is to use training flops on the x-axis.
"Perhaps most impressively, the latter for the first time boosts the sparse-to-sparse training performance over various dense-to-sparse methods by a large margin with ResNet-50 on ImageNet." I guess large-margin is subjective but [rigl] shows the same.
[rigl] https://arxiv.org/abs/1911.11134
[zhu2017] https://arxiv.org/pdf/1710.01878.pdf

After Rebuttal
I thank the authors for their strong rebuttal. I think all my concerns/questions are addressed and I believe this is a timely work that will inspire and impact future research. Therefore I support acceptance.
One final suggestion: I think it might help the readers to name both versions of the method GraNet, since when the initial sparsity is 0, GraNet-ST is GraNet. Then the authors can use something like GraNet (s=0) to identify the pruning version.
NIPS
Title CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression

Abstract Due to the high communication cost in distributed and federated learning, methods relying on compressed communication are becoming increasingly popular. Besides, the best theoretically and practically performing gradient-type methods invariably rely on some form of acceleration/momentum to reduce the number of communications (faster convergence), e.g., Nesterov's accelerated gradient descent [31, 32] and Adam [14]. In order to combine the benefits of communication compression and convergence acceleration, we propose a compressed and accelerated gradient method based on ANITA [20] for distributed optimization, which we call CANITA. Our CANITA achieves the first accelerated rate $O\Big(\sqrt{\big(1+\sqrt{\frac{\omega^3}{n}}\big)\frac{L}{\epsilon}}+\omega\big(\frac{1}{\epsilon}\big)^{\frac{1}{3}}\Big)$, which improves upon the state-of-the-art non-accelerated rate $O\big(\big(1+\frac{\omega}{n}\big)\frac{L}{\epsilon}+\frac{\omega^2+\omega}{\omega+n}\frac{1}{\epsilon}\big)$ of DIANA [12] for distributed general convex problems, where $\epsilon$ is the target error, $L$ is the smoothness parameter of the objective, $n$ is the number of machines/devices, and $\omega$ is the compression parameter (larger $\omega$ means more compression can be applied, and no compression implies $\omega = 0$). Our results show that as long as the number of devices $n$ is large (often true in distributed/federated learning), or the compression $\omega$ is not very high, CANITA achieves the faster convergence rate $O\big(\sqrt{\frac{L}{\epsilon}}\big)$, i.e., the number of communication rounds is $O\big(\sqrt{\frac{L}{\epsilon}}\big)$ (vs. $O\big(\frac{L}{\epsilon}\big)$ achieved by previous works). As a result, CANITA enjoys the advantages of both compression (compressed communication in each round) and acceleration (much fewer communication rounds).

1 Introduction

With the proliferation of edge devices, such as mobile phones, wearables and smart home appliances, comes an increase in the amount of data rich in potential information which can be mined for the benefit of humankind. One of the approaches of turning the raw data into information is via federated learning [15, 29], where typically a single global supervised model is trained in a massively distributed manner over a network of heterogeneous devices. Training supervised distributed/federated learning models is typically performed by solving an optimization problem of the form
$$\min_{x\in\mathbb{R}^d}\Big\{ f(x) := \frac{1}{n}\sum_{i=1}^{n} f_i(x)\Big\}, \qquad (1)$$
where $n$ denotes the number of devices/machines/workers/clients, and $f_i:\mathbb{R}^d\to\mathbb{R}$ is a loss function associated with the data stored on device $i$. We will write $x^* := \arg\min_{x\in\mathbb{R}^d} f(x)$. If more than one minimizer exists, $x^*$ denotes an arbitrary but fixed solution.
We will rely on the solution concept captured in the following definition:

Definition 1 A random vector $\hat{x}\in\mathbb{R}^d$ is called an $\epsilon$-solution of the distributed problem (1) if $\mathbb{E}[f(\hat{x})]-f(x^*)\le\epsilon$, where the expectation is with respect to the randomness inherent in the algorithm used to produce $\hat{x}$.

In distributed and federated learning problems of the form (1), communication of messages across the network typically forms the key bottleneck of the training system. In the modern practice of supervised learning in general and deep learning in particular, this is exacerbated by the reliance on massive models described by millions or even billions of parameters. For these reasons, it is very important to devise novel and more efficient training algorithms capable of decreasing the overall communication cost, which can be formalized as the product of the number of communication rounds necessary to train a model of sufficient quality, and the computation and communication cost associated with a typical communication round.

1.1 Methods with compressed communication

One of the most common strategies for improving communication complexity is communication compression [37, 1, 40, 8, 30, 9, 26, 24]. This strategy is based on reducing the size of communicated messages via the application of a suitably chosen lossy compression mechanism, saving precious time in each communication round, in the hope that this will not increase the total number of communication rounds. Several recent theoretical results suggest that by combining an appropriate (randomized) compression operator with a suitably designed gradient-type method, one can obtain an improvement in the total communication complexity over comparable baselines not performing any compression. For instance, this is the case for distributed compressed gradient descent (CGD) [1, 13, 8, 24], and for distributed CGD methods which employ variance reduction to tame the variance introduced by compression [7, 30, 9, 24, 6].

1.2 Methods with acceleration

Acceleration/momentum for gradient-type methods is widely studied in standard optimization, with the aim of achieving faster convergence rates (fewer communication rounds) [33, 31, 32, 17, 28, 2, 18, 16, 23, 20]. Deep learning practitioners typically rely on Adam [14], or one of its many variants, which besides other tricks also adopts momentum. In particular, ANITA [20] obtains the current state-of-the-art convergence results for convex optimization. In this paper, we adapt the acceleration mechanism of ANITA [20] to the distributed setting with compression.

1.3 Can communication compression and acceleration be combined?

Encouraged by the recent theoretical success of communication compression, and the widespread success of accelerated methods, in this paper we seek to further enhance CGD methods with acceleration/momentum, with the aim of obtaining provable improvements in overall communication complexity. Can distributed gradient-type methods theoretically benefit from the combination of gradient compression and acceleration/momentum? To the best of our knowledge, no such results exist in the general convex regime, and in this paper we close this gap by designing a method that provably enjoys the advantages of both compression (compressed communication in each round) and acceleration (much fewer communication rounds). While there is an abundance of research studying communication compression and acceleration in isolation, there is very limited work on the combination of both approaches.
The first successful combination of gradient compression and acceleration/momentum was recently achieved by the ADIANA method of Li et al. [26]. However, Li et al. [26] only provide theoretical results for strongly convex problems, and their method is not applicable to (general) convex problems. So, one needs to both design a new method to handle the convex case and perform its analysis. A priori, it is not at all clear what approach would work. To the best of our knowledge, besides the initial work [26], we are only aware of two other works addressing this question [41, 34]. However, both of these works still only focus on the simpler and less practically relevant strongly convex setting. Thus, this line of research is still largely unexplored. For instance, the well-known logistic regression problem is convex but not strongly convex. Finally, even if a problem is strongly convex, the modulus of strong convexity is typically not known, or hard to estimate properly.

2 Summary of Contributions

In this paper we propose and analyze an accelerated gradient method with compressed communication, which we call CANITA (described in Algorithm 1), for solving distributed general convex optimization problems of the form (1). In particular, CANITA can loosely be seen as a combination of the accelerated gradient method ANITA of [20] and the variance-reduced compressed gradient method DIANA of [30]. Ours is the first work provably combining the benefits of communication compression and acceleration in the general convex regime.

2.1 First accelerated rate for compressed gradient methods in the convex regime

For general convex problems, CANITA is the first compressed-communication gradient method with an accelerated rate. In particular, our CANITA solves the distributed problem (1) in $O\Big(\sqrt{\big(1+\sqrt{\frac{\omega^3}{n}}\big)\frac{L}{\epsilon}}+\omega\big(\frac{1}{\epsilon}\big)^{\frac{1}{3}}\Big)$ communication rounds, which improves upon the current state-of-the-art result $O\big(\big(1+\frac{\omega}{n}\big)\frac{L}{\epsilon}+\frac{\omega^2+\omega}{\omega+n}\frac{1}{\epsilon}\big)$ achieved by the DIANA method [12]. See Table 1 for more comparisons.¹ ²

Let us now illustrate the improvements coming from this new bound on an example with concrete numerical values. Let the compression ratio be 10% (the size of the compressed message is $0.1\cdot d$, where $d$ is the size of the uncompressed message). If random sparsification or quantization is used to achieve this, then $\omega\approx 10$ (see Section 3.1). Further, if the number of devices/machines is $n=10^6$ and the target error tolerance is $\epsilon=10^{-6}$, then the number of communication rounds of our CANITA method is $O(10^3)$, while the number of communication rounds of the previous state-of-the-art method DIANA [12] is $O(10^6)$, i.e., $O\big(\sqrt{\frac{L}{\epsilon}}\big)$ vs. $O\big(\frac{L}{\epsilon}\big)$. This is an improvement of three orders of magnitude. Moreover, the numerical experiments in Section 6 indeed show that the performance of our CANITA is much better than that of the previous non-accelerated compressed methods (QSGD and DIANA), corroborating the theoretical results (see Table 1) and confirming the practical superiority of our accelerated CANITA method.

¹In the strongly convex column of Table 1, $\kappa := \frac{L}{\mu}$ denotes the condition number, where $L$ is the smoothness parameter and $\mu>0$ is the strong convexity parameter.
²Here QSGD [1] needs an additional bounded gradient assumption, i.e., $\|\nabla f_i(x)\|^2\le G^2$ for all $i\in[n]$ and $x\in\mathbb{R}^d$.

2.2 Accelerated rate with limited compression for free
For strongly convex problems, Li et al. [26] showed that if the number of devices/machines $n$ is large, or the compression variance parameter $\omega$ is not very high ($\omega\le n^{1/3}$), then their ADIANA method enjoys the benefits of both compression and acceleration (i.e., $\sqrt{\kappa}\log\frac{1}{\epsilon}$ for ADIANA vs. $\kappa\log\frac{1}{\epsilon}$ for previous works). In this paper, we consider the general convex setting and show that the proposed CANITA also enjoys the benefits of both compression and acceleration. Similarly, if $\omega\le n^{1/3}$ (i.e., many devices, or limited compression variance), CANITA achieves the accelerated rate $\sqrt{\frac{L}{\epsilon}}$ vs. the $\frac{L}{\epsilon}$ of previous works. This means that the compression does not hurt the accelerated rate at all. Note that the second term $\big(\frac{1}{\epsilon}\big)^{\frac{1}{3}}$ is of lower order compared with the first term $\sqrt{\frac{L}{\epsilon}}$.

2.3 Novel proof technique

The proof behind the analysis of CANITA is significantly different from that of ADIANA [26], which critically relies on strong convexity. Moreover, the theoretical rate in the strongly convex case is linear, $O(\log\frac{1}{\epsilon})$, while it is sublinear, $O(\frac{1}{\epsilon})$ or $O(\sqrt{\frac{1}{\epsilon}})$ (accelerated), in the general convex case. We hope that our novel analysis can provide new insights and shed light on future work.

3 Preliminaries

Let $[n]$ denote the set $\{1,2,\cdots,n\}$ and $\|\cdot\|$ denote the Euclidean norm for a vector and the spectral norm for a matrix. Let $\langle u,v\rangle$ denote the standard Euclidean inner product of two vectors $u$ and $v$. We use $O(\cdot)$ and $\Omega(\cdot)$ to hide absolute constants.

3.1 Assumptions about the compression operators

We now introduce the notion of a randomized compression operator, which we use to compress the gradients to save on communication. We rely on a standard class of unbiased compressors (see Definition 2) that has been used in the context of distributed gradient methods before [1, 13, 9, 24, 26].

Definition 2 (Compression operator) A randomized map $\mathcal{C}:\mathbb{R}^d\to\mathbb{R}^d$ is an $\omega$-compression operator if
$$\mathbb{E}[\mathcal{C}(x)]=x,\qquad \mathbb{E}\big[\|\mathcal{C}(x)-x\|^2\big]\le\omega\|x\|^2,\qquad\forall x\in\mathbb{R}^d. \qquad (2)$$
In particular, no compression ($\mathcal{C}(x)\equiv x$) implies $\omega=0$.

It is well known that the conditions (2) are satisfied by many practically useful compression operators (see Table 1 in [3, 36]). For illustration purposes, we now present a couple of canonical examples: sparsification and quantization.

Example 1 (Random sparsification). Given $x\in\mathbb{R}^d$, the random-$k$ sparsification operator is defined by $\mathcal{C}(x):=\frac{d}{k}\cdot(\xi_k\odot x)$, where $\odot$ denotes the Hadamard (element-wise) product and $\xi_k\in\{0,1\}^d$ is a uniformly random binary vector with $k$ nonzero entries ($\|\xi_k\|_0=k$). This random-$k$ sparsification operator $\mathcal{C}$ satisfies (2) with $\omega=\frac{d}{k}-1$. By setting $k=d$, this reduces to the identity compressor, whose variance is obviously zero: $\omega=0$.

Example 2 (Random quantization). Given $x\in\mathbb{R}^d$, the $(p,s)$-quantization operator is defined by $\mathcal{C}(x):=\mathrm{sign}(x)\cdot\|x\|_p\cdot\frac{1}{s}\cdot\xi_s$, where $p,s\ge1$ are integers, and $\xi_s\in\mathbb{R}^d$ is a random vector with $i$-th element
$$\xi_s(i):=\begin{cases} l+1, & \text{with probability } \frac{|x_i|}{\|x\|_p}s-l,\\ l, & \text{otherwise.}\end{cases}$$
The level $l$ satisfies $\frac{|x_i|}{\|x\|_p}\in\big[\frac{l}{s},\frac{l+1}{s}\big]$. The probability is chosen so that $\mathbb{E}[\xi_s(i)]=\frac{|x_i|}{\|x\|_p}s$. This $(p,s)$-quantization operator $\mathcal{C}$ satisfies (2) with $\omega=2+\frac{d^{1/p}+d^{1/2}}{s}$. In particular, QSGD [1] used $p=2$ (i.e., $(2,s)$-quantization) and proved that the expected sparsity of $\mathcal{C}(x)$ is $\mathbb{E}[\|\mathcal{C}(x)\|_0]=O\big(s(s+\sqrt{d})\big)$.
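As a concrete illustration of these two canonical compressors, here is a minimal NumPy sketch. The helper names and the seeded generator are our own illustrative choices, not the implementation used in the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_k_sparsification(x, k):
    """Random-k sparsification (Example 1): keep k coordinates chosen uniformly at random
    and rescale by d/k, so that E[C(x)] = x and omega = d/k - 1."""
    d = x.size
    mask = np.zeros(d)
    mask[rng.choice(d, size=k, replace=False)] = 1.0
    return (d / k) * mask * x

def random_quantization(x, s, p=2):
    """Random (p, s)-quantization (Example 2): each |x_i| / ||x||_p is randomly rounded to one
    of the two nearest levels l/s or (l+1)/s, so that the compressor is unbiased."""
    norm = np.linalg.norm(x, ord=p)
    if norm == 0.0:
        return np.zeros_like(x)
    ratio = np.abs(x) / norm * s                 # lies in [0, s]
    l = np.floor(ratio)
    prob_up = ratio - l                          # P[round up] = |x_i|/||x||_p * s - l
    xi = l + (rng.random(x.size) < prob_up)      # random level in {l, l+1}
    return np.sign(x) * norm * xi / s

# Quick unbiasedness check: averaging many compressions should approach x.
x = rng.standard_normal(5)
print(np.mean([random_quantization(x, s=4) for _ in range(20000)], axis=0) - x)
```

Both operators satisfy Definition 2; their parameters $k$ and $s$ control the trade-off between the per-round communication cost and the variance parameter $\omega$.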
3.2 Assumptions about the functions

Throughout the paper, we assume that the functions $f_i$ are convex and have Lipschitz continuous gradients.

Assumption 1 Functions $f_i:\mathbb{R}^d\to\mathbb{R}$ are convex, differentiable, and $L$-smooth. The last condition means that there exists a constant $L>0$ such that for all $i\in[n]$ we have
$$\|\nabla f_i(x)-\nabla f_i(y)\|\le L\|x-y\|,\qquad\forall x,y\in\mathbb{R}^d. \qquad (3)$$
It is easy to see that the objective $f(x)=\frac{1}{n}\sum_{i=1}^n f_i(x)$ in (1) satisfies (3) provided that the constituent functions $\{f_i\}$ do.

4 The CANITA Algorithm

In this section, we describe our method, for which we coin the name CANITA, designed for solving problem (1), which is of importance in distributed and federated learning, and contrast it with the most closely related methods ANITA [20], DIANA [30] and ADIANA [26].

Algorithm 1 Distributed compressed accelerated ANITA method (CANITA)
Input: initial point $x^0\in\mathbb{R}^d$, initial shift vectors $h_1^0,\dots,h_n^0\in\mathbb{R}^d$, probabilities $\{p_t\}$, and positive stepsizes $\{\alpha_t\}$, $\{\eta_t\}$, $\{\theta_t\}$
1: Initialize: $w^0=z^0=x^0$ and $h^0=\frac{1}{n}\sum_{i=1}^n h_i^0$
2: for $t=0,1,2,\dots$ do
3:   $y^t=\theta_t x^t+(1-\theta_t)w^t$
4:   for all machines $i=1,2,\dots,n$ do in parallel
5:     Compress the shifted local gradient $\mathcal{C}_i^t(\nabla f_i(y^t)-h_i^t)$ and send the result to the server
6:     Update the local shift $h_i^{t+1}=h_i^t+\alpha_t\,\mathcal{C}_i^t(\nabla f_i(w^t)-h_i^t)$
7:   end for
8:   Aggregate the received compressed local gradient information:
       $g^t=h^t+\frac{1}{n}\sum_{i=1}^n\mathcal{C}_i^t(\nabla f_i(y^t)-h_i^t)$   (compute the gradient estimator)
       $h^{t+1}=h^t+\alpha_t\frac{1}{n}\sum_{i=1}^n\mathcal{C}_i^t(\nabla f_i(w^t)-h_i^t)$   (maintain the average of local shifts)
9:   Perform the update step: $x^{t+1}=x^t-\frac{\eta_t}{\theta_t}g^t$
10:  $z^{t+1}=\theta_t x^{t+1}+(1-\theta_t)w^t$
11:  $w^{t+1}=z^{t+1}$ with probability $p_t$, and $w^{t+1}=w^t$ with probability $1-p_t$
12: end for

4.1 CANITA: description of the method

Our proposed method CANITA, formally described in Algorithm 1, is an accelerated gradient method supporting compressed communication. It is the first method combining the benefits of acceleration and compression in the general convex regime (without strong convexity). In each round $t$, each machine computes its local gradient (e.g., $\nabla f_i(y^t)$), and a shifted version is compressed and sent to the server (Line 5 of Algorithm 1). The local shifts $h_i^t$ change adaptively throughout the iterative process (Line 6), and have the role of reducing the variance introduced by the compression $\mathcal{C}(\cdot)$. If no compression is used, we may simply set the shifts to $h_i^t=0$ for all $i,t$. The server subsequently aggregates all received messages to obtain the gradient estimator $g^t$ and to maintain the average of the local shifts $h^{t+1}$ (Line 8), and then performs the gradient update step (Line 9) and updates the momentum sequences (Lines 10 and 3). Besides, the last Line 11 adopts a randomized update rule for the auxiliary vectors $w^t$, which simplifies the algorithm and the analysis, resembling the workings of the loopless SVRG method used in [16, 20].
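To complement the pseudocode, here is a minimal NumPy-style sketch of a single communication round of Algorithm 1. The function signature and the callables local_grad and compress are our own illustrative assumptions (not the authors' code); compress stands for any $\omega$-compression operator in the sense of Definition 2.

```python
import numpy as np

def canita_round(x, w, h, h_avg, local_grad, compress, theta, eta, alpha, p, rng):
    """One round of Algorithm 1 (notation as in the paper). local_grad(i, v) returns the
    gradient of f_i at v, h is the list of local shifts h_i^t, and h_avg is their average h^t."""
    n = len(h)
    y = theta * x + (1.0 - theta) * w                       # Line 3

    # Lines 5-6: machines compress the shifted gradients at y^t and w^t and update their shifts.
    msg_y = [compress(local_grad(i, y) - h[i]) for i in range(n)]
    msg_w = [compress(local_grad(i, w) - h[i]) for i in range(n)]
    h = [h[i] + alpha * msg_w[i] for i in range(n)]

    # Line 8: the server aggregates the compressed messages.
    g = h_avg + sum(msg_y) / n                              # gradient estimator g^t
    h_avg = h_avg + alpha * sum(msg_w) / n                  # average of local shifts h^{t+1}

    x_new = x - (eta / theta) * g                           # Line 9
    z_new = theta * x_new + (1.0 - theta) * w               # Line 10
    w_new = z_new if rng.random() < p else w                # Line 11
    return x_new, w_new, h, h_avg

# Toy usage with the identity compressor (omega = 0) and two quadratic local objectives.
rng = np.random.default_rng(0)
local_grad = lambda i, v: (i + 1.0) * v            # gradient of f_i(v) = (i+1)/2 * ||v||^2
x = np.ones(3); w = np.ones(3)
h = [np.zeros(3), np.zeros(3)]; h_avg = np.zeros(3)
x, w, h, h_avg = canita_round(x, w, h, h_avg, local_grad, lambda v: v,
                              theta=0.5, eta=0.1, alpha=1.0, p=0.5, rng=rng)
```

Note that in this sketch each machine sends two compressed vectors per round (one for the gradient estimator and one for the shift update), mirroring Lines 5, 6 and 8 of Algorithm 1.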
4.2 CANITA vs existing methods

CANITA can loosely be seen as a combination of the accelerated gradient method ANITA of [20] and the variance-reduced compressed gradient method DIANA of [30]. In particular, CANITA uses momentum/acceleration steps (see Lines 3 and 10 of Algorithm 1) inspired by those of ANITA [20], and adopts the shifted compression framework for each machine (see Lines 5 and 6 of Algorithm 1) as in the DIANA method [30]. We prove that CANITA enjoys the benefits of both methods simultaneously, i.e., the convergence acceleration of ANITA and the gradient compression of DIANA. Although CANITA can conceptually be seen as a combination of ANITA [20] and DIANA [30, 9, 12] from an algorithmic perspective, the analysis of CANITA is entirely different. Let us now briefly outline some of the main differences.

• For example, compared with ANITA [20], CANITA needs to deal with the extra compression of shifted local gradients in the distributed network. Thus, the gradient estimator $g^t$ obtained in Line 8 of Algorithm 1 is substantially different from, and more complicated than, the one in ANITA, which necessitates a novel proof technique.
• Compared with DIANA [30, 9, 12], the extra momentum steps in Lines 3 and 10 of Algorithm 1 make the analysis of CANITA more complicated than that of DIANA. We obtain the accelerated rate $O\big(\sqrt{\frac{L}{\epsilon}}\big)$ rather than the non-accelerated rate $O\big(\frac{L}{\epsilon}\big)$ of DIANA, and this is impossible without a substantially different proof technique.
• Compared with the accelerated DIANA method ADIANA of [26], the analysis of CANITA is also substantially different, since CANITA cannot exploit the strong convexity assumed therein.

Finally, please refer to Section 2, where we summarize our contributions, for additional discussion.

5 Convergence Results for the CANITA Algorithm

In this section, we provide convergence results for CANITA (Algorithm 1). In order to simplify the expressions appearing in our main result (see Theorem 1 in Section 5.1) and in the lemmas needed to prove it (see Appendix A), it will be convenient to let
$$F^t:=f(w^t)-f(x^*),\qquad H^t:=\frac{1}{n}\sum_{i=1}^n\|\nabla f_i(w^t)-h_i^t\|^2,\qquad D^t:=\frac{1}{2}\|x^t-x^*\|^2. \qquad (4)$$

5.1 Generic convergence result

We first present the main convergence theorem of CANITA for solving the distributed optimization problem (1) in the general convex regime.

Theorem 1 Suppose that Assumption 1 holds and the compression operators $\{\mathcal{C}_i^t\}$ used in Algorithm 1 satisfy (2) of Definition 2. Consider any two positive sequences $\{\beta_t\}$ and $\{\gamma_t\}$ such that the probabilities $\{p_t\}$ and positive stepsizes $\{\alpha_t\}$, $\{\eta_t\}$, $\{\theta_t\}$ of Algorithm 1 satisfy the relations
$$\alpha_t\le\frac{1}{1+\omega},\qquad \eta_t\le\frac{1}{L\left(1+\beta_t+4p_t\gamma_t\left(1+\frac{2p_t}{\alpha_t}\right)\right)} \qquad (5)$$
for all $t\ge0$, and
$$\frac{2\omega}{\beta_t n}+4p_t\gamma_t\left(1+\frac{2p_t}{\alpha_t}\right)\le1-\theta_t,\qquad \frac{(1-p_t\theta_t)\eta_t}{p_t\theta_t^2}\le\frac{\eta_{t-1}}{p_{t-1}\theta_{t-1}^2},\qquad \left(\frac{\omega}{\beta_t n}+\left(1-\frac{\alpha_t}{2}\right)\gamma_t\right)\frac{\eta_t}{\theta_t^2}\le\frac{\gamma_{t-1}\eta_{t-1}}{\theta_{t-1}^2} \qquad (6)$$
for all $t\ge1$. Then the sequences $\{x^t,w^t,h_i^t\}$ of CANITA (Algorithm 1) satisfy, for all $t\ge0$, the inequality
$$\mathbb{E}\left[F^{t+1}+\frac{\gamma_t p_t}{L}H^{t+1}\right]\le\frac{\theta_t^2 p_t}{\eta_t}\left(\frac{(1-\theta_0 p_0)\eta_0}{\theta_0^2 p_0}F^0+\left(\frac{\omega}{\beta_0 n}+\left(1-\frac{\alpha_0}{2}\right)\gamma_0\right)\frac{\eta_0}{\theta_0^2 L}H^0+D^0\right), \qquad (7)$$
where the quantities $F^t$, $H^t$, $D^t$ are defined in (4).

The detailed proof of Theorem 1, which relies on six lemmas, is provided in Appendix A. In particular, the proof simply follows from the key Lemma 6 (see Appendix A.2), while Lemma 6 closely relies on the previous five Lemmas 1–5 (see Appendix C.6). Note that all proofs for these six lemmas are deferred to Appendix C.

As we shall see in detail in Section 5.2, the sequences $\beta_t$, $\gamma_t$, $p_t$ and $\alpha_t$ can be fixed to constants.³ However, the relaxation parameter $\theta_t$ needs to be decreasing, and the stepsize $\eta_t$ may be increasing until a certain threshold. In particular, we choose
$$\beta_t\equiv c_1,\quad\gamma_t\equiv c_2,\quad p_t\equiv c_3,\quad\alpha_t\equiv c_4,\quad\theta_t=\frac{c_5}{t+c_6},\quad\eta_t=\min\left\{\left(1+\frac{1}{t+c_7}\right)\eta_{t-1},\ \frac{1}{c_8 L}\right\}, \qquad (8)$$
where the constants $\{c_i\}$ may depend on the compression parameter $\omega$ and the number of devices/machines $n$. As a result, the right-hand side of (7) will be of order $O\big(\frac{L}{t^2}\big)$, which indicates an accelerated rate. Hence, in order to find an $\epsilon$-solution of problem (1), i.e., a vector $w^{T+1}$ such that
$$\mathbb{E}\left[f(w^{T+1})-f(x^*)\right]\overset{(4)}{=}\mathbb{E}\left[F^{T+1}\right]\le\epsilon, \qquad (9)$$
the number of communication rounds of CANITA (Algorithm 1) is at most $T=O\big(\sqrt{\frac{L}{\epsilon}}\big)$.

³Exception: while we indeed choose $\beta_t\equiv\beta$ for $t\ge1$, the value of $\beta_0$ may be different.
While the above rate has an accelerated dependence on $\epsilon$, it is crucial to study the omitted constants $\{c_i\}$ (see (8)), and in particular their dependence on the compression parameter $\omega$ and the number of devices/machines $n$. As expected, for any fixed target error $\epsilon>0$, the number of communication rounds $T$ (sufficient to guarantee that (9) holds) may grow with increasing levels of compression, i.e., with increasing $\omega$. However, at the same time, the communication cost of each round decreases with $\omega$. It is easy to see that this trade-off benefits compression. In particular, as we mention in Section 2, if the number of devices $n$ is large, or the compression variance $\omega$ is not very high, then compression does not hurt the accelerated rate of communication rounds at all.

5.2 Detailed convergence result

We now state a concrete Theorem 2, derived from Theorem 1, which gives a detailed convergence result for CANITA (Algorithm 1) by specifying the choice of the parameters $\beta_t$, $\gamma_t$, $p_t$, $\alpha_t$, $\theta_t$ and $\eta_t$. The detailed proof of Theorem 2 is deferred to Appendix B.

Theorem 2 Suppose that Assumption 1 holds and the compression operators $\{\mathcal{C}_i^t\}$ used in Algorithm 1 satisfy (2) of Definition 2. Let $b=\min\left\{\omega,\sqrt{\frac{\omega(1+\omega)^2}{n}}\right\}$ and choose the two positive sequences $\{\beta_t\}$ and $\{\gamma_t\}$ as follows:
$$\beta_t=\begin{cases}\beta_0=\frac{9(1+b+\omega)^2}{(1+b)L} & \text{for } t=0,\\[2pt] \beta\equiv\frac{48\,\omega(1+\omega)(1+b+2(1+\omega))}{n(1+b)^2} & \text{for } t\ge1,\end{cases}\qquad \gamma_t=\gamma\equiv\frac{(1+b)^2}{8(1+b+2(1+\omega))}\quad\text{for } t\ge0. \qquad (10)$$
If we set the probabilities $\{p_t\}$ and positive stepsizes $\{\alpha_t\}$, $\{\eta_t\}$, $\{\theta_t\}$ of Algorithm 1 as follows:
$$p_t\equiv\frac{1}{1+b},\qquad\alpha_t\equiv\frac{1}{1+\omega},\qquad\theta_t=\frac{3(1+b)}{t+9(1+b+\omega)}\quad\text{for } t\ge0, \qquad (11)$$
and
$$\eta_t=\begin{cases}\frac{1}{L(\beta_0+3/2)} & \text{for } t=0,\\[2pt] \min\left\{\left(1+\frac{1}{t+9(1+b+\omega)}\right)\eta_{t-1},\ \frac{1}{L(\beta+3/2)}\right\} & \text{for } t\ge1,\end{cases} \qquad (12)$$
then CANITA (Algorithm 1) satisfies, for all $T\ge0$,
$$\mathbb{E}\left[F^{T+1}\right]\le O\left(\frac{\left(1+\sqrt{\omega^3/n}\right)L}{T^2}+\frac{\omega^3}{T^3}\right). \qquad (13)$$

According to (13), the number of communication rounds for CANITA (Algorithm 1) to find an $\epsilon$-solution of the distributed problem (1), i.e., $\mathbb{E}\left[f(w^{T+1})-f(x^*)\right]\overset{(4)}{=}\mathbb{E}\left[F^{T+1}\right]\le\epsilon$, is at most
$$T=O\left(\sqrt{\left(1+\sqrt{\frac{\omega^3}{n}}\right)\frac{L}{\epsilon}}+\omega\left(\frac{1}{\epsilon}\right)^{\frac{1}{3}}\right).$$

6 Experiments

In this section, we demonstrate the performance of our accelerated method CANITA (Algorithm 1) and of the previous methods QSGD and DIANA (the theoretical convergence results of these algorithms can be found in Table 1) with different compression operators on the logistic regression problem
$$\min_{x\in\mathbb{R}^d} f(x):=\frac{1}{n}\sum_{i=1}^n\log\left(1+\exp(-b_i a_i^{\top}x)\right), \qquad (14)$$
where $\{a_i,b_i\}_{i=1}^n\in\mathbb{R}^d\times\{\pm1\}$ are data samples.
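For the experiments, each machine needs the (full-batch) gradient of its local logistic loss. A minimal sketch is given below, under the assumption that machine i holds a block of samples (A_i, b_i); the function name and the batched form are our own illustrative choices.

```python
import numpy as np

def local_logistic_loss_and_grad(x, A_i, b_i):
    """Full-batch loss and gradient of the local logistic regression objective in (14),
    for a machine holding features A_i (m x d) and labels b_i in {-1, +1}."""
    margins = b_i * (A_i @ x)                         # b_j * a_j^T x for each local sample j
    loss = np.mean(np.logaddexp(0.0, -margins))       # mean of log(1 + exp(-margin)), computed stably
    sigma = np.exp(-np.logaddexp(0.0, margins))       # equals 1 / (1 + exp(margin))
    grad = -(A_i * (b_i * sigma)[:, None]).mean(axis=0)
    return loss, grad
```

These local gradients are the quantities that QSGD, DIANA, and CANITA compress (directly, or in shifted form) and communicate in the comparison below.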
We use three standard datasets in the experiments: a9a, mushrooms, and w8a. All datasets are downloaded from LIBSVM [4]. Similar to Li et al. [26], we also use three different compression operators: random sparsification (e.g., [39]), natural compression (e.g., [8]), and random quantization (e.g., [1]). In particular, we follow the same settings as in Li et al. [26]. For random-$r$ sparsification, the number of communicated bits per iteration is $32r$, and we choose $r=d/4$. For natural compression, the number of communicated bits per iteration is $9d$ bits [8]. For random $(2,s)$-quantization, we choose $s=\sqrt{d}$, which means the number of communicated bits per iteration is $2.8d+32$ [1]. The default number of nodes/machines/workers is 20. In our experiments, we directly use the theoretical stepsizes and parameters for all three algorithms: QSGD [1, 24], DIANA [12], and our CANITA (Algorithm 1). To compare with the settings of DIANA and CANITA, we use local gradients (not stochastic gradients) in QSGD; thus, QSGD here is equivalent to the DC-GD method of [24].

In Figures 1–3, we compare our CANITA with QSGD and DIANA under three compression operators: random sparsification (left), natural compression (middle), and random quantization (right), on three datasets: a9a (Figure 1), mushrooms (Figure 2), and w8a (Figure 3). The x-axis and y-axis represent the number of communicated bits and the training loss, respectively. Regarding the different compression operators, the experimental results indicate that natural compression and random quantization are better than random sparsification for all three algorithms. For instance, in Figure 1, DIANA uses $1.5\times10^6$ (random sparsification), $1.0\times10^6$ (natural compression), and $0.4\times10^6$ (random quantization) communicated bits to reach a loss of 0.4. Moreover, regarding the different algorithms, the experimental results indeed show that our CANITA converges the fastest compared with both QSGD and DIANA for all three compressors in all of Figures 1–3, validating the theoretical results (see Table 1) and confirming the practical superiority of our accelerated CANITA method.

7 Conclusion

In this paper, we proposed CANITA: the first gradient method for distributed general convex optimization that provably enjoys the benefits of both communication compression and convergence acceleration. There is very limited work on combining compression and acceleration; indeed, previous works only focus on the (much simpler) strongly convex setting. We hope that our novel algorithm and analysis can provide new insights and shed light on future work in this line of research. We leave further improvements to future work. For example, one may ask whether our approach can be combined with the benefits provided by multiple local update steps [29, 38, 11, 10, 42] or with additional variance reduction techniques [9, 24], and to what extent one can extend our results to structured nonconvex problems [22, 19, 27, 21, 25, 6, 35, 5].
1. What is the focus of the paper regarding distributed optimization? 2. What are the strengths of the proposed algorithm in terms of convergence rate? 3. What are the weaknesses of the paper regarding the studied setting and its practicality? 4. How does the reviewer assess the significance of the results compared to prior works, particularly [1]?
Summary Of The Paper Review
Summary Of The Paper
This paper considers a distributed GD setting where multiple nodes need to jointly minimize the average of individual convex objectives by computing and communicating their compressed local gradients. The proposed algorithm is shown to achieve an accelerated convergence rate with compressed gradient passing.

Review
This paper is clearly written and well organized. The proven convergence rate of the proposed algorithm is faster than those of a few earlier works. My major concern is the usefulness of the studied setting. Requiring each node to compute its gradient (instead of a stochastic/sampled gradient) in distributed optimization does not seem practical. The setting is much less challenging than the conventional distributed (compressed) SGD setting considered in QSGD in [1]. So it is unfair to say the results in the current paper improve upon [1], as in Table 1. The order-wise faster convergence (compared to [1]) is not quite surprising to me under this distributed GD setting.
NIPS
Title CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression Abstract Due to the high communication cost in distributed and federated learning, methods relying on compressed communication are becoming increasingly popular. Besides, the best theoretically and practically performing gradient-type methods invariably rely on some form of acceleration/momentum to reduce the number of communications (faster convergence), e.g., Nesterov’s accelerated gradient descent [31, 32] and Adam [14]. In order to combine the benefits of communication compression and convergence acceleration, we propose a compressed and accelerated gradient method based on ANITA [20] for distributed optimization, which we call CANITA. Our CANITA achieves the first accelerated rate O (√( 1 + √ ω3 n ) L + ω ( 1 ) 1 3 ) , which improves upon the state-of-the-art nonaccelerated rate O ( (1 + ωn ) L + ω+ω ω+n 1 ) of DIANA [12] for distributed general convex problems, where is the target error, L is the smooth parameter of the objective, n is the number of machines/devices, and ω is the compression parameter (larger ω means more compression can be applied, and no compression implies ω = 0). Our results show that as long as the number of devices n is large (often true in distributed/federated learning), or the compression ω is not very high, CANITA achieves the faster convergence rate O (√ L ) , i.e., the number of communication rounds is O (√ L ) (vs. O ( L ) achieved by previous works). As a result, CANITA enjoys the advantages of both compression (compressed communication in each round) and acceleration (much fewer communication rounds). N/A O (√( 1 + √ ω3 n ) L + ω ( 1 ) 1 3 ) , which improves upon the state-of-the-art non- accelerated rate O ( (1 + ωn ) L + ω2+ω ω+n 1 ) of DIANA [12] for distributed general convex problems, where is the target error, L is the smooth parameter of the objective, n is the number of machines/devices, and ω is the compression parameter (larger ω means more compression can be applied, and no compression implies ω = 0). Our results show that as long as the number of devices n is large (often true in distributed/federated learning), or the compression ω is not very high, CANITA achieves the faster convergence rate O (√ L ) , i.e., the number of communication rounds is O (√ L ) (vs. O ( L ) achieved by previous works). As a result, CANITA enjoys the advantages of both compression (compressed communication in each round) and acceleration (much fewer communication rounds). 1 Introduction With the proliferation of edge devices, such as mobile phones, wearables and smart home appliances, comes an increase in the amount of data rich in potential information which can be mined for the benefit of humankind. One of the approaches of turning the raw data into information is via federated learning [15, 29], where typically a single global supervised model is trained in a massively distributed manner over a network of heterogeneous devices. Training supervised distributed/federated learning models is typically performed by solving an optimization problem of the form min x∈Rd { f(x) := 1 n n∑ i=1 fi(x) } , (1) where n denotes the number of devices/machines/workers/clients, and fi : Rd → R is a loss function associated with the data stored on device i. We will write x∗ := arg min x∈Rd f(x). 35th Conference on Neural Information Processing Systems (NeurIPS 2021). If more than one minimizer exist, x∗ denotes an arbitrary but fixed solution. 
We will rely on the solution concept captured in the following definition: Definition 1 A random vector x̂ ∈ Rd is called an -solution of the distributed problem (1) if E [f(x̂)]− f(x∗) ≤ , where the expectation is with respect to the randomness inherent in the algorithm used to produce x̂. In distributed and federated learning problems of the form (1), communication of messages across the network typically forms the key bottleneck of the training system. In the modern practice of supervised learning in general and deep learning in particular, this is exacerbated by the reliance on massive models described by millions or even billions of parameters. For these reasons, it is very important to devise novel and more efficient training algorithms capable of decreasing the overall communication cost, which can be formalized as the product of the number of communication rounds necessary to train a model of sufficient quality, and the computation and communication cost associated with a typical communication round. 1.1 Methods with compressed communication One of the most common strategies for improving communication complexity is communication compression [37, 1, 40, 8, 30, 9, 26, 24]. This strategy is based on the reduction of the size of communicated messages via the application of a suitably chosen lossy compression mechanism, saving precious time spent in each communication round, and hoping that this will not increase the total number of communication rounds. Several recent theoretical results suggest that by combining an appropriate (randomized) compression operator with a suitably designed gradient-type method, one can obtain improvement in the total communication complexity over comparable baselines not performing any compression. For instance, this is the case for distributed compressed gradient descent (CGD) [1, 13, 8, 24], and distributed CGD methods which employ variance reduction to tame the variance introduced by compression [7, 30, 9, 24, 6]. 1.2 Methods with acceleration The acceleration/momentum of gradient-type methods is widely-studied in standard optimization problems, which aims to achieve faster convergence rates (fewer communication rounds) [33, 31, 32, 17, 28, 2, 18, 16, 23, 20]. Deep learning practitioners typically rely on Adam [14], or one of its many variants, which besides other tricks also adopts momentum. In particular, ANITA [20] obtains the current state-of-the-art convergence results for convex optimization. In this paper, we will adopt the acceleration from ANITA [20] to the distributed setting with compression. 1.3 Can communication compression and acceleration be combined? Encouraged by the recent theoretical success of communication compression, and the widespread success of accelerated methods, in this paper we seek to further enhance CGD methods with acceleration/momentum, with the aim to obtain provable improvements in overall communication complexity. Can distributed gradient-type methods theoretically benefit from the combination of gradient compression and acceleration/momentum? To the best of our knowledge, no such results exist in the general convex regime, and in this paper we close this gap by designing a method that can provably enjoy the advantages of both compression (compressed communication in each round) and acceleration (much fewer communication rounds). While there is abundance of research studying communication compression and acceleration in isolation, there is very limited work on the combination of both approaches. 
The first successful combination of gradient compression and acceleration/momentum was recently achieved by the ADIANA method of Li et al. [26]. However, Li et al. [26] only provide theoretical results for strongly convex problems, and their method is not applicable to (general) convex problems. So, one needs to both design a new method to handle the convex case, and perform its analysis. A-priori, it is not clear at all what approach would work. To the best of our knowledge, besides the initial work [26], we are only aware of two other works for addressing this question [41, 34]. However, both these works still only focus on the simpler and less practically relevant strongly convex setting. Thus, this line of research is still largely unexplored. For instance, the well-known logistic regression problem is convex but not strongly convex. Finally, even if a problem is strongly convex, the modulus of strong convexity is typically not known, or hard to estimate properly. 2 Summary of Contributions In this paper we propose and analyze an accelerated gradient method with compressed communication, which we call CANITA (described in Algorithm 1), for solving distributed general convex optimization problems of the form (1). In particular, CANITA can loosely be seen as a combination of the accelerated gradient method ANITA of [20], and the variance-reduced compressed gradient method DIANA of [30]. Ours is the first work provably combining the benefits of communication compression and acceleration in the general convex regime. 2.1 First accelerated rate for compressed gradient methods in the convex regime For general convex problems, CANITA is the first compressed communication gradient method with an accelerated rate. In particular, our CANITA solves the distributed problem (1) in O (√( 1 + √ ω3 n ) L + ω ( 1 ) 1 3 ) communication rounds, which improves upon the current state-of-the-art result O (( 1 + ωn ) L + ω2+n ω+n 1 ) achieved by the DIANA method [12]. See Table 1 for more comparisons. Let us now illustrate the improvements coming from this new bound on an example with concrete numerical values. Let the compression ratio be 10% (the size of compressed message is 0.1 · d, where d is the size of the uncompressed message). If random sparsification or quantization is used to achieve this, then ω ≈ 10 (see Section 3.1). Further, if the number of devices/machines is n = 106, 1In this strongly convex column, κ := L µ denotes the condition number, where L is the smooth parameter and µ > 0 is the strong convexity parameter. 2Here QSGD [1] needs an additional bounded gradient assumption, i.e., ‖∇fi(x)‖2 ≤ G2, ∀i ∈ [n], x ∈ Rd. and the target error tolerance is = 10−6, then the number of communication rounds of our CANITA method isO(103), while the number of communication rounds of the previous state-of-the-art method DIANA [12] is O(106), i.e., O (√ L ) vs. O(L ). This is an improvement of three orders of magnitude. Moreover, the numerical experiments in Section 6 indeed show that the performance of our CANITA is much better than previous non-accelerated compressed methods (QSGD and DIANA), corroborating the theoretical results (see Table 1) and confirming the practical superiority of our accelerated CANITA method. 2.2 Accelerated rate with limited compression for free For strongly convex problems, Li et al. 
[26] showed that if the number of devices/machines n is large, or the compression variance parameter ω is not very high (ω ≤ n1/3), then their ADIANA method enjoys the benefits of both compression and acceleration (i.e., √ κ log 1 of ADIANA vs. κ log 1 of previous works). In this paper, we consider the general convex setting and show that the proposed CANITA also enjoys the benefits of both compression and acceleration. Similarly, if ω ≤ n1/3 (i.e., many devices, or limited compression variance), CANITA achieves the accelerated rate √ L vs. L of previous works. This means that the compression does not hurt the accelerated rate at all. Note that the second term( 1 ) 1 3 is of a lower order compared with the first term √ L . 2.3 Novel proof technique The proof behind the analysis of CANITA is significantly different from that of ADIANA [26], which critically relies on strong convexity. Moreover, the theoretical rate in the strongly convex case is linear O(log 1 ), while it is sublinear O( 1 ) or O (√ 1 ) (accelerated) in the general convex case. We hope that our novel analysis can provide new insights and shed light on future work. 3 Preliminaries Let [n] denote the set {1, 2, · · · , n} and ‖ · ‖ denote the Euclidean norm for a vector and the spectral norm for a matrix. Let 〈u, v〉 denote the standard Euclidean inner product of two vectors u and v. We use O(·) and Ω(·) to hide the absolute constants. 3.1 Assumptions about the compression operators We now introduce the notion of a randomized compression operator which we use to compress the gradients to save on communication. We rely on a standard class of unbiased compressors (see Definition 2) that was used in the context of distributed gradient methods before [1, 13, 9, 24, 26]. Definition 2 (Compression operator) A randomized map C : Rd 7→ Rd is an ω-compression operator if E [C(x)] = x, E [ ‖C(x)− x‖2 ] ≤ ω‖x‖2, ∀x ∈ Rd. (2) In particular, no compression (C(x) ≡ x) implies ω = 0. It is well known that the conditions (2) are satisfied by many practically useful compression operators (see Table 1 in [3, 36]). For illustration purposes, we now present a couple canonical examples: sparsification and quantization. Example 1 (Random sparsification). Given x ∈ Rd, the random-k sparsification operator is defined by C(x) := d k · (ξk x), where denotes the Hadamard (element-wise) product and ξk ∈ {0, 1}d is a uniformly random binary vector with k nonzero entries (‖ξk‖0 = k). This random-k sparsification operator C satisfies (2) with ω = dk − 1. By setting k = d, this reduces to the identity compressor, whose variance is obviously zero: ω = 0. Example 2 (Random quantization). Given x ∈ Rd, the (p, s)-quantization operator is defined by C(x) := sign(x) · ‖x‖p · 1 s · ξs, where p, s ≥ 1 are integers, and ξs ∈ Rd is a random vector with i-th element ξs(i) := { l + 1, with probability |xi|‖x‖p s− l, l, otherwise. The level l satisfies |xi|‖x‖p ∈ [ l s , l+1 s ]. The probability is chosen so that E [ξs(i)] = |xi| ‖x‖p s. This (p, s)quantization operator C satisfies (2) with ω = 2+ d 1/p+d1/2 s . In particular, QSGD [1] used p = 2 (i.e., (2, s)-quantization) and proved that the expected sparsity of C(x) is E [‖C(x)‖0] = O ( s(s+ √ d) ) . 3.2 Assumptions about the functions Throughout the paper, we assume that the functions fi are convex and have Lipschitz continuous gradient. Assumption 1 Functions fi : Rd → R are convex, differentiable, and L-smooth. 
The last condition means that there exists a constant L > 0 such that for all i ∈ [n] we have ‖∇fi(x)−∇fi(y)‖ ≤ L‖x− y‖, ∀x, y ∈ Rd. (3) It is easy to see that the objective f(x) = 1n ∑n i=1 fi(x) in (1) satisfies (3) provided that the constituent functions {fi} do. 4 The CANITA Algorithm In this section, we describe our method, for which we coin the name CANITA, designed for solving problem (1), which is of importance in distributed and federated learning, and contrast it to the most closely related methods ANITA [20], DIANA [30] and ADIANA [26]. Algorithm 1 Distributed compressed accelerated ANITA method (CANITA) Input: initial point x0 ∈ Rd, initial shift vectors h01, . . . , h0n ∈ Rd, probabilities {pt}, and positive stepsizes {αt}, {ηt}, {θt} 1: Initialize: w0 = z0 = x0 and h0 = 1n ∑n i=1 h 0 i 2: for t = 0, 1, 2, . . . do 3: yt = θtxt + (1− θt)wt 4: for all machines i = 1, 2, . . . , n do in parallel 5: Compress the shifted local gradient Cti (∇fi(yt)− hti) and send the result to the server 6: Update the local shift ht+1i = h t i + αtCti (∇fi(wt)− hti) 7: end for 8: Aggregate received compressed local gradient information: gt = ht + 1n n∑ i=1 Cti (∇fi(yt)− hti) • Compute gradient estimator ht+1 = ht + αt 1 n n∑ i=1 Cti (∇fi(wt)− hti) •Maintain the average of local shifts 9: Perform update step: xt+1 = xt − ηtθt g t 10: zt+1 = θtxt+1 + (1− θt)wt 11: wt+1 = { zt+1, with probability pt wt, with probability 1− pt 12: end for 4.1 CANITA: description of the method Our proposed method CANITA, formally described in Algorithm 1, is an accelerated gradient method supporting compressed communication. It is the first method combing the benefits of acceleration and compression in the general convex regime (without strong convexity). In each round t, each machine computes its local gradient (e.g.,∇fi(yt)) and then a shifted version is compressed and sent to the server (See Line 5 of Algorithm 1). The local shifts hti are adaptively changing throughout the iterative process (Line 6), and have the role of reducing the variance introduced by compression C(·). If no compression is used, we may simply set the shifts to be hti = 0 for all i, t. The server subsequently aggregates all received messages to obtain the gradient estimator gt and maintain the average of local shifts ht+1 (Line 8), and then perform gradient update step (Line 9) and update momentum sequences (Line 10 and 3). Besides, the last Line 11 adopts a randomized update rule for the auxiliary vectors wt which simplifies the algorithm and analysis, resembling the workings of the loopless SVRG method used in [16, 20]. 4.2 CANITA vs existing methods CANITA can be loosely seen as a combination of the accelerated gradient method ANITA of [20], and the variance-reduced compressed gradient method DIANA of [30]. In particular, CANITA uses momentum/acceleration steps (see Line 3 and 10 of Algorithm 1) inspired by those of ANITA [20], and adopts the shifted compression framework for each machine (see Line 5 and 6 of Algorithm 1) as in the DIANA method [30]. We prove that CANITA enjoys the benefits of both methods simultaneously, i.e., convergence acceleration of ANITA and gradient compression of DIANA. Although CANITA can conceptually be seen as combination of ANITA [20] and DIANA [30, 9, 12] from an algorithmic perspective, the analysis of CANITA is entirely different. Let us now briefly outline some of the main differences. 
• For example, compared with ANITA [20], CANITA needs to deal with the extra compression of shifted local gradients in the distributed network. Thus, the obtained gradient estimator gk in Line 8 of Algorithm 1 is substantially different and more complicated than the one in ANITA, which necessitates a novel proof technique. • Compared with DIANA [30, 9, 12], the extra momentum steps in Line 3 and 10 of Algorithm 1 make the analysis of CANITA more complicated than that of DIANA. We obtain the accelerated rate O (√ L ) rather than the non-accelerated rate O(L ) of DIANA, and this is impossible without a substantially different proof technique. • Compared with the accelerated DIANA method ADIANA of [26], the analysis of CANITA is also substantially different since CANITA cannot exploit the strong convexity assumed therein. Finally, please refer to Section 2 where we summarize our contributions for additional discussions. 5 Convergence Results for the CANITA Algorithm In this section, we provide convergence results for CANITA (Algorithm 1). In order to simplify the expressions appearing in our main result (see Theorem 1 in Section 5.1) and in the lemmas needed to prove it (see Appendix A), it will be convenient to let F t := f(wt)− f(x∗), Ht := 1 n n∑ i=1 ‖∇fi(wt)− hti‖2, Dt := 1 2 ‖xt − x∗‖2. (4) 5.1 Generic convergence result We first present the main convergence theorem of CANITA for solving the distributed optimization problem (1) in the general convex regime. Theorem 1 Suppose that Assumption 1 holds and the compression operators {Cti} used in Algorithm 1 satisfy (2) of Definition 2. For any two positive sequences {βt} and {γt} such that the probabilities {pt} and positive stepsizes {αt}, {ηt}, {θt} of Algorithm 1 satisfy the following relations αt ≤ 1 1 + ω , ηt ≤ 1 L ( 1 + βt + 4ptγt ( 1 + 2pt αt )) (5) for all t ≥ 0, and 2ω βtn +4ptγt ( 1+ 2pt αt ) ≤ 1−θt, (1− ptθt)ηt ptθ2t ≤ ηt−1 pt−1θ2t−1 , ( ω βtn + ( 1− αt 2 ) γt ) ηt θ2t ≤ γt−1ηt−1 θ2t−1 (6) for all t ≥ 1. Then the sequences {xt, wt, hti} of CANITA (Algorithm 1) for all t ≥ 0 satisfy the inequality E [ F t+1 + γtpt L Ht+1 ] ≤ θ 2 t pt ηt ( (1− θ0p0)η0 θ20p0 F 0 + ( ω β0n + ( 1− α0 2 ) γ0 ) η0 θ20L H0 +D0 ) , (7) where the quantities F t, Ht, Dt are defined in (4). The detailed proof of Theorem 1 which relies on six lemmas is provided in Appendix A. In particular, the proof simply follows from the key Lemma 6 (see Appendix A.2), while Lemma 6 closely relies on previous five Lemmas 1–5 (see Appendix C.6). Note that all proofs for these six lemmas are deferred to Appendix C. As we shall see in detail in Section 5.2, the sequences βt, γt, pt and αt can be fixed to some constants.3 However, the relaxation parameter θt needs to be decreasing and the stepsize ηt may be increasing until a certain threshold. In particular, we choose βt ≡ c1, γt ≡ c2, pt ≡ c3, αt ≡ c4, θt = c5 t+ c6 , ηt = min {( 1 + 1 t+ c7 ) ηt−1, 1 c8L } , (8) where the constants {ci} may depend on the compression parameter ω and the number of devices/machines n. As a result, the right hand side of (7) will be of the order O ( L t2 ) , which indicates an accelerated rate. Hence, in order to find an -solution of problem (1), i.e., vector wT+1 such that E [ f(wT+1)− f(x∗) ] (4) := E [ FT+1 ] ≤ , (9) the number of communication rounds of CANITA (Algorithm 1) is at most T = O (√ L ) . 
While the above rate has an accelerated dependence on ε, it is crucial to also study the omitted constants {c_i} (see (8)), and in particular their dependence on the compression parameter ω and the number of devices/machines n. As expected, for any fixed target error ε > 0, the number of communication rounds T (sufficient to guarantee that (9) holds) may grow with increasing levels of compression, i.e., with increasing ω. However, at the same time, the communication cost in each round decreases with ω. It is easy to see that this trade-off benefits compression. In particular, as we mention in Section 2, if the number of devices n is large, or the compression variance ω is not very high, then compression does not hurt the accelerated rate of communication rounds at all.

5.2 Detailed convergence result

We now specialize Theorem 1 to a concrete Theorem 2, which gives a detailed convergence result for CANITA (Algorithm 1) by specifying the choice of the parameters β_t, γ_t, p_t, α_t, θ_t and η_t. The detailed proof of Theorem 2 is deferred to Appendix B.

Theorem 2 Suppose that Assumption 1 holds and the compression operators {C_i^t} used in Algorithm 1 satisfy (2) of Definition 2. Let b = min{ ω, √(ω(1+ω)²/n) } and choose the two positive sequences {β_t} and {γ_t} as follows:

β_0 = 9(1+b+ω)² / ((1+b)L),   β_t ≡ β = 48ω(1+ω)(1+b+2(1+ω)) / (n(1+b)²) for t ≥ 1,
γ_t ≡ γ = (1+b)² / ( 8(1+b+2(1+ω)) ) for t ≥ 0.  (10)

(³Exception: while we indeed choose β_t ≡ β for t ≥ 1, the value of β_0 may be different.)

If we set the probabilities {p_t} and positive stepsizes {α_t}, {η_t}, {θ_t} of Algorithm 1 as follows:

p_t ≡ 1/(1+b),   α_t ≡ 1/(1+ω),   θ_t = 3(1+b) / ( t + 9(1+b+ω) )  for t ≥ 0,  (11)

and

η_0 = 1 / ( L(β_0 + 3/2) ),   η_t = min{ (1 + 1/(t + 9(1+b+ω))) η_{t−1},  1/( L(β + 3/2) ) }  for t ≥ 1,  (12)

then CANITA (Algorithm 1) for all T ≥ 0 satisfies

E[ F^{T+1} ] ≤ O( (1 + √(ω³/n)) L / T² + ω³ / T³ ).  (13)

According to (13), the number of communication rounds for CANITA (Algorithm 1) to find an ε-solution of the distributed problem (1), i.e., E[ f(w^{T+1}) − f(x^*) ] = E[ F^{T+1} ] ≤ ε, is at most

T = O( √( (1 + √(ω³/n)) L/ε ) + ω (1/ε)^{1/3} ).

6 Experiments

In this section, we demonstrate the performance of our accelerated method CANITA (Algorithm 1) and the previous methods QSGD and DIANA (the theoretical convergence results of these algorithms can be found in Table 1) with different compression operators on the logistic regression problem

min_{x∈R^d} f(x) := (1/n)∑_{i=1}^n log( 1 + exp(−b_i a_i^T x) ),  (14)

where {(a_i, b_i)}_{i=1}^n ⊂ R^d × {±1} are data samples. We use three standard datasets in the experiments: a9a, mushrooms, and w8a. All datasets are downloaded from LIBSVM [4]. Similarly to Li et al. [26], we also use three different compression operators: random sparsification (e.g. [39]), natural compression (e.g. [8]), and random quantization (e.g. [1]). In particular, we follow the same settings as in Li et al. [26]. For random-r sparsification, the number of communicated bits per iteration is 32r, and we choose r = d/4. For natural compression, the number of communicated bits per iteration is 9d bits [8]. For random (2, s)-quantization, we choose s = √d, which means the number of communicated bits per iteration is 2.8d + 32 [1]. The default number of nodes/machines/workers is 20. In our experiments, we directly use the theoretical stepsizes and parameters for all three algorithms: QSGD [1, 24], DIANA [12], and our CANITA (Algorithm 1). To match the settings of DIANA and CANITA, we use local (full) gradients rather than stochastic gradients in QSGD; thus, in this setting, QSGD is equivalent to the DC-GD method provided in [24]. A sketch of the per-machine gradient oracles for (14) is given below.
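As a companion to this setup, the following Python sketch builds per-machine full-gradient oracles for the logistic regression objective (14), in the form expected by the simulation sketched earlier. The even data split and the helper names (make_local_grads, logistic_loss) are our own assumptions rather than details taken from the paper's code.

```python
import numpy as np

def make_local_grads(A, b, n_machines):
    """Split a dataset (A: m x d feature matrix, b: +/-1 labels) evenly across machines
    and return one full-gradient oracle per machine for the logistic loss in (14)."""
    parts = np.array_split(np.arange(A.shape[0]), n_machines)
    def oracle(idx):
        Ai, bi = A[idx], b[idx]
        def grad(x):
            # d/dx log(1 + exp(-b a^T x)) = -b * a / (1 + exp(b a^T x))
            s = 1.0 / (1.0 + np.exp(bi * (Ai @ x)))
            return -(Ai * (bi * s)[:, None]).mean(axis=0)
        return grad
    return [oracle(idx) for idx in parts]

def logistic_loss(A, b, x):
    """Full objective (14), used only for monitoring the training loss."""
    return float(np.mean(np.log1p(np.exp(-b * (A @ x)))))
```

Each oracle returns the full local gradient, matching the use of local (rather than stochastic) gradients described above.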
In Figures 1–3, we compare our CANITA with QSGD and DIANA under three compression operators: random sparsification (left), natural compression (middle), and random quantization (right), on three datasets: a9a (Figure 1), mushrooms (Figure 2), and w8a (Figure 3). The x-axis and y-axis represent the number of communication bits and the training loss, respectively.

[Figures 1–3: training loss vs. communicated bits; panels from left to right: Random Sparsification, Natural Compression, Random Quantization.]

Regarding the different compression operators, the experimental results indicate that natural compression and random quantization are better than random sparsification for all three algorithms. For instance, in Figure 1, DIANA uses 1.5×10⁶ (random sparsification), 1.0×10⁶ (natural compression), and 0.4×10⁶ (random quantization) communication bits to achieve the loss 0.4. Moreover, regarding the different algorithms, the experimental results show that our CANITA converges the fastest compared with both QSGD and DIANA for all three compressors in all of Figures 1–3, validating the theoretical results (see Table 1) and confirming the practical superiority of our accelerated CANITA method.

7 Conclusion

In this paper, we proposed CANITA: the first gradient method for distributed general convex optimization provably enjoying the benefits of both communication compression and convergence acceleration. There is very limited work on combining compression and acceleration; indeed, previous works focus only on the (much simpler) strongly convex setting. We hope that our novel algorithm and analysis can provide new insights and shed light on future work in this line of research. We leave further improvements to future work. For example, one may ask whether our approach can be combined with the benefits provided by multiple local update steps [29, 38, 11, 10, 42], or with additional variance reduction techniques [9, 24], and to what extent one can extend our results to structured nonconvex problems [22, 19, 27, 21, 25, 6, 35, 5].
1. What is the focus of the paper in terms of distributed methods and acceleration techniques?
2. What are the strengths of the proposed approach, particularly in terms of convergence rate?
3. What are the weaknesses of the paper regarding its applicability to traditional distributed optimization problems or federated optimization problems?
4. How does the reviewer suggest improving the paper's comparisons with other distributed communication-efficient optimization methods?
5. What are the limitations of the proposed approach regarding memory usage?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes a novel distributed method with acceleration and communication compression. The authors prove its convergence and show that it is faster than existing SOTA methods.

Review
Positive: This method achieves an O(1/T^2) convergence rate, benefiting from the acceleration technique.

Negative: I do not think this method is suitable for traditional distributed optimization problems or federated optimization problems. In traditional distributed optimization problems, the data size and dimension are large. Calculating \nabla f_i(w) is slow and usually worse than calculating a stochastic gradient, even when the model is convex. Furthermore, the additional parameters h_t, \bar{x}_t and w_t will cost a huge amount of memory. In federated learning, usually only a few machines participate in training in each communication round. The authors should add experiments comparing CANITA with other distributed communication-efficient optimization methods. I think a convex optimization experiment can be easily conducted in PyTorch or TensorFlow.
NIPS
1. What is the focus and contribution of the paper on distributed optimization?
2. What are the strengths of the proposed approach, particularly in terms of combining acceleration and compression techniques?
3. How does the reviewer assess the clarity, quality, originality, and significance of the paper's content?
4. Are there any concerns or questions regarding the paper that the reviewer would like to address?
Summary Of The Paper Review
Summary Of The Paper
The paper proposes a compressed and accelerated gradient method called CANITA for distributed optimization. It improves upon the state-of-the-art result, achieved by DIANA, for smooth convex problems. The main contribution is the (near) optimal algorithm and its theoretical analysis.

Review
Originality: The paper considers an unexplored topic: how to fuse the acceleration technique and an unbiased compression operator organically. Though the algorithm (CANITA) is a combination of the accelerated ANITA method and the compressed DIANA method, the analysis of the resulting algorithm has its own difficulty, i.e., how to analyze the error introduced by the compression operator while not ruining the delicate structure of acceleration for convex problems. I think the analysis is new.

Quality: The theoretical analysis is valid. The authors also provide a proof sketch to illustrate how to construct a potential function for the convergence analysis.

Clarity: The paper is well written and clear.

Significance: The paper improves the state-of-the-art result of non-accelerated compressed algorithms. I think the work might give insights on how to combine compression and acceleration optimally.

======================================================
I have read the author's response. I think the author answers my questions well. I keep my score.
NIPS
Title CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression Abstract Due to the high communication cost in distributed and federated learning, methods relying on compressed communication are becoming increasingly popular. Besides, the best theoretically and practically performing gradient-type methods invariably rely on some form of acceleration/momentum to reduce the number of communications (faster convergence), e.g., Nesterov’s accelerated gradient descent [31, 32] and Adam [14]. In order to combine the benefits of communication compression and convergence acceleration, we propose a compressed and accelerated gradient method based on ANITA [20] for distributed optimization, which we call CANITA. Our CANITA achieves the first accelerated rate O (√( 1 + √ ω3 n ) L + ω ( 1 ) 1 3 ) , which improves upon the state-of-the-art nonaccelerated rate O ( (1 + ωn ) L + ω+ω ω+n 1 ) of DIANA [12] for distributed general convex problems, where is the target error, L is the smooth parameter of the objective, n is the number of machines/devices, and ω is the compression parameter (larger ω means more compression can be applied, and no compression implies ω = 0). Our results show that as long as the number of devices n is large (often true in distributed/federated learning), or the compression ω is not very high, CANITA achieves the faster convergence rate O (√ L ) , i.e., the number of communication rounds is O (√ L ) (vs. O ( L ) achieved by previous works). As a result, CANITA enjoys the advantages of both compression (compressed communication in each round) and acceleration (much fewer communication rounds). N/A O (√( 1 + √ ω3 n ) L + ω ( 1 ) 1 3 ) , which improves upon the state-of-the-art non- accelerated rate O ( (1 + ωn ) L + ω2+ω ω+n 1 ) of DIANA [12] for distributed general convex problems, where is the target error, L is the smooth parameter of the objective, n is the number of machines/devices, and ω is the compression parameter (larger ω means more compression can be applied, and no compression implies ω = 0). Our results show that as long as the number of devices n is large (often true in distributed/federated learning), or the compression ω is not very high, CANITA achieves the faster convergence rate O (√ L ) , i.e., the number of communication rounds is O (√ L ) (vs. O ( L ) achieved by previous works). As a result, CANITA enjoys the advantages of both compression (compressed communication in each round) and acceleration (much fewer communication rounds). 1 Introduction With the proliferation of edge devices, such as mobile phones, wearables and smart home appliances, comes an increase in the amount of data rich in potential information which can be mined for the benefit of humankind. One of the approaches of turning the raw data into information is via federated learning [15, 29], where typically a single global supervised model is trained in a massively distributed manner over a network of heterogeneous devices. Training supervised distributed/federated learning models is typically performed by solving an optimization problem of the form min x∈Rd { f(x) := 1 n n∑ i=1 fi(x) } , (1) where n denotes the number of devices/machines/workers/clients, and fi : Rd → R is a loss function associated with the data stored on device i. We will write x∗ := arg min x∈Rd f(x). 35th Conference on Neural Information Processing Systems (NeurIPS 2021). If more than one minimizer exist, x∗ denotes an arbitrary but fixed solution. 
We will rely on the solution concept captured in the following definition: Definition 1 A random vector x̂ ∈ Rd is called an -solution of the distributed problem (1) if E [f(x̂)]− f(x∗) ≤ , where the expectation is with respect to the randomness inherent in the algorithm used to produce x̂. In distributed and federated learning problems of the form (1), communication of messages across the network typically forms the key bottleneck of the training system. In the modern practice of supervised learning in general and deep learning in particular, this is exacerbated by the reliance on massive models described by millions or even billions of parameters. For these reasons, it is very important to devise novel and more efficient training algorithms capable of decreasing the overall communication cost, which can be formalized as the product of the number of communication rounds necessary to train a model of sufficient quality, and the computation and communication cost associated with a typical communication round. 1.1 Methods with compressed communication One of the most common strategies for improving communication complexity is communication compression [37, 1, 40, 8, 30, 9, 26, 24]. This strategy is based on the reduction of the size of communicated messages via the application of a suitably chosen lossy compression mechanism, saving precious time spent in each communication round, and hoping that this will not increase the total number of communication rounds. Several recent theoretical results suggest that by combining an appropriate (randomized) compression operator with a suitably designed gradient-type method, one can obtain improvement in the total communication complexity over comparable baselines not performing any compression. For instance, this is the case for distributed compressed gradient descent (CGD) [1, 13, 8, 24], and distributed CGD methods which employ variance reduction to tame the variance introduced by compression [7, 30, 9, 24, 6]. 1.2 Methods with acceleration The acceleration/momentum of gradient-type methods is widely-studied in standard optimization problems, which aims to achieve faster convergence rates (fewer communication rounds) [33, 31, 32, 17, 28, 2, 18, 16, 23, 20]. Deep learning practitioners typically rely on Adam [14], or one of its many variants, which besides other tricks also adopts momentum. In particular, ANITA [20] obtains the current state-of-the-art convergence results for convex optimization. In this paper, we will adopt the acceleration from ANITA [20] to the distributed setting with compression. 1.3 Can communication compression and acceleration be combined? Encouraged by the recent theoretical success of communication compression, and the widespread success of accelerated methods, in this paper we seek to further enhance CGD methods with acceleration/momentum, with the aim to obtain provable improvements in overall communication complexity. Can distributed gradient-type methods theoretically benefit from the combination of gradient compression and acceleration/momentum? To the best of our knowledge, no such results exist in the general convex regime, and in this paper we close this gap by designing a method that can provably enjoy the advantages of both compression (compressed communication in each round) and acceleration (much fewer communication rounds). While there is abundance of research studying communication compression and acceleration in isolation, there is very limited work on the combination of both approaches. 
The first successful combination of gradient compression and acceleration/momentum was recently achieved by the ADIANA method of Li et al. [26]. However, Li et al. [26] only provide theoretical results for strongly convex problems, and their method is not applicable to (general) convex problems. So, one needs to both design a new method to handle the convex case, and perform its analysis. A-priori, it is not clear at all what approach would work. To the best of our knowledge, besides the initial work [26], we are only aware of two other works for addressing this question [41, 34]. However, both these works still only focus on the simpler and less practically relevant strongly convex setting. Thus, this line of research is still largely unexplored. For instance, the well-known logistic regression problem is convex but not strongly convex. Finally, even if a problem is strongly convex, the modulus of strong convexity is typically not known, or hard to estimate properly. 2 Summary of Contributions In this paper we propose and analyze an accelerated gradient method with compressed communication, which we call CANITA (described in Algorithm 1), for solving distributed general convex optimization problems of the form (1). In particular, CANITA can loosely be seen as a combination of the accelerated gradient method ANITA of [20], and the variance-reduced compressed gradient method DIANA of [30]. Ours is the first work provably combining the benefits of communication compression and acceleration in the general convex regime. 2.1 First accelerated rate for compressed gradient methods in the convex regime For general convex problems, CANITA is the first compressed communication gradient method with an accelerated rate. In particular, our CANITA solves the distributed problem (1) in O (√( 1 + √ ω3 n ) L + ω ( 1 ) 1 3 ) communication rounds, which improves upon the current state-of-the-art result O (( 1 + ωn ) L + ω2+n ω+n 1 ) achieved by the DIANA method [12]. See Table 1 for more comparisons. Let us now illustrate the improvements coming from this new bound on an example with concrete numerical values. Let the compression ratio be 10% (the size of compressed message is 0.1 · d, where d is the size of the uncompressed message). If random sparsification or quantization is used to achieve this, then ω ≈ 10 (see Section 3.1). Further, if the number of devices/machines is n = 106, 1In this strongly convex column, κ := L µ denotes the condition number, where L is the smooth parameter and µ > 0 is the strong convexity parameter. 2Here QSGD [1] needs an additional bounded gradient assumption, i.e., ‖∇fi(x)‖2 ≤ G2, ∀i ∈ [n], x ∈ Rd. and the target error tolerance is = 10−6, then the number of communication rounds of our CANITA method isO(103), while the number of communication rounds of the previous state-of-the-art method DIANA [12] is O(106), i.e., O (√ L ) vs. O(L ). This is an improvement of three orders of magnitude. Moreover, the numerical experiments in Section 6 indeed show that the performance of our CANITA is much better than previous non-accelerated compressed methods (QSGD and DIANA), corroborating the theoretical results (see Table 1) and confirming the practical superiority of our accelerated CANITA method. 2.2 Accelerated rate with limited compression for free For strongly convex problems, Li et al. 
[26] showed that if the number of devices/machines n is large, or the compression variance parameter ω is not very high (ω ≤ n1/3), then their ADIANA method enjoys the benefits of both compression and acceleration (i.e., √ κ log 1 of ADIANA vs. κ log 1 of previous works). In this paper, we consider the general convex setting and show that the proposed CANITA also enjoys the benefits of both compression and acceleration. Similarly, if ω ≤ n1/3 (i.e., many devices, or limited compression variance), CANITA achieves the accelerated rate √ L vs. L of previous works. This means that the compression does not hurt the accelerated rate at all. Note that the second term( 1 ) 1 3 is of a lower order compared with the first term √ L . 2.3 Novel proof technique The proof behind the analysis of CANITA is significantly different from that of ADIANA [26], which critically relies on strong convexity. Moreover, the theoretical rate in the strongly convex case is linear O(log 1 ), while it is sublinear O( 1 ) or O (√ 1 ) (accelerated) in the general convex case. We hope that our novel analysis can provide new insights and shed light on future work. 3 Preliminaries Let [n] denote the set {1, 2, · · · , n} and ‖ · ‖ denote the Euclidean norm for a vector and the spectral norm for a matrix. Let 〈u, v〉 denote the standard Euclidean inner product of two vectors u and v. We use O(·) and Ω(·) to hide the absolute constants. 3.1 Assumptions about the compression operators We now introduce the notion of a randomized compression operator which we use to compress the gradients to save on communication. We rely on a standard class of unbiased compressors (see Definition 2) that was used in the context of distributed gradient methods before [1, 13, 9, 24, 26]. Definition 2 (Compression operator) A randomized map C : Rd 7→ Rd is an ω-compression operator if E [C(x)] = x, E [ ‖C(x)− x‖2 ] ≤ ω‖x‖2, ∀x ∈ Rd. (2) In particular, no compression (C(x) ≡ x) implies ω = 0. It is well known that the conditions (2) are satisfied by many practically useful compression operators (see Table 1 in [3, 36]). For illustration purposes, we now present a couple canonical examples: sparsification and quantization. Example 1 (Random sparsification). Given x ∈ Rd, the random-k sparsification operator is defined by C(x) := d k · (ξk x), where denotes the Hadamard (element-wise) product and ξk ∈ {0, 1}d is a uniformly random binary vector with k nonzero entries (‖ξk‖0 = k). This random-k sparsification operator C satisfies (2) with ω = dk − 1. By setting k = d, this reduces to the identity compressor, whose variance is obviously zero: ω = 0. Example 2 (Random quantization). Given x ∈ Rd, the (p, s)-quantization operator is defined by C(x) := sign(x) · ‖x‖p · 1 s · ξs, where p, s ≥ 1 are integers, and ξs ∈ Rd is a random vector with i-th element ξs(i) := { l + 1, with probability |xi|‖x‖p s− l, l, otherwise. The level l satisfies |xi|‖x‖p ∈ [ l s , l+1 s ]. The probability is chosen so that E [ξs(i)] = |xi| ‖x‖p s. This (p, s)quantization operator C satisfies (2) with ω = 2+ d 1/p+d1/2 s . In particular, QSGD [1] used p = 2 (i.e., (2, s)-quantization) and proved that the expected sparsity of C(x) is E [‖C(x)‖0] = O ( s(s+ √ d) ) . 3.2 Assumptions about the functions Throughout the paper, we assume that the functions fi are convex and have Lipschitz continuous gradient. Assumption 1 Functions fi : Rd → R are convex, differentiable, and L-smooth. 
The last condition means that there exists a constant L > 0 such that for all i ∈ [n] we have ‖∇fi(x)−∇fi(y)‖ ≤ L‖x− y‖, ∀x, y ∈ Rd. (3) It is easy to see that the objective f(x) = 1n ∑n i=1 fi(x) in (1) satisfies (3) provided that the constituent functions {fi} do. 4 The CANITA Algorithm In this section, we describe our method, for which we coin the name CANITA, designed for solving problem (1), which is of importance in distributed and federated learning, and contrast it to the most closely related methods ANITA [20], DIANA [30] and ADIANA [26]. Algorithm 1 Distributed compressed accelerated ANITA method (CANITA) Input: initial point x0 ∈ Rd, initial shift vectors h01, . . . , h0n ∈ Rd, probabilities {pt}, and positive stepsizes {αt}, {ηt}, {θt} 1: Initialize: w0 = z0 = x0 and h0 = 1n ∑n i=1 h 0 i 2: for t = 0, 1, 2, . . . do 3: yt = θtxt + (1− θt)wt 4: for all machines i = 1, 2, . . . , n do in parallel 5: Compress the shifted local gradient Cti (∇fi(yt)− hti) and send the result to the server 6: Update the local shift ht+1i = h t i + αtCti (∇fi(wt)− hti) 7: end for 8: Aggregate received compressed local gradient information: gt = ht + 1n n∑ i=1 Cti (∇fi(yt)− hti) • Compute gradient estimator ht+1 = ht + αt 1 n n∑ i=1 Cti (∇fi(wt)− hti) •Maintain the average of local shifts 9: Perform update step: xt+1 = xt − ηtθt g t 10: zt+1 = θtxt+1 + (1− θt)wt 11: wt+1 = { zt+1, with probability pt wt, with probability 1− pt 12: end for 4.1 CANITA: description of the method Our proposed method CANITA, formally described in Algorithm 1, is an accelerated gradient method supporting compressed communication. It is the first method combing the benefits of acceleration and compression in the general convex regime (without strong convexity). In each round t, each machine computes its local gradient (e.g.,∇fi(yt)) and then a shifted version is compressed and sent to the server (See Line 5 of Algorithm 1). The local shifts hti are adaptively changing throughout the iterative process (Line 6), and have the role of reducing the variance introduced by compression C(·). If no compression is used, we may simply set the shifts to be hti = 0 for all i, t. The server subsequently aggregates all received messages to obtain the gradient estimator gt and maintain the average of local shifts ht+1 (Line 8), and then perform gradient update step (Line 9) and update momentum sequences (Line 10 and 3). Besides, the last Line 11 adopts a randomized update rule for the auxiliary vectors wt which simplifies the algorithm and analysis, resembling the workings of the loopless SVRG method used in [16, 20]. 4.2 CANITA vs existing methods CANITA can be loosely seen as a combination of the accelerated gradient method ANITA of [20], and the variance-reduced compressed gradient method DIANA of [30]. In particular, CANITA uses momentum/acceleration steps (see Line 3 and 10 of Algorithm 1) inspired by those of ANITA [20], and adopts the shifted compression framework for each machine (see Line 5 and 6 of Algorithm 1) as in the DIANA method [30]. We prove that CANITA enjoys the benefits of both methods simultaneously, i.e., convergence acceleration of ANITA and gradient compression of DIANA. Although CANITA can conceptually be seen as combination of ANITA [20] and DIANA [30, 9, 12] from an algorithmic perspective, the analysis of CANITA is entirely different. Let us now briefly outline some of the main differences. 
• For example, compared with ANITA [20], CANITA needs to deal with the extra compression of shifted local gradients in the distributed network. Thus, the obtained gradient estimator gk in Line 8 of Algorithm 1 is substantially different and more complicated than the one in ANITA, which necessitates a novel proof technique. • Compared with DIANA [30, 9, 12], the extra momentum steps in Line 3 and 10 of Algorithm 1 make the analysis of CANITA more complicated than that of DIANA. We obtain the accelerated rate O (√ L ) rather than the non-accelerated rate O(L ) of DIANA, and this is impossible without a substantially different proof technique. • Compared with the accelerated DIANA method ADIANA of [26], the analysis of CANITA is also substantially different since CANITA cannot exploit the strong convexity assumed therein. Finally, please refer to Section 2 where we summarize our contributions for additional discussions. 5 Convergence Results for the CANITA Algorithm In this section, we provide convergence results for CANITA (Algorithm 1). In order to simplify the expressions appearing in our main result (see Theorem 1 in Section 5.1) and in the lemmas needed to prove it (see Appendix A), it will be convenient to let F t := f(wt)− f(x∗), Ht := 1 n n∑ i=1 ‖∇fi(wt)− hti‖2, Dt := 1 2 ‖xt − x∗‖2. (4) 5.1 Generic convergence result We first present the main convergence theorem of CANITA for solving the distributed optimization problem (1) in the general convex regime. Theorem 1 Suppose that Assumption 1 holds and the compression operators {Cti} used in Algorithm 1 satisfy (2) of Definition 2. For any two positive sequences {βt} and {γt} such that the probabilities {pt} and positive stepsizes {αt}, {ηt}, {θt} of Algorithm 1 satisfy the following relations αt ≤ 1 1 + ω , ηt ≤ 1 L ( 1 + βt + 4ptγt ( 1 + 2pt αt )) (5) for all t ≥ 0, and 2ω βtn +4ptγt ( 1+ 2pt αt ) ≤ 1−θt, (1− ptθt)ηt ptθ2t ≤ ηt−1 pt−1θ2t−1 , ( ω βtn + ( 1− αt 2 ) γt ) ηt θ2t ≤ γt−1ηt−1 θ2t−1 (6) for all t ≥ 1. Then the sequences {xt, wt, hti} of CANITA (Algorithm 1) for all t ≥ 0 satisfy the inequality E [ F t+1 + γtpt L Ht+1 ] ≤ θ 2 t pt ηt ( (1− θ0p0)η0 θ20p0 F 0 + ( ω β0n + ( 1− α0 2 ) γ0 ) η0 θ20L H0 +D0 ) , (7) where the quantities F t, Ht, Dt are defined in (4). The detailed proof of Theorem 1 which relies on six lemmas is provided in Appendix A. In particular, the proof simply follows from the key Lemma 6 (see Appendix A.2), while Lemma 6 closely relies on previous five Lemmas 1–5 (see Appendix C.6). Note that all proofs for these six lemmas are deferred to Appendix C. As we shall see in detail in Section 5.2, the sequences βt, γt, pt and αt can be fixed to some constants.3 However, the relaxation parameter θt needs to be decreasing and the stepsize ηt may be increasing until a certain threshold. In particular, we choose βt ≡ c1, γt ≡ c2, pt ≡ c3, αt ≡ c4, θt = c5 t+ c6 , ηt = min {( 1 + 1 t+ c7 ) ηt−1, 1 c8L } , (8) where the constants {ci} may depend on the compression parameter ω and the number of devices/machines n. As a result, the right hand side of (7) will be of the order O ( L t2 ) , which indicates an accelerated rate. Hence, in order to find an -solution of problem (1), i.e., vector wT+1 such that E [ f(wT+1)− f(x∗) ] (4) := E [ FT+1 ] ≤ , (9) the number of communication rounds of CANITA (Algorithm 1) is at most T = O (√ L ) . 
While the above rate has an accelerated dependence on , it will be crucial to study the omitted constants {ci} (see (8)), and in particular their dependence on the compression parameter ω and the number of devices/machines n. As expected, for any fixed target error > 0, the number of communication rounds T (sufficient to guarantee that (9) holds) may grow with increasing levels of compression, i.e., with increasing ω. However, at the same time, the communication cost in each round decreases with ω. It is easy to see that this trade-off benefits compression. In particular, as we mention in Section 2, if the number of devices n is large, or the compression variance ω is not very high, then compression does not hurt the accelerated rate of communication rounds at all. 5.2 Detailed convergence result We now formulate a concrete Theorem 2 from Theorem 1 which leads to a detailed convergence result for CANITA (Algorithm 1) by specifying the choice of the parameters βt, γt, pt, αt, θt and ηt. The detailed proof of Theorem 2 is deferred to Appendix B. Theorem 2 Suppose that Assumption 1 holds and the compression operators {Cti} used in Algorithm 1 satisfy (2) of Definition 2. Let b = min { ω, √ ω(1+ω)2 n } and choose the two positive 3Exception: While we indeed choose βt ≡ β for t ≥ 1, the value of β0 may be different. sequences {βt} and {γt} as follows: βt = { β0 = 9(1+b+ω)2 (1+b)L for t = 0 β ≡ 48ω(1+ω)(1+b+2(1+ω))n(1+b)2 for t ≥ 1 , γt = γ ≡ (1 + b)2 8(1 + b+ 2(1 + ω)) for t ≥ 0. (10) If we set the probabilities {pt} and positive stepsizes {αt}, {ηt}, {θt} of Algorithm 1 as follows: pt ≡ 1 1 + b , αt ≡ 1 1 + ω , θt = 3(1 + b) t+ 9(1 + b+ ω) , for t ≥ 0, (11) and ηt = { 1 L(β0+3/2) for t = 0 min {( 1 + 1t+9(1+b+ω) ) ηt−1, 1 L(β+3/2) } for t ≥ 1 . (12) Then CANITA (Algorithm 1) for all T ≥ 0 satisfies E [ FT+1 ] ≤ O ( (1 + √ ω3/n)L T 2 + ω3 T 3 ) . (13) According to (13), the number of communication rounds for CANITA (Algorithm 1) to find an -solution of the distributed problem (1), i.e., E [ f(wT+1)− f(x∗) ] (4) := E [ FT+1 ] ≤ , is at most T = O √(1 +√ω3 n ) L + ω ( 1 ) 1 3 . 6 Experiments In this section, we demonstrate the performance of our accelerated method CANITA (Algorithm 1) and previous methods QSGD and DIANA (the theoretical convergence results of these algorithms can be found in Table 1) with different compression operators on the logistic regression problem, min x∈Rd f(x) := 1 n n∑ i=1 log ( 1 + exp(−biaTi x) ) , (14) where {ai, bi}ni=1 ∈ Rd × {±1} are data samples. We use three standard datasets: a9a, mushrooms, and w8a in the experiments. All datasets are downloaded from LIBSVM [4]. Similar to Li et al. [26], we also use three different compression operators: random sparsification (e.g. [39]), natural compression (e.g. [8]), and random quantization (e.g. [1]). In particular, we follow the same settings as in Li et al. [26]. For random-r sparsification, the number of communicated bits per iteration is 32r, and we choose r = d/4. For natural compression, the number of communicated bits per iteration is 9d bits [8]. For random (2, s)-quantization, we choose s = √ d, which means the number of communicated bits per iteration is 2.8d+ 32 [1]. The default number of nodes/machines/workers is 20. In our experiments, we directly use the theoretical stepsizes and parameters for all three algorithms: QSGD [1, 24], DIANA [12], our CANITA (Algorithm 1). To compare with the settings of DIANA and CANITA, we use local gradients (not stochastic gradients) in QSGD. 
With local gradients, QSGD is equivalent to the DC-GD method provided in [24]. In Figures 1–3, we compare our CANITA with QSGD and DIANA under three compression operators: random sparsification (left), natural compression (middle), and random quantization (right) on three datasets: a9a (Figure 1), mushrooms (Figure 2), and w8a (Figure 3). The x-axis and y-axis represent the number of communication bits and the training loss, respectively. Regarding the different compression operators, the experimental results indicate that natural compression and random quantization are better than random sparsification for all three algorithms. For instance, in Figure 1, DIANA uses $1.5 \times 10^6$ (random sparsification), $1.0 \times 10^6$ (natural compression), and $0.4 \times 10^6$ (random quantization) communication bits, respectively, to achieve the loss 0.4. Moreover, regarding the different algorithms, the experimental results indeed show that our CANITA converges the fastest compared with both QSGD and DIANA for all three compressors in all of Figures 1–3, validating the theoretical results (see Table 1) and confirming the practical superiority of our accelerated CANITA method.

7 Conclusion

In this paper, we proposed CANITA: the first gradient method for distributed general convex optimization provably enjoying the benefits of both communication compression and convergence acceleration. There is very limited work on combining compression and acceleration; indeed, previous works focus only on the (much simpler) strongly convex setting. We hope that our novel algorithm and analysis can provide new insights and shed light on future work in this line of research. We leave further improvements to future work. For example, one may ask whether our approach can be combined with multiple local update steps [29, 38, 11, 10, 42] or with additional variance reduction techniques [9, 24], and to what extent our results can be extended to structured nonconvex problems [22, 19, 27, 21, 25, 6, 35, 5].
1. What is the main contribution of the paper regarding federated learning? 2. How does the reviewer assess the originality and significance of the proposed approach? 3. What are the strengths and weaknesses of the paper's technical aspects, particularly its theory and clarity? 4. What suggestions does the reviewer have for improving the paper, including numerical experiments and additional proofs?
Summary Of The Paper Review
Summary Of The Paper This paper studies a compressed and accelerated gradient method for nonstrongly convex federated learning. A state-of-the-art accelerated convergence rate is established. Review Originality: This paper combines the accelerated ANITA in [16] and the compressed DIANA in [23]. I think it is a novel combination of two well-known techniques. It also extends the accelerated and compressed gradient method in [19] from strongly convex problems to nonstrongly convex ones. The convergence rate improvement over previous results is compared clearly. Quality: The theory is technically solid. A state-of-the-art convergence rate for nonstrongly convex problems is proved. Clarity: This paper is written well. The proof is organized very clearly. Significance: This paper studies acceleration and compression in federated learning, which is a significant topic. A state-of-the-art convergence rate for nonstrongly convex problems is proved. The previous work on acceleration and compression all focuses on strongly convex problems. I think the result is significant. Major comment: The experiments are missing. I suggest including numerical experiments in the final version. This paper proposes a new method, rather than reanalyzing a widely used method, so I think numerical experiments are necessary. Minor comment: This paper studies nonstrongly convex problems, while [19] studied strongly convex ones. I am interested in how fast CANITA converges for strongly convex problems when adopting the proof techniques used in this paper. Is it faster than the one in [19]? The two rates in the last two lines of Table 1 look different when letting $\kappa = L/\epsilon$ in the first one. I suggest including the proof for strongly convex problems in the supplementary material so that other researchers can readily use the proof techniques in their work. For example, the authors may organize the proofs in a unified framework for both strongly convex and nonstrongly convex problems. I suggest moving the proof of Lemma 6 to the supplementary material and including the numerical experiments on page 8. The proof of Theorem 1 on page 16 is redundant, because it is already proved on page 9.
NIPS
Title Geometric Order Learning for Rank Estimation Abstract A novel approach to rank estimation, called geometric order learning (GOL), is proposed in this paper. First, we construct an embedding space, in which the direction and distance between objects represent order and metric relations between their ranks, by enforcing two geometric constraints: the order constraint compels objects to be sorted according to their ranks, while the metric constraint makes the distance between objects reflect their rank difference. Then, we perform the simple k nearest neighbor (k-NN) search in the embedding space to estimate the rank of a test object. Moreover, to assess the quality of embedding spaces for rank estimation, we propose a metric called discriminative ratio for ranking (DRR). Extensive experiments on facial age estimation, historical color image (HCI) classification, and aesthetic score regression demonstrate that GOL constructs effective embedding spaces and thus yields excellent rank estimation performances. The source codes are available at https://github.com/seon92/GOL 1 Introduction In rank estimation, we estimate the rank (or ordered class) of an object. It is different from ordinary classification, for its classes are arranged in a natural order. For example, in movie rating, classes can be ordered from ‘outstanding’ to ‘very good,’ ‘satisfactory,’ ‘unsatisfactory,’ and ‘poor.’ Rank estimation is a fundamental problem and, e.g., used for various computer vision tasks including facial age estimation (Shin et al., 2022), aesthetic quality assessment (Schifanella et al., 2015), and HCI classification (Palermo et al., 2012). For rank estimation, many techniques (Li et al., 2021; Liu et al., 2018) adopt the ordinal regression framework, which employs a classifier or a regressor to predict the rank of an object directly. However, they may fail to yield reliable estimates, for there is no clear distinction between ranks in many cases. For instance, in facial age estimation, the aging process — causing variations in facial shapes, sizes, and texture — has large individual differences due to factors such as genes, diet, and lifestyle. To address this issue, comparison-based algorithms (Lim et al., 2020; Lee & Kim, 2021; Li et al., 2014; Nguyen et al., 2018) have been proposed. Instead of predicting the rank directly, they learn a binary relation between objects, such as order or metric (Hrbacek & Jech, 1984). These relations provide useful information for rank estimation: an order indicates the relative priority between objects x and y, while a metric informs of the distance between them. In order learning (Lim et al., 2020; Lee & Kim, 2021), a comparator is learned to classify the relationship between x and y into one of three cases: x is ‘greater than,’ ‘similar to,’ or ‘smaller than’ y. Then, they estimate the rank of a test object by comparing it with multiple reference objects with known ranks. This approach is based on the idea that it is easier to predict ordering relationships between objects than to estimate the absolute ranks; telling the older one between two people is easier than estimating their exact ages. However, order learning disregards how much x is different from y. In other words, it ignores metric information. On the other hand, metric learning algorithms (Li et al., 2014; Nguyen et al., 2018) employ the triplet constraint on three objects (x, y, z).
It enforces the distance between x and y to be less than that between x and z in the embedding space if the ranks of (x, y, z) are in the increasing or decreasing order. By its design, the triplet constraint does not fully exploit the order among objects. Figures 1 (a) and (b) compare embedding spaces, obtained by order learning and metric learning, respectively. Order learning sorts instances according to ranks in general, but instances in each class are scattered in the embedding space. In contrast, metric learning reduces within-class scattering but does not sort instances properly. For example, both blue and orange ranks are adjacent to the yellow one, although they should be arranged in the order of blue, orange, and yellow. Unlike ordinary classification, different errors have different severities in rank estimation: misclassifying an object in the blue rank as a yellow one is severer than mistaking it for an orange one. Ordering in the embedding space is important to avoid such severe errors. We propose a novel algorithm, GOL, to estimate the rank of an object reliably by exploiting both order and metric relations. To this end, we construct an embedding space, in which the direction and distance between objects represent the order and metric relations between their ranks. For the construction, we formulate two geometric constraints in the embedding space: 1) the order constraint enforces the feature vectors of instances to be arranged according to their ranks, and 2) the metric constraint makes the distance between instances reflect their rank difference. To satisfy these two constraints simultaneously, we introduce reference points that guide the region of each rank in the embedding space. Then, we use the simple k-NN rule in the embedding space to estimate the rank of a test instance. Extensive experiments show that GOL constructs high-quality embedding spaces and thus provides excellent rank estimation performances. The contributions of this paper can be summarized as follows. • GOL is the first attempt to design an embedding space in which the direction and distance between objects represent their order and metric relations. • We introduce a novel metric, called DRR, to assess the quality of embedding spaces for rank estimation. Then, it is shown that GOL effectively sorts and separates instances according to their ranks in an embedding space, as illustrated in Figure 1 (c). • GOL achieves state-of-the-art performances on various benchmark datasets for facial age estimation, HCI classification, and aesthetic score regression. Specifically, GOL performs the best in 20 out of 25 benchmark tests. 2 Related Work Ordinal regression: Many ordinal regression methods have been developed to estimate the rank of an object directly using classifiers or regressors. Rothe et al. (2015) employed tens of classifiers to yield the average of their predictions as output. Yi et al. (2014) developed a regressor to estimate the rank of an image using multi-scale patches. Also, Frank & Hall (2001) employed multiple binary classifiers, each of which tells whether the rank of an object is higher than a series of thresholds or not. However, such direct estimation of ranks is challenging even for human beings in general; e.g., humans usually predict only a rough range of another one’s age with limited confidence. Thus, Diaz & Marathe (2019) trained a regressor using soft ordinal labels to alleviate penalties on close predictions. Furthermore, Li et al. (2021) and Li et al. 
(2022) modeled the uncertainty of each prediction as a Gaussian distribution. In contrast to these methods, the proposed algorithm provides more accurate rank estimates, although it uses only a single encoder network with the simple k-NN rule and makes no complicated probabilistic assumptions. Order learning: Lim et al. (2020) first proposed the notion of order learning, which learns ordering relationships between objects and determines the rank of an unseen object by comparing it with references with known ranks. It yields promising results because relative assessment is easier than absolute assessment in general. Lee & Kim (2021) improved the performance of order learning by finding more reliable references. They decomposed object information into an order-related feature and an identity feature and showed that objects with similar identity features can be compared more reliably. Also, Shin et al. (2022) extended the classification approach in (Lim et al., 2020; Lee & Kim, 2021) to a regression-based one. These order learning methods, however, require a significant computational cost to find reliable references from an entire training set. Moreover, for rank estimation, they should do comparisons with many references with different ranks because they consider only relative priorities between objects. On the contrary, the proposed algorithm simply carries out the k-NN search to yield outstanding rank estimation results. Metric learning: Metric learning aims to construct an embedding space in which the distance between objects reflects their semantic difference. Most metric learning algorithms (Schroff et al., 2015; Deng et al., 2019) are for ordinary classification, clustering, or image retrieval tasks. Therefore, they enforce an object to be located near other objects in the same class in the embedding space but far from objects in different classes. However, in rank estimation, this approach may be suboptimal because it does not consider ordinal relationships among classes. For example, in movie rating, it does not discriminate the class difference between ‘outstanding’ and ‘very good’ from that between ‘outstanding’ and ‘poor.’ To alleviate this problem, Xiao et al. (2009) designed a metric, called labeled distance, to measure semantic similarities between objects and attempted to preserve local semantic structures in the feature space. Moreover, to preserve the ordinal relationships, Li et al. (2012) developed a metric learning algorithm to make the distances between objects proportional to their rank differences. Also, Tian et al. (2016) employed a series of margins to explicitly impose different embedding distances according to rank differences. Suárez et al. (2021) attempted to sort the embedding distances between pairs of objects according to their rank differences. 3 Proposed Algorithm 3.1 Preliminary – Order and Metric Mathematically, both order and metric are binary relations (Hrbacek & Jech, 1984). An order (Schröder, 2003), denoted by ≤, on a set Θ = {θ0, θ1, . . . , θM−1} should satisfy the properties of • Reflexivity: θi ≤ θi for all i, • Antisymmetry: θi ≤ θj and θj ≤ θi imply θi = θj , • Transitivity: θi ≤ θj and θj ≤ θk imply θi ≤ θk. On the other hand, a metric (Rudin, 1991) is a distance function d satisfying • Nonnegativity: d(θi, θj) ≥ 0 for all i, j, and d(θi, θj) = 0 if and only if θi = θj , • Commutativity: d(θi, θj) = d(θj , θi) for all i, j, • Triangle inequality: d(θi, θk) ≤ d(θi, θj) + d(θj , θk) for all i, j, k. 
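As a small worked example (our own illustration, not taken from the paper), the absolute rank difference d(θi, θj) = |i − j|, which is how rank differences are measured below, satisfies all three metric axioms:

```python
from itertools import product

# Small check (our own illustration) that the absolute rank difference
# d(theta_i, theta_j) = |i - j| satisfies the three metric axioms above.
M = 8
d = lambda i, j: abs(i - j)

nonneg = all(d(i, j) >= 0 and ((d(i, j) == 0) == (i == j)) for i, j in product(range(M), repeat=2))
commut = all(d(i, j) == d(j, i) for i, j in product(range(M), repeat=2))
triang = all(d(i, k) <= d(i, j) + d(j, k) for i, j, k in product(range(M), repeat=3))
print(nonneg, commut, triang)   # True True True
```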
In rank estimation, an order describes the priorities of ranks or classes in the set Θ = {θ0, . . . , θM−1}, where each rank represents one or more object instances. For example, in age estimation, θi may represent i-year-olds, and θ17 < θ32 indicates that 17-year-olds are younger than 32-year-olds. Let θ(·) be the rank function, and let x and y be instances. Then, θ(x) = θ17 means that person x is 17 years old. Also, a metric describes the difference between ranks in Θ. For example, d(θ(x), θ(y)) = 15 means that two people x and y are 15 years apart. 3.2 Embedding Space Construction Given an object instance x, the objective is to estimate its rank θ(x). In such rank estimation, an order and a metric convey complementary information: the order provides directional information between ranks, while the metric provides length (or magnitude of difference) information. In age estimation, let us consider three age ranks θ2, θ17, and θ32. Since θ2 < θ17 < θ32, the order informs us that θ2 and θ32 are on opposite sides of θ17. On the other hand, since d(θ2, θ17) = d(θ32, θ17) = 15, the metric indicates how far θ2 and θ32 are from θ17. In this case, both lengths are identically 15. For rank estimation, the pairwise comparison methods (Lim et al., 2020; Lee & Kim, 2021; Nguyen et al., 2018) attempt to learn these relations. However, Lim et al. (2020) and Lee & Kim (2021) exploit the order relation only, whereas Nguyen et al. (2018) use the metric relation only to train their neural networks. Thus, the conventional methods may yield sub-optimal results. To learn both order and metric relations, we propose a geometric approach called GOL. It contains two types of geometric constraints that enforce directional (order) and distance (metric) relationships between object instances according to their ranks in an embedding space. Specifically, the order constraint sorts instances directionally according to the ranks, while the metric constraint separates two instances farther if their rank difference is larger. Figure 2 is an overview of GOL. Order constraint: Suppose that there are M ranks in a training set X. Without loss of generality, the ranks are assumed to be consecutive integers in Θ = {0, 1, . . . , M − 1}. In Figure 2, an encoder h maps each instance x ∈ X into a feature vector hx = h(x) in an embedding space. As h, we adopt VGG16 (Simonyan & Zisserman, 2015) without fully connected layers. The output of the last pooling layer is normalized so that $h_x^\top h_x = 1$. Thus, the embedding space is a unit hypersphere. As in Lim et al. (2020) and Lee & Kim (2021), we classify the ordering between two instances x and y in X into three categories:
$x \succ y$ if $\theta(x) - \theta(y) > \tau$, $\quad x \approx y$ if $|\theta(x) - \theta(y)| \le \tau$, $\quad x \prec y$ if $\theta(x) - \theta(y) < -\tau$, (1)
where τ is a threshold. For instance ordering, the notations ‘≺, ≈, ≻’ are used instead of ‘<, =, >.’ The order constraint encourages instances to be sorted according to their ordering relationships. In other words, for two instances x and y with ordering x ≺ y, the vector from hx to hy should be aligned with the direction of the rank increment in the embedding space. To model such rank directions, we introduce M reference points, r0, r1, . . . , rM−1, which are learnable parameters guiding the positions of the M ranks in the embedding space. These reference points are randomly initialized by the Glorot normal method (Glorot & Bengio, 2010) and jointly optimized with encoder parameters during training.
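For concreteness, a minimal sketch (our own, with a hypothetical embedding size and threshold, not the authors' code) of initializing the reference points on the unit hypersphere and classifying a pair of instances into the three categories of (1):

```python
import numpy as np

# Sketch (not the authors' code): initialize M reference points on the unit
# hypersphere and classify a pair of instances into the three categories of (1).
rng = np.random.default_rng(0)
M, dim, tau = 8, 512, 0          # hypothetical number of ranks, embedding size, threshold

ref = rng.normal(scale=np.sqrt(2.0 / (M + dim)), size=(M, dim))  # Glorot-style init (scale is our assumption)
ref /= np.linalg.norm(ref, axis=1, keepdims=True)                # project onto the unit hypersphere

def ordering(rank_x, rank_y, tau=tau):
    diff = rank_x - rank_y
    if diff > tau:
        return "x > y"      # x succ y
    if diff < -tau:
        return "x < y"      # x prec y
    return "x ~ y"          # x approx y

print(ordering(2, 5), ordering(5, 5), ordering(7, 1))
```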
Let us define the direction vector v(r, s) from point r to point s on the unit hypersphere as
$v(r, s) = (s - r)/\|s - r\|.$ (2)
Then, v(ri, rj) is called the rank direction from rank i to rank j. Also, the rank direction v(ri, rj) is forward if i < j, and backward if i > j. Note that forward directions may differ from one another, for they may represent different physical changes. For example, in age estimation, visual variations from 0-year-olds to 5-year-olds are mainly due to craniofacial development, whereas those from 45-year-olds to 50-year-olds are due to skin aging (Geng et al., 2007). If x ≺ y, we determine the forward and backward rank directions, respectively, by
$v_f = v(r_{\theta(x)}, r_{\theta(y)}),$ (3)
$v_b = v(r_{\theta(x)}, r_{\theta(x)-1}).$ (4)
Then, the encoder is trained so that the embedded features $h_x$ and $h_y$ satisfy the order constraint:
$x \prec y \;\Leftrightarrow\; v_f^\top v(h_x, h_y) > v_b^\top v(h_x, h_y).$ (5)
In other words, the direction vector $v(h_x, h_y)$ should be aligned more with the forward direction $v_f$ than with the backward direction $v_b$. To enforce the order constraint in (5), we compute the softmax probability $p_{xy} = [p_f^{xy}, p_b^{xy}]^\top$, where $p_f^{xy} = e^{v_f^\top v(h_x, h_y)} / (e^{v_f^\top v(h_x, h_y)} + e^{v_b^\top v(h_x, h_y)})$ and $p_b^{xy} = 1 - p_f^{xy}$. We then define the order loss $L_{\text{order}}$ as the cross entropy between $p_{xy}$ and $q_{xy} = [q_f^{xy}, q_b^{xy}]^\top = [1, 0]^\top$, given by
$L_{\text{order}} = -\big(q_f^{xy} \log p_f^{xy} + q_b^{xy} \log p_b^{xy}\big).$ (6)
The order loss for the case $x \succ y$ is formulated similarly in a symmetric manner. Metric constraint: Next, we formulate a metric constraint to make the distance between instances in the embedding space reflect their rank difference. Specifically, it is desirable that
$|\theta(x) - \theta(y)| > \tau \;\Leftrightarrow\; d_e(h_x, h_y) > \gamma$ (7)
where $d_e$ is the Euclidean distance in the embedding space, and $\gamma$ is a margin. Note that if $|\theta(x) - \theta(y)| > \tau$, then either $x \prec y$ or $x \succ y$ in (1). Hence, the metric constraint in (7) is equivalent to
$x \approx y \;\Leftrightarrow\; d_e(h_x, h_y) \le \gamma.$ (8)
From the triangle inequality, we have $|d_e(r_i, h_x) - d_e(r_i, h_y)| \le d_e(h_x, h_y)$ for every reference point $r_i$. So, if $x \approx y$, we also have $|d_e(r_i, h_x) - d_e(r_i, h_y)| \le \gamma$. To encourage this inequality, we define a loss $L_{x \approx y}$ as
$L_{x \approx y} = \sum_{i \in \Theta} \max(|d_e(r_i, h_x) - d_e(r_i, h_y)| - \gamma, 0).$ (9)
On the contrary, if $x \prec y$, it should be that $d_e(h_x, h_y) > \gamma$. Thus, we define another loss
$L_{x \prec y} = \sum_{i: i \le \theta(x)} \max(d_e(r_i, h_x) - d_e(r_i, h_y) + \gamma, 0) + \sum_{j: j \ge \theta(y)} \max(d_e(r_j, h_y) - d_e(r_j, h_x) + \gamma, 0).$ (10)
To minimize the first sum, $d_e(r_i, h_x)$ should be reduced, while $d_e(r_i, h_y)$ should be increased. Thus, the reference points $r_i$, $0 \le i \le \theta(x)$, are trained to attract $h_x$ and repel $h_y$, as illustrated in Figure 2. The second sum in (10) is similar. We do not use the reference points $r_l$, $\theta(x) < l < \theta(y)$, which tend to lie between $h_x$ and $h_y$ and are thus unhelpful for guiding them. The derivation of $L_{x \prec y}$ in (10) from the metric constraint in (7) is provided in Appendix B. Also, $L_{x \succ y}$ is formulated symmetrically. Then, we define the metric loss $L_{\text{metric}}$ as
$L_{\text{metric}} = [x \succ y] \cdot L_{x \succ y} + [x \approx y] \cdot L_{x \approx y} + [x \prec y] \cdot L_{x \prec y}$ (11)
where $[\cdot]$ is the indicator function. Note that, in (9) and (10), we make $h_x$ and $h_y$ attract or repel each other indirectly through the reference points $r_i$. This is because direct attraction or repulsion using $d_e(h_x, h_y)$ may cause $h_x$ and $h_y$ to move in arbitrary directions. Because the reference points are also trained with the order loss in (6), they provide proper directional guidance for the attraction or repulsion in (9) and (10). Experimental analysis on $L_{x \prec y}$ in (10) is available in Section 4.4.
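The following NumPy sketch (our own simplification; the embedding dimension, the margin γ, and the single-pair setup are assumptions, and gradients/backpropagation are omitted) computes the order loss (6) and the metric losses (9) and (10) for one pair with x ≺ y:

```python
import numpy as np

def unit(v):
    return v / (np.linalg.norm(v) + 1e-12)

def order_loss(h_x, h_y, ref, rank_x, rank_y):
    """Order loss (6) for a pair with rank_x < rank_y (x prec y); assumes rank_x >= 1."""
    v_f = unit(ref[rank_y] - ref[rank_x])            # forward rank direction (3)
    v_b = unit(ref[rank_x - 1] - ref[rank_x])        # backward rank direction (4)
    v_xy = unit(h_y - h_x)                           # direction between embeddings (2)
    logits = np.array([v_f @ v_xy, v_b @ v_xy])
    p = np.exp(logits - logits.max())
    p = p / p.sum()                                  # softmax over forward/backward
    return -np.log(p[0] + 1e-12)                     # cross entropy with target [1, 0]

def metric_loss_similar(h_x, h_y, ref, gamma):
    """L_{x ~ y} in (9): every reference point keeps |d(r_i, h_x) - d(r_i, h_y)| <= gamma."""
    dx = np.linalg.norm(ref - h_x, axis=1)
    dy = np.linalg.norm(ref - h_y, axis=1)
    return np.maximum(np.abs(dx - dy) - gamma, 0.0).sum()

def metric_loss_ordered(h_x, h_y, ref, rank_x, rank_y, gamma):
    """L_{x < y} in (10): low-rank references attract h_x and repel h_y, and vice versa."""
    dx = np.linalg.norm(ref - h_x, axis=1)
    dy = np.linalg.norm(ref - h_y, axis=1)
    low = np.arange(len(ref)) <= rank_x              # r_i with i <= theta(x)
    high = np.arange(len(ref)) >= rank_y             # r_j with j >= theta(y)
    return (np.maximum(dx[low] - dy[low] + gamma, 0.0).sum()
            + np.maximum(dy[high] - dx[high] + gamma, 0.0).sum())

# Toy usage with hypothetical dimensions.
rng = np.random.default_rng(0)
M, dim, gamma = 8, 16, 0.25
ref = np.stack([unit(v) for v in rng.standard_normal((M, dim))])
h_x, h_y = unit(rng.standard_normal(dim)), unit(rng.standard_normal(dim))
print(order_loss(h_x, h_y, ref, rank_x=2, rank_y=5))
print(metric_loss_ordered(h_x, h_y, ref, rank_x=2, rank_y=5, gamma=gamma))
```

In a full implementation these losses would be averaged over a mini-batch and minimized jointly with the encoder parameters and the reference points; here they are shown only for a single pair.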
Loss function: In addition to the losses for the order and metric constraints, we employ the center loss (Nguyen et al., 2018), which aims at locating each reference point $r_i$ at the center of all instances with rank $i$:
$L_{\text{center}} = d_e(r_{\theta(x)}, h_x) + d_e(r_{\theta(y)}, h_y).$ (12)
Finally, we use the overall loss function to optimize the encoder parameters and the reference points $r_i$, which is given by
$L_{\text{total}} = L_{\text{order}} + L_{\text{metric}} + L_{\text{center}}.$ (13)
It is worth pointing out that the reference points play essential roles in all three loss terms in (13). They are used to define the forward and backward rank directions in $L_{\text{order}}$, so they help to sort instances directionally according to the ranks. The reference points themselves are also sorted, since instances are clustered around the reference points because of $L_{\text{center}}$. Also, as mentioned before, the reference points provide proper directional guidance for the attraction or repulsion in $L_{\text{metric}}$ to satisfy the metric constraint. In other words, GOL uses the reference points to satisfy both order and metric constraints simultaneously and thus constructs a well-arranged, well-clustered embedding space.

3.3 k-NN Rank Estimation

For rank estimation, we use the simple k-NN rule. Given a test instance x, we find the set N of its k nearest neighbors among all training instances in X in the embedding space. Then, the rank of x is estimated by
$\hat{\theta}(x) = \frac{1}{k} \sum_{y \in N} \theta(y).$ (14)

4 Experimental Results

We conduct experiments on embedding space construction and rank estimation. Implementation details and more results are available in Appendices C and D, respectively.

4.1 Implementation

We initialize the encoder h with VGG16 pre-trained on ILSVRC2012 (Deng et al., 2009) and the reference points with the Glorot normal method (Glorot & Bengio, 2010). The Adam optimizer (Kingma & Ba, 2015) is used with a batch size of 32 and a weight decay of $5 \times 10^{-4}$, and the initial learning rates for the encoder and the reference points are set to $10^{-4}$ and $10^{-3}$, respectively. We perform scheduled learning according to cosine annealing cycles (Huang et al., 2017). For data augmentation, we do random horizontal flips only.

4.2 Embedding Spaces

The proposed GOL algorithm attempts to design an embedding space in which both order and metric constraints are satisfied: instances should be well sorted according to their ranks and well separated if they have big rank differences. To measure the quality of an embedding space for rank estimation, we may adopt the between-class variance to within-class variance (B2W) criterion in Fisher's linear discriminant (Duda et al., 2006),
$\mathrm{B2W} = \Big(\textstyle\sum_{i \in \Theta} |X_i| \, d_e^2(c_i, c)\Big) \Big/ \Big(\textstyle\sum_{i \in \Theta} \sum_{x \in X_i} d_e^2(h_x, c_i)\Big)$ (15)
where $X_i = \{x \in X \mid \theta(x) = i\}$, $c_i = \sum_{x \in X_i} h_x / |X_i|$ is the centroid for $X_i$, and $c = \sum_{x \in X} h_x / |X|$ is the centroid for all instances in $X$. A high B2W score indicates that the rank sets $X_i$, $0 \le i \le M-1$, are well separated from one another, while instances in each rank set are compactly distributed. However, B2W does not consider the ordinal relationships of ranks in $\Theta$, since it was formulated for ordinary classification. Therefore, to assess embedding spaces for rank estimation, we propose the discriminative ratio for ranking (DRR), given by
$\mathrm{DRR}_\beta = \dfrac{\sum_{i,j \in \Theta: i<j} (j-i)^\beta d_e(c_i, c_j) \,\big/\, \sum_{i,j \in \Theta: i<j} (j-i)^\beta}{\sum_{i \in \Theta} \sum_{x,y \in X_i: x \neq y} d_e(h_x, h_y) \,\big/\, \sum_{i \in \Theta} \sum_{x,y \in X_i: x \neq y} 1}$ (16)
which is the ratio of the average pairwise centroid distance to the average pairwise instance distance in each rank set.
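For concreteness, here is a small sketch (our own, not the released code) of the k-NN rank estimator (14) and the DRR_β score (16) operating on precomputed embeddings:

```python
import numpy as np

def knn_rank(h_test, train_embs, train_ranks, k=5):
    """k-NN rank estimate (14): average rank of the k nearest training embeddings."""
    dists = np.linalg.norm(train_embs - h_test, axis=1)
    nn = np.argsort(dists)[:k]
    return train_ranks[nn].mean()

def drr(embs, ranks, beta=1.0):
    """DRR_beta in (16): weighted centroid separation over within-rank spread."""
    labels = np.unique(ranks)
    centroids = {i: embs[ranks == i].mean(axis=0) for i in labels}
    num_w, num_s = 0.0, 0.0
    for a in range(len(labels)):
        for b in range(a + 1, len(labels)):
            i, j = labels[a], labels[b]
            w = (j - i) ** beta                       # ordinal weight in the numerator
            num_w += w * np.linalg.norm(centroids[i] - centroids[j])
            num_s += w
    den_d, den_c = 0.0, 0.0
    for i in labels:
        Xi = embs[ranks == i]
        for a in range(len(Xi)):
            for b in range(len(Xi)):
                if a != b:
                    den_d += np.linalg.norm(Xi[a] - Xi[b])
                    den_c += 1
    return (num_w / num_s) / (den_d / den_c)

# Toy usage with random embeddings (hypothetical sizes).
rng = np.random.default_rng(0)
embs = rng.standard_normal((60, 16))
ranks = np.repeat(np.arange(6), 10)
print(knn_rank(rng.standard_normal(16), embs, ranks, k=5))
print(drr(embs, ranks, beta=1.0))
```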
Note that, in the numerator, ordinal weights (j − i)β are used to emphasize the difference between a pair of rank sets with a large rank difference. Here, β is a nonnegative parameter for controlling the level of emphasis. A high DRR score is obtained when instances are well sorted and well separated according to their ranks in the embedding space. Table 1 compares the B2W and DRR scores on the MORPH II (Ricanek & Tesafaye, 2006), CACD (Chen et al., 2015), and Adience (Levi & Hassner, 2015) train data. To compare the embedding spaces as fairly as possible, we use the same encoder backbone of VGG16 for all algorithms. GOL significantly outperforms all conventional algorithms in all tests. In GOL, instances within each rank set Xi are located compactly around reference point ri by Lcenter in (12), while those in different rank sets are well separated by Lmetric in (11). Hence, GOL yields outstanding B2W scores. Moreover, GOL excels with a larger score gap in DRR1.0 than in DRR0.5, which means that it arranges the rank sets effectively in the embedding space to reflect their ordinal relationships using Lorder in (6). Figure 3 visualizes the embedding spaces on Adience. As in Figure 1, we add a fully connected layer with 3 output neurons to each encoder for the visualization. In (a) and (b), instances are sorted but scattered over the spaces, as the order learning algorithms (Lim et al., 2020; Shin et al., 2022) ignore the metric relation of ranks. In (c), instances in each rank set are well clustered, but the rank sets are not sorted because ML (Schroff et al., 2015) neglects the order relation. In (d), MV (Pan et al., 2018), which is an ordinal regressor, exhibits large within-class scattering, as well as large between-class scattering. In contrast, in (e), GOL constructs a well-sorted, well-clustered embedding space. 4.3 Rank Estimation Since GOL constructs high-quality embedding spaces, it provides excellent rank estimation performances even with the simple k-NN rule. Facial age estimation: We use four datasets of MORPH II (Ricanek & Tesafaye, 2006), CACD (Chen et al., 2015), UTK (Zhang et al., 2017), and Adience (Levi & Hassner, 2015), as detailed in Appendix D.3. In Table 2, we compare the performances in the four evaluation settings of MORPH II, which is one of the most popular datasets in age estimation. We use the mean absolute error (MAE) and cumulative score (CS) metrics. MAE is the average absolute error between estimated and ground-truth ages (i.e. ranks), and CS computes the percentage of images whose absolute errors are less than or equal to a tolerance level l = 5. Note that all algorithms in Table 2 use VGG16 as the encoder backbones, except for C3AE employing a shallow CNN. As for Shin et al. (2022), the scores of the global ρ-regressor are compared since their local scheme employs as many as six independent VGG16 encoders recursively. GOL performs the best in 5 out of 8 tests, including setting C, which is the most challenging task. Furthermore, unlike most conventional algorithms, GOL does not do the IMDB-WIKI pretraining to boost performances. Even without the pretraining, i.e. by employing only MORPH II training data, GOL yields outstanding results in Table 2. Table 3 compares the results on CACD, which is a bigger dataset containing over 100,000 natural face shots in diverse environments. 
GOL outperforms the second-best methods with meaningful gaps of 0.12 and 0.17 in the train and validation settings, respectively, which indicates that it can cope with large and diverse data effectively as well. Table 3 also compares the results on UTK. Note that Gustafsson et al. (2020) and Berg et al. (2021) employ the deep ResNet50 network (He et al., 2016) as their encoders, and MWR-G (Shin et al., 2022) predicts ranks using a complicated, recursive regressor. In contrast, GOL uses the shallower VGG16 encoder and performs rank estimation based on the simple k-NN search. Nevertheless, it yields an excellent result, for its geometric constraints help to construct an effective embedding space. Table 4 shows the results on Adience, where each image is labeled as one of 8 age groups. Compared with the second-best MWR-G, GOL improves the accuracy by 0.3% and reduces MAE by 0.03. HCI classification: The HCI dataset (Palermo et al., 2012) is used for estimating the shooting decade of a photograph. It contains 1,325 images from five decades 1930s ∼ 1970s. Table 4 lists the results on HCI. GOL yields the best performances as well, by improving the accuracy by 1.5% and reducing MAE by 0.05 as compared with the second-best methods. This means that GOL yields reliable results even for a small dataset, which may be unfavorable for the k-NN estimation. Aesthetic score regression: The aesthetics dataset (Schifanella et al., 2015) contains 15,687 images in four categories. Each image is annotated with a 5-scale aesthetic score. In Table 5, we follow the experimental setting in (Liu et al., 2018). It is challenging to estimate aesthetic scores reliably due to the subjectivity and ambiguity of aesthetic criteria, but GOL performs the best in 8 out of 10 tests. Compared with the state-of-the-art POE (Li et al., 2021), GOL improves the accuracy by 0.3% and reduces MAE by 0.1 overall. 4.4 Analysis Ablation study: Table 6 compares ablated methods for the loss function in (13). Method I employs Lcenter only. In II and III, Lmetric and Lorder are excluded, respectively. Compared with IV (GOL), method I degrades the performances severely, since the center loss alone cannot construct a meaningful embedding space; a trivial solution to minimize Lcenter can be obtained by merging all instances and all reference points into a single point. Also, from II and IV, we see that the metric constraint significantly improves the performances by separating instances with different ranks from each other. From III and IV, we see that the order constraint also improves the results by arranging instances directionally according to their ranks. To summarize, both order and metric constraints improve the results and are complementary to each other. Figure 4 (a)∼(d) show the embedding spaces for the ablated methods I∼IV in Table 6, respectively. The top row visualizes the reduced 3D embedding spaces, as described in Appendix C.2, while the bottom row does the original 512D embedding spaces via t-SNE (Maaten & Hinton, 2008). In (a), only Lcenter is used, so the 3D embedding space almost collapses to a single point. In (b), instances are directionally aligned, but adjacent rank sets overlap one another. In (c) and (d), the 3D embedding spaces seem similar. However, GOL yields more clearly ordered instances in the t-SNE visualization of the original embedding space than ‘Lcenter + Lmetric’ does. Alternatives to Lx≺y: Table 7 compares alternative loss terms for Lx≺y in (10). 
Method I directly increases $d_e(h_x, h_y)$ to make it larger than γ. Method II uses only the two reference points $r_{\theta(x)}$ and $r_{\theta(y)}$. In other words, unlike method III (i.e. GOL), it does not employ the reference points $r_i$, $0 \le i < \theta(x)$, and $r_j$, $\theta(y) < j \le M - 1$. For each method, $L_{x \succ y}$ is also modified accordingly. Method I degrades the performances badly, as $h_x$ and $h_y$ move in arbitrary directions. Also, GOL performs better than II because attraction and repulsion with many reference points facilitate the positioning of instances in the embedding space, as well as of the reference points themselves. Embedding space transition: Figure 5 visualizes the transition of an embedding space for Adience. As the GOL training step goes on, we see that instances are gradually sorted and separated according to their ranks to satisfy the geometric constraints. After the convergence, for each rank, the reference point $r_i$ and the centroid $c_i$ are close to each other, indicating that instances are clustered around their reference points. Moreover, those reference points are well sorted as well. Comparison with order learning: The conventional order learning algorithms (Lim et al., 2020; Lee & Kim, 2021; Shin et al., 2022) provide competitive ranking performances, but they require relatively heavy computations. Table 8 lists the testing times. To estimate the rank of a test instance, OL uses five references for each of the M ranks; e.g., 300 comparisons should be made if M = 60 as in typical age estimation. Moreover, OL and DRC-ORID need additional processes, such as MAP estimation, to estimate a rank based on ordering relationships. Also, MWR-G estimates the rank by comparing a test instance with multiple reference pairs recursively. In contrast, GOL simply performs the k-NN estimation, which can be done efficiently in a parallel manner, and thus is about 160 and 23 times faster than OL and MWR-G, respectively. Furthermore, the order learning algorithms should select references among a training set X through a complicated optimization process; e.g., OL compares all possible pairs of training instances with $O(|X|^2)$ complexity to select the most reliable references, and DRC-ORID (Lee & Kim, 2021) performs joint network training and clustering for reference selection. On the contrary, GOL requires no such process. Also, in Table 8, the proposed algorithm demands the fewest parameters because it directly estimates the rank of an instance in the embedding space through the k-NN search, whereas the order learning algorithms should adopt comparators or regressors tailored for each task. 5 Conclusions The GOL algorithm for rank estimation was proposed in this work. First, we construct an embedding space based on the two geometric constraints, which enforce the direction and distance between instances to represent the order and metric relations between their ranks. Then, we perform the simple k-NN search in the embedding space for rank estimation. Extensive experiments on various rank estimation tasks demonstrated that GOL constructs high-quality embedding spaces and yields excellent rank estimation results. Acknowledgments This work was conducted under the Center for Applied Research in Artificial Intelligence (CARAI) grant funded by DAPA and ADD (UD190031RD) and supported by the NRF grants funded by the Korea government (MSIT) (No. NRF-2021R1A4A1031864 and No. NRF-2022R1A2B5B03002310).
1. What is the novelty and contribution of GOL embedding space compared to traditional ranking methods like SIFT-Rank? 2. How does GOL enforce the two geometric constraints - order/rank constraint and metric constraint reflecting rank difference? 3. Can you provide more information about the DRR metric and its relation to traditional mathematical methods defining rank and order geometry? 4. Can you explain how GOL performs rank estimation using kNN, and what is the significance of the discriminative ratio for ranking (DRR)? 5. How does the reviewer assess the strengths and weaknesses of the paper regarding its claims, experiments, and comparisons with other works?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper In a GOL embedding space, the direction and distance between objects represent order and metric relations between their ranks, enforced by two geometric constraints: 1) the order/rank constraint and 2) a metric constraint that reflects rank differences. The rank of a test object is estimated by kNN, and a metric called discriminative ratio for ranking (DRR) assesses the quality of embedding spaces for rank estimation. Experiments on facial age estimation, historical color image (HCI) classification, and aesthetic score regression demonstrate that GOL constructs effective embedding spaces and yields good rank estimation performance. Strengths And Weaknesses It seems GOL is the first attempt to design an embedding space in which the direction and distance between objects represent their order and metric relations. The GOL algorithm performs best in 80% of the benchmark tests for facial age estimation, HCI classification, and aesthetic score regression. It is not clear how the rank and the DRR metric here relate to traditional (mathematical) methods that define rank and order geometry without learning. For example, ranked gradient orientation features were used with much success for head age estimation and classification prior to this work. E.g., please see SIFT-Rank, SIFT being the "scale-invariant feature transform" (the state of the art before GPU-based deep CNN learning), and Rank being the order statistics of the SIFT descriptor (equivalent to uniformly orientation-sampled gradient filtering). https://ieeexplore.ieee.org/document/5206849 Toews, M., Wells, W.M. and Zöllei, L., 2012, October. A feature-based developmental model of the infant brain in structural MRI. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 204-211). Springer, Berlin, Heidelberg. Questions It would be interesting to know the link to traditional ranking theories such as SIFT-Rank. In Figure 1 (c), are we seeing ordered objects arranged in a "band" around a hypersphere? Limitations There are no obvious limitations, except that demonstration on more datasets and tasks would strengthen the paper.
NIPS
Title Geometric Order Learning for Rank Estimation Abstract A novel approach to rank estimation, called geometric order learning (GOL), is proposed in this paper. First, we construct an embedding space, in which the direction and distance between objects represent order and metric relations between their ranks, by enforcing two geometric constraints: the order constraint compels objects to be sorted according to their ranks, while the metric constraint makes the distance between objects reflect their rank difference. Then, we perform the simple k nearest neighbor (k-NN) search in the embedding space to estimate the rank of a test object. Moreover, to assess the quality of embedding spaces for rank estimation, we propose a metric called discriminative ratio for ranking (DRR). Extensive experiments on facial age estimation, historical color image (HCI) classification, and aesthetic score regression demonstrate that GOL constructs effective embedding spaces and thus yields excellent rank estimation performances. The source codes are available at https://github.com/seon92/GOL 1 Introduction In rank estimation, we estimate the rank (or ordered class) of an object. It is different from ordinary classification, for its classes are arranged in a natural order. For example, in movie rating, classes can be ordered from ‘outstanding’ to ‘very good,’ ‘satisfactory,’ ‘unsatisfactory,’ and ‘poor.’ Rank estimation is a fundamental problem and, e.g., used for various computer vision tasks including facial age estimation (Shin et al., 2022), aesthetic quality assessment (Schifanella et al., 2015), and HCI classification (Palermo et al., 2012). For rank estimation, many techniques (Li et al., 2021; Liu et al., 2018) adopt the ordinal regression framework, which employs a classifier or a regressor to predict the rank of an object directly. However, they may fail to yield reliable estimates, for there is no clear distinction between ranks in many cases. For instance, in facial age estimation, the aging process — causing variations in facial shapes, sizes, and texture — has large individual differences due to factors such as genes, diet, and lifestyle. To address this issue, comparison-based algorithms (Lim et al., 2020; Lee & Kim, 2021; Li et al., 2014; Nguyen et al., 2018) have been proposed. Instead of predicting the rank directly, they learn a binary relation between objects, such as order or metric (Hrbacek & Jech, 1984). These relations provide useful information for rank estimation: an order indicates the relative priority between objects x and y, while a metric informs of the distance between them. In order learning (Lim et al., 2020; Lee & Kim, 2021), a comparator is learned to classify the relationship between x and y into one of three cases: x is ‘greater than,’ ‘similar to,’ or ‘smaller than’ y. Then, they estimate the rank of a test object by comparing it with multiple reference objects with known ranks. This approach is based on the idea that it is easier to predict ordering relationships between objects than to estimate the absolute ranks; telling the older one between two people is easier than estimating their exact ages. However, order learning disregards how much x is different from y. In other words, it ignores metric information. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). On the other hand, metric learning algorithms (Li et al., 2014; Nguyen et al., 2018) employ the triplet constraint on three objects (x, y, z). 
It enforces the distance between x and y to be less than that between x and z in the embedding space if the ranks of (x, y, z) are in the increasing or decreasing order. By its design, the triplet constraint does not fully exploit the order among objects. Figures 1 (a) and (b) compare embedding spaces, obtained by order learning and metric learning, respectively. Order learning sorts instances according to ranks in general, but instances in each class are scattered in the embedding space. In contrast, metric learning reduces within-class scattering but does not sort instances properly. For example, both blue and orange ranks are adjacent to the yellow one, although they should be arranged in the order of blue, orange, and yellow. Unlike ordinary classification, different errors have different severities in rank estimation: misclassifying an object in the blue rank as a yellow one is severer than mistaking it for an orange one. Ordering in the embedding space is important to avoid such severe errors. We propose a novel algorithm, GOL, to estimate the rank of an object reliably by exploiting both order and metric relations. To this end, we construct an embedding space, in which the direction and distance between objects represent the order and metric relations between their ranks. For the construction, we formulate two geometric constraints in the embedding space: 1) the order constraint enforces the feature vectors of instances to be arranged according to their ranks, and 2) the metric constraint makes the distance between instances reflect their rank difference. To satisfy these two constraints simultaneously, we introduce reference points that guide the region of each rank in the embedding space. Then, we use the simple k-NN rule in the embedding space to estimate the rank of a test instance. Extensive experiments show that GOL constructs high-quality embedding spaces and thus provides excellent rank estimation performances. The contributions of this paper can be summarized as follows. • GOL is the first attempt to design an embedding space in which the direction and distance between objects represent their order and metric relations. • We introduce a novel metric, called DRR, to assess the quality of embedding spaces for rank estimation. Then, it is shown that GOL effectively sorts and separates instances according to their ranks in an embedding space, as illustrated in Figure 1 (c). • GOL achieves state-of-the-art performances on various benchmark datasets for facial age estimation, HCI classification, and aesthetic score regression. Specifically, GOL performs the best in 20 out of 25 benchmark tests. 2 Related Work Ordinal regression: Many ordinal regression methods have been developed to estimate the rank of an object directly using classifiers or regressors. Rothe et al. (2015) employed tens of classifiers to yield the average of their predictions as output. Yi et al. (2014) developed a regressor to estimate the rank of an image using multi-scale patches. Also, Frank & Hall (2001) employed multiple binary classifiers, each of which tells whether the rank of an object is higher than a series of thresholds or not. However, such direct estimation of ranks is challenging even for human beings in general; e.g., humans usually predict only a rough range of another one’s age with limited confidence. Thus, Diaz & Marathe (2019) trained a regressor using soft ordinal labels to alleviate penalties on close predictions. Furthermore, Li et al. (2021) and Li et al. 
(2022) modeled the uncertainty of each prediction as a Gaussian distribution. In contrast to these methods, the proposed algorithm provides more accurate rank estimates, although it uses only a single encoder network with the simple k-NN rule and makes no complicated probabilistic assumptions. Order learning: Lim et al. (2020) first proposed the notion of order learning, which learns ordering relationships between objects and determines the rank of an unseen object by comparing it with references with known ranks. It yields promising results because relative assessment is easier than absolute assessment in general. Lee & Kim (2021) improved the performance of order learning by finding more reliable references. They decomposed object information into an order-related feature and an identity feature and showed that objects with similar identity features can be compared more reliably. Also, Shin et al. (2022) extended the classification approach in (Lim et al., 2020; Lee & Kim, 2021) to a regression-based one. These order learning methods, however, require a significant computational cost to find reliable references from an entire training set. Moreover, for rank estimation, they should do comparisons with many references with different ranks because they consider only relative priorities between objects. On the contrary, the proposed algorithm simply carries out the k-NN search to yield outstanding rank estimation results. Metric learning: Metric learning aims to construct an embedding space in which the distance between objects reflects their semantic difference. Most metric learning algorithms (Schroff et al., 2015; Deng et al., 2019) are for ordinary classification, clustering, or image retrieval tasks. Therefore, they enforce an object to be located near other objects in the same class in the embedding space but far from objects in different classes. However, in rank estimation, this approach may be suboptimal because it does not consider ordinal relationships among classes. For example, in movie rating, it does not discriminate the class difference between ‘outstanding’ and ‘very good’ from that between ‘outstanding’ and ‘poor.’ To alleviate this problem, Xiao et al. (2009) designed a metric, called labeled distance, to measure semantic similarities between objects and attempted to preserve local semantic structures in the feature space. Moreover, to preserve the ordinal relationships, Li et al. (2012) developed a metric learning algorithm to make the distances between objects proportional to their rank differences. Also, Tian et al. (2016) employed a series of margins to explicitly impose different embedding distances according to rank differences. Suárez et al. (2021) attempted to sort the embedding distances between pairs of objects according to their rank differences. 3 Proposed Algorithm 3.1 Preliminary – Order and Metric Mathematically, both order and metric are binary relations (Hrbacek & Jech, 1984). An order (Schröder, 2003), denoted by ≤, on a set Θ = {θ0, θ1, . . . , θM−1} should satisfy the properties of • Reflexivity: θi ≤ θi for all i, • Antisymmetry: θi ≤ θj and θj ≤ θi imply θi = θj , • Transitivity: θi ≤ θj and θj ≤ θk imply θi ≤ θk. On the other hand, a metric (Rudin, 1991) is a distance function d satisfying • Nonnegativity: d(θi, θj) ≥ 0 for all i, j, and d(θi, θj) = 0 if and only if θi = θj , • Commutativity: d(θi, θj) = d(θj , θi) for all i, j, • Triangle inequality: d(θi, θk) ≤ d(θi, θj) + d(θj , θk) for all i, j, k. 
In rank estimation, an order describes the priorities of ranks or classes in the set Θ = {θ0, . . . , θM−1}, where each rank represents one or more object instances. For example, in age estimation, θi may represent i-year-olds, and θ17 < θ32 indicates that 17-year-olds are younger than 32-yearolds. Let θ(·) be the rank function, and let x and y be instances. Then, θ(x) = θ17 means that person x is 17-year-old. Also, a metric describes the difference between ranks in Θ. For example, d(θ(x), θ(y)) = 15 means that two people x and y are 15 years apart. 3.2 Embedding Space Construction Given an object instance x, the objective is to estimate its rank θ(x). In such rank estimation, an order and a metric convey complementary information: the order provides directional information between ranks, while the metric does length (or magnitude of difference) information. In age estimation, let us consider three age ranks θ2, θ17, and θ32. Since θ2 < θ17 < θ32, the order informs that θ2 and θ32 are at the opposite sides with respect to θ17. On the other hand, since d(θ2, θ17) = d(θ32, θ17) = 15, the metric indicates how far θ2 and θ32 are from θ17. In this case, both lengths are identically 15. For rank estimation, the pairwise comparison methods (Lim et al., 2020; Lee & Kim, 2021; Nguyen et al., 2018) attempt to learn these relations. However, Lim et al. (2020) and Lee & Kim (2021) exploit the order relation only, whereas Nguyen et al. (2018) use the metric relation only to train their neural networks. Thus, the conventional methods may yield sub-optimal results. To learn both order and metric relations, we propose a geometric approach called GOL. It contains two types of geometric constraints that enforce directional (order) and distance (metric) relationships between object instances according to their ranks in an embedding space. Specifically, the order constraint sorts instances directionally according to the ranks, while the metric constraint separates two instances farther if their rank difference is larger. Figure 2 is an overview of GOL. Order constraint: Suppose that there are M ranks in a training set X . Without loss of generality, the ranks are assumed to be consecutive integers in Θ = {0, 1, . . . ,M − 1}. In Figure 2, an encoder h maps each instance x ∈ X into a feature vector hx = h(x) in an embedding space. As h, we adopt VGG16 (Simonyan & Zisserman, 2015) without fully connected layers. The output of the last pooling layer is normalized so that htxhx = 1. Thus, the embedding space is a unit hypersphere. As in Lim et al. (2020) and Lee & Kim (2021), we classify the ordering between two instances x and y in X into three categories: x y if θ(x)− θ(y) > τ, x ≈ y if |θ(x)− θ(y)| ≤ τ, x ≺ y if θ(x)− θ(y) < −τ, (1) where τ is a threshold. For instance ordering, notations ‘≺,≈, ’ are used instead of ‘<,=, >.’ The order constraint encourages instances to be sorted according to their ordering relationships. In other words, for two instances x and y with ordering x ≺ y, the vector from hx to hy should be aligned with the direction of the rank increment in the embedding space. To model such rank directions, we introduce M reference points, r0, r1, . . . , rM−1, which are learnable parameters guiding the positions of the M ranks in the embedding space. These reference points are randomly initialized by the Glorot normal method (Glorot & Bengio, 2010) and jointly optimized with encoder parameters during training. 
Let us define the direction vector v(r, s) from point r to point s on the unit hypersphere as v(r, s) = (s− r)/‖s− r‖. (2) Then, v(ri, rj) is called the rank direction from rank i to rank j. Also, the rank direction v(ri, rj) is forward if i < j, and backward if i > j. Note that forward directions may differ from one another, for they may represent different physical changes. For example, in age estimation, visual variations from 0-years-old to 5-years-old are mainly due to craniofacial development, whereas those from 45-years-old to 50-years-old are due to skin aging (Geng et al., 2007). If x ≺ y, we determine the forward and backward rank directions, respectively, by vf = v(rθ(x), rθ(y)), (3) vb = v(rθ(x), rθ(x)−1). (4) Then, the encoder is trained so that the embedded features hx and hy satisfy the order constraint: x ≺ y ⇔ vtfv(hx, hy) > vtbv(hx, hy). (5) In other words, the direction vector v(hx, hy) should be aligned more with the forward direction vf than with the backward direction vb. To enforce the order constraint in (5), we compute the softmax probability pxy = [pxyf , p xy b ] t, where pxyf = e vtf v(hx,hy)/(ev t f v(hx,hy) + ev t bv(hx,hy)) and pxyb = 1 − p xy f . We then define the order loss Lorder as the cross entropy between pxy and qxy = [q xy f , q xy b ] t = [1, 0]t, given by Lorder = q xy f log p xy f + q xy b log p xy b . (6) The order loss for case x y is formulated similarly in a symmetric manner. Metric constraint: Next, we formulate a metric constraint to make the distance between instances in the embedding space reflect their rank difference. Specifically, it is desirable that |θ(x)− θ(y)| > τ ⇔ de(hx, hy) > γ (7) where de is the Euclidean distance in the embedding space, and γ is a margin. Note that if |θ(x)− θ(y)| > τ , either x ≺ y or x y in (1). Hence, the metric constraint in (7) is equivalent to x ≈ y ⇔ de(hx, hy) ≤ γ (8) From the triangle inequality, we have |de(ri, hx) − de(ri, hy)| ≤ de(hx, hy) for every reference point ri. So, if x ≈ y, we also have |de(ri, hx)− de(ri, hy)| ≤ γ. To encourage this inequality, we define a loss Lx≈y as Lx≈y = ∑ i∈Θ max(|de(ri, hx)− de(ri, hy)| − γ, 0). (9) On the contrary, if x ≺ y, it should be that de(hx, hy) > γ. Thus, we define another loss Lx≺y = ∑ i:i≤θ(x) max(de(ri, hx)−de(ri, hy)+γ, 0)+ ∑ j:j≥θ(y) max(de(rj , hy)−de(rj , hx)+γ, 0). (10) To minimize the first sum, de(ri, hx) should be reduced, while de(ri, hy) should be increased. Thus, reference points ri, 0 ≤ i ≤ θ(x), are trained to attract hx and repel hy, as illustrated in Figure 2. The second sum in (10) is similar. We do not use reference points rl, θ(x) < l < θ(y), which tend to be between hx and hy and unhelpful for guiding them. The derivation of Lx≺y in (10) from the metric constraint in (7) is provided in Appendix B. Also, Lx y is formulated symmetrically. Then, we define the metric loss Lmetric as Lmetric = [x y] · Lx y + [x ≈ y] · Lx≈y + [x ≺ y] · Lx≺y (11) where [·] is the indicator function. Note that, in (9) or (10), we make hx and hy attract or repel each other indirectly through reference points ri. This is because the direct attraction or repulsion using de(hx, hy) may cause hx and hy to move in arbitrary directions. Because the reference points are trained also with the order loss in (6), they provide proper directional guidance of attraction or repulsion in (9) or (10). Experimental analysis on Lx≺y in (10) is available in Section 4.4. 
Loss function: In addition to the losses for the order and metric constraints, we employ the center loss (Nguyen et al., 2018), which aims at locating each reference point ri at the center of all instances with rank i. Lcenter = de(rθ(x), hx) + de(rθ(y), hy). (12) Finally, we use the overall loss function to optimize the encoder parameters and the reference points ri, which is given by Ltotal = Lorder + Lmetric + Lcenter. (13) It is worth pointing out that reference points play essential roles in all three loss terms in (13). They are used to define forward and backward rank directions in Lorder, so they help to sort instances directionally according to the ranks. The reference points themselves are also sorted since instances are clustered around the reference points because of Lcenter. Also, as mentioned before, the reference points provide proper directional guidance of attraction or repulsion in Lmetric to satisfy the metric constraint. In other words, GOL uses the reference points to satisfy both order and metric constraints simultaneously and thus constructs a well-arranged, well-clustered embedding space. 3.3 k-NN Rank Estimation For rank estimation, we use the simple k-NN rule. Given a test instance x, in the embedding space, we find a set N of its k NNs among all training instances in X . Then, the rank of x is estimated by θ̂(x) = 1 k ∑ y∈N θ(y). (14) 4 Experimental Results We conduct experiments on embedding space construction and rank estimation. Implementation details and more results are available in Appendices C and D, respectively. 4.1 Implementation We initialize an encoder h with VGG16 pre-trained on ILSVRC2012 (Deng et al., 2009) and reference points with the Glorot normal method (Glorot & Bengio, 2010). The Adam optimizer (Kingma & Ba, 2015) is used with a batch size of 32 and a weight decay of 5 × 10−4, and the initial learning rates for the encoder and the reference points are set to 10−4 and 10−3, respectively. We perform the scheduled learning according to cosine annealing cycles (Huang et al., 2017). For data augmentation, we do random horizontal flips only. 4.2 Embedding Spaces The proposed GOL algorithm attempts to design an embedding space in which both order and metric constraints are satisfied: instances should be well sorted according to their ranks and well separated if they have big rank differences. To measure the quality of an embedding space for rank estimation, we may adopt the between-class variance to within-class variance (B2W) criterion in Fisher’s linear discriminant (Duda et al., 2006), B2W = (∑ i∈Θ |Xi|d2e(ci, c) ) / (∑ i∈Θ ∑ x∈Xi d2e(hx, ci) ) (15) whereXi = {x ∈ X | θ(x) = i}, ci = ∑ x∈Xi hx/|Xi| is the centroid forXi, and c = ∑ x∈X hx/|X | is the centroid for all instances inX . A high B2W score indicates that the rank setsXi, 0 ≤ i ≤M−1, are well separated from one another, while instances in each rank set are compactly distributed. However, B2W does not consider the ordinal relationships of ranks in Θ since it was formulated for ordinary classification. Therefore, to assess embedding spaces for rank estimation, we propose the discriminative ratio for ranking (DRR), given by DRRβ = ∑ i,j∈Θ:i<j(j − i)βde(ci, cj)/ ∑ i,j∈Θ:i<j(j − i)β∑ i∈Θ ∑ x,y∈Xi:x 6=y de(hx, hy)/ ∑ i∈Θ ∑ x,y∈Xi:x6=y 1 (16) which is the ratio of the average pairwise centroid distance to the average pairwise instance distance in each rank set. 
Table 1 compares the B2W and DRR scores on the MORPH II (Ricanek & Tesafaye, 2006), CACD (Chen et al., 2015), and Adience (Levi & Hassner, 2015) training data. To compare the embedding spaces as fairly as possible, we use the same VGG16 encoder backbone for all algorithms. GOL significantly outperforms all conventional algorithms in all tests. In GOL, instances within each rank set X_i are located compactly around the reference point r_i by L_center in (12), while those in different rank sets are well separated by L_metric in (11). Hence, GOL yields outstanding B2W scores. Moreover, GOL excels with a larger score gap in DRR_{1.0} than in DRR_{0.5}, which means that it arranges the rank sets effectively in the embedding space to reflect their ordinal relationships using L_order in (6). Figure 3 visualizes the embedding spaces on Adience. As in Figure 1, we add a fully connected layer with 3 output neurons to each encoder for the visualization. In (a) and (b), instances are sorted but scattered over the spaces, as the order learning algorithms (Lim et al., 2020; Shin et al., 2022) ignore the metric relation of ranks. In (c), instances in each rank set are well clustered, but the rank sets are not sorted, because ML (Schroff et al., 2015) neglects the order relation. In (d), MV (Pan et al., 2018), which is an ordinal regressor, exhibits large within-class scattering as well as large between-class scattering. In contrast, in (e), GOL constructs a well-sorted, well-clustered embedding space.
4.3 Rank Estimation
Since GOL constructs high-quality embedding spaces, it provides excellent rank estimation performance even with the simple k-NN rule.
Facial age estimation: We use the four datasets MORPH II (Ricanek & Tesafaye, 2006), CACD (Chen et al., 2015), UTK (Zhang et al., 2017), and Adience (Levi & Hassner, 2015), as detailed in Appendix D.3. In Table 2, we compare the performances in the four evaluation settings of MORPH II, which is one of the most popular datasets in age estimation. We use the mean absolute error (MAE) and cumulative score (CS) metrics. MAE is the average absolute error between estimated and ground-truth ages (i.e., ranks), and CS computes the percentage of images whose absolute errors are less than or equal to a tolerance level l = 5. Note that all algorithms in Table 2 use VGG16 as the encoder backbone, except for C3AE, which employs a shallow CNN. As for Shin et al. (2022), the scores of the global ρ-regressor are compared, since their local scheme employs as many as six independent VGG16 encoders recursively. GOL performs the best in 5 out of 8 tests, including setting C, which is the most challenging task. Furthermore, unlike most conventional algorithms, GOL does not use IMDB-WIKI pretraining to boost performance. Even without the pretraining, i.e., by employing only the MORPH II training data, GOL yields outstanding results in Table 2. Table 3 compares the results on CACD, a bigger dataset containing over 100,000 natural face shots in diverse environments.
GOL outperforms the second-best methods with meaningful gaps of 0.12 and 0.17 in the train and validation settings, respectively, which indicates that it can cope with large and diverse data effectively as well. Table 3 also compares the results on UTK. Note that Gustafsson et al. (2020) and Berg et al. (2021) employ the deep ResNet50 network (He et al., 2016) as their encoders, and MWR-G (Shin et al., 2022) predicts ranks using a complicated, recursive regressor. In contrast, GOL uses the shallower VGG16 encoder and performs rank estimation based on the simple k-NN search. Nevertheless, it yields an excellent result, for its geometric constraints help to construct an effective embedding space. Table 4 shows the results on Adience, where each image is labeled as one of 8 age groups. Compared with the second-best MWR-G, GOL improves the accuracy by 0.3% and reduces MAE by 0.03. HCI classification: The HCI dataset (Palermo et al., 2012) is used for estimating the shooting decade of a photograph. It contains 1,325 images from five decades 1930s ∼ 1970s. Table 4 lists the results on HCI. GOL yields the best performances as well, by improving the accuracy by 1.5% and reducing MAE by 0.05 as compared with the second-best methods. This means that GOL yields reliable results even for a small dataset, which may be unfavorable for the k-NN estimation. Aesthetic score regression: The aesthetics dataset (Schifanella et al., 2015) contains 15,687 images in four categories. Each image is annotated with a 5-scale aesthetic score. In Table 5, we follow the experimental setting in (Liu et al., 2018). It is challenging to estimate aesthetic scores reliably due to the subjectivity and ambiguity of aesthetic criteria, but GOL performs the best in 8 out of 10 tests. Compared with the state-of-the-art POE (Li et al., 2021), GOL improves the accuracy by 0.3% and reduces MAE by 0.1 overall. 4.4 Analysis Ablation study: Table 6 compares ablated methods for the loss function in (13). Method I employs Lcenter only. In II and III, Lmetric and Lorder are excluded, respectively. Compared with IV (GOL), method I degrades the performances severely, since the center loss alone cannot construct a meaningful embedding space; a trivial solution to minimize Lcenter can be obtained by merging all instances and all reference points into a single point. Also, from II and IV, we see that the metric constraint significantly improves the performances by separating instances with different ranks from each other. From III and IV, we see that the order constraint also improves the results by arranging instances directionally according to their ranks. To summarize, both order and metric constraints improve the results and are complementary to each other. Figure 4 (a)∼(d) show the embedding spaces for the ablated methods I∼IV in Table 6, respectively. The top row visualizes the reduced 3D embedding spaces, as described in Appendix C.2, while the bottom row does the original 512D embedding spaces via t-SNE (Maaten & Hinton, 2008). In (a), only Lcenter is used, so the 3D embedding space almost collapses to a single point. In (b), instances are directionally aligned, but adjacent rank sets overlap one another. In (c) and (d), the 3D embedding spaces seem similar. However, GOL yields more clearly ordered instances in the t-SNE visualization of the original embedding space than ‘Lcenter + Lmetric’ does. Alternatives to Lx≺y: Table 7 compares alternative loss terms for Lx≺y in (10). 
Method I directly increases d_e(h_x, h_y) to make it larger than γ. Method II uses only the two reference points r_{θ(x)} and r_{θ(y)}. In other words, unlike method III (i.e., GOL), it does not employ the reference points r_i, 0 ≤ i < θ(x), and r_j, θ(y) < j ≤ M − 1. For each method, L_{x≻y} is also modified accordingly. Method I degrades the performance badly, as h_x and h_y move in arbitrary directions. Also, GOL performs better than II because attraction and repulsion with many reference points facilitate the positioning of instances in the embedding space, as well as of the reference points themselves.
Embedding space transition: Figure 5 visualizes the transition of an embedding space for Adience. As the GOL training proceeds, we see that instances are gradually sorted and separated according to their ranks to satisfy the geometric constraints. After convergence, for each rank, the reference point r_i and the centroid c_i are close to each other, indicating that instances are clustered around their reference points. Moreover, those reference points are well sorted as well.
Comparison with order learning: The conventional order learning algorithms (Lim et al., 2020; Lee & Kim, 2021; Shin et al., 2022) provide competitive ranking performance, but they require relatively heavy computations. Table 8 lists the testing times. To estimate the rank of a test instance, OL uses five references for each of the M ranks; e.g., 300 comparisons should be made if M = 60, as in typical age estimation. Moreover, OL and DRC-ORID need additional processes, such as MAP estimation, to estimate a rank based on ordering relationships. Also, MWR-G estimates the rank by comparing a test instance with multiple reference pairs recursively. In contrast, GOL simply performs the k-NN estimation, which can be done efficiently in a parallel manner, and thus is about 160 and 23 times faster than OL and MWR-G, respectively. Furthermore, the order learning algorithms should select references from the training set X through a complicated optimization process; e.g., OL compares all possible pairs of training instances with O(|X|^2) complexity to select the most reliable references, and DRC-ORID (Lee & Kim, 2021) performs joint network training and clustering for reference selection. On the contrary, GOL requires no such process. Also, in Table 8, the proposed algorithm demands the fewest parameters because it directly estimates the rank of an instance in the embedding space through the k-NN search, whereas the order learning algorithms should adopt comparators or regressors tailored for each task.
5 Conclusions
The GOL algorithm for rank estimation was proposed in this work. First, we construct an embedding space based on two geometric constraints, which enforce the direction and distance between instances to represent the order and metric relations between their ranks. Then, we perform the simple k-NN search in the embedding space for rank estimation. Extensive experiments on various rank estimation tasks demonstrated that GOL constructs high-quality embedding spaces and yields excellent rank estimation results.
Acknowledgments
This work was conducted by the Center for Applied Research in Artificial Intelligence (CARAI) grant funded by DAPA and ADD (UD190031RD) and supported by the NRF grants funded by the Korea government (MSIT) (No. NRF-2021R1A4A1031864 and No. NRF-2022R1A2B5B03002310).
1. What is the focus and contribution of the paper on rank estimation? 2. What are the strengths of the proposed approach, particularly in imposing geometric constraints? 3. What are the weaknesses of the paper, especially regarding the selection of reference points and fairness of comparisons? 4. Do you have any concerns about the effectiveness and robustness of the proposed method? 5. Are there any limitations or potential drawbacks of the proposed approach that should be acknowledged?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper introduces the geometric order learning (GOL) method for rank estimation, which enforces two geometric constraints: the order constraint and the metric constraint. The order constraint enforces the feature vectors of instances to be arranged according to their ranks, and the metric constraint makes the distance between instances reflect their rank difference. The paper also proposes the discriminative ratio for ranking metric to assess the quality of embedding spaces for rank estimation. Extensive experiments demonstrate that GOL constructs effective embedding spaces and yields excellent rank estimation performances. Strengths And Weaknesses Strengths: A geometric order learning (GOL) method for rank estimation is proposed by enforcing the order constraint and the metric constraint. A discriminative ratio for ranking metric is introduced to assess the quality of embedding spaces for rank estimation. Experiments are conducted to demonstrate that GOL constructs effective embedding spaces and yields excellent rank estimation performances. Weaknesses: The proposed GOL method relies on reference points; it is not clear how to select these reference points and how they affect the performance of GOL. As the network architectures of the different methods in Table 1 are different, is the comparison fair? Questions The proposed GOL method relies on reference points; it is not clear how to select these reference points and how they affect the performance of GOL. As the network architectures of the different methods in Table 1 are different, is the comparison fair? Limitations Yes
NIPS
Title Geometric Order Learning for Rank Estimation Abstract A novel approach to rank estimation, called geometric order learning (GOL), is proposed in this paper. First, we construct an embedding space, in which the direction and distance between objects represent order and metric relations between their ranks, by enforcing two geometric constraints: the order constraint compels objects to be sorted according to their ranks, while the metric constraint makes the distance between objects reflect their rank difference. Then, we perform the simple k nearest neighbor (k-NN) search in the embedding space to estimate the rank of a test object. Moreover, to assess the quality of embedding spaces for rank estimation, we propose a metric called discriminative ratio for ranking (DRR). Extensive experiments on facial age estimation, historical color image (HCI) classification, and aesthetic score regression demonstrate that GOL constructs effective embedding spaces and thus yields excellent rank estimation performances. The source codes are available at https://github.com/seon92/GOL 1 Introduction In rank estimation, we estimate the rank (or ordered class) of an object. It is different from ordinary classification, for its classes are arranged in a natural order. For example, in movie rating, classes can be ordered from ‘outstanding’ to ‘very good,’ ‘satisfactory,’ ‘unsatisfactory,’ and ‘poor.’ Rank estimation is a fundamental problem and, e.g., used for various computer vision tasks including facial age estimation (Shin et al., 2022), aesthetic quality assessment (Schifanella et al., 2015), and HCI classification (Palermo et al., 2012). For rank estimation, many techniques (Li et al., 2021; Liu et al., 2018) adopt the ordinal regression framework, which employs a classifier or a regressor to predict the rank of an object directly. However, they may fail to yield reliable estimates, for there is no clear distinction between ranks in many cases. For instance, in facial age estimation, the aging process — causing variations in facial shapes, sizes, and texture — has large individual differences due to factors such as genes, diet, and lifestyle. To address this issue, comparison-based algorithms (Lim et al., 2020; Lee & Kim, 2021; Li et al., 2014; Nguyen et al., 2018) have been proposed. Instead of predicting the rank directly, they learn a binary relation between objects, such as order or metric (Hrbacek & Jech, 1984). These relations provide useful information for rank estimation: an order indicates the relative priority between objects x and y, while a metric informs of the distance between them. In order learning (Lim et al., 2020; Lee & Kim, 2021), a comparator is learned to classify the relationship between x and y into one of three cases: x is ‘greater than,’ ‘similar to,’ or ‘smaller than’ y. Then, they estimate the rank of a test object by comparing it with multiple reference objects with known ranks. This approach is based on the idea that it is easier to predict ordering relationships between objects than to estimate the absolute ranks; telling the older one between two people is easier than estimating their exact ages. However, order learning disregards how much x is different from y. In other words, it ignores metric information. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). On the other hand, metric learning algorithms (Li et al., 2014; Nguyen et al., 2018) employ the triplet constraint on three objects (x, y, z). 
It enforces the distance between x and y to be less than that between x and z in the embedding space if the ranks of (x, y, z) are in the increasing or decreasing order. By its design, the triplet constraint does not fully exploit the order among objects. Figures 1 (a) and (b) compare embedding spaces, obtained by order learning and metric learning, respectively. Order learning sorts instances according to ranks in general, but instances in each class are scattered in the embedding space. In contrast, metric learning reduces within-class scattering but does not sort instances properly. For example, both blue and orange ranks are adjacent to the yellow one, although they should be arranged in the order of blue, orange, and yellow. Unlike ordinary classification, different errors have different severities in rank estimation: misclassifying an object in the blue rank as a yellow one is severer than mistaking it for an orange one. Ordering in the embedding space is important to avoid such severe errors. We propose a novel algorithm, GOL, to estimate the rank of an object reliably by exploiting both order and metric relations. To this end, we construct an embedding space, in which the direction and distance between objects represent the order and metric relations between their ranks. For the construction, we formulate two geometric constraints in the embedding space: 1) the order constraint enforces the feature vectors of instances to be arranged according to their ranks, and 2) the metric constraint makes the distance between instances reflect their rank difference. To satisfy these two constraints simultaneously, we introduce reference points that guide the region of each rank in the embedding space. Then, we use the simple k-NN rule in the embedding space to estimate the rank of a test instance. Extensive experiments show that GOL constructs high-quality embedding spaces and thus provides excellent rank estimation performances. The contributions of this paper can be summarized as follows. • GOL is the first attempt to design an embedding space in which the direction and distance between objects represent their order and metric relations. • We introduce a novel metric, called DRR, to assess the quality of embedding spaces for rank estimation. Then, it is shown that GOL effectively sorts and separates instances according to their ranks in an embedding space, as illustrated in Figure 1 (c). • GOL achieves state-of-the-art performances on various benchmark datasets for facial age estimation, HCI classification, and aesthetic score regression. Specifically, GOL performs the best in 20 out of 25 benchmark tests. 2 Related Work Ordinal regression: Many ordinal regression methods have been developed to estimate the rank of an object directly using classifiers or regressors. Rothe et al. (2015) employed tens of classifiers to yield the average of their predictions as output. Yi et al. (2014) developed a regressor to estimate the rank of an image using multi-scale patches. Also, Frank & Hall (2001) employed multiple binary classifiers, each of which tells whether the rank of an object is higher than a series of thresholds or not. However, such direct estimation of ranks is challenging even for human beings in general; e.g., humans usually predict only a rough range of another one’s age with limited confidence. Thus, Diaz & Marathe (2019) trained a regressor using soft ordinal labels to alleviate penalties on close predictions. Furthermore, Li et al. (2021) and Li et al. 
(2022) modeled the uncertainty of each prediction as a Gaussian distribution. In contrast to these methods, the proposed algorithm provides more accurate rank estimates, although it uses only a single encoder network with the simple k-NN rule and makes no complicated probabilistic assumptions. Order learning: Lim et al. (2020) first proposed the notion of order learning, which learns ordering relationships between objects and determines the rank of an unseen object by comparing it with references with known ranks. It yields promising results because relative assessment is easier than absolute assessment in general. Lee & Kim (2021) improved the performance of order learning by finding more reliable references. They decomposed object information into an order-related feature and an identity feature and showed that objects with similar identity features can be compared more reliably. Also, Shin et al. (2022) extended the classification approach in (Lim et al., 2020; Lee & Kim, 2021) to a regression-based one. These order learning methods, however, require a significant computational cost to find reliable references from an entire training set. Moreover, for rank estimation, they should do comparisons with many references with different ranks because they consider only relative priorities between objects. On the contrary, the proposed algorithm simply carries out the k-NN search to yield outstanding rank estimation results. Metric learning: Metric learning aims to construct an embedding space in which the distance between objects reflects their semantic difference. Most metric learning algorithms (Schroff et al., 2015; Deng et al., 2019) are for ordinary classification, clustering, or image retrieval tasks. Therefore, they enforce an object to be located near other objects in the same class in the embedding space but far from objects in different classes. However, in rank estimation, this approach may be suboptimal because it does not consider ordinal relationships among classes. For example, in movie rating, it does not discriminate the class difference between ‘outstanding’ and ‘very good’ from that between ‘outstanding’ and ‘poor.’ To alleviate this problem, Xiao et al. (2009) designed a metric, called labeled distance, to measure semantic similarities between objects and attempted to preserve local semantic structures in the feature space. Moreover, to preserve the ordinal relationships, Li et al. (2012) developed a metric learning algorithm to make the distances between objects proportional to their rank differences. Also, Tian et al. (2016) employed a series of margins to explicitly impose different embedding distances according to rank differences. Suárez et al. (2021) attempted to sort the embedding distances between pairs of objects according to their rank differences. 3 Proposed Algorithm 3.1 Preliminary – Order and Metric Mathematically, both order and metric are binary relations (Hrbacek & Jech, 1984). An order (Schröder, 2003), denoted by ≤, on a set Θ = {θ0, θ1, . . . , θM−1} should satisfy the properties of • Reflexivity: θi ≤ θi for all i, • Antisymmetry: θi ≤ θj and θj ≤ θi imply θi = θj , • Transitivity: θi ≤ θj and θj ≤ θk imply θi ≤ θk. On the other hand, a metric (Rudin, 1991) is a distance function d satisfying • Nonnegativity: d(θi, θj) ≥ 0 for all i, j, and d(θi, θj) = 0 if and only if θi = θj , • Commutativity: d(θi, θj) = d(θj , θi) for all i, j, • Triangle inequality: d(θi, θk) ≤ d(θi, θj) + d(θj , θk) for all i, j, k. 
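As a toy illustration of these axioms, the absolute rank difference d(θ_i, θ_j) = |i − j|, which matches the notion of rank difference used in the examples that follow, satisfies all three metric properties; a quick numerical check (purely illustrative):

```python
def rank_metric(i, j):
    """Absolute rank difference d(theta_i, theta_j) = |i - j| on Theta = {0, ..., M-1}."""
    return abs(i - j)

M = 5
for i in range(M):
    for j in range(M):
        assert rank_metric(i, j) >= 0                      # nonnegativity
        assert (rank_metric(i, j) == 0) == (i == j)        # zero iff equal ranks
        assert rank_metric(i, j) == rank_metric(j, i)      # commutativity
        for k in range(M):
            assert rank_metric(i, k) <= rank_metric(i, j) + rank_metric(j, k)  # triangle inequality
```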
In rank estimation, an order describes the priorities of ranks or classes in the set Θ = {θ_0, . . . , θ_{M−1}}, where each rank represents one or more object instances. For example, in age estimation, θ_i may represent i-year-olds, and θ_17 < θ_32 indicates that 17-year-olds are younger than 32-year-olds. Let θ(·) be the rank function, and let x and y be instances. Then, θ(x) = θ_17 means that person x is 17 years old. Also, a metric describes the difference between ranks in Θ. For example, d(θ(x), θ(y)) = 15 means that two people x and y are 15 years apart.
3.2 Embedding Space Construction
Given an object instance x, the objective is to estimate its rank θ(x). In such rank estimation, an order and a metric convey complementary information: the order provides directional information between ranks, while the metric provides length (or magnitude of difference) information. In age estimation, let us consider three age ranks θ_2, θ_17, and θ_32. Since θ_2 < θ_17 < θ_32, the order informs us that θ_2 and θ_32 are on opposite sides of θ_17. On the other hand, since d(θ_2, θ_17) = d(θ_32, θ_17) = 15, the metric indicates how far θ_2 and θ_32 are from θ_17. In this case, both lengths are identically 15. For rank estimation, the pairwise comparison methods (Lim et al., 2020; Lee & Kim, 2021; Nguyen et al., 2018) attempt to learn these relations. However, Lim et al. (2020) and Lee & Kim (2021) exploit the order relation only, whereas Nguyen et al. (2018) use the metric relation only to train their neural networks. Thus, the conventional methods may yield sub-optimal results. To learn both order and metric relations, we propose a geometric approach called GOL. It contains two types of geometric constraints that enforce directional (order) and distance (metric) relationships between object instances according to their ranks in an embedding space. Specifically, the order constraint sorts instances directionally according to their ranks, while the metric constraint separates two instances farther if their rank difference is larger. Figure 2 is an overview of GOL.
Order constraint: Suppose that there are M ranks in a training set X. Without loss of generality, the ranks are assumed to be consecutive integers in Θ = {0, 1, . . . , M − 1}. In Figure 2, an encoder h maps each instance x ∈ X into a feature vector h_x = h(x) in an embedding space. As h, we adopt VGG16 (Simonyan & Zisserman, 2015) without fully connected layers. The output of the last pooling layer is normalized so that h_x^T h_x = 1. Thus, the embedding space is a unit hypersphere. As in Lim et al. (2020) and Lee & Kim (2021), we classify the ordering between two instances x and y in X into three categories:
x ≻ y if θ(x) − θ(y) > τ,  x ≈ y if |θ(x) − θ(y)| ≤ τ,  x ≺ y if θ(x) − θ(y) < −τ, (1)
where τ is a threshold. For instance ordering, the notations '≺, ≈, ≻' are used instead of '<, =, >.' The order constraint encourages instances to be sorted according to their ordering relationships. In other words, for two instances x and y with ordering x ≺ y, the vector from h_x to h_y should be aligned with the direction of the rank increment in the embedding space. To model such rank directions, we introduce M reference points, r_0, r_1, . . . , r_{M−1}, which are learnable parameters guiding the positions of the M ranks in the embedding space. These reference points are randomly initialized by the Glorot normal method (Glorot & Bengio, 2010) and jointly optimized with the encoder parameters during training.
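The three-way ordering in (1) is all the pair-level supervision needed for training. A minimal sketch of the labeling rule (the default threshold and the returned labels are illustrative assumptions):

```python
def ordering_category(rank_x, rank_y, tau=0):
    """Three-way ordering between instances x and y, Eq. (1)."""
    diff = rank_x - rank_y
    if diff > tau:
        return 'succeeds'   # x comes after y in rank order (x > y)
    if diff < -tau:
        return 'precedes'   # x comes before y in rank order (x < y)
    return 'similar'        # |rank difference| <= tau
```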
Let us define the direction vector v(r, s) from point r to point s on the unit hypersphere as
v(r, s) = (s − r) / ‖s − r‖. (2)
Then, v(r_i, r_j) is called the rank direction from rank i to rank j. Also, the rank direction v(r_i, r_j) is forward if i < j, and backward if i > j. Note that forward directions may differ from one another, for they may represent different physical changes. For example, in age estimation, visual variations from 0-year-olds to 5-year-olds are mainly due to craniofacial development, whereas those from 45-year-olds to 50-year-olds are due to skin aging (Geng et al., 2007). If x ≺ y, we determine the forward and backward rank directions, respectively, by
v_f = v(r_{θ(x)}, r_{θ(y)}), (3)
v_b = v(r_{θ(x)}, r_{θ(x)−1}). (4)
Then, the encoder is trained so that the embedded features h_x and h_y satisfy the order constraint:
x ≺ y ⇔ v_f^T v(h_x, h_y) > v_b^T v(h_x, h_y). (5)
In other words, the direction vector v(h_x, h_y) should be aligned more with the forward direction v_f than with the backward direction v_b. To enforce the order constraint in (5), we compute the softmax probability p^{xy} = [p^{xy}_f, p^{xy}_b]^T, where p^{xy}_f = e^{v_f^T v(h_x, h_y)} / (e^{v_f^T v(h_x, h_y)} + e^{v_b^T v(h_x, h_y)}) and p^{xy}_b = 1 − p^{xy}_f. We then define the order loss L_order as the cross entropy between p^{xy} and q^{xy} = [q^{xy}_f, q^{xy}_b]^T = [1, 0]^T, given by
L_order = −q^{xy}_f log p^{xy}_f − q^{xy}_b log p^{xy}_b. (6)
The order loss for the case x ≻ y is formulated similarly in a symmetric manner.
Metric constraint: Next, we formulate a metric constraint to make the distance between instances in the embedding space reflect their rank difference. Specifically, it is desirable that
|θ(x) − θ(y)| > τ ⇔ d_e(h_x, h_y) > γ, (7)
where d_e is the Euclidean distance in the embedding space and γ is a margin. Note that if |θ(x) − θ(y)| > τ, either x ≺ y or x ≻ y by (1). Hence, the metric constraint in (7) is equivalent to
x ≈ y ⇔ d_e(h_x, h_y) ≤ γ. (8)
From the triangle inequality, we have |d_e(r_i, h_x) − d_e(r_i, h_y)| ≤ d_e(h_x, h_y) for every reference point r_i. So, if x ≈ y, we also have |d_e(r_i, h_x) − d_e(r_i, h_y)| ≤ γ. To encourage this inequality, we define a loss L_{x≈y} as
L_{x≈y} = Σ_{i∈Θ} max(|d_e(r_i, h_x) − d_e(r_i, h_y)| − γ, 0). (9)
On the contrary, if x ≺ y, it should be that d_e(h_x, h_y) > γ. Thus, we define another loss
L_{x≺y} = Σ_{i: i≤θ(x)} max(d_e(r_i, h_x) − d_e(r_i, h_y) + γ, 0) + Σ_{j: j≥θ(y)} max(d_e(r_j, h_y) − d_e(r_j, h_x) + γ, 0). (10)
To minimize the first sum, d_e(r_i, h_x) should be reduced, while d_e(r_i, h_y) should be increased. Thus, the reference points r_i, 0 ≤ i ≤ θ(x), are trained to attract h_x and repel h_y, as illustrated in Figure 2. The second sum in (10) is similar. We do not use the reference points r_l, θ(x) < l < θ(y), which tend to lie between h_x and h_y and are unhelpful for guiding them. The derivation of L_{x≺y} in (10) from the metric constraint in (7) is provided in Appendix B. Also, L_{x≻y} is formulated symmetrically. Then, we define the metric loss L_metric as
L_metric = [x ≻ y] · L_{x≻y} + [x ≈ y] · L_{x≈y} + [x ≺ y] · L_{x≺y}, (11)
where [·] is the indicator function. Note that, in (9) and (10), we make h_x and h_y attract or repel each other indirectly through the reference points r_i. This is because direct attraction or repulsion using d_e(h_x, h_y) may cause h_x and h_y to move in arbitrary directions. Because the reference points are also trained with the order loss in (6), they provide proper directional guidance for the attraction or repulsion in (9) and (10). Experimental analysis of L_{x≺y} in (10) is available in Section 4.4.
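To illustrate the metric constraint, here is a minimal PyTorch-style sketch of the hinge losses in (9) and (10); the tensor shapes and names are illustrative assumptions rather than the released implementation.

```python
import torch

def metric_loss_similar(h_x, h_y, refs, gamma):
    # L_{x≈y}, Eq. (9): hinge on |d_e(r_i, h_x) - d_e(r_i, h_y)| over all reference points
    d_x = torch.norm(refs - h_x, dim=1)
    d_y = torch.norm(refs - h_y, dim=1)
    return torch.clamp((d_x - d_y).abs() - gamma, min=0).sum()

def metric_loss_prec(h_x, h_y, refs, rank_x, rank_y, gamma):
    # L_{x≺y}, Eq. (10): references with index <= rank_x attract h_x and repel h_y,
    # and references with index >= rank_y do the opposite.
    d_x = torch.norm(refs - h_x, dim=1)
    d_y = torch.norm(refs - h_y, dim=1)
    low = torch.clamp(d_x[: rank_x + 1] - d_y[: rank_x + 1] + gamma, min=0).sum()
    high = torch.clamp(d_y[rank_y:] - d_x[rank_y:] + gamma, min=0).sum()
    return low + high
```

The indicator-weighted combination in (11) then selects which of these terms is active for each training pair.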
Loss function: In addition to the losses for the order and metric constraints, we employ the center loss (Nguyen et al., 2018), which aims at locating each reference point r_i at the center of all instances with rank i,
L_center = d_e(r_{θ(x)}, h_x) + d_e(r_{θ(y)}, h_y). (12)
Finally, we optimize the encoder parameters and the reference points r_i with the overall loss function
L_total = L_order + L_metric + L_center. (13)
It is worth pointing out that the reference points play essential roles in all three loss terms in (13). They are used to define the forward and backward rank directions in L_order, so they help to sort instances directionally according to their ranks. The reference points themselves are also sorted, since instances are clustered around the reference points because of L_center. Also, as mentioned before, the reference points provide proper directional guidance for the attraction or repulsion in L_metric to satisfy the metric constraint. In other words, GOL uses the reference points to satisfy both order and metric constraints simultaneously and thus constructs a well-arranged, well-clustered embedding space.
3.3 k-NN Rank Estimation
For rank estimation, we use the simple k-NN rule. Given a test instance x, we find the set N of its k nearest neighbors among all training instances in X in the embedding space. Then, the rank of x is estimated by
θ̂(x) = (1/k) Σ_{y∈N} θ(y). (14)
4 Experimental Results
We conduct experiments on embedding space construction and rank estimation. Implementation details and more results are available in Appendices C and D, respectively.
4.1 Implementation
We initialize the encoder h with VGG16 pre-trained on ILSVRC2012 (Deng et al., 2009) and the reference points with the Glorot normal method (Glorot & Bengio, 2010). The Adam optimizer (Kingma & Ba, 2015) is used with a batch size of 32 and a weight decay of 5 × 10^{-4}, and the initial learning rates for the encoder and the reference points are set to 10^{-4} and 10^{-3}, respectively. We perform scheduled learning with cosine annealing cycles (Huang et al., 2017). For data augmentation, we use random horizontal flips only.
4.2 Embedding Spaces
The proposed GOL algorithm attempts to design an embedding space in which both order and metric constraints are satisfied: instances should be well sorted according to their ranks and well separated if they have large rank differences. To measure the quality of an embedding space for rank estimation, we may adopt the between-class variance to within-class variance (B2W) criterion from Fisher's linear discriminant (Duda et al., 2006),
B2W = (Σ_{i∈Θ} |X_i| d_e^2(c_i, c)) / (Σ_{i∈Θ} Σ_{x∈X_i} d_e^2(h_x, c_i)), (15)
where X_i = {x ∈ X | θ(x) = i}, c_i = Σ_{x∈X_i} h_x / |X_i| is the centroid of X_i, and c = Σ_{x∈X} h_x / |X| is the centroid of all instances in X. A high B2W score indicates that the rank sets X_i, 0 ≤ i ≤ M−1, are well separated from one another, while instances in each rank set are compactly distributed. However, B2W does not consider the ordinal relationships of the ranks in Θ, since it was formulated for ordinary classification. Therefore, to assess embedding spaces for rank estimation, we propose the discriminative ratio for ranking (DRR), given by
DRR_β = (Σ_{i,j∈Θ: i<j} (j−i)^β d_e(c_i, c_j) / Σ_{i,j∈Θ: i<j} (j−i)^β) / (Σ_{i∈Θ} Σ_{x,y∈X_i: x≠y} d_e(h_x, h_y) / Σ_{i∈Θ} Σ_{x,y∈X_i: x≠y} 1), (16)
which is the ratio of the average pairwise centroid distance to the average pairwise instance distance within each rank set. Note that, in the numerator, the ordinal weights (j−i)^β are used to emphasize the difference between a pair of rank sets with a large rank difference. Here, β is a nonnegative parameter controlling the level of emphasis. A high DRR score is obtained when instances are well sorted and well separated according to their ranks in the embedding space.
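The k-NN estimation rule in (14) is equally simple to state in code; a minimal NumPy sketch (array names are assumptions, and a large-scale implementation would use an approximate nearest-neighbor index instead of a full sort):

```python
import numpy as np

def knn_rank_estimate(h_test, h_train, ranks_train, k=5):
    """k-NN rank estimation, Eq. (14): average rank of the k nearest training embeddings."""
    dists = np.linalg.norm(h_train - h_test, axis=1)   # Euclidean distances in the embedding space
    nn_idx = np.argsort(dists)[:k]                     # indices of the k nearest neighbors
    return ranks_train[nn_idx].mean()
```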
Table 1 compares the B2W and DRR scores on the MORPH II (Ricanek & Tesafaye, 2006), CACD (Chen et al., 2015), and Adience (Levi & Hassner, 2015) training data. To compare the embedding spaces as fairly as possible, we use the same VGG16 encoder backbone for all algorithms. GOL significantly outperforms all conventional algorithms in all tests. In GOL, instances within each rank set X_i are located compactly around the reference point r_i by L_center in (12), while those in different rank sets are well separated by L_metric in (11). Hence, GOL yields outstanding B2W scores. Moreover, GOL excels with a larger score gap in DRR_{1.0} than in DRR_{0.5}, which means that it arranges the rank sets effectively in the embedding space to reflect their ordinal relationships using L_order in (6). Figure 3 visualizes the embedding spaces on Adience. As in Figure 1, we add a fully connected layer with 3 output neurons to each encoder for the visualization. In (a) and (b), instances are sorted but scattered over the spaces, as the order learning algorithms (Lim et al., 2020; Shin et al., 2022) ignore the metric relation of ranks. In (c), instances in each rank set are well clustered, but the rank sets are not sorted, because ML (Schroff et al., 2015) neglects the order relation. In (d), MV (Pan et al., 2018), which is an ordinal regressor, exhibits large within-class scattering as well as large between-class scattering. In contrast, in (e), GOL constructs a well-sorted, well-clustered embedding space.
4.3 Rank Estimation
Since GOL constructs high-quality embedding spaces, it provides excellent rank estimation performance even with the simple k-NN rule.
Facial age estimation: We use the four datasets MORPH II (Ricanek & Tesafaye, 2006), CACD (Chen et al., 2015), UTK (Zhang et al., 2017), and Adience (Levi & Hassner, 2015), as detailed in Appendix D.3. In Table 2, we compare the performances in the four evaluation settings of MORPH II, which is one of the most popular datasets in age estimation. We use the mean absolute error (MAE) and cumulative score (CS) metrics. MAE is the average absolute error between estimated and ground-truth ages (i.e., ranks), and CS computes the percentage of images whose absolute errors are less than or equal to a tolerance level l = 5. Note that all algorithms in Table 2 use VGG16 as the encoder backbone, except for C3AE, which employs a shallow CNN. As for Shin et al. (2022), the scores of the global ρ-regressor are compared, since their local scheme employs as many as six independent VGG16 encoders recursively. GOL performs the best in 5 out of 8 tests, including setting C, which is the most challenging task. Furthermore, unlike most conventional algorithms, GOL does not use IMDB-WIKI pretraining to boost performance. Even without the pretraining, i.e., by employing only the MORPH II training data, GOL yields outstanding results in Table 2. Table 3 compares the results on CACD, a bigger dataset containing over 100,000 natural face shots in diverse environments.
GOL outperforms the second-best methods with meaningful gaps of 0.12 and 0.17 in the train and validation settings, respectively, which indicates that it can cope with large and diverse data effectively as well. Table 3 also compares the results on UTK. Note that Gustafsson et al. (2020) and Berg et al. (2021) employ the deep ResNet50 network (He et al., 2016) as their encoders, and MWR-G (Shin et al., 2022) predicts ranks using a complicated, recursive regressor. In contrast, GOL uses the shallower VGG16 encoder and performs rank estimation based on the simple k-NN search. Nevertheless, it yields an excellent result, for its geometric constraints help to construct an effective embedding space. Table 4 shows the results on Adience, where each image is labeled as one of 8 age groups. Compared with the second-best MWR-G, GOL improves the accuracy by 0.3% and reduces MAE by 0.03. HCI classification: The HCI dataset (Palermo et al., 2012) is used for estimating the shooting decade of a photograph. It contains 1,325 images from five decades 1930s ∼ 1970s. Table 4 lists the results on HCI. GOL yields the best performances as well, by improving the accuracy by 1.5% and reducing MAE by 0.05 as compared with the second-best methods. This means that GOL yields reliable results even for a small dataset, which may be unfavorable for the k-NN estimation. Aesthetic score regression: The aesthetics dataset (Schifanella et al., 2015) contains 15,687 images in four categories. Each image is annotated with a 5-scale aesthetic score. In Table 5, we follow the experimental setting in (Liu et al., 2018). It is challenging to estimate aesthetic scores reliably due to the subjectivity and ambiguity of aesthetic criteria, but GOL performs the best in 8 out of 10 tests. Compared with the state-of-the-art POE (Li et al., 2021), GOL improves the accuracy by 0.3% and reduces MAE by 0.1 overall. 4.4 Analysis Ablation study: Table 6 compares ablated methods for the loss function in (13). Method I employs Lcenter only. In II and III, Lmetric and Lorder are excluded, respectively. Compared with IV (GOL), method I degrades the performances severely, since the center loss alone cannot construct a meaningful embedding space; a trivial solution to minimize Lcenter can be obtained by merging all instances and all reference points into a single point. Also, from II and IV, we see that the metric constraint significantly improves the performances by separating instances with different ranks from each other. From III and IV, we see that the order constraint also improves the results by arranging instances directionally according to their ranks. To summarize, both order and metric constraints improve the results and are complementary to each other. Figure 4 (a)∼(d) show the embedding spaces for the ablated methods I∼IV in Table 6, respectively. The top row visualizes the reduced 3D embedding spaces, as described in Appendix C.2, while the bottom row does the original 512D embedding spaces via t-SNE (Maaten & Hinton, 2008). In (a), only Lcenter is used, so the 3D embedding space almost collapses to a single point. In (b), instances are directionally aligned, but adjacent rank sets overlap one another. In (c) and (d), the 3D embedding spaces seem similar. However, GOL yields more clearly ordered instances in the t-SNE visualization of the original embedding space than ‘Lcenter + Lmetric’ does. Alternatives to Lx≺y: Table 7 compares alternative loss terms for Lx≺y in (10). 
Method I directly increases d_e(h_x, h_y) to make it larger than γ. Method II uses only the two reference points r_{θ(x)} and r_{θ(y)}. In other words, unlike method III (i.e., GOL), it does not employ the reference points r_i, 0 ≤ i < θ(x), and r_j, θ(y) < j ≤ M − 1. For each method, L_{x≻y} is also modified accordingly. Method I degrades the performance badly, as h_x and h_y move in arbitrary directions. Also, GOL performs better than II because attraction and repulsion with many reference points facilitate the positioning of instances in the embedding space, as well as of the reference points themselves.
Embedding space transition: Figure 5 visualizes the transition of an embedding space for Adience. As the GOL training proceeds, we see that instances are gradually sorted and separated according to their ranks to satisfy the geometric constraints. After convergence, for each rank, the reference point r_i and the centroid c_i are close to each other, indicating that instances are clustered around their reference points. Moreover, those reference points are well sorted as well.
Comparison with order learning: The conventional order learning algorithms (Lim et al., 2020; Lee & Kim, 2021; Shin et al., 2022) provide competitive ranking performance, but they require relatively heavy computations. Table 8 lists the testing times. To estimate the rank of a test instance, OL uses five references for each of the M ranks; e.g., 300 comparisons should be made if M = 60, as in typical age estimation. Moreover, OL and DRC-ORID need additional processes, such as MAP estimation, to estimate a rank based on ordering relationships. Also, MWR-G estimates the rank by comparing a test instance with multiple reference pairs recursively. In contrast, GOL simply performs the k-NN estimation, which can be done efficiently in a parallel manner, and thus is about 160 and 23 times faster than OL and MWR-G, respectively. Furthermore, the order learning algorithms should select references from the training set X through a complicated optimization process; e.g., OL compares all possible pairs of training instances with O(|X|^2) complexity to select the most reliable references, and DRC-ORID (Lee & Kim, 2021) performs joint network training and clustering for reference selection. On the contrary, GOL requires no such process. Also, in Table 8, the proposed algorithm demands the fewest parameters because it directly estimates the rank of an instance in the embedding space through the k-NN search, whereas the order learning algorithms should adopt comparators or regressors tailored for each task.
5 Conclusions
The GOL algorithm for rank estimation was proposed in this work. First, we construct an embedding space based on two geometric constraints, which enforce the direction and distance between instances to represent the order and metric relations between their ranks. Then, we perform the simple k-NN search in the embedding space for rank estimation. Extensive experiments on various rank estimation tasks demonstrated that GOL constructs high-quality embedding spaces and yields excellent rank estimation results.
Acknowledgments
This work was conducted by the Center for Applied Research in Artificial Intelligence (CARAI) grant funded by DAPA and ADD (UD190031RD) and supported by the NRF grants funded by the Korea government (MSIT) (No. NRF-2021R1A4A1031864 and No. NRF-2022R1A2B5B03002310).
1. What is the main contribution of the paper, and how does it relate to previous works in order learning and metric learning? 2. What are the strengths and weaknesses of the proposed method, particularly in terms of its ability to learn an embedding space that preserves the ordering and distance between training instances? 3. Do you have any concerns or questions regarding the formulation of the order constraint, the use of different ranking metrics, or the choice of the VGG backbone? 4. How does the proposed method compare to other approaches in terms of computational efficiency and robustness to image perturbations? 5. Are there any limitations or potential drawbacks to the proposed method, such as the need for a carefully designed objective function or the sensitivity to hyperparameter tuning?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper introduces an algorithm for learning an embedding space from which the rank of an object can be estimated, e.g., the age of a person based on face images. The embedding space is constrained such that the ordering of and distance between training instances are preserved. The authors introduce a metric for embedding space evaluation by repurposing the B2W metric (inter-class variance / intra-class variance) to account for the rank ordering of the data embeddings. Strengths And Weaknesses The paper is easy to read. However, parts of the introduction and related work section are repetitive. The order and distance properties listed as preliminaries are common knowledge and seem to add no specific detail to the objective formulation in the paper. The central idea of the paper is to combine concepts from order learning and metric learning. The paper presents an objective by combining the two constraints. The visualization presented in the paper shows that the proposed method learns embeddings that are ordered. The ablation study shows that each added loss objective helps improve the performance of the model. Questions The order constraint formulation uses v_b, which is based on consecutive ranks, while the forward direction uses the difference between the references corresponding to the ranks of x and y - is there a reason for this asymmetry? Where does the gain in test runtime come from? I would assume comparing with 5 references per rank would be faster than searching for the k nearest neighbors among N training examples. Also, is it a fair setting to compare methods where one uses a compressed representation in the form of references while the other uses the entire training set? The proposed method does learn references during training but seems to ignore them at inference - can the rank estimation be performed with just the references? Minor: How long is the model trained for? Is there a stopping condition? Is there any particular reason for choosing the VGG backbone? Limitations The authors show specific failure cases in facial age estimation where the lighting conditions, overexposure, and other image perturbations affect the performance of the model - I think this is a general limitation of the dataset and is not specific to the model. A more careful analysis of the proposed method and its limitations might be useful. The paper lacks a discussion of why the embedding space has to be ordered - this is crucial for the premise of the proposed method. A method that embeds each rank in a distinct region will be good at rank estimation (I believe this would explain the performance of the other methods compared in the paper). The reported results seem to be based on a single run - it would be useful to report results based on multiple random initializations.
NIPS
Title Uncertainty Estimation Using Riemannian Model Dynamics for Offline Reinforcement Learning Abstract Model-based offline reinforcement learning approaches generally rely on bounds of model error. Estimating these bounds is usually achieved through uncertainty estimation methods. In this work, we combine parametric and nonparametric methods for uncertainty estimation through a novel latent space based metric. In particular, we build upon recent advances in Riemannian geometry of generative models to construct a pullback metric of an encoder-decoder based forward model. Our proposed metric measures both the quality of out-of-distribution samples as well as the discrepancy of examples in the data. We leverage our method for uncertainty estimation in a pessimistic model-based framework, showing a significant improvement upon contemporary model-based offline approaches on continuous control and autonomous driving benchmarks. 1 Introduction Offline Reinforcement Learning (RL) [Levine et al., 2020], a.k.a. batch-mode RL [Ernst et al., 2005, Riedmiller, 2005, Fonteneau et al., 2013], involves learning a policy from data sampled by a potentially suboptimal policy. Offline RL seeks to surpass the average performance of the agents that generated the data. Traditional methodologies fall short in offline settings, causing overestimation of the return [Buckman et al., 2020, Wang et al., 2020, Zanette, 2020]. One approach to overcome this in model-based settings is to penalize the return in out of distribution (OOD) regions, as depicted in Figure 1. In this manner, the agent is constrained to stay "near" areas of low model error, thereby limiting possible overestimation. However, reliable estimates of model error are key to the success of such methods. Estimating model error in OOD regions can be achieved through uncertainty estimation [Yu et al., 2020]. Methods of parametric uncertainty estimation, such as bootstrap ensembles [Efron, 1982], Monte Carlo Dropout [Gal and Ghahramani, 2016], and randomized priors [Osband et al., 2018], may be susceptible to poor model specification and are most effective when dealing with large datasets. In contrast, nonparametric methods such as k-nearest neighbors (k-NN) [Villa Medina et al., 2013, Fathabadi et al., 2021] are beneficial in regions of limited data, yet require a proper metric to be used. We propose to combine parametric and nonparametric methods for uncertainty estimation. Particularly, we define a novel Riemannian metric which captures the epistemic and aleatoric uncertainty of a generative parametric forward model. This distance metric is then applied to measure the average geodesic distance to the k-nearest neighbors in the data. We derive analytical expressions for our metric and provide an efficient way to estimate it. We then demonstrate the effectiveness of our metric for penalizing an offline RL agent compared to contemporary approaches on continuous control and autonomous driving benchmarks. As we empirically show, common approaches, including statistical bootstrap ensembles or Euclidean distances in latent space, do not necessarily capture the underlying degree of error needed for model-based offline RL. ∗Correspondence to [email protected] 36th Conference on Neural Information Processing Systems (NeurIPS 2022).
2 Preliminaries
2.1 Offline Reinforcement Learning
We consider the standard Markov Decision Process (MDP) framework [Puterman, 2014] defined by the tuple (S, A, r, P, α), where S is the state space, A the action space, r : S × A → [0, 1] the reward function, P : S × A × S → [0, 1] the transition kernel, and α ∈ (0, 1) the discount factor. In the online setting of reinforcement learning (RL), the environment initiates at some state s_0 ∼ ρ_0. At any time step the environment is in a state s ∈ S; an agent takes an action a ∈ A and receives a reward r(s, a) from the environment as a result of this action. The environment transitions to a state s' according to the transition function P(·|s, a). The goal of online RL is to find a policy π(a|s) that maximizes the expected discounted return
v^π = E_π[Σ_{t=0}^∞ α^t r(s_t, a_t) | s_0 ∼ ρ_0].
Unlike the online setting, the offline setup considers a dataset D_n = {(s_i, a_i, r_i, s'_i)}_{i=1}^n of transitions generated by some unknown agents. The objective of offline RL is to find the best policy in the test environment (i.e., the real MDP) given only access to the data generated by the unknown agents.
2.2 Riemannian Manifolds
We define the Riemannian pullback metric, a fundamental component of our proposed method. We refer the reader to Carmo [1992] for further details on Riemannian geometry. We are interested in studying a smooth surface M with a Riemannian metric g. A Riemannian metric is a smooth function that assigns a symmetric positive definite matrix to any point in M. At each point z ∈ M, a tangent space T_z M specifies the pointing direction of vectors "along" the surface.
Definition 1. Let M be a smooth manifold. A Riemannian metric g on M changes smoothly and defines a real scalar product on the tangent space T_z M for any z ∈ M as
g_z(x, y) = ⟨x, y⟩_z = ⟨x, G(z) y⟩, x, y ∈ T_z M,
where G(z) ∈ R^{d_z × d_z} is the corresponding metric tensor. (M, g) is called a Riemannian manifold.
The Riemannian metric enables us to easily define geodesic curves. Consider some differentiable mapping γ : [0, 1] → M ⊆ R^{d_z}, such that γ(0) = z_0, γ(1) = z_1. The length of the curve γ measured on M is given by
L(γ) = ∫_0^1 sqrt( ⟨ ∂γ(t)/∂t, G(γ(t)) ∂γ(t)/∂t ⟩ ) dt. (1)
The geodesic distance d(z_0, z_1) between any two points z_0, z_1 ∈ M is then the infimum length over all curves γ for which γ(0) = z_0, γ(1) = z_1. That is, d(z_0, z_1) = inf_γ L(γ) s.t. γ(0) = z_0, γ(1) = z_1. The geodesic distance can be found by solving a system of nonlinear ordinary differential equations (ODEs) defined in the intrinsic coordinates [Carmo, 1992].
Pullback Metric. Assume an ambient (observation) space X and its respective Riemannian manifold (M_X, g_X). Learning g_X can be hard (e.g., learning the distance metric between images). Still, it may be captured through a low-dimensional submanifold. As such, it is many times convenient to parameterize the surface M_X by a latent space Z = R^{d_Z} and a smooth function f : Z → X, where Z is a low-dimensional latent embedding space. As learning the manifold M_X can be hard, we turn to learning the immersed low-dimensional submanifold M_Z (for which the chart maps are trivial, since Z = R^{d_Z}). Given a curve γ : [0, 1] → M_Z, we have that
⟨ ∂f(γ(t))/∂t, G_X(f(γ(t))) ∂f(γ(t))/∂t ⟩ = ⟨ ∂γ(t)/∂t, J_f^T(γ(t)) G_X(f(γ(t))) J_f(γ(t)) ∂γ(t)/∂t ⟩,
where the Jacobian matrix J_f(z) = ∂f/∂z ∈ R^{d_X × d_Z} maps tangent vectors in T M_Z to tangent vectors in T M_X. The induced metric is thus given by
G_f(z) = J_f(z)^T G_X(f(z)) J_f(z). (2)
The metric G_f is known as the pullback metric, as it "pulls back" the metric G_X on X to G_f via f : Z → X. The pullback metric captures the intrinsic geometry of the immersed submanifold while taking into account the ambient space X. The geodesic distance in ambient space is captured by geodesics in the latent space Z, reducing the problem to learning the latent embedding space Z and the observation function f. Indeed, learning the latent space and the observation function f can be achieved through an encoder-decoder framework, such as a VAE [Arvanitidis et al., 2018].
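To ground these definitions, the following PyTorch sketch evaluates the pullback metric in (2) for a Euclidean ambient metric (G_X = I) and approximates the length in (1) of a straight line in latent space; a true geodesic would require optimizing the curve, so this only upper-bounds the geodesic distance. The function names are assumptions.

```python
import torch
from torch.autograd.functional import jacobian

def pullback_metric(f, z):
    """G_f(z) = J_f(z)^T J_f(z), i.e., Eq. (2) with the ambient metric G_X = I."""
    J = jacobian(f, z)     # (d_X, d_Z) Jacobian of the immersion f at z
    return J.T @ J

def straight_line_length(f, z0, z1, steps=20):
    """Discretization of Eq. (1) along gamma(t) = (1 - t) z0 + t z1 (an upper bound on the geodesic distance)."""
    ts = torch.linspace(0.0, 1.0, steps + 1)
    pts = torch.stack([(1 - t) * z0 + t * z1 for t in ts])
    xs = torch.stack([f(p) for p in pts])              # map the latent curve to ambient space
    return torch.norm(xs[1:] - xs[:-1], dim=1).sum()   # sum of chord lengths
```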
3 Background: Penalty of Uncertainty for Offline Reinforcement Learning
A key element of model-based RL methods involves estimating a model P̂(s'|s, a) to construct a pessimistic MDP². This work builds upon MOPO, a recently proposed model-based offline RL framework [Yu et al., 2020]. Particularly, we assume access to an approximate MDP (S, A, r̂, P̂, α) (e.g., trained by maximizing the likelihood of the data), and define a penalized MDP (S, A, r̃, P̂, α) such that, for all s ∈ S, a ∈ A,
r̃(s, a) = r̂(s, a) − λ c(P(·|s, a), P̂(·|s, a)),
where c penalizes the reward according to model error (e.g., the total variation distance) and λ > 0. The offline RL problem is then solved by executing an online algorithm in the reward-penalized MDP. Unfortunately, as P(·|s, a) is unknown and can only be estimated from the data, c(P(·|s, a), P̂(·|s, a)) cannot be calculated. Nevertheless, one can attempt to upper bound the distance, i.e., for some U : S × A → R,
c(P(·|s, a), P̂(·|s, a)) ≤ U(s, a), ∀ s ∈ S, a ∈ A.
In this work we propose to use a naturally induced metric of a variational forward model, which we show can introduce an effective penalty for offline RL. In Section 4 we define this metric, and finally, we leverage it in Section 4.4.
4 Metrics of Uncertainty
As described in the previous section, our goal is to estimate model error in order to penalize the agent in out of distribution (OOD) regions. Yu et al. [2020] proposed to achieve this through bootstrap ensembles, an out of distribution uncertainty estimation technique. Alternatively, we propose to employ a well-known nonparametric approach for uncertainty estimation [Villa Medina et al., 2013, Fathabadi et al., 2021], namely k-nearest neighbors (k-NN). Specifically, for any s, a ∈ S × A, we estimate the model error by
U(s, a) = (1/k) Σ_{(s_i, a_i) ∈ NN_k(s, a)} d((s, a), (s_i, a_i)), (3)
where d : S × A × S × A → R_+ is a distance metric, and NN_k(s, a) is the set of k-nearest neighbors of (s, a) in D_n according to the distance metric d. A question arises: how to choose d? Using the Euclidean distance in the ambient (state-action) space is usually a bad choice (e.g., the ℓ2 distance between natural images is not necessarily meaningful). Moreover, to correctly measure the error, the model dynamics should somehow be taken into consideration. We therefore consider an alternative approach which leverages the latent space of a variational forward model, as described next.
4.1 A Variational Latent Model of Dynamics
We begin by modeling P̂(s'|s, a) using a generative latent model. Specifically, we consider a latent model which consists of an encoder E : S × A → B(Z) and a decoder f_D : Z → B(S), where B(X) is the set of probability measures on the Borel sets of X. While the encoder E learns a latent representation of (s, a), the decoder f_D estimates the next state s' according to P(·|s, a). This model corresponds to the decomposition P(·|s, a) = f_D(·|E(s, a)).
4.1 A Variational Latent Model of Dynamics

We begin by modeling P̂(s′|s, a) using a generative latent model. Specifically, we consider a latent model which consists of an encoder E : S × A → B(Z) and a decoder f_D : Z → B(S), where B(X) is the set of probability measures on the Borel sets of X. While the encoder E learns a latent representation of (s, a), the decoder f_D estimates the next state s′ according to P(·|s, a). This model corresponds to the decomposition P(·|s, a) = f_D(·|E(s, a)). Such a model can be trained by maximizing the Evidence Lower BOund (ELBO, Kingma and Welling [2013]) over the data.

Algorithm 1 GELATO: Geometrically Enriched LATent model for Offline reinforcement learning
1: Input: Offline dataset D_n, RL algorithm
2: Train a variational latent forward model on the dataset D_n by maximizing the ELBO.
3: Construct the approximate MDP (S, A, r̂, P̂, α).
4: Use the distance d_Z induced by the pullback metric G_{f_{D,U} ∘ f_F} (Theorem 2) to penalize the reward: r̃_d(s, a) = r̂(s, a) − λU(s, a), where U(s, a) = Σ_{(s_i, a_i) ∈ NN_k(s, a)} d_Z(E(s, a), E(s_i, a_i)).
5: Train the RL algorithm over the penalized MDP (S, A, r̃_d, P̂, α).

That is, given a prior P(z), we model E_ϕ and f_{D,θ} as parametric functions and maximize the ELBO,
max_{θ,ϕ} E_{E_ϕ(z|s,a)}[ log f_{D,θ}(s′|z) ] − D_KL(E_ϕ(z|s, a) || P(z)).
We refer the reader to the appendix for an exhaustive overview of training VAEs by maximum likelihood and the ELBO.
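For completeness, here is a minimal sketch of such a variational forward model trained with the ELBO objective above. The network sizes, the single global decoder scale, and the fake batch are illustrative assumptions; the actual architecture and hyperparameters are described in the appendix.

```python
import torch
import torch.nn as nn

class LatentForwardModel(nn.Module):
    """Minimal sketch of the variational forward model: an encoder E(z | s, a) and a Gaussian
    decoder f_D(s' | z), trained by maximizing the ELBO. All sizes are illustrative."""
    def __init__(self, s_dim=3, a_dim=1, z_dim=2, h=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(s_dim + a_dim, h), nn.ReLU(), nn.Linear(h, 2 * z_dim))
        self.dec_mu = nn.Sequential(nn.Linear(z_dim, h), nn.ReLU(), nn.Linear(h, s_dim))
        self.dec_logstd = nn.Parameter(torch.zeros(s_dim))  # a single global decoder scale

    def elbo(self, s, a, s_next):
        mu_z, logstd_z = self.enc(torch.cat([s, a], dim=-1)).chunk(2, dim=-1)
        z = mu_z + logstd_z.exp() * torch.randn_like(mu_z)   # reparameterization trick
        log_px = torch.distributions.Normal(self.dec_mu(z), self.dec_logstd.exp()).log_prob(s_next).sum(-1)
        kl = -0.5 * (1 + 2 * logstd_z - mu_z.pow(2) - (2 * logstd_z).exp()).sum(-1)  # KL to N(0, I)
        return (log_px - kl).mean()

model = LatentForwardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
s, a, s_next = torch.randn(32, 3), torch.randn(32, 1), torch.randn(32, 3)  # a fake batch
loss = -model.elbo(s, a, s_next)   # maximize the ELBO = minimize its negation
opt.zero_grad(); loss.backward(); opt.step()
```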
Recall that we wish to find a good metric for estimating model error. Having learned a latent model for P̂(s′|s, a), its latent space Z can be used to define the metric d in Equation (3), e.g., as the Euclidean distance between latent representations of state-action pairs. Unfortunately, as was previously shown [Arvanitidis et al., 2018], latent codes in variational models contain sharp discontinuities, rendering Euclidean distances in latent space unreliable and inaccurate (as we will also demonstrate in our experiments). Instead, we propose to use the natural induced metric of our latent model, as described in the following subsection.

4.2 The Pullback Metric of Model Dynamics

In this part we define the metric d that will be used to approximate model error in Equation (3). Specifically, we consider a Riemannian submanifold defined by a latent space Z and an observation function f, which induces minimum-energy curves in the ambient space. We will later choose Z to be the latent space of our variational model (i.e., the encoded state-action) and f to be the decoder function f_D of next-state transitions. We define the distance metric formally below.

Definition 2. We define a Riemannian submanifold (M_Z, g_Z) by a differentiable function f : Z → S and latent space Z such that
d_Z(z_1, z_2) = inf_γ ∫_0^1 ‖ ∂f(γ(t))/∂t ‖ dt   s.t. γ(0) = z_1, γ(1) = z_2.

A similar metric has been used in previous work on generative latent models [Chen et al., 2018, Arvanitidis et al., 2018]. By choosing f to be the decoder function f_D, latent codes that are close w.r.t. d_Z induce curves of minimal energy in the ambient observation space (i.e., next state). This metric is closely related to the pullback metric (see Section 2.2), as shown by the following proposition.

Proposition 1. Let (M_Z, g_Z) be as defined above. Then G_f(z) = J_f^T(z) J_f(z) for any z ∈ Z.

Indeed, Proposition 1 shows us that G_f is a pullback metric. In particular, Z and J_f define the structure of the ambient observation space X (in our case, next-state transitions). By choosing f to be the decoder function f_D, the metric G_{f_D} becomes stochastic, complicating analysis. Instead, as proposed and analyzed in Arvanitidis et al. [2018], we use the expected pullback metric as an approximation of the underlying stochastic metric. Similar to previous work on variational models, we use a normally distributed decoder to define the output. Using Proposition 1, we have the following result (see Appendix for proof).

Theorem 1. [Arvanitidis et al. [2018]] Assume f_D(·|z) ∼ N(µ(z), σ(z)I). Then
E_{f_D(·|z)}[ G_{f_D}(z) ] = G_µ(z) + G_σ(z),   (4)
where G_µ(z) = J_µ^T(z) J_µ(z) and G_σ(z) = J_σ^T(z) J_σ(z).

Given an embedded latent space Z, the expected metric in Equation (4) gives us a sense of the topology of the latent space manifold induced by f_D. The terms G_µ = J_µ^T J_µ and G_σ = J_σ^T J_σ are in fact the induced pullback metrics of µ and σ, respectively.

4.3 Capturing Epistemic and Aleatoric Uncertainty

The previously proposed encoder-decoder model induces a metric which captures the structure of the learned dynamics. However, the decoder variance, σ(z), does not differentiate between aleatoric uncertainty (environment dynamics) and epistemic uncertainty (missing data). We propose two methods to enrich the metric in Equation (4) in order to achieve a better estimate of uncertainty. First, by using an ensemble of M decoder functions {f_{D,i}}_{i=1}^M trained using standard bootstrap techniques [Efron, 1982], we capture the traditional epistemic uncertainty of the decoder parameters. Second, to correctly distinguish epistemic and aleatoric uncertainty, we add a latent forward function to our previously proposed variational model. Specifically, our latent model consists of an encoder E : S × A → B(Z), a forward model f_F : Z → B(X), and decoder functions f_{D,i} : X → B(S) such that P(·|s, a) = f_{D,i}(·|x) and x ∼ f_F(·|E(s, a)). This structure enables us to capture the aleatoric uncertainty under the forward transition model f_F, and the epistemic uncertainty using the decoders f_{D,i}. That is, for f_F(·|z) ∼ N(µ_F(z), σ_F(z)I), the variance model σ_F(z) captures the stochasticity in model dynamics. This decomposition is also helpful whenever one wants to train an agent in latent space (e.g., for planning, Schrittwieser et al. [2020]).

Next, we turn to analyze the pullback metric induced by the proposed forward transition model. As both f_F and {f_{D,i}}_{i=1}^M are stochastic (capturing epistemic and aleatoric uncertainty), the result of Theorem 1 cannot be directly applied to their composition. The following theorem provides an analytical expression for the expected pullback metric of a sampled next state and a uniformly sampled decoder (the proof is given in the appendix).

Theorem 2. Assume f_F(·|z) ∼ N(µ_F(z), σ_F(z)I), f_{D,i}(·|x) ∼ N(µ_D^i(x), σ_D^i(x)I), and U ∼ Unif{1, . . . , M}. Then the expected pullback metric of the composite function (f_{D,U} ∘ f_F) is given by
E_{P(f_{D,U} ∘ f_F)}[ G_{f_{D,U} ∘ f_F}(z) ] = J_{µ_F}^T(z) G_{f_D}(z) J_{µ_F}(z) + J_{σ_F}^T(z) diag( G_{f_D}(z) ) J_{σ_F}(z),
where G_{f_D}(z) = (1/M) Σ_{i=1}^M E_{x ∼ f_F(·|z)}[ J_{µ_D^i}^T(x) J_{µ_D^i}(x) + J_{σ_D^i}^T(x) J_{σ_D^i}(x) ].

Unlike the metric in Equation (4), the composite metric distorts the decoder metric with Jacobian matrices of the forward model statistics. It takes into account both the aleatoric and epistemic uncertainty through the forward model as well as the ensemble of decoders. As a special case, we note the metric for deterministic model dynamics.

Corollary 1. Assume deterministic model dynamics, i.e., x = f_F(z), and without loss of generality assume f_F ≡ I. Then the expected pullback metric of Theorem 2 is given by
E[ G_{f_{D,U} ∘ f_F}(z) ] = (1/M) Σ_{i=1}^M [ J_{µ_D^i}^T(z) J_{µ_D^i}(z) + J_{σ_D^i}^T(z) J_{σ_D^i}(z) ].
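The expected metrics of Theorem 1 and Corollary 1 reduce to sums of Jacobian products, which automatic differentiation makes easy to evaluate. The sketch below does so for a small ensemble of hypothetical Gaussian decoder networks; the sizes, activations, and ensemble size are illustrative assumptions rather than the configuration used in this work.

```python
import torch

d_z, d_s, M = 2, 4, 3  # latent dim, next-state dim, ensemble size (illustrative)

def make_decoder():
    """A stand-in Gaussian decoder: returns (mu(z), sigma(z)) networks."""
    mu = torch.nn.Sequential(torch.nn.Linear(d_z, 32), torch.nn.Tanh(), torch.nn.Linear(32, d_s))
    sigma = torch.nn.Sequential(torch.nn.Linear(d_z, 32), torch.nn.Tanh(),
                                torch.nn.Linear(32, d_s), torch.nn.Softplus())
    return mu, sigma

ensemble = [make_decoder() for _ in range(M)]

def expected_pullback(z):
    """Corollary 1: (1/M) sum_i [ J_mu_i(z)^T J_mu_i(z) + J_sigma_i(z)^T J_sigma_i(z) ]."""
    G = torch.zeros(d_z, d_z)
    for mu, sigma in ensemble:
        J_mu = torch.autograd.functional.jacobian(mu, z)        # (d_s, d_z)
        J_sigma = torch.autograd.functional.jacobian(sigma, z)  # (d_s, d_z)
        G += J_mu.T @ J_mu + J_sigma.T @ J_sigma
    return G / M

G = expected_pullback(torch.zeros(d_z))
print(G.shape, torch.linalg.eigvalsh(G).min())  # a symmetric PSD (d_z, d_z) metric tensor
```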
4.4 GELATO: Geometrically Enriched LATent model for Offline reinforcement learning

Having defined our metric, we are now ready to leverage it in a model-based offline RL framework. Specifically, provided a dataset D_n = {(s_i, a_i, r_i, s′_i)}_{i=1}^n, we train the variational latent forward model described in the previous section. Algorithm 1 presents GELATO, our proposed approach. In GELATO, we estimate model error by measuring the distance of a new sample to the data manifold. We construct the reward-penalized MDP for which the error acts as a pessimistic regularizer. Finally, we train an RL agent over the pessimistic MDP with transition P̂(·|s, a) and reward r(s, a) − λU(s, a). By achieving an improved estimate for model error, the model-based pessimistic approach can significantly improve performance, as shown in the following section.

5 Experiments

This section is dedicated to quantitatively and qualitatively understanding the benefits of our proposed metric and method. We validate two principal claims: (1) Our metric captures inherent characteristics of model dynamics. We demonstrate this by visualizing state-action geodesics of a toy grid-world problem and a multi-agent autonomous driving task. We show that curves of minimum energy in ambient space indeed capture intrinsic properties of the problem. (2) Our metric provides an improved OOD uncertainty estimate for offline RL. We compare the traditionally used bootstrap ensemble method to our approach, which leverages our pullback metric in a nonparametric nearest-neighbor approach. We also compare our method to the simple use of Euclidean distances in latent space. We run extensive experiments on continuous control and autonomous driving benchmarks, and show that our metric achieves significantly improved performance in tasks for which geodesics are non-Euclidean.

5.1 Metric Visualization

Four Rooms. To better understand the inherent structure of our metric, we constructed a grid-world environment for visualizing our proposed latent representation and metric. The 15 × 15 environment, as depicted in Figure 2, consists of four rooms, with impassable obstacles in their centers. The agent, residing at some position (x, y) ∈ [−1, 1]² in the environment, can take one of four actions: up, down, left, or right, moving the agent 1, 2, or 3 steps (uniformly distributed) in that direction. We collected a dataset of 10000 samples, taking random actions at random initializations of the environment. The ambient state space was represented by the position of the agent, a vector of dimension 2, normalized to values in [−1, 1]. Finally, we trained a variational latent model with latent dimension d_Z = 2. We used a standard encoder z ∼ N(µ_θ(s), σ_θ(s)) and decoder s′ ∼ N(µ_ϕ(z), σ_ϕ(z)), represented by neural networks trained end-to-end using the evidence lower bound. We refer the reader to the appendix for an exhaustive description of the training procedure.

The latent space output of our model is depicted by yellow markers in Figure 2a. Indeed, the latent embedding consists of four distinctive clusters, structured similarly to our grid-world environment. Interestingly, the distortion of the latent space accurately depicts an intuitive notion of distance between states. As such, rooms are distinctively separated, with a fair distance between clusters. States on pathways between rooms clearly separate the room clusters, forming a topology with four discernible bottlenecks. In addition to the latent embedding, Figure 2a depicts the geometric volume measure √det(G_{f_D}) of the trained pullback metric induced by f_D. This quantity demonstrates the effective geodesic distances between states in the decoder-induced submanifold.
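As a rough illustration of how such a volume-measure map can be produced, the sketch below evaluates √det(G_µ(z) + G_σ(z)), the single-decoder expected metric of Theorem 1, on a latent grid for a toy decoder; the decoder networks and grid bounds are illustrative placeholders rather than the trained four-rooms model.

```python
import torch

d_z, d_s = 2, 2
# Toy stand-ins for trained decoder statistics mu_phi(z), sigma_phi(z).
mu = torch.nn.Sequential(torch.nn.Linear(d_z, 16), torch.nn.Tanh(), torch.nn.Linear(16, d_s))
sigma = torch.nn.Sequential(torch.nn.Linear(d_z, 16), torch.nn.Tanh(),
                            torch.nn.Linear(16, d_s), torch.nn.Softplus())

def volume_measure(z):
    """sqrt(det(G_mu(z) + G_sigma(z))): the local magnification factor of the decoder."""
    J_mu = torch.autograd.functional.jacobian(mu, z)
    J_sigma = torch.autograd.functional.jacobian(sigma, z)
    G = J_mu.T @ J_mu + J_sigma.T @ J_sigma
    return torch.sqrt(torch.det(G).clamp_min(1e-12))

grid = torch.linspace(-2.0, 2.0, 40)
heatmap = torch.stack([torch.stack([volume_measure(torch.stack([a, b])) for b in grid]) for a in grid])
# Large entries mark regions the decoder stretches; geodesics crossing them accumulate length quickly.
```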
Indeed, geodesics from data points to points outside of the data manifold (i.e., outside of the red region) attain high values, as they integrate over areas of high magnitude. In contrast, geodesics near the data manifold attain low values. We visualize the decoder-induced geodesic distance and compare it to the latent Euclidean distance in Figures 2b and 2c, respectively. The plots depict the normalized distances of all states to the state marked by a yellow square. Evidently, the geodesic distance captures a better notion of distance in this environment, correctly exposing the "land distance" in ambient space. As expected, states residing in the bottom-right room are farthest from the source state, as reaching them requires passing through at least two bottleneck states. In contrast, the latent Euclidean distance does not properly capture these geodesics, exhibiting a similar distribution of distances in the other rooms. Nevertheless, both geodesic and Euclidean distances reveal the intrinsic topological structure of the environment, which is not captured by the extrinsic coordinates (x, y) ∈ [−1, 1]². In particular, the state coordinates (x, y) would wrongly assign short distances to states across impassable walls or obstacles, i.e., measuring the "air distance".

Intersection. We visualized our metric in the intersection environment proposed in Leurent [2018]. Figure 3 compares the Euclidean and geodesic distances of a partially trained agent. Unlike the previous toy example, to visualize the inherent manifolds we used t-SNE [Van der Maaten and Hinton, 2008] projections computed with the Euclidean distance and compared them to projections computed with the geodesic distance, i.e., curves of minimum energy in ambient space (Definition 2). Indeed, the geodesics captured the inherent structure of the environment, whereas Euclidean distances only managed to capture general clusters. This suggests that the Euclidean distance may not be representative for measuring distance in latent space, as will also become evident in our experiments in the following subsections.

5.2 Datasets and Implementation Details

We used D4RL [Fu et al., 2020] and the autonomous vehicle environments highway-env [Leurent, 2018] as benchmarks for all of our experiments. We tested GELATO on three MuJoCo [Todorov et al., 2012] environments (Hopper, Walker2d, Halfcheetah) on datasets generated by a single policy and by a mixture of two policies. Specifically, we used datasets generated by a random agent (1M samples), a partially trained agent, i.e., a medium agent (1M samples), and a mixture of partially trained and expert agents (2M samples). For autonomous driving, we tested GELATO on four environments (Highway, Roundabout, Intersection, Racetrack), on datasets containing five episodes generated by a partially trained agent. We also tested a faster (×15 speedup) variant of the Highway environment, as well as a harder instantiation of the Intersection environment in which the number of cars was tripled (further details can be found in the appendix).

We trained our variational latent model in two phases. First, the model was fully trained using a calibrated Gaussian decoder [Rybkin et al., 2020]. Specifically, a maximum-likelihood estimate of the variance was used, σ* = MSE(µ, µ̂) ∈ argmax_σ N(µ̂ | µ, σ²I). Then, in the second stage, we fit the variance decoder networks. Hyperparameters for training are found in the appendix.
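To illustrate the first, calibrated-decoder phase, the sketch below plugs the maximum-likelihood (MSE-based) variance into a Gaussian log-likelihood, in the spirit of Rybkin et al. [2020]. Treating the calibrated scale as a per-batch constant and the tensor shapes are assumptions made for illustration only.

```python
import torch

def calibrated_sigma(mu_pred, target):
    """Maximum-likelihood scale of N(target | mu_pred, sigma^2 I): the variance that
    maximizes the likelihood is the mean squared error, so sigma* = sqrt(MSE)."""
    return torch.sqrt(torch.mean((mu_pred - target) ** 2))

def calibrated_nll(mu_pred, target):
    """Gaussian negative log-likelihood with the calibrated scale plugged in.
    The calibrated sigma is detached, i.e., treated as a constant per batch (an assumption)."""
    sigma = calibrated_sigma(mu_pred, target).detach()
    return -torch.distributions.Normal(mu_pred, sigma).log_prob(target).mean()

mu_pred = torch.randn(32, 3, requires_grad=True)   # decoder mean for a fake batch of next states
target = torch.randn(32, 3)                        # observed next states
loss = calibrated_nll(mu_pred, target)
loss.backward()                                    # gradients flow through the mean only
```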
In order to practically estimate the geodesic distance in Algorithm 1, we defined a parametric curve in latent space and used gradient descent to minimize the curve's energy. The resulting curve and pullback metric were then used to calculate the geodesic distance by a numerical estimate of the curve length (see the Appendix for an exhaustive overview of the estimation method). We used FAISS [Johnson et al., 2019] for efficient GPU-based k-nearest-neighbor calculation, and set k = 5 neighbors for the penalized reward (Equation (3)). Finally, we used a variant of Soft Learning, as proposed by Yu et al. [2020], as our RL algorithm for the continuous control benchmarks, and PPO [Schulman et al., 2017] for the autonomous driving tasks. All agents were trained for 1M steps (for the continuous control benchmarks) or 350K steps (for the driving benchmarks), using a single GPU (RTX 2080), and results were averaged over 5 seeds (see the Appendix for more details).
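A minimal sketch of this geodesic estimate is given below: the curve is parameterized by its interior points, its discrete energy under a decoder f is minimized by gradient descent, and the final curve length approximates d_Z. The decoder, discretization, and optimizer settings are illustrative assumptions, not the exact procedure detailed in the appendix.

```python
import torch

d_z = 2
# Stand-in decoder f_D; in practice this would be the trained decoder of the latent forward model.
f = torch.nn.Sequential(torch.nn.Linear(d_z, 32), torch.nn.Tanh(), torch.nn.Linear(32, 4))

def geodesic_distance(z0, z1, n_points=16, iters=200, lr=1e-2):
    """Approximate d_Z(z0, z1): optimize the interior points of a discretized latent curve
    to minimize the discrete energy of f(gamma), then report the resulting curve length."""
    interp = torch.stack([z0 + s * (z1 - z0) for s in torch.linspace(0, 1, n_points)[1:-1]])
    interior = interp.clone().requires_grad_(True)
    opt = torch.optim.Adam([interior], lr=lr)
    for _ in range(iters):
        gamma = torch.cat([z0[None], interior, z1[None]], dim=0)
        segments = f(gamma[1:]) - f(gamma[:-1])     # ambient displacement of each curve segment
        energy = (segments ** 2).sum()              # discrete curve energy
        opt.zero_grad(); energy.backward(); opt.step()
    with torch.no_grad():
        gamma = torch.cat([z0[None], interior, z1[None]], dim=0)
        return (f(gamma[1:]) - f(gamma[:-1])).norm(dim=-1).sum()

print(geodesic_distance(torch.zeros(d_z), torch.ones(d_z)))
```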
5.3 Results

D4RL. Results for the continuous control domains are shown in Table 1. We performed experiments to analyze GELATO on various continuous control datasets, comparing it to contemporary model-based offline RL approaches, namely MOPO [Yu et al., 2020] and MBPO [Janner et al., 2019], as well as the standard baselines of SAC [Haarnoja et al., 2018] and imitation (behavioral cloning, Bain and Sammut [1995], Ross et al. [2011]). We found a performance increase on most domains, most significantly in the medium domain, i.e., the partially trained agent. Since the medium dataset contained average behavior, combining the nonparametric nearest-neighbor uncertainty method with the bootstrap of decoders benefited the agent's overall performance. In addition, GELATO with the latent ℓ2 distance metric performed well on many of the benchmarks. We conjecture this is due to the inherently Euclidean nature of the continuous control benchmarks. Unlike the embedding for the autonomous driving benchmarks (Figure 3), we found the D4RL data to project similarly when ℓ2 and geodesic distances were used (we provide plots of these embeddings in the Appendix).

Highway-Env. Figure 4 shows results for the autonomous driving benchmarks in highway-env. In contrast to the continuous control benchmarks, we found a significant improvement of our metric on the autonomous driving benchmarks compared to standard uncertainty estimation methods as well as the latent Euclidean distance. We credit this improvement to the non-Euclidean nature of the environments, as previously described in Figure 3. While Euclidean distances were useful in the MuJoCo environments, they performed distinctly worse in the autonomous driving environments. Our results emphasize the importance of OOD uncertainty estimation methods in reinforcement learning on various types of datasets. While robotic control tasks provided useful insights, they did not capture the non-Euclidean nature inherent in alternative tasks, such as autonomous driving.

6 Related Work

Offline Reinforcement Learning. The field of offline RL has recently received much attention as several algorithmic approaches were able to surpass standard off-policy algorithms. Value-based online algorithms do not perform well under highly off-policy batch data [Fujimoto et al., 2019, Kumar et al., 2019, Fu et al., 2019, Fedus et al., 2020, Agarwal et al., 2020], largely due to issues with bootstrapping from out-of-distribution (OOD) samples. These issues become more prominent in the offline setting, as new samples cannot be acquired. Several works on offline RL have shown improved performance on standard continuous control benchmarks [Laroche et al., 2019, Kumar et al., 2019, Fujimoto et al., 2019, Chen et al., 2020b, Swazinna et al., 2020, Kidambi et al., 2020, Yu et al., 2020]. This work focused on model-based approaches [Yu et al., 2020, Kidambi et al., 2020], in which the agent is incentivized to remain close to areas of low uncertainty, and in particular on controlling uncertainty estimation in high dimensional environments. Our methodology utilized recent advances in the geometry of deep generative models [Arvanitidis et al., 2018, 2020], proposing an alternative approach to uncertainty estimation.

Representation Learning. Representation learning seeks to find an appropriate representation of data for performing a machine-learning task [Goodfellow et al., 2016]. Variational autoencoders [Kingma and Welling, 2013, Rezende et al., 2014] have been a popular representation learning technique, particularly in unsupervised learning regimes [Chen et al., 2016, Van Den Oord et al., 2017, Hsu et al., 2017, Serban et al., 2017, Engel et al., 2017, Bojanowski et al., 2018, Ding et al., 2020], though also in supervised learning and reinforcement learning [Hausman et al., 2018, Li et al., 2019, Petangoda et al., 2019, Hafner et al., 2019]. In particular, variational models have been shown to derive successful behaviors in high dimensional benchmarks [Hafner et al., 2020]. Various representation techniques in reinforcement learning have also been proposed to disentangle representations of both states [Engel and Mannor, 2001, Littman and Sutton, 2002, Stooke et al., 2020, Zhu et al., 2020] and actions [Tennenholtz and Mannor, 2019, Chandak et al., 2019]. These allow for the abstraction of states and actions to significantly decrease computation requirements, e.g., by decreasing the effective dimensionality of the action space [Tennenholtz and Mannor, 2019]. Unlike previous work, GELATO is focused on a semiparametric approach for uncertainty estimation, enhancing offline reinforcement learning performance.

7 Discussion and Future Work

This work presented a metric for model dynamics and its application to offline reinforcement learning. While our metric showed supportive evidence of improvement in model-based offline RL, we note that it was significantly slower: approximately 5 times slower than using the decoder's variance for uncertainty estimation. The apparent slowdown was mostly due to the computation of the geodesic distance; improvement in this area may utilize techniques for efficient geodesic estimation. We conclude by noting possible future applications of our work. In Section 5.1 we demonstrated the inherent geometry our model had captured, its corresponding metric, and geodesics. Still, in this work we focused specifically on metrics related to the decoded state. In fact, a derivation similar to Theorem 2 could be applied to other modeled statistics, e.g., Q-values, rewards, future actions, and more. Each distinct statistic would induce its own unique metric w.r.t. its respective probability measure. In particular, this concept may benefit a vast array of applications in continuous or large state and action spaces.
1. What is the focus and contribution of the paper regarding reinforcement learning agents?
2. What are the strengths of the proposed approach, particularly in terms of its originality and application to offline RL?
3. Do you have any concerns or questions about the flexibility of the proposed method regarding encoder and decoder formulations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
For reinforcement learning agents learned in an offline setting, it is especially important to be able to quantify uncertainty, as the agent may encounter out-of-distribution experiences when deployed in the online environment. Learning to quantify uncertainty then enables the agent to avoid these regions, by operating under a penalized MDP. In this work, uncertainty is estimated as a k-nearest neighbor between the current state action pair against the state actions pairs in the offline dataset, according to a distance metric. The distance metric is formulated by the authors as the geodesic in a learned latent space, modeled as a VAE. The latent encoder handles aleatoric uncertainty, whereas an ensemble of decoder functions is used to handle epistemic uncertainty.

Strengths And Weaknesses
To my knowledge, the paper is quite original. It builds off of theoretical findings from past work on deep generative models and applies it well to the offline RL domain. The quality of the paper is high - the idea is straightforward, expressed clearly, and demonstrated cleanly. The clarity of the writing is excellent, and it was easy to follow what the authors proposed. I believe this paper has high significance - enabling RL agents trained on offline datasets to quantify their uncertainty is critical for ensuring good continued performance when deployed in an online environment.

Questions
I am wondering how flexible the proposed method is to changes in the encoder and decoder beyond the standard Gaussian formulation. Would the derived pullback metric work for more complex VAEs? Could similar pullback metrics be derived for more general generative modeling techniques (my understanding of the work of Arvanitidis et al., 2018 is that it is a study on the geometry of generative models in general, not limited to VAEs)?

Limitations
(Social Impact Limitations N/A)
NIPS
Title
Uncertainty Estimation Using Riemannian Model Dynamics for Offline Reinforcement Learning

Abstract
Model-based offline reinforcement learning approaches generally rely on bounds of model error. Estimating these bounds is usually achieved through uncertainty estimation methods. In this work, we combine parametric and nonparametric methods for uncertainty estimation through a novel latent-space-based metric. In particular, we build upon recent advances in the Riemannian geometry of generative models to construct a pullback metric of an encoder-decoder based forward model. Our proposed metric measures both the quality of out-of-distribution samples as well as the discrepancy of examples in the data. We leverage our method for uncertainty estimation in a pessimistic model-based framework, showing a significant improvement upon contemporary model-based offline approaches on continuous control and autonomous driving benchmarks.

1 Introduction
Offline Reinforcement Learning (RL) [Levine et al., 2020], a.k.a. batch-mode RL [Ernst et al., 2005, Riedmiller, 2005, Fonteneau et al., 2013], involves learning a policy from data sampled by a potentially suboptimal policy. Offline RL seeks to surpass the average performance of the agents that generated the data. Traditional methodologies fall short in offline settings, causing overestimation of the return [Buckman et al., 2020, Wang et al., 2020, Zanette, 2020]. One approach to overcome this in model-based settings is to penalize the return in out-of-distribution (OOD) regions, as depicted in Figure 1. In this manner, the agent is constrained to stay "near" areas of low model error, thereby limiting possible overestimation. However, reliable estimates of model error are key to the success of such methods.

Estimating model error in OOD regions can be achieved through uncertainty estimation [Yu et al., 2020]. Methods of parametric uncertainty estimation such as bootstrap ensembles [Efron, 1982], Monte Carlo Dropout [Gal and Ghahramani, 2016], and randomized priors [Osband et al., 2018] may be susceptible to poor model specification and are most effective when dealing with large datasets. In contrast, nonparametric methods such as k-nearest neighbors (k-NN) [Villa Medina et al., 2013, Fathabadi et al., 2021] are beneficial in regions of limited data, yet require a proper metric to be used.

We propose to combine parametric and nonparametric methods for uncertainty estimation. Particularly, we define a novel Riemannian metric which captures the epistemic and aleatoric uncertainty of a generative parametric forward model. This distance metric is then applied to measure the average geodesic distance to the k-nearest neighbors in the data. We derive analytical expressions for our metric and provide an efficient manner to estimate it. We then demonstrate the effectiveness of our metric for penalizing an offline RL agent compared to contemporary approaches on continuous control and autonomous driving benchmarks. As we empirically show, common approaches, including the statistical bootstrap ensemble or Euclidean distances in latent space, do not necessarily capture the underlying degree of error needed for model-based offline RL.

∗Correspondence to [email protected]
36th Conference on Neural Information Processing Systems (NeurIPS 2022).
1. What is the focus and contribution of the paper on uncertainty quantification in offline model-based RL?
2. What are the strengths of the proposed approach, particularly in terms of its ability to capture both aleatoric and epistemic uncertainties?
3. What are the weaknesses of the paper, especially regarding its experimental results and comparisons with other works?
4. Do you have any concerns or suggestions regarding the proposed method's broad applicability to various RL estimates?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
To address the uncertainty quantification (UQ) problem in offline model-based RL, the authors propose a Riemannian metric that captures both uncertainty in dynamics (aleatoric) and OOD data (epistemic). Their offline RL algorithm GELATO (cute name) learns an uncertainty-penalized reward function by learning the geodesic distance through a 'pullback' metric. The geodesic distance is combined with KNN evaluation and bootstrapping to generate improved UQ performance. In control and autonomous driving datasets GELATO achieves good performance.

Strengths And Weaknesses
Strengths:
- Delightfully well written paper. The exposition of motivation, theory, and experiment results is clear and convincing.
- Introduces a new representation for estimating distance of model dynamics in latent space that is more geometrically salient than Euclidean distance.
- Detailed analysis of success and failure cases in multiple domains in the experiment section.
- Broad applicability of the contribution. As the authors mentioned, their method of capturing geodesics can be applied to many RL estimates such as Q values, rewards, etc. The authors provide a clear and effective way of doing so.
Minor details: missing reference in line 19, typo of "Riemannian" in line 29.

Questions
Also minor: I found Section 4.3 to be a little bit hard to follow at first. M is used to refer to the manifold in the paper except in this section.

Limitations
N/A
Title Uncertainty Estimation Using Riemannian Model Dynamics for Offline Reinforcement Learning Abstract Model-based offline reinforcement learning approaches generally rely on bounds of model error. Estimating these bounds is usually achieved through uncertainty estimation methods. In this work, we combine parametric and nonparametric methods for uncertainty estimation through a novel latent space based metric. In particular, we build upon recent advances in Riemannian geometry of generative models to construct a pullback metric of an encoder-decoder based forward model. Our proposed metric measures both the quality of out-of-distribution samples as well as the discrepancy of examples in the data. We leverage our method for uncertainty estimation in a pessimistic model-based framework, showing a significant improvement upon contemporary model-based offline approaches on continuous control and autonomous driving benchmarks. 1 Introduction Offline Reinforcement Learning (RL) [Levine et al., 2020], a.k.a. batch-mode RL [Ernst et al., 2005, Riedmiller, 2005, Fonteneau et al., 2013], involves learning a policy from data sampled by a potentially suboptimal policy. Offline RL seeks to surpass the average performance of the agents that generated the data. Traditional methodologies fall short in offline settings, causing overestimation of the return [Buckman et al., 2020, Wang et al., 2020, Zanette, 2020]. One approach to overcome this in model-based settings is to penalize the return in out of distribution (OOD) regions, as depicted in Figure 1. In this manner, the agent is constrained to stay “near" areas of low model error, thereby limiting possible overestimation. However, reliable estimates of model error are key to the success of such methods. Estimating model error in OOD regions can be achieved through uncertainty estimation [Yu et al., 2020]. Methods of parametric uncertainty estimation such as bootstrap ensembles [Efron, 1982], Monte Carlo Dropout [Gal and Ghahramani, 2016], and randomized priors [Osband et al., 2018], may be susceptible to poor model specification and are most effective when dealing with large datasets. In contrast, nonparametric methods such as k-nearest neighbors (k-NN) [Villa Medina et al., 2013, Fathabadi et al., 2021] are beneficial in regions of limited data, yet require a proper metric to be used. We propose to combine parametric and nonparametric methods for uncertainty estimation. Particularly, we define a novel Riemmannian metric which captures the epistemic and aleatoric uncertainty of a generative parametric forward model. This distance metric is then applied to measure the average geodesic distance to the k-nearest neighbors in the data. We derive analytical expressions for our metric and provide an efficient manner to estimate it. We then demonstrate the effectiveness of our metric for penalizing an offline RL agent compared to contemporary approaches on continuous control and autonomous driving benchmarks. As we empirically show, common approaches, including statistical bootstrap ensemble or Euclidean distances in latent space, do not necessarily capture the underlying degree of error needed for model-based offline RL. ∗Correspondence to [email protected] 36th Conference on Neural Information Processing Systems (NeurIPS 2022). 
2 Preliminaries 2.1 Offline Reinforcement Learning We consider the standard Markov Decision Process (MDP) framework [Puterman, 2014] defined by the tuple (S, A, r, P, α), where S is the state space, A the action space, r : S × A → [0, 1] the reward function, P : S × A × S → [0, 1] the transition kernel, and α ∈ (0, 1) the discount factor. In the online setting of reinforcement learning (RL), the environment initiates at some state s0 ∼ ρ0. At any time step the environment is in a state s ∈ S, an agent takes an action a ∈ A and receives a reward r(s, a) from the environment as a result of this action. The environment transitions to state s′ according to the transition function P(·|s, a). The goal of online RL is to find a policy π(a|s) that maximizes the expected discounted return v^π = E_π[ Σ_{t=0}^∞ α^t r(s_t, a_t) | s0 ∼ ρ0 ]. Unlike the online setting, the offline setup considers a dataset D_n = {(s_i, a_i, r_i, s′_i)}_{i=1}^n of transitions generated by some unknown agents. The objective of offline RL is to find the best policy in the test environment (i.e., the real MDP) given only access to the data generated by the unknown agents. 2.2 Riemannian Manifolds We define the Riemannian pullback metric, a fundamental component of our proposed method. We refer the reader to Carmo [1992] for further details on Riemannian geometry. We are interested in studying a smooth surface M with a Riemannian metric g. A Riemannian metric is a smooth function that assigns a symmetric positive definite matrix to any point in M. At each point z ∈ M a tangent space T_z M specifies the pointing direction of vectors “along” the surface. Definition 1. Let M be a smooth manifold. A Riemannian metric g on M changes smoothly and defines a real scalar product on the tangent space T_z M for any z ∈ M as g_z(x, y) = ⟨x, y⟩_z = ⟨x, G(z)y⟩ for x, y ∈ T_z M, where G(z) ∈ R^{d_z × d_z} is the corresponding metric tensor. (M, g) is called a Riemannian manifold. The Riemannian metric enables us to easily define geodesic curves. Consider some differentiable mapping γ : [0, 1] → M ⊆ R^{d_z}, such that γ(0) = z1, γ(1) = z2. The length of the curve γ measured on M is given by L(γ) = ∫_0^1 √⟨ ∂γ(t)/∂t, G(γ(t)) ∂γ(t)/∂t ⟩ dt. (1) The geodesic distance d(z1, z2) between any two points z1, z2 ∈ M is then the infimum length over all curves γ for which γ(0) = z1, γ(1) = z2. That is, d(z1, z2) = inf_γ L(γ) s.t. γ(0) = z1, γ(1) = z2. The geodesic distance can be found by solving a system of nonlinear ordinary differential equations (ODEs) defined in the intrinsic coordinates [Carmo, 1992]. Pullback Metric. Assume an ambient (observation) space X and its respective Riemannian manifold (M_X, g_X). Learning g_X can be hard (e.g., learning the distance metric between images). Still, it may be captured through a low dimensional submanifold. As such, it is often convenient to parameterize the surface M_X by a latent space Z = R^{d_Z} and a smooth function f : Z → X, where Z is a low dimensional latent embedding space. As learning the manifold M_X can be hard, we turn to learning the immersed low dimensional submanifold M_Z (for which the chart maps are trivial, since Z = R^{d_Z}). Given a curve γ : [0, 1] → M_Z, we have that ⟨ ∂f(γ(t))/∂t, G_X(f(γ(t))) ∂f(γ(t))/∂t ⟩ = ⟨ ∂γ(t)/∂t, J_f(γ(t))^T G_X(f(γ(t))) J_f(γ(t)) ∂γ(t)/∂t ⟩, where the Jacobian matrix J_f(z) = ∂f/∂z ∈ R^{d_X × d_Z} maps tangent vectors in T M_Z to tangent vectors in T M_X. The induced metric is thus given by G_f(z) = J_f(z)^T G_X(f(z)) J_f(z). (2)
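Equations (1)-(2) can be made concrete with a few lines of automatic differentiation. The following is a minimal sketch, assuming a Euclidean ambient metric (G_X = I) and a toy MLP standing in for the observation function f; the sizes, the architecture, and the midpoint discretization of the curve length are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch: pullback metric and discretized curve length (Eqs. (1)-(2)),
# assuming a Euclidean ambient metric G_X = I and a toy MLP decoder.
import torch
from torch.autograd.functional import jacobian

d_z, d_x = 2, 4                       # illustrative latent / ambient dimensions
decoder = torch.nn.Sequential(        # stands in for the observation function f : Z -> X
    torch.nn.Linear(d_z, 64), torch.nn.Tanh(), torch.nn.Linear(64, d_x)
)

def pullback_metric(z):
    """G_f(z) = J_f(z)^T J_f(z) for a Euclidean ambient metric."""
    J = jacobian(lambda v: decoder(v), z)        # (d_x, d_z)
    return J.T @ J                               # (d_z, d_z)

def curve_length(points):
    """Discretized version of Eq. (1): sum of sqrt(dz^T G(z) dz) along a polyline."""
    total = 0.0
    for z0, z1 in zip(points[:-1], points[1:]):
        dz = z1 - z0
        mid = 0.5 * (z0 + z1)                    # evaluate the metric at the midpoint
        G = pullback_metric(mid)
        total = total + torch.sqrt(dz @ G @ dz + 1e-12)
    return total

z_a, z_b = torch.zeros(d_z), torch.ones(d_z)
straight = torch.stack([z_a + t * (z_b - z_a) for t in torch.linspace(0, 1, 16)])
print("length of straight-line curve:", curve_length(straight).item())
```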
The metric G_f is known as the pullback metric, as it “pulls back” the metric G_X on X to G_f via f : Z → X. The pullback metric captures the intrinsic geometry of the immersed submanifold while taking into account the ambient space X. The geodesic distance in ambient space is captured by geodesics in the latent space Z, reducing the problem to learning the latent embedding space Z and the observation function f. Indeed, learning the latent space and observation function f can be achieved through an encoder-decoder framework, such as a VAE [Arvanitidis et al., 2018]. 3 Background: Penalty of Uncertainty for Offline Reinforcement Learning A key element of model-based RL methods involves estimating a model P̂(s′|s, a) to construct a pessimistic MDP (pessimism is a key element of offline RL algorithms [Jin et al., 2020], limiting overestimation of a trained policy due to the distribution shift between the data and the trained policy). This work builds upon MOPO, a recently proposed model-based offline RL framework [Yu et al., 2020]. Particularly, we assume access to an approximate MDP (S, A, r̂, P̂, α) (e.g., trained by maximizing the likelihood of the data), and define a penalized MDP (S, A, r̃, P̂, α), such that for all s ∈ S, a ∈ A, r̃(s, a) = r̂(s, a) − λc(P(·|s, a), P̂(·|s, a)), where c penalizes the reward according to model error (e.g., the total variation distance) and λ > 0. The offline RL problem is then solved by executing an online algorithm in the reward-penalized MDP. Unfortunately, as P(·|s, a) is unknown and can only be estimated from the data, c(P(·|s, a), P̂(·|s, a)) cannot be calculated. Nevertheless, one can attempt to upper bound the distance, i.e., for some U : S × A → R, c(P(·|s, a), P̂(·|s, a)) ≤ U(s, a) for all s ∈ S, a ∈ A. In this work we propose to use a naturally induced metric of a variational forward model, which we show can introduce an effective penalty for offline RL. In Section 4 we define this metric, and finally, we leverage it in Section 4.4. 4 Metrics of Uncertainty As described in the previous section, our goal is to estimate model error in order to penalize the agent in out of distribution (OOD) regions. Yu et al. [2020] proposed to achieve this through bootstrap ensembles, an out of distribution uncertainty estimation technique. Alternatively, we propose to employ a well-known nonparametric approach for uncertainty estimation [Villa Medina et al., 2013, Fathabadi et al., 2021], namely k-nearest neighbors (k-NN). Specifically, for any (s, a) ∈ S × A, we estimate model error by U(s, a) = (1/k) Σ_{(s_i, a_i) ∈ NN_k(s, a)} d((s, a), (s_i, a_i)), (3) where d : S × A × S × A → R_+ is a distance metric, and NN_k(s, a) is the set of k-nearest neighbors of (s, a) in D_n according to the distance metric d. A question arises: how to choose d? Using the Euclidean distance in ambient (state-action) space is usually a bad choice (e.g., the ℓ2 distance between natural images is not necessarily meaningful). Moreover, to correctly measure the error, model dynamics should somehow be taken into consideration. We therefore consider an alternative approach which leverages the latent space of a variational forward model, as described next. 4.1 A Variational Latent Model of Dynamics We begin by modeling P̂(s′|s, a) using a generative latent model. Specifically, we consider a latent model which consists of an encoder E : S × A → B(Z) and a decoder f_D : Z → B(S), where B(X) is the set of probability measures on the Borel sets of X. While the encoder E learns a latent representation of (s, a), the decoder f_D estimates the next state s′ according to P(·|s, a). This model corresponds to the decomposition P(·|s, a) = f_D(·|E(s, a)).
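Before turning to how this latent model is trained, the k-NN penalty of Equation (3) can be sketched in a few lines. The sketch below assumes the dataset's state-action pairs have already been embedded by the encoder and uses exact Euclidean k-NN as a placeholder distance; the geodesic distance introduced later (or a FAISS index, as in the experiments) could be dropped in via `dist_fn`. All names and sizes are illustrative.

```python
# Minimal sketch of the k-NN uncertainty penalty in Equation (3).
# Assumptions: the dataset's (s, a) pairs are already encoded into latent vectors,
# and `dist_fn` is any pairwise distance (Euclidean here; a geodesic distance later).
import torch

def knn_uncertainty(query_z, data_z, k=5, dist_fn=None):
    """U(s, a) = (1/k) * sum of distances to the k nearest dataset neighbors."""
    if dist_fn is None:
        dist_fn = lambda q, d: torch.cdist(q, d)          # Euclidean placeholder
    dists = dist_fn(query_z, data_z)                       # (n_query, n_data)
    knn_d, _ = torch.topk(dists, k, dim=1, largest=False)  # k smallest distances
    return knn_d.mean(dim=1)                               # (n_query,)

# Usage: penalize the estimated reward, r_tilde = r_hat - lam * U, as in MOPO.
data_z = torch.randn(1000, 8)        # encoded dataset (s, a) pairs (illustrative)
query_z = torch.randn(32, 8)         # encoded new (s, a) pairs from model rollouts
r_hat = torch.randn(32)
lam = 1.0
r_tilde = r_hat - lam * knn_uncertainty(query_z, data_z, k=5)
```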
Such a model can be trained by maximizing the Evidence Lower BOund (ELBO, Kingma and Welling [2013]) over the data. That is, given a prior P(z), we model E_ϕ, f_{D,θ} as parametric functions and maximize the ELBO, max_{θ,ϕ} E_{E_ϕ(z|s,a)}[ log f_{D,θ}(s′|z) ] − D_KL(E_ϕ(z|s, a) || P(z)). We refer the reader to the appendix for an exhaustive overview of training VAEs by maximum likelihood and the ELBO.
Algorithm 1 GELATO: Geometrically Enriched LATent model for Offline reinforcement learning
1: Input: Offline dataset D_n, RL algorithm
2: Train variational latent forward model on dataset D_n by maximizing the ELBO.
3: Construct approximate MDP (S, A, r̂, P̂, α).
4: Use the distance d_Z induced by the pullback metric G_{f_{D,U} ∘ f_F} (Theorem 2) to penalize the reward: r̃_d(s, a) = r̂(s, a) − λU(s, a), where U(s, a) = Σ_{(s_i, a_i) ∈ NN_k(s, a)} d_Z(E(s, a), E(s_i, a_i)).
5: Train RL algorithm over the penalized MDP (S, A, r̃_d, P̂, α).
Recall that we wish to find a good metric for estimating model error. Having learned a latent model for P̂(s′|s, a), its latent space Z can be used to define the metric d in Equation (3), i.e., the Euclidean distance between latent representations of state-action pairs. Unfortunately, as was previously shown [Arvanitidis et al., 2018], latent codes in variational models contain sharp discontinuities, rendering Euclidean distances in latent space unreliable and inaccurate (as we will also demonstrate in our experiments). Instead, we propose to use the natural induced metric of our latent model, as described in the following subsection. 4.2 The Pullback Metric of Model Dynamics In this part we define the metric d that will be used to approximate model error in Equation (3). Specifically, we consider a Riemannian submanifold defined by a latent space Z and observation function f, which induces minimum energy in the ambient space. We will later choose Z to be the latent space of our variational model (i.e., the encoded state-action) and f to be the decoder function f_D of next-state transitions. We define the distance metric formally below. Definition 2. We define a Riemannian submanifold (M_Z, g_Z) by a differentiable function f : Z → S and latent space Z such that d_Z(z1, z2) = inf_γ ∫_0^1 ‖ ∂f(γ(t))/∂t ‖ dt s.t. γ(0) = z1, γ(1) = z2. A similar metric has been used in previous work on generative latent models [Chen et al., 2018, Arvanitidis et al., 2018]. By choosing f to be the decoder function f_D, latent codes that are close w.r.t. d_Z induce curves of minimal energy in the ambient observation space (i.e., next state). This metric is closely related to the pullback metric (see Section 2.2), as shown by the following proposition. Proposition 1. Let (M_Z, g_Z) be as defined above. Then G_f(z) = J_f(z)^T J_f(z) for any z ∈ Z. Indeed, Proposition 1 shows us that G_f is a pullback metric. Particularly, Z and J_f define the structure of the ambient observation space X (in our case, next-state transitions). By choosing f to be the decoder function f_D, the metric G_{f_D} becomes stochastic, complicating analysis. Instead, as proposed and analyzed in Arvanitidis et al. [2018], we use the expected pullback metric E[G_Z] as an approximation of the underlying stochastic metric. Similar to previous work on variational models, we use a normally distributed decoder to define the output.
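The geodesic of Definition 2 can be approximated numerically. The sketch below mirrors the procedure described later in Section 5.2 (a parametric latent curve whose energy is minimized by gradient descent), but the discretization, optimizer settings, and the decoder interface are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch: approximate the geodesic distance of Definition 2 by parameterizing a latent
# curve with its intermediate points and minimizing a discretized curve energy.
# The decoder, number of segments, and optimizer settings are illustrative.
import torch

def geodesic_distance(decoder, z_start, z_end, n_points=16, steps=200, lr=1e-2):
    # Interior points of the curve are the optimization variables.
    ts = torch.linspace(0, 1, n_points)[1:-1].unsqueeze(1)
    interior = (z_start + ts * (z_end - z_start)).clone().requires_grad_(True)
    opt = torch.optim.Adam([interior], lr=lr)

    def full_curve():
        return torch.cat([z_start.unsqueeze(0), interior, z_end.unsqueeze(0)], dim=0)

    for _ in range(steps):
        opt.zero_grad()
        x = decoder(full_curve())               # map curve to ambient space
        energy = ((x[1:] - x[:-1]) ** 2).sum()  # discretized curve energy
        energy.backward()
        opt.step()

    with torch.no_grad():
        x = decoder(full_curve())
        length = (x[1:] - x[:-1]).norm(dim=1).sum()  # discretized length of Definition 2
    return length
```

A function like this could serve as the distance in the k-NN sketch above, with E(s, a) providing the latent endpoints.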
Using Proposition 1, we have the following result (see Appendix for proof). Theorem 1 [Arvanitidis et al., 2018]. Assume f_D(·|z) ∼ N(µ(z), σ(z)I). Then E_{f_D(·|z)}[ G_{f_D}(z) ] = G_µ(z) + G_σ(z), (4) where G_µ(z) = J_µ(z)^T J_µ(z) and G_σ(z) = J_σ(z)^T J_σ(z). Given an embedded latent space Z, the expected metric in Equation (4) gives us a sense of the topology of the latent space manifold induced by f_D. The terms G_µ = J_µ^T J_µ and G_σ = J_σ^T J_σ are in fact the induced pullback metrics of µ and σ, respectively. 4.3 Capturing Epistemic and Aleatoric Uncertainty The previously proposed encoder-decoder model induces a metric which captures the structure of the learned dynamics. However, the decoder variance, σ(z), does not differentiate between aleatoric uncertainty (environment dynamics) and epistemic uncertainty (missing data). We propose two methods to enrich the metric in Equation (4) in order to achieve a better estimate of uncertainty. First, by using an ensemble of M decoder functions {f_{D,i}}_{i=1}^M trained using standard bootstrap techniques [Efron, 1982], we capture the traditional epistemic uncertainty of the decoder parameters. Second, to correctly distinguish epistemic and aleatoric uncertainty, we add a latent forward function to our previously proposed variational model. Specifically, our latent model consists of an encoder E : S × A → B(Z), a forward model f_F : Z → B(X), and decoder functions f_{D,i} : X → B(S) such that P(·|s, a) = f_{D,i}(·|x), and x ∼ f_F(·|E(s, a)). This structure enables us to capture the aleatoric uncertainty under the forward transition model f_F, and the epistemic uncertainty using the decoders f_{D,i}. That is, for f_F(·|z) ∼ N(µ_F(z), σ_F(z)I), the variance model σ_F(z) captures the stochasticity in model dynamics. This decomposition is also helpful whenever one wants to train an agent in latent space (e.g., for planning, Schrittwieser et al. [2020]). Next, we turn to analyze the pullback metric induced by the proposed forward transition model. As both f_F and {f_{D,i}}_{i=1}^M are stochastic (capturing epistemic and aleatoric uncertainty), the result of Theorem 1 cannot be directly applied to their composition. The following proposition provides an analytical expression for the expected pullback metric of a sampled next state and a uniformly sampled decoder (the proof is given in the appendix). Theorem 2. Assume f_F(·|z) ∼ N(µ_F(z), σ_F(z)I), f_{D,i}(·|x) ∼ N(µ_D^i(x), σ_D^i(x)I), and U ∼ Unif{1, . . . , M}. Then the expected pullback metric of the composite function (f_{D,U} ∘ f_F) is given by E_{P(f_{D,U} ∘ f_F)}[ G_{f_{D,U} ∘ f_F}(z) ] = J_{µ_F}(z)^T G_{f_D}(z) J_{µ_F}(z) + J_{σ_F}(z)^T diag( G_{f_D}(z) ) J_{σ_F}(z), where G_{f_D}(z) = (1/M) Σ_{i=1}^M E_{x ∼ f_F(·|z)}[ J_{µ_D^i}(x)^T J_{µ_D^i}(x) + J_{σ_D^i}(x)^T J_{σ_D^i}(x) ]. Unlike the metric in Equation (4), the composite metric distorts the decoder metric with Jacobian matrices of the forward model statistics. It takes into account both the aleatoric and epistemic uncertainty through the forward model as well as the ensemble of decoders. As a special case, we note the metric for the case of deterministic model dynamics. Corollary 1. Assume deterministic model dynamics, i.e., x = f_F(z), and without loss of generality assume f_F ≡ I. Then the expected pullback metric of Theorem 2 is given by E[ G_{f_{D,U} ∘ f_F}(z) ] = (1/M) Σ_{i=1}^M [ J_{µ_D^i}(z)^T J_{µ_D^i}(z) + J_{σ_D^i}(z)^T J_{σ_D^i}(z) ].
4.4 GELATO: Geometrically Enriched LATent model for Offline reinforcement learning Having defined our metric, we are now ready to leverage it in a model-based offline RL framework. Specifically, provided a dataset D_n = {(s_i, a_i, r_i, s′_i)}_{i=1}^n, we train the variational latent forward model described in the previous section. Algorithm 1 presents GELATO, our proposed approach. In GELATO, we estimate model error by measuring the distance of a new sample to the data manifold. We construct the reward-penalized MDP for which the error acts as a pessimistic regularizer. Finally, we train an RL agent over the pessimistic MDP with transition P̂(·|s, a) and reward r(s, a) − λU(s, a). By achieving an improved estimate of model error, the model-based pessimistic approach can significantly improve performance, as shown in the following section. 5 Experiments This section is dedicated to quantitatively and qualitatively understanding the benefits of our proposed metric and method. We validate two principal claims: (1) Our metric captures inherent characteristics of model dynamics. We demonstrate this by visualizing state-action geodesics of a toy grid-world problem and a multi-agent autonomous driving task. We show that curves of minimum energy in ambient space indeed capture intrinsic properties of the problem. (2) Our metric provides an improved OOD uncertainty estimate for offline RL. We compare the traditionally used bootstrap ensemble method to our approach, which leverages our pullback metric in a nonparametric nearest neighbors approach. We also compare our method to the simple use of Euclidean distances in latent space. We run extensive experiments on continuous control and autonomous driving benchmarks. We show that our metric achieves significantly improved performance in tasks for which geodesics are non-Euclidean. 5.1 Metric Visualization Four Rooms. To better understand the inherent structure of our metric, we constructed a grid-world environment for visualizing our proposed latent representation and metric. The 15 × 15 environment, as depicted in Figure 2, consists of four rooms, with impassable obstacles in their centers. The agent, residing at some position (x, y) ∈ [−1, 1]² in the environment, can take one of four actions: up, down, left, or right, moving the agent 1, 2, or 3 steps (uniformly distributed) in that direction. We collected a dataset of 10000 samples, taking random actions at random initializations of the environment. The ambient state space was represented by the position of the agent, a vector of dimension 2, normalized to values in [−1, 1]. Finally, we trained a variational latent model with latent dimension d_Z = 2. We used a standard encoder z ∼ N(µ_θ(s), σ_θ(s)) and decoder s′ ∼ N(µ_ϕ(z), σ_ϕ(z)) represented by neural networks trained end-to-end using the evidence lower bound. We refer the reader to the appendix for an exhaustive description of the training procedure. The latent space output of our model is depicted by yellow markers in Figure 2a. Indeed, the latent embedding consists of four distinctive clusters, structured in a similar manner as our grid-world environment. Interestingly, the distortion of the latent space accurately depicts an intuitive notion of distance between states. As such, rooms are distinctively separated, with fair distance between each cluster. States of pathways between rooms clearly separate the room clusters, forming a topology with four discernible bottlenecks. In addition to the latent embedding, Figure 2a depicts the geometric volume measure √det(G_{f_D}) of the trained pullback metric induced by f_D. This quantity demonstrates the effective geodesic distances between states in the decoder-induced submanifold.
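The volume measure √det(G_f(z)) shown in Figure-2a-style plots can be evaluated pointwise on a latent grid. A minimal sketch follows; the decoder, grid range, and resolution are illustrative assumptions.

```python
# Sketch: the volume measure sqrt(det(G_f(z))) evaluated on a 2-D latent grid,
# as used for the Figure 2a-style visualization. Decoder and grid are illustrative.
import torch
from torch.autograd.functional import jacobian

decoder_mu = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2))

def volume_measure(z):
    J = jacobian(lambda v: decoder_mu(v), z)    # (d_s, d_z)
    G = J.T @ J
    return torch.sqrt(torch.det(G).clamp_min(1e-12))

xs = torch.linspace(-3, 3, 50)
grid = torch.stack(torch.meshgrid(xs, xs, indexing="ij"), dim=-1).reshape(-1, 2)
vols = torch.stack([volume_measure(z) for z in grid]).reshape(50, 50)
# `vols` can be rendered (e.g., with an image plot) to reveal high-distance regions.
```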
Indeed, geodesics from data points to points outside of the data manifold (i.e., outside of the red region) attain high values, as they integrate over areas of high magnitude. In contrast, geodesics near the data manifold attain low values. We visualize the decoder-induced geodesic distance and compare it to the latent Euclidean distance in Figures 2b and 2c, respectively. The plots depict the normalized distances of all states to the state marked by a yellow square. Evidently, the geodesic distance captures a better notion of distance in this environment, correctly exposing the “land distance” in ambient space. As expected, states residing in the bottom-right room are farthest from the source state, as reaching them requires passing through at least two bottleneck states. In contrast, the latent Euclidean distance does not properly capture these geodesics, exhibiting a similar distribution of distances in other rooms. Nevertheless, both geodesic and Euclidean distances reveal the intrinsic topological structure of the environment, which is not captured by the extrinsic coordinates (x, y) ∈ [−1, 1]². Particularly, the state coordinates (x, y) would wrongly assign short distances to states across impassable walls or obstacles, i.e., measuring the “air distance”. Intersection. We visualized our metric in the intersection environment proposed in Leurent [2018]. Figure 3 compares the Euclidean and geodesic distances of a partially trained agent. Unlike the previous toy example, to visualize the inherent manifolds we used t-SNE [Van der Maaten and Hinton, 2008] projections computed with the Euclidean distance and compared them to the projection computed with the geodesic distance, i.e., curves of minimum energy in ambient space (Definition 2). Indeed, the geodesics captured the inherent structure of the environment, whereas Euclidean distances only managed to capture general clusters. These results suggest that the Euclidean distance might not be representative for measuring distances in latent space, as will also become evident from our experiments in the following subsections. 5.2 Datasets and Implementation Details We used D4RL [Fu et al., 2020] and the autonomous vehicle environments highway-env [Leurent, 2018] as benchmarks for all of our experiments. We tested GELATO on three Mujoco [Todorov et al., 2012] environments (Hopper, Walker2d, Halfcheetah) on datasets generated by a single policy and a mixture of two policies. Specifically, we used datasets generated by a random agent (1M samples), a partially trained agent, i.e., a medium agent (1M samples), and a mixture of partially trained and expert agents (2M samples). For autonomous driving, we tested GELATO on four environments (Highway, Roundabout, Intersection, Racetrack), on datasets containing five episodes generated by a partially trained agent. We also tested a faster (×15 speedup) variant of the Highway environment, as well as a harder instantiation of the Intersection environment in which the number of cars was tripled (further details can be found in the appendix). We trained our variational latent model in two phases. First, the model was fully trained using a calibrated Gaussian decoder [Rybkin et al., 2020]. Specifically, a maximum-likelihood estimate of the variance was used, σ*² = MSE(µ, µ̂), i.e., σ* ∈ argmax_σ N(µ̂ | µ, σ²I). Then, in the second stage, we fit the variance decoder networks. Hyperparameters for training are found in the appendix.
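The two-phase training just described can be sketched in a few lines. The following is a minimal, illustrative version of one ELBO step with a calibrated Gaussian decoder in the spirit of Rybkin et al. [2020], where the decoder variance is set to its maximum-likelihood (MSE-based) value; all architectures and sizes are placeholder assumptions.

```python
# Sketch of one ELBO training step with a calibrated Gaussian decoder:
# the decoder variance is set to its maximum-likelihood value, sigma*^2 = MSE(mu, target).
# Encoder/decoder sizes are illustrative; the prior is a standard normal.
import torch
import torch.nn.functional as F

d_sa, d_z, d_s = 14, 8, 11
enc = torch.nn.Linear(d_sa, 2 * d_z)      # outputs mean and log-variance of q(z | s, a)
dec_mu = torch.nn.Linear(d_z, d_s)        # mean of p(s' | z)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec_mu.parameters()), lr=1e-3)

def elbo_step(sa, s_next):
    mu_q, logvar_q = enc(sa).chunk(2, dim=-1)
    z = mu_q + torch.randn_like(mu_q) * torch.exp(0.5 * logvar_q)   # reparameterization
    mu_x = dec_mu(z)
    sigma2 = F.mse_loss(mu_x, s_next).detach().clamp_min(1e-6)      # calibrated variance
    recon = 0.5 * ((s_next - mu_x) ** 2 / sigma2 + torch.log(sigma2)).sum(dim=-1).mean()
    kl = -0.5 * (1 + logvar_q - mu_q ** 2 - logvar_q.exp()).sum(dim=-1).mean()
    loss = recon + kl                                               # negative ELBO
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

batch_sa, batch_next = torch.randn(64, d_sa), torch.randn(64, d_s)
print(elbo_step(batch_sa, batch_next))
```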
In order to practically estimate the geodesic distance in Algorithm 1, we defined a parametric curve in latent space and used gradient descent to minimize the curve's energy. The resulting curve and pullback metric were then used to calculate the geodesic distance by a numerical estimate of the curve length (see the appendix for an exhaustive overview of the estimation method). We used FAISS [Johnson et al., 2019] for efficient GPU-based k-nearest neighbors calculation. We set k = 5 neighbors for the penalized reward (Equation (3)). Finally, we used a variant of Soft Learning, as proposed by Yu et al. [2020], as our RL algorithm for the continuous control benchmarks, and PPO [Schulman et al., 2017] for the autonomous driving tasks. All agents were trained for 1M steps (for the continuous control benchmarks) and 350K steps (for the driving benchmarks), using a single GPU (RTX 2080), and averaged over 5 seeds (see the appendix for more details). 5.3 Results D4RL. Results for the continuous control domains are shown in Table 1. We performed experiments to analyze GELATO on various continuous control datasets. We compared GELATO to contemporary model-based offline RL approaches; namely, MOPO [Yu et al., 2020] and MBPO [Janner et al., 2019], as well as the standard baselines of SAC [Haarnoja et al., 2018] and imitation (behavioral cloning, Bain and Sammut [1995], Ross et al. [2011]). We found a performance increase on most domains, most significantly in the medium domain, i.e., with the partially trained agent. Since the medium dataset contained average behavior, combining the nonparametric nearest-neighbor uncertainty method with the bootstrap of decoders benefited the agent's overall performance. In addition, GELATO with the latent ℓ2 distance metric performed well on many of the benchmarks. We conjecture this is due to the inherent Euclidean nature of the continuous control benchmarks. Unlike the embedding for the autonomous driving benchmarks (Figure 3), we found the D4RL data to project similarly when ℓ2 and geodesic distances were used (we provide plots of these embeddings in the appendix). Highway-Env. Figure 4 shows results for the autonomous driving benchmarks in highway-env. In contrast to the continuous control benchmarks, we found a significant improvement of our metric on the autonomous driving benchmarks compared to standard uncertainty estimation methods as well as the latent Euclidean distance. We credit this improvement to the non-Euclidean nature of the environments, as previously described in Figure 3. While Euclidean distances were useful in the Mujoco environments, they performed distinctly worse in the autonomous driving environments. Our results emphasize the importance of OOD uncertainty estimation methods in reinforcement learning on various types of datasets. While robotic control tasks provided useful insights, they did not capture the non-Euclidean nature inherent in alternative tasks, such as autonomous driving. 6 Related Work Offline Reinforcement Learning. The field of offline RL has recently received much attention as several algorithmic approaches were able to surpass standard off-policy algorithms. Value-based online algorithms do not perform well under highly off-policy batch data [Fujimoto et al., 2019, Kumar et al., 2019, Fu et al., 2019, Fedus et al., 2020, Agarwal et al., 2020], largely due to issues with bootstrapping from out-of-distribution (OOD) samples. These issues become more prominent in the offline setting, as new samples cannot be acquired.
Several works on offline RL have shown improved performance on standard continuous control benchmarks [Laroche et al., 2019, Kumar et al., 2019, Fujimoto et al., 2019, Chen et al., 2020b, Swazinna et al., 2020, Kidambi et al., 2020, Yu et al., 2020]. This work focused on model-based approaches [Yu et al., 2020, Kidambi et al., 2020], in which the agent is incentivized to remain close to areas of low uncertainty. Our work focused on controlling uncertainty estimation in high-dimensional environments. Our methodology utilized recent success on the geometry of deep generative models [Arvanitidis et al., 2018, 2020], proposing an alternative approach to uncertainty estimation. Representation Learning. Representation learning seeks to find an appropriate representation of data for performing a machine-learning task [Goodfellow et al., 2016]. Variational Autoencoders [Kingma and Welling, 2013, Rezende et al., 2014] have been a popular representation learning technique, particularly in unsupervised learning regimes [Chen et al., 2016, Van Den Oord et al., 2017, Hsu et al., 2017, Serban et al., 2017, Engel et al., 2017, Bojanowski et al., 2018, Ding et al., 2020], though also in supervised learning and reinforcement learning [Hausman et al., 2018, Li et al., 2019, Petangoda et al., 2019, Hafner et al., 2019]. Particularly, variational models have been shown to derive successful behaviors in high-dimensional benchmarks [Hafner et al., 2020]. Various representation techniques in reinforcement learning have also been proposed to disentangle representations of both states [Engel and Mannor, 2001, Littman and Sutton, 2002, Stooke et al., 2020, Zhu et al., 2020] and actions [Tennenholtz and Mannor, 2019, Chandak et al., 2019]. These allow for the abstraction of states and actions to significantly decrease computation requirements by, e.g., decreasing the effective dimensionality of the action space [Tennenholtz and Mannor, 2019]. Unlike previous work, GELATO is focused on a semiparametric approach for uncertainty estimation, enhancing offline reinforcement learning performance. 7 Discussion and Future Work This work presented a metric for model dynamics and its application to offline reinforcement learning. While our metric showed supportive evidence of improvement in model-based offline RL, we note that it was significantly slower: approximately 5 times slower than using the decoder's variance for uncertainty estimation. The apparent slowdown in performance was mostly due to computation of the geodesic distance. Improvement in this area may utilize techniques for efficient geodesic estimation. We conclude by noting possible future applications of our work. In Section 5.1 we demonstrated the inherent geometry our model had captured, its corresponding metric, and geodesics. Still, in this work we focused specifically on metrics related to the decoded state. In fact, a similar derivation to Theorem 2 could be applied to other modeled statistics, e.g., Q-values, rewards, future actions, and more. Each distinct statistic would induce its own unique metric w.r.t. its respective probability measure. Particularly, this concept may benefit a vast array of applications in continuous or large state and action spaces.
1. What is the focus and contribution of the paper on using Riemannian distance metrics for estimating uncertainties in model-based Reinforcement Learning (RL) settings? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its novelty, performance, and choice of metric? 3. Do you have any questions regarding the paper's clarity, organization, and relevance to the field? 4. How does the reviewer assess the significance of the proposed framework, results, and conclusions? 5. Are there any suggestions or recommendations for improving the paper, such as providing more detail in certain areas or addressing minor issues with figures and references?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper Using Riemannian distance metrics instead of Euclidean metrics, the authors better estimate uncertainties for out-of-distribution (OOD) data and apply them to model-based Reinforcement Learning (RL) settings. The paper introduces a new pullback metric for such dynamic models, and verifies its performance on several RL benchmarks. Strengths And Weaknesses A very well written paper, where the novelty is clear. The results are quite nice, though showing the improvement in more scenarios would be great, since the authors noticed that the Riemannian manifold metric works better in some specific cases. However, it seems a bit limiting to define the metric for only dynamic systems. The authors could elaborate more on their choice of this specific setting for model-based RL uncertainty estimation as well, as opposed to general uncertainty measurement. While the whole paper is focused around offline RL, it would be great if online RL could be added into the introduction a bit more so the contrast between the two is clearer. It also isn't clear to the reader why parametric and nonparametric methods are combined for uncertainty estimation; this could be motivated better. In line 27, the "metric" used for kNN could be associated with how the distance is defined. That way the reader can understand why a "proper metric" is important. Similarly, line 29 could have just a quick explanation of what epistemic and aleatoric uncertainties are. Just model/data uncertainty would help the understanding for the reader. Section 2.1 is mainly about (online) RL; I am not sure the title necessarily fits. Or the third paragraph on the offline setting could be expanded and explained in more detail, since it is the focus of this paper. The equation in lines 78-79 (which seems quite important) could be elaborated on more: in what space the curve length is defined, and why we care about using a curve length for f(γ) instead of γ. Also, it seems section 3 would fit into section 2, since it is also a preliminary/background. Is there a reason it is separate? Though Algorithm 1 is presented quite nicely, it could be even further improved if it were a visual figure, I believe. It would make a very strong titlepage figure as well, outlining the novel framework the authors introduce in this paper. A major issue that can be easily fixed is that many of the figures, tables, and algorithms are located pages before they are actually referenced in the text. The whole idea of the metric capturing inherent characteristics of the model dynamics is still a bit vague to me. If it is, for instance, the discrete bottlenecks present in the four rooms example, more samples would be needed to see this clearly. If it is the clustering in the intersection example, more explanation is needed as to why the clusters for Euclidean distance are not enough. t-SNE visualization is only qualitative, and distances do not necessarily mean much, though maybe some conclusion can be drawn from the shape of the clusters. A lot of explanation on how the geodesics are found is missing, e.g., how the parametric curve is defined. This could be made a much more important part of the paper. The computation time is also mentioned at some point, and this could be described in more detail; exact computation times would be appreciated. The related works are divided quite nicely, but the last paragraph on disentangling latent space for RL feels a bit out of place and irrelevant.
Lastly, a lot of the discussion was in the results (Section 5.3) already, so the section 7 title could just be "Conclusion". All in all, a well-written paper with plenty of preliminary information for readers of all backgrounds. The proposed framework could be shown using a titlepage figure describing the pipeline/block diagram for more impact. The overall contributions are clear, and the results agree with the conclusions from the authors. Although many of the core ideas follow from previous work, this work enables the application of geometry-aware learning methods in an RL context, providing consistently better results than baselines. Small details: There is no appendix, even though it was referenced many times in the paper. Missing figure reference in line 19. Typo in line 29: "Riemmannian". Line 231: "would low values." is missing a word. Questions In the pullback metric for model dynamics, how did the authors come up with this choice of metric? How is the time derivative computed? And what about the geodesic computation: are you using gradient descent to solve an ODE? In the abstract, the authors mention how the "proposed metric measures both the quality of OOD samples as well as the discrepancy of examples in the data" (line 7). Don't they both in the end mean the same thing? In Figure 1, "high model error" is mentioned twice, but never do we see "low model error". This would be on the manifold, correct? Would it make sense to add it somewhere in the figure/caption? Line 121 mentions how the encoder and decoders are parametric functions; are there any details on this? If normal neural networks are used, would that not be nonparametric? The equation in Definition 2 defines the curve length from Eq. 1, correct? It could be helpful to link these two for better understanding if that's the case. The authors introduce an ensemble of decoders in Line 161, which I assume is then used for all experiments that follow? How about the forward model: do we continue with the assumption in Corollary 1 and practically neglect it? What is the "meaning" of this ambient space X? This could be made clearer. The need for this forward model in the first place is unclear; perhaps Corollary 1 could be entirely removed. The results are compared with a classical bootstrap ensemble method, but isn't that also used in the proposed method for the epistemic uncertainty? What is the difference there with the baseline? Is the four room experiment discrete or continuous? From the actions, it seems to be discrete, but since the latent space is continuous, I was wondering what happens if we sample more points in the latent space, and where in the rooms they would correspond to, i.e., how likely it is they will end up in the walls/obstacles. Also, would the input space be 3D? Isn't a 2D latent space a bit of an easy mapping in this case? Is "Data Score" in Table 1 the score that the collected data achieved? A quick explanation here would help. Why is t-SNE used for the intersection example latent space visualization? Is it because it is more than 2D? How many dimensions is the latent space here? Limitations The limitations are properly addressed by the authors.
NIPS
Title Hamiltonian Neural Networks Abstract Even though neural networks enjoy widespread use, they still struggle to learn the basic laws of physics. How might we endow them with better inductive biases? In this paper, we draw inspiration from Hamiltonian mechanics to train models that learn and respect exact conservation laws in an unsupervised manner. We evaluate our models on problems where conservation of energy is important, including the two-body problem and pixel observations of a pendulum. Our model trains faster and generalizes better than a regular neural network. An interesting side effect is that our model is perfectly reversible in time. Figure 1 (panels: ideal mass-spring system; noisy observations; baseline NN prediction; Hamiltonian NN prediction): Learning the Hamiltonian of a mass-spring system. The variables q and p correspond to position and momentum coordinates. As there is no friction, the baseline's inner spiral is due to model errors. By comparison, the Hamiltonian Neural Network learns to exactly conserve a quantity that is analogous to total energy. 1 Introduction Neural networks have a remarkable ability to learn and generalize from data. This lets them excel at tasks such as image classification [21], reinforcement learning [45, 26, 37], and robotic dexterity [1, 22]. Even though these tasks are diverse, they all share the same underlying physical laws. For example, a notion of gravity is important for reasoning about objects in an image, training an RL agent to walk, or directing a robot to manipulate objects. Based on this observation, researchers have become increasingly interested in finding physics priors that transfer across tasks [43, 34, 17, 10, 6, 40]. Untrained neural networks do not have physics priors; they learn approximate physics knowledge directly from data. This generally prevents them from learning exact physical laws. Consider the frictionless mass-spring system shown in Figure 1. Here the total energy of the system is being conserved. More specifically, this particular system conserves a quantity proportional to q² + p², where q is the position and p is the momentum of the mass. The baseline neural network in Figure 1 learns an approximation of this conservation law, and yet the approximation is imperfect enough that a forward simulation of the system drifts over time to higher or lower energy states. Can we define a class of neural networks that will precisely conserve energy-like quantities over time? In this paper, we draw inspiration from Hamiltonian mechanics, a branch of physics concerned with conservation laws and invariances, to define Hamiltonian Neural Networks, or HNNs. We begin with an equation called the Hamiltonian, which relates the state of a system to some conserved quantity (usually energy) and lets us simulate how the system changes with time. Physicists generally use domain-specific knowledge to find this equation, but here we try a different approach: instead of crafting the Hamiltonian by hand, we propose parameterizing it with a neural network and then learning it directly from data. Since almost all physical laws can be expressed as conservation laws, our approach is quite general [27]. In practice, our model trains quickly and generalizes well (the code is available at github.com/greydanus/hamiltonian-nn). Figure 1, for example, shows the outcome of training an HNN on the same mass-spring system. Unlike the baseline model, it learns to conserve an energy-like quantity. 2 Theory Predicting dynamics.
The hallmark of a good physics model is its ability to predict changes in a system over time. This is the challenge we now turn to. In particular, our goal is to learn the dynamics of a system using a neural network. The simplest way of doing this is by predicting the next state of a system given the current one. A variety of previous works have taken this path and produced excellent results [41, 14, 43, 34, 17, 6]. There are, however, a few problems with this approach. The first problem is its notion of discrete “time steps” that connect neighboring states. Since time is actually continuous, a better approach would be to express dynamics as a set of differential equations and then integrate them from an initial state at t0 to a final state at t1. Equation 1 shows how this might be done, letting S denote the time derivatives of the coordinates of the system (that is, of any coordinates that describe the state of the system; later we will use position and momentum (q, p)). This approach has been under-explored so far, but techniques like Neural ODEs take a step in the right direction [7]. (q1, p1) = (q0, p0) + ∫_{t0}^{t1} S(q, p) dt (1) The second problem with existing methods is that they tend not to learn exact conservation laws or invariant quantities. This often causes them to drift away from the true dynamics of the system as small errors accumulate. The HNN model that we propose ameliorates both of these problems. To see how it does this, and to situate our work in the proper context, we first briefly review Hamiltonian mechanics. Hamiltonian Mechanics. William Hamilton introduced Hamiltonian mechanics in the 19th century as a mathematical reformulation of classical mechanics. Its original purpose was to express classical mechanics in a more unified and general manner. Over time, though, scientists have applied it to nearly every area of physics from thermodynamics to quantum field theory [29, 32, 39]. In Hamiltonian mechanics, we begin with a set of coordinates (q, p). Usually, q = (q1, ..., qN) represents the positions of a set of objects whereas p = (p1, ..., pN) denotes their momentum. Note how this gives us N coordinate pairs (q1, p1)...(qN, pN). Taken together, they offer a complete description of the system. Next, we define a scalar function H(q, p), called the Hamiltonian, so that dq/dt = ∂H/∂p, dp/dt = −∂H/∂q. (2) Equation 2 tells us that moving coordinates in the direction S_H = (∂H/∂p, −∂H/∂q) gives us the time evolution of the system. We can think of S as a vector field over the inputs of H. In fact, it is a special kind of vector field called a “symplectic gradient”. Whereas moving in the direction of the gradient of H changes the output as quickly as possible, moving in the direction of the symplectic gradient keeps the output exactly constant. Hamilton used this mathematical framework to relate the position and momentum vectors (q, p) of a system to its total energy E_tot = H(q, p). Then, he found S_H using Equation 2 and obtained the dynamics of the system by integrating this field according to Equation 1. This is a powerful approach because it works for almost any system where the total energy is conserved. Hamiltonian mechanics, like Newtonian mechanics, can predict the motion of a mass-spring system or a single pendulum. But its true strengths only become apparent when we tackle systems with many degrees of freedom. Celestial mechanics, which are chaotic for more than two bodies, are a good example. A few other examples include many-body quantum systems, fluid simulations, and condensed matter physics [29, 32, 39, 33, 9, 12].
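Equations (1)-(2) translate almost line-for-line into code. The sketch below is illustrative rather than the paper's implementation: it uses the analytic mass-spring Hamiltonian (k = m = 1) and automatic differentiation to follow the symplectic gradient with a simple Euler integrator; any differentiable H, including a neural network, could be substituted, and a Runge-Kutta step would track the true orbit more closely.

```python
# Sketch: rolling out dynamics from a Hamiltonian via the symplectic gradient
# S_H = (dH/dp, -dH/dq), here for the analytic mass-spring H of Figure 1 (k = m = 1).
import torch

def hamiltonian(q, p):
    return 0.5 * q**2 + 0.5 * p**2        # H = (1/2) k q^2 + p^2 / (2m), k = m = 1

def symplectic_gradient(q, p):
    q = q.clone().requires_grad_(True)
    p = p.clone().requires_grad_(True)
    H = hamiltonian(q, p)
    dHdq, dHdp = torch.autograd.grad(H, (q, p))
    return dHdp, -dHdq                     # (dq/dt, dp/dt) from Equation (2)

def rollout(q, p, dt=0.01, steps=1000):
    traj = []
    for _ in range(steps):
        dq, dp = symplectic_gradient(q, p)
        q, p = q + dt * dq, p + dt * dp    # simple Euler step; RK4 would be more accurate
        traj.append((q.item(), p.item()))
    return traj

traj = rollout(torch.tensor(1.0), torch.tensor(0.0))
print(traj[-1])   # approximately follows the circle q^2 + p^2 = 1, up to integrator error
```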
Hamiltonian Neural Networks. In this paper, we propose learning a parametric function for H instead of S_H. In doing so, we endow our model with the ability to learn exactly conserved quantities from data in an unsupervised manner. During the forward pass, it consumes a set of coordinates and outputs a single scalar “energy-like” value. Then, before computing the loss, we take an in-graph gradient of the output with respect to the input coordinates (Figure A.1). It is with respect to this gradient that we compute and optimize an L2 loss (Equation 3). L_HNN = ‖ ∂H_θ/∂p − ∂q/∂t ‖² + ‖ ∂H_θ/∂q + ∂p/∂t ‖² (3) For a visual comparison between this approach and the baseline, refer to Figure 1 or Figure 1(b). This training procedure allows HNNs to learn conserved quantities analogous to total energy straight from data. Apart from conservation laws, HNNs have several other interesting and potentially useful properties. First, they are perfectly reversible in that the mapping from (q, p) at one time to (q, p) at another time is bijective. Second, we can manipulate the HNN-conserved quantity (analogous to total energy) by integrating along the gradient of H, giving us an interesting counterfactual tool (e.g. “What would happen if we added 1 Joule of energy?”). We'll discuss these properties later in Section 6. 3 Learning a Hamiltonian from Data Optimizing the gradients of a neural network is a rare approach. There are a few previous works which do this [42, 35, 28], but their scope and implementation details diverge from this work and from one another. With this in mind, our first step was to investigate the empirical properties of HNNs on three simple physics tasks. Task 1: Ideal Mass-Spring. Our first task was to model the dynamics of the frictionless mass-spring system shown in Figure 1. The system's Hamiltonian is given in Equation 4, where k is the spring constant and m is the mass constant. For simplicity, we set k = m = 1. Then we sampled initial coordinates with total energies uniformly distributed between [0.2, 1]. We constructed training and test sets of 25 trajectories each and added Gaussian noise with σ² = 0.1 to every data point. Each trajectory had 30 observations; each observation was a concatenation of (q, p). H = (1/2) k q² + p²/(2m) (4) Task 2: Ideal Pendulum. Our second task was to model a frictionless pendulum. Pendulums are nonlinear oscillators so they present a slightly more difficult problem. Writing the gravitational constant as g and the length of the pendulum as l, the general Hamiltonian is H = 2mgl(1 − cos q) + l²p²/(2m) (5) Once again we set m = l = 1 for simplicity. This time, we set g = 3 and sampled initial coordinates with total energies in the range [1.3, 2.3]. We chose these numbers in order to situate the dataset along the system's transition from linear to nonlinear dynamics. As with Task 1, we constructed training and test sets of 25 trajectories each and added the same amount of noise. Task 3: Real Pendulum. Our third task featured the position and momentum readings from a real pendulum. We used data from a Science paper by Schmidt & Lipson [35] which also tackled the problem of learning conservation laws from data. This dataset was noisier than the synthetic ones and it did not strictly obey any conservation laws since the real pendulum had a small amount of friction. Our goal here was to examine how HNNs fared on noisy and biased real-world data.
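The loss in Equation (3) requires differentiating the scalar output H_θ with respect to its inputs inside the training graph. The following is a minimal sketch of that computation; the network matches the three-layer, 200-unit, tanh description of Section 3.1, but the placeholder batch and the single optimization step are illustrative assumptions.

```python
# Sketch of the HNN training loss (Equation 3): the network outputs a scalar H_theta,
# its in-graph gradient gives the predicted time derivatives, and an L2 loss compares
# them to target derivatives. The data here is a random placeholder batch.
import torch

H_net = torch.nn.Sequential(torch.nn.Linear(2, 200), torch.nn.Tanh(),
                            torch.nn.Linear(200, 200), torch.nn.Tanh(),
                            torch.nn.Linear(200, 1))

def hnn_loss(qp, dqp_dt):
    """qp: (batch, 2) coordinates; dqp_dt: (batch, 2) target time derivatives."""
    qp = qp.clone().requires_grad_(True)
    H = H_net(qp).sum()
    dH = torch.autograd.grad(H, qp, create_graph=True)[0]     # (batch, 2) = (dH/dq, dH/dp)
    pred = torch.stack([dH[:, 1], -dH[:, 0]], dim=1)          # symplectic gradient
    return ((pred - dqp_dt) ** 2).mean()

opt = torch.optim.Adam(H_net.parameters(), lr=1e-3)
qp, dqp_dt = torch.randn(25, 2), torch.randn(25, 2)           # placeholder batch
loss = hnn_loss(qp, dqp_dt)
opt.zero_grad(); loss.backward(); opt.step()
```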
3.1 Methods In all three tasks, we trained our models with a learning rate of 10⁻³ and used the Adam optimizer [20]. Since the training sets were small, we set the batch size to be the total number of examples. On each dataset we trained two fully-connected neural networks: the first was a baseline model that, given a vector input (q, p), output the vector (∂q/∂t, ∂p/∂t) directly. The second was an HNN that estimated the same vector using the derivative of a scalar quantity as shown in Equation 2 (also see Figure A.1). Where possible, we used analytic time derivatives as the targets. Otherwise, we calculated finite difference approximations. All of our models had three layers, 200 hidden units, and tanh activations. We trained them for 2000 gradient steps and evaluated them on the test set. We logged three metrics: L2 train loss, L2 test loss, and mean squared error (MSE) between the true and predicted total energies. To determine the energy metric, we integrated our models according to Equation 1 starting from a random test point. Then we used MSE to measure how much a given model's dynamics diverged from the ground truth. Intuitively, the loss metrics measure our model's ability to fit individual data points while the energy metric measures its stability and conservation of energy over long timespans. To obtain dynamics, we integrated our models with the fourth-order Runge-Kutta integrator in scipy.integrate.solve_ivp and set the error tolerance to 10⁻⁹ [30]. 3.2 Results We found that HNNs train as quickly as baseline models and converge to similar final losses. Table 1 shows their relative performance over the three tasks. But even as HNNs tied with the baseline on loss, they dramatically outperformed it on the MSE energy metric. Figure 2 shows why this is the case: as we integrate the two models over time, various errors accumulate in the baseline and it eventually diverges. Meanwhile, the HNN conserves a quantity that closely resembles total energy and diverges more slowly or not at all. It's worth noting that the quantity conserved by the HNN is not equivalent to the total energy; rather, it's something very close to the total energy. The third and fourth columns of Figure 2 provide a useful comparison between the HNN-conserved quantity and the total energy. Looking closely at the spacing of the y axes, one can see that the HNN-conserved quantity has the same scale as total energy, but differs by a constant factor. Since energy is a relative quantity, this is perfectly acceptable (to see why energy is relative, imagine a cat that is at an elevation of 0 m in one reference frame and 1 m in another; its potential energy, and total energy, will differ by a constant factor depending on the frame of reference). The total energy plot for the real pendulum shows another interesting pattern. Whereas the ground truth data does not quite conserve total energy, the HNN roughly conserves this quantity. This, in fact, is a fundamental limitation of HNNs: they assume a conserved quantity exists and thus are unable to account for things that violate this assumption, such as friction. In order to account for friction, we would need to model it separately from the HNN. 4 Modeling Larger Systems Having established baselines on a few simple tasks, our next step was to tackle a larger system involving more than one pair of (p, q) coordinates. One well-studied problem that fits this description is the two-body problem, which requires four (p, q) pairs. H = |p_CM|²/(m1 + m2) + (|p1|² + |p2|²)/(2µ) − g m1 m2/|q1 − q2| (6)
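Equation (6) can be written directly as code. In the sketch below, p_cm stands for the total momentum and mu for the reduced mass m1·m2/(m1 + m2), as explained in the next paragraph; the constants m1 = m2 = g = 1 match the experiments, while the initial conditions and the use of autograd for the time derivatives are illustrative assumptions.

```python
# Sketch: the two-body Hamiltonian of Equation (6) with m1 = m2 = g = 1, and its
# time derivatives via autograd. Purely illustrative, not the paper's implementation.
import torch

m1 = m2 = g = 1.0
mu = m1 * m2 / (m1 + m2)                  # reduced mass

def two_body_H(q1, q2, p1, p2):
    p_cm = p1 + p2                        # momentum of the center of mass
    kinetic = (p_cm @ p_cm) / (m1 + m2) + (p1 @ p1 + p2 @ p2) / (2 * mu)
    potential = -g * m1 * m2 / torch.norm(q1 - q2)   # attractive gravitational term
    return kinetic + potential

def time_derivatives(q1, q2, p1, p2):
    coords = [c.clone().requires_grad_(True) for c in (q1, q2, p1, p2)]
    H = two_body_H(*coords)
    dHdq1, dHdq2, dHdp1, dHdp2 = torch.autograd.grad(H, coords)
    # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq.
    return dHdp1, dHdp2, -dHdq1, -dHdq2

q1, q2 = torch.tensor([0.5, 0.0]), torch.tensor([-0.5, 0.0])
p1, p2 = torch.tensor([0.0, 0.7]), torch.tensor([0.0, -0.7])   # zero total momentum
print(time_derivatives(q1, q2, p1, p2))
```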
Task 4: Two-body problem. In the two-body problem, point particles interact with one another via an attractive force such as gravity. Once again, we let g be the gravitational constant and m represent mass. Equation 6 gives the Hamiltonian of the system, where µ is the reduced mass and p_CM is the momentum of the center of mass. As in previous tasks, we set m1 = m2 = g = 1 for simplicity. Furthermore, we restricted our experiments to systems where the momentum of the center of mass was zero. Even so, with eight degrees of freedom (given by the x and y position and momentum coordinates of the two bodies) this system represented an interesting challenge. 4.1 Methods Our first step was to generate a dataset of 1000 near-circular, two-body trajectories. We initialized every trajectory with center of mass zero, total momentum zero, and radius r = ‖q2 − q1‖ in the range [0.5, 1.5]. In order to control the level of numerical stability, we chose initial velocities that gave perfectly circular orbits and then added Gaussian noise to them. We found that scaling this noise by a factor of σ² = 0.05 produced trajectories with a good balance between stability and diversity. We used fourth-order Runge-Kutta integration to find 200 trajectories of 50 observations each and then performed an 80/20% train/test set split over trajectories. Our models and training procedure were identical to those described in Section 3 except this time we trained for 10,000 gradient steps and used a batch size of 200. 4.2 Results The HNN model scaled well to this system. The first row of Figure 3 suggests that it learned to conserve a quantity nearly equal to the total energy of the system whereas the baseline model did not. The second row of Figure 3 gives a qualitative comparison of trajectories. After one orbit, the baseline dynamics have completely diverged from the ground truth whereas the HNN dynamics have only accumulated a small amount of error. As we continue to integrate up to t = 50 and beyond (Figure B.1), both models diverge but the HNN does so at a much slower rate. Even as the HNN diverges from the ground truth orbit, its total energy remains stable rather than decaying to zero or spiraling to infinity. We report quantitative results for this task in Table 1. Both train and test losses of the HNN model were about an order of magnitude lower than those of the baseline. The HNN did a better job of conserving total energy, with an energy MSE that was several orders of magnitude below the baseline. Having achieved success on the two-body problem, we ran the same set of experiments on the chaotic three-body problem. We show preliminary results in Appendix B where once again the HNN outperforms its baseline by a considerable margin. We opted to focus on the two-body results here because the three-body results still need improvement. 5 Learning a Hamiltonian from Pixels One of the key strengths of neural networks is that they can learn abstract representations directly from high-dimensional data such as pixels or words. Having trained HNN models on position and momentum coordinates, we were eager to see whether we could train them on arbitrary coordinates like the latent vectors of an autoencoder. Task 5: Pixel Pendulum. With this in mind, we constructed a dataset of pixel observations of a pendulum and then combined an autoencoder with an HNN to model its dynamics.
To our knowledge, this is the first instance of a Hamiltonian learned directly from pixel data. 5.1 Methods In recent years, OpenAI Gym has been widely adopted by the machine learning community as a means for training and evaluating reinforcement learning agents [5]. Some works have even trained world models on these environments [15, 16]. Seeing these efforts as related and complementary to our work, we used OpenAI Gym's Pendulum-v0 environment in this experiment. First, we generated 200 trajectories of 100 frames each, choosing the “no torque” action at every timestep. We required that the maximum absolute displacement of the pendulum arm be π/6 radians. Starting from 400 x 400 x 3 RGB pixel observations, we cropped, desaturated, and downsampled them to 28 x 28 x 1 frames and concatenated each frame with its successor so that the input to our model was a tensor of shape batch x 28 x 28 x 2. We used two frames so that velocity would be observable from the input. Without the ability to observe velocity, an autoencoder without recurrence would be unable to ascertain the system's full state space. In designing the autoencoder portion of the model, our main objective was simplicity and trainability. We chose to use fully-connected layers in lieu of convolutional layers because they are simpler. Furthermore, convolutional layers sometimes struggle to extract even simple position information [23]. Both the encoder and decoder were composed of four fully-connected layers with ReLU activations and residual connections. We used 200 hidden units on all layers except the latent vector z, where we used two units. As for the HNN component of this model, we used the same architecture and parameters as described in Section 3. Unless otherwise specified, we used the same training procedure as described in Section 4.1. We found that using a small amount of weight decay, 10⁻⁵ in this case, was beneficial. Losses. The most notable difference between this experiment and the others was the loss function. This loss function was composed of three terms: the first being the HNN loss, the second being a classic autoencoder loss (L2 loss over pixels), and the third being an auxiliary loss on the autoencoder's latent space: L_CC = ( z_p^t − (z_q^{t+1} − z_q^t) )² (7) The purpose of the auxiliary loss term, given in Equation 7, was to make the second half of z, which we'll label z_p, resemble the derivatives of the first half of z, which we'll label z_q. This loss encouraged the latent vector (z_q, z_p) to have roughly the same properties as canonical coordinates (q, p). These properties, measured by the Poisson bracket relations, are necessary for writing a Hamiltonian. We found that the auxiliary loss did not degrade the autoencoder's performance. Furthermore, it is not domain-specific and can be used with any autoencoder with an even-sized latent space.
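The three-term objective just described can be sketched compactly. In the code below, the encoder, decoder, and H_theta are illustrative stand-ins (the paper uses deeper residual networks), the latent derivatives are finite differences between consecutive frames' codes, and the loss weights are assumed to be 1; only the overall structure, pixel reconstruction plus the HNN loss plus the Equation (7) term, is meant to be faithful.

```python
# Sketch of the pixel-pendulum objective: reconstruction + HNN loss on (z_q, z_p)
# + the auxiliary loss of Equation (7). Architectures and weights are placeholders.
import torch

enc = torch.nn.Sequential(torch.nn.Linear(28 * 28 * 2, 200), torch.nn.ReLU(), torch.nn.Linear(200, 2))
dec = torch.nn.Sequential(torch.nn.Linear(2, 200), torch.nn.ReLU(), torch.nn.Linear(200, 28 * 28 * 2))
H_net = torch.nn.Sequential(torch.nn.Linear(2, 200), torch.nn.Tanh(), torch.nn.Linear(200, 1))

def total_loss(x_t, x_next):
    z_t, z_next = enc(x_t), enc(x_next)               # (batch, 2) = (z_q, z_p)
    recon = ((dec(z_t) - x_t) ** 2).mean()            # autoencoder (pixel) loss

    dH = torch.autograd.grad(H_net(z_t).sum(), z_t, create_graph=True)[0]
    dz_dt = z_next - z_t                              # finite-difference latent derivatives
    hnn = ((dH[:, 1] - dz_dt[:, 0]) ** 2 + (dH[:, 0] + dz_dt[:, 1]) ** 2).mean()

    aux = ((z_t[:, 1] - (z_next[:, 0] - z_t[:, 0])) ** 2).mean()   # Equation (7)
    return recon + hnn + aux

x_t, x_next = torch.randn(8, 28 * 28 * 2), torch.randn(8, 28 * 28 * 2)  # placeholder frames
print(total_loss(x_t, x_next).item())
```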
5.2 Results Unlike the baseline model, the HNN learned to conserve a scalar quantity analogous to the total energy of the system. This enabled it to predict accurate dynamics for the system over much longer timespans. Figure 4 shows a qualitative comparison of trajectories predicted by the two models. As in previous experiments, we computed these dynamics using Equation 2 and a fourth-order Runge-Kutta integrator. Unlike previous experiments, we performed this integration in the latent space of the autoencoder. Then, after integration, we projected to pixel space using the decoder network. The HNN and its baseline reached comparable train and test losses, but once again, the HNN dramatically outperformed the baseline on the energy metric (Table 1). 6 Useful properties of HNNs While the main purpose of HNNs is to endow neural networks with better physics priors, in this section we ask what other useful properties these models might have. Adding and removing energy. So far, we have seen that integrating the symplectic gradient of the Hamiltonian can give us the time evolution of a system, but we have not tried following the Riemann gradient R_H = (∂H/∂q, ∂H/∂p). Intuitively, this corresponds to adding or removing some of the HNN-conserved quantity from the system. It's especially interesting to alternate between integrating R_H and S_H. Figure 5 shows how we can take advantage of this effect to “bump” the pendulum to a higher energy level. We could imagine using this technique to answer counterfactual questions, e.g., “What would have happened if we had applied a torque?” Perfect reversibility. As neural networks have grown in size, the memory consumption of transient activations, the intermediate activations saved for backpropagation, has become a notable bottleneck. Several works propose semireversible models that construct one layer's activations from the activations of the next [13, 25, 19]. Neural ODEs also have this property [7]. Many of these models are only approximately reversible: their mappings are not quite bijective. Unlike those methods, our approach is guaranteed to produce trajectories that are perfectly reversible through time. We can simply refer to a result from Hamiltonian mechanics called Liouville's Theorem: the density of particles in phase space is constant. What this implies is that any mapping (q0, p0) → (q1, p1) is bijective/invertible. 7 Related work Learning physical laws from data. Schmidt & Lipson [35] used a genetic algorithm to search a space of mathematical functions for conservation laws and recovered the Lagrangians and Hamiltonians of several real systems. We were inspired by their approach, but used a neural network to avoid constraining our search to a set of hand-picked functions. Two recent works are similar to this paper in that the authors sought to uncover physical laws from data using neural networks [18, 4]. Unlike our work, they did not explicitly parameterize Hamiltonians. Physics priors for neural networks. A wealth of previous works have sought to furnish neural networks with better physics priors. Many of these works are domain-specific: the authors used domain knowledge about molecular dynamics [31, 38, 8, 28], quantum mechanics [36], or robotics [24] to help their models train faster or generalize. Others, such as Interaction Networks or Relational Networks, were meant to be fully general [43, 34, 2]. Here, we also aimed to keep our approach fully general while introducing a strong and theoretically-motivated prior. Modeling energy surfaces. Physicists, particularly those studying molecular dynamics, have seen success using neural networks to model energy surfaces [3, 11, 36, 44]. In particular, several works have shown dramatic computation speedups compared to density functional theory [31, 38, 8]. Molecular dynamics researchers integrate the derivatives of energy in order to obtain dynamics, just as we did in this work.
A key difference between these approaches and our own is that 1) we emphasize the Hamiltonian formalism 2) we optimize the gradients of our model (though some works do optimize the gradients of a molecular dynamics model [42, 28]). 8 Discussion Whereas Hamiltonian mechanics is an old and well-established theory, the science of deep learning is still in its infancy. Whereas Hamiltonian mechanics describes the real world from first principles, deep learning does so starting from data. We believe that Hamiltonian Neural Networks, and models like them, represent a promising way of bringing together the strengths of both approaches. 9 Acknowledgements Sam Greydanus would like to thank the Google AI Residency Program for providing extraordinary mentorship and resources. The authors would like to thank Nic Ford, Trevor Gale, Rapha Gontijo Lopes, Keren Gu, Ben Caine, Mark Woodward, Stephan Hoyer, Jascha Sohl-Dickstein, and many others for insightful conversations and support. Special thanks to James and Judy Greydanus for their feedback and support from beginning to end.
1. What is the novel idea proposed by the paper in using Hamiltonian equations as loss functions of NNs? 2. What are the limitations of the proposed approach, particularly in its application to physical problems? 3. How convincing are the experimental results presented in the paper? 4. What is the potential of the proposed approach in solving real-world problems?
Review
Review As I mentioned, to my knowledge, the idea of using Hamiltonian equations as loss functions of NNs is new, interesting, and easy to follow. However, I am not convinced that it can be applied to a large set of physical problems. The major drawback is that the Hamiltonian equations should be known in advance by the designers of the model rather than learned from data. Another shortcoming is the triviality of the experimental results. As a matter of fact, I do not find much point in the presented toy tasks 1, 2, and even 3: the most the network can potentially learn is to estimate the noise parameter, since otherwise the provided prior knowledge is sufficient to solve these tasks (and therefore no neural net is needed). The last task is much more interesting because the NN learns to link the raw data (pixels) to the quantities for which the Hamiltonian is defined. However, even this task is in a sense too simple and does not convince me that such an approach can be applied to any real-world problem.
NIPS
Title Hamiltonian Neural Networks Abstract Even though neural networks enjoy widespread use, they still struggle to learn the basic laws of physics. How might we endow them with better inductive biases? In this paper, we draw inspiration from Hamiltonian mechanics to train models that learn and respect exact conservation laws in an unsupervised manner. We evaluate our models on problems where conservation of energy is important, including the two-body problem and pixel observations of a pendulum. Our model trains faster and generalizes better than a regular neural network. An interesting side effect is that our model is perfectly reversible in time. Figure 1 (panels: ideal mass-spring system; noisy observations; baseline NN prediction; Hamiltonian NN prediction): Learning the Hamiltonian of a mass-spring system. The variables q and p correspond to position and momentum coordinates. As there is no friction, the baseline's inner spiral is due to model errors. By comparison, the Hamiltonian Neural Network learns to exactly conserve a quantity that is analogous to total energy. 1 Introduction Neural networks have a remarkable ability to learn and generalize from data. This lets them excel at tasks such as image classification [21], reinforcement learning [45, 26, 37], and robotic dexterity [1, 22]. Even though these tasks are diverse, they all share the same underlying physical laws. For example, a notion of gravity is important for reasoning about objects in an image, training an RL agent to walk, or directing a robot to manipulate objects. Based on this observation, researchers have become increasingly interested in finding physics priors that transfer across tasks [43, 34, 17, 10, 6, 40]. Untrained neural networks do not have physics priors; they learn approximate physics knowledge directly from data. This generally prevents them from learning exact physical laws. Consider the frictionless mass-spring system shown in Figure 1. Here the total energy of the system is being conserved. More specifically, this particular system conserves a quantity proportional to q² + p², where q is the position and p is the momentum of the mass. The baseline neural network in Figure 1 learns an approximation of this conservation law, and yet the approximation is imperfect enough that a forward simulation of the system drifts over time to higher or lower energy states. Can we define a class of neural networks that will precisely conserve energy-like quantities over time? In this paper, we draw inspiration from Hamiltonian mechanics, a branch of physics concerned with conservation laws and invariances, to define Hamiltonian Neural Networks, or HNNs. We begin with an equation called the Hamiltonian, which relates the state of a system to some conserved quantity (usually energy) and lets us simulate how the system changes with time. Physicists generally use domain-specific knowledge to find this equation, but here we try a different approach: Instead of crafting the Hamiltonian by hand, we propose parameterizing it with a neural network and then learning it directly from data. Since almost all physical laws can be expressed as conservation laws, our approach is quite general [27]. In practice, our model trains quickly and generalizes well (we make our code available at github.com/greydanus/hamiltonian-nn). Figure 1, for example, shows the outcome of training an HNN on the same mass-spring system. Unlike the baseline model, it learns to conserve an energy-like quantity. 2 Theory Predicting dynamics.
The hallmark of a good physics model is its ability to predict changes in a system over time. This is the challenge we now turn to. In particular, our goal is to learn the dynamics of a system using a neural network. The simplest way of doing this is by predicting the next state of a system given the current one. A variety of previous works have taken this path and produced excellent results [41, 14, 43, 34, 17, 6]. There are, however, a few problems with this approach. The first problem is its notion of discrete "time steps" that connect neighboring states. Since time is actually continuous, a better approach would be to express dynamics as a set of differential equations and then integrate them from an initial state at t0 to a final state at t1. Equation 1 shows how this might be done, letting S denote the time derivatives of the coordinates of the system (any coordinates that describe the state of the system; later we will use position and momentum (q, p)). This approach has been under-explored so far, but techniques like Neural ODEs take a step in the right direction [7]. (q1, p1) = (q0, p0) + ∫_{t0}^{t1} S(q, p) dt (1) The second problem with existing methods is that they tend not to learn exact conservation laws or invariant quantities. This often causes them to drift away from the true dynamics of the system as small errors accumulate. The HNN model that we propose ameliorates both of these problems. To see how it does this — and to situate our work in the proper context — we first briefly review Hamiltonian mechanics. Hamiltonian Mechanics. William Hamilton introduced Hamiltonian mechanics in the 19th century as a mathematical reformulation of classical mechanics. Its original purpose was to express classical mechanics in a more unified and general manner. Over time, though, scientists have applied it to nearly every area of physics from thermodynamics to quantum field theory [29, 32, 39]. In Hamiltonian mechanics, we begin with a set of coordinates (q, p). Usually, q = (q1, ..., qN) represents the positions of a set of objects whereas p = (p1, ..., pN) denotes their momentum. Note how this gives us N coordinate pairs (q1, p1)...(qN, pN). Taken together, they offer a complete description of the system. Next, we define a scalar function H(q, p), called the Hamiltonian, so that dq/dt = ∂H/∂p, dp/dt = −∂H/∂q. (2) Equation 2 tells us that moving coordinates in the direction S_H = (∂H/∂p, −∂H/∂q) gives us the time evolution of the system. We can think of S as a vector field over the inputs of H. In fact, it is a special kind of vector field called a "symplectic gradient". Whereas moving in the direction of the gradient of H changes the output as quickly as possible, moving in the direction of the symplectic gradient keeps the output exactly constant. Hamilton used this mathematical framework to relate the position and momentum vectors (q, p) of a system to its total energy Etot = H(q, p). Then, he found S_H using Equation 2 and obtained the dynamics of the system by integrating this field according to Equation 1. This is a powerful approach because it works for almost any system where the total energy is conserved. Hamiltonian mechanics, like Newtonian mechanics, can predict the motion of a mass-spring system or a single pendulum. But its true strengths only become apparent when we tackle systems with many degrees of freedom. Celestial mechanics, which is chaotic for more than two bodies, is a good example.
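As a sanity check on the formalism above, the short script below integrates the symplectic gradient of a known Hamiltonian, namely the frictionless mass-spring system of Figure 1 with H = (q² + p²)/2 (taking k = m = 1), using scipy's Runge-Kutta integrator. The time span, tolerance, and initial state are arbitrary illustrative choices, not values from the paper.

```python
# A minimal sketch: dynamics from Equation 1 by integrating the symplectic
# gradient S_H = (dH/dp, -dH/dq) of a known mass-spring Hamiltonian.
import numpy as np
from scipy.integrate import solve_ivp

def hamiltonian(q, p):
    return 0.5 * (q ** 2 + p ** 2)          # H = (q^2 + p^2) / 2 with k = m = 1

def symplectic_field(t, y):
    q, p = y
    dH_dq, dH_dp = q, p                      # analytic partial derivatives of H
    return [dH_dp, -dH_dq]                   # (dq/dt, dp/dt) from Equation 2

sol = solve_ivp(symplectic_field, (0.0, 20.0), [1.0, 0.0],
                rtol=1e-9, t_eval=np.linspace(0.0, 20.0, 200))
energy = hamiltonian(sol.y[0], sol.y[1])
print(energy.max() - energy.min())           # ~0: H is conserved along the trajectory
```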
A few other examples include many-body quantum systems, fluid simulations, and condensed matter physics [29, 32, 39, 33, 9, 12]. Hamiltonian Neural Networks. In this paper, we propose learning a parametric function for H instead of S_H. In doing so, we endow our model with the ability to learn exactly conserved quantities from data in an unsupervised manner. During the forward pass, it consumes a set of coordinates and outputs a single scalar "energy-like" value. Then, before computing the loss, we take an in-graph gradient of the output with respect to the input coordinates (Figure A.1). It is with respect to this gradient that we compute and optimize an L2 loss (Equation 3). L_HNN = ‖∂H_θ/∂p − ∂q/∂t‖² + ‖∂H_θ/∂q + ∂p/∂t‖² (3) For a visual comparison between this approach and the baseline, refer to Figure 1 or Figure 1(b). This training procedure allows HNNs to learn conserved quantities analogous to total energy straight from data. Apart from conservation laws, HNNs have several other interesting and potentially useful properties. First, they are perfectly reversible in that the mapping from (q, p) at one time to (q, p) at another time is bijective. Second, we can manipulate the HNN-conserved quantity (analogous to total energy) by integrating along the gradient of H, giving us an interesting counterfactual tool (e.g. "What would happen if we added 1 Joule of energy?"). We'll discuss these properties later in Section 6. 3 Learning a Hamiltonian from Data Optimizing the gradients of a neural network is a rare approach. There are a few previous works which do this [42, 35, 28], but their scope and implementation details diverge from this work and from one another. With this in mind, our first step was to investigate the empirical properties of HNNs on three simple physics tasks. Task 1: Ideal Mass-Spring. Our first task was to model the dynamics of the frictionless mass-spring system shown in Figure 1. The system's Hamiltonian is given in Equation 4, where k is the spring constant and m is the mass constant. For simplicity, we set k = m = 1. Then we sampled initial coordinates with total energies uniformly distributed between [0.2, 1]. We constructed training and test sets of 25 trajectories each and added Gaussian noise with standard deviation σ² = 0.1 to every data point. Each trajectory had 30 observations; each observation was a concatenation of (q, p). H = ½ kq² + p²/(2m) (4) Task 2: Ideal Pendulum. Our second task was to model a frictionless pendulum. Pendulums are nonlinear oscillators so they present a slightly more difficult problem. Writing the gravitational constant as g and the length of the pendulum as l, the general Hamiltonian is H = 2mgl(1 − cos q) + l²p²/(2m) (5) Once again we set m = l = 1 for simplicity. This time, we set g = 3 and sampled initial coordinates with total energies in the range [1.3, 2.3]. We chose these numbers in order to situate the dataset along the system's transition from linear to nonlinear dynamics. As with Task 1, we constructed training and test sets of 25 trajectories each and added the same amount of noise. Task 3: Real Pendulum. Our third task featured the position and momentum readings from a real pendulum. We used data from a Science paper by Schmidt & Lipson [35] which also tackled the problem of learning conservation laws from data. This dataset was noisier than the synthetic ones and it did not strictly obey any conservation laws since the real pendulum had a small amount of friction.
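The loss in Equation 3 is straightforward to express with automatic differentiation. The PyTorch sketch below is an assumption about how such a model could be set up (the released implementation may differ); the architecture follows the three-layer, 200-unit, tanh description given later in Section 3.1, and the synthetic mass-spring targets exist only to make the example runnable.

```python
# A minimal sketch of the HNN objective (Equation 3): the network outputs a
# scalar H_theta(q, p), and the loss is taken on its in-graph gradient rather
# than on the output itself.
import torch
import torch.nn as nn

hnn = nn.Sequential(nn.Linear(2, 200), nn.Tanh(),
                    nn.Linear(200, 200), nn.Tanh(),
                    nn.Linear(200, 1))

def hnn_loss(coords, dcoords_dt):
    """coords: (batch, 2) of (q, p); dcoords_dt: (batch, 2) target (dq/dt, dp/dt)."""
    coords = coords.requires_grad_(True)
    H = hnn(coords).sum()
    dH_dq, dH_dp = torch.autograd.grad(H, coords, create_graph=True)[0].split(1, dim=1)
    dq_dt, dp_dt = dcoords_dt.split(1, dim=1)
    # || dH/dp - dq/dt ||^2 + || dH/dq + dp/dt ||^2  (Equation 3)
    return ((dH_dp - dq_dt) ** 2).mean() + ((dH_dq + dp_dt) ** 2).mean()

# Example: one gradient step on synthetic mass-spring data, where H = (q^2 + p^2)/2
# implies (dq/dt, dp/dt) = (p, -q).
coords = torch.randn(32, 2)
targets = torch.stack([coords[:, 1], -coords[:, 0]], dim=1)
opt = torch.optim.Adam(hnn.parameters(), lr=1e-3)
opt.zero_grad(); hnn_loss(coords, targets).backward(); opt.step()
```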
Our goal here was to examine how HNNs fared on noisy and biased real-world data. 3.1 Methods In all three tasks, we trained our models with a learning rate of 10⁻³ and used the Adam optimizer [20]. Since the training sets were small, we set the batch size to be the total number of examples. On each dataset we trained two fully-connected neural networks: the first was a baseline model that, given a vector input (q, p), output the vector (∂q/∂t, ∂p/∂t) directly. The second was an HNN that estimated the same vector using the derivative of a scalar quantity as shown in Equation 2 (also see Figure A.1). Where possible, we used analytic time derivatives as the targets. Otherwise, we calculated finite difference approximations. All of our models had three layers, 200 hidden units, and tanh activations. We trained them for 2000 gradient steps and evaluated them on the test set. We logged three metrics: L2 train loss, L2 test loss, and mean squared error (MSE) between the true and predicted total energies. To determine the energy metric, we integrated our models according to Equation 1 starting from a random test point. Then we used MSE to measure how much a given model's dynamics diverged from the ground truth. Intuitively, the loss metrics measure our model's ability to fit individual data points while the energy metric measures its stability and conservation of energy over long timespans. To obtain dynamics, we integrated our models with the fourth-order Runge-Kutta integrator in scipy.integrate.solve_ivp and set the error tolerance to 10⁻⁹ [30]. 3.2 Results We found that HNNs train as quickly as baseline models and converge to similar final losses. Table 1 shows their relative performance over the three tasks. But even as HNNs tied with the baseline on loss, they dramatically outperformed it on the MSE energy metric. Figure 2 shows why this is the case: as we integrate the two models over time, various errors accumulate in the baseline and it eventually diverges. Meanwhile, the HNN conserves a quantity that closely resembles total energy and diverges more slowly or not at all. It's worth noting that the quantity conserved by the HNN is not equivalent to the total energy; rather, it's something very close to the total energy. The third and fourth columns of Figure 2 provide a useful comparison between the HNN-conserved quantity and the total energy. Looking closely at the spacing of the y axes, one can see that the HNN-conserved quantity has the same scale as total energy, but differs by a constant factor. Since energy is a relative quantity, this is perfectly acceptable (to see why energy is relative, imagine a cat that is at an elevation of 0 m in one reference frame and 1 m in another; its potential energy, and total energy, will differ by a constant factor depending on the frame of reference). The total energy plot for the real pendulum shows another interesting pattern. Whereas the ground truth data does not quite conserve total energy, the HNN roughly conserves this quantity. This, in fact, is a fundamental limitation of HNNs: they assume a conserved quantity exists and thus are unable to account for things that violate this assumption, such as friction. In order to account for friction, we would need to model it separately from the HNN. 4 Modeling Larger Systems Having established baselines on a few simple tasks, our next step was to tackle a larger system involving more than one pair of (p, q) coordinates. One well-studied problem that fits this description is the two-body problem, which requires four (p, q) pairs. H = |p_CM|²/(m1 + m2) + (|p1|² + |p2|²)/(2µ) − g m1 m2 / |q1 − q2|² (6) Task 4: Two-body problem. In the two-body problem, point particles interact with one another via an attractive force such as gravity.
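To make Equation 6 concrete, here is a small sketch of that Hamiltonian and of the symplectic vector field it induces over the eight phase-space coordinates, written with PyTorch autograd. The coordinate layout and function names are assumptions, and the separation term follows the equation as printed above (with the minus sign restored for an attractive force).

```python
# A sketch of the two-body Hamiltonian (Equation 6, as reconstructed above) and
# the symplectic field (dq/dt, dp/dt) = (dH/dp, -dH/dq) obtained via autograd.
import torch

g, m1, m2 = 1.0, 1.0, 1.0
mu = m1 * m2 / (m1 + m2)                                   # reduced mass

def two_body_H(coords):
    """coords: (8,) tensor = (q1x, q1y, q2x, q2y, p1x, p1y, p2x, p2y)."""
    q1, q2, p1, p2 = coords[0:2], coords[2:4], coords[4:6], coords[6:8]
    p_cm = p1 + p2
    kinetic = (p_cm ** 2).sum() / (m1 + m2) + ((p1 ** 2).sum() + (p2 ** 2).sum()) / (2 * mu)
    potential = -g * m1 * m2 / ((q1 - q2) ** 2).sum()       # |q1 - q2|^2, as printed
    return kinetic + potential

def field(coords):
    coords = coords.clone().requires_grad_(True)
    dH = torch.autograd.grad(two_body_H(coords), coords)[0]
    dH_dq, dH_dp = dH[:4], dH[4:]
    return torch.cat([dH_dp, -dH_dq])                       # (dq/dt, dp/dt)

print(field(torch.tensor([1., 0., -1., 0., 0., 0.4, 0., -0.4])))
```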
Once again, we let g be the gravitational constant and m represent mass. Equation 6 gives the Hamiltonian of the system where µ is the reduced mass and p_CM is the momentum of the center of mass. As in previous tasks, we set m1 = m2 = g = 1 for simplicity. Furthermore, we restricted our experiments to systems where the momentum of the center of mass was zero. Even so, with eight degrees of freedom (given by the x and y position and momentum coordinates of the two bodies), this system represented an interesting challenge. 4.1 Methods Our first step was to generate a dataset of 1000 near-circular, two-body trajectories. We initialized every trajectory with center of mass zero, total momentum zero, and radius r = ‖q2 − q1‖ in the range [0.5, 1.5]. In order to control the level of numerical stability, we chose initial velocities that gave perfectly circular orbits and then added Gaussian noise to them. We found that scaling this noise by a factor of σ² = 0.05 produced trajectories with a good balance between stability and diversity. We used fourth-order Runge-Kutta integration to find 200 trajectories of 50 observations each and then performed an 80/20% train/test set split over trajectories. Our models and training procedure were identical to those described in Section 3 except this time we trained for 10,000 gradient steps and used a batch size of 200. 4.2 Results The HNN model scaled well to this system. The first row of Figure 3 suggests that it learned to conserve a quantity nearly equal to the total energy of the system whereas the baseline model did not. The second row of Figure 3 gives a qualitative comparison of trajectories. After one orbit, the baseline dynamics have completely diverged from the ground truth whereas the HNN dynamics have only accumulated a small amount of error. As we continue to integrate up to t = 50 and beyond (Figure B.1), both models diverge but the HNN does so at a much slower rate. Even as the HNN diverges from the ground truth orbit, its total energy remains stable rather than decaying to zero or spiraling to infinity. We report quantitative results for this task in Table 1. Both train and test losses of the HNN model were about an order of magnitude lower than those of the baseline. The HNN did a better job of conserving total energy, with an energy MSE that was several orders of magnitude below the baseline. Having achieved success on the two-body problem, we ran the same set of experiments on the chaotic three-body problem. We show preliminary results in Appendix B where once again the HNN outperforms its baseline by a considerable margin. We opted to focus on the two-body results here because the three-body results still need improvement. 5 Learning a Hamiltonian from Pixels One of the key strengths of neural networks is that they can learn abstract representations directly from high-dimensional data such as pixels or words. Having trained HNN models on position and momentum coordinates, we were eager to see whether we could train them on arbitrary coordinates like the latent vectors of an autoencoder. Task 5: Pixel Pendulum. With this in mind, we constructed a dataset of pixel observations of a pendulum and then combined an autoencoder with an HNN to model its dynamics.
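Before the detailed setup in the next section, the sketch below illustrates the kind of frame preprocessing such a pixel dataset requires (desaturate, crop, block-average down to 28 x 28, and stack two consecutive frames). The crop box, the block-averaging scheme, and the function names are assumptions for illustration only.

```python
# A rough preprocessing sketch for pixel-pendulum frames (assumed crop box and
# downsampling scheme; the paper only specifies the input/output shapes).
import numpy as np

def preprocess(frame_rgb):
    """frame_rgb: (400, 400, 3) uint8 array -> (28, 28) float array."""
    gray = frame_rgb.mean(axis=2) / 255.0                   # desaturate to one channel
    crop = gray[102:298, 102:298]                           # 196 x 196 crop (assumed)
    return crop.reshape(28, 7, 28, 7).mean(axis=(1, 3))     # block-average to 28 x 28

def make_input(frame_t, frame_tp1):
    """Stack two consecutive frames so velocity is observable: (28, 28, 2)."""
    return np.stack([preprocess(frame_t), preprocess(frame_tp1)], axis=-1)

x = make_input(np.zeros((400, 400, 3), dtype=np.uint8),
               np.zeros((400, 400, 3), dtype=np.uint8))
print(x.shape)   # (28, 28, 2)
```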
To our knowledge this is the first instance of a Hamiltonian learned directly from pixel data. 5.1 Methods In recent years, OpenAI Gym has been widely adopted by the machine learning community as a means for training and evaluating reinforcement learning agents [5]. Some works have even trained world models on these environments [15, 16]. Seeing these efforts as related and complementary to our work, we used OpenAI Gym's Pendulum-v0 environment in this experiment. First, we generated 200 trajectories of 100 frames each (choosing the "no torque" action at every timestep). We required that the maximum absolute displacement of the pendulum arm be π/6 radians. Starting from 400 x 400 x 3 RGB pixel observations, we cropped, desaturated, and downsampled them to 28 x 28 x 1 frames and concatenated each frame with its successor so that the input to our model was a tensor of shape batch x 28 x 28 x 2. We used two frames so that velocity would be observable from the input. Without the ability to observe velocity, an autoencoder without recurrence would be unable to ascertain the system's full state space. In designing the autoencoder portion of the model, our main objective was simplicity and trainability. We chose to use fully-connected layers in lieu of convolutional layers because they are simpler. Furthermore, convolutional layers sometimes struggle to extract even simple position information [23]. Both the encoder and decoder were composed of four fully-connected layers with relu activations and residual connections. We used 200 hidden units on all layers except the latent vector z, where we used two units. As for the HNN component of this model, we used the same architecture and parameters as described in Section 3. Unless otherwise specified, we used the same training procedure as described in Section 4.1. We found that using a small amount of weight decay, 10⁻⁵ in this case, was beneficial. Losses. The most notable difference between this experiment and the others was the loss function. This loss function was composed of three terms: the first being the HNN loss, the second being a classic autoencoder loss (L2 loss over pixels), and the third being an auxiliary loss on the autoencoder's latent space: L_CC = ‖z_p^t − (z_q^{t+1} − z_q^t)‖² (7) The purpose of the auxiliary loss term, given in Equation 7, was to make the second half of z, which we'll label z_p, resemble the derivatives of the first half of z, which we'll label z_q. This loss encouraged the latent vector (z_q, z_p) to have roughly the same properties as canonical coordinates (q, p). These properties, measured by the Poisson bracket relations, are necessary for writing a Hamiltonian. We found that the auxiliary loss did not degrade the autoencoder's performance. Furthermore, it is not domain-specific and can be used with any autoencoder with an even-sized latent space. 5.2 Results Unlike the baseline model, the HNN learned to conserve a scalar quantity analogous to the total energy of the system. This enabled it to predict accurate dynamics for the system over much longer timespans. Figure 4 shows a qualitative comparison of trajectories predicted by the two models. As in previous experiments, we computed these dynamics using Equation 2 and a fourth-order Runge-Kutta integrator. Unlike previous experiments, we performed this integration in the latent space of the autoencoder. Then, after integration, we projected to pixel space using the decoder network.
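To complement the autoencoder description in Section 5.1, here is one way the encoder could be written in PyTorch. The exact residual wiring is an assumption (the text only specifies four fully-connected layers with relu activations, residual connections, 200 hidden units, and a two-unit latent vector); the decoder would mirror this structure back to pixel space.

```python
# A sketch of a four-layer fully-connected encoder with residual connections,
# mapping a pair of stacked 28x28 frames to a two-dimensional latent (z_q, z_p).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim=28 * 28 * 2, hidden=200, latent=2):
        super().__init__()
        self.inp = nn.Linear(in_dim, hidden)
        self.h1 = nn.Linear(hidden, hidden)
        self.h2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, latent)

    def forward(self, x):
        h = torch.relu(self.inp(x))
        h = h + torch.relu(self.h1(h))      # residual connections on hidden layers
        h = h + torch.relu(self.h2(h))
        return self.out(h)

z = Encoder()(torch.randn(8, 28 * 28 * 2))  # -> shape (8, 2)
print(z.shape)
```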
The HNN and its baseline reached comparable train and test losses, but once again, the HNN dramatically outperformed the baseline on the energy metric (Table 1). 6 Useful properties of HNNs While the main purpose of HNNs is to endow neural networks with better physics priors, in this section we ask what other useful properties these models might have. Adding and removing energy. So far, we have seen that integrating the symplectic gradient of the Hamiltonian can give us the time evolution of a system, but we have not tried following the Riemann gradient R_H = (∂H/∂q, ∂H/∂p). Intuitively, this corresponds to adding or removing some of the HNN-conserved quantity from the system. It's especially interesting to alternate between integrating R_H and S_H. Figure 5 shows how we can take advantage of this effect to "bump" the pendulum to a higher energy level. We could imagine using this technique to answer counterfactual questions, e.g. "What would have happened if we applied a torque?" Perfect reversibility. As neural networks have grown in size, the memory consumption of transient activations, the intermediate activations saved for backpropagation, has become a notable bottleneck. Several works propose semi-reversible models that construct one layer's activations from the activations of the next [13, 25, 19]. Neural ODEs also have this property [7]. Many of these models are only approximately reversible: their mappings are not quite bijective. Unlike those methods, our approach is guaranteed to produce trajectories that are perfectly reversible through time. We can simply refer to a result from Hamiltonian mechanics called Liouville's Theorem: the density of particles in phase space is constant. What this implies is that any mapping (q0, p0) → (q1, p1) is bijective/invertible. 7 Related work Learning physical laws from data. Schmidt & Lipson [35] used a genetic algorithm to search a space of mathematical functions for conservation laws and recovered the Lagrangians and Hamiltonians of several real systems. We were inspired by their approach, but used a neural network to avoid constraining our search to a set of hand-picked functions. Two recent works are similar to this paper in that the authors sought to uncover physical laws from data using neural networks [18, 4]. Unlike our work, they did not explicitly parameterize Hamiltonians. Physics priors for neural networks. A wealth of previous works have sought to furnish neural networks with better physics priors. Many of these works are domain-specific: the authors used domain knowledge about molecular dynamics [31, 38, 8, 28], quantum mechanics [36], or robotics [24] to help their models train faster or generalize. Others, such as Interaction Networks or Relational Networks, were meant to be fully general [43, 34, 2]. Here, we also aimed to keep our approach fully general while introducing a strong and theoretically-motivated prior. Modeling energy surfaces. Physicists, particularly those studying molecular dynamics, have seen success using neural networks to model energy surfaces [3, 11, 36, 44]. In particular, several works have shown dramatic computation speedups compared to density functional theory [31, 38, 8]. Molecular dynamics researchers integrate the derivatives of energy in order to obtain dynamics, just as we did in this work.
A key difference between these approaches and our own is that 1) we emphasize the Hamiltonian formalism 2) we optimize the gradients of our model (though some works do optimize the gradients of a molecular dynamics model [42, 28]). 8 Discussion Whereas Hamiltonian mechanics is an old and well-established theory, the science of deep learning is still in its infancy. Whereas Hamiltonian mechanics describes the real world from first principles, deep learning does so starting from data. We believe that Hamiltonian Neural Networks, and models like them, represent a promising way of bringing together the strengths of both approaches. 9 Acknowledgements Sam Greydanus would like to thank the Google AI Residency Program for providing extraordinary mentorship and resources. The authors would like to thank Nic Ford, Trevor Gale, Rapha Gontijo Lopes, Keren Gu, Ben Caine, Mark Woodward, Stephan Hoyer, Jascha Sohl-Dickstein, and many others for insightful conversations and support. Special thanks to James and Judy Greydanus for their feedback and support from beginning to end.
1. What is the focus of the paper regarding neural networks and conservation laws? 2. What are the strengths and weaknesses of the proposed approach compared to prior works in the domain? 3. How does the reviewer assess the novelty and applicability of the paper's content? 4. Do you have any questions or concerns regarding the reversibility property and back-propagation efficiency? 5. Can you provide examples or comparisons to illustrate the advantages and limitations of the proposed method?
Review
Review This paper is very well written, nicely motivated, and introduces a general principle for designing neural networks for data with conservation laws using Hamiltonian mechanics. Contrary to what the authors state, including energy conservation into neural networks and optimizing their gradients is now common practice in this domain, for example: - Pukrittayakamee et al. Journal of Chemical Physics, 130(13). 2009 - Behler. Physical Chemistry Chemical Physics, 13(40). 2011 - Gastegger. Journal of Chemical Theory and Computation, 11(5), 2187-2198. 2015 - Schuett et al, NeurIPS 30 / Journal of Chemical Physics 148(24). 2017 - Yao et al., Chemical science 9(8). 2018 The proposed approach constitutes a generalization of neural networks for high-dimensional potential energy surfaces by including the momentum in the input to arrive at the Hamiltonian formalism. For classical systems, as presented in this paper, it seems that this addition is rather counter-productive: while the change of momentum is described by the potential (see references above), the change of positions directly follows from the equations of motion and does not require an additional derivative of the network. This is both more computationally efficient and generalizes by design to all initial momenta (provided the corresponding positions stay close to the training manifold). On the other hand, I am not convinced that the proposed architecture would still work when applying a trained model to a different energy level. The mentioned property of reversibility is not convincing, as it is not described how this property can help to make back-propagation more efficient. Reversing the trajectory only allows one to discard the intermediate time steps, which are already present in the training set. For the network itself, one still needs to use the full back-propagation of each time step to get the gradient w.r.t. the parameters. Beyond that, the mentioned reversibility is given for all predictions that are by design conservative force fields, such as the earlier mentioned neural network potentials. A strong point of the paper is the application to the pixel pendulum. This could inspire new and creative applications of neural networks that encode conservation laws. I would still guess that similar results can be obtained from a pure potential approach without momentum inputs. It would be interesting to see future, more general applications where this is no longer possible. In the author feedback, my main issue with the paper has not been addressed sufficiently. The Hamiltonian formalism is more general than predicting just the potential energy + derivatives (which amounts to energy conservation from classical mechanics), but this is not taken advantage of. The authors' response, that potential energy approaches need potential energy labels, is not correct: they can also be trained solely on derivatives (e.g., see the JCP reference above). A concrete example: a consequence of the more general Hamiltonian formalism is that the network would need to be retrained for a pendulum with a different total energy, since the impulse at the same pendulum position would be different. In contrast, a network using potential energy derivatives + equations of motion can handle different energy levels, since it does not depend on the impulse and takes only positions. Apart from this, the approach is a valuable contribution with great potential in areas other than classical mechanics.
NIPS
1. What is the main contribution of the paper in terms of modeling dynamic systems? 2. How does the paper differ from other concurrent works in the same area? 3. What is the significance of using neural networks to model Hamiltonian mechanics? 4. How does the proposed approach impact the understanding of dynamic systems in physics? 5. Can you provide examples or applications where the proposed methodology could be particularly useful?
Review
Review 1. Originality: To the best of my knowledge, modeling the Hamiltonian of a dynamical system using a neural network is novel. Though there are concurrent works with a similar theme, this paper is, from my point of view, the clearest and most thorough one among them. The related works are well cited. 2. Quality: This paper is technically sound. The work is self-contained and does a good job of proving the concept it introduces. The evaluation is thorough, if a bit simple, but is powerful enough to prove the concept of this paper. 3. Clarity: This paper is well written. It gives a gentle introduction to Hamiltonian mechanics for readers who may not have the proper background. 4. Significance: This work provides a novel, concrete, and practical methodology for learning dynamical systems in a physically grounded fashion. Moreover, this modeling strategy has great potential since it may not rely on the exact coordinate frame in which the system is defined, as Hamiltonians can be defined on any valid generalized frame that suits the constraints. This direction definitely deserves more thought and effort and should be of significance to the community.
NIPS
Title FedSplit: an algorithmic framework for fast federated optimization Abstract Motivated by federated learning, we consider the hub-and-spoke model of distributed optimization in which a central authority coordinates the computation of a solution among many agents while limiting communication. We first study some past procedures for federated optimization, and show that their fixed points need not correspond to stationary points of the original optimization problem, even in simple convex settings with deterministic updates. In order to remedy these issues, we introduce FedSplit, a class of algorithms based on operator splitting procedures for solving distributed convex minimization with additive structure. We prove that these procedures have the correct fixed points, corresponding to optima of the original optimization problem, and we characterize their convergence rates under different settings. Our theory shows that these methods are provably robust to inexact computation of intermediate local quantities. We complement our theory with some experiments that demonstrate the benefits of our methods in practice.

1 Introduction

Federated learning is a rapidly evolving application of distributed optimization for learning problems in large-scale networks of remote clients [13]. These systems present new challenges, as they are characterized by heterogeneity in computational resources and data across a large, multi-agent network, unreliable communication, and privacy constraints due to sensitive client data [15]. Although distributed optimization has a rich history and extensive literature (e.g., see the sources [2, 4, 8, 28, 14, 23] and references therein), renewed interest due to federated learning has led to a flurry of recent work in the area. Notably, McMahan et al. [17] introduced the FedSGD and FedAvg algorithms, by adapting the classical stochastic gradient method to the federated setting, considering the possibility that clients may fail and may only be subsampled on each round of computation. Another recent proposal, FedProx [16], attempted to mitigate potential device heterogeneity issues by applying averaged proximal updates to solve federated minimization problems. Currently, a general convergence theory of these methods is lacking. Moreover, practitioners have documented failures of convergence in certain settings (e.g., see Figure 3 and related discussion in the work [17]).

Our contributions: The first contribution of this paper is to analyze some past procedures, and show that even in the favorable setting of deterministic updates (i.e., no stochastic approximation used), these methods typically fail to preserve solutions of the original optimization problem as fixed points. More precisely, even when these methods do converge, the resulting fixed point need not correspond to an optimal solution of the desired federated learning problem. Since the stochastic variants implemented in practice are approximate versions of the underlying deterministic procedures, this implies these methods also fail to preserve the correct fixed points in general. With the motivation of rectifying this undesirable feature, our second contribution is to introduce a family of federated optimization algorithms, which we call FedSplit, that do preserve the correct fixed points for distributed optimization problems of the form
$$\text{minimize } F(x) := \sum_{j=1}^{m} f_j(x), \qquad (1)$$
where $f_j : \mathbb{R}^d \to \mathbb{R}$
are the clients’ cost functions for the variable $x \in \mathbb{R}^d$. In machine learning applications, the vector $x \in \mathbb{R}^d$ is a parameter of a statistical model. Our procedure and analysis build on a long line of work relating optimization with monotone operators and operator splitting techniques [4, 26, 7, 1]. In this paper, we focus on the case when the $f_j$ are convex functions with Lipschitz continuous gradient [24].

2 Existing algorithms and their fixed points

We focus our discussion on deterministic analogues of two recently proposed procedures—namely, FedSGD [17] and FedProx [16]. For analysis, it is useful to introduce the equivalent, consensus reformulation [4] of the distributed problem (1):
$$\text{minimize } F(\mathbf{x}) := \sum_{j=1}^{m} f_j(x_j) \quad \text{subject to } x_1 = x_2 = \cdots = x_m. \qquad (2)$$

2.1 Federated gradient algorithms

The recently proposed FedSGD method [17] is based on a multi-step projected stochastic gradient method for solving the consensus problem. For our analysis we consider the obvious deterministic version of this algorithm, which replaces the stochastic gradient by the full gradient. Formally, given a stepsize $s > 0$, define the gradient mappings
$$G_j(x) := x - s \nabla f_j(x) \quad \text{for } j = 1, \ldots, m. \qquad (3)$$
For a given integer $e \geq 1$, we define $G_j^e$ as the $e$-fold composition of $G_j$, and $G_j^0$ as the identity operator on $\mathbb{R}^d$. The FedGD(s, e) algorithm from initialization $x^{(1)}$ obeys the recursion, for $t = 1, 2, \ldots$:
$$x_j^{(t+1/2)} := G_j^e\big(x_j^{(t)}\big), \quad \text{for } j \in [m] := \{1, 2, \ldots, m\}, \text{ and} \qquad (4a)$$
$$x_j^{(t+1)} := \bar{x}^{(t+1/2)}, \quad \text{for } j \in [m]. \qquad (4b)$$
Recall that $\bar{x}^{(t+1/2)} = \frac{1}{m}\sum_{j=1}^{m} x_j^{(t+1/2)}$ is the block average. The following result characterizes the fixed points of this procedure.

Proposition 1. For any $s > 0$ and $e \geq 1$, the sequence $\{\mathbf{x}^{(t)}\}_{t=1}^{\infty}$ generated by the FedGD(s, e) algorithm in equation (4) has the following properties: (a) if $\mathbf{x}^{(t)}$ is convergent, then the local variables $x_j^{(t)}$ share a common limit $x^\star$ such that $x_j^{(t)} \to x^\star$ as $t \to \infty$ for $j \in [m]$; (b) any such limit $x^\star$ satisfies the fixed point relation
$$\sum_{i=1}^{e}\sum_{j=1}^{m} \nabla f_j\big(G_j^{i-1}(x^\star)\big) = 0. \qquad (5)$$
The proof of this claim, as well as all other claims in the paper, are deferred to Appendix A of the supplement. Unpacking this claim slightly, suppose first that a single update is performed between communications, so $e = 1$. In this case, we have $\sum_{i=1}^{e} \nabla f_j(G_j^{i-1}(x^\star)) = \nabla f_j(x^\star)$, so that if $\mathbf{x}^{(t)}$ has a limit $\mathbf{x}$, it satisfies the relations $x_1 = x_2 = \cdots = x_m$ and $\sum_{j=1}^{m} \nabla f_j(x_j) = 0$. Consequently, provided that the losses $f_j$ are convex, Proposition 1 implies that the limit of the sequence $\mathbf{x}^{(t)}$, when it exists, is a minimizer of the consensus problem (2). On the other hand, when $e > 1$, a limit of the iterate sequence $\mathbf{x}^{(t)}$ must satisfy equation (5), which in general causes the method to have limit points which are not minimizers of the consensus problem. We give a concrete example in Section 2.3.

2.2 Federated proximal algorithms

Another recently proposed algorithm is FedProx [16], which can be seen as a distributed method loosely based on the classical proximal point method [24]. For a given stepsize $s > 0$, the proximal operator of a function $f : \mathbb{R}^d \to \mathbb{R}$ and its associated optimal value, the Moreau envelope of $f$, are given by [19, 24, 25, chap. 1.G]:
$$\mathrm{prox}_{sf}(z) := \arg\min_{x \in \mathbb{R}^d} \Big\{ f(x) + \tfrac{1}{2s}\|z - x\|^2 \Big\} \quad \text{and} \quad M_{sf}(z) := \inf_{x \in \mathbb{R}^d} \Big\{ f(x) + \tfrac{1}{2s}\|z - x\|^2 \Big\}.$$
We remark that when $f$ is convex, the existence of such a (unique) minimizer for the problem implied by the proximal operator is immediate. With these definitions in place, we can now study the behavior of the FedProx method [16].
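To make these definitions concrete, the proximal operator and the Moreau envelope can be evaluated in closed form for a least-squares cost. The following minimal sketch is only an illustration (the helper names and the synthetic data are our own, not part of the paper); it solves the prox subproblem via its normal equations.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 3
A = rng.normal(size=(n, d))
b = rng.normal(size=n)
s = 0.1  # stepsize in prox_{sf}

def f(x):
    # f(x) = 0.5 * ||Ax - b||^2
    return 0.5 * np.sum((A @ x - b) ** 2)

def prox_f(z, s):
    # prox_{sf}(z) = argmin_x { f(x) + (1/2s) ||z - x||^2 }.
    # For the least-squares f above, the minimizer solves (s A^T A + I) x = s A^T b + z.
    return np.linalg.solve(s * (A.T @ A) + np.eye(d), s * (A.T @ b) + z)

def moreau_envelope(z, s):
    # M_{sf}(z) = f(prox_{sf}(z)) + (1/2s) ||z - prox_{sf}(z)||^2
    x = prox_f(z, s)
    return f(x) + np.sum((z - x) ** 2) / (2 * s)

z = rng.normal(size=d)
print("prox_{sf}(z) =", prox_f(z, s))
print("M_{sf}(z)    =", moreau_envelope(z, s))
```

For non-quadratic losses the same subproblem has no closed form, which is why the approximate proximal solvers discussed later matter.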
We again consider a deterministic version of FedProx, in which we remove any inaccuracies introduced by stochastic approximation. For a given initialization $x^{(1)}$, for $t = 1, 2, \ldots$:
$$x_j^{(t+1/2)} := \mathrm{prox}_{sf_j}\big(x_j^{(t)}\big), \quad \text{for } j \in [m], \text{ and} \qquad (6a)$$
$$x_j^{(t+1)} := \bar{x}^{(t+1/2)}, \quad \text{for } j \in [m]. \qquad (6b)$$
The following result characterizes the fixed points of this method.

Proposition 2. For any stepsize $s > 0$, the sequence $\{\mathbf{x}^{(t)}\}_{t=1}^{\infty}$ generated by the FedProx algorithm (see equations (6a) and (6b)) has the following properties: (a) if $\mathbf{x}^{(t)}$ is convergent, then the local variables $x_j^{(t)}$ share a common limit $x^\star$ such that $x_j^{(t)} \to x^\star$ as $t \to \infty$ for each $j \in [m]$; (b) the limit $x^\star$ satisfies the fixed point relation
$$\sum_{j=1}^{m} \nabla M_{sf_j}(x^\star) = 0. \qquad (7)$$
Hence, we see that this algorithm has fixed points that will be a zero of the sum of the gradients of the Moreau envelopes $M_{sf_j}$, rather than a zero of the sum of the gradients of the functions $f_j$ themselves. When $m > 1$, these fixed point relations are, in general, different.

It is worth noting a very special case in which FedGD and FedProx will preserve the correct fixed points, even when $e > 1$. In particular, suppose all of the local cost functions share a common minimizer $x^\star$, so that $\nabla f_j(x^\star) = 0$ for $j \in [m]$. Under this assumption, we have $G_j(x^\star) = x^\star$ for all $j \in [m]$, and hence by arguing inductively, we have $G_j^i(x^\star) = x^\star$ for all $i \geq 1$. Additionally, recall that the minimizers of $f_j$ and $M_{sf_j}$ coincide. Consequently, the fixed point relations (5) and (7), corresponding to FedGD and FedProx respectively, are both equivalent to the optimality condition for the federated problem. However, we emphasize this condition is not realistic in practice: if the optima of the $f_j$ are exactly (or even approximately) the same, there would be little point in sharing data between devices by solving the federated learning problem. In contrast, the FedSplit algorithm presented in the next section retains correct fixed points for general federated learning problems without making such unrealistic, additional assumptions.

2.3 Example: Incorrectness on a least squares problem

We illustrate these non-convergence results by specializing to least squares and carrying out a simulation study on a synthetic least squares dataset. For $j = 1, \ldots, m$, suppose that we are given a design matrix $A_j \in \mathbb{R}^{n_j \times d}$ and a response vector $b_j \in \mathbb{R}^{n_j}$. The least squares regression problem defined by all the devices takes the form
$$\text{minimize } F(x) := \frac{1}{2}\sum_{j=1}^{m} \|A_j x - b_j\|^2. \qquad (8)$$
This problem is a special case of our general problem (1) with $f_j(x) = (1/2)\|A_j x - b_j\|^2$ for all $j$. When the $A_j$ are full rank, the solution to this problem is unique and given by
$$x^\star_{\mathrm{ls}} = \Big(\sum_{j=1}^{m} A_j^T A_j\Big)^{-1} \sum_{j=1}^{m} A_j^T b_j. \qquad (9)$$
Following Proposition 2, it is easy to verify that FedProx has fixed points of the form
$$x^\star_{\mathrm{FedProx}} = \Big(\sum_{j=1}^{m} \big\{ I - (I + s A_j^T A_j)^{-1} \big\}\Big)^{-1} \Big(\sum_{j=1}^{m} (A_j^T A_j + (1/s)I)^{-1} A_j^T b_j\Big).$$
Following Proposition 1, it is easy to verify that FedGD has fixed points of the form
$$x^\star_{\mathrm{FedGD}} = \Big(\sum_{j=1}^{m} A_j^T A_j \Big\{\sum_{k=0}^{e-1} (I - s A_j^T A_j)^k\Big\}\Big)^{-1} \sum_{j=1}^{m} \Big(\Big\{\sum_{k=0}^{e-1} (I - s A_j^T A_j)^k\Big\} A_j^T b_j\Big), \qquad (10)$$
where we assume that $s > 0$ is small enough so that $\|I - s A_j^T A_j\|_{\mathrm{op}} < 1$, which ensures convergence. Therefore the previous three displays show that in general, when $m > 1$ and $e > 1$—that is, with more than one client, and more than one local update between communication rounds—we have $x^\star_{\mathrm{FedProx}} \neq x^\star_{\mathrm{ls}}$ and $x^\star_{\mathrm{FedGD}} \neq x^\star_{\mathrm{ls}}$. Therefore, we see that FedProx and FedGD do not have the correct fixed points, even with idealized deterministic updates.
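The three displays above are easy to check numerically. The sketch below is our own illustration (not the paper's code; the random instance and the variable names are assumptions): it evaluates $x^\star_{\mathrm{ls}}$, $x^\star_{\mathrm{FedProx}}$, and $x^\star_{\mathrm{FedGD}}$ on a small synthetic problem and reports the discrepancies.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d, n = 3, 2, 10          # clients, dimension, samples per client
A = [rng.normal(size=(n, d)) for _ in range(m)]
b = [rng.normal(size=n) for _ in range(m)]
s, e = 0.01, 10             # stepsize and local steps; s small so ||I - s A_j^T A_j||_op < 1
I = np.eye(d)

# Least-squares solution, equation (9)
x_ls = np.linalg.solve(sum(Aj.T @ Aj for Aj in A),
                       sum(Aj.T @ bj for Aj, bj in zip(A, b)))

# FedProx fixed point
M_prox = sum(I - np.linalg.inv(I + s * Aj.T @ Aj) for Aj in A)
v_prox = sum(np.linalg.solve(Aj.T @ Aj + I / s, Aj.T @ bj) for Aj, bj in zip(A, b))
x_fedprox = np.linalg.solve(M_prox, v_prox)

# FedGD fixed point, equation (10)
def geom_sum(Aj):
    # sum_{k=0}^{e-1} (I - s A_j^T A_j)^k
    P, S = np.eye(d), np.zeros((d, d))
    for _ in range(e):
        S += P
        P = P @ (I - s * Aj.T @ Aj)
    return S

M_gd = sum(Aj.T @ Aj @ geom_sum(Aj) for Aj in A)
v_gd = sum(geom_sum(Aj) @ (Aj.T @ bj) for Aj, bj in zip(A, b))
x_fedgd = np.linalg.solve(M_gd, v_gd)

print("||x_FedProx - x_ls|| =", np.linalg.norm(x_fedprox - x_ls))
print("||x_FedGD   - x_ls|| =", np.linalg.norm(x_fedgd - x_ls))
```

For a generic instance with $m > 1$ and $e > 1$, both discrepancies are nonzero, which is the phenomenon plotted in Figure 1.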
Figure 1 shows the results of applying the (deterministic) versions of FedProx and FedSGD, with varying numbers of local epochs $e \in \{1, 10, 100\}$, for the least squares minimization problem (8). As expected, we see that FedProx and multi-step, deterministic FedSGD fail to converge to the correct fixed point for this problem. Although the presented deterministic variant of FedSGD will converge when a single local gradient step is taken between communication rounds (i.e., when $e = 1$), we see that it also does not converge to the optimal solution as soon as $e > 1$. See Appendix B.1 of the supplement for additional details on this simulation study.

3 FedSplit and convergence guarantees

We now turn to the description of a framework that allows us to provide a clean characterization of the fixed points of iterative algorithms and to propose algorithms with convergence guarantees. Throughout our development, we assume that each function $f_j : \mathbb{R}^d \to \mathbb{R}$ is convex and differentiable.

3.1 An operator-theoretic view

We begin by recalling the consensus formulation (2) of the problem in terms of a block-partitioned vector $\mathbf{x} = (x_1, \ldots, x_m) \in (\mathbb{R}^d)^m$, the function $F : (\mathbb{R}^d)^m \to \mathbb{R}$ given by $F(\mathbf{x}) := \sum_{j=1}^{m} f_j(x_j)$, and the constraint set $E := \{\mathbf{x} \mid x_1 = x_2 = \cdots = x_m\}$, which is the feasible subspace for problem (2). By appealing to the first-order optimality conditions for the problem (2), it is equivalent to find a vector $\mathbf{x} \in (\mathbb{R}^d)^m$ such that $\nabla F(\mathbf{x})$ belongs to the normal cone of the constraint set $E$, or equivalently such that $\nabla F(\mathbf{x}) \in E^\perp$. Equivalently, if we define a set-valued operator $N_E$ as
$$N_E(\mathbf{x}) := \begin{cases} E^\perp, & x_1 = x_2 = \cdots = x_m, \\ \emptyset, & \text{otherwise,} \end{cases} \qquad (11)$$
then it is equivalent to find a vector $\mathbf{x} \in (\mathbb{R}^d)^m$ that satisfies the inclusion condition
$$0 \in \nabla F(\mathbf{x}) + N_E(\mathbf{x}), \qquad (12)$$
where $\nabla F(\mathbf{x}) = (\nabla f_1(x_1), \ldots, \nabla f_m(x_m))$. When the loss functions $f_j : \mathbb{R}^d \to \mathbb{R}$ are convex, both $\nabla F$ and $N_E$ are monotone operators on $(\mathbb{R}^d)^m$ [1]. Thus, the display (12) is a monotone inclusion problem. Methods for solving monotone inclusions have a long history of study within the applied mathematics and optimization literatures [26, 7]. We now use this framework to develop and analyze algorithms for solving the federated problems of interest.

3.2 Splitting procedures for federated optimization

We now describe a method, derived from splitting the inclusion relation, whose fixed points do correspond with global minima of the distributed problem. It is an instantiation of the Peaceman-Rachford splitting [20], which we refer to as the FedSplit algorithm in this distributed setting.

Algorithm 1 [FedSplit] Splitting scheme for solving federated problems of the form (1).
Given initialization $x \in \mathbb{R}^d$ and proximal solvers $\mathrm{prox\_update}_j : \mathbb{R}^d \to \mathbb{R}^d$:
Initialize $x^{(1)} = z_1^{(1)} = \cdots = z_m^{(1)} = x$.
For $t = 1, 2, \ldots$:
1. For $j = 1, \ldots, m$:
   a. Local prox step: set $z_j^{(t+1/2)} = \mathrm{prox\_update}_j\big(2x^{(t)} - z_j^{(t)}\big)$.
   b. Local centering step: set $z_j^{(t+1)} = z_j^{(t)} + 2\big(z_j^{(t+1/2)} - x^{(t)}\big)$.
2. Compute global average: set $x^{(t+1)} = \bar{z}^{(t+1)}$.

Thus, the FedSplit procedure maintains a parameter vector $z_j^{(t)} \in \mathbb{R}^d$ for each device $j \in [m]$. The central server maintains a parameter vector $x^{(t)} \in \mathbb{R}^d$, which collects averages of the parameter estimates at each machine. The local update at device $j$ is defined in terms of a proximal solver $\mathrm{prox\_update}_j(\cdot)$, which will typically be an approximate proximal update, $\mathrm{prox\_update}_j(x) \approx \mathrm{prox}_{sf_j}(x)$, uniformly in $x \in \mathbb{R}^d$ for a suitable stepsize $s > 0$.
We make the sense of this approximation precise when we state our convergence results in Section 3.3. An advantage of FedSplit is that, unlike FedGD and FedProx, it has the correct fixed points for the distributed problem.

Proposition 3. Suppose that for some $s > 0$, $\mathrm{prox\_update}_j(\cdot) = \mathrm{prox}_{sf_j}(\cdot)$ for all $j$. Suppose that $\mathbf{z}^\star = (z_1^\star, \ldots, z_m^\star)$ is a fixed point for the FedSplit procedure, meaning that
$$z_j^\star = z_j^\star + 2\Big(\mathrm{prox}_{sf_j}\big(2\bar{z}^\star - z_j^\star\big) - \bar{z}^\star\Big), \quad \text{for all } j \in [m]. \qquad (13)$$
Then the average $x^\star := \frac{1}{m}\sum_{j=1}^{m} z_j^\star$ is optimal: $\sum_{j=1}^{m} f_j(x^\star) = \inf_{x \in \mathbb{R}^d} \sum_{j=1}^{m} f_j(x)$.

3.3 Convergence results

In this section, we give convergence guarantees for the FedSplit procedure in Algorithm 1 under exact and inexact proximal operator implementations.

Strongly convex and smooth losses. We begin by considering the case when the losses $f_j : \mathbb{R}^d \to \mathbb{R}$ are $\ell_j$-strongly convex and $L_j$-smooth. We define
$$\ell_* := \min_{j=1,\ldots,m} \ell_j, \qquad L^* := \max_{j=1,\ldots,m} L_j, \qquad \text{and} \qquad \kappa := \frac{L^*}{\ell_*}. \qquad (14)$$
Note that $\kappa$ corresponds to the induced condition number of our federated problem (2). The following result demonstrates that in this setting, our method enjoys geometric convergence to the optimum, even with inexact proximal implementations.

Theorem 1. Consider the FedSplit algorithm with possibly inexact proximal implementations,
$$\|\mathrm{prox\_update}_j(z) - \mathrm{prox}_{sf_j}(z)\| \leq b \quad \text{for all } j \text{ and all } z \in \mathbb{R}^d, \qquad (15)$$
and with stepsize $s = 1/\sqrt{\ell_* L^*}$. Then for any initialization, the iterates satisfy
$$\|x^{(t+1)} - x^\star\| \leq \Big(1 - \frac{2}{\sqrt{\kappa} + 1}\Big)^t \frac{\|\mathbf{z}^{(1)} - \mathbf{z}^\star\|}{\sqrt{m}} + (\sqrt{\kappa} + 1)\, b, \quad \text{for all } t = 1, 2, \ldots. \qquad (16)$$
We now discuss some aspects of Theorem 1.

Exact proximal evaluations: In the special (albeit unrealistic) case when the proximal evaluations are exact, the uniform bound (15) holds with $b = 0$. Consequently, given some initialization $\mathbf{z}^{(1)}$, if we want $\varepsilon$-accuracy, meaning $\|x^{(T)} - x^\star\| \leq \varepsilon$, we see that this occurs as soon as $T$ exceeds
$$T(\varepsilon, \kappa) = O(1)\Big\{\sqrt{\kappa}\, \log\Big(\frac{\|\mathbf{z}^{(1)} - \mathbf{z}^\star\|}{\varepsilon\sqrt{m}}\Big)\Big\}$$
iterations of the overall procedure. Here $O(1)$ denotes a universal constant.

Approximate proximal updates by gradient steps: In practice, the FedSplit algorithm will be implemented using an approximate prox-solver. Recall that the proximal update at device $j$ at round $t$ takes the form
$$\mathrm{prox}_{sf_j}\big(x_j^{(t)}\big) = \arg\min_{u \in \mathbb{R}^d} \underbrace{\Big\{ s f_j(u) + \tfrac{1}{2}\big\|u - x_j^{(t)}\big\|_2^2 \Big\}}_{h_j(u)}.$$
A natural way to compute an approximate minimizer is to run $e$ rounds of gradient descent on the function $h_j$. Concretely, at round $t$, we initialize the gradient method with the initial point $u^{(1)} = x_j^{(t)}$, and run gradient descent on $h_j$ with a stepsize $\alpha$, thereby generating the sequence
$$u^{(k+1)} = u^{(k)} - \alpha \nabla h_j\big(u^{(k)}\big) = u^{(k)} - \alpha\Big(s \nabla f_j\big(u^{(k)}\big) + u^{(k)} - x_j^{(t)}\Big). \qquad (17)$$
We define $\mathrm{prox\_update}_j(x_j^{(t)})$ to be the output of this procedure after $e$ steps.

Corollary 1 (FedSplit convergence with inexact proximal updates). Consider the FedSplit procedure run with proximal stepsize $s = \frac{1}{\sqrt{\ell_* L^*}}$, and using approximate proximal updates based on $e$ rounds of gradient descent with stepsize $\alpha = \big(1 + s\,\frac{\ell_* + L^*}{2}\big)^{-1}$, initialized (in round $t$) at the previous iterate $x_j^{(t)}$. Then the bound (15) holds at round $t$ with error at most
$$b \leq \Big(1 - \frac{1}{\sqrt{\kappa} + 1}\Big)^{e}\, \big\|x_j^{(t)} - \mathrm{prox}_{sf_j}\big(x_j^{(t)}\big)\big\|_2. \qquad (18)$$
Given the exponential decay in the number of rounds $e$ exhibited in the bound (18), in practice it suffices to take a relatively small number of gradient steps. For instance, in our experiments to be reported in Section 4, we find that $e = 10$ suffices to match the exact proximal updates. This inexact proximal update could also be implemented with a gradient method and backtracking line search [5].
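For intuition, here is a compact sketch of Algorithm 1 with the inexact proximal solver of equation (17), specialized to quadratic clients so that all constants are computable. It is only an illustration under our own assumptions (synthetic data, our helper names, stepsizes set as in Theorem 1 and Corollary 1), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
m, d, n = 5, 4, 50
A = [rng.normal(size=(n, d)) for _ in range(m)]
b = [rng.normal(size=n) for _ in range(m)]

def grad_f(j, x):
    # gradient of f_j(x) = 0.5 * ||A_j x - b_j||^2
    return A[j].T @ (A[j] @ x - b[j])

# Strong-convexity / smoothness constants of the quadratic clients.
spectra = [np.linalg.eigvalsh(Aj.T @ Aj) for Aj in A]
ell = min(w.min() for w in spectra)
L = max(w.max() for w in spectra)
s = 1.0 / np.sqrt(ell * L)                 # proximal stepsize from Theorem 1
alpha = 1.0 / (1.0 + s * (ell + L) / 2.0)  # inner stepsize from Corollary 1

def prox_update(j, v, e=10):
    # e gradient steps on h_j(u) = s f_j(u) + 0.5 ||u - v||^2, started at u = v.
    u = v.copy()
    for _ in range(e):
        u = u - alpha * (s * grad_f(j, u) + u - v)
    return u

def fedsplit(T=100, e=10):
    x = np.zeros(d)
    z = np.zeros((m, d))
    for _ in range(T):
        for j in range(m):
            z_half = prox_update(j, 2 * x - z[j], e)  # local prox step
            z[j] = z[j] + 2 * (z_half - x)            # local centering step
        x = z.mean(axis=0)  # global average, i.e. projection onto the consensus set E
    return x

x_star = np.linalg.solve(sum(Aj.T @ Aj for Aj in A),
                         sum(Aj.T @ bj for Aj, bj in zip(A, b)))
print("distance to optimum:", np.linalg.norm(fedsplit() - x_star))
```

With $e = 10$ inner gradient steps, the printed distance settles at the small error floor corresponding to the $(\sqrt{\kappa}+1)\,b$ term of Theorem 1; increasing $e$ pushes it further down.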
Smooth but not strongly convex losses. We now consider the case when the $f_j : \mathbb{R}^d \to \mathbb{R}$ are $L_j$-smooth and convex, but not necessarily strongly convex. In this case, the consensus objective $F(\mathbf{z}) = \sum_{j=1}^{m} f_j(z_j)$ is an $L^*$-smooth function on the product space $(\mathbb{R}^d)^m$. (To avoid degeneracies, we assume $x \mapsto \sum_{j=1}^{m} f_j(x)$ is bounded below and attains its minimum.) Our approach to solving such a problem is to apply the FedSplit procedure to a suitably regularized version of the original problem. More precisely, given some initial vector $x^{(1)} \in \mathbb{R}^d$ and regularization parameter $\lambda > 0$, let us define the function
$$F_\lambda(\mathbf{z}) := \sum_{j=1}^{m} \Big\{ f_j(z_j) + \frac{\lambda}{2m}\big\|z_j - x^{(1)}\big\|^2 \Big\}. \qquad (19)$$
We see that $F_\lambda : (\mathbb{R}^d)^m \to \mathbb{R}$ is a $\lambda$-strongly convex and $L^*_\lambda = (L^* + \lambda)$-smooth function. The next result shows that for any $\varepsilon > 0$, minimizing the function $F_\lambda$ up to an error of order $\varepsilon$, using a carefully chosen $\lambda$, yields an $\varepsilon$-cost-suboptimal minimizer of the original objective function $F$.

Theorem 2. Given some $\lambda \in \Big(0, \frac{\varepsilon}{m\|x^{(1)} - x^\star\|^2}\Big)$ and any initialization $x^{(1)} \in \mathbb{R}^d$, suppose that we run the FedSplit procedure (Algorithm 1) on the regularized objective $F_\lambda$ using exact prox steps with stepsize $s = 1/\sqrt{\lambda L^*_\lambda}$. Then the FedSplit algorithm outputs a vector $\widehat{x} \in \mathbb{R}^d$ satisfying $F(\widehat{x}) - F^\star \leq \varepsilon$ after exceeding $\widetilde{O}\Big(\sqrt{\frac{L^* \|x^{(1)} - x^\star\|^2}{\varepsilon}}\Big)$ iterations. (The $\widetilde{O}(\cdot)$ notation denotes constant and polylogarithmic factors that are not dominant.)

We remark that this faster convergence rate of $\widetilde{O}(t^{-2})$ is nearly optimal for first-order algorithms [18], and to our knowledge such results were not known for operator splitting-based procedures prior to this work.

4 Experiments

In this section, we present numerical results for FedSplit on some convex federated optimization problem instances. We include additional details on these simulations in Section B of the supplement.

Logistic regression. We begin with federated binary classification, where we solve
$$\text{minimize } \sum_{j=1}^{m} \sum_{i=1}^{n_j} \log\big(1 + e^{-b_{ij} a_{ij}^T x}\big), \qquad (20)$$
with variable $x \in \mathbb{R}^d$. We generate the problem data $\{(a_{ij}, b_{ij})\} \subset \mathbb{R}^d \times \{\pm 1\}$ synthetically; see Section B.2.1 in the supplement for details. We also use FedSplit to solve a multiclass classification problem with $K$ classes. Here we solve
$$\text{minimize } \sum_{j=1}^{m} \Big\{ \sum_{i=1}^{n_j} \sum_{k=1}^{K} \log\big(1 + e^{-b_{ijk} a_{ij}^T x_k}\big) + \frac{\lambda}{2}\sum_{k=1}^{K} \|x_k\|^2 \Big\}, \qquad (21)$$
with variables $x_1, x_2, \ldots, x_K \in \mathbb{R}^d$, regularization parameter $\lambda > 0$, and sample size $N = \sum_{j=1}^{m} n_j$. Here, the problem data $\{(a_{ij}, b_{ij})\} \subset \mathbb{R}^d \times \{\pm 1\}^K$ are images and multiclass labels from the FEMNIST dataset in the LEAF framework [6]. This dataset was proposed as a benchmark for federated optimization; there are $N = 805{,}263$ images, $m = 3{,}550$ clients, and $K = 62$ classes. The problem dimension is $d = 6{,}875$; see Section B.2.2 in the supplement for additional details.

In Figure 2, we present numerical results on problems (20) and (21). We implement FedSplit with exact proximal operators and with inexact implementations using a constant number of gradient steps $e \in \{1, 5, 10\}$. For comparison, we implemented a federated gradient method as previously described (4). As shown in Figure 2(a), both FedGD with $e = 1$ and the FedSplit procedure exhibit linear convergence rates. Using inexact proximal updates with the FedSplit procedure preserves the linear convergence up to the error floor introduced by the inexactness of the updates. In this case, the inexact proximal updates with $e = 10$—that is, performing 10 local updates per round of global communication—suffice to track the exact FedSplit procedure up to an accuracy below $10^{-6}$.
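The same inexact proximal solver applies to the logistic model (20): each client only needs its local loss and gradient, sketched below as our own helper code on toy data (the paper's actual data pipeline is described in the supplement). A standard smoothness bound for this loss is $L_j \le \lambda_{\max}(A_j^T A_j)/4$, which is one way to pick the stepsizes.

```python
import numpy as np

def logistic_loss(x, A_j, b_j):
    # f_j(x) = sum_i log(1 + exp(-b_ij * a_ij^T x)), with labels b_ij in {-1, +1}
    margins = b_j * (A_j @ x)
    return np.logaddexp(0.0, -margins).sum()

def logistic_grad(x, A_j, b_j):
    # gradient of f_j: -sum_i b_ij * sigmoid(-margin_i) * a_ij
    margins = b_j * (A_j @ x)
    sigma = 1.0 / (1.0 + np.exp(margins))
    return -(A_j * (b_j * sigma)[:, None]).sum(axis=0)

# Tiny synthetic client, just to exercise the two functions.
rng = np.random.default_rng(4)
A_j = rng.normal(size=(8, 3))
b_j = rng.choice([-1.0, 1.0], size=8)
x0 = np.zeros(3)
print(logistic_loss(x0, A_j, b_j))   # equals 8 * log(2) at x = 0
print(logistic_grad(x0, A_j, b_j))
```

Since problem (20) is convex but not strongly convex, one would first add the $\frac{\lambda}{2m}\|z_j - x^{(1)}\|^2$ term of equation (19) to each client, as Theorem 2 prescribes, before running the FedSplit iterations.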
In Figure 2(b), we see that FedSplit similarly outperforms FedGD on actual client data. (Given the large-scale nature of this example, we implement an accelerated gradient method for the proximal updates, terminated when the gradient of the proximal objective drops below $10^{-8}$.)

Dependence on problem conditioning. It is well known that the convergence rates of first-order methods are affected by problem conditioning. First, let us re-state our theoretical guarantees in terms of iteration complexity. We let $T(\varepsilon, \kappa)$ denote the maximum number of iterations required so that, for any problem with condition number at most $\kappa$, the iterate $x^{(T)}$ with $T = T(\varepsilon, \kappa)$ satisfies the bound $F(x^{(T)}) - F^\star \leq \varepsilon$. For federated objectives with condition number $\kappa$ as defined in (14), FedSplit and FedGD have iteration complexities
$$T_{\mathrm{FedSplit}}(\varepsilon, \kappa) = O\big(\sqrt{\kappa}\,\log(1/\varepsilon)\big) \quad \text{and} \quad T_{\mathrm{FedGrad}}(\varepsilon, \kappa) = O\big(\kappa\,\log(1/\varepsilon)\big). \qquad (22)$$
This follows from Theorem 1 and standard results from convex optimization theory [18]. Hence, whereas FedSplit has a more expensive local update, it has much better dependence on the condition number $\kappa$. In the context of federated optimization, this iteration complexity should be interpreted as the number of communication rounds between clients and the coordinating entity. Hence, this highlights a concrete tradeoff between local computation and global communication in these methods. Note that while accelerated first-order methods match the iteration complexity of FedSplit, they are sensitive to stepsize misspecification and are not robust to errors incurred in gradient updates [9]. This is in contrast to the inexact convergence guarantees that FedSplit enjoys (see Theorem 1).

In Figure 3, we present the results of a simulation study that shows these iteration complexity estimates are accurate in practice. We construct a sequence of least squares problems with condition number $\kappa$ varying between 10 and 10000. We then look at the number of iterations required to obtain an $\varepsilon$-cost-suboptimal solution with $\varepsilon = 10^{-3}$; see Section B.3 in the supplement for additional simulation details. In this way, we obtain estimates of the functions $\kappa \mapsto T_{\mathrm{FedGrad}}(10^{-3}, \kappa)$ and $\kappa \mapsto T_{\mathrm{FedSplit}}(10^{-3}, \kappa)$, which measure the dependence of the iteration complexity on the condition number. Figure 3 provides plots of these estimated functions. Consistent with our theory, we see that FedGD has an approximately linear dependence on the condition number, whereas the FedSplit procedure has much milder dependence on conditioning. Concretely, for an instance with condition number $\kappa = 10000$, the FedGD procedure requires on the order of 34000 iterations, whereas the FedSplit procedure requires roughly 400 iterations. Therefore, while FedSplit involves more expensive intermediate proximal updates, it enjoys a smaller iteration count, which in the context of this federated setting indicates a significantly smaller number of communication rounds between clients and the centralized server.
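The conditioning sweep behind Figure 3 can be reproduced on synthetic data. One simple way to build least-squares instances with a prescribed condition number, which is our own construction and not necessarily the one used in the paper, is to set the singular values of each design matrix directly:

```python
import numpy as np

def design_with_condition(n, d, kappa, rng):
    # A = U diag(sigma) V^T with orthonormal U, V and singular values chosen so
    # that the eigenvalues of A^T A span exactly [1, kappa].
    U, _ = np.linalg.qr(rng.normal(size=(n, d)))
    V, _ = np.linalg.qr(rng.normal(size=(d, d)))
    sigma = np.sqrt(np.linspace(1.0, kappa, d))
    return U @ np.diag(sigma) @ V.T

rng = np.random.default_rng(5)
for kappa in [10, 100, 1000, 10000]:
    A = design_with_condition(100, 10, kappa, rng)
    w = np.linalg.eigvalsh(A.T @ A)
    print(kappa, w.max() / w.min())   # matches kappa up to round-off
```

Running the FedGD recursion (4) and the FedSplit sketch above on such instances, and counting rounds until the cost gap falls below $10^{-3}$, is essentially the experiment summarized in Figure 3.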
5 Discussion

We highlight a few interesting directions for future work on federated learning and FedSplit. First, in practice it is standard to use stochastic optimization algorithms when solving large-scale machine learning problems, and we are currently analyzing stochastic approximation procedures as applied to the device-based proximal updates underlying our method. Our results on the incorrectness of previously proposed methods, together with the work of Woodworth and colleagues [27] on the suboptimality of multi-step stochastic gradient methods, highlight the need for better understanding of the tradeoff between the accuracy of stochastic and deterministic approximations to intermediate quantities and rates of convergence in federated optimization. We also mention the possibility of employing stochastic approximation with higher-order methods, such as the Newton sketch algorithm [21, 22]. It is also important to consider our procedure under asynchronous updates, perhaps under delays in computation. Finally, an important desideratum in federated learning is suitable privacy guarantees for clients' local data [3]. Understanding how noise aggregated through differentially private mechanisms couples with our inexact convergence guarantees is a key direction for future work.

Broader Impact

As mentioned in the introduction, a main application of federated optimization is to large-scale statistical learning, as carried out by application developers and cell phone manufacturers. On the other hand, learning from federated data is also inherent to other settings where data is not stored centrally: consider, for example, collecting clinical trial data across multiple hospitals and running a centralized analysis. Therefore, we envision analysts who are operating in these settings—where data is not available centrally due to communication barriers or privacy constraints—as the main beneficiaries of this work. Our methods enjoy the same trade-offs with respect to biases in data and failures of systems as other standard first-order algorithms. We believe that having convergent algorithms in this federated setting should help promote good practices with regard to analyzing large-scale, federated, and sensitive datasets.

Acknowledgments and Disclosure of Funding

We thank Bora Nikolic and Cong Ma for their careful reading and comments on an initial draft of this manuscript. RP was partially supported by a Berkeley Fellowship via the ARCS Foundation. MJW was partially supported by Office of Naval Research grant DOD-ONR-N00014-18-1-2640, and NSF grant NSF-DMS-1612948.
1. What is the focus and contribution of the paper on federated optimization? 2. What are the strengths of the proposed approach, particularly in terms of its elegance and simplicity? 3. What are the weaknesses of the paper regarding its limitations in theoretical analysis? 4. How does the reviewer assess the significance and novelty of the paper's contributions?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper presents a new splitting method for federated optimization. Convergence rates are established under a variety of plausible assumptions. These range from a geometric rate under Lipschitz-smoothness plus strong-convexity assumptions, to O(1/t^2) rates for smooth but not strongly convex functions. In all cases, exact (i.e., non-noisy) gradients are considered. Strengths The results are new and the problem formulation is natural and useful. The method is elegant. Overall, I think the paper makes a good contribution to the literature, especially compared to many previous works whose results are much messier. Weaknesses - The main weakness is the lack of a good theoretical result in the case when the proximal steps are not solved exactly. In that case, Corollary 1 gives a bound for b, but this is given in terms of the iterates themselves. I suppose one can always try to assume that the optimization is done over a set of finite diameter, in which case the right-hand side can be naturally bounded; but in the absence of this assumption, the analysis is not fully complete. - The results are quite easy to obtain and the techniques come from a natural modification of well-known arguments. However, this might be just because the method is cleverly chosen so that the ultimate analysis is simple.
NIPS
1. What are the key contributions and novel aspects introduced by the paper in federated learning? 2. What are the weaknesses of the paper compared to prior works? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The authors showed that the fixed points of FedSGD and FedProx need not correspond to stationary points of the original optimization problem. They then proposed the FedSplit framework to deal with this issue. Strengths - Strong theory; in fact, the authors derived an explicit formula for the fixed points of FedSGD and FedProx. - They gave examples of situations where these points do not correspond to stationary points of the original optimization problem, with a particular focus on least squares. - They proposed FedSplit to handle this issue and proved its convergence to the right points. Weaknesses - The experiments are not strong. - A numerical illustration is given only for simple logistic regression and for FEMNIST. - The authors could strengthen the numerical part using bigger models and larger datasets.
NIPS
Title FedSplit: an algorithmic framework for fast federated optimization Abstract Motivated by federated learning, we consider the hub-and-spoke model of distributed optimization in which a central authority coordinates the computation of a solution among many agents while limiting communication. We first study some past procedures for federated optimization, and show that their fixed points need not correspond to stationary points of the original optimization problem, even in simple convex settings with deterministic updates. In order to remedy these issues, we introduce FedSplit, a class of algorithms based on operator splitting procedures for solving distributed convex minimization with additive structure. We prove that these procedures have the correct fixed points, corresponding to optima of the original optimization problem, and we characterize their convergence rates under different settings. Our theory shows that these methods are provably robust to inexact computation of intermediate local quantities. We complement our theory with some experiments that demonstrate the benefits of our methods in practice.

1 Introduction Federated learning is a rapidly evolving application of distributed optimization for learning problems in large-scale networks of remote clients [13]. These systems present new challenges, as they are characterized by heterogeneity in computational resources and data across a large, multi-agent network, unreliable communication, and privacy constraints due to sensitive client data [15]. Although distributed optimization has a rich history and extensive literature (e.g., see the sources [2, 4, 8, 28, 14, 23] and references therein), renewed interest due to federated learning has led to a flurry of recent work in the area. Notably, McMahan et al. [17] introduced the FedSGD and FedAvg algorithms, by adapting the classical stochastic gradient method to the federated setting, considering the possibility that clients may fail and may only be subsampled on each round of computation. Another recent proposal, FedProx, attempted to mitigate potential device heterogeneity issues by applying averaged proximal updates to solve federated minimization problems. Currently, a general convergence theory of these methods is lacking. Moreover, practitioners have documented failures of convergence in certain settings (e.g., see Figure 3 and related discussion in the work [17]).

Our contributions: The first contribution of this paper is to analyze some past procedures, and show that even in the favorable setting of deterministic updates (i.e., no stochastic approximation used), these methods typically fail to preserve solutions of the original optimization problem as fixed points. More precisely, even when these methods do converge, the resulting fixed point need not correspond to an optimal solution of the desired federated learning problem. Since the stochastic variants implemented in practice are approximate versions of the underlying deterministic procedures, this implies these methods also fail to preserve the correct fixed points in general. With the motivation of rectifying this undesirable feature, our second contribution is to introduce a family of federated optimization algorithms, which we call FedSplit, that do preserve the correct fixed points for distributed optimization problems of the form

minimize F(x) := Σ_{j=1}^m f_j(x),   (1)
where f_j : R^d → R are the clients' cost functions for the variable x ∈ R^d. In machine learning applications, the vector x ∈ R^d is a parameter of a statistical model. Our procedure and analysis build on a long line of work relating optimization with monotone operators and operator splitting techniques [4, 26, 7, 1]. In this paper, we focus on the case when the f_j are convex functions with Lipschitz continuous gradient [24].

2 Existing algorithms and their fixed points We focus our discussion on deterministic analogues of two recently proposed procedures—namely, FedSGD [17] and FedProx [16]. For analysis, it is useful to introduce the equivalent, consensus reformulation [4] of the distributed problem (1): minimize F(x) := Σ_{j=1}^m f_j(x_j) subject to x_1 = x_2 = · · · = x_m.   (2)

2.1 Federated gradient algorithms The recently proposed FedSGD method [17] is based on a multi-step projected stochastic gradient method for solving the consensus problem. For our analysis we consider the obvious deterministic version of this algorithm, which replaces the stochastic gradient by the full gradient. Formally, given a stepsize s > 0, define the gradient mappings G_j(x) := x − s∇f_j(x) for j = 1, . . . , m.   (3) For a given integer e ≥ 1, we define G_j^e as the e-fold composition of G_j, and G_j^0 as the identity operator on R^d. The FedGD(s, e) algorithm from initialization x^(1) obeys the recursion, for t = 1, 2, . . .: x_j^(t+1/2) := G_j^e(x_j^(t)), for j ∈ [m] := {1, 2, . . . , m}, and (4a) x_j^(t+1) := x̄^(t+1/2), for j ∈ [m]. (4b) Recall that x̄^(t+1/2) = (1/m) Σ_{j=1}^m x_j^(t+1/2) is the block average. The following result characterizes the fixed points of this procedure.

Proposition 1. For any s > 0 and e ≥ 1, the sequence {x^(t)}_{t=1}^∞ generated by the FedGD(s, e) algorithm in equation (4) has the following properties: (a) if x^(t) is convergent, then the local variables x_j^(t) share a common limit x⋆ such that x_j^(t) → x⋆ as t → ∞ for j ∈ [m]; (b) any such limit x⋆ satisfies the fixed-point relation Σ_{i=1}^e Σ_{j=1}^m ∇f_j(G_j^{i−1}(x⋆)) = 0.   (5)

The proof of this claim, as well as all other claims in the paper, are deferred to Appendix A of the supplement. Unpacking this claim slightly, suppose first that a single update is performed between communications, so e = 1. In this case, we have Σ_{i=1}^e ∇f_j(G_j^{i−1}(x⋆)) = ∇f_j(x⋆), so that if x^(t) has a limit x⋆, it satisfies the relations x⋆_1 = x⋆_2 = · · · = x⋆_m and Σ_{j=1}^m ∇f_j(x⋆_j) = 0. Consequently, provided that the losses f_j are convex, Proposition 1 implies that the limit of the sequence x^(t), when it exists, is a minimizer of the consensus problem (2). On the other hand, when e > 1, a limit of the iterate sequence x^(t) must satisfy equation (5), which in general causes the method to have limit points which are not minimizers of the consensus problem. We give a concrete example in Section 2.3.

2.2 Federated proximal algorithms Another recently proposed algorithm is FedProx [16], which can be seen as a distributed method loosely based on the classical proximal point method [24]. For a given stepsize s > 0, the proximal operator of a function f : R^d → R and its associated optimal value, the Moreau envelope of f, are given by [19, 24, 25, chap. 1.G]: prox_{sf}(z) := argmin_{x ∈ R^d} { f(x) + (1/2s)‖z − x‖² } and M_{sf}(z) := inf_{x ∈ R^d} { f(x) + (1/2s)‖z − x‖² }. We remark that when f is convex, the existence of such a (unique) minimizer for the problem implied by the proximal operator is immediate. With these definitions in place, we can now study the behavior of the FedProx method [16].
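To make the proximal notation above concrete, the following sketch (not from the paper; purely illustrative) computes prox_{sf} and the Moreau envelope M_{sf} in NumPy for a quadratic cost f(x) = (1/2)‖Ax − b‖², for which the proximal update has the closed form (I + sAᵀA)⁻¹(z + sAᵀb); the matrices A and b are synthetic placeholders.

```python
import numpy as np

def prox_quadratic(z, A, b, s):
    """Proximal operator prox_{sf}(z) for f(x) = 0.5 * ||Ax - b||^2.

    Minimizes f(x) + (1/(2s)) * ||z - x||^2; the optimality condition
    A^T (Ax - b) + (x - z)/s = 0 gives x = (I + s A^T A)^{-1} (z + s A^T b).
    """
    d = A.shape[1]
    return np.linalg.solve(np.eye(d) + s * A.T @ A, z + s * A.T @ b)

def moreau_envelope_quadratic(z, A, b, s):
    """Moreau envelope M_{sf}(z): the optimal value of the prox problem."""
    x = prox_quadratic(z, A, b, s)
    return 0.5 * np.sum((A @ x - b) ** 2) + np.sum((z - x) ** 2) / (2 * s)

# Tiny usage example with synthetic data.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
z = np.zeros(5)
print(prox_quadratic(z, A, b, s=0.1), moreau_envelope_quadratic(z, A, b, s=0.1))
```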
We again consider a deterministic version of FedProx, in which we remove any inaccuracies introduced by stochastic approximation. For a given initialization x^(1), for t = 1, 2, . . .: x_j^(t+1/2) := prox_{sf_j}(x_j^(t)), for j ∈ [m], and (6a) x_j^(t+1) := x̄^(t+1/2), for j ∈ [m]. (6b) The following result characterizes the fixed points of this method.

Proposition 2. For any stepsize s > 0, the sequence {x^(t)}_{t=1}^∞ generated by the FedProx algorithm (see equations (6a) and (6b)) has the following properties: (a) if x^(t) is convergent, then the local variables x_j^(t) share a common limit x⋆ such that x_j^(t) → x⋆ as t → ∞ for each j ∈ [m]; (b) the limit x⋆ satisfies the fixed-point relation Σ_{j=1}^m ∇M_{sf_j}(x⋆) = 0.   (7)

Hence, we see that this algorithm has fixed points that will be a zero of the sum of the gradients of the Moreau envelopes M_{sf_j}, rather than a zero of the sum of the gradients of the functions f_j themselves. When m > 1, these fixed-point relations are, in general, different. It is worth noting a very special case in which FedGD and FedProx will preserve the correct fixed points, even when e > 1. In particular, suppose all of the local cost functions share a common minimizer x⋆, so that ∇f_j(x⋆) = 0 for j ∈ [m]. Under this assumption, we have G_j(x⋆) = x⋆ for all j ∈ [m], and hence, by arguing inductively, we have G_j^i(x⋆) = x⋆ for all i ≥ 1. Additionally, recall that the minimizers of f_j and M_{sf_j} coincide. Consequently, the fixed-point relations (5) and (7) corresponding to FedGD and FedProx, respectively, are both equivalent to the optimality condition for the federated problem. However, we emphasize this condition is not realistic in practice: if the optima of the f_j are exactly (or even approximately) the same, there would be little point in sharing data between devices by solving the federated learning problem. In contrast, the FedSplit algorithm presented in the next section retains correct fixed points for general federated learning problems without making such unrealistic, additional assumptions.

2.3 Example: Incorrectness on a least squares problem We illustrate these non-convergence results by specializing to least squares and carrying out a simulation study on a synthetic least squares dataset. For j = 1, . . . , m, suppose that we are given a design matrix A_j ∈ R^{n_j × d} and a response vector b_j ∈ R^{n_j}. The least squares regression problem defined by all the devices takes the form minimize F(x) := (1/2) Σ_{j=1}^m ‖A_j x − b_j‖².   (8) This problem is a special case of our general problem (1) with f_j(x) = (1/2)‖A_j x − b_j‖² for all j. When the A_j are full rank, the solution to this problem is unique and given by x⋆_ls = (Σ_{j=1}^m A_j^T A_j)^{−1} Σ_{j=1}^m A_j^T b_j.   (9) Following Proposition 2, it is easy to verify that FedProx has fixed points of the form x⋆_FedProx = (Σ_{j=1}^m { I − (I + s A_j^T A_j)^{−1} })^{−1} (Σ_{j=1}^m (A_j^T A_j + (1/s) I)^{−1} A_j^T b_j). Following Proposition 1, it is easy to verify that FedGD has fixed points of the form1 x⋆_FedGD = (Σ_{j=1}^m A_j^T A_j { Σ_{k=0}^{e−1} (I − s A_j^T A_j)^k })^{−1} (Σ_{j=1}^m { Σ_{k=0}^{e−1} (I − s A_j^T A_j)^k } A_j^T b_j).   (10) Therefore, the previous three displays show that in general, when m > 1 and e > 1—that is, with more than one client, and more than one local update between communication rounds—we have x⋆_FedProx ≠ x⋆_ls and x⋆_FedGD ≠ x⋆_ls. Therefore, we see that FedProx and FedGD do not have the correct fixed points, even with idealized deterministic updates.
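The fixed-point formulas above are easy to check numerically. The sketch below (illustrative only, with synthetic A_j and b_j rather than the paper's simulation setup) evaluates x⋆_ls from equation (9) together with the FedProx and FedGD fixed points, and confirms that they differ from x⋆_ls when e > 1.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, d, s, e = 3, 30, 4, 0.01, 10
As = [rng.standard_normal((n, d)) for _ in range(m)]
bs = [rng.standard_normal(n) for _ in range(m)]
I = np.eye(d)

# Global least-squares solution, equation (9).
x_ls = np.linalg.solve(sum(A.T @ A for A in As),
                       sum(A.T @ b for A, b in zip(As, bs)))

# FedProx fixed point (Proposition 2, specialized to least squares).
M = sum(I - np.linalg.inv(I + s * A.T @ A) for A in As)
v = sum(np.linalg.solve(A.T @ A + I / s, A.T @ b) for A, b in zip(As, bs))
x_fedprox = np.linalg.solve(M, v)

# FedGD fixed point, equation (10), with e local gradient steps per round.
def geom(A):
    """Compute sum_{k=0}^{e-1} (I - s A^T A)^k."""
    P, S = np.eye(d), np.zeros((d, d))
    for _ in range(e):
        S += P
        P = P @ (I - s * A.T @ A)
    return S

Mg = sum(A.T @ A @ geom(A) for A in As)
vg = sum(geom(A) @ A.T @ b for A, b in zip(As, bs))
x_fedgd = np.linalg.solve(Mg, vg)

# Both distances should be strictly positive, illustrating the incorrect fixed points.
print(np.linalg.norm(x_fedprox - x_ls), np.linalg.norm(x_fedgd - x_ls))
```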
Figure 1 shows the results of applying the (deterministic) versions of FedProx and FedSGD, with varying numbers of local epochs e ∈ {1, 10, 100}, for the least squares minimization problem (8). As expected, we see that FedProx and multi-step, deterministic FedSGD fail to converge to the correct fixed point for this problem. Although the presented deterministic variant of FedSGD will converge when a single local gradient step is taken between communication rounds (i.e., when e = 1), we see that it also does not converge to the optimal solution as soon as e > 1. See Appendix B.1 of the supplement for additional details on this simulation study. 1Here we assume that s > 0 is small enough so that ‖I − s A_j^T A_j‖_op < 1, which ensures convergence.

3 FedSplit and convergence guarantees We now turn to the description of a framework that allows us to provide a clean characterization of the fixed points of iterative algorithms and to propose algorithms with convergence guarantees. Throughout our development, we assume that each function f_j : R^d → R is convex and differentiable.

3.1 An operator-theoretic view We begin by recalling the consensus formulation (2) of the problem in terms of a block-partitioned vector x = (x_1, . . . , x_m) ∈ (R^d)^m, the function F : (R^d)^m → R given by F(x) := Σ_{j=1}^m f_j(x_j), and the constraint set E := {x | x_1 = x_2 = · · · = x_m}, which is the feasible subspace for problem (2). By appealing to the first-order optimality conditions for the problem (2), it is equivalent to find a vector x ∈ (R^d)^m such that ∇F(x) belongs to the normal cone of the constraint set E, or equivalently such that ∇F(x) ∈ E^⊥. Equivalently, if we define a set-valued operator N_E as N_E(x) := E^⊥ if x_1 = x_2 = · · · = x_m, and ∅ otherwise,   (11) then it is equivalent to find a vector x ∈ (R^d)^m that satisfies the inclusion condition 0 ∈ ∇F(x) + N_E(x),   (12) where ∇F(x) = (∇f_1(x_1), . . . , ∇f_m(x_m)). When the loss functions f_j : R^d → R are convex, both ∇F and N_E are monotone operators on (R^d)^m [1]. Thus, the display (12) is a monotone inclusion problem. Methods for solving monotone inclusions have a long history of study within the applied mathematics and optimization literatures [26, 7]. We now use this framework to develop and analyze algorithms for solving the federated problems of interest.

3.2 Splitting procedures for federated optimization We now describe a method, derived from splitting the inclusion relation, whose fixed points do correspond with global minima of the distributed problem. It is an instantiation of the Peaceman–Rachford splitting [20], which we refer to as the FedSplit algorithm in this distributed setting.

Algorithm 1 [FedSplit] Splitting scheme for solving federated problems of the form (1). Given an initialization x ∈ R^d and proximal solvers prox_update_j : R^d → R^d. Initialize x̄^(1) = z_1^(1) = · · · = z_m^(1) = x. For t = 1, 2, . . .: 1. for j = 1, . . . , m: (a) Local prox step: set z_j^(t+1/2) = prox_update_j(2 x̄^(t) − z_j^(t)); (b) Local centering step: set z_j^(t+1) = z_j^(t) + 2(z_j^(t+1/2) − x̄^(t)); end for. 2. Compute the global average: set x̄^(t+1) = z̄^(t+1) = (1/m) Σ_{j=1}^m z_j^(t+1). end for.

Thus, the FedSplit procedure maintains a parameter vector z_j^(t) ∈ R^d for each device j ∈ [m]. The central server maintains a parameter vector x̄^(t) ∈ R^d, which collects averages of the parameter estimates at each machine. The local update at device j is defined in terms of a proximal solver prox_update_j(·), which is typically an approximate proximal update, prox_update_j(x) ≈ prox_{sf_j}(x), uniformly in x ∈ R^d, for a suitable stepsize s > 0.
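A minimal NumPy sketch of Algorithm 1 for the least-squares instance of Section 2.3 is given below; it is not the authors' implementation, and it assumes exact proximal updates, which have a closed form for quadratic costs. Under these assumptions the iterates should approach the centralized solution x⋆_ls.

```python
import numpy as np

def fedsplit_least_squares(As, bs, s, iters=200):
    """FedSplit (Algorithm 1) with exact proximal steps for f_j(x) = 0.5 * ||A_j x - b_j||^2."""
    m, d = len(As), As[0].shape[1]
    I = np.eye(d)
    # Per-client proximal systems: (I + s A_j^T A_j) x = v + s A_j^T b_j.
    systems = [(I + s * A.T @ A, s * A.T @ b) for A, b in zip(As, bs)]
    z = np.zeros((m, d))   # per-client variables z_j
    x_bar = np.zeros(d)    # server average
    for _ in range(iters):
        for j, (Hj, cj) in enumerate(systems):
            prox = np.linalg.solve(Hj, (2 * x_bar - z[j]) + cj)  # local prox step
            z[j] = z[j] + 2 * (prox - x_bar)                      # local centering step
        x_bar = z.mean(axis=0)                                    # global average
    return x_bar

# Usage: compare against the centralized least-squares solution.
rng = np.random.default_rng(2)
As = [rng.standard_normal((30, 4)) for _ in range(3)]
bs = [rng.standard_normal(30) for _ in range(3)]
x_ls = np.linalg.solve(sum(A.T @ A for A in As), sum(A.T @ b for A, b in zip(As, bs)))
print(np.linalg.norm(fedsplit_least_squares(As, bs, s=0.05) - x_ls))  # should be near zero
```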
We make the sense of this approximation precise when we state our convergence results in Section 3.3. An advantage of FedSplit is that, unlike FedGD and FedProx, it has the correct fixed points for the distributed problem.

Proposition 3. Suppose for some s > 0, prox_update_j(·) = prox_{sf_j}(·) for all j. Suppose that z⋆ = (z⋆_1, . . . , z⋆_m) is a fixed point for the FedSplit procedure, meaning that z⋆_j = z⋆_j + 2(prox_{sf_j}(2 z̄⋆ − z⋆_j) − z̄⋆), for all j ∈ [m].   (13) Then the average x̄⋆ := (1/m) Σ_{j=1}^m z⋆_j is optimal: Σ_{j=1}^m f_j(x̄⋆) = inf_{x ∈ R^d} Σ_{j=1}^m f_j(x).

3.3 Convergence results In this section, we give convergence guarantees for the FedSplit procedure in Algorithm 1 under exact and inexact proximal operator implementations.

Strongly convex and smooth losses We begin by considering the case when the losses f_j : R^d → R are ℓ_j-strongly convex and L_j-smooth. We define ℓ_* := min_{j=1,...,m} ℓ_j, L^* := max_{j=1,...,m} L_j, and κ := L^*/ℓ_*.   (14) Note that κ corresponds to the induced condition number of our federated problem (2). The following result demonstrates that in this setting, our method enjoys geometric convergence to the optimum, even with inexact proximal implementations.

Theorem 1. Consider the FedSplit algorithm with possibly inexact proximal implementations, ‖prox_update_j(z) − prox_{sf_j}(z)‖ ≤ b for all j and all z ∈ R^d,   (15) and with stepsize s = 1/√(ℓ_* L^*). Then for any initialization, the iterates satisfy ‖x̄^(t+1) − x⋆‖ ≤ (1 − 2/(√κ + 1))^t ‖z^(1) − z⋆‖/√m + (√κ + 1) b, for all t = 1, 2, . . ..   (16)

We now discuss some aspects of Theorem 1. Exact proximal evaluations: In the special (albeit unrealistic) case when the proximal evaluations are exact, the uniform bound (15) holds with b = 0. Consequently, given some initialization z^(1), if we want ε-accuracy, meaning ‖x̄^(T) − x⋆‖ ≤ ε, we see that this occurs as soon as T exceeds T(ε, κ) = O(1){√κ log(‖z^(1) − z⋆‖/(ε√m))} iterations of the overall procedure. Here O(1) denotes a universal constant.

Approximate proximal updates by gradient steps: In practice, the FedSplit algorithm will be implemented using an approximate prox-solver. Recall that the proximal update at device j at round t takes the form: prox_{sf_j}(x_j^(t)) = argmin_{u ∈ R^d} h_j(u), where h_j(u) := s f_j(u) + (1/2)‖u − x_j^(t)‖²_2. A natural way to compute an approximate minimizer is to run e rounds of gradient descent on the function h_j. Concretely, at round t, we initialize the gradient method with the initial point u^(1) = x_j^(t), and run gradient descent on h_j with a stepsize α, thereby generating the sequence u^(t+1) = u^(t) − α ∇h_j(u^(t)) = u^(t) − α(s ∇f_j(u^(t)) + u^(t) − x_j^(t)).   (17) We define prox_update_j(x_j^(t)) to be the output of this procedure after e steps.

Corollary 1 (FedSplit convergence with inexact proximal updates). Consider the FedSplit procedure run with proximal stepsize s = 1/√(ℓ_* L^*), and using approximate proximal updates based on e rounds of gradient descent with stepsize α = (1 + s(ℓ_* + L^*)/2)^{−1}, initialized (in round t) at the previous iterate x_j^(t). Then the bound (15) holds at round t with error at most b ≤ (1 − 1/(√κ + 1))^e ‖x_j^(t) − prox_{sf_j}(x_j^(t))‖_2.   (18)

Given the exponential decay in the number of rounds e exhibited in the bound (18), in practice it suffices to take a relatively small number of gradient steps. For instance, in our experiments to be reported in Section 4, we find that e = 10 suffices to match the exact proximal updates. This inexact proximal update could also be implemented with a gradient method and backtracking line search [5].
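The approximate proximal update of Corollary 1 can be written generically given a gradient oracle for f_j. The sketch below (illustrative, not the authors' code) runs e gradient steps on h_j(u) = s f_j(u) + (1/2)‖u − x‖², with the stepsizes s and α suggested by Theorem 1 and Corollary 1 for a single quadratic client; the error against the exact prox should shrink geometrically in e.

```python
import numpy as np

def prox_update(grad_f, x, s, alpha, e):
    """Approximate prox_{sf}(x) by e gradient-descent steps on
    h(u) = s * f(u) + 0.5 * ||u - x||^2, initialized at u = x."""
    u = x.copy()
    for _ in range(e):
        u = u - alpha * (s * grad_f(u) + (u - x))  # u - alpha * grad h(u), as in (17)
    return u

# Example: quadratic client cost f(u) = 0.5 * ||A u - b||^2, so grad f(u) = A^T (A u - b).
rng = np.random.default_rng(3)
A, b = rng.standard_normal((30, 4)), rng.standard_normal(30)
grad_f = lambda u: A.T @ (A @ u - b)

ell, L = np.linalg.eigvalsh(A.T @ A)[[0, -1]]   # strong convexity / smoothness constants
s = 1.0 / np.sqrt(ell * L)                      # proximal stepsize from Theorem 1
alpha = 1.0 / (1.0 + s * (ell + L) / 2.0)       # gradient stepsize from Corollary 1
x = np.zeros(4)
exact = np.linalg.solve(np.eye(4) + s * A.T @ A, x + s * A.T @ b)
for e in (1, 5, 10):
    print(e, np.linalg.norm(prox_update(grad_f, x, s, alpha, e) - exact))  # error shrinks with e
```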
Smooth but not strongly convex losses We now consider the case when the f_j : R^d → R are L_j-smooth and convex, but not necessarily strongly convex. In this case, the consensus objective F(z) = Σ_{j=1}^m f_j(z_j) is an L^*-smooth function on the product space (R^d)^m.2 Our approach to solving such a problem is to apply the FedSplit procedure to a suitably regularized version of the original problem. More precisely, given some initial vector x^(1) ∈ R^d and regularization parameter λ > 0, let us define the function F_λ(z) := Σ_{j=1}^m { f_j(z_j) + (λ/2m)‖z_j − x^(1)‖² }.   (19) We see that F_λ : (R^d)^m → R is a λ-strongly convex and L^*_λ = (L^* + λ)-smooth function. The next result shows that for any ε > 0, minimizing the function F_λ up to an error of order ε, using a carefully chosen λ, yields an ε-cost-suboptimal minimizer of the original objective function F.

Theorem 2. Given some λ ∈ (0, ε/(m‖x^(1) − x⋆‖²)) and any initialization x^(1) ∈ R^d, suppose that we run the FedSplit procedure (Algorithm 1) on the regularized objective F_λ using exact prox steps with stepsize s = 1/√(λ L^*_λ). Then the FedSplit algorithm outputs a vector x̂ ∈ R^d satisfying F(x̂) − F⋆ ≤ ε after exceeding Õ(√(L^* ‖x^(1) − x⋆‖²/ε)) iterations.3

We remark that this faster convergence rate of Õ(t^{−2}) is nearly optimal for first-order algorithms [18], and to our knowledge such results were not known for operator splitting-based procedures prior to this work.

4 Experiments In this section, we present numerical results for FedSplit on some convex federated optimization problem instances. We include additional details on these simulations in Section B of the supplement.

Logistic regression We begin with federated binary classification, where we solve minimize Σ_{j=1}^m Σ_{i=1}^{n_j} log(1 + exp(−b_ij a_ij^T x)),   (20) with variable x ∈ R^d. We generate the problem data {(a_ij, b_ij)} ⊂ R^d × {±1} synthetically; see Section B.2.1 in the supplement for details. We also use FedSplit to solve a multiclass classification problem with K classes. Here we solve minimize Σ_{j=1}^m Σ_{i=1}^{n_j} Σ_{k=1}^K log(1 + exp(−b_ijk a_ij^T x_k)) + (λ/2) Σ_{k=1}^K ‖x_k‖²,   (21) with variables x_1, x_2, . . . , x_K ∈ R^d, regularization parameter λ > 0, and sample size N = Σ_{j=1}^m n_j. Here, the problem data {(a_ij, b_ij)} ⊂ R^d × {±1}^K are images and multiclass labels from the FEMNIST dataset in the LEAF framework [6]. This dataset was proposed as a benchmark for federated optimization; there are N = 805,263 images, m = 3,550 clients, and K = 62 classes. The problem dimension is d = 6,875; see Section B.2.2 in the supplement for additional details.

In Figure 2, we present numerical results on problems (20) and (21). We implement FedSplit with exact proximal operators and inexact implementations with a constant number of gradient steps e ∈ {1, 5, 10}. For comparison, we implemented a federated gradient method as previously described (4). As shown in Figure 2(a), both FedGD with e = 1 and the FedSplit procedure exhibit linear convergence rates. Using inexact proximal updates with the FedSplit procedure preserves the linear convergence up to the error floor introduced by the inexactness of the updates. In this case, the inexact proximal updates with e = 10—that is, performing 10 local updates per round of global communication—suffice to track the exact FedSplit procedure up to an accuracy below 10^−6. 2To avoid degeneracies, we assume x ↦ Σ_{j=1}^m f_j(x) is bounded below and attains its minimum. 3The Õ(·) notation hides constant and polylogarithmic factors that are not dominant.
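For the logistic-regression experiments, the same inexact proximal scheme applies once the per-client loss and gradient are available. The sketch below is a toy illustration with synthetic binary data (not the FEMNIST setup of problem (21)); it shows the local objective of problem (20) for one client and an approximate prox step built from a few gradient iterations, with stepsizes chosen for illustration only.

```python
import numpy as np

def logistic_loss_grad(x, A, b):
    """Loss and gradient of f_j(x) = sum_i log(1 + exp(-b_i * a_i^T x))."""
    margins = b * (A @ x)
    loss = np.sum(np.logaddexp(0.0, -margins))
    grad = A.T @ (-b * (1.0 / (1.0 + np.exp(margins))))  # sum_i -sigma(-m_i) b_i a_i
    return loss, grad

def local_prox_step(x_in, A, b, s, alpha, e):
    """e gradient steps on h(u) = s * f_j(u) + 0.5 * ||u - x_in||^2 (inexact prox)."""
    u = x_in.copy()
    for _ in range(e):
        _, g = logistic_loss_grad(u, A, b)
        u = u - alpha * (s * g + (u - x_in))
    return u

# Tiny synthetic client: 50 labeled examples in dimension 5.
rng = np.random.default_rng(4)
A = rng.standard_normal((50, 5))
b = np.sign(A @ rng.standard_normal(5) + 0.1 * rng.standard_normal(50))
print(local_prox_step(np.zeros(5), A, b, s=0.1, alpha=0.5, e=10))
```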
1. What is the main contribution of the paper regarding federated learning? 2. What are the strengths and weaknesses of the proposed algorithm FedSplit compared to other classical methods like FedAvg and FedProx? 3. How does the reviewer assess the convergence properties of FedSplit, particularly in comparison with accelerated gradient descent (AGD)? 4. Do you have any concerns or suggestions regarding the paper's claims and comparisons with past procedures? 5. Are there any limitations or trade-offs in the proposed method that should be acknowledged and addressed?
Summary and Contributions Strengths Weaknesses
Summary and Contributions Main merit: The paper introduces a novel local algorithm for federated learning -- FedSplit. While classical FL methods such as FedAvg and FedProx do not have a correct fixed point, FedSplit has the correct fixed points despite a quite simple structure. I believe this is a very nice contribution and should be appreciated by the community. _______________ After the rebuttal _________________ The authors did not disagree with the points I raised. At the same time, the authors have proposed reasonable fixes to these issues: mention SCAFFOLD, be a little more careful about taking credit for noticing the wrong fixed points of FedAvg/FedProx, and mention that the rate of FedSplit is no better than the rate of AGD in the heterogeneous setting (but argue the benefits of FedSplit in the iid setup). Just one last comment: I was not persuaded by the comment that FedProx is preferable to AGD in terms of inexactness; I believe more details would need to be given to make this argument solid (I understand there is no space for this in the rebuttal). 1) if one has access to the exact gradients, AGD can be performed directly, while FedSplit still needs to solve the local subproblem inexactly, and 2) AGD can still work well under inexact updates when done correctly, see https://arxiv.org/abs/1109.2415 for example (one should make a more detailed argument about inexactness there and Thm 1,3 from your paper). Given the above (I read the other reviews too), I stand by my initial score -- 6. Strengths Explained in the contributions. Weaknesses Main criticism: 1) The paper claims two main contributions, one of which is "The first contribution of this paper is to analyze some past procedures, and show that even in the favorable setting of deterministic updates (i.e., no stochastic approximation used), these methods typically fail to preserve solutions of the original optimization problem as fixed points" I believe the text above is misleading. In fact, it was already well known that the "past procedures" do not have the correct fixed points; one alternative approach to deal with such an issue was to incorporate the "drift"; see https://arxiv.org/abs/1910.06378 for example. Therefore, I believe it would be more appropriate not to claim the contribution for showing the wrong fixed points of the local algorithms. 2) While FedSplit is an honest local algorithm with a correct fixed point, the convergence properties of FedSplit are strictly worse than those of plain accelerated gradient descent (AGD). Specifically, in the strongly convex case (the same argument applies for weakly convex), the communication complexity of FedSplit is O(sqrt(kappa)log(1/epsilon)), which is identical to the communication complexity of AGD. In fact, AGD is favorable (in terms of the rate) as it requires a single gradient evaluation instead of evaluating the prox with high enough precision so that the inexactness does not drive the rate. To be fair, local methods do not, in general, outperform their non-local counterparts in terms of communication even if the fixed point is correct; see https://arxiv.org/abs/2006.04735 for example (no need to cite it as the paper appeared online only recently) -- in fact, one might argue that plain AGD is optimal in such a case (see https://arxiv.org/abs/2005.10675 for example). For that reason, one might not hope for a better rate of FedSplit. To "fix" this issue, I suggest the following: i) consider the data-homogeneous setting (i.e., identical data across nodes).
In such a case, FedSplit should strictly outperform AGD (I presume it would even converge in a single iteration in such a case); ii) mention transparently in the paper that FedSplit is not favorable over AGD in terms of the theory for the heterogeneous setting.
NIPS
1. What is the focus of the paper regarding federated distributed optimization? 2. What are the strengths of the proposed algorithm, particularly in addressing the limitations of previous methods? 3. What are the weaknesses of the paper, especially regarding the algorithm's derivation and its connection to multi-device communication and failures?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper considers federated distributed optimization under convexity assumptions. The paper first identifies that the fixed points of some federated optimization algorithms may not be stationary points of the original optimization problem. To fix this, the paper introduces a new algorithm whose fixed points correspond to the optima of the original problem. A convergence rate is also established for the proposed algorithm. Strengths The paper shows that the federated SGD and federated proximal algorithms have fixed points that do not correspond to zeros of the sum of gradients of the consensus problem, even in the deterministic case. An example is presented to verify the claim. The paper then proposes the FedSplit algorithm based on an operator-splitting scheme for the additive structure of the problem. The algorithm performs a local proximal update of the local parameters and aggregates the local parameters to update the server. A standard convergence rate is established for the algorithm using standard tools. Weaknesses The paper could add more details on the derivation of Algorithm 1 and give some intuition about why the proposed algorithm fixes the issue. The FedSplit method reads more like a deterministic distributed optimization algorithm. The connections to the multi-device communication and failure issues mentioned in the paper are weak.
NIPS
Title SCONE: Surface Coverage Optimization in Unknown Environments by Volumetric Integration Abstract Next Best View computation (NBV) is a long-standing problem in robotics, and consists in identifying the next most informative sensor position(s) for reconstructing a 3D object or scene efficiently and accurately. Like most current methods, we consider NBV prediction from a depth sensor like Lidar systems. Learningbased methods relying on a volumetric representation of the scene are suitable for path planning, but have lower accuracy than methods using a surface-based representation. However, the latter do not scale well with the size of the scene and constrain the camera to a small number of poses. To obtain the advantages of both representations, we show that we can maximize surface metrics by Monte Carlo integration over a volumetric representation. In particular, we propose an approach, SCONE, that relies on two neural modules: The first module predicts occupancy probability in the entire volume of the scene. Given any new camera pose, the second module samples points in the scene based on their occupancy probability and leverages a self-attention mechanism to predict the visibility of the samples. Finally, we integrate the visibility to evaluate the gain in surface coverage for the new camera pose. NBV is selected as the pose that maximizes the gain in total surface coverage. Our method scales to large scenes and handles free camera motion: It takes as input an arbitrarily large point cloud gathered by a depth sensor as well as camera poses to predict NBV. We demonstrate our approach on a novel dataset made of large and complex 3D scenes. 1 Introduction Next Best View computation (NBV) is a long-standing problem in robotics [6, 29], which consists in identifying the next most informative sensor position(s) for reconstructing a 3D object or scene efficiently and accurately. Typically, a position is evaluated on how much it can increase the total coverage of the scene surface. Few methods have relied on Deep Learning (DL) for the NBV problem, even though DL can provide useful geometric prior to obtain a better prediction of the surface coverage [33, 16, 25]. Like most current methods, we consider NBV prediction from a depth sensor. Existing methods based on a depth sensor rely either on a volumetric or on a surface-based representation of the scene geometry. Volumetric mapping-based methods can compute collision efficiently, which is practical for path planning in real case scenarios [20, 23, 24, 14, 1, 7]. However, they typically rely on voxels or a global embedding [12, 3, 21, 22, 27, 5] for the scene, which results in poor accuracy in reconstruction and poor performance in NBV selection for complex 3D objects. On the contrary, surface mapping-based methods that process directly a dense point cloud of the surface as gathered by the depth sensor are efficient for NBV prediction with high-detailed geometry. They are however limited to very specific cases, generally a single, small-scale, isolated object with the camera constrained to stay on a sphere centered on the object [15, 24, 8, 4, 13, 14, 14, 33, 16, 25]. Thus, they cannot be applied to the exploration of 3D scenes. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). As shown in Figure 1, we introduce a volumetric DL method that efficiently identifies NBVs for unknown large-scale 3D scenes in which the camera can move freely. 
Instead of representing the scene with a single global embedding, we choose to use a grid of local point clouds, which scales much better to large and complex 3D scenes. We show how to learn to predict the visibility of unseen 3D points in all directions given an estimate of the 3D scene geometry. We can then integrate these visibilities in the direction of any camera by using a Monte Carlo integration approach, which allows us to optimize the camera pose to find the next most informative views. We call our method SCONE, for Surface Coverage Optimization in uNknown Environments. In this respect, we introduce a theoretical framework to translate the optimization of surface coverage gain, a surface metric on manifolds that represents the ability of a camera to increase the visible area of the surface, into an optimization problem on volumetric integrals. Such a formalism allows us to use a volumetric mapping of geometry, which is convenient not only to scale the model to exploration tasks and scene reconstruction, but also to make probabilistic predictions about geometry. In particular, given a partial point cloud gathered by a depth sensor, our model learns to make predictions with virtually infinite resolution about the occupancy probability in the scene volume by learning a deep implicit function [17, 28, 18, 30, 31, 19]. Such predictions scale to very large point clouds since they depend only on neighborhood geometry. Then, our model leverages a self-attention mechanism [26] to predict occlusions and compute informative functions mapped on a continuous sphere that represent visibility likelihood of points in all directions. The occupancy probability field is finally used as a basis to sample points and compute Monte Carlo integrals of visibility scores. Since NBV learning-based methods are mostly limited to single, small-scale, centered object reconstruction in literature, we first compare the performance of our model to the state of the art on the ShapeNet dataset [2], following the protocol introduced in [33]. While our method was designed to handle more general frameworks such as 3D scene reconstruction and continuous cameras poses in the scene, it outperforms the state of the art for dense reconstruction of objects when the camera is constrained to stay on a sphere centered on the object. We then conduct experiments in large 3D environments using a simple planning algorithm that builds a camera trajectory online by iteratively selecting NBVs with SCONE. Since, to the best of our knowledge, we propose the first supervised Deep Learning method for such free 6D motion of the camera, we created a dataset made of several large-scale scenes under the CC License for quantitative evaluation. We made our code and this dataset available for allowing comparison of future methods with SCONE on our project webpage: https://github.com/Anttwo/SCONE. 2 Approach Let us consider a depth sensor exploring a 3D scene, at time step t ≥ 0. Using its observations at discrete time steps j with 0 ≤ j ≤ t, the sensor has gathered a cloud of points distributed on the surface of the scene. We refer to this cloud as the partial surface point cloud, as it describes the part of the surface seen –or covered– by the sensor in the scene. To solve the NBV problem, we want to identify a camera pose that maximizes the coverage of previously unseen surface. 
To this end, our method takes as input the partial surface point cloud as well as the history of 6D camera poses at time steps j ≤ t (i.e., all previous positions and orientations of the sensor). Our approach is built around two successive steps, each relying on a dedicated neural module as shown in Figure 2: First, we make a prediction about the geometry of the scene, to estimate where the uncovered points could be. Then, we predict the visibility gain of uncovered points from any new camera pose. The NBV is finally selected as the camera with the most new visible points in its field of view. Although we seek to maximize a surface metric such as surface coverage gain, our method relies on a volumetric representation of the object or scene to reconstruct. In this regard, we show that we can maximize a surface metric by integrating over a volumetric representation with virtually infinite resolution. As we argue below, such a representation is not only useful for collision-free robot navigation but is also much more efficient for optimizing surface coverage gain than the alternative of identifying the 3D points lying on the surface, which is difficult in an unknown and occluded environment. More exactly, we derive a volumetric integral that is asymptotically proportional to the surface coverage gain metric, which is enough for our maximization problem. In the following subsection, we first present this derivation considering a volume represented as a perfect binary occupancy map. We then present the two neural modules of SCONE and explain how we use them to predict all terms involved in the volumetric integral, by leveraging neural models and self-attention mechanisms to predict occupancy and occlusions. 2.1 Maximizing Surface Coverage Gain on a Binary Occupancy Map Here, we consider a binary occupancy map σ : R³ → {0, 1} representing the volume of the target object or scene. We will relax our derivations to a probabilistic occupancy map when looking for the next best view in the next subsections. From the binary map σ, we can define the set χ of occupied points, i.e., the set of points x verifying σ(x) = 1, its surface as the boundary ∂χ, and the surface coverage C(c) achieved by a camera pose c = (c_pos, c_rot) ∈ C := R³ × SO(3) as the following surface integral: C(c) = (1/|∂χ|_S) ∫_{∂χ} v_c(x) dx, (1) where |∂χ|_S := ∫_{∂χ} dx is the area of the surface ∂χ, χ_c ⊂ χ is the subset of occupied points contained in the field of view of camera c, and v_c(x) is the visibility of point x from camera c, i.e., v_c(x) = 1_{χ_c}(x) · 1(σ({(1 − λ)c_pos + λx such that λ ∈ [0, 1)}) = {0}). Since we want to maximize the total coverage of the surface by all cameras during reconstruction, we are actually interested in maximizing the coverage of previously unobserved points rather than the absolute coverage. Given a set of previous camera poses, which we call the camera history H ⊂ C, and a 3D point x, we introduce the knowledge indicator γ_H : R³ → {0, 1} such that γ_H(x) = max{v_c(x) : c ∈ H}. We then define the coverage gain G_H(c) of camera pose c as: G_H(c) = (1/|∂χ|_S) ∫_{∂χ} ν_c^H(x) dx, (2) where ν_c^H(x) = (1 − γ_H(x)) · v_c(x) is the visibility gain of x in χ_c, for camera history H. This function is equal to 1 iff x is visible at pose c but was not observed by any camera pose in H. Given a camera history H, our goal is to identify a pose c that maximizes G_H(c). Given an occupancy map σ, we could evaluate the integral in Eq. (2) by simply sampling points p on the surface ∂χ.
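As a concrete illustration of Eqs. (1) and (2), the sketch below (our own toy example, not the SCONE implementation) estimates the coverage gain G_H(c) by Monte Carlo sampling of points on a known surface. The sphere geometry and the normal-based `visible` test are stand-ins for the ray-casting visibility v_c, and the field-of-view indicator is ignored.

```python
import numpy as np

def sample_sphere_surface(n, radius=1.0, seed=0):
    """Uniformly sample n points on a sphere of the given radius (a stand-in for the surface)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal((n, 3))
    return radius * v / np.linalg.norm(v, axis=1, keepdims=True)

def visible(points, cam_pos):
    """Toy visibility oracle for a sphere centered at the origin: a surface point is visible
    iff its outward normal faces the camera. A real system would ray-cast against the scene."""
    normals = points / np.linalg.norm(points, axis=1, keepdims=True)
    to_cam = cam_pos[None, :] - points
    to_cam = to_cam / np.linalg.norm(to_cam, axis=1, keepdims=True)
    return np.sum(normals * to_cam, axis=1) > 0.0

def coverage_gain(surface_pts, cam_pos, history_positions):
    """Monte Carlo estimate of G_H(c): fraction of surface points newly seen from cam_pos."""
    seen_before = np.zeros(len(surface_pts), dtype=bool)
    for h in history_positions:
        seen_before |= visible(surface_pts, h)
    newly_seen = visible(surface_pts, cam_pos) & ~seen_before
    return newly_seen.mean()

pts = sample_sphere_surface(20_000)
history = [np.array([3.0, 0.0, 0.0])]
for candidate in [np.array([-3.0, 0.0, 0.0]), np.array([3.0, 0.1, 0.0])]:
    print(candidate, coverage_gain(pts, candidate, history))
```

A camera opposite the single historical view recovers roughly half of the surface, while a near-duplicate pose gains almost nothing, which is the behavior the coverage gain metric is meant to capture.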
However, in practice we will estimate the occupancy map iteratively in an unknown environment, and we will only have access to an occupancy probability distribution. Extracting surface points from such a probabilistic occupancy map gives results that can differ a lot from the true surface: Indeed, in 3D, a surface is a very concentrated set with zero measure, and extracting it requires high confidence to give meaningful results. Instead of extracting surface points, we extend the properties of such points to a small spherical neighborhood of the surface. This will allow us to replace the maximization of a surface metric by the maximization of a volumetric integral, which is much easier to compute from our volumetric representation. More exactly, we assume there exists a quantity µ_0 > 0 such that any volume point in the spherical neighborhood T(∂χ, µ_0) := {p ∈ R³ | ∃x ∈ ∂χ, ∥x − p∥_2 < µ_0} keeps the same visibility property as its neighboring surface points. With such a hypothesis, we give a thickness to the surface, which makes sense when working with discrete points sampled in space to approximate a volume. To this end, we introduce a new visibility gain function g_c^H to adapt the definition of the former visibility gain ν_c^H. For any 0 < µ < µ_0: g_c^H(µ; x) = 1 if there exist x_0 ∈ ∂χ and λ < µ such that x = x_0 + λN(x_0) and ν_c^H(x_0) = 1, and g_c^H(µ; x) = 0 otherwise, (3) where N is the inward normal vector field. With further regularity assumptions about the surface that are detailed in the appendix, such quantities are well defined. Assuming µ_0 is small enough, the following explicit formula translates the surface approach into a volume integral for any camera pose c ∈ C and µ < µ_0: ∫_{T(∂χ,µ)} g_c^H(µ; x) dx = ∫_{∂χ} ∫_{−µ}^{µ} g_c^H(µ; x_0 + λN(x_0)) det(I − λW_{x_0}) dλ dx_0, (4) with W_{x_0} the Weingarten map at x_0, that is, the Hessian of the signed distance function on the boundary of χ, which is continuous on the scene surface, assumed to be compact [10]. By developing the determinant, we find that det(I − λW_{x_0}) = 1 + λ b(λ, x_0), where b is a bounded function on the compact space [−µ, µ] × ∂χ. Moreover, for all x_0 ∈ ∂χ, we have by definition g_c^H(µ; x_0 + λN(x_0)) = g_c^H(µ; x_0) = ν_c^H(x_0) when 0 ≤ λ < µ, and g_c^H(µ; x_0 + λN(x_0)) = 0 when −µ < λ < 0. It follows that, for every 0 < µ < µ_0: ∫_{T(∂χ,µ)} g_c^H(µ; x) dx = ∫_{∂χ} ∫_{0}^{µ} g_c^H(µ; x_0)(1 + λ b(λ, x_0)) dλ dx_0 = µ ∫_{∂χ} g_c^H(µ; x_0) dx_0 + ∫_{∂χ} ∫_{0}^{µ} λ g_c^H(µ; x_0) b(λ, x_0) dλ dx_0 = µ |∂χ|_S G_H(c) + ∫_{∂χ} ∫_{0}^{µ} λ g_c^H(µ; x_0) b(λ, x_0) dλ dx_0. (5) The complete derivations are given in the appendix. The function g_c^H(µ; ·) is naturally equal to 0 for every point outside T(∂χ, µ). Moreover, considering the regularity assumptions we made on the compact surface, if µ_0 is chosen small enough then for all x_0 ∈ ∂χ and µ < µ_0, the point x_0 + µN(x_0) is located inside the volume, so that ∫_{T(∂χ,µ)} g_c^H(µ; x) dx = ∫_{χ} g_c^H(µ; x) dx. Since |g_c^H(µ; ·)| ≤ 1 for all c ∈ C and µ > 0, we deduce the following theorem by bounding |b| on [−µ, µ] × ∂χ: Theorem 1. Under the previous regularity assumptions on the volume χ of the scene and its surface ∂χ, there exist µ_0 > 0 and M > 0 such that for all µ < µ_0 and any camera c ∈ C: |(1/|χ|_V) ∫_{χ} g_c^H(µ; x) dx − µ (|∂χ|_S/|χ|_V) G_H(c)| ≤ M µ², (6) where |χ|_V is the volume of χ. This theorem states that, asymptotically for small values of µ, the volume integral ∫_{χ} g_c^H(µ; x) dx becomes proportional to the surface coverage gain value G_H(c) that we want to maximize.
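As a quick numerical sanity check of the thickened-surface argument (our own toy example on a sphere, where every quantity is known in closed form), the volume of the inner shell of points within distance µ of the surface, divided by µ, approaches the surface area |∂χ|_S as µ shrinks; this is exactly the first-order term that Theorem 1 isolates.

```python
import numpy as np

def inner_shell_volume_mc(radius=1.0, mu=0.01, n=1_000_000, seed=0):
    """Monte Carlo estimate of the volume of {x inside the ball : dist(x, surface) < mu}."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-radius, radius, size=(n, 3))      # sample the bounding box of the ball
    r = np.linalg.norm(pts, axis=1)
    in_shell = (r < radius) & (r > radius - mu)
    return in_shell.mean() * (2.0 * radius) ** 3

radius = 1.0
area = 4.0 * np.pi * radius**2                            # |∂χ|_S for the sphere, in closed form
for mu in [0.2, 0.1, 0.05, 0.01]:
    vol = inner_shell_volume_mc(radius=radius, mu=mu)
    print(f"mu = {mu:5.2f}   shell volume / mu = {vol / mu:7.4f}   surface area = {area:7.4f}")
```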
This result is convenient since a volume integral can be easily approximated with Monte-Carlo integration on the volume and a uniform dense sampling based on the occupancy function σ. Consequently, the more points we sample in the volume, the smaller µ we can choose, and the closer maximizing the volume integral of spherical neighborhood visibility gain gets to maximizing the surface coverage gain. 2.2 Architecture To approximate the volumetric integral in Equation 6 for any camera pose c, we need to compute χc as well as function gHc . In this regard, we need to compute both the occupancy map and the visibility gains of points for any camera pose. Since the environment is not perfectly known, we predict each one of these functions with a dedicated neural module. The first module takes as input the partial point cloud gathered by the depth sensor to predict the occupancy probability distribution. The second module takes as input a camera pose, a feature representing camera history as well as a predicted sequence of occupied points located in the camera field of view to predict visibility gains. Predicting the occupancy probability field σ̂. The occupancy function σ is not known perfectly in practice. To represent occupancy, most volumetric NBV methods rely on memory-heavy representations (like an occupancy 3D-grid or a volumetric voxelization), that are generally less efficient for encoding fine details and optimizing dense reconstructions, and will necessarily downgrade the resolution compared to a point cloud directly sampled on the surface. To address this issue while still working with a volumetric representation of the scene, we use a deep implicit function to encode the 3D mapping of occupancy efficiently. Such a function has a virtually infinite resolution, and prevents us from saving a large 3D grid in memory. We thus approximate σ with the first module of our model, which consists of a neural network σ̂ : P(R3) ×R3 → [0, 1] that takes as inputs a partial surface point cloud PH ⊂ R3 and query points x ∈ R3, and outputs the occupancy probability σ̂(PH ;x) of x. PH is obtained by merging together all depth maps previously captured from cameras in H . As shown in Figure 3, rather than using a direct encoding of the global shape of PH as input to σ̂, we take inspiration from [9] to achieve scalability and encode PH using features computed from the points’ neighborhoods. The difference with [9] is that we rely on a multiscale approach: For a given query 3D point x, these features are computed from the k nearest neighbors of x computed at different scales. For each scale s, we downsample point cloud PH around x into a sparser point cloud P (s)H before recomputing the nearest neighbors p (s) i (x), i = 1, ..., k of x: In this way, the size of the neighborhood increases with scale. Next, for each value of s, we use small attention units [26, 11] on the sequence of centered neighborhood points (p(s)1 (x)− x, ..., p (s) k (x)− x) and apply pooling operations to encode each sequence of k neighbors into a single feature that describes the local geometry for the corresponding scale. We finally concatenate these different scale features with another uncentered global feature as well as the query point x, and feed them to an MLP to predict the occupancy probability. The last global feature aims to provide really coarse information about the geometry and the location of x in the scene. 
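The multiscale neighborhood encoding can be sketched as follows. This is a minimal illustration under our own assumptions (random downsampling in place of the paper's sparsification scheme, and no attention units or learned features); it only shows how the k nearest neighbors of a query point, centered on the query, are gathered at several scales before being fed to the occupancy MLP.

```python
import numpy as np

def downsample(points, keep_fraction, seed=0):
    """Random downsampling, a simple stand-in for the sparsification of P_H used in the paper."""
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(len(points) * keep_fraction))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

def knn_centered(points, query, k):
    """Return the k nearest neighbors of `query`, expressed relative to the query point."""
    d = np.linalg.norm(points - query, axis=1)
    idx = np.argsort(d)[:k]
    return points[idx] - query

def multiscale_neighborhoods(partial_cloud, query, k=16, keep_fractions=(1.0, 0.25, 0.0625)):
    """One centered k-NN neighborhood per scale; coarser scales use a sparser cloud,
    so the effective neighborhood radius grows with the scale index."""
    return [knn_centered(downsample(partial_cloud, f), query, k) for f in keep_fractions]

cloud = np.random.default_rng(1).uniform(-1.0, 1.0, size=(5_000, 3))  # partial surface point cloud
query = np.array([0.1, 0.0, 0.2])                                     # query point x
for s, nb in enumerate(multiscale_neighborhoods(cloud, query, k=8)):
    print(f"scale {s}: neighborhood shape {nb.shape}, mean radius {np.linalg.norm(nb, axis=1).mean():.3f}")
```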
This model scales well to large scenes: Adding points from distant views to the current partial point cloud does not change the local state of the point cloud. To avoid computing neighborhoods on the entire point cloud when reconstructing large scenes, we partition the space into cells in which we store the points in PH . Given a query point x, we only use the neighboring cells to compute p (s) i (x). Predicting the visibility gain gHc . To maximize surface coverage gain, we need to compute the volumetric integral of visibility gain functions gHc . We do this again by Monte Carlo sampling, however, in unknown environments we cannot compute explicitly occlusions to derive visibility gain functions gHc since the geometry, represented as a point cloud, is partially unknown and sparser than a true surface. We thus train the second module of our model to predict visibility gain functions by leveraging a self-attention mechanism that helps to estimate occlusion effects in the point cloud PH . In particular, for any camera pose c ∈ C and 3D point x ∈ χc, the second module derives its prediction of visibility gains from three core features: (i) The predicted probability σ̂(P ;x) of x to be occupied, (ii) the occlusions on x by the subvolume χc and (iii) the camera history H . To feed all this information to our model in an efficient way, we follow the pipeline presented in Figure 4. The model starts by using the predicted occupancy probability function σ̂ to sample 3D points in the volume χ. These samples will be used for Monte Carlo integration. We refer to these points as proxy points as we use them to encode the volume in the camera field of view, i.e., in a pyramidal frustum view. We write χ̂ as the discrete set of sampled proxy points, and χ̂c as the set of proxy points located in the field of view of the camera c. We first encode these proxy points individually by applying a small MLP on their 3D coordinates and their occupancy probability value concatenated together. Then, our model processes the sequence of these encodings with a self-attention unit to handle occlusion effects of subvolume χc on every individual point. Note there is no pooling operation on the output of this unit: The network predicts per-point features and does not aggregate predictions, since we do it ourselves with Monte Carlo integration. Next, for each proxy point x ∈ χ̂ , we compute an additional feature hH(x) that encodes the history of camera positions H with respect to this point as a spherical mapping: It consists in the projection on a sphere centered on x of all camera positions for which x was in the field of view. These features are concatenated to the outputs of the self-attention unit. Our model finally uses an MLP on these features to predict the entire visibility gain functions of every point x as a vector of coordinates in the orthonormal basis of spherical harmonics. With such a formalism, the model is able to compute visibility gains for points inside a subvolume in all directions with a single forward pass. In this regard, we avoid unnecessary computation and are able to process a large number of cameras in the same time when they share the same proxy points in their field of view (e.g., reconstruction of a single object centered in the scene, where χ̂c = χ̂ for all c, or when several cameras observe the same part of the 3D scene, i.e., χ̂c = χ̂′ ⊂ χ̂ for several c). 
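To make the aggregation concrete, the sketch below (our own code; the spherical-harmonic coefficients are random placeholders standing in for the network's predictions, and the frustum test is reduced to a simple range check) evaluates degree-0 and degree-1 real spherical harmonics in the direction linking the camera to each proxy point and averages the resulting visibility gains over the proxy points in view, which is the aggregation formalized in Eq. (8) below.

```python
import numpy as np

def real_sh_deg1(d):
    """Real spherical harmonics of degrees 0 and 1 at unit directions d of shape (n, 3).
    Columns are ordered [Y_0^0, Y_1^{-1}, Y_1^0, Y_1^1]."""
    x, y, z = d[:, 0], d[:, 1], d[:, 2]
    c0 = 0.5 * np.sqrt(1.0 / np.pi)
    c1 = np.sqrt(3.0 / (4.0 * np.pi))
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

def camera_score(proxy_pts, sh_coeffs, cam_pos, max_range=5.0):
    """Aggregate predicted per-point visibility gains into a per-camera score:
    evaluate each point's spherical-harmonic expansion in the direction from the camera
    to the point, and average over the proxy points in view. The range check is a crude
    stand-in for the true frustum test."""
    diff = proxy_pts - cam_pos[None, :]
    dist = np.linalg.norm(diff, axis=1)
    dirs = diff / dist[:, None]
    gains = np.sum(sh_coeffs * real_sh_deg1(dirs), axis=1)   # per-point visibility gain
    in_view = dist < max_range
    return np.mean(np.where(in_view, gains, 0.0))

rng = np.random.default_rng(0)
proxy_pts = rng.uniform(-1.0, 1.0, size=(1024, 3))   # points sampled from the occupancy field
sh_coeffs = rng.uniform(0.0, 1.0, size=(1024, 4))    # placeholder for the network's predictions
for cam in [np.array([2.0, 0.0, 0.0]), np.array([0.0, 0.0, 2.0])]:
    print(cam, camera_score(proxy_pts, sh_coeffs, cam))
```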
Formally, if we denote by Y_l^m : S² → R the real spherical harmonic of rank (l, m) and by ϕ_l^m(χ̂_c; x, h_H(x)) the predicted coordinate of rank (l, m) for proxy point x ∈ χ̂_c, with attention to the subset χ̂_c and camera history feature h_H(x), the visibility gain of point x in direction d ∈ S² is defined as Σ_{l,m} ϕ_l^m(χ̂_c; x, h_H(x)) · Y_l^m(d), (7) so that the coverage gain G_H(c) of a camera pose c ∈ C is proportional to I_H(c) := (1/|χ̂|) Σ_{x∈χ̂} 1_{χ̂_c}(x) Σ_{l,m} ϕ_l^m(χ̂_c; x, h_H(x)) · Y_l^m((x − c_pos)/∥x − c_pos∥_2). (8) The next best view among several camera positions is finally defined as the camera pose c* with the highest value of I_H(c*). Equation (8) is a Monte Carlo approximation of the volumetric integral in Equation (6), where the occupancy map and the visibility gains are predicted with neural networks. We choose to use a Monte Carlo integral rather than a neural aggregator because this approach is simple, fast, makes training more stable, performs well, is more interpretable, and can handle sequences of arbitrary size. In particular, it implicitly encourages our model to compute meaningful visibility gains for each point since there is no asymmetry between the points. We also use spherical harmonics to encode the camera history encoding h_H(x) of each point x, which makes it a homogeneous input to the predicted output. Consequently, this input comes at the end of the architecture and aims to adapt the visibility gain according to previous camera positions. This convenient representation, inspired by [32], allows us to handle free camera motion; on the contrary, several models in the literature encode the camera directions on a discrete sphere [33, 16, 25]. Training. We train the occupancy probability module alone with a Mean Squared Error loss against the ground-truth occupancy map. We do not compute ground-truth visibility gains to train the second module since it would make computation more difficult and require further assumptions: We supervise directly on I_H(c) by comparing it to the ground-truth surface coverage gain for multiple cameras, with softmax normalization and a Kullback-Leibler divergence loss. Extensive details about the training of our modules and the choices we made are given in the appendix. 3 Experiments As discussed in the introduction, deep learning-based NBV models for dense reconstruction are currently limited to single, small-scale, centered object reconstruction. To compare SCONE to these previous methods in this context, we first constrain the camera pose to lie on a sphere centered on an object. We then introduce our dataset made of 13 large-scale 3D models that we created to evaluate SCONE on free camera motions (3D models courtesy of Brian Trepanier, Andrea Spognetta, and 3D Interiors, under CC License; all models were downloaded from the website Sketchfab). 3.1 Next Best View for Single Object Reconstruction We first compare the performance of our model to the state of the art on a subset of the ShapeNet dataset [2] introduced in [33], following the protocol of [33]: We sample 4,000 training meshes from 8 specific categories of objects, 400 validation meshes and 400 test meshes from the same categories, and 400 additional test meshes from 8 categories unseen during training. The evaluation on the test datasets consists of 10-view reconstructions of single objects. Given a mesh in the dataset, camera positions are discretized on a sphere.
We start the reconstruction process by selecting a random camera pose, then we iterate NBV selection 9 times in order to maximize coverage with a sequence of 10 views in total. The evaluation metric is the area under the curve (AUC) of surface coverage throughout the reconstruction. This criterion not only evaluates the quality of final surface coverage, but also the convergence speed toward a satisfying coverage. Results are presented in Table 1. Further details about the evaluation, the estimation of ground truth surface coverage gains, and the metric computation are available in the appendix. 3.2 Active View Planning in a 3D Scene To evaluate the scalability of our model to large environments as well as free camera motion in 3D space, we also conducted experiments using a naive planning algorithm that incrementally builds a path in the scene: We first discretize the camera poses in the scene on a 5D grid, corresponding to coordinates cpos = (xc, yc, zc) of the camera as well as the elevation and azimuth to encode rotation crot. The number of poses depends on the dimensions of the bounding box of the scene. This box is an input to the algorithm, as a way for the user to tell which part of the scene should be reconstructed. In our experiments, the number of different poses is around 10,000. Note we did not retrain our model for such scenes and use the previous model trained on ShapeNet as explained in Section 3.1. The depth sensor starts from a random pose (from which, at least, a part of the main structure to reconstruct is visible). Then, at each iteration, our method estimates the coverage gain of direct neighboring poses in the 5D grid, and selects the one with the highest score. The sensor moves to this position, captures a new partial point cloud and concatenates it to the previous one. Since we focus on online path planning for dense reconstruction optimization and designed our model to be scalable with local neighborhood features, the size of the global point cloud can become very large. We iterate the process either 100 times to build a full path around the object in around 5 minutes on a single Nvidia GTX1080 GPU, and recovered a partial point cloud with up to 100,000 points. Contrary to single, small-scale object reconstruction where the object is always entirely included in the field of view, we simulated a limited range in terms of field of view and depth for the depth sensor, so that the extent of the entire scene cannot be covered in a single view. Thus, taking pictures at long range and going around the scene randomly is not an efficient strategy; the sensor must find a path around the surface to optimize the full coverage. We use a ray-casting renderer to approximate a real Lidar. More details about the experiments can be found in the appendix. We compared the performance of SCONE with simple baselines: First, a random walk, which chooses neighboring poses randomly with uniform probabilities. Then, we evaluated an alternate version of our model, which we call SCONE-Entropy, which leverages the first module of our full model to compute occupancy probability in the scene, then selects the next best view as the position that maximizes the Shannon entropy in its field of view. The comparison is interesting, since SCONEEntropy adapts classic NBV approaches based on information theory to a deep learning framework. Figures 5 and 6 provide qualitative results. 
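For reference, the AUC-of-coverage criterion used in Section 3.1 can be computed from the per-view coverage values as in the short sketch below; the trapezoidal integration and the normalization by the number of views are our own assumptions, as the exact convention follows the protocol of [33].

```python
import numpy as np

def coverage_auc(coverages):
    """Area under the surface-coverage curve for a reconstruction episode.
    `coverages[i]` is the fraction of surface covered after i+1 views, in [0, 1].
    Normalized so that a method at full coverage from the first view scores 1
    (assumes at least two views)."""
    coverages = np.asarray(coverages, dtype=float)
    n = len(coverages)
    return np.trapz(coverages, dx=1.0) / (n - 1)

# Example: a 10-view episode where coverage improves quickly and then saturates.
covs = [0.22, 0.41, 0.55, 0.66, 0.74, 0.80, 0.85, 0.88, 0.90, 0.91]
print(f"AUC = {coverage_auc(covs):.3f}")
```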
Figure 7 shows the convergence speed of the covered surface for SCONE and our two baselines, averaged over all scenes of the dataset. Despite being trained only on ShapeNet 3D models, our method is able to compute meaningful paths around the structures and consistently reaches a satisfying global coverage. Since we focused on building a metric for NBV computation, this experiment is simple and does not implement any further prior from common path planning strategies: For instance, we do not compute distant NBVs, nor optimal paths to move from a position to a distant NBV with respect to our volumetric mapping. There is no doubt the model could benefit from further strategies inspired by the path planning literature. 3.3 Ablation Study We now provide an ablation study for both modules of our full model: The prediction of occupancy probability, and the computation of coverage gain from predictions about visibility gains. To evaluate both modules separately, we compared their performance on their training criteria: MSE for occupancy probability, and softmax followed by KL-divergence on a dense set of cameras for coverage gain. The values are reported in Figure 8. We provide extensive details and analysis in the appendix. Occupancy probability. Apart from the base architecture presented in Figure 3, we trained the first module under two additional settings. First, we removed the multi-scale neighborhood features computed by downsampling the partial point cloud P_H, and trained the module to predict an occupancy probability value σ̂ only from the global feature g(P_H) and the encoding f(x). As anticipated, the lack of neighborhood features not only prevents the model from scaling efficiently to large scenes, but also causes a huge loss in performance. We also trained a slightly more complex version of the module by feeding it the camera history harmonics h_H(x) as an additional feature. It appears this helps the model to increase its confidence and make better predictions, but the gains are quite marginal. Visibility gain. We trained two variations of our second module. First, we completely removed the geometric prediction: We directly use the surface points as proxy points, each mapped to an occupancy probability equal to 1. As a consequence, the model suffers a significant loss in performance when computing coverage gain from its predicted visibility gain functions. Thus, we can confirm that the volumetric framework improves the performance of SCONE. This result is remarkable, since surface representations usually make better NBV predictions for dense reconstruction of detailed objects in the literature. On the contrary, we show that our formalism is able to achieve higher performance by leveraging a volumetric representation with a high-resolution deep implicit function σ̂. We trained a second variation without the spherical mappings h_H of the camera history. We verified that such additional features increase performance, especially at the end of a 3D reconstruction.
It would be very interesting to extend the approach to color cameras. • Also like current methods, it is limited to the prediction of the camera pose for the next step. This is greedy, non-optimal as multiple poses are required anyway for a complete 3D reconstruction. Thanks to its scalability, we believe that our method could be extended to the prediction of the “Next Best Path”, where future camera poses would be predicted jointly for their complementarity. Acknowledgements This work was granted access to the HPC resources of IDRIS under the allocation 2022-AD011013387 made by GENCI. We thank Renaud Marlet, Elliot Vincent, Romain Loiseau, and Tom Monnier for inspiring discussions and valuable feedback.
1. What is the focus and contribution of the paper on next-best view prediction? 2. What are the strengths of the proposed approach, particularly in terms of neural representation and loss functions? 3. Do you have any concerns regarding the derivation and approximation of surface coverage gain? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. What are the limitations and potential negative societal impacts of the proposed method?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper describes a method for next-best view prediction for reconstructing large-scale 3D scenes with a depth sensor. They derive a formula to estimate the surface coverage gain for any potential camera pose given a camera pose history and a probabilistic occupancy map. They use one neural network to predict the probabilistic occupancy map based on a point cloud input, and a second network to predict the visibility gain, which is used in the calculation of surface coverage gain. The first network is trained to match the ground truth occupancy map, and the second is trained to match the ground-truth surface coverage gain. Extensive experiments on synthetic datasets demonstrate an improvement in performance over SOTA. Strengths And Weaknesses They provide a thorough derivation of an approximation of surface coverage gain that they prove asymptotically approaches the true value. This formulation is novel to the best of my knowledge. The structure of the neural network and the loss functions are interesting. They use separate networks to predict the probabilistic occupancy map and the visibility gain functions. The loss function for the visibility gain network is not visibility gain itself but the surface coverage gain. They also use attention mechanisms to model occlusion effects. They provide a thorough set of experiments to demonstrate their method leads to an improvement on ShapeNet following a standard protocol. They also provide some ablation studies to establish the usefulness of various aspects of their proposed approach. Overall, they have some interesting new ideas which lead to an increase in performance for NBV selection, and their work can also lead to new research such as handling noise in the depth map and selecting optimal paths rather than single viewpoints. Questions L89: the expression (1- \lambda) c_pos + \lambda x seems to be interpolating between the camera position and the point, but it doesn't take the camera's viewing direction into account? L101: "tubular" -- is this the right word? or should it be "spherical"? I couldn't figure out why the neighborhood region would be tubular. Based on the expression in L105 it seems to be spherical (all points within a distance of \mu_0 from x). Limitations Limitations are discussed but not potential negative societal impacts.
NIPS
Title SCONE: Surface Coverage Optimization in Unknown Environments by Volumetric Integration Abstract Next Best View computation (NBV) is a long-standing problem in robotics, and consists in identifying the next most informative sensor position(s) for reconstructing a 3D object or scene efficiently and accurately. Like most current methods, we consider NBV prediction from a depth sensor like Lidar systems. Learningbased methods relying on a volumetric representation of the scene are suitable for path planning, but have lower accuracy than methods using a surface-based representation. However, the latter do not scale well with the size of the scene and constrain the camera to a small number of poses. To obtain the advantages of both representations, we show that we can maximize surface metrics by Monte Carlo integration over a volumetric representation. In particular, we propose an approach, SCONE, that relies on two neural modules: The first module predicts occupancy probability in the entire volume of the scene. Given any new camera pose, the second module samples points in the scene based on their occupancy probability and leverages a self-attention mechanism to predict the visibility of the samples. Finally, we integrate the visibility to evaluate the gain in surface coverage for the new camera pose. NBV is selected as the pose that maximizes the gain in total surface coverage. Our method scales to large scenes and handles free camera motion: It takes as input an arbitrarily large point cloud gathered by a depth sensor as well as camera poses to predict NBV. We demonstrate our approach on a novel dataset made of large and complex 3D scenes. 1 Introduction Next Best View computation (NBV) is a long-standing problem in robotics [6, 29], which consists in identifying the next most informative sensor position(s) for reconstructing a 3D object or scene efficiently and accurately. Typically, a position is evaluated on how much it can increase the total coverage of the scene surface. Few methods have relied on Deep Learning (DL) for the NBV problem, even though DL can provide useful geometric prior to obtain a better prediction of the surface coverage [33, 16, 25]. Like most current methods, we consider NBV prediction from a depth sensor. Existing methods based on a depth sensor rely either on a volumetric or on a surface-based representation of the scene geometry. Volumetric mapping-based methods can compute collision efficiently, which is practical for path planning in real case scenarios [20, 23, 24, 14, 1, 7]. However, they typically rely on voxels or a global embedding [12, 3, 21, 22, 27, 5] for the scene, which results in poor accuracy in reconstruction and poor performance in NBV selection for complex 3D objects. On the contrary, surface mapping-based methods that process directly a dense point cloud of the surface as gathered by the depth sensor are efficient for NBV prediction with high-detailed geometry. They are however limited to very specific cases, generally a single, small-scale, isolated object with the camera constrained to stay on a sphere centered on the object [15, 24, 8, 4, 13, 14, 14, 33, 16, 25]. Thus, they cannot be applied to the exploration of 3D scenes. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). As shown in Figure 1, we introduce a volumetric DL method that efficiently identifies NBVs for unknown large-scale 3D scenes in which the camera can move freely. 
Instead of representing the scene with a single global embedding, we choose to use a grid of local point clouds, which scales much better to large and complex 3D scenes. We show how to learn to predict the visibility of unseen 3D points in all directions given an estimate of the 3D scene geometry. We can then integrate these visibilities in the direction of any camera by using a Monte Carlo integration approach, which allows us to optimize the camera pose to find the next most informative views. We call our method SCONE, for Surface Coverage Optimization in uNknown Environments. In this respect, we introduce a theoretical framework to translate the optimization of surface coverage gain, a surface metric on manifolds that represents the ability of a camera to increase the visible area of the surface, into an optimization problem on volumetric integrals. Such a formalism allows us to use a volumetric mapping of geometry, which is convenient not only to scale the model to exploration tasks and scene reconstruction, but also to make probabilistic predictions about geometry. In particular, given a partial point cloud gathered by a depth sensor, our model learns to make predictions with virtually infinite resolution about the occupancy probability in the scene volume by learning a deep implicit function [17, 28, 18, 30, 31, 19]. Such predictions scale to very large point clouds since they depend only on neighborhood geometry. Then, our model leverages a self-attention mechanism [26] to predict occlusions and compute informative functions mapped on a continuous sphere that represent visibility likelihood of points in all directions. The occupancy probability field is finally used as a basis to sample points and compute Monte Carlo integrals of visibility scores. Since NBV learning-based methods are mostly limited to single, small-scale, centered object reconstruction in literature, we first compare the performance of our model to the state of the art on the ShapeNet dataset [2], following the protocol introduced in [33]. While our method was designed to handle more general frameworks such as 3D scene reconstruction and continuous cameras poses in the scene, it outperforms the state of the art for dense reconstruction of objects when the camera is constrained to stay on a sphere centered on the object. We then conduct experiments in large 3D environments using a simple planning algorithm that builds a camera trajectory online by iteratively selecting NBVs with SCONE. Since, to the best of our knowledge, we propose the first supervised Deep Learning method for such free 6D motion of the camera, we created a dataset made of several large-scale scenes under the CC License for quantitative evaluation. We made our code and this dataset available for allowing comparison of future methods with SCONE on our project webpage: https://github.com/Anttwo/SCONE. 2 Approach Let us consider a depth sensor exploring a 3D scene, at time step t ≥ 0. Using its observations at discrete time steps j with 0 ≤ j ≤ t, the sensor has gathered a cloud of points distributed on the surface of the scene. We refer to this cloud as the partial surface point cloud, as it describes the part of the surface seen –or covered– by the sensor in the scene. To solve the NBV problem, we want to identify a camera pose that maximizes the coverage of previously unseen surface. 
To this end, our method takes as input the partial surface point cloud as well as the history of 6D camera poses at time steps j ≤ t (i.e., all previous positions and orientations of the sensor). Our approach is built around two successive steps, each relying on a dedicated neural module as shown in Figure 2: First, we make a prediction about the geometry of the scene, to estimate where the uncovered points could be. Then, we predict the visibility gain of uncovered points from any new camera pose. The NBV is finally selected as the camera with the most new visible points in its field of view. Although we seek to maximize a surface metric such as surface coverage gain, our method relies on a volumetric representation of the object or scene to reconstruct. In this regard, we show that we can maximize a surface metric by integrating over a volumetric representation with virtually infinite resolution. As we argue below, such a representation is not only useful for collision-free robot navigation but is also much more efficient for optimizing surface coverage gain than the alternative of identifying the 3D points lying on the surface, which is difficult in an unknown and occluded environment. More exactly, we derive a volumetric integral that is asymptotically proportional to the surface coverage gain metric, which is enough for our maximization problem. In the following subsection, we first present this derivation considering a volume represented as a perfect binary occupancy map. We then present the two neural modules of SCONE and explain how we use them to predict all terms involved in the volumetric integral, by leveraging neural models and self-attention mechanisms to predict occupancy and occlusions. 2.1 Maximizing Surface Coverage Gain on a Binary Occupancy Map Here, we consider a binary occupancy map σ : R³ → {0, 1} representing the volume of the target object or scene. We will relax our derivations to a probabilistic occupancy map when looking for the next best view in the next subsections. From the binary map σ, we can define the set χ of occupied points, i.e., the set of points x verifying σ(x) = 1, its surface as the boundary ∂χ, and the surface coverage C(c) achieved by a camera pose c = (c_pos, c_rot) ∈ C := R³ × SO(3) as the following surface integral: C(c) = (1/|∂χ|_S) ∫_{∂χ} v_c(x) dx, (1) where |∂χ|_S := ∫_{∂χ} dx is the area of the surface ∂χ, χ_c ⊂ χ is the subset of occupied points contained in the field of view of camera c, and v_c(x) is the visibility of point x from camera c, i.e., v_c(x) = 1_{χ_c}(x) · 1(σ({(1 − λ)c_pos + λx such that λ ∈ [0, 1)}) = {0}). Since we want to maximize the total coverage of the surface by all cameras during reconstruction, we are actually interested in maximizing the coverage of previously unobserved points rather than the absolute coverage. Given a set of previous camera poses, which we call the camera history H ⊂ C, and a 3D point x, we introduce the knowledge indicator γ_H : R³ → {0, 1} such that γ_H(x) = max{v_c(x) : c ∈ H}. We then define the coverage gain G_H(c) of camera pose c as: G_H(c) = (1/|∂χ|_S) ∫_{∂χ} ν_c^H(x) dx, (2) where ν_c^H(x) = (1 − γ_H(x)) · v_c(x) is the visibility gain of x in χ_c, for camera history H. This function is equal to 1 iff x is visible at pose c but was not observed by any camera pose in H. Given a camera history H, our goal is to identify a pose c that maximizes G_H(c). Given an occupancy map σ, we could evaluate the integral in Eq. (2) by simply sampling points p on the surface ∂χ.
However, in practice we will estimate the occupancy map iteratively in an unknown environment, and we will only have access to an occupancy probability distribution. Extracting surface points from such a probabilistic occupancy map gives results that can differ a lot from the true surface: Indeed, in 3D, a surface is a very concentrated set with zero measure, and extracting it requires high confidence to give meaningful results. Instead of extracting surface points, we extend the properties of such points to a small spherical neighborhood of the surface. This will allow us to replace the maximization of a surface metric by the maximization of a volumetric integral, which is much easier to compute from our volumetric representation. More exactly, we assume there exists a quantity µ_0 > 0 such that any volume point in the spherical neighborhood T(∂χ, µ_0) := {p ∈ R³ | ∃x ∈ ∂χ, ∥x − p∥_2 < µ_0} keeps the same visibility property as its neighboring surface points. With such a hypothesis, we give a thickness to the surface, which makes sense when working with discrete points sampled in space to approximate a volume. To this end, we introduce a new visibility gain function g_c^H to adapt the definition of the former visibility gain ν_c^H. For any 0 < µ < µ_0: g_c^H(µ; x) = 1 if there exist x_0 ∈ ∂χ and λ < µ such that x = x_0 + λN(x_0) and ν_c^H(x_0) = 1, and g_c^H(µ; x) = 0 otherwise, (3) where N is the inward normal vector field. With further regularity assumptions about the surface that are detailed in the appendix, such quantities are well defined. Assuming µ_0 is small enough, the following explicit formula translates the surface approach into a volume integral for any camera pose c ∈ C and µ < µ_0: ∫_{T(∂χ,µ)} g_c^H(µ; x) dx = ∫_{∂χ} ∫_{−µ}^{µ} g_c^H(µ; x_0 + λN(x_0)) det(I − λW_{x_0}) dλ dx_0, (4) with W_{x_0} the Weingarten map at x_0, that is, the Hessian of the signed distance function on the boundary of χ, which is continuous on the scene surface, assumed to be compact [10]. By developing the determinant, we find that det(I − λW_{x_0}) = 1 + λ b(λ, x_0), where b is a bounded function on the compact space [−µ, µ] × ∂χ. Moreover, for all x_0 ∈ ∂χ, we have by definition g_c^H(µ; x_0 + λN(x_0)) = g_c^H(µ; x_0) = ν_c^H(x_0) when 0 ≤ λ < µ, and g_c^H(µ; x_0 + λN(x_0)) = 0 when −µ < λ < 0. It follows that, for every 0 < µ < µ_0: ∫_{T(∂χ,µ)} g_c^H(µ; x) dx = ∫_{∂χ} ∫_{0}^{µ} g_c^H(µ; x_0)(1 + λ b(λ, x_0)) dλ dx_0 = µ ∫_{∂χ} g_c^H(µ; x_0) dx_0 + ∫_{∂χ} ∫_{0}^{µ} λ g_c^H(µ; x_0) b(λ, x_0) dλ dx_0 = µ |∂χ|_S G_H(c) + ∫_{∂χ} ∫_{0}^{µ} λ g_c^H(µ; x_0) b(λ, x_0) dλ dx_0. (5) The complete derivations are given in the appendix. The function g_c^H(µ; ·) is naturally equal to 0 for every point outside T(∂χ, µ). Moreover, considering the regularity assumptions we made on the compact surface, if µ_0 is chosen small enough then for all x_0 ∈ ∂χ and µ < µ_0, the point x_0 + µN(x_0) is located inside the volume, so that ∫_{T(∂χ,µ)} g_c^H(µ; x) dx = ∫_{χ} g_c^H(µ; x) dx. Since |g_c^H(µ; ·)| ≤ 1 for all c ∈ C and µ > 0, we deduce the following theorem by bounding |b| on [−µ, µ] × ∂χ: Theorem 1. Under the previous regularity assumptions on the volume χ of the scene and its surface ∂χ, there exist µ_0 > 0 and M > 0 such that for all µ < µ_0 and any camera c ∈ C: |(1/|χ|_V) ∫_{χ} g_c^H(µ; x) dx − µ (|∂χ|_S/|χ|_V) G_H(c)| ≤ M µ², (6) where |χ|_V is the volume of χ. This theorem states that, asymptotically for small values of µ, the volume integral ∫_{χ} g_c^H(µ; x) dx becomes proportional to the surface coverage gain value G_H(c) that we want to maximize.
This result is convenient since a volume integral can be easily approximated with Monte-Carlo integration on the volume and a uniform dense sampling based on the occupancy function σ. Consequently, the more points we sample in the volume, the smaller µ we can choose, and the closer maximizing the volume integral of spherical neighborhood visibility gain gets to maximizing the surface coverage gain. 2.2 Architecture To approximate the volumetric integral in Equation 6 for any camera pose c, we need to compute χc as well as function gHc . In this regard, we need to compute both the occupancy map and the visibility gains of points for any camera pose. Since the environment is not perfectly known, we predict each one of these functions with a dedicated neural module. The first module takes as input the partial point cloud gathered by the depth sensor to predict the occupancy probability distribution. The second module takes as input a camera pose, a feature representing camera history as well as a predicted sequence of occupied points located in the camera field of view to predict visibility gains. Predicting the occupancy probability field σ̂. The occupancy function σ is not known perfectly in practice. To represent occupancy, most volumetric NBV methods rely on memory-heavy representations (like an occupancy 3D-grid or a volumetric voxelization), that are generally less efficient for encoding fine details and optimizing dense reconstructions, and will necessarily downgrade the resolution compared to a point cloud directly sampled on the surface. To address this issue while still working with a volumetric representation of the scene, we use a deep implicit function to encode the 3D mapping of occupancy efficiently. Such a function has a virtually infinite resolution, and prevents us from saving a large 3D grid in memory. We thus approximate σ with the first module of our model, which consists of a neural network σ̂ : P(R3) ×R3 → [0, 1] that takes as inputs a partial surface point cloud PH ⊂ R3 and query points x ∈ R3, and outputs the occupancy probability σ̂(PH ;x) of x. PH is obtained by merging together all depth maps previously captured from cameras in H . As shown in Figure 3, rather than using a direct encoding of the global shape of PH as input to σ̂, we take inspiration from [9] to achieve scalability and encode PH using features computed from the points’ neighborhoods. The difference with [9] is that we rely on a multiscale approach: For a given query 3D point x, these features are computed from the k nearest neighbors of x computed at different scales. For each scale s, we downsample point cloud PH around x into a sparser point cloud P (s)H before recomputing the nearest neighbors p (s) i (x), i = 1, ..., k of x: In this way, the size of the neighborhood increases with scale. Next, for each value of s, we use small attention units [26, 11] on the sequence of centered neighborhood points (p(s)1 (x)− x, ..., p (s) k (x)− x) and apply pooling operations to encode each sequence of k neighbors into a single feature that describes the local geometry for the corresponding scale. We finally concatenate these different scale features with another uncentered global feature as well as the query point x, and feed them to an MLP to predict the occupancy probability. The last global feature aims to provide really coarse information about the geometry and the location of x in the scene. 
This model scales well to large scenes: Adding points from distant views to the current partial point cloud does not change the local state of the point cloud. To avoid computing neighborhoods on the entire point cloud when reconstructing large scenes, we partition the space into cells in which we store the points in PH . Given a query point x, we only use the neighboring cells to compute p (s) i (x). Predicting the visibility gain gHc . To maximize surface coverage gain, we need to compute the volumetric integral of visibility gain functions gHc . We do this again by Monte Carlo sampling, however, in unknown environments we cannot compute explicitly occlusions to derive visibility gain functions gHc since the geometry, represented as a point cloud, is partially unknown and sparser than a true surface. We thus train the second module of our model to predict visibility gain functions by leveraging a self-attention mechanism that helps to estimate occlusion effects in the point cloud PH . In particular, for any camera pose c ∈ C and 3D point x ∈ χc, the second module derives its prediction of visibility gains from three core features: (i) The predicted probability σ̂(P ;x) of x to be occupied, (ii) the occlusions on x by the subvolume χc and (iii) the camera history H . To feed all this information to our model in an efficient way, we follow the pipeline presented in Figure 4. The model starts by using the predicted occupancy probability function σ̂ to sample 3D points in the volume χ. These samples will be used for Monte Carlo integration. We refer to these points as proxy points as we use them to encode the volume in the camera field of view, i.e., in a pyramidal frustum view. We write χ̂ as the discrete set of sampled proxy points, and χ̂c as the set of proxy points located in the field of view of the camera c. We first encode these proxy points individually by applying a small MLP on their 3D coordinates and their occupancy probability value concatenated together. Then, our model processes the sequence of these encodings with a self-attention unit to handle occlusion effects of subvolume χc on every individual point. Note there is no pooling operation on the output of this unit: The network predicts per-point features and does not aggregate predictions, since we do it ourselves with Monte Carlo integration. Next, for each proxy point x ∈ χ̂ , we compute an additional feature hH(x) that encodes the history of camera positions H with respect to this point as a spherical mapping: It consists in the projection on a sphere centered on x of all camera positions for which x was in the field of view. These features are concatenated to the outputs of the self-attention unit. Our model finally uses an MLP on these features to predict the entire visibility gain functions of every point x as a vector of coordinates in the orthonormal basis of spherical harmonics. With such a formalism, the model is able to compute visibility gains for points inside a subvolume in all directions with a single forward pass. In this regard, we avoid unnecessary computation and are able to process a large number of cameras in the same time when they share the same proxy points in their field of view (e.g., reconstruction of a single object centered in the scene, where χ̂c = χ̂ for all c, or when several cameras observe the same part of the 3D scene, i.e., χ̂c = χ̂′ ⊂ χ̂ for several c). 
Formally, if we denote by Y_l^m : S^2 → R the real spherical harmonic of rank (l, m) and by φ_l^m(χ̂_c; x, h_H(x)) the predicted coordinate of rank (l, m) for proxy point x ∈ χ̂_c, with attention over the subset χ̂_c and camera history feature h_H(x), the visibility gain of point x in direction d ∈ S^2 is defined as

∑_{l,m} φ_l^m(χ̂_c; x, h_H(x)) · Y_l^m(d),   (7)

so that the coverage gain G_H(c) of a camera pose c ∈ C is proportional to

I_H(c) := (1 / |χ̂|) ∑_{x ∈ χ̂} 1_{χ̂_c}(x) ∑_{l,m} φ_l^m(χ̂_c; x, h_H(x)) · Y_l^m((x − c_pos) / ‖x − c_pos‖_2).   (8)

The next best view among several camera positions is finally defined as the camera pose c* with the highest value of I_H(c*). Equation 8 is a Monte Carlo approximation of the volumetric integral in Equation 6, where the occupancy map and the visibility gains are predicted with neural networks. We choose to use a Monte Carlo integral rather than a neural aggregator because this approach is simple and fast, makes training more stable, performs well, is more interpretable, and can handle sequences of arbitrary size. In particular, it implicitly encourages our model to compute meaningful visibility gains for each point since there is no asymmetry between the points. We also use spherical harmonics to encode the camera history feature h_H(x) of each point x, which makes this input homogeneous with the predicted output. Consequently, this input comes at the end of the architecture, and aims to adapt the visibility gain according to previous camera positions. This convenient representation, inspired by [32], allows us to handle free camera motion; on the contrary, several models in the literature encode the camera directions on a discrete sphere [33, 16, 25]. Training. We train the occupancy probability module alone with a Mean Squared Error loss against the ground-truth occupancy map. We do not compute ground-truth visibility gains to train the second module since it would make computation more difficult and require further assumptions: We supervise directly on I_H(c) by comparing it to the ground-truth surface coverage gain for multiple cameras, with softmax normalization and a Kullback-Leibler divergence loss. Extensive details about the training of our modules and the choices we made are given in the appendix. 3 Experiments As discussed in the introduction, deep learning-based NBV models for dense reconstruction are currently limited to single, small-scale, centered object reconstruction. To compare SCONE to these previous methods in this context, we first constrain the camera pose to lie on a sphere centered on an object. We then introduce our dataset made of 13 large-scale 3D models that we created to evaluate SCONE on free camera motions (3D models courtesy of Brian Trepanier, Andrea Spognetta, and 3D Interiors, under CC License; all models were downloaded from the website Sketchfab). 3.1 Next Best View for Single Object Reconstruction We first compare the performance of our model to the state of the art on a subset of the ShapeNet dataset [2] introduced in [33] and following the protocol of [33]: We sample 4,000 training meshes from 8 specific categories of objects, 400 validation meshes and 400 test meshes from the same categories, and 400 additional test meshes from 8 categories unseen during training. The evaluation on the test datasets consists of 10-view reconstructions of single objects. Given a mesh in the dataset, camera positions are discretized on a sphere.
We start the reconstruction process by selecting a random camera pose, then we iterate NBV selection 9 times in order to maximize coverage with a sequence of 10 views in total. The evaluation metric is the area under the curve (AUC) of surface coverage throughout the reconstruction. This criterion not only evaluates the quality of the final surface coverage, but also the convergence speed toward a satisfying coverage. Results are presented in Table 1. Further details about the evaluation, the estimation of ground truth surface coverage gains, and the metric computation are available in the appendix. 3.2 Active View Planning in a 3D Scene To evaluate the scalability of our model to large environments as well as free camera motion in 3D space, we also conducted experiments using a naive planning algorithm that incrementally builds a path in the scene: We first discretize the camera poses in the scene on a 5D grid, corresponding to the coordinates c_pos = (x_c, y_c, z_c) of the camera as well as the elevation and azimuth encoding the rotation c_rot. The number of poses depends on the dimensions of the bounding box of the scene. This box is an input to the algorithm, as a way for the user to tell which part of the scene should be reconstructed. In our experiments, the number of different poses is around 10,000. Note that we did not retrain our model for such scenes; we use the previous model trained on ShapeNet as explained in Section 3.1. The depth sensor starts from a random pose (from which, at least, a part of the main structure to reconstruct is visible). Then, at each iteration, our method estimates the coverage gain of the direct neighboring poses in the 5D grid, and selects the one with the highest score. The sensor moves to this position, captures a new partial point cloud and concatenates it to the previous one. Since we focus on online path planning for dense reconstruction optimization and designed our model to be scalable with local neighborhood features, the size of the global point cloud can become very large. We iterate the process 100 times to build a full path around the object in around 5 minutes on a single Nvidia GTX 1080 GPU, recovering a partial point cloud with up to 100,000 points. Contrary to single, small-scale object reconstruction where the object is always entirely included in the field of view, we simulated a limited range in terms of field of view and depth for the depth sensor, so that the extent of the entire scene cannot be covered in a single view. Thus, taking pictures at long range and going around the scene randomly is not an efficient strategy; the sensor must find a path around the surface to optimize the full coverage. We use a ray-casting renderer to approximate a real Lidar. More details about the experiments can be found in the appendix. We compared the performance of SCONE with simple baselines: First, a random walk, which chooses neighboring poses randomly with uniform probabilities. Then, we evaluated an alternate version of our model, which we call SCONE-Entropy, which leverages the first module of our full model to compute occupancy probability in the scene, then selects the next best view as the position that maximizes the Shannon entropy in its field of view. The comparison is interesting, since SCONE-Entropy adapts classic NBV approaches based on information theory to a deep learning framework. Figures 5 and 6 provide qualitative results.
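The planning loop of this section can be sketched as follows: at each step, every neighboring pose on the 5D grid is scored with the Monte Carlo estimate of Equation 8, and the sensor moves to the best-scoring neighbor. The interfaces sensor.capture, neighbors_in_grid, sample_proxy_points, predict_sh_coeffs, in_frustum, and the pose layout (position in the first three entries) are hypothetical placeholders used only to illustrate the loop, not the released code.

```python
# Sketch of the naive greedy planner of Section 3.2.
import numpy as np

def coverage_gain_score(pose, proxy_pts, sh_coeffs, sh_basis, in_frustum):
    """I_H(c): mean over all proxy points of 1_{frustum}(x) * sum_lm phi_lm * Y_lm(direction)."""
    mask = in_frustum(pose, proxy_pts)                       # (N,) bool, frustum of the candidate pose
    if not mask.any():
        return 0.0
    dirs = proxy_pts[mask] - pose[:3]                        # point-to-camera directions (flipped sign is a convention)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    gains = (sh_coeffs[mask] * sh_basis(dirs)).sum(axis=-1)  # per-point predicted visibility gain
    return gains.sum() / len(proxy_pts)

def greedy_plan(start_pose, n_steps, sensor, neighbors_in_grid,
                sample_proxy_points, predict_sh_coeffs, sh_basis, in_frustum):
    pose, cloud, history = start_pose, sensor.capture(start_pose), [start_pose]
    for _ in range(n_steps):
        proxy = sample_proxy_points(cloud)                   # sampled from the predicted occupancy field
        coeffs = predict_sh_coeffs(proxy, cloud, history)    # second module: per-point SH coefficients
        scores = [(coverage_gain_score(c, proxy, coeffs, sh_basis, in_frustum), c)
                  for c in neighbors_in_grid(pose)]          # only direct neighbors on the 5D grid
        _, pose = max(scores, key=lambda t: t[0])            # move to the highest-scoring neighbor
        cloud = np.concatenate([cloud, sensor.capture(pose)], axis=0)
        history.append(pose)
    return history, cloud
```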
Figure 7 shows the convergence speed of the surface coverage achieved by SCONE and our two baselines, averaged over all scenes of the dataset. Despite being trained only on ShapeNet 3D models, our method is able to compute meaningful paths around the structures and consistently reach satisfying global coverage. Since we focused on building a metric for NBV computation, this experiment is simple and does not implement any further prior from common path planning strategies: For instance, we do not compute distant NBVs, nor optimal paths to move from a position to a distant NBV with respect to our volumetric mapping. There is no doubt the model could benefit from further strategies inspired by the path planning literature. 3.3 Ablation Study We now provide an ablation study for both modules of our full model: The prediction of occupancy probability, and the computation of coverage gain from predictions about visibility gains. To evaluate both modules separately, we compared their performance on their training criteria: MSE for occupancy probability, and softmax followed by KL-divergence on a dense set of cameras for coverage gain. The values are reported in Figure 8. We provide extensive details and analysis in the appendix. Occupancy probability. Apart from the base architecture presented in Figure 3, we trained the first module under two additional settings. First, we removed the multi-scale neighborhood features computed by downsampling the partial point cloud P_H, and trained the module to predict an occupancy probability value σ̂ only from the global feature g(P_H) and the encoding f(x). As anticipated, the lack of neighborhood features not only prevents the model from scaling efficiently to large scenes, but also causes a large loss in performance. We also trained a slightly more complex version of the module by feeding it the camera history harmonics h_H(x) as an additional feature. It appears this helps the model to increase its confidence and make better predictions, but the gains are quite marginal. Visibility gain. We trained two variations of our second module. First, we completely removed the geometric prediction: We directly use the surface points as proxy points, each mapped to an occupancy probability equal to 1. As a consequence, the model suffers a significant loss in performance when computing coverage gain from its predicted visibility gain functions. Thus, we can confirm that the volumetric framework improves the performance of SCONE. This result is remarkable, since surface representations usually make better NBV predictions for dense reconstruction of detailed objects in the literature. On the contrary, we show that our formalism is able to achieve higher performance by leveraging a volumetric representation with a high-resolution deep implicit function σ̂. We trained a second variation without the spherical mappings h_H of the camera history. We verified that such additional features increase performance, especially at the end of a 3D reconstruction. 4 Limitations and Conclusion Beyond the prediction of the Next Best View, our method SCONE is able to evaluate the value of any camera pose given a current estimate of the scene volume. We demonstrated this ability with a simple path planning algorithm that looks for the next pose around the current pose iteratively. However: • Like current methods, SCONE relies on a depth sensor and assumes that the captured depth maps are perfect. In practice, such a sensor is not necessarily available and can also be noisy.
It would be very interesting to extend the approach to color cameras. • Also like current methods, it is limited to the prediction of the camera pose for the next step. This is greedy and non-optimal, as multiple poses are required anyway for a complete 3D reconstruction. Thanks to its scalability, we believe that our method could be extended to the prediction of the “Next Best Path”, where future camera poses would be predicted jointly for their complementarity. Acknowledgements This work was granted access to the HPC resources of IDRIS under the allocation 2022-AD011013387 made by GENCI. We thank Renaud Marlet, Elliot Vincent, Romain Loiseau, and Tom Monnier for inspiring discussions and valuable feedback.
1. What is the focus and contribution of the paper regarding partial point cloud reconstruction? 2. What are the strengths of the proposed approach, particularly in terms of mathematical foundation and visibility gain? 3. What are the weaknesses of the paper regarding readability and sensitivity to noise and occlusion? 4. Do you have any concerns or questions regarding the neural architecture and its relation to the theoretical formulations? 5. What are the limitations of the method, and how does it compare to other works in the field?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper aims to solve the problem of next best view from partial point cloud towards a complete reconstruction. The method uses the principle of SDF and define a way of computing the maximum coverage gain if a camera is at position "c" given the history of camera position, the visibility of the surface on those historical position. This in turn expected to maximise the total coverage. The authors derive the relations of the incremental coverage gain mathematically and use a neural network to realise those relation to produce the coverage gain given a camera position "c". They show results on various dataset including large scale reconstruction. Strengths And Weaknesses Strengths: The paper has a strong mathematical foundation on defining the coverage gain. The use of spherical harmonics for visibility gain is nice, though SH is being used in recent volume rendering work which authors also referred. Supplementary method provides all the derivation required for the proof of the relations used in the main paper. The paper shows interesting results. Weaknesses The paper is well grounded with the theoretical formulations. But the readability of the paper is not very good. It is relatively difficult to relate the equations with the neural architecture for a reader. The concepts are philosophically mapped but the derivation is not clearly mapped with the neural method or in the inductive bias design. The paragraph from line 132 to136 attempted to give some clarity but this needs to be expanded for a general reader. Its not clear if there were neural aggregator instead of Monte Carlo, then what could happened? In table 1 the numbers are very close. What is the sensitivity of these numbers w.r.t the reconstruction? Questions Isn't the surface point could be obtained using ray intersection from camera in line 100? Or am I missing something? Its not clear if the point cloud has a noise how it impact the visibility gain and occupancy function? Though authors mention this in the limitation, but the discussion on sensitivity of noise could have been good. If the ablation is given with the reconstruction quality then it will be more clear to the reader about the significance of the numbers. How the authors create the ground truth of the surface coverage gain? If the explanation from Eq. 6 to the neural design is more lucidly written, then the readability will increase. Is the method tried in indoor? How this method is positioned w.r.t https://arxiv.org/abs/1805.07794 Is it possible to use the Pointnet features for encoding the 3D point and its neighbourhood in Fig 2? How the occlusion is handled? Limitations The limitation of this method is narrated by authors and some of the insight we can get from the above questions. For reproducibility, the crucial information regarding the training needs to be there in the main paper.
NIPS
Title SCONE: Surface Coverage Optimization in Unknown Environments by Volumetric Integration Abstract Next Best View computation (NBV) is a long-standing problem in robotics, and consists in identifying the next most informative sensor position(s) for reconstructing a 3D object or scene efficiently and accurately. Like most current methods, we consider NBV prediction from a depth sensor like Lidar systems. Learningbased methods relying on a volumetric representation of the scene are suitable for path planning, but have lower accuracy than methods using a surface-based representation. However, the latter do not scale well with the size of the scene and constrain the camera to a small number of poses. To obtain the advantages of both representations, we show that we can maximize surface metrics by Monte Carlo integration over a volumetric representation. In particular, we propose an approach, SCONE, that relies on two neural modules: The first module predicts occupancy probability in the entire volume of the scene. Given any new camera pose, the second module samples points in the scene based on their occupancy probability and leverages a self-attention mechanism to predict the visibility of the samples. Finally, we integrate the visibility to evaluate the gain in surface coverage for the new camera pose. NBV is selected as the pose that maximizes the gain in total surface coverage. Our method scales to large scenes and handles free camera motion: It takes as input an arbitrarily large point cloud gathered by a depth sensor as well as camera poses to predict NBV. We demonstrate our approach on a novel dataset made of large and complex 3D scenes. 1 Introduction Next Best View computation (NBV) is a long-standing problem in robotics [6, 29], which consists in identifying the next most informative sensor position(s) for reconstructing a 3D object or scene efficiently and accurately. Typically, a position is evaluated on how much it can increase the total coverage of the scene surface. Few methods have relied on Deep Learning (DL) for the NBV problem, even though DL can provide useful geometric prior to obtain a better prediction of the surface coverage [33, 16, 25]. Like most current methods, we consider NBV prediction from a depth sensor. Existing methods based on a depth sensor rely either on a volumetric or on a surface-based representation of the scene geometry. Volumetric mapping-based methods can compute collision efficiently, which is practical for path planning in real case scenarios [20, 23, 24, 14, 1, 7]. However, they typically rely on voxels or a global embedding [12, 3, 21, 22, 27, 5] for the scene, which results in poor accuracy in reconstruction and poor performance in NBV selection for complex 3D objects. On the contrary, surface mapping-based methods that process directly a dense point cloud of the surface as gathered by the depth sensor are efficient for NBV prediction with high-detailed geometry. They are however limited to very specific cases, generally a single, small-scale, isolated object with the camera constrained to stay on a sphere centered on the object [15, 24, 8, 4, 13, 14, 14, 33, 16, 25]. Thus, they cannot be applied to the exploration of 3D scenes. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). As shown in Figure 1, we introduce a volumetric DL method that efficiently identifies NBVs for unknown large-scale 3D scenes in which the camera can move freely. 
Instead of representing the scene with a single global embedding, we choose to use a grid of local point clouds, which scales much better to large and complex 3D scenes. We show how to learn to predict the visibility of unseen 3D points in all directions given an estimate of the 3D scene geometry. We can then integrate these visibilities in the direction of any camera by using a Monte Carlo integration approach, which allows us to optimize the camera pose to find the next most informative views. We call our method SCONE, for Surface Coverage Optimization in uNknown Environments. In this respect, we introduce a theoretical framework to translate the optimization of surface coverage gain, a surface metric on manifolds that represents the ability of a camera to increase the visible area of the surface, into an optimization problem on volumetric integrals. Such a formalism allows us to use a volumetric mapping of geometry, which is convenient not only to scale the model to exploration tasks and scene reconstruction, but also to make probabilistic predictions about geometry. In particular, given a partial point cloud gathered by a depth sensor, our model learns to make predictions with virtually infinite resolution about the occupancy probability in the scene volume by learning a deep implicit function [17, 28, 18, 30, 31, 19]. Such predictions scale to very large point clouds since they depend only on neighborhood geometry. Then, our model leverages a self-attention mechanism [26] to predict occlusions and compute informative functions mapped on a continuous sphere that represent visibility likelihood of points in all directions. The occupancy probability field is finally used as a basis to sample points and compute Monte Carlo integrals of visibility scores. Since NBV learning-based methods are mostly limited to single, small-scale, centered object reconstruction in literature, we first compare the performance of our model to the state of the art on the ShapeNet dataset [2], following the protocol introduced in [33]. While our method was designed to handle more general frameworks such as 3D scene reconstruction and continuous cameras poses in the scene, it outperforms the state of the art for dense reconstruction of objects when the camera is constrained to stay on a sphere centered on the object. We then conduct experiments in large 3D environments using a simple planning algorithm that builds a camera trajectory online by iteratively selecting NBVs with SCONE. Since, to the best of our knowledge, we propose the first supervised Deep Learning method for such free 6D motion of the camera, we created a dataset made of several large-scale scenes under the CC License for quantitative evaluation. We made our code and this dataset available for allowing comparison of future methods with SCONE on our project webpage: https://github.com/Anttwo/SCONE. 2 Approach Let us consider a depth sensor exploring a 3D scene, at time step t ≥ 0. Using its observations at discrete time steps j with 0 ≤ j ≤ t, the sensor has gathered a cloud of points distributed on the surface of the scene. We refer to this cloud as the partial surface point cloud, as it describes the part of the surface seen –or covered– by the sensor in the scene. To solve the NBV problem, we want to identify a camera pose that maximizes the coverage of previously unseen surface. 
To this end, our method takes as input the partial surface point cloud as well as the history of 6D camera poses at time steps j ≤ t (i.e., all previous positions and orientations of the sensor). Our approach is built around two successive steps, each relying on a dedicated neural module as shown in Figure 2: First, we make a prediction about the geometry of the scene, to estimate where the uncovered points could be. Then, we predict the visibility gain of uncovered points from any new camera pose; the NBV is finally selected as the camera with the most new visible points in its field of view. Although we seek to maximize a surface metric such as surface coverage gain, our method relies on a volumetric representation of the object or scene to reconstruct. In this regard, we show that we can maximize a surface metric by integrating over a volumetric representation with virtually infinite resolution. As we argue below, such a representation is not only useful for collision-free robot navigation but is also much more efficient for optimizing surface coverage gain than the alternative of identifying the 3D points lying on the surface, which is difficult in an unknown and occluded environment. More exactly, we derive a volumetric integral that is asymptotically proportional to the surface coverage gain metric, which is enough for our maximization problem. In the following subsection, we first present this derivation considering a volume represented as a perfect binary occupancy map. We then present the two neural modules of SCONE and explain how we use them to predict all terms involved in the volumetric integral, by leveraging neural models and self-attention mechanisms to predict occupancy and occlusions. 2.1 Maximizing Surface Coverage Gain on a Binary Occupancy Map Here, we consider a binary occupancy map σ : R^3 → {0, 1} representing the volume of the target object or scene. We will relax our derivations to a probabilistic occupancy map when looking for the next best view in the next subsections. From the binary map σ, we can define the set χ of occupied points, i.e., the set of points x verifying σ(x) = 1, its surface as the boundary ∂χ, and the surface coverage C(c) achieved by a camera pose c = (c_pos, c_rot) ∈ C := R^3 × SO(3) as the following surface integral:

C(c) = (1 / |∂χ|_S) ∫_{∂χ} v_c(x) dx,   (1)

where |∂χ|_S := ∫_{∂χ} dx is the area of the surface ∂χ, χ_c ⊂ χ is the subset of occupied points contained in the field of view of camera c, and v_c(x) is the visibility of point x from camera c, i.e., v_c(x) = 1_{χ_c}(x) · 1(σ({(1 − λ)c_pos + λx such that λ ∈ [0, 1)}) = {0}). Since we want to maximize the total coverage of the surface by all cameras during reconstruction, we are actually interested in maximizing the coverage of previously unobserved points rather than the absolute coverage. Given a set of previous camera poses, which we call the camera history H ⊂ C, and a 3D point x, we introduce the knowledge indicator γ_H : R^3 → {0, 1} such that γ_H(x) = max{v_c(x) : c ∈ H}. We then define the coverage gain G_H(c) of camera pose c as:

G_H(c) = (1 / |∂χ|_S) ∫_{∂χ} ν_c^H(x) dx,   (2)

where ν_c^H(x) = (1 − γ_H(x)) · v_c(x) is the visibility gain of x in χ_c, for camera history H. This function is equal to 1 iff x is visible at pose c but was not observed by any camera pose in H. Given a camera history H, our goal is to identify a pose c that maximizes G_H(c). Given an occupancy map σ, we could evaluate the integral in Eq. (2) by simply sampling points p on the surface ∂χ.
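When the geometry is known, e.g., when building ground-truth targets, this sampling-based evaluation of Eq. (2) amounts to simple bookkeeping over surface samples. A minimal sketch is given below; the visible callable (field-of-view plus occlusion test for a camera pose) is an assumed helper, and only the bookkeeping defined by Eqs. (1)-(2) is shown.

```python
# Surface coverage and coverage gain estimated from points sampled on a known surface.
import numpy as np

class SurfaceCoverage:
    def __init__(self, surface_samples, visible):
        self.pts = surface_samples            # (N, 3) points sampled on the surface
        self.visible = visible                # visible(cam_pose, pts) -> (N,) bool
        self.gamma = np.zeros(len(surface_samples), dtype=bool)  # knowledge indicator gamma_H

    def coverage_gain(self, cam_pose):
        """Fraction of surface samples newly visible from cam_pose (discrete Eq. 2)."""
        v = self.visible(cam_pose, self.pts)
        return np.mean(~self.gamma & v)

    def register(self, cam_pose):
        """Add cam_pose to the history H and return the total coverage so far (discrete Eq. 1)."""
        self.gamma |= self.visible(cam_pose, self.pts)
        return self.gamma.mean()
```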
However, in practice we will estimate the occupancy map iteratively in an unknown environment, and we will only have access to an occupancy probability distribution. Extracting surface points from such a probabilistic occupancy map gives results that can differ a lot from the true surface: Indeed, in 3D, a surface acts as a very concentrated set with zero measure, and requires high confidence to give meaningful results. Instead of extracting surface points, we extend the properties of such points to a small spherical neighborhood of the surface. This will allow us to replace the maximization of a surface metric by the maximization of a volumetric integral, which is much easier to compute from our volumetric representation. More exactly, we assume there exists a quantity µ_0 > 0 such that any volume point in the spherical neighborhood T(∂χ, µ_0) := { p ∈ R^3 | ∃ x ∈ ∂χ, ‖x − p‖_2 < µ_0 } keeps the same visibility property as its neighboring surface points. With such a hypothesis, we give a thickness to the surface, which makes sense when working with discrete points sampled in space to approximate a volume. To this end, we introduce a new visibility gain function g_c^H to adapt the definition of the former visibility gain ν_c^H. For any 0 < µ < µ_0:

g_c^H(µ; x) = 1 if ∃ x_0 ∈ ∂χ, 0 ≤ λ < µ such that x = x_0 + λN(x_0) and ν_c^H(x_0) = 1, and 0 otherwise,   (3)

where N is the inward normal vector field. With further regularity assumptions about the surface that are detailed in the appendix, such quantities are well defined. Assuming µ_0 is small enough, the following explicit formula translates the surface approach into a volume integral for any camera pose c ∈ C and µ < µ_0:

∫_{T(∂χ,µ)} g_c^H(µ; x) dx = ∫_{∂χ} ∫_{−µ}^{µ} g_c^H(µ; x_0 + λN(x_0)) det(I − λW_{x_0}) dλ dx_0,   (4)

with W_{x_0} the Weingarten map at x_0, that is, the Hessian of the signed distance function on the boundary of χ, which is continuous on the scene surface, assumed to be compact [10]. By developing the determinant, we find that det(I − λW_{x_0}) = 1 + λ b(λ, x_0), where b is a bounded function on the compact space [−µ, µ] × ∂χ. Moreover, for all x_0 ∈ ∂χ, we have by definition g_c^H(µ; x_0 + λN(x_0)) = g_c^H(µ; x_0) = ν_c^H(x_0) when 0 ≤ λ < µ, and g_c^H(µ; x_0 + λN(x_0)) = 0 when −µ < λ < 0. It follows that, for every 0 < µ < µ_0:

∫_{T(∂χ,µ)} g_c^H(µ; x) dx = ∫_{∂χ} ∫_0^µ g_c^H(µ; x_0) (1 + λ b(λ, x_0)) dλ dx_0
 = µ ∫_{∂χ} g_c^H(µ; x_0) dx_0 + ∫_{∂χ} ∫_0^µ λ g_c^H(µ; x_0) b(λ, x_0) dλ dx_0
 = µ |∂χ|_S G_H(c) + ∫_{∂χ} ∫_0^µ λ g_c^H(µ; x_0) b(λ, x_0) dλ dx_0.   (5)

The complete derivations are given in the appendix. The function g_c^H(µ; ·) is naturally equal to 0 for every point outside T(∂χ, µ). Moreover, considering the regularity assumptions we made on the compact surface, if µ_0 is chosen small enough then for all x_0 ∈ ∂χ and µ < µ_0, the point x_0 + µN(x_0) is located inside the volume, such that ∫_{T(∂χ,µ)} g_c^H(µ; x) dx = ∫_χ g_c^H(µ; x) dx. Since |g_c^H(µ; ·)| ≤ 1 for all c ∈ C and µ > 0, we deduce the following theorem by bounding |b| on [−µ, µ] × ∂χ: Theorem 1. Under the previous regularity assumptions on the volume χ of the scene and its surface ∂χ, there exist µ_0 > 0 and M > 0 such that for all µ < µ_0, and any camera c ∈ C:

| (1 / |χ|_V) ∫_χ g_c^H(µ; x) dx − µ (|∂χ|_S / |χ|_V) G_H(c) | ≤ M µ²,   (6)

where |χ|_V is the volume of χ. This theorem states that, asymptotically for small values of µ, the volume integral ∫_χ g_c^H(µ; x) dx gets proportional to the surface coverage gain values G_H(c) that we want to maximize.
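As a sanity check, the bound in Theorem 1 can be verified numerically on a toy shape. In the sketch below, the occupied volume is the unit ball, the camera is assumed to see exactly the upper hemisphere (so the coverage gain is G_H(c) = 0.5 with an empty history), and the Monte Carlo estimate of the shell integral divided by µ|∂χ|_S approaches the coverage gain as µ shrinks. This is purely illustrative and not part of the authors' pipeline.

```python
# Toy numerical check of Theorem 1 on the unit ball.
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(2_000_000, 3))
pts = pts[np.linalg.norm(pts, axis=1) <= 1.0]          # uniform samples inside the ball
r = np.linalg.norm(pts, axis=1)
ball_volume, sphere_area, true_gain = 4 * np.pi / 3, 4 * np.pi, 0.5

for mu in (0.2, 0.1, 0.05, 0.02):
    # g equals 1 inside the half-shell T(surface, mu) whose surface projection is visible (z > 0).
    g = (r > 1.0 - mu) & (pts[:, 2] > 0.0)
    volume_integral = g.mean() * ball_volume           # Monte Carlo estimate of the integral of g
    print(mu, volume_integral / (mu * sphere_area))    # approaches G_H(c) = 0.5 as mu -> 0
```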
This result is convenient since a volume integral can be easily approximated with Monte-Carlo integration on the volume and a uniform dense sampling based on the occupancy function σ. Consequently, the more points we sample in the volume, the smaller µ we can choose, and the closer maximizing the volume integral of spherical neighborhood visibility gain gets to maximizing the surface coverage gain. 2.2 Architecture To approximate the volumetric integral in Equation 6 for any camera pose c, we need to compute χc as well as function gHc . In this regard, we need to compute both the occupancy map and the visibility gains of points for any camera pose. Since the environment is not perfectly known, we predict each one of these functions with a dedicated neural module. The first module takes as input the partial point cloud gathered by the depth sensor to predict the occupancy probability distribution. The second module takes as input a camera pose, a feature representing camera history as well as a predicted sequence of occupied points located in the camera field of view to predict visibility gains. Predicting the occupancy probability field σ̂. The occupancy function σ is not known perfectly in practice. To represent occupancy, most volumetric NBV methods rely on memory-heavy representations (like an occupancy 3D-grid or a volumetric voxelization), that are generally less efficient for encoding fine details and optimizing dense reconstructions, and will necessarily downgrade the resolution compared to a point cloud directly sampled on the surface. To address this issue while still working with a volumetric representation of the scene, we use a deep implicit function to encode the 3D mapping of occupancy efficiently. Such a function has a virtually infinite resolution, and prevents us from saving a large 3D grid in memory. We thus approximate σ with the first module of our model, which consists of a neural network σ̂ : P(R3) ×R3 → [0, 1] that takes as inputs a partial surface point cloud PH ⊂ R3 and query points x ∈ R3, and outputs the occupancy probability σ̂(PH ;x) of x. PH is obtained by merging together all depth maps previously captured from cameras in H . As shown in Figure 3, rather than using a direct encoding of the global shape of PH as input to σ̂, we take inspiration from [9] to achieve scalability and encode PH using features computed from the points’ neighborhoods. The difference with [9] is that we rely on a multiscale approach: For a given query 3D point x, these features are computed from the k nearest neighbors of x computed at different scales. For each scale s, we downsample point cloud PH around x into a sparser point cloud P (s)H before recomputing the nearest neighbors p (s) i (x), i = 1, ..., k of x: In this way, the size of the neighborhood increases with scale. Next, for each value of s, we use small attention units [26, 11] on the sequence of centered neighborhood points (p(s)1 (x)− x, ..., p (s) k (x)− x) and apply pooling operations to encode each sequence of k neighbors into a single feature that describes the local geometry for the corresponding scale. We finally concatenate these different scale features with another uncentered global feature as well as the query point x, and feed them to an MLP to predict the occupancy probability. The last global feature aims to provide really coarse information about the geometry and the location of x in the scene. 
This model scales well to large scenes: Adding points from distant views to the current partial point cloud does not change the local state of the point cloud. To avoid computing neighborhoods on the entire point cloud when reconstructing large scenes, we partition the space into cells in which we store the points in PH . Given a query point x, we only use the neighboring cells to compute p (s) i (x). Predicting the visibility gain gHc . To maximize surface coverage gain, we need to compute the volumetric integral of visibility gain functions gHc . We do this again by Monte Carlo sampling, however, in unknown environments we cannot compute explicitly occlusions to derive visibility gain functions gHc since the geometry, represented as a point cloud, is partially unknown and sparser than a true surface. We thus train the second module of our model to predict visibility gain functions by leveraging a self-attention mechanism that helps to estimate occlusion effects in the point cloud PH . In particular, for any camera pose c ∈ C and 3D point x ∈ χc, the second module derives its prediction of visibility gains from three core features: (i) The predicted probability σ̂(P ;x) of x to be occupied, (ii) the occlusions on x by the subvolume χc and (iii) the camera history H . To feed all this information to our model in an efficient way, we follow the pipeline presented in Figure 4. The model starts by using the predicted occupancy probability function σ̂ to sample 3D points in the volume χ. These samples will be used for Monte Carlo integration. We refer to these points as proxy points as we use them to encode the volume in the camera field of view, i.e., in a pyramidal frustum view. We write χ̂ as the discrete set of sampled proxy points, and χ̂c as the set of proxy points located in the field of view of the camera c. We first encode these proxy points individually by applying a small MLP on their 3D coordinates and their occupancy probability value concatenated together. Then, our model processes the sequence of these encodings with a self-attention unit to handle occlusion effects of subvolume χc on every individual point. Note there is no pooling operation on the output of this unit: The network predicts per-point features and does not aggregate predictions, since we do it ourselves with Monte Carlo integration. Next, for each proxy point x ∈ χ̂ , we compute an additional feature hH(x) that encodes the history of camera positions H with respect to this point as a spherical mapping: It consists in the projection on a sphere centered on x of all camera positions for which x was in the field of view. These features are concatenated to the outputs of the self-attention unit. Our model finally uses an MLP on these features to predict the entire visibility gain functions of every point x as a vector of coordinates in the orthonormal basis of spherical harmonics. With such a formalism, the model is able to compute visibility gains for points inside a subvolume in all directions with a single forward pass. In this regard, we avoid unnecessary computation and are able to process a large number of cameras in the same time when they share the same proxy points in their field of view (e.g., reconstruction of a single object centered in the scene, where χ̂c = χ̂ for all c, or when several cameras observe the same part of the 3D scene, i.e., χ̂c = χ̂′ ⊂ χ̂ for several c). 
Formally, if we denote by Y_l^m : S^2 → R the real spherical harmonic of rank (l, m) and by φ_l^m(χ̂_c; x, h_H(x)) the predicted coordinate of rank (l, m) for proxy point x ∈ χ̂_c, with attention over the subset χ̂_c and camera history feature h_H(x), the visibility gain of point x in direction d ∈ S^2 is defined as

∑_{l,m} φ_l^m(χ̂_c; x, h_H(x)) · Y_l^m(d),   (7)

so that the coverage gain G_H(c) of a camera pose c ∈ C is proportional to

I_H(c) := (1 / |χ̂|) ∑_{x ∈ χ̂} 1_{χ̂_c}(x) ∑_{l,m} φ_l^m(χ̂_c; x, h_H(x)) · Y_l^m((x − c_pos) / ‖x − c_pos‖_2).   (8)

The next best view among several camera positions is finally defined as the camera pose c* with the highest value of I_H(c*). Equation 8 is a Monte Carlo approximation of the volumetric integral in Equation 6, where the occupancy map and the visibility gains are predicted with neural networks. We choose to use a Monte Carlo integral rather than a neural aggregator because this approach is simple and fast, makes training more stable, performs well, is more interpretable, and can handle sequences of arbitrary size. In particular, it implicitly encourages our model to compute meaningful visibility gains for each point since there is no asymmetry between the points. We also use spherical harmonics to encode the camera history feature h_H(x) of each point x, which makes this input homogeneous with the predicted output. Consequently, this input comes at the end of the architecture, and aims to adapt the visibility gain according to previous camera positions. This convenient representation, inspired by [32], allows us to handle free camera motion; on the contrary, several models in the literature encode the camera directions on a discrete sphere [33, 16, 25]. Training. We train the occupancy probability module alone with a Mean Squared Error loss against the ground-truth occupancy map. We do not compute ground-truth visibility gains to train the second module since it would make computation more difficult and require further assumptions: We supervise directly on I_H(c) by comparing it to the ground-truth surface coverage gain for multiple cameras, with softmax normalization and a Kullback-Leibler divergence loss. Extensive details about the training of our modules and the choices we made are given in the appendix. 3 Experiments As discussed in the introduction, deep learning-based NBV models for dense reconstruction are currently limited to single, small-scale, centered object reconstruction. To compare SCONE to these previous methods in this context, we first constrain the camera pose to lie on a sphere centered on an object. We then introduce our dataset made of 13 large-scale 3D models that we created to evaluate SCONE on free camera motions (3D models courtesy of Brian Trepanier, Andrea Spognetta, and 3D Interiors, under CC License; all models were downloaded from the website Sketchfab). 3.1 Next Best View for Single Object Reconstruction We first compare the performance of our model to the state of the art on a subset of the ShapeNet dataset [2] introduced in [33] and following the protocol of [33]: We sample 4,000 training meshes from 8 specific categories of objects, 400 validation meshes and 400 test meshes from the same categories, and 400 additional test meshes from 8 categories unseen during training. The evaluation on the test datasets consists of 10-view reconstructions of single objects. Given a mesh in the dataset, camera positions are discretized on a sphere.
We start the reconstruction process by selecting a random camera pose, then we iterate NBV selection 9 times in order to maximize coverage with a sequence of 10 views in total. The evaluation metric is the area under the curve (AUC) of surface coverage throughout the reconstruction. This criterion not only evaluates the quality of final surface coverage, but also the convergence speed toward a satisfying coverage. Results are presented in Table 1. Further details about the evaluation, the estimation of ground truth surface coverage gains, and the metric computation are available in the appendix. 3.2 Active View Planning in a 3D Scene To evaluate the scalability of our model to large environments as well as free camera motion in 3D space, we also conducted experiments using a naive planning algorithm that incrementally builds a path in the scene: We first discretize the camera poses in the scene on a 5D grid, corresponding to coordinates cpos = (xc, yc, zc) of the camera as well as the elevation and azimuth to encode rotation crot. The number of poses depends on the dimensions of the bounding box of the scene. This box is an input to the algorithm, as a way for the user to tell which part of the scene should be reconstructed. In our experiments, the number of different poses is around 10,000. Note we did not retrain our model for such scenes and use the previous model trained on ShapeNet as explained in Section 3.1. The depth sensor starts from a random pose (from which, at least, a part of the main structure to reconstruct is visible). Then, at each iteration, our method estimates the coverage gain of direct neighboring poses in the 5D grid, and selects the one with the highest score. The sensor moves to this position, captures a new partial point cloud and concatenates it to the previous one. Since we focus on online path planning for dense reconstruction optimization and designed our model to be scalable with local neighborhood features, the size of the global point cloud can become very large. We iterate the process either 100 times to build a full path around the object in around 5 minutes on a single Nvidia GTX1080 GPU, and recovered a partial point cloud with up to 100,000 points. Contrary to single, small-scale object reconstruction where the object is always entirely included in the field of view, we simulated a limited range in terms of field of view and depth for the depth sensor, so that the extent of the entire scene cannot be covered in a single view. Thus, taking pictures at long range and going around the scene randomly is not an efficient strategy; the sensor must find a path around the surface to optimize the full coverage. We use a ray-casting renderer to approximate a real Lidar. More details about the experiments can be found in the appendix. We compared the performance of SCONE with simple baselines: First, a random walk, which chooses neighboring poses randomly with uniform probabilities. Then, we evaluated an alternate version of our model, which we call SCONE-Entropy, which leverages the first module of our full model to compute occupancy probability in the scene, then selects the next best view as the position that maximizes the Shannon entropy in its field of view. The comparison is interesting, since SCONEEntropy adapts classic NBV approaches based on information theory to a deep learning framework. Figures 5 and 6 provide qualitative results. 
Figure 7 shows the convergence speed of covered surface by SCONE and our two baselines, averaged on all scenes of the dataset. Despite being trained only on ShapeNet 3D models, our method is able to compute meaningful paths around the structures and consistently reach satisfying global coverage. Since we focused on building a metric for NBV computation, this experiment is simple and does not implement any further prior from common path planning strategies: For instance, we do not compute distant NBV, nor optimal paths to move from a position to a distant NBV with respect to our volumetric mapping. There is no doubt the model could benefit from further strategies inspired by the path planning literature. 3.3 Ablation Study We now provide an ablation study for both modules of our full model: The prediction of occupancy probability, and the computation of coverage gain from predictions about visibility gains. To evaluate both modules separately, we compared their performance on their training critera: MSE for occupancy probability, and softmax followed by KL-Div on a dense set of cameras for coverage gain. The values are reported in Figure 8. We provide extensive details and analysis in the appendix. Occupancy probability. Apart from the base architecture presented in Figure 3, we trained the first module under two additional settings. First, we removed the multi-scale neighborhood features computed by downsampling the partial point cloud PH , and trained the module to predict an occupancy probability value σ̂ only from global feature g(PH) and encoding f(x). As anticipated, the lack of neighborhood features not only prevents the model from an efficient scaling to large scenes, but also causes a huge loss in performance. We also trained a slightly more complex version of the module by feeding it the camera history harmonics hH(x) as an additional feature. It appears this helps the model to increase its confidence and make better predictions, but the gains are quite marginal. Visibility gain. We trained two variations of our second module. First, we completely removed the geometric prediction: We use directly the surface points as proxy points, mapped with an occupancy probability equal to 1. As a consequence, the model suffers from a significant loss in performance to compute coverage gain from its predicted visibility gain functions. Thus, we can confirm that the volumetric framework improves the performance of SCONE. This result is remarkable, since surface representations usually make better NBV predictions for dense reconstruction of detailed objects in literature. On the contrary, we show that our formalism is able to achieve higher performance by leveraging a volumetric representation with a high resolution deep implicit function σ̂. We trained a second variation without the spherical mappings hH of camera history. We verified that such additional features increase performance, especially at the end of a 3D reconstruction. 4 Limitations and Conclusion Beyond the prediction of the Next Best View, our method SCONE is able to evaluate the value of any camera pose given a current estimate of the scene volume. We demonstrated this ability with a simple path planning algorithm that looks for the next pose around the current pose iteratively. However: • Like current methods, SCONE relies on a depth sensor and assumes that the captured depth maps are perfect. In practice, such a sensor is not necessarily available and can also be noisy. 
It would be very interesting to extend the approach to color cameras. • Also like current methods, it is limited to the prediction of the camera pose for the next step. This is greedy, non-optimal as multiple poses are required anyway for a complete 3D reconstruction. Thanks to its scalability, we believe that our method could be extended to the prediction of the “Next Best Path”, where future camera poses would be predicted jointly for their complementarity. Acknowledgements This work was granted access to the HPC resources of IDRIS under the allocation 2022-AD011013387 made by GENCI. We thank Renaud Marlet, Elliot Vincent, Romain Loiseau, and Tom Monnier for inspiring discussions and valuable feedback.
1. What is the focus and contribution of the paper on next-best-view prediction during three-dimensional surface reconstruction using depth sensors? 2. What are the strengths of the proposed approach, particularly in its scalability and ability to handle large-scale structure-from-motion reconstructions? 3. What are the weaknesses of the paper, especially regarding the theoretical framework's connection to practice and the method's ability to tackle arbitrary geometry and large scale models? 4. How does the method deal with the problem of point visibility prediction, and what are the challenges associated with it? 5. What are the limitations of the current method in terms of accuracy, and how could they be addressed in future work?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper proposes a method for next-best-view prediction during three-dimensional surface reconstruction using depth sensors. A characteristic feature of the method is its scalability: most of the state-of-the-art methods are able to work only with artificial models from the ShapeNet dataset of very limited dimensions, while the proposed method is aiming to tackle large-scale structure-from-motion reconstructions such as the one of Colosseum. The paper introduces a formal theoretical framework explaining that it is possible to estimate surface visibility metrics by sampling points in the volume around the surface, not necessarily strictly on the surface. The proposed method build on this framework. It is evaluated on ShapeNet and on a proposed dataset of large-scale models. Strengths And Weaknesses The paper develops a new theoretical framework, that explains an approach chosen by the authors to sample points for surface visibility prediction. In particular, the paper shows that it is possible to sample points in the volume around the surface rather than on the surface directly. The paper proposes to use two models: one for occupancy prediction, and another one for point visibility estimation for a particular camera viewpoint. The first model for occupancy prediction is elegant and scalable, reminding of fully convolutional models for images, but working with point clouds. It shows an approach to build spatially localized models for occupancy prediction effectively working with local sub-sets of input points. The approach with decomposing the problem into two models seems to be the key to achieving scalability of prediction. It is a distinct step forward in this field, because previous models assumed small object-like structures and simply predicted probabilities of choosing the next view from a pre-defined set of viewpoints evenly distributed on a sphere. The new method is significantly more powerful both in tackling arbitrary geometry and large scale models. One of interesting questions possibly weakening the paper is that it is rather uncommon to use depth sensors for reconstructing large-scale objects. Usually people focus on Structure-from-motion or photogrammetry tools in these cases, and these tools rely on stereo algorithms rather than on the depth sensors. However, one may argue that a stereo method applied to a pair of images can be understood as a depth sensor. In general, the theoretical framework developed in the paper is concerned with point sampling. However, when the method itself is explained, a particular approach to point sampling used in the method is not described well. This way, the theory leaves an impression of being rather disconnected from practice in this particular paper. I see a conceptual difficulty embedded in a formula (8). However, the formula (8) defines dependency of point visibility prediction on point-camera direction. First of all, camera orientation does not enter the formula at all. It is unclear how to find the best orientation of the camera. Next, an infinite number of camera viewpoints lying on a ray have the same camera-point direction. It is unclear which viewpoint should one choose. Importantly, occupancy status of the point with respect to the cameras lying on this ray may change. One may move along the ray further away, and if the ray intersects some surface, then the point will become occluded starting from that intersection. This seems not to be addressed by the current model. 
The method, as I see it, is based on a significant simplification: it downscales the space of camera viewpoints from the 5-dimensional space down to two dimensions. It would be very interesting to clarify this point. It remains unclear, what kind of input does the method expect. Sometimes, we see a reference to a ‘depth sensor’. In the ShapeNet experiment, it remains unclear, what is given as input. I would really like to see some clarification in the paper, what particular type of depth sensor should to be used with this method. Questions L67: Could you please explain the meaning of “volumetric methods are less efficient ..because they dilute information in the 3D space” L75 this maps is not -> this map could be Rephrase introduction and abstract to emphasize the input precisely (e.g., ‘a method taking a point cloud of points observed so far as input’). Right now the introduction only tells about a probabilistic occupancy map, referring that the perfect map is not known at run-time, but not describing, what is exactly considered to be known. L78 ‘predict unknown geometry’ - What do these words refer to? L93: Maybe rename ‘knowledge factor’ to ‘knowledge indicator’ as it can only be equal to 0 or 1. In (4) dx -> dx_0 L122: the volume is supposed to be opaque - what property exactly is expected here, in more precise terms? L177: How does the model sample 3D points using the predicted occupancy probability function? In experiment 3.1, how are the partial point clouds generated? L256-257: how can ray tracing be used to render a point cloud? What is done to make this rendering approximate specifically the LIDAR sensor, but not time-of-flight or active stereo depth sensors? L292: ‘Model suffers ... to compute coverage gain’ - does it mean that the second model suffers to predict coverage gain from such a set of points? Limitations The prediction of coverage gain is done in the 2D camera viewpoint space rather than in the 5D space The input to the method is not clearly formalized It is difficult to judge about the limitations of the current method in terms of accuracy. Would be nice to illustrate some cases when the method does not perform well, and reason about why it happens so. Would be nice to see how accurate the occupancy prediction model is, how far can it extrapolate geometry, as I think it is the main property this model should have. The second model should decide where can it expect to see more geometry that was previously unseen. Essentially it means that the first model should do occupancy map completion. How well does the model perform in this taskm is a good question for an additional ablation study.
NIPS
Title LISA: Learning Interpretable Skill Abstractions from Language Abstract Learning policies that effectively utilize language instructions in complex, multitask environments is an important problem in sequential decision-making. While it is possible to condition on the entire language instruction directly, such an approach could suffer from generalization issues. In our work, we propose Learning Interpretable Skill Abstractions (LISA), a hierarchical imitation learning framework that can learn diverse, interpretable primitive behaviors or skills from language-conditioned demonstrations to better generalize to unseen instructions. LISA uses vector quantization to learn discrete skill codes that are highly correlated with language instructions and the behavior of the learned policy. In navigation and robotic manipulation tasks, LISA outperforms a strong non-hierarchical Decision Transformer baseline in the low data regime and is able to compose learned skills to solve tasks containing unseen long-range instructions. Our method demonstrates a more natural way to condition on language in sequential decision-making problems and achieve interpretable and controllable behavior with the learned skills. 1 Introduction Intelligent machines should be able to solve a variety of complex, long-horizon tasks in an environment and generalize to novel scenarios. In the sequential decision-making paradigm, provided expert demonstrations, an agent can learn to perform these tasks via multi-task imitation learning (IL). As humans, it is desirable to specify tasks to an agent using a convenient, yet expressive modality and the agent should solve the task by taking actions in the environment. There are several ways for humans to specify tasks to an agent, such as task IDs, goal images, and goal demonstrations. However, these specifications tend to be ambiguous, require significant human effort, and can be cumbersome to curate and provide at test time. One of the most natural and versatile ways for humans to specify tasks is via natural language. The goal of language-conditioned IL is to solve tasks in an environment given language-conditioned trajectories at training time and a natural language instruction at test time. This becomes challenging when the task involves completing several sub-tasks sequentially, like the example shown in Figure 1. A crucial step towards solving this problem is exploiting the inherent hierarchical structure of natural language. For example, given the task specification “pull the handle and move black mug right”, we can split it into learning two independent primitive behaviors or skills, i.e. “pull the handle” and “move black mug right”. If we are able to decompose the problem of solving these complex tasks into learning skills, we can re-use and compose these learned skills to generalize to unseen tasks in the future. This is especially useful in the low-data regime, since we may not see all possible tasks given the limited dataset, but may see all the constituent sub-tasks. Using such hierarchical learning, we can utilize language effectively and learn skills as the building blocks of complex behaviors. Utilizing language effectively to learn skills is a non-trivial problem and raises several challenges.
(i) The process of learning skills from language-conditioned trajectories is unsupervised as we may not have knowledge about which parts of the trajectory corresponds to each skill. (ii) We need to ensure that the learned skills are useful, i.e. encode behavior that can be composed to solve new tasks. (iii) We would like the learned skills to be interpretable by humans, both in terms of the language and the behaviours they encode. There are several benefits of interpretability. For example, it allows us to understand which skills our model is good at and which skills it struggles with. In safety critical settings such as robotic surgery or autonomous driving, knowing what each skill does allows us to pick and choose which skills we want to run at test time. It also provides a visual window into a neural network policy which is extremely desirable [54]. There have been prior works such as [38, 47, 13] that have failed to address these challenges and condition on language in a monolithic fashion without learning skills. As a result, they tend to perform poorly on long-horizon composition tasks such as the one in Figure 1. To this end, we propose Learning Interpretable Skill Abstractions from language (LISA), a hierarchical imitation learning framework that can learn interpretable skills from language-conditioned offline demonstrations. LISA uses a two-level architecture – a skill predictor that predicts quantized skills from a learnt vector codebook and a policy that uses these skill vector codes to predict actions. The discrete skills learned from language are interpretable (see Figure 2 and 4) and can be composed to solve long-range tasks. Using quantization maximizes skill reuse and enforces a bottleneck to pass information from the language to the policy, enabling unsupervised learning of interpretable skills. We perform experiments on grid world navigation and robotic manipulation tasks and show that our hierarchical method can outperform a strong non-hierarchical baseline based on Decision Transformer [11] in the low-data regime. We analyse these skills qualitatively and quantitatively and find them to be highly correlated to language and behaviour. Finally, using these skills to perform long-range composition tasks on a robotic manipulation environment results in performance that is nearly 2x better than the non-hierarchical version. Concretely, our contributions are as follows: • We introduce LISA, a novel hierarchical imitation framework to solve complex tasks specified via language by learning re-usable skills. • We demonstrate the effectiveness of our approach in the low-data regime where its crucial to break down complex tasks to generalize well. • We show our method performs well in long-range composition tasks where we may need to apply multiple skills sequentially. • We also show that the learned skills are highly correlated to language and behaviour and can easily be interpreted by humans. 2 Related Work 2.1 Imitation Learning Imitation learning (IL) has a long history, with early works using behavioral cloning [41–43] to learn policies via supervised learning on expert demonstration data. Recent methods have shown significant improvements via learning reward functions [21] or Q-functions [18] from expert data to mimic expert behavior. Nevertheless, these works typically consider a single task. 
An important problem here is multi-task IL, where the imitator is trained to mimic behavior on a variety of training tasks with the goal of generalizing the learned behaviors to test tasks. A crucial variable in the multi-task IL setup is how the task is specified, e.g., vectorized representations of goal states [37], task IDs [24], and single demonstrations [56, 14, 16, 57]. In contrast, we focus on a multi-task IL setup with task specification through language, one of the most natural and versatile ways for humans to communicate desired goals and intents.

2.2 Language Grounding
Several prior works have attempted to ground language with tasks or use language as a source of instructions for learning tasks, with varying degrees of success [32, 55, 4, 39, 5]. [27] is a good reference for works combining language with sequential decision-making. But apart from a few exceptions, most algorithms in this area use the language instruction in a monolithic fashion and are designed to work for simple goals that require the agent to demonstrate a single skill [40, 9, 20, 6], or for tasks where each constituent sub-goal has to be explicitly specified [10, 46, 34, 3, 52, 50, 17, 35, 31]. Some recent works have shown success using play data [28] or pseudo-expert data, such as LOReL [38] and CLIPORT [47]. LOReL and CLIPORT are not hierarchical techniques. [28] can be interpreted as a hierarchical technique that generates latent sub-goals as a function of goal images, language instructions and task IDs, but the skills learned by LISA are purely a function of language and states alone and do not require goal images or task IDs. [23, 22] and [49] are examples of works that use a two-level architecture for language-conditioned tasks, but none of these methods learns skills that are interpretable.

2.3 Latent Models and Hierarchical Learning
Past works have attempted to learn policies conditioned on latent variables, and some of them can be interpreted as hierarchical techniques. For example, [15] learns skills using latent variables that visit different parts of the environment’s state space. [45] improved on this by learning skills that were more easily predictable using a dynamics model. But these fall more under the category of skill discovery than hierarchical techniques, since the skill code is fixed for the entire trajectory, as is the case with [15]. [29] and [26] are other works that use a latent-variable approach to IL, but these approaches do not necessarily learn a latent variable with the intention of breaking down complex tasks into skills. With LISA, we sample several skills per trajectory, with the clear intention that each skill corresponds to completing a sub-task of the whole trajectory. Also, none of the methods mentioned here condition on language. There has been some work on hierarchical frameworks for RL that learn high-level action abstractions, called options [51], such as [25, 58, 36], but these works are not goal-conditioned. Unlike LISA, they do not use language, and the learned options may lack diversity and fail to correspond to any concrete or interpretable skills. Furthermore, none of them use the VQ technique to learn options, and they often suffer from training instabilities.
These codes enable learning explainable and controllable behaviour, as shown in Fig. 1 and Fig. 4. Section 3.1 describes the problem formulation, gives an overview of our framework, and presents our language-conditioned model. Section 3.2 provides details on the training approach.

3.1 Language-conditioned Skill Learning

3.1.1 Problem Setup
We consider general multi-task environments, represented as a task-augmented Markov decision process (MDP) with a family of different tasks $\mathcal{T}$. A task $T_i$ may be composed of other tasks in $\mathcal{T}$ and encode multiple sub-goals. For example, in a navigation environment, a task could be composed of two or more sub-tasks – “pick up ball”, “open door” – in any hierarchical order. $\mathcal{S}$ and $\mathcal{A}$ represent the state and action spaces. We assume that each full task has a single natural language description $l \in L$, where $L$ represents the space of language instructions. Any sub-goals for the task are encoded within this single language instruction. We assume access to an offline dataset $\mathcal{D}$ of trajectories obtained from an optimal policy for a variety of tasks in an environment, with only their language description available. Each trajectory $\tau^i = (l^i, \{(s^i_1, a^i_1), (s^i_2, a^i_2), \ldots, (s^i_T, a^i_T)\})$ consists of the language description and the observations $s^i_t \in \mathcal{S}$ and actions $a^i_t \in \mathcal{A}$ taken over $T$ timesteps. The trajectories are not labeled with any rewards. Our aim is to predict the expert actions $a_t$ given a language instruction and past observations. Note that each trajectory in the training dataset can comprise any number of sub-tasks. For example, we could have a trajectory to “open a door” and another to “pick up a ball and close the door” in the training data. With LISA we aim to solve the task “open a door and pick up the ball” at test time, even though we haven’t seen this task at training time. In a trajectory with multiple sub-tasks, the training dataset does not tell us where one sub-task ends and another begins. LISA must learn how to identify and stitch together these sub-tasks learned during training in order to solve a new language instruction, such as the one shown in Fig. 1, at test time.

3.1.2 Hierarchical Skill Abstractions
We visualize the working of LISA in Figure 3. Our framework consists of two modules: a skill predictor $f : L \times \mathcal{S} \rightarrow \mathcal{C}$ and a policy $\pi : \mathcal{S} \times \mathcal{C} \rightarrow \mathcal{A}$. Here, $\mathcal{C} = \{z_1, \ldots, z_K\}$ is a learnable codebook of $K$ quantized skill latent codes, and $D$ is the dimension of the skill latent space. Our key idea is to break learning behavior from language into two stages: 1) learn discrete latent codes $z$, representing skills, from the full language instruction to decompose the task into smaller sub-goals; 2) learn a policy $\pi$ conditioned only on these discrete codes. In LISA, both stages are trained end-to-end. Given an input $\tau = (l, \{s_t, a_t\}_{t=1}^T)$, the skill predictor $f$ predicts a skill code $\tilde{z} \in \mathbb{R}^D$ at timestep $t$ as $\tilde{z} = f(l, (s_t, s_{t-1}, \ldots))$. These codes are discretized using a vector quantization operation $q(\cdot)$ that maps a latent $\tilde{z}$ to its closest codebook entry $z = q(\tilde{z})$. The quantization operation $q(\cdot)$ helps in learning discrete skill codes and acts as a bottleneck on passing language information; we detail its operation in Sec. 3.2. The chosen skill code $z$ is persisted for $H$ timesteps, where $H$ is called the horizon. More details on how we chose the horizon, and ablation studies on the choice of $H$, can be found in appendix sections D and F.1. After $H$ timesteps, the skill predictor is invoked again to predict a new skill.
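To make this two-level inference concrete, the following minimal Python sketch (our own illustration, not code from the paper) shows how a trained skill predictor and policy could be rolled out at test time; skill_predictor, policy, quantize and env are hypothetical stand-ins for the components described above.

# Illustrative rollout of LISA's hierarchy: re-select a skill every `horizon` steps,
# and let the low-level policy act conditioned on the active skill code.
def lisa_rollout(env, instruction, skill_predictor, policy, quantize, horizon, max_steps=100):
    states = [env.reset()]
    actions = []
    z = None
    for t in range(max_steps):
        if t % horizon == 0:
            # The skill predictor sees the instruction and the states observed so far;
            # its output is snapped to the nearest codebook entry by `quantize`.
            z = quantize(skill_predictor(instruction, states))
        # The policy sees only the active skill code and the last `horizon` states.
        action = policy(z, states[-horizon:])
        state, done = env.step(action)
        states.append(state)
        actions.append(action)
        if done:
            break
    return states, actions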
Persisting each skill for $H$ timesteps in this way forces it to act as a temporal abstraction over actions, i.e. an option [51]. The policy $\pi$ predicts the action $a_t$ at each timestep $t$ conditioned on the state and the single skill code $z$ that is active at that timestep. For $\pi$ to correctly predict the original actions, it needs to use the language information encoded in the skill codes. LISA learns quantized skill codes in a vector codebook instead of continuous embeddings, as this encourages reusing and composing these codes together to pass information from the language input to the actual behavior. Our learnt discrete skill codes add interpretability and controllability to the policy’s behavior.

3.2 Training LISA

Learning Discrete Skills. LISA uses Vector Quantization (VQ), inspired by [53]. It is a natural and widely-used method to map an input signal to a low-dimensional discrete learnt representation. VQ learns a codebook $\mathcal{C} = \{z_1, \ldots, z_K\}$ of $K$ embedding vectors. Given an embedding $\tilde{z}$ from the skill predictor $f$, it maps the embedding to the closest vector in the codebook:
$$z = q(\tilde{z}) := \arg\min_{z_k \in \mathcal{C}} \|\tilde{z} - z_k\|_2,$$
with the codebook vectors updated to be the moving average of the embeddings $\tilde{z}$ closest to them. This can be classically seen as learning $K$ cluster centers via k-means [19]. Backpropagation through the non-differentiable quantization operation is achieved by a straight-through gradient estimator, which simply copies the gradients from the decoder to the encoder, such that the model and codebook can be trained end-to-end. VQ enforces each learnt skill $z$ to lie in $\mathcal{C}$, which can be thought of as learning $K$ prototypes or cluster centers for the language embeddings using the seen states. This acts as a bottleneck that efficiently decomposes a language instruction into sub-parts encoded as discrete skills.

LISA Objective. LISA is trained end-to-end using the objective $\mathcal{L}_{\text{LISA}} = \mathcal{L}_{\text{BC}} + \lambda \mathcal{L}_{\text{VQ}}$, where $\mathcal{L}_{\text{BC}}$ is the behavior-cloning loss on the policy $\pi_\theta$, $\lambda$ is the VQ loss weight, and $\mathcal{L}_{\text{VQ}}$ is the vector quantization loss on the skill predictor $f_\phi$, given as:
$$\mathcal{L}_{\text{VQ}}(f) = \mathbb{E}_\tau\left[\big\| \mathrm{sg}\!\left[q(\tilde{z})\right] - \tilde{z} \big\|_2^2\right] \quad (1)$$
with $\tilde{z} = f_\phi(l, (s_t, s_{t-1}, \ldots))$, where $\mathrm{sg}[\cdot]$ denotes the stop-gradient operation. $\mathcal{L}_{\text{VQ}}$ is also called the commitment loss. It minimizes the conditional entropy of the skill predictor embeddings given the codebook vectors, making the embeddings stick to a single codebook vector. The codebook vectors are learnt using an exponential moving average update, the same as in [53].

Avoiding language reconstruction. LISA avoids auxiliary losses for reconstructing language from the skill latent codes, and it is not immediately obvious why the skill codes should still properly encode language, so we expand on this here. For a given signal $X$ and a code $Z$, reconstructing the signal from the code as $\tilde{X} = f(Z)$ using a cross-entropy loss amounts to maximizing the Mutual Information (MI) $I(X, Z)$ between $X$ and $Z$ [1, 7]. In our case, we can write the MI between the skill codes and language using entropies as $I(z, l) = H(z) - H(z \mid l)$, whereas methods that attempt to reconstruct language apply the decomposition $I(z, l) = H(l) - H(l \mid z)$. Here $H(l)$, the entropy of the language instructions, is constant, and this gives us the cross-entropy loss. Thus we can avoid language reconstruction via cross-entropy loss by maximizing $I(z, l)$ directly.
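As a concrete illustration of the quantization step and commitment loss described above, here is a short PyTorch-style sketch (our own, not the authors' implementation); tensor shapes and the helper name are assumptions for illustration.

import torch

def vector_quantize(z_e, codebook):
    # z_e: (batch, D) embeddings from the skill predictor; codebook: (K, D) skill codes.
    dists = torch.cdist(z_e, codebook)   # pairwise L2 distances, shape (batch, K)
    idx = dists.argmin(dim=1)            # index of the nearest codebook entry
    z_q = codebook[idx]                  # quantized skill codes, shape (batch, D)
    # Straight-through estimator: the forward pass uses z_q, but gradients are
    # copied back to z_e as if the quantization were the identity map.
    z_q_st = z_e + (z_q - z_e).detach()
    # Commitment (VQ) loss: pull the predictor embeddings toward their codebook entries.
    vq_loss = torch.mean((z_e - z_q.detach()) ** 2)
    return z_q_st, idx, vq_loss

# Schematic combined objective; bc_loss is the policy's action-prediction loss and
# vq_weight corresponds to the weight lambda in L_LISA = L_BC + lambda * L_VQ.
#   loss = bc_loss + vq_weight * vq_loss
# The codebook itself would be updated with the exponential-moving-average rule
# mentioned above rather than by gradient descent.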
In LISA, $\mathcal{L}_{\text{VQ}}$ takes care of the $-H(z \mid l)$ term (minimizing $\mathcal{L}_{\text{VQ}}$ drives down $H(z \mid l)$), and we find there is no need to place a constraint on $H(z)$, as the learned skill codes are diverse and need to encode enough information to predict the correct actions.¹
¹In experiments, we tried enforcing a constraint on $H(z)$ by using an extra InfoNCE loss term but did not observe any gains.

Algorithm 1 Training LISA
Input: Dataset $\mathcal{D}$ of language-paired trajectories
Input: Number of skills $K$ and horizon $H$
1: Initialize skill predictor $f_\phi$, policy $\pi_\theta$
2: Vector quantization op $q(\cdot)$
3: while not converged do
4:   Sample $\tau = (l, \{s_0, s_1, \ldots, s_T\}, \{a_0, a_1, \ldots, a_T\})$
5:   Initialize $S = \{s_0\}$  ▷ List of seen states
6:   for $k = 0 \ldots \lfloor T/H \rfloor$ do  ▷ Sample a skill every $H$ steps
7:     $z \leftarrow q(f_\phi(l, S))$
8:     for step $t = 1 \ldots H$ do  ▷ Predict actions using a fixed skill and context length $H$
9:       $a_{kH+t} \leftarrow \pi_\theta(z, S[-H{:}])$  ▷ Condition on the last $H$ seen states
10:      $S \leftarrow S \cup \{s_{kH+t}\}$  ▷ Append seen state
11:    end for
12:    Train $f_\phi$, $\pi_\theta$ using objective $\mathcal{L}_{\text{LISA}}$
13:  end for
14: end while

Figure 4: Behavior with fixed LISA options. We show the word clouds and the behavior of the policy obtained by using a fixed skill code $z = 14$ for an entire episode. We find that this code encodes the skill “closing the drawer”, as indicated by the word cloud. The policy executes this skill with a high degree of success when conditioned on this code for the entire trajectory, across multiple environment initializations and seeds.

As a result, LISA can maximize the MI between the learnt skills and language without auxiliary reconstruction losses, enforcing only $\mathcal{L}_{\text{VQ}}$ on the skill codes. We empirically estimate the MI between the language and the skill codes and find that our experiments confirm this in Section 4.6.

3.2.1 LISA Implementation
LISA can be implemented using different network architectures, such as Transformers or MLPs. In our experiments, we use Transformer architectures with LISA, but we find that our method is effective even with simple architecture choices such as MLPs, as shown in appendix section F.5. Even when using Transformers for both the skill predictor and the policy network, our compute requirement is comparable to the non-hierarchical Flat Transformer policy, as we can get away with using fewer layers in each module.

Language Encoder. We use a pre-trained DistilBERT [44] encoder to generate language embeddings from the text instruction. We fine-tune the language encoder end-to-end and use the full language embedding for each word token, not a pooled representation of the whole text.

Observation Encoder. For image observations, we use convolution layers to generate embeddings. For simple state representations, we use MLPs.

Skill Predictor. The skill predictor network $f$ is implemented as a small Causal Transformer that takes in the language embeddings and the observation embeddings at each time step. The language embeddings are concatenated at the beginning of the observation embeddings before being fed into the skill predictor. The network applies a causal mask hiding the future observations.

Policy Network. Our policy network $\pi$ is also implemented as a small Causal Transformer, inspired by Decision Transformer (DT) [11]. However, unlike DT, our policy is not conditioned on any reward signal, but on the skill code. The sequence length of $\pi$ is the horizon $H$ of the skills, which is much smaller than the length of the full trajectory.
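A minimal sketch of the language-encoding step, using the HuggingFace transformers library and keeping one embedding per word-piece token rather than a pooled vector, might look as follows; the specific checkpoint name is our assumption, not stated in the paper.

from transformers import DistilBertTokenizerFast, DistilBertModel

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
encoder = DistilBertModel.from_pretrained("distilbert-base-uncased")

batch = tokenizer(["pull the handle and move black mug right"],
                  return_tensors="pt", padding=True)
outputs = encoder(**batch)
# One embedding per word-piece token, no pooling: shape (batch, num_tokens, 768).
lang_emb = outputs.last_hidden_state

The per-token embeddings can then be prepended to the observation embeddings before they enter the skill predictor, as described above.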
Flat Decision Transformer Baseline. Our flat baseline is based on DT and is implementation-wise similar to LISA, but without a skill predictor network. The policy here is a Causal Transformer, where we modify DT to condition on the language instruction embedding from a pre-trained DistilBERT text encoder instead of the future sum of returns. We found this baseline to be inefficient at handling long-range language instructions, needing sequence lengths of 1000 on complex environments such as BabyAI-BossLevel in our experiments. Since LISA has two transformers as opposed to just one in the flat baseline, we ensured that the baseline and our method had a similar number of total parameters. To this end, the flat baseline uses a Transformer with two self-attention layers, while LISA’s skill predictor and policy each use a Transformer with a single self-attention layer. We also ensured that the embedding dimension and the number of heads in each layer were exactly the same in both LISA and the flat baseline. Details are provided in appendix sections D.1 and D.2, respectively. In fact, one could argue that LISA has less representational power because the policy transformer can only attend to the last $H$ steps, while the flat baseline can attend to the entire trajectory, which is what makes it an extremely strong baseline. The flat baseline also uses the same pre-trained DistilBERT text encoder as LISA for handling natural language input.

4 Experiments

In this section, we evaluate LISA on grid-world navigation and robotic manipulation tasks. We compare the performance of LISA with a strong non-hierarchical baseline in the low-data regime. We then analyse our learnt skill abstractions in detail – what they represent, how we can interpret them, and how they improve performance on downstream composition tasks. For the sake of brevity, we present additional ablations in Appendix F, on doing manual planning with LISA skills (Section F.2), transferring learned skills to different environments (Section F.3), and learning continuous skills (Section F.6).

4.1 Datasets
Several language-conditioned datasets have been curated recently, such as [46, 48, 13, 38, 33, 2, 10, 12]. Nevertheless, many of these datasets focus on complex state representations and navigation in 3D environments, making it challenging to train on them and to qualitatively analyze the learned skills as in Fig. 4. We found BabyAI, a grid-world navigation environment, and LOReL, a robotic manipulation environment, to be two diverse test beds that are very different from each other and conducive to hierarchical skill learning as well as to detailed qualitative and quantitative analysis of our learned skills, and we use them for our experiments.

BabyAI Dataset. The BabyAI dataset [13] contains 19 levels of increasing difficulty, where each level is set in a grid world and the agent sees a partially observed, ego-centric view in a square of size 7x7. The agent must learn to perform various tasks of arbitrary difficulty, such as moving objects between rooms and opening or closing doors, all with a partially observed state and a language instruction. The language instructions for easy levels are quite simple but get exponentially more challenging for harder levels and contain several skills that the agent must complete in sequence (examples in appendix section C.1). The dataset provides 1 million expert trajectories for each of the 19 levels, but we use 0.1–10% of these trajectories to train our models.
We evaluate our policy on a set of 100 different instructions from the gym environment for each level, which contain a high percentage of unseen environment layouts and language instructions given the limited data we use for training. More details about this dataset can be found in Appendix C.1 and in the BabyAI paper.

LOReL Sawyer Dataset. This dataset [38] consists of pseudo-expert trajectories, or play data, collected from the replay buffer of a random RL policy and labeled with post-hoc crowdsourced language instructions. Hence, the trajectories complete the language instruction provided but are not necessarily optimal. Play data is inexpensive to collect [30] in the real world, and it is important for algorithms to be robust to such datasets as well. However, the randomness in the trajectories makes the dataset extremely difficult to use in a behavior cloning (BC) setting. Despite this, we are able to achieve good performance on this benchmark and learn some very useful skills. The LOReL Sawyer dataset contains 50k trajectories of length 20 on a simulated environment with a Sawyer robot. For our results in Table 1, we evaluate on the same set of 6 tasks as the original paper: close drawer, open drawer, turn faucet right, turn faucet left, move black mug right, move white mug down. We use two different settings – with robot state-space observations and with partially observed image observations. More details can be found in Appendix C.2 and in the LOReL paper.
∗ We optimized a language-conditioned BC model following the LOReL paper to the best of our abilities but could not get better performance.

4.2 Baselines

Original. These refer to the baselines from the original paper for each dataset. For BabyAI, we trained their non-hierarchical RNN-based method on different numbers of trajectories. Similarly, on LOReL we compare with the performance of language-conditioned BC. The original LOReL method uses a planning algorithm on a learned reward function to get around the sub-optimal nature of the trajectories. We found the BC baseline to be a fairer comparison, as LISA is trained using BC as well. Nonetheless, we compare with the original LOReL planner in Section 4.7 for composition tasks. LOReL results in Table 1 refer to the performance on the 6 seen instructions in the LOReL evaluation dataset, the same as those reported in the original paper.

Flat Baseline. We implement a non-hierarchical baseline using a language-conditioned Decision Transformer, denoted Lang DT, the details of which are in Section 3.2.1.

4.3 How does the performance of LISA compare with non-hierarchical baselines in the low-data regime?
We consider three levels from the BabyAI environment and the LOReL Sawyer environment. For BabyAI, we consider the GoToSeq, SynthSeq and BossLevel tasks, since they are challenging and require performing several sub-tasks one after the other. Since these levels contain instructions that are compositional in nature, when we train on limited data the algorithm must learn skills that compose into complex instructions in order to generalize well to unseen instructions at test time. Our experimental results are shown in Table 1. We train the models on randomly sampled sets of 1k, 10k and 100k trajectories from the full BabyAI dataset and on 50k trajectories for the LOReL dataset. We use more data from the LOReL dataset because of the sub-optimal nature of the trajectories.
On all the environments, our method is competitive with or outperforms the strong non-hierarchical Decision Transformer baseline. The gap grows larger as we reduce the number of trajectories trained on, indicating that our method is able to better leverage the common sub-task structures and glean more information from limited data. As expected, with larger amounts of training data it becomes hard to beat the flat baseline, since the model sees more compositions during training and can generalize better at test time [8]. As mentioned above, we evaluate on the same 6 seen instructions the original LOReL paper did. We also evaluate the performance on varying language instructions on LOReL, similar to the original paper, with additional results in Appendix E. We were pleasantly surprised that LISA is 2x better than the flat Lang-DT baseline on LOReL tasks, reaching a 40% success rate using partial image observations despite the sub-optimal nature of the data. One explanation for this is that the discrete skill codes are able to capture different ways of doing the same task, thereby allowing LISA to learn an implicit multi-modal policy. This is not possible with the flat version, as it has no way to compartmentalize these noisy trajectories and perhaps tends to overfit on the noisy data, leading to performance degradation.

4.4 What skills does LISA learn? Are they diverse?
To answer this question, we analyse the skills produced by LISA and the language tokens corresponding to each skill. We plot a heat map in Figure 5 corresponding to the correlation between the language tokens and skill codes. Here, we plot the map corresponding to the LOReL dataset. From the figure, we can see that certain skill codes correspond very strongly to certain language tokens and, by extension, tasks. We also see the sparse nature of the heat maps, which indicates that each skill corresponds to distinct language tokens. We also plot word clouds corresponding to four different options in the LOReL environment in Figure 6 and notice that different options are triggered by different language tokens. From the figure, it is clear that the skill on the top left corner corresponds to close the drawer and the skill on the top right corresponds to turn faucet left. Similar word clouds and heat maps for the BabyAI environments are in appendix section B.3.

4.5 Do the skills learned by LISA correspond to interpretable behavior?
We have seen that the different skills correspond to different language tokens, but do the policies conditioned on these skills behave according to those tokens? To understand this, we fix the skill code for the entire trajectory and run the policy, i.e. we shut off the skill predictor and always feed the same skill for the entire trajectory. As we can see from the word cloud and the corresponding trajectory in Figure 4, the behaviour for skill code 14 is exactly what we would infer from the language tokens in the word cloud – close the drawer. More such images and trajectories can be found in appendix section B.5.

4.6 Why do LISA's learned skills show such a strong correlation to language?
As mentioned in Section 3.2, the commitment loss from VQ acts as a way to increase the MI between the language and the skill codes during training. This allows the codes to be highly correlated with language without any reconstruction losses.
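As an aside, a simple plug-in estimate of this mutual information can be computed from co-occurrence counts of skill codes and instructions; the sketch below is our own illustration of one such estimator and is not taken from the paper.

import math
from collections import Counter

def plugin_mutual_information(codes, instructions):
    # codes: discrete skill index chosen for each sampled skill segment;
    # instructions: an id for the instruction that segment came from (paired lists).
    n = len(codes)
    count_z = Counter(codes)
    count_l = Counter(instructions)
    count_zl = Counter(zip(codes, instructions))
    mi = 0.0
    for (z, l), c in count_zl.items():
        # p(z, l) * log[ p(z, l) / (p(z) p(l)) ], using empirical probabilities.
        mi += (c / n) * math.log(c * n / (count_z[z] * count_l[l]))
    return mi  # in nats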
To analyze this, we plot the MI between the options and the language during training on BabyAI BossLevel with 1k trajectories; the plot can be seen in Figure 7. The plots show the MI increasing over training for a wide range of settings as we vary the number of skills and the horizon. In the ablation studies below, we report the success rate corresponding to each of these curves, and we notice that there is almost a direct correlation between increasing MI and task performance. This is very encouraging, since it clearly shows that the skills are encoding language and that this directly impacts the performance of the behavior-cloning policy.

4.7 Can we use the learned skills to perform new composition tasks?
To test our composition performance, we evaluate on LOReL composition tasks using images in Table 2. To this end, we handcraft 15 unseen composition instructions. We list these instructions in Appendix Table 5; one such example is “pull the handle and move black mug down”. We ran 10 different runs of each instruction across 3 different seeds. As we can see, our performance is nearly 2x that of the non-hierarchical baseline. We also compare with the original LOReL planner on these composition tasks and notice that we perform slightly better, despite the planner having access to a reward function and a dynamics model pre-trained on 1M frames while LISA is trained from scratch. Because of the compositional nature of the tasks, we set the maximum number of episode steps to 40 (from the usual 20) for all methods in these experiments. Note that the results in Table 1 also reflect compositional performance on the BabyAI dataset, as we train with 0.1%–10% of the data. When we evaluate on the gym environment, which can generate any possible language instruction from the BabyAI grammar, we may come across several unseen instructions at test time. To give a sense of the percentage of unseen instructions for BabyAI when evaluating on the gym environment, we take the different BabyAI environments and report the percentage of unseen language instructions encountered at test time for different training data regimes in Table 3. For each statistic, we sample 10,000 random instructions from the environment and check how many are unseen in the training dataset used, repeated over 3 different seeds.

4.8 How does LISA compare to simply doing K-means clustering on the language and state embeddings?
In LISA, the VQ approach can be seen as taking the concatenated language-state inputs and projecting them into a learned embedding space. VQ simply learns K embedding vectors that act as K cluster centers for the projected input vectors in this embedding space, while allowing for differentiability and thus learning through backpropagation. This is also similar to prototypical methods used for few-shot learning and amounts to a form of deep, differentiable clustering, giving an intuition for why it works. To compare against k-means, we construct a simple unsupervised learning baseline that clusters trajectories in the training dataset using k-means. Specifically, in the BabyAI BossLevel environment with 1k training trajectories, we take the concatenated language-state vectors for all trajectories in the dataset, cluster them using k-means, and use the assigned cluster centers as the skill codes. We then learn a policy using these skill codes to measure their efficacy, and find that LISA achieves a performance of 49.1 ± 2.4% while k-means achieves 20.2 ± 5.2% over 3 seeds.
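For reference, the k-means baseline just described could be set up roughly as follows; this is a hedged sketch in which the array of concatenated language-state vectors and the number of clusters are assumptions for illustration.

import numpy as np
from sklearn.cluster import KMeans

# Concatenated language and state embeddings, one row per timestep across all
# training trajectories (shape (N, D_lang + D_state)); assumed to be precomputed.
lang_state_vectors = np.load("lang_state_vectors.npy")

# n_clusters plays the role of the number of skill codes K (value chosen arbitrarily here).
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(lang_state_vectors)
skill_codes = kmeans.labels_  # fixed, non-learned "skill" assignment per timestep

# A policy would then be trained conditioned on these fixed cluster assignments,
# in place of LISA's jointly learned, differentiable codebook.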
Thus, we see that using the simple k-means skills is insufficient to learn a good policy for the BabyAI BossLevel task, as the skills are not representative enough of the language instructions. A reason for this is that the language and state vectors lie in different embedding spaces, and k-means based on Euclidean distance is not optimal on the concatenated vectors.

5 Limitations and Future Work

We present LISA, a hierarchical imitation learning framework that can be used to learn interpretable skill abstractions from language-conditioned expert demonstrations. We showed that the skills are diverse and can be used to solve long-range language tasks, and that our method outperforms a strong non-hierarchical baseline in the low-data regime. However, there are several limitations to LISA and plenty of scope for future work. One limitation of LISA is that there are several hyperparameters to tune that may affect performance, such as the number of options and the horizon for each option. It certainly helps to have a good idea of the task when choosing these hyperparameters, even though the ablations show that the method is fairly robust to these choices. It would also be useful to learn the horizon of each skill by learning a termination condition, and we leave this for future work. Although our method has been evaluated in the language-conditioned imitation learning setting, it is not difficult to modify it to work with image goals or demonstrations, and in the RL setting as well. It would be interesting to see whether the vector quantization trick can be used to learn goal-conditioned skills in a more general framework.

Acknowledgments and Disclosure of Funding

We are thankful to John Schulman, Chelsea Finn, Karol Hausman and Dilip Arumugam for initial discussions regarding our method, and to Suraj Nair for providing help with the LOReL baseline. This research was supported in part by NSF (#1651565, #1522054, #1733686), ONR (N00014-19-12145), AFOSR (FA9550-19-1-0024), FLI and Samsung.
1. What is the focus and contribution of the paper regarding hierarchical imitation learning?
2. What are the strengths of the proposed approach, particularly in its ability to learn low-level skills from instructions?
3. Do you have any concerns or questions regarding the methodology, such as the importance of sampling every H steps or the choice of using VQ-VAE?
4. How does the approach scale when increasing the number of skills in the codebook for the same task?
5. Are there any limitations to the length of the language instruction used?
Summary Of The Paper
The paper proposes a hierarchical imitation learning framework in which a language instruction is used as input and converted to a latent code. This latent code is then used to condition the policy and guide it to the target.

Strengths And Weaknesses
The paper is nicely written and the main arguments are clearly explained. The figures, however, are a bit hard to discern due to their small size. The proposed approach, LISA, is one of the first imitation learning methods that learns low-level skills from a set of instructions. The results show that the agents are able to outperform the baselines, and the word clouds depict the different skills associated with different latent variables.

Questions
The approach seems to be a simple extension of various works on unsupervised skill discovery, such as DIAYN, RVIC, etc., the difference here being that the latent codes are not randomly initialized but instead learned through a VQ-VAE. I am curious, though, how LISA would perform if all the instructions and the resulting codes were given at once rather than sampled every H steps. Is it the sampling every H steps that is important to the resulting behavior? Is the choice of using a VQ-VAE to produce discrete codes arbitrary? How about using approximate backpropagation techniques such as Gumbel-Softmax? Would this change the result? Did the authors look into the effective number of skills used? This would help highlight that all codes in the codebook are indeed useful and have some relationship with linguistic content.

Limitations
The authors discuss the limitations of the work in the last section. Besides tuning hyperparameters, is there any limitation on the length of the language instruction used? How does the approach scale when we increase the number of skills in the codebook for the same task?
Title LISA: Learning Interpretable Skill Abstractions from Language Abstract Learning policies that effectively utilize language instructions in complex, multitask environments is an important problem in sequential decision-making. While it is possible to condition on the entire language instruction directly, such an approach could suffer from generalization issues. In our work, we propose Learning Interpretable Skill Abstractions (LISA), a hierarchical imitation learning framework that can learn diverse, interpretable primitive behaviors or skills from languageconditioned demonstrations to better generalize to unseen instructions. LISA uses vector quantization to learn discrete skill codes that are highly correlated with language instructions and the behavior of the learned policy. In navigation and robotic manipulation tasks, LISA outperforms a strong non-hierarchical Decision Transformer baseline in the low data regime and is able to compose learned skills to solve tasks containing unseen long-range instructions. Our method demonstrates a more natural way to condition on language in sequential decision-making problems and achieve interpretable and controllable behavior with the learned skills. N/A 1 Introduction Intelligent machines should be able to solve a variety of complex, long-horizon tasks in an environment and generalize to novel scenarios. In the sequential decision-making paradigm, provided expert demonstrations, an agent can learn to perform these tasks via multi-task imitation learning (IL). As humans, it is desirable to specify tasks to an agent using a convenient, yet expressive modality and the agent should solve the task by taking actions in the environment. There are several ways for humans to specify tasks to an agent, such as task IDs, goal images, and goal demonstrations. However, these specifications tend to be ambiguous, require significant human effort, and can be cumbersome to curate and provide at test time. One of the most natural and versatile ways for humans to specify tasks is via natural language. The goal of language-conditioned IL is to solve tasks in an environment given language-conditioned trajectories at training time and a natural language instruction at test time. This becomes challenging when the task involves completing several sub-tasks sequen- *Equal contribution. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). tially, like the example shown in Figure 1. A crucial step towards solving this problem is exploiting the inherent hierarchical structure of natural language. For example, given the task specification “pull the handle and move black mug right”, we can split it into learning two independent primitive behaviors or skills, i.e. “pull the handle” and “move black mug right”. If we are able to decompose the problem of solving these complex tasks into learning skills, we can re-use and compose these learned skills to generalize to unseen tasks in the future. This is especially useful in the low-data regime, since we may not see all possible tasks given the limited dataset, but may see all the constituent sub-tasks. Using such hierarchical learning, we can utilize language effectively and learn skills as the building blocks of complex behaviors. Utilizing language effectively to learn skills is a non-trivial problem and raises several challenges. 
(i) The process of learning skills from language-conditioned trajectories is unsupervised as we may not have knowledge about which parts of the trajectory corresponds to each skill. (ii) We need to ensure that the learned skills are useful, i.e. encode behavior that can be composed to solve new tasks. (iii) We would like the learned skills to be interpretable by humans, both in terms of the language and the behaviours they encode. There are several benefits of interpretability. For example, it allows us to understand which skills our model is good at and which skills it struggles with. In safety critical settings such as robotic surgery or autonomous driving, knowing what each skill does allows us to pick and choose which skills we want to run at test time. It also provides a visual window into a neural network policy which is extremely desirable [54]. There have been prior works such as [38, 47, 13] that have failed to address these challenges and condition on language in a monolithic fashion without learning skills. As a result, they tend to perform poorly on long-horizon composition tasks such as the one in Figure 1. To this end, we propose Learning Interpretable Skill Abstractions from language (LISA), a hierarchical imitation learning framework that can learn interpretable skills from language-conditioned offline demonstrations. LISA uses a two-level architecture – a skill predictor that predicts quantized skills from a learnt vector codebook and a policy that uses these skill vector codes to predict actions. The discrete skills learned from language are interpretable (see Figure 2 and 4) and can be composed to solve long-range tasks. Using quantization maximizes skill reuse and enforces a bottleneck to pass information from the language to the policy, enabling unsupervised learning of interpretable skills. We perform experiments on grid world navigation and robotic manipulation tasks and show that our hierarchical method can outperform a strong non-hierarchical baseline based on Decision Transformer [11] in the low-data regime. We analyse these skills qualitatively and quantitatively and find them to be highly correlated to language and behaviour. Finally, using these skills to perform long-range composition tasks on a robotic manipulation environment results in performance that is nearly 2x better than the non-hierarchical version. Concretely, our contributions are as follows: • We introduce LISA, a novel hierarchical imitation framework to solve complex tasks specified via language by learning re-usable skills. • We demonstrate the effectiveness of our approach in the low-data regime where its crucial to break down complex tasks to generalize well. • We show our method performs well in long-range composition tasks where we may need to apply multiple skills sequentially. • We also show that the learned skills are highly correlated to language and behaviour and can easily be interpreted by humans. 2 Related Work 2.1 Imitation Learning Imitation learning (IL) has a long history, with early works using behavioral cloning [41–43] to learn policies via supervised learning on expert demonstration data. Recent methods have shown significant improvements via learning reward functions [21] or Q-functions [18] from expert data to mimic expert behavior. Nevertheless, these works typically consider a single task. 
An important problem here is multi-task IL, where the imitator is trained to mimic behavior on a variety of training tasks with the goal of generalizing the learned behaviors to test tasks. A crucial variable in the multi-task IL set-up is how the task is specified, e.g vectorized representations of goal states [37], task IDs [24], and single demonstrations [56, 14, 16, 57]. In contrast, we focus on a multi-task IL setup with task-specification through language, one of the most natural and versatile ways for humans to communicate desired goals and intents. 2.2 Language Grounding Several prior works have attempted to ground language with tasks or use language as a source of instructions for learning tasks with varying degrees of success [32, 55, 4, 39, 5]. [27] is a good reference for works combining language with sequential-decision making. But apart from a few exceptions, most algorithms in this area use the language instruction in a monolithic fashion and are designed to work for simple goals that requires the agent to demonstrate a single skill [40, 9, 20, 6] or tasks where each constituent sub-goal has to be explicitly specified [10, 46, 34, 3, 52, 50, 17, 35, 31]. Some recent works have shown success on using play data [28] or pseudo-expert data such as LOReL [38] and CLIPORT [47]. LOReL and CLIPORT are not hierarchical techniques. [28] can be interpreted as a hierarchical technique that generates latent sub-goals as a function of goal images, language instructions and task IDs but the skills learned by LISA are purely a function of language and states alone and do not require goal images or task IDs. [23, 22] and [49] are some examples of works that use a two-level architecture for language conditioned tasks but neither of these methods learn skills that are interpretable. 2.3 Latent-models and Hierarchical Learning Past works have attempted to learn policies conditioned on latent variables and some of them can be interpreted as hierarchical techniques. For example, [15] learns skills using latent variables that visit different parts of the environment’s state space. [45] improved on this by learning skills that were more easily predictable using a dynamics model. But these fall more under the category of skill discovery than hierarchical techniques since the skill code is fixed for the entire trajectory, as is the case with [15]. [29] and [26] are other works that use a latent-variable approach to IL. But these approaches don’t necessarily learn a latent variable with the intention of breaking down complex tasks into skills. With LISA, we sample several skills per trajectory with the clear intention of each skill corresponding to completing a sub-task for the whole trajectory. Also, none of the methods mentioned here condition on language. There has been some work on hierarchical frameworks for RL to learn high-level action abstractions, called options [51], such as [25, 58, 36] but these works are not goal-conditioned. Unlike LISA, these works don’t use language and the options might lack diversity and not correspond to any concrete or interpretable skills. Furthermore, none have used the VQ technique to learn options and often suffer from training instabilities. 3 Approach The key idea of LISA is to learn quantized skill representations that are informative of both language and behaviors, which allows us to break down high-level instructions, specified via language, into discrete, interpretable and composable codes (see Fig. 2, Fig. 6 and Fig. 8 for visualizations). 
These codes enable learning explainable and controllable behaviour, as shown in Fig. 1 and Fig. 4. Section 3.1 describes the problem formulation, an overview of our framework, and presents our language-conditioned model. Section 3.2 provides details on the training approach. 3.1 Language-conditioned Skill Learning 3.1.1 Problem Setup We consider general multi-task environments, represented as a task-augmented Markov decision process (MDP) with a family of different tasks T . A task Ti may be composed of other tasks in T and encode multiple sub-goals. For example, in a navigation environment, a task could be composed of two or more sub-tasks - “pick up ball”, “open door” - in any hierarchical order. S,A represent state and action spaces. We assume that each full task has a single natural language description l ∈ L, where L represents the space of language instructions. Any sub-goals for the task are encoded within this single language instruction. We assume access to an offline dataset D of trajectories obtained from an optimal policy for a variety of tasks in an environment with only their language description available. Each trajectory τ i = (li, {(si1, ai1), (si2, ai2), ..., (siT , aiT )}) consists of the language description and the observations sit ∈ S, actions ait ∈ A taken over T timesteps. The trajectories are not labeled with any rewards. Our aim is to predict the expert actions at, given a language instruction and past observations. Note that each trajectory in the training dataset can comprise of any number of sub-tasks. For example, we could have a trajectory to “open a door” and another to “pick up a ball and close the door” in the training data. With LISA we aim to solve the task “open a door and pick up the ball” at test time even though we haven’t seen this task at training time. In a trajectory with multiple sub-tasks, the training dataset does not give us information about where one sub-task ends and where another one begins. LISA must learn how to identify and stitch together these sub-tasks learned during training, in order to solve a new language instruction such as the one shown in Fig. 1 at test time. 3.1.2 Hierarchical Skill Abstractions We visualize the working of LISA in Figure 3. Our framework consists of two modules: a skill predictor f : L × S → C and a policy π : S × C → A. Here, C = { z1, . . . , zK } is a learnable codebook of K quantized skill latent codes. D is the dimension of the latent space of skills. Our key idea is to break learning behavior from language in two stages: 1) Learn discrete latent codes z, representing skills, from the fulllanguage instruction to decompose the task into smaller sub-goals 2) Learn a policy π conditioned only on these discrete codes. In LISA, both stages are trained end-to-end. Given an input τ = (l, {st, at}Tt=1), the skill predictor f predicts a skill code z̃ ∈ RD at a timestep t as z̃ = f(l, (st, st−1, ...)). These codes are discretized using a vector quantization operation q(·) that maps a latent z̃ to its closest codebook entry z = q(z̃). The quantization operation q(·) helps in learning discrete skill codes and acts as a bottleneck on passing language information. We detail its operation in Sec. 3.2. The chosen skill code z, is persisted for H timesteps where H is called the horizon. More details on how we chose the horizon and ablations studies on the choice of H can be found in appendix sections D and F.1. After H timesteps, the skill predictor is invoked again to predict a new skill. 
This enforces the skill to act as a temporal abstraction on actions, i.e. options [51]. The policy π predicts the action at at each timestep t conditioned on the state and a single skill code z that is active at that timestep. For π to correctly predict the original actions, it needs to use the language information encoded in the skill codes. LISA learns quantized skill codes in a vector codebook instead of continuous embeddings as this encourages reusing and composing these codes together to pass information from the language input to the actual behavior. Our learnt discrete skill codes adds interpretability and controllability to the policy’s behavior. 3.2 Training LISA Learning Discrete Skills. LISA uses Vector Quantization (VQ), inspired from [53]. It is a natural and widely-used method to map an input signal to a low-dimensional discrete learnt representation. VQ learns a codebook C ∈ { z1, . . . , zK } of K embedding vectors. Given an embedding z̃ from the skill predictor f , it maps the embedding to the closest vector in the codebook: z = q(z̃) =: argmin zk∈C ∥z̃ − zk∥2 with the codebook vectors updated to be the moving average of the embeddings z closest to them. This can be classically seen as learning K cluster centers via k-means [19]. Backpropagation through the non-differentiable quantization operation is achieved by a straightthrough gradient estimator, which simply copies the gradients from the decoder to the encoder, such that the model and codebook can be trained end-to-end. VQ enforces each learnt skill z to lie in C, which can be thought as learning K prototypes or cluster centers for the language embeddings using the seen states. This acts as a bottleneck that efficiently decomposes a language instruction into sub-parts encoded as discrete skills. LISA Objective. LISA is trained end-to-end using an objective LLISA = LBC + λLVQ, where LBC is the behavior-cloning loss on the policy πθ, λ is the VQ loss weight and LVQ is the vector quantization loss on the skill predictor fϕ given as: LVQ(f) = Eτ [∥sg [q(z̃)]− z̃∥22] (1) with z̃ = fϕ(l, (st, st−1, ..)). sg [·] denotes the stop-gradient operation. LVQ is also called commitment loss. It minimizes the conditional entropy of the skill predictor embeddings given the codebook vectors, making the embeddings stick to a single codebook vector. The codebook vectors are learnt using an exponential moving average update, same as [53]. Avoiding language reconstruction. LISA avoids auxiliary losses for language reconstruction from the skills latent codes and it’s not obvious why the skill codes are properly encoding language, and we expand on it here. For a given a signal X and a code Z, reconstructing the signal from the code as X̃ = f(Z) using cross-entropy loss amounts to maximizing the Mutual Information (MI) I(X,Z) between X and Z [1, 7]. In our case, we can write the MI between the skill codes and language using entropies as: I(z, l) = H(z)−H(z | l), whereas methods that attempt to reconstruct language apply the following decomposition: I(z, l) = H(l) − H(l | z). Here, H(l), the entropy of language instructions, is constant, and this gives us the cross-entropy loss. Thus we can avoid language reconstruction via cross-entropy loss by maximizing I(z, l) directly. 
In LISA, Lvq = −H(z | l), and we find there is no need to place a constraint on H(z) as the learned skill codes are diverse, needing to encode enough information to correctly predict the correct actions.1 1In experiments, we tried enforcing a constraint on H(z) by using extra InfoNCE loss term but don’t observe any gains. Algorithm 1 Training LISA Input: Dataset D of language-paired trajectories Input: Num skills K and horizon H 1: Initialize skill predictor fϕ, policy πθ 2: Vector Quantization op q(·) 3: while not converged do 4: Sample τ = (l, {s0, s1, s2...sT }, {a0, a1, a2...aT }) 5: Initialize S = {s0} ▷ List of seen states 6: for k = 0.. ⌊ T H ⌋ do ▷ Sample a skill every H steps 7: z ← q(fϕ(l, S)) 8: for step t = 1..H do ▷ Predict actions using a fixed skill and context length H 9: akH+t ← πθ(z, S[: −H]) 10: S ← S ∪ {skH+t} ▷ Append seen state 11: end for 12: Train fϕ, πθ using objective LLISA 13: end for 14: end while 𝑧 = 14 Figure 4: Behavior with fixed LISA options. We show the word clouds and the behavior of the policy obtained by using a fixed skill code z = 14 for an entire episode. We find that this code encodes the skill “closing the drawer”, as indicated by the word cloud. The policy executes this skill with a high degree of success when conditioned on this code for the entire trajectory, across multiple environment initializations and seeds. As a result, LISA can maximize the MI between the learnt skills and languages without auxiliary reconstruction losses and enforcing only Lvq on the skill codes. We empirically estimate the MI between the language and skill codes and find that our experiments confirm this in Section 4.6. 3.2.1 LISA Implementation LISA can be be implemented using different network architectures, such as Transformers or MLPs. In our experiments, we use Transformer architectures with LISA, but we find that out method is effective even with simple architectures choices such as MLPs, as shown in the appendix section F.5. Even when using Transformers for both the skill predictor and the policy network, our compute requirement is comparable to the non-hierarchical Flat Transformer policy as we can get away with using fewer layers in each module. Language Encoder. We use a pre-trained DistilBERT [44] encoder to generate language embeddings from the text instruction. We fine-tune the language encoder end-to-end and use the full language embedding for each word token, and not a pooled representation of the whole text. Observation Encoder. For image observations, we use convolution layers to generate embeddings. For simple state representations, we use MLPs. Skill Predictor. The skill predictor network f is implemented as a small Causal Transformer network that takes in the language embeddings and the observation embeddings at each time step. The language embeddings are concatenated at the beginning of the observation embeddings before being fed into the skill predictor. The network applies a causal mask hiding the future observations. Policy Network. Our policy network π, also implemented as a small Causal Transformer inspired by Decison Transformer (DT) [11]. However, unlike DT, our policy is not conditioned on any reward signal, but on the skill code. The sequence length of π is the horizon H of the skills which is much smaller compared to the length of the full trajectory. Flat Decision Transformer Baseline. Our flat baseline is based on DT and is implementation-wise similar to LISA, but without a skill predictor network. 
The policy here is a Causal Transformer, where we modify DT to condition on the language instruction embedding from a pre-trained DistillBERT text encoder instead of the future sum of returns. We found this baseline to be inefficient at handling long-range language instructions, needing sequence lengths of 1000 on complex environments such as BabyAI-BossLevel in our experiments. Since LISA has two transformers as opposed to just one in the flat baseline we ensured that the baseline and our method had a similar number of total parameters. To this end, the flat baseline uses Transformer network with 2 self-attention layers, and LISA’s skill predictor and policy use Transformer network with a single self-attention layer each. We also ensured that the embedding dimension and the number of heads in each layer were exactly the same in both LISA and the flat baseline. Details of this are provided in appendix sections D.1 and D.2 respectively. In fact, one could argue that LISA has less representation power because the policy transformer can only attend to the last H steps while the flat baseline can attend to the entire trajectory which is what makes it an extremely strong baseline. The flat baseline also uses the same pre-trained DistillBERT text encoder model as LISA for dealing with natural language input. 4 Experiments In this section, we evaluate LISA on grid-world navigation and robotic manipulation tasks. We compare the performance of LISA with a strong non-hierarchical baseline in the low-data regime. We then analyse our learnt skill abstractions in detail – what they represent, how we can interpret them and how they improve performance on downstream composition tasks. For the sake of brevity, we present additional ablations in the Appendix F, on doing manual planning with LISA skills (Section F.2), transferring learned skills to different environments (Section F.3) and learning continuous skills (Section F.6). 4.1 Datasets Several language-conditioned datasets have been curated as of late such as [46, 48, 13, 38, 33, 2, 10, 12]. Nevertheless, a lot of these datasets focus on complex-state representations and navigation in 3D environments, making them challenging to train on and qualitatively analyze our skills as shown in Fig. 4. We found BabyAI, a grid-world navigation environment and LOReL, a robotic manipulation environment as two diverse test beds that were very different from each other and conducive for hierarchical skill learning as well as detailed qualitative and quantitative analysis of our learned skills and we use them for our experiments. BabyAI Dataset. The BabyAI dataset [13] contains 19 levels of increasing difficulty where each level is set in a grid world and an agent sees a partially observed ego-centric view in a square of size 7x7. The agent must learn to perform various tasks of arbitrary difficulty such as moving objects between rooms, opening or closing doors, etc. all with a partially observed state and a language instruction. The language instructions for easy levels are quite simple but get exponentially more challenging for harder levels and contain several skills that the agent must complete in sequence (examples in appendix section C.1). The dataset provides 1 million expert trajectories for each of the 19 levels, but we use 0.1− 10% of these trajectories to train our models. 
We evaluate our policy on a set of 100 different instructions from the gym environment for each level, which contain high percentage of unseen environments layouts and language instructions given the limited data we use for training. More details about this dataset can be found in Appendix C.1 and in the BabyAI paper. LOReL Sawyer Dataset. This dataset [38] consists of pseudo-expert trajectories or play data collected from a replay buffer of a random RL policy and has been labeled with post-hoc crowdsourced language instructions. Hence, the trajectories complete the language instruction provided but may not necessarily be optimal. Play data is inexpensive to collect [30] in the real world and it is important for algorithms to be robust to such datasets as well. However, due to the randomness in the trajectories, this makes the dataset extremely difficult to use in a behavior cloning (BC) setting. Despite this, we are able to achieve good performance on this benchmark and are able to learn some very useful skills. The LOReL Sawyer dataset contains 50k trajectories of length 20 on a simulated environment with a Sawyer robot. We evaluate on the same set of 6 tasks that the original paper does for our results in Table 1: close drawer, open drawer, turn faucet right, turn faucet left, move black mug right, move white mug down. We use two different settings - with robot state space observations and partially-observed image observations. More details can be found in the Appendixd C.2 and in the LOReL paper. ∗ We optimized a language-conditioned BC model following the LOReL paper to the best of our abilities but could not get better performance. 4.2 Baselines Original. These refer to the baselines from the original paper for each dataset. For BabyAI, we trained their non-hierarchical RNN based method on different number of trajectories. Similarly, on LOReL we compare with the performance of language-conditioned BC. The original LOReL method uses a planning algorithm on a learned reward function to get around the sub-optimal nature of the trajectories. We found the BC baseline as a more fair comparison, as LISA is trained using BC as well. Nonetheless, we compare with the original LOReL planner in Section 4.7 for composition tasks. LOReL results in Table 1 refer to the performance on the 6 seen instructions in the LOReL evaluation dataset, same as ones reported in the original paper. Flat Baseline. We implement a non-hierarchical baseline using language-conditioned Decision Transformer denoted as Lang DT, the details of which are in section 3.2.1. 4.3 How does performance of LISA compare with non-hierarchical baselines in low-data regime? We consider three levels from the BabyAI environment and the LOReL Sawyer environment. For BabyAI, we consider the GoToSeq, SynthSeq and BossLevel tasks since they are challenging and require performing several sub-tasks one after the other. Since these levels contain instructions that are compositional in nature, when we train on limited data the algorithm must learn skills which form complex instructions to generalize well to unseen instructions at test time. Our experimental results are shown in Table 1. We train the models on a randomly sampled 1k, 10k and 100k trajectories from the full BabyAI dataset and 50k trajectories on the LOReL dataset. We use more data from the LOReL dataset because of the sub-optimal nature of the trajectories. 
On all the environments, our method is competitive to or outperforms the strong non-hierarchical Decision Transformer baseline. The gap grows larger as we reduce the number of trajectories trained on, indicating that our method is able to leverage the common sub-task structures better and glean more information from limited data. As expected, with larger amounts of training data it becomes hard to beat the flat baseline since the model sees more compositions during training and can generalize better at test time [8]. As mentioned above, we evaluate on the same 6 seen instructions the original LOReL paper did. We also evaluate the performance on varying language instructions on LOReL, similar to the original paper, with additional results in Appendix E. We were pleasantly surprised that LISA is 2x better than the flat Lang-DT baseline on LOReL tasks, reaching 40% success rate using partial image observations despite the sub-optimal nature of the data. One explanation for this is that the discrete skill codes are able to capture different ways of doing the same task, thereby allowing LISA to learn an implicit multi-modal policy. This is not possible with the flat version as it has no way to compartmentalize these noisy trajectories, and perhaps tends to overfit on this noisy data, leading to performance degradation. 4.4 What skills does LISA learn? Are they diverse? To answer this question, we analyse the skills produced by LISA and the language tokens corresponding to each skill. We plot a heat map in Figure 5 corresponding to the correlation between the language tokens and skill codes. Here, we plot the map corresponding to the LOReL dataset. From the figure, we can see that certain skill codes correspond very strongly to certain language tokens and by extension, tasks. We also see the sparse nature of the heat maps which indicates that each skill corresponds to distinct language tokens. We also plot word clouds corresponding to four different options in the LOReL environment in Figure 6 and we notice that different options are triggered by different language tokens. From the figure, it is clear that the skill on the top left corner corresponds to close the drawer and the skill on the top right corresponds to turn faucet left. Similar word clouds and heat maps for the BabyAI environments are in the appendix section B.3. 4.5 Do the skills learned by LISA correspond to interpretable behavior? We have seen that the different skills correspond to different language tokens, but do the policies conditioned on these skills behave according to the language tokens? To understand this, we fix the skill code for the entire trajectory and run the policy i.e. we are shutting off the skill predictor and always predicting the same skill for the entire trajectory. As we can see from the word cloud and the corresponding trajectory in Figure 4, the behaviour for skill code 14 is exactly what we can infer from the language tokens in the word cloud – close the drawer. More such images and trajectories can be found in the appendix section B.5. 4.6 Why do LISA learned skills show such a strong correlation to language? As mentioned in section 3.2, the commitment loss from VQ acts as a way to increase the MI between the language and the skill codes during training. This allows the codes to be highly correlated with language without any reconstruction losses. 
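As a rough illustration of how such a token–skill correlation heat map (and the mutual-information estimate discussed next) could be computed, here is a sketch in Python; it assumes one has already collected, for each trajectory, its instruction token ids and the skill codes chosen by the trained skill predictor, and the per-trajectory co-occurrence counting is our simplification rather than the paper's exact statistic.

import numpy as np
import matplotlib.pyplot as plt

def token_skill_stats(trajectories, num_skills, vocab):
    """trajectories: list of (token_ids, skill_codes) pairs collected from the
    trained skill predictor. Returns a per-token-normalized co-occurrence
    matrix and a plug-in estimate of the mutual information I(skill; token)."""
    counts = np.zeros((num_skills, len(vocab)))
    for token_ids, skill_codes in trajectories:
        for z in set(skill_codes):
            for t in set(token_ids):
                counts[z, t] += 1.0
    joint = counts / counts.sum()                       # p(z, t)
    p_z = joint.sum(axis=1, keepdims=True)              # p(z)
    p_t = joint.sum(axis=0, keepdims=True)              # p(t)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0, joint / (p_z * p_t), 1.0)
    mi = float((joint * np.log(ratio)).sum())
    heatmap = counts / np.maximum(counts.sum(axis=0, keepdims=True), 1e-8)
    return heatmap, mi

def plot_heatmap(heatmap, vocab):
    plt.imshow(heatmap, aspect="auto", cmap="viridis")
    plt.xlabel("language token")
    plt.ylabel("skill code")
    plt.xticks(range(len(vocab)), vocab, rotation=90, fontsize=6)
    plt.colorbar()
    plt.tight_layout()
    plt.show()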
To analyze this, we plot the MI between the options and the language during training on the BabyAI BossLevel with 1k trajectories; the plot can be seen in Figure 7. The plots show the MI increasing over training for a wide range of settings as we vary the number of skills and the horizon. In the ablation studies below, we report the success rate corresponding to each of these curves and we notice that there is almost a direct correlation between increasing MI and task performance. This is very encouraging since it clearly shows that the skills are encoding language and that this directly impacts the performance of the behavior cloning policy. 4.7 Can we use the learned skills to perform new composition tasks? To test our composition performance, we evaluate on LOReL composition tasks using images in Table 2. To this end, we handcraft 15 unseen composition instructions. We list these instructions in Appendix Table 5; one such example is “pull the handle and move black mug down”. We ran 10 different runs of each instruction across 3 different seeds. As we can see, our performance is nearly 2x that of the non-hierarchical baseline. We also compare with the original LOReL planner on these composition tasks and we notice that we perform slightly better, despite the planner having access to a reward function and a dynamics model pre-trained on 1M frames while LISA is trained from scratch. We set the maximum number of episode steps to 40 from the usual 20 for all the methods while performing these experiments because of the compositional nature of the tasks. Note that the results in Table 1 also reflect compositionality performance on the BabyAI dataset, as we train with 0.1–10% of the data. When we evaluate on the gym environment, which can generate any possible language instruction from the BabyAI grammar, we may come across several unseen instructions at test time. To give a sense of the fraction of unseen instructions for BabyAI when we evaluate on the gym environment, we take the different BabyAI environments and report the % of unseen language instructions encountered at test time for different training data regimes in Table 3. For each statistic, we sample 10,000 random instructions from the environment and check how many are unseen in the training dataset used, repeated over 3 different seeds. 4.8 How does LISA compare to simply doing K-Means clustering on the language and state embeddings? In LISA, the VQ approach can be seen as taking the concatenated language-state inputs and projecting them into a learned embedding space. VQ here simply learns K embedding vectors that act as K cluster centers for the projected input vectors in this embedding space, and allows for differentiability, enabling learning through backpropagation. This is also similar to prototypical methods used for few-shot learning and allows for deep differentiable clustering, giving an intuition of why it works. To compare against k-means, we construct a simple unsupervised learning baseline that clusters trajectories in the training dataset using k-means. Specifically, in the BabyAI BossLevel environment using 1k training trajectories, we take the concatenated language-state vectors for all trajectories in the dataset, cluster them using k-means, and use the assigned cluster centers as the skill codes. We then learn a policy using these skill codes to measure their efficacy and find that LISA has a performance of 49.1 ± 2.4% while k-means has a performance of 20.2 ± 5.2% over 3 seeds.
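The k-means baseline described above can be sketched as follows, assuming precomputed per-timestep state features and one pooled instruction embedding per trajectory; the feature extraction and the variable names are our assumptions, not the paper's exact pipeline.

import numpy as np
from sklearn.cluster import KMeans

def kmeans_skill_codes(lang_embeddings, state_features, num_skills=20, seed=0):
    """lang_embeddings: (N, d_l) array, one pooled embedding per trajectory.
    state_features: list of N arrays, each (T_i, d_s) of per-step features.
    Returns per-step cluster assignments used as fixed skill codes."""
    rows, index = [], []
    for i, states in enumerate(state_features):
        lang = np.broadcast_to(lang_embeddings[i],
                               (states.shape[0], lang_embeddings.shape[1]))
        rows.append(np.concatenate([lang, states], axis=1))
        index.extend([(i, t) for t in range(states.shape[0])])
    X = np.concatenate(rows, axis=0)
    km = KMeans(n_clusters=num_skills, random_state=seed, n_init=10).fit(X)
    codes = {}
    for (i, t), c in zip(index, km.labels_):
        codes.setdefault(i, []).append(int(c))
    return codes, km.cluster_centers_

Note that, unlike LISA's codebook, the cluster assignments here are fixed after clustering and receive no gradient from the behavior-cloning loss.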
Thus, we see that using the simple k-means skills is insufficient to learn a good policy to solve the BabyAI BossLevel task, as the skills are not representative enough of the language instructions. A reason for this is that the language and state vectors lie in different embedding spaces, and k-means based on Euclidean distance is not optimal on the concatenated vectors. 5 Limitations and Future Work We present LISA, a hierarchical imitation learning framework that can be used to learn interpretable skill abstractions from language-conditioned expert demonstrations. We showed that the skills are diverse and can be used to solve long-range language tasks, and that our method outperforms a strong non-hierarchical baseline in the low-data regime. However, there are several limitations to LISA and plenty of scope for future work. One limitation of LISA is that there are several hyperparameters to tune that may affect performance, such as the number of options and the horizon for each option. It certainly helps to have a good idea of the task when choosing these hyperparameters, even though the ablations show that the method is fairly robust to these choices. It would also be useful to learn the horizon for each skill via a learned termination condition, and we leave this for future work. Although our method has been evaluated in the language-conditioned imitation learning setting, it is not difficult to modify it to work with image goals or demonstrations, and in the RL setting as well. It would be interesting to see whether the vector quantization trick can be used to learn goal-conditioned skills in a more general framework. Acknowledgments and Disclosure of Funding We are thankful to John Schulman, Chelsea Finn, Karol Hausman and Dilip Arumugam for initial discussions regarding our method, and to Suraj Nair for providing help with the LOReL baseline. This research was supported in part by NSF (#1651565, #1522054, #1733686), ONR (N00014-19-12145), AFOSR (FA9550-19-1-0024), FLI and Samsung.
1. What is the focus and contribution of the paper regarding hierarchical imitation learning? 2. What are the strengths and weaknesses of the proposed method, particularly in terms of originality, quality, clarity, significance, and limitations? 3. Do you have any questions or concerns regarding the skill predictor module, policy module, behavior cloning, vector quantization, or the combination of these techniques? 4. How does the reviewer assess the effectiveness and efficiency of LISA in different tasks, such as BabyAI and LOReL? 5. Are there any concerns about the interpretability and generalizability of the learned skill codes?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The submission presents LISA, a hierarchical imitation learning framework that trains a language-conditioned agent in an end-to-end fashion. LISA learns a skill abstraction through a skill predictor module and vector quantization conditioned on the language input and observations; a policy module is then learned conditioned on the skill abstraction. LISA is trained with behavior-cloning and vector quantization objectives. LISA's performance is evaluated on two benchmarks, BabyAI and LOReL; some baselines are created by the authors and others come from the original task papers. Compared with the baselines, the results demonstrate good sample efficiency, the ability to perform unseen composition tasks, and good interpretability of the learned skill codes. Strengths And Weaknesses Originality It is a recombination of existing techniques: vector quantization is used in representation learning and behavior cloning is also not new. Nonetheless, it is the first work to use this combination to learn a quantized abstraction from language and state, and to develop a policy based on the produced skill abstraction. Quality The technical details are provided in the writing or references are given. Most experimental results and analyses are well supported. I believe the claims made in the contribution part are mostly fulfilled. The weaknesses of the paper are: the number of skill codes is task dependent, and the horizon of each skill has to be fixed and chosen ahead of time (these weaknesses are acknowledged in the writing). The conclusion related to compositionality is less convincing to me, as I note in the questions section below. The interpretation of the skill codes makes more sense for LOReL than for BabyAI due to the larger vocabulary size; thus, there are limitations on how we could use this interpretation. Clarity The writing is mostly clear, but there are errors in table references (e.g., Table 10 is referred to multiple times when it should be Table 1). Also, tables and figures are often too far from their descriptions (Fig. 3, Tab. 1). Line 173: does the policy module take as input both the state and the skill code, or only the skill code? The two options for the LOReL task in Table 1 are not explained in the paper. Significance LISA presents a new hierarchical imitation learning framework that utilizes vector quantization to learn skill abstractions and builds a policy module on top of them. LISA is sample efficient and useful in the low-data regime. The learned skill codes are interpretable under limited conditions. Questions Line 222: I am confused about why a causal mask is used to hide future observations, since the skill predictor only takes previous states and language as input according to line 164. Lines 257 and 267 describe generalization- and compositionality-related claims, but how likely the evaluation set is to be unseen is not quantified. The composition task dataset appears to be small; is a larger version available at this time? Limitations The hyperparameter limitations are included in the paper. The interpretability of the skill codes is limited and relies on the problem setting.
NIPS
Title LISA: Learning Interpretable Skill Abstractions from Language Abstract Learning policies that effectively utilize language instructions in complex, multitask environments is an important problem in sequential decision-making. While it is possible to condition on the entire language instruction directly, such an approach could suffer from generalization issues. In our work, we propose Learning Interpretable Skill Abstractions (LISA), a hierarchical imitation learning framework that can learn diverse, interpretable primitive behaviors or skills from language-conditioned demonstrations to better generalize to unseen instructions. LISA uses vector quantization to learn discrete skill codes that are highly correlated with language instructions and the behavior of the learned policy. In navigation and robotic manipulation tasks, LISA outperforms a strong non-hierarchical Decision Transformer baseline in the low data regime and is able to compose learned skills to solve tasks containing unseen long-range instructions. Our method demonstrates a more natural way to condition on language in sequential decision-making problems and achieve interpretable and controllable behavior with the learned skills. 1 Introduction Intelligent machines should be able to solve a variety of complex, long-horizon tasks in an environment and generalize to novel scenarios. In the sequential decision-making paradigm, provided expert demonstrations, an agent can learn to perform these tasks via multi-task imitation learning (IL). It is desirable for humans to specify tasks to an agent using a convenient, yet expressive modality, and the agent should solve the task by taking actions in the environment. There are several ways for humans to specify tasks to an agent, such as task IDs, goal images, and goal demonstrations. However, these specifications tend to be ambiguous, require significant human effort, and can be cumbersome to curate and provide at test time. One of the most natural and versatile ways for humans to specify tasks is via natural language. The goal of language-conditioned IL is to solve tasks in an environment given language-conditioned trajectories at training time and a natural language instruction at test time. This becomes challenging when the task involves completing several sub-tasks sequentially, like the example shown in Figure 1. (*Equal contribution. 36th Conference on Neural Information Processing Systems (NeurIPS 2022).) A crucial step towards solving this problem is exploiting the inherent hierarchical structure of natural language. For example, given the task specification “pull the handle and move black mug right”, we can split it into learning two independent primitive behaviors or skills, i.e. “pull the handle” and “move black mug right”. If we are able to decompose the problem of solving these complex tasks into learning skills, we can re-use and compose these learned skills to generalize to unseen tasks in the future. This is especially useful in the low-data regime, since we may not see all possible tasks given the limited dataset, but may see all the constituent sub-tasks. Using such hierarchical learning, we can utilize language effectively and learn skills as the building blocks of complex behaviors. Utilizing language effectively to learn skills is a non-trivial problem and raises several challenges.
(i) The process of learning skills from language-conditioned trajectories is unsupervised as we may not have knowledge about which parts of the trajectory corresponds to each skill. (ii) We need to ensure that the learned skills are useful, i.e. encode behavior that can be composed to solve new tasks. (iii) We would like the learned skills to be interpretable by humans, both in terms of the language and the behaviours they encode. There are several benefits of interpretability. For example, it allows us to understand which skills our model is good at and which skills it struggles with. In safety critical settings such as robotic surgery or autonomous driving, knowing what each skill does allows us to pick and choose which skills we want to run at test time. It also provides a visual window into a neural network policy which is extremely desirable [54]. There have been prior works such as [38, 47, 13] that have failed to address these challenges and condition on language in a monolithic fashion without learning skills. As a result, they tend to perform poorly on long-horizon composition tasks such as the one in Figure 1. To this end, we propose Learning Interpretable Skill Abstractions from language (LISA), a hierarchical imitation learning framework that can learn interpretable skills from language-conditioned offline demonstrations. LISA uses a two-level architecture – a skill predictor that predicts quantized skills from a learnt vector codebook and a policy that uses these skill vector codes to predict actions. The discrete skills learned from language are interpretable (see Figure 2 and 4) and can be composed to solve long-range tasks. Using quantization maximizes skill reuse and enforces a bottleneck to pass information from the language to the policy, enabling unsupervised learning of interpretable skills. We perform experiments on grid world navigation and robotic manipulation tasks and show that our hierarchical method can outperform a strong non-hierarchical baseline based on Decision Transformer [11] in the low-data regime. We analyse these skills qualitatively and quantitatively and find them to be highly correlated to language and behaviour. Finally, using these skills to perform long-range composition tasks on a robotic manipulation environment results in performance that is nearly 2x better than the non-hierarchical version. Concretely, our contributions are as follows: • We introduce LISA, a novel hierarchical imitation framework to solve complex tasks specified via language by learning re-usable skills. • We demonstrate the effectiveness of our approach in the low-data regime where its crucial to break down complex tasks to generalize well. • We show our method performs well in long-range composition tasks where we may need to apply multiple skills sequentially. • We also show that the learned skills are highly correlated to language and behaviour and can easily be interpreted by humans. 2 Related Work 2.1 Imitation Learning Imitation learning (IL) has a long history, with early works using behavioral cloning [41–43] to learn policies via supervised learning on expert demonstration data. Recent methods have shown significant improvements via learning reward functions [21] or Q-functions [18] from expert data to mimic expert behavior. Nevertheless, these works typically consider a single task. 
An important problem here is multi-task IL, where the imitator is trained to mimic behavior on a variety of training tasks with the goal of generalizing the learned behaviors to test tasks. A crucial variable in the multi-task IL set-up is how the task is specified, e.g vectorized representations of goal states [37], task IDs [24], and single demonstrations [56, 14, 16, 57]. In contrast, we focus on a multi-task IL setup with task-specification through language, one of the most natural and versatile ways for humans to communicate desired goals and intents. 2.2 Language Grounding Several prior works have attempted to ground language with tasks or use language as a source of instructions for learning tasks with varying degrees of success [32, 55, 4, 39, 5]. [27] is a good reference for works combining language with sequential-decision making. But apart from a few exceptions, most algorithms in this area use the language instruction in a monolithic fashion and are designed to work for simple goals that requires the agent to demonstrate a single skill [40, 9, 20, 6] or tasks where each constituent sub-goal has to be explicitly specified [10, 46, 34, 3, 52, 50, 17, 35, 31]. Some recent works have shown success on using play data [28] or pseudo-expert data such as LOReL [38] and CLIPORT [47]. LOReL and CLIPORT are not hierarchical techniques. [28] can be interpreted as a hierarchical technique that generates latent sub-goals as a function of goal images, language instructions and task IDs but the skills learned by LISA are purely a function of language and states alone and do not require goal images or task IDs. [23, 22] and [49] are some examples of works that use a two-level architecture for language conditioned tasks but neither of these methods learn skills that are interpretable. 2.3 Latent-models and Hierarchical Learning Past works have attempted to learn policies conditioned on latent variables and some of them can be interpreted as hierarchical techniques. For example, [15] learns skills using latent variables that visit different parts of the environment’s state space. [45] improved on this by learning skills that were more easily predictable using a dynamics model. But these fall more under the category of skill discovery than hierarchical techniques since the skill code is fixed for the entire trajectory, as is the case with [15]. [29] and [26] are other works that use a latent-variable approach to IL. But these approaches don’t necessarily learn a latent variable with the intention of breaking down complex tasks into skills. With LISA, we sample several skills per trajectory with the clear intention of each skill corresponding to completing a sub-task for the whole trajectory. Also, none of the methods mentioned here condition on language. There has been some work on hierarchical frameworks for RL to learn high-level action abstractions, called options [51], such as [25, 58, 36] but these works are not goal-conditioned. Unlike LISA, these works don’t use language and the options might lack diversity and not correspond to any concrete or interpretable skills. Furthermore, none have used the VQ technique to learn options and often suffer from training instabilities. 3 Approach The key idea of LISA is to learn quantized skill representations that are informative of both language and behaviors, which allows us to break down high-level instructions, specified via language, into discrete, interpretable and composable codes (see Fig. 2, Fig. 6 and Fig. 8 for visualizations). 
These codes enable learning explainable and controllable behaviour, as shown in Fig. 1 and Fig. 4. Section 3.1 describes the problem formulation, gives an overview of our framework, and presents our language-conditioned model. Section 3.2 provides details on the training approach. 3.1 Language-conditioned Skill Learning 3.1.1 Problem Setup We consider general multi-task environments, represented as a task-augmented Markov decision process (MDP) with a family of different tasks T. A task Ti may be composed of other tasks in T and encode multiple sub-goals. For example, in a navigation environment, a task could be composed of two or more sub-tasks – “pick up ball”, “open door” – in any hierarchical order. S, A represent the state and action spaces. We assume that each full task has a single natural language description l ∈ L, where L represents the space of language instructions. Any sub-goals for the task are encoded within this single language instruction. We assume access to an offline dataset D of trajectories obtained from an optimal policy for a variety of tasks in an environment, with only their language description available. Each trajectory τ^i = (l^i, {(s^i_1, a^i_1), (s^i_2, a^i_2), ..., (s^i_T, a^i_T)}) consists of the language description and the observations s^i_t ∈ S and actions a^i_t ∈ A taken over T timesteps. The trajectories are not labeled with any rewards. Our aim is to predict the expert actions a_t, given a language instruction and past observations. Note that each trajectory in the training dataset can comprise any number of sub-tasks. For example, we could have a trajectory to “open a door” and another to “pick up a ball and close the door” in the training data. With LISA we aim to solve the task “open a door and pick up the ball” at test time even though we haven’t seen this task at training time. In a trajectory with multiple sub-tasks, the training dataset does not give us information about where one sub-task ends and where another one begins. LISA must learn how to identify and stitch together these sub-tasks learned during training, in order to solve a new language instruction such as the one shown in Fig. 1 at test time. 3.1.2 Hierarchical Skill Abstractions We visualize the working of LISA in Figure 3. Our framework consists of two modules: a skill predictor f : L × S → C and a policy π : S × C → A. Here, C = {z_1, . . . , z_K} is a learnable codebook of K quantized skill latent codes, and D is the dimension of the latent space of skills. Our key idea is to break learning behavior from language into two stages: 1) learn discrete latent codes z, representing skills, from the full language instruction to decompose the task into smaller sub-goals; 2) learn a policy π conditioned only on these discrete codes. In LISA, both stages are trained end-to-end. Given an input τ = (l, {s_t, a_t}_{t=1}^T), the skill predictor f predicts a skill code z̃ ∈ R^D at a timestep t as z̃ = f(l, (s_t, s_{t−1}, ...)). These codes are discretized using a vector quantization operation q(·) that maps a latent z̃ to its closest codebook entry z = q(z̃). The quantization operation q(·) helps in learning discrete skill codes and acts as a bottleneck on passing language information. We detail its operation in Sec. 3.2. The chosen skill code z is persisted for H timesteps, where H is called the horizon. More details on how we chose the horizon and ablation studies on the choice of H can be found in appendix sections D and F.1. After H timesteps, the skill predictor is invoked again to predict a new skill.
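As a concrete illustration of this two-level inference loop, here is a minimal sketch in Python; the env (gym-style), skill_predictor, quantize and policy interfaces are assumptions made for illustration and do not correspond to the released implementation.

def lisa_rollout(env, instruction, skill_predictor, quantize, policy,
                 horizon_H, max_steps=100):
    """Minimal sketch of LISA's two-level inference loop (not the released
    code): every H steps a new skill code is chosen and then held fixed."""
    obs = env.reset()
    seen_states = [obs]
    active_code = None
    for t in range(max_steps):
        if t % horizon_H == 0:
            # The skill predictor sees the instruction and all states so far.
            z_tilde = skill_predictor(instruction, seen_states)
            active_code = quantize(z_tilde)      # nearest codebook entry
        # The policy sees only the active skill code and the last H states.
        action = policy(active_code, seen_states[-horizon_H:])
        obs, reward, done, info = env.step(action)
        seen_states.append(obs)
        if done:
            break
    return info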
This forces each skill to act as a temporal abstraction over actions, i.e. an option [51]. The policy π predicts the action a_t at each timestep t conditioned on the state and the single skill code z that is active at that timestep. For π to correctly predict the original actions, it needs to use the language information encoded in the skill codes. LISA learns quantized skill codes in a vector codebook instead of continuous embeddings, as this encourages reusing and composing these codes together to pass information from the language input to the actual behavior. Our learnt discrete skill codes add interpretability and controllability to the policy’s behavior. 3.2 Training LISA Learning Discrete Skills. LISA uses Vector Quantization (VQ), inspired by [53]. It is a natural and widely-used method to map an input signal to a low-dimensional discrete learnt representation. VQ learns a codebook C = {z_1, . . . , z_K} of K embedding vectors. Given an embedding z̃ from the skill predictor f, it maps the embedding to the closest vector in the codebook: z = q(z̃) := argmin_{z_k ∈ C} ‖z̃ − z_k‖_2, with the codebook vectors updated to be the moving average of the embeddings z̃ closest to them. This can be classically seen as learning K cluster centers via k-means [19]. Backpropagation through the non-differentiable quantization operation is achieved by a straight-through gradient estimator, which simply copies the gradients from the decoder to the encoder, such that the model and codebook can be trained end-to-end. VQ enforces each learnt skill z to lie in C, which can be thought of as learning K prototypes or cluster centers for the language embeddings using the seen states. This acts as a bottleneck that efficiently decomposes a language instruction into sub-parts encoded as discrete skills. LISA Objective. LISA is trained end-to-end using the objective L_LISA = L_BC + λ L_VQ, where L_BC is the behavior-cloning loss on the policy π_θ, λ is the VQ loss weight and L_VQ is the vector quantization loss on the skill predictor f_ϕ, given as: L_VQ(f) = E_τ [ ‖ sg[q(z̃)] − z̃ ‖_2^2 ]   (1) with z̃ = f_ϕ(l, (s_t, s_{t−1}, ...)). sg[·] denotes the stop-gradient operation. L_VQ is also called the commitment loss. It minimizes the conditional entropy of the skill predictor embeddings given the codebook vectors, making the embeddings stick to a single codebook vector. The codebook vectors are learnt using an exponential moving average update, the same as [53]. Avoiding language reconstruction. LISA avoids auxiliary losses for language reconstruction from the skill latent codes; it is not obvious why the skill codes still properly encode language, and we expand on this here. For a given signal X and a code Z, reconstructing the signal from the code as X̃ = f(Z) using a cross-entropy loss amounts to maximizing the Mutual Information (MI) I(X,Z) between X and Z [1, 7]. In our case, we can write the MI between the skill codes and language using entropies as I(z, l) = H(z) − H(z | l), whereas methods that attempt to reconstruct language apply the following decomposition: I(z, l) = H(l) − H(l | z). Here, H(l), the entropy of the language instructions, is constant, and this gives us the cross-entropy loss. Thus we can avoid language reconstruction via cross-entropy loss by maximizing I(z, l) directly.
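To make the quantization step and the combined objective concrete, here is a short PyTorch-style sketch; it is our own minimal re-implementation of standard VQ machinery with a straight-through estimator and a simplified EMA codebook update, not the authors' released code, and the class and function names are ours.

import torch
import torch.nn.functional as F

class SkillCodebook(torch.nn.Module):
    def __init__(self, num_skills=20, dim=16, decay=0.99):
        super().__init__()
        self.decay = decay
        self.register_buffer("codes", torch.randn(num_skills, dim))

    def forward(self, z_tilde):
        # Nearest codebook entry for each embedding (vector quantization).
        dists = torch.cdist(z_tilde, self.codes)           # (B, K)
        idx = dists.argmin(dim=1)                           # (B,)
        z_q = self.codes[idx]                               # (B, D)
        # Commitment loss: pull each predictor embedding toward its code.
        vq_loss = F.mse_loss(z_tilde, z_q.detach())
        # Straight-through estimator: gradients pass to z_tilde unchanged.
        z_st = z_tilde + (z_q - z_tilde).detach()
        return z_st, idx, vq_loss

    @torch.no_grad()
    def ema_update(self, z_tilde, idx):
        # Simplified exponential-moving-average update of the codebook.
        for k in idx.unique():
            mean_k = z_tilde[idx == k].mean(dim=0)
            self.codes[k] = self.decay * self.codes[k] + (1 - self.decay) * mean_k

def lisa_loss(action_logits, expert_actions, vq_loss, lam=0.25):
    # L_LISA = L_BC + lambda * L_VQ (discrete actions, cross-entropy BC loss).
    bc_loss = F.cross_entropy(action_logits, expert_actions)
    return bc_loss + lam * vq_loss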
In LISA, L_VQ = −H(z | l), and we find there is no need to place a constraint on H(z), as the learned skill codes are diverse, needing to encode enough information to correctly predict the expert actions. (In experiments, we tried enforcing a constraint on H(z) using an extra InfoNCE loss term but did not observe any gains.)
Algorithm 1 Training LISA
Input: Dataset D of language-paired trajectories
Input: Num skills K and horizon H
1: Initialize skill predictor f_ϕ, policy π_θ
2: Initialize vector quantization op q(·)
3: while not converged do
4:   Sample τ = (l, {s_0, s_1, s_2, ..., s_T}, {a_0, a_1, a_2, ..., a_T})
5:   Initialize S = {s_0}  ▷ List of seen states
6:   for k = 0 .. ⌊T/H⌋ do  ▷ Sample a skill every H steps
7:     z ← q(f_ϕ(l, S))
8:     for step t = 1 .. H do  ▷ Predict actions using a fixed skill and context length H
9:       a_{kH+t} ← π_θ(z, S[−H:])
10:      S ← S ∪ {s_{kH+t}}  ▷ Append seen state
11:    end for
12:    Train f_ϕ, π_θ using objective L_LISA
13:  end for
14: end while
[Figure 4: Behavior with fixed LISA options (skill code z = 14). We show the word clouds and the behavior of the policy obtained by using a fixed skill code z = 14 for an entire episode. We find that this code encodes the skill “closing the drawer”, as indicated by the word cloud. The policy executes this skill with a high degree of success when conditioned on this code for the entire trajectory, across multiple environment initializations and seeds.]
As a result, LISA can maximize the MI between the learnt skills and language without auxiliary reconstruction losses, enforcing only L_VQ on the skill codes. We empirically estimate the MI between the language and skill codes, and our experiments in Section 4.6 confirm this. 3.2.1 LISA Implementation LISA can be implemented using different network architectures, such as Transformers or MLPs. In our experiments, we use Transformer architectures with LISA, but we find that our method is effective even with simple architecture choices such as MLPs, as shown in appendix section F.5. Even when using Transformers for both the skill predictor and the policy network, our compute requirement is comparable to the non-hierarchical Flat Transformer policy, as we can get away with using fewer layers in each module. Language Encoder. We use a pre-trained DistilBERT [44] encoder to generate language embeddings from the text instruction. We fine-tune the language encoder end-to-end and use the full language embedding for each word token, not a pooled representation of the whole text. Observation Encoder. For image observations, we use convolution layers to generate embeddings. For simple state representations, we use MLPs. Skill Predictor. The skill predictor network f is implemented as a small Causal Transformer network that takes in the language embeddings and the observation embeddings at each time step. The language embeddings are concatenated at the beginning of the observation embeddings before being fed into the skill predictor. The network applies a causal mask hiding the future observations. Policy Network. Our policy network π is also implemented as a small Causal Transformer, inspired by Decision Transformer (DT) [11]. However, unlike DT, our policy is not conditioned on any reward signal, but on the skill code. The sequence length of π is the horizon H of the skills, which is much smaller than the length of the full trajectory. Flat Decision Transformer Baseline. Our flat baseline is based on DT and is implementation-wise similar to LISA, but without a skill predictor network.
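To connect Algorithm 1 to code, here is a simplified single-trajectory training step in Python/PyTorch; the lang_encoder, obs_encoder, skill_predictor and policy interfaces are assumptions (the codebook follows the SkillCodebook sketch above), and the real implementation batches trajectories and uses the small causal Transformers described in this section.

import torch
import torch.nn.functional as F

def lisa_training_step(batch, lang_encoder, obs_encoder, skill_predictor,
                       codebook, policy, optimizer, horizon_H, lam=0.25):
    """One gradient step of Algorithm 1 (a simplified sketch with assumed
    module interfaces, not the released implementation). `batch` is one
    language-paired trajectory: (instruction, states, expert_actions)."""
    instruction, states, expert_actions = batch
    lang_emb = lang_encoder(instruction)                  # per-token embeddings
    state_emb = [obs_encoder(s) for s in states]
    total_loss = 0.0
    T = len(states)
    for k in range(0, T, horizon_H):
        # Skill chosen from the instruction and the states seen so far.
        z_tilde = skill_predictor(lang_emb, state_emb[:k + 1])
        z, _, vq_loss = codebook(z_tilde)                 # quantize + commitment loss
        # Behavior cloning on the next H steps with the skill held fixed.
        chunk = state_emb[k:k + horizon_H]
        logits = policy(z, chunk)                         # (chunk_len, num_actions)
        targets = torch.as_tensor(expert_actions[k:k + horizon_H])
        total_loss = total_loss + F.cross_entropy(logits, targets) + lam * vq_loss
    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
    return float(total_loss)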
1. What is the main contribution of the paper regarding hierarchical instruction following agent policy? 2. What are the strengths and weaknesses of the proposed method, particularly in its performance and hyperparameter tuning? 3. How does the reviewer assess the interpretability and generalization ability of the learned skill vectors? 4. Are there any concerns or suggestions regarding the presentation and clarity of certain parts of the paper? 5. Is there any discussion on the limitation of the fixed horizon assumption and how it affects the results?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper Post rebuttal update: Raising score from 5 to 6 This work proposes a method to learn a hierarchical instruction following agent policy from demonstrations.The policy is composed of a skill predictor which plays the role of a high-level controller, and a low level controller predicts actions conditioned on the skill. The skills are trained from scratch with a vector quantization approach; the skills are represented by codes in a codebook and the skill predictor chooses the skill to execute based on a language instruction and a history of observations. The proposed method is shown to perform well on BabyAI and Lorel Sawyer benchmarks and outperforms baseline non-hierarchical policies on long horizon tasks, especially when supervision is limited. In addition, the authors show that the learned skill vectors are interpretable and correspond to meaningful primitive skills. Strengths And Weaknesses Strengths Learning reusable skills from data is an important problem with potential for better generalization, interpretability and controllability. Proposed approach outperforms baseline approaches when supervision is limited. Authors show that the learned skills are interpretable, meaningful and lead to better generalization on long-horizon tasks. Weaknesses The precise source of performance is hard to pinpoint. Does LISA work better than the flat baseline because it has more parameters? Were hyperparameters of baseline approaches carefully tuned? The authors can try to better isolate the source of performance. The proposed approach has many hyperparameters to be tuned such as codebook size, horizon, etc. I could not find discussion about how these parameters were chosen. The fixed horizon assumption is a limitation. Have the authors examined how the choice of this parameter affects the results? The proposed approach is interesting and shows positive results on two benchmarks. But the approach has key limitations which needs to be better addressed in the paper. This includes discussion about hyperparameter choices and how they affect the performance and more clarity on how the skills are interpreted. Based on the quantitative results presented on the paper alone it is hard to gauge the proposed approach due to confounding factors such as hyperparameter optimization and parameter count. Addressing these issues can significantly improve the paper. I found the discussion in sec 4.7 interesting. The proposed approach can potentially work very well on compositional generalization. Expanding the scope of the analysis (Eg. defining a clear evaluation setup, experimenting on a large number of tasks, etc.) in sec 4.7 can significantly strengthen the paper and become a core contribution. Other I couldn’t find enough details in the paper to understand how the correlation between skills and words is computed and how it is visualized. Algorithm 1: line 12 needs to be described better Plots are hard to visualize due to small font sizes (fig 4, fig 6). Also, there are layout issues that need to be fixed (Unclear why Fig 6 appears in p8) Questions How is the MI computed in Fig 6? Have you considered alternative approaches where you simply segment the trajectories and cluster them to identify salient skills? I believe this could be a strong baseline. Can you provide some intuition to why the vector quantization approach would be preferred over this approach? 
Presentation 132: “task can be a union of other tasks” - this is not clear to me 136: “sub-goals for the task are encoded within this single instruction” - didn’t understand this 217: “finetune the language encoder to the vocabulary” - what does this mean? There are mentions of ‘GIF’ in the paper. Good to rephrase. Limitations Seems adequate.
NIPS
Title LISA: Learning Interpretable Skill Abstractions from Language Abstract Learning policies that effectively utilize language instructions in complex, multitask environments is an important problem in sequential decision-making. While it is possible to condition on the entire language instruction directly, such an approach could suffer from generalization issues. In our work, we propose Learning Interpretable Skill Abstractions (LISA), a hierarchical imitation learning framework that can learn diverse, interpretable primitive behaviors or skills from languageconditioned demonstrations to better generalize to unseen instructions. LISA uses vector quantization to learn discrete skill codes that are highly correlated with language instructions and the behavior of the learned policy. In navigation and robotic manipulation tasks, LISA outperforms a strong non-hierarchical Decision Transformer baseline in the low data regime and is able to compose learned skills to solve tasks containing unseen long-range instructions. Our method demonstrates a more natural way to condition on language in sequential decision-making problems and achieve interpretable and controllable behavior with the learned skills. N/A 1 Introduction Intelligent machines should be able to solve a variety of complex, long-horizon tasks in an environment and generalize to novel scenarios. In the sequential decision-making paradigm, provided expert demonstrations, an agent can learn to perform these tasks via multi-task imitation learning (IL). As humans, it is desirable to specify tasks to an agent using a convenient, yet expressive modality and the agent should solve the task by taking actions in the environment. There are several ways for humans to specify tasks to an agent, such as task IDs, goal images, and goal demonstrations. However, these specifications tend to be ambiguous, require significant human effort, and can be cumbersome to curate and provide at test time. One of the most natural and versatile ways for humans to specify tasks is via natural language. The goal of language-conditioned IL is to solve tasks in an environment given language-conditioned trajectories at training time and a natural language instruction at test time. This becomes challenging when the task involves completing several sub-tasks sequen- *Equal contribution. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). tially, like the example shown in Figure 1. A crucial step towards solving this problem is exploiting the inherent hierarchical structure of natural language. For example, given the task specification “pull the handle and move black mug right”, we can split it into learning two independent primitive behaviors or skills, i.e. “pull the handle” and “move black mug right”. If we are able to decompose the problem of solving these complex tasks into learning skills, we can re-use and compose these learned skills to generalize to unseen tasks in the future. This is especially useful in the low-data regime, since we may not see all possible tasks given the limited dataset, but may see all the constituent sub-tasks. Using such hierarchical learning, we can utilize language effectively and learn skills as the building blocks of complex behaviors. Utilizing language effectively to learn skills is a non-trivial problem and raises several challenges. 
(i) The process of learning skills from language-conditioned trajectories is unsupervised as we may not have knowledge about which parts of the trajectory corresponds to each skill. (ii) We need to ensure that the learned skills are useful, i.e. encode behavior that can be composed to solve new tasks. (iii) We would like the learned skills to be interpretable by humans, both in terms of the language and the behaviours they encode. There are several benefits of interpretability. For example, it allows us to understand which skills our model is good at and which skills it struggles with. In safety critical settings such as robotic surgery or autonomous driving, knowing what each skill does allows us to pick and choose which skills we want to run at test time. It also provides a visual window into a neural network policy which is extremely desirable [54]. There have been prior works such as [38, 47, 13] that have failed to address these challenges and condition on language in a monolithic fashion without learning skills. As a result, they tend to perform poorly on long-horizon composition tasks such as the one in Figure 1. To this end, we propose Learning Interpretable Skill Abstractions from language (LISA), a hierarchical imitation learning framework that can learn interpretable skills from language-conditioned offline demonstrations. LISA uses a two-level architecture – a skill predictor that predicts quantized skills from a learnt vector codebook and a policy that uses these skill vector codes to predict actions. The discrete skills learned from language are interpretable (see Figure 2 and 4) and can be composed to solve long-range tasks. Using quantization maximizes skill reuse and enforces a bottleneck to pass information from the language to the policy, enabling unsupervised learning of interpretable skills. We perform experiments on grid world navigation and robotic manipulation tasks and show that our hierarchical method can outperform a strong non-hierarchical baseline based on Decision Transformer [11] in the low-data regime. We analyse these skills qualitatively and quantitatively and find them to be highly correlated to language and behaviour. Finally, using these skills to perform long-range composition tasks on a robotic manipulation environment results in performance that is nearly 2x better than the non-hierarchical version. Concretely, our contributions are as follows: • We introduce LISA, a novel hierarchical imitation framework to solve complex tasks specified via language by learning re-usable skills. • We demonstrate the effectiveness of our approach in the low-data regime where its crucial to break down complex tasks to generalize well. • We show our method performs well in long-range composition tasks where we may need to apply multiple skills sequentially. • We also show that the learned skills are highly correlated to language and behaviour and can easily be interpreted by humans. 2 Related Work 2.1 Imitation Learning Imitation learning (IL) has a long history, with early works using behavioral cloning [41–43] to learn policies via supervised learning on expert demonstration data. Recent methods have shown significant improvements via learning reward functions [21] or Q-functions [18] from expert data to mimic expert behavior. Nevertheless, these works typically consider a single task. 
An important problem here is multi-task IL, where the imitator is trained to mimic behavior on a variety of training tasks with the goal of generalizing the learned behaviors to test tasks. A crucial variable in the multi-task IL set-up is how the task is specified, e.g vectorized representations of goal states [37], task IDs [24], and single demonstrations [56, 14, 16, 57]. In contrast, we focus on a multi-task IL setup with task-specification through language, one of the most natural and versatile ways for humans to communicate desired goals and intents. 2.2 Language Grounding Several prior works have attempted to ground language with tasks or use language as a source of instructions for learning tasks with varying degrees of success [32, 55, 4, 39, 5]. [27] is a good reference for works combining language with sequential-decision making. But apart from a few exceptions, most algorithms in this area use the language instruction in a monolithic fashion and are designed to work for simple goals that requires the agent to demonstrate a single skill [40, 9, 20, 6] or tasks where each constituent sub-goal has to be explicitly specified [10, 46, 34, 3, 52, 50, 17, 35, 31]. Some recent works have shown success on using play data [28] or pseudo-expert data such as LOReL [38] and CLIPORT [47]. LOReL and CLIPORT are not hierarchical techniques. [28] can be interpreted as a hierarchical technique that generates latent sub-goals as a function of goal images, language instructions and task IDs but the skills learned by LISA are purely a function of language and states alone and do not require goal images or task IDs. [23, 22] and [49] are some examples of works that use a two-level architecture for language conditioned tasks but neither of these methods learn skills that are interpretable. 2.3 Latent-models and Hierarchical Learning Past works have attempted to learn policies conditioned on latent variables and some of them can be interpreted as hierarchical techniques. For example, [15] learns skills using latent variables that visit different parts of the environment’s state space. [45] improved on this by learning skills that were more easily predictable using a dynamics model. But these fall more under the category of skill discovery than hierarchical techniques since the skill code is fixed for the entire trajectory, as is the case with [15]. [29] and [26] are other works that use a latent-variable approach to IL. But these approaches don’t necessarily learn a latent variable with the intention of breaking down complex tasks into skills. With LISA, we sample several skills per trajectory with the clear intention of each skill corresponding to completing a sub-task for the whole trajectory. Also, none of the methods mentioned here condition on language. There has been some work on hierarchical frameworks for RL to learn high-level action abstractions, called options [51], such as [25, 58, 36] but these works are not goal-conditioned. Unlike LISA, these works don’t use language and the options might lack diversity and not correspond to any concrete or interpretable skills. Furthermore, none have used the VQ technique to learn options and often suffer from training instabilities. 3 Approach The key idea of LISA is to learn quantized skill representations that are informative of both language and behaviors, which allows us to break down high-level instructions, specified via language, into discrete, interpretable and composable codes (see Fig. 2, Fig. 6 and Fig. 8 for visualizations). 
These codes enable learning explainable and controllable behaviour, as shown in Fig. 1 and Fig. 4. Section 3.1 describes the problem formulation, an overview of our framework, and presents our language-conditioned model. Section 3.2 provides details on the training approach. 3.1 Language-conditioned Skill Learning 3.1.1 Problem Setup We consider general multi-task environments, represented as a task-augmented Markov decision process (MDP) with a family of different tasks T . A task Ti may be composed of other tasks in T and encode multiple sub-goals. For example, in a navigation environment, a task could be composed of two or more sub-tasks - “pick up ball”, “open door” - in any hierarchical order. S,A represent state and action spaces. We assume that each full task has a single natural language description l ∈ L, where L represents the space of language instructions. Any sub-goals for the task are encoded within this single language instruction. We assume access to an offline dataset D of trajectories obtained from an optimal policy for a variety of tasks in an environment with only their language description available. Each trajectory τ i = (li, {(si1, ai1), (si2, ai2), ..., (siT , aiT )}) consists of the language description and the observations sit ∈ S, actions ait ∈ A taken over T timesteps. The trajectories are not labeled with any rewards. Our aim is to predict the expert actions at, given a language instruction and past observations. Note that each trajectory in the training dataset can comprise of any number of sub-tasks. For example, we could have a trajectory to “open a door” and another to “pick up a ball and close the door” in the training data. With LISA we aim to solve the task “open a door and pick up the ball” at test time even though we haven’t seen this task at training time. In a trajectory with multiple sub-tasks, the training dataset does not give us information about where one sub-task ends and where another one begins. LISA must learn how to identify and stitch together these sub-tasks learned during training, in order to solve a new language instruction such as the one shown in Fig. 1 at test time. 3.1.2 Hierarchical Skill Abstractions We visualize the working of LISA in Figure 3. Our framework consists of two modules: a skill predictor f : L × S → C and a policy π : S × C → A. Here, C = { z1, . . . , zK } is a learnable codebook of K quantized skill latent codes. D is the dimension of the latent space of skills. Our key idea is to break learning behavior from language in two stages: 1) Learn discrete latent codes z, representing skills, from the fulllanguage instruction to decompose the task into smaller sub-goals 2) Learn a policy π conditioned only on these discrete codes. In LISA, both stages are trained end-to-end. Given an input τ = (l, {st, at}Tt=1), the skill predictor f predicts a skill code z̃ ∈ RD at a timestep t as z̃ = f(l, (st, st−1, ...)). These codes are discretized using a vector quantization operation q(·) that maps a latent z̃ to its closest codebook entry z = q(z̃). The quantization operation q(·) helps in learning discrete skill codes and acts as a bottleneck on passing language information. We detail its operation in Sec. 3.2. The chosen skill code z, is persisted for H timesteps where H is called the horizon. More details on how we chose the horizon and ablations studies on the choice of H can be found in appendix sections D and F.1. After H timesteps, the skill predictor is invoked again to predict a new skill. 
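The two-level rollout described above can be summarized in a short sketch. This is a minimal illustration only: it stands in for the causal Transformer modules of Section 3.2.1 with small MLPs, feeds the skill predictor just the most recent state rather than all seen states, and uses a random vector in place of the DistilBERT instruction embedding; all names and dimensions (SkillPredictor, Policy, quantize, STATE_DIM, ...) are hypothetical rather than taken from the paper's code.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions; the paper's actual sizes differ.
STATE_DIM, LANG_DIM, CODE_DIM, ACT_DIM = 8, 16, 4, 2
K, H, T = 20, 10, 50          # codebook size, skill horizon, episode length

class SkillPredictor(nn.Module):
    """Maps the instruction embedding and the current state to a continuous skill embedding z~.
    (The paper uses a small causal Transformer over all seen states; an MLP keeps the sketch short.)"""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LANG_DIM + STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, CODE_DIM))
    def forward(self, lang, state):
        return self.net(torch.cat([lang, state], dim=-1))

class Policy(nn.Module):
    """Predicts an action from the current state and the active (quantized) skill code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(CODE_DIM + STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM))
    def forward(self, z, state):
        return self.net(torch.cat([z, state], dim=-1))

codebook = nn.Embedding(K, CODE_DIM)           # learnable skill codebook C
skill_predictor, policy = SkillPredictor(), Policy()

def quantize(z_tilde):
    """q(.): snap z~ to its nearest codebook entry (Euclidean distance)."""
    dists = torch.cdist(z_tilde.unsqueeze(0), codebook.weight)   # (1, K)
    idx = dists.argmin(dim=-1)
    return codebook(idx).squeeze(0), idx.item()

lang = torch.randn(LANG_DIM)                   # stand-in for a DistilBERT instruction embedding
state = torch.randn(STATE_DIM)                 # stand-in for the environment observation
z = None
for t in range(T):
    if t % H == 0:                             # re-select a skill every H steps
        z, code_id = quantize(skill_predictor(lang, state))
    action = policy(z, state)
    state = torch.randn(STATE_DIM)             # placeholder for env.step(action)
```

The design point that matters for what follows is that the policy receives the language only through the quantized code z, which is re-selected every H steps.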
Persisting each skill code for H timesteps forces it to act as a temporal abstraction over actions, i.e. an option [51]. The policy π predicts the action at each timestep t conditioned on the state and the single skill code z that is active at that timestep. For π to correctly predict the original actions, it needs to use the language information encoded in the skill codes. LISA learns quantized skill codes in a vector codebook instead of continuous embeddings, as this encourages reusing and composing these codes to pass information from the language input to the actual behavior. Our learnt discrete skill codes add interpretability and controllability to the policy's behavior.

3.2 Training LISA

Learning Discrete Skills. LISA uses Vector Quantization (VQ), inspired by [53]. It is a natural and widely-used method to map an input signal to a low-dimensional, discrete learnt representation. VQ learns a codebook C = { z1, . . . , zK } of K embedding vectors. Given an embedding z̃ from the skill predictor f, it maps the embedding to the closest vector in the codebook: z = q(z̃) := argmin_{zk ∈ C} ∥z̃ − zk∥2, with the codebook vectors updated to be the moving average of the embeddings z̃ closest to them. This can classically be seen as learning K cluster centers via k-means [19]. Backpropagation through the non-differentiable quantization operation is achieved by a straight-through gradient estimator, which simply copies the gradients from the decoder to the encoder, so that the model and codebook can be trained end-to-end. VQ forces each learnt skill z to lie in C, which can be thought of as learning K prototypes or cluster centers for the language embeddings using the seen states. This acts as a bottleneck that efficiently decomposes a language instruction into sub-parts encoded as discrete skills.

LISA Objective. LISA is trained end-to-end using the objective LLISA = LBC + λLVQ, where LBC is the behavior-cloning loss on the policy πθ, λ is the VQ loss weight, and LVQ is the vector quantization loss on the skill predictor fϕ, given as: LVQ(f) = Eτ [ ∥sg[q(z̃)] − z̃∥²₂ ] (1) with z̃ = fϕ(l, (st, st−1, ...)). sg[·] denotes the stop-gradient operation. LVQ is also called the commitment loss. It minimizes the conditional entropy of the skill predictor embeddings given the codebook vectors, making each embedding stick to a single codebook vector. The codebook vectors are learnt using an exponential moving average update, the same as [53].

Avoiding language reconstruction. LISA avoids auxiliary losses for reconstructing language from the skill latent codes; since it is not obvious why the skill codes should still encode language without such a loss, we expand on this here. For a given signal X and a code Z, reconstructing the signal from the code as X̃ = f(Z) with a cross-entropy loss amounts to maximizing the Mutual Information (MI) I(X,Z) between X and Z [1, 7]. In our case, we can write the MI between the skill codes and language using entropies as I(z, l) = H(z) − H(z | l), whereas methods that attempt to reconstruct language apply the decomposition I(z, l) = H(l) − H(l | z). Here H(l), the entropy of the language instructions, is constant, which is what yields the cross-entropy loss. Thus we can avoid language reconstruction via a cross-entropy loss by maximizing I(z, l) directly.
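A minimal sketch of the quantization step and the commitment loss of eq. (1) is given below. It assumes a plain learnable codebook trained by gradient descent (the extra codebook term stands in for the exponential-moving-average update actually used, following [53]); the class and variable names are illustrative, not the released implementation.

```python
import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Nearest-neighbour quantization with a straight-through gradient.
    Sketch only: the paper maintains the codebook with an EMA update; here the
    codebook is a plain learnable Embedding so the whole module trains by backprop."""
    def __init__(self, num_codes: int, code_dim: int, beta: float = 1.0):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, code_dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta                      # weight of the commitment term

    def forward(self, z_tilde):               # z_tilde: (batch, code_dim)
        dists = torch.cdist(z_tilde, self.codebook.weight)      # (batch, K)
        idx = dists.argmin(dim=-1)                               # chosen skill ids
        z_q = self.codebook(idx)                                  # quantized codes
        # Commitment loss (eq. 1): pull the predictor output toward its code.
        commit = ((z_q.detach() - z_tilde) ** 2).mean()
        # Codebook term: pulls codes toward predictor outputs (stand-in for the EMA update).
        embed = ((z_q - z_tilde.detach()) ** 2).mean()
        loss = self.beta * commit + embed
        # Straight-through estimator: forward pass uses z_q, backward pass copies
        # gradients from z_q to z_tilde unchanged.
        z_q = z_tilde + (z_q - z_tilde).detach()
        return z_q, idx, loss

# Usage: z_q feeds the policy; `loss` is added to the behavior-cloning loss
# with weight lambda to form L_LISA = L_BC + lambda * L_VQ.
vq = VectorQuantizer(num_codes=20, code_dim=16)
z_tilde = torch.randn(32, 16, requires_grad=True)
z_q, skill_ids, vq_loss = vq(z_tilde)
```

The straight-through line is the standard trick that lets gradients from the policy's behavior-cloning loss reach the skill predictor despite the non-differentiable argmin.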
In LISA, LVQ plays the role of −H(z | l), and we find there is no need to place a constraint on H(z), as the learned skill codes are diverse, needing to encode enough information to predict the correct actions. (In experiments, we tried enforcing a constraint on H(z) with an extra InfoNCE loss term but did not observe any gains.)

Algorithm 1 Training LISA
Input: Dataset D of language-paired trajectories
Input: Number of skills K and horizon H
1: Initialize skill predictor fϕ, policy πθ
2: Vector quantization op q(·)
3: while not converged do
4:   Sample τ = (l, {s0, s1, s2, ..., sT}, {a0, a1, a2, ..., aT})
5:   Initialize S = {s0}                ▷ List of seen states
6:   for k = 0 .. ⌊T/H⌋ do              ▷ Sample a skill every H steps
7:     z ← q(fϕ(l, S))
8:     for step t = 1 .. H do           ▷ Predict actions using the fixed skill and a context of length H
9:       a_{kH+t} ← πθ(z, S[−H:])       ▷ condition on the last H seen states
10:      S ← S ∪ {s_{kH+t}}             ▷ Append seen state
11:    end for
12:    Train fϕ, πθ using objective LLISA
13:  end for
14: end while

Figure 4: Behavior with fixed LISA options. We show the word cloud and the behavior of the policy obtained by using a fixed skill code z = 14 for an entire episode. We find that this code encodes the skill "closing the drawer", as indicated by the word cloud. The policy executes this skill with a high degree of success when conditioned on this code for the entire trajectory, across multiple environment initializations and seeds.

As a result, LISA can maximize the MI between the learnt skills and language without auxiliary reconstruction losses, enforcing only LVQ on the skill codes. We empirically estimate the MI between the language and the skill codes, and our experiments in Section 4.6 confirm this.

3.2.1 LISA Implementation LISA can be implemented using different network architectures, such as Transformers or MLPs. In our experiments, we use Transformer architectures with LISA, but we find that our method is effective even with simple architecture choices such as MLPs, as shown in appendix section F.5. Even when using Transformers for both the skill predictor and the policy network, our compute requirement is comparable to the non-hierarchical Flat Transformer policy, as we can get away with using fewer layers in each module.

Language Encoder. We use a pre-trained DistilBERT [44] encoder to generate language embeddings from the text instruction. We fine-tune the language encoder end-to-end and use the full language embedding for each word token, not a pooled representation of the whole text.

Observation Encoder. For image observations, we use convolution layers to generate embeddings. For simple state representations, we use MLPs.

Skill Predictor. The skill predictor network f is implemented as a small Causal Transformer that takes in the language embeddings and the observation embeddings at each time step. The language embeddings are concatenated before the observation embeddings prior to being fed into the skill predictor. The network applies a causal mask hiding future observations.

Policy Network. Our policy network π is also implemented as a small Causal Transformer, inspired by Decision Transformer (DT) [11]. However, unlike DT, our policy is not conditioned on any reward signal, but on the skill code. The sequence length of π is the horizon H of the skills, which is much smaller than the length of the full trajectory.

Flat Decision Transformer Baseline. Our flat baseline is based on DT and is implementation-wise similar to LISA, but without a skill predictor network.
The policy here is a Causal Transformer, where we modify DT to condition on the language instruction embedding from a pre-trained DistillBERT text encoder instead of the future sum of returns. We found this baseline to be inefficient at handling long-range language instructions, needing sequence lengths of 1000 on complex environments such as BabyAI-BossLevel in our experiments. Since LISA has two transformers as opposed to just one in the flat baseline we ensured that the baseline and our method had a similar number of total parameters. To this end, the flat baseline uses Transformer network with 2 self-attention layers, and LISA’s skill predictor and policy use Transformer network with a single self-attention layer each. We also ensured that the embedding dimension and the number of heads in each layer were exactly the same in both LISA and the flat baseline. Details of this are provided in appendix sections D.1 and D.2 respectively. In fact, one could argue that LISA has less representation power because the policy transformer can only attend to the last H steps while the flat baseline can attend to the entire trajectory which is what makes it an extremely strong baseline. The flat baseline also uses the same pre-trained DistillBERT text encoder model as LISA for dealing with natural language input. 4 Experiments In this section, we evaluate LISA on grid-world navigation and robotic manipulation tasks. We compare the performance of LISA with a strong non-hierarchical baseline in the low-data regime. We then analyse our learnt skill abstractions in detail – what they represent, how we can interpret them and how they improve performance on downstream composition tasks. For the sake of brevity, we present additional ablations in the Appendix F, on doing manual planning with LISA skills (Section F.2), transferring learned skills to different environments (Section F.3) and learning continuous skills (Section F.6). 4.1 Datasets Several language-conditioned datasets have been curated as of late such as [46, 48, 13, 38, 33, 2, 10, 12]. Nevertheless, a lot of these datasets focus on complex-state representations and navigation in 3D environments, making them challenging to train on and qualitatively analyze our skills as shown in Fig. 4. We found BabyAI, a grid-world navigation environment and LOReL, a robotic manipulation environment as two diverse test beds that were very different from each other and conducive for hierarchical skill learning as well as detailed qualitative and quantitative analysis of our learned skills and we use them for our experiments. BabyAI Dataset. The BabyAI dataset [13] contains 19 levels of increasing difficulty where each level is set in a grid world and an agent sees a partially observed ego-centric view in a square of size 7x7. The agent must learn to perform various tasks of arbitrary difficulty such as moving objects between rooms, opening or closing doors, etc. all with a partially observed state and a language instruction. The language instructions for easy levels are quite simple but get exponentially more challenging for harder levels and contain several skills that the agent must complete in sequence (examples in appendix section C.1). The dataset provides 1 million expert trajectories for each of the 19 levels, but we use 0.1− 10% of these trajectories to train our models. 
We evaluate our policy on a set of 100 different instructions from the gym environment for each level, which contain high percentage of unseen environments layouts and language instructions given the limited data we use for training. More details about this dataset can be found in Appendix C.1 and in the BabyAI paper. LOReL Sawyer Dataset. This dataset [38] consists of pseudo-expert trajectories or play data collected from a replay buffer of a random RL policy and has been labeled with post-hoc crowdsourced language instructions. Hence, the trajectories complete the language instruction provided but may not necessarily be optimal. Play data is inexpensive to collect [30] in the real world and it is important for algorithms to be robust to such datasets as well. However, due to the randomness in the trajectories, this makes the dataset extremely difficult to use in a behavior cloning (BC) setting. Despite this, we are able to achieve good performance on this benchmark and are able to learn some very useful skills. The LOReL Sawyer dataset contains 50k trajectories of length 20 on a simulated environment with a Sawyer robot. We evaluate on the same set of 6 tasks that the original paper does for our results in Table 1: close drawer, open drawer, turn faucet right, turn faucet left, move black mug right, move white mug down. We use two different settings - with robot state space observations and partially-observed image observations. More details can be found in the Appendixd C.2 and in the LOReL paper. ∗ We optimized a language-conditioned BC model following the LOReL paper to the best of our abilities but could not get better performance. 4.2 Baselines Original. These refer to the baselines from the original paper for each dataset. For BabyAI, we trained their non-hierarchical RNN based method on different number of trajectories. Similarly, on LOReL we compare with the performance of language-conditioned BC. The original LOReL method uses a planning algorithm on a learned reward function to get around the sub-optimal nature of the trajectories. We found the BC baseline as a more fair comparison, as LISA is trained using BC as well. Nonetheless, we compare with the original LOReL planner in Section 4.7 for composition tasks. LOReL results in Table 1 refer to the performance on the 6 seen instructions in the LOReL evaluation dataset, same as ones reported in the original paper. Flat Baseline. We implement a non-hierarchical baseline using language-conditioned Decision Transformer denoted as Lang DT, the details of which are in section 3.2.1. 4.3 How does performance of LISA compare with non-hierarchical baselines in low-data regime? We consider three levels from the BabyAI environment and the LOReL Sawyer environment. For BabyAI, we consider the GoToSeq, SynthSeq and BossLevel tasks since they are challenging and require performing several sub-tasks one after the other. Since these levels contain instructions that are compositional in nature, when we train on limited data the algorithm must learn skills which form complex instructions to generalize well to unseen instructions at test time. Our experimental results are shown in Table 1. We train the models on a randomly sampled 1k, 10k and 100k trajectories from the full BabyAI dataset and 50k trajectories on the LOReL dataset. We use more data from the LOReL dataset because of the sub-optimal nature of the trajectories. 
On all the environments, our method is competitive to or outperforms the strong non-hierarchical Decision Transformer baseline. The gap grows larger as we reduce the number of trajectories trained on, indicating that our method is able to leverage the common sub-task structures better and glean more information from limited data. As expected, with larger amounts of training data it becomes hard to beat the flat baseline since the model sees more compositions during training and can generalize better at test time [8]. As mentioned above, we evaluate on the same 6 seen instructions the original LOReL paper did. We also evaluate the performance on varying language instructions on LOReL, similar to the original paper, with additional results in Appendix E. We were pleasantly surprised that LISA is 2x better than the flat Lang-DT baseline on LOReL tasks, reaching 40% success rate using partial image observations despite the sub-optimal nature of the data. One explanation for this is that the discrete skill codes are able to capture different ways of doing the same task, thereby allowing LISA to learn an implicit multi-modal policy. This is not possible with the flat version as it has no way to compartmentalize these noisy trajectories, and perhaps tends to overfit on this noisy data, leading to performance degradation. 4.4 What skills does LISA learn? Are they diverse? To answer this question, we analyse the skills produced by LISA and the language tokens corresponding to each skill. We plot a heat map in Figure 5 corresponding to the correlation between the language tokens and skill codes. Here, we plot the map corresponding to the LOReL dataset. From the figure, we can see that certain skill codes correspond very strongly to certain language tokens and by extension, tasks. We also see the sparse nature of the heat maps which indicates that each skill corresponds to distinct language tokens. We also plot word clouds corresponding to four different options in the LOReL environment in Figure 6 and we notice that different options are triggered by different language tokens. From the figure, it is clear that the skill on the top left corner corresponds to close the drawer and the skill on the top right corresponds to turn faucet left. Similar word clouds and heat maps for the BabyAI environments are in the appendix section B.3. 4.5 Do the skills learned by LISA correspond to interpretable behavior? We have seen that the different skills correspond to different language tokens, but do the policies conditioned on these skills behave according to the language tokens? To understand this, we fix the skill code for the entire trajectory and run the policy i.e. we are shutting off the skill predictor and always predicting the same skill for the entire trajectory. As we can see from the word cloud and the corresponding trajectory in Figure 4, the behaviour for skill code 14 is exactly what we can infer from the language tokens in the word cloud – close the drawer. More such images and trajectories can be found in the appendix section B.5. 4.6 Why do LISA learned skills show such a strong correlation to language? As mentioned in section 3.2, the commitment loss from VQ acts as a way to increase the MI between the language and the skill codes during training. This allows the codes to be highly correlated with language without any reconstruction losses. 
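The MI reported in the following analysis can be estimated with a simple plug-in estimator over discrete (instruction, skill code) pairs. The sketch below is one way to compute such an estimate and is an assumption of ours rather than the paper's exact evaluation code; it treats each instruction as a categorical id and pools one skill code per horizon segment.

```python
import numpy as np

def plugin_mutual_information(instr_ids, skill_codes):
    """Plug-in estimate of I(z, l) = H(z) + H(l) - H(z, l) in nats,
    computed from paired samples of discrete instruction ids and skill codes."""
    instr_ids = np.asarray(instr_ids)
    skill_codes = np.asarray(skill_codes)

    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log(p)).sum()

    # Encode each (instruction, code) pair as a single integer for the joint entropy.
    joint = instr_ids.astype(np.int64) * (int(skill_codes.max()) + 1) + skill_codes
    return entropy(instr_ids) + entropy(skill_codes) - entropy(joint)

# Toy usage: 3 instructions, 4 skill codes, 1000 paired samples.
rng = np.random.default_rng(0)
l = rng.integers(0, 3, size=1000)
z = (l + rng.integers(0, 2, size=1000)) % 4    # codes partially determined by language
print(plugin_mutual_information(l, z))
```

Higher values mean the skill codes carry more information about which instruction generated them.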
To analyze this, we plot the MI between the options and the language during training on the BabyAI BossLevel with 1k trajectories; the plot can be seen in Figure 7. The plots show the MI increasing over training for a wide range of settings as we vary the number of skills and the horizon. In the ablation studies below, we report the success rate corresponding to each of these curves and notice an almost direct correlation between increasing MI and task performance. This is very encouraging, since it clearly shows that the skills encode language and that this directly impacts the performance of the behavior-cloning policy.

4.7 Can we use the learned skills to perform new composition tasks? To test composition performance, we evaluate on LOReL composition tasks using images in Table 2. To this end, we handcraft 15 unseen composition instructions. We list these instructions in Appendix Table 5, with one such example being "pull the handle and move black mug down". We ran 10 different runs of each instruction across 3 different seeds. As we can see, our performance is nearly 2x that of the non-hierarchical baseline. We also compare with the original LOReL planner on these composition tasks and notice that we perform slightly better, despite the planner having access to a reward function and a dynamics model pre-trained on 1M frames while LISA is trained from scratch. We set the maximum number of episode steps to 40, up from the usual 20, for all methods in these experiments because of the compositional nature of the tasks. Note that the results in Table 1 already reflect compositionality performance on the BabyAI dataset, since we train with 0.1%-10% of the data: when we evaluate on the gym environment, which can generate any possible language instruction from the BabyAI grammar, we may come across several unseen instructions at test time. To give a sense of the percentage of unseen instructions for BabyAI when evaluating on the gym environment, we take the different BabyAI environments and report the percentage of language instructions unseen in the training data for different training data regimes in Table 3. For each statistic, we sample 10,000 random instructions from the environment and check how many are unseen in the training dataset used, repeated over 3 different seeds.

4.8 How does LISA compare to simply doing K-means clustering on the language and state embeddings? In LISA, the VQ approach can be seen as taking the concatenated language-state inputs and projecting them into a learned embedding space. VQ simply learns K embedding vectors that act as K cluster centers for the projected input vectors in this embedding space, while remaining differentiable, which enables learning through backpropagation. This is also similar to prototypical methods used for few-shot learning and amounts to a form of deep, differentiable clustering, giving an intuition for why it works. To compare against k-means, we construct a simple unsupervised learning baseline that clusters trajectories in the training dataset using k-means. Specifically, in the BabyAI BossLevel environment with 1k training trajectories, we take all concatenated language-state vectors for all trajectories in the dataset, cluster them using k-means, and use the assigned cluster centers as the skill codes. We then learn a policy using these skill codes to measure their efficacy, and found that LISA achieves a success rate of 49.1 ± 2.4% while k-means achieves 20.2 ± 5.2% over 3 seeds.
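For concreteness, the clustering baseline can be sketched as follows. The snippet assumes pre-computed language and state embeddings and hypothetical dimensions; the function name and the use of scikit-learn are our choices for illustration, not a description of the exact baseline code.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_skill_codes(lang_embs, state_embs, num_skills=20, seed=0):
    """Baseline: cluster concatenated (language, state) vectors with k-means
    and use the cluster assignment as a fixed, non-learned skill code.
    lang_embs: (num_steps, d_lang), state_embs: (num_steps, d_state)."""
    feats = np.concatenate([lang_embs, state_embs], axis=-1)
    km = KMeans(n_clusters=num_skills, n_init=10, random_state=seed).fit(feats)
    return km.labels_, km.cluster_centers_     # per-step codes, and the "codebook"

# Toy usage with random embeddings standing in for DistilBERT / state features.
rng = np.random.default_rng(0)
codes, centers = kmeans_skill_codes(rng.normal(size=(5000, 32)),
                                    rng.normal(size=(5000, 8)))
```

Unlike LISA, the cluster assignment here is fixed before the policy is trained, so nothing in the clustering objective encourages the resulting codes to carve an instruction into sub-goals that are useful for control.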
Thus, we see that using the simple k-means skills is insufficient to learn a good policy to solve the BabyAI BossLevel task, as the skills are not representative enough of the language instructions. A reason for this is that language and state vectors lie in different embedding spaces, and K-means based on euclidean distance is not optimal on the concatenated vectors. 5 Limitations and Future Work We present LISA, a hierarchical imitation learning framework that can be used to learn interpretable skill abstractions from language-conditioned expert demonstrations. We showed that the skills are diverse and can be used to solve long-range language tasks and that our method outperforms a strong non-hierarchical baseline in the low-data regime. However, there are several limitations to LISA and plenty of scope for future work. One limitation of LISA is that there are several hyperparameters to tune that may affect performance like the number of options and the horizon for each option. It certainly helps to have a good idea of the task to decide these hyperparameters even though the ablations show that the method is fairly robust to these choices. Its also useful to learn the horizon for each skill by learning a termination condition and we leave this for future work. Although our method has been evaluated on the language-conditioned imitation learning setting, its not difficult to modify this method to make it work for image goals or demos, and in the RL setting as well. Its interesting to see if the vector quantization trick can be used to learn goal-conditioned skills in a more general framework. Acknowledgments and Disclosure of Funding We are thankful to John Schulman, Chelsea Finn, Karol Hausman and Dilip Arumugam for initial discussions regarding our method, and to Suraj Nair for providing help with the LOReL baseline. This research was supported in part by NSF (#1651565, #1522054, #1733686), ONR (N00014-19-12145), AFOSR (FA9550-19-1-0024), FLI and Samsung.
1. What is the main contribution of the paper regarding hierarchical instruction-conditioned imitation learning? 2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to learn discrete skills from language and demonstrations in an end-to-end fashion? 3. How does the paper evaluate its method on two different datasets, and what are the results of the ablations and qualitative analysis? 4. What are the limitations of the paper regarding the choice of horizon and number of options, and how could these be addressed? 5. What suggestions do you have for strengthening the paper, such as extending manual planning experiments or showing compositional generalization of sub-tasks? 6. Are there any minor comments or clarifications that could improve the paper, such as providing more details on setting H or citing relevant works?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper This paper proposes a hierarchical model for instruction-conditioned imitation learning (instruction following). A skill predictor network conditions on instructions and observations to predict discrete skill codes, and a policy network conditions on these skill codes to carry out actions. Discrete skill codes are learned using vector quantization. The paper evaluates on BabyAI and LOReL imitation learning tasks, finding that (1) the method outperforms a non-hierarchical imitation learning baseline model in low-resource training settings and when carrying out complex compositional tasks and (2) discrete skills induced are correlated with words in the language. Strengths And Weaknesses Strengths S1) The approach in the paper is well-motivated and is likely to be of interest both to the sequential decision making and language grounding communities. Most past work on hierarchical language-conditioned imitation learning has relied on supervising the intermediate layer of the hierarchy, so an end-to-end approach is exciting. S2) The method is generally well-described, although some details were only clear from the appendix. S3) I appreciate that the paper evaluated on two different datasets, and presented pretty thorough ablations and qualitative analysis (in the appendix). Weaknesses W1) The main novelty of the paper is the learning of discrete skills from language & demonstrations in an end-to-end fashion -- without the discrete skill bottleneck, it seems that the LISA approach would be an end-to-end neural instruction following model of the type very common in work on language grounding (e.g. Mei et al 2016, Anderson et al. 2018), and presented as an ablation in appendix F.2. But the paper doesn't currently have enough evidence to convince me that these discrete skills are useful: W1a) While LISA does outperform a baseline approach (the "flat" model), this baseline has a pretty different architecture and initialization (in particular, LISA uses a pre-trained DistillBERT text encoder while the baseline model is trained from scratch) and so I think is much less suitable than as a baseline than the ablation in appendix F.2, which performs comparably to LISA. So I don't think it's reasonable to claim performance improvements from the LISA approach based on the current results. Update after response: Thanks to the authors for the baseline clarifications; the baseline is much more comparable to LISA than I'd thought. W1b) It's unclear to me that the use of discrete codes adds interpretability. The main interpretation method (word clouds and heatmaps) maps these skills to words in the existing instructions. Similar methods can also be applied to continuous representations (e.g. Andreas et al., Translating Neuralese). In the skill code fixing example (4.5), it seems plausible that the method is just carrying out "close the drawer" given the affordance of the environment: the arm is located next to an open drawer. See suggestions, in the limitations section, below for suggestions on this. Update after response: I appreciate the author's arguments about the easy of interpretability, and the additional experiments showing that skill codes are being used and it's not just environment affordances. W2) Although not a crucial weakness, since this paper does contribute beyond them, some relevant works are Sharma et al. Skill Induction and Planning with Latent Language ACL 2022 Corona et al. 
Modular Networks for Compositional Instruction Following NAACL 2021 W3) Some of the experimental descriptions could be clearer, in particular whether the paper is interested in evaluating generalization to instructions unseen at training time; see questions below. Questions Q1) Could more details be clarified on the difference between instructions seen in training and in evaluation for both BabyAI and LOReL, especially in the "new composition" settings? Q2) Is 4.7, is the time horizon increased at inference time only or just at training time? It's also unclear to me why H would need to be increased, since if I understand correctly it's the length of time each skill is invoked for and it seems that while the task contains more skills in this setting, skills themselves need not be longer. Limitations I appreciated the discussion of the option-related limitations: needing to fix the horizon, and number of options. It would help to more clearly outline how the values for these were chosen in this work. Suggestions I think it would really strengthen the paper to either (1) extend the "manual planning" experiments given in the Appendix to show that control codes can be intervened on to make the models carry out behavior in novel contexts where the behaviors weren't seen in training or (2) show that the models can carry out unseen compositions of sub-tasks (i.e. similar to the compositional generalization setting explored in Corona et al.) The claims in 294 and 302-305 about sub-task reuse, and skill codes capturing task variation were pretty abstract and it would really strengthen the paper if some concrete evidence for them could be presented. Minor comments line 105: Hu et al. does learn interpretable skills - language descriptions of skills are produced by their instructor model. 145: the description here could be a bit more rigorous. Is there mean to be a latent partitioning of each trajectory into steps for each sub-task, or can steps be associated with multiple sub-tasks in some soft way? 171: it would help to give more details on how H is set, perhaps with references to the values & ablations in the appendix. 181: this equation above doesn't describe the learning process for the cluster centers. 198: this is not obvious to me; a citation would help. 198: fragmented sentences 242: "off" -> "of"
NIPS
Title On Regularizing Rademacher Observation Losses Abstract It has recently been shown that supervised learning linear classifiers with two of the most popular losses, the logistic and square loss, is equivalent to optimizing an equivalent loss over sufficient statistics about the class: Rademacher observations (rados). It has also been shown that learning over rados brings solutions to two prominent problems for which the state of the art of learning from examples can be comparatively inferior and in fact less convenient: (i) protecting and learning from private examples, (ii) learning from distributed datasets without entity resolution. Bis repetita placent: the two proofs of equivalence are different and rely on specific properties of the corresponding losses, so whether these can be unified and generalized inevitably comes to mind. This is our first contribution: we show how they can be fit into the same theory for the equivalence between example and rado losses. As a second contribution, we show that the generalization unveils a surprising new connection to regularized learning, and in particular a sufficient condition under which regularizing the loss over examples is equivalent to regularizing the rados (i.e. the data) in the equivalent rado loss, in such a way that an efficient algorithm for one regularized rado loss may be as efficient when changing the regularizer. This is our third contribution: we give a formal boosting algorithm for the regularized exponential rado-loss which boost with any of the ridge, lasso, SLOPE, `∞, or elastic net regularizer, using the same master routine for all. Because the regularized exponential rado-loss is the equivalent of the regularized logistic loss over examples we obtain the first efficient proxy to the minimization of the regularized logistic loss over examples using such a wide spectrum of regularizers. Experiments with a readily available code display that regularization significantly improves rado-based learning and compares favourably with example-based learning. 1 Introduction What kind of data should we use to train a supervised learner ? A recent result has shown that minimising the popular logistic loss over examples with linear classifiers (in supervised learning) is equivalent to the minimisation of the exponential loss over sufficient statistics about the class known as Rademacher observations (rados, [Nock et al., 2015]), for the same classifier. In short, we fit a classifier over data that is different from examples, and the same classifier generalizes well to new observations. It has been shown that rados offer solutions for two problems for which the state of the art involving examples can be comparatively significantly inferior: • protection of the examples’ privacy from various algebraic, geometric, statistical and computational standpoints, and learning from private data [Nock et al., 2015]; • learning from a large number of distributed datasets without having to perform entity resolution between datasets [Patrini et al., 2016]. Quite remarkably, the training time of the algorithms involved can be smaller than it would be on examples, by orders of magnitude [Patrini et al., 2016]. Two key problems remain however: the 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. 
accuracy of learning from rados can compete experimentally with that of learning from examples, yet there is a gap to reduce for rados to be not just a good material to learn from in a privacy/distributed setting, but also a serious alternative to learning from examples at large, yielding new avenues to supervised learning. Second, theoretically speaking, it is now known that two widely popular losses over examples admit an equivalent loss in the rado world: the logistic loss and the square loss [Nock et al., 2015, Patrini et al., 2016]. This inevitably suggests that this property may hold for more losses, yet barely anything displays patterns of generalizability in the existing proofs. Our contributions: in this paper, we provide answers to these two questions, with three main contributions. Our first contribution is to show that this generalization indeed holds: other example losses admit equivalent losses in the rado world, meaning in particular that their minimiser classifier is the same, regardless of the dataset of examples. The technique we use exploits a two-player zero sum game representation of convex losses, that has been very useful to analyse boosting algorithms [Schapire, 2003, Telgarsky, 2012], with one key difference: payoffs are non-linear convex, eventually non-differentiable. These also resemble the entropic dual losses [Reid et al., 2015], with the difference that we do not enforce conjugacy over the simplex. The conditions of the game are slightly different for examples and rados. We provide necessary and sufficient conditions for the resulting losses over examples and rados to be equivalent. Informally, equivalence happens iff the convex functions of the games satisfy a symmetry relationship and the weights satisfy a linear system of equations. Some popular losses fit in the equivalence [Nair and Hinton, 2010, Gentile and Warmuth, 1998, Nock and Nielsen, 2008, Telgarsky, 2012, Vapnik, 1998, van Rooyen et al., 2015]. Our second contribution came unexpectedly through this equivalence. Regularizing a loss is standard in machine learning [Bach et al., 2011]. We show a sufficient condition for the equivalence under which regularizing the example loss is equivalent to regularizing the rados in the equivalent rado loss, i.e. making a Minkowski sum of the rado set with a classifier-based set. This property is independent of the regularizer, and incidentally happens to hold for all our cases of equivalence (Cf first contribution). A regularizer added to a loss over examples thus transfers to data in the rado world, in essentially the same way for all regularizers, and if one can solve the non-trivial computational and optimization problem that poses this data modification for one regularized rado loss, then, basically, "A good optimization algorithm for this regularized rado loss may fit to other regularizers as well” Our third contribution exemplifies this. We propose an iterative boosting algorithm, Ω-R.ADABOOST, that learns a classifier from rados using the exponential regularized rado loss, with regularization choice belonging to the ridge, lasso, `∞, or the recently coined SLOPE [Bogdan et al., 2015]. Since rado regularization would theoretically require to modify data at each iteration, such schemes are computationally non-trivial. We show that this modification can in fact be bypassed for the exponential rado loss, and the algorithm, Ω-R.ADABOOST, is as fast as ADABOOST. 
Ω-R.ADABOOST has however a key advantage over ADABOOST that to our knowledge is new in the boosting world: for any of these four regularizers, Ω-R.ADABOOST is a boosting algorithm — thus, because of the equivalence between the minimization of the logistic loss over examples and the minimization of the exponential rado loss, Ω-R.ADABOOST is in fact an efficient proxy to boost the regularized logistic loss over examples using whichever of the four regularizers, and by extension, linear combination of them (e.g., elastic net regularization [Zou and Hastie, 2005]). We are not aware of any regularized logistic loss formal boosting algorithm with such a wide spectrum of regularizers. Extensive experiments validate this property: Ω-R.ADABOOST is all the better vs ADABOOST (unregularized or regularized) as the domain gets larger, and is able to rapidly learn both accurate and sparse classifiers, making it an especially good contender for supervised learning at large on big domains. The rest of this paper is as follows. Sections §2, 3 and 4 respectively present the equivalence between example and rado losses, its extension to regularized learning and Ω-R.ADABOOST. §5 and 6 respectively present experiments, and conclude. In order not to laden the paper’s body, a Supplementary Material (SM) contains the proofs and additional theoretical and experimental results. 2 Games and equivalent example/rado losses To avoid notational load, we briefly present our learning setting to point the key quantity in our formulation of the general two players game. Let [m] .= {1, 2, ...,m} and Σm . = {−1, 1}m, for m > 0. The classical (batch) supervised learner is example-based: it is given a set of examples S = {(xi, yi), i ∈ [m]} where xi ∈ Rd, yi ∈ Σ1, ∀i ∈ [m]. It returns a classifier h : Rd → R from a predefined set H. Let zi(h) . = yh(xi) and abbreviate z(h) by z for short. The learner fits h to the minimization of a loss. Table 1, column `e, presents some losses that can be used: we remark that h appears only through z, so let us consider in this section that the learner rather fits vector z ∈ Rm. We can now define our two players game setting. Let ϕe : R→ R and ϕr : R→ R two convex and lower-semicontinuous generators. We define functionsLe : Rm×Rm → R andLr : R2 m×Rm → R: Le(p, z) . = ∑ i∈[m] pizi + µe ∑ i∈[m] ϕe(pi) , (1) Lr(q, z) . = ∑ I⊆[m] qI ∑ i∈I zi + µr ∑ I⊆[m] ϕr(qI) , (2) where µe,µr > 0 do not depend on z. For the notation to be meaningful, the coordinates in q are assumed (wlog) to be in bijection with 2[m]. The dependence of both problems in their respective generators is implicit and shall be clear from context. The adversary’s goal is to fit p∗(z) . = arg min p∈Rm Le(p, z) , (3) q∗(z) . = arg min q∈H2m Lr(q, z) , (4) with H2m .= {q ∈ R2m : 1>q = 1}, so as to attain Le(z) . = Le(p ∗(z), z) , (5) Lr(z) . = Lr(q ∗(z), z) , (6) and let ∂Le(z) and ∂Lr(z) denote their subdifferentials. We view the learner’s task as the problem of maximising the corresponding problems in eq. (5) (with examples; this is already sketched above) or (6) (with what we shall call Rademacher observations, or rados), or equivalently minimising negative the corresponding function, and then resort to a loss function. The question of when these two problems are equivalent from the learner’s standpoint motivates the following definition. Definition 1 Two generators ϕe, ϕr are said proportionate iff ∀m > 0, there exists (µe,µr) such that Le(z) = Lr(z) + b , ∀z ∈ Rm . (7) (b does not depend on z) ∀m ∈ N∗, let Gm . 
= [ 0>2m−1 1 > 2m−1 Gm−1 Gm−1 ] (∈ {0, 1}m×2 m ) (8) if m > 1, and G1 . = [0 1] otherwise (notation zd indicates a vector in Rd). Theorem 2 ϕe, ϕr are proportionate iff the optima p∗(z) and q∗(z) to eqs (3) and (4) satisfy: p∗(z) ∈ ∂Lr(z) , (9) Gmq ∗(z) ∈ ∂Le(z) . (10) If ϕe, ϕr are differentiable and strictly convex, they are proportionate iff p∗(z) = Gmq∗(z). We can alleviate the fact that convexity is strict, which results in a set-valued identity for ϕe, ϕr to be proportionate. This gives a necessary and sufficient condition for two generators to be proportionate. It does not say how to construct one from the other, if possible. We now show that it is indeed possible and prune the search space: if ϕe is proportionate to some ϕr, then it has to be a “symmetrized” version of ϕr, according to the following definition. Definition 3 Let ϕr s.t. domϕr ⊇ (0, 1). ϕs(r)(z) . = ϕr(z) + ϕr(1− z) is the symmetrisation of ϕr. Lemma 4 If ϕe and ϕr are proportionate, then ϕe(z) = (µr/µe) · ϕs(r)(z) + (b/µe) (b is in (7)). To summarize, ϕe and ϕr are proportionate iff (i) they meet the structural property that ϕe is (proportional to) the symmetrized version of ϕr (according to Definition 3), and (ii) the optimal solutions p∗(z) and q∗(z) to problems (1) and (2) satisfy the conditions of Theorem 2. Depending on the direction, we have two cases to craft proportionate generators. First, if we have ϕr, then necessarily ϕe ∝ ϕs(r) so we merely have to check Theorem 2. Second, if we have ϕe, then it matches Definition 31. In this case, we have to find ϕr = f + g where g(z) = −g(1− z) and ϕe(z) = f(z) + f(1− z). We now come back to Le(z), Lr(z) (Definition 1), and make the connection with example and rado losses. In the next definition, an e-loss `e(z) is a function defined over the coordinates of z, and a r-loss `r(z) is a function defined over the subsets of sums of coordinates. Functions can depend on other parameters as well. Definition 5 Suppose e-loss `e(z) and r-loss `r(z) are such that there exist (i) fe : R → R and fr(z) : R→ R both strictly increasing and such that ∀z ∈ Rm, −Le(z) = fe (`e(z)) , (11) −Lr(z) = fr (`r(z)) , (12) where Le(z) and Lr(z) are defined via two proportionate generators ϕe and ϕr (Definition 1). Then the couple (`e, `r) is called a couple of equivalent example-rado losses. Following is the main Theorem of this Section, which summarizes all the cases of equivalence between example and rado losses, and shows that the theory developed on example / rado losses with proportionate generators encompasses the specific proofs and cases already known [Nock et al., 2015, Patrini et al., 2016]. Table 1 also displays generator ϕr. Theorem 6 In each row of Table 1, `e(z,µe) and `r(z,µr) are equivalent for µe and µr as indicated. The proof (SM, Subsection 2.3) details for each case the proportionate generators ϕe and ϕr. 3 Learning with (rado) regularized losses We now detail further the learning setting. In the preceeding Section, we have definef zi(h) . = yh(xi), which we plug in the losses of Table 1 to obtain the corresponding example and rado losses. Losses simplify conveniently when H consists of linear classifiers, h(x) .= θ>x for some θ ∈ Θ ⊆ Rd. In this case, the example loss can be described using edge vectors Se . = {yi · xi, i = 1, 2, ...,m} since zi = θ >(yi ·xi), and the rado loss can be described using rademacher observations [Nock et al., 2015], since ∑ i∈I zi = θ >πσ for σi = yi iff i ∈ I (and −yi otherwise) and πσ . = (1/2) · ∑ i(σi + yi) ·xi. Let us define S∗r . 
= {πσ,σ ∈ Σm} the set of all rademacher observations. We rewrite any couple of equivalent example and rado losses as `e(Se,θ) and `r(S∗r ,θ) respectively 2, omitting parameters µe and µr, assumed to be fixed beforehand for the equivalence to hold (see Table 1). Let us regularize the example loss, so that the learner’s goal is to minimize `e(Se,θ,Ω) . = `e(Se,θ) + Ω(θ) , (13) 1Alternatively, −ϕe is permissible [Kearns and Mansour, 1999]. 2To prevent notational overload, we blend notions of (pointwise) loss and (samplewise) risk, as just “losses”. Algorithm 1 Ω-R.ADABOOST Input set of rados Sr . = {π1,π2, ...,πn}; T ∈ N∗; parameters γ ∈ (0, 1),ω ∈ R+; Step 1 : let θ0 ← 0, w0 ← (1/n)1 ; Step 2 : for t = 1, 2, ..., T Step 2.1 : call the weak learner: (ι(t), rt)← Ω-WL(Sr,wt,γ,ω,θt−1); Step 2.2 : compute update parameters αι(t) and δt (here, π∗k . = maxj |πjk|): αι(t) ← (1/(2π∗ι(t))) log((1 + rt)/(1− rt)) and δt ← ω · (Ω(θt)− Ω(θt−1)) ; (16) Step 2.3 : update and normalize weights: for j = 1, 2, ..., n, wtj ← w(t−1)j · exp ( −αtπjι(t) + δt ) /Zt ; (17) Return θT ; with Ω a regularizer [Bach et al., 2011]. The following shows that when fe in eq. (11) is linear, there is a rado-loss equivalent to this regularized loss, regardless of Ω. Theorem 7 Suppose H contains linear classifiers. Let (`e(Se,θ), `r(S∗r ,θ)) be any couple of equivalent example-rado losses such that fe in eq. (11) is linear: fe(z) = ae · z + be , (14) for some ae > 0, be ∈ R. Then for any regularizer Ω(.) (assuming wlog Ω(0) = 0), the regularized example loss `e(Se,θ,Ω) is equivalent to rado loss `r(S ∗,Ω,θ r ,θ) computed over regularized rados: S∗,Ω,θr . = S∗r ⊕ {−Ω̃(θ) · θ} , (15) Here, ⊕ is Minkowski sum and Ω̃(θ) .= ae · Ω(θ)/‖θ‖22 if θ 6= 0 (and 0 otherwise). Theorem 7 applies to all rado losses (I-IV) in Table 1. The effect of regularization on rados is intuitive from the margin standpoint: assume that a “good” classifier θ is one that ensures lowerbounded inner products θ>z ≥ τ for some margin threshold τ . Then any good classifier on a regularized rado πσ shall actually meet, over examples, ∑ i:yi=σi θ>(yi · xi) ≥ τ + ae · Ω(θ). This inequality ties an "accuracy" of θ (edges, left hand-side) and its sparsity (right-hand side). Clearly, Theorem 7 has an unfamiliar shape since regularisation modifies data in the rado world: a different θ, or a different Ω, yields a different S∗,Ω,θr , and therefore it may seem very tricky to minimize such a regularized loss. Even more, iterative algorithms like boosting algorithms look at first glance a poor choice, since any update on θ implies an update on the rados as well. What we show in the following Section is essentially the opposite for the exponential rado loss, and a generalization of the RADOBOOST algorithm of Nock et al. [2015], which does not modify rados, is a formal boosting algorithm for a broad set of regularizers. Also, remarkably, only the high-level code of the weak learner depends on the regularizer; that of the strong learner is not affected. 4 Boosting with (rado) regularized losses Ω-R.ADABOOST presents our approach to learning with rados regularized with regularizer Ω to minimise loss `expr (Sr,θ,Ω) in eq. (45). Classifier θt is defined as θt . = ∑t t′=1 αι(t′) · 1ι(t′), where 1k is the kth canonical basis vector. The expected edge rt used to compute αt in eq. (16) is based on the following basis assignation: rι(t) ← 1 π∗ι(t) n∑ j=1 wtjπjι(t) (∈ [−1, 1]) . (19) The computation of rt is eventually tweaked by the weak learner, as displayed in Algorithm ΩWL. 
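The quantities just introduced can be put together in a compact numerical sketch: subsampling rados from examples, the edge of eq. (19), the update parameters of eq. (16), the Ω-WL preference order, and the weight update of eq. (17). The snippet uses the lasso regularizer as one concrete choice for Ω, omits the edge clamping used for the ridge case (eq. (18)), and all function names are ours; it is a sketch of the update rules, not the released code.

```python
import numpy as np

def sample_rados(X, y, n_rados, rng):
    """Rademacher observations: pi_sigma = 0.5 * sum_i (sigma_i + y_i) * x_i,
    for random sign vectors sigma in {-1, +1}^m (a subsample of the 2^m choices)."""
    sigma = rng.choice([-1.0, 1.0], size=(n_rados, X.shape[0]))
    return 0.5 * (sigma + y) @ X                 # (n_rados, d)

def omega_radaboost(rados, T=100, omega=1e-3, Omega=lambda th: np.abs(th).sum()):
    """Sketch of Omega-R.AdaBoost with the l1 (lasso) regularizer as one concrete choice.
    rados: (n, d) matrix whose rows are Rademacher observations."""
    n, d = rados.shape
    w = np.full(n, 1.0 / n)                      # rado weights (Step 1)
    theta = np.zeros(d)
    eye = np.eye(d)
    pi_star = np.abs(rados).max(axis=0)          # pi*_k = max_j |pi_{jk}|
    for _ in range(T):
        r = (w @ rados) / pi_star                # edges r_iota, eq. (19)
        r = np.clip(r, -0.999, 0.999)            # keep the log in eq. (16) finite
        alpha = np.log((1 + r) / (1 - r)) / (2 * pi_star)           # eq. (16)
        # Omega-WL preference order: |r_iota| - delta_iota, with
        # delta_iota = omega * (Omega(theta + alpha_iota e_iota) - Omega(theta)).
        delta = np.array([omega * (Omega(theta + alpha[k] * eye[k]) - Omega(theta))
                          for k in range(d)])
        iota = int(np.argmax(np.abs(r) - delta))
        theta[iota] += alpha[iota]
        w = w * np.exp(-alpha[iota] * rados[:, iota] + delta[iota])  # eq. (17)
        w = w / w.sum()                                              # normalize (Z_t)
    return theta

# Toy usage on synthetic examples.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
theta = omega_radaboost(sample_rados(X, y, n_rados=500, rng=rng))
```

Note that the rados are computed once and never modified: regularization enters only through δt in the weight update, which is what keeps each iteration as cheap as a standard ADABOOST step.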
We investigate four choices for Ω. For each of them, we prove the boosting ability of ΩR.ADABOOST (Γ is symmetric positive definite, Sd is the symmetric group of order d, |θ| is the Algorithm 2 Ω-WL, for Ω ∈ {‖.‖1, ‖.‖2Γ, ‖.‖∞, ‖.‖Φ} Input set of rados Sr . = {π1,π2, ...,πn}; weights w ∈ 4n; parameters γ ∈ (0, 1), ω ∈ R+; classifier θ ∈ Rd; Step 1 : pick weak feature ι∗ ∈ [d]; Optional — use preference order: ι ι′ ⇔ |rι| − δι ≥ |rι′ | − δι′ // δι . = ω · (Ω(θ + αι · 1ι)− Ω(θ)), rι is given in (19) and αι is given in (16) Step 2 : if Ω = ‖.‖2Γ then r∗ ← { rι∗ if rι∗ ∈ [−γ,γ] sign (rι∗) · γ otherwise ; (18) else r∗ ← rι∗ ; Return (ι∗, r∗); vector whose coordinates are the absolute values of the coordinates of θ): Ω(θ) = ‖θ‖1 . = |θ|>1 Lasso ‖θ‖2Γ . = θ>Γθ Ridge ‖θ‖∞ . = maxk |θk| `∞ ‖θ‖Φ . = maxM∈Sd(M|θ|)>ξ SLOPE (20) [Bach et al., 2011, Bogdan et al., 2015, Duchi and Singer, 2009, Su and Candès, 2015]. The coordinates of ξ in SLOPE are ξk . = Φ−1(1− kq/(2d)) where Φ−1(.) is the quantile of the standard normal distribution and q ∈ (0, 1); thus, the largest coordinates (in absolute value) of θ are more penalized. We now establish the boosting ability of Ω-R.ADABOOST. We give no direction for Step 1 in Ω-WL, which is consistent with the definition of a weak learner in the boosting theory: all we require from the weak learner is |r.| no smaller than some weak learning threshold γWL > 0. Definition 8 Fix any constant γWL ∈ (0, 1). Ω-WL is said to be a γWL-Weak Learner iff the feature ι(t) it picks at iteration t satisfies |rι(t)| ≥ γWL, for any t = 1, 2, ..., T . We also provide an optional step for the weak learner in Ω-WL, which we exploit in the experimentations, which gives a total preference order on features to optimise further Ω-R.ADABOOST. Theorem 9 (boosting with ridge). Take Ω(.) = ‖.‖2Γ. Fix any 0 < a < 1/5, and suppose that ω and the number of iterations T of Ω-R.ADABOOST are chosen so that ω < (2amin k max j π2jk)/(TλΓ) , (21) where λΓ > 0 is the largest eigenvalue of Γ. Then there exists some γ > 0 (depending on a, and given to Ω-WL) such that for any fixed 0 < γWL < γ, if Ω-WL is a γWL-Weak Learner, then Ω-R.ADABOOST returns at the end of the T boosting iterations a classifier θT which meets: `expr (Sr,θT , ‖.‖2Γ) ≤ exp(−aγ2WLT/2) . (22) Furthermore, if we fix a = 1/7, then we can fix γ = 0.98, and if a = 1/10, then we can fix γ = 0.999. Two remarks are in order. First, the cases a = 1/7, 1/10 show that Ω-WL can still obtain large edges in eq. (19), so even a “strong” weak learner might fit in for Ω-WL, without clamping edges. Second, the right-hand side of ineq. (21) may be very large if we consider that mink maxj π2jk may be proportional to m2. So the constraint onω is in fact loose. Theorem 10 (boosting with lasso or `∞). Take Ω(.) ∈ {‖.‖1, ‖.‖∞}. Suppose Ω-WL is a γWL-Weak Learner for some γWL > 0. Suppose ∃0 < a < 3/11 s. t. ω satisfies: ω = aγWL min k max j |πjk| . (23) Then Ω-R.ADABOOST returns at the end of the T boosting iterations a classifier θT which meets: `expr (Sr,θT ,Ω) ≤ exp(−T̃γ2WL/2) , (24) where T̃ = aγWLT if Ω = ‖.‖1, and T̃ = (T − T∗) + aγWL · T∗ if Ω = ‖.‖∞; T∗ is the number of iterations where the feature computing the `∞ norm was updated3. We finally investigate the SLOPE choice. The Theorem is proven for ω = 1 in Ω-R.ADABOOST, for two reasons: it matches the original definition [Bogdan et al., 2015] and furthermore it unveils an interesting connection between boosting and SLOPE properties. Theorem 11 (boosting with SLOPE). Take Ω(.) = ‖.‖Φ. Let a . 
= min{3γWL/11,Φ−1(1 − q/(2d))/mink maxj |πjk|}. Suppose wlog |θTk| ≥ |θT (k+1)|,∀k, and fix ω = 1. Suppose (i) Ω-WL is a γWL-Weak Learner for some γWL > 0, and (ii) the q-value is chosen to meet: q ≥ 2 ·max k {( 1− Φ ( 3γWL 11 ·max j |πjk| ))/( k d )} . Then classifier θT returned by Ω-R.ADABOOST at the end of the T boosting iterations satisfies: `expr (Sr,θT , ‖.‖Φ) ≤ exp(−aγ2WLT/2) . (25) Constraint (ii) on q is interesting in the light of the properties of SLOPE [Su and Candès, 2015]. Modulo some assumptions, SLOPE yields a control of the false discovery rate (FDR) — i.e., negligible coefficients in the “true” linear model θ∗ that are found significant in the learned θ —. Constraint (ii) links the “small” achievable FDR (upperbounded by q) to the “boostability” of the data: the fact that each feature k can be chosen by the weak learner for a “large” γWL, or has maxj |πjk| large, precisely flags potential significant features, thus reducing the risk of sparsity errors, and allowing small q, which is constraint (ii). Using the second order approximation of normal quantiles [Su and Candès, 2015], a sufficient condition for (ii) is that, for some K > 0, γWL mink maxj |πjk| ≥ K · √(log d+ log q−1) ; (26) but mink maxj |πjk| is proportional to m, so ineq. (26), and thus (ii), may hold even for small samples and q-values. An additional Theorem deferred to the SM for space considerations shows that for any applicable choice of regularization (eq. 20), the regularized log-loss of θT over examples enjoys with high probability a monotonically decreasing upperbound with T as: `loge (Se,θ,Ω) ≤ log 2− κ · T + τ(m), with τ(m)→ 0 when m→∞ (and τ does not depend on T ), and κ > 0 does not depend on T . Hence, Ω-R.ADABOOST is an efficient proxy to boost the regularized log-loss over examples, using whichever of the ridge, lasso, `∞ or SLOPE regularization — establishing the first boosting algorithm for this choice —, or linear combinations of the choices, e.g. for elastic nets. If we were to compare Theorems 9 – 11 (eqs (22, 24, 25)), then the convergence looks best for ridge (the unsigned exponent is Õ(γ2WL)) while it looks slightly worse for `∞ and SLOPE (the unsigned exponent is now Õ(γ3WL)), the lasso being in between. 5 Experiments We have implemented Ω-WL4 using the order suggested to retrieve the topmost feature in the order. Hence, the weak learner returns the feature maximising |rι| − δι. The rationale for this comes from the proofs of Theorems 9 — 11, showing that ∏ t exp(−(r2ι(t)/2− δι(t))) is an upperbound on the exponential regularized rado-loss. We do not clamp the weak learner for Ω(.) = ‖.‖2Γ, so the weak learner is restricted to Step 1 in Ω-WL5. The objective of these experiments is to evaluate Ω-R.ADABOOST as a contender for supervised learning per se. We compared Ω-R.ADABOOST to ADABOOST/`1 regularized-ADABOOST [Schapire and Singer, 1999, Xi et al., 2009]. All algorithms are run for a total of T = 1000 iterations, and at the end of the iterations, the classifier in the sequence that minimizes the empirical loss is kept. Notice therefore that rado-based classifiers are evaluated on the training set which computes the 3If several features match this criterion, T∗ is the total number of iterations for all these features. 4Code available at: http://users.cecs.anu.edu.au/∼rnock/ 5the values for ω that we test, in {10−u, u ∈ {0, 1, 2, 3, 4, 5}}, are small with respect to the upperbound in ineq. (21) given the number of boosting steps (T = 1000), and would yield on most domains a maximal γ ≈ 1. 
rados. To obtain very sparse solutions for regularized-ADABOOST, we pick its ω (β in [Xi et al., 2009]) in {10−4, 1, 104}. The complete results aggregate experiments on twenty (20) domains, all but one coming from the UCI [Bache and Lichman, 2013] (plus the Kaggle competition domain “Give me some credit”), with up to d =500+ features and m =100 000+ examples. Two tables, in the SM (Tables 1 and 2 in Section 3) report respectively the test errors and sparsity of classifiers, whose summary is given here in Table 2. The experimental setup is a ten-folds stratified cross validation for all algorithms and each domain. ADABOOST/regularized-ADABOOST is trained using the complete training fold. When the domain size m ≤ 40000, the number of rados n used for Ω-R.ADABOOST is a random subset of rados of size equal to that of the training fold. When the domain size exceeds 40000, a random set of n = 10000 rados is computed from the training fold. Thus, (i) there is no optimisation of the examples chosen to compute rados, (ii) we always keep a very small number of rados compared to the maximum available, and (iii) when the domain size gets large, we keep a comparatively tiny number of rados. Hence, the performances of Ω-R.ADABOOST do not stem from any optimization in the choice or size of the rado sample. Ada ∅ ‖.‖2Id ‖.‖1 ‖.‖∞ ‖.‖Φ Ada 11 10 10 8 9 ∅ 9 3 3 2 1 ‖.‖2Id 10 17 11 9 7 ‖.‖1 10 17 7 7 4 ‖.‖∞ 11 18 9 9 8 ‖.‖Φ 10 19 10 10 11 Table 2: Number of domains for which algorithm in row beats algorithm in column (Ada = best result of ADABOOST, ∅ = Ω-R.ADABOOST not regularized, see text). Experiments support several key observations. First, regularization consistently reduces the test error of Ω-R.ADABOOST, by more than 15% on Magic, and 20% on Kaggle. In Table 2, Ω-R.ADABOOST unregularized ("∅") is virtually always beaten by its SLOPE regularized version. Second, Ω-R.ADABOOST is able to obtain both very sparse and accurate classifiers (Magic, Hardware, Marketing, Kaggle). Third, Ω-R.ADABOOST competes or beats ADABOOST on all domains, and is all the better as the domain gets bigger. Even qualitatively as seen in Table 2, the best result obtained by ADABOOST (regularized or not) does not manage to beat any of the regularized versions of Ω-R.ADABOOST on the majority of the domains. Fourth, it is important to have several choices of regularizers at hand. On domain Statlog, the difference in test error between the worst and the best regularization of Ω-R.ADABOOST exceeds 15%. Fifth, as already remarked [Nock et al., 2015], significantly subsampling rados (e.g. Marketing, Kaggle) still yields very accurate classifiers. Sixth, regularization in Ω-R.ADABOOST successfully reduces sparsity to learn more accurate classifiers on several domains (Spectf, Transfusion, Hill-noise, Winered, Magic, Marketing), achieving efficient adaptive sparsity control. Last, the comparatively extremely poor results of ADABOOST on the biggest domains seems to come from another advantage of rados that the theory developed so far does not take into account: on domains for which some features are significantly correlated with the class and for which we have a large number of examples, the concentration of the expected feature value in rados seems to provide leveraging coefficients that tend to have much larger (absolute) value than in ADABOOST, making the convergence of Ω-R.ADABOOST significantly faster than ADABOOST. 
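As a companion to this protocol, here is a small numpy sketch of how a random subset of rados can be drawn from a training fold, using the definition πσ = (1/2) · Σi (σi + yi) · xi from Section 3. The function names and the seeding policy are illustrative; they are not the code used for the experiments.

```python
import numpy as np

def rado(X, y, sigma):
    # One Rademacher observation: pi_sigma = (1/2) * sum_i (sigma_i + y_i) * x_i.
    # Only examples with sigma_i == y_i contribute, each with weight y_i.
    return 0.5 * ((sigma + y)[:, None] * X).sum(axis=0)

def sample_rados(X, y, n, seed=None):
    # Random subset of n rados, one independent uniform sigma in {-1,+1}^m per rado,
    # mirroring the subsampling policy described in the experimental setup above.
    rng = np.random.default_rng(seed)
    m, _ = X.shape
    sigmas = rng.choice(np.array([-1.0, 1.0]), size=(n, m))
    return np.stack([rado(X, y, s) for s in sigmas])

# Training fold (X, y) with y in {-1,+1}: keep n equal to the fold size on small domains,
# and cap at n = 10000 rados on the largest ones, as in the protocol above.
# pi = sample_rados(X, y, n=min(len(y), 10000), seed=0)
```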
For example, we have checked that it takes much more than the T = 1000 iterations for ADABOOST to start converging to the results of regularized Ω-R.ADABOOST on Hardware or Kaggle. 6 Conclusion We have shown that the two recently established equivalences between example and rado losses can be unified and generalized via a principled representation of a loss function in a two-player zero-sum game. Furthermore, we have shown that this equivalence extends to regularized losses, where the regularization in the rado loss is performed over the rados themselves with Minkowski sums. Our theory and experiments on Ω-R.ADABOOST with prominent regularizers (including ridge, lasso, `∞, SLOPE) indicate that when such a simple regularized form of the rado loss is available, it may help to devise accurate and efficient workarounds to boost a regularized loss over examples via the rado loss, even when the regularizer is significantly more involved, as it is e.g. for group norms [Bach et al., 2011]. Acknowledgments Thanks are due to Stephen Hardy and Giorgio Patrini for stimulating discussions around this material.
1. What are the main contributions and extensions provided by the paper regarding Rademacher observations? 2. How does the reviewer assess the quality and significance of the theoretical framework and its connections to previous works? 3. What are the strengths and weaknesses of the proposed boosting algorithm, particularly in terms of computational efficiency and practical importance? 4. Do you have any concerns about the influence and impact of the work on other researchers and fields?
Review
Review The authors extend and generalize previous work on learning with Rademacher observations. In particular: - The authors provide a unified theoretical framework that shows that a number of example losses have equivalent Rademacher observation losses. This extends previous work which presented equivalents for the square and logistic losses. - The authors prove that a regularizer in the example loss can be transferred to the Rado loss. - Based on the above, the authors propose a novel boosting algorithm that minimizes a regularized Rademacher observation loss, showing that learning is computationally efficient. - The authors also provide empirical evidence of the potential practical importance of the proposed boosting algorithm. The presented work is of very high quality and definitely has a place at NIPS. As indicated by my scoring, my only qualm would be with respect to its potential impact or usefulness. Looking at the excellent work of Nock or Patrini, it does not seem to have influenced the work of many other researchers in ensemble learning or other fields. As such, I am not entirely positive that the presented work would be of interest to a large number of attendees; the practical impact of the work is also not clear, though the empirical results (on UCI datasets) would seem to indicate some practical importance. If the authors could point out some citations showing the impact or general usefulness of the work (or similar work, e.g. Nock's work), it would be helpful.
NIPS
Title On Regularizing Rademacher Observation Losses Abstract It has recently been shown that supervised learning linear classifiers with two of the most popular losses, the logistic and square loss, is equivalent to optimizing an equivalent loss over sufficient statistics about the class: Rademacher observations (rados). It has also been shown that learning over rados brings solutions to two prominent problems for which the state of the art of learning from examples can be comparatively inferior and in fact less convenient: (i) protecting and learning from private examples, (ii) learning from distributed datasets without entity resolution. Bis repetita placent: the two proofs of equivalence are different and rely on specific properties of the corresponding losses, so whether these can be unified and generalized inevitably comes to mind. This is our first contribution: we show how they can be fit into the same theory for the equivalence between example and rado losses. As a second contribution, we show that the generalization unveils a surprising new connection to regularized learning, and in particular a sufficient condition under which regularizing the loss over examples is equivalent to regularizing the rados (i.e. the data) in the equivalent rado loss, in such a way that an efficient algorithm for one regularized rado loss may be as efficient when changing the regularizer. This is our third contribution: we give a formal boosting algorithm for the regularized exponential rado-loss which boost with any of the ridge, lasso, SLOPE, `∞, or elastic net regularizer, using the same master routine for all. Because the regularized exponential rado-loss is the equivalent of the regularized logistic loss over examples we obtain the first efficient proxy to the minimization of the regularized logistic loss over examples using such a wide spectrum of regularizers. Experiments with a readily available code display that regularization significantly improves rado-based learning and compares favourably with example-based learning. 1 Introduction What kind of data should we use to train a supervised learner ? A recent result has shown that minimising the popular logistic loss over examples with linear classifiers (in supervised learning) is equivalent to the minimisation of the exponential loss over sufficient statistics about the class known as Rademacher observations (rados, [Nock et al., 2015]), for the same classifier. In short, we fit a classifier over data that is different from examples, and the same classifier generalizes well to new observations. It has been shown that rados offer solutions for two problems for which the state of the art involving examples can be comparatively significantly inferior: • protection of the examples’ privacy from various algebraic, geometric, statistical and computational standpoints, and learning from private data [Nock et al., 2015]; • learning from a large number of distributed datasets without having to perform entity resolution between datasets [Patrini et al., 2016]. Quite remarkably, the training time of the algorithms involved can be smaller than it would be on examples, by orders of magnitude [Patrini et al., 2016]. Two key problems remain however: the 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. 
accuracy of learning from rados can compete experimentally with that of learning from examples, yet there is a gap to reduce for rados to be not just a good material to learn from in a privacy/distributed setting, but also a serious alternative to learning from examples at large, yielding new avenues to supervised learning. Second, theoretically speaking, it is now known that two widely popular losses over examples admit an equivalent loss in the rado world: the logistic loss and the square loss [Nock et al., 2015, Patrini et al., 2016]. This inevitably suggests that this property may hold for more losses, yet barely anything displays patterns of generalizability in the existing proofs. Our contributions: in this paper, we provide answers to these two questions, with three main contributions. Our first contribution is to show that this generalization indeed holds: other example losses admit equivalent losses in the rado world, meaning in particular that their minimiser classifier is the same, regardless of the dataset of examples. The technique we use exploits a two-player zero sum game representation of convex losses, that has been very useful to analyse boosting algorithms [Schapire, 2003, Telgarsky, 2012], with one key difference: payoffs are non-linear convex, eventually non-differentiable. These also resemble the entropic dual losses [Reid et al., 2015], with the difference that we do not enforce conjugacy over the simplex. The conditions of the game are slightly different for examples and rados. We provide necessary and sufficient conditions for the resulting losses over examples and rados to be equivalent. Informally, equivalence happens iff the convex functions of the games satisfy a symmetry relationship and the weights satisfy a linear system of equations. Some popular losses fit in the equivalence [Nair and Hinton, 2010, Gentile and Warmuth, 1998, Nock and Nielsen, 2008, Telgarsky, 2012, Vapnik, 1998, van Rooyen et al., 2015]. Our second contribution came unexpectedly through this equivalence. Regularizing a loss is standard in machine learning [Bach et al., 2011]. We show a sufficient condition for the equivalence under which regularizing the example loss is equivalent to regularizing the rados in the equivalent rado loss, i.e. making a Minkowski sum of the rado set with a classifier-based set. This property is independent of the regularizer, and incidentally happens to hold for all our cases of equivalence (Cf first contribution). A regularizer added to a loss over examples thus transfers to data in the rado world, in essentially the same way for all regularizers, and if one can solve the non-trivial computational and optimization problem that poses this data modification for one regularized rado loss, then, basically, "A good optimization algorithm for this regularized rado loss may fit to other regularizers as well” Our third contribution exemplifies this. We propose an iterative boosting algorithm, Ω-R.ADABOOST, that learns a classifier from rados using the exponential regularized rado loss, with regularization choice belonging to the ridge, lasso, `∞, or the recently coined SLOPE [Bogdan et al., 2015]. Since rado regularization would theoretically require to modify data at each iteration, such schemes are computationally non-trivial. We show that this modification can in fact be bypassed for the exponential rado loss, and the algorithm, Ω-R.ADABOOST, is as fast as ADABOOST. 
Ω-R.ADABOOST has however a key advantage over ADABOOST that to our knowledge is new in the boosting world: for any of these four regularizers, Ω-R.ADABOOST is a boosting algorithm — thus, because of the equivalence between the minimization of the logistic loss over examples and the minimization of the exponential rado loss, Ω-R.ADABOOST is in fact an efficient proxy to boost the regularized logistic loss over examples using whichever of the four regularizers, and by extension, linear combination of them (e.g., elastic net regularization [Zou and Hastie, 2005]). We are not aware of any regularized logistic loss formal boosting algorithm with such a wide spectrum of regularizers. Extensive experiments validate this property: Ω-R.ADABOOST is all the better vs ADABOOST (unregularized or regularized) as the domain gets larger, and is able to rapidly learn both accurate and sparse classifiers, making it an especially good contender for supervised learning at large on big domains. The rest of this paper is as follows. Sections §2, 3 and 4 respectively present the equivalence between example and rado losses, its extension to regularized learning and Ω-R.ADABOOST. §5 and 6 respectively present experiments, and conclude. In order not to laden the paper’s body, a Supplementary Material (SM) contains the proofs and additional theoretical and experimental results. 2 Games and equivalent example/rado losses To avoid notational load, we briefly present our learning setting to point the key quantity in our formulation of the general two players game. Let [m] .= {1, 2, ...,m} and Σm . = {−1, 1}m, for m > 0. The classical (batch) supervised learner is example-based: it is given a set of examples S = {(xi, yi), i ∈ [m]} where xi ∈ Rd, yi ∈ Σ1, ∀i ∈ [m]. It returns a classifier h : Rd → R from a predefined set H. Let zi(h) . = yh(xi) and abbreviate z(h) by z for short. The learner fits h to the minimization of a loss. Table 1, column `e, presents some losses that can be used: we remark that h appears only through z, so let us consider in this section that the learner rather fits vector z ∈ Rm. We can now define our two players game setting. Let ϕe : R→ R and ϕr : R→ R two convex and lower-semicontinuous generators. We define functionsLe : Rm×Rm → R andLr : R2 m×Rm → R: Le(p, z) . = ∑ i∈[m] pizi + µe ∑ i∈[m] ϕe(pi) , (1) Lr(q, z) . = ∑ I⊆[m] qI ∑ i∈I zi + µr ∑ I⊆[m] ϕr(qI) , (2) where µe,µr > 0 do not depend on z. For the notation to be meaningful, the coordinates in q are assumed (wlog) to be in bijection with 2[m]. The dependence of both problems in their respective generators is implicit and shall be clear from context. The adversary’s goal is to fit p∗(z) . = arg min p∈Rm Le(p, z) , (3) q∗(z) . = arg min q∈H2m Lr(q, z) , (4) with H2m .= {q ∈ R2m : 1>q = 1}, so as to attain Le(z) . = Le(p ∗(z), z) , (5) Lr(z) . = Lr(q ∗(z), z) , (6) and let ∂Le(z) and ∂Lr(z) denote their subdifferentials. We view the learner’s task as the problem of maximising the corresponding problems in eq. (5) (with examples; this is already sketched above) or (6) (with what we shall call Rademacher observations, or rados), or equivalently minimising negative the corresponding function, and then resort to a loss function. The question of when these two problems are equivalent from the learner’s standpoint motivates the following definition. Definition 1 Two generators ϕe, ϕr are said proportionate iff ∀m > 0, there exists (µe,µr) such that Le(z) = Lr(z) + b , ∀z ∈ Rm . (7) (b does not depend on z) ∀m ∈ N∗, let Gm . 
= [ 0>2m−1 1 > 2m−1 Gm−1 Gm−1 ] (∈ {0, 1}m×2 m ) (8) if m > 1, and G1 . = [0 1] otherwise (notation zd indicates a vector in Rd). Theorem 2 ϕe, ϕr are proportionate iff the optima p∗(z) and q∗(z) to eqs (3) and (4) satisfy: p∗(z) ∈ ∂Lr(z) , (9) Gmq ∗(z) ∈ ∂Le(z) . (10) If ϕe, ϕr are differentiable and strictly convex, they are proportionate iff p∗(z) = Gmq∗(z). We can alleviate the fact that convexity is strict, which results in a set-valued identity for ϕe, ϕr to be proportionate. This gives a necessary and sufficient condition for two generators to be proportionate. It does not say how to construct one from the other, if possible. We now show that it is indeed possible and prune the search space: if ϕe is proportionate to some ϕr, then it has to be a “symmetrized” version of ϕr, according to the following definition. Definition 3 Let ϕr s.t. domϕr ⊇ (0, 1). ϕs(r)(z) . = ϕr(z) + ϕr(1− z) is the symmetrisation of ϕr. Lemma 4 If ϕe and ϕr are proportionate, then ϕe(z) = (µr/µe) · ϕs(r)(z) + (b/µe) (b is in (7)). To summarize, ϕe and ϕr are proportionate iff (i) they meet the structural property that ϕe is (proportional to) the symmetrized version of ϕr (according to Definition 3), and (ii) the optimal solutions p∗(z) and q∗(z) to problems (1) and (2) satisfy the conditions of Theorem 2. Depending on the direction, we have two cases to craft proportionate generators. First, if we have ϕr, then necessarily ϕe ∝ ϕs(r) so we merely have to check Theorem 2. Second, if we have ϕe, then it matches Definition 31. In this case, we have to find ϕr = f + g where g(z) = −g(1− z) and ϕe(z) = f(z) + f(1− z). We now come back to Le(z), Lr(z) (Definition 1), and make the connection with example and rado losses. In the next definition, an e-loss `e(z) is a function defined over the coordinates of z, and a r-loss `r(z) is a function defined over the subsets of sums of coordinates. Functions can depend on other parameters as well. Definition 5 Suppose e-loss `e(z) and r-loss `r(z) are such that there exist (i) fe : R → R and fr(z) : R→ R both strictly increasing and such that ∀z ∈ Rm, −Le(z) = fe (`e(z)) , (11) −Lr(z) = fr (`r(z)) , (12) where Le(z) and Lr(z) are defined via two proportionate generators ϕe and ϕr (Definition 1). Then the couple (`e, `r) is called a couple of equivalent example-rado losses. Following is the main Theorem of this Section, which summarizes all the cases of equivalence between example and rado losses, and shows that the theory developed on example / rado losses with proportionate generators encompasses the specific proofs and cases already known [Nock et al., 2015, Patrini et al., 2016]. Table 1 also displays generator ϕr. Theorem 6 In each row of Table 1, `e(z,µe) and `r(z,µr) are equivalent for µe and µr as indicated. The proof (SM, Subsection 2.3) details for each case the proportionate generators ϕe and ϕr. 3 Learning with (rado) regularized losses We now detail further the learning setting. In the preceeding Section, we have definef zi(h) . = yh(xi), which we plug in the losses of Table 1 to obtain the corresponding example and rado losses. Losses simplify conveniently when H consists of linear classifiers, h(x) .= θ>x for some θ ∈ Θ ⊆ Rd. In this case, the example loss can be described using edge vectors Se . = {yi · xi, i = 1, 2, ...,m} since zi = θ >(yi ·xi), and the rado loss can be described using rademacher observations [Nock et al., 2015], since ∑ i∈I zi = θ >πσ for σi = yi iff i ∈ I (and −yi otherwise) and πσ . = (1/2) · ∑ i(σi + yi) ·xi. Let us define S∗r . 
= {πσ,σ ∈ Σm} the set of all rademacher observations. We rewrite any couple of equivalent example and rado losses as `e(Se,θ) and `r(S∗r ,θ) respectively 2, omitting parameters µe and µr, assumed to be fixed beforehand for the equivalence to hold (see Table 1). Let us regularize the example loss, so that the learner’s goal is to minimize `e(Se,θ,Ω) . = `e(Se,θ) + Ω(θ) , (13) 1Alternatively, −ϕe is permissible [Kearns and Mansour, 1999]. 2To prevent notational overload, we blend notions of (pointwise) loss and (samplewise) risk, as just “losses”. Algorithm 1 Ω-R.ADABOOST Input set of rados Sr . = {π1,π2, ...,πn}; T ∈ N∗; parameters γ ∈ (0, 1),ω ∈ R+; Step 1 : let θ0 ← 0, w0 ← (1/n)1 ; Step 2 : for t = 1, 2, ..., T Step 2.1 : call the weak learner: (ι(t), rt)← Ω-WL(Sr,wt,γ,ω,θt−1); Step 2.2 : compute update parameters αι(t) and δt (here, π∗k . = maxj |πjk|): αι(t) ← (1/(2π∗ι(t))) log((1 + rt)/(1− rt)) and δt ← ω · (Ω(θt)− Ω(θt−1)) ; (16) Step 2.3 : update and normalize weights: for j = 1, 2, ..., n, wtj ← w(t−1)j · exp ( −αtπjι(t) + δt ) /Zt ; (17) Return θT ; with Ω a regularizer [Bach et al., 2011]. The following shows that when fe in eq. (11) is linear, there is a rado-loss equivalent to this regularized loss, regardless of Ω. Theorem 7 Suppose H contains linear classifiers. Let (`e(Se,θ), `r(S∗r ,θ)) be any couple of equivalent example-rado losses such that fe in eq. (11) is linear: fe(z) = ae · z + be , (14) for some ae > 0, be ∈ R. Then for any regularizer Ω(.) (assuming wlog Ω(0) = 0), the regularized example loss `e(Se,θ,Ω) is equivalent to rado loss `r(S ∗,Ω,θ r ,θ) computed over regularized rados: S∗,Ω,θr . = S∗r ⊕ {−Ω̃(θ) · θ} , (15) Here, ⊕ is Minkowski sum and Ω̃(θ) .= ae · Ω(θ)/‖θ‖22 if θ 6= 0 (and 0 otherwise). Theorem 7 applies to all rado losses (I-IV) in Table 1. The effect of regularization on rados is intuitive from the margin standpoint: assume that a “good” classifier θ is one that ensures lowerbounded inner products θ>z ≥ τ for some margin threshold τ . Then any good classifier on a regularized rado πσ shall actually meet, over examples, ∑ i:yi=σi θ>(yi · xi) ≥ τ + ae · Ω(θ). This inequality ties an "accuracy" of θ (edges, left hand-side) and its sparsity (right-hand side). Clearly, Theorem 7 has an unfamiliar shape since regularisation modifies data in the rado world: a different θ, or a different Ω, yields a different S∗,Ω,θr , and therefore it may seem very tricky to minimize such a regularized loss. Even more, iterative algorithms like boosting algorithms look at first glance a poor choice, since any update on θ implies an update on the rados as well. What we show in the following Section is essentially the opposite for the exponential rado loss, and a generalization of the RADOBOOST algorithm of Nock et al. [2015], which does not modify rados, is a formal boosting algorithm for a broad set of regularizers. Also, remarkably, only the high-level code of the weak learner depends on the regularizer; that of the strong learner is not affected. 4 Boosting with (rado) regularized losses Ω-R.ADABOOST presents our approach to learning with rados regularized with regularizer Ω to minimise loss `expr (Sr,θ,Ω) in eq. (45). Classifier θt is defined as θt . = ∑t t′=1 αι(t′) · 1ι(t′), where 1k is the kth canonical basis vector. The expected edge rt used to compute αt in eq. (16) is based on the following basis assignation: rι(t) ← 1 π∗ι(t) n∑ j=1 wtjπjι(t) (∈ [−1, 1]) . (19) The computation of rt is eventually tweaked by the weak learner, as displayed in Algorithm ΩWL. 
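The data modification of Theorem 7 is short to write down. Below is a minimal numpy sketch, with illustrative names and ae set to 1 for simplicity, of how the regularized rados are formed and of the resulting shift in the inner products θ⊤π discussed in the margin remark above.

```python
import numpy as np

def regularized_rados(pi, theta, Omega, a_e=1.0):
    # Theorem 7: shift every rado by -Omega_tilde(theta) * theta, where
    # Omega_tilde(theta) = a_e * Omega(theta) / ||theta||_2^2 (and 0 when theta = 0).
    nrm2 = float(np.dot(theta, theta))
    shift = 0.0 if nrm2 == 0.0 else a_e * Omega(theta) / nrm2
    return pi - shift * theta   # Minkowski sum of the rado set with {-Omega_tilde(theta) * theta}

# Numerical check of the margin remark: theta . pi_reg = theta . pi - a_e * Omega(theta)
rng = np.random.default_rng(0)
pi = rng.normal(size=(5, 3))            # toy stand-ins for rados
theta = rng.normal(size=3)
Omega = lambda t: np.abs(t).sum()       # lasso, as one possible regularizer
pi_reg = regularized_rados(pi, theta, Omega)
assert np.allclose(pi_reg @ theta, pi @ theta - Omega(theta))
```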
We investigate four choices for Ω. For each of them, we prove the boosting ability of ΩR.ADABOOST (Γ is symmetric positive definite, Sd is the symmetric group of order d, |θ| is the Algorithm 2 Ω-WL, for Ω ∈ {‖.‖1, ‖.‖2Γ, ‖.‖∞, ‖.‖Φ} Input set of rados Sr . = {π1,π2, ...,πn}; weights w ∈ 4n; parameters γ ∈ (0, 1), ω ∈ R+; classifier θ ∈ Rd; Step 1 : pick weak feature ι∗ ∈ [d]; Optional — use preference order: ι ι′ ⇔ |rι| − δι ≥ |rι′ | − δι′ // δι . = ω · (Ω(θ + αι · 1ι)− Ω(θ)), rι is given in (19) and αι is given in (16) Step 2 : if Ω = ‖.‖2Γ then r∗ ← { rι∗ if rι∗ ∈ [−γ,γ] sign (rι∗) · γ otherwise ; (18) else r∗ ← rι∗ ; Return (ι∗, r∗); vector whose coordinates are the absolute values of the coordinates of θ): Ω(θ) = ‖θ‖1 . = |θ|>1 Lasso ‖θ‖2Γ . = θ>Γθ Ridge ‖θ‖∞ . = maxk |θk| `∞ ‖θ‖Φ . = maxM∈Sd(M|θ|)>ξ SLOPE (20) [Bach et al., 2011, Bogdan et al., 2015, Duchi and Singer, 2009, Su and Candès, 2015]. The coordinates of ξ in SLOPE are ξk . = Φ−1(1− kq/(2d)) where Φ−1(.) is the quantile of the standard normal distribution and q ∈ (0, 1); thus, the largest coordinates (in absolute value) of θ are more penalized. We now establish the boosting ability of Ω-R.ADABOOST. We give no direction for Step 1 in Ω-WL, which is consistent with the definition of a weak learner in the boosting theory: all we require from the weak learner is |r.| no smaller than some weak learning threshold γWL > 0. Definition 8 Fix any constant γWL ∈ (0, 1). Ω-WL is said to be a γWL-Weak Learner iff the feature ι(t) it picks at iteration t satisfies |rι(t)| ≥ γWL, for any t = 1, 2, ..., T . We also provide an optional step for the weak learner in Ω-WL, which we exploit in the experimentations, which gives a total preference order on features to optimise further Ω-R.ADABOOST. Theorem 9 (boosting with ridge). Take Ω(.) = ‖.‖2Γ. Fix any 0 < a < 1/5, and suppose that ω and the number of iterations T of Ω-R.ADABOOST are chosen so that ω < (2amin k max j π2jk)/(TλΓ) , (21) where λΓ > 0 is the largest eigenvalue of Γ. Then there exists some γ > 0 (depending on a, and given to Ω-WL) such that for any fixed 0 < γWL < γ, if Ω-WL is a γWL-Weak Learner, then Ω-R.ADABOOST returns at the end of the T boosting iterations a classifier θT which meets: `expr (Sr,θT , ‖.‖2Γ) ≤ exp(−aγ2WLT/2) . (22) Furthermore, if we fix a = 1/7, then we can fix γ = 0.98, and if a = 1/10, then we can fix γ = 0.999. Two remarks are in order. First, the cases a = 1/7, 1/10 show that Ω-WL can still obtain large edges in eq. (19), so even a “strong” weak learner might fit in for Ω-WL, without clamping edges. Second, the right-hand side of ineq. (21) may be very large if we consider that mink maxj π2jk may be proportional to m2. So the constraint onω is in fact loose. Theorem 10 (boosting with lasso or `∞). Take Ω(.) ∈ {‖.‖1, ‖.‖∞}. Suppose Ω-WL is a γWL-Weak Learner for some γWL > 0. Suppose ∃0 < a < 3/11 s. t. ω satisfies: ω = aγWL min k max j |πjk| . (23) Then Ω-R.ADABOOST returns at the end of the T boosting iterations a classifier θT which meets: `expr (Sr,θT ,Ω) ≤ exp(−T̃γ2WL/2) , (24) where T̃ = aγWLT if Ω = ‖.‖1, and T̃ = (T − T∗) + aγWL · T∗ if Ω = ‖.‖∞; T∗ is the number of iterations where the feature computing the `∞ norm was updated3. We finally investigate the SLOPE choice. The Theorem is proven for ω = 1 in Ω-R.ADABOOST, for two reasons: it matches the original definition [Bogdan et al., 2015] and furthermore it unveils an interesting connection between boosting and SLOPE properties. Theorem 11 (boosting with SLOPE). Take Ω(.) = ‖.‖Φ. Let a . 
= min{3γWL/11,Φ−1(1 − q/(2d))/mink maxj |πjk|}. Suppose wlog |θTk| ≥ |θT (k+1)|,∀k, and fix ω = 1. Suppose (i) Ω-WL is a γWL-Weak Learner for some γWL > 0, and (ii) the q-value is chosen to meet: q ≥ 2 ·max k {( 1− Φ ( 3γWL 11 ·max j |πjk| ))/( k d )} . Then classifier θT returned by Ω-R.ADABOOST at the end of the T boosting iterations satisfies: `expr (Sr,θT , ‖.‖Φ) ≤ exp(−aγ2WLT/2) . (25) Constraint (ii) on q is interesting in the light of the properties of SLOPE [Su and Candès, 2015]. Modulo some assumptions, SLOPE yields a control the false discovery rate (FDR) — i.e., negligible coefficients in the "true” linear model θ∗ that are found significant in the learned θ —. Constraint (ii) links the "small” achievable FDR (upperbounded by q) to the "boostability” of the data: the fact that each feature k can be chosen by the weak learner for a "large” γWL, or has maxj |πjk| large, precisely flags potential significant features, thus reducing the risk of sparsity errors, and allowing small q, which is constraint (ii). Using the second order approximation of normal quantiles [Su and Candès, 2015], a sufficient condition for (ii) is that, for some K > 0, γWL min j max j |πjk| ≥ K · √ log d+ log q−1 ; (26) but minj maxj |πjk| is proportional to m, so ineq. (26), and thus (ii), may hold even for small samples and q-values. An additional Theorem deferred to SM sor space considerations shows that for any applicable choice of regularization (eq. 20), the regularized log-loss of θT over examples enjoys with high probability a monotonically decreasing upperbound with T as: `loge (Se,θ,Ω) ≤ log 2− κ · T + τ(m), with τ(m)→ 0 when m→∞ (and τ does not depend on T ), and κ > 0 does not depend on T . Hence, Ω-R.ADABOOST is an efficient proxy to boost the regularized log-loss over examples, using whichever of the ridge, lasso, `∞ or SLOPE regularization — establishing the first boosting algorithm for this choice —, or linear combinations of the choices, e.g. for elastic nets. If we were to compare Theorems 9 – 11 (eqs (22, 24, 25)), then the convergence looks best for ridge (the unsigned exponent is Õ(γ2WL)) while it looks slightly worse for `∞ and SLOPE (the unsigned exponent is now Õ(γ3WL)), the lasso being in between. 5 Experiments We have implemented Ω-WL4 using the order suggested to retrieve the topmost feature in the order. Hence, the weak learner returns the feature maximising |rι| − δι. The rationale for this comes from the proofs of Theorems 9 — 11, showing that ∏ t exp(−(r2ι(t)/2− δι(t))) is an upperbound on the exponential regularized rado-loss. We do not clamp the weak learner for Ω(.) = ‖.‖2Γ, so the weak learner is restricted to Step 1 in Ω-WL5. The objective of these experiments is to evaluate Ω-R.ADABOOST as a contender for supervised learning per se. We compared Ω-R.ADABOOST to ADABOOST/`1 regularized-ADABOOST [Schapire and Singer, 1999, Xi et al., 2009]. All algorithms are run for a total of T = 1000 iterations, and at the end of the iterations, the classifier in the sequence that minimizes the empirical loss is kept. Notice therefore that rado-based classifiers are evaluated on the training set which computes the 3If several features match this criterion, T∗ is the total number of iterations for all these features. 4Code available at: http://users.cecs.anu.edu.au/∼rnock/ 5the values forω that we test, in {10−u, u ∈ {0, 1, 2, 3, 4, 5}}, are small with respect to the upperbound in ineq. (21) given the number of boosting steps (T = 1000), and would yield on most domains a maximal γ ≈ 1. 
rados. To obtain very sparse solutions for regularized-ADABOOST, we pick its ω (β in [Xi et al., 2009]) in {10−4, 1, 104}. The complete results aggregate experiments on twenty (20) domains, all but one coming from the UCI [Bache and Lichman, 2013] (plus the Kaggle competition domain “Give me some credit”), with up to d =500+ features and m =100 000+ examples. Two tables, in the SM (Tables 1 and 2 in Section 3) report respectively the test errors and sparsity of classifiers, whose summary is given here in Table 2. The experimental setup is a ten-folds stratified cross validation for all algorithms and each domain. ADABOOST/regularized-ADABOOST is trained using the complete training fold. When the domain size m ≤ 40000, the number of rados n used for Ω-R.ADABOOST is a random subset of rados of size equal to that of the training fold. When the domain size exceeds 40000, a random set of n = 10000 rados is computed from the training fold. Thus, (i) there is no optimisation of the examples chosen to compute rados, (ii) we always keep a very small number of rados compared to the maximum available, and (iii) when the domain size gets large, we keep a comparatively tiny number of rados. Hence, the performances of Ω-R.ADABOOST do not stem from any optimization in the choice or size of the rado sample. Ada ∅ ‖.‖2Id ‖.‖1 ‖.‖∞ ‖.‖Φ Ada 11 10 10 8 9 ∅ 9 3 3 2 1 ‖.‖2Id 10 17 11 9 7 ‖.‖1 10 17 7 7 4 ‖.‖∞ 11 18 9 9 8 ‖.‖Φ 10 19 10 10 11 Table 2: Number of domains for which algorithm in row beats algorithm in column (Ada = best result of ADABOOST, ∅ = Ω-R.ADABOOST not regularized, see text). Experiments support several key observations. First, regularization consistently reduces the test error of Ω-R.ADABOOST, by more than 15% on Magic, and 20% on Kaggle. In Table 2, Ω-R.ADABOOST unregularized ("∅") is virtually always beaten by its SLOPE regularized version. Second, Ω-R.ADABOOST is able to obtain both very sparse and accurate classifiers (Magic, Hardware, Marketing, Kaggle). Third, Ω-R.ADABOOST competes or beats ADABOOST on all domains, and is all the better as the domain gets bigger. Even qualitatively as seen in Table 2, the best result obtained by ADABOOST (regularized or not) does not manage to beat any of the regularized versions of Ω-R.ADABOOST on the majority of the domains. Fourth, it is important to have several choices of regularizers at hand. On domain Statlog, the difference in test error between the worst and the best regularization of Ω-R.ADABOOST exceeds 15%. Fifth, as already remarked [Nock et al., 2015], significantly subsampling rados (e.g. Marketing, Kaggle) still yields very accurate classifiers. Sixth, regularization in Ω-R.ADABOOST successfully reduces sparsity to learn more accurate classifiers on several domains (Spectf, Transfusion, Hill-noise, Winered, Magic, Marketing), achieving efficient adaptive sparsity control. Last, the comparatively extremely poor results of ADABOOST on the biggest domains seems to come from another advantage of rados that the theory developed so far does not take into account: on domains for which some features are significantly correlated with the class and for which we have a large number of examples, the concentration of the expected feature value in rados seems to provide leveraging coefficients that tend to have much larger (absolute) value than in ADABOOST, making the convergence of Ω-R.ADABOOST significantly faster than ADABOOST. 
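For reference, the regularizers of eq. (20) that are swept in these experiments can be evaluated as follows; this is a sketch with illustrative names, in which scipy's norm.ppf plays the role of Φ−1 in the SLOPE sequence ξ, and the maximum over permutation matrices is realized by sorting |θ| in decreasing order.

```python
import numpy as np
from scipy.stats import norm

def lasso(theta):
    return float(np.abs(theta).sum())

def ridge(theta, Gamma):
    return float(theta @ Gamma @ theta)

def l_inf(theta):
    return float(np.abs(theta).max())

def slope(theta, q):
    # SLOPE norm of eq. (20): xi_k = Phi^{-1}(1 - k q / (2d)) is decreasing in k,
    # so the max over permutation matrices is attained by sorting |theta| decreasingly.
    d = len(theta)
    xi = norm.ppf(1.0 - np.arange(1, d + 1) * q / (2.0 * d))
    return float(np.dot(np.sort(np.abs(theta))[::-1], xi))

# Example: penalties paid by a candidate classifier in the weak learner's scoring
# theta = np.array([0.7, -0.2, 0.0]); print(lasso(theta), l_inf(theta), slope(theta, q=0.05))
```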
For example, we have checked that it takes much more than the T = 1000 iterations for ADABOOST to start converging to the results of regularized Ω-R.ADABOOST on Hardware or Kaggle. 6 Conclusion We have shown that the recent equivalences between two example and rado losses can be unified and generalized via a principled representation of a loss function in a two-player zero-sum game. Furthermore, we have shown that this equivalence extends to regularized losses, where the regularization in the rado loss is performed over the rados themselves with Minkowski sums. Our theory and experiments on Ω-R.ADABOOST with prominent regularizers (including ridge, lasso, `∞, SLOPE) indicate that when such a simple regularized form of the rado loss is available, it may help to devise accurate and efficient workarounds to boost a regularized loss over examples via the rado loss, even when the regularizer is significantly more involved like e.g. for group norms [Bach et al., 2011]. Acknowledgments Thanks are due to Stephen Hardy and Giorgio Patrini for stimulating discussions around this material.
1. What is the main contribution of the paper in terms of unifying and generalizing losses? 2. What are the strengths of the proposed approach regarding regularization and boosting algorithms? 3. What are the weaknesses of the paper regarding experimental results and conclusions? 4. How does the reviewer assess the significance of the proposed approach compared to prior works? 5. Are there any typos or unclear statements in the review that need clarification?
Review
Review This paper showed that the equivalences between two example and rado losses can be unified and generalized. Moreover, this equivalence can be extended to regularized losses. A sufficient condition is introduced to guide when and how regularizing the loss over examples is equivalent to regularizing the rados in the equivalent rado loss. A formal boosting algorithm for the regularized exponential rado-loss is proposed to boost with any of the ridge, lasso, SLOPE, `∞, or elastic net regularizer using the same master routine. Experiments demonstrated that regularization improves rado-based learning. Pro: - It was reasonably proved that the regularized exponential rado-loss is the equivalent of the regularized logistic loss over examples. - A formal boosting algorithm is proposed for the regularized exponential rado-loss. Experimental results demonstrated its advantages over the versions without regularization. Con: - While experimental results have clearly shown that the regularized versions can achieve better results than the non-regularized version, the current results are not enough to confirm the conclusion that Ω-R.AdaBoost is better than AdaBoost. More experiments are preferred here. - In bigger domains, Ω-R.AdaBoost is much better than AdaBoost. The former processed data randomly selected from the whole set of samples, while the latter processed all samples. It would be interesting to see what happens when the latter also processes only data randomly selected from the whole set of samples: in this case, is the former still much better? - typo: line 8: “bis repetita placent” ?? line 32: comparatively significantly inferior??? ……
NIPS
Title On Regularizing Rademacher Observation Losses Abstract It has recently been shown that supervised learning linear classifiers with two of the most popular losses, the logistic and square loss, is equivalent to optimizing an equivalent loss over sufficient statistics about the class: Rademacher observations (rados). It has also been shown that learning over rados brings solutions to two prominent problems for which the state of the art of learning from examples can be comparatively inferior and in fact less convenient: (i) protecting and learning from private examples, (ii) learning from distributed datasets without entity resolution. Bis repetita placent: the two proofs of equivalence are different and rely on specific properties of the corresponding losses, so whether these can be unified and generalized inevitably comes to mind. This is our first contribution: we show how they can be fit into the same theory for the equivalence between example and rado losses. As a second contribution, we show that the generalization unveils a surprising new connection to regularized learning, and in particular a sufficient condition under which regularizing the loss over examples is equivalent to regularizing the rados (i.e. the data) in the equivalent rado loss, in such a way that an efficient algorithm for one regularized rado loss may be as efficient when changing the regularizer. This is our third contribution: we give a formal boosting algorithm for the regularized exponential rado-loss which boost with any of the ridge, lasso, SLOPE, `∞, or elastic net regularizer, using the same master routine for all. Because the regularized exponential rado-loss is the equivalent of the regularized logistic loss over examples we obtain the first efficient proxy to the minimization of the regularized logistic loss over examples using such a wide spectrum of regularizers. Experiments with a readily available code display that regularization significantly improves rado-based learning and compares favourably with example-based learning. 1 Introduction What kind of data should we use to train a supervised learner ? A recent result has shown that minimising the popular logistic loss over examples with linear classifiers (in supervised learning) is equivalent to the minimisation of the exponential loss over sufficient statistics about the class known as Rademacher observations (rados, [Nock et al., 2015]), for the same classifier. In short, we fit a classifier over data that is different from examples, and the same classifier generalizes well to new observations. It has been shown that rados offer solutions for two problems for which the state of the art involving examples can be comparatively significantly inferior: • protection of the examples’ privacy from various algebraic, geometric, statistical and computational standpoints, and learning from private data [Nock et al., 2015]; • learning from a large number of distributed datasets without having to perform entity resolution between datasets [Patrini et al., 2016]. Quite remarkably, the training time of the algorithms involved can be smaller than it would be on examples, by orders of magnitude [Patrini et al., 2016]. Two key problems remain however: the 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. 
accuracy of learning from rados can compete experimentally with that of learning from examples, yet there is a gap to reduce for rados to be not just a good material to learn from in a privacy/distributed setting, but also a serious alternative to learning from examples at large, yielding new avenues to supervised learning. Second, theoretically speaking, it is now known that two widely popular losses over examples admit an equivalent loss in the rado world: the logistic loss and the square loss [Nock et al., 2015, Patrini et al., 2016]. This inevitably suggests that this property may hold for more losses, yet barely anything displays patterns of generalizability in the existing proofs. Our contributions: in this paper, we provide answers to these two questions, with three main contributions. Our first contribution is to show that this generalization indeed holds: other example losses admit equivalent losses in the rado world, meaning in particular that their minimiser classifier is the same, regardless of the dataset of examples. The technique we use exploits a two-player zero sum game representation of convex losses, that has been very useful to analyse boosting algorithms [Schapire, 2003, Telgarsky, 2012], with one key difference: payoffs are non-linear convex, eventually non-differentiable. These also resemble the entropic dual losses [Reid et al., 2015], with the difference that we do not enforce conjugacy over the simplex. The conditions of the game are slightly different for examples and rados. We provide necessary and sufficient conditions for the resulting losses over examples and rados to be equivalent. Informally, equivalence happens iff the convex functions of the games satisfy a symmetry relationship and the weights satisfy a linear system of equations. Some popular losses fit in the equivalence [Nair and Hinton, 2010, Gentile and Warmuth, 1998, Nock and Nielsen, 2008, Telgarsky, 2012, Vapnik, 1998, van Rooyen et al., 2015]. Our second contribution came unexpectedly through this equivalence. Regularizing a loss is standard in machine learning [Bach et al., 2011]. We show a sufficient condition for the equivalence under which regularizing the example loss is equivalent to regularizing the rados in the equivalent rado loss, i.e. making a Minkowski sum of the rado set with a classifier-based set. This property is independent of the regularizer, and incidentally happens to hold for all our cases of equivalence (Cf first contribution). A regularizer added to a loss over examples thus transfers to data in the rado world, in essentially the same way for all regularizers, and if one can solve the non-trivial computational and optimization problem that poses this data modification for one regularized rado loss, then, basically, "A good optimization algorithm for this regularized rado loss may fit to other regularizers as well” Our third contribution exemplifies this. We propose an iterative boosting algorithm, Ω-R.ADABOOST, that learns a classifier from rados using the exponential regularized rado loss, with regularization choice belonging to the ridge, lasso, `∞, or the recently coined SLOPE [Bogdan et al., 2015]. Since rado regularization would theoretically require to modify data at each iteration, such schemes are computationally non-trivial. We show that this modification can in fact be bypassed for the exponential rado loss, and the algorithm, Ω-R.ADABOOST, is as fast as ADABOOST. 
Ω-R.ADABOOST has however a key advantage over ADABOOST that to our knowledge is new in the boosting world: for any of these four regularizers, Ω-R.ADABOOST is a boosting algorithm — thus, because of the equivalence between the minimization of the logistic loss over examples and the minimization of the exponential rado loss, Ω-R.ADABOOST is in fact an efficient proxy to boost the regularized logistic loss over examples using whichever of the four regularizers, and by extension, linear combination of them (e.g., elastic net regularization [Zou and Hastie, 2005]). We are not aware of any regularized logistic loss formal boosting algorithm with such a wide spectrum of regularizers. Extensive experiments validate this property: Ω-R.ADABOOST is all the better vs ADABOOST (unregularized or regularized) as the domain gets larger, and is able to rapidly learn both accurate and sparse classifiers, making it an especially good contender for supervised learning at large on big domains. The rest of this paper is as follows. Sections §2, 3 and 4 respectively present the equivalence between example and rado losses, its extension to regularized learning and Ω-R.ADABOOST. §5 and 6 respectively present experiments, and conclude. In order not to laden the paper’s body, a Supplementary Material (SM) contains the proofs and additional theoretical and experimental results. 2 Games and equivalent example/rado losses To avoid notational load, we briefly present our learning setting to point the key quantity in our formulation of the general two players game. Let [m] .= {1, 2, ...,m} and Σm . = {−1, 1}m, for m > 0. The classical (batch) supervised learner is example-based: it is given a set of examples S = {(xi, yi), i ∈ [m]} where xi ∈ Rd, yi ∈ Σ1, ∀i ∈ [m]. It returns a classifier h : Rd → R from a predefined set H. Let zi(h) . = yh(xi) and abbreviate z(h) by z for short. The learner fits h to the minimization of a loss. Table 1, column `e, presents some losses that can be used: we remark that h appears only through z, so let us consider in this section that the learner rather fits vector z ∈ Rm. We can now define our two players game setting. Let ϕe : R→ R and ϕr : R→ R two convex and lower-semicontinuous generators. We define functionsLe : Rm×Rm → R andLr : R2 m×Rm → R: Le(p, z) . = ∑ i∈[m] pizi + µe ∑ i∈[m] ϕe(pi) , (1) Lr(q, z) . = ∑ I⊆[m] qI ∑ i∈I zi + µr ∑ I⊆[m] ϕr(qI) , (2) where µe,µr > 0 do not depend on z. For the notation to be meaningful, the coordinates in q are assumed (wlog) to be in bijection with 2[m]. The dependence of both problems in their respective generators is implicit and shall be clear from context. The adversary’s goal is to fit p∗(z) . = arg min p∈Rm Le(p, z) , (3) q∗(z) . = arg min q∈H2m Lr(q, z) , (4) with H2m .= {q ∈ R2m : 1>q = 1}, so as to attain Le(z) . = Le(p ∗(z), z) , (5) Lr(z) . = Lr(q ∗(z), z) , (6) and let ∂Le(z) and ∂Lr(z) denote their subdifferentials. We view the learner’s task as the problem of maximising the corresponding problems in eq. (5) (with examples; this is already sketched above) or (6) (with what we shall call Rademacher observations, or rados), or equivalently minimising negative the corresponding function, and then resort to a loss function. The question of when these two problems are equivalent from the learner’s standpoint motivates the following definition. Definition 1 Two generators ϕe, ϕr are said proportionate iff ∀m > 0, there exists (µe,µr) such that Le(z) = Lr(z) + b , ∀z ∈ Rm . (7) (b does not depend on z) ∀m ∈ N∗, let Gm . 
= [ 0>2m−1 1 > 2m−1 Gm−1 Gm−1 ] (∈ {0, 1}m×2 m ) (8) if m > 1, and G1 . = [0 1] otherwise (notation zd indicates a vector in Rd). Theorem 2 ϕe, ϕr are proportionate iff the optima p∗(z) and q∗(z) to eqs (3) and (4) satisfy: p∗(z) ∈ ∂Lr(z) , (9) Gmq ∗(z) ∈ ∂Le(z) . (10) If ϕe, ϕr are differentiable and strictly convex, they are proportionate iff p∗(z) = Gmq∗(z). We can alleviate the fact that convexity is strict, which results in a set-valued identity for ϕe, ϕr to be proportionate. This gives a necessary and sufficient condition for two generators to be proportionate. It does not say how to construct one from the other, if possible. We now show that it is indeed possible and prune the search space: if ϕe is proportionate to some ϕr, then it has to be a “symmetrized” version of ϕr, according to the following definition. Definition 3 Let ϕr s.t. domϕr ⊇ (0, 1). ϕs(r)(z) . = ϕr(z) + ϕr(1− z) is the symmetrisation of ϕr. Lemma 4 If ϕe and ϕr are proportionate, then ϕe(z) = (µr/µe) · ϕs(r)(z) + (b/µe) (b is in (7)). To summarize, ϕe and ϕr are proportionate iff (i) they meet the structural property that ϕe is (proportional to) the symmetrized version of ϕr (according to Definition 3), and (ii) the optimal solutions p∗(z) and q∗(z) to problems (1) and (2) satisfy the conditions of Theorem 2. Depending on the direction, we have two cases to craft proportionate generators. First, if we have ϕr, then necessarily ϕe ∝ ϕs(r) so we merely have to check Theorem 2. Second, if we have ϕe, then it matches Definition 31. In this case, we have to find ϕr = f + g where g(z) = −g(1− z) and ϕe(z) = f(z) + f(1− z). We now come back to Le(z), Lr(z) (Definition 1), and make the connection with example and rado losses. In the next definition, an e-loss `e(z) is a function defined over the coordinates of z, and a r-loss `r(z) is a function defined over the subsets of sums of coordinates. Functions can depend on other parameters as well. Definition 5 Suppose e-loss `e(z) and r-loss `r(z) are such that there exist (i) fe : R → R and fr(z) : R→ R both strictly increasing and such that ∀z ∈ Rm, −Le(z) = fe (`e(z)) , (11) −Lr(z) = fr (`r(z)) , (12) where Le(z) and Lr(z) are defined via two proportionate generators ϕe and ϕr (Definition 1). Then the couple (`e, `r) is called a couple of equivalent example-rado losses. Following is the main Theorem of this Section, which summarizes all the cases of equivalence between example and rado losses, and shows that the theory developed on example / rado losses with proportionate generators encompasses the specific proofs and cases already known [Nock et al., 2015, Patrini et al., 2016]. Table 1 also displays generator ϕr. Theorem 6 In each row of Table 1, `e(z,µe) and `r(z,µr) are equivalent for µe and µr as indicated. The proof (SM, Subsection 2.3) details for each case the proportionate generators ϕe and ϕr. 3 Learning with (rado) regularized losses We now detail further the learning setting. In the preceeding Section, we have definef zi(h) . = yh(xi), which we plug in the losses of Table 1 to obtain the corresponding example and rado losses. Losses simplify conveniently when H consists of linear classifiers, h(x) .= θ>x for some θ ∈ Θ ⊆ Rd. In this case, the example loss can be described using edge vectors Se . = {yi · xi, i = 1, 2, ...,m} since zi = θ >(yi ·xi), and the rado loss can be described using rademacher observations [Nock et al., 2015], since ∑ i∈I zi = θ >πσ for σi = yi iff i ∈ I (and −yi otherwise) and πσ . = (1/2) · ∑ i(σi + yi) ·xi. Let us define S∗r . 
= {πσ,σ ∈ Σm} the set of all rademacher observations. We rewrite any couple of equivalent example and rado losses as `e(Se,θ) and `r(S∗r ,θ) respectively 2, omitting parameters µe and µr, assumed to be fixed beforehand for the equivalence to hold (see Table 1). Let us regularize the example loss, so that the learner’s goal is to minimize `e(Se,θ,Ω) . = `e(Se,θ) + Ω(θ) , (13) 1Alternatively, −ϕe is permissible [Kearns and Mansour, 1999]. 2To prevent notational overload, we blend notions of (pointwise) loss and (samplewise) risk, as just “losses”. Algorithm 1 Ω-R.ADABOOST Input set of rados Sr . = {π1,π2, ...,πn}; T ∈ N∗; parameters γ ∈ (0, 1),ω ∈ R+; Step 1 : let θ0 ← 0, w0 ← (1/n)1 ; Step 2 : for t = 1, 2, ..., T Step 2.1 : call the weak learner: (ι(t), rt)← Ω-WL(Sr,wt,γ,ω,θt−1); Step 2.2 : compute update parameters αι(t) and δt (here, π∗k . = maxj |πjk|): αι(t) ← (1/(2π∗ι(t))) log((1 + rt)/(1− rt)) and δt ← ω · (Ω(θt)− Ω(θt−1)) ; (16) Step 2.3 : update and normalize weights: for j = 1, 2, ..., n, wtj ← w(t−1)j · exp ( −αtπjι(t) + δt ) /Zt ; (17) Return θT ; with Ω a regularizer [Bach et al., 2011]. The following shows that when fe in eq. (11) is linear, there is a rado-loss equivalent to this regularized loss, regardless of Ω. Theorem 7 Suppose H contains linear classifiers. Let (`e(Se,θ), `r(S∗r ,θ)) be any couple of equivalent example-rado losses such that fe in eq. (11) is linear: fe(z) = ae · z + be , (14) for some ae > 0, be ∈ R. Then for any regularizer Ω(.) (assuming wlog Ω(0) = 0), the regularized example loss `e(Se,θ,Ω) is equivalent to rado loss `r(S ∗,Ω,θ r ,θ) computed over regularized rados: S∗,Ω,θr . = S∗r ⊕ {−Ω̃(θ) · θ} , (15) Here, ⊕ is Minkowski sum and Ω̃(θ) .= ae · Ω(θ)/‖θ‖22 if θ 6= 0 (and 0 otherwise). Theorem 7 applies to all rado losses (I-IV) in Table 1. The effect of regularization on rados is intuitive from the margin standpoint: assume that a “good” classifier θ is one that ensures lowerbounded inner products θ>z ≥ τ for some margin threshold τ . Then any good classifier on a regularized rado πσ shall actually meet, over examples, ∑ i:yi=σi θ>(yi · xi) ≥ τ + ae · Ω(θ). This inequality ties an "accuracy" of θ (edges, left hand-side) and its sparsity (right-hand side). Clearly, Theorem 7 has an unfamiliar shape since regularisation modifies data in the rado world: a different θ, or a different Ω, yields a different S∗,Ω,θr , and therefore it may seem very tricky to minimize such a regularized loss. Even more, iterative algorithms like boosting algorithms look at first glance a poor choice, since any update on θ implies an update on the rados as well. What we show in the following Section is essentially the opposite for the exponential rado loss, and a generalization of the RADOBOOST algorithm of Nock et al. [2015], which does not modify rados, is a formal boosting algorithm for a broad set of regularizers. Also, remarkably, only the high-level code of the weak learner depends on the regularizer; that of the strong learner is not affected. 4 Boosting with (rado) regularized losses Ω-R.ADABOOST presents our approach to learning with rados regularized with regularizer Ω to minimise loss `expr (Sr,θ,Ω) in eq. (45). Classifier θt is defined as θt . = ∑t t′=1 αι(t′) · 1ι(t′), where 1k is the kth canonical basis vector. The expected edge rt used to compute αt in eq. (16) is based on the following basis assignation: rι(t) ← 1 π∗ι(t) n∑ j=1 wtjπjι(t) (∈ [−1, 1]) . (19) The computation of rt is eventually tweaked by the weak learner, as displayed in Algorithm ΩWL. 
We investigate four choices for Ω. For each of them, we prove the boosting ability of ΩR.ADABOOST (Γ is symmetric positive definite, Sd is the symmetric group of order d, |θ| is the Algorithm 2 Ω-WL, for Ω ∈ {‖.‖1, ‖.‖2Γ, ‖.‖∞, ‖.‖Φ} Input set of rados Sr . = {π1,π2, ...,πn}; weights w ∈ 4n; parameters γ ∈ (0, 1), ω ∈ R+; classifier θ ∈ Rd; Step 1 : pick weak feature ι∗ ∈ [d]; Optional — use preference order: ι ι′ ⇔ |rι| − δι ≥ |rι′ | − δι′ // δι . = ω · (Ω(θ + αι · 1ι)− Ω(θ)), rι is given in (19) and αι is given in (16) Step 2 : if Ω = ‖.‖2Γ then r∗ ← { rι∗ if rι∗ ∈ [−γ,γ] sign (rι∗) · γ otherwise ; (18) else r∗ ← rι∗ ; Return (ι∗, r∗); vector whose coordinates are the absolute values of the coordinates of θ): Ω(θ) = ‖θ‖1 . = |θ|>1 Lasso ‖θ‖2Γ . = θ>Γθ Ridge ‖θ‖∞ . = maxk |θk| `∞ ‖θ‖Φ . = maxM∈Sd(M|θ|)>ξ SLOPE (20) [Bach et al., 2011, Bogdan et al., 2015, Duchi and Singer, 2009, Su and Candès, 2015]. The coordinates of ξ in SLOPE are ξk . = Φ−1(1− kq/(2d)) where Φ−1(.) is the quantile of the standard normal distribution and q ∈ (0, 1); thus, the largest coordinates (in absolute value) of θ are more penalized. We now establish the boosting ability of Ω-R.ADABOOST. We give no direction for Step 1 in Ω-WL, which is consistent with the definition of a weak learner in the boosting theory: all we require from the weak learner is |r.| no smaller than some weak learning threshold γWL > 0. Definition 8 Fix any constant γWL ∈ (0, 1). Ω-WL is said to be a γWL-Weak Learner iff the feature ι(t) it picks at iteration t satisfies |rι(t)| ≥ γWL, for any t = 1, 2, ..., T . We also provide an optional step for the weak learner in Ω-WL, which we exploit in the experimentations, which gives a total preference order on features to optimise further Ω-R.ADABOOST. Theorem 9 (boosting with ridge). Take Ω(.) = ‖.‖2Γ. Fix any 0 < a < 1/5, and suppose that ω and the number of iterations T of Ω-R.ADABOOST are chosen so that ω < (2amin k max j π2jk)/(TλΓ) , (21) where λΓ > 0 is the largest eigenvalue of Γ. Then there exists some γ > 0 (depending on a, and given to Ω-WL) such that for any fixed 0 < γWL < γ, if Ω-WL is a γWL-Weak Learner, then Ω-R.ADABOOST returns at the end of the T boosting iterations a classifier θT which meets: `expr (Sr,θT , ‖.‖2Γ) ≤ exp(−aγ2WLT/2) . (22) Furthermore, if we fix a = 1/7, then we can fix γ = 0.98, and if a = 1/10, then we can fix γ = 0.999. Two remarks are in order. First, the cases a = 1/7, 1/10 show that Ω-WL can still obtain large edges in eq. (19), so even a “strong” weak learner might fit in for Ω-WL, without clamping edges. Second, the right-hand side of ineq. (21) may be very large if we consider that mink maxj π2jk may be proportional to m2. So the constraint onω is in fact loose. Theorem 10 (boosting with lasso or `∞). Take Ω(.) ∈ {‖.‖1, ‖.‖∞}. Suppose Ω-WL is a γWL-Weak Learner for some γWL > 0. Suppose ∃0 < a < 3/11 s. t. ω satisfies: ω = aγWL min k max j |πjk| . (23) Then Ω-R.ADABOOST returns at the end of the T boosting iterations a classifier θT which meets: `expr (Sr,θT ,Ω) ≤ exp(−T̃γ2WL/2) , (24) where T̃ = aγWLT if Ω = ‖.‖1, and T̃ = (T − T∗) + aγWL · T∗ if Ω = ‖.‖∞; T∗ is the number of iterations where the feature computing the `∞ norm was updated3. We finally investigate the SLOPE choice. The Theorem is proven for ω = 1 in Ω-R.ADABOOST, for two reasons: it matches the original definition [Bogdan et al., 2015] and furthermore it unveils an interesting connection between boosting and SLOPE properties. Theorem 11 (boosting with SLOPE). Take Ω(.) = ‖.‖Φ. Let a . 
= min{3γWL/11, Φ−1(1 − q/(2d))/min_k max_j |πjk|}. Suppose wlog |θTk| ≥ |θT(k+1)|, ∀k, and fix ω = 1. Suppose (i) Ω-WL is a γWL-Weak Learner for some γWL > 0, and (ii) the q-value is chosen to meet: q ≥ 2 · max_k { (1 − Φ((3γWL/11) · max_j |πjk|)) / (k/d) } . Then classifier θT returned by Ω-R.ADABOOST at the end of the T boosting iterations satisfies: ℓ^exp_r(Sr, θT, ‖.‖Φ) ≤ exp(−aγ²WL T/2) . (25) Constraint (ii) on q is interesting in the light of the properties of SLOPE [Su and Candès, 2015]. Modulo some assumptions, SLOPE yields control of the false discovery rate (FDR) — i.e., it controls the proportion of negligible coefficients in the “true” linear model θ∗ that are found significant in the learned θ. Constraint (ii) links the “small” achievable FDR (upperbounded by q) to the “boostability” of the data: the fact that each feature k can be chosen by the weak learner for a “large” γWL, or has max_j |πjk| large, precisely flags potential significant features, thus reducing the risk of sparsity errors, and allowing small q, which is constraint (ii). Using the second order approximation of normal quantiles [Su and Candès, 2015], a sufficient condition for (ii) is that, for some K > 0, γWL min_k max_j |πjk| ≥ K · √(log d + log q−1) ; (26) but min_k max_j |πjk| is proportional to m, so ineq. (26), and thus (ii), may hold even for small samples and q-values. An additional Theorem, deferred to the SM for space considerations, shows that for any applicable choice of regularization (eq. 20), the regularized log-loss of θT over examples enjoys with high probability a monotonically decreasing upperbound with T as: ℓ^log_e(Se, θ, Ω) ≤ log 2 − κ · T + τ(m), with τ(m) → 0 when m → ∞ (and τ does not depend on T), and κ > 0 does not depend on T. Hence, Ω-R.ADABOOST is an efficient proxy to boost the regularized log-loss over examples, using whichever of the ridge, lasso, ℓ∞ or SLOPE regularization — establishing the first boosting algorithm for this choice — or linear combinations of the choices, e.g. for elastic nets. If we were to compare Theorems 9 – 11 (eqs (22, 24, 25)), then the convergence looks best for ridge (the unsigned exponent is Õ(γ²WL)) while it looks slightly worse for ℓ∞ and SLOPE (the unsigned exponent is now Õ(γ³WL)), the lasso being in between. 5 Experiments We have implemented Ω-WL4 using the order suggested to retrieve the topmost feature in the order. Hence, the weak learner returns the feature maximising |rι| − δι. The rationale for this comes from the proofs of Theorems 9 — 11, showing that ∏_t exp(−(r²ι(t)/2 − δι(t))) is an upperbound on the exponential regularized rado-loss. We do not clamp the weak learner for Ω(.) = ‖.‖2Γ, so the weak learner is restricted to Step 1 in Ω-WL5. The objective of these experiments is to evaluate Ω-R.ADABOOST as a contender for supervised learning per se. We compared Ω-R.ADABOOST to ADABOOST/ℓ1 regularized-ADABOOST [Schapire and Singer, 1999, Xi et al., 2009]. All algorithms are run for a total of T = 1000 iterations, and at the end of the iterations, the classifier in the sequence that minimizes the empirical loss is kept.
3 If several features match this criterion, T∗ is the total number of iterations for all these features.
4 Code available at: http://users.cecs.anu.edu.au/∼rnock/
5 The values for ω that we test, in {10−u, u ∈ {0, 1, 2, 3, 4, 5}}, are small with respect to the upperbound in ineq. (21) given the number of boosting steps (T = 1000), and would yield on most domains a maximal γ ≈ 1.
Notice therefore that rado-based classifiers are evaluated on the training set which computes the
rados. To obtain very sparse solutions for regularized-ADABOOST, we pick its ω (β in [Xi et al., 2009]) in {10−4, 1, 104}. The complete results aggregate experiments on twenty (20) domains, all but one coming from the UCI [Bache and Lichman, 2013] (plus the Kaggle competition domain “Give me some credit”), with up to d =500+ features and m =100 000+ examples. Two tables, in the SM (Tables 1 and 2 in Section 3) report respectively the test errors and sparsity of classifiers, whose summary is given here in Table 2. The experimental setup is a ten-folds stratified cross validation for all algorithms and each domain. ADABOOST/regularized-ADABOOST is trained using the complete training fold. When the domain size m ≤ 40000, the number of rados n used for Ω-R.ADABOOST is a random subset of rados of size equal to that of the training fold. When the domain size exceeds 40000, a random set of n = 10000 rados is computed from the training fold. Thus, (i) there is no optimisation of the examples chosen to compute rados, (ii) we always keep a very small number of rados compared to the maximum available, and (iii) when the domain size gets large, we keep a comparatively tiny number of rados. Hence, the performances of Ω-R.ADABOOST do not stem from any optimization in the choice or size of the rado sample. Ada ∅ ‖.‖2Id ‖.‖1 ‖.‖∞ ‖.‖Φ Ada 11 10 10 8 9 ∅ 9 3 3 2 1 ‖.‖2Id 10 17 11 9 7 ‖.‖1 10 17 7 7 4 ‖.‖∞ 11 18 9 9 8 ‖.‖Φ 10 19 10 10 11 Table 2: Number of domains for which algorithm in row beats algorithm in column (Ada = best result of ADABOOST, ∅ = Ω-R.ADABOOST not regularized, see text). Experiments support several key observations. First, regularization consistently reduces the test error of Ω-R.ADABOOST, by more than 15% on Magic, and 20% on Kaggle. In Table 2, Ω-R.ADABOOST unregularized ("∅") is virtually always beaten by its SLOPE regularized version. Second, Ω-R.ADABOOST is able to obtain both very sparse and accurate classifiers (Magic, Hardware, Marketing, Kaggle). Third, Ω-R.ADABOOST competes or beats ADABOOST on all domains, and is all the better as the domain gets bigger. Even qualitatively as seen in Table 2, the best result obtained by ADABOOST (regularized or not) does not manage to beat any of the regularized versions of Ω-R.ADABOOST on the majority of the domains. Fourth, it is important to have several choices of regularizers at hand. On domain Statlog, the difference in test error between the worst and the best regularization of Ω-R.ADABOOST exceeds 15%. Fifth, as already remarked [Nock et al., 2015], significantly subsampling rados (e.g. Marketing, Kaggle) still yields very accurate classifiers. Sixth, regularization in Ω-R.ADABOOST successfully reduces sparsity to learn more accurate classifiers on several domains (Spectf, Transfusion, Hill-noise, Winered, Magic, Marketing), achieving efficient adaptive sparsity control. Last, the comparatively extremely poor results of ADABOOST on the biggest domains seems to come from another advantage of rados that the theory developed so far does not take into account: on domains for which some features are significantly correlated with the class and for which we have a large number of examples, the concentration of the expected feature value in rados seems to provide leveraging coefficients that tend to have much larger (absolute) value than in ADABOOST, making the convergence of Ω-R.ADABOOST significantly faster than ADABOOST. 
For example, we have checked that it takes much more than the T = 1000 iterations for ADABOOST to start converging to the results of regularized Ω-R.ADABOOST on Hardware or Kaggle. 6 Conclusion We have shown that the recent equivalences between two example and rado losses can be unified and generalized via a principled representation of a loss function in a two-player zero-sum game. Furthermore, we have shown that this equivalence extends to regularized losses, where the regularization in the rado loss is performed over the rados themselves with Minkowski sums. Our theory and experiments on Ω-R.ADABOOST with prominent regularizers (including ridge, lasso, `∞, SLOPE) indicate that when such a simple regularized form of the rado loss is available, it may help to devise accurate and efficient workarounds to boost a regularized loss over examples via the rado loss, even when the regularizer is significantly more involved like e.g. for group norms [Bach et al., 2011]. Acknowledgments Thanks are due to Stephen Hardy and Giorgio Patrini for stimulating discussions around this material.
1. What is the main contribution of the paper regarding Rademacher observations? 2. How does the paper demonstrate the practical value of rado-losses? 3. Do you have any concerns about the generalization of the theorems to efficient algorithms? 4. How does the new boosting algorithm compare to standard ones in terms of efficiency? 5. What are some suggestions for improving the clarity and precision of the paper's material? 6. What are some interesting and open questions raised by the paper, and how do they relate to other areas of machine learning?
Review
Review The paper makes contributions involving Rademacher observations (previously dubbed "rados"), and their relationship to regularization. It shows that the compression of the dataset's information into the set of rados has a regularizing effect, defines rado-losses which are of special importance in the theory, and demonstrates they hold practical value.The technical contribution of this paper is clear - I recommend acceptance because of the potential value. The theorems apply only to the four losses mentioned, and I was not able to go through the proofs in full detail so I have some doubts about how they would generalize to efficient algorithms otherwise; however, I had more significant doubts after reading the original Nock et al. 2015 paper, and clearly the theory has grown since then. It is indeed interesting that the new boosting algorithm can be implemented as efficiently as the standard ones, but I view this as a side consequence of the formulation and have therefore focused on the remainder in my evaluation. There are several places in the paper with typos (e.g. line 101, 217), and the material can get dense for a general machine learning audience, though it is mathematically clean and precise. E.g., the discussion of the affinely generated functions in Sec. 2 would be much clearer if re-ordered, so that the motivation for the definitions and choices is clear (e.g. rados represent powersets, so the powerset indexing would clear up significantly). For similar reasons I suggest defining (1) and (2) as a different script letter instead of L, due to Eqs. (11,12) and the possible confusion with losses. Following from the paper are several interesting and open questions with possible connections to other areas of machine learning. I am interested in the connection to the two-player game formulation, because it suggests different parametrizations of the problem by choosing the z's differently for other learning settings. The z's are currently chosen in a similar way to the boosting game, this conflation of information happens due to linear constraints on the z_i's (weak learning assumption), so they are chosen in this setting with benefits. But the results in Sec. 2 are general and there are other similar game-theoretic formulations in areas like (e.g. online) learning. Are there other ways of condensing the data, besides rados, that are similarly beneficial for those problems?
NIPS
Title On Regularizing Rademacher Observation Losses Abstract It has recently been shown that supervised learning linear classifiers with two of the most popular losses, the logistic and square loss, is equivalent to optimizing an equivalent loss over sufficient statistics about the class: Rademacher observations (rados). It has also been shown that learning over rados brings solutions to two prominent problems for which the state of the art of learning from examples can be comparatively inferior and in fact less convenient: (i) protecting and learning from private examples, (ii) learning from distributed datasets without entity resolution. Bis repetita placent: the two proofs of equivalence are different and rely on specific properties of the corresponding losses, so whether these can be unified and generalized inevitably comes to mind. This is our first contribution: we show how they can be fit into the same theory for the equivalence between example and rado losses. As a second contribution, we show that the generalization unveils a surprising new connection to regularized learning, and in particular a sufficient condition under which regularizing the loss over examples is equivalent to regularizing the rados (i.e. the data) in the equivalent rado loss, in such a way that an efficient algorithm for one regularized rado loss may be as efficient when changing the regularizer. This is our third contribution: we give a formal boosting algorithm for the regularized exponential rado-loss which boost with any of the ridge, lasso, SLOPE, `∞, or elastic net regularizer, using the same master routine for all. Because the regularized exponential rado-loss is the equivalent of the regularized logistic loss over examples we obtain the first efficient proxy to the minimization of the regularized logistic loss over examples using such a wide spectrum of regularizers. Experiments with a readily available code display that regularization significantly improves rado-based learning and compares favourably with example-based learning. 1 Introduction What kind of data should we use to train a supervised learner ? A recent result has shown that minimising the popular logistic loss over examples with linear classifiers (in supervised learning) is equivalent to the minimisation of the exponential loss over sufficient statistics about the class known as Rademacher observations (rados, [Nock et al., 2015]), for the same classifier. In short, we fit a classifier over data that is different from examples, and the same classifier generalizes well to new observations. It has been shown that rados offer solutions for two problems for which the state of the art involving examples can be comparatively significantly inferior: • protection of the examples’ privacy from various algebraic, geometric, statistical and computational standpoints, and learning from private data [Nock et al., 2015]; • learning from a large number of distributed datasets without having to perform entity resolution between datasets [Patrini et al., 2016]. Quite remarkably, the training time of the algorithms involved can be smaller than it would be on examples, by orders of magnitude [Patrini et al., 2016]. Two key problems remain however: the 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. 
accuracy of learning from rados can compete experimentally with that of learning from examples, yet there is a gap to reduce for rados to be not just a good material to learn from in a privacy/distributed setting, but also a serious alternative to learning from examples at large, yielding new avenues to supervised learning. Second, theoretically speaking, it is now known that two widely popular losses over examples admit an equivalent loss in the rado world: the logistic loss and the square loss [Nock et al., 2015, Patrini et al., 2016]. This inevitably suggests that this property may hold for more losses, yet barely anything displays patterns of generalizability in the existing proofs. Our contributions: in this paper, we provide answers to these two questions, with three main contributions. Our first contribution is to show that this generalization indeed holds: other example losses admit equivalent losses in the rado world, meaning in particular that their minimiser classifier is the same, regardless of the dataset of examples. The technique we use exploits a two-player zero sum game representation of convex losses, that has been very useful to analyse boosting algorithms [Schapire, 2003, Telgarsky, 2012], with one key difference: payoffs are non-linear convex, eventually non-differentiable. These also resemble the entropic dual losses [Reid et al., 2015], with the difference that we do not enforce conjugacy over the simplex. The conditions of the game are slightly different for examples and rados. We provide necessary and sufficient conditions for the resulting losses over examples and rados to be equivalent. Informally, equivalence happens iff the convex functions of the games satisfy a symmetry relationship and the weights satisfy a linear system of equations. Some popular losses fit in the equivalence [Nair and Hinton, 2010, Gentile and Warmuth, 1998, Nock and Nielsen, 2008, Telgarsky, 2012, Vapnik, 1998, van Rooyen et al., 2015]. Our second contribution came unexpectedly through this equivalence. Regularizing a loss is standard in machine learning [Bach et al., 2011]. We show a sufficient condition for the equivalence under which regularizing the example loss is equivalent to regularizing the rados in the equivalent rado loss, i.e. making a Minkowski sum of the rado set with a classifier-based set. This property is independent of the regularizer, and incidentally happens to hold for all our cases of equivalence (Cf first contribution). A regularizer added to a loss over examples thus transfers to data in the rado world, in essentially the same way for all regularizers, and if one can solve the non-trivial computational and optimization problem that poses this data modification for one regularized rado loss, then, basically, "A good optimization algorithm for this regularized rado loss may fit to other regularizers as well” Our third contribution exemplifies this. We propose an iterative boosting algorithm, Ω-R.ADABOOST, that learns a classifier from rados using the exponential regularized rado loss, with regularization choice belonging to the ridge, lasso, `∞, or the recently coined SLOPE [Bogdan et al., 2015]. Since rado regularization would theoretically require to modify data at each iteration, such schemes are computationally non-trivial. We show that this modification can in fact be bypassed for the exponential rado loss, and the algorithm, Ω-R.ADABOOST, is as fast as ADABOOST. 
Ω-R.ADABOOST has however a key advantage over ADABOOST that to our knowledge is new in the boosting world: for any of these four regularizers, Ω-R.ADABOOST is a boosting algorithm — thus, because of the equivalence between the minimization of the logistic loss over examples and the minimization of the exponential rado loss, Ω-R.ADABOOST is in fact an efficient proxy to boost the regularized logistic loss over examples using whichever of the four regularizers, and by extension, linear combination of them (e.g., elastic net regularization [Zou and Hastie, 2005]). We are not aware of any regularized logistic loss formal boosting algorithm with such a wide spectrum of regularizers. Extensive experiments validate this property: Ω-R.ADABOOST is all the better vs ADABOOST (unregularized or regularized) as the domain gets larger, and is able to rapidly learn both accurate and sparse classifiers, making it an especially good contender for supervised learning at large on big domains. The rest of this paper is as follows. Sections §2, 3 and 4 respectively present the equivalence between example and rado losses, its extension to regularized learning and Ω-R.ADABOOST. §5 and 6 respectively present experiments, and conclude. In order not to laden the paper’s body, a Supplementary Material (SM) contains the proofs and additional theoretical and experimental results. 2 Games and equivalent example/rado losses To avoid notational load, we briefly present our learning setting to point the key quantity in our formulation of the general two players game. Let [m] .= {1, 2, ...,m} and Σm . = {−1, 1}m, for m > 0. The classical (batch) supervised learner is example-based: it is given a set of examples S = {(xi, yi), i ∈ [m]} where xi ∈ Rd, yi ∈ Σ1, ∀i ∈ [m]. It returns a classifier h : Rd → R from a predefined set H. Let zi(h) . = yh(xi) and abbreviate z(h) by z for short. The learner fits h to the minimization of a loss. Table 1, column `e, presents some losses that can be used: we remark that h appears only through z, so let us consider in this section that the learner rather fits vector z ∈ Rm. We can now define our two players game setting. Let ϕe : R→ R and ϕr : R→ R two convex and lower-semicontinuous generators. We define functionsLe : Rm×Rm → R andLr : R2 m×Rm → R: Le(p, z) . = ∑ i∈[m] pizi + µe ∑ i∈[m] ϕe(pi) , (1) Lr(q, z) . = ∑ I⊆[m] qI ∑ i∈I zi + µr ∑ I⊆[m] ϕr(qI) , (2) where µe,µr > 0 do not depend on z. For the notation to be meaningful, the coordinates in q are assumed (wlog) to be in bijection with 2[m]. The dependence of both problems in their respective generators is implicit and shall be clear from context. The adversary’s goal is to fit p∗(z) . = arg min p∈Rm Le(p, z) , (3) q∗(z) . = arg min q∈H2m Lr(q, z) , (4) with H2m .= {q ∈ R2m : 1>q = 1}, so as to attain Le(z) . = Le(p ∗(z), z) , (5) Lr(z) . = Lr(q ∗(z), z) , (6) and let ∂Le(z) and ∂Lr(z) denote their subdifferentials. We view the learner’s task as the problem of maximising the corresponding problems in eq. (5) (with examples; this is already sketched above) or (6) (with what we shall call Rademacher observations, or rados), or equivalently minimising negative the corresponding function, and then resort to a loss function. The question of when these two problems are equivalent from the learner’s standpoint motivates the following definition. Definition 1 Two generators ϕe, ϕr are said proportionate iff ∀m > 0, there exists (µe,µr) such that Le(z) = Lr(z) + b , ∀z ∈ Rm . (7) (b does not depend on z) ∀m ∈ N∗, let Gm . 
= [ 0>2m−1 1 > 2m−1 Gm−1 Gm−1 ] (∈ {0, 1}m×2 m ) (8) if m > 1, and G1 . = [0 1] otherwise (notation zd indicates a vector in Rd). Theorem 2 ϕe, ϕr are proportionate iff the optima p∗(z) and q∗(z) to eqs (3) and (4) satisfy: p∗(z) ∈ ∂Lr(z) , (9) Gmq ∗(z) ∈ ∂Le(z) . (10) If ϕe, ϕr are differentiable and strictly convex, they are proportionate iff p∗(z) = Gmq∗(z). We can alleviate the fact that convexity is strict, which results in a set-valued identity for ϕe, ϕr to be proportionate. This gives a necessary and sufficient condition for two generators to be proportionate. It does not say how to construct one from the other, if possible. We now show that it is indeed possible and prune the search space: if ϕe is proportionate to some ϕr, then it has to be a “symmetrized” version of ϕr, according to the following definition. Definition 3 Let ϕr s.t. domϕr ⊇ (0, 1). ϕs(r)(z) . = ϕr(z) + ϕr(1− z) is the symmetrisation of ϕr. Lemma 4 If ϕe and ϕr are proportionate, then ϕe(z) = (µr/µe) · ϕs(r)(z) + (b/µe) (b is in (7)). To summarize, ϕe and ϕr are proportionate iff (i) they meet the structural property that ϕe is (proportional to) the symmetrized version of ϕr (according to Definition 3), and (ii) the optimal solutions p∗(z) and q∗(z) to problems (1) and (2) satisfy the conditions of Theorem 2. Depending on the direction, we have two cases to craft proportionate generators. First, if we have ϕr, then necessarily ϕe ∝ ϕs(r) so we merely have to check Theorem 2. Second, if we have ϕe, then it matches Definition 31. In this case, we have to find ϕr = f + g where g(z) = −g(1− z) and ϕe(z) = f(z) + f(1− z). We now come back to Le(z), Lr(z) (Definition 1), and make the connection with example and rado losses. In the next definition, an e-loss `e(z) is a function defined over the coordinates of z, and a r-loss `r(z) is a function defined over the subsets of sums of coordinates. Functions can depend on other parameters as well. Definition 5 Suppose e-loss `e(z) and r-loss `r(z) are such that there exist (i) fe : R → R and fr(z) : R→ R both strictly increasing and such that ∀z ∈ Rm, −Le(z) = fe (`e(z)) , (11) −Lr(z) = fr (`r(z)) , (12) where Le(z) and Lr(z) are defined via two proportionate generators ϕe and ϕr (Definition 1). Then the couple (`e, `r) is called a couple of equivalent example-rado losses. Following is the main Theorem of this Section, which summarizes all the cases of equivalence between example and rado losses, and shows that the theory developed on example / rado losses with proportionate generators encompasses the specific proofs and cases already known [Nock et al., 2015, Patrini et al., 2016]. Table 1 also displays generator ϕr. Theorem 6 In each row of Table 1, `e(z,µe) and `r(z,µr) are equivalent for µe and µr as indicated. The proof (SM, Subsection 2.3) details for each case the proportionate generators ϕe and ϕr. 3 Learning with (rado) regularized losses We now detail further the learning setting. In the preceeding Section, we have definef zi(h) . = yh(xi), which we plug in the losses of Table 1 to obtain the corresponding example and rado losses. Losses simplify conveniently when H consists of linear classifiers, h(x) .= θ>x for some θ ∈ Θ ⊆ Rd. In this case, the example loss can be described using edge vectors Se . = {yi · xi, i = 1, 2, ...,m} since zi = θ >(yi ·xi), and the rado loss can be described using rademacher observations [Nock et al., 2015], since ∑ i∈I zi = θ >πσ for σi = yi iff i ∈ I (and −yi otherwise) and πσ . = (1/2) · ∑ i(σi + yi) ·xi. Let us define S∗r . 
= {πσ,σ ∈ Σm} the set of all rademacher observations. We rewrite any couple of equivalent example and rado losses as `e(Se,θ) and `r(S∗r ,θ) respectively 2, omitting parameters µe and µr, assumed to be fixed beforehand for the equivalence to hold (see Table 1). Let us regularize the example loss, so that the learner’s goal is to minimize `e(Se,θ,Ω) . = `e(Se,θ) + Ω(θ) , (13) 1Alternatively, −ϕe is permissible [Kearns and Mansour, 1999]. 2To prevent notational overload, we blend notions of (pointwise) loss and (samplewise) risk, as just “losses”. Algorithm 1 Ω-R.ADABOOST Input set of rados Sr . = {π1,π2, ...,πn}; T ∈ N∗; parameters γ ∈ (0, 1),ω ∈ R+; Step 1 : let θ0 ← 0, w0 ← (1/n)1 ; Step 2 : for t = 1, 2, ..., T Step 2.1 : call the weak learner: (ι(t), rt)← Ω-WL(Sr,wt,γ,ω,θt−1); Step 2.2 : compute update parameters αι(t) and δt (here, π∗k . = maxj |πjk|): αι(t) ← (1/(2π∗ι(t))) log((1 + rt)/(1− rt)) and δt ← ω · (Ω(θt)− Ω(θt−1)) ; (16) Step 2.3 : update and normalize weights: for j = 1, 2, ..., n, wtj ← w(t−1)j · exp ( −αtπjι(t) + δt ) /Zt ; (17) Return θT ; with Ω a regularizer [Bach et al., 2011]. The following shows that when fe in eq. (11) is linear, there is a rado-loss equivalent to this regularized loss, regardless of Ω. Theorem 7 Suppose H contains linear classifiers. Let (`e(Se,θ), `r(S∗r ,θ)) be any couple of equivalent example-rado losses such that fe in eq. (11) is linear: fe(z) = ae · z + be , (14) for some ae > 0, be ∈ R. Then for any regularizer Ω(.) (assuming wlog Ω(0) = 0), the regularized example loss `e(Se,θ,Ω) is equivalent to rado loss `r(S ∗,Ω,θ r ,θ) computed over regularized rados: S∗,Ω,θr . = S∗r ⊕ {−Ω̃(θ) · θ} , (15) Here, ⊕ is Minkowski sum and Ω̃(θ) .= ae · Ω(θ)/‖θ‖22 if θ 6= 0 (and 0 otherwise). Theorem 7 applies to all rado losses (I-IV) in Table 1. The effect of regularization on rados is intuitive from the margin standpoint: assume that a “good” classifier θ is one that ensures lowerbounded inner products θ>z ≥ τ for some margin threshold τ . Then any good classifier on a regularized rado πσ shall actually meet, over examples, ∑ i:yi=σi θ>(yi · xi) ≥ τ + ae · Ω(θ). This inequality ties an "accuracy" of θ (edges, left hand-side) and its sparsity (right-hand side). Clearly, Theorem 7 has an unfamiliar shape since regularisation modifies data in the rado world: a different θ, or a different Ω, yields a different S∗,Ω,θr , and therefore it may seem very tricky to minimize such a regularized loss. Even more, iterative algorithms like boosting algorithms look at first glance a poor choice, since any update on θ implies an update on the rados as well. What we show in the following Section is essentially the opposite for the exponential rado loss, and a generalization of the RADOBOOST algorithm of Nock et al. [2015], which does not modify rados, is a formal boosting algorithm for a broad set of regularizers. Also, remarkably, only the high-level code of the weak learner depends on the regularizer; that of the strong learner is not affected. 4 Boosting with (rado) regularized losses Ω-R.ADABOOST presents our approach to learning with rados regularized with regularizer Ω to minimise loss `expr (Sr,θ,Ω) in eq. (45). Classifier θt is defined as θt . = ∑t t′=1 αι(t′) · 1ι(t′), where 1k is the kth canonical basis vector. The expected edge rt used to compute αt in eq. (16) is based on the following basis assignation: rι(t) ← 1 π∗ι(t) n∑ j=1 wtjπjι(t) (∈ [−1, 1]) . (19) The computation of rt is eventually tweaked by the weak learner, as displayed in Algorithm ΩWL. 
We investigate four choices for Ω. For each of them, we prove the boosting ability of ΩR.ADABOOST (Γ is symmetric positive definite, Sd is the symmetric group of order d, |θ| is the Algorithm 2 Ω-WL, for Ω ∈ {‖.‖1, ‖.‖2Γ, ‖.‖∞, ‖.‖Φ} Input set of rados Sr . = {π1,π2, ...,πn}; weights w ∈ 4n; parameters γ ∈ (0, 1), ω ∈ R+; classifier θ ∈ Rd; Step 1 : pick weak feature ι∗ ∈ [d]; Optional — use preference order: ι ι′ ⇔ |rι| − δι ≥ |rι′ | − δι′ // δι . = ω · (Ω(θ + αι · 1ι)− Ω(θ)), rι is given in (19) and αι is given in (16) Step 2 : if Ω = ‖.‖2Γ then r∗ ← { rι∗ if rι∗ ∈ [−γ,γ] sign (rι∗) · γ otherwise ; (18) else r∗ ← rι∗ ; Return (ι∗, r∗); vector whose coordinates are the absolute values of the coordinates of θ): Ω(θ) = ‖θ‖1 . = |θ|>1 Lasso ‖θ‖2Γ . = θ>Γθ Ridge ‖θ‖∞ . = maxk |θk| `∞ ‖θ‖Φ . = maxM∈Sd(M|θ|)>ξ SLOPE (20) [Bach et al., 2011, Bogdan et al., 2015, Duchi and Singer, 2009, Su and Candès, 2015]. The coordinates of ξ in SLOPE are ξk . = Φ−1(1− kq/(2d)) where Φ−1(.) is the quantile of the standard normal distribution and q ∈ (0, 1); thus, the largest coordinates (in absolute value) of θ are more penalized. We now establish the boosting ability of Ω-R.ADABOOST. We give no direction for Step 1 in Ω-WL, which is consistent with the definition of a weak learner in the boosting theory: all we require from the weak learner is |r.| no smaller than some weak learning threshold γWL > 0. Definition 8 Fix any constant γWL ∈ (0, 1). Ω-WL is said to be a γWL-Weak Learner iff the feature ι(t) it picks at iteration t satisfies |rι(t)| ≥ γWL, for any t = 1, 2, ..., T . We also provide an optional step for the weak learner in Ω-WL, which we exploit in the experimentations, which gives a total preference order on features to optimise further Ω-R.ADABOOST. Theorem 9 (boosting with ridge). Take Ω(.) = ‖.‖2Γ. Fix any 0 < a < 1/5, and suppose that ω and the number of iterations T of Ω-R.ADABOOST are chosen so that ω < (2amin k max j π2jk)/(TλΓ) , (21) where λΓ > 0 is the largest eigenvalue of Γ. Then there exists some γ > 0 (depending on a, and given to Ω-WL) such that for any fixed 0 < γWL < γ, if Ω-WL is a γWL-Weak Learner, then Ω-R.ADABOOST returns at the end of the T boosting iterations a classifier θT which meets: `expr (Sr,θT , ‖.‖2Γ) ≤ exp(−aγ2WLT/2) . (22) Furthermore, if we fix a = 1/7, then we can fix γ = 0.98, and if a = 1/10, then we can fix γ = 0.999. Two remarks are in order. First, the cases a = 1/7, 1/10 show that Ω-WL can still obtain large edges in eq. (19), so even a “strong” weak learner might fit in for Ω-WL, without clamping edges. Second, the right-hand side of ineq. (21) may be very large if we consider that mink maxj π2jk may be proportional to m2. So the constraint onω is in fact loose. Theorem 10 (boosting with lasso or `∞). Take Ω(.) ∈ {‖.‖1, ‖.‖∞}. Suppose Ω-WL is a γWL-Weak Learner for some γWL > 0. Suppose ∃0 < a < 3/11 s. t. ω satisfies: ω = aγWL min k max j |πjk| . (23) Then Ω-R.ADABOOST returns at the end of the T boosting iterations a classifier θT which meets: `expr (Sr,θT ,Ω) ≤ exp(−T̃γ2WL/2) , (24) where T̃ = aγWLT if Ω = ‖.‖1, and T̃ = (T − T∗) + aγWL · T∗ if Ω = ‖.‖∞; T∗ is the number of iterations where the feature computing the `∞ norm was updated3. We finally investigate the SLOPE choice. The Theorem is proven for ω = 1 in Ω-R.ADABOOST, for two reasons: it matches the original definition [Bogdan et al., 2015] and furthermore it unveils an interesting connection between boosting and SLOPE properties. Theorem 11 (boosting with SLOPE). Take Ω(.) = ‖.‖Φ. Let a . 
= min{3γWL/11,Φ−1(1 − q/(2d))/mink maxj |πjk|}. Suppose wlog |θTk| ≥ |θT (k+1)|,∀k, and fix ω = 1. Suppose (i) Ω-WL is a γWL-Weak Learner for some γWL > 0, and (ii) the q-value is chosen to meet: q ≥ 2 ·max k {( 1− Φ ( 3γWL 11 ·max j |πjk| ))/( k d )} . Then classifier θT returned by Ω-R.ADABOOST at the end of the T boosting iterations satisfies: `expr (Sr,θT , ‖.‖Φ) ≤ exp(−aγ2WLT/2) . (25) Constraint (ii) on q is interesting in the light of the properties of SLOPE [Su and Candès, 2015]. Modulo some assumptions, SLOPE yields a control the false discovery rate (FDR) — i.e., negligible coefficients in the "true” linear model θ∗ that are found significant in the learned θ —. Constraint (ii) links the "small” achievable FDR (upperbounded by q) to the "boostability” of the data: the fact that each feature k can be chosen by the weak learner for a "large” γWL, or has maxj |πjk| large, precisely flags potential significant features, thus reducing the risk of sparsity errors, and allowing small q, which is constraint (ii). Using the second order approximation of normal quantiles [Su and Candès, 2015], a sufficient condition for (ii) is that, for some K > 0, γWL min j max j |πjk| ≥ K · √ log d+ log q−1 ; (26) but minj maxj |πjk| is proportional to m, so ineq. (26), and thus (ii), may hold even for small samples and q-values. An additional Theorem deferred to SM sor space considerations shows that for any applicable choice of regularization (eq. 20), the regularized log-loss of θT over examples enjoys with high probability a monotonically decreasing upperbound with T as: `loge (Se,θ,Ω) ≤ log 2− κ · T + τ(m), with τ(m)→ 0 when m→∞ (and τ does not depend on T ), and κ > 0 does not depend on T . Hence, Ω-R.ADABOOST is an efficient proxy to boost the regularized log-loss over examples, using whichever of the ridge, lasso, `∞ or SLOPE regularization — establishing the first boosting algorithm for this choice —, or linear combinations of the choices, e.g. for elastic nets. If we were to compare Theorems 9 – 11 (eqs (22, 24, 25)), then the convergence looks best for ridge (the unsigned exponent is Õ(γ2WL)) while it looks slightly worse for `∞ and SLOPE (the unsigned exponent is now Õ(γ3WL)), the lasso being in between. 5 Experiments We have implemented Ω-WL4 using the order suggested to retrieve the topmost feature in the order. Hence, the weak learner returns the feature maximising |rι| − δι. The rationale for this comes from the proofs of Theorems 9 — 11, showing that ∏ t exp(−(r2ι(t)/2− δι(t))) is an upperbound on the exponential regularized rado-loss. We do not clamp the weak learner for Ω(.) = ‖.‖2Γ, so the weak learner is restricted to Step 1 in Ω-WL5. The objective of these experiments is to evaluate Ω-R.ADABOOST as a contender for supervised learning per se. We compared Ω-R.ADABOOST to ADABOOST/`1 regularized-ADABOOST [Schapire and Singer, 1999, Xi et al., 2009]. All algorithms are run for a total of T = 1000 iterations, and at the end of the iterations, the classifier in the sequence that minimizes the empirical loss is kept. Notice therefore that rado-based classifiers are evaluated on the training set which computes the 3If several features match this criterion, T∗ is the total number of iterations for all these features. 4Code available at: http://users.cecs.anu.edu.au/∼rnock/ 5the values forω that we test, in {10−u, u ∈ {0, 1, 2, 3, 4, 5}}, are small with respect to the upperbound in ineq. (21) given the number of boosting steps (T = 1000), and would yield on most domains a maximal γ ≈ 1. 
rados. To obtain very sparse solutions for regularized-ADABOOST, we pick its ω (β in [Xi et al., 2009]) in {10−4, 1, 104}. The complete results aggregate experiments on twenty (20) domains, all but one coming from the UCI [Bache and Lichman, 2013] (plus the Kaggle competition domain “Give me some credit”), with up to d =500+ features and m =100 000+ examples. Two tables, in the SM (Tables 1 and 2 in Section 3) report respectively the test errors and sparsity of classifiers, whose summary is given here in Table 2. The experimental setup is a ten-folds stratified cross validation for all algorithms and each domain. ADABOOST/regularized-ADABOOST is trained using the complete training fold. When the domain size m ≤ 40000, the number of rados n used for Ω-R.ADABOOST is a random subset of rados of size equal to that of the training fold. When the domain size exceeds 40000, a random set of n = 10000 rados is computed from the training fold. Thus, (i) there is no optimisation of the examples chosen to compute rados, (ii) we always keep a very small number of rados compared to the maximum available, and (iii) when the domain size gets large, we keep a comparatively tiny number of rados. Hence, the performances of Ω-R.ADABOOST do not stem from any optimization in the choice or size of the rado sample. Ada ∅ ‖.‖2Id ‖.‖1 ‖.‖∞ ‖.‖Φ Ada 11 10 10 8 9 ∅ 9 3 3 2 1 ‖.‖2Id 10 17 11 9 7 ‖.‖1 10 17 7 7 4 ‖.‖∞ 11 18 9 9 8 ‖.‖Φ 10 19 10 10 11 Table 2: Number of domains for which algorithm in row beats algorithm in column (Ada = best result of ADABOOST, ∅ = Ω-R.ADABOOST not regularized, see text). Experiments support several key observations. First, regularization consistently reduces the test error of Ω-R.ADABOOST, by more than 15% on Magic, and 20% on Kaggle. In Table 2, Ω-R.ADABOOST unregularized ("∅") is virtually always beaten by its SLOPE regularized version. Second, Ω-R.ADABOOST is able to obtain both very sparse and accurate classifiers (Magic, Hardware, Marketing, Kaggle). Third, Ω-R.ADABOOST competes or beats ADABOOST on all domains, and is all the better as the domain gets bigger. Even qualitatively as seen in Table 2, the best result obtained by ADABOOST (regularized or not) does not manage to beat any of the regularized versions of Ω-R.ADABOOST on the majority of the domains. Fourth, it is important to have several choices of regularizers at hand. On domain Statlog, the difference in test error between the worst and the best regularization of Ω-R.ADABOOST exceeds 15%. Fifth, as already remarked [Nock et al., 2015], significantly subsampling rados (e.g. Marketing, Kaggle) still yields very accurate classifiers. Sixth, regularization in Ω-R.ADABOOST successfully reduces sparsity to learn more accurate classifiers on several domains (Spectf, Transfusion, Hill-noise, Winered, Magic, Marketing), achieving efficient adaptive sparsity control. Last, the comparatively extremely poor results of ADABOOST on the biggest domains seems to come from another advantage of rados that the theory developed so far does not take into account: on domains for which some features are significantly correlated with the class and for which we have a large number of examples, the concentration of the expected feature value in rados seems to provide leveraging coefficients that tend to have much larger (absolute) value than in ADABOOST, making the convergence of Ω-R.ADABOOST significantly faster than ADABOOST. 
For example, we have checked that it takes much more than the T = 1000 iterations for ADABOOST to start converging to the results of regularized Ω-R.ADABOOST on Hardware or Kaggle. 6 Conclusion We have shown that the recent equivalences between two example and rado losses can be unified and generalized via a principled representation of a loss function in a two-player zero-sum game. Furthermore, we have shown that this equivalence extends to regularized losses, where the regularization in the rado loss is performed over the rados themselves with Minkowski sums. Our theory and experiments on Ω-R.ADABOOST with prominent regularizers (including ridge, lasso, `∞, SLOPE) indicate that when such a simple regularized form of the rado loss is available, it may help to devise accurate and efficient workarounds to boost a regularized loss over examples via the rado loss, even when the regularizer is significantly more involved like e.g. for group norms [Bach et al., 2011]. Acknowledgments Thanks are due to Stephen Hardy and Giorgio Patrini for stimulating discussions around this material.
1. What is the main contribution of the paper in the field of machine learning? 2. What is the significance of the proposed unified framework for finding equivalent rado losses? 3. How does the paper address the issue of differential privacy in machine learning? 4. What are the strengths and weaknesses of the proposed boosting algorithm for the exponential rado loss? 5. How does the reviewer assess the clarity and organization of the paper's content?
Review
Review The paper is about learning in the framework of so-called rado (RADemacher Observation) losses. The idea of this approach to learning is to replace the usual learning over examples to learning over their averages over random subsets. For such a replacement to be possible for some specific loss function ('example loss'), one has to point out the counterpart rado loss such that both losses, as functions of a classifier, are linked by a monotone function, and hence one can minimize the rado loss instead of the example loss. As explained in [Nock et al. 2015], rado learning can be beneficial in the context of differential privacy. Previously in the literature, equivalent losses were known only for the logistic and the quadratic loss. Authors provide a unified framework to prove such equivalence, and use it to establish the counterpart rado losses for a number of other loss functions. They also formulate a sufficient condition under which the equivalence retains when the example loss is regularized. As a final contribution, they provide a boosting algorithm for the exponential rado loss regularized by numerous common regularizers.Authors use their unified framework to find the equivalent rado losses to a number of common loss functions. As such, the results will probably be useful within the differential privacy community. I had some troubles understanding the first part of the paper (Sec. 2) because of how the exposition was organised, namely, that the learning setting is introduced only after the zero-sum game notation.
NIPS
Title Task-based End-to-end Model Learning in Stochastic Optimization Abstract With the increasing popularity of machine learning techniques, it has become common to see prediction algorithms operating within some larger process. However, the criteria by which we train these algorithms often differ from the ultimate criteria on which we evaluate them. This paper proposes an end-to-end approach for learning probabilistic machine learning models in a manner that directly captures the ultimate task-based objective for which they will be used, within the context of stochastic programming. We present three experimental evaluations of the proposed approach: a classical inventory stock problem, a real-world electrical grid scheduling task, and a real-world energy storage arbitrage task. We show that the proposed approach can outperform both traditional modeling and purely black-box policy optimization approaches in these applications. 1 Introduction While prediction algorithms commonly operate within some larger process, the criteria by which we train these algorithms often differ from the ultimate criteria on which we evaluate them: the performance of the full “closed-loop” system on the ultimate task at hand. For instance, instead of merely classifying images in a standalone setting, one may want to use these classifications within planning and control tasks such as autonomous driving. While a typical image classification algorithm might optimize accuracy or log likelihood, in a driving task we may ultimately care more about the difference between classifying a pedestrian as a tree vs. classifying a garbage can as a tree. Similarly, when we use a probabilistic prediction algorithm to generate forecasts of upcoming electricity demand, we then want to use these forecasts to minimize the costs of a scheduling procedure that allocates generation for a power grid. As these examples suggest, instead of using a “generic loss,” we instead may want to learn a model that approximates the ultimate task-based “true loss.” This paper considers an end-to-end approach for learning probabilistic machine learning models that directly capture the objective of their ultimate task. Formally, we consider probabilistic models in the context of stochastic programming, where the goal is to minimize some expected cost over the models’ probabilistic predictions, subject to some (potentially also probabilistic) constraints. As mentioned above, it is common to approach these problems in a two-step fashion: first to fit a predictive model to observed data by minimizing some criterion such as negative log-likelihood, and then to use this model to compute or approximate the necessary expected costs in the stochastic programming setting. While this procedure can work well in many instances, it ignores the fact that the true cost of the system (the optimization objective evaluated on actual instantiations in the real world) may benefit from a model that actually attains worse overall likelihood, but makes more accurate predictions over certain manifolds of the underlying space. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. We propose to train a probabilistic model not (solely) for predictive accuracy, but so that–when it is later used within the loop of a stochastic programming procedure–it produces solutions that minimize the ultimate task-based loss. 
This formulation may seem somewhat counterintuitive, given that a “perfect” predictive model would of course also be the optimal model to use within a stochastic programming framework. However, the reality that all models do make errors illustrates that we should indeed look to a final task-based objective to determine the proper error tradeoffs within a machine learning setting. This paper proposes one way to evaluate task-based tradeoffs in a fully automated fashion, by computing derivatives through the solution to the stochastic programming problem in a manner that can improve the underlying model. We begin by presenting background material and related work in areas spanning stochastic programming, end-to-end training, and optimizing alternative loss functions. We then describe our approach within the formal context of stochastic programming, and give a generic method for propagating task loss through these problems in a manner that can update the models. We report on three experimental evaluations of the proposed approach: a classical inventory stock problem, a real-world electrical grid scheduling task, and a real-world energy storage arbitrage task. We show that the proposed approach outperforms traditional modeling and purely black-box policy optimization approaches. 2 Background and related work Stochastic programming Stochastic programming is a method for making decisions under uncertainty by modeling or optimizing objectives governed by a random process. It has applications in many domains such as energy [1], finance [2], and manufacturing [3], where the underlying probability distributions are either known or can be estimated. Common considerations include how to best model or approximate the underlying random variable, how to solve the resulting optimization problem, and how to then assess the quality of the resulting (approximate) solution [4]. In cases where the underlying probability distribution is known but the objective cannot be solved analytically, it is common to use Monte Carlo sample average approximation methods, which draw multiple iid samples from the underlying probability distribution and then use deterministic optimization methods to solve the resultant problems [5]. In cases where the underlying distribution is not known, it is common to learn or estimate some model from observed samples [6]. End-to-end training Recent years have seen a dramatic increase in the number of systems building on so-called “end-to-end” learning. Generally speaking, this term refers to systems where the end goal of the machine learning process is directly predicted from raw inputs [e.g. 7, 8]. In the context of deep learning systems, the term now traditionally refers to architectures where, for example, there is no explicit encoding of hand-tuned features on the data, but the system directly predicts what the image, text, etc. is from the raw inputs [9, 10, 11, 12, 13]. The context in which we use the term end-to-end is similar, but slightly more in line with its older usage: instead of (just) attempting to learn an output (with known and typically straightforward loss functions), we are specifically attempting to learn a model based upon an end-to-end task that the user is ultimately trying to accomplish. We feel that this concept–of describing the entire closed-loop performance of the system as evaluated on the real task at hand–is beneficial to add to the notion of end-to-end learning. 
Also highly related to our work are recent efforts in end-to-end policy learning [14], using value iteration effectively as an optimization procedure in similar networks [15], and multi-objective optimization [16, 17, 18, 19]. These lines of work fit more with the “pure” end-to-end approach we discuss later on (where models are eschewed for pure function approximation methods), but conceptually the approaches have similar motivations in modifying typically-optimized policies to address some task(s) directly. Of course, the actual methodological approaches are quite different, given our specific focus on stochastic programming as the black box of interest in our setting. Optimizing alternative loss functions There has been a great deal of work in recent years on using machine learning procedures to optimize different loss criteria than those “naturally” optimized by the algorithm. For example, Stoyanov et al. [20] and Hazan et al. [21] propose methods for optimizing loss criteria in structured prediction that are different from the inference procedure of the prediction algorithm; this work has also recently been extended to deep networks [22]. Recent work has also explored using auxiliary prediction losses to satisfy multiple objectives [23], learning dynamics models that maximize control performance in Bayesian optimization [24], and learning adaptive predictive models via differentiation through a meta-learning optimization objective [25]. The work we have found in the literature that most closely resembles our approach is the work of Bengio [26], which uses a neural network model for predicting financial prices, and then optimizes the model based on returns obtained via a hedging strategy that employs it. We view this approach–of both using a model and then tuning that model to adapt to a (differentiable) procedure–as a philosophical predecessor to our own work. In concurrent work, Elmachtoub and Grigas [27] also propose an approach for tuning model parameters given optimization results, but in the context of linear programming and outside the context of deep networks. Whereas Bengio [26] and Elmachtoub and Grigas [27] use hand-crafted (but differentiable) algorithms to approximately attain some objective given a predictive model, our approach is tightly coupled to stochastic programming, where the explicit objective is to attempt to optimize the desired task cost via an exact optimization routine, but given underlying randomness. The notions of stochasticity are thus naturally quite different in our work, but we do hope that our work can bring back the original idea of task-based model learning. (Despite Bengio [26]’s original paper being nearly 20 years old, virtually all follow-on work has focused on the financial application, and not on what we feel is the core idea of using a surrogate model within a task-driven optimization procedure.) 3 End-to-end model learning in stochastic programming We first formally define the stochastic modeling and optimization problems with which we are concerned. Let (x 2 X , y 2 Y) ⇠ D denote standard input-output pairs drawn from some (real, unknown) distribution D. We also consider actions z 2 Z that incur some expected loss LD(z) = Ex,y⇠D[f(x, y, z)]. For instance, a power systems operator may try to allocate power generators z given past electricity demand x and future electricity demand y; this allocation’s loss corresponds to the over- or under-generation penalties incurred given future demand instantiations. 
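To make this running example concrete, here is a minimal sketch of what such a task cost f(x, y, z) could look like; the asymmetric per-unit prices c_under and c_over and the name generation_cost are our own illustrative choices, not the exact penalties used in the paper's grid-scheduling experiment.

```python
import numpy as np

def generation_cost(y, z, c_under=50.0, c_over=5.0):
    """Illustrative task cost f(x, y, z): schedule generation z, then observe demand y.
    Falling short of demand is priced much higher than generating in excess."""
    shortfall = np.maximum(y - z, 0.0)   # unmet demand
    excess = np.maximum(z - y, 0.0)      # wasted generation
    return c_under * shortfall + c_over * excess
```

Because the cost is asymmetric, the decision that minimizes its expectation is not the mean of y|x but a quantile of it, which is exactly the kind of task-dependent tradeoff the training procedure below is meant to capture.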
If we knew D, then we could select optimal actions z∗_D = argmin_z L_D(z). However, in practice, the true distribution D is unknown. In this paper, we are interested in modeling the conditional distribution y|x using some parameterized model p(y|x; θ) in order to minimize the real-world cost of the policy implied by this parameterization. Specifically, we find some parameters θ to parameterize p(y|x; θ) (as in the standard statistical setting) and then determine optimal actions z∗(x; θ) (via stochastic optimization) that correspond to our observed input x and the specific choice of parameters θ in our probabilistic model. Upon observing the costs of these actions z∗(x; θ) relative to true instantiations of x and y, we update our parameterized model p(y|x; θ) accordingly, calculate the resultant new z∗(x; θ), and repeat. The goal is to find parameters θ such that the corresponding policy z∗(x; θ) optimizes the loss under the true joint distribution of x and y. Explicitly, we wish to choose θ to minimize the task loss L(θ) in the context of x, y ∼ D, i.e.
minimize_θ L(θ) = E_{x,y∼D}[f(x, y, z∗(x; θ))]. (1)
Since in reality we do not know the distribution D, we obtain z∗(x; θ) via a proxy stochastic optimization problem for a fixed instantiation of parameters θ, i.e.
z∗(x; θ) = argmin_z E_{y∼p(y|x;θ)}[f(x, y, z)]. (2)
The above setting specifies z∗(x; θ) using a simple (unconstrained) stochastic program, but in reality our decision may be subject to both probabilistic and deterministic constraints. We therefore consider more general decisions produced through a generic stochastic programming problem1
z∗(x; θ) = argmin_z E_{y∼p(y|x;θ)}[f(x, y, z)]
subject to E_{y∼p(y|x;θ)}[g_i(x, y, z)] ≤ 0, i = 1, . . . , n_ineq
h_i(z) = 0, i = 1, . . . , n_eq. (3)
1It is standard to presume in stochastic programming that equality constraints depend only on decision variables (not random variables), as non-trivial random equality constraints are typically not possible to satisfy.
In this setting, the full task loss is more complex, since it captures both the expected cost and any deviations from the constraints. We can write this, for instance, as
L(θ) = E_{x,y∼D}[f(x, y, z∗(x; θ))] + Σ_{i=1}^{n_ineq} I{E_{x,y∼D}[g_i(x, y, z∗(x; θ))] ≤ 0} + Σ_{i=1}^{n_eq} E_x[I{h_i(z∗(x; θ)) = 0}] (4)
(where I(·) is the indicator function that is zero when its constraints are satisfied and infinite otherwise). However, the basic intuition behind our approach remains the same for both the constrained and unconstrained cases: in both settings, we attempt to learn parameters of a probabilistic model not to produce strictly “accurate” predictions, but such that when we use the resultant model within a stochastic programming setting, the resulting decisions perform well under the true distribution. Actually solving this problem requires that we differentiate through the “argmin” operator z∗(x; θ) of the stochastic programming problem. This differentiation is not possible for all classes of optimization problems (the argmin operator may be discontinuous), but as we will show shortly, in many practical cases–including cases where the function and constraints are strongly convex–we can indeed efficiently compute these gradients even in the context of constrained optimization. 3.1 Discussion and alternative approaches We highlight our approach in contrast to two alternative existing methods: traditional model learning and model-free black-box policy optimization.
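Before turning to these two alternatives, here is a minimal sketch of how the proxy problem (2) can be handled in practice: draw Monte Carlo samples from p(y|x; θ) and minimize the resulting sample-average cost over z. This is a generic sample-average-approximation illustration under our own assumptions (scalar decision z; the names proxy_decision and sample_y are ours), not the paper's solver, which handles the constrained formulation (3) with sequential quadratic programming.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def proxy_decision(sample_y, cost, n_samples=1000, seed=0):
    """Sample-average approximation of eq. (2):
    z*(x; theta) ~ argmin_z (1/N) sum_s cost(y_s, z), with y_s ~ p(y | x; theta)."""
    rng = np.random.default_rng(seed)
    ys = np.array([sample_y(rng) for _ in range(n_samples)])
    res = minimize_scalar(lambda z: float(np.mean(cost(ys, z))),
                          bounds=(float(ys.min()), float(ys.max())), method="bounded")
    return res.x

# Usage with a Gaussian predictive model and the generation cost sketched earlier:
# z_star = proxy_decision(lambda rng: rng.normal(10.0, 2.0), generation_cost)
```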
In traditional machine learning approaches, it is common to use θ to minimize the (conditional) log-likelihood of observed data under the model p(y|x; θ). This method corresponds to approximately solving the optimization problem

    minimize_θ  E_{x,y∼D}[−log p(y|x; θ)].    (5)

If we then need to use the conditional distribution y|x to determine actions z within some later optimization setting, we commonly use the predictive model obtained from (5) directly. This approach has obvious advantages, in that the model-learning phase is well-justified independent of any future use in a task. However, it is also prone to poor performance in the common setting where the true distribution y|x cannot be represented within the class of distributions parameterized by θ, i.e. where the procedure suffers from model bias. Conceptually, the log-likelihood objective implicitly trades off between model error in different regions of the input/output space, but does so in a manner largely opaque to the modeler, and may ultimately not employ the correct tradeoffs for a given task.

In contrast, there is an alternative approach to solving (1) that we describe as the model-free “black-box” policy optimization approach. Here, we forgo learning any model at all of the random variable y. Instead, we attempt to learn a policy mapping directly from inputs x to actions z̄(x; θ̄) that minimize the loss L(θ̄) presented in (4) (where here θ̄ defines the form of the policy itself, not a predictive model). While such model-free methods can perform well in many settings, they are often very data-inefficient, as the policy class must have enough representational power to describe sufficiently complex policies without recourse to any underlying model.²

²This distinction is roughly analogous to the policy search vs. model-based settings in reinforcement learning. However, for the purposes of this paper, we consider much simpler stochastic programs without the multiple rounds that occur in RL, and the extension of these techniques to a full RL setting remains as future work.

Our approach offers an intermediate setting, where we do still use a surrogate model to determine an optimal decision z*(x; θ), yet we adapt this model based on the task loss instead of any model prediction accuracy. In practice, we typically want to minimize some weighted combination of log-likelihood and task loss, which can be easily accomplished given our approach.

Algorithm 1 Task Loss Optimization
1: input: D // samples from true distribution
2: initialize θ // some initial parameterization
3: for t = 1, …, T do
4:   sample (x, y) ∼ D
5:   compute z*(x; θ) via Equation (3)
6:   // step in violated constraint or objective
7:   if ∃ i s.t. g_i(x, y, z*(x; θ)) > 0 then
8:     update θ with ∇_θ g_i(x, y, z*(x; θ))
9:   else
10:    update θ with ∇_θ f(x, y, z*(x; θ))
11:  end if
12: end for

3.2 Optimizing task loss

To solve the generic optimization problem (4), we can in principle adopt a straightforward (constrained) stochastic gradient approach, as detailed in Algorithm 1. At each iteration, we solve the proxy stochastic programming problem (3) to obtain z*(x; θ), using the distribution defined by our current values of θ. Then, we compute the true loss L(θ) using the observed value of y. If any of the inequality constraints g_i in L(θ) are violated, we take a gradient step in the violated constraint; otherwise, we take a gradient step in the optimization objective f.
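To make the structure of this loop concrete, the following is a minimal sketch of Algorithm 1 for the unconstrained case, written in PyTorch. It assumes a squared task cost f(x, y, z) = (z − y)², so that the proxy argmin in (2) is simply the model's predicted mean; the network, data, and hyperparameters are illustrative stand-ins rather than the paper's actual experimental setup.

```python
# Minimal sketch of Algorithm 1 (unconstrained case), assuming a squared task
# cost f(x, y, z) = (z - y)^2 so that the proxy argmin in (2) is the predicted
# mean.  All names and data below are illustrative.
import torch

torch.manual_seed(0)

def sample_D(n=32):
    # Toy "true" distribution D: y depends nonlinearly on x, plus noise.
    x = torch.randn(n, 3)
    y = (x ** 2).sum(dim=1, keepdim=True) + 0.1 * torch.randn(n, 1)
    return x, y

# Probabilistic model p(y|x; theta): here a Gaussian with a predicted mean.
mean_net = torch.nn.Sequential(
    torch.nn.Linear(3, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(mean_net.parameters(), lr=1e-2)

def task_cost(y, z):
    # f(x, y, z): cost of acting with z when y is realized.
    return ((z - y) ** 2).mean()

for t in range(500):
    x, y = sample_D()
    # Proxy stochastic program (2): for a squared cost, argmin_z E[f] is the
    # model mean, so z*(x; theta) is the (differentiable) network output.
    z_star = mean_net(x)
    # Task loss (1): evaluate the decision on the *observed* y and update the
    # model parameters through the decision.
    loss = task_cost(y, z_star)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

With a constrained or asymmetric cost, z* would instead come from a solver, and the gradient step on violated constraints (lines 7–8 of Algorithm 1) would apply.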
We note that if any inequality constraints are probabilistic, Algorithm 1 must be adapted to employ mini-batches in order to determine whether these probabilistic constraints are satisfied. Alternatively, because even the g_i constraints are probabilistic, it is common in practice to simply move a weighted version of these constraints to the objective, i.e., we modify the objective by adding an appropriate penalty times the positive part of the constraint function, λ·g_i(x, y, z)_+, for some λ > 0. In practice, this has the effect of taking gradient steps jointly in all the violated constraints and the objective in the case that one or more inequality constraints are violated, often resulting in faster convergence. Note that we need only move stochastic constraints into the objective; deterministic constraints on the policy itself will always be satisfied by the optimizer, as they are independent of the model.

3.3 Differentiating the optimization solution to a stochastic programming problem

While the above presentation highlights the simplicity of the proposed approach, it avoids the chief technical challenge of this approach, which is computing the gradient of an objective that depends upon the argmin operation z*(x; θ). Specifically, we need to compute the term

    ∂L/∂θ = (∂L/∂z*)(∂z*/∂θ),    (6)

which involves the Jacobian ∂z*/∂θ. This is the Jacobian of the optimal solution with respect to the distribution parameters θ. Recent approaches have looked into similar argmin differentiations [28, 29], though the methodology we present here is more general and handles the stochasticity of the objective.

At a high level, we begin by writing the KKT optimality conditions of the general stochastic programming problem (3). Differentiating these equations and applying the implicit function theorem gives a set of linear equations that we can solve to obtain the necessary Jacobians (with expectations over the distribution y ∼ p(y|x; θ) denoted E_yθ, and where g is the vector of inequality constraints):

    ⎡ ∇²_z E_yθ[f(z)] + Σ_{i=1}^{n_ineq} λ_i ∇²_z E_yθ[g_i(z)]    (∇_z E_yθ[g(z)])ᵀ    Aᵀ ⎤ ⎡ ∂z/∂θ ⎤
    ⎢ diag(λ) ∇_z E_yθ[g(z)]                                       diag(E_yθ[g(z)])      0  ⎥ ⎢ ∂λ/∂θ ⎥
    ⎣ A                                                             0                    0  ⎦ ⎣ ∂ν/∂θ ⎦

        = − ⎡ ∂(∇_z E_yθ[f(z)])/∂θ + Σ_{i=1}^{n_ineq} λ_i ∂(∇_z E_yθ[g_i(z)])/∂θ ⎤
            ⎢ diag(λ) ∂(E_yθ[g(z)])/∂θ                                           ⎥
            ⎣ 0                                                                  ⎦    (7)

The terms in these equations look somewhat complex, but fundamentally, the left side gives the optimality conditions of the convex problem, and the right side gives the derivatives of the relevant functions at the achieved solution with respect to the governing parameter θ. In practice, we calculate the right-hand terms by employing sequential quadratic programming [30] to find the optimal policy z*(x; θ) for the given parameters θ, using a recently-proposed approach for fast solution of the argmin differentiation for QPs [31] to solve the necessary linear equations; we then take the derivatives at the optimum produced by this strategy. Details of this approach are described in the appendix.
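Before moving to the experiments, here is a small sketch of the argmin-differentiation idea for the special case of an equality-constrained quadratic proxy problem, where the KKT conditions reduce to a single linear system; inequality constraints require the full system (7), which the paper handles with the QP approach of [31]. All problem data below are toy placeholders.

```python
# Sketch of differentiating through an argmin, assuming an equality-constrained
# quadratic proxy
#     z*(theta) = argmin_z 0.5 z^T Q z + q(theta)^T z   s.t.  A z = b,
# whose KKT conditions form a linear system.  A differentiable linear solve
# lets gradients of a downstream task loss flow back into theta.
import torch

torch.manual_seed(0)
n, m = 5, 2
Q = torch.eye(n) * 2.0                      # strongly convex objective
A = torch.randn(m, n)
b = torch.randn(m)
theta = torch.randn(n, requires_grad=True)  # stands in for the model parameters

def q_of_theta(theta):
    # Linear cost term produced by the model; illustrative only.
    return torch.tanh(theta)

def solve_qp(theta):
    # KKT system:  [Q A^T; A 0] [z; nu] = [-q(theta); b]
    K = torch.cat(
        [torch.cat([Q, A.T], dim=1),
         torch.cat([A, torch.zeros(m, m)], dim=1)], dim=0)
    rhs = torch.cat([-q_of_theta(theta), b])
    sol = torch.linalg.solve(K, rhs)        # differentiable linear solve
    return sol[:n]                          # z*(theta)

z_star = solve_qp(theta)
y_true = torch.ones(n)                      # toy realized outcome
task_loss = ((z_star - y_true) ** 2).sum()  # stand-in for f(x, y, z*)
task_loss.backward()                        # dL/dtheta via dz*/dtheta
print(theta.grad)
```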
4 Experiments

We consider three applications of our task-based method: a synthetic inventory stock problem, a real-world energy scheduling task, and a real-world battery arbitrage task. We demonstrate that the task-based end-to-end approach can substantially improve upon other alternatives. Source code for all experiments is available at https://github.com/locuslab/e2e-model-learning.

4.1 Inventory stock problem

Problem definition To highlight the performance of the algorithm in a setting where the true underlying model is known to us, we consider a “conditional” variation of the classical inventory stock problem [4]. In this problem, a company must order some quantity z of a product to minimize costs over some stochastic demand y, whose distribution in turn is affected by some observed features x (Figure 1a). There are linear and quadratic costs on the amount of product ordered, plus different linear/quadratic costs on over-orders [z − y]_+ and under-orders [y − z]_+. The objective is given by

    f_stock(y, z) = c_0 z + ½ q_0 z² + c_b [y − z]_+ + ½ q_b ([y − z]_+)² + c_h [z − y]_+ + ½ q_h ([z − y]_+)²,    (8)

where [v]_+ ≡ max{v, 0}. For a specific choice of probability model p(y|x; θ), our proxy stochastic programming problem can then be written as

    minimize_z  E_{y∼p(y|x;θ)}[f_stock(y, z)].    (9)

To simplify the setting, we further assume that the demands are discrete, taking on values d_1, …, d_k with probabilities (conditional on x) (p_θ)_i ≡ p(y = d_i | x; θ). Thus our stochastic programming problem (9) can be written succinctly as a joint quadratic program³

    minimize_{z∈R, z_b,z_h∈R^k}  c_0 z + ½ q_0 z² + Σ_{i=1}^{k} (p_θ)_i ( c_b (z_b)_i + ½ q_b (z_b)_i² + c_h (z_h)_i + ½ q_h (z_h)_i² )
    subject to  d − z·1 ≤ z_b,  z·1 − d ≤ z_h,  z, z_h, z_b ≥ 0,    (10)

where d denotes the vector of demand values d_1, …, d_k and 1 the all-ones vector.

³This is referred to as a two-stage stochastic programming problem (though a very trivial example of one), where first-stage variables consist of the amount of product to buy before observing demand, and second-stage variables consist of how much to sell back or additionally purchase once the true demand has been revealed.

Further details of this approach are given in the appendix.
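As a concrete illustration of the stock objective and its expectation under the discrete demand model, the sketch below evaluates (8)–(9) numerically and finds the proxy decision by a grid search, a simple stand-in for solving the QP (10). All cost coefficients and demand values are illustrative, not those used in the paper's experiments.

```python
# Numerical sketch of the stock objective (8) and its expectation under the
# discrete demand model used in (10).  Parameter values are illustrative.
import numpy as np

c0, q0 = 1.0, 0.1      # ordering costs
cb, qb = 5.0, 0.5      # under-order (backorder) costs
ch, qh = 2.0, 0.2      # over-order (holding) costs

def f_stock(y, z):
    under = np.maximum(y - z, 0.0)
    over = np.maximum(z - y, 0.0)
    return (c0 * z + 0.5 * q0 * z**2
            + cb * under + 0.5 * qb * under**2
            + ch * over + 0.5 * qh * over**2)

d = np.array([1.0, 3.0, 5.0, 7.0])          # discrete demand values d_1..d_k
p_theta = np.array([0.1, 0.4, 0.4, 0.1])    # model probabilities p(y = d_i | x; theta)

def expected_cost(z):
    return np.sum(p_theta * f_stock(d, z))

# Brute-force proxy argmin over a grid (a stand-in for solving the QP (10)).
zs = np.linspace(0.0, 10.0, 1001)
z_star = zs[np.argmin([expected_cost(z) for z in zs])]
print("z* =", z_star, "expected cost =", expected_cost(z_star))
```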
Experimental setup We examine our algorithm under two main conditions: where the true model is linear, and where it is nonlinear. In all cases, we generate problem instances by randomly sampling some x ∈ Rⁿ and then generating p(y|x; θ) according to either p(y|x; θ) ∝ exp(Θᵀx) (linear true model) or p(y|x; θ) ∝ exp((Θᵀx)²) (nonlinear true model) for some Θ ∈ R^{n×k}. We compare the following approaches on these tasks: 1) the QP allocation based upon the true model (which performs optimally); 2) MLE approaches (with linear or nonlinear probability models) that fit a model to the data, and then compute the allocation by solving the QP; 3) pure end-to-end policy-optimizing models (using linear or nonlinear hypotheses for the policy); and 4) our task-based learning models (with linear or nonlinear probability models). In all cases, we evaluate test performance by running on 1000 random examples, and evaluate performance over 10 folds of different true θ* parameters.

Figures 2(a) and (b) show the performance of these methods given a linear true model, with linear and nonlinear model hypotheses, respectively. As expected, the linear MLE approach performs best, as the true underlying model is in the class of distributions that it can represent, and thus solving the stochastic programming problem is a very strong proxy for solving the true optimization problem under the real distribution. While the true model is also contained within the nonlinear MLE’s generic nonlinear distribution class, we see that this method requires more data to converge, and when given less data makes error tradeoffs that are ultimately not the correct tradeoffs for the task at hand; our task-based approach thus outperforms this approach. The task-based approach also substantially outperforms the policy-optimizing neural network, highlighting the fact that it is more data-efficient to run the learning process “through” a reasonable model. Note that here it does not make a difference whether we use the linear or nonlinear model in the task-based approach.

Figures 2(c) and (d) show performance in the case of a nonlinear true model, with linear and nonlinear model hypotheses, respectively. Case (c) represents the “non-realizable” case, where the true underlying distribution cannot be represented by the model hypothesis class. Here, the linear MLE, as expected, performs very poorly: it cannot capture the true underlying distribution, and thus the resultant stochastic programming solution would not be expected to perform well. The linear policy model similarly performs poorly. Importantly, the task-based approach with the linear model performs much better here: despite the fact that it still has a misspecified model, the task-based nature of the learning process lets us learn a different linear model than the MLE version, one that is particularly tuned to the distribution and loss of the task. Finally, as is also to be expected, the nonlinear models perform better than the linear models in this scenario, but again with the task-based nonlinear model outperforming the nonlinear MLE and end-to-end policy approaches.

4.2 Load forecasting and generator scheduling

We next consider a more realistic grid-scheduling task, based upon over 8 years of real electrical grid data. In this setting, a power system operator must decide how much electricity generation z ∈ R²⁴ to schedule for each hour in the next 24 hours based on some (unknown) distribution over electricity demand (Figure 1b). Given a particular realization y of demand, we impose penalties for both generation excess (γ_e) and generation shortage (γ_s), with γ_s > γ_e. We also add a quadratic regularization term, indicating a preference for generation schedules that closely match demand realizations. Finally, we impose a ramping constraint c_r restricting the change in generation between consecutive timepoints, reflecting physical limitations associated with quick changes in electricity output levels. These are reasonable proxies for the actual economic costs incurred by electrical grid operators when scheduling generation, and can be written as the stochastic programming problem

    minimize_{z∈R²⁴}  Σ_{i=1}^{24} E_{y∼p(y|x;θ)}[ γ_s [y_i − z_i]_+ + γ_e [z_i − y_i]_+ + ½ (z_i − y_i)² ]
    subject to  |z_i − z_{i−1}| ≤ c_r  ∀i,    (11)

where [v]_+ ≡ max{v, 0}. Assuming (as we will in our model) that y_i is a Gaussian random variable with mean μ_i and variance σ_i², this expectation has a closed form that can be computed via analytically integrating the Gaussian PDF.⁴

⁴Part of the philosophy behind applying this approach here is that we know the Gaussian assumption is incorrect: the true underlying load is neither Gaussian distributed nor homoskedastic. However, these assumptions are exceedingly common in practice, as they enable easy model learning and exact analytical solutions. Thus, training the (still Gaussian) system with a task-based loss retains computational tractability while still allowing us to modify the distribution’s parameters to improve actual performance on the task at hand.

We then use sequential quadratic programming (SQP) to iteratively approximate the resultant convex objective as a quadratic objective, iterate until convergence, and then compute the necessary Jacobians using the quadratic approximation at the solution, which gives the correct Hessian and gradient terms. Details are given in the appendix.
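For concreteness, a small sketch of the closed-form Gaussian expectation referenced above, using the standard partial-expectation identities for a normal random variable. The penalty coefficients and the (μ, σ) values are illustrative only and not the values used in the paper.

```python
# Closed-form per-hour expected cost in (11) under y_i ~ N(mu_i, sigma_i^2),
# using the identities
#   E[(y - z)_+] = (mu - z) * Phi((mu - z)/sigma) + sigma * phi((mu - z)/sigma)
#   E[(z - y)^2] = (z - mu)^2 + sigma^2.
# gamma_s and gamma_e are illustrative penalty values.
import numpy as np
from scipy.stats import norm

gamma_s, gamma_e = 50.0, 0.5   # shortage penalized much more than excess

def expected_hourly_cost(z, mu, sigma):
    a = (mu - z) / sigma
    e_short = (mu - z) * norm.cdf(a) + sigma * norm.pdf(a)    # E[(y - z)_+]
    e_excess = (z - mu) * norm.cdf(-a) + sigma * norm.pdf(a)  # E[(z - y)_+]
    e_sq = (z - mu) ** 2 + sigma ** 2                         # E[(z - y)^2]
    return gamma_s * e_short + gamma_e * e_excess + 0.5 * e_sq

# With asymmetric penalties, the minimizing z sits above the predicted mean.
mu, sigma = 100.0, 10.0
zs = np.linspace(80, 140, 601)
costs = [expected_hourly_cost(z, mu, sigma) for z in zs]
print("best z:", zs[int(np.argmin(costs))])
```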
To develop a predictive model, we make use of a highly-tuned load forecasting methodology. Specifically, we input the past day’s electrical load and temperature, the next day’s temperature forecast, and additional features such as non-linear functions of the temperatures, binary indicators of weekends or holidays, and yearly sinusoidal features. We then predict the electrical load over all 24 hours of the next day. We employ a 2-hidden-layer neural network for this purpose, with an additional residual connection from the inputs to the outputs initialized to the linear regression solution.

[Figure 3: architecture of the load-forecasting network, mapping the input features (past load, past and future temperature and their powers, weekday/holiday/DST indicators, and sinusoidal day-of-year features) through two 200-unit hidden layers to the 24-dimensional future load.]

Figure 4 shows the performance of the three models. (The pure policy-optimizing network is not shown, as it could not sufficiently learn the ramp constraints; we could not obtain good performance for the policy optimizer even when ignoring this infeasibility.) As expected, the RMSE model performs best with respect to the RMSE of its predictions (its objective). However, the task-based model substantially outperforms the RMSE model when evaluated on task loss, the actual objective that the system operator cares about: specifically, we improve upon the performance of the traditional stochastic programming method by 38.6%. The cost-weighted RMSE’s performance is extremely variable, and overall, the task net improves upon this method by 8.6%.⁵

⁵It is worth noting that a cost-weighted RMSE approach is only possible when direct costs can be assigned independently to each decision point, i.e. when costs do not depend on multiple decision points (as in this experiment). Our task-based method, however, accommodates the (typical) more general setting.

4.3 Price forecasting and battery storage

Finally, we consider a battery arbitrage task, based upon 6 years of real electrical grid data. Here, a grid-scale battery must operate over a 24 hour period based on some (unknown) distribution over future electricity prices (Figure 1c). For each hour, the operator must decide how much to charge (z_in ∈ R²⁴) or discharge (z_out ∈ R²⁴) the battery, thus inducing a particular state of charge in the battery (z_state ∈ R²⁴). Given a particular realization y of prices, the operator optimizes over: 1) profits, 2) flexibility to participate in other markets, by keeping the battery near half its capacity B (with weight λ), and 3) battery health, by discouraging rapid charging/discharging (with weight ε, ε < λ). The battery also has a charging efficiency (γ_eff), limits on speed of charge (c_in) and discharge (c_out), and begins at half charge. This can be written as the stochastic programming problem

    minimize_{z_in, z_out, z_state ∈ R²⁴}  E_{y∼p(y|x;θ)}[ Σ_{i=1}^{24} y_i (z_in − z_out)_i + λ‖z_state − B/2‖² + ε‖z_in‖² + ε‖z_out‖² ]
    subject to  z_{state,i+1} = z_{state,i} − z_{out,i} + γ_eff z_{in,i}  ∀i,  z_{state,1} = B/2,
                0 ≤ z_in ≤ c_in,  0 ≤ z_out ≤ c_out,  0 ≤ z_state ≤ B.    (12)

Assuming (as we will in our model) that y_i is a random variable with mean μ_i, this expectation has a closed form that depends only on the mean. Further details are given in the appendix.
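Since the expectation in (12) depends only on the mean price, the proxy problem for a fixed prediction is an ordinary QP. The sketch below writes it out with cvxpy for a toy mean-price profile; all parameter values are illustrative, and this is purely for exposition rather than the paper's implementation.

```python
# Sketch of the proxy battery problem (12) as a deterministic QP in the
# predicted mean prices mu.  B, efficiency, limits, lambda, and epsilon are
# illustrative values.
import cvxpy as cp
import numpy as np

T = 24
mu = 30 + 10 * np.sin(np.linspace(0, 2 * np.pi, T))  # toy predicted mean prices
B, gamma_eff = 1.0, 0.9
c_in, c_out = 0.5, 0.5
lam, eps = 0.1, 0.05                                  # eps < lam

z_in = cp.Variable(T)
z_out = cp.Variable(T)
z_state = cp.Variable(T)

cost = (mu @ (z_in - z_out)
        + lam * cp.sum_squares(z_state - B / 2)
        + eps * cp.sum_squares(z_in)
        + eps * cp.sum_squares(z_out))

constraints = [
    z_state[0] == B / 2,
    z_state[1:] == z_state[:-1] - z_out[:-1] + gamma_eff * z_in[:-1],
    z_in >= 0, z_in <= c_in,
    z_out >= 0, z_out <= c_out,
    z_state >= 0, z_state <= B,
]

cp.Problem(cp.Minimize(cost), constraints).solve()
print("charge schedule:", np.round(z_in.value, 3))
```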
To develop a predictive model for the mean, we use an architecture similar to that described in Section 4.2. In this case, we input the past day’s prices and temperature, the next day’s load forecasts and temperature forecasts, and additional features such as non-linear functions of the temperatures and temporal features similar to those in Section 4.2. We again train the model to minimize the mean squared error between the model’s predictions and the actual prices (giving the mean prediction μ_i), using about 5 years of data to train the model and 1 subsequent year for testing. Using the mean predictions of this base model, we then solve the storage scheduling problem (12), again learning network parameters by minimizing the task loss. We compare against a traditional stochastic programming model that minimizes just the RMSE.

Table 1 shows the performance of the two models. As energy prices are difficult to predict due to numerous outliers and price spikes, the models in this case are not as well-tuned as in our load forecasting experiment; thus, their performance is relatively variable. Even then, in all cases, our task-based model demonstrates better average performance than the RMSE model when evaluated on task loss, the objective most important to the battery operator (although the improvements are not statistically significant). More interestingly, our task-based method shows less (and in some cases, far less) variability in performance than the RMSE-minimizing method. Qualitatively, our task-based method hedges against perverse events such as price spikes that could substantially affect the performance of a battery charging schedule. The task-based method thus yields more reliable performance than a pure RMSE-minimizing method when the models are inaccurate due to a high level of stochasticity in the prediction task.

5 Conclusions and future work

This paper proposes an end-to-end approach for learning machine learning models that will be used in the loop of a larger process. Specifically, we consider training probabilistic models in the context of stochastic programming to directly capture a task-based objective. Preliminary experiments indicate that our task-based learning model substantially outperforms MLE and policy-optimizing approaches in all but the (rare) case that the MLE model “perfectly” characterizes the underlying distribution. Our method also achieves a 38.6% performance improvement over a highly-optimized real-world stochastic programming algorithm for scheduling electricity generation based on predicted load. In the case of energy price prediction, where there is a high degree of inherent stochasticity in the problem, our method demonstrates more reliable task performance than a traditional predictive method. The task-based approach thus demonstrates promise in optimizing in-the-loop predictions. Future work includes an extension of our approach to stochastic learning models with multiple rounds, and further to model predictive control and full reinforcement learning settings.

Acknowledgments

This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE1252522, and by the Department of Energy Computational Science Graduate Fellowship.
1. What is the main contribution of the paper regarding predictive models?
2. What are the strengths of the paper, particularly in terms of motivation and technical soundness?
3. What are the potential issues or limitations of the proposed approach?
4. How does the reviewer assess the novelty and relevance of the paper's content?
5. Are there any suggestions for additional references or improvements to the paper?
Review
The paper proposes an approach to train predictive models whose performance criterion is not a classic likelihood objective, but instead performance on some task external to the model. In order to achieve this, the model parameters are optimized so as to minimize the loss on the external task, which in turn may involve a sub-optimization problem that depends on the model parameters. Synthetic and real-data experiments are presented that clearly illustrate the usefulness of the proposed approach. The introduction is very well-motivated and the exposition is generally clear. The paper is technically sound and builds on sound foundations - I see no obvious flaws. The paper contains good motivation about why the proposed approach is necessary. I see this work as a worthwhile and well-motivated application of existing technical contributions. The important technical piece that makes this approach possible, namely differentiation through an argmin, is already presented in [Amos 2016]. While building on existing results, this work applies them in a very relevant and well-motivated formulation of task-based learning, and I believe it would be of interest to the machine learning community. A proposed benefit, but also a potential issue, of the approach is that the model is not independent of the task. Is it possible to characterize how much the model may be overfitting to the specific task and whether it generalizes well to a different (potentially only slightly different) task? It would be good to reference related work on meta-learning, "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks" by Finn et al., as that is another example of differentiation through an optimization procedure.
1. What is the focus of the paper, and what are the authors' main ideas?
2. What are the strengths of the proposed method, particularly in contrast to traditional approaches?
3. What is the major concern regarding the method's lack of convergence guarantees?
4. How does the reviewer assess the importance of theoretical justification for the method's solution?
5. What are some minor suggestions for improving the paper, such as clarifying certain arguments or expanding the scope of the method's applications?
Review
Review --Brief summary of the paper: The paper proposes a learning method for solving two-stage stochastic programming problems that involve minimizing f(x,y,z) w.r.t. z. The main idea of the paper is to learn a predictive model p(y|x;theta) such that the task's objective function f is directly optimized. In contrast, traditional approaches learn p(y|x;theta) to minimize a prediction error without considering f. The main technical challenge in the paper is to solve a sub-optimization problem involving an argmin w.r.t. z, and the proposed method can do so efficiently by assuming that the optimization problem is convex in z. The method is experimentally evaluated on two problems and is shown to outperform traditional methods.
--Major comments: The idea of adopting end-to-end learning to solve two-stage stochastic programming is interesting. However, I have a major concern about the proposed method: the lack of convergence guarantees. Since the optimization problem is assumed to be convex in z, the obtained solution z*(x;theta) would be the "true" optimum if data were drawn from the true distribution p(x,y). However, a solution obtained using the predictive model p(y|x;theta) is unlikely to be truly optimal unless p(y|x;theta) is the true conditional distribution p(y|x). (This issue is commonly known as model bias in the context of model-based reinforcement learning, which usually involves non-convex objectives.) Since the proposed method does not theoretically guarantee that p(y|x;theta) converges to p(y|x), even when the model hypothesis is correct, it seems likely that even for a convex optimization problem the method may only find a sub-optimal solution. For this reason, I think that having convergence guarantees or error bounds, either for the predictive model or for the obtained solution itself, is very important to theoretically justify the method and would be a significant contribution to the paper.
--Questions: 1) It is not clear why Algorithm 1 requires mini-batch training, since Line 7 of the algorithm only checks the constraint for a single sample. 2) In the first experiment, why does the performance of the end-to-end policy optimization method depend on the model hypothesis when it does not rely on a predictive model?
--Minor suggestions: 1) In line 154 the paper argues that the model-free approach requires a rich policy class and is data-inefficient. However, the model-based approach also requires a rich model class. Moreover, the model-based approach can suffer from model bias, while the model-free approach cannot. 2) The applicability of the proposed method is quite limited. As mentioned in the paper, solving a sub-optimization problem with an argmin is not trivial, and the convexity assumption helps in this regard. However, practical decision-making problems may involve non-convex or unknown objective functions. A variant of the proposed method that is applicable to these tasks would make the method more appealing. 3) The last term of Eq.(4) should have an expectation over the density of x.
--Comments after author's response: I feel more positive about the paper after reading the author's response. Now I think that the proposed method is an important contribution to the field, and I will increase my score. However, I am still not convinced that the proposed method will be useful outside domains with convex objectives without empirical evidence.
NIPS
Title Task-based End-to-end Model Learning in Stochastic Optimization Abstract With the increasing popularity of machine learning techniques, it has become common to see prediction algorithms operating within some larger process. However, the criteria by which we train these algorithms often differ from the ultimate criteria on which we evaluate them. This paper proposes an end-to-end approach for learning probabilistic machine learning models in a manner that directly captures the ultimate task-based objective for which they will be used, within the context of stochastic programming. We present three experimental evaluations of the proposed approach: a classical inventory stock problem, a real-world electrical grid scheduling task, and a real-world energy storage arbitrage task. We show that the proposed approach can outperform both traditional modeling and purely black-box policy optimization approaches in these applications. 1 Introduction While prediction algorithms commonly operate within some larger process, the criteria by which we train these algorithms often differ from the ultimate criteria on which we evaluate them: the performance of the full “closed-loop” system on the ultimate task at hand. For instance, instead of merely classifying images in a standalone setting, one may want to use these classifications within planning and control tasks such as autonomous driving. While a typical image classification algorithm might optimize accuracy or log likelihood, in a driving task we may ultimately care more about the difference between classifying a pedestrian as a tree vs. classifying a garbage can as a tree. Similarly, when we use a probabilistic prediction algorithm to generate forecasts of upcoming electricity demand, we then want to use these forecasts to minimize the costs of a scheduling procedure that allocates generation for a power grid. As these examples suggest, instead of using a “generic loss,” we instead may want to learn a model that approximates the ultimate task-based “true loss.” This paper considers an end-to-end approach for learning probabilistic machine learning models that directly capture the objective of their ultimate task. Formally, we consider probabilistic models in the context of stochastic programming, where the goal is to minimize some expected cost over the models’ probabilistic predictions, subject to some (potentially also probabilistic) constraints. As mentioned above, it is common to approach these problems in a two-step fashion: first to fit a predictive model to observed data by minimizing some criterion such as negative log-likelihood, and then to use this model to compute or approximate the necessary expected costs in the stochastic programming setting. While this procedure can work well in many instances, it ignores the fact that the true cost of the system (the optimization objective evaluated on actual instantiations in the real world) may benefit from a model that actually attains worse overall likelihood, but makes more accurate predictions over certain manifolds of the underlying space. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. We propose to train a probabilistic model not (solely) for predictive accuracy, but so that–when it is later used within the loop of a stochastic programming procedure–it produces solutions that minimize the ultimate task-based loss. 
This formulation may seem somewhat counterintuitive, given that a “perfect” predictive model would of course also be the optimal model to use within a stochastic programming framework. However, the reality that all models do make errors illustrates that we should indeed look to a final task-based objective to determine the proper error tradeoffs within a machine learning setting. This paper proposes one way to evaluate task-based tradeoffs in a fully automated fashion, by computing derivatives through the solution to the stochastic programming problem in a manner that can improve the underlying model. We begin by presenting background material and related work in areas spanning stochastic programming, end-to-end training, and optimizing alternative loss functions. We then describe our approach within the formal context of stochastic programming, and give a generic method for propagating task loss through these problems in a manner that can update the models. We report on three experimental evaluations of the proposed approach: a classical inventory stock problem, a real-world electrical grid scheduling task, and a real-world energy storage arbitrage task. We show that the proposed approach outperforms traditional modeling and purely black-box policy optimization approaches. 2 Background and related work Stochastic programming Stochastic programming is a method for making decisions under uncertainty by modeling or optimizing objectives governed by a random process. It has applications in many domains such as energy [1], finance [2], and manufacturing [3], where the underlying probability distributions are either known or can be estimated. Common considerations include how to best model or approximate the underlying random variable, how to solve the resulting optimization problem, and how to then assess the quality of the resulting (approximate) solution [4]. In cases where the underlying probability distribution is known but the objective cannot be solved analytically, it is common to use Monte Carlo sample average approximation methods, which draw multiple iid samples from the underlying probability distribution and then use deterministic optimization methods to solve the resultant problems [5]. In cases where the underlying distribution is not known, it is common to learn or estimate some model from observed samples [6]. End-to-end training Recent years have seen a dramatic increase in the number of systems building on so-called “end-to-end” learning. Generally speaking, this term refers to systems where the end goal of the machine learning process is directly predicted from raw inputs [e.g. 7, 8]. In the context of deep learning systems, the term now traditionally refers to architectures where, for example, there is no explicit encoding of hand-tuned features on the data, but the system directly predicts what the image, text, etc. is from the raw inputs [9, 10, 11, 12, 13]. The context in which we use the term end-to-end is similar, but slightly more in line with its older usage: instead of (just) attempting to learn an output (with known and typically straightforward loss functions), we are specifically attempting to learn a model based upon an end-to-end task that the user is ultimately trying to accomplish. We feel that this concept–of describing the entire closed-loop performance of the system as evaluated on the real task at hand–is beneficial to add to the notion of end-to-end learning. 
Also highly related to our work are recent efforts in end-to-end policy learning [14], using value iteration effectively as an optimization procedure in similar networks [15], and multi-objective optimization [16, 17, 18, 19]. These lines of work fit more with the “pure” end-to-end approach we discuss later on (where models are eschewed for pure function approximation methods), but conceptually the approaches have similar motivations in modifying typically-optimized policies to address some task(s) directly. Of course, the actual methodological approaches are quite different, given our specific focus on stochastic programming as the black box of interest in our setting. Optimizing alternative loss functions There has been a great deal of work in recent years on using machine learning procedures to optimize different loss criteria than those “naturally” optimized by the algorithm. For example, Stoyanov et al. [20] and Hazan et al. [21] propose methods for optimizing loss criteria in structured prediction that are different from the inference procedure of the prediction algorithm; this work has also recently been extended to deep networks [22]. Recent work has also explored using auxiliary prediction losses to satisfy multiple objectives [23], learning dynamics models that maximize control performance in Bayesian optimization [24], and learning adaptive predictive models via differentiation through a meta-learning optimization objective [25]. The work we have found in the literature that most closely resembles our approach is the work of Bengio [26], which uses a neural network model for predicting financial prices, and then optimizes the model based on returns obtained via a hedging strategy that employs it. We view this approach–of both using a model and then tuning that model to adapt to a (differentiable) procedure–as a philosophical predecessor to our own work. In concurrent work, Elmachtoub and Grigas [27] also propose an approach for tuning model parameters given optimization results, but in the context of linear programming and outside the context of deep networks. Whereas Bengio [26] and Elmachtoub and Grigas [27] use hand-crafted (but differentiable) algorithms to approximately attain some objective given a predictive model, our approach is tightly coupled to stochastic programming, where the explicit objective is to attempt to optimize the desired task cost via an exact optimization routine, but given underlying randomness. The notions of stochasticity are thus naturally quite different in our work, but we do hope that our work can bring back the original idea of task-based model learning. (Despite Bengio [26]’s original paper being nearly 20 years old, virtually all follow-on work has focused on the financial application, and not on what we feel is the core idea of using a surrogate model within a task-driven optimization procedure.) 3 End-to-end model learning in stochastic programming We first formally define the stochastic modeling and optimization problems with which we are concerned. Let (x 2 X , y 2 Y) ⇠ D denote standard input-output pairs drawn from some (real, unknown) distribution D. We also consider actions z 2 Z that incur some expected loss LD(z) = Ex,y⇠D[f(x, y, z)]. For instance, a power systems operator may try to allocate power generators z given past electricity demand x and future electricity demand y; this allocation’s loss corresponds to the over- or under-generation penalties incurred given future demand instantiations. 
If we knew D, then we could select optimal actions z?D = argminz LD(z). However, in practice, the true distribution D is unknown. In this paper, we are interested in modeling the conditional distribution y|x using some parameterized model p(y|x; ✓) in order to minimize the real-world cost of the policy implied by this parameterization. Specifically, we find some parameters ✓ to parameterize p(y|x; ✓) (as in the standard statistical setting) and then determine optimal actions z?(x; ✓) (via stochastic optimization) that correspond to our observed input x and the specific choice of parameters ✓ in our probabilistic model. Upon observing the costs of these actions z?(x; ✓) relative to true instantiations of x and y, we update our parameterized model p(y|x; ✓) accordingly, calculate the resultant new z?(x; ✓), and repeat. The goal is to find parameters ✓ such that the corresponding policy z ? (x; ✓) optimizes the loss under the true joint distribution of x and y. Explicitly, we wish to choose ✓ to minimize the task loss L(✓) in the context of x, y ⇠ D, i.e. minimize ✓ L(✓) = E x,y⇠D[f(x, y, z ? (x; ✓))]. (1) Since in reality we do not know the distribution D, we obtain z?(x; ✓) via a proxy stochastic optimization problem for a fixed instantiation of parameters ✓, i.e. z ? (x; ✓) = argmin z E y⇠p(y|x;✓)[f(x, y, z)]. (2) The above setting specifies z?(x; ✓) using a simple (unconstrained) stochastic program, but in reality our decision may be subject to both probabilistic and deterministic constraints. We therefore consider more general decisions produced through a generic stochastic programming problem1 z ? (x; ✓) = argmin z E y⇠p(y|x;✓)[f(x, y, z)] subject to E y⇠p(y|x;✓)[gi(x, y, z)] 0, i = 1, . . . , nineq h i (z) = 0, i = 1, . . . , n eq . (3) 1It is standard to presume in stochastic programming that equality constraints depend only on decision variables (not random variables), as non-trivial random equality constraints are typically not possible to satisfy. In this setting, the full task loss is more complex, since it captures both the expected cost and any deviations from the constraints. We can write this, for instance, as L(✓) = E x,y⇠D[f(x, y, z ?(x; ✓))]+ nineqX i=1 I{E x,y⇠D[gi(x, y, z ?(x; ✓))] 0}+ neqX i=1 E x [I{h i (z?(x; ✓)) = 0}] (4) (where I(·) is the indicator function that is zero when its constraints are satisfied and infinite otherwise). However, the basic intuition behind our approach remains the same for both the constrained and unconstrained cases: in both settings, we attempt to learn parameters of a probabilistic model not to produce strictly “accurate” predictions, but such that when we use the resultant model within a stochastic programming setting, the resulting decisions perform well under the true distribution. Actually solving this problem requires that we differentiate through the “argmin” operator z?(x; ✓) of the stochastic programming problem. This differentiation is not possible for all classes of optimization problems (the argmin operator may be discontinuous), but as we will show shortly, in many practical cases–including cases where the function and constraints are strongly convex–we can indeed efficiently compute these gradients even in the context of constrained optimization. 3.1 Discussion and alternative approaches We highlight our approach in contrast to two alternative existing methods: traditional model learning and model-free black-box policy optimization. 
In traditional machine learning approaches, it is common to use ✓ to minimize the (conditional) log-likelihood of observed data under the model p(y|x; ✓). This method corresponds to approximately solving the optimization problem minimize ✓ E x,y⇠D [ log p(y|x; ✓)] . (5) If we then need to use the conditional distribution y|x to determine actions z within some later optimization setting, we commonly use the predictive model obtained from (5) directly. This approach has obvious advantages, in that the model-learning phase is well-justified independent of any future use in a task. However, it is also prone to poor performance in the common setting where the true distribution y|x cannot be represented within the class of distributions parameterized by ✓, i.e. where the procedure suffers from model bias. Conceptually, the log-likelihood objective implicitly trades off between model error in different regions of the input/output space, but does so in a manner largely opaque to the modeler, and may ultimately not employ the correct tradeoffs for a given task. In contrast, there is an alternative approach to solving (1) that we describe as the model-free “black-box” policy optimization approach. Here, we forgo learning any model at all of the random variable y. Instead, we attempt to learn a policy mapping directly from inputs x to actions z ? (x; ¯ ✓) that minimize the loss L(¯✓) presented in (4) (where here ¯✓ defines the form of the policy itself, not a predictive model). While such model-free methods can perform well in many settings, they are often very data-inefficient, as the policy class must have enough representational power to describe sufficiently complex policies without recourse to any underlying model.2 Algorithm 1 Task Loss Optimization 1: input: D // samples from true distribution 2: initialize ✓ // some initial parameterization 3: for t = 1, . . . , T do 4: sample (x, y) ⇠ D 5: compute z?(x; ✓) via Equation (3) 6: // step in violated constraint or objective 7: if 9i s.t. g i (x, y, z ? (x; ✓)) > 0 then 8: update ✓ with r ✓ g i (x, y, z ? (x; ✓)) 9: else 10: update ✓ with r ✓ f(x, y, z ? (x; ✓)) 11: end if 12: end for Our approach offers an intermediate setting, where we do still use a surrogate model to determine an optimal decision z?(x; ✓), yet we adapt this model based on the task loss instead of any model prediction accuracy. In practice, we typically want to minimize some weighted combination of log-likelihood and task loss, which can be easily accomplished given our approach. 3.2 Optimizing task loss To solve the generic optimization problem (4), we can in principle adopt a straightforward (constrained) stochastic gradient approach, as detailed in Algorithm 1. At each iteration, we 2This distinction is roughly analogous to the policy search vs. model-based settings in reinforcement learning. However, for the purposes of this paper, we consider much simpler stochastic programs without the multiple rounds that occur in RL, and the extension of these techniques to a full RL setting remains as future work. solve the proxy stochastic programming problem (3) to obtain z?(x, ✓), using the distribution defined by our current values of ✓. Then, we compute the true loss L(✓) using the observed value of y. If any of the inequality constraints g i in L(✓) are violated, we take a gradient step in the violated constraint; otherwise, we take a gradient step in the optimization objective f . 
We note that if any inequality constraints are probabilistic, Algorithm 1 must be adapted to employ mini-batches in order to determine whether these probabilistic constraints are satisfied. Alternatively, because even the g i constraints are probabilistic, it is common in practice to simply move a weighted version of these constraints to the objective, i.e., we modify the objective by adding some appropriate penalty times the positive part of the function, g i (x, y, z)+, for some > 0. In practice, this has the effect of taking gradient steps jointly in all the violated constraints and the objective in the case that one or more inequality constraints are violated, often resulting in faster convergence. Note that we need only move stochastic constraints into the objective; deterministic constraints on the policy itself will always be satisfied by the optimizer, as they are independent of the model. 3.3 Differentiating the optimization solution to a stochastic programming problem While the above presentation highlights the simplicity of the proposed approach, it avoids the issue of chief technical challenge to this approach, which is computing the gradient of an objective that depends upon the argmin operation z?(x; ✓). Specifically, we need to compute the term @L @✓ = @L @z ? @z ? @✓ (6) which involves the Jacobian @z ? @✓ . This is the Jacobian of the optimal solution with respect to the distribution parameters ✓. Recent approaches have looked into similar argmin differentiations [28, 29], though the methodology we present here is more general and handles the stochasticity of the objective. At a high level, we begin by writing the KKT optimality conditions of the general stochastic programming problem (3). Differentiating these equations and applying the implicit function theorem gives a set of linear equations that we can solve to obtain the necessary Jacobians (with expectations over the distribution y ⇠ p(y|x; ✓) denoted E y✓ , and where g is the vector of inequality constraints)2 64 r2 z E y✓f(z) + nineqX i=1 i r2 z E y✓gi(z) (rzEy✓g(z)) T A T diag( ) (r z E y✓g(z)) diag(Ey✓g(z)) 0 A 0 0 3 75 2 64 @z @✓ @ @✓ @⌫ @✓ 3 75 = 2 64 @rzEy✓ f(z) @✓ + @ Pnineq i=1 irzEy✓ gi(z) @✓ diag( ) @Ey✓ g(z) @✓ 0 3 75 . (7) The terms in these equations look somewhat complex, but fundamentally, the left side gives the optimality conditions of the convex problem, and the right side gives the derivatives of the relevant functions at the achieved solution with respect to the governing parameter ✓. In practice, we calculate the right-hand terms by employing sequential quadratic programming [30] to find the optimal policy z ? (x; ✓) for the given parameters ✓, using a recently-proposed approach for fast solution of the argmin differentiation for QPs [31] to solve the necessary linear equations; we then take the derivatives at the optimum produced by this strategy. Details of this approach are described in the appendix. 4 Experiments We consider three applications of our task-based method: a synthetic inventory stock problem, a real-world energy scheduling task, and a real-world battery arbitrage task. We demonstrate that the task-based end-to-end approach can substantially improve upon other alternatives. Source code for all experiments is available at https://github.com/locuslab/e2e-model-learning. 
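As a rough illustration of the training scheme described above, the following Python sketch runs the task-loss loop of Algorithm 1 for an unconstrained quadratic proxy problem, where the argmin has a closed form and can be differentiated through a linear solve. This is only the simplest special case of the KKT-based differentiation in Eq. (7); the quadratic cost, the linear model, and the synthetic data are invented placeholders rather than the paper's setup.

```python
import torch

# Minimal sketch of task-loss training for an unconstrained quadratic proxy problem:
#   z*(x; theta) = argmin_z 0.5 z'Qz + q_theta(x)'z  =  -Q^{-1} q_theta(x).
# Because torch.linalg.solve is differentiable, gradients of the task loss flow
# through the argmin; with constraints, this solve would be replaced by a
# differentiable QP layer implementing Eq. (7).
torch.manual_seed(0)
Q = 2.0 * torch.eye(2)                                   # assumed strongly convex Hessian
model = torch.nn.Linear(3, 2)                            # predicts the linear cost term from x
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
W_true = torch.randn(3, 2)                               # ground-truth map for synthetic data

def z_star(x):
    q = model(x)                                         # q_theta(x), shape (batch, 2)
    return torch.linalg.solve(Q, -q.T).T                 # proxy-optimal decisions

def true_cost(y, z):                                     # stand-in task cost f(x, y, z)
    return 0.5 * (z * (z @ Q)).sum(dim=1) + (y * z).sum(dim=1)

for step in range(200):                                  # stochastic task-loss optimization
    x = torch.randn(32, 3)
    y = x @ W_true + 0.05 * torch.randn(32, 2)           # synthetic (x, y) ~ D
    loss = true_cost(y, z_star(x)).mean()                # task loss at the induced decision
    opt.zero_grad(); loss.backward(); opt.step()
```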
4.1 Inventory stock problem Problem definition To highlight the performance of the algorithm in a setting where the true underlying model is known to us, we consider a “conditional” variation of the classical inventory stock problem [4]. In this problem, a company must order some quantity z of a product to minimize costs over some stochastic demand y, whose distribution in turn is affected by some observed features x (Figure 1a). There are linear and quadratic costs on the amount of product ordered, plus different linear/quadratic costs on over-orders [z y]+ and under-orders [y z]+. The objective is given by f stock (y, z) = c0z + 1 2 q0z 2 + c b [y z]+ + 1 2 q b ([y z]+)2 + ch[z y]+ + 1 2 q h ([z y]+)2, (8) where [v]+ ⌘ max{v, 0}. For a specific choice of probability model p(y|x; ✓), our proxy stochastic programming problem can then be written as minimize z E y⇠p(y|x;✓)[fstock(y, z)]. (9) To simplify the setting, we further assume that the demands are discrete, taking on values d1, . . . , dk with probabilities (conditional on x) (p ✓ ) i ⌘ p(y = d i |x; ✓). Thus our stochastic programming problem (9) can be written succinctly as a joint quadratic program3 minimize z2R,zb,zh2Rk c0z + 1 2 q0z 2 + kX i=1 (p ✓ ) i ✓ c b (z b ) i + 1 2 q b (z b ) 2 i + c h (z h ) i + 1 2 q h (z h ) 2 i ◆ subject to d z1 z b , z1 d z h , z, z h , z b 0. (10) Further details of this approach are given in the appendix. Experimental setup We examine our algorithm under two main conditions: where the true model is linear, and where it is nonlinear. In all cases, we generate problem instances by randomly sampling some x 2 Rn and then generating p(y|x; ✓) according to either p(y|x; ✓) / exp(⇥Tx) (linear true model) or p(y|x; ✓) / exp((⇥Tx)2) (nonlinear true model) for some ⇥ 2 Rn⇥k. We compare the following approaches on these tasks: 1) the QP allocation based upon the true model (which performs optimally); 2) MLE approaches (with linear or nonlinear probability models) that fit a model to the data, and then compute the allocation by solving the QP; 3) pure end-to-end policy-optimizing models (using linear or nonlinear hypotheses for the policy); and 4) our task-based learning models (with linear or nonlinear probability models). In all cases, we evaluate test performance by running on 1000 random examples, and evaluate performance over 10 folds of different true ✓? parameters. Figures 2(a) and (b) show the performance of these methods given a linear true model, with linear and nonlinear model hypotheses, respectively. As expected, the linear MLE approach performs best, as the true underlying model is in the class of distributions that it can represent and thus solving the stochastic programming problem is a very strong proxy for solving the true optimization problem under the real distribution. While the true model is also contained within the nonlinear MLE’s generic nonlinear distribution class, we see that this method requires more data to converge, and when given less data makes error tradeoffs that are ultimately not the correct tradeoffs for the task at hand; our task-based approach thus outperforms this approach. The task-based approach also substantially outperforms the policy-optimizing neural network, highlighting the fact that it is more data-efficient to run the learning process “through” a reasonable model. Note that here it does not make a difference whether we use the linear or nonlinear model in the task-based approach. 
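For concreteness, a minimal sketch of the proxy inventory problem (9) for a fixed feature vector is given below. It exploits the fact that, for discrete demands, the expected cost is a one-dimensional convex function of the order quantity z, rather than forming the joint QP (10) used in the paper; the cost coefficients, demand grid, and model probabilities are illustrative placeholders.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Expected f_stock over discrete demands d_i with model probabilities p_theta_i,
# minimized directly over the scalar order quantity z.
c0, q0, cb, qb, ch, qh = 1.0, 0.1, 5.0, 0.5, 2.0, 0.2
d = np.linspace(0.0, 10.0, 11)                                  # discrete demand levels
p_theta = np.exp(-0.5 * (d - 6.0) ** 2)
p_theta /= p_theta.sum()                                        # stand-in for p(y = d_i | x; theta)

def expected_stock_cost(z):
    under = np.maximum(d - z, 0.0)                              # [y - z]_+
    over = np.maximum(z - d, 0.0)                               # [z - y]_+
    per_demand = cb * under + 0.5 * qb * under**2 + ch * over + 0.5 * qh * over**2
    return c0 * z + 0.5 * q0 * z**2 + p_theta @ per_demand

z_opt = minimize_scalar(expected_stock_cost, bounds=(0.0, d.max()), method="bounded").x
print(f"proxy-optimal order quantity: {z_opt:.3f}")
```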
Figures 2(c) and (d) show performance in the case of a nonlinear true model, with linear and nonlinear model hypotheses, respectively. Case (c) represents the “non-realizable” case, where the true underlying distribution cannot be represented by the model hypothesis class. Here, the linear MLE, as expected, performs very poorly: it cannot capture the true underlying distribution, and thus the resultant stochastic programming solution would not be expected to perform well. The linear policy model similarly performs poorly. Importantly, the task-based approach with the linear model performs much better here: despite the fact that it still has a misspecified model, the task-based nature of the learning process lets us learn a different linear model than the MLE version, which is 3This is referred to as a two-stage stochastic programming problem (though a very trivial example of one), where first stage variables consist of the amount of product to buy before observing demand, and second-stage variables consist of how much to sell back or additionally purchase once the true demand has been revealed. particularly tuned to the distribution and loss of the task. Finally, also as to be expected, the non-linear models perform better than the linear models in this scenario, but again with the task-based non-linear model outperforming the nonlinear MLE and end-to-end policy approaches. 4.2 Load forecasting and generator scheduling We next consider a more realistic grid-scheduling task, based upon over 8 years of real electrical grid data. In this setting, a power system operator must decide how much electricity generation z 2 R24 to schedule for each hour in the next 24 hours based on some (unknown) distribution over electricity demand (Figure 1b). Given a particular realization y of demand, we impose penalties for both generation excess ( e ) and generation shortage ( s ), with s e . We also add a quadratic regularization term, indicating a preference for generation schedules that closely match demand realizations. Finally, we impose a ramping constraint c r restricting the change in generation between consecutive timepoints, reflecting physical limitations associated with quick changes in electricity output levels. These are reasonable proxies for the actual economic costs incurred by electrical grid operators when scheduling generation, and can be written as the stochastic programming problem minimize z2R24 24X i=1 E y⇠p(y|x;✓) s [y i z i ]+ + e[zi yi]+ + 1 2 (z i y i ) 2 subject to |z i z i 1| cr 8i, (11) where [v]+ ⌘ max{v, 0}. Assuming (as we will in our model), that yi is a Gaussian random variable with mean µ i and variance 2 i , then this expectation has a closed form that can be computed via analytically integrating the Gaussian PDF.4 We then use sequential quadratic programming (SQP) to iteratively approximate the resultant convex objective as a quadratic objective, iterate until convergence, and then compute the necessary Jacobians using the quadratic approximation at the solution, which gives the correct Hessian and gradient terms. Details are given in the appendix. To develop a predictive model, we make use of a highly-tuned load forecasting methodology. Specifically, we input the past day’s electrical load and temperature, the next day’s temperature forecast, and additional features such as non-linear functions of the temperatures, binary indicators of weekends or holidays, and yearly sinusoidal features. 
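To make the Gaussian closed form mentioned above concrete, the following sketch evaluates the expected cost of problem (11) for a candidate generation schedule; the shortage/excess penalties and the mean/variance forecasts are illustrative values, not the paper's settings.

```python
import numpy as np
from scipy.stats import norm

# Closed-form expected cost of problem (11) when y_i ~ N(mu_i, sigma_i^2).
gamma_s, gamma_e = 50.0, 0.5                                     # shortage/excess penalties

def expected_sched_cost(z, mu, sigma):
    a = (mu - z) / sigma
    e_short = sigma * norm.pdf(a) + (mu - z) * norm.cdf(a)       # E[y - z]_+
    e_excess = sigma * norm.pdf(a) - (mu - z) * norm.cdf(-a)     # E[z - y]_+
    e_quad = 0.5 * ((z - mu) ** 2 + sigma ** 2)                  # (1/2) E[(z - y)^2]
    return np.sum(gamma_s * e_short + gamma_e * e_excess + e_quad)

mu, sigma = np.full(24, 100.0), np.full(24, 5.0)                 # model's per-hour forecasts
print(expected_sched_cost(np.full(24, 102.0), mu, sigma))
```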
We then predict the electrical load over all 24 4 Part of the philosophy behind applying this approach here is that we know the Gaussian assumption is incorrect: the true underlying load is neither Gaussian distributed nor homoskedastic. However, these assumptions are exceedingly common in practice, as they enable easy model learning and exact analytical solutions. Thus, training the (still Gaussian) system with a task-based loss retains computational tractability while still allowing us to modify the distribution’s parameters to improve actual performance on the task at hand. hours of the next day. We employ a 2-hidden-layer neural network for this purpose, with an additional residual connection from the inputs to the outputs initialized to the linear regression solution. ! ∈ ℝ$ 200 % ∈ ℝ&' Past Load Past Temp (Past Temp)2 Future Temp (Future Temp)2 (Future Temp)3 ((Weekday) ((Holiday) ((DST) sin(2-.× DOY) cos(2-× DOY) Future Load 200 pure policy-optimizing network is not shown, as it could not sufficiently learn the ramp constraints. We could not obtain good performance for the policy optimizer even ignoring this infeasibility.) Figure 4 shows the performance of the three models. As expected, the RMSE model performs best with respect to the RMSE of its predictions (its objective). However, the task-based model substantially outperforms the RMSE model when evaluated on task loss, the actual objective that the system operator cares about: specifically, we improve upon the performance of the traditional stochastic programming method by 38.6%. The cost-weighted RMSE’s performance is extremely variable, and overall, the task net improves upon this method by 8.6%. 4.3 Price forecasting and battery storage Finally, we consider a battery arbitrage task, based upon 6 years of real electrical grid data. Here, a grid-scale battery must operate over a 24 hour period based on some (unknown) distribution over future electricity prices (Figure 1c). For each hour, the operator must decide how much to charge (zin 2 R24) or discharge (zout 2 R24) the battery, thus inducing a particular state of charge in the battery (zstate 2 R24). Given a particular realization y of prices, the operator optimizes over: 1) profits, 2) flexibility to participate in other markets, by keeping the battery near half its capacity B (with weight ), and 3) battery health, by discouraging rapid charging/discharging (with weight ✏, 5It is worth noting that a cost-weighted RMSE approach is only possible when direct costs can be assigned independently to each decision point, i.e. when costs do not depend on multiple decision points (as in this experiment). Our task-based method, however, accommodates the (typical) more general setting. ✏ < ). The battery also has a charging efficiency ( eff), limits on speed of charge (cin) and discharge (cout), and begins at half charge. This can be written as the stochastic programming problem minimize zin,zout,zstate2R24 E y⇠p(y|x;✓) " 24X i=1 y i (zin zout)i + zstate B 2 2 + ✏kzink2 + ✏kzoutk2 # subject to zstate,i+1 = zstate,i zout,i + effzin,i 8i, zstate,1 = B/2, 0 zin cin, 0 zout cout, 0 zstate B. (12) Assuming (as we will in our model) that y i is a random variable with mean µ i , then this expectation has a closed form that depends only on the mean. Further details are given in the appendix. To develop a predictive model for the mean, we use an architecture similar to that described in Section 4.2. 
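As a rough sketch of the deterministic problem obtained by plugging the mean price forecast into (12) (the expectation depends only on the mean, as noted above), the following uses cvxpy; the battery parameters, penalty weights, and price vector are illustrative placeholders, not the paper's settings.

```python
import cvxpy as cp
import numpy as np

# One-day battery schedule given the mean price forecast mu.
T, B, eff = 24, 1.0, 0.9
c_in, c_out, lam, eps = 0.5, 0.5, 0.1, 0.05
mu = 50.0 * np.random.rand(T)                       # predicted hourly prices from the model

z_in = cp.Variable(T, nonneg=True)                  # hourly charge
z_out = cp.Variable(T, nonneg=True)                 # hourly discharge
z_state = cp.Variable(T + 1)                        # state of charge (including initial state)
objective = (mu @ (z_in - z_out)
             + lam * cp.sum_squares(z_state[1:] - B / 2)
             + eps * cp.sum_squares(z_in) + eps * cp.sum_squares(z_out))
constraints = [z_state[0] == B / 2,
               z_state[1:] == z_state[:-1] - z_out + eff * z_in,
               z_in <= c_in, z_out <= c_out,
               z_state >= 0, z_state <= B]
cp.Problem(cp.Minimize(objective), constraints).solve()
print("optimal objective value:", objective.value)
```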
In this case, we input the past day’s prices and temperature, the next day’s load forecasts and temperature forecasts, and additional features such as non-linear functions of the temperatures and temporal features similar to those in Section 4.2. We again train the model to minimize the mean squared error between the model’s predictions and the actual prices (giving the mean prediction µ i ), using about 5 years of data to train the model and 1 subsequent year for testing. Using the mean predictions of this base model, we then solve the storage scheduling problem by solving the optimization problem (12), again learning network parameters by minimizing the task loss. We compare against a traditional stochastic programming model that minimizes just the RMSE. Table 1 shows the performance of the two models. As energy prices are difficult to predict due to numerous outliers and price spikes, the models in this case are not as well-tuned as in our load forecasting experiment; thus, their performance is relatively variable. Even then, in all cases, our task-based model demonstrates better average performance than the RMSE model when evaluated on task loss, the objective most important to the battery operator (although the improvements are not statistically significant). More interestingly, our task-based method shows less (and in some cases, far less) variability in performance than the RMSE-minimizing method. Qualitatively, our task-based method hedges against perverse events such as price spikes that could substantially affect the performance of a battery charging schedule. The task-based method thus yields more reliable performance than a pure RMSE-minimizing method in the case the models are inaccurate due to a high level of stochasticity in the prediction task. 5 Conclusions and future work This paper proposes an end-to-end approach for learning machine learning models that will be used in the loop of a larger process. Specifically, we consider training probabilistic models in the context of stochastic programming to directly capture a task-based objective. Preliminary experiments indicate that our task-based learning model substantially outperforms MLE and policy-optimizing approaches in all but the (rare) case that the MLE model “perfectly” characterizes the underlying distribution. Our method also achieves a 38.6% performance improvement over a highly-optimized real-world stochastic programming algorithm for scheduling electricity generation based on predicted load. In the case of energy price prediction, where there is a high degree of inherent stochasticity in the problem, our method demonstrates more reliable task performance than a traditional predictive method. The task-based approach thus demonstrates promise in optimizing in-the-loop predictions. Future work includes an extension of our approach to stochastic learning models with multiple rounds, and further to model predictive control and full reinforcement learning settings. Acknowledgments This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE1252522, and by the Department of Energy Computational Science Graduate Fellowship.
1. What are the sets \mathcal{X}, \mathcal{Y}, and \mathcal{Z} in the paper's problem setup?
2. Are these sets compact or unbounded in $\mathbb{R}^n$?
3. Are they finite sets?
4. How does the paper's approach relate to other end-to-end optimization schemes, such as those used in stochastic programming?
5. Can the authors provide examples or results for problems more familiar to the NIPS crowd, such as continuous control problems?
Review
Review Frankly, I didn't understand the problem setup. What are \mathcal{X, Y, Z}? Compact sets in R^n? Unbounded sets? Or perhaps finite sets? I understand that the authors describe an end-to-end optimization scheme like [29], but the problem set of stochastic programming (e.g., inventory stocking) is unfamiliar to me. The results on the load forecasting task look like they might be impressive, but again, I am unfamiliar with these problems. I'd love to see results on problems more familiar to me (and, I suspect, to the NIPS crowd in general), e.g., continuous control problems.
NIPS
Title Two Time-scale Off-Policy TD Learning: Non-asymptotic Analysis over Markovian Samples Abstract Gradient-based temporal difference (GTD) algorithms are widely used in off-policy learning scenarios. Among them, the two time-scale TD with gradient correction (TDC) algorithm has been shown to have superior performance. In contrast to previous studies that characterized the non-asymptotic convergence rate of TDC only under identical and independently distributed (i.i.d.) data samples, we provide the first non-asymptotic convergence analysis for two time-scale TDC under a non-i.i.d. Markovian sample path and linear function approximation. We show that the two time-scale TDC can converge as fast as O(log t / t^{2/3}) under diminishing stepsize, and can converge exponentially fast under constant stepsize, but at the cost of a non-vanishing error. We further propose a TDC algorithm with blockwisely diminishing stepsize, and show that it asymptotically converges with an arbitrarily small error at a blockwisely linear convergence rate. Our experiments demonstrate that such an algorithm converges as fast as TDC under constant stepsize, and still enjoys comparable accuracy as TDC under diminishing stepsize. 1 Introduction In practice, it is very common that we wish to learn the value function of a target policy based on data sampled by a different behavior policy, in order to make maximum use of the data available. For such off-policy scenarios, it has been shown that conventional temporal difference (TD) algorithms [24, 25] and Q-learning [33] may diverge to infinity when using linear function approximation [2]. To overcome the divergence issue in off-policy TD learning, [27, 26, 17] proposed a family of gradient-based TD (GTD) algorithms, which were shown to have guaranteed convergence in off-policy settings and are more flexible than on-policy learning in practice [18, 23]. Among those GTD algorithms, the TD with gradient correction (TDC) algorithm has been verified to have superior performance [17, 9] and is widely used in practice. To elaborate, TDC uses the mean squared projected Bellman error as the objective function, and iteratively updates the function approximation parameter with the assistance of an auxiliary parameter that is also iteratively updated. These two parameters are typically updated with stepsizes diminishing at different rates, resulting in the two time-scale implementation of TDC, i.e., the function approximation parameter is updated at a slower time-scale and the auxiliary parameter is updated at a faster time-scale. The convergence of two time-scale TDC and general two time-scale stochastic approximation (SA) has been well studied. The asymptotic convergence has been shown in [4, 6] for two time-scale SA, and in [26] for two time-scale TDC, where both studies assume that the data are sampled in an identical and independently distributed (i.i.d.) manner. Under non-i.i.d.
observed samples, the asymptotic convergence of the general two time-scale SA and TDC was established in [14, 36]. None of the above studies characterized how fast the two time-scale algorithms converge, i.e., they did not establish the non-asymptotic convergence rate, which is especially important for a two time-scale algorithm. In order for two time-scale TDC to perform well, it is important to properly choose the relative scaling rate of the stepsizes for the two time-scale iterations. In practice, this can be done by fixing one stepsize and treating the other stepsize as a tuning hyper-parameter [9], which is very costly. The non-asymptotic convergence rate by nature captures how the scaling of the two stepsizes affects performance and hence can serve as guidance for choosing the two time-scale stepsizes in practice. Recently, [8] established the non-asymptotic convergence rate for the projected two time-scale TDC with i.i.d. samples under diminishing stepsize. • One important open problem that still needs to be addressed is to characterize the non-asymptotic convergence rate for two time-scale TDC under non-i.i.d. samples and diminishing stepsizes, and explore what such a result suggests for designing the stepsizes of the fast and slow time-scales accordingly. The existing method developed in [8] that handles the non-asymptotic analysis for i.i.d. sampled TDC does not accommodate a direct extension to the non-i.i.d. setting. Thus, new technical developments are necessary to solve this problem. Furthermore, although diminishing stepsize offers accurate convergence, constant stepsize is often preferred in practice due to its much faster error decay (i.e., convergence) rate. For example, empirical results have shown that for one time-scale conventional TD, constant stepsize not only yields fast convergence, but also results in comparable convergence accuracy as diminishing stepsize [9]. However, for two time-scale TDC, our experiments (see Section 4.2) demonstrate that constant stepsize, although it yields faster convergence, has much bigger convergence error than diminishing stepsize. This motivates us to address the following two open issues. • It is important to theoretically understand/explain why constant stepsize yields large convergence error for two time-scale TDC. Existing non-asymptotic analysis for two time-scale TDC [8] focused only on the diminishing stepsize, and does not characterize the convergence rate of two time-scale TDC under constant stepsize. • For two time-scale TDC, given the fact that constant stepsize yields large convergence error but converges fast, whereas diminishing stepsize has small convergence error but converges slowly, it is desirable to design a new update scheme for TDC that converges faster than diminishing stepsize, but has as good convergence error as diminishing stepsize. In this paper, we comprehensively address the above issues. 1.1 Our Contribution Our main contributions are summarized as follows. We develop a novel non-asymptotic analysis for two time-scale TDC with a single sample path and under non-i.i.d. data. We show that under the diminishing stepsizes α_t = c_α/(1 + t)^σ and β_t = c_β/(1 + t)^ν respectively for slow and fast time-scales (where c_α, c_β, ν, σ are positive constants and 0 < ν < σ ≤ 1), the convergence rate can be as large as O(log t / t^{2/3}), which is achieved by σ = (3/2)ν = 1. This recovers the convergence rate (up to log t factor due to non-i.i.d. data) in [8] for i.i.d. data as a special case.
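For reference, the two diminishing stepsize schedules referenced above can be written out directly; σ = 1 and ν = 2/3 are the choices that attain the stated O(log t / t^{2/3}) rate, while the constants c_α and c_β below are illustrative and in practice must satisfy the conditions of Theorem 1.

```python
import numpy as np

# Two time-scale diminishing stepsize schedules: alpha_t for the slow iterate (theta),
# beta_t for the fast iterate (w).
c_alpha, c_beta, sigma, nu = 0.5, 1.0, 1.0, 2.0 / 3.0
t = np.arange(100000)
alpha_t = c_alpha / (1.0 + t) ** sigma   # slow time-scale stepsize
beta_t = c_beta / (1.0 + t) ** nu        # fast time-scale stepsize
```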
We also develop the non-asymptotic analysis for TDC under non-i.i.d. data and constant stepsize. In contrast to conventional one time-scale analysis, our result shows that the training error (at slow time-scale) and the tracking error (at fast time scale) converge at different rates (due to different condition numbers), though both converge linearly to the neighborhood of the solution. Our result also characterizes the impact of the tracking error on the training error. Our result suggests that TDC under constant stepsize can converge faster than that under diminishing stepsize at the cost of a large training error, due to a large tracking error caused by the auxiliary parameter iteration in TDC. We take a further step and propose a TDC algorithm under a blockwise diminishing stepsize inspired by [35] in conventional optimization, in which both stepsizes are constants over a block, and decay across blocks. We show that TDC asymptotically converges with an arbitrarily small training error at a blockwisely linear convergence rate as long as the block length and the decay of stepsizes across blocks are chosen properly. Our experiments demonstrate that TDC under a blockwise diminishing stepsize converges as fast as vanilla TDC under constant stepsize, and still enjoys comparable accuracy as TDC under diminishing stepsize. From the technical standpoint, our proof develops new tool to handle the non-asymptotic analysis of bias due to non-i.i.d. data for two time-scale algorithms under diminishing stepsize that does not require square summability, to bound the impact of the fast-time-scale tracking error on the slow-time-scale training error, and the analysis to recursively refine the error bound in order to sharpening the convergence rate. 1.2 Related Work Due to extensive studies on TD learning, we here include only the most relevant work to this paper. On policy TD and SA. The convergence of TD learning with linear function approximation with i.i.d samples has been well established by using standard results in SA [5]. The non-asymptotic convergence have been established in [4, 12, 30] for the general SA algorithms with martingale difference noise, and in [7] for TD with i.i.d. samples. For the Markovian settings, the asymptotic convergence has been established in [31, 28] for TD(λ), and the non-asymptotic convergence has been provided for projected TD(λ) in [3] and for linear SA with Markovian noise in [13, 22, 21]. For linear SA with dynamic Markovian noise, the non-asymptotic analysis of on-policy SARSA under non-i.i.d. samples was recently studied in [37]. Off policy one time-scale GTD. The convergence of one time-scale GTD and GTD2 (which are off-policy TD algorithms) were derived by applying standard results in SA [27, 26, 17]. The nonasymptotic analysis for GTD and GTD2 have been conducted in [16] by converting the objective function into a convex-concave saddle problem, and was further generalized to the Markovian setting in [32]. However, such an approach cannot be generalized for analyzing two-time scale TDC that we study here because TDC does not have an explicit saddle-point representation. Off policy two time-scale TDC and SA. The asymptotic convergence of two time-scale TDC under i.i.d. samples has been established in [26, 17], and the non-asymptotic analysis has been provided in [8] as a special case of two time-scale linear SA. Under Markovian setting, the convergence of various two time-scale GTD algorithms has been studied in [36]. 
The non-asymptotic analysis of two time-scale TDC under non-i.i.d. data has not been studied before, which is the focus of this paper. General two time-scale SA has also been studied. The convergence of two time-scale SA with martingale difference noise was established in [4], and its non-asymptotic convergence was provided in [15, 20, 8, 6]. Some of these results can be applied to two time-scale TDC under i.i.d. samples (which can fit into a special case of SA with martingale difference noise), but not to the noni.i.d. setting. For two time-scale linear SA with more general Markovian noise, only asymptotic convergence was established in [29, 34, 14]. In fact, our non-asymptotic analysis for two time-scale TDC can be of independent interest here to be further generalized for studying linear SA with more general Markovian noise. Two concurrent and independent studies were posted online recently, which are related to our study. [10] provided a non-asymptotic analysis for two time-scale linear SA under the non-i.i.d setting, in which both variables are updated with constant stepsize. In contrast, our study provides the convergence rate for the case with the two variables being updated by the stepsizes that diminish at different rates, and hence our analysis technique is very different from that in [10]. Another study [11] proposed an interesting approach to analyze the convergence rate of TD learning in the Markovian setting via a Markov jump linear system. Such an approach, however, cannot be applied directly to study the two time-scale TD algorithm that we study here. 2 Problem Formulation 2.1 Off-policy Value Function Evaluation We consider the problem of policy evaluation for a Markov decision process (MDP) (S,A,P, r, γ), where S ⊂ Rd is a compact state space, A is a finite action set, P = P(s′|s, a) is the transition kernel, r(s, a, s′) is the reward function bounded by rmax, and γ ∈ (0, 1) is the discount factor. A stationary policy π maps a state s ∈ S to a probability distribution π(·|s) over A. At time-step t, suppose the process is in some state st ∈ S. Then an action at ∈ A is taken based on the distribution π(·|st), the system transitions to a next state st+1 ∈ S governed by the transition kernel P(·|st, at), and a reward rt = r(st, at, st+1) is received. Assuming the associated Markov chain p(s′|s) = ∑ a∈A p(s ′|s, a)π(a|s) is ergodic, let µπ be the induced stationary distribution of this MDP, i.e., ∑ s p(s ′|s)µπ(s) = µπ(s′). The value function for policy π is defined as: vπ (s) = E[ ∑∞ t=0 γ tr(st, at, st+1)|s0 = s, π], and it is known that vπ(s) is the unique fixed point of the Bellman operator Tπ, i.e., vπ(s) = Tπvπ(s) := rπ(s) + γEs′|svπ(s′), where rπ(s) = Ea,s′|sr(s, a, s′) is the expected reward of the Markov chain induced by policy π. We consider policy evaluation problem in the off-policy setting. Namely, a sample path {(st, at, st+1)}t≥0 is generated by the Markov chain according to the behavior policy πb, but our goal is to obtain the value function of a target policy π, which is different from πb. 2.2 Two Time-Scale TDC When S is large or infinite, a linear function v̂(s, θ) = φ(s)>θ is often used to approximate the value function, where φ(s) ∈ Rd is a fixed feature vector for state s and θ ∈ Rd is a parameter vector. We can also write the linear approximation in the vector form as v̂(θ) = Φθ, where Φ is the |S| × d feature matrix. To find a parameter θ∗ ∈ Rd with Eµπb v̂(s, θ ∗) = EµπbT π v̂(s, θ∗). 
The gradient-based TD algorithm TDC [26] updates the parameter by minimizing the mean-square projected Bellman error (MSPBE) objective, defined as J(θ) = Eµπb [v̂(s, θ)−ΠT π v̂(s, θ)]2, where Π = Φ(Φ>ΞΦ)−1Φ>Ξ is the orthogonal projection operation into the function space V̂ = {v̂(θ) | θ ∈ Rd and v̂(·, θ) = φ(·)>θ} and Ξ denotes the |S| × |S| diagonal matrix with the components of µπb as its diagonal entries. Then, we define the matrices A, B, C and the vector b as A := Eµπb [ρ(s, a)φ(s)(γφ(s ′)− φ(s))>], B := −γEµπb [ρ(s, a)φ(s ′)φ(s)>], C := −Eµπb [φ(s)φ(s) >], b := Eµπb [ρ(s, a)r(s, a, s ′)φ(s)], where ρ(s, a) = π(a|s)/πb(a|s) is the importance weighting factor with ρmax being its maximum value. If A and C are both non-singular, J(θ) is strongly convex and has θ∗ = −A−1b as its global minimum, i.e., J(θ∗) = 0. Motivated by minimizing the MSPBE objective function using the stochastic gradient methods, TDC was proposed with the following update rules: θt+1 = ΠRθ (θt + αt(Atθt + bt +Btwt)) , (1) wt+1 = ΠRw (wt + βt(Atθt + bt + Ctwt)) , (2) where At = ρ(st, at)φ(st)(γφ(st+1) − φ(st))>, Bt = −γρ(st, at)φ(st+1)φ(st)>, Ct = −φ(st)φ(st)>, bt = ρ(st, at)r(st, at, st+1)φ(st), and ΠR(x) = argminx′:||x′||2≤R ||x − x ′||2 is the projection operator onto a norm ball of radius R < ∞. The projection step is widely used in the stochastic approximation literature. As we will show later, iterations (1)-(2) are guaranteed to converge to the optimal parameter θ∗ if we choose the value of Rθ and Rw appropriately. TDC with the update rules (1)-(2) is a two time-scale algorithm. The parameter θ iterates at a slow time-scale determined by the stepsize {αt}, whereas w iterates at a fast time-scale determined by the stepsize {βt}. Throughout the paper, we make the following standard assumptions [3, 32, 17]. Assumption 1 (Problem solvability). The matrix A and C are non-singular. Assumption 2 (Bounded feature). ‖φ(s)‖2 ≤ 1 for all s ∈ S and ρmax <∞. Assumption 3 (Geometric ergodicity). There exist constants m > 0 and ρ ∈ (0, 1) such that sup s∈S dTV (P(st ∈ ·|s0 = s), µπb) ≤ mρt,∀t ≥ 0, where dTV (P,Q) denotes the total-variation distance between the probability measures P and Q. In Assumption 1, the matrix A is required to be non-singular so that the optimal parameter θ∗ = −A−1b is well defined. The matrix C is non-singular when the feature matrix Φ has linearly independent columns. Assumption 2 can be ensured by normalizing the basis functions {φi}di=1 and when πb(·|s) is non-degenerate for all s. Assumption 3 holds for any time-homogeneous Markov chain with finite state-space and any uniformly ergodic Markov chains with general state space. Throughout the paper, we require Rθ ≥ ‖A‖2 ‖b‖2 and Rw ≥ 2 ∥∥C−1∥∥ 2 ‖A‖2Rθ. In practice, we can estimate A, C and b as mentioned in [3] or simply let Rθ and Rw to be large enough. 3 Main Theorems 3.1 Non-asymptotic Analysis under Diminishing Stepsize Our first main result is the convergence rate of two time-scale TDC with diminishing stepsize. We define the tracking error: zt = wt −ψ(θt), where ψ(θt) = −C−1(b+Aθt) is the stationary point of the ODE given by ẇ(t) = Cw(t) + Aθt + b, with θt being fixed. Let λθ and λw be any constants that satisfy λmax(2A>C−1A) ≤ λθ < 0 and λmax(2C) ≤ λw < 0. Theorem 1. Consider the projected two time-scale TDC algorithm in (1)-(2). Suppose Assumptions 1-3 hold. Suppose we apply diminishing stepsize αt = cα(1+t)σ , βt = cβ (1+t)ν which satisfy 0 < ν < σ < 1, 0 < cα < 1|λθ| and 0 < cβ < 1 |λw| . 
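To make the updates concrete, a minimal sketch of a single two time-scale TDC step, Eqs. (1)-(2), computed from one off-policy transition is given below; the function and variable names are ours, and the stepsizes and projection radii are left as arguments to be chosen as described above.

```python
import numpy as np

def project(v, radius):
    """Project v onto the l2 ball of the given radius (the Pi_R operator above)."""
    norm = np.linalg.norm(v)
    return v if norm <= radius else v * (radius / norm)

def tdc_step(theta, w, phi, phi_next, reward, rho, alpha, beta, gamma, R_theta, R_w):
    """One two time-scale TDC update, Eqs. (1)-(2), from a single off-policy transition.

    phi, phi_next: feature vectors of the current and next state; rho: importance
    weight pi(a|s)/pi_b(a|s); alpha/beta: slow/fast stepsizes; gamma: discount factor.
    """
    delta = rho * (reward + gamma * phi_next @ theta - phi @ theta)   # weighted TD error
    # slow time-scale: theta_{t+1} = Pi_{R_theta}(theta_t + alpha*(A_t theta_t + b_t + B_t w_t))
    theta_new = project(theta + alpha * (delta * phi - gamma * rho * (phi @ w) * phi_next),
                        R_theta)
    # fast time-scale: w_{t+1} = Pi_{R_w}(w_t + beta*(A_t theta_t + b_t + C_t w_t))
    w_new = project(w + beta * (delta * phi - (phi @ w) * phi), R_w)
    return theta_new, w_new
```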
Suppose and ′ can be any constants in (0, σ − ν] and (0, 0.5], respectively. Then we have for t ≥ 0: E ‖θt − θ∗‖22 ≤ O(e −|λθ|cα 1−σ (t 1−σ−1)) +O ( log t tσ ) +O ( log t tν + h(σ, ν) )1− ′ , (3) E ‖zt‖22 ≤ O ( log t tν ) +O(h(σ, ν)), (4) where h(σ, ν) = { 1 tν , σ > 1.5ν, 1 t2(σ−ν)− , ν < σ ≤ 1.5ν. (5) If 0 < ν < σ = 1, with cα = 1|λθ| and 0 < cβ < 1 |λw| , we have for t ≥ 0 E ‖θt − θ∗‖22 ≤ O ( (log t)2 t ) +O ( log t tν + h(1, ν) )1− ′ . (6) For explicit expressions of (3), (4) and (6), please refer to (25), (18) and (28) in the Appendix. We further explain Theorem 1 as follows: (a) In (3) and (5), since both and ′ can be arbitrarily small, the convergence of E ‖θt − θ∗‖22 can be almost as fast as 1 t2(σ−ν) when ν < σ < 1.5ν, and log t tν when 1.5ν ≤ σ. Then best convergence rate is almost as fast as O( log t t2/3 ) with σ = 32ν = 1. (b) If data are i.i.d. generated, then our bound reduces to E ‖θt − θ∗‖22 ≤ O(exp(λθcα(t1−σ − 1)/(1− σ))) +O(1/tσ) +O(h(σ, ν))1− ′ with h(σ, ν) = 1tν when σ > 1.5ν, and h(σ, ν) = 1 t2(σ−ν)− when ν < σ ≤ 1.5ν. The best convergence rate is almost as fast as 1 t2/3 with σ = 32ν = 1 as given in [8]. Theorem 1 characterizes the relationship between the convergence rate of θt and stepsizes αt and βt. The first term of the bound in (3) corresponds to the convergence rate of θt with full gradient ∇J(θt), which exponentially decays with t. The second term is introduced by the bias and variance of the gradient estimator which decays sublinearly with t. The last term arises due to the accumulated tracking error zt, which specifically arises in two time-scale algorithms, and captures how accurately wt tracks ψ(θt). Thus, if wt tracks the stationary point ψ(θt) in each step perfectly, then we have only the first two terms in (3), which matches the results of one time-scale TD learning [3, 7]. Theorem 1 indicates that asymptotically, (3) is dominated by the tracking error term O(h(σ, ν)1− ′), which depends on the diminishing rate of αt and βt. Since both and ′ can be arbitrarily small, if the diminishing rate of αt is close to that of βt, then the tracking error is dominated by the slow drift, which has an approximate order O(1/t2(σ−ν)); if the diminishing rate of αt is much faster than that of βt, then the tracking error is dominated by the accumulated bias, which has an approximate order O(log t/tν). Moreover, (5) and (6) suggest that for any fixed σ ∈ (0, 1], the optimal diminishing rate of βt is achieved by σ = 32ν. From the technical standpoint, we develop novel techniques to handle the interaction between the training error and the tracking error and sharpen the error bounds recursively. The proof sketch and the detailed steps are provided in Appendix A. 3.2 Non-asymptotic Analysis under Constant Stepsize As we remark in Section 1, it has been demonstrated by empirical results [9] that the standard TD under constant stepsize not only converges fast, but also has comparable training error as that under diminishing stepsize. However, this does not hold for TDC. When the two variables in TDC are updated both under constant stepsize, our experiments demonstrate that constant stepsize yields fast convergence, but has large training error. In this subsection, we aim to explain why this happens by analyzing the convergence rate of the two variables in TDC, and the impact of one on the other. The following theorem provides the convergence result for TDC with the two variables iteratively updated respectively by two different constant stepsizes. Theorem 2. 
3.2 Non-asymptotic Analysis under Constant Stepsize

As we remark in Section 1, empirical results [9] have demonstrated that standard TD under constant stepsize not only converges fast, but also achieves training error comparable to that under diminishing stepsize. However, this does not hold for TDC. When the two variables in TDC are both updated under constant stepsize, our experiments demonstrate that constant stepsize yields fast convergence but large training error. In this subsection, we aim to explain why this happens by analyzing the convergence rate of the two variables in TDC, and the impact of one on the other. The following theorem provides the convergence result for TDC with the two variables iteratively updated with two different constant stepsizes.

Theorem 2. Consider the projected TDC algorithm in (1) and (2). Suppose Assumptions 1-3 hold. Suppose we apply the constant stepsizes α_t = α and β_t = β with α = ηβ, which satisfy η > 0, 0 < α < 1/|λ_θ| and 0 < β < 1/|λ_w|. We then have for t ≥ 0:

E‖θ_t − θ*‖₂² ≤ (1 − |λ_θ|α)^t(‖θ_0 − θ*‖₂² + C_1) + C_2 max{α, α ln(1/α)} + (C_3 max{β, β ln(1/β)} + C_4η)^{0.5}, (7)
E‖z_t‖₂² ≤ (1 − |λ_w|β)^t‖z_0‖₂² + C_5 max{β, β ln(1/β)} + C_6η, (8)

where C_1 = 4γρ_max R_θ R_w (1 − (1 − |λ_θ|α)^{T+1}) / (|λ_θ|(1 − |λ_θ|α)^{T+1}) with T = ⌈ln[C_5 max{β, β ln(1/β)}/‖z_0‖₂²] / (−ln(1 − |λ_w|β))⌉, and C_2, C_3, C_4, C_5 and C_6 are positive constants independent of α and β. For explicit expressions of C_2, C_3, C_4, C_5 and C_6, please refer to (67), (68), (69), (59), and (60) in the Supplementary Materials.

Theorem 2 shows that TDC with constant stepsize converges to a neighborhood of θ* exponentially fast. The size of the neighborhood depends on the second and third terms of the bound in (7), which arise from the bias and variance of the update of θ_t and from the tracking error z_t in (8), respectively. The convergence of z_t, although also exponentially fast to a neighborhood, proceeds at a different rate due to a different condition number. We further note that as the stepsize parameters α, β approach 0 in such a way that α/β → 0, θ_t approaches θ* as t → ∞, which matches the asymptotic convergence result for two time-scale TDC under constant stepsize in [36].

Diminishing vs Constant Stepsize: We next compare TDC under diminishing stepsize and under constant stepsize. Generally, Theorem 1 provides a better convergence guarantee for diminishing stepsize (exact convergence to θ*) than Theorem 2 does for constant stepsize (convergence to a neighborhood of θ*). In practice, constant stepsize is recommended because diminishing stepsize may take much longer to converge. However, as Figure 2 in Section 4.2 shows, although TDC with a large constant stepsize converges fast, the training error caused by convergence only to a neighborhood is significantly worse than that under diminishing stepsize. More specifically, when η = α/β is fixed, as α grows the convergence becomes faster, but the term (C_3 max{β, β ln(1/β)} + C_4η)^{0.5} due to the tracking error increases and results in a large training error. Alternatively, if α is made small enough that the training error is comparable to that under diminishing stepsize, then the convergence becomes very slow. This suggests that simply setting the stepsize to be constant for TDC does not yield the desired performance. This motivates us to design an update scheme for TDC that enjoys an error convergence rate as fast as that of constant stepsize, while retaining accuracy comparable to that of diminishing stepsize.
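Since Theorem 2 and the experiments in Section 4 repeatedly refer to the tracking error z_t = w_t − ψ(θ_t), a minimal sketch of this diagnostic is given below; it assumes the population quantities A, C and b are available exactly (as for the small synthetic problems in Section 4), which in general would require estimation.

```python
import numpy as np

def tracking_error(theta, w, A, C, b):
    """Tracking error z = w - psi(theta), with psi(theta) = -C^{-1}(b + A theta)."""
    psi = -np.linalg.solve(C, b + A @ theta)
    return w - psi
```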
3.3 TDC under Blockwise Diminishing Stepsize

In this subsection, we propose a blockwise diminishing stepsize scheme for TDC (see Algorithm 1), and study its theoretical convergence guarantee. In Algorithm 1, we define t_s = Σ_{i=0}^{s} T_i. The idea of Algorithm 1 is to divide the iteration process into blocks, and to diminish the stepsize blockwise while keeping it constant within each block. In this way, within each block, TDC can decay fast due to the constant stepsize, and it still achieves an accurate solution due to the blockwise decay of the stepsize, as we will demonstrate in Section 4.

Algorithm 1 Blockwise Diminishing Stepsize TDC
Input: θ_{0,0} = θ_0, w_{0,0} = w_0 = 0, T_0 = 0, block index S
1: for s = 1, 2, ..., S do
2:   θ_{s,0} = θ_{s−1}, w_{s,0} = w_{s−1}
3:   for i = 1, 2, ..., T_s do
4:     Sample (s_{t_{s−1}+i}, a_{t_{s−1}+i}, s_{t_{s−1}+i+1}, r_{t_{s−1}+i}) from the trajectory
5:     θ_{s,i} = Π_{R_θ}(θ_{s,i−1} + α_s(A_{t_{s−1}+i}θ_{s,i−1} + b_{t_{s−1}+i} + B_{t_{s−1}+i}w_{s,i−1}))
6:     w_{s,i} = Π_{R_w}(w_{s,i−1} + β_s(A_{t_{s−1}+i}θ_{s,i−1} + b_{t_{s−1}+i} + C_{t_{s−1}+i}w_{s,i−1}))
7:   end for
8:   θ_s = θ_{s,T_s}, w_s = w_{s,T_s}
9: end for
Output: θ_S, w_S

More specifically, the constant stepsizes α_s and β_s for block s are chosen to decay geometrically, such that the tracking error and the accumulated variance and bias are asymptotically small; and the block length T_s increases geometrically across blocks, such that the training error E‖θ_s − θ*‖₂² decreases geometrically from block to block. We note that the design of the algorithm is inspired by the method proposed in [35] for conventional optimization problems. The following theorem characterizes the convergence of Algorithm 1.

Theorem 3. Consider the projected TDC algorithm with blockwise diminishing stepsize as in Algorithm 1. Suppose Assumptions 1-3 hold. Suppose max{α_s log(1/α_s), α_s} ≤ min{ε_{s−1}/(4C_7), 1/|λ_x|}, β_s = ηα_s and T_s = ⌈log_{1/(1−|λ_x|α_s)} 4⌉, where λ_x < 0 and C_7 > 0 are constants independent of s (see (72) and (75) in the Supplementary Materials for explicit expressions of λ_x and C_7), ε_s = ‖θ_0 − θ*‖₂²/2^s and η ≥ (1/2) max{0, λ_min(C^{−1}(A^⊤ + A))}. Then, after S = ⌈log₂(ε_0/ε)⌉ blocks, we have E‖θ_S − θ*‖₂² ≤ ε. The total sample complexity is O((1/ε) log(1/ε)).

Theorem 3 indicates that the sample complexity of TDC under blockwise diminishing stepsize is slightly better than that under diminishing stepsize. Our empirical results (see Section 4.3) also demonstrate that blockwise diminishing stepsize yields convergence as fast as constant stepsize and training error comparable to diminishing stepsize. However, we want to point out that the advantage of blockwise diminishing stepsize does not come for free, but rather at the cost of some extra parameter tuning in practice to estimate ε_0, |λ_x|, C_7 and η; the diminishing stepsize scheme, as guided by our Theorem 1, requires tuning at most three parameters to obtain desirable performance.
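A minimal sketch of the blockwise schedule of Algorithm 1 follows. It reuses the tdc_step helper sketched earlier, assumes a sample_transition callback that returns the feature vectors, reward and importance weight of one sampled transition, and replaces the exact stepsize condition of Theorem 3 with a simple halving rule α_s = α_0/2^{s−1}; the block length follows T_s = ⌈log_{1/(1−|λ_x|α_s)} 4⌉ with a user-supplied estimate of |λ_x|.

```python
import numpy as np

def blockwise_tdc(sample_transition, tdc_step, theta0, w0, num_blocks,
                  alpha0, eta, lam_x, gamma, R_theta, R_w):
    """Sketch of Algorithm 1: stepsizes are constant within a block and decay
    geometrically across blocks, while the block lengths grow so that the
    training error roughly halves per block."""
    theta, w = np.array(theta0, dtype=float), np.array(w0, dtype=float)
    for s in range(1, num_blocks + 1):
        alpha_s = alpha0 / 2.0 ** (s - 1)   # simplified halving rule (an assumption)
        beta_s = eta * alpha_s              # beta_s = eta * alpha_s as in Theorem 3
        # Requires |lam_x| * alpha_s < 1 for the block length to be well defined.
        T_s = int(np.ceil(np.log(4.0) / -np.log(1.0 - abs(lam_x) * alpha_s)))
        for _ in range(T_s):
            phi_s, phi_next, reward, rho = sample_transition()
            theta, w = tdc_step(theta, w, phi_s, phi_next, reward, rho, gamma,
                                alpha_s, beta_s, R_theta, R_w)
    return theta, w
```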
4 Experimental Results

In this section, we provide numerical experiments to verify our theoretical results and the efficiency of Algorithm 1. More precisely, we consider Garnet problems [1], denoted G(n_S, n_A, p, q), where n_S denotes the number of states, n_A denotes the number of actions, p denotes the number of possible next states for each state-action pair, and q denotes the number of features. The reward is state-dependent, and both the reward and the feature vectors are generated randomly. The discount factor γ is set to 0.95 in all experiments. We consider the G(500, 20, 50, 20) problem. For all experiments, we choose θ_0 = w_0 = 0. All plots report the evolution of the mean square error over 500 independent runs.

4.1 Optimal Diminishing Stepsize

In this subsection, we provide numerical results to verify Theorem 1. We compare the performance of TDC updates with the same α_t but different β_t. We consider four different diminishing stepsize settings: (1) c_α = c_β = 0.03, σ = 0.15; (2) c_α = c_β = 0.18, σ = 0.30; (3) c_α = c_β = 1, σ = 0.45; (4) c_α = c_β = 4, σ = 0.60. For each case with fixed slow time-scale parameter σ, the fast time-scale stepsize β_t has decay rate ν set to σ/2, σ/3, 5σ/9, 2σ/3, 5σ/6, and σ. Our results are reported in Figure 1, in which for each case the left plot reports the overall iteration process and the right plot reports the corresponding zoomed tail of the last 100000 iterations. It can be seen that in all cases, TDC iterations with the same slow time-scale parameter σ share similar error decay rates (see the left plots), and the difference among the fast time-scale parameters ν is reflected in the behavior of the error convergence tails (see the right plots). We observe that ν = (2/3)σ yields the best error decay rate. This corroborates Theorem 1, which shows that the fast time-scale stepsize β_t with parameter ν affects only the tracking error term in (3), which dominates the error decay rate asymptotically.

4.2 Constant Stepsize vs Diminishing Stepsize

In this subsection, we compare the error decay of TDC under diminishing stepsize with that of TDC under four different constant stepsizes. For diminishing stepsize, we set c_α = c_β and σ = (3/2)ν, and tune their values to the best, which are given by c_α = c_β = 1.8 and σ = (3/2)ν = 0.45. For the four constant-stepsize cases, we fix α for each case and tune β to the best. The resulting parameter settings are respectively as follows: α_t = 0.01, β_t = 0.006; α_t = 0.02, β_t = 0.008; α_t = 0.05, β_t = 0.02; and α_t = 0.1, β_t = 0.02. The results are reported in Figure 2, in which for both the training and tracking errors, the left plot illustrates the overall iteration process and the right plot illustrates the corresponding zoomed error tails. The results suggest that although the larger constant stepsizes (α_t = 0.05, β_t = 0.02 and α_t = 0.1, β_t = 0.02) yield initially faster convergence than diminishing stepsize, they eventually oscillate around a large neighborhood of θ* due to the large tracking error. The smaller constant stepsizes (α_t = 0.02, β_t = 0.008 and α_t = 0.01, β_t = 0.006) can achieve almost the same asymptotic accuracy as diminishing stepsize, but converge very slowly. We can also observe a strong correlation between the training and tracking errors under constant stepsize, i.e., a larger training error corresponds to a larger tracking error, which corroborates Theorem 2 and suggests that the accuracy of TDC depends heavily on the decay of the tracking error ‖z_t‖₂.

4.3 Blockwise Diminishing Stepsize

In this subsection, we compare the error decay of TDC under blockwise diminishing stepsize with that of TDC under diminishing stepsize and constant stepsize. We use the best tuned parameter settings listed in Section 4.2 for the latter two algorithms, i.e., c_α = c_β = 1.8 and σ = (3/2)ν = 0.45 for diminishing stepsize, and α_t = 0.1, β_t = 0.02 for constant stepsize. We report our results in Figure 3. It can be seen that TDC under blockwise diminishing stepsize converges faster than under diminishing stepsize and almost as fast as under constant stepsize. Furthermore, TDC under blockwise diminishing stepsize also has training error comparable to that under diminishing stepsize. Since the stepsize decreases geometrically from block to block, the algorithm approaches a very small neighborhood of θ* in the later blocks. We can also observe that the tracking error under blockwise diminishing stepsize decreases rapidly from block to block.
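For completeness, here is one way to generate a Garnet instance G(n_S, n_A, p, q) of the kind used above; the specific randomization (uniform transition weights, Gaussian features normalized to unit norm) is our assumption, since the text only states that the rewards and features are generated randomly.

```python
import numpy as np

def make_garnet(n_states, n_actions, branching, n_features, seed=None):
    """Random Garnet instance G(n_S, n_A, p, q): each (s, a) pair reaches `branching`
    uniformly chosen next states with random probabilities; rewards are state-dependent
    and feature vectors are random with norm at most one (Assumption 2)."""
    rng = np.random.default_rng(seed)
    P = np.zeros((n_states, n_actions, n_states))
    for s in range(n_states):
        for a in range(n_actions):
            succ = rng.choice(n_states, size=branching, replace=False)
            weights = rng.random(branching)
            P[s, a, succ] = weights / weights.sum()
    reward = rng.random(n_states)                          # state-dependent reward
    features = rng.normal(size=(n_states, n_features))
    features /= np.linalg.norm(features, axis=1, keepdims=True)
    return P, reward, features
```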
4.4 Robustness to Blocksize

In this subsection, we investigate the robustness of TDC under blockwise diminishing stepsize with respect to the blocksize. We consider the same setting as in Section 4.3, and perturb all blocksizes by certain percentages of the original blocksize suggested by the algorithm. It can be seen from Figure 4 that the error decay rate changes only very slightly, even with a substantial change in the blocksize.

5 Conclusion

In this work, we provided the first non-asymptotic analysis for the two time-scale TDC algorithm over a Markovian sample path. We developed a novel technique to handle the accumulated tracking error caused by the two time-scale update, using which we characterized the non-asymptotic convergence rate under general diminishing stepsize and constant stepsize. We also proposed a blockwise diminishing stepsize scheme for TDC and proved its convergence. Our experiments demonstrated the performance advantage of such an algorithm over both the diminishing and constant stepsize TDC algorithms. Our technique for the non-asymptotic analysis of two time-scale algorithms can be applied to studying other off-policy algorithms such as actor-critic [18] and gradient Q-learning algorithms [19].

Acknowledgment

The work of T. Xu and Y. Liang was supported in part by the U.S. National Science Foundation under the grants CCF-1761506, ECCS-1818904, and CCF-1801855.
1. What are the main contributions and novel aspects of the paper regarding non-asymptotic convergence for two time-scale TDC?
2. What are the strengths of the paper, particularly in its analysis and experimental results?
3. How does the reviewer assess the significance of the paper's content, including its relevance to previous works and practical applications?
4. What are the weaknesses or areas for improvement in the paper, such as comparing the non-asymptotic convergence analysis with other GTD algorithms and discussing the challenges of analyzing algorithms with blockwise diminishing stepsize?
5. Are there any additional comments or suggestions for future research directions related to the paper's topics?
Review
Review
This paper analyzes the non-asymptotic convergence of two time-scale TDC under a non-i.i.d. Markovian sample path and linear function approximation. The results are new and important to the field, and the analysis in this setting seems nontrivial. In addition, the paper also develops a new variant of TDC under a blockwise diminishing stepsize, and proves it asymptotically convergent with an arbitrarily small training error at a linear convergence rate. Extensive experiments demonstrate that the new TDC variant can converge as fast as vanilla TDC with constant stepsize, and at the same time it enjoys comparable accuracy as TDC with diminishing stepsize. Overall, the paper has both analytical and practical value. However, the following issues need to be addressed.
1. The non-asymptotic convergence analysis for other GTD algorithms under a non-i.i.d. Markovian sample path has been studied in, e.g., [30, 34]. Hence, the new challenges of analyzing TDC relative to GTD in [30, 34] need to be compared and highlighted.
2. The paper generalizes the stagewise stepsize in conventional (one time-scale) optimization, e.g., [34], to the considered two time-scale optimization. The new challenges of analyzing algorithms with blockwise diminishing stepsize in this new setting need to be discussed.
3. It is mentioned that the non-asymptotic analysis can be applied to studying other off-policy algorithms such as the actor-critic and the gradient Q-learning algorithms. A comment is due on how the theoretical guarantees would be affected in these settings.
NIPS
1. What is the focus of the paper regarding temporal difference learning?
2. What are the strengths of the paper, particularly in its novel analysis?
3. Do you have any concerns or questions about the experimental results or the theoretical bounds?
4. How do the bounds provided in the paper offer insights into the properties of the problem, behavior policy, and representation?
5. Can you clarify what "more flexible" means in the context of gradient-based TD algorithms compared to on-policy learning?
Review
Review
This paper provides finite-time bounds for TD with gradient correction (TDC). While the non-asymptotic behavior of TD and the asymptotic behavior of TDC have been studied before, non-asymptotic analysis for TDC is new and interesting given the importance of off-policy learning and the challenge of step-size tuning in two time-scale algorithms. The paper is well-written, and the discussion on the impact of the two step-sizes is clear and supported by experiments.
Questions:
- The plots show the error between \theta and \theta^*. How is \theta^* obtained for these domains?
- How would the worst-case errors predicted by the bound compare to the errors observed empirically in the experiments?
- Besides implications for the choice of step-size, do these bounds provide insight on what properties of the problem, the behavior policy, and the representation affect the rate of convergence?
- The first paragraph says that gradient-based TD algorithms are "more flexible than on-policy learning in practice." What does more flexible mean here?
Suppose ε and ε′ can be any constants in (0, σ − ν] and (0, 0.5], respectively. Then we have for t ≥ 0:
$$\mathbb{E}\,\|\theta_t-\theta^*\|_2^2 \;\le\; O\!\Big(e^{-\frac{|\lambda_\theta| c_\alpha}{1-\sigma}\,(t^{1-\sigma}-1)}\Big) \;+\; O\!\Big(\tfrac{\log t}{t^\sigma}\Big) \;+\; O\!\Big(\tfrac{\log t}{t^\nu} + h(\sigma,\nu)\Big)^{1-\epsilon'}, \qquad (3)$$
$$\mathbb{E}\,\|z_t\|_2^2 \;\le\; O\!\Big(\tfrac{\log t}{t^\nu}\Big) + O\big(h(\sigma,\nu)\big), \qquad (4)$$
where
$$h(\sigma,\nu) \;=\; \begin{cases} \tfrac{1}{t^{\nu}}, & \sigma > 1.5\nu, \\ \tfrac{1}{t^{2(\sigma-\nu)-\epsilon}}, & \nu < \sigma \le 1.5\nu. \end{cases} \qquad (5)$$
If 0 < ν < σ = 1, with c_α = 1/|λ_θ| and 0 < c_β < 1/|λ_w|, we have for t ≥ 0
$$\mathbb{E}\,\|\theta_t-\theta^*\|_2^2 \;\le\; O\!\Big(\tfrac{(\log t)^2}{t}\Big) \;+\; O\!\Big(\tfrac{\log t}{t^\nu} + h(1,\nu)\Big)^{1-\epsilon'}. \qquad (6)$$
For explicit expressions of (3), (4) and (6), please refer to (25), (18) and (28) in the Appendix. We further explain Theorem 1 as follows: (a) In (3) and (5), since both ε and ε′ can be arbitrarily small, the convergence of E‖θt − θ∗‖²₂ can be almost as fast as 1/t^{2(σ−ν)} when ν < σ < 1.5ν, and (log t)/t^ν when 1.5ν ≤ σ. The best convergence rate is almost as fast as O((log t)/t^{2/3}), achieved by σ = (3/2)ν = 1. (b) If the data are generated i.i.d., then our bound reduces to E‖θt − θ∗‖²₂ ≤ O(exp(λ_θ c_α (t^{1−σ} − 1)/(1 − σ))) + O(1/t^σ) + O(h(σ, ν))^{1−ε′}, with h(σ, ν) = 1/t^ν when σ > 1.5ν and h(σ, ν) = 1/t^{2(σ−ν)−ε} when ν < σ ≤ 1.5ν. The best convergence rate is then almost as fast as 1/t^{2/3} with σ = (3/2)ν = 1, as given in [8]. Theorem 1 characterizes the relationship between the convergence rate of θt and the stepsizes αt and βt. The first term of the bound in (3) corresponds to the convergence rate of θt with the full gradient ∇J(θt), which decays exponentially with t. The second term is introduced by the bias and variance of the gradient estimator and decays sublinearly with t. The last term arises from the accumulated tracking error zt, which is specific to two time-scale algorithms and captures how accurately wt tracks ψ(θt). Thus, if wt tracked the stationary point ψ(θt) perfectly at every step, only the first two terms of (3) would remain, which matches the results for one time-scale TD learning [3, 7]. Theorem 1 indicates that asymptotically, (3) is dominated by the tracking error term O(h(σ, ν)^{1−ε′}), which depends on the diminishing rates of αt and βt. Since both ε and ε′ can be arbitrarily small, if the diminishing rate of αt is close to that of βt, then the tracking error is dominated by the slow drift, which has approximate order O(1/t^{2(σ−ν)}); if αt diminishes much faster than βt, then the tracking error is dominated by the accumulated bias, which has approximate order O((log t)/t^ν). Moreover, (5) and (6) suggest that for any fixed σ ∈ (0, 1], the optimal diminishing rate of βt is achieved by σ = (3/2)ν. From the technical standpoint, we develop novel techniques to handle the interaction between the training error and the tracking error and to sharpen the error bounds recursively. The proof sketch and the detailed steps are provided in Appendix A.
3.2 Non-asymptotic Analysis under Constant Stepsize
As we remark in Section 1, empirical results [9] have demonstrated that standard TD under constant stepsize not only converges fast, but also attains training error comparable to that under diminishing stepsize. However, this does not hold for TDC. When the two variables in TDC are both updated under constant stepsizes, our experiments demonstrate that constant stepsize yields fast convergence but large training error. In this subsection, we aim to explain why this happens by analyzing the convergence rate of the two variables in TDC and the impact of one on the other. The following theorem provides the convergence result for TDC with the two variables iteratively updated by two different constant stepsizes. Theorem 2.
Consider the projected TDC algorithm in eqs. (1) and (2). Suppose Assumption 1-3 hold. Suppose we apply constant stepsize αt = α, βt = β and α = ηβ which satisfy η > 0, 0 < α < 1|λθ| and 0 < β < 1|λw| . We then have for t ≥ 0: E ‖θt − θ∗‖22 ≤ (1− |λθ|α) t(‖θ0 − θ∗‖22 + C1) + C2 max{α, α ln 1 α }+ (C3 max{β, β ln 1 β }+ C4η)0.5 (7) E ‖zt‖22 ≤ (1− |λw|β) t ‖z0‖22 + C5 max{β, β ln 1 β }+ C6η, (8) where C1 = 4γρmaxRθRw 1−(1−|λθ|α)T+1 |λθ|(1−|λθ|α)T+1 with T = d ln[C5 max{β,ln( 1β )β}/‖z0‖ 2 2] − ln(1−|λw|β) e, and C2, C3, C4, C5 and C6 are positive constants independent of α and β. For explicit expressions of C2, C3, C4, C5 and C6, please refer to (67), (68), (69), (59), and (60) in the Supplementary Materials. Theorem 2 shows that TDC with constant stepsize converges to a neighborhood of θ∗ exponentially fast. The size of the neighborhood depends on the second and the third terms of the bound in (7), which arise from the bias and variance of the update of θt and the tracking error zt in (8), respectively. Clearly, the convergence zt, although is also exponentially fast to a neighborhood, is under a different rate due to the different condition number. We further note that as the stepsize parameters α, β approach 0 in a way such that α/β → 0, θt approaches to θ∗ as t→∞, which matches the asymptotic convergence result for two time-scale TDC under constant stepsize in [36]. Diminishing vs Constant Stepsize: We next discuss the comparison between TDC under diminishing stepsize and constant stepsize. Generally, Theorem 1 suggests that diminishing stepsize yields better converge guarantee (i.e., converges exactly to θ∗) than constant stepsize shown in Theorem 2 (i.e., converges to the neighborhood of θ∗). In practice, constant stepsize is recommended because diminishing stepsize may take much longer time to converge. However, as Figure 2 in Section 4.2 shows, although TDC with large constant stepsize converges fast, the training error due to the convergence to the neighborhood is significantly worse than the diminishing stepsize. More specifically, when η = α/β is fixed, as α grows, the convergence becomes faster, but as a consequence, the term (C3 max{β, β ln 1β }+C4η) 0.5 due to the tracking error increases and results in a large training error. Alternatively, if α gets small so that the training error is comparable to that under diminishing stepsize, then the convergence becomes very slow. This suggests that simply setting the stepsize to be constant for TDC does not yield desired performance. This motivates us to design an appropriate update scheme for TDC such that it can enjoy as fast error convergence rate as constant stepsize offers, but still have comparable accuracy as diminishing stepsize enjoys. 3.3 TDC under Blockwise Diminishing Stepsize In this subsection, we propose a blockwise diminishing stepsize scheme for TDC (see Algorithm 1), and study its theoretical convergence guarantee. In Algorithm 1, we define ts = ∑s i=0 Ts. The idea of Algorithm 1 is to divide the iteration process into blocks, and diminish the stepsize blockwisely, but keep the stepsize to be constant within each block. 
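As a bridge to Algorithm 1 presented next, the following sketch implements a single projected two time-scale TDC update in the form of eqs. (1)-(2); any of the stepsize schemes discussed above (constant, diminishing, or blockwise) can drive it. The synthetic features, importance weight, and projection radii are placeholder assumptions rather than the paper's experimental settings.

```python
import numpy as np

def project_ball(x, radius):
    """Project x onto the Euclidean ball of the given radius (the Pi_R operator)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def tdc_step(theta, w, phi_s, phi_next, reward, rho, gamma, alpha, beta, R_theta, R_w):
    """One projected two time-scale TDC update from a single transition, eqs. (1)-(2).

    A_t = rho * phi(s) (gamma*phi(s') - phi(s))^T,  B_t = -gamma*rho*phi(s')phi(s)^T,
    C_t = -phi(s)phi(s)^T,  b_t = rho * r * phi(s).
    """
    A_t = rho * np.outer(phi_s, gamma * phi_next - phi_s)
    B_t = -gamma * rho * np.outer(phi_next, phi_s)
    C_t = -np.outer(phi_s, phi_s)
    b_t = rho * reward * phi_s

    theta_new = project_ball(theta + alpha * (A_t @ theta + b_t + B_t @ w), R_theta)
    w_new = project_ball(w + beta * (A_t @ theta + b_t + C_t @ w), R_w)
    return theta_new, w_new

# Tiny synthetic usage with random features (purely illustrative).
rng = np.random.default_rng(0)
d, gamma = 4, 0.95
theta, w = np.zeros(d), np.zeros(d)
for t in range(1000):
    phi_s, phi_next = rng.normal(size=d), rng.normal(size=d)
    reward, rho = rng.normal(), 1.0
    alpha_t = 1.0 / (1 + t)              # sigma = 1
    beta_t = 1.0 / (1 + t) ** (2 / 3)    # nu = 2/3
    theta, w = tdc_step(theta, w, phi_s, phi_next, reward, rho,
                        gamma, alpha_t, beta_t, R_theta=10.0, R_w=10.0)
print("theta after 1000 synthetic updates:", np.round(theta, 3))
```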
In this way, within each block, Algorithm 1 Blockwise Diminishing Stepsize TDC Input: θ0,0 = θ0, w0,0 = w0 = 0, T0 = 0, block index S 1: for s = 1, 2, ..., S do 2: θs,0 = θs−1, ws,0 = ws−1 3: for i = 1, 2, ..., Ts do 4: Sample (sts−1+i, ats−1+i, sts−1+i+1, rts−1+i) from trajetory 5: θs,i = ΠRθ ( θs,i−1 + αs(Ats−1+iθs,i−1 + bts−1+i +Bts−1+iws,i−1) ) 6: ws,i = ΠRw ( ws,i−1 + βs(Ats−1+iθs,i−1 + bts−1+i + Cts−1+iws,i−1) ) 7: end for 8: θs = θs,Ts , ws = ws,Ts 9: end for Output: θS , wS TDC can decay fast due to constant stepsize and still achieve an accurate solution due to blockwisely decay of the stepsize, as we will demonstrate in Section 4. More specifically, the constant stepsizes αs and βs for block s are chosen to decay geometrically, such that the tracking error and accumulated variance and bias are asymptotically small; and the block length Ts increases geometrically across blocks, such that the training error E ‖θs − θ∗‖22 decreases geometrically blockwisely. We note that the design of the algorithm is inspired by the method proposed in [35] for conventional optimization problems. The following theorem characterizes the convergence of Algorithm 1. Theorem 3. Consider the projected TDC algorithm with blockwise diminishing stepsize as in Algorithm 1. Suppose Assumptions 1-3 hold. Suppose max{log(1/αs)αs, αs} ≤ min{ s−1/(4C7), 1/|λx|}, βs = ηαs and Ts = dlog1/(1−|λx|αs) 4e, where λx < 0 and C7 > 0 are constant independent of s (see (72) and (75) in the Supplementary Materials for explicit expression of λx and C7), s = ‖θ0 − θ∗‖2 /2s and η ≥ 1/2 max{0, λmin(C−1(A> + A))}. Then, after S = dlog2( 0/ )e blocks, we have E ‖θS − θ∗‖22 ≤ . The total sample complexity is O( 1 log 1 ). Theorem 3 indicates that the sample complexity of TDC under blockwise diminishing stepsize is slightly better than that under diminishing stepsize. Our empirical results (see Section 4.3) also demonstrate that blockwise diminishing stepsize yields as fast convergence as constant stepsize and has comparable training error as diminishing stepsize. However, we want to point out that the advantage of blockwise diminishing stepsize does not come for free, rather at the cost of some extra parameter tuning in practice to estimate 0, |λx|, C7 and η; whereas diminishing stepsize scheme as guided by our Theorem 1 requires to tune at most three parameters to obtain desirable performance. 4 Experimental Results In this section, we provide numerical experiments to verify our theoretical results and the efficiency of Algorithm 1. More precisely, we consider Garnet problems [1] denoted as G(nS , nA, p, q), where ns denotes the number of states, nA denotes the number of actions, p denotes the number of possible next states for each state-action pair, and q denotes the number of features. The reward is state-dependent and both the reward and the feature vectors are generated randomly. The discount factor γ is set to 0.95 in all experiments. We consider the G(500, 20, 50, 20) problem. For all experiments, we choose θ0 = w0 = 0. All plots report the evolution of the mean square error over 500 independent runs. 4.1 Optimal Diminishing Stepsize In this subsection, we provide numerical results to verify Theorem 1. We compare the performance of TDC updates with the same αt but different βt. We consider four different diminishing stepsize settings: (1) cα = cβ = 0.03, σ = 0.15; (2) cα = cβ = 0.18, σ = 0.30; (3) cα = cβ = 1, σ = 0.45; (4) cα = cβ = 4, σ = 0.60. 
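For reference, the Garnet construction used in these experiments can be sketched as below. The specific random choices (Dirichlet branching probabilities, Gaussian rewards and features) are assumptions for illustration only; the paper defers the exact generation procedure to [1].

```python
import numpy as np

def make_garnet(n_s, n_a, branching, n_features, seed=0):
    """Randomly generate a Garnet MDP G(n_s, n_a, branching, n_features).

    Returns a transition kernel P[s, a, s'], state-dependent rewards r[s], and a
    feature matrix Phi[s, :], all generated randomly (an illustrative assumption).
    """
    rng = np.random.default_rng(seed)
    P = np.zeros((n_s, n_a, n_s))
    for s in range(n_s):
        for a in range(n_a):
            next_states = rng.choice(n_s, size=branching, replace=False)
            P[s, a, next_states] = rng.dirichlet(np.ones(branching))
    rewards = rng.normal(size=n_s)               # state-dependent reward
    features = rng.normal(size=(n_s, n_features))
    return P, rewards, features

# Example matching the problem size used in the experiments: G(500, 20, 50, 20).
P, r, Phi = make_garnet(500, 20, 50, 20, seed=0)
print(P.shape, r.shape, Phi.shape)  # (500, 20, 500) (500,) (500, 20)
```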
For each case with fixed slow time-scale parameter σ, the fast time-scale stepsize βt has decay rate ν to be 12σ, 1 3σ, 5 9σ, 2 3σ, 5 6σ, and σ. Our results are reported in Figure 1, in which for each case the left figure reports the overall iteration process and the right figure reports the corresponding zoomed tail process of the last 100000 iterations. It can be seen that in all cases, TDC iterations with the same slow time-scale stepsize σ share similar error decay rates (see the left plot), and the difference among the fast time-scale parameter ν is reflected by the behavior of the error convergence tails (see the right plot). We observe that ν = 23σ yields the best error decay rate. This corroborates Theorem 1, which illustrates that the fast time-scale stepsize βt with parameter ν affects only the tracking error term in (3), that dominates the error decay rate asymptotically. 4.2 Constant Stepsize vs Diminishing Stepsize In this subsection, we compare the error decay of TDC under diminishing stepsize with that of TDC under four different constant stepsizes. For diminishing stepsize, we set cα = cβ and σ = 32ν, and tune their values to the best, which are given by cα = cβ = 1.8, σ = 32ν = 0.45. For the four constant-stepsize cases, we fix α for each case, and tune β to the best. The resulting parameter settings are respectively as follows: αt = 0.01, βt = 0.006; αt = 0.02, βt = 0.008; αt = 0.05, βt = 0.02; and αt = 0.1, βt = 0.02. The results are reported in Figure 2, in which for both the training and tracking errors, the left plot illustrates the overall iteration process and the right plot illustrates the corresponding zoomed error tails. The results suggest that although some large constant stepsizes (αt = 0.05, βt = 0.02 and αt = 0.1, βt = 0.02) yield initially faster convergence than diminishing stepsize, they eventually oscillate around a large neighborhood of θ∗ due to the large tracking error. Small constant stepsize (αt = 0.02, βt = 0.008 and αt = 0.01, βt = 0.006) can have almost the same asymptotic accuracy as that under diminishing stepsize, but has very slow convergence rate. We can also observe strong correlation between the training and tracking errors under constant stepsize, i.e., larger training error corresponds to larger tracking error, which corroborates Theorem 2 and suggests that the accuracy of TDC heavily depends on the decay of the tracking error ‖zt‖2. 4.3 Blockwise Diminishing Stepsize In this subsection, we compare the error decay of TDC under blockwise diminishing stepsize with that of TDC under diminishing stepsize and constant stepsize. We use the best tuned parameter settings as listed in Section 4.2 for the latter two algorithms, i.e., cα = cβ = 1.8 and σ = 32ν = 0.45 for diminishing stepsize, and αt = 0.1, βt = 0.02 for constant stepsize. We report our results in Figure 3. It can be seen that TDC under blockwise diminishing stepsize converges faster than that under diminishing stepsize and almost as fast as that under constant stepsize. Furthermore, TDC under blockwise diminishing stepsize also has comparable training error as that under diminishing stepsize. Since the stepsize decreases geometrically blockwisely, the algorithm approaches to a very small neighborhood of θ∗ in the later blocks. We can also observe that the tracking error under blockwise diminishing stepsize decreases rapidly blockwisely. 
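For completeness, here is a minimal sketch of how the blockwise schedule compared in this subsection can be constructed in the spirit of Algorithm 1 and Theorem 3: the stepsizes are halved across blocks and each block length is chosen so that the per-block linear contraction factor is at least 4. Treating the contraction constant |λ_x|, the ratio η, and the initial stepsize as known is an assumption; in practice they require the extra tuning noted in Section 3.3.

```python
import numpy as np

def blockwise_schedule(alpha_0, eta, lam_x, num_blocks):
    """Construct per-block stepsizes (alpha_s, beta_s) and block lengths T_s.

    In the spirit of Theorem 3: alpha_s is halved across blocks (so the target
    error eps_s shrinks geometrically), beta_s = eta * alpha_s, and T_s is chosen
    so that (1 - |lam_x| * alpha_s)^{T_s} <= 1/4 within each block.
    """
    schedule = []
    alpha_s = alpha_0
    for s in range(1, num_blocks + 1):
        T_s = int(np.ceil(np.log(4.0) / -np.log(1.0 - abs(lam_x) * alpha_s)))
        schedule.append({"block": s, "alpha": alpha_s, "beta": eta * alpha_s, "T": T_s})
        alpha_s *= 0.5
    return schedule

# Illustrative constants (assumptions): |lam_x| = 0.1, eta = 5, alpha_0 = 0.1.
for blk in blockwise_schedule(alpha_0=0.1, eta=5.0, lam_x=0.1, num_blocks=6):
    print(blk)
```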
4.4 Robustness to Blocksize In this subsection, we investigate the robustness of TDC under blockwise diminishing stepsize with respect to the blocksize. We consider the same setting as in Section 4.3, and perturb all blocksizes by certain percentages of the original blocksize suggested in the algorithm. It can be seen from Figure 4 that the error decay rate changes only very slightly even with a substantial change in the blocksize. 5 Conclusion In this work, we provided the first non-asymptotic analysis for the two time-scale TDC algorithm over Markovian sample path. We developed a novel technique to handle the accumulative tracking error caused by the two time-scale update, using which we characterized the non-asymptotic convergence rate with general diminishing stepsize and constant stepsize. We also proposed a blockwise diminishing stepsize scheme for TDC and proved its convergence. Our experiments demonstrated the performance advantage of such an algorithm over both the diminishing and constant stepsize TDC algorithms. Our technique for non-asymptotic analysis of two time-scale algorithms can be applied to studying other off-policy algorithms such as actor-critic [18] and gradient Q-learning algorithms [19]. Acknowledgment The work of T. Xu and Y. Liang was supported in part by the U.S. National Science Foundation under the grants CCF-1761506, ECCS-1818904, and CCF-1801855.
1. What is the focus of the paper in terms of time-scale TDC? 2. What are the strengths of the proposed approach compared to other methods? 3. What are the concerns regarding the blockwise diminishing step-size method? 4. How does the author address the concern about blocksize hyperparameter? 5. What is the impact of the blocksize on the algorithm's performance?
Review
Review To my knowledge, the proposed analysis of two time-scale TDC under diminishing step-size and constant step-size is novel. However, I need to acknowledge that I am not familiar with all related work in this area and I did not go through the proof details. Later, a blockwise diminishing step-size method is proposed to combine the advantages of constant step-size and diminishing step-size. However, to me, it looks like this ideal property needs a careful choice of blocksize, and Thm 3 seems to confirm that. I have some concern about how to set the blocksize properly here without prior knowledge, and how robust the algorithm is with respect to the blocksize hyperparameter.
=== After Rebuttal ===
I'm glad that the authors could show that the algorithm's performance is very robust to the blocksize. The authors' further explanation addressed my concern, so I will change my score accordingly.
NIPS
Title Constrained GPI for Zero-Shot Transfer in Reinforcement Learning Abstract For zero-shot transfer in reinforcement learning where the reward function varies between different tasks, the successor features framework has been one of the popular approaches. However, in this framework, the transfer to new target tasks with generalized policy improvement (GPI) relies on only the source successor features [6] or additional successor features obtained from the function approximators’ generalization to novel inputs [12]. The goal of this work is to improve the transfer by more tightly bounding the value approximation errors of successor features on the new target tasks. Given a set of source tasks with their successor features, we present lower and upper bounds on the optimal values for novel task vectors that are expressible as linear combinations of source task vectors. Based on the bounds, we propose constrained GPI as a simple test-time approach that can improve transfer by constraining action-value approximation errors on new target tasks. Through experiments in the Scavenger and Reacher environment with state observations as well as the DeepMind Lab environment with visual observations, we show that the proposed constrained GPI significantly outperforms the prior GPI’s transfer performance. Our code and additional information are available at https://jaekyeom.github.io/projects/cgpi/. 1 Introduction For sequential decision making, deep reinforcement learning (RL) has been shown to be effective for various types of problems including games [31] and robotics [20, 22]. With such great successes, interest in multi-task RL has also surged, where its goal is to train a single agent that can efficiently solve multiple varying tasks. In multi-task RL, we focus on the transfer learning setting, where the agent learns shared structural knowledge from a set of source tasks during training, and exploits and generalizes them in new, unseen target tasks at test time. One popular approach to transfer in RL is to leverage the successor features (SFs) framework [1, 6, 7, 12, 25], which transfers policies learned on source tasks to target tasks, where the tasks share the same environment dynamics but differ in their reward functions. Successor features build a representation of value functions decoupled from reward functions, and transfer to the tasks with arbitrary reward functions by taking an inner product with corresponding task vectors. They utilize generalized policy improvement (GPI) [6], which generalizes policy improvement with multiple policies and provides the performance lower bounds for GPI policies. However, GPI does not take into account any information from the smoothness of optimal actionvalue functions with respect to task vectors. Tackling this issue, Borsa et al. [12] propose universal successor features approximators (USFAs), which can estimate the optimal successor features for novel task vectors. Nevertheless, the function approximator can make high approximation errors on the task vectors, especially when the new task vectors are distant from the source task vectors. For instance, when USFAs are trained with source tasks to get close to given goals, they may not 36th Conference on Neural Information Processing Systems (NeurIPS 2022). generalize well to the target tasks where the agent should get away from the given goals. 
That is, if the elements of target task vectors have the opposite signs from the source task vectors, USFAs could output successor features with high approximation errors. To improve the successor features approximation of USFAs for the new tasks, we aim at bounding value approximation errors on the new target tasks. We first introduce a new theorem on bounding the optimal values for the tasks that are expressible as linear combinations of source tasks. Our theorem generalizes the conical combination condition used by the prior theorem by Nemecek and Parr [25]. Using our new bounds as constraints, we can train the successor features approximators whose action-value approximation errors on novel tasks are bounded. We extend this idea so that we accomplish a similar effect with no additional training; as a result, we propose constrained GPI as a test-time approach to bounding the approximation errors. Despite its simplicity and no need for modification to the training procedure, we empirically show that constrained GPI attains large performance improvements compared to the original GPI in multiple environments, including the Scavenger [8, 9] and Reacher [13] environments with state observations and the DeepMind Lab [7, 10, 12] environment with first-person visual observations. Our main contributions can be summarized as follows: • We present a novel theorem on lower- and upper-bounding optimal values for novel tasks that can be expressed as linear combinations of source tasks. It extends and generalizes the previous theorem for conical combinations by Nemecek and Parr [25], to enable a broader application of the bounds. • Based on our new theorem, we propose constrained GPI as a simple test-time approach that can improve transfer to novel tasks by constraining action-value approximation errors on new target tasks, with no modification to the training procedure. • We empirically show that our approach can improve the performance over the GPI baselines by large margins in the Scavenger, Reacher and DeepMind Lab environments. We also provide analyses for a better understanding of our results. 2 Preliminaries We describe the problem setting and background on successor features and universal successor features approximators. We refer the reader to Appendix for an in-depth discussion of related work. 2.1 The Zero-Shot Transfer Problem in RL We define a Markov Decision Process (MDP) as M ≡ (S,A, P,R, γ). S and A are the state and action spaces, respectively. P (·|s, a) defines the transition probability distribution of the next states given s ∈ S and a ∈ A. R(s, a, s′) is the reward for taking action a at state s resulting in s′, and γ ∈ (0, 1] is the discount factor. We assume that rewards are bounded. We consider the zero-shot transfer problem; as in [6], each task is defined by its task vector w ∈ Rd, and only the reward functions differ across tasks, being decomposed as Rw(s, a, s ′) = φ(s, a, s′)>w, (1) where φ(s, a, s′) ∈ Rd is the features of (s, a, s′). We denote the set of source task vectors as T , which is used for training. At test time, we evaluate the transferred policy on each target taskw′ /∈ T with no additional update of pre-trained components. We examine both the possible scenarios: (i) the features φ(s, a, s′) are available to the agent [6, 12] and (ii) no pre-defined features are available and the agent needs to construct its own features and task vectors. We first introduce the formulation for (i) in Section 2.2 and then its variant for (ii) in Section 2.3. 
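As a concrete (toy) illustration of the reward decomposition in Equation (1), with made-up numbers rather than an environment from the paper: a single transition feature vector φ(s, a, s′) yields the reward on every task via one inner product with the task vector.

```python
import numpy as np

d = 4
phi = np.array([0.0, 1.0, 0.0, 0.0])         # features of one transition (s, a, s')
source_tasks = list(np.eye(d))                # standard basis task vectors
target_task = np.array([-1.0, 1.0, -1.0, 1.0])

# The same transition produces a different reward on every task: r = phi^T w (Eq. (1)).
for i, w in enumerate(source_tasks):
    print(f"reward on source task e_{i}: {phi @ w:+.1f}")
print(f"reward on target task {target_task}: {phi @ target_task:+.1f}")
```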
2.2 Successor Features and Universal Successor Features Approximators We now review successor features (SFs) [6] and how they are transferred to different tasks. Equation (1) allows expressing the action-value function for policy π on task w as Qπw(s, a) = Eπ [ ∞∑ i=0 γirt+i ∣∣∣St= s,At= a] = Eπ [ ∞∑ i=0 γiφt+i ∣∣∣St= s,At= a]>w = ψπ(s, a)>w, (2) where φt = φ(st, at, st+1) ∈ Rd. Here, ψπ(s, a) ∈ Rd is called the SFs for policy π at (s, a), and taking its inner product with an arbitrary task w results in the action-value for π on w; i.e., Qπw(s, a). Thanks to the analogy between (rewards r, action-value functions Q) and (features φ, successor features ψ), the Bellman equation applies to SFs and thus they can be trained similarly to the way action-value functions are learned; e.g., Q-learning. The policy improvement theorem [11] states that a new policy that takes a greedy action according to a given policy’s value function at each state performs at least as well as the original policy. Generalized policy improvement (GPI) [6] extends policy improvement to the case where the value functions of multiple policies are available. Given a task w′, a set of policies π1, . . . , πn, their action-value functions Qπ1w′ , . . . , Q πn w′ and their approximations Q̃ π1 w′ , . . . , Q̃ πn w′ , the GPI policy is defined as πGPI(s) ∈ argmax a max i Q̃πiw′(s, a). (3) Barreto et al. [6] suggest that QπGPIw′ (s, a) ≥ maxiQπiw′(s, a) − 21−γ maxi ∥∥∥Qπiw′ − Q̃πiw′∥∥∥∞. They also provide the upper bound on the suboptimality of the GPI policy as ‖Q∗w′ −QπGPIw′ ‖∞ ≤ 2 1− γ { min i ‖φ‖∞‖w′ −wi‖+max i ∥∥∥Qπiw′ − Q̃πiw′∥∥∥∞} , (4) where each πi is an optimal policy for wi. While the GPI theorem allows the transfer of learned successor features to arbitrary tasks that share the same environment dynamics, it is limited in the following aspect. GPI uses the action-values for source tasks on target tasks based on the reward decomposition assumption (Equation (1)) i.e., Q̃πiw′(s, a) = ψ̃ πi(s, a)>w′ for each i. However, it does not take any advantage of the smoothness of the optimal action-value functions with respect to different task vectors [12]. To overcome this limitation, Borsa et al. [12] introduce universal successor features approximators (USFAs). Inspired by universal value functions (UVFs) [30], they extend the original successor features with policy vectors z ∈ Rl as input to their approximators. More specifically, universal successor features (USFs) are defined to satisfy ψπz (s, a) ≡ ψ(s, a,z) ≈ ψ̃(s, a,z), (5) where z is a policy vector for the policy πz , and USFAs ψ̃ are the learned approximators of USFs ψ. Naturally, the value functions are expressed as Q̃πzw (s, a) = ψ̃(s, a,z) >w ≈ ψ(s, a,z)>w = Qπzw (s, a). (6) Each reward function induces optimal policies, which can be encoded using the corresponding task vectors. That is, one can simply choose to define the policy vector space to be the same as the task vector space (l = d) and let z = w be a policy vector of an optimal policy for task w. Then, πw and Qπww denote an optimal policy for w and its action-value function, respectively. The training of USFAs is similar to that of SFs, except for that it additionally involves sampling of policy vectors given task vectors. The update of USFAs at the k-th iteration is ψ̃(k+1) ← argmin ψ Ew∼T ,z∼Dz(·|w),(s,a,s′)∼µ [∥∥∥φ(s, a, s′) + γψ̃(k)(s′, a′, z)− ψ(s, a,z)∥∥∥2] (7) for a′ = argmaxb ψ̃ (k)(s′, b,z)>z. 
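As a brief aside, the GPI rule in Equation (3) can be sketched in tabular form as follows; the array shapes and numbers are illustrative assumptions.

```python
import numpy as np

def gpi_action(psi_tilde, w_prime):
    """GPI action selection, Eq. (3): argmax_a max_i psi_tilde[i, a] @ w_prime.

    psi_tilde: array of shape (num_policies, num_actions, d) holding the
    (approximate) successor features of each source policy at the current state.
    """
    q = psi_tilde @ w_prime               # shape (num_policies, num_actions)
    return int(np.argmax(q.max(axis=0)))  # best action under the max over policies

# Illustrative example: 2 source policies, 3 actions, d = 2 features.
rng = np.random.default_rng(0)
psi_tilde = rng.uniform(0.0, 1.0, size=(2, 3, 2))
w_prime = np.array([1.0, -1.0])
print("GPI action:", gpi_action(psi_tilde, w_prime))
```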
Dz(·|w) is the policy vector distribution; for instance,N (w, σI) can be used for better training with diversified inputs. µ is the transition sampling distribution, which involves the GPI policy of the samples from Dz(·|w) or a replay buffer. We use gradient descent to update the parameters. USFAs provide a benefit that they allow a GPI policy to use an arbitrary set of policies {πz}z∈C as πGPI(s) ∈ argmaxamaxz∈C Q̃πzw′(s, a). However, the generalization of USFAs to new policy vectors depends on a function approximator ψ, and thus if C contains policy vector(s) distant from source vectors, a GPI policy with C may have high approximation errors and perform poorly or even worse than a GPI policy with only source vectors [12], as will be demonstrated later in our experiments. 2.3 Universal Successor Features Approximators with Learned φ For the scenario where features φ’s are not provided to the agent1, we adopt the problem formulation from Ma et al. [23] where for each task the task information g ∈ G is given to the agent. Although the task information g, unlike a task vector, cannot be directly combined with successor features for transfer to a novel task, zero-shot inference could still be possible by leveraging the information about the task. Specifically, we not only perform the original learning of ψ̃ letting the task information induce policy vectors instead, but also train φ̃ and w̃ to approximate the reward decomposition with transition samples. As done in [23], we update ψ̃, φ̃ and w̃ using gradient descent to minimize Eg∼T g,z∼Dgz(·|g),(s,a,r,s′)∼µ [ Lψ + LQ ] for Lψ := 1 d ∥∥∥φ̃(s, a, s′) + γψ̃(k)(s′, a′, z)− ψ̃(s, a,z)∥∥∥2 (8) LQ := { r + γψ̃(k)(s′, a′, z)>w̃(k)(z)− ψ̃(s, a,z)>w̃(z) }2 (9) and a′ = argmaxb ψ̃ (k)(s′, b,z)>w̃(k)(z) at the k-th iteration. The superscript (k) denotes the target, T g is the source task information set, Dgz(·|g) is the policy vector distribution conditioned on the task information and µ is the sampling distribution. 3 Constrained GPI for Improved Zero-Shot Transfer of Successor Features To mitigate the aforementioned issue of the possibly unlimited approximation errors of USFAs, we propose a simple yet effective method that improves the transfer of successor features by further leveraging the reward decomposition structure in Equation (1). We first present, under a more relaxed condition, lower and upper bounds on the optimal values for novel task vectors that are expressed as linear combinations of source task vectors (Section 3.1). Then, we propose a novel approach called constrained GPI, which effectively confines the approximated action-values inside the computed lower and upper bounds (Section 3.2). 3.1 Bounding Optimal Values for New Tasks Theorem 1 of [25] provides the lower and upper bounds on the value of an optimal policy for a new task, whose vector w′ is a positive conical combination of source task vectors i.e., w′ = ∑ w∈T αww such that αw ≥ 0,∀w ∈ T and∑w∈T αw > 02. However, for a broad application of such bounds, the positive conical combination condition can be too restrictive, since the resulting bounds only apply to the task vectors that appear inside the conical hull of source task vectors. Therefore, we suggest a more relaxed theorem, which holds for an arbitrary task vector w′ that is expressed as a linear combination of the source task vectors i.e., w′ = ∑ w∈T αww for αw ∈ R,∀w ∈ T . 
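To illustrate the distinction between conical and linear combinations drawn above, the following sketch finds coefficients α expressing a target task w′ over the source tasks and checks whether a nonnegative (conical) solution exists. Using the standard-basis source tasks and SciPy's nonnegative least squares here is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import nnls

# Source task vectors as rows; here the standard basis of R^d (an illustrative
# assumption matching the experimental setup described later, not a requirement).
d = 3
W_src = np.eye(d)
w_prime = np.array([-1.0, 1.0, 1.0])

# Linear-combination coefficients: solve W_src^T alpha = w_prime (least squares).
alpha, *_ = np.linalg.lstsq(W_src.T, w_prime, rcond=None)
print("linear coefficients:", alpha)

# Conical check: best nonnegative coefficients and the residual they leave behind.
alpha_nn, residual = nnls(W_src.T, w_prime)
print("nonnegative coefficients:", alpha_nn, "| inside conical hull:", residual < 1e-8)
```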
Figure 1 shows an example that compares the task space coverage of conical [25] and 1One typical example presented later in our experiments is the case where the agent observes visual inputs. Then, it is not trivial to derive features that linearly decompose reward functions. 2We slightly abuse the notation and let αw denote the coefficient for vector w. our linear combinations. With our extended task space coverage, we can apply the bounds to more general target tasks outside of the conical hull, which will be further discussed in the next section. We define πw1w2 to be an upper bound on the approximation error of Q̃ πw1 w2 for arbitrary tasks w1,w2 such that |Qπw1w2 (s, a)− Q̃ πw1 w2 (s, a)| ≤ πw1 w2 (s, a), ∀(s, a) ∈ S ×A, (10) and we present our theorem as follows. Theorem 1. Given a task vectorw′ = ∑ w∈T αww for αw ∈ R,∀w ∈ T , for all state-action pairs (s, a) ∈ S ×A, the action-value of πw′ , which is an optimal policy for task w′, on task w′ is lowerand upper-bounded as Lw′,T (s, a) ≤ Qπw′w′ (s, a) ≤ Uw′,T ,α(s, a) for Lw′,T (s, a) := max w∈T [ Q̃πww′ (s, a)− πww′ (s, a) ] , (11) Uw′,T ,α(s, a) := ∑ w∈T max { αw ( Q̃πww (s, a) + πw w (s, a) ) , αwCw(s, a) } , (12) for someCw(s, a) ≤ minπ Qπw(s, a) such asCw(s, a) = 11−γ rminw where rminw is the minimum reward on w i.e., rminw = min(s,a)∈S×ARw(s, a) and α = {αw}w∈T . Proof. For the derivation of the lower bound Lw′,T (s, a), since Q πw′ w′ is the optimal action-value function for task w′ and Qπw′w′ (s, a) ≥ Qπww′ (s, a) for arbitrary task w and state-action pair (s, a), Q πw′ w′ (s, a) ≥ max w∈T Qπww′ (s, a) ≥ max w∈T [ Q̃πww′ (s, a)− πww′ (s, a) ] . (13) For the upper bound Uw′,T ,α(s, a), we use that Q πw′ w (s, a) ≤ Qπww (s, a) and Q πw′ w (s, a) ≥ minπ Q π w(s, a) ≥ Cw(s, a) for arbitrary task w and state-action pair (s, a), which leads to Q πw′ w′ (s, a) = ∑ w∈T αw (Q πw′ w (s, a)− Cw(s, a)) + ∑ w∈T αwCw(s, a) (14) ≤ ∑ w∈T max {αw (Qπw′w (s, a)− Cw(s, a)) , 0}+ ∑ w∈T αwCw(s, a) (15) ≤ ∑ w∈T max {αw (Qπww (s, a)− Cw(s, a)) , 0}+ ∑ w∈T αwCw(s, a) (16) = ∑ w∈T {max {αw (Qπww (s, a)− Cw(s, a)) , 0}+ αwCw(s, a)} (17) = ∑ w∈T max {αwQπww (s, a), αwCw(s, a)} (18) ≤ ∑ w∈T max { αw ( Q̃πww (s, a) + πw w (s, a) ) , αwCw(s, a) } . (19) In Equation (19), for each w ∈ T , the sign of αw determines which of the two terms in the max operator is used. If αw ≥ 0, the max operator selects the first term, whereas a negative αw lets the second term be used. Note that our Theorem 1 recovers Theorem 1 of [25] when w′ is a conical combination of w’s from T i.e., αw ≥ 0,∀w ∈ T . Intuitively, this theorem states the condition that the optimal action-value for an arbitrary target task must satisfy, given the optimal successor features for the source tasks. The theorem is applicable to different problems wherever bounding of optimal values is useful. One example is policy cache construction, where the agent should decide whether to reuse existing policies in the cache set or learn a new one given each new task [25]. As will be shown in the next section, we employ the bounding as a constraint on the action-values for novel target tasks, for the guidance of transfer. In Sections 4.1 and 4.2 and appendix B, we empirically show that the application of our Theorem 1 can significantly improve the performance in the cases where target tasks are outside the conical hull of source tasks. 
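The bounds of Theorem 1 can be evaluated directly once source successor features are available. Below is a tabular sketch that computes L_{w′,T}(s, a) and U_{w′,T,α}(s, a) at a single state-action pair, ignoring the approximation-error terms ε (as is also done in practice later in the paper) and using C_w(s, a) = r^min_w/(1 − γ); all numeric values are illustrative assumptions.

```python
import numpy as np

def theorem1_bounds(psi_src, w_src, alpha, w_prime, r_min, gamma):
    """Lower/upper bounds on Q^{pi_{w'}}_{w'}(s, a) from Theorem 1 (error terms ignored).

    psi_src: (n_src, d) successor features psi^{pi_w}(s, a) of each source policy
             at the state-action pair of interest.
    w_src:   (n_src, d) source task vectors; alpha: coefficients with
             w_prime = alpha @ w_src; r_min: (n_src,) minimum reward per source task.
    """
    lower = np.max(psi_src @ w_prime)                            # Eq. (11)
    q_src_on_src = np.einsum("id,id->i", psi_src, w_src)         # Q^{pi_w}_w(s, a)
    c = r_min / (1.0 - gamma)                                    # C_w(s, a)
    upper = np.sum(np.maximum(alpha * q_src_on_src, alpha * c))  # Eq. (12)
    return lower, upper

# Illustrative numbers: two source tasks (standard basis of R^2), target w' = (1, -1).
psi_src = np.array([[5.0, 1.0],     # psi^{pi_{e1}}(s, a)
                    [1.0, 4.0]])    # psi^{pi_{e2}}(s, a)
w_src = np.eye(2)
w_prime = np.array([1.0, -1.0])
alpha = np.array([1.0, -1.0])       # w' = 1*e1 - 1*e2
r_min = np.array([0.0, 0.0])
L, U = theorem1_bounds(psi_src, w_src, alpha, w_prime, r_min, gamma=0.9)
print(f"lower bound: {L:.2f}, upper bound: {U:.2f}")
```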
3.2 Constrained Training and Constrained GPI As described in Section 2.2, the universal successor features approximators (USFAs) [12] improve the original successor features so that arbitrary policy vectors, including the ones for target tasks, can be used for GPI. However, the use of arbitrary policy vectors with USFAs solely relies on the generalization power of the approximators (e.g., neural networks). Thus, the obtained successor features on novel tasks might contain high approximation errors, which could make the GPI policy perform poorly. Our high-level idea to tackle the issue is to exploit the reward decomposition structure in Equation (1) even for obtaining SFs for new tasks, instead of solely relying on the approximators. We employ the lower and upper bounds on optimal values from Theorem 1 to enforce the bounds on the approximate successor features. As a result, the approximation errors can be reduced by restricting the estimated optimal values to be inside those bounds around the optimal values, which can prevent the use of erroneous values during the transfer to unseen tasks. For now, we will first introduce how to train the successor features approximators to output the successor features that satisfy the bounds on novel tasks. Then, we will point out that an analogous effect can be accomplished by modifying only the inference algorithm, and propose constrained GPI as a simple yet effective test-time approach to improving zero-shot transfer to novel tasks. Constrained training of SF approximators. In the original training of USFAs, the approximators are learned with a set of source tasks T in Equation (7). We propose to guide the training by employing Theorem 1; we impose constraints for the approximators using the lower and upper bounds on the optimal values for arbitrary linear combinations of source tasks. Specifically, for the training of USFAs, we use Equation (7) but with the following constraints: Lw′,T (s, a) ≤ ψ̃(s, a,w′)>w′ ≤ Uw′,T ,ξ(w′,T ,s,a)(s, a) for w′ ∈ W, (20) where (s, a) is the same sample as the main objective of Equation (7). W is a set of task vectors for the constraints, which can be independent of the source task set T , and ξ(·) determines the coefficients α given a target taskw′ and T . We will explain later how to determine ξ(·). W can be any subset of the linear span of source task vectors, but practically, we randomly sample a number of vectors from the span at each update. Since the targets of the constraints are not fixed with respect to both w′ and (s, a) throughout the training, we use penalty terms (or soft constraints) that linearly penalize the constraint violations as 1 |W| ∑ w′∈W ({ Lw′,T (s, a)− Q̃πw′w′ (s, a) } + + { Q̃ πw′ w′ (s, a)− Uw′,T ,ξ(w′,T ,s,a)(s, a) } + ) , where {x}+ denotes max{x, 0}. The constrained training suggested above can make the approximators comply with the bounds for any tasks without requiring any additional interactions with the environment. However, it has some downsides. First, since it is a new training procedure, existing pre-trained models cannot be used. It requires some additional computational cost compared to the naive training of successor features approximators. Second, the enforcement of the constraints for training can introduce additional hyperparameters (e.g., the weight coefficient for the penalty terms). Thus, suboptimal hyperparameters may introduce either instability in the training or a decrease in the performance. Test-time constrained GPI. 
Our idea starts with the observation that in the constrained training, the learned successor features from source tasks are considered the “trustworthy” features for the constraints, because the USFAs are trained on the source tasks. Besides, only the source successor features are used for computing the constraints for all the other tasks. It implies that the learning of the source successor features better not be affected by other criteria, and more accurate source successor features would produce better constraints for other tasks with smaller errors. Based on the implication, we propose constrained GPI, which can not only overcome the limitation of USFAs as done by the aforementioned constrained training but also have two additional practical merits: (i) it is computationally simpler, and (ii) it is a test-time approach with no training. Simply put, we propose replacing the usual GPI policy with the constrained GPI policy as πCGPI(s) ∈ argmax a max z∈C [ min { max { Q̃πzw′(s, a), Lw′,T (s, a) } , Uw′,T ,ξ(w′,T ,s,a)(s, a) }] , (21) where the target task w′ is expressible as a linear combination of the source tasks and ξ(·) again outputs α given w′ and T as in Equation (20). C is a set of policies that we can freely choose when applying the constrained GPI. The constrained GPI policy selects the actions that maximize the maximum action-values as the original GPI policy does but also caps the values with the lower and upper bound constraints derived from the source successor features. The upper bound constraint fixes the overestimation of values computed with approximate successor features for either the target task w′ or any other tasks used for constrained GPI. The lower bound constraint ensures that action-values on the target task for the greedy action selection are at least as close to the optimal target action-values as the lower bounds. The approximation error terms in the lower and upper bounds i.e., πww′ (s, a) and πw w (s, a) in Theorem 1 could be ignored in practice, as long as the approximation errors of the source successor features are sufficiently small. Also, we can obtain the tightest upper bound by defining ξ(·) as ξ(w′, T , s, a) := argmin {αw}w∈T Uw′,T ,{αw}w∈T (s, a) subject to w ′ = ∑ w∈T αww. (22) The objective Uw′,T ,{αw}w∈T (s, a) is the sum of the piecewise linear functions. Thus, Equation (22) can be solved with linear programming. We observe that using the lower bound constraint with Lw′,T (s, a) is equivalent to including the successor features for source tasks in the input to the constrained GPI; i.e., T ⊆ C. Also, since Lw′,T (s, a) ≤ Uw′,T ,ξ(w′,T ,s,a)(s, a), there would be no difference between GPI and constrained GPI when C = T . Thus, in our experiments, we mainly use C = {w′}, which is equivalent to using C = T ∪ {w′}. 4 Experiments 4.1 Scavenger Experiments We start our experiments in the Scavenger environment [8, 9], which can assess our approach with minimal influence from external causes. In Scavenger, the agent is positioned at one of the cells in a G×G grid, and the goal is to maximize the return by collecting objects. Both the agent and objects are spawned at random locations, and there are d classes of objects where the class determines the value of the reward. The state space is S = {0, 1}G×G×(d+1), where the first d channels describe the current locations of the objects on the map and the last channel specifies the walls where the agent cannot go and objects do not appear. 
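Before continuing with the experimental setup, here is a small sketch of the constrained GPI evaluation from Eqs. (21)-(22): the approximate target-task action-values are clipped to [L, U], and the tightest upper bound is obtained by a small linear program over the coefficients α (re-cast with one auxiliary variable per source task). The tabular arrays, the use of SciPy's linprog, and all numbers are illustrative assumptions rather than the paper's implementation; for brevity a single (L, U) pair is applied to all actions, whereas the method computes the bounds per state-action pair.

```python
import numpy as np
from scipy.optimize import linprog

def tightest_upper_bound(q_src, c_src, w_src, w_prime):
    """Solve Eq. (22): min_alpha sum_w max(alpha_w * q_w, alpha_w * c_w)
    s.t. sum_w alpha_w * w = w', as an LP with one auxiliary variable per source task.

    q_src: (n,) values Q~^{pi_w}_w(s, a); c_src: (n,) constants C_w(s, a);
    w_src: (n, d) source task vectors; w_prime: (d,) target task vector.
    """
    n, d = w_src.shape
    cost = np.concatenate([np.zeros(n), np.ones(n)])           # minimize sum of u
    # alpha_w * q_w - u_w <= 0  and  alpha_w * c_w - u_w <= 0
    A_ub = np.zeros((2 * n, 2 * n))
    A_ub[:n, :n] = np.diag(q_src);  A_ub[:n, n:] = -np.eye(n)
    A_ub[n:, :n] = np.diag(c_src);  A_ub[n:, n:] = -np.eye(n)
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([w_src.T, np.zeros((d, n))])               # sum_w alpha_w w = w'
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=w_prime,
                  bounds=[(None, None)] * (2 * n), method="highs")
    return res.fun                                              # U_{w',T,xi(...)}(s, a)

def constrained_gpi_action(q_target, lower, upper):
    """Eq. (21) with C = {w'}: clip the target-task action-values to [L, U], then argmax."""
    return int(np.argmax(np.clip(q_target, lower, upper)))

# Illustrative numbers: 2 source tasks (standard basis), target w' = (1, -1), 3 actions.
w_src = np.eye(2)
w_prime = np.array([1.0, -1.0])
q_src = np.array([5.0, 4.0])             # Q~^{pi_w}_w(s, a) per source task
c_src = np.array([0.0, 0.0])             # e.g. r_min / (1 - gamma) per source task
U = tightest_upper_bound(q_src, c_src, w_src, w_prime)
L = 4.0                                  # max_w psi^{pi_w}(s, a)^T w' (precomputed)
q_target = np.array([6.5, 3.0, 4.5])     # USFA estimates Q~^{pi_{w'}}_{w'}(s, .)
print(f"U = {U:.2f}; constrained GPI action:", constrained_gpi_action(q_target, L, U))
```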
There are four actions available: A = {UP, DOWN, LEFT, RIGHT}, and the agent picks up an object by visiting the cell of the object, which spawns a new object of a random class at a random location. The feature φ(s, a, s′) ∈ {0, 1}d is a one-hot vector whose element represents whether the agent picks up an object of that type or not within the transition. The task vector w ∈ Rd determines the reward values for the d different classes of objects. Please see Barreto et al. [9] for the full details. We evaluate the zero-shot transfer performance of different approaches i.e., we first train USFAs as proposed in [12], and measure the performance of GPI and constrained GPI policies that use the same set of USFAs on target tasks with no further policy updates. We set G = 11 and use 20 objects in total with the different numbers of classes; d = 2 and d = 4. With d = 4, we also test the USFAs that are learned with the constrained training for a comparison. We use the standard basis vectors of Rd as the set of source tasks as done in [12], and evaluate agents on the set of target tasks defined by {−1, 1}d. Therefore, all the target tasks except for the all-ones vector 1 are not covered by the conical hull of source tasks, which requires Theorem 1 for bounding of values. We train eight USFAs agents for 1M steps, and evaluate them on each target vector 10 times with a fixed set of 10 random seeds. To be invariant to the reward scale differences between different tasks, we normalize the scores (or returns) from the environment by the minimum and maximum scores with respect to all the agents’ evaluation episodes on each task. Figures 2 and 3 compare the performance of the USFAs agents with GPI and constrained GPI for exploitation, following the evaluation scheme suggested by [2]. Although they use the same set of trained USFAs, the constrained GPI brings a notable performance improvement in comparison with the original GPI. Also, Figure 3 suggests that the constrained GPI, the test-time method, can match or even outperform the agents learned with the constrained training.
[Plot residue removed: normalized-score aggregate metrics (Median, IQM, Mean, Optimality Gap) comparing CGPI-T (ours), CT (ours) + GPI-ST, CT (ours) + GPI-T, CT (ours) + GPI-S, GPI-ST, GPI-T, and GPI-S.]
One possible explanation is that the constrained training might experience some instability in learning depending on the choice of the hyperparameters, as described in Section 3.2. In the first and the second columns of Table 1, we present the proportions of the action-values that are changed by the lower and upper bounds of the constrained GPI, measured for the evaluation on Scavenger. The third column shows the proportions of resulting greedy actions changed by them. It implies that USFAs i.e., the function approximators of successor features, may not satisfy the optimal value bounds presented in Theorem 1, and applying the bounds could change a fair proportion of greedy actions to improve the performance.
4.2 DeepMind Lab Experiments with Learned φ
For evaluation of our approach in a more complex and realistic setting, we employ DeepMind Lab [7, 10, 12] and conduct experiments in a first-person view 3D environment. In a single room, a goal object is placed arbitrarily, and the objective is to reach the goal before the episode ends where its location changes between tasks. Figure 4 shows an example scene that the agent sees with the goal object in red.
At every time step, the agent observes an 84 × 84 × 3 image from the environment and outputs one of 45 possible actions, which include 5, 3 and 3 choices for LOOK_LEFT_RIGHT_PIXELS_PER_FRAME, STRAFE_LEFT_RIGHT and MOVE_BACK_FORWARD controls, respectively. Since observations are in the firstperson view, the goal object may not be seen by the agent, which makes transfer given the task information g critical to the success of the tasks. In each task, we divide the room into an 11× 11 grid and place the goal object in one of the cells. The task information g is a two-dimensional vector that contains the coordinate of the goal in the grid. Starting at the center of the room, the agent receives a reward of one if it reaches the goal within the episode horizon or no rewards otherwise. Therefore, the reward functions are sparse. For these experiments where the agent observes rendered images rather than the underlying states, it may not be viable to define features φ’s and task vectors w’s that linearly decompose reward functions. Therefore, we train agents with the learning of φ̃ and w̃ from samples from the source tasks with d = 2 as described in Section 2.3. Inspired by Hong et al. [18], we examine zero-shot transfer with the GPI and constrained GPI using two transfer settings: “left-to-right” and “near-to-far”. In the “left-to-right” setting, the agent is trained on the source tasks whose goals are sampled from the left half of the room and is tested on the target tasks with goals from the right half. In the “near-to-far” setting, the source tasks have the goals within an L∞ = 2 distance from the center of the room, and target tasks set the goals farther than L∞ = 2. For each setting, we train eight USFAs agents with different seeds for 3M environment steps on the source tasks and test them on the target tasks. Figure 5 presents the comparison of the GPI with different C’s and the constrained GPI. Leveraging the same set of trained USFAs with learned φ̃ and w̃, the constrained GPI outperforms the GPI with the three C’s in both settings by a notable margin. Another observation is that the trained USFAs agents seem to overfit more to the source tasks in the “near-to-far” setting compared to the “left-to-right” setting. It makes the performance on the target tasks much worse. Nonetheless, the constrained GPI is still helpful in such overfitting situations. 5 Conclusion and Discussion We presented constrained GPI, a simple yet effective test-time approach for transfer with approximate successor features. We first focused on the issue that although universal successor features approximators (USFAs) exploit the smoothness of optimal values across different tasks, their approximation errors on novel target tasks could be large especially when those tasks are quite distant from source tasks. Thus, we introduced a theorem about lower and upper bounds on the optimal values for novel task vectors that belong to the task vector space linearly spanned by the set of source task vectors, relaxing the conical combination condition used for the theorem by Nemecek and Parr [25]. We proposed a constrained training scheme making use of those bounds for reducing the action-value errors of the learned approximators on novel tasks. We then suggested constrained GPI that uses the bounds at test time to achieve an analogous effect, allowing the use of previously trained models. We empirically showed that this test-time approach can improve the zero-shot transfer performance by a large margin in multiple environments. 
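The two transfer splits described above can be reproduced schematically as follows; the grid indexing convention (coordinates in {0, ..., 10} with the center cell at (5, 5), and the middle column excluded from both halves) is an assumption for illustration.

```python
import numpy as np

GRID = 11
cells = np.array([(x, y) for x in range(GRID) for y in range(GRID)])
center = np.array([GRID // 2, GRID // 2])

# "left-to-right": train on goals in the left half, test on goals in the right half.
left_to_right_src = cells[cells[:, 0] < GRID // 2]
left_to_right_tgt = cells[cells[:, 0] > GRID // 2]

# "near-to-far": train on goals within L_inf distance 2 of the center, test farther away.
linf = np.abs(cells - center).max(axis=1)
near_to_far_src = cells[linf <= 2]
near_to_far_tgt = cells[linf > 2]

print("left-to-right:", len(left_to_right_src), "source goals,",
      len(left_to_right_tgt), "target goals")
print("near-to-far:  ", len(near_to_far_src), "source goals,",
      len(near_to_far_tgt), "target goals")
```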
Limitations and future directions. There may be some cases where the minimum rewards for source tasks i.e., rminw ’s are overly small, which could lead to less changes of both action-values and behaviors induced by the upper-bounding in Theorem 1 with Cw(s, a) = 11−γ r min w . An interesting direction to tackle the issue is to learn the minimum action-value function during the training and to use the approximate minimum value at each state-action pair as Cw(s, a) for deriving the upper bound in Theorem 1. It may allow computing upper bounds more tightly and adaptively for different state-action pairs. Also, if the learned successor features approximators have large errors even on source tasks, not only GPI but also constrained GPI’s bounding may not be meaningfully helpful. One idea to mitigate the issue is to take the uncertainty in the approximators and the approximation error term that appears in Theorem 1 into account, e.g., by using ensemble models. As an intriguing direction for future research, we could extend our constrained GPI to other non-linear forms of reward or value decompositions. It may also be interesting to make transfer with successor features and constrained GPI compatible with large-scale approaches for generalization such as [28]. We do not see direct negative societal impacts of this work. Acknowledgements We thank the anonymous reviewers for their valuable comments. This work was supported by Samsung Advanced Institute of Technology, and Institute of Information & communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT), including (No.20190-01082, SW StarLab), (No.2022-0-00156, Fundamental research on continual meta-learning for quality enhancement of casual videos and their 3D metaverse transformation), and (No.2021-0-01343, Artificial Intelligence Graduate School Program (Seoul National University)). Jaekyeom Kim was partly supported by Google PhD Fellowship. Gunhee Kim is the corresponding author.
1. What is the focus and contribution of the paper regarding universal successor feature approximators? 2. What are the strengths and weaknesses of the proposed constrained training of successor features? 3. Do you have any concerns or suggestions for improving the generalization ability of the method? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any limitations discussed in the paper that the reviewer agrees or disagrees with?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper notes that universal successor feature approximators (USFAs) ψ(s, a, z) (here z is the policy vector) exploit the smoothness of optimal value functions across different tasks, but their approximation error could be large when target task vectors are far away. To improve generalization to a new task w (which can be expressed as a linear combination of source tasks), the paper proposes constrained training of successor features that reduces their approximation error on the new task. It then shows that a similar effect can be achieved by using similar constraints at test time without changing the training of the successor features. The proposed method is able to improve task generalization on the Scavenger and Reacher domains. Strengths And Weaknesses Strengths: The resulting constrained GPI has good theoretical motivations. It exhibits better task generalization than GPI baselines. Weaknesses: The baselines used for comparisons are inadequate. For example, recent works on bilinear decomposition of Q functions (bilinear value networks; Hong et al., 2022, https://arxiv.org/pdf/2204.13695.pdf) showed improved generalization on novel tasks. Specifically, for a goal-conditioned task with goal g, they choose the parameterization Q(s, a, g) = ϕ(s, a)ᵀ w(s, g) (as opposed to Q(s, a, g) = ϕ(s, a, g)ᵀ w or Q(s, a, g) = ϕ(s, a)ᵀ w(g)). It would be nice to see comparisons to this bilinear decomposition. Furthermore, the domains (i.e., Scavenger and Reacher) on which the method is tested are limited. I would appreciate it if the authors could provide a more extensive evaluation on various goal-conditioned tasks (such as the ones in fetch-gym, https://github.com/jmichaux/gym-fetch). Questions Recommendation: I think this is a borderline paper and I am happy to increase my score if the authors can evaluate their method on more tasks (such as the ones in fetch-gym) and compare it to better baselines (such as bilinear value networks). Limitations The authors discuss the limitations of their work in Section 5.
NIPS
Title Constrained GPI for Zero-Shot Transfer in Reinforcement Learning Abstract For zero-shot transfer in reinforcement learning where the reward function varies between different tasks, the successor features framework has been one of the popular approaches. However, in this framework, the transfer to new target tasks with generalized policy improvement (GPI) relies on only the source successor features [6] or additional successor features obtained from the function approximators’ generalization to novel inputs [12]. The goal of this work is to improve the transfer by more tightly bounding the value approximation errors of successor features on the new target tasks. Given a set of source tasks with their successor features, we present lower and upper bounds on the optimal values for novel task vectors that are expressible as linear combinations of source task vectors. Based on the bounds, we propose constrained GPI as a simple test-time approach that can improve transfer by constraining action-value approximation errors on new target tasks. Through experiments in the Scavenger and Reacher environment with state observations as well as the DeepMind Lab environment with visual observations, we show that the proposed constrained GPI significantly outperforms the prior GPI’s transfer performance. Our code and additional information are available at https://jaekyeom.github.io/projects/cgpi/. 1 Introduction For sequential decision making, deep reinforcement learning (RL) has been shown to be effective for various types of problems including games [31] and robotics [20, 22]. With such great successes, interest in multi-task RL has also surged, where its goal is to train a single agent that can efficiently solve multiple varying tasks. In multi-task RL, we focus on the transfer learning setting, where the agent learns shared structural knowledge from a set of source tasks during training, and exploits and generalizes them in new, unseen target tasks at test time. One popular approach to transfer in RL is to leverage the successor features (SFs) framework [1, 6, 7, 12, 25], which transfers policies learned on source tasks to target tasks, where the tasks share the same environment dynamics but differ in their reward functions. Successor features build a representation of value functions decoupled from reward functions, and transfer to the tasks with arbitrary reward functions by taking an inner product with corresponding task vectors. They utilize generalized policy improvement (GPI) [6], which generalizes policy improvement with multiple policies and provides the performance lower bounds for GPI policies. However, GPI does not take into account any information from the smoothness of optimal actionvalue functions with respect to task vectors. Tackling this issue, Borsa et al. [12] propose universal successor features approximators (USFAs), which can estimate the optimal successor features for novel task vectors. Nevertheless, the function approximator can make high approximation errors on the task vectors, especially when the new task vectors are distant from the source task vectors. For instance, when USFAs are trained with source tasks to get close to given goals, they may not 36th Conference on Neural Information Processing Systems (NeurIPS 2022). generalize well to the target tasks where the agent should get away from the given goals. 
That is, if the elements of target task vectors have the opposite signs from the source task vectors, USFAs could output successor features with high approximation errors. To improve the successor features approximation of USFAs for the new tasks, we aim at bounding value approximation errors on the new target tasks. We first introduce a new theorem on bounding the optimal values for the tasks that are expressible as linear combinations of source tasks. Our theorem generalizes the conical combination condition used by the prior theorem by Nemecek and Parr [25]. Using our new bounds as constraints, we can train the successor features approximators whose action-value approximation errors on novel tasks are bounded. We extend this idea so that we accomplish a similar effect with no additional training; as a result, we propose constrained GPI as a test-time approach to bounding the approximation errors. Despite its simplicity and no need for modification to the training procedure, we empirically show that constrained GPI attains large performance improvements compared to the original GPI in multiple environments, including the Scavenger [8, 9] and Reacher [13] environments with state observations and the DeepMind Lab [7, 10, 12] environment with first-person visual observations. Our main contributions can be summarized as follows: • We present a novel theorem on lower- and upper-bounding optimal values for novel tasks that can be expressed as linear combinations of source tasks. It extends and generalizes the previous theorem for conical combinations by Nemecek and Parr [25], to enable a broader application of the bounds. • Based on our new theorem, we propose constrained GPI as a simple test-time approach that can improve transfer to novel tasks by constraining action-value approximation errors on new target tasks, with no modification to the training procedure. • We empirically show that our approach can improve the performance over the GPI baselines by large margins in the Scavenger, Reacher and DeepMind Lab environments. We also provide analyses for a better understanding of our results. 2 Preliminaries We describe the problem setting and background on successor features and universal successor features approximators. We refer the reader to Appendix for an in-depth discussion of related work. 2.1 The Zero-Shot Transfer Problem in RL We define a Markov Decision Process (MDP) as M ≡ (S,A, P,R, γ). S and A are the state and action spaces, respectively. P (·|s, a) defines the transition probability distribution of the next states given s ∈ S and a ∈ A. R(s, a, s′) is the reward for taking action a at state s resulting in s′, and γ ∈ (0, 1] is the discount factor. We assume that rewards are bounded. We consider the zero-shot transfer problem; as in [6], each task is defined by its task vector w ∈ Rd, and only the reward functions differ across tasks, being decomposed as Rw(s, a, s ′) = φ(s, a, s′)>w, (1) where φ(s, a, s′) ∈ Rd is the features of (s, a, s′). We denote the set of source task vectors as T , which is used for training. At test time, we evaluate the transferred policy on each target taskw′ /∈ T with no additional update of pre-trained components. We examine both the possible scenarios: (i) the features φ(s, a, s′) are available to the agent [6, 12] and (ii) no pre-defined features are available and the agent needs to construct its own features and task vectors. We first introduce the formulation for (i) in Section 2.2 and then its variant for (ii) in Section 2.3. 
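To make the reward-decomposition assumption in Equation (1) concrete, here is a minimal sketch in Python/NumPy; the feature dimension, the particular feature values, and the helper name are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def reward(phi_sas: np.ndarray, w: np.ndarray) -> float:
    """Reward under task w for a transition with features phi(s, a, s')."""
    # R_w(s, a, s') = phi(s, a, s')^T w  (Equation (1))
    return float(phi_sas @ w)

# Illustrative d = 2 features, e.g., "picked up an object of class 1 / class 2".
phi_sas = np.array([1.0, 0.0])

# Two tasks over the same dynamics that differ only in their task vectors.
w_source = np.array([1.0, 0.0])   # rewards only class-1 pickups
w_target = np.array([-1.0, 1.0])  # penalizes class 1, rewards class 2

print(reward(phi_sas, w_source))  # 1.0
print(reward(phi_sas, w_target))  # -1.0
```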
2.2 Successor Features and Universal Successor Features Approximators We now review successor features (SFs) [6] and how they are transferred to different tasks. Equation (1) allows expressing the action-value function for policy π on task w as Qπw(s, a) = Eπ [ ∞∑ i=0 γirt+i ∣∣∣St= s,At= a] = Eπ [ ∞∑ i=0 γiφt+i ∣∣∣St= s,At= a]>w = ψπ(s, a)>w, (2) where φt = φ(st, at, st+1) ∈ Rd. Here, ψπ(s, a) ∈ Rd is called the SFs for policy π at (s, a), and taking its inner product with an arbitrary task w results in the action-value for π on w; i.e., Qπw(s, a). Thanks to the analogy between (rewards r, action-value functions Q) and (features φ, successor features ψ), the Bellman equation applies to SFs and thus they can be trained similarly to the way action-value functions are learned; e.g., Q-learning. The policy improvement theorem [11] states that a new policy that takes a greedy action according to a given policy’s value function at each state performs at least as well as the original policy. Generalized policy improvement (GPI) [6] extends policy improvement to the case where the value functions of multiple policies are available. Given a task w′, a set of policies π1, . . . , πn, their action-value functions Qπ1w′ , . . . , Q πn w′ and their approximations Q̃ π1 w′ , . . . , Q̃ πn w′ , the GPI policy is defined as πGPI(s) ∈ argmax a max i Q̃πiw′(s, a). (3) Barreto et al. [6] suggest that QπGPIw′ (s, a) ≥ maxiQπiw′(s, a) − 21−γ maxi ∥∥∥Qπiw′ − Q̃πiw′∥∥∥∞. They also provide the upper bound on the suboptimality of the GPI policy as ‖Q∗w′ −QπGPIw′ ‖∞ ≤ 2 1− γ { min i ‖φ‖∞‖w′ −wi‖+max i ∥∥∥Qπiw′ − Q̃πiw′∥∥∥∞} , (4) where each πi is an optimal policy for wi. While the GPI theorem allows the transfer of learned successor features to arbitrary tasks that share the same environment dynamics, it is limited in the following aspect. GPI uses the action-values for source tasks on target tasks based on the reward decomposition assumption (Equation (1)) i.e., Q̃πiw′(s, a) = ψ̃ πi(s, a)>w′ for each i. However, it does not take any advantage of the smoothness of the optimal action-value functions with respect to different task vectors [12]. To overcome this limitation, Borsa et al. [12] introduce universal successor features approximators (USFAs). Inspired by universal value functions (UVFs) [30], they extend the original successor features with policy vectors z ∈ Rl as input to their approximators. More specifically, universal successor features (USFs) are defined to satisfy ψπz (s, a) ≡ ψ(s, a,z) ≈ ψ̃(s, a,z), (5) where z is a policy vector for the policy πz , and USFAs ψ̃ are the learned approximators of USFs ψ. Naturally, the value functions are expressed as Q̃πzw (s, a) = ψ̃(s, a,z) >w ≈ ψ(s, a,z)>w = Qπzw (s, a). (6) Each reward function induces optimal policies, which can be encoded using the corresponding task vectors. That is, one can simply choose to define the policy vector space to be the same as the task vector space (l = d) and let z = w be a policy vector of an optimal policy for task w. Then, πw and Qπww denote an optimal policy for w and its action-value function, respectively. The training of USFAs is similar to that of SFs, except for that it additionally involves sampling of policy vectors given task vectors. The update of USFAs at the k-th iteration is ψ̃(k+1) ← argmin ψ Ew∼T ,z∼Dz(·|w),(s,a,s′)∼µ [∥∥∥φ(s, a, s′) + γψ̃(k)(s′, a′, z)− ψ(s, a,z)∥∥∥2] (7) for a′ = argmaxb ψ̃ (k)(s′, b,z)>z. 
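As a rough illustration of the GPI rule in Equation (3) combined with the USFA value estimate in Equation (6), the sketch below assumes the approximator is available as a function psi_tilde(s, a, z) returning a d-dimensional vector; that interface, the toy stand-in approximator, and the numbers are assumptions made only for this example.

```python
import numpy as np

def gpi_action(s, actions, policy_vectors, w_target, psi_tilde):
    """GPI action selection: argmax_a max_z  psi_tilde(s, a, z)^T w_target."""
    best_a, best_q = None, -np.inf
    for a in actions:
        # maximize over the set of policies, indexed here by their policy vectors z
        q = max(float(psi_tilde(s, a, z) @ w_target) for z in policy_vectors)
        if q > best_q:
            best_a, best_q = a, q
    return best_a, best_q

# Toy stand-in for a trained USFA (ignores the state, for brevity).
def psi_tilde(s, a, z):
    return np.array([0.5 * a + z[0], 0.1 * a + z[1]])

actions = [0, 1, 2, 3]
source_policy_vectors = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # C = T here
w_target = np.array([1.0, -1.0])

print(gpi_action(None, actions, source_policy_vectors, w_target, psi_tilde))
```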
Dz(·|w) is the policy vector distribution; for instance,N (w, σI) can be used for better training with diversified inputs. µ is the transition sampling distribution, which involves the GPI policy of the samples from Dz(·|w) or a replay buffer. We use gradient descent to update the parameters. USFAs provide a benefit that they allow a GPI policy to use an arbitrary set of policies {πz}z∈C as πGPI(s) ∈ argmaxamaxz∈C Q̃πzw′(s, a). However, the generalization of USFAs to new policy vectors depends on a function approximator ψ, and thus if C contains policy vector(s) distant from source vectors, a GPI policy with C may have high approximation errors and perform poorly or even worse than a GPI policy with only source vectors [12], as will be demonstrated later in our experiments. 2.3 Universal Successor Features Approximators with Learned φ For the scenario where features φ’s are not provided to the agent1, we adopt the problem formulation from Ma et al. [23] where for each task the task information g ∈ G is given to the agent. Although the task information g, unlike a task vector, cannot be directly combined with successor features for transfer to a novel task, zero-shot inference could still be possible by leveraging the information about the task. Specifically, we not only perform the original learning of ψ̃ letting the task information induce policy vectors instead, but also train φ̃ and w̃ to approximate the reward decomposition with transition samples. As done in [23], we update ψ̃, φ̃ and w̃ using gradient descent to minimize Eg∼T g,z∼Dgz(·|g),(s,a,r,s′)∼µ [ Lψ + LQ ] for Lψ := 1 d ∥∥∥φ̃(s, a, s′) + γψ̃(k)(s′, a′, z)− ψ̃(s, a,z)∥∥∥2 (8) LQ := { r + γψ̃(k)(s′, a′, z)>w̃(k)(z)− ψ̃(s, a,z)>w̃(z) }2 (9) and a′ = argmaxb ψ̃ (k)(s′, b,z)>w̃(k)(z) at the k-th iteration. The superscript (k) denotes the target, T g is the source task information set, Dgz(·|g) is the policy vector distribution conditioned on the task information and µ is the sampling distribution. 3 Constrained GPI for Improved Zero-Shot Transfer of Successor Features To mitigate the aforementioned issue of the possibly unlimited approximation errors of USFAs, we propose a simple yet effective method that improves the transfer of successor features by further leveraging the reward decomposition structure in Equation (1). We first present, under a more relaxed condition, lower and upper bounds on the optimal values for novel task vectors that are expressed as linear combinations of source task vectors (Section 3.1). Then, we propose a novel approach called constrained GPI, which effectively confines the approximated action-values inside the computed lower and upper bounds (Section 3.2). 3.1 Bounding Optimal Values for New Tasks Theorem 1 of [25] provides the lower and upper bounds on the value of an optimal policy for a new task, whose vector w′ is a positive conical combination of source task vectors i.e., w′ = ∑ w∈T αww such that αw ≥ 0,∀w ∈ T and∑w∈T αw > 02. However, for a broad application of such bounds, the positive conical combination condition can be too restrictive, since the resulting bounds only apply to the task vectors that appear inside the conical hull of source task vectors. Therefore, we suggest a more relaxed theorem, which holds for an arbitrary task vector w′ that is expressed as a linear combination of the source task vectors i.e., w′ = ∑ w∈T αww for αw ∈ R,∀w ∈ T . 
Figure 1 shows an example that compares the task space coverage of conical combinations [25] and our linear combinations. (Footnote 1: One typical example, presented later in our experiments, is the case where the agent observes visual inputs; then it is not trivial to derive features that linearly decompose reward functions. Footnote 2: We slightly abuse the notation and let αw denote the coefficient for vector w.) With our extended task space coverage, we can apply the bounds to more general target tasks outside of the conical hull, which will be further discussed in the next section. We define ε^{πw1}_{w2} to be an upper bound on the approximation error of Q̃^{πw1}_{w2} for arbitrary tasks w1, w2, such that

|Q^{πw1}_{w2}(s, a) − Q̃^{πw1}_{w2}(s, a)| ≤ ε^{πw1}_{w2}(s, a),  ∀(s, a) ∈ S × A,   (10)

and we present our theorem as follows.

Theorem 1. Given a task vector w′ = ∑_{w∈T} αw w with αw ∈ R for all w ∈ T , for all state-action pairs (s, a) ∈ S × A, the action-value of πw′ , which is an optimal policy for task w′, on task w′ is lower- and upper-bounded as L_{w′,T}(s, a) ≤ Q^{πw′}_{w′}(s, a) ≤ U_{w′,T,α}(s, a) for

L_{w′,T}(s, a) := max_{w∈T} [ Q̃^{πw}_{w′}(s, a) − ε^{πw}_{w′}(s, a) ],   (11)
U_{w′,T,α}(s, a) := ∑_{w∈T} max { αw ( Q̃^{πw}_{w}(s, a) + ε^{πw}_{w}(s, a) ), αw Cw(s, a) },   (12)

for some Cw(s, a) ≤ min_π Q^π_w(s, a), such as Cw(s, a) = rminw /(1 − γ), where rminw is the minimum reward on w, i.e., rminw = min_{(s,a)∈S×A} Rw(s, a), and α = {αw}_{w∈T}.

Proof. For the derivation of the lower bound L_{w′,T}(s, a): since Q^{πw′}_{w′} is the optimal action-value function for task w′ and Q^{πw′}_{w′}(s, a) ≥ Q^{πw}_{w′}(s, a) for an arbitrary task w and state-action pair (s, a),

Q^{πw′}_{w′}(s, a) ≥ max_{w∈T} Q^{πw}_{w′}(s, a) ≥ max_{w∈T} [ Q̃^{πw}_{w′}(s, a) − ε^{πw}_{w′}(s, a) ].   (13)

For the upper bound U_{w′,T,α}(s, a), we use that Q^{πw′}_{w}(s, a) ≤ Q^{πw}_{w}(s, a) and Q^{πw′}_{w}(s, a) ≥ min_π Q^π_w(s, a) ≥ Cw(s, a) for an arbitrary task w and state-action pair (s, a), which leads to

Q^{πw′}_{w′}(s, a) = ∑_{w∈T} αw ( Q^{πw′}_{w}(s, a) − Cw(s, a) ) + ∑_{w∈T} αw Cw(s, a)   (14)
 ≤ ∑_{w∈T} max { αw ( Q^{πw′}_{w}(s, a) − Cw(s, a) ), 0 } + ∑_{w∈T} αw Cw(s, a)   (15)
 ≤ ∑_{w∈T} max { αw ( Q^{πw}_{w}(s, a) − Cw(s, a) ), 0 } + ∑_{w∈T} αw Cw(s, a)   (16)
 = ∑_{w∈T} { max { αw ( Q^{πw}_{w}(s, a) − Cw(s, a) ), 0 } + αw Cw(s, a) }   (17)
 = ∑_{w∈T} max { αw Q^{πw}_{w}(s, a), αw Cw(s, a) }   (18)
 ≤ ∑_{w∈T} max { αw ( Q̃^{πw}_{w}(s, a) + ε^{πw}_{w}(s, a) ), αw Cw(s, a) }.   (19)

In Equation (19), for each w ∈ T , the sign of αw determines which of the two terms in the max operator is used: if αw ≥ 0, the max operator selects the first term, whereas a negative αw lets the second term be used. Note that our Theorem 1 recovers Theorem 1 of [25] when w′ is a conical combination of the w’s from T , i.e., αw ≥ 0 for all w ∈ T . Intuitively, this theorem states the condition that the optimal action-value for an arbitrary target task must satisfy, given the optimal successor features for the source tasks. The theorem is applicable to different problems wherever bounding of optimal values is useful. One example is policy cache construction, where the agent should decide whether to reuse existing policies in the cache set or learn a new one given each new task [25]. As will be shown in the next section, we employ the bounding as a constraint on the action-values for novel target tasks, for the guidance of transfer. In Sections 4.1 and 4.2 and Appendix B, we empirically show that the application of our Theorem 1 can significantly improve the performance in the cases where target tasks are outside the conical hull of source tasks.
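A minimal numerical sketch of the bounds in Equations (11) and (12), with the error terms ε set to zero (as Section 3.2 notes can be done in practice) and toy numbers standing in for the learned successor-feature estimates; the function name and all concrete values are assumptions.

```python
import numpy as np

def theorem1_bounds(q_on_target, q_on_own, alphas, c_values):
    """Lower/upper bounds on Q^{pi_w'}_{w'}(s, a) from Theorem 1 (epsilons = 0).

    q_on_target[i]: Q~^{pi_wi}_{w'}(s, a), source policy i evaluated on task w'
    q_on_own[i]:    Q~^{pi_wi}_{wi}(s, a), source policy i on its own task
    alphas[i]:      coefficient of source task wi in w' = sum_i alphas[i] * wi
    c_values[i]:    C_{wi}(s, a) <= min_pi Q^pi_{wi}(s, a), e.g. r_min / (1 - gamma)
    """
    lower = max(q_on_target)                                    # Equation (11)
    upper = sum(max(a * q, a * c)                               # Equation (12)
                for a, q, c in zip(alphas, q_on_own, c_values))
    return lower, upper

gamma = 0.95
r_min = [0.0, -1.0]                          # minimum rewards of the two source tasks
c_vals = [r / (1.0 - gamma) for r in r_min]  # C_w(s, a) = r_min / (1 - gamma)

# Toy estimates at one (s, a); w' = 1.0 * w1 - 1.0 * w2 lies outside the conical hull.
lower, upper = theorem1_bounds(q_on_target=[0.3, -0.4],
                               q_on_own=[1.2, 0.9],
                               alphas=[1.0, -1.0],
                               c_values=c_vals)
print(lower, upper)  # 0.3 and 1.2 + max(-0.9, 20.0) = 21.2
```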
3.2 Constrained Training and Constrained GPI As described in Section 2.2, the universal successor features approximators (USFAs) [12] improve the original successor features so that arbitrary policy vectors, including the ones for target tasks, can be used for GPI. However, the use of arbitrary policy vectors with USFAs solely relies on the generalization power of the approximators (e.g., neural networks). Thus, the obtained successor features on novel tasks might contain high approximation errors, which could make the GPI policy perform poorly. Our high-level idea to tackle the issue is to exploit the reward decomposition structure in Equation (1) even for obtaining SFs for new tasks, instead of solely relying on the approximators. We employ the lower and upper bounds on optimal values from Theorem 1 to enforce the bounds on the approximate successor features. As a result, the approximation errors can be reduced by restricting the estimated optimal values to be inside those bounds around the optimal values, which can prevent the use of erroneous values during the transfer to unseen tasks. For now, we will first introduce how to train the successor features approximators to output the successor features that satisfy the bounds on novel tasks. Then, we will point out that an analogous effect can be accomplished by modifying only the inference algorithm, and propose constrained GPI as a simple yet effective test-time approach to improving zero-shot transfer to novel tasks. Constrained training of SF approximators. In the original training of USFAs, the approximators are learned with a set of source tasks T in Equation (7). We propose to guide the training by employing Theorem 1; we impose constraints for the approximators using the lower and upper bounds on the optimal values for arbitrary linear combinations of source tasks. Specifically, for the training of USFAs, we use Equation (7) but with the following constraints: Lw′,T (s, a) ≤ ψ̃(s, a,w′)>w′ ≤ Uw′,T ,ξ(w′,T ,s,a)(s, a) for w′ ∈ W, (20) where (s, a) is the same sample as the main objective of Equation (7). W is a set of task vectors for the constraints, which can be independent of the source task set T , and ξ(·) determines the coefficients α given a target taskw′ and T . We will explain later how to determine ξ(·). W can be any subset of the linear span of source task vectors, but practically, we randomly sample a number of vectors from the span at each update. Since the targets of the constraints are not fixed with respect to both w′ and (s, a) throughout the training, we use penalty terms (or soft constraints) that linearly penalize the constraint violations as 1 |W| ∑ w′∈W ({ Lw′,T (s, a)− Q̃πw′w′ (s, a) } + + { Q̃ πw′ w′ (s, a)− Uw′,T ,ξ(w′,T ,s,a)(s, a) } + ) , where {x}+ denotes max{x, 0}. The constrained training suggested above can make the approximators comply with the bounds for any tasks without requiring any additional interactions with the environment. However, it has some downsides. First, since it is a new training procedure, existing pre-trained models cannot be used. It requires some additional computational cost compared to the naive training of successor features approximators. Second, the enforcement of the constraints for training can introduce additional hyperparameters (e.g., the weight coefficient for the penalty terms). Thus, suboptimal hyperparameters may introduce either instability in the training or a decrease in the performance. Test-time constrained GPI. 
Our idea starts with the observation that in the constrained training, the learned successor features from source tasks are considered the “trustworthy” features for the constraints, because the USFAs are trained on the source tasks. Besides, only the source successor features are used for computing the constraints for all the other tasks. It implies that the learning of the source successor features better not be affected by other criteria, and more accurate source successor features would produce better constraints for other tasks with smaller errors. Based on the implication, we propose constrained GPI, which can not only overcome the limitation of USFAs as done by the aforementioned constrained training but also have two additional practical merits: (i) it is computationally simpler, and (ii) it is a test-time approach with no training. Simply put, we propose replacing the usual GPI policy with the constrained GPI policy as πCGPI(s) ∈ argmax a max z∈C [ min { max { Q̃πzw′(s, a), Lw′,T (s, a) } , Uw′,T ,ξ(w′,T ,s,a)(s, a) }] , (21) where the target task w′ is expressible as a linear combination of the source tasks and ξ(·) again outputs α given w′ and T as in Equation (20). C is a set of policies that we can freely choose when applying the constrained GPI. The constrained GPI policy selects the actions that maximize the maximum action-values as the original GPI policy does but also caps the values with the lower and upper bound constraints derived from the source successor features. The upper bound constraint fixes the overestimation of values computed with approximate successor features for either the target task w′ or any other tasks used for constrained GPI. The lower bound constraint ensures that action-values on the target task for the greedy action selection are at least as close to the optimal target action-values as the lower bounds. The approximation error terms in the lower and upper bounds i.e., πww′ (s, a) and πw w (s, a) in Theorem 1 could be ignored in practice, as long as the approximation errors of the source successor features are sufficiently small. Also, we can obtain the tightest upper bound by defining ξ(·) as ξ(w′, T , s, a) := argmin {αw}w∈T Uw′,T ,{αw}w∈T (s, a) subject to w ′ = ∑ w∈T αww. (22) The objective Uw′,T ,{αw}w∈T (s, a) is the sum of the piecewise linear functions. Thus, Equation (22) can be solved with linear programming. We observe that using the lower bound constraint with Lw′,T (s, a) is equivalent to including the successor features for source tasks in the input to the constrained GPI; i.e., T ⊆ C. Also, since Lw′,T (s, a) ≤ Uw′,T ,ξ(w′,T ,s,a)(s, a), there would be no difference between GPI and constrained GPI when C = T . Thus, in our experiments, we mainly use C = {w′}, which is equivalent to using C = T ∪ {w′}. 4 Experiments 4.1 Scavenger Experiments We start our experiments in the Scavenger environment [8, 9], which can assess our approach with minimal influence from external causes. In Scavenger, the agent is positioned at one of the cells in a G×G grid, and the goal is to maximize the return by collecting objects. Both the agent and objects are spawned at random locations, and there are d classes of objects where the class determines the value of the reward. The state space is S = {0, 1}G×G×(d+1), where the first d channels describe the current locations of the objects on the map and the last channel specifies the walls where the agent cannot go and objects do not appear. 
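Before the Scavenger setup continues, here is a small sketch of the test-time rule from Section 3.2: the linear program behind Equation (22), solved with scipy.optimize.linprog after introducing one auxiliary variable per source task, and the clipping of a candidate action-value into [L, U] as in Equation (21). The array shapes, toy numbers, and function names are assumptions, and the error terms ε are dropped as discussed above.

```python
import numpy as np
from scipy.optimize import linprog

def tightest_upper_bound(W, w_target, q_own, c_vals):
    """Equation (22): minimize sum_i max(alpha_i q_i, alpha_i c_i) s.t. sum_i alpha_i w_i = w'.

    W:      (n, d) array whose rows are the source task vectors.
    q_own:  Q~^{pi_wi}_{wi}(s, a) for each source task i (epsilons dropped).
    c_vals: C_{wi}(s, a) for each source task i.
    Variables x = [alpha_1..alpha_n, t_1..t_n] with t_i >= alpha_i q_i and t_i >= alpha_i c_i.
    """
    n, d = W.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])             # minimize sum_i t_i
    A_ub = np.zeros((2 * n, 2 * n))
    b_ub = np.zeros(2 * n)
    for i in range(n):
        A_ub[2 * i, i], A_ub[2 * i, n + i] = q_own[i], -1.0          # alpha_i q_i - t_i <= 0
        A_ub[2 * i + 1, i], A_ub[2 * i + 1, n + i] = c_vals[i], -1.0  # alpha_i c_i - t_i <= 0
    A_eq = np.concatenate([W.T, np.zeros((d, n))], axis=1)    # sum_i alpha_i w_i = w'
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=w_target,
                  bounds=[(None, None)] * (2 * n))
    return res.fun                                            # U_{w', T, xi(...)}(s, a)

def constrained_gpi_value(q_candidate, lower, upper):
    """Clip a candidate action-value into [L, U] as in Equation (21)."""
    return min(max(q_candidate, lower), upper)

W = np.array([[1.0, 0.0], [0.0, 1.0]])                        # source tasks (standard basis)
upper = tightest_upper_bound(W, np.array([1.0, -1.0]), q_own=[1.2, 0.9], c_vals=[0.0, -20.0])
print(constrained_gpi_value(q_candidate=35.0, lower=0.3, upper=upper))  # clipped to ~21.2
```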
There are four actions available: A = {UP, DOWN, LEFT, RIGHT}, and the agent picks up an object by visiting the cell of the object, which spawns a new object of a random class at a random location. The feature φ(s, a, s′) ∈ {0, 1}^d is a one-hot vector whose elements indicate whether the agent picks up an object of the corresponding type within the transition. The task vector w ∈ Rd determines the reward values for the d different classes of objects. Please see Barreto et al. [9] for the full details. We evaluate the zero-shot transfer performance of different approaches; i.e., we first train USFAs as proposed in [12], and measure the performance of GPI and constrained GPI policies that use the same set of USFAs on target tasks with no further policy updates. We set G = 11 and use 20 objects in total with different numbers of classes, d = 2 and d = 4. With d = 4, we also test USFAs that are learned with the constrained training for comparison. We use the standard basis vectors of Rd as the set of source tasks, as done in [12], and evaluate agents on the set of target tasks defined by {−1, 1}d. Therefore, all the target tasks except for the all-ones vector 1 are not covered by the conical hull of source tasks, which requires Theorem 1 for the bounding of values. We train eight USFAs agents for 1M steps, and evaluate them on each target vector 10 times with a fixed set of 10 random seeds. To be invariant to the reward scale differences between different tasks, we normalize the scores (or returns) from the environment by the minimum and maximum scores with respect to all the agents’ evaluation episodes on each task. Figures 2 and 3 compare the performance of the USFAs agents with GPI and constrained GPI for exploitation, following the evaluation scheme suggested by [2]. Although they use the same set of trained USFAs, the constrained GPI brings a notable performance improvement in comparison with the original GPI. Also, Figure 3 suggests that the constrained GPI, the test-time method, can match or even outperform the agents learned with the constrained training. [Figure (residue removed): normalized scores (Median, IQM, Mean, Optimality Gap) comparing CGPI-T (ours), CT (ours) + GPI-S/GPI-T/GPI-ST, and GPI-S/GPI-T/GPI-ST; panel (a): All.] One possible explanation is that the constrained training might experience some instability in learning depending on the choice of the hyperparameters, as described in Section 3.2. In the first and second columns of Table 1, we present the proportions of the action-values that are changed by the lower and upper bounds of the constrained GPI, measured for the evaluation on Scavenger. The third column shows the proportions of the resulting greedy actions changed by them. This implies that USFAs, i.e., the function approximators of successor features, may not satisfy the optimal value bounds presented in Theorem 1, and that applying the bounds could change a fair proportion of greedy actions to improve the performance. 4.2 DeepMind Lab Experiments with Learned φ For evaluation of our approach in a more complex and realistic setting, we employ DeepMind Lab [7, 10, 12] and conduct experiments in a first-person view 3D environment. In a single room, a goal object is placed arbitrarily, and the objective is to reach the goal before the episode ends; the goal location changes between tasks. Figure 4 shows an example scene that the agent sees, with the goal object in red.
At every time step, the agent observes an 84 × 84 × 3 image from the environment and outputs one of 45 possible actions, which include 5, 3 and 3 choices for the LOOK_LEFT_RIGHT_PIXELS_PER_FRAME, STRAFE_LEFT_RIGHT and MOVE_BACK_FORWARD controls, respectively. Since observations are in the first-person view, the goal object may not be visible to the agent, which makes transfer given the task information g critical to the success of the tasks. In each task, we divide the room into an 11 × 11 grid and place the goal object in one of the cells. The task information g is a two-dimensional vector that contains the coordinate of the goal in the grid. Starting at the center of the room, the agent receives a reward of one if it reaches the goal within the episode horizon and no reward otherwise. Therefore, the reward functions are sparse. For these experiments, where the agent observes rendered images rather than the underlying states, it may not be viable to define features φ and task vectors w that linearly decompose the reward functions. Therefore, we train agents that learn φ̃ and w̃ from samples from the source tasks with d = 2, as described in Section 2.3. Inspired by Hong et al. [18], we examine zero-shot transfer with the GPI and constrained GPI using two transfer settings: “left-to-right” and “near-to-far”. In the “left-to-right” setting, the agent is trained on source tasks whose goals are sampled from the left half of the room and is tested on target tasks with goals from the right half. In the “near-to-far” setting, the source tasks have goals within an L∞ distance of 2 from the center of the room, and the target tasks place goals farther than that. For each setting, we train eight USFAs agents with different seeds for 3M environment steps on the source tasks and test them on the target tasks. Figure 5 presents the comparison of the GPI with different C’s and the constrained GPI. Leveraging the same set of trained USFAs with learned φ̃ and w̃, the constrained GPI outperforms the GPI with the three C’s in both settings by a notable margin. Another observation is that the trained USFAs agents seem to overfit more to the source tasks in the “near-to-far” setting than in the “left-to-right” setting, which makes their performance on the target tasks much worse. Nonetheless, the constrained GPI is still helpful in such overfitting situations. 5 Conclusion and Discussion We presented constrained GPI, a simple yet effective test-time approach for transfer with approximate successor features. We first focused on the issue that although universal successor features approximators (USFAs) exploit the smoothness of optimal values across different tasks, their approximation errors on novel target tasks could be large, especially when those tasks are quite distant from the source tasks. Thus, we introduced a theorem that gives lower and upper bounds on the optimal values for novel task vectors that lie in the linear span of the source task vectors, relaxing the conical combination condition used in the theorem by Nemecek and Parr [25]. We proposed a constrained training scheme that uses those bounds to reduce the action-value errors of the learned approximators on novel tasks. We then suggested constrained GPI, which uses the bounds at test time to achieve an analogous effect while allowing the use of previously trained models. We empirically showed that this test-time approach can improve the zero-shot transfer performance by a large margin in multiple environments.
Limitations and future directions. There may be some cases where the minimum rewards for the source tasks, i.e., the rminw’s, are overly small, which could mean that the upper-bounding in Theorem 1 with Cw(s, a) = rminw /(1 − γ) induces only small changes in action-values and behaviors. An interesting direction to tackle this issue is to learn the minimum action-value function during training and to use the approximate minimum value at each state-action pair as Cw(s, a) for deriving the upper bound in Theorem 1. This may allow computing the upper bounds more tightly and adaptively for different state-action pairs. Also, if the learned successor features approximators have large errors even on the source tasks, not only GPI but also constrained GPI’s bounding may not be meaningfully helpful. One idea to mitigate this issue is to take into account the uncertainty in the approximators and the approximation error term that appears in Theorem 1, e.g., by using ensemble models. As an intriguing direction for future research, we could extend our constrained GPI to other, non-linear forms of reward or value decompositions. It may also be interesting to make transfer with successor features and constrained GPI compatible with large-scale approaches for generalization such as [28]. We do not see direct negative societal impacts of this work. Acknowledgements We thank the anonymous reviewers for their valuable comments. This work was supported by Samsung Advanced Institute of Technology, and Institute of Information & communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT), including (No.2019-0-01082, SW StarLab), (No.2022-0-00156, Fundamental research on continual meta-learning for quality enhancement of casual videos and their 3D metaverse transformation), and (No.2021-0-01343, Artificial Intelligence Graduate School Program (Seoul National University)). Jaekyeom Kim was partly supported by a Google PhD Fellowship. Gunhee Kim is the corresponding author.
1. What is the focus and contribution of the paper regarding reinforcement learning transfer? 2. What are the strengths of the proposed approach, particularly in terms of constrained GPI? 3. What are the weaknesses of the paper, especially regarding the limitations and future work? 4. Do you have any concerns about the extension of constrained GPI to non-linear reward functions? 5. How would the method perform with an imperfect feature vector? 6. Are there any minor clarifications or comments you would like to add?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper The paper looks at the problem of transfer in reinforcement learning from source tasks to target tasks when the reward signal changes across tasks but the state and action spaces remain the same. The authors address a limitation of a current approach in this setting: GPI with universal successor features approximators (USFAs). The work argues that USFAs can make high approximation errors on test tasks if their solutions are distant from those of the source tasks, and proposes a solution to mitigate this limitation. The solution involves constraining the action-value approximation error on new target tasks, using lower and upper bounds introduced in the paper. The experiments show a zero-shot transfer improvement in performance compared to baselines (USFA + GPI) on synthetic Scavenger tasks and robotic locomotion tasks. Strengths And Weaknesses Strengths: The paper addresses a relevant and important open problem in RL (transfer learning). The writing is clear, rigorous and the paper is easy to follow. A pleasant read! I found the theoretical contribution to be the main significant result; extending the Nemecek and Parr result on the value of an optimal policy for a new task beyond the conical hull is an important result. The proposed method, constrained GPI, is an elegant way of improving GPI. Weaknesses: The limitations of the work are not really discussed, and no direction for future work is given. This makes it difficult to gauge if the method and its guarantees would extend to more complex and large-scale environments. Questions Is there a way to extend constrained GPI if the reward cannot be linearly decomposed (but is an arbitrary non-linear function of the features)? The theoretical and empirical results presented rely on prior knowledge of the feature vector ϕ. How would constrained GPI perform with an imperfect (e.g., learned) feature vector instead? Minor clarifications and comments: It would be helpful to spell out the different methods' acronyms in Figures 2 and 3 in the main text or in the caption. I did not understand the caption of Figure 2b. Limitations Very limited discussion of the limitations. In fact, I did not really understand the remark on not getting the "full effectiveness". No negative societal impacts were reported, and I don't see any myself either.
NIPS
Title Constrained GPI for Zero-Shot Transfer in Reinforcement Learning Abstract For zero-shot transfer in reinforcement learning where the reward function varies between different tasks, the successor features framework has been one of the popular approaches. However, in this framework, the transfer to new target tasks with generalized policy improvement (GPI) relies on only the source successor features [6] or additional successor features obtained from the function approximators’ generalization to novel inputs [12]. The goal of this work is to improve the transfer by more tightly bounding the value approximation errors of successor features on the new target tasks. Given a set of source tasks with their successor features, we present lower and upper bounds on the optimal values for novel task vectors that are expressible as linear combinations of source task vectors. Based on the bounds, we propose constrained GPI as a simple test-time approach that can improve transfer by constraining action-value approximation errors on new target tasks. Through experiments in the Scavenger and Reacher environment with state observations as well as the DeepMind Lab environment with visual observations, we show that the proposed constrained GPI significantly outperforms the prior GPI’s transfer performance. Our code and additional information are available at https://jaekyeom.github.io/projects/cgpi/. 1 Introduction For sequential decision making, deep reinforcement learning (RL) has been shown to be effective for various types of problems including games [31] and robotics [20, 22]. With such great successes, interest in multi-task RL has also surged, where its goal is to train a single agent that can efficiently solve multiple varying tasks. In multi-task RL, we focus on the transfer learning setting, where the agent learns shared structural knowledge from a set of source tasks during training, and exploits and generalizes them in new, unseen target tasks at test time. One popular approach to transfer in RL is to leverage the successor features (SFs) framework [1, 6, 7, 12, 25], which transfers policies learned on source tasks to target tasks, where the tasks share the same environment dynamics but differ in their reward functions. Successor features build a representation of value functions decoupled from reward functions, and transfer to the tasks with arbitrary reward functions by taking an inner product with corresponding task vectors. They utilize generalized policy improvement (GPI) [6], which generalizes policy improvement with multiple policies and provides the performance lower bounds for GPI policies. However, GPI does not take into account any information from the smoothness of optimal actionvalue functions with respect to task vectors. Tackling this issue, Borsa et al. [12] propose universal successor features approximators (USFAs), which can estimate the optimal successor features for novel task vectors. Nevertheless, the function approximator can make high approximation errors on the task vectors, especially when the new task vectors are distant from the source task vectors. For instance, when USFAs are trained with source tasks to get close to given goals, they may not 36th Conference on Neural Information Processing Systems (NeurIPS 2022). generalize well to the target tasks where the agent should get away from the given goals. 
That is, if the elements of target task vectors have the opposite signs from the source task vectors, USFAs could output successor features with high approximation errors. To improve the successor features approximation of USFAs for the new tasks, we aim at bounding value approximation errors on the new target tasks. We first introduce a new theorem on bounding the optimal values for the tasks that are expressible as linear combinations of source tasks. Our theorem generalizes the conical combination condition used by the prior theorem by Nemecek and Parr [25]. Using our new bounds as constraints, we can train the successor features approximators whose action-value approximation errors on novel tasks are bounded. We extend this idea so that we accomplish a similar effect with no additional training; as a result, we propose constrained GPI as a test-time approach to bounding the approximation errors. Despite its simplicity and no need for modification to the training procedure, we empirically show that constrained GPI attains large performance improvements compared to the original GPI in multiple environments, including the Scavenger [8, 9] and Reacher [13] environments with state observations and the DeepMind Lab [7, 10, 12] environment with first-person visual observations. Our main contributions can be summarized as follows: • We present a novel theorem on lower- and upper-bounding optimal values for novel tasks that can be expressed as linear combinations of source tasks. It extends and generalizes the previous theorem for conical combinations by Nemecek and Parr [25], to enable a broader application of the bounds. • Based on our new theorem, we propose constrained GPI as a simple test-time approach that can improve transfer to novel tasks by constraining action-value approximation errors on new target tasks, with no modification to the training procedure. • We empirically show that our approach can improve the performance over the GPI baselines by large margins in the Scavenger, Reacher and DeepMind Lab environments. We also provide analyses for a better understanding of our results. 2 Preliminaries We describe the problem setting and background on successor features and universal successor features approximators. We refer the reader to Appendix for an in-depth discussion of related work. 2.1 The Zero-Shot Transfer Problem in RL We define a Markov Decision Process (MDP) as M ≡ (S,A, P,R, γ). S and A are the state and action spaces, respectively. P (·|s, a) defines the transition probability distribution of the next states given s ∈ S and a ∈ A. R(s, a, s′) is the reward for taking action a at state s resulting in s′, and γ ∈ (0, 1] is the discount factor. We assume that rewards are bounded. We consider the zero-shot transfer problem; as in [6], each task is defined by its task vector w ∈ Rd, and only the reward functions differ across tasks, being decomposed as Rw(s, a, s ′) = φ(s, a, s′)>w, (1) where φ(s, a, s′) ∈ Rd is the features of (s, a, s′). We denote the set of source task vectors as T , which is used for training. At test time, we evaluate the transferred policy on each target taskw′ /∈ T with no additional update of pre-trained components. We examine both the possible scenarios: (i) the features φ(s, a, s′) are available to the agent [6, 12] and (ii) no pre-defined features are available and the agent needs to construct its own features and task vectors. We first introduce the formulation for (i) in Section 2.2 and then its variant for (ii) in Section 2.3. 
2.2 Successor Features and Universal Successor Features Approximators We now review successor features (SFs) [6] and how they are transferred to different tasks. Equation (1) allows expressing the action-value function for policy π on task w as Qπw(s, a) = Eπ [ ∞∑ i=0 γirt+i ∣∣∣St= s,At= a] = Eπ [ ∞∑ i=0 γiφt+i ∣∣∣St= s,At= a]>w = ψπ(s, a)>w, (2) where φt = φ(st, at, st+1) ∈ Rd. Here, ψπ(s, a) ∈ Rd is called the SFs for policy π at (s, a), and taking its inner product with an arbitrary task w results in the action-value for π on w; i.e., Qπw(s, a). Thanks to the analogy between (rewards r, action-value functions Q) and (features φ, successor features ψ), the Bellman equation applies to SFs and thus they can be trained similarly to the way action-value functions are learned; e.g., Q-learning. The policy improvement theorem [11] states that a new policy that takes a greedy action according to a given policy’s value function at each state performs at least as well as the original policy. Generalized policy improvement (GPI) [6] extends policy improvement to the case where the value functions of multiple policies are available. Given a task w′, a set of policies π1, . . . , πn, their action-value functions Qπ1w′ , . . . , Q πn w′ and their approximations Q̃ π1 w′ , . . . , Q̃ πn w′ , the GPI policy is defined as πGPI(s) ∈ argmax a max i Q̃πiw′(s, a). (3) Barreto et al. [6] suggest that QπGPIw′ (s, a) ≥ maxiQπiw′(s, a) − 21−γ maxi ∥∥∥Qπiw′ − Q̃πiw′∥∥∥∞. They also provide the upper bound on the suboptimality of the GPI policy as ‖Q∗w′ −QπGPIw′ ‖∞ ≤ 2 1− γ { min i ‖φ‖∞‖w′ −wi‖+max i ∥∥∥Qπiw′ − Q̃πiw′∥∥∥∞} , (4) where each πi is an optimal policy for wi. While the GPI theorem allows the transfer of learned successor features to arbitrary tasks that share the same environment dynamics, it is limited in the following aspect. GPI uses the action-values for source tasks on target tasks based on the reward decomposition assumption (Equation (1)) i.e., Q̃πiw′(s, a) = ψ̃ πi(s, a)>w′ for each i. However, it does not take any advantage of the smoothness of the optimal action-value functions with respect to different task vectors [12]. To overcome this limitation, Borsa et al. [12] introduce universal successor features approximators (USFAs). Inspired by universal value functions (UVFs) [30], they extend the original successor features with policy vectors z ∈ Rl as input to their approximators. More specifically, universal successor features (USFs) are defined to satisfy ψπz (s, a) ≡ ψ(s, a,z) ≈ ψ̃(s, a,z), (5) where z is a policy vector for the policy πz , and USFAs ψ̃ are the learned approximators of USFs ψ. Naturally, the value functions are expressed as Q̃πzw (s, a) = ψ̃(s, a,z) >w ≈ ψ(s, a,z)>w = Qπzw (s, a). (6) Each reward function induces optimal policies, which can be encoded using the corresponding task vectors. That is, one can simply choose to define the policy vector space to be the same as the task vector space (l = d) and let z = w be a policy vector of an optimal policy for task w. Then, πw and Qπww denote an optimal policy for w and its action-value function, respectively. The training of USFAs is similar to that of SFs, except for that it additionally involves sampling of policy vectors given task vectors. The update of USFAs at the k-th iteration is ψ̃(k+1) ← argmin ψ Ew∼T ,z∼Dz(·|w),(s,a,s′)∼µ [∥∥∥φ(s, a, s′) + γψ̃(k)(s′, a′, z)− ψ(s, a,z)∥∥∥2] (7) for a′ = argmaxb ψ̃ (k)(s′, b,z)>z. 
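As a rough sketch of the regression target inside Equation (7), the snippet below computes the TD-style target for one transition, with a toy stand-in for the current approximator ψ̃^(k); the interface and the numbers are assumptions made only for illustration.

```python
import numpy as np

def usfa_td_target(phi_sas, s_next, actions, z, psi_k, gamma=0.95):
    """Target in Equation (7): phi(s, a, s') + gamma * psi_k(s', a', z),
    where a' = argmax_b psi_k(s', b, z)^T z (greedy with respect to policy vector z)."""
    a_next = max(actions, key=lambda b: float(psi_k(s_next, b, z) @ z))
    return phi_sas + gamma * psi_k(s_next, a_next, z)

# Toy stand-in for the current approximator psi~^(k) (ignores the state).
def psi_k(s, a, z):
    return np.array([0.2 * a, 1.0 - 0.1 * a]) + 0.5 * z

phi_sas = np.array([1.0, 0.0])
z = np.array([1.0, 0.0])                  # policy vector, e.g. sampled from D_z(.|w)
target = usfa_td_target(phi_sas, s_next=None, actions=[0, 1, 2, 3], z=z, psi_k=psi_k)
loss = float(np.sum((target - psi_k(None, 0, z)) ** 2))  # squared error against psi~(s, a, z)
print(target, loss)
```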
Dz(·|w) is the policy vector distribution; for instance,N (w, σI) can be used for better training with diversified inputs. µ is the transition sampling distribution, which involves the GPI policy of the samples from Dz(·|w) or a replay buffer. We use gradient descent to update the parameters. USFAs provide a benefit that they allow a GPI policy to use an arbitrary set of policies {πz}z∈C as πGPI(s) ∈ argmaxamaxz∈C Q̃πzw′(s, a). However, the generalization of USFAs to new policy vectors depends on a function approximator ψ, and thus if C contains policy vector(s) distant from source vectors, a GPI policy with C may have high approximation errors and perform poorly or even worse than a GPI policy with only source vectors [12], as will be demonstrated later in our experiments. 2.3 Universal Successor Features Approximators with Learned φ For the scenario where features φ’s are not provided to the agent1, we adopt the problem formulation from Ma et al. [23] where for each task the task information g ∈ G is given to the agent. Although the task information g, unlike a task vector, cannot be directly combined with successor features for transfer to a novel task, zero-shot inference could still be possible by leveraging the information about the task. Specifically, we not only perform the original learning of ψ̃ letting the task information induce policy vectors instead, but also train φ̃ and w̃ to approximate the reward decomposition with transition samples. As done in [23], we update ψ̃, φ̃ and w̃ using gradient descent to minimize Eg∼T g,z∼Dgz(·|g),(s,a,r,s′)∼µ [ Lψ + LQ ] for Lψ := 1 d ∥∥∥φ̃(s, a, s′) + γψ̃(k)(s′, a′, z)− ψ̃(s, a,z)∥∥∥2 (8) LQ := { r + γψ̃(k)(s′, a′, z)>w̃(k)(z)− ψ̃(s, a,z)>w̃(z) }2 (9) and a′ = argmaxb ψ̃ (k)(s′, b,z)>w̃(k)(z) at the k-th iteration. The superscript (k) denotes the target, T g is the source task information set, Dgz(·|g) is the policy vector distribution conditioned on the task information and µ is the sampling distribution. 3 Constrained GPI for Improved Zero-Shot Transfer of Successor Features To mitigate the aforementioned issue of the possibly unlimited approximation errors of USFAs, we propose a simple yet effective method that improves the transfer of successor features by further leveraging the reward decomposition structure in Equation (1). We first present, under a more relaxed condition, lower and upper bounds on the optimal values for novel task vectors that are expressed as linear combinations of source task vectors (Section 3.1). Then, we propose a novel approach called constrained GPI, which effectively confines the approximated action-values inside the computed lower and upper bounds (Section 3.2). 3.1 Bounding Optimal Values for New Tasks Theorem 1 of [25] provides the lower and upper bounds on the value of an optimal policy for a new task, whose vector w′ is a positive conical combination of source task vectors i.e., w′ = ∑ w∈T αww such that αw ≥ 0,∀w ∈ T and∑w∈T αw > 02. However, for a broad application of such bounds, the positive conical combination condition can be too restrictive, since the resulting bounds only apply to the task vectors that appear inside the conical hull of source task vectors. Therefore, we suggest a more relaxed theorem, which holds for an arbitrary task vector w′ that is expressed as a linear combination of the source task vectors i.e., w′ = ∑ w∈T αww for αw ∈ R,∀w ∈ T . 
Figure 1 shows an example that compares the task space coverage of conical [25] and 1One typical example presented later in our experiments is the case where the agent observes visual inputs. Then, it is not trivial to derive features that linearly decompose reward functions. 2We slightly abuse the notation and let αw denote the coefficient for vector w. our linear combinations. With our extended task space coverage, we can apply the bounds to more general target tasks outside of the conical hull, which will be further discussed in the next section. We define πw1w2 to be an upper bound on the approximation error of Q̃ πw1 w2 for arbitrary tasks w1,w2 such that |Qπw1w2 (s, a)− Q̃ πw1 w2 (s, a)| ≤ πw1 w2 (s, a), ∀(s, a) ∈ S ×A, (10) and we present our theorem as follows. Theorem 1. Given a task vectorw′ = ∑ w∈T αww for αw ∈ R,∀w ∈ T , for all state-action pairs (s, a) ∈ S ×A, the action-value of πw′ , which is an optimal policy for task w′, on task w′ is lowerand upper-bounded as Lw′,T (s, a) ≤ Qπw′w′ (s, a) ≤ Uw′,T ,α(s, a) for Lw′,T (s, a) := max w∈T [ Q̃πww′ (s, a)− πww′ (s, a) ] , (11) Uw′,T ,α(s, a) := ∑ w∈T max { αw ( Q̃πww (s, a) + πw w (s, a) ) , αwCw(s, a) } , (12) for someCw(s, a) ≤ minπ Qπw(s, a) such asCw(s, a) = 11−γ rminw where rminw is the minimum reward on w i.e., rminw = min(s,a)∈S×ARw(s, a) and α = {αw}w∈T . Proof. For the derivation of the lower bound Lw′,T (s, a), since Q πw′ w′ is the optimal action-value function for task w′ and Qπw′w′ (s, a) ≥ Qπww′ (s, a) for arbitrary task w and state-action pair (s, a), Q πw′ w′ (s, a) ≥ max w∈T Qπww′ (s, a) ≥ max w∈T [ Q̃πww′ (s, a)− πww′ (s, a) ] . (13) For the upper bound Uw′,T ,α(s, a), we use that Q πw′ w (s, a) ≤ Qπww (s, a) and Q πw′ w (s, a) ≥ minπ Q π w(s, a) ≥ Cw(s, a) for arbitrary task w and state-action pair (s, a), which leads to Q πw′ w′ (s, a) = ∑ w∈T αw (Q πw′ w (s, a)− Cw(s, a)) + ∑ w∈T αwCw(s, a) (14) ≤ ∑ w∈T max {αw (Qπw′w (s, a)− Cw(s, a)) , 0}+ ∑ w∈T αwCw(s, a) (15) ≤ ∑ w∈T max {αw (Qπww (s, a)− Cw(s, a)) , 0}+ ∑ w∈T αwCw(s, a) (16) = ∑ w∈T {max {αw (Qπww (s, a)− Cw(s, a)) , 0}+ αwCw(s, a)} (17) = ∑ w∈T max {αwQπww (s, a), αwCw(s, a)} (18) ≤ ∑ w∈T max { αw ( Q̃πww (s, a) + πw w (s, a) ) , αwCw(s, a) } . (19) In Equation (19), for each w ∈ T , the sign of αw determines which of the two terms in the max operator is used. If αw ≥ 0, the max operator selects the first term, whereas a negative αw lets the second term be used. Note that our Theorem 1 recovers Theorem 1 of [25] when w′ is a conical combination of w’s from T i.e., αw ≥ 0,∀w ∈ T . Intuitively, this theorem states the condition that the optimal action-value for an arbitrary target task must satisfy, given the optimal successor features for the source tasks. The theorem is applicable to different problems wherever bounding of optimal values is useful. One example is policy cache construction, where the agent should decide whether to reuse existing policies in the cache set or learn a new one given each new task [25]. As will be shown in the next section, we employ the bounding as a constraint on the action-values for novel target tasks, for the guidance of transfer. In Sections 4.1 and 4.2 and appendix B, we empirically show that the application of our Theorem 1 can significantly improve the performance in the cases where target tasks are outside the conical hull of source tasks. 
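As a small numerical illustration of how the sign of αw selects between the two terms inside the max operator of Equation (12) (with the ε terms dropped), consider a single source task with toy values; all numbers are assumptions.

```python
def upper_bound_term(alpha, q_own, c_w):
    """One summand of U in Equation (12): max(alpha * Q~^{pi_w}_w, alpha * C_w)."""
    return max(alpha * q_own, alpha * c_w)

q_own, c_w = 1.2, -20.0  # Q~^{pi_w}_w(s, a) and C_w(s, a) = r_min / (1 - gamma)
print(upper_bound_term(+1.0, q_own, c_w))  # positive alpha -> picks alpha * Q~  = 1.2
print(upper_bound_term(-1.0, q_own, c_w))  # negative alpha -> picks alpha * C_w = 20.0
```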
3.2 Constrained Training and Constrained GPI As described in Section 2.2, the universal successor features approximators (USFAs) [12] improve the original successor features so that arbitrary policy vectors, including the ones for target tasks, can be used for GPI. However, the use of arbitrary policy vectors with USFAs solely relies on the generalization power of the approximators (e.g., neural networks). Thus, the obtained successor features on novel tasks might contain high approximation errors, which could make the GPI policy perform poorly. Our high-level idea to tackle the issue is to exploit the reward decomposition structure in Equation (1) even for obtaining SFs for new tasks, instead of solely relying on the approximators. We employ the lower and upper bounds on optimal values from Theorem 1 to enforce the bounds on the approximate successor features. As a result, the approximation errors can be reduced by restricting the estimated optimal values to be inside those bounds around the optimal values, which can prevent the use of erroneous values during the transfer to unseen tasks. For now, we will first introduce how to train the successor features approximators to output the successor features that satisfy the bounds on novel tasks. Then, we will point out that an analogous effect can be accomplished by modifying only the inference algorithm, and propose constrained GPI as a simple yet effective test-time approach to improving zero-shot transfer to novel tasks. Constrained training of SF approximators. In the original training of USFAs, the approximators are learned with a set of source tasks T in Equation (7). We propose to guide the training by employing Theorem 1; we impose constraints for the approximators using the lower and upper bounds on the optimal values for arbitrary linear combinations of source tasks. Specifically, for the training of USFAs, we use Equation (7) but with the following constraints: Lw′,T (s, a) ≤ ψ̃(s, a,w′)>w′ ≤ Uw′,T ,ξ(w′,T ,s,a)(s, a) for w′ ∈ W, (20) where (s, a) is the same sample as the main objective of Equation (7). W is a set of task vectors for the constraints, which can be independent of the source task set T , and ξ(·) determines the coefficients α given a target taskw′ and T . We will explain later how to determine ξ(·). W can be any subset of the linear span of source task vectors, but practically, we randomly sample a number of vectors from the span at each update. Since the targets of the constraints are not fixed with respect to both w′ and (s, a) throughout the training, we use penalty terms (or soft constraints) that linearly penalize the constraint violations as 1 |W| ∑ w′∈W ({ Lw′,T (s, a)− Q̃πw′w′ (s, a) } + + { Q̃ πw′ w′ (s, a)− Uw′,T ,ξ(w′,T ,s,a)(s, a) } + ) , where {x}+ denotes max{x, 0}. The constrained training suggested above can make the approximators comply with the bounds for any tasks without requiring any additional interactions with the environment. However, it has some downsides. First, since it is a new training procedure, existing pre-trained models cannot be used. It requires some additional computational cost compared to the naive training of successor features approximators. Second, the enforcement of the constraints for training can introduce additional hyperparameters (e.g., the weight coefficient for the penalty terms). Thus, suboptimal hyperparameters may introduce either instability in the training or a decrease in the performance. Test-time constrained GPI. 
Test-time constrained GPI. Our idea starts with the observation that in the constrained training, the learned successor features from source tasks are considered the "trustworthy" features for the constraints, because the USFAs are trained on the source tasks. Besides, only the source successor features are used for computing the constraints for all the other tasks. This implies that the learning of the source successor features should not be affected by other criteria, and that more accurate source successor features would produce better constraints for other tasks with smaller errors. Based on this implication, we propose constrained GPI, which can not only overcome the limitation of USFAs as done by the aforementioned constrained training but also has two additional practical merits: (i) it is computationally simpler, and (ii) it is a test-time approach with no training. Simply put, we propose replacing the usual GPI policy with the constrained GPI policy
$$\pi_{\mathrm{CGPI}}(s) \in \operatorname*{argmax}_{a} \max_{z \in \mathcal{C}} \left[ \min \left\{ \max \left\{ \tilde{Q}^{\pi_z}_{w'}(s, a), \, L_{w',\mathcal{T}}(s, a) \right\}, \, U_{w',\mathcal{T},\xi(w',\mathcal{T},s,a)}(s, a) \right\} \right], \qquad (21)$$
where the target task $w'$ is expressible as a linear combination of the source tasks and $\xi(\cdot)$ again outputs $\alpha$ given $w'$ and $\mathcal{T}$ as in Equation (20). $\mathcal{C}$ is a set of policies that we can freely choose when applying the constrained GPI. The constrained GPI policy selects the actions that maximize the maximum action-values as the original GPI policy does, but it also caps the values with the lower and upper bound constraints derived from the source successor features. The upper bound constraint fixes the overestimation of values computed with approximate successor features for either the target task $w'$ or any other tasks used for constrained GPI. The lower bound constraint ensures that action-values on the target task for the greedy action selection are at least as close to the optimal target action-values as the lower bounds. The approximation error terms in the lower and upper bounds, i.e., $\epsilon^{\pi_w}_{w'}(s, a)$ and $\epsilon^{\pi_w}_{w}(s, a)$ in Theorem 1, could be ignored in practice, as long as the approximation errors of the source successor features are sufficiently small. Also, we can obtain the tightest upper bound by defining $\xi(\cdot)$ as
$$\xi(w', \mathcal{T}, s, a) := \operatorname*{argmin}_{\{\alpha_w\}_{w \in \mathcal{T}}} U_{w',\mathcal{T},\{\alpha_w\}_{w \in \mathcal{T}}}(s, a) \quad \text{subject to} \quad w' = \sum_{w \in \mathcal{T}} \alpha_w w. \qquad (22)$$
The objective $U_{w',\mathcal{T},\{\alpha_w\}_{w \in \mathcal{T}}}(s, a)$ is a sum of piecewise linear functions. Thus, Equation (22) can be solved with linear programming. We observe that using the lower bound constraint with $L_{w',\mathcal{T}}(s, a)$ is equivalent to including the successor features for source tasks in the input to the constrained GPI, i.e., $\mathcal{T} \subseteq \mathcal{C}$. Also, since $L_{w',\mathcal{T}}(s, a) \le U_{w',\mathcal{T},\xi(w',\mathcal{T},s,a)}(s, a)$, there would be no difference between GPI and constrained GPI when $\mathcal{C} = \mathcal{T}$. Thus, in our experiments, we mainly use $\mathcal{C} = \{w'\}$, which is equivalent to using $\mathcal{C} = \mathcal{T} \cup \{w'\}$.
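The following is a small sketch, under stated assumptions, of the two pieces described above: solving the linear program of Equation (22) for the tightest upper bound via an epigraph reformulation, and the clamped greedy action selection of Equation (21). The use of scipy.optimize.linprog is one possible way to solve the LP, not necessarily the authors' implementation, and the array names and shapes are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def tightest_upper_bound(source_tasks, w_target, q_plus, c_min):
    """Eq. (22): minimize U_{w',T,alpha}(s, a) over alpha subject to w' = sum_w alpha_w w.

    source_tasks : (T, d) rows are the source task vectors w in T
    w_target     : (d,)   target task vector w' (assumed to lie in the span of the rows)
    q_plus       : (T,)   Q~^{pi_w}_w(s, a) + eps^{pi_w}_w(s, a) per source task
    c_min        : (T,)   C_w(s, a), e.g. r_min_w / (1 - gamma)
    """
    T, d = source_tasks.shape
    # Variables x = [alpha_1..alpha_T, t_1..t_T]; epigraph form of each max term:
    # minimize sum_w t_w  with  t_w >= alpha_w * q_plus_w  and  t_w >= alpha_w * c_min_w.
    c = np.concatenate([np.zeros(T), np.ones(T)])
    A_ub = np.zeros((2 * T, 2 * T))
    for i in range(T):
        A_ub[i, i], A_ub[i, T + i] = q_plus[i], -1.0         # alpha_i * q_plus_i - t_i <= 0
        A_ub[T + i, i], A_ub[T + i, T + i] = c_min[i], -1.0  # alpha_i * c_min_i - t_i <= 0
    A_eq = np.hstack([source_tasks.T, np.zeros((d, T))])      # sum_w alpha_w * w = w'
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * T),
                  A_eq=A_eq, b_eq=w_target,
                  bounds=[(None, None)] * (2 * T), method="highs")
    return res.x[:T], res.fun                                 # (alpha, U_{w',T,alpha}(s, a))

def constrained_gpi_action(q_candidates, lower, upper):
    """Eq. (21): clamp each candidate policy's action-values to [L, U], then act greedily.

    q_candidates : (|C|, |A|) approximate Q~^{pi_z}_{w'}(s, .) for the policies z in C
    lower, upper : (|A|,) per-action bounds L_{w',T}(s, .) and U_{w',T,xi(...)}(s, .)
    """
    clamped = np.clip(q_candidates, lower, upper)   # min{ max{Q~, L}, U }
    return int(clamped.max(axis=0).argmax())        # argmax_a max_{z in C}
```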
4 Experiments

4.1 Scavenger Experiments

We start our experiments in the Scavenger environment [8, 9], which can assess our approach with minimal influence from external causes. In Scavenger, the agent is positioned at one of the cells in a $G \times G$ grid, and the goal is to maximize the return by collecting objects. Both the agent and objects are spawned at random locations, and there are $d$ classes of objects where the class determines the value of the reward. The state space is $\mathcal{S} = \{0, 1\}^{G \times G \times (d+1)}$, where the first $d$ channels describe the current locations of the objects on the map and the last channel specifies the walls where the agent cannot go and objects do not appear. There are four actions available: A = {UP, DOWN, LEFT, RIGHT}, and the agent picks up an object by visiting the cell of the object, which spawns a new object of a random class at a random location. The feature $\phi(s, a, s') \in \{0, 1\}^d$ is a one-hot vector whose element represents whether the agent picks up an object of that type or not within the transition. The task vector $w \in \mathbb{R}^d$ determines the reward values for the $d$ different classes of objects. Please see Barreto et al. [9] for the full details. We evaluate the zero-shot transfer performance of different approaches, i.e., we first train USFAs as proposed in [12], and measure the performance of GPI and constrained GPI policies that use the same set of USFAs on target tasks with no further policy updates. We set $G = 11$ and use 20 objects in total with different numbers of classes: $d = 2$ and $d = 4$. With $d = 4$, we also test the USFAs that are learned with the constrained training for a comparison. We use the standard basis vectors of $\mathbb{R}^d$ as the set of source tasks as done in [12], and evaluate agents on the set of target tasks defined by $\{-1, 1\}^d$. Therefore, all the target tasks except for the all-ones vector $\mathbf{1}$ are not covered by the conical hull of source tasks, which requires Theorem 1 for the bounding of values. We train eight USFAs agents for 1M steps, and evaluate them on each target vector 10 times with a fixed set of 10 random seeds. To be invariant to the reward scale differences between different tasks, we normalize the scores (or returns) from the environment by the minimum and maximum scores with respect to all the agents' evaluation episodes on each task. Figures 2 and 3 compare the performance of the USFAs agents with GPI and constrained GPI for exploitation, following the evaluation scheme suggested by [2]. Although they use the same set of trained USFAs, the constrained GPI brings a notable performance improvement in comparison with the original GPI. Also, Figure 3 suggests that the constrained GPI, the test-time method, can match or even outperform the agents learned with the constrained training.

(Figure: aggregate normalized scores — Median, IQM, Mean, and Optimality Gap — comparing CGPI-T (ours), CT (ours) + GPI-S/T/ST, and GPI-S/T/ST; panel (a): all tasks.)

One possible explanation is that the constrained training might experience some instability in learning depending on the choice of the hyperparameters, as described in Section 3.2. In the first and the second columns of Table 1, we present the proportions of the action-values that are changed by the lower and upper bounds of the constrained GPI, measured for the evaluation on Scavenger. The third column shows the proportions of resulting greedy actions changed by them. This implies that USFAs, i.e., the function approximators of successor features, may not satisfy the optimal value bounds presented in Theorem 1, and applying the bounds could change a fair proportion of greedy actions to improve the performance.
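As a small illustration of the linear reward structure these Scavenger tasks satisfy, the sketch below builds the one-hot transition feature and recovers the reward as an inner product with the task vector; the variable names are illustrative and not taken from the environment's API.

```python
import numpy as np

d = 4                                    # number of object classes
w = np.array([-1.0, 1.0, 1.0, -1.0])     # one target task vector from {-1, 1}^d

def transition_feature(picked_class, d):
    """phi(s, a, s'): one-hot over object classes if an object was picked up, else zeros."""
    phi = np.zeros(d)
    if picked_class is not None:
        phi[picked_class] = 1.0
    return phi

phi = transition_feature(picked_class=2, d=d)
reward = float(phi @ w)                   # r_w = phi(s, a, s')^T w  ->  1.0 here
```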
4.2 DeepMind Lab Experiments with Learned φ

For evaluation of our approach in a more complex and realistic setting, we employ DeepMind Lab [7, 10, 12] and conduct experiments in a first-person view 3D environment. In a single room, a goal object is placed arbitrarily, and the objective is to reach the goal, whose location changes between tasks, before the episode ends. Figure 4 shows an example scene that the agent sees, with the goal object in red. At every time step, the agent observes an 84 × 84 × 3 image from the environment and outputs one of 45 possible actions, which include 5, 3 and 3 choices for the LOOK_LEFT_RIGHT_PIXELS_PER_FRAME, STRAFE_LEFT_RIGHT and MOVE_BACK_FORWARD controls, respectively. Since observations are in the first-person view, the goal object may not be seen by the agent, which makes transfer given the task information g critical to the success of the tasks. In each task, we divide the room into an 11 × 11 grid and place the goal object in one of the cells. The task information g is a two-dimensional vector that contains the coordinate of the goal in the grid. Starting at the center of the room, the agent receives a reward of one if it reaches the goal within the episode horizon and no reward otherwise. Therefore, the reward functions are sparse. For these experiments, where the agent observes rendered images rather than the underlying states, it may not be viable to define features φ's and task vectors w's that linearly decompose reward functions. Therefore, we train agents with the learning of φ̃ and w̃ from samples from the source tasks with d = 2, as described in Section 2.3. Inspired by Hong et al. [18], we examine zero-shot transfer with the GPI and constrained GPI using two transfer settings: "left-to-right" and "near-to-far". In the "left-to-right" setting, the agent is trained on the source tasks whose goals are sampled from the left half of the room and is tested on the target tasks with goals from the right half. In the "near-to-far" setting, the source tasks have the goals within an L∞ = 2 distance from the center of the room, and target tasks set the goals farther than L∞ = 2. For each setting, we train eight USFAs agents with different seeds for 3M environment steps on the source tasks and test them on the target tasks. Figure 5 presents the comparison of the GPI with different C's and the constrained GPI. Leveraging the same set of trained USFAs with learned φ̃ and w̃, the constrained GPI outperforms the GPI with the three C's in both settings by a notable margin. Another observation is that the trained USFAs agents seem to overfit more to the source tasks in the "near-to-far" setting compared to the "left-to-right" setting, which makes the performance on the target tasks much worse. Nonetheless, the constrained GPI is still helpful in such overfitting situations.

5 Conclusion and Discussion

We presented constrained GPI, a simple yet effective test-time approach for transfer with approximate successor features. We first focused on the issue that although universal successor features approximators (USFAs) exploit the smoothness of optimal values across different tasks, their approximation errors on novel target tasks could be large, especially when those tasks are quite distant from the source tasks. Thus, we introduced a theorem about lower and upper bounds on the optimal values for novel task vectors that belong to the task vector space linearly spanned by the set of source task vectors, relaxing the conical combination condition used for the theorem by Nemecek and Parr [25]. We proposed a constrained training scheme making use of those bounds for reducing the action-value errors of the learned approximators on novel tasks. We then suggested constrained GPI, which uses the bounds at test time to achieve an analogous effect, allowing the use of previously trained models. We empirically showed that this test-time approach can improve the zero-shot transfer performance by a large margin in multiple environments.
Limitations and future directions. There may be some cases where the minimum rewards for source tasks, i.e., the $r^{\min}_w$'s, are overly small, which could lead to fewer changes of both action-values and behaviors induced by the upper-bounding in Theorem 1 with $C_w(s, a) = \frac{1}{1-\gamma} r^{\min}_w$. An interesting direction to tackle the issue is to learn the minimum action-value function during the training and to use the approximate minimum value at each state-action pair as $C_w(s, a)$ for deriving the upper bound in Theorem 1. It may allow computing upper bounds more tightly and adaptively for different state-action pairs. Also, if the learned successor features approximators have large errors even on source tasks, not only GPI but also constrained GPI's bounding may not be meaningfully helpful. One idea to mitigate the issue is to take into account the uncertainty in the approximators and the approximation error term that appears in Theorem 1, e.g., by using ensemble models. As an intriguing direction for future research, we could extend our constrained GPI to other non-linear forms of reward or value decompositions. It may also be interesting to make transfer with successor features and constrained GPI compatible with large-scale approaches for generalization such as [28]. We do not see direct negative societal impacts of this work.

Acknowledgements

We thank the anonymous reviewers for their valuable comments. This work was supported by Samsung Advanced Institute of Technology, and Institute of Information & communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT), including (No.2019-0-01082, SW StarLab), (No.2022-0-00156, Fundamental research on continual meta-learning for quality enhancement of casual videos and their 3D metaverse transformation), and (No.2021-0-01343, Artificial Intelligence Graduate School Program (Seoul National University)). Jaekyeom Kim was partly supported by a Google PhD Fellowship. Gunhee Kim is the corresponding author.
1. What is the focus and contribution of the paper regarding successor features and generalized policy improvement? 2. What are the strengths of the proposed approach, particularly in terms of its theoretical grounding and empirical results? 3. What are the weaknesses of the paper, especially regarding the test environments and limited scalability? 4. Do you have any concerns or suggestions for improving the approach's applicability to more complex tasks and domains? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Questions Limitations
Summary Of The Paper
Successor Features / Generalized Policy Improvement provide a method for generalizing to new tasks that are a linear combination of existing tasks. USFAs combine this approach to generalization with neural networks, which also generalize by training function approximators conditioned on the task. This work extends USFAs by showing that one can establish lower and upper bounds on the generalization performance. When estimating action-values on a new task, these bounds are used to constrain the function approximator estimate. They test this approach (comparing to USFAs) on two simple environments and find that using these constraints outperforms USFAs.
Strengths And Weaknesses
Strengths: Paper is well communicated. Paper will be of interest to the community working on successor features and related approaches. Paper introduces both theoretical grounding and empirical results.
Weaknesses: In common with a lot of work in SF, the test environments are "toy" and it's unclear whether these approaches scale to more interesting tasks. For this reason, it may be of interest only to a smaller subset of the NeurIPS community. It would be interesting to compare very different approaches, such as a conditional decision transformer [1] or GATO-style policy distillation [2]. [3] introduces an alternative method for improving the performance of GPI (in the max-ent RL framework) that reduces the generalization error and should be cited, and ideally compared against.
[1] Chen, Lili, et al. "Decision transformer: Reinforcement learning via sequence modeling." Advances in Neural Information Processing Systems 34 (2021): 15084-15097.
[2] Reed, Scott, et al. "A generalist agent." arXiv preprint arXiv:2205.06175 (2022).
[3] Hunt, Jonathan, et al. "Composing entropic policies using divergence correction." International Conference on Machine Learning. PMLR, 2019.
Questions
Could you comment on the scalability of this approach to larger domains and tasks?
Limitations
Yes, although some comments (and ideally testing) on scaling to more complex domains would be useful.
NIPS
Title Cross-lingual Retrieval for Iterative Self-Supervised Training

Abstract Recent studies have demonstrated the cross-lingual alignment ability of multilingual pretrained language models. In this work, we found that the cross-lingual alignment can be further improved by training seq2seq models on sentence pairs mined using their own encoder outputs. We utilized these findings to develop a new approach — cross-lingual retrieval for iterative self-supervised training (CRISS), where mining and training processes are applied iteratively, improving cross-lingual alignment and translation ability at the same time. Using this method, we achieved state-of-the-art unsupervised machine translation results on 9 language directions with an average improvement of 2.4 BLEU, and on the Tatoeba sentence retrieval task in the XTREME benchmark on 16 languages with an average improvement of 21.5% in absolute accuracy. Furthermore, CRISS also brings an additional 1.8 BLEU improvement on average compared to mBART, when finetuned on supervised machine translation downstream tasks. Our code and pretrained models are publicly available.1

1 Introduction

Pretraining has demonstrated success in various natural language processing (NLP) tasks. In particular, self-supervised pretraining can learn useful representations by training with pretext tasks, such as cloze and masked language modeling, denoising autoencoders, etc., on large amounts of unlabelled data [11, 25, 28, 32, 35, 51]. Such learned "universal representations" can be finetuned on task-specific training data to achieve good performance on downstream tasks. More recently, new pretraining techniques have been developed in the multilingual setting, pushing the state-of-the-art on cross-lingual understanding and machine translation. Since access to labeled parallel data is very limited, especially for low-resource languages, better pretraining techniques that utilize unlabeled data are the key to unlocking better machine translation performance [9, 27, 42]. In this work, we propose a novel self-supervised pretraining method for multilingual sequence generation: Cross-lingual Retrieval for Iterative Self-Supervised training (CRISS). CRISS is developed based on the finding that the encoder outputs of a multilingual denoising autoencoder can be used as language-agnostic representations to retrieve parallel sentence pairs, and that training the model on these retrieved sentence pairs can further improve its sentence retrieval and translation capabilities in an iterative manner. Using only unlabeled data from many different languages, CRISS iteratively mines for parallel sentences across languages, trains a new better multilingual model using these mined sentence pairs, mines again for better parallel sentences, and repeats.

1https://github.com/pytorch/fairseq/blob/master/examples/criss

In summary, we present the following contributions in this paper:
• We present empirical results that show the encoder outputs of a multilingual denoising autoencoder (mBART) represent language-agnostic semantic meaning.
• We present empirical results that show finetuning mBART on only one pair of parallel bi-text will improve cross-lingual alignment for all language directions.
• We introduce a new iterative self-supervised learning method that combines mining and multilingual training to improve both tasks after each iteration.
• We significantly outperform the previous state of the art on unsupervised machine translation and sentence retrieval.
• We show that our pretraining method can further improve the performance of the supervised machine translation task compared to mBART.

This paper is organized as follows. Section 2 is devoted to related work. Section 3 introduces the improvable language-agnostic representation emerging from pretraining. Section 4 describes the details of cross-lingual retrieval for iterative self-supervised training (CRISS). Section 5 evaluates CRISS with unsupervised and supervised machine translation tasks as well as sentence retrieval tasks. Section 6 presents ablation studies to understand the right configurations of the approach. We conclude in Section 7.

2 Related work

Emergent Cross-Lingual Alignment On the cross-lingual alignment from pretrained language models, [49, 33] present empirical evidence that there exists a cross-lingual alignment structure in the encoder, which is trained with multiple languages on a shared masked language modeling task. Analysis from [46] shows that shared subword vocabulary has a negligible effect, while model depth matters more for cross-lingual transferability. In English language modeling, retrieval-based data augmentation has been explored by [20] and [15]. Our work combines this idea with the emergent cross-lingual alignment to retrieve sentences in another language, instead of retrieving paraphrases in the same language, in an unsupervised manner.

Cross-Lingual Representations Various previous works have explored leveraging cross-lingual word representations to build parallel dictionaries and phrase tables, then applying them to downstream tasks [1, 2, 3, 4, 24, 29]. Our work shows that we can work directly with sentence-level representations to mine for parallel sentence pairs. Additionally, our approach shares the same neural network architecture for pretraining and downstream tasks, making it easier to finetune for downstream tasks such as mining and translation. There is also a large area of research in using sentence-level representations to mine pseudo-parallel sentence pairs [6, 8, 14, 17, 39, 40, 43]. Compared to supervised approaches such as [14, 40], CRISS performs mining with unsupervised sentence representations pretrained from large monolingual data. This enables us to achieve good sentence retrieval performance on very low-resource languages such as Kazakh, Nepali, Sinhala, and Gujarati. Compared to [17], we used full sentence representations instead of segment detection through unsupervised word representations. This enables us to get stronger machine translation results.

Multilingual Pretraining Methods With large amounts of unlabeled data, various self-supervised pretraining approaches have been proposed to initialize models or parts of the models for downstream tasks (e.g., machine translation, classification, inference and so on) [11, 12, 25, 27, 28, 32, 35, 36, 37, 42, 51]. Recently these approaches have been extended from single-language training to cross-lingual training [9, 22, 27, 45]. In the supervised machine learning literature, data augmentation [5, 21, 41, 50] has been applied to improve learning performance. To the best of our knowledge, little work has explored self-supervised data augmentation for pretraining. This work pretrains a multilingual model with a self-supervised data augmentation procedure, using the power of the emergent cross-lingual representation alignment discovered by the model itself in an iterative manner.
Unsupervised Machine Translation Several authors have explored unsupervised machine translation techniques to utilize monolingual data for machine translation. A major line of work that does not use any labeled parallel data typically works as follows: they first train an initial language model using a noisy reconstruction objective, then finetune it using an on-the-fly backtranslation loss [13, 23, 26, 27, 42]. Our work differs from this line of work in that we do not use backtranslation but instead retrieve the target sentence directly from a monolingual corpus. More recently, [38] and [48] start incorporating explicit cross-lingual data into pretraining. However, since the quality of the cross-lingual data extracted is low, additional mechanisms such as sentence editing, or finetuning with iterative backtranslation, are needed. To the best of our knowledge, our approach is the first one that achieves competitive unsupervised machine translation results without using backtranslation.

3 Self-Improvable Language Agnostic Representation from Pretraining

We start our investigation with the language-agnostic representation emerging from mBART pretrained models [27]. Our approach is grounded by the following properties that we discovered in mBART models: (1) the mBART encoder output represents the semantics of sentences, (2) the representation is language agnostic, and (3) the language agnosticity can be improved by finetuning the mBART models with bitext data of a small number of language pairs (or even only 1 pair of languages) in an iterative manner. We explain these findings in detail in this section and the next section on the iterative procedure.

3.1 Cross-lingual Language Representations

We use the mBART [27] seq2seq pre-training scheme to initialize cross-lingual models for both parallel data mining and multilingual machine translation. The mBART training covers $N$ languages: $D = \{D_1, \ldots, D_N\}$, where each $D_i$ is a collection of monolingual documents in language $i$. mBART trains a seq2seq model to predict the original text $X$ given $g(X)$, where $g$ is a noising function, defined below, that corrupts text. Formally, we aim to maximize $\mathcal{L}_\theta$:
$$\mathcal{L}_\theta = \sum_{D_i \in D} \sum_{x \in D_i} \log P(x \mid g(x); \theta), \qquad (1)$$
where $x$ is an instance in language $i$ and the distribution $P$ is defined by the seq2seq model. In this paper we used the mbart.cc25 checkpoint [27] open-sourced in the Fairseq library [30].2 This model is pretrained using two types of noise in $g$ — random span masking and order permutation — as described in [27]. With a pretrained mBART model, sentences are then encoded simply by extracting L2-normalized average-pooled encoder outputs.

3.2 Case Study

To understand the language agnosticity of the mBART sentence representations, we study sentence retrieval tasks. For each language pair, we go through each sentence in the source language, find the closest sentence to that sentence in the target language (using cosine similarity), and report the average top-1 retrieval accuracy for each language pair. We use the TED58 dataset, which contains multi-way translations of TED talks in 58 languages [34].3 The sentence retrieval accuracy on this TED dataset is depicted in Figure 1. The average retrieval accuracy is 57% from the mBART model, which is purely trained on monolingual data of 25 languages without any parallel data or dictionary; the baseline accuracy for random guessing is 0.04%. We also see high retrieval accuracy for language pairs with very different token distributions such as Russian-German (72%) or Korean-Romanian (58%).
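The sentence encoding and the top-1 retrieval evaluation just described can be sketched in PyTorch as follows. The encoder interface (returning per-token hidden states) and the assumption that row i of the source batch is the gold translation of row i of the target batch are illustrative simplifications, not the exact evaluation code.

```python
import torch
import torch.nn.functional as F

def embed(encoder, token_ids, pad_mask):
    """L2-normalized average-pooled encoder outputs, as described above.

    encoder(token_ids) is assumed to return hidden states of shape (batch, seq, dim);
    pad_mask is 1 for real tokens and 0 for padding.
    """
    hidden = encoder(token_ids)                              # (B, T, D), assumed interface
    mask = pad_mask.unsqueeze(-1).float()
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)    # average over non-pad tokens
    return F.normalize(pooled, p=2, dim=-1)                  # unit-length sentence vectors

def top1_retrieval_accuracy(src_emb, tgt_emb):
    """Fraction of source sentences whose nearest target sentence (by cosine) is the gold one."""
    sims = src_emb @ tgt_emb.t()          # cosine similarity, since vectors are unit-normalized
    pred = sims.argmax(dim=1)
    gold = torch.arange(src_emb.size(0))
    return (pred == gold).float().mean().item()
```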
The high retrieval accuracy suggests that the mBART model trained on monolingual data of multiple languages is able to generate language-agnostic representations that are aligned at the semantic level in the vector space.

2https://github.com/pytorch/fairseq/blob/master/examples/mbart/README.md
3We filter the test split to samples that have translations for all 15 languages: Arabic, Czech, German, English, Spanish, French, Italian, Japanese, Korean, Dutch, Romanian, Russian, Turkish, Vietnamese, Chinese (simplified). As a result, we get a dataset of 2253 sentences which are translated into 15 different languages (a total of 33,795 sentences).

Moreover, the cross-lingual semantic alignment over multiple languages not only emerges from the monolingual training but also can be improved by a relatively small amount of parallel data of just one direction. Figure 2 shows the sentence retrieval accuracy of an mBART model that is finetuned with Japanese-English data from the IWSLT17 parallel dataset (223,000 sentence pairs) [7]. Even with parallel data of one direction, the retrieval accuracy improved for all language directions by 27% (absolute) on average. Inspired by these case studies, we hypothesize that the language-agnostic representation of the pretrained model can be self-improved by parallel data mined by the model itself, without any supervised parallel data. We will devote Section 4 to the details of the self-mining capability and the derived Cross-lingual Retrieval for Iterative Self-Supervised Training (CRISS) procedure.

4 Cross-lingual Retrieval for Iterative Self-Supervised Training (CRISS)

Algorithm 1 Unsupervised Parallel Data Mining
1: function MINE(Θ, Di, Dj)
2:   Input: (1) monolingual data sets Di and Dj for languages i and j respectively, (2) a pretrained model Θ
3:   Set k, M, τ to be the desired KNN size, the desired mining size, and the desired minimum score threshold respectively
4:   for each x in Di, each y in Dj do
5:     x, y ← Embed(Θ, x), Embed(Θ, y)
6:     Nx, Ny ← KNN(x, Dj, k), KNN(y, Di, k)    ▷ Using FAISS [19]
7:   end for
8:   return D′ = {(x, y)} where (x, y) are the top M pairs s.t. score(x, y) ≥ τ following Equation 2
9: end function

Algorithm 2 CRISS training
1: Input: (1) monolingual data from N languages {Dn}, n = 1..N, (2) a pretrained mBART model Ψ, (3) total number of iterations T
2: Initialize Θ ← Ψ, t = 0
3: while t < T do
4:   for every language pair (i, j) where i ≠ j do
5:     D′i,j ← Mine(Θ, Di, Dj)    ▷ Algorithm 1
6:   end for
7:   Θ ← MultilingualTrain(Ψ, {D′i,j | i ≠ j})    ▷ Note: train from the initial mBART model Ψ
8: end while

In this section, we make use of the language-agnostic representation from mBART described in Section 3 to mine parallel data from a collection of monolingual data. The mined parallel data is then used to further improve the cross-lingual alignment performance of the pretrained model. The mining and improving processes are repeated multiple times to improve the performance of retrieval, unsupervised and supervised multilingual translation tasks, as depicted in Figure 3, using Algorithm 2.

Unsupervised parallel data mining To mine parallel data without supervised signals, we employ the margin function formulation [5, 6, 40] based on K nearest neighbors (KNN) to score and rank pairs of sentences from any two languages.
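The KNN search in line 6 of Algorithm 1 can be sketched with FAISS as below; the exact flat inner-product index and the function name are simplifications for illustration — the actual mining runs distributed search over up to 100 million sentences per language.

```python
import faiss
import numpy as np

def knn_neighbors(src_emb, tgt_emb, k=5):
    """For each unit-normalized source embedding, return the cosine similarities and indices
    of its k nearest neighbors in the target-language pool (Algorithm 1, line 6).
    Inner product on unit vectors equals cosine similarity."""
    d = tgt_emb.shape[1]
    index = faiss.IndexFlatIP(d)                 # exact inner-product index (sketch only)
    index.add(tgt_emb.astype(np.float32))
    sims, idx = index.search(src_emb.astype(np.float32), k)
    return sims, idx
```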
Let $x$ and $y$ be the vector representations of two sentences in languages $i$ and $j$ respectively. We score the semantic similarity of $x$ and $y$ using a ratio margin function [5] defined as the following:
$$\mathrm{score}(x, y) = \frac{\cos(x, y)}{\sum_{z \in N_x} \frac{\cos(x, z)}{2k} + \sum_{z \in N_y} \frac{\cos(z, y)}{2k}}, \qquad (2)$$
where $N_x$ is the KNN neighborhood of $x$ in the monolingual dataset of $y$'s language, and $N_y$ is the KNN neighborhood of $y$ in the monolingual dataset of $x$'s language. The margin scoring function can be interpreted as a cosine score normalized by average distances (broadly defined) to the margin regions established by the cross-lingual KNN neighborhoods of the source and target sentences respectively. The KNN distance metrics are defined by $\cos(x, y)$. We use FAISS [19] — a distributed dense vector similarity search library — to simultaneously search for all neighborhoods in an efficient manner at billion-scale. Thus we obtain the mining Algorithm 1. We find that in order to train a multilingual model that can translate between $N$ languages, we don't need to mine the training data for all $N(N-1)$ language pairs, but only a subset of them. This can be explained by our earlier finding that finetuning on one pair of parallel data helps improve the cross-lingual alignment between every language pair.

Iterative mining and multilingual training Building on the unsupervised mining capability of the pretrained model (Algorithm 1), we can conduct the iterative mining and multilingual training procedure (Algorithm 2) to improve the pretrained models for both mining and downstream tasks. In Algorithm 2, we repeat mining and multilingual training $T$ times. For multilingual training, we simply augment each mined pair $(x, y)$ of sentences by adding a target language token at the beginning of $y$ to form a target-language-token-augmented pair $(x, y')$. We then aggregate all mined pairs $\{(x, y')\}$ of the mined language pairs into a single data set to train a standard seq2seq machine translation transformer model [44] from the pretrained mBART model. To avoid overfitting to the noisy mined data, at each iteration we always refine from the original monolingual trained mBART model but only update the mined dataset using the improved model (line 7 of Algorithm 2).
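The following NumPy sketch computes the ratio margin score of Equation (2) above for a small candidate pool of unit-normalized sentence embeddings. It is a brute-force illustration that keeps the full similarity matrix in memory, whereas the actual mining restricts the neighborhoods via FAISS search over the full monolingual corpora; pairs would then be kept if they rank among the top M and their score exceeds the tuned threshold τ, as in Algorithm 1.

```python
import numpy as np

def margin_scores(src_emb, tgt_emb, k=5):
    """Ratio margin score (Eq. 2) for every source/target pair of unit-normalized embeddings.

    src_emb : (n_src, dim), tgt_emb : (n_tgt, dim); returns an (n_src, n_tgt) score matrix.
    """
    sims = src_emb @ tgt_emb.T                               # cos(x, y) for all pairs
    # mean of the k largest similarities divided by 2  ==  sum over N_x of cos(x, z) / (2k)
    fwd = np.sort(sims, axis=1)[:, -k:].mean(axis=1) / 2.0   # neighborhood of x in y's language
    bwd = np.sort(sims, axis=0)[-k:, :].mean(axis=0) / 2.0   # neighborhood of y in x's language
    return sims / (fwd[:, None] + bwd[None, :])
```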
5 Experiment Evaluation

We pretrained an mBART model with a Common Crawl dataset constrained to the 25 languages as in [27] for which we have evaluation data. We also employ the same model architecture, the same BPE pre-processing, and add the same set of language tokens as in [27]. We keep the same BPE vocab and the same model architecture throughout pretraining and downstream tasks. For mining monolingual data, we use the same text extraction process as described in [40] to get the monolingual data to mine from, which is a curated version of the Common Crawl corpus. We refer the reader to [40, 47] for a detailed description of how the monolingual data are preprocessed. For faster mining, we subsample the resulting Common Crawl data to 100 million sentences in each language. For low-resource languages, we may have fewer than 100 million monolingual sentences. The statistics of the monolingual data used are included in the supplementary materials. We set $k = 5$ for the KNN neighborhood retrieval for the margin score function (Equation 2). In each iteration, we tune the margin score threshold based on validation BLEU on a sampled validation set of size 2000. The sizes of the mined bi-text in each iteration are included in the supplementary materials. Our default configuration mines sentences to and from English, Hindi, Spanish, and Chinese, for a total of 90 language pairs (180 language directions), instead of all 300 language pairs (600 directions) for the 25 languages. With the mined 180-direction parallel data, we then train the multilingual transformer model for a maximum of 20,000 steps using a label-smoothed cross-entropy loss, as described in Algorithm 2. We sweep for the best maximum learning rate using validation BLEUs. After pretraining, the same model is evaluated on three tasks: sentence retrieval, unsupervised machine translation, and supervised machine translation. For supervised machine translation, we use the CRISS model to initialize the weights to train models for supervised machine translation.

5.1 Unsupervised Machine Translation

We evaluate CRISS on unsupervised neural machine translation benchmarks that cover both low-resource and high-resource language directions. For English-French we use WMT'14, for English-German and English-Romanian we use WMT'16 test data, and for English-Nepali and English-Sinhala we use the Flores test set [16]. For decoding in both the unsupervised and supervised machine translation tasks, we use beam search with beam size 5, and report the final results in BLEU [31]. To be consistent with previous literature, we used multi-bleu.pl4 for evaluation. As shown in Table 1, on these unsupervised benchmarks the CRISS model outperforms the state of the art in 9 out of 10 language directions. Our approach works well on dissimilar language pairs, achieving 14.4 BLEU on Nepali-English (improving 4.4 BLEU compared to the previous method), and 13.6 BLEU on Sinhala-English (improving 5.4 BLEU compared to the previous method). On similar language pairs, we also improved Romanian-English from 33.6 to 37.6 BLEU, and German-English from 35.5 to 37.1 BLEU. We also report translation quality for other language pairs that do not have a previous benchmark in the supplementary materials.

4https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl

5.2 Tatoeba: Similarity Retrieval

We use the Tatoeba dataset [6] to evaluate the cross-lingual alignment quality of the CRISS model following the evaluation procedure specified in the XTREME benchmark [18]. As shown in Table 2, compared to other pretraining approaches that don't use parallel data, CRISS outperforms the state of the art by a large margin, improving all 16 languages by an average of 21.5% in absolute accuracy. Our approach even beats the state-of-the-art supervised approach [6] on Kazakh, improving accuracy from 17.39% to 77.9%. This shows the potential of our work to improve translation for language pairs with little labeled parallel training data.

5.3 Supervised Machine Translation

For the supervised machine translation task, we use the same benchmark data as in mBART [27]. We finetune models learned from CRISS iterations 1, 2, and 3 on the supervised training data of each bilingual direction. For all directions, we use a 0.3 dropout rate, 0.2 label smoothing, 2500 learning rate warm-up steps, and a 3e-5 maximum learning rate. We use a maximum of 40K training steps, and final models are selected based on the best validation loss. As shown in Table 3, CRISS improved upon mBART on low-resource directions such as Gujarati-English (17.7 BLEU improvement), Kazakh-English (5.9 BLEU improvement), and Nepali-English (4.5 BLEU improvement). Overall, we improved 26 out of 34 directions, with an average improvement of 1.8 BLEU.
6 Ablation Studies

We conduct ablation studies to understand the key ingredients of our method: the pretrained language model, the performance implications of bilingual training versus multilingual training, and the number of pivot languages used to mine parallel data.

6.1 Starting from bilingual pretrained models

To study the benefits of the iterative mining-training procedure on a single language pair (ignoring the effects of multilingual data), we run the CRISS procedure on the English-Romanian language pair. We start with the mBART02 checkpoint trained on English-Romanian monolingual data [27], and apply the CRISS procedure for two iterations. As shown in Tables 4 and 5, the CRISS procedure does work on a single language pair, improving both unsupervised machine translation quality and sentence retrieval accuracy over time. Moreover, the sentence retrieval accuracy of bilingual CRISS-EnRo is higher than that of CRISS25, but the unsupervised machine translation results for CRISS25 are higher. We hypothesize that CRISS-EnRo can mine higher quality English-Romanian pseudo-parallel data, since the encoder can specialize in representing these two languages. However, CRISS25 can utilize more pseudo-parallel data from other directions, which helps it achieve better machine translation results.

Table 4: Unsupervised machine translation results for CRISS starting from bilingual pretrained models
                      en-ro   ro-en
CRISS-EnRo Iter 1     30.1    32.2
CRISS-EnRo Iter 2     33.9    35
CRISS25 Iter 1        24.9    27.9
CRISS25 Iter 2        34.1    36.5

Table 5: Sentence retrieval on the WMT'16 English-Romanian test set
                      Retrieval Accuracy
CRISS-EnRo Iter 1     98.3
CRISS-EnRo Iter 2     98.6
CRISS25 Iter 1        97.8
CRISS25 Iter 2        98.5

6.2 Multilingual Training versus Bilingual Training

Multilingual training is known to help improve translation quality for low-resource language directions. Since we only use mined pseudo-parallel data at the finetuning step, every language direction is essentially low-resource. We ran experiments to confirm that finetuning multilingually helps with translation quality for every direction. Here, we use mBART25 encoder outputs to mine parallel data for 24 from-English and 24 to-English directions. For the bilingual config, we finetune 48 separate models using only the bilingual pseudo-parallel data for each direction. For the multilingual config, we combine the data of all 48 directions and finetune a single multilingual model. As we can see in Figure 5 and Figure 6, multilingual finetuning performs better in general5 and particularly on to-English directions.

6.3 Number of Pivot Languages

In this section, we explore how choosing the number of language directions to mine and finetune affects the model performance. [6] found that using two target languages was enough to make the sentence embedding space aligned. Unlike their work, which requires parallel training data, we are not limited by the amount of labeled data, and can mine for pseudo-parallel data in every language direction. However, our experiments show that there is limited performance gain from using more than 2 pivot languages. We report the translation quality and sentence retrieval accuracy of 1st-iteration CRISS trained with 1 pivot language (English), 2 pivot languages (English, Spanish), and 4 pivot languages (English, Spanish, Hindi, Chinese).

5The en-cs direction is an outlier which is worth further investigation.
(Figure 4: Bilingual versus Multilingual: x-En — per-direction MT performance improvement, bilingual vs. multilingual finetuning.)
(Figure 5: Bilingual versus Multilingual: En-x — per-direction MT performance improvement, bilingual vs. multilingual finetuning.)
(Figure 6: Pivot languages ablation: x-En MT — per-direction MT performance improvement for 1, 2, and 4 pivot languages.)
(Figure 7: Pivot languages ablation: En-x MT — per-direction MT performance improvement for 1, 2, and 4 pivot languages.)

Note that the embedding space is already well aligned even with 1 pivot language because of the emergent alignment in multilingual denoising autoencoder training, and because we jointly train the from-English and to-English directions in the same model. We observe that the average translation quality and sentence retrieval accuracy improve slightly as we increase the number of pivot languages. Since using more pivot languages increases the computational cost linearly in the mining stage, we used 4 pivot languages in the final version.

7 Conclusion

We introduced a new self-supervised training approach that iteratively combines mining and multilingual training procedures to achieve state-of-the-art performance in unsupervised machine translation and sentence retrieval. The proposed approach achieved these results even though we artificially limited the amount of unlabeled data we used. Future work should explore (1) a thorough analysis and theoretical understanding of how the language-agnostic representation arises from denoising pretraining, (2) whether the same approach can be extended to pretrain models for non-seq2seq applications, e.g., unsupervised structural discovery and alignment, and (3) whether the learned cross-lingual representation can be applied to other NLP and non-NLP tasks and how.

Broader Impact

Our work advances the state of the art in unsupervised machine translation. For languages where labelled parallel data is hard to obtain, training methods that better utilize unlabeled data are key to unlocking better translation quality. This technique contributes toward the goal of removing language barriers across the world, particularly for communities speaking low-resource languages. However, the goal is still far from being achieved, and more effort from the community is needed for us to get there. One common pitfall of mining-based techniques in machine translation systems, however, is that they tend to retrieve similar-but-not-exact matches.
For example, since the terms "US" and "Canada" tend to appear in similar contexts, the token embeddings for them could be close to each other; then at the mining stage the system could retrieve "I want to live in the US" as the translation instead of "I want to live in Canada". If the translation model is over-fitted to these mined data, it could repeat the same mistake. We advise practitioners who apply mining-based techniques in production translation systems to be aware of this issue. More broadly, the monolingual pretraining method could be heavily influenced by the crawled data. We will need to carefully study the properties of the trained models and how they respond to data bias such as profanity. In general, we need to further study historical biases and possible malicious data pollution attacks in the crawled data to avoid undesired behaviors of the learned models.

Acknowledgments and Disclosure of Funding

We thank Angela Fan, Vishrav Chaudhary, Don Husa, Alexis Conneau, and Veselin Stoyanov for their valuable and constructive suggestions during the planning and development of this research work.
1. What is the main contribution of the paper, and how does it combine two lines of work? 2. What are the key strengths of the proposed approach, particularly in terms of improvement over previous works? 3. Can you provide more details about the case studies and their significance in demonstrating the effectiveness of the method? 4. Are there any limitations or areas for improvement in the paper, such as typos and grammatical errors?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This work combines together two lines of work on (i) parallel sentence mining using cross-lingual embeddings and (ii) Unsupervised machine translation, to come up with a novel iterative self-supervised approach to improve on both tasks. The approach involves training a multilingual de-noising auto-encoder on monolingual data from multiple languages. The representations learnt on this task are used to initialize an iterative process of mining parallel data, and then using this parallel data to fine-tune the multilingual auto-encoder on a supervised translation task. Training on supervised translation improves the alignment of the cross-lingual embeddings improving the quality of the next set of mined data, while mining better parallel data improves the quality of the model on translation, resulting in a positive feedback loop that benefits both tasks. Key Contributions: (i) The paper proposes a novel approach for unsupervised parallel corpus mining and unsupervised machine translation, improving on the SoTA on both tasks by significant margins. Experiments are conducted on the Tatoeba retrieval task and a 25 language translation task based on a combination of a few academic benchmark datasets. (ii) Careful experiments to demonstrate how using parallel data from just one language pair significantly improves the cross-lingual embedding alignment in a multilingual de-noising auto-encoder. Strengths This is a very interesting project combining together two lines of research, resulting in strong improvements on both tasks. (i) Solid quality improvements over strong baselines on both unsupervised retrieval and translation tasks. (ii) Careful case studies with retrieval on the 58-language Ted dataset to demonstrate how fine-tuning even on a single language pair significantly improves the quality of retrieval on all language pairs. This example clearly motivates the rest of the study. (iii) Very well motivated and easy to read. Weaknesses No major weaknesses. (i) The paper has several typos and grammatical errors. Line 26: Retrievial -> Retrieval Line 99 -> agnostic -> agnosticity, representation -> representations Line 152 prepossessing -> pre-processing
NIPS
Title Cross-lingual Retrieval for Iterative Self-Supervised Training Abstract Recent studies have demonstrated the cross-lingual alignment ability of multilingual pretrained language models. In this work, we found that the cross-lingual alignment can be further improved by training seq2seq models on sentence pairs mined using their own encoder outputs. We utilized these findings to develop a new approach — cross-lingual retrieval for iterative self-supervised training (CRISS), where mining and training processes are applied iteratively, improving cross-lingual alignment and translation ability at the same time. Using this method, we achieved stateof-the-art unsupervised machine translation results on 9 language directions with an average improvement of 2.4 BLEU, and on the Tatoeba sentence retrieval task in the XTREME benchmark on 16 languages with an average improvement of 21.5% in absolute accuracy. Furthermore, CRISS also brings an additional 1.8 BLEU improvement on average compared to mBART, when finetuned on supervised machine translation downstream tasks. Our code and pretrained models are publicly available. 1 1 Introduction Pretraining has demonstrated success in various natural language processing (NLP) tasks. In particular, self-supervised pretraining can learn useful representations by training with pretext task, such as cloze and masked language modeling, denoising autoencoder, etc. on large amounts of unlabelled data [11, 25, 28, 32, 35, 51]. Such learned “universal representations" can be finetuned on task-specific training data to achieve good performance on downstream tasks. More recently, new pretraining techniques have been developed in the multilingual settings, pushing the state-of-the-art on cross-lingual understandin, and machine translation. Since the access to labeled parallel data is very limited, especially for low resource languages, better pretraining techniques that utilizes unlabeled data is the key to unlock better machine translation performances [9, 27, 42]. In this work, we propose a novel self-supervised pretraining method for multilingual sequence generation: Cross-lingual Retrieval for Iterative Self-Supervised training (CRISS). CRISS is developed based on the finding that the encoder outputs of multilingual denoising autoencoder can be used as language agnostic representation to retrieve parallel sentence pairs, and training the model on these retrieved sentence pairs can further improve its sentence retrieval and translation capabilities in an iterative manner. Using only unlabeled data from many different languages, CRISS iteratively mines 1https://github.com/pytorch/fairseq/blob/master/examples/criss 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. for parallel sentences across languages, trains a new better multilingual model using these mined sentence pairs, mines again for better parallel sentences, and repeats. In summary, we present the following contributions in this paper: • We present empirical results that show the encoder outputs of multilingual denoising autoencoder (mBART) represent language-agnostic semantic meaning. • We present empirical results that show finetuning mBART on only one pair of parallel bi-text will improve cross-lingual alignment for all language directions. • We introduce a new iterative self-supervised learning method that combines mining and multilingual training to improve both tasks after each iteration. 
• We significantly outperform the previous state of the art on unsupervised machine translation and sentence retrieval. • We show that our pretraining method can further improve the performance of supervised machine translation task compared to mBART. This paper is organized as follows. Section 2 is devoted to related work. Section 3 introduces improvable language agnostic representation emerging from pretraining. Section 4 describes the details of cross-lingual retrieval for iterative self-supervised training (CRISS). Section 5 evaluates CRISS with unsupervised and supervised machine translation tasks as well as sentence retrieval tasks. Section 6 iterates over ablation studies to understand the right configurations of the approach. Then we conclude by Section 7. 2 Related work Emergent Cross-Lingual Alignment On the cross-lingual alignment from pretrained language models, [49] [33] present empirical evidences that there exists cross-lingual alignment structure in the encoder, which is trained with multiple languages on a shared masked language modeling task. Analysis from [46] shows that shared subword vocabulary has negligible effect, while model depth matters more for cross-lingual transferability. In English language modeling, retrieval-based data augmentation has been explored by [20] and [15]. Our work combines this idea with the emergent cross-lingual alignment to retrieve sentences in another language instead of retrieving paraphrases in the same language in unsupervised manner. Cross-Lingual Representations Various previous works have explored leveraging cross-lingual word representations to build parallel dictionaries and phrase tables, then applying them to downstream tasks [1, 2, 3, 4, 24, 29]. Our work shows that we can work directly with sentence-level representations to mine for parallel sentence pairs. Additionally, our approach shares the same neural networks architecture for pretraining and downstream tasks, making it easier to finetune for downstream tasks such as mining and translation. There is also a large area of research in using sentence-level representations to mine pseudo-parallel sentence pairs [6, 8, 14, 17, 39, 40, 43]. Compared to supervised approaches such as [14, 40], CRISS performs mining with unsupervised sentence representations pretrained from large monolingual data. This enables us to achieve good sentence retrieval performance on very low resource languages such as Kazakh, Nepali, Sinhala, Gujarati. Compared to [17], we used full sentence representations instead of segment detection through unsupervised word representations. This enables us to get stronger machine translation results. Multilingual Pretraining Methods With large amounts of unlabeled data, various self-supervised pretraining approaches have been proposed to initialize models or parts of the models for downstream tasks (e.g. machine translation, classification, inference and so on) [11, 12, 25, 27, 28, 32, 35, 36, 37, 42, 51]. Recently these approaches have been extended from single language training to crosslingual training [9, 22, 27, 45]. In the supervised machine learning literature, data augmentation [5, 21, 41, 50] has been applied to improve learning performance. To the best our knowledge, little work has been explored on self-supervised data augmentation for pretraining. This work pretrains multilingual model with self-supervised data augmenting procedure using the power of emergent cross-lingual representation alignment discovered by the model itself in an iterative manner. 
Unsupervised Machine Translation Several authors have explored unsupervised machine translation techniques to utilize monolingual data for machine translation. A major line of work that does not use any labeled parallel data typically works as follow: They first train an initial language model using a noisy reconstruction objective, then finetune it using on-the-fly backtranslation loss [13, 23, 26, 27, 42]. Our work differs from this line of work in that we do not use backtranslation but instead retrieve the target sentence directly from a monolingual corpus. More recently, [38] and [48] start incorporating explicit cross-lingual data into pretraining. However, since the quality of cross-lingual data extracted is low, additional mechanisms such as sentence editing, or finetuning with iterative backtranslation is needed. To the best of our knowledge, our approach is the first one that achieves competitive unsupervised machine translation results without using backtranslation. 3 Self-Improvable Language Agnostic Representation from Pretraining We start our investigation with the language agnostic representation emerging from mBART pretrained models [27]. Our approach is grounded by the following properties that we discovered in mBART models: (1) the mBART encoder output represents the semantics of sentences, (2) the representation is language agnostic, and (3) the language agnostics can be improved by finetuning the mBART models with bitext data of a small number of language pairs (or even only 1 pair of languages) in an iterative manner. We explain these findings in details in this section and the next section on the iterative procedure. 3.1 Cross-lingual Language Representations We use mBART [27] seq2seq pre-training scheme to initialize cross-lingual models for both parallel data mining and multilingual machine translation. The mBART training covers N languages: D = {D1, ...,DN} where each Di is a collection of monolingual documents in language i. mBART trains a seq2seq model to predict the original text X given g(X) where g is a noising function, defined below, that corrupts text. Formally we aim to maximize Lθ: Lθ = ∑ Di∈D ∑ x∈Di logP (x|g(x); θ) , (1) where x is an instance in language i and the distribution P is defined by the Seq2Seq model. In this paper we used the mbart.cc25 checkpoint [27] open sourced in the Fairseq library [30] 2. This model is pretrained using two types of noise in g — random span masking and order permutation — as described in [27]. With a pretrained mBART model, sentences are then encoded simply by extracting L2-normalized average-pooled encoder outputs. 3.2 Case Study To understand the language agnosticity of the mBART sentence representations, we study sentence retrieval tasks. For each language pair, we go through each sentence in the source language, find the closest sentence to that sentence in the target language (using cosine similarity), and report the average top-1 retrieval accuracy for each language pair. We use the TED58 dataset which contains multi-way translations of TED talks in 58 languages [34]3. The sentence retrieval accuracy on this TED dataset is depicted in Figure 1. The average retrieval accuracy is 57% from the mBART model which is purely trained on monolingual data of 25 languages without any parallel data or dictionary; the baseline accuracy for random guessing is 0.04%. We also see high retrieval accuracy for language pairs with very different token distribution such as Russian-German (72%) or Korean-Romanian (58%). 
The high retrieval accuracy suggests that an mBART model trained on monolingual data from multiple languages is able to generate language-agnostic representations that are aligned at the semantic level in the vector space.

2 https://github.com/pytorch/fairseq/blob/master/examples/mbart/README.md
3 We filter the test split to samples that have translations for all 15 languages: Arabic, Czech, German, English, Spanish, French, Italian, Japanese, Korean, Dutch, Romanian, Russian, Turkish, Vietnamese, and Chinese (simplified). As a result, we get a dataset of 2,253 sentences, each translated into 15 different languages (a total of 33,795 sentences).

Moreover, the cross-lingual semantic alignment over multiple languages not only emerges from the monolingual training but can also be improved by a relatively small amount of parallel data in just one direction. Figure 2 shows the sentence retrieval accuracy of an mBART model that is finetuned on the Japanese-English portion of the IWSLT17 parallel dataset (223,000 sentence pairs) [7]. Even with parallel data in only one direction, the retrieval accuracy improves for all language directions by 27% (absolute) on average. Inspired by these case studies, we hypothesize that the language-agnostic representation of the pretrained model can be self-improved by parallel data mined by the model itself, without any supervised parallel data. We devote Section 4 to the details of this self-mining capability and the derived Cross-lingual Retrieval Iterative Self-Supervised Training (CRISS) procedure.

4 Cross-lingual Retrieval for Iterative Self-Supervised Training (CRISS)

Algorithm 1 Unsupervised Parallel Data Mining
1: function MINE(Θ, D_i, D_j)
2:   Input: (1) monolingual data sets D_i and D_j for languages i and j respectively, (2) a pretrained model Θ
3:   Set k, M, τ to be the desired KNN size, the desired mining size, and the desired minimum score threshold, respectively
4:   for each x in D_i, each y in D_j do
5:     x, y ← Embed(Θ, x), Embed(Θ, y)
6:     N_x, N_y ← KNN(x, D_j, k), KNN(y, D_i, k)   (using FAISS [19])
7:   end for
8:   return D′ = {(x, y)}, the top M pairs such that score(x, y) ≥ τ, following Equation 2
9: end function

Algorithm 2 CRISS training
1: Input: (1) monolingual data from N languages {D_n}, n = 1..N, (2) a pretrained mBART model Ψ, (3) total number of iterations T
2: Initialize Θ ← Ψ, t = 0
3: while t < T do
4:   for every language pair (i, j) with i ≠ j do
5:     D′_{i,j} ← MINE(Θ, D_i, D_j)   (Algorithm 1)
6:   end for
7:   Θ ← MultilingualTrain(Ψ, {D′_{i,j} | i ≠ j})   (Note: train from the initial mBART model Ψ)
8:   t ← t + 1
9: end while

In this section, we make use of the language-agnostic representation from mBART described in Section 3 to mine parallel data from a collection of monolingual data. The mined parallel data is then used to further improve the cross-lingual alignment of the pretrained model. The mining and improving processes are repeated multiple times to improve performance on retrieval and on unsupervised and supervised multilingual translation tasks, as depicted in Figure 3, using Algorithm 2.

Unsupervised parallel data mining To mine parallel data without supervised signals, we employ the margin function formulation [5, 6, 40] based on K nearest neighbors (KNN) to score and rank pairs of sentences from any two languages.
Let x and y be the vector representations of two sentences in languages i and j respectively. We score the semantic similarity of x and y using a ratio margin function [5] defined as follows:

score(x, y) = cos(x, y) / ( Σ_{z ∈ N_x} cos(x, z) / (2k) + Σ_{z ∈ N_y} cos(z, y) / (2k) ),   (2)

where N_x is the KNN neighborhood of x in the monolingual dataset of y's language, and N_y is the KNN neighborhood of y in the monolingual dataset of x's language. The margin scoring function can be interpreted as a cosine score normalized by the average distances (broadly defined) to the margin regions established by the cross-lingual KNN neighborhoods of the source and target sentences, respectively. The KNN distance metric is defined by cos(x, y). We use FAISS [19], a distributed dense vector similarity search library, to search all neighborhoods simultaneously and efficiently at billion scale. Thus we obtain the mining procedure of Algorithm 1. We find that, in order to train a multilingual model that can translate between N languages, we do not need to mine training data for all N × (N − 1) language pairs, but only for a subset of them. This can be explained by our earlier finding that finetuning on one pair of parallel data helps improve the cross-lingual alignment between every language pair.

Iteratively mining and multilingual training Building on the unsupervised mining capability of the pretrained model (Algorithm 1), we can conduct an iterative mining and multilingual training procedure (Algorithm 2) to improve the pretrained models for both mining and downstream tasks. In Algorithm 2, we repeat mining and multilingual training T times. For multilingual training, we simply augment each mined pair (x, y) of sentences by adding a target-language token at the beginning of y to form a target-language-token-augmented pair (x, y′). We then aggregate all mined pairs {(x, y′)} across the mined language pairs into a single dataset to train a standard seq2seq machine translation transformer model [44] from the pretrained mBART model. To avoid overfitting to the noisy mined data, at each iteration we always retrain from the original monolingual-trained mBART model and only update the mined dataset using the improved model (line 7 of Algorithm 2).

5 Experiment Evaluation

We pretrained an mBART model on a Common Crawl dataset constrained to the 25 languages used in [27], for which we have evaluation data. We also employ the same model architecture and the same BPE pre-processing, and add the same set of language tokens as in [27]. We keep the same BPE vocabulary and the same model architecture throughout pretraining and downstream tasks. For mining, we use the same text extraction process as described in [40] to obtain the monolingual data to mine from, which is a curated version of the Common Crawl corpus. We refer the reader to [40, 47] for a detailed description of how the monolingual data are preprocessed. For faster mining, we subsample the resulting Common Crawl data to 100 million sentences in each language. For low-resource languages, we may have fewer than 100 million monolingual sentences. The statistics of the monolingual data used are included in the supplementary materials. We set k = 5 for the KNN neighborhood retrieval in the margin scoring function (Equation 2). In each iteration, we tune the margin score threshold based on validation BLEU on a sampled validation set of size 2,000. The sizes of the mined bi-text in each iteration are included in the supplementary materials.
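Before turning to the specific mining configuration, the sketch below illustrates the scoring and filtering steps of Algorithm 1 and the margin criterion of Equation 2, including the target-language-token augmentation used for multilingual training. It assumes L2-normalized embeddings (so inner product equals cosine similarity); the function names, the example threshold, and the example language token are illustrative rather than taken from the released implementation, and for brevity it scores all pairs instead of only each sentence's KNN candidates as one would at scale.

import numpy as np
import faiss  # exact inner-product search; equals cosine search for L2-normalized vectors

def knn_mean_sim(queries, keys, k):
    # Mean cosine similarity from each query to its k nearest neighbors among the keys.
    index = faiss.IndexFlatIP(keys.shape[1])
    index.add(keys.astype(np.float32))
    sims, _ = index.search(queries.astype(np.float32), k)
    return sims.mean(axis=1)

def margin_scores(x_emb, y_emb, k=5):
    # Ratio margin score of Equation 2 for every (x, y) pair of two monolingual corpora.
    x_margin = knn_mean_sim(x_emb, y_emb, k)   # x's neighborhood N_x in language j
    y_margin = knn_mean_sim(y_emb, x_emb, k)   # y's neighborhood N_y in language i
    cos = x_emb @ y_emb.T                      # pairwise cosine similarities
    return cos / (0.5 * (x_margin[:, None] + y_margin[None, :]))

def mine_pairs(x_sents, y_sents, x_emb, y_emb, k=5, threshold=1.05, lang_token="<lang_j>"):
    # Keep pairs whose margin score clears the threshold and prepend the target-language token.
    scores = margin_scores(x_emb, y_emb, k)
    keep = np.argwhere(scores >= threshold)
    return [(x_sents[i], lang_token + " " + y_sents[j]) for i, j in keep]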
Our default configuration mines sentences to and from English, Hindi, Spanish, and Chinese, for a total of 90 language pairs (180 language directions) instead of all 300 language pairs (600 directions) for the 25 languages. With the mined parallel data for these 180 directions, we then train the multilingual transformer model for a maximum of 20,000 steps using the label-smoothed cross-entropy loss, as described in Algorithm 2. We sweep for the best maximum learning rate using validation BLEU. After pretraining, the same model is evaluated on three tasks: sentence retrieval, unsupervised machine translation, and supervised machine translation. For supervised machine translation, we use the CRISS model to initialize the weights of the models trained on the supervised task.

5.1 Unsupervised Machine Translation

We evaluate CRISS on unsupervised neural machine translation benchmarks that cover both low-resource and high-resource language directions. For English-French we use WMT'14, for English-German and English-Romanian we use WMT'16 test data, and for English-Nepali and English-Sinhala we use the Flores test set [16]. For decoding in both the unsupervised and supervised machine translation tasks, we use beam search with beam size 5, and report the final results in BLEU [31]. To be consistent with previous literature, we used multi-bleu.pl for evaluation.4

4 https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/multi-bleu.perl

As shown in Table 1, on these unsupervised benchmarks the CRISS model outperforms the state of the art in 9 out of 10 language directions. Our approach works well on dissimilar language pairs, achieving 14.4 BLEU on Nepali-English (a 4.4 BLEU improvement over the previous method) and 13.6 BLEU on Sinhala-English (a 5.4 BLEU improvement over the previous method). On similar language pairs, we also improve Romanian-English from 33.6 to 37.6 BLEU and German-English from 35.5 to 37.1 BLEU. We also report translation quality for other language pairs that do not have a previous benchmark in the supplementary materials.

5.2 Tatoeba: Similarity Retrieval

We use the Tatoeba dataset [6] to evaluate the cross-lingual alignment quality of the CRISS model, following the evaluation procedure specified in the XTREME benchmark [18]. As shown in Table 2, compared to other pretraining approaches that do not use parallel data, CRISS outperforms the state of the art by a large margin, improving all 16 languages by an average of 21.5% in absolute accuracy. Our approach even beats the state-of-the-art supervised approach [6] on Kazakh, improving accuracy from 17.39% to 77.9%. This shows the potential of our work to improve translation for language pairs with little labeled parallel training data.

5.3 Supervised Machine Translation

For the supervised machine translation task, we use the same benchmark data as in mBART [27]. We finetune models learned from CRISS iterations 1, 2, and 3 on the supervised training data of each bilingual direction. For all directions, we use a 0.3 dropout rate, 0.2 label smoothing, 2,500 learning-rate warm-up steps, and a 3e-5 maximum learning rate. We use a maximum of 40K training steps, and final models are selected based on the best validation loss. As shown in Table 3, CRISS improved upon mBART on low-resource directions such as Gujarati-English (17.7 BLEU improvement), Kazakh-English (5.9 BLEU improvement), and Nepali-English (4.5 BLEU improvement). Overall, we improved 26 out of 34 directions, with an average improvement of 1.8 BLEU.
6 Ablation Studies

We conduct ablation studies to understand the key ingredients of our method: the pretrained language model, the performance implications of bilingual training versus multilingual training, and the number of pivot languages used to mine parallel data.

6.1 Starting from bilingual pretrained models

To study the benefits of the iterative mining-training procedure on a single language pair (ignoring the effects of multilingual data), we run the CRISS procedure on the English-Romanian language pair. We start with the mBART02 checkpoint trained on English-Romanian monolingual data [27], and apply the CRISS procedure for two iterations. As shown in Tables 4 and 5, the CRISS procedure does work on a single language pair, improving both unsupervised machine translation quality and sentence retrieval accuracy over time. Moreover, the sentence retrieval accuracy of bilingual CRISS-EnRo is higher than that of CRISS25, but the unsupervised machine translation results for CRISS25 are higher. We hypothesize that CRISS-EnRo can mine higher-quality English-Romanian pseudo-parallel data, since the encoder can specialize in representing these two languages. However, CRISS25 can utilize more pseudo-parallel data from other directions, which helps it achieve better machine translation results.

Table 4: Unsupervised machine translation results for CRISS starting from bilingual pretrained models
Model               en-ro   ro-en
CRISS-EnRo Iter 1   30.1    32.2
CRISS-EnRo Iter 2   33.9    35.0
CRISS25 Iter 1      24.9    27.9
CRISS25 Iter 2      34.1    36.5

Table 5: Sentence retrieval on the WMT'16 English-Romanian test set
Model               Retrieval Accuracy
CRISS-EnRo Iter 1   98.3
CRISS-EnRo Iter 2   98.6
CRISS25 Iter 1      97.8
CRISS25 Iter 2      98.5

6.2 Multilingual Training versus Bilingual Training

Multilingual training is known to help improve translation quality for low-resource language directions. Since we only use mined pseudo-parallel data at the finetuning step, every language direction is essentially low-resource. We ran experiments to confirm that finetuning multilingually helps translation quality in every direction. Here, we use mBART25 encoder outputs to mine parallel data for 24 from-English and 24 to-English directions. For the bilingual configuration, we finetune 48 separate models using only the bilingual pseudo-parallel data of each direction. For the multilingual configuration, we combine the data from all 48 directions and finetune a single multilingual model. As we can see in Figures 4 and 5, multilingual finetuning performs better in general5 and particularly on to-English directions.

5 The en-cs direction is an outlier which is worth further investigation.

6.3 Number of Pivot Languages

In this section, we explore how choosing the number of language directions to mine and finetune on affects model performance. [6] found that using two target languages was enough to make the sentence embedding space aligned. Unlike their work, which requires parallel training data, we are not limited by the amount of labeled data and can mine for pseudo-parallel data in every language direction. However, our experiments show that there is limited performance gain from using more than 2 pivot languages. We report the translation quality and sentence retrieval accuracy of first-iteration
CRISS trained with 1 pivot language (English), 2 pivot languages (English, Spanish), and 4 pivot languages (English, Spanish, Hindi, Chinese).

Figure 4: Bilingual versus Multilingual: x-En MT.
Figure 5: Bilingual versus Multilingual: En-x MT.
Figure 6: Pivot languages ablation: x-En MT.
Figure 7: Pivot languages ablation: En-x MT.

Note that the embedding space is already well aligned even with 1 pivot language, because of the emergent alignment in multilingual denoising autoencoder training and because we jointly train the from-English and to-English directions in the same model. We observe that the average translation quality and sentence retrieval accuracy improve slightly as we increase the number of pivot languages. Since using more pivot languages increases the computational cost of the mining stage linearly, we used 4 pivot languages in the final version.

7 Conclusion

We introduced a new self-supervised training approach that iteratively combines mining and multilingual training procedures to achieve state-of-the-art performance in unsupervised machine translation and sentence retrieval. The proposed approach achieved these results even though we artificially limited the amount of unlabeled data we used. Future work should explore (1) a thorough analysis and theoretical understanding of how the language-agnostic representation arises from denoising pretraining, (2) whether the same approach can be extended to pretrain models for non-seq2seq applications, e.g. unsupervised structural discovery and alignment, and (3) whether and how the learned cross-lingual representation can be applied to other NLP and non-NLP tasks.

Broader Impact

Our work advances the state of the art in unsupervised machine translation. For languages where labelled parallel data is hard to obtain, training methods that better utilize unlabeled data are key to unlocking better translation quality. This technique contributes toward the goal of removing language barriers across the world, particularly for communities speaking low-resource languages. However, that goal is still far from being achieved, and more effort from the community is needed to get there. One common pitfall of mining-based techniques in machine translation systems, however, is that they tend to retrieve similar-but-not-exact matches.
For example, since the terms "US" and "Canada" tend to appear in similar contexts, their token embeddings can end up close to each other; at the mining stage the model could then retrieve "I want to live in the US" as the translation instead of "I want to live in Canada". If the translation model is overfitted to such mined data, it could repeat the same mistake. We advise practitioners who apply mining-based techniques in production translation systems to be aware of this issue. More broadly, the monolingual pretraining method can be heavily influenced by the crawled data. We will need to carefully study the properties of the trained models and how they respond to data bias such as profanity. In general, we need to further study historical biases and possible malicious data pollution attacks in the crawled data to avoid undesired behaviors of the learned models.

Acknowledgments and Disclosure of Funding

We thank Angela Fan, Vishrav Chaudhary, Don Husa, Alexis Conneau, and Veselin Stoyanov for their valuable and constructive suggestions during the planning and development of this research work.
1. What is the focus and contribution of the paper in machine translation and sentence retrieval? 2. What are the strengths of the proposed approach, particularly in its experimental results? 3. What are the weaknesses of the paper, especially in its writing quality and ablation study?
Summary and Contributions Strengths Weaknesses
Summary and Contributions The paper produces convincing results in supervised and unsupervised machine translation and sentence retrieval through an approach with cross-lingual retrieval for iterative self-supervision, whereby mining and training are applied iteratively. Strengths 1. Convincing results across three tasks. 2. Mostly thorough experimental exposition. 3. Nice case study and model description. Weaknesses 1. The writeup needs to be improved, e.g. multiple singular-plural mismatches. 2. The ablation study with pivot languages is not conclusive. Why is the 4-pivots curve so jittered?
NIPS
Title Cross-lingual Retrieval for Iterative Self-Supervised Training

Abstract Recent studies have demonstrated the cross-lingual alignment ability of multilingual pretrained language models. In this work, we found that the cross-lingual alignment can be further improved by training seq2seq models on sentence pairs mined using their own encoder outputs. We utilized these findings to develop a new approach, cross-lingual retrieval for iterative self-supervised training (CRISS), in which mining and training processes are applied iteratively, improving cross-lingual alignment and translation ability at the same time. Using this method, we achieved state-of-the-art unsupervised machine translation results on 9 language directions with an average improvement of 2.4 BLEU, and on the Tatoeba sentence retrieval task in the XTREME benchmark on 16 languages with an average improvement of 21.5% in absolute accuracy. Furthermore, CRISS also brings an additional 1.8 BLEU improvement on average compared to mBART when finetuned on supervised machine translation downstream tasks. Our code and pretrained models are publicly available.1

1 https://github.com/pytorch/fairseq/blob/master/examples/criss
34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.

1 Introduction

Pretraining has demonstrated success in various natural language processing (NLP) tasks. In particular, self-supervised pretraining can learn useful representations by training with pretext tasks, such as cloze and masked language modeling, denoising autoencoding, etc., on large amounts of unlabelled data [11, 25, 28, 32, 35, 51]. Such learned "universal representations" can be finetuned on task-specific training data to achieve good performance on downstream tasks. More recently, new pretraining techniques have been developed in multilingual settings, pushing the state of the art on cross-lingual understanding and machine translation. Since access to labeled parallel data is very limited, especially for low-resource languages, better pretraining techniques that utilize unlabeled data are the key to unlocking better machine translation performance [9, 27, 42]. In this work, we propose a novel self-supervised pretraining method for multilingual sequence generation: Cross-lingual Retrieval for Iterative Self-Supervised training (CRISS). CRISS is developed based on the finding that the encoder outputs of a multilingual denoising autoencoder can be used as language-agnostic representations to retrieve parallel sentence pairs, and that training the model on these retrieved sentence pairs can further improve its sentence retrieval and translation capabilities in an iterative manner. Using only unlabeled data from many different languages, CRISS iteratively mines for parallel sentences across languages, trains a new, better multilingual model using these mined sentence pairs, mines again for better parallel sentences, and repeats. In summary, we present the following contributions in this paper:
• We present empirical results that show the encoder outputs of a multilingual denoising autoencoder (mBART) represent language-agnostic semantic meaning.
• We present empirical results that show finetuning mBART on parallel bi-text from only one language pair improves cross-lingual alignment for all language directions.
• We introduce a new iterative self-supervised learning method that combines mining and multilingual training to improve both tasks after each iteration.
1. What is the focus and contribution of the paper on multilingual machine translation? 2. What are the strengths of the proposed approach, particularly in terms of leveraging unsupervised parallel data mining and cross-lingual sentence retrieval experiments? 3. What are the weaknesses of the paper regarding its methodology and limitations? 4. How does the reviewer assess the novelty and significance of the work compared to prior studies in the field? 5. Are there any questions or concerns regarding the applicability and scalability of the proposed method to other pretrained models, bilingual setups, and low-resource languages?
Summary and Contributions Strengths Weaknesses
Summary and Contributions ======= Update after the author response ======= I would like to thank the authors for the time they took to provide answers and clarifications to all my questions - and there were quite a few questions there! Overall, if the authors incorporate these responses and clarifications into the revised paper, that would make the paper much stronger, so I would strongly suggest the authors to do so. After checking the author feedback, I am more convinced that this paper does make a solid contribution to the field of unsupervised and weakly supervised MT and cross-lingual applications, and the insight on improving bilingual sentence mining in an unseen language pair by relying on sentences from another language pair is particularly interesting. All in all, I have adjusted my scores according to this reassessment of the work. In this work, the authors propose an iterative method on top of the mBART model pretrained on 25 languages. The main idea of the work is quite simple: use a module for mining additional pseudo-parallel corpora from monolingual data to continue training and tuning the initial mBART model on the pseudo-parallel corpora. The pool of pseudo-parallel corpora is augmented and refined over time which consequently leads to improvements in the mBART-based MT model. The actual "twist" is that everything is done with mBART: the unsupervised parallel data mining module is also based on fine-tuned mBART, which keeps improving both mining and MT iteratively. One interesting aspect, empirically verified in the work and which motivated the whole idea is that supplementing the model with parallel data in only a (reasonably small) subset of language pairs yields improved performance also to "unseen" language pairs. This implies that some inter-language structural similarity is implicitly leveraged by the model, and I find this a very interesting empirical finding. The initial mBART itself is a very strong starting point, and the authors show that applying the idea of iterative self-supervised training to mBART can yield even stronger performance and state-of-the-art performance almost across the board in a range of MT setups. As a sanity check, prior to MT evaluation, the authors also show the boosts in cross-lingual sentence retrieval experiments, which is a quantitative validation of the previously mentioned phenomenon of structural similarity in multilingual models such as mBART. Overall, while the results are quite strong (which can be very much credited to mBART and large-scale pretraining), the idea of iterative self-learning on similar models has been tried out before, and I do not find the paper very novel methodologically. For instance, a recent work of Artetxe et al. (https://www.aclweb.org/anthology/P19-1494.pdf) applied a very similar idea in the context of learning iteratively better and better static cross-lingual word embeddings from an unsupervised NMT model which can then, iteratively, improve phrase tables of the NMT models and yield a better NMT model. Conceptually, this is exactly the same idea as the one presented in this work (while the previous work has not been credited). Strengths - A simple and effective iterative self-learning idea which combines iteratively improving pseudo-parallel corpora mining with iteratively improving NMT, based on the state-of-the-art mBART model. - Very strong results in sentence retrieval and MT experiments in a range of setups (both unsupervised and supervised ones). 
- An interesting and empirically verified observation that fine-tuning mBART on only one language pair (i.e., their corresponding parallel corpora) can improve retrieval performance also for other (unseen) language pairs. - The paper is very easy to read and follow. Weaknesses - The presented iterative framework is mostly reapplying the existing ideas on pseudo-parallel sentence mining to the latest and exciting multilingual model: mBART. Therefore, methodologically, the paper does not bring anything new, and seems a bit as teaching a new dog an old trick. - The paper still lacks a lot of meaningful analysis: (i) decomposition of factors (scale of pretraining [data, languages], # sentences / language, interaction effects) required to achieve transfer capability. - In theory, the same idea could be applied to other prominent pretrained multilingual models such XLM-R, MASS, etc. The paper does not provide any insight how much of the performance gains is due to a strong starting point, and how much is due to the proposed iterative scheme per se. What would be the scores of CRISS-style tuning applied on MASS or XLM-R? Overall, the paper should provide additional experiments and insights that would profile the main properties of the method and indicate if the same methodology can be directly applied to other initial pretrained models, to bilingual setups, and with other pseudo-parallel mining mechanisms. - The method inherits the limitations of the initial mBART model: it can be successfully applied only to the 25 languages covered by mBART-25. What about all other languages? The paper should discuss how to adapt this to other truly low-resource languages - the problem is far from solved, and the paper, although claiming it in its 'broad impact statement', does not make a step forward here, due to the inherited limitations of mBART. Disclaimer: This is not a weakness in the strict sense, but rather a subjective opinion of this reviewer: I feel like I haven't learned much from this paper (and this sentiment holds after rereading it multiple times): it is mostly the application of an already existing idea to a new strong model with strong results, and as such it doesn't offer any truly novel insight or exceptional result (except for the expected SOTA performance, given that it starts from a strong model and simply improves over it with data augmentation and additional fine-tuning). Therefore, I see this more as an engineering contribution rather than as a true scientific contribution required for a conference such as NeurIPS.
NIPS
Title Generalization of Reinforcement Learners with Working and Episodic Memory

Abstract Memory is an important aspect of intelligence and plays a role in many deep reinforcement learning models. However, little progress has been made in understanding when specific memory systems help more than others and how well they generalize. The field also has yet to see a prevalent, consistent, and rigorous approach for evaluating agent performance on holdout data. In this paper, we aim to develop a comprehensive methodology to test different kinds of memory in an agent and assess how well the agent can apply what it learns in training to a holdout set that differs from the training set along dimensions that we suggest are relevant for evaluating memory-specific generalization. To that end, we first construct a diverse set of memory tasks1 that allow us to evaluate test-time generalization across multiple dimensions. Second, we develop and perform multiple ablations on an agent architecture that combines multiple memory systems, compare it against baseline models, and investigate its performance on the task suite.

1 https://github.com/deepmind/dm_memorytasks. Videos available at https://sites.google.com/view/memorytasks-suite
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

1 Introduction

Humans use memory to reason, imagine, plan, and learn. Memory is a foundational component of intelligence, and enables information from past events and contexts to inform decision-making in the present and future. Recently, agents that utilize memory systems have advanced the state of the art in various research areas including reasoning, planning, program execution, and navigation, among others (Graves et al., 2016; Zambaldi et al., 2018; Santoro et al., 2018; Banino et al., 2018; Vaswani et al., 2017; Sukhbaatar et al., 2015). Memory has many aspects, and having access to different kinds allows intelligent organisms to bring the most relevant past information to bear on different sets of circumstances. In cognitive psychology and neuroscience, two commonly studied types of memory are working and episodic memory. Working memory (Miyake and Shah, 1999) is a short-term temporary store with limited capacity. In contrast, episodic memory (Tulving and Murray, 1985) is typically a larger autobiographical database of experience (e.g. recalling a meal eaten last month) that lets one store information over a longer time scale and compile sequences of events into episodes (Tulving, 2002). Episodic memory has been shown to help reinforcement learning agents adapt more quickly and thereby boost data efficiency (Blundell et al., 2016; Pritzel et al., 2017; Hansen et al., 2018). More recently, Ritter et al. (2018) show how episodic memory can be used to provide agents with context-switching abilities in contextual bandit problems. The transformer (Vaswani et al., 2017) can be viewed as a hybrid of working memory and episodic memory that has been successfully applied to many supervised learning problems. In this work, we explore adding such memory systems to agents and propose a consistent and rigorous approach for evaluating whether an agent demonstrates generalization-enabling memory capabilities similar to those seen in animals and humans.
One fundamental principle in machine learning is to train on one set of data and test on an unseen holdout set, but it has to date been common in reinforcement learning to evaluate agent performance solely on the training set which is suboptimal for testing generalization (Pineau, 2018). Also, though advances have recently been made on evaluating generalization in reinforcement learning (Cobbe et al., 2018) these have not been specific to memory. Our approach is to construct a train-holdout split where the holdout set differs from the training set along axes that we propose are relevant specifically to memory, i.e. the scale of the task and precise objects used in the task environments. For instance, if an agent learns in training to travel to an apple placed in a room, altering the room size or apple color as part of a generalization test should ideally not throw it off. We propose a set of environments that possess such a split and test different aspects of working and episodic memory, to help us better understand when different kinds of memory systems are most helpful and identify memory architectures in agents with memory abilities that cognitive scientists and psychologists have observed in humans. Alongside these tasks, we develop a benchmark memory-based agent, the Memory Recall Agent (MRA), that brings together previously developed systems thought to mimic working memory and episodic memory. This combination of a controller that models working memory, an external episodic memory, and an architecture that encourages long-term representational credit assignment via an auxiliary unsupervised loss and backpropagation through time that can ‘jump’ over several time-steps obtains better performance than baselines across the suite. In particular, episodic memory and learning good representations both prove crucial and in some cases stack synergistically. To summarize, our contribution is to: • Introduce a suite of tasks that require an agent to utilize fundamental functional properties of memory in order to solve in a way that generalizes to holdout data. • Develop an agent architecture that explicitly models the operation of memory by integrating components that functionally mimic humans’ episodic and working memory. • Show that different components of our agent’s memory have different effectiveness in training and in generalizing to holdout sets. • Show that none of the models fully generalize outside of the train set on the more challenging tasks, and that the extrapolation incurs a greater level of degradation. 2 Task suite overview We define a suite of 13 tasks designed to test different aspects of memory, with train-test splits that test for generalization across multiple dimensions (https://github.com/deepmind/dm_memorytasks). These include cognitive psychology tasks adapted from PsychLab (Leibo et al., 2018) and DMLab (Beattie et al., 2016), and new tasks built with the Unity 3D game engine (uni) that require the agent to 1) spot the difference between two scenes; 2) remember the location of a goal and navigate to it; or 3) infer an indirect transitive relation between objects. Videos with task descriptions are at https://sites.google.com/view/memory-tasks-suite. 2.1 PsychLab Four tasks in the Memory Tasks Suite use the PsychLab environment (Leibo et al., 2018), which simulates a psychology laboratory in first-person. The agent is presented with a set of one or multiple consecutive images, where each set is called a ‘trial’. Each episode has multiple trials.
In Arbitrary Visuomotor Mapping (AVM) a series of objects is presented, each with an associated look-direction (e.g. up, left). The agent is rewarded if it looks in the associated direction the next time it sees a given object in the episode (Fig 8(a) in App. B). Continuous Recognition presents a series of images with rewards given for correctly indicating whether an image has been previously shown in the episode (Fig 8(b) in App. B). In Change Detection the agent sees two consecutive images, separated by a variable-length delay, and has to correctly indicate if the two images differ (Fig 8(c) in App. B). In What Then Where the agent is shown a single ‘challenge’ MNIST digit, then an image of that digit with three other digits, each placed along an edge of the rectangular screen. It next has to correctly indicate the location of the ‘challenge’ digit (Fig 8(d) in App. B). 2.2 3D tasks Spot the Difference: This tests whether the agent can correctly identify the difference between two nearly identical scenes (Figure 1(a)). The agent has to move from the first to the second room, with a ‘delay’ corridor in between. See Fig. 2 for the four different variants. Figure 2: Spot the Difference tasks, with panels (a) Basic, (b) Passive, (c) Multi-Object and (d) Motion. (a) All the tasks in this family are variants of this basic setup, where each room contains two blocks. (b) By placing Room 1’s blocks right next to the corridor entrance, we guarantee that the agent will always see them. (c) The number of objects varies. (d) Instead of differing in color between rooms, the altered block follows a different motion pattern. Goal Navigation: This task family was inspired by the Morris Watermaze (Miyake and Shah, 1999) setup used with rodents in behavioral neuroscience. The agent is rewarded every time it successfully reaches the goal; once it gets there it is respawned randomly in the arena and has to find its way back to the goal. The goal location is re-randomized at the start of each episode (Fig. 1(b), Fig. 3). Transitive Inference: This task tests if an agent can learn an overall transitive ordering over a chain of objects, through being presented with ordered pairs of adjacent objects (See Fig. 1(c) and App. B). 2.3 Scale and Stimulus Split To test how well the agent can generalize to holdout data after training, we create per-task holdout levels that differ from the training level along a scale and a stimulus dimension. The scale dimension is intended to capture something about the memory demand of the task: e.g., a task with a longer time delay between events that must be related should be harder than one with a short delay. The stimulus dimension is to guard against trivial overfitting to the particular visual input presented to the agent: the memory representation should be more abstract than the particular colour of an object. The training level comprises a ‘small’ and ‘large’ scale version of the task. When training the agent we uniformly sample between these two scales. As for the holdout levels, one of them – ‘holdout-interpolate’ – corresponds to an interpolation between those two scales (call it ‘medium’) and the other, ‘holdout-extrapolate’, corresponds to an extrapolation beyond the ‘large’ scale (call it ‘extra-large’). Alterations made for each task split and their settings are in Table 2 in App. A.
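The scale split above is essentially small per-task configuration. The sketch below, in Python with purely hypothetical task names and scale values (the actual per-task settings are listed in Table 2 of App. A), illustrates how such a train/holdout scale split could be encoded and sampled.

```python
# A minimal sketch of a per-task scale split; names and values are hypothetical
# illustrations, not the paper's actual settings (see Table 2 in App. A).
import random

# Example: hypothetical delay durations (in steps) for a Change Detection-style task.
SCALE_LEVELS = {
    "train_small": 16,
    "train_large": 64,
    "holdout_interpolate": 32,    # "medium": between the two training scales
    "holdout_extrapolate": 128,   # "extra-large": beyond the "large" training scale
}

def sample_training_scale() -> int:
    """Training uniformly samples between the 'small' and 'large' scales."""
    return random.choice([SCALE_LEVELS["train_small"], SCALE_LEVELS["train_large"]])

def holdout_scale(kind: str) -> int:
    """kind is either 'holdout_interpolate' or 'holdout_extrapolate'."""
    return SCALE_LEVELS[kind]
```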
3 The Memory Recall Agent Our agent, the Memory Recall Agent (MRA), incorporates five components: 1) a pixel-input convolutional, residual network, 2) a working memory, 3) a slot-based episodic memory, 4) an auxiliary contrastive loss for representation learning (van den Oord et al., 2018), 5) a jumpy backpropagation-through-time training regime. Our agent architecture is shown in Figure 4(a). The overall agent is built on top of the IMPALA model (Espeholt et al., 2018) and is trained in the same way with the exceptions described below. Component descriptions are below. Pixel Input Pixel input is fed to a convolutional neural network, as is common in recent agents, followed by a residual block (He et al., 2015). The precise hyper-parameters are given in C.2: we use three convolutional layers followed by two residual layers. The output of this process is xt in Figure 4(a) and serves as input to three other parts of the network: 1) part of the input to the working memory module, 2) in the formation of keys and queries for the episodic memory, 3) as part of the target for the contrastive predictive coding. Working Memory Working memory is often realized through latent recurrent neural networks (RNNs) with some form of gating, such as LSTMs and Relational Memory architectures (Hochreiter and Schmidhuber, 1997; Santoro et al., 2018). These working memory models calculate the next set of hidden units using the current input and the previous hidden units. Although models which rely on working memory can perform well on a variety of problems, their ability to tackle dependencies and represent variables over long time periods is limited. The short-term nature of working memory is pragmatically, and perhaps unintentionally, reflected in the use of truncated backprop through time and the tendency for gradients through these RNNs to explode or vanish. Our agent uses an LSTM as a model of working memory. As we shall see in experiments, this module is able to perform working memory–like operations on tasks: i.e., learn calculations involving short-term memory. As depicted in Figure 4(a), the LSTM takes as input xt from the pixel input network and mt from the episodic memory module. As in Espeholt et al. (2018), the LSTM has two heads as output, producing the policy π and the baseline value function V. In our architecture these are derived from the output from the LSTM, ht. ht is also used to form episodic memories, as described below. Episodic Memory (MEM) If our agent only consisted of the working memory and pixel input described above, it would be almost identical to the model in IMPALA (Espeholt et al., 2018), an already powerful RL agent. But MRA also includes a slot-based episodic memory module as that can store values more reliably and longer-term than an LSTM, is less susceptible to the intricacies of gradient propagation, and its fundamental operations afford the agent different abilities (as observed in our experiments). The MEM in MRA has a key-value structure which the agent reads from and writes to at every time-step (see Fig. 4(a)). MRA implements a mechanism to learn how to store summaries of past experiences and retrieve relevant information when it encounters similar contexts. The reads from memory are used as additional inputs to the neural network (controller), which produces the model predictions.
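To make the wiring of these components concrete, the following is a minimal sketch of the controller step described above, written in PyTorch with illustrative layer sizes (all hyper-parameters here are assumptions, not the paper's): the pixel embedding xt and the episodic-memory read mt are concatenated and fed to the LSTM working memory, whose output ht drives the policy and value heads.

```python
# A simplified sketch of the MRA controller wiring: x_t (pixel embedding) and
# m_t (episodic-memory read) feed an LSTM working memory; its output h_t drives
# the policy and baseline value heads. Sizes and action count are assumptions.
import torch
import torch.nn as nn

class MRAController(nn.Module):
    def __init__(self, embed_dim=256, mem_dim=256, hidden_dim=256, num_actions=6):
        super().__init__()
        self.lstm = nn.LSTMCell(embed_dim + mem_dim, hidden_dim)
        self.policy_head = nn.Linear(hidden_dim, num_actions)  # logits for pi
        self.value_head = nn.Linear(hidden_dim, 1)              # baseline V

    def forward(self, x_t, m_t, state):
        h_prev, c_prev = state
        h_t, c_t = self.lstm(torch.cat([x_t, m_t], dim=-1), (h_prev, c_prev))
        return self.policy_head(h_t), self.value_head(h_t), (h_t, c_t)
```

In this reading, h_t is also what gets written into the episodic memory, as described next.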
This effectively augments the controller’s working memory capabilities with experiences from different time scales retrieved from the MEM, which facilitate learning long-term dependencies, a difficult task when relying entirely on backpropagation in recurrent architectures (Hochreiter and Schmidhuber, 1997; Graves et al., 2016; Vaswani et al., 2017). The MEM has a number of slots, indexed by i. Each slot stores activations from the pixel input network and LSTM from previous times ti in the past. The MEM acts as a fixed-size circular (first-in-first-out) buffer: New keys and values are added, overwriting the least recently added entry if there are no unused slots available. The contents of the episodic memory buffer are wiped at the end of each episode. Memory Writing Crucially, writing to episodic memory is done without gradients. At each step a free slot is chosen for writing, denoted i. Next, the following is stored: pi ← xt; vi ← ht; ki ← Wk[pi, vi] + bk (1) where pi is the pixel input embedding from step t and vi is the LSTM hidden state (if the working memory is something else, e.g. a feedforward network, this would be the output activations). ki is the key, used for reading (described below), computed as a simple linear function of the other two values stored. Caching the key speeds up memory reads significantly. However, the key can become stale as the weights and biases Wk and bk are learnt (the procedure for learning them is described below under Jumpy Backpropagation). In our experiments we did not see an adverse effect of this staleness. Memory Reading The agent uses a form of dot-product attention (Bahdanau et al., 2015) over its MEM, to select the most relevant events to provide as input mt to the LSTM. The query qt is a linear transform of the pixel input embedding xt and the LSTM hidden state from the previous time-step ht−1, with weight Wq and bias bq: qt = Wq[xt, ht−1] + bq (2) The query qt is then compared against the keys in MEM as in Pritzel et al. (2017): Let (pj, vj, kj), 1 ≤ j ≤ K, be the K nearest neighbors to qt from MEM, under an L2 norm between kj and qt. Then mt = Σ_{j=1}^{K} wj vj, where wj ∝ 1 / (ε + ||qt − Wk[pj, vj] − bk||₂²) (3) We compute a weighted aggregate of the values (vj) of the K nearest neighbors, weighted by the inverse of each neighbor-key’s distance to the query. Note that the distance is re-calculated from values stored in the MEM, via the linear projection Wk, bk in (1). We concatenate the resulting weighted aggregate memory mt with the embedded pixel input xt, and pass it as input to the working memory as shown in Figure 4(a). Jumpy backpropagation We now turn to how gradients flow into memory writes. Full backpropagation can become computationally infeasible as this would require backpropagation into every write that is read from, and so on. Thus as a new (pi, vi, ki)-triplet is added to the MEM, there are trade-offs to be made regarding computational complexity versus performance of the agent. To make it more computationally tractable, we place a stop-gradient in the memory write. In particular, the write operation for the key in (1) becomes: ki ← Wk[SG(pi), SG(vi)] + bk (4) where SG(·) denotes that the gradients are stopped. This allows the parameters Wk and bk to receive gradients from the loss during writing and reading, while at the same time bounding the computational complexity as the gradients do not flow back into the recurrent working memory (or via that back into the MEM).
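A short sketch may help tie Eqs. (1)-(4) together. The PyTorch code below is an illustrative, unbatched rendering under assumed dimensions (it is not the authors' implementation): writes store detached activations plus a cached key for fast lookup, and reads select the K nearest cached keys to the query, recompute those keys through the learnt projection Wk, bk (the path that carries the 'jumpy' gradients), and weight the stored values by inverse squared distance.

```python
# Illustrative, unbatched sketch of the slot-based episodic memory of Eqs. (1)-(4).
# Dimensions, K and epsilon are assumptions; batching and efficiency are ignored.
import torch
import torch.nn as nn

class EpisodicMemory(nn.Module):
    def __init__(self, embed_dim=256, hidden_dim=256, key_dim=128,
                 num_slots=1024, k=8, eps=1e-3):
        super().__init__()
        self.key_proj = nn.Linear(embed_dim + hidden_dim, key_dim)    # W_k, b_k
        self.query_proj = nn.Linear(embed_dim + hidden_dim, key_dim)  # W_q, b_q
        self.p, self.v, self.keys = [], [], []  # slot contents (p_i, v_i, cached k_i)
        self.num_slots, self.k, self.eps, self.hidden_dim = num_slots, k, eps, hidden_dim

    def write(self, x_t, h_t):
        # Eq. (4): stored activations are detached (stop-gradient), so only
        # W_k, b_k ever receive gradients through the memory.
        p_i, v_i = x_t.detach(), h_t.detach()
        k_i = self.key_proj(torch.cat([p_i, v_i], dim=-1))
        if len(self.keys) >= self.num_slots:               # circular FIFO overwrite
            self.p.pop(0); self.v.pop(0); self.keys.pop(0)
        self.p.append(p_i); self.v.append(v_i); self.keys.append(k_i.detach())

    def read(self, x_t, h_prev):
        if not self.keys:                                  # no memories written yet
            return torch.zeros(self.hidden_dim)
        q_t = self.query_proj(torch.cat([x_t, h_prev], dim=-1))            # Eq. (2)
        cached = torch.stack(self.keys)                                     # [N, key_dim]
        idx = torch.topk(torch.norm(cached - q_t, dim=-1),
                         k=min(self.k, len(self.keys)), largest=False).indices
        # Eq. (3): recompute the neighbours' keys through the learnt projection
        # ("jumpy backpropagation"), then weight values by inverse squared distance.
        neigh_keys = torch.stack([self.key_proj(torch.cat([self.p[i], self.v[i]], dim=-1))
                                  for i in idx.tolist()])
        w = 1.0 / (self.eps + torch.norm(q_t - neigh_keys, dim=-1) ** 2)
        w = w / w.sum()
        vals = torch.stack([self.v[i] for i in idx.tolist()])
        return (w.unsqueeze(-1) * vals).sum(dim=0)                          # m_t
```

In this sketch the cached keys are only used to shortlist neighbours; because the weights in Eq. (3) are recomputed through Wk, bk, those parameters still learn even though the stored activations are frozen.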
To re-calculate the distances, we want to use these learnt parameters rather than, say, a random projection, so we need to store the arguments xt and ht of the key-generating linear transform Wk, bk for all previous time-steps. Thus in the MEM we store the full (pi, vi, ki)-triplet, where pi = xti, vi = hti and ti is the step at which write i was made. We call this technique ‘jumpy backpropagation’ because the intermediate steps between the current time-step t and the memory write step ti are not taken into account in the gradient updates. This approach is similar to Sparse Attentive Backtracking (Ke et al., 2018, SAB) which uses sparse replay by passing gradients only through memories selected as relevant at each step. Our model differs in that it does not have a fixed chunking scheme and does not do full backpropagation through the architecture (which in our case becomes quickly intractable). Our approach has minimal computational overhead as we only recompute the keys for the nearest neighbors. Auxiliary Unsupervised Losses An agent with good memory provides a good basis for forming a rich representation of the environment, as it captures a history of the states visited by the agent. This is the primary basis for many rich probabilistic state representations in reinforcement learning such as belief states and predictive state representations (Littman and Sutton, 2002). Auxiliary unsupervised losses can significantly improve agent performance (Jaderberg et al., 2016). Recently it has been shown that agents augmented with one-step contrastive predictive coding (van den Oord et al., 2018, CPC) can learn belief state representations of the environment (Guo et al., 2018). Thus in MRA we combine the working and episodic memory mechanisms listed above with a CPC unsupervised loss to imbue the agent with a rich state representation. The CPC auxiliary loss is added to the usual RL losses, and is of the following form: Σ_{τ=1}^{N} CPCLoss[ht; xt+1, xt+2, . . . , xt+τ] (5) where CPCLoss is from van den Oord et al. (2018), ht is the working memory hidden state, and xt+τ is the encoded pixel input τ steps in the future. N is the number of CPC steps (typically 10 or 50 in our experiments). See Figure 4(b) for an illustration, and further details and equations elaborating on this loss in App. C.3 (an illustrative sketch of this loss also appears after the setup list below). Reconstruction losses have also been used as an auxiliary task (Jaderberg et al., 2016; Wayne et al., 2018) and we include this as a baseline in our experiments. Our reconstruction baseline minimizes the L2 distance between the predicted reward and predicted pixel input and the true reward and pixel input, using the working memory state ht as input. Details of this baseline are given in App. C.4. 4 Experiments Setup We ran 10 ablations on the MRA architecture, on the training and the two holdout levels: • Working Memory component: Either a feedforward neural network (‘FF’ for short) or an LSTM. The LSTM-only baseline corresponds to IMPALA (Espeholt et al., 2018). • With or without the episodic memory module (‘MEM’). • With or without an auxiliary unsupervised loss (either CPC or reconstruction loss, ‘REC’). • With or without jumpy backpropagation, for MRA (i.e. LSTM + MEM + CPC). Given that the experiments are computationally demanding, we only performed small variations as part of our hyper-parameter tuning process for each task (see App. D).
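To illustrate the shape of Eq. (5), here is a rough, InfoNCE-style stand-in that scores a per-offset prediction from ht against the true future embedding and in-batch negatives. The exact CPC loss used in the paper follows van den Oord et al. (2018) and App. C.3, so the head structure and the negative-sampling scheme below are assumptions made for illustration only.

```python
# Rough stand-in for the CPC auxiliary loss of Eq. (5); not the paper's exact loss.
# For each offset tau = 1..N, a linear head predicts x_{t+tau} from h_t and the
# prediction is scored against in-batch negatives with an InfoNCE-style objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CPCAux(nn.Module):
    def __init__(self, hidden_dim=256, embed_dim=256, num_steps=10):
        super().__init__()
        self.heads = nn.ModuleList([nn.Linear(hidden_dim, embed_dim) for _ in range(num_steps)])
        self.num_steps = num_steps

    def forward(self, h_t, future_x):
        # h_t: [B, hidden_dim]; future_x: [N, B, embed_dim] holding x_{t+1} .. x_{t+N}.
        loss = 0.0
        for tau in range(self.num_steps):
            pred = self.heads[tau](h_t)                    # [B, embed_dim]
            logits = pred @ future_x[tau].t()              # [B, B]: positives on the diagonal
            labels = torch.arange(logits.size(0), device=logits.device)
            loss = loss + F.cross_entropy(logits, labels)  # InfoNCE term for this offset
        return loss / self.num_steps
```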
We hypothesize that in general the agent should perform the best in training, somewhat worse on the holdout-interpolation level and the worst on the holdout-extrapolation level. That is, we expect to see a generalization gap. Our results validated this hypothesis for the tasks that were much harder for agents than for humans. 4.1 Full comparison We computed human-normalized scores (details in App. B) and plotted them into a heatmap (Fig 5), sorted such that the model with the highest train scores on average is the top row and the task with the highest train scores on average is the leftmost column. The heatmap suggests that the MRA architecture, LSTM + MEM + CPC, broadly outperforms the other models (App. B Table 3). This ranking was almost always maintained across train and holdout levels, despite MRA performing worse than the LSTM-only baseline on What Then Where. What Then Where was one of the tasks where all models did poorly, along with Spot the Difference: Multi-Object (rightmost columns in the heatmap). At the other end of the difficulty spectrum, LSTM + MEM had superhuman scores on Visible Goal Procedural Maze in training and on Transitive Inference in training and holdout, and further adding CPC or REC boosted the scores even higher. 4.2 Results Different memory systems worked best for different kinds of tasks, but the MRA architecture’s combination of LSTM + MEM + CPC did the best overall on training and holdout (Fig. 6). Removing jumpy backpropagation from MRA hurt performance in five Memory Suite tasks (App. B Fig. 10), while performance was the same in the remaining ones (App. B Fig. 11 and 12). Generalization gap widens as task difficulty increases The hypothesized generalization gap was minimal for some tasks, e.g. AVM and Continuous Recognition, but significant for others, e.g. What Then Where and Spot the Difference: Multi-Object (Fig 7). We observed that the gap tended to be wider as the task difficulty went up, and that in PsychLab, the two tasks where the scale was the number of trials seemed to be easier than the other two tasks where the scale was the delay duration. MEM critical on some tasks, is enhanced by auxiliary unsupervised loss Adding MEM improved scores on nine tasks in training, six in holdout-interpolate, and six in holdout-extrapolate. Adding MEM alone, without an auxiliary unsupervised loss, was enough to improve scores on AVM and Continuous Recognition, all Spot the Difference tasks except Spot the Difference: Multi-Object, all Goal Navigation tasks except Visible Goal Procedural Maze, and also for Transitive Inference. Adding MEM helped to significantly boost holdout performance for Transitive Inference, AVM, and Continuous Recognition. For the two PsychLab tasks this finding was in line with our expectations, since they both can be solved by memorizing single images and determining exact matches, and thus an external episodic memory would be the most useful. For Transitive Inference, in training MEM helped when the working memory was FF but made little difference on an LSTM, but on holdout MEM helped noticeably for both FF and LSTM. In Change Detection and Multi-Object, adding MEM alone had little or no effect but combining it with CPC or REC provided a noticeable boost. Synergistic effect of MEM + CPC, for LSTM On average, adding either the MEM + CPC stack or the MEM + REC stack to any working memory appeared to improve the agent’s ability to generalize to holdout levels (Fig. 6).
Interestingly, on several tasks we found that combining MEM + CPC had a synergistic effect when the working memory was LSTM: The performance boost from adding MEM + CPC was larger than the sum of the boost from adding MEM or CPC alone. We observed this phenomenon in seven tasks in training, six in holdout-interpolate, and six in holdout-extrapolate. Among these, the tasks where there was MEM + CPC synergy across training, holdout-interpolate, and holdout-extrapolate were: the easiest task, Visible Goal Procedural Maze; Visible Goal with Buildings; Spot the Difference: Basic; and the hardest task, Spot the Difference: Multi-Object. CPC vs. REC CPC was better than REC on all Spot the Difference tasks, and the two harder PsychLab tasks, Change Detection and What Then Where. On the other two PsychLab tasks there was no difference between CPC and REC. However, REC was better on all Goal Navigation tasks except Invisible Goal Empty Arena. When averaged out, REC was more useful when the working memory was FF, but CPC was more useful for an LSTM working memory. 5 Discussion & Future Work We constructed a diverse set of environments (available at https://github.com/deepmind/dm_memorytasks) to test memory-specific generalization, based on tasks designed to identify working memory and episodic memory in humans, and also developed an agent that demonstrates many of these cognitive abilities. We propose both a testbed and benchmark for further work on agents with memory, and demonstrate how better understanding the memory and generalization abilities of reinforcement learning agents can point to new avenues of research to improve agent performance and data efficiency. There is still room for improvement on the trickiest tasks in the suite where the agent fared relatively poorly. In particular, solving Spot the Difference: Motion might need a generative model that enables forward planning to imagine how future motion unrolls (e.g., Racanière et al., 2017). Our results indicate that adding an auxiliary loss such as CPC or reconstruction loss to an architecture that already has an external episodic memory improves generalization performance on holdout sets, sometimes synergistically. This suggests that existing agents that use episodic memory, such as DNC and NEC, could potentially boost performance by implementing an additional auxiliary unsupervised loss. Acknowledgements We would like to thank Jessica Hamrick, Jean-Baptiste Lespiau, Frederic Besse, Josh Abramson, Oriol Vinyals, Federico Carnevale, Charlie Beattie, Piotr Trochim, Piermaria Mendolicchio, Aaron van den Oord, Chloe Hillier, Tom Ward, Ricardo Barreira, Matthew Mauger, Thomas Köppe, Pauline Coquinot and many others at DeepMind for insightful discussions, comments and feedback on this work.
1. What is the main contribution of the paper regarding RL agent tasks?
2. How does the proposed neural architecture utilize working memory and episodic memory differently?
3. What are the strengths of the paper, particularly in terms of experimental design and ablation studies?
4. Do you have any concerns regarding the presentation format of Section 2?
5. Are there any questions regarding the novelty of the architecture or its components?
6. How does the reviewer assess the quality and impact of the paper's content?
Review
Review
tl;dr: This is a good paper. I recommend acceptance. The authors do a good job of motivating their work, and they contribute a nice experimental section with good results. The ablation study was thorough. Well done!
---
Many tasks that might be given to an RL agent are impossible without working memory. This paper presents a suite of tasks which require use of that memory in order to succeed. These tasks are compiled from a variety of other sources, either directly or re-implemented for this suite. They're good tasks.
This paper also presents a neural architecture for using both working memory and episodic memory. The working memory is implemented with an LSTM, not unlike IMPALA. The episodic memory, however, writes memory which is indexed into a many-dimensional vector space. The paper claims that this type of memory lasts longer than the LSTM memory.
The authors make a point of saying that none of the models, including the one presented in the paper, are able to do well on some of the tasks. They also show that none of the models perform well on extrapolated tasks (where the difficulty was increased after train time). I think they're doing this to show that their suite of tasks is challenging and worth trying to learn.
There appears to be a marked improvement between agents without episodic memory and agents with episodic memory on the heldout test sets. Also, there is the same improvement between feed-forward and LSTM agents (working memory). They did develop a novel architecture, though none of the pieces are particularly novel. However, their ablation tests successfully show that the agents with working memory and episodic memory perform better than similar agents without episodic memory or working memory at both training time and test time.
Pros:
- Generally easy to read
- The neural architecture seems sufficiently novel
- Need for both working and episodic memory seems well justified
- Thorough ablation tests
Cons:
- The formatting for Section 2 is *lousy*. Because figures, figure text, and main text are all over the page, it's hard to keep track of what refers to what. The intro to Section 2 says there are 13 tasks, but it's difficult to keep tally throughout the section. It would be especially helpful if the order of the figures matched the order in which the tasks are presented in the main text.
I think the direction is good, the experiments are good, and the overall quality is good. I wish they had another diagram which really showed their claims about generalization. For example, rather than showing all the data for the individual tasks in one figure, it could be nicer to show a graph which combines the information across tasks, or some handpicked results that demonstrate successes and failures (in addition to the data they have given). I didn't feel like their results had as much to do with generalization as with the need for memory in different types of tasks. Personally, I would have liked more discussion on the need for different types of memory and how their results backed up the theory/intuition.
NIPS
1. What is the original contribution of the paper in the field of model-free reinforcement learning?
2. How does the reviewer assess the quality of the work presented in the paper?
3. What are the strengths and weaknesses of the paper regarding its clarity and significance?
4. Does the reviewer have any questions or concerns about the technical contribution, ablation study, or comparisons made in the paper?
Review
Review
# Originality
The problem of incorporating memory in model-free RL is not new; however, there is a general lack of qualitative analysis of the problem due to the lack of clear testbeds (since most current ones might have many confounding elements, or a different focus) and baselines. This paper attempts to provide both, and thus makes for a good and original contribution to the NeurIPS community. I also appreciated the focus on testing for generalisation across instances of the tasks, since that is an important metric that is often lacking in published papers in the area.
# Quality
The work presented is overall of high quality. The technical contribution is theoretically sound, as it is a relatively straightforward combination of existing methods. A satisfactory ablation study was provided, and the method was compared against a state-of-the-art distributed RL algorithm, IMPALA. The authors are mostly careful about their stated claims about the performance of their method, and they managed to mostly convince me of the quality of the presented testbeds.
# Clarity
The paper is well written, albeit at times a bit too reliant on the presence of supplementary materials. As this is a common (and not easily addressable) problem with work presenting testbed-baseline pairs, this didn't affect the score too heavily; however, the exposition would have gained from focusing more strongly on either of the two main contributions.
# Significance
The problem of incorporating and utilising memory in model-free agents is a relatively strong focus of the RL community, and this work sets out to provide both testbeds and baselines to work towards tackling this important issue. The paper provides some insights on the usefulness of auxiliary reconstruction losses, which confirm and strengthen previous findings. Provided the code and the tasks are successfully released, this paper will make for an important baseline towards the quest to solve this general problem.
NIPS
3 The Memory Recall Agent Our agent, the Memory Recall Agent (MRA), incorporates five components: 1) a pixel-input convolutional, residual network, 2) a working memory, 3) a slot-based episodic memory, 4) an auxiliary contrastive loss for representation learning (van den Oord et al., 2018), 5) a jumpy backpropagationthrough-time training regime. Our agent architecture is shown in Figure 4(a). The overall agent is built on top of the IMPALA model (Espeholt et al., 2018) and is trained in the same way with the exceptions described below. Component descriptions are below. Pixel Input Pixel input is fed to a convolutional neural network, as is common in recent agents, followed by a residual block (He et al., 2015). The precise hyper-parameters are given in C.2: we use three convolutional layers followed by two residual layers. The output of this process is xt in Figure 4(a) and serves as input to three other parts of the network: 1) part of the input to the working memory module, 2) in the formation of keys and queries for the episodic memory, 3) as part of the target for the contrastive predictive coding. Working Memory Working memory is often realized through latent recurrent neural networks (RNNs) with some form of gating, such as LSTMs and Relational Memory architectures (Hochreiter and Schmidhuber, 1997; Santoro et al., 2018). These working memory models calculate the next set of hidden units using the current input and the previous hidden units. Although models which rely on working memory can perform well on a variety of problems, their ability to tackle dependencies and represent variables over long time periods is limited. The short-term nature of working memory is pragmatically, and perhaps unintentionally, reflected in the use of truncated backprop through time and the tendency for gradients through these RNNs to explode or vanish. Our agent uses an LSTM as a model of working memory. As we shall see in experiments, this module is able to perform working memory–like operations on tasks: i.e., learn calculations involving short-term memory. As depicted in Figure 4(a), the LSTM takes as input xt from the pixel input network and mt from the episodic memory module. As in Espeholt et al. (2018), the LSTM has two heads as output, producing the policy ⇡ and the baseline value function V . In our architecture these are derived from the output from the LSTM, ht. ht is also used to form episodic memories, as described below. Episodic Memory (MEM) If our agent only consisted of the working memory and pixel input described above, it would be almost identical to the model in IMPALA (Espeholt et al., 2018), an already powerful RL agent. But MRA also includes a slot-based episodic memory module as that can store values more reliably and longer-term than an LSTM, is less susceptible to the intricacies of gradient propagation, and its fundamental operations afford the agent different abilities (as observed in our experiments). The MEM in MRA has a key-value structure which the agent reads from and writes to at every time-step (see Fig. 4(a)). MRA implements a mechanism to learn how to store summaries of past experiences and retrieve relevant information when it encounters similar contexts. The reads from memory are used as additional inputs to the neural network (controller), which produces the model predictions. 
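To make the slot-based MEM concrete, the following is a minimal numpy sketch of the circular write buffer and the write rule of Eq. (1). The class name, dimensions, and random initialisation are illustrative assumptions; in the actual agent W_k and b_k are learned jointly with the rest of the network, whereas here they are simple placeholder arrays.

```python
import numpy as np

class EpisodicMemory:
    """Fixed-size circular (FIFO) key-value store, in the spirit of the MEM.

    Minimal numpy sketch; slot count, dimensions, and initialisation are
    illustrative, not the paper's actual hyper-parameters.
    """
    def __init__(self, num_slots, pixel_dim, hidden_dim, key_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.num_slots = num_slots
        self.write_ptr = 0          # next slot to (over)write
        self.filled = 0             # how many slots hold valid entries
        # stored triplets (p_i, v_i, k_i)
        self.p = np.zeros((num_slots, pixel_dim))
        self.v = np.zeros((num_slots, hidden_dim))
        self.k = np.zeros((num_slots, key_dim))
        # key projection: k_i = W_k [p_i; v_i] + b_k  (learned in the real agent)
        self.W_k = rng.standard_normal((key_dim, pixel_dim + hidden_dim)) * 0.01
        self.b_k = np.zeros(key_dim)

    def write(self, x_t, h_t):
        """Store the current pixel embedding and LSTM state; no gradients flow here."""
        i = self.write_ptr
        self.p[i] = x_t
        self.v[i] = h_t
        self.k[i] = self.W_k @ np.concatenate([x_t, h_t]) + self.b_k  # cached key
        self.write_ptr = (self.write_ptr + 1) % self.num_slots        # FIFO overwrite
        self.filled = min(self.filled + 1, self.num_slots)

    def reset(self):
        """Wipe contents at the end of each episode."""
        self.write_ptr = 0
        self.filled = 0
```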
This effectively augments the controller’s working memory capabilities with experiences from different time scales retrieved from the MEM, which facilitate learning long-term dependencies, a difficult task when relying entirely on backpropagation in recurrent architectures (Hochreiter and Schmidhuber, 1997; Graves et al., 2016; Vaswani et al., 2017). The MEM has a number of slots, indexed by i. Each slot stores activations from the pixel input network and LSTM from previous times ti in the past. The MEM acts as a fixed-size circular (first-in-first-out) buffer: New keys and values are added, overwriting the least recently added entry if there are no unused slots available. The contents of the episodic memory buffer is wiped at the end of each episode. Memory Writing Crucially, writing to episodic memory is done without gradients. At each step a free slot is chosen for writing, denoted i. Next, the following is stored: pi xt; vi ht; ki Wk[pi, vi] + bk (1) where pi is the pixel input embedding from step t and vi is the LSTM hidden state (if the working memory is something else, e.g. a feedforward, this would be the output activations). ki is the key, used for reading (described below), computed as a simple linear function of the other two values stored. Caching the key speeds up memory reads significantly. However, the key can become stale as the weights and biases, Wk and bk are learnt (the procedure for learning them is described below under Jumpy Backpropagation). In our experiments we did not see an adverse effect of this staleness. Memory Reading The agent uses a form of dot-product attention (Bahdanau et al., 2015) over its MEM, to select the most relevant events to provide as input mt to the LSTM. The query qt is a linear transform of the pixel input embedding xt and the LSTM hidden state from the previous time-step ht 1, with weight Wq and bias bq . qt = Wq[xt, ht 1] + bq (2) The query qt is then compared against the keys in MEM as in Pritzel et al. (2017): Let (pj , vj , kj), 1 j K be the K nearest neighbors to qt from MEM, under an L2 norm between kj and qt. mt = KX j=1 wjvj where wj / 1 ✏+ ||qt Wk[pj , vj ] bk||22 (3) We compute a weighted aggregate of the values (vj) of the K nearest neighbors, weighted by the inverse of each neighbor-key’s distance to the query. Note that the distance is re-calculated from values stored in the MEM, via the linear projection Wk, bk in (1). We concatenate the resulting weighted aggregate memory mt with the embedded pixel input xt, and pass it as input to the working memory as shown in Figure 4(a). Jumpy backpropagation We now turn to how gradients flow into memory writes. Full backpropagation can become computationally infeasible as this would require backpropagation into every write that is read from and so on. Thus as a new (pi, vi, ki)-triplet is added to the MEM, there are trade-offs to be made regarding computational complexity versus performance of the agent. To make it more computationally tractable, we place a stop-gradient in the memory write. In particular, the write operation for the key in (1) becomes: ki Wk[SG(pi), SG(vi)] + bk (4) where SG(·) denote that the gradients are stopped. This allows the parameters Wk and bk to receive gradients from the loss during writing and reading, while at the same time bounding the computational complexity as the gradients do not flow back into the recurrent working memory (or via that back into the MEM). 
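The read path of Eq. (2)–(3) can be sketched in the same style: a query is formed from the current pixel embedding and the previous LSTM state, the K nearest cached keys are retrieved, distances are re-computed through the current key projection, and the stored values are aggregated with inverse-distance weights. This is an assumed reconstruction rather than the paper's code; the memory object is the EpisodicMemory-like sketch above, and K and ε are arbitrary choices.

```python
import numpy as np

def read_memory(mem, x_t, h_prev, W_q, b_q, K=8, eps=1e-3):
    """Sketch of the MEM read: nearest-neighbour lookup with inverse-distance weights.

    `mem` exposes arrays p, v, k and parameters W_k, b_k as in the write sketch.
    Shapes, K, and eps are illustrative assumptions.
    """
    if mem.filled == 0:
        return np.zeros(mem.v.shape[1])
    # query from the current pixel embedding and previous working-memory state
    q_t = W_q @ np.concatenate([x_t, h_prev]) + b_q
    # pre-select K nearest neighbours using the cached (possibly stale) keys
    d_cached = np.linalg.norm(mem.k[:mem.filled] - q_t, axis=1)
    nn = np.argsort(d_cached)[:K]
    # re-compute distances through the current W_k, b_k, so that (in the
    # differentiable version) gradients flow into the key projection only,
    # not back into the memory writes
    pixel_dim = mem.p.shape[1]
    recomputed_keys = (mem.p[nn] @ mem.W_k[:, :pixel_dim].T
                       + mem.v[nn] @ mem.W_k[:, pixel_dim:].T + mem.b_k)
    d = np.sum((q_t - recomputed_keys) ** 2, axis=1)
    w = 1.0 / (eps + d)
    w = w / w.sum()
    # weighted aggregate of stored values becomes the extra LSTM input m_t
    return (w[:, None] * mem.v[nn]).sum(axis=0)
```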
To re-calculate the distances, we want to use these learnt parameters rather than, say, random projection, so we need to store the arguments xt and ht of the key-generating linear transform Wk, bk for all previous time-steps. Thus in the MEM we store the full (pi, vi, ki)-triplet, where pi = xti , vi = hti and ti is the step that write i was made. We call this technique ‘jumpy backpropagation’ because the intermediate steps between the current time-step t and the memory write step ti are not taken into account in the gradient updates. This approach is similar to Sparse Attentive Backtracking (Ke et al., 2018, SAB) which uses sparse replay by passing gradients only through memories selected as relevant at each step. Our model differs in that it does not have a fixed chunking scheme and does not do full backpropagation through the architecture (which in our case becomes quickly intractable). Our approach has minimal computational overhead as we only recompute the keys for the nearest neighbors. Auxiliary Unsupervised Losses An agent with good memory provides a good basis for forming a rich representation of the environment, as it captures a history of the states visited by the agent. This is the primary basis for many rich probabilistic state representations in reinforcement learning such as belief states and predictive state representations (Littman and Sutton, 2002). Auxiliary unsupervised losses can significantly improve agent performance (Jaderberg et al., 2016). Recently it has been shown that agents augmented with one-step contrastive predictive coding (van den Oord et al., 2018, CPC) can learn belief state representations of the environment (Guo et al., 2018). Thus in MRA we combine the working and episodic memory mechanisms listed above with a CPC unsupervised loss to imbue the agent with a rich state representation. The CPC auxiliary loss is added to the usual RL losses, and is of the following form: NX ⌧=1 CPCLoss [ht;xt+1, xt+2, . . . , xt+⌧ ] (5) where CPCLoss is from van den Oord et al. (2018), ht is the working memory hidden state, and xt+⌧ is the encoding pixel input at ⌧ steps in the future. N is the number of CPC steps (typically 10 or 50 in our experiments). See Figure 4(b) for an illustration and further details and equations elaborating on this loss in App. C.3. Reconstruction losses have also been used as an auxiliary task (Jaderberg et al., 2016; Wayne et al., 2018) and we include this as a baseline in our experiments. Our reconstruction baseline minimizes the L2 distance between the predicted reward and predicted pixel input and the true reward and pixel input, using the working memory state ht as input. Details of this baseline are given in App. C.4. 4 Experiments Setup We ran 10 ablations on the MRA architecture, on the training and the two holdout levels: • Working Memory component: Either feedforward neural network (‘FF’ for short) or LSTM. The LSTM-only baseline corresponds to IMPALA (Espeholt et al., 2018). • With or without using episodic memory module (‘MEM’). • With or without auxiliary unsupervised loss (either CPC or reconstruction loss (‘REC’)). • With or without jumpy backpropagation, for MRA (i.e. LSTM + MEM + CPC) Given that the experiments are computationally demanding, we only performed small variations within as part of our hyper-parameter tuning process for each task (see App. D). 
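For readers unfamiliar with CPC, a rough PyTorch sketch of an InfoNCE-style auxiliary objective in the spirit of Eq. (5) is given below. It uses in-batch negatives and one linear predictor per horizon; the paper's actual CPC loss (detailed in their App. C.3) may be organised differently, so treat this purely as an illustration of the general mechanism, with hypothetical sizes.

```python
import torch
import torch.nn.functional as F

def cpc_style_loss(h_t, x_future, predictors):
    """Illustrative InfoNCE-style auxiliary loss in the spirit of Eq. (5).

    h_t:        (B, H)    working-memory states at step t
    x_future:   (B, N, D) pixel embeddings at steps t+1 .. t+N
    predictors: list of N linear layers mapping H -> D, one per horizon tau.
    Uses in-batch negatives; this is not the paper's exact formulation.
    """
    B, N, D = x_future.shape
    total = 0.0
    for tau in range(N):
        pred = predictors[tau](h_t)                  # (B, D) prediction for step t+tau+1
        logits = pred @ x_future[:, tau, :].T        # (B, B) similarity to all batch items
        labels = torch.arange(B, device=h_t.device)  # positive is the matching batch entry
        total = total + F.cross_entropy(logits, labels)
    return total / N

# usage sketch (hypothetical sizes):
# predictors = torch.nn.ModuleList([torch.nn.Linear(256, 64) for _ in range(10)])
# loss = rl_loss + cpc_style_loss(h_t, x_future, predictors)
```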
We hypothesize that in general the agent should perform the best in training, somewhat worse on the holdout-interpolation level and the worst on the holdout-extrapolation level. That is, we expect to see a generalization gap. Our results validated this hypothesis for the tasks that were much harder for agents than for humans. 4.1 Full comparison We computed human-normalized scores (details in App. B) and plotted them into a heatmap (Fig 5) sorted such that the model with the highest train scores on average is the top row and the task with highest train scores on average is the leftmost column. The heatmap suggests that the MRA architecture, LSTM + MEM + CPC, broadly outperforms the other models (App. B Table 3). This ranking was almost always maintained across train and holdout levels, despite MRA performing worse than the LSTM-only baseline on What Then Where. What Then Where was one of the tasks where all models did poorly, along with Spot the Difference: Multi-Object, Spot the Difference: Multi-Object, Spot the Difference: Multi-Object (rightmost columns in heatmap). At the other end of the difficulty spectrum, LSTM + MEM had superhuman scores on Visible Goal Procedural Maze in training and on Transitive Inference in training and holdout, and further adding CPC or REC boosted the scores even higher. 4.2 Results Different memory systems worked best for different kinds of tasks, but the MRA architecture’s combination of LSTM + MEM + CPC did the best overall on training and holdout (Fig. 6). Removing jumpy backpropagation from MRA hurt performance in five Memory Suite tasks (App. B Fig. 10), while performance was the same in the remaining ones (App. B Fig. 11 and 12). Generalization gap widens as task difficulty increases The hypothesized generalization gap was minimal for some tasks e.g. AVM and Continuous Recognition but significant for others e.g. What Then Where and Spot the Difference: Multi-Object (Fig 7). We observed that the gap tended to be wider as the task difficulty went up, and that in PsychLab, the two tasks where the scale was the number of trials seemed to be easier than the other two tasks where the scale was the delay duration. MEM critical on some tasks, is enhanced by auxiliary unsupervised loss Adding MEM improved scores on nine tasks in training, six in holdout-interpolate, and six in holdout-extrapolate. Adding MEM alone, without an auxiliary unsupervised loss, was enough to improve scores on AVM and Continuous Recognition, all Spot the Difference tasks except Spot the Difference: Multi-Object, all Goal Navigation tasks except Visible Goal Procedural Maze, and also for Transitive Inference. Adding MEM helped to significantly boost holdout performance for Transitive Inference, AVM, and Continuous Recognition. For the two PsychLab tasks this finding was in line with our expectations, since they both can be solved by memorizing single images and determining exact matches and thus an external episodic memory would be the most useful. For Transitive Inference, in training MEM helped when the working memory was FF but made little difference on an LSTM, but on holdout MEM helped noticeably for both FF and LSTM. In Change Detection and Multi-Object, adding MEM alone had little or no effect but combining it with CPC or REC provided a noticeable boost. Synergistic effect of MEM + CPC, for LSTM On average, adding either the MEM + CPC stack or MEM + REC stack to any working memory appeared to improve the agent’s ability to generalize to holdout levels (Fig. 6). 
Interestingly, on several tasks we found that combining MEM + CPC had a synergistic effect when the working memory was LSTM: The performance boost from adding MEM + CPC was larger than the sum of the boost from adding MEM or CPC alone. We observed this phenomenon in seven tasks in training, six in holdout-interpolate, and six in holdout-extrapolate. Among these, the tasks where there was MEM + CPC synergy across training, holdout-interpolate, and holdout-extrapolate were: the easiest task, Visible Goal Procedural Maze; Visible Goal with Buildings; Spot the Difference: Basic; and the hardest task, Spot the Difference: Multi-Object. CPC vs. REC CPC was better than REC on all Spot the Difference tasks, and the two harder PsychLab tasks Change Detection and What Then Where. On the other two PsychLab tasks there was no difference between CPC and REC. However, REC was better on all Goal Navigation tasks except Invisible Goal Empty Arena. When averaged out, REC was more useful when the working memory was FF, but CPC was more useful for an LSTM working memory. 5 Discussion & Future Work We constructed a diverse set of environments 2 to test memory-specific generalization, based on tasks designed to identify working memory and episodic memory in humans, and also developed an agent that demonstrates many of these cognitive abilities. We propose both a testbed and benchmark for further work on agents with memory, and demonstrate how better understanding the memory and generalization abilities of reinforcement learning agents can point to new avenues of research to improve agent performance and data efficiency. There is still room for improvement on the trickiest tasks in the suite where the agent fared relatively poorly. In particular, solving Spot the Difference: 2Available at https://github.com/deepmind/dm_memorytasks. Motion might need a generative model that enables forward planning to imagine how future motion unrolls (e.g., (Racanière et al., 2017)). Our results indicate that adding an auxiliary loss such as CPC or reconstruction loss to an architecture that already has an external episodic memory improves generalization performance on holdout sets, sometimes synergistically. This suggests that existing agents that use episodic memory, such as DNC and NEC, could potentially boost performance by implementing an additional auxiliary unsupervised loss. Acknowledgements We would like to thank Jessica Hamrick, Jean-Baptiste Lespiau, Frederic Besse, Josh Abramson, Oriol Vinyals, Federico Carnevale, Charlie Beattie, Piotr Trochim, Piermaria Mendolicchio, Aaron van den Oord, Chloe Hillier, Tom Ward, Ricardo Barreira, Matthew Mauger, Thomas Köppe, Pauline Coquinot and many others at DeepMind for insightful discussions, comments and feedback on this work.
1. What is the focus and contribution of the paper? 2. What are the strengths and weaknesses of the proposed approach? 3. How does the reviewer assess the clarity and quality of the paper's content? 4. What are the nitpicks or minor issues that the reviewer has identified? 5. Are there any concerns or limitations regarding the proposed method?
Review
Review EDIT: changed my overall score from 6 to 7 in light of the authors' feedback. Positive/negative points (+/-): + clearly written + not all tasks are solved - "We plan to release the full task suite within six months of publication." weakens the article, as one of its main contributions is this task suite. Overall a good submission, but I feel that the contribution of the task suite is bigger than the modeling contribution. The delayed release of the task suite is a big drawback. Nitpicks: It is odd to describe IMPALA (Importance Weighted Actor-Learner Architecture) as an agent: "it would be almost identical to IMPALA" -> "it would be almost identical to the model in Espeholt et al. 2018." (page 4). I applaud trying to make Figure 5 more readable with heatmap coloring, but it is still a bit hard to read (I don't mean the font size).
NIPS
Title Neural Rule-Execution Tracking Machine For Transformer-Based Text Generation Abstract Sequence-to-Sequence (Seq2Seq) neural text generation models, especially the pre-trained ones (e.g., BART and T5), have exhibited compelling performance on various natural language generation tasks. However, the black-box nature of these models limits their application in tasks where specific rules (e.g., controllable constraints, prior knowledge) need to be executed. Previous works either design specific model structures (e.g., Copy Mechanism corresponding to the rule “the generated output should include certain words in the source input”) or implement specialized inference algorithms (e.g., Constrained Beam Search) to execute particular rules through the text generation. These methods require the careful design case-by-case and are difficult to support multiple rules concurrently. In this paper, we propose a novel module named Neural Rule-Execution Tracking Machine (NRETM) that can be equipped into various transformer-based generators to leverage multiple rules simultaneously to guide the neural generation model for superior generation performance in an unified and scalable way. Extensive experiments on several benchmarks verify the effectiveness of our proposed model in both controllable and general text generation tasks. 1 Introduction Transformer-based neural language models (LMs), such as GPT/BART [1–3], have led a wave of new trends in natural language generation, producing texts of prominent quality. They are trained roughly on huge amounts of text corpora to reconstruct the full sentences (i.e., next coming tokens and missing text fragments). Despite their success in varieties of NLP tasks, we argue that the black-box nature of these models leads to inefficiently learning to follow constraints and incorporating prior knowledge. In controllable text generation, most relevant studies [4–6] focus on controlling high-level text attributes (e.g., topic, sentiment) or simply keyword/phrase. More complex fine-grained control constraints such as “generate a sequence of tokens with ‘apple’ in the first sentence which has 15 words and ‘orange’ or ‘oranges’ in the fourth sentence” are less explored. A very recent work [7] reveals that large-scale LMs do not learn to obey the underlying constraints reliably, even in a quite simple constrained generation task (cover all the given keywords without hallucinating new ones). In general text generation, existing works on various tasks reveal the benefit of incorporating task-specific prior knowledge: machine translation [8] (e.g., each source phrase should be translated ∗Work done during the internship at Microsoft STCA. †Corresponding author: Daxin Jiang ([email protected]). 35th Conference on Neural Information Processing Systems (NeurIPS 2021). into exactly one target phrase), text summarization [9] (e.g., the lead bias: front loading the most salient information), dialogue generation [10] (e.g., humans tend to repeat entity names or even long phrases in conversation). However, they either need designing specific model architectures (e.g., Coverage Mechanism and Copy Mechanism) or devising well-designed learning objectives (e.g., GSG [11]). These methods require careful design case-by-case and are difficult to combine multiple arbitrary constraints or prior knowledge simultaneously. 
Motivated by the above research dilemma, we take the first step towards building an unified framework to handle Fine-grained Control and Prior Knowledge Integration and propose a novel module Neural Rule-Execution Tracking Machine (NRETM) 3 Specifically, NRETM is a trainable neural module that can be equipped with transformer-based sequence-to-sequence pre-trained LMs. It can handle constraints in any Predicate Logic Formula, which crucially includes the arbitrarily complicated relations among different control tasks. For example, the above fine-grained constraint can be written as:( InSen(apple, 1 ) ∧ Len(1 , 15 ) ) ∧ ( InSen(orange, 4 ) ∨ InSen(oranges, 4 ) ) To build NRETM, we combat three major challenges: i) modeling the complicated relationships among control tasks and the logic operators (i.e., ∧, ∨) in the constraint expressions; ii) an unified control system is required to execute different control tasks simultaneously and iii), the control signals for different control tasks should be properly aligned with the constraint expressions. NRETM uses the encoder of transformer-based pre-trained sequence-to-sequence LMs to model the relationship between control tasks and the logic operators. NRETM completes different control tasks via nondifferential Logic Trackers (empowered by executable programs) in an unified control progress system during the decoding process. Finally, the encoded constraint expressions and control progress signals are combined together in the transformer decoder. NRETM is fine-tuned with the pre-trained LMs (except logical trackers) to follow the control progress signal and predicate logic formula. NRETM reconciles symbolic computing (that has precise logic and numerical calculation capabilities from logic trackers) with neural language generation (that has an exceptional ability of wording and phrasing), which results in both the accurate controllability and the superior generation performance. For evaluation, we select three representative benchmarks because all of them involve constraints or prior knowledge, allowing us to verify the effectiveness of our proposed NRETM model: ROCStories [12] are five-sentence stories with complicated predicate constraints over the story structure; Commonsense Generation task [13] with the constraints of mentioning all input concepts; TED15 Zh-En document-level machine translation benchmark [14] with prior knowledge of translating input sentences one by one. Our contributions in this work are three-fold: (1) To the best of our knowledge, we are the first to propose a general framework that incorporates control signal and prior knowledge, formulated as predicate logic constraints, into transformer-based seq2seq text generation models; (2) We train (or fine-tune) the transformer-based seq2seq text generation models to follow the predicate logic constraints(i.e., control signal or prior knowledge) by dynamically updating the rule execution intermediate progress value to the text decoder; and (3) Empirical verification of the effectiveness of the proposed approach on three benchmarks. 2 Approach This section first formalizes fine-grained content control task, then introduces an overview of proposed NRETM model, followed by diving into details of each component. 2.1 Fine-Grained Content Control In this work, we focus on fine-grained content control task where the model input consists of predicate logic constraints x = [x1, . . . , xlx ] ∈ X that should be satisfied in the outputs and optional context input c = [c1, . . . , clc ]. 
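As a concrete illustration, the fine-grained constraint quoted above can be represented directly as a CNF data structure: a conjunction (list) of clauses, each clause a disjunction (list) of predicates. The sketch below, including the predicate names and the satisfaction checker, is a hypothetical rendering for illustration rather than NRETM's implementation.

```python
# A CNF constraint is a list of clauses; each clause is a list of predicates
# (ORed together); clauses are ANDed. Predicates are (name, args) tuples.
# This mirrors the example constraint in the text; names are illustrative.
constraint = [
    [("InSen", ("apple", 1))],                               # apple in sentence 1
    [("Len", (1, 15))],                                      # sentence 1 has 15 words
    [("InSen", ("orange", 4)), ("InSen", ("oranges", 4))],   # orange OR oranges in sentence 4
]

def satisfied(constraint, output_sentences):
    """Check a finished output (a list of sentences) against a CNF constraint."""
    def holds(pred):
        name, args = pred
        if name == "InSen":
            word, idx = args
            return word in output_sentences[idx - 1].split()
        if name == "Len":
            idx, length = args
            return len(output_sentences[idx - 1].split()) == length
        raise ValueError(f"unknown predicate {name}")
    return all(any(holds(p) for p in clause) for clause in constraint)
```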
The encoder takes concatenation of x and c (i.e., [c;x]) as input. At decoding step t, the decoder take y:t = [y1, · · · , yt] ∈ Y as input and generate yt+1. 3Our Source Code can be found in https://github.com/GaryYufei/NRETM 2.2 Predicate Logic Constraint We define predicate U(a, y) as a boolean function that indicates whether output y has satisfied control task a which could be values (e.g., status, total length, stop word counts) or lexicons (e.g., copying particular words/phrases). In this paper, NRETM accepts predicate logic constraints in Conjunctive Normal Form (CNF): ( U1 · · · ∨ Ui ) ∧ · · · ∧ ( Uk · · · ∨ Un ) . Each predicate logic constraint includes multiple predicates Ui and basic logic operators (e.g., ∨, ∧ and brackets). 2.3 Neural Rule-Execution Tracking Machine NRETM can be equipped into transformer-based sequence-to-sequence LMs. Figure 1 illustrates an overview of our neural rule-execution tracking machine (NRETM). To enable LMs to follow predicate logic constraints, it is essential to 1) model the complicated relationships among predicates and basic logic operators; 2) control multiple predicates (i.e., control tasks) in the constraints simultaneously; 3) combine the control signals with the predicate logic constraint expressions. For 1), we treat the whole constraint expressions as natural language sentences and feed it into the transformer encoder. For 2), we propose a set of unified control signals that can be used to dynamically describe the step-wise execution progress of different predicates. For 3), we represent the control signals as relative position embedding and align them with encoded constraints expressions in the transformer decoder. 2.3.1 Encoding Predicate Logic Constraints Given predicate logic constraint expression x = [x1, . . . , xlx ] where xi either corresponds to a predicate Ui or a basic logic operator, we feed x into the transformer encoder. Due to the tokenization strategies of pre-trained LMs, each xi may be tokenized into a continuous token sequence. x is tokenized into t = [t1, · · · , tlt ] where lt ≥ lx and there exists one-to-one mapping m(ti) = xj . We use he to denote the encoder output of x. As pre-trained LMs is trained with significant amount of natural language sentences, it should encode complicated sequential relationships within the constraints expressions. 2.3.2 Mentoring Control Progress Specialized controlling components (e.g., Constrained Beam Search [15] and Copy Mechanism [10]) can only be used for limited control tasks. To enable unified controlling system, we propose to complete control by mentoring control progress. We describe the control progress of different predicates using an unified progress state system. Each predicate Ui has a corresponding Logic Tracker QUi(y), which is a non-differentiable executable program (i.e., written by Python) and takes current generated outputs and returns one progress state at each generation step, formulated as follows: QUi(y) = S0 Ui is ∅ S1 Ui is not triggered in y (S2,V) Ui is in progress in y S3 Ui is satisfied in y (1) where State S0 always is assigned to non-predicate ∅ (i.e., basic logic operators in the constraint expression); State S1 means the tracking for predicate Ui is not triggered in y. For example, when controlling the stop word counts of the second sentence, the Logic Tracker returns S1 when the LMs are generating the first sentence; State S2 means predicate Ui is in progress and V is the optimal intermediate value that allows fine-grained tracking. 
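One plausible shape for a Logic Tracker is a small callable that inspects the partial output and returns one of the unified progress states, possibly with an intermediate value V. Two examples (a copy predicate and a sentence-length predicate) are sketched below; the state labels, the (state, value) tuple encoding, and the sentence-splitting heuristic are assumptions made for illustration — the paper specifies only that trackers are executable programs returning the progress states.

```python
S0, S1, S2, S3 = "S0", "S1", "S2", "S3"   # S0 is reserved for non-predicate tokens (logic operators)

class CopyTracker:
    """Predicate Copy(w): the output must mention word w (no intermediate value)."""
    def __init__(self, word):
        self.word = word
    def __call__(self, partial_output):
        return S3 if self.word in partial_output.split() else S2

class SentenceLenTracker:
    """Predicate Len(i, L): sentence i must contain exactly L words.

    Returns S1 before sentence i starts, (S2, words_left) while it is being
    generated, and S3 once sentence i is complete with L words.
    """
    def __init__(self, sent_idx, target_len, sent_sep="."):
        self.sent_idx, self.target_len, self.sep = sent_idx, target_len, sent_sep
    def __call__(self, partial_output):
        sents = partial_output.split(self.sep)
        if len(sents) < self.sent_idx:
            return S1                                    # not triggered yet
        if len(sents) == self.sent_idx:                  # sentence i is in progress
            left = self.target_len - len(sents[-1].split())
            return (S2, left)
        done = len(sents[self.sent_idx - 1].split())
        return S3 if done == self.target_len else (S2, 0)
```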
For example, in generation length control, V could be total target length minus the current length informing pre-trained LMs the number of words left to satisfy the constraint; State S3 means Ui is satisfied in y. In short, Logic Tracker unifies different predicates by returning the same set of control signals. Global Or-Clause Update: Each Logic Tracker traces the execution progress of its corresponding predicate Ui independently. This independent tracing strategy works well in the And-Clause because all involved predicates are required to reach State S3. However, only a subset of predicates are required to reach State S3 in the Or-Clause. Our preliminary experiment shows that the independent tracing strategy trains the model not to complete the constraints. To solve this issue, we propose to update the status of all predicates in the same Or-Clause to State S3 when one of the predicates reach State S3. This forces all predicates finish themselves in State S3 and improves the constraint satisfaction ratio in the Or-Clause. Control Progress Matrix: Given the predicate logic constraint expressions t = [t1 · · · , tlt ], we further define Control Progress Matrix S to align the predicates with their control progress signals returned by Logic Trackers: S = [C(t, ε); C(t,y:1); · · · ; C(t,y:t)] (2) C(t,y:t) = [v(t1,y:t), · · · , v(tlt ,y:t)] (3) where ε is the empty string at first decoding step. S is a two-dimensional matrix where each row describes the control progress of all tokens in t at a single decoding step and each column describes the control progress of a single token in t along all decoding steps. Recall that basic logic operators in predicate logic constraint expressions do not require control progress tracking. Each cell Si,j in S is formulated as: Si,j = v(ti,y:j−1) = { Q∅(y) m(ti) = xk and xk is a basic logic operator QUq (y) m(ti) = xk and xk is a predicate Uq (4) Example: In Figure 2, we are given three logic constraints, a) copy “car”; b) the stop word ratio of the output should be 0.5 and c) the length of second sentence should be 6. The basic logic operators & are assigned with S0. Length control and Stop Word Ratio maintain intermediate values (e.g., the residual Length and Stop Word Ratio). The length control is assigned with S1 when generating the first sentence because it will only be triggered in the second sentence. Copy control does not have intermediate values and its State are updated from S2 to S3 only when the corresponding words (at step 10 in our example) appear in the y:t. Control Progress Matrix Encoder: Control Progress Matrix S aligns the results from Logic Tracker with the encoded predicate logic constraint expressions. However, S is a non-differentiable symbolic matrix with each cell Si,j being discrete symbol S0 to S3 combined with additional numbers (i.e., V ). As the encoder has already captured the inter-relationship in the predicate logic constraints, we only model each cell Si,j independently. To support various types of predicates, we treat Si,j as a string and encode it using a single-layer transformer-based encoder ShallowEncoder which shares the same vocabulary and word embeddings as the pre-trained LMs: hsij = ShallowEncoder(Si,j) (5) h̄sij = MeanPooling(h s ij) (6) where hsij ∈ Rl s ij×d, h̄sij ∈ Rd and lsij is the length of the tokenized Si,j and d is the hidden size of ShallowEncoder. We use h̄s to denote the neural representation of whole S. 
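Putting the pieces together, one row of the Control Progress Matrix can be assembled per decoding step by querying each constraint token's tracker and then applying the global Or-clause update. The helper below is an assumed reconstruction of that procedure (the function name, the string encoding of cells, and the or_groups bookkeeping are illustrative); each resulting cell string is what the single-layer ShallowEncoder of Eq. (5)–(6) would consume.

```python
def progress_row(constraint_tokens, trackers, or_groups, partial_output):
    """Build one row of the Control Progress Matrix for the current partial output.

    constraint_tokens: tokens of the encoded constraint expression.
    trackers:  dict mapping token position -> Logic Tracker (None for logic operators).
    or_groups: list of sets of token positions whose predicates share an Or-clause.
    Returns one state string per constraint token (the cells S_{i,j}).
    This is an assumed reconstruction of the procedure described in the text.
    """
    row = []
    for i, _tok in enumerate(constraint_tokens):
        tracker = trackers.get(i)
        if tracker is None:
            row.append("S0")                      # basic logic operator
        else:
            state = tracker(partial_output)
            if isinstance(state, tuple):          # (S2, intermediate value V)
                row.append(f"{state[0]} {state[1]}")
            else:
                row.append(state)
    # Global Or-clause update: once any predicate in an Or-clause is satisfied,
    # mark every predicate in that clause as satisfied (S3).
    for group in or_groups:
        if any(row[i].startswith("S3") for i in group):
            for i in group:
                row[i] = "S3"
    return row

# Each cell string is later tokenised, passed through the single-layer
# ShallowEncoder, and mean-pooled to give the vector used as a
# relative-position feature in the decoder's cross-attention.
```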
2.3.3 Combining Predicate Logic Constraint with Control Progress Matrix Finally, we combine the encoded Predicate Logic Constraints he with the encoded Control Progress Matrix h̄s in the transformer-based pre-trained LMs. Injecting h̄s into the transformer encoder would result in encoder content re-computation at each decoding step and stop the standard parallel training for transformer-based decoders. In addition, as Control Progress Matrix incrementally increases as the decoding goes on, it is reasonable to equip h̄s into the transformer decoder. Given the encoder output he, decoder input y:t, the probability of the next token yt+1 can be calculated by: hdt = KV(W s qy:t,W s ky:t,W s vy:t (7) ot+1 = CrossKV(W c qh d t ,W c kh e,Wcvh e) (8) p(yt+1|x1:lx , y1:t) = softmax(Wo ot+1) (9) where ot+1 ∈ Rdc is the hidden state at step t with dc the hidden size, and Wo ∈ R|V |×dc , Both KV and CrossKV are the standard key-value self-attention described in [16]. In the CrossKV which takes hdt and h e as input, the resulting attention score matrix has the same size as S, making CrossKV suitable to incorporate our Control Progress Matrix. Control Progress Matrix as Relative Position: Inspired by [17] which incorporates token relative positions into the self-attention module, we propose to inject Control Progress Matrix as the “relative positions” between encoder output he and current decoder input y:t in the cross-attention (Eq. 8) module. Following this approach, we linearly project each h̄ij into Control Progress Matrix key hkij = W f k · h̄sij + b f k and Control Progress Matrix Value h v ij = W f v · h̄sij + bfv . All transformer decoder layers share the same representations. Eq. 8 is changed to: ot+1 = R(W c qH d t ,W c kH e,WcvH e,hk,hv) (10) where Rlx×t×d and R is the Self-Attention function with relative position, defined as follows: R(q,k,v,mk,mv)j = lx∑ i=1 ai,j(vi + m v i,j) (11) where a∗,j = Softmax (e∗,j) and ei,j = qj(ki + mki,j) T d−1/2. 2.4 Why NRETM Could Satisfy Constraints A powerful implicit compulsion comes from the combined force of two aspects: 1) before generating the EOS token (i.e., End-Of-Sequence Token), all the predicate constraints should be satisfied. As demonstrated in Fig 2, all elements in Control Progress Matrix are set to “satisfied” (i.e., S3) at EOS position; 2) The pre-trained LMs are trained to generate text with limited length. Such a soft way of combining symbolic operators (good at logical and mathematical calculations ) and neural operators (good at wording and phrasing) can retain their respective strengths to the utmost extent. 2.5 What If NRETM Fails to Satisfy Constraints NRETM does not forces the pre-trained LMs to execute the hard constraints on the text decoder explicitly, but instead, provides Control Progress Matrix as input features describing rule execution intermediate values to the text decoder. That is, no explicit effect when NRETM fails to satisfy the constraints. It is possible that our text generators decide to stop the generation before completing all constraints. In our experiments, NRETM has less than 1% chance not to complete all constraints. 2.6 The Generalization Ability of NRETM The generalization ability of NRETM comes from two aspects: 1) NRETM can construct new constraints via combining pre-trained predicates with basic logic operators in arbitrarily complicated ways; 2) To expand a new predicate, users only need to implement the corresponding Logic Trackers, which returns S1-S3 and intermediate values, via executable programs. 
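For concreteness, the cross-attention with the Control Progress Matrix injected as relative positions (Eq. 10–11 in Sec. 2.3.3 above) can be written as a single-head PyTorch function. The single-head simplification and the tensor shapes are assumptions; the actual model applies this inside every decoder layer of the pre-trained LM.

```python
import torch
import torch.nn.functional as F

def cross_attention_with_progress(q, k, v, m_k, m_v):
    """Single-head cross-attention with progress-matrix relative positions.

    q:        (T, d)      decoder queries, one per decoding step
    k, v:     (Lx, d)     encoder keys/values for the constraint expression
    m_k, m_v: (Lx, T, d)  progress-matrix key/value embeddings h^k_{ij}, h^v_{ij}
    Shapes and the single-head simplification are illustrative.
    """
    Lx, T, d = m_k.shape
    # e_{i,j} = q_j (k_i + m^k_{i,j})^T / sqrt(d)
    scores = torch.einsum("td,itd->it", q, k.unsqueeze(1) + m_k) / d ** 0.5
    attn = F.softmax(scores, dim=0)               # normalise over encoder positions i
    # o_j = sum_i a_{i,j} (v_i + m^v_{i,j})
    return torch.einsum("it,itd->td", attn, v.unsqueeze(1) + m_v)
```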
3 Experiment We test our proposed NRETM on the controllable text generation and general text generation tasks. For controllable text generation, we verify NRETM on the complex fine-grained control instructions in the ROCStories Benchmark [12]. Further, we test NRETM on the general text generation tasks, commonsense generation and document-level machine translation, to show that NRETM can efficiently integrate prior knowledge into seq2seq models towards superior generation performance. 3.1 Controllable ROC Stories ROCStories is a corpus of five-sentence stories that capture a rich set of causal and temporal commonsense relations between daily events. Following [18], we extract key phrases from the ground-truth stories. In this experiment, we design multiple predicate logic constraints to inform NRETM about the stories to be generated and verify if NRETM can follow these constraints exactly. Predicate Logic Formulation As shown in table 1, five constraints with increasing difficulties are used: (1) Generate a story with storyline wi in the pith sentence. (2) Generate a story with an ordered storyline w1, · · · , w4 (3) Generate a story with storyline wi in the pith sentence which has lpi words (i = 1, 2). (4) Generate a storyline w1 in the p1th sentence which has lp1 words or sp1 stop words and w2 in the p2th sentence that does not mention w3 (5) Generate a storyline wi in the pith sentence which has lpi words or spi stop words (i = 1, 2). Baselines and Metrics Both baseline and NRETM use T5-Base model [19]. We report Constraints Success Ratio (CSR), the ratio of stories that completely satisfy the given constraints. We additionally report ROUGE-L (RL), BERT-Score (BS), BLEU-1/4 (B1/4) to show the generated stories quality. Main Results As shown in Table 1, in all five predicate logic constraints, compared to the T5 model, the NRETM model achieves higher Constraint Success Ratio and maintains a similar level of ROUGHL, showing that the NRETM model can be flexibly controlled without loss of generated text quality. The gap in CSR between the T5 and NRETM model is moderate in the first two constraints with simple token permutations. However, the success ratio of T5 model drops significantly given constraints that requires long-range numerical tracking (e.g., sentence length and the count of stop words). 3.2 Commonsense Generation COMMONGEN is a generation benchmark dataset target explicitly test machines for the ability of generative commonsense reasoning. Given a set of common concepts the task is to generate a coherent sentence describing an everyday scenario using these concepts. Predicate Logic Formulation The input is an unordered set of n concepts x = {xi}ni=1. From the expectation of COMMONGEN, one easily obtained prior knowledge is that each xi must appear in output y. The corresponding predicate logic constraint Pc is: Pc = ∧ni=1 ( Copy(xi) ) where y will appear by default, for the sake of brevity, we have omitted y in predicate Copy. Another prior knowledge comes from the observation that generating y requires giving the correct morphological inflections of the concept word rather than copy its original form. Let x̃i = {x̃ik} |x̃i| k=1 denote all inflections of xi. y covers concept xi, if at least one of {x̃ik} |x̃i| k=1 appears. The constraint P̂c is: P̂c = ∧ni=1 ( ∨|x̃ i| j=1 Copy(x̃ i j) ) Baselines and Metrics We experiment with T5-Base and T5-Large. We equip NRETM into the T5Large and T5-Base model to incorporate Pc and P̂c respectively (+ NRETM Pc) (+ NRETM P̂c). 
Grid Beam Search (GBS) [20] (+ G) is a well-designed decoding method that ensures the generation model satisfies the lexical constraints. We only apply GBS to the T5-Base model due to the memory constraint. Following the suggestions in [13], we use CIDEr [21] and SPICE [22] to automatically assess the quality of generated texts. We calculate constraint satisfaction for all constraints (ALL), novel constraints (Novel) and seen constraints (Seen). Main Results Table 2 shows that the NRETM model improves the constraint satisfaction over the baselines for all cases, achieving close to 100% (i.e., 99.5% and 99.2%). While GBS achieves perfect constraint satisfaction (i.e., 100%), doing so significantly degrades the output text quality (more than 50 CIDEr), indicating the necessity integrating prior knowledge in training rather than inference. In addition, both prior knowledge Pc and P̂c have a positive effect on our model, improving our T5-large baseline by 3.1 and 5.0 CIDEr score, respectively. Finally, our T5-Large + NRETM P̂c model outperforms the previous state-of-the-art result [23], which integrates the ConceptNet [24] into the BART model, suggesting that our incorporated task-specific prior knowledge could be as powerful as knowledge from large-scale hand-crafted corpus. All of the above shows how potential it is to find a method that could execute multiple rules effectively. 3.3 Document-Level Machine Translation Document-level machine translation tasks is a general text generation task, where the goal is to translate segments of text (up to an entire document). Following [14], we use TED15 Zh-En (from IWSLT 2014 and 2015 [25, 26]) as training and validation set and 2010-2013 TED as the test set. Predicate Logic Formulation The input is an ordered set of n sentences in the source language that form a document x = {xi}ni=1, the expected output is a translated document y = {yi}ni=1 in the target language. We observed that neural model is prone to sentence correspondence confusion (the ith sentence in source document is translated as the jth sentence in target document) when doing document-level translation. To alleviate this problem, we propose incorporating Doc-mBART25 with prior knowledge: each source sentence should be translated only once. It is formulated as: TranslatedOnce(xi) = { S3 θ(y:t) > i S2 θ(y:t) = i S1 θ(y:t) < i (12) where θ(·) returns the line number of yt in y, as t is monotonic during generation, the status only set to be 2 once. To trace the sentence translation progress, we add an additional End-Of-Sentence token at the end of each sentence to the training data. Once NRETM finishes the ith sentence (generating an end-of-sentence token) in the decoder, we assume that the ith sentence in the encoder has been translated. The predicate logic constraint Pc of this task can be formulated as: Pc = ∧ni=1 ( TranslatedOnce(xi) ) Baselines and Metrics We combine our NRETM Pc component with the Doc-mBART25 model proposed in [3] which is a state-of-the-art multilingual pre-trained language model. We compare this model with the state-of-the-art non-pretraining and pretraining approaches, including HAN (Hierarchical Attention Networks) [14], Doc-mBART25 and Sen-mBART25 proposed in [3]. When implementing our model, we use the same pre-processing method, blocks segmentation strategy and beam search setting as [3]. TED15 Zh-En provides sentence-to-sentence translation from Chinese to English. 
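The TranslatedOnce predicate of Eq. (12) lends itself to a very small tracker: count end-of-sentence markers in the partial target to recover θ(y:t) and compare it with the source sentence index. The sketch below is a hypothetical implementation of that idea; the marker token and 1-based indexing are assumptions.

```python
def translated_once_tracker(source_index, eos_token="<eos>"):
    """Sketch of the TranslatedOnce predicate from Eq. (12).

    source_index is the 1-based position of a source sentence; the tracker
    counts end-of-sentence markers in the partial target to decide whether
    that sentence has not been reached (S1), is being translated (S2), or
    has already been translated (S3). The marker token is illustrative.
    """
    def tracker(partial_output_tokens):
        current_line = partial_output_tokens.count(eos_token) + 1   # theta(y_:t)
        if current_line < source_index:
            return "S1"
        if current_line == source_index:
            return "S2"
        return "S3"
    return tracker

# usage sketch: one tracker per source sentence, conjoined into P_c
# trackers = [translated_once_tracker(i + 1) for i in range(num_source_sentences)]
```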
We use both document-level (d-BLEU) and sentence-level (s-BLEU) to measure the similarities between generated target document and the source document. We also report Sentence Aligned Ratio (SAR), the ratio of source and target documents with the same sentence count, to show the effectiveness of our control over this translation prior knowledge. Main Results Table 3 shows that the NRETM Pc component helps the Doc-mBART25 model to better capture the sentence-level corresponding relationship between the source and target documents. In particular, sentence-level alignment ratio is improved from 98.7% to 100%. The improvement in s-BLEU (+ 1.1 BLEU) also confirms that our final Doc-mBART25 + NRETM Pc model learns to translate sentences based on the sentence order in source documents. 3.4 Discussion Updating Progress in Encoder In Sec 2.3.3, we incorporate the Control Progress Matrix as relative position embeddings in the decoder. To show the importance of this design choice, we conduct an ablation study in Table 4 where the row of Control Progress Matrix is concatenated with the encoder output. We find that updating the rule execution progress information with the encoder output contributes little to improve the CSR. This shows that simply extracting rule execution intermediate values is not enough. This could be because the encoder that encodes the rule execution intermediate values cannot effectively broadcast this information into text decoders. NRETM Robustness The above experiment results are based on the perfect training data. In this section, we explore the effect of training data noise. We corrupt the training data by replacing the input commonsense keywords with a random sampled one under the probability 5%, 10%, 15%, 25%, and 50% (Validation and Test Split remain unchanged). As shown in Table 5, in all noise levels, NRETM successfully achieves higher constraint coverage (i.e, Cons) and CIDEr score than the T5 baseline model, showing that NRETM is robust to the training data noise. It is worthwhile to note that the main goal of NRETM is to incorporate constraints that are satisfied by the training data into transformer-based seq2seq text generators. It is reasonable to assume that in practice, the noise level should be relatively low (e.g., 0% - 10%). Zero-Shot Execution In Table 1, we show that the pre-trained language model T5 cannot handle complicated and fine-grained constraints even after fine-tuning. Here, we further demonstrate that NRETM model is capable to handle zero-shot rule execution. We train the T5 and NRETM model to only mention keywords in the 3rd, 4th and 5th sentence and test these models to mention keywords in the first and second sentence of the whole story. As shown in Table 6, although both T5 and NRETM model mention most of the keywords (95.7% and 98.3% respectively) in the generated story, the T5 model only mention 19.7% of keywords in the correct sentence and the NRETM model makes 97.7% of keywords correct. This is becuase the T5 model cannot recognize the novel sentence index (i.e., the first and second) during the generation. The logic tracker helps the NRETM model to generalize to handle these cases. Running Efficiency We compare the inference time (in minutes) for NRETM on the test split of commonsense generation task in Table 7. All models use the beam search decoding algorithm with beam size 5. Adding NRETM components to T5-Base and T5-Large approximately double the inference time. 
While the Grid Beam Search (GBS) algorithm uses a much longer inference time. Compared to existing constrained decoding approaches, NRETM uses much less computational costs. 4 Related Work NRETM is mainly related to two lines of research work in text generation: constrained decoding and prior knowledge integration. Constrained Decoding NEUROLOGICEarly work in constrained decoding can be traced back to dual decomposition and lagrangian relaxation [27, 28]. These works focus on sequence labelling and parsing problems where the solution space is relatively small, compared to text generation tasks. Research efforts in text generation tasks [29–31] involve controllable generation methods where the generators are trained on text data with the labeled target attributes. CTRL [4], PPLM [6] and CoCon [32] are recent approaches that built on the transformer-based large-scale pretrained LMs, they pay more attention on controlling high-level attributes, phrases and keywords. [33, 34] propose to trace the control task progress in the text generation decoder. [33] treats the control signal as training loss in memory network and [34] treats the control signal as additional input features. [35] controls the text generation outputs via mentoring the output gradient. However, these work only focus on specific controlling tasks such as phrases copying and generation length. While NRETM focuses on controlling text generation to follow arbitrary logical constraints, leading to a fine-grained control. They can be seen as special cases of NRETM. Recently, GDC [36] permits to specify both pointwise and distributional constraints over the target LMs. Very recently, NEUROLOGIC [7] was proposed to generate fluent text while satisfying complex lexical constraints (in a predicate logic form). There are three main differences between NRETM and NEUROLOGIC: 1) NEUROLOGIC only provides control constraints over the text generators. Instead, NRETM is a general framework that provides control constraints (e.g., copy or not copy words) and prior knowledge (e.g., translating sentences one by one). NEUROLOGIC can be viewed as a special case of NRETM; 2) NEUROLOGIC is an inference-only algorithm that only controls the model to generate or avoid specific words or phrases at decoding time; while NRETM fine-tunes the pre-trained transformer-based seq2seq text generators with the predicate logic constraints; 3) NEUROLOGIC only supports the “copy” predicate (i.e., to generate or not to generate specific words or phrases), while NRETM is a general framework that supports various control predicates. NRETM supports 6 kinds of logic operators in this paper, and it is also possible for users to expand new logic operators. Prior Knowledge Integration Existing efforts [37–41] to incorporate prior knowledge into sequence-to-sequence framework either resort to modifying model architectures, including adding external memory components, specialized decoding method or designing training objectives, including minimum risk training. These methods usually can only support to inject one narrow type of knowledge into the neural models. To the best of our knowledge, we first attempt to formalize the prior knowledge integration in seq2seq generation as text generation that conforms to predicate logic constraints. 5 Conclusion and Future Work In this paper, we propose a unified controllable generation framework that leverages predicate logic constraints to implement efficient complex fine-grained control and scalable prior knowledge integration. 
We explore and compare two controllable strategies: dynamic tracking and static strategy, and show that the proposed dynamic tracking mechanism significantly outperforms the static ones. Empirical results on three benchmarks indicate that NRETM could achieve accurate control and exhibits a superior generation ability over different tasks. Pre-trained models have been the dominant paradigm in natural language processing, and researchers resort to massive data and large-scale models to improve performance. We unify the rules used in various tasks into the form of predicate logic, provide the possibility to pretrain models on massive rules. In the future, we will explore pre-training large-scale neural rule-execution machine with massive rules and data. Broader Impact Our work proposes a unified and scalable approach to efficiently perform fine-grained controllable text generation and incorporate multiple prior knowledge for superior text generation performance. This work uses story generation, machine translation, commonsense generation as applications to verify the effectiveness. However, while our proposed method achieves promise performance on several benchmarks, deployment of our method in the real world requires a careful analysis of potential societal benefits and harms (e.g., the harms associated with furthering negative stereotypes against certain vulnerable groups). The potential ethical issues include: powerful language models might be used to generate abuse, faked or misleading content in the news or on social media; they might pose safety concerns if they are used to generate harassing or hateful materials. In order to mitigate these risks, it is possible to use AI systems to fight against misleading content and harassing material. However, as discussed in previous work [42, 43], mitigating these risks could be an extremely complex socio-technical problem that many are working to understand and solve. Acknowledgement We thank anonymous reviewers for their insightful suggestions to improve this paper. Funding Transparency Statement Yufei Wang, Can Xu, Huang Hu, Chongyang Tao and Daxin Jiang are supported by Microsoft Software Technology Center at Asia (STCA). Yufei Wang also receives a MQ Research Excellence Scholarship and a CSIRO’s DATA61 Top-up Scholarship.
1. What is the main contribution of the paper in the field of natural language processing? 2. How does the proposed NRETM model incorporate rules and constraints into transformer-based seq2seq generation models? 3. Can you explain how the state flag and logic tracker work together to guide the model in generating output sequences? 4. Why did the authors choose to use a one-layer transformer encoder to encode the state flag? 5. How effective is the NRETM model compared to other baseline models, including NEUROLOGIC DECODING? 6. What are some limitations of the evaluation metrics used in the paper, such as Rouge L and CIDEr? 7. Are there any potential issues with the generalizability of the framework, given its reliance on pre-defined logic operators? 8. Could the authors have included more diverse automatic metrics to evaluate the quality of their generated outputs?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a general framework, NRETM, for conditional transformer-based seq2seq models. The paper first uses a state flag to indicate the execution progress of each predicate in the constraint set, expressed as a concatenation of logic trackers. The paper uses a one-layer transformer encoder to encode the state flag. The paper utilizes the state matrix as relative position information to integrate it into the model. The results in three experiments show that NRETM improves the transformer's performance. Review Strengths: The paper focuses on an interesting problem about constraint-based generation tasks. The paper solves this problem by proposing a new Neural Rule-Execution Tracking Machine that tries to incorporate rules into current transformer-based seq2seq generation models. The paper utilizes the logic tracker as relative position information to guide the model in the generation. The experiment results over three different tasks seem to be promising. The paper lists detailed experiment settings for three different tasks. The paper also discusses the ability of the model in a zero-shot setting. The paper includes code as a supplement and provides more generation results as case studies for three different tasks. Those examples show that the model captures those constraints through the proposed framework. Weaknesses: Section 2.4 is a little bit confusing. Even though the paper provides Figure 2 as a running example of the NRETM model with three logic constraints, it still takes me some time to understand all the concepts introduced in this section. It would be better to reorganize the content. For example, the logic tracker should be introduced before the state flag, which would help readers better understand the concepts. In Section 3, the paper only compares their model to a limited set of baselines. The paper needs to add some baselines mentioned in related work, such as NEUROLOGIC DECODING (Lu et al., 2020). In Section 3.2, the constraint satisfaction metric is not clearly defined. The evaluation metrics for all/novel/seen constraints are also absent. The evaluation metrics are also limited. For story generation, ROUGE-L is not able to cover all the details. Similarly, CIDEr can also show only a limited aspect of generation results. It would be better to include more automatic metrics to show a more comprehensive view of the generation quality, such as BERTScore (Zhang et al., 2019), BLEU, etc. The generalizability of the framework is unclear. It seems that the model can only be applied with several pre-defined logic operators. Lu, X., West, P., Zellers, R., Bras, R. L., Bhagavatula, C., & Choi, Y. (2020). NeuroLogic decoding: (Un)supervised neural text generation with predicate logic constraints. arXiv preprint arXiv:2010.12884. Zhang, T., Kishore, V., Wu, F., Weinberger, K. Q., & Artzi, Y. (2019). BERTScore: Evaluating text generation with BERT. arXiv preprint arXiv:1904.09675.
NIPS
Title Neural Rule-Execution Tracking Machine For Transformer-Based Text Generation Abstract Sequence-to-Sequence (Seq2Seq) neural text generation models, especially the pre-trained ones (e.g., BART and T5), have exhibited compelling performance on various natural language generation tasks. However, the black-box nature of these models limits their application in tasks where specific rules (e.g., controllable constraints, prior knowledge) need to be executed. Previous works either design specific model structures (e.g., Copy Mechanism corresponding to the rule “the generated output should include certain words in the source input”) or implement specialized inference algorithms (e.g., Constrained Beam Search) to execute particular rules through the text generation. These methods require the careful design case-by-case and are difficult to support multiple rules concurrently. In this paper, we propose a novel module named Neural Rule-Execution Tracking Machine (NRETM) that can be equipped into various transformer-based generators to leverage multiple rules simultaneously to guide the neural generation model for superior generation performance in an unified and scalable way. Extensive experiments on several benchmarks verify the effectiveness of our proposed model in both controllable and general text generation tasks. 1 Introduction Transformer-based neural language models (LMs), such as GPT/BART [1–3], have led a wave of new trends in natural language generation, producing texts of prominent quality. They are trained roughly on huge amounts of text corpora to reconstruct the full sentences (i.e., next coming tokens and missing text fragments). Despite their success in varieties of NLP tasks, we argue that the black-box nature of these models leads to inefficiently learning to follow constraints and incorporating prior knowledge. In controllable text generation, most relevant studies [4–6] focus on controlling high-level text attributes (e.g., topic, sentiment) or simply keyword/phrase. More complex fine-grained control constraints such as “generate a sequence of tokens with ‘apple’ in the first sentence which has 15 words and ‘orange’ or ‘oranges’ in the fourth sentence” are less explored. A very recent work [7] reveals that large-scale LMs do not learn to obey the underlying constraints reliably, even in a quite simple constrained generation task (cover all the given keywords without hallucinating new ones). In general text generation, existing works on various tasks reveal the benefit of incorporating task-specific prior knowledge: machine translation [8] (e.g., each source phrase should be translated ∗Work done during the internship at Microsoft STCA. †Corresponding author: Daxin Jiang ([email protected]). 35th Conference on Neural Information Processing Systems (NeurIPS 2021). into exactly one target phrase), text summarization [9] (e.g., the lead bias: front loading the most salient information), dialogue generation [10] (e.g., humans tend to repeat entity names or even long phrases in conversation). However, they either need designing specific model architectures (e.g., Coverage Mechanism and Copy Mechanism) or devising well-designed learning objectives (e.g., GSG [11]). These methods require careful design case-by-case and are difficult to combine multiple arbitrary constraints or prior knowledge simultaneously. 
Motivated by the above research dilemma, we take the first step towards building an unified framework to handle Fine-grained Control and Prior Knowledge Integration and propose a novel module Neural Rule-Execution Tracking Machine (NRETM) 3 Specifically, NRETM is a trainable neural module that can be equipped with transformer-based sequence-to-sequence pre-trained LMs. It can handle constraints in any Predicate Logic Formula, which crucially includes the arbitrarily complicated relations among different control tasks. For example, the above fine-grained constraint can be written as:( InSen(apple, 1 ) ∧ Len(1 , 15 ) ) ∧ ( InSen(orange, 4 ) ∨ InSen(oranges, 4 ) ) To build NRETM, we combat three major challenges: i) modeling the complicated relationships among control tasks and the logic operators (i.e., ∧, ∨) in the constraint expressions; ii) an unified control system is required to execute different control tasks simultaneously and iii), the control signals for different control tasks should be properly aligned with the constraint expressions. NRETM uses the encoder of transformer-based pre-trained sequence-to-sequence LMs to model the relationship between control tasks and the logic operators. NRETM completes different control tasks via nondifferential Logic Trackers (empowered by executable programs) in an unified control progress system during the decoding process. Finally, the encoded constraint expressions and control progress signals are combined together in the transformer decoder. NRETM is fine-tuned with the pre-trained LMs (except logical trackers) to follow the control progress signal and predicate logic formula. NRETM reconciles symbolic computing (that has precise logic and numerical calculation capabilities from logic trackers) with neural language generation (that has an exceptional ability of wording and phrasing), which results in both the accurate controllability and the superior generation performance. For evaluation, we select three representative benchmarks because all of them involve constraints or prior knowledge, allowing us to verify the effectiveness of our proposed NRETM model: ROCStories [12] are five-sentence stories with complicated predicate constraints over the story structure; Commonsense Generation task [13] with the constraints of mentioning all input concepts; TED15 Zh-En document-level machine translation benchmark [14] with prior knowledge of translating input sentences one by one. Our contributions in this work are three-fold: (1) To the best of our knowledge, we are the first to propose a general framework that incorporates control signal and prior knowledge, formulated as predicate logic constraints, into transformer-based seq2seq text generation models; (2) We train (or fine-tune) the transformer-based seq2seq text generation models to follow the predicate logic constraints(i.e., control signal or prior knowledge) by dynamically updating the rule execution intermediate progress value to the text decoder; and (3) Empirical verification of the effectiveness of the proposed approach on three benchmarks. 2 Approach This section first formalizes fine-grained content control task, then introduces an overview of proposed NRETM model, followed by diving into details of each component. 2.1 Fine-Grained Content Control In this work, we focus on fine-grained content control task where the model input consists of predicate logic constraints x = [x1, . . . , xlx ] ∈ X that should be satisfied in the outputs and optional context input c = [c1, . . . , clc ]. 
The encoder takes the concatenation of x and c (i.e., [c; x]) as input. At decoding step t, the decoder takes $y_{:t} = [y_1, \cdots, y_t] \in Y$ as input and generates $y_{t+1}$. 3 Our source code can be found at https://github.com/GaryYufei/NRETM 2.2 Predicate Logic Constraint We define a predicate U(a, y) as a boolean function that indicates whether output y has satisfied control task a, which could involve values (e.g., status, total length, stop word counts) or lexicons (e.g., copying particular words/phrases). In this paper, NRETM accepts predicate logic constraints in Conjunctive Normal Form (CNF): $(U_1 \vee \cdots \vee U_i) \wedge \cdots \wedge (U_k \vee \cdots \vee U_n)$. Each predicate logic constraint includes multiple predicates $U_i$ and basic logic operators (e.g., ∨, ∧ and brackets). 2.3 Neural Rule-Execution Tracking Machine NRETM can be equipped into transformer-based sequence-to-sequence LMs. Figure 1 illustrates an overview of our neural rule-execution tracking machine (NRETM). To enable LMs to follow predicate logic constraints, it is essential to 1) model the complicated relationships among predicates and basic logic operators; 2) control multiple predicates (i.e., control tasks) in the constraints simultaneously; and 3) combine the control signals with the predicate logic constraint expressions. For 1), we treat the whole constraint expression as a natural language sentence and feed it into the transformer encoder. For 2), we propose a set of unified control signals that can be used to dynamically describe the step-wise execution progress of different predicates. For 3), we represent the control signals as relative position embeddings and align them with the encoded constraint expressions in the transformer decoder. 2.3.1 Encoding Predicate Logic Constraints Given a predicate logic constraint expression $x = [x_1, \ldots, x_{l_x}]$, where each $x_i$ corresponds either to a predicate $U_i$ or to a basic logic operator, we feed x into the transformer encoder. Due to the tokenization strategies of pre-trained LMs, each $x_i$ may be tokenized into a contiguous token sequence: x is tokenized into $t = [t_1, \cdots, t_{l_t}]$, where $l_t \geq l_x$ and there exists a mapping $m(t_i) = x_j$. We use $h^e$ to denote the encoder output of x. As pre-trained LMs are trained on a significant amount of natural language sentences, the encoder should capture the complicated sequential relationships within the constraint expressions. 2.3.2 Mentoring Control Progress Specialized controlling components (e.g., Constrained Beam Search [15] and Copy Mechanism [10]) can only be used for limited control tasks. To enable a unified controlling system, we propose to complete control by mentoring control progress. We describe the control progress of different predicates using a unified progress state system. Each predicate $U_i$ has a corresponding Logic Tracker $Q_{U_i}(y)$, which is a non-differentiable executable program (i.e., written in Python) that takes the currently generated output and returns one progress state at each generation step, formulated as follows: $$Q_{U_i}(y) = \begin{cases} S_0 & U_i \text{ is } \emptyset \\ S_1 & U_i \text{ is not triggered in } y \\ (S_2, V) & U_i \text{ is in progress in } y \\ S_3 & U_i \text{ is satisfied in } y \end{cases} \qquad (1)$$ where State $S_0$ is always assigned to the non-predicate $\emptyset$ (i.e., basic logic operators in the constraint expression); State $S_1$ means the tracking for predicate $U_i$ is not triggered in y. For example, when controlling the stop word count of the second sentence, the Logic Tracker returns $S_1$ while the LM is generating the first sentence. State $S_2$ means predicate $U_i$ is in progress, and V is an optional intermediate value that allows fine-grained tracking.
For example, in generation length control, V could be the total target length minus the current length, informing the pre-trained LM of the number of words left to satisfy the constraint. State $S_3$ means $U_i$ is satisfied in y. In short, the Logic Tracker unifies different predicates by returning the same set of control signals. Global Or-Clause Update: Each Logic Tracker traces the execution progress of its corresponding predicate $U_i$ independently. This independent tracing strategy works well in an and-clause because all involved predicates are required to reach State $S_3$. However, only a subset of predicates is required to reach State $S_3$ in an or-clause. Our preliminary experiment shows that the independent tracing strategy trains the model not to complete the constraints. To solve this issue, we propose to update the status of all predicates in the same or-clause to State $S_3$ when one of the predicates reaches State $S_3$. This forces all predicates to finish in State $S_3$ and improves the constraint satisfaction ratio for or-clauses. Control Progress Matrix: Given the predicate logic constraint expression $t = [t_1, \cdots, t_{l_t}]$, we further define the Control Progress Matrix S to align the predicates with the control progress signals returned by their Logic Trackers: $$S = [C(t, \varepsilon);\ C(t, y_{:1});\ \cdots;\ C(t, y_{:t})] \qquad (2)$$ $$C(t, y_{:t}) = [v(t_1, y_{:t}), \cdots, v(t_{l_t}, y_{:t})] \qquad (3)$$ where ε is the empty string at the first decoding step. S is a two-dimensional matrix in which each row describes the control progress of all tokens in t at a single decoding step and each column describes the control progress of a single token in t along all decoding steps. Recall that basic logic operators in predicate logic constraint expressions do not require control progress tracking. Each cell $S_{i,j}$ in S is formulated as: $$S_{i,j} = v(t_i, y_{:j-1}) = \begin{cases} Q_{\emptyset}(y) & m(t_i) = x_k \text{ and } x_k \text{ is a basic logic operator} \\ Q_{U_q}(y) & m(t_i) = x_k \text{ and } x_k \text{ is a predicate } U_q \end{cases} \qquad (4)$$ Example: In Figure 2, we are given three logic constraints: a) copy “car”; b) the stop word ratio of the output should be 0.5; and c) the length of the second sentence should be 6. The basic logic operators & are assigned $S_0$. Length control and stop word ratio maintain intermediate values (e.g., the residual length and stop word ratio). The length control is assigned $S_1$ when generating the first sentence because it is only triggered in the second sentence. Copy control does not have intermediate values, and its state is updated from $S_2$ to $S_3$ only when the corresponding word appears in $y_{:t}$ (at step 10 in our example). Control Progress Matrix Encoder: The Control Progress Matrix S aligns the results from the Logic Trackers with the encoded predicate logic constraint expressions. However, S is a non-differentiable symbolic matrix, with each cell $S_{i,j}$ being a discrete symbol $S_0$ to $S_3$, possibly combined with additional numbers (i.e., V). As the encoder has already captured the inter-relationships in the predicate logic constraints, we only model each cell $S_{i,j}$ independently. To support various types of predicates, we treat $S_{i,j}$ as a string and encode it using a single-layer transformer-based encoder, ShallowEncoder, which shares the same vocabulary and word embeddings as the pre-trained LM: $$h^s_{ij} = \text{ShallowEncoder}(S_{i,j}) \qquad (5)$$ $$\bar{h}^s_{ij} = \text{MeanPooling}(h^s_{ij}) \qquad (6)$$ where $h^s_{ij} \in \mathbb{R}^{l^s_{ij} \times d}$, $\bar{h}^s_{ij} \in \mathbb{R}^{d}$, $l^s_{ij}$ is the length of the tokenized $S_{i,j}$, and d is the hidden size of the ShallowEncoder. We use $\bar{h}^s$ to denote the neural representation of the whole S.
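To make the progress-state system of Eqs. (1)–(4) concrete, below is a minimal Python sketch of two Logic Trackers and of the construction of one Control Progress Matrix row, including the global or-clause update. This is an illustration only, not the released implementation; the class names, the string encoding of states, the 0-based sentence indexing, and the crude sentence splitting are assumptions made for the example.

```python
# Minimal sketch of two Logic Trackers (Eq. 1) and one Control Progress
# Matrix row (Eqs. 2-4). Names and the sentence segmentation are illustrative.

class CopyTracker:
    """Predicate Copy(w): the word w must appear in the output."""
    def __init__(self, word):
        self.word = word

    def state(self, y_tokens):
        # Copy has no intermediate value: S2 (in progress) until w is generated.
        return "S3" if self.word in y_tokens else "S2"


class SentenceLenTracker:
    """Predicate Len(k, n): the k-th sentence (0-based here) must have n words."""
    def __init__(self, sent_idx, target_len):
        self.sent_idx, self.target_len = sent_idx, target_len

    def state(self, y_tokens):
        sents = " ".join(y_tokens).split(".")   # crude sentence segmentation
        cur = len(sents) - 1                    # index of the sentence being decoded
        if cur < self.sent_idx:
            return "S1"                         # not triggered yet
        if cur == self.sent_idx:
            left = self.target_len - len(sents[cur].split())
            return "S3" if left <= 0 else f"S2,{left}"   # intermediate value V
        return "S3"


def control_progress_row(expr_tokens, trackers, or_groups, y_tokens):
    """One row C(t, y_{:t}): one state string per constraint-expression token.

    expr_tokens: tokens of the constraint expression (predicates and operators)
    trackers:    {position -> tracker} for positions that hold a predicate
    or_groups:   list of position sets that belong to the same or-clause
    """
    row = [trackers[i].state(y_tokens) if i in trackers else "S0"
           for i in range(len(expr_tokens))]
    # Global or-clause update: once any member reaches S3, mark the whole clause S3.
    for group in or_groups:
        if any(row[i] == "S3" for i in group):
            for i in group:
                row[i] = "S3"
    return row
```

For the Figure 2 example, the operator tokens would stay at S0, the copy predicate would stay at S2 until “car” is generated, and the length predicate would return S1 throughout the first sentence; stacking such rows over decoding steps yields the matrix S of Eq. (2), whose cells are then encoded as strings by the ShallowEncoder as described above.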
2.3.3 Combining Predicate Logic Constraint with Control Progress Matrix Finally, we combine the encoded predicate logic constraints $h^e$ with the encoded Control Progress Matrix $\bar{h}^s$ in the transformer-based pre-trained LM. Injecting $\bar{h}^s$ into the transformer encoder would require re-computing the encoder content at each decoding step and would prevent the standard parallel training of transformer-based decoders. In addition, as the Control Progress Matrix grows incrementally as decoding goes on, it is natural to inject $\bar{h}^s$ into the transformer decoder. Given the encoder output $h^e$ and decoder input $y_{:t}$, the probability of the next token $y_{t+1}$ can be calculated by: $$h^d_t = \text{KV}(W^s_q y_{:t},\ W^s_k y_{:t},\ W^s_v y_{:t}) \qquad (7)$$ $$o_{t+1} = \text{CrossKV}(W^c_q h^d_t,\ W^c_k h^e,\ W^c_v h^e) \qquad (8)$$ $$p(y_{t+1} \mid x_{1:l_x}, y_{1:t}) = \text{softmax}(W_o\, o_{t+1}) \qquad (9)$$ where $o_{t+1} \in \mathbb{R}^{d_c}$ is the hidden state at step t with $d_c$ the hidden size, and $W_o \in \mathbb{R}^{|V| \times d_c}$. Both KV and CrossKV are the standard key-value self-attention described in [16]. In CrossKV, which takes $h^d_t$ and $h^e$ as input, the resulting attention score matrix has the same size as S, making CrossKV suitable for incorporating our Control Progress Matrix. Control Progress Matrix as Relative Position: Inspired by [17], which incorporates token relative positions into the self-attention module, we propose to inject the Control Progress Matrix as the “relative positions” between the encoder output $h^e$ and the current decoder input $y_{:t}$ in the cross-attention module (Eq. 8). Following this approach, we linearly project each $\bar{h}^s_{ij}$ into a Control Progress Matrix key $h^k_{ij} = W^f_k \cdot \bar{h}^s_{ij} + b^f_k$ and a Control Progress Matrix value $h^v_{ij} = W^f_v \cdot \bar{h}^s_{ij} + b^f_v$. All transformer decoder layers share the same representations. Eq. 8 is changed to: $$o_{t+1} = R(W^c_q h^d_t,\ W^c_k h^e,\ W^c_v h^e,\ h^k,\ h^v) \qquad (10)$$ where $h^k, h^v \in \mathbb{R}^{l_x \times t \times d}$ and R is the self-attention function with relative position, defined as follows: $$R(q, k, v, m^k, m^v)_j = \sum_{i=1}^{l_x} a_{i,j}\,(v_i + m^v_{i,j}) \qquad (11)$$ where $a_{*,j} = \text{Softmax}(e_{*,j})$ and $e_{i,j} = q_j (k_i + m^k_{i,j})^{T} d^{-1/2}$. 2.4 Why NRETM Could Satisfy Constraints A powerful implicit compulsion comes from the combined force of two aspects: 1) before generating the EOS token (i.e., the end-of-sequence token), all the predicate constraints should be satisfied; as demonstrated in Fig. 2, all elements in the Control Progress Matrix are set to “satisfied” (i.e., $S_3$) at the EOS position; 2) the pre-trained LMs are trained to generate text with limited length. Such a soft way of combining symbolic operators (good at logical and mathematical calculations) and neural operators (good at wording and phrasing) retains their respective strengths to the utmost extent. 2.5 What If NRETM Fails to Satisfy Constraints NRETM does not force the pre-trained LMs to execute the hard constraints on the text decoder explicitly; instead, it provides the Control Progress Matrix as input features describing the intermediate values of rule execution to the text decoder. That is, there is no explicit effect when NRETM fails to satisfy the constraints. It is possible that our text generators decide to stop the generation before completing all constraints. In our experiments, NRETM has a less than 1% chance of not completing all constraints. 2.6 The Generalization Ability of NRETM The generalization ability of NRETM comes from two aspects: 1) NRETM can construct new constraints by combining pre-trained predicates with basic logic operators in arbitrarily complicated ways; 2) to add a new predicate, users only need to implement the corresponding Logic Tracker, which returns the states $S_1$–$S_3$ and intermediate values, via an executable program.
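The relative-position cross-attention of Eqs. (10)–(11) can be summarized with a small NumPy sketch. It is a single-head, unbatched illustration under assumed shapes rather than the authors' code: in the model, q would come from the decoder hidden states, k and v from the projected encoder outputs, and m_k, m_v from the projected Control Progress Matrix keys and values.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def relative_cross_attention(q, k, v, m_k, m_v):
    """Eq. (11): R(q, k, v, m^k, m^v) for one attention head.

    q:        (t, d)       decoder queries, one per decoding step j
    k, v:     (l_x, d)     encoder keys / values, one per source position i
    m_k, m_v: (l_x, t, d)  per-(i, j) offsets from the Control Progress Matrix
    returns:  (t, d)       attention outputs, one per decoding step
    """
    l_x, t, d = m_k.shape
    out = np.zeros((t, d))
    for j in range(t):
        # e_{i,j} = q_j (k_i + m^k_{i,j})^T / sqrt(d)
        scores = (k + m_k[:, j]) @ q[j] / np.sqrt(d)   # shape (l_x,)
        a = softmax(scores)                            # a_{*,j}
        # o_j = sum_i a_{i,j} (v_i + m^v_{i,j})
        out[j] = a @ (v + m_v[:, j])                   # shape (d,)
    return out

# Shape-only usage example with random tensors (not real model states).
rng = np.random.default_rng(0)
l_x, t, d = 5, 3, 8
o = relative_cross_attention(
    rng.normal(size=(t, d)), rng.normal(size=(l_x, d)), rng.normal(size=(l_x, d)),
    rng.normal(size=(l_x, t, d)), rng.normal(size=(l_x, t, d)))
assert o.shape == (t, d)
```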
3 Experiment We test our proposed NRETM on the controllable text generation and general text generation tasks. For controllable text generation, we verify NRETM on the complex fine-grained control instructions in the ROCStories Benchmark [12]. Further, we test NRETM on the general text generation tasks, commonsense generation and document-level machine translation, to show that NRETM can efficiently integrate prior knowledge into seq2seq models towards superior generation performance. 3.1 Controllable ROC Stories ROCStories is a corpus of five-sentence stories that capture a rich set of causal and temporal commonsense relations between daily events. Following [18], we extract key phrases from the ground-truth stories. In this experiment, we design multiple predicate logic constraints to inform NRETM about the stories to be generated and verify if NRETM can follow these constraints exactly. Predicate Logic Formulation As shown in table 1, five constraints with increasing difficulties are used: (1) Generate a story with storyline wi in the pith sentence. (2) Generate a story with an ordered storyline w1, · · · , w4 (3) Generate a story with storyline wi in the pith sentence which has lpi words (i = 1, 2). (4) Generate a storyline w1 in the p1th sentence which has lp1 words or sp1 stop words and w2 in the p2th sentence that does not mention w3 (5) Generate a storyline wi in the pith sentence which has lpi words or spi stop words (i = 1, 2). Baselines and Metrics Both baseline and NRETM use T5-Base model [19]. We report Constraints Success Ratio (CSR), the ratio of stories that completely satisfy the given constraints. We additionally report ROUGE-L (RL), BERT-Score (BS), BLEU-1/4 (B1/4) to show the generated stories quality. Main Results As shown in Table 1, in all five predicate logic constraints, compared to the T5 model, the NRETM model achieves higher Constraint Success Ratio and maintains a similar level of ROUGHL, showing that the NRETM model can be flexibly controlled without loss of generated text quality. The gap in CSR between the T5 and NRETM model is moderate in the first two constraints with simple token permutations. However, the success ratio of T5 model drops significantly given constraints that requires long-range numerical tracking (e.g., sentence length and the count of stop words). 3.2 Commonsense Generation COMMONGEN is a generation benchmark dataset target explicitly test machines for the ability of generative commonsense reasoning. Given a set of common concepts the task is to generate a coherent sentence describing an everyday scenario using these concepts. Predicate Logic Formulation The input is an unordered set of n concepts x = {xi}ni=1. From the expectation of COMMONGEN, one easily obtained prior knowledge is that each xi must appear in output y. The corresponding predicate logic constraint Pc is: Pc = ∧ni=1 ( Copy(xi) ) where y will appear by default, for the sake of brevity, we have omitted y in predicate Copy. Another prior knowledge comes from the observation that generating y requires giving the correct morphological inflections of the concept word rather than copy its original form. Let x̃i = {x̃ik} |x̃i| k=1 denote all inflections of xi. y covers concept xi, if at least one of {x̃ik} |x̃i| k=1 appears. The constraint P̂c is: P̂c = ∧ni=1 ( ∨|x̃ i| j=1 Copy(x̃ i j) ) Baselines and Metrics We experiment with T5-Base and T5-Large. We equip NRETM into the T5Large and T5-Base model to incorporate Pc and P̂c respectively (+ NRETM Pc) (+ NRETM P̂c). 
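As a brief illustration of how the constraints Pc and P̂c above could be assembled and checked, here is a sketch in which a constraint is a conjunction (a list of or-clauses) of Copy predicates. This is not the paper's implementation; in particular, `get_inflections` is a hypothetical stand-in, since the paper does not specify how the inflection sets x̃i are obtained.

```python
# Sketch: the CommonGen constraints P_c and P_hat_c as CNF over Copy predicates.

def get_inflections(word):
    # Stand-in only: in practice this could query a morphology lexicon.
    return {word, word + "s", word + "ing", word + "ed"}

def build_p_c(concepts):
    # P_c = AND_i Copy(x_i): one single-predicate clause per concept.
    return [[("Copy", x)] for x in concepts]

def build_p_hat_c(concepts):
    # P_hat_c = AND_i ( OR_k Copy(x_i_k) ): any inflection of x_i may appear.
    return [[("Copy", infl) for infl in sorted(get_inflections(x))]
            for x in concepts]

def satisfied(cnf, output_tokens):
    # A CNF constraint holds if every clause has at least one satisfied predicate.
    toks = set(output_tokens)
    return all(any(word in toks for (_, word) in clause) for clause in cnf)
```

For example, `satisfied(build_p_hat_c(["dog", "throw"]), "a dog throws a ball".split())` holds once some inflection of each concept appears in the output, which is the relaxation that P̂c expresses relative to Pc.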
Grid Beam Search (GBS) [20] (+ G) is a well-designed decoding method that ensures the generation model satisfies the lexical constraints. We only apply GBS to the T5-Base model due to the memory constraint. Following the suggestions in [13], we use CIDEr [21] and SPICE [22] to automatically assess the quality of generated texts. We calculate constraint satisfaction for all constraints (ALL), novel constraints (Novel) and seen constraints (Seen). Main Results Table 2 shows that the NRETM model improves constraint satisfaction over the baselines in all cases, achieving close to 100% (i.e., 99.5% and 99.2%). While GBS achieves perfect constraint satisfaction (i.e., 100%), doing so significantly degrades the output text quality (by more than 50 CIDEr), indicating the necessity of integrating prior knowledge during training rather than only at inference. In addition, both kinds of prior knowledge, Pc and P̂c, have a positive effect on our model, improving our T5-Large baseline by 3.1 and 5.0 CIDEr, respectively. Finally, our T5-Large + NRETM P̂c model outperforms the previous state-of-the-art result [23], which integrates ConceptNet [24] into the BART model, suggesting that our incorporated task-specific prior knowledge can be as powerful as knowledge from a large-scale hand-crafted corpus. All of the above shows the potential of a method that can execute multiple rules effectively. 3.3 Document-Level Machine Translation Document-level machine translation is a general text generation task where the goal is to translate segments of text (up to an entire document). Following [14], we use TED15 Zh-En (from IWSLT 2014 and 2015 [25, 26]) as the training and validation sets and 2010–2013 TED as the test set. Predicate Logic Formulation The input is an ordered set of n sentences in the source language that form a document $x = \{x_i\}_{i=1}^{n}$; the expected output is a translated document $y = \{y_i\}_{i=1}^{n}$ in the target language. We observed that neural models are prone to sentence correspondence confusion (the ith sentence in the source document is translated as the jth sentence in the target document) when doing document-level translation. To alleviate this problem, we propose equipping Doc-mBART25 with the prior knowledge that each source sentence should be translated only once. It is formulated as: $$\text{TranslatedOnce}(x_i) = \begin{cases} S_3 & \theta(y_{:t}) > i \\ S_2 & \theta(y_{:t}) = i \\ S_1 & \theta(y_{:t}) < i \end{cases} \qquad (12)$$ where θ(·) returns the line number of $y_t$ in y; as t is monotonic during generation, the status is set to $S_2$ only once. To trace the sentence translation progress, we add an additional end-of-sentence token at the end of each sentence in the training data. Once NRETM finishes the ith sentence (generating an end-of-sentence token) in the decoder, we assume that the ith sentence in the encoder has been translated. The predicate logic constraint Pc of this task can be formulated as: $P_c = \wedge_{i=1}^{n}\big(\text{TranslatedOnce}(x_i)\big)$. Baselines and Metrics We combine our NRETM Pc component with the Doc-mBART25 model proposed in [3], which is a state-of-the-art multilingual pre-trained language model. We compare this model with the state-of-the-art non-pretraining and pretraining approaches, including HAN (Hierarchical Attention Networks) [14], and Doc-mBART25 and Sen-mBART25 proposed in [3]. When implementing our model, we use the same pre-processing method, block segmentation strategy and beam search setting as [3]. TED15 Zh-En provides sentence-to-sentence translation from Chinese to English.
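For concreteness, a minimal sketch of a Logic Tracker implementing the TranslatedOnce predicate of Eq. (12) is given below. The end-of-sentence marker name and the 0-based sentence indexing are assumptions for the example; θ(y:t) is taken to be the number of end-of-sentence tokens generated so far.

```python
# Sketch of the TranslatedOnce predicate (Eq. 12). "<eos-sent>" is an
# illustrative marker name, not necessarily the one used in the paper.

EOS_SENT = "<eos-sent>"

def translated_once_state(i, y_tokens):
    """State of TranslatedOnce(x_i) given the decoded prefix y_{:t}."""
    line_no = y_tokens.count(EOS_SENT)   # theta(y_{:t}): sentence currently decoded
    if line_no > i:
        return "S3"   # x_i has already been translated
    if line_no == i:
        return "S2"   # x_i is currently being translated
    return "S1"       # translation of x_i has not started yet
```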
We use both document-level (d-BLEU) and sentence-level (s-BLEU) to measure the similarities between generated target document and the source document. We also report Sentence Aligned Ratio (SAR), the ratio of source and target documents with the same sentence count, to show the effectiveness of our control over this translation prior knowledge. Main Results Table 3 shows that the NRETM Pc component helps the Doc-mBART25 model to better capture the sentence-level corresponding relationship between the source and target documents. In particular, sentence-level alignment ratio is improved from 98.7% to 100%. The improvement in s-BLEU (+ 1.1 BLEU) also confirms that our final Doc-mBART25 + NRETM Pc model learns to translate sentences based on the sentence order in source documents. 3.4 Discussion Updating Progress in Encoder In Sec 2.3.3, we incorporate the Control Progress Matrix as relative position embeddings in the decoder. To show the importance of this design choice, we conduct an ablation study in Table 4 where the row of Control Progress Matrix is concatenated with the encoder output. We find that updating the rule execution progress information with the encoder output contributes little to improve the CSR. This shows that simply extracting rule execution intermediate values is not enough. This could be because the encoder that encodes the rule execution intermediate values cannot effectively broadcast this information into text decoders. NRETM Robustness The above experiment results are based on the perfect training data. In this section, we explore the effect of training data noise. We corrupt the training data by replacing the input commonsense keywords with a random sampled one under the probability 5%, 10%, 15%, 25%, and 50% (Validation and Test Split remain unchanged). As shown in Table 5, in all noise levels, NRETM successfully achieves higher constraint coverage (i.e, Cons) and CIDEr score than the T5 baseline model, showing that NRETM is robust to the training data noise. It is worthwhile to note that the main goal of NRETM is to incorporate constraints that are satisfied by the training data into transformer-based seq2seq text generators. It is reasonable to assume that in practice, the noise level should be relatively low (e.g., 0% - 10%). Zero-Shot Execution In Table 1, we show that the pre-trained language model T5 cannot handle complicated and fine-grained constraints even after fine-tuning. Here, we further demonstrate that NRETM model is capable to handle zero-shot rule execution. We train the T5 and NRETM model to only mention keywords in the 3rd, 4th and 5th sentence and test these models to mention keywords in the first and second sentence of the whole story. As shown in Table 6, although both T5 and NRETM model mention most of the keywords (95.7% and 98.3% respectively) in the generated story, the T5 model only mention 19.7% of keywords in the correct sentence and the NRETM model makes 97.7% of keywords correct. This is becuase the T5 model cannot recognize the novel sentence index (i.e., the first and second) during the generation. The logic tracker helps the NRETM model to generalize to handle these cases. Running Efficiency We compare the inference time (in minutes) for NRETM on the test split of commonsense generation task in Table 7. All models use the beam search decoding algorithm with beam size 5. Adding NRETM components to T5-Base and T5-Large approximately double the inference time. 
While the Grid Beam Search (GBS) algorithm uses a much longer inference time. Compared to existing constrained decoding approaches, NRETM uses much less computational costs. 4 Related Work NRETM is mainly related to two lines of research work in text generation: constrained decoding and prior knowledge integration. Constrained Decoding NEUROLOGICEarly work in constrained decoding can be traced back to dual decomposition and lagrangian relaxation [27, 28]. These works focus on sequence labelling and parsing problems where the solution space is relatively small, compared to text generation tasks. Research efforts in text generation tasks [29–31] involve controllable generation methods where the generators are trained on text data with the labeled target attributes. CTRL [4], PPLM [6] and CoCon [32] are recent approaches that built on the transformer-based large-scale pretrained LMs, they pay more attention on controlling high-level attributes, phrases and keywords. [33, 34] propose to trace the control task progress in the text generation decoder. [33] treats the control signal as training loss in memory network and [34] treats the control signal as additional input features. [35] controls the text generation outputs via mentoring the output gradient. However, these work only focus on specific controlling tasks such as phrases copying and generation length. While NRETM focuses on controlling text generation to follow arbitrary logical constraints, leading to a fine-grained control. They can be seen as special cases of NRETM. Recently, GDC [36] permits to specify both pointwise and distributional constraints over the target LMs. Very recently, NEUROLOGIC [7] was proposed to generate fluent text while satisfying complex lexical constraints (in a predicate logic form). There are three main differences between NRETM and NEUROLOGIC: 1) NEUROLOGIC only provides control constraints over the text generators. Instead, NRETM is a general framework that provides control constraints (e.g., copy or not copy words) and prior knowledge (e.g., translating sentences one by one). NEUROLOGIC can be viewed as a special case of NRETM; 2) NEUROLOGIC is an inference-only algorithm that only controls the model to generate or avoid specific words or phrases at decoding time; while NRETM fine-tunes the pre-trained transformer-based seq2seq text generators with the predicate logic constraints; 3) NEUROLOGIC only supports the “copy” predicate (i.e., to generate or not to generate specific words or phrases), while NRETM is a general framework that supports various control predicates. NRETM supports 6 kinds of logic operators in this paper, and it is also possible for users to expand new logic operators. Prior Knowledge Integration Existing efforts [37–41] to incorporate prior knowledge into sequence-to-sequence framework either resort to modifying model architectures, including adding external memory components, specialized decoding method or designing training objectives, including minimum risk training. These methods usually can only support to inject one narrow type of knowledge into the neural models. To the best of our knowledge, we first attempt to formalize the prior knowledge integration in seq2seq generation as text generation that conforms to predicate logic constraints. 5 Conclusion and Future Work In this paper, we propose a unified controllable generation framework that leverages predicate logic constraints to implement efficient complex fine-grained control and scalable prior knowledge integration. 
We explore and compare two controllable strategies: dynamic tracking and static strategy, and show that the proposed dynamic tracking mechanism significantly outperforms the static ones. Empirical results on three benchmarks indicate that NRETM could achieve accurate control and exhibits a superior generation ability over different tasks. Pre-trained models have been the dominant paradigm in natural language processing, and researchers resort to massive data and large-scale models to improve performance. We unify the rules used in various tasks into the form of predicate logic, provide the possibility to pretrain models on massive rules. In the future, we will explore pre-training large-scale neural rule-execution machine with massive rules and data. Broader Impact Our work proposes a unified and scalable approach to efficiently perform fine-grained controllable text generation and incorporate multiple prior knowledge for superior text generation performance. This work uses story generation, machine translation, commonsense generation as applications to verify the effectiveness. However, while our proposed method achieves promise performance on several benchmarks, deployment of our method in the real world requires a careful analysis of potential societal benefits and harms (e.g., the harms associated with furthering negative stereotypes against certain vulnerable groups). The potential ethical issues include: powerful language models might be used to generate abuse, faked or misleading content in the news or on social media; they might pose safety concerns if they are used to generate harassing or hateful materials. In order to mitigate these risks, it is possible to use AI systems to fight against misleading content and harassing material. However, as discussed in previous work [42, 43], mitigating these risks could be an extremely complex socio-technical problem that many are working to understand and solve. Acknowledgement We thank anonymous reviewers for their insightful suggestions to improve this paper. Funding Transparency Statement Yufei Wang, Can Xu, Huang Hu, Chongyang Tao and Daxin Jiang are supported by Microsoft Software Technology Center at Asia (STCA). Yufei Wang also receives a MQ Research Excellence Scholarship and a CSIRO’s DATA61 Top-up Scholarship.
1. What is the focus and contribution of the paper regarding sequence-to-sequence models? 2. What are the strengths of the proposed approach, particularly in handling complex constraints? 3. Do you have any concerns about the novelty of the paper compared to prior works? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. What are some minor suggestions for improving the paper?
Summary Of The Paper Review
Summary Of The Paper The authors propose an approach aimed at encouraging transformer-based sequence-to-sequence models to respect constraints on the text they generate. In particular, the authors propose a particular syntactic representation of whether propositional logic-style constraints have been satisfied in some (partially) generated text, which is tracked throughout decoding. This syntactic representation is a function both of the current decoding step as well as the index of the particular constraint, and is updated dynamically as decoding progresses. The authors tokenize and encode these representations, and use them inside a relative-position-style attention mechanism. The authors fine-tune their models using these dynamic representations and they show that this allows for better control over various aspects of the generated text on RocStories, Commongen, and Zh-En document level translation tasks, with comparable or better quality. Review This paper tackles an interesting problem, and it obtains good results. Another nice thing about the paper is the authors largely consider constraints that are not easy to implement with just a simple constrained beam search. In particular, rather than constraints which disallow certain tokens or alignments, the authors consider constraints that require the presence or a particular number of certain words, which are much more challenging to guarantee with standard decoding algorithms. In terms of contribution, the core contribution of the paper appears to be the proposal of an approach to dynamically and explicitly updating the decoder in response to whether or how much the text generated so far conforms with some pre-specified constraints. While this sort of dynamic updating seems quite reasonable, similar ideas have been proposed before (as the authors note), such as in the case of coverage attention (Tu et al., 2016). Accordingly, I think the proposed approach would feel more compelling if the authors could argue that it is superior to other approaches to updating the decoder in response to how well constraints have been satisfied in the generated text so far. However, I don't think there are any such comparisons. In particular, it seems the main baselines the authors consider are either a baseline seq2seq model with the constraints encoded statically on the source side (Table 1) or a baseline seq2seq model with a constrained decoding algorithm (Table 2). As such, it's hard to tell how crucial the various modeling/encoding choices the authors have made in implementing their approach are. (A natural baseline not considered, for instance, might involve dynamically updating the constraint tokens consumed by the encoder in the baseline in Table 1 instead of using the authors' proposed attention model). In terms of presentation, I think the paper is largely clear, although I would encourage the authors to emphasize earlier on that their method involves fine-tuning the model rather than simply implementing constraints at decoding time; I think the fact that fine-tuning is involved only really becomes clear on the bottom of page 5. Minor: Are the script 'U's on like 116 different from the non-script upper case 'U's on line 109? Update after response from authors: thanks for your response; I'm increasing my score in view of the new baseline results.
NIPS
Title Neural Rule-Execution Tracking Machine For Transformer-Based Text Generation Abstract Sequence-to-Sequence (Seq2Seq) neural text generation models, especially the pre-trained ones (e.g., BART and T5), have exhibited compelling performance on various natural language generation tasks. However, the black-box nature of these models limits their application in tasks where specific rules (e.g., controllable constraints, prior knowledge) need to be executed. Previous works either design specific model structures (e.g., Copy Mechanism corresponding to the rule “the generated output should include certain words in the source input”) or implement specialized inference algorithms (e.g., Constrained Beam Search) to execute particular rules through the text generation. These methods require the careful design case-by-case and are difficult to support multiple rules concurrently. In this paper, we propose a novel module named Neural Rule-Execution Tracking Machine (NRETM) that can be equipped into various transformer-based generators to leverage multiple rules simultaneously to guide the neural generation model for superior generation performance in an unified and scalable way. Extensive experiments on several benchmarks verify the effectiveness of our proposed model in both controllable and general text generation tasks. 1 Introduction Transformer-based neural language models (LMs), such as GPT/BART [1–3], have led a wave of new trends in natural language generation, producing texts of prominent quality. They are trained roughly on huge amounts of text corpora to reconstruct the full sentences (i.e., next coming tokens and missing text fragments). Despite their success in varieties of NLP tasks, we argue that the black-box nature of these models leads to inefficiently learning to follow constraints and incorporating prior knowledge. In controllable text generation, most relevant studies [4–6] focus on controlling high-level text attributes (e.g., topic, sentiment) or simply keyword/phrase. More complex fine-grained control constraints such as “generate a sequence of tokens with ‘apple’ in the first sentence which has 15 words and ‘orange’ or ‘oranges’ in the fourth sentence” are less explored. A very recent work [7] reveals that large-scale LMs do not learn to obey the underlying constraints reliably, even in a quite simple constrained generation task (cover all the given keywords without hallucinating new ones). In general text generation, existing works on various tasks reveal the benefit of incorporating task-specific prior knowledge: machine translation [8] (e.g., each source phrase should be translated ∗Work done during the internship at Microsoft STCA. †Corresponding author: Daxin Jiang ([email protected]). 35th Conference on Neural Information Processing Systems (NeurIPS 2021). into exactly one target phrase), text summarization [9] (e.g., the lead bias: front loading the most salient information), dialogue generation [10] (e.g., humans tend to repeat entity names or even long phrases in conversation). However, they either need designing specific model architectures (e.g., Coverage Mechanism and Copy Mechanism) or devising well-designed learning objectives (e.g., GSG [11]). These methods require careful design case-by-case and are difficult to combine multiple arbitrary constraints or prior knowledge simultaneously. 
Motivated by the above research dilemma, we take the first step towards building an unified framework to handle Fine-grained Control and Prior Knowledge Integration and propose a novel module Neural Rule-Execution Tracking Machine (NRETM) 3 Specifically, NRETM is a trainable neural module that can be equipped with transformer-based sequence-to-sequence pre-trained LMs. It can handle constraints in any Predicate Logic Formula, which crucially includes the arbitrarily complicated relations among different control tasks. For example, the above fine-grained constraint can be written as:( InSen(apple, 1 ) ∧ Len(1 , 15 ) ) ∧ ( InSen(orange, 4 ) ∨ InSen(oranges, 4 ) ) To build NRETM, we combat three major challenges: i) modeling the complicated relationships among control tasks and the logic operators (i.e., ∧, ∨) in the constraint expressions; ii) an unified control system is required to execute different control tasks simultaneously and iii), the control signals for different control tasks should be properly aligned with the constraint expressions. NRETM uses the encoder of transformer-based pre-trained sequence-to-sequence LMs to model the relationship between control tasks and the logic operators. NRETM completes different control tasks via nondifferential Logic Trackers (empowered by executable programs) in an unified control progress system during the decoding process. Finally, the encoded constraint expressions and control progress signals are combined together in the transformer decoder. NRETM is fine-tuned with the pre-trained LMs (except logical trackers) to follow the control progress signal and predicate logic formula. NRETM reconciles symbolic computing (that has precise logic and numerical calculation capabilities from logic trackers) with neural language generation (that has an exceptional ability of wording and phrasing), which results in both the accurate controllability and the superior generation performance. For evaluation, we select three representative benchmarks because all of them involve constraints or prior knowledge, allowing us to verify the effectiveness of our proposed NRETM model: ROCStories [12] are five-sentence stories with complicated predicate constraints over the story structure; Commonsense Generation task [13] with the constraints of mentioning all input concepts; TED15 Zh-En document-level machine translation benchmark [14] with prior knowledge of translating input sentences one by one. Our contributions in this work are three-fold: (1) To the best of our knowledge, we are the first to propose a general framework that incorporates control signal and prior knowledge, formulated as predicate logic constraints, into transformer-based seq2seq text generation models; (2) We train (or fine-tune) the transformer-based seq2seq text generation models to follow the predicate logic constraints(i.e., control signal or prior knowledge) by dynamically updating the rule execution intermediate progress value to the text decoder; and (3) Empirical verification of the effectiveness of the proposed approach on three benchmarks. 2 Approach This section first formalizes fine-grained content control task, then introduces an overview of proposed NRETM model, followed by diving into details of each component. 2.1 Fine-Grained Content Control In this work, we focus on fine-grained content control task where the model input consists of predicate logic constraints x = [x1, . . . , xlx ] ∈ X that should be satisfied in the outputs and optional context input c = [c1, . . . , clc ]. 
The encoder takes concatenation of x and c (i.e., [c;x]) as input. At decoding step t, the decoder take y:t = [y1, · · · , yt] ∈ Y as input and generate yt+1. 3Our Source Code can be found in https://github.com/GaryYufei/NRETM 2.2 Predicate Logic Constraint We define predicate U(a, y) as a boolean function that indicates whether output y has satisfied control task a which could be values (e.g., status, total length, stop word counts) or lexicons (e.g., copying particular words/phrases). In this paper, NRETM accepts predicate logic constraints in Conjunctive Normal Form (CNF): ( U1 · · · ∨ Ui ) ∧ · · · ∧ ( Uk · · · ∨ Un ) . Each predicate logic constraint includes multiple predicates Ui and basic logic operators (e.g., ∨, ∧ and brackets). 2.3 Neural Rule-Execution Tracking Machine NRETM can be equipped into transformer-based sequence-to-sequence LMs. Figure 1 illustrates an overview of our neural rule-execution tracking machine (NRETM). To enable LMs to follow predicate logic constraints, it is essential to 1) model the complicated relationships among predicates and basic logic operators; 2) control multiple predicates (i.e., control tasks) in the constraints simultaneously; 3) combine the control signals with the predicate logic constraint expressions. For 1), we treat the whole constraint expressions as natural language sentences and feed it into the transformer encoder. For 2), we propose a set of unified control signals that can be used to dynamically describe the step-wise execution progress of different predicates. For 3), we represent the control signals as relative position embedding and align them with encoded constraints expressions in the transformer decoder. 2.3.1 Encoding Predicate Logic Constraints Given predicate logic constraint expression x = [x1, . . . , xlx ] where xi either corresponds to a predicate Ui or a basic logic operator, we feed x into the transformer encoder. Due to the tokenization strategies of pre-trained LMs, each xi may be tokenized into a continuous token sequence. x is tokenized into t = [t1, · · · , tlt ] where lt ≥ lx and there exists one-to-one mapping m(ti) = xj . We use he to denote the encoder output of x. As pre-trained LMs is trained with significant amount of natural language sentences, it should encode complicated sequential relationships within the constraints expressions. 2.3.2 Mentoring Control Progress Specialized controlling components (e.g., Constrained Beam Search [15] and Copy Mechanism [10]) can only be used for limited control tasks. To enable unified controlling system, we propose to complete control by mentoring control progress. We describe the control progress of different predicates using an unified progress state system. Each predicate Ui has a corresponding Logic Tracker QUi(y), which is a non-differentiable executable program (i.e., written by Python) and takes current generated outputs and returns one progress state at each generation step, formulated as follows: QUi(y) = S0 Ui is ∅ S1 Ui is not triggered in y (S2,V) Ui is in progress in y S3 Ui is satisfied in y (1) where State S0 always is assigned to non-predicate ∅ (i.e., basic logic operators in the constraint expression); State S1 means the tracking for predicate Ui is not triggered in y. For example, when controlling the stop word counts of the second sentence, the Logic Tracker returns S1 when the LMs are generating the first sentence; State S2 means predicate Ui is in progress and V is the optimal intermediate value that allows fine-grained tracking. 
For example, in generation length control, V could be total target length minus the current length informing pre-trained LMs the number of words left to satisfy the constraint; State S3 means Ui is satisfied in y. In short, Logic Tracker unifies different predicates by returning the same set of control signals. Global Or-Clause Update: Each Logic Tracker traces the execution progress of its corresponding predicate Ui independently. This independent tracing strategy works well in the And-Clause because all involved predicates are required to reach State S3. However, only a subset of predicates are required to reach State S3 in the Or-Clause. Our preliminary experiment shows that the independent tracing strategy trains the model not to complete the constraints. To solve this issue, we propose to update the status of all predicates in the same Or-Clause to State S3 when one of the predicates reach State S3. This forces all predicates finish themselves in State S3 and improves the constraint satisfaction ratio in the Or-Clause. Control Progress Matrix: Given the predicate logic constraint expressions t = [t1 · · · , tlt ], we further define Control Progress Matrix S to align the predicates with their control progress signals returned by Logic Trackers: S = [C(t, ε); C(t,y:1); · · · ; C(t,y:t)] (2) C(t,y:t) = [v(t1,y:t), · · · , v(tlt ,y:t)] (3) where ε is the empty string at first decoding step. S is a two-dimensional matrix where each row describes the control progress of all tokens in t at a single decoding step and each column describes the control progress of a single token in t along all decoding steps. Recall that basic logic operators in predicate logic constraint expressions do not require control progress tracking. Each cell Si,j in S is formulated as: Si,j = v(ti,y:j−1) = { Q∅(y) m(ti) = xk and xk is a basic logic operator QUq (y) m(ti) = xk and xk is a predicate Uq (4) Example: In Figure 2, we are given three logic constraints, a) copy “car”; b) the stop word ratio of the output should be 0.5 and c) the length of second sentence should be 6. The basic logic operators & are assigned with S0. Length control and Stop Word Ratio maintain intermediate values (e.g., the residual Length and Stop Word Ratio). The length control is assigned with S1 when generating the first sentence because it will only be triggered in the second sentence. Copy control does not have intermediate values and its State are updated from S2 to S3 only when the corresponding words (at step 10 in our example) appear in the y:t. Control Progress Matrix Encoder: Control Progress Matrix S aligns the results from Logic Tracker with the encoded predicate logic constraint expressions. However, S is a non-differentiable symbolic matrix with each cell Si,j being discrete symbol S0 to S3 combined with additional numbers (i.e., V ). As the encoder has already captured the inter-relationship in the predicate logic constraints, we only model each cell Si,j independently. To support various types of predicates, we treat Si,j as a string and encode it using a single-layer transformer-based encoder ShallowEncoder which shares the same vocabulary and word embeddings as the pre-trained LMs: hsij = ShallowEncoder(Si,j) (5) h̄sij = MeanPooling(h s ij) (6) where hsij ∈ Rl s ij×d, h̄sij ∈ Rd and lsij is the length of the tokenized Si,j and d is the hidden size of ShallowEncoder. We use h̄s to denote the neural representation of whole S. 
2.3.3 Combining Predicate Logic Constraint with Control Progress Matrix Finally, we combine the encoded Predicate Logic Constraints he with the encoded Control Progress Matrix h̄s in the transformer-based pre-trained LMs. Injecting h̄s into the transformer encoder would result in encoder content re-computation at each decoding step and stop the standard parallel training for transformer-based decoders. In addition, as Control Progress Matrix incrementally increases as the decoding goes on, it is reasonable to equip h̄s into the transformer decoder. Given the encoder output he, decoder input y:t, the probability of the next token yt+1 can be calculated by: hdt = KV(W s qy:t,W s ky:t,W s vy:t (7) ot+1 = CrossKV(W c qh d t ,W c kh e,Wcvh e) (8) p(yt+1|x1:lx , y1:t) = softmax(Wo ot+1) (9) where ot+1 ∈ Rdc is the hidden state at step t with dc the hidden size, and Wo ∈ R|V |×dc , Both KV and CrossKV are the standard key-value self-attention described in [16]. In the CrossKV which takes hdt and h e as input, the resulting attention score matrix has the same size as S, making CrossKV suitable to incorporate our Control Progress Matrix. Control Progress Matrix as Relative Position: Inspired by [17] which incorporates token relative positions into the self-attention module, we propose to inject Control Progress Matrix as the “relative positions” between encoder output he and current decoder input y:t in the cross-attention (Eq. 8) module. Following this approach, we linearly project each h̄ij into Control Progress Matrix key hkij = W f k · h̄sij + b f k and Control Progress Matrix Value h v ij = W f v · h̄sij + bfv . All transformer decoder layers share the same representations. Eq. 8 is changed to: ot+1 = R(W c qH d t ,W c kH e,WcvH e,hk,hv) (10) where Rlx×t×d and R is the Self-Attention function with relative position, defined as follows: R(q,k,v,mk,mv)j = lx∑ i=1 ai,j(vi + m v i,j) (11) where a∗,j = Softmax (e∗,j) and ei,j = qj(ki + mki,j) T d−1/2. 2.4 Why NRETM Could Satisfy Constraints A powerful implicit compulsion comes from the combined force of two aspects: 1) before generating the EOS token (i.e., End-Of-Sequence Token), all the predicate constraints should be satisfied. As demonstrated in Fig 2, all elements in Control Progress Matrix are set to “satisfied” (i.e., S3) at EOS position; 2) The pre-trained LMs are trained to generate text with limited length. Such a soft way of combining symbolic operators (good at logical and mathematical calculations ) and neural operators (good at wording and phrasing) can retain their respective strengths to the utmost extent. 2.5 What If NRETM Fails to Satisfy Constraints NRETM does not forces the pre-trained LMs to execute the hard constraints on the text decoder explicitly, but instead, provides Control Progress Matrix as input features describing rule execution intermediate values to the text decoder. That is, no explicit effect when NRETM fails to satisfy the constraints. It is possible that our text generators decide to stop the generation before completing all constraints. In our experiments, NRETM has less than 1% chance not to complete all constraints. 2.6 The Generalization Ability of NRETM The generalization ability of NRETM comes from two aspects: 1) NRETM can construct new constraints via combining pre-trained predicates with basic logic operators in arbitrarily complicated ways; 2) To expand a new predicate, users only need to implement the corresponding Logic Trackers, which returns S1-S3 and intermediate values, via executable programs. 
3 Experiment

We test our proposed NRETM on controllable text generation and general text generation tasks. For controllable text generation, we verify NRETM on complex fine-grained control instructions in the ROCStories benchmark [12]. Further, we test NRETM on general text generation tasks, commonsense generation and document-level machine translation, to show that NRETM can efficiently integrate prior knowledge into seq2seq models towards superior generation performance.

3.1 Controllable ROC Stories

ROCStories is a corpus of five-sentence stories that captures a rich set of causal and temporal commonsense relations between daily events. Following [18], we extract key phrases from the ground-truth stories. In this experiment, we design multiple predicate logic constraints to inform NRETM about the stories to be generated and verify whether NRETM can follow these constraints exactly.

Predicate Logic Formulation As shown in Table 1, five constraints with increasing difficulty are used: (1) Generate a story with storyline $w_i$ in the $p_i$-th sentence. (2) Generate a story with an ordered storyline $w_1, \cdots, w_4$. (3) Generate a story with storyline $w_i$ in the $p_i$-th sentence, which has $l_{p_i}$ words ($i = 1, 2$). (4) Generate storyline $w_1$ in the $p_1$-th sentence, which has $l_{p_1}$ words or $s_{p_1}$ stop words, and $w_2$ in the $p_2$-th sentence, which does not mention $w_3$. (5) Generate storyline $w_i$ in the $p_i$-th sentence, which has $l_{p_i}$ words or $s_{p_i}$ stop words ($i = 1, 2$).

Baselines and Metrics Both the baseline and NRETM use the T5-Base model [19]. We report the Constraint Success Ratio (CSR), the ratio of stories that completely satisfy the given constraints. We additionally report ROUGE-L (RL), BERT-Score (BS), and BLEU-1/4 (B1/4) to show the quality of the generated stories.

Main Results As shown in Table 1, under all five predicate logic constraints, the NRETM model achieves a higher Constraint Success Ratio than the T5 model while maintaining a similar level of ROUGE-L, showing that the NRETM model can be flexibly controlled without loss of generated text quality. The gap in CSR between the T5 and NRETM models is moderate for the first two constraints, which involve simple token permutations. However, the success ratio of the T5 model drops significantly given constraints that require long-range numerical tracking (e.g., sentence length and the count of stop words).

3.2 Commonsense Generation

COMMONGEN is a generation benchmark dataset that explicitly targets testing machines for the ability of generative commonsense reasoning. Given a set of common concepts, the task is to generate a coherent sentence describing an everyday scenario using these concepts.

Predicate Logic Formulation The input is an unordered set of $n$ concepts $x = \{x_i\}_{i=1}^{n}$. From the expectation of COMMONGEN, one easily obtainable piece of prior knowledge is that each $x_i$ must appear in the output $y$. The corresponding predicate logic constraint $P_c$ is:

$$P_c = \wedge_{i=1}^{n}\big(\mathrm{Copy}(x_i)\big)$$

where $y$ appears by default; for the sake of brevity, we omit $y$ in the predicate Copy. Another piece of prior knowledge comes from the observation that generating $y$ requires producing the correct morphological inflection of a concept word rather than copying its original form. Let $\tilde{x}^i = \{\tilde{x}^i_k\}_{k=1}^{|\tilde{x}^i|}$ denote all inflections of $x_i$. Then $y$ covers concept $x_i$ if at least one of $\{\tilde{x}^i_k\}_{k=1}^{|\tilde{x}^i|}$ appears. The constraint $\hat{P}_c$ is:

$$\hat{P}_c = \wedge_{i=1}^{n}\big(\vee_{j=1}^{|\tilde{x}^i|}\mathrm{Copy}(\tilde{x}^i_j)\big)$$

Baselines and Metrics We experiment with T5-Base and T5-Large. We equip NRETM into the T5-Large and T5-Base models to incorporate $P_c$ and $\hat{P}_c$, respectively (+ NRETM $P_c$) (+ NRETM $\hat{P}_c$).
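As an illustration of what a Logic Tracker for the Copy predicate in $P_c$ might look like, here is a minimal, hypothetical Python sketch. The state labels mirror S2/S3 from Section 2.3.2 (Copy starts "in progress" and has no intermediate value); the function and variable names are assumptions, not the authors' implementation.

```python
def copy_tracker(target_word, generated_tokens):
    """Hypothetical Logic Tracker for Copy(target_word).

    Returns "S3" once target_word has appeared in the generated prefix,
    and "S2" (in progress) otherwise.
    """
    return "S3" if target_word in generated_tokens else "S2"

def conjunction_satisfied(concepts, generated_tokens):
    """P_c = AND_i Copy(x_i): satisfied only if every concept is copied."""
    return all(copy_tracker(c, generated_tokens) == "S3" for c in concepts)

# Toy usage on a partially generated sentence.
prefix = ["a", "dog", "runs", "through", "the", "snow"]
print(copy_tracker("snow", prefix))                           # S3
print(conjunction_satisfied(["dog", "snow", "ski"], prefix))  # False
```

A tracker for $\hat{P}_c$ would simply return "S3" as soon as any inflection of the concept appears, mirroring the inner disjunction.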
Grid Beam Search (GBS) [20] (+ G) is a well-designed decoding method that ensures the generation model satisfies the lexical constraints. We only apply GBS to the T5-Base model due to memory constraints. Following the suggestions in [13], we use CIDEr [21] and SPICE [22] to automatically assess the quality of the generated texts. We calculate constraint satisfaction for all constraints (ALL), novel constraints (Novel), and seen constraints (Seen).

Main Results Table 2 shows that the NRETM model improves constraint satisfaction over the baselines in all cases, achieving close to 100% (i.e., 99.5% and 99.2%). While GBS achieves perfect constraint satisfaction (i.e., 100%), doing so significantly degrades the output text quality (by more than 50 CIDEr), indicating the necessity of integrating prior knowledge during training rather than at inference. In addition, both priors $P_c$ and $\hat{P}_c$ have a positive effect on our model, improving our T5-Large baseline by 3.1 and 5.0 CIDEr, respectively. Finally, our T5-Large + NRETM $\hat{P}_c$ model outperforms the previous state-of-the-art result [23], which integrates ConceptNet [24] into the BART model, suggesting that our incorporated task-specific prior knowledge can be as powerful as knowledge from a large-scale hand-crafted corpus. All of the above shows the potential of a method that can execute multiple rules effectively.

3.3 Document-Level Machine Translation

Document-level machine translation is a general text generation task, where the goal is to translate segments of text (up to an entire document). Following [14], we use TED15 Zh-En (from IWSLT 2014 and 2015 [25, 26]) as the training and validation sets and the 2010–2013 TED talks as the test set.

Predicate Logic Formulation The input is an ordered set of $n$ sentences in the source language that form a document $x = \{x_i\}_{i=1}^{n}$; the expected output is a translated document $y = \{y_i\}_{i=1}^{n}$ in the target language. We observed that neural models are prone to sentence correspondence confusion (the $i$-th sentence in the source document is translated as the $j$-th sentence in the target document) when doing document-level translation. To alleviate this problem, we propose to equip Doc-mBART25 with the prior knowledge that each source sentence should be translated only once. It is formulated as:

$$\mathrm{TranslatedOnce}(x_i) = \begin{cases} S_3 & \theta(y_{:t}) > i \\ S_2 & \theta(y_{:t}) = i \\ S_1 & \theta(y_{:t}) < i \end{cases} \quad (12)$$

where $\theta(\cdot)$ returns the sentence index of $y_t$ in $y$; as $t$ is monotonic during generation, the status is set to $S_2$ only once. To trace the sentence translation progress, we add an additional end-of-sentence token at the end of each sentence in the training data. Once NRETM finishes the $i$-th sentence (generating an end-of-sentence token) in the decoder, we assume that the $i$-th sentence in the encoder has been translated. The predicate logic constraint $P_c$ of this task can be formulated as:

$$P_c = \wedge_{i=1}^{n}\big(\mathrm{TranslatedOnce}(x_i)\big)$$

Baselines and Metrics We combine our NRETM $P_c$ component with the Doc-mBART25 model proposed in [3], which is a state-of-the-art multilingual pre-trained language model. We compare this model with state-of-the-art non-pretraining and pretraining approaches, including HAN (Hierarchical Attention Networks) [14] and the Doc-mBART25 and Sen-mBART25 models proposed in [3]. When implementing our model, we use the same pre-processing method, block segmentation strategy and beam search setting as [3]. TED15 Zh-En provides sentence-to-sentence translation from Chinese to English.
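The TranslatedOnce predicate of Eq. (12) above is straightforward to express as a Logic Tracker. The snippet below is a minimal, hypothetical sketch under the assumption that the current target-sentence index (the role of $\theta(y_{:t})$) is obtained by counting the added end-of-sentence tokens; names are illustrative, not the authors' code.

```python
def translated_once_tracker(i, current_sentence_index):
    """Hypothetical Logic Tracker for TranslatedOnce(x_i) (cf. Eq. 12).

    `current_sentence_index` plays the role of theta(y_{:t}): the 1-based
    index of the target sentence currently being generated, derived from
    the end-of-sentence tokens emitted so far.
    """
    if current_sentence_index > i:
        return "S3"   # x_i has already been translated
    if current_sentence_index == i:
        return "S2"   # x_i is being translated now
    return "S1"       # x_i has not been reached yet

# Toy usage: while generating the 2nd target sentence of a 3-sentence document.
current = 2
print([translated_once_tracker(i, current) for i in (1, 2, 3)])  # ['S3', 'S2', 'S1']
```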
We use both document-level BLEU (d-BLEU) and sentence-level BLEU (s-BLEU) to measure the similarity between the generated target document and the reference document. We also report the Sentence Aligned Ratio (SAR), the ratio of source and target documents with the same sentence count, to show the effectiveness of our control over this translation prior knowledge.

Main Results Table 3 shows that the NRETM $P_c$ component helps the Doc-mBART25 model better capture the sentence-level correspondence between the source and target documents. In particular, the sentence-level alignment ratio is improved from 98.7% to 100%. The improvement in s-BLEU (+1.1 BLEU) also confirms that our final Doc-mBART25 + NRETM $P_c$ model learns to translate sentences following the sentence order in the source documents.

3.4 Discussion

Updating Progress in Encoder In Sec. 2.3.3, we incorporate the Control Progress Matrix as relative position embeddings in the decoder. To show the importance of this design choice, we conduct an ablation study in Table 4 where each row of the Control Progress Matrix is instead concatenated with the encoder output. We find that updating the rule execution progress information in the encoder output contributes little to improving the CSR. This shows that simply extracting rule execution intermediate values is not enough, possibly because an encoder that encodes the rule execution intermediate values cannot effectively broadcast this information to the text decoder.

NRETM Robustness The above experimental results are based on perfect training data. In this section, we explore the effect of training data noise. We corrupt the training data by replacing the input commonsense keywords with randomly sampled ones with probability 5%, 10%, 15%, 25%, and 50% (the validation and test splits remain unchanged). As shown in Table 5, at all noise levels, NRETM achieves higher constraint coverage (i.e., Cons) and CIDEr score than the T5 baseline model, showing that NRETM is robust to training data noise. It is worth noting that the main goal of NRETM is to incorporate constraints that are satisfied by the training data into transformer-based seq2seq text generators. It is reasonable to assume that in practice the noise level should be relatively low (e.g., 0%–10%).

Zero-Shot Execution In Table 1, we show that the pre-trained language model T5 cannot handle complicated and fine-grained constraints even after fine-tuning. Here, we further demonstrate that the NRETM model is capable of zero-shot rule execution. We train the T5 and NRETM models to mention keywords only in the 3rd, 4th and 5th sentences, and test these models on mentioning keywords in the first and second sentences of the story. As shown in Table 6, although both the T5 and NRETM models mention most of the keywords (95.7% and 98.3%, respectively) in the generated story, the T5 model mentions only 19.7% of the keywords in the correct sentence, while the NRETM model places 97.7% of the keywords correctly. This is because the T5 model cannot recognize the novel sentence indices (i.e., the first and second) during generation. The Logic Tracker helps the NRETM model generalize to handle these cases.

Running Efficiency We compare the inference time (in minutes) of NRETM on the test split of the commonsense generation task in Table 7. All models use beam search decoding with beam size 5. Adding NRETM components to T5-Base and T5-Large approximately doubles the inference time.
In contrast, the Grid Beam Search (GBS) algorithm requires a much longer inference time. Compared to existing constrained decoding approaches, NRETM incurs much lower computational cost.

4 Related Work

NRETM is mainly related to two lines of research in text generation: constrained decoding and prior knowledge integration.

Constrained Decoding Early work in constrained decoding can be traced back to dual decomposition and Lagrangian relaxation [27, 28]. These works focus on sequence labelling and parsing problems, where the solution space is relatively small compared to text generation tasks. Research efforts on text generation tasks [29–31] involve controllable generation methods where the generators are trained on text data with labeled target attributes. CTRL [4], PPLM [6] and CoCon [32] are recent approaches built on large-scale transformer-based pre-trained LMs; they pay more attention to controlling high-level attributes, phrases and keywords. [33, 34] propose to trace the control task progress in the text generation decoder: [33] treats the control signal as a training loss in a memory network and [34] treats the control signal as additional input features. [35] controls the text generation outputs by monitoring the output gradient. However, these works focus only on specific control tasks such as phrase copying and generation length, and can be seen as special cases of NRETM, whereas NRETM controls text generation to follow arbitrary logical constraints, enabling fine-grained control. Recently, GDC [36] allows specifying both pointwise and distributional constraints over the target LMs. Very recently, NEUROLOGIC [7] was proposed to generate fluent text while satisfying complex lexical constraints (in a predicate logic form). There are three main differences between NRETM and NEUROLOGIC: 1) NEUROLOGIC only provides control constraints over the text generators, whereas NRETM is a general framework that provides both control constraints (e.g., copy or do not copy words) and prior knowledge (e.g., translating sentences one by one); NEUROLOGIC can be viewed as a special case of NRETM. 2) NEUROLOGIC is an inference-only algorithm that controls the model to generate or avoid specific words or phrases at decoding time, while NRETM fine-tunes the pre-trained transformer-based seq2seq text generators with the predicate logic constraints. 3) NEUROLOGIC only supports the “copy” predicate (i.e., to generate or not to generate specific words or phrases), while NRETM is a general framework that supports various control predicates: NRETM supports six kinds of logic operators in this paper, and users can also add new ones.

Prior Knowledge Integration Existing efforts [37–41] to incorporate prior knowledge into the sequence-to-sequence framework either resort to modifying model architectures, including adding external memory components and specialized decoding methods, or to designing training objectives, including minimum risk training. These methods can usually inject only one narrow type of knowledge into the neural models. To the best of our knowledge, we are the first to formalize prior knowledge integration in seq2seq generation as text generation that conforms to predicate logic constraints.

5 Conclusion and Future Work

In this paper, we propose a unified controllable generation framework that leverages predicate logic constraints to implement efficient, complex, fine-grained control and scalable prior knowledge integration.
We explore and compare two control strategies, dynamic tracking and a static strategy, and show that the proposed dynamic tracking mechanism significantly outperforms the static one. Empirical results on three benchmarks indicate that NRETM achieves accurate control and exhibits superior generation ability across different tasks. Pre-trained models have been the dominant paradigm in natural language processing, and researchers resort to massive data and large-scale models to improve performance. We unify the rules used in various tasks into the form of predicate logic, which opens up the possibility of pre-training models on massive rules. In the future, we will explore pre-training a large-scale neural rule-execution machine with massive rules and data.

Broader Impact

Our work proposes a unified and scalable approach to efficiently perform fine-grained controllable text generation and to incorporate multiple kinds of prior knowledge for superior text generation performance. This work uses story generation, machine translation and commonsense generation as applications to verify its effectiveness. However, while our proposed method achieves promising performance on several benchmarks, deployment of our method in the real world requires a careful analysis of potential societal benefits and harms (e.g., the harms associated with furthering negative stereotypes against certain vulnerable groups). The potential ethical issues include: powerful language models might be used to generate abusive, fake or misleading content in the news or on social media; they might pose safety concerns if they are used to generate harassing or hateful material. In order to mitigate these risks, it is possible to use AI systems to fight against misleading content and harassing material. However, as discussed in previous work [42, 43], mitigating these risks could be an extremely complex socio-technical problem that many are working to understand and solve.

Acknowledgement

We thank the anonymous reviewers for their insightful suggestions to improve this paper.

Funding Transparency Statement

Yufei Wang, Can Xu, Huang Hu, Chongyang Tao and Daxin Jiang are supported by Microsoft Software Technology Center at Asia (STCA). Yufei Wang also receives a MQ Research Excellence Scholarship and a CSIRO's DATA61 Top-up Scholarship.
1. What is the main contribution of the paper regarding text generation? 2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to incorporate prior knowledge? 3. Do you have any concerns about the novelty of the work compared to other approaches in the field? 4. How do you assess the quality and rigor of the experiments presented in the paper? 5. Are there any specific areas where the paper could be improved, such as the inclusion of more competitive baselines or clearer definitions of key concepts?
Summary Of The Paper Review
Summary Of The Paper Interesting problem, but many issues along the way

Review This work aims to find a way to impose symbolic rules on powerful neural networks. To this end, it proposes a unified and scalable approach, named NRETM, to perform fine-grained controllable text generation and incorporate multiple kinds of prior knowledge.

Strength: This paper proposes a framework to unify constrained generation and general text generation with prior knowledge incorporation. Experimental results on story generation, machine translation, and commonsense generation partially demonstrate the effectiveness of the proposed method.

Drawbacks: The novelty of this work is not clear. It seems to expose the intermediate values of the rules to the model, which helps the model learn the relationship between the rules and the final outputs. If my understanding is correct, it is not surprising that NRETM achieves better results, since additional prior knowledge can usually benefit the generation process. The experiments lack rigor. The authors should add some competitive baselines for comparison. This paper mainly compares NRETM with the pre-trained model T5, but T5 cannot utilize prior knowledge efficiently: feeding the predicate logic constraints directly as inputs is not a suitable way to impose prior knowledge. The manuscript has some typos: eq. 1 P(y|x) is not defined. line 103, in 1 -> in Figure 1. line 116, consist -> consists.
NIPS
Title Neural Rule-Execution Tracking Machine For Transformer-Based Text Generation

∗Work done during the internship at Microsoft STCA. †Corresponding author: Daxin Jiang ([email protected]).

Abstract Sequence-to-Sequence (Seq2Seq) neural text generation models, especially pre-trained ones (e.g., BART and T5), have exhibited compelling performance on various natural language generation tasks. However, the black-box nature of these models limits their application in tasks where specific rules (e.g., controllable constraints, prior knowledge) need to be executed. Previous works either design specific model structures (e.g., the Copy Mechanism corresponding to the rule “the generated output should include certain words in the source input”) or implement specialized inference algorithms (e.g., Constrained Beam Search) to execute particular rules during text generation. These methods require careful case-by-case design and have difficulty supporting multiple rules concurrently. In this paper, we propose a novel module named Neural Rule-Execution Tracking Machine (NRETM) that can be equipped into various transformer-based generators to leverage multiple rules simultaneously, guiding the neural generation model towards superior generation performance in a unified and scalable way. Extensive experiments on several benchmarks verify the effectiveness of our proposed model in both controllable and general text generation tasks.

1 Introduction

Transformer-based neural language models (LMs), such as GPT/BART [1–3], have led to a wave of new trends in natural language generation, producing texts of prominent quality. They are trained on huge amounts of text corpora, roughly to reconstruct full sentences (i.e., upcoming tokens and missing text fragments). Despite their success in a variety of NLP tasks, we argue that the black-box nature of these models makes it inefficient for them to learn to follow constraints and to incorporate prior knowledge. In controllable text generation, most relevant studies [4–6] focus on controlling high-level text attributes (e.g., topic, sentiment) or simple keywords/phrases. More complex fine-grained control constraints such as “generate a sequence of tokens with ‘apple’ in the first sentence, which has 15 words, and ‘orange’ or ‘oranges’ in the fourth sentence” are less explored. A very recent work [7] reveals that large-scale LMs do not learn to obey the underlying constraints reliably, even in a quite simple constrained generation task (covering all the given keywords without hallucinating new ones). In general text generation, existing works on various tasks reveal the benefit of incorporating task-specific prior knowledge: machine translation [8] (e.g., each source phrase should be translated into exactly one target phrase), text summarization [9] (e.g., the lead bias: front-loading the most salient information), dialogue generation [10] (e.g., humans tend to repeat entity names or even long phrases in conversation). However, these approaches either need to design specific model architectures (e.g., the Coverage Mechanism and Copy Mechanism) or to devise well-designed learning objectives (e.g., GSG [11]). These methods require careful case-by-case design and make it difficult to combine multiple arbitrary constraints or pieces of prior knowledge simultaneously.
Motivated by the above research dilemma, we take a first step towards building a unified framework to handle Fine-grained Control and Prior Knowledge Integration, and propose a novel module, the Neural Rule-Execution Tracking Machine (NRETM). Specifically, NRETM is a trainable neural module that can be equipped with transformer-based sequence-to-sequence pre-trained LMs. It can handle constraints in any predicate logic formula, which crucially includes arbitrarily complicated relations among different control tasks. For example, the above fine-grained constraint can be written as:

$$\big(\mathrm{InSen}(\textit{apple}, 1) \wedge \mathrm{Len}(1, 15)\big) \wedge \big(\mathrm{InSen}(\textit{orange}, 4) \vee \mathrm{InSen}(\textit{oranges}, 4)\big)$$

To build NRETM, we combat three major challenges: i) modeling the complicated relationships among control tasks and the logic operators (i.e., ∧, ∨) in the constraint expressions; ii) a unified control system is required to execute different control tasks simultaneously; and iii) the control signals for different control tasks should be properly aligned with the constraint expressions. NRETM uses the encoder of transformer-based pre-trained sequence-to-sequence LMs to model the relationship between control tasks and the logic operators. NRETM completes different control tasks via non-differentiable Logic Trackers (empowered by executable programs) in a unified control-progress system during the decoding process. Finally, the encoded constraint expressions and control progress signals are combined in the transformer decoder. NRETM is fine-tuned together with the pre-trained LMs (except the Logic Trackers) to follow the control progress signals and the predicate logic formula. NRETM reconciles symbolic computing (which has precise logic and numerical calculation capabilities from the Logic Trackers) with neural language generation (which has an exceptional ability for wording and phrasing), resulting in both accurate controllability and superior generation performance. For evaluation, we select three representative benchmarks because all of them involve constraints or prior knowledge, allowing us to verify the effectiveness of our proposed NRETM model: ROCStories [12], five-sentence stories with complicated predicate constraints over the story structure; the Commonsense Generation task [13], with the constraint of mentioning all input concepts; and the TED15 Zh-En document-level machine translation benchmark [14], with the prior knowledge of translating input sentences one by one. Our contributions in this work are three-fold: (1) To the best of our knowledge, we are the first to propose a general framework that incorporates control signals and prior knowledge, formulated as predicate logic constraints, into transformer-based seq2seq text generation models; (2) we train (or fine-tune) the transformer-based seq2seq text generation models to follow the predicate logic constraints (i.e., control signals or prior knowledge) by dynamically updating the intermediate rule-execution progress values fed to the text decoder; and (3) we empirically verify the effectiveness of the proposed approach on three benchmarks.

2 Approach

This section first formalizes the fine-grained content control task, then gives an overview of the proposed NRETM model, before diving into the details of each component.

2.1 Fine-Grained Content Control

In this work, we focus on the fine-grained content control task, where the model input consists of predicate logic constraints $x = [x_1, \ldots, x_{l_x}] \in \mathcal{X}$ that should be satisfied in the output and an optional context input $c = [c_1, \ldots, c_{l_c}]$.
The encoder takes the concatenation of $x$ and $c$ (i.e., $[c; x]$) as input. At decoding step $t$, the decoder takes $y_{:t} = [y_1, \cdots, y_t] \in \mathcal{Y}$ as input and generates $y_{t+1}$. Our source code can be found at https://github.com/GaryYufei/NRETM.

2.2 Predicate Logic Constraint

We define the predicate $U(a, y)$ as a boolean function that indicates whether output $y$ has satisfied control task $a$, which could involve values (e.g., status, total length, stop word counts) or lexicons (e.g., copying particular words/phrases). In this paper, NRETM accepts predicate logic constraints in Conjunctive Normal Form (CNF): $(U_1 \vee \cdots \vee U_i) \wedge \cdots \wedge (U_k \vee \cdots \vee U_n)$. Each predicate logic constraint includes multiple predicates $U_i$ and basic logic operators (e.g., ∨, ∧ and brackets).

2.3 Neural Rule-Execution Tracking Machine

NRETM can be equipped into transformer-based sequence-to-sequence LMs. Figure 1 illustrates an overview of our neural rule-execution tracking machine (NRETM). To enable LMs to follow predicate logic constraints, it is essential to 1) model the complicated relationships among predicates and basic logic operators; 2) control multiple predicates (i.e., control tasks) in the constraints simultaneously; and 3) combine the control signals with the predicate logic constraint expressions. For 1), we treat the whole constraint expression as a natural language sentence and feed it into the transformer encoder. For 2), we propose a set of unified control signals that can dynamically describe the step-wise execution progress of different predicates. For 3), we represent the control signals as relative position embeddings and align them with the encoded constraint expressions in the transformer decoder.

2.3.1 Encoding Predicate Logic Constraints

Given a predicate logic constraint expression $x = [x_1, \ldots, x_{l_x}]$, where each $x_i$ corresponds either to a predicate $U_i$ or to a basic logic operator, we feed $x$ into the transformer encoder. Due to the tokenization strategies of pre-trained LMs, each $x_i$ may be tokenized into a continuous token sequence. $x$ is tokenized into $t = [t_1, \cdots, t_{l_t}]$, where $l_t \geq l_x$ and there exists a one-to-one mapping $m(t_i) = x_j$. We use $h^e$ to denote the encoder output of $x$. As pre-trained LMs are trained on significant amounts of natural language sentences, the encoder should capture the complicated sequential relationships within the constraint expressions.

2.3.2 Monitoring Control Progress

Specialized controlling components (e.g., Constrained Beam Search [15] and the Copy Mechanism [10]) can only be used for limited control tasks. To enable a unified control system, we propose to achieve control by monitoring control progress. We describe the control progress of different predicates using a unified progress state system. Each predicate $U_i$ has a corresponding Logic Tracker $Q_{U_i}(y)$, which is a non-differentiable executable program (i.e., written in Python) that takes the currently generated output and returns one progress state at each generation step, formulated as follows:

$$Q_{U_i}(y) = \begin{cases} S_0 & U_i \text{ is } \emptyset \\ S_1 & U_i \text{ is not triggered in } y \\ (S_2, V) & U_i \text{ is in progress in } y \\ S_3 & U_i \text{ is satisfied in } y \end{cases} \quad (1)$$

where state $S_0$ is always assigned to the non-predicate $\emptyset$ (i.e., basic logic operators in the constraint expression); state $S_1$ means that the tracking for predicate $U_i$ is not yet triggered in $y$ (for example, when controlling the stop word count of the second sentence, the Logic Tracker returns $S_1$ while the LM is generating the first sentence); and state $S_2$ means that predicate $U_i$ is in progress, with $V$ an optional intermediate value that allows fine-grained tracking.
For example, in generation length control, $V$ could be the total target length minus the current length, informing the pre-trained LMs of the number of words left to satisfy the constraint; state $S_3$ means $U_i$ is satisfied in $y$. In short, the Logic Tracker unifies different predicates by returning the same set of control signals.

Global Or-Clause Update: Each Logic Tracker traces the execution progress of its corresponding predicate $U_i$ independently. This independent tracing strategy works well in and-clauses because all involved predicates are required to reach state $S_3$. However, only a subset of predicates is required to reach state $S_3$ in an or-clause. Our preliminary experiments show that, with the independent tracing strategy, the model learns that constraints need not be completed. To solve this issue, we propose to update the status of all predicates in the same or-clause to state $S_3$ when one of the predicates reaches state $S_3$. This forces all predicates to finish in state $S_3$ and improves the constraint satisfaction ratio for or-clauses.

Control Progress Matrix: Given the predicate logic constraint expression $t = [t_1, \cdots, t_{l_t}]$, we further define the Control Progress Matrix $S$ to align the predicates with the control progress signals returned by the Logic Trackers:

$$S = [C(t, \varepsilon);\, C(t, y_{:1});\, \cdots;\, C(t, y_{:t})] \quad (2)$$
$$C(t, y_{:t}) = [v(t_1, y_{:t}), \cdots, v(t_{l_t}, y_{:t})] \quad (3)$$

where $\varepsilon$ is the empty string at the first decoding step. $S$ is a two-dimensional matrix in which each row describes the control progress of all tokens in $t$ at a single decoding step and each column describes the control progress of a single token in $t$ across all decoding steps. Recall that basic logic operators in predicate logic constraint expressions do not require control progress tracking. Each cell $S_{i,j}$ in $S$ is formulated as:

$$S_{i,j} = v(t_i, y_{:j-1}) = \begin{cases} Q_{\emptyset}(y) & m(t_i) = x_k \text{ and } x_k \text{ is a basic logic operator} \\ Q_{U_q}(y) & m(t_i) = x_k \text{ and } x_k \text{ is a predicate } U_q \end{cases} \quad (4)$$

Example: In Figure 2, we are given three logic constraints: a) copy “car”; b) the stop word ratio of the output should be 0.5; and c) the length of the second sentence should be 6. The basic logic operators & are assigned $S_0$. Length control and stop word ratio control maintain intermediate values (e.g., the residual length and stop word ratio). The length control is assigned $S_1$ when generating the first sentence because it is only triggered in the second sentence. Copy control does not have intermediate values, and its state is updated from $S_2$ to $S_3$ only when the corresponding word appears in $y_{:t}$ (at step 10 in our example).

Control Progress Matrix Encoder: The Control Progress Matrix $S$ aligns the results from the Logic Trackers with the encoded predicate logic constraint expressions. However, $S$ is a non-differentiable symbolic matrix, with each cell $S_{i,j}$ being a discrete symbol $S_0$ to $S_3$ possibly combined with additional numbers (i.e., $V$). As the encoder has already captured the inter-relationships in the predicate logic constraints, we model each cell $S_{i,j}$ independently. To support various types of predicates, we treat $S_{i,j}$ as a string and encode it using a single-layer transformer-based encoder, ShallowEncoder, which shares the same vocabulary and word embeddings as the pre-trained LMs:

$$h^s_{ij} = \mathrm{ShallowEncoder}(S_{i,j}) \quad (5)$$
$$\bar{h}^s_{ij} = \mathrm{MeanPooling}(h^s_{ij}) \quad (6)$$

where $h^s_{ij} \in \mathbb{R}^{l^s_{ij} \times d}$, $\bar{h}^s_{ij} \in \mathbb{R}^{d}$, $l^s_{ij}$ is the length of the tokenized $S_{i,j}$, and $d$ is the hidden size of ShallowEncoder. We use $\bar{h}^s$ to denote the neural representation of the whole $S$.
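The construction of $S$ in Eqs. (2)–(4) can be illustrated with a small, hypothetical Python sketch. Predicate tokens are mapped to tracker callables and logic-operator tokens to the constant state "S0"; the tracker lambdas, token strings, and the toy constraint are assumptions made purely for illustration, not the authors' implementation.

```python
def build_progress_row(expression_tokens, trackers, generated_tokens):
    """One row C(t, y_{:t}) of the Control Progress Matrix (cf. Eq. 2-4):
    each token of the constraint expression is mapped to a progress-state
    string that a ShallowEncoder could later embed."""
    return [trackers[tok](generated_tokens) if tok in trackers else "S0"
            for tok in expression_tokens]

def build_progress_matrix(expression_tokens, trackers, generated_tokens):
    """Stack one row per decoding step, starting from the empty prefix."""
    return [build_progress_row(expression_tokens, trackers, generated_tokens[:t])
            for t in range(len(generated_tokens) + 1)]

# Toy usage: constraint "Copy(car) & Copy(snow)".
trackers = {
    "Copy(car)":  lambda y: "S3" if "car" in y else "S2",
    "Copy(snow)": lambda y: "S3" if "snow" in y else "S2",
}
expr = ["Copy(car)", "&", "Copy(snow)"]
for row in build_progress_matrix(expr, trackers, ["a", "car", "in", "snow"]):
    print(row)   # e.g. ['S2', 'S0', 'S2'] -> ... -> ['S3', 'S0', 'S3']
```

In the full model each state string would additionally carry its intermediate value $V$ (e.g., "S2 3" for three words remaining) before being encoded by the ShallowEncoder.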
2.3.3 Combining Predicate Logic Constraint with Control Progress Matrix

Finally, we combine the encoded Predicate Logic Constraints $h^e$ with the encoded Control Progress Matrix $\bar{h}^s$ in the transformer-based pre-trained LMs. Injecting $\bar{h}^s$ into the transformer encoder would require re-computing the encoder content at each decoding step and would prevent the standard parallel training of transformer-based decoders. In addition, as the Control Progress Matrix grows incrementally as decoding goes on, it is natural to equip the transformer decoder with $\bar{h}^s$. Given the encoder output $h^e$ and decoder input $y_{:t}$, the probability of the next token $y_{t+1}$ is calculated by:

$$h^d_t = \mathrm{KV}(W^s_q y_{:t},\, W^s_k y_{:t},\, W^s_v y_{:t}) \quad (7)$$
$$o_{t+1} = \mathrm{CrossKV}(W^c_q h^d_t,\, W^c_k h^e,\, W^c_v h^e) \quad (8)$$
$$p(y_{t+1} \mid x_{1:l_x}, y_{1:t}) = \mathrm{softmax}(W_o\, o_{t+1}) \quad (9)$$

where $o_{t+1} \in \mathbb{R}^{d_c}$ is the hidden state at step $t$, with $d_c$ the hidden size, and $W_o \in \mathbb{R}^{|V| \times d_c}$. Both KV and CrossKV are the standard key-value self-attention described in [16]. In CrossKV, which takes $h^d_t$ and $h^e$ as input, the resulting attention score matrix has the same size as $S$, making CrossKV suitable for incorporating our Control Progress Matrix.

Control Progress Matrix as Relative Position: Inspired by [17], which incorporates token relative positions into the self-attention module, we propose to inject the Control Progress Matrix as the “relative positions” between the encoder output $h^e$ and the current decoder input $y_{:t}$ in the cross-attention (Eq. 8) module. Following this approach, we linearly project each $\bar{h}^s_{ij}$ into a Control Progress Matrix key $h^k_{ij} = W^f_k \cdot \bar{h}^s_{ij} + b^f_k$ and a Control Progress Matrix value $h^v_{ij} = W^f_v \cdot \bar{h}^s_{ij} + b^f_v$. All transformer decoder layers share the same representations. Eq. 8 is changed to:

$$o_{t+1} = \mathrm{R}(W^c_q H^d_t,\, W^c_k H^e,\, W^c_v H^e,\, h^k,\, h^v) \quad (10)$$

where $h^k, h^v \in \mathbb{R}^{l_x \times t \times d}$ and R is the self-attention function with relative position, defined as follows:

$$\mathrm{R}(q, k, v, m^k, m^v)_j = \sum_{i=1}^{l_x} a_{i,j}\,(v_i + m^v_{i,j}) \quad (11)$$

where $a_{*,j} = \mathrm{Softmax}(e_{*,j})$ and $e_{i,j} = q_j (k_i + m^k_{i,j})^T d^{-1/2}$.

2.4 Why NRETM Could Satisfy Constraints

A powerful implicit compulsion comes from the combined force of two aspects: 1) before generating the EOS token (i.e., the End-Of-Sequence token), all the predicate constraints should be satisfied. As demonstrated in Fig. 2, all elements in the Control Progress Matrix are set to “satisfied” (i.e., S3) at the EOS position; 2) the pre-trained LMs are trained to generate text of limited length. Such a soft way of combining symbolic operators (good at logical and mathematical calculations) and neural operators (good at wording and phrasing) retains their respective strengths to the utmost extent.

2.5 What If NRETM Fails to Satisfy Constraints

NRETM does not force the pre-trained LMs to execute the hard constraints in the text decoder explicitly; instead, it provides the Control Progress Matrix as input features that describe intermediate rule-execution values to the text decoder. That is, there is no explicit effect when NRETM fails to satisfy the constraints. It is possible that our text generators decide to stop the generation before completing all constraints. In our experiments, NRETM fails to complete all constraints in less than 1% of cases.

2.6 The Generalization Ability of NRETM

The generalization ability of NRETM comes from two aspects: 1) NRETM can construct new constraints by combining pre-trained predicates with basic logic operators in arbitrarily complicated ways; 2) to add a new predicate, users only need to implement the corresponding Logic Tracker, which returns states S1–S3 and intermediate values, as an executable program.
3 Experiment

We test our proposed NRETM on controllable text generation and general text generation tasks. For controllable text generation, we verify NRETM on complex fine-grained control instructions in the ROCStories benchmark [12]. Further, we test NRETM on general text generation tasks, commonsense generation and document-level machine translation, to show that NRETM can efficiently integrate prior knowledge into seq2seq models towards superior generation performance.

3.1 Controllable ROC Stories

ROCStories is a corpus of five-sentence stories that captures a rich set of causal and temporal commonsense relations between daily events. Following [18], we extract key phrases from the ground-truth stories. In this experiment, we design multiple predicate logic constraints to inform NRETM about the stories to be generated and verify whether NRETM can follow these constraints exactly.

Predicate Logic Formulation As shown in Table 1, five constraints with increasing difficulty are used: (1) Generate a story with storyline $w_i$ in the $p_i$-th sentence. (2) Generate a story with an ordered storyline $w_1, \cdots, w_4$. (3) Generate a story with storyline $w_i$ in the $p_i$-th sentence, which has $l_{p_i}$ words ($i = 1, 2$). (4) Generate storyline $w_1$ in the $p_1$-th sentence, which has $l_{p_1}$ words or $s_{p_1}$ stop words, and $w_2$ in the $p_2$-th sentence, which does not mention $w_3$. (5) Generate storyline $w_i$ in the $p_i$-th sentence, which has $l_{p_i}$ words or $s_{p_i}$ stop words ($i = 1, 2$).

Baselines and Metrics Both the baseline and NRETM use the T5-Base model [19]. We report the Constraint Success Ratio (CSR), the ratio of stories that completely satisfy the given constraints. We additionally report ROUGE-L (RL), BERT-Score (BS), and BLEU-1/4 (B1/4) to show the quality of the generated stories.

Main Results As shown in Table 1, under all five predicate logic constraints, the NRETM model achieves a higher Constraint Success Ratio than the T5 model while maintaining a similar level of ROUGE-L, showing that the NRETM model can be flexibly controlled without loss of generated text quality. The gap in CSR between the T5 and NRETM models is moderate for the first two constraints, which involve simple token permutations. However, the success ratio of the T5 model drops significantly given constraints that require long-range numerical tracking (e.g., sentence length and the count of stop words).

3.2 Commonsense Generation

COMMONGEN is a generation benchmark dataset that explicitly targets testing machines for the ability of generative commonsense reasoning. Given a set of common concepts, the task is to generate a coherent sentence describing an everyday scenario using these concepts.

Predicate Logic Formulation The input is an unordered set of $n$ concepts $x = \{x_i\}_{i=1}^{n}$. From the expectation of COMMONGEN, one easily obtainable piece of prior knowledge is that each $x_i$ must appear in the output $y$. The corresponding predicate logic constraint $P_c$ is:

$$P_c = \wedge_{i=1}^{n}\big(\mathrm{Copy}(x_i)\big)$$

where $y$ appears by default; for the sake of brevity, we omit $y$ in the predicate Copy. Another piece of prior knowledge comes from the observation that generating $y$ requires producing the correct morphological inflection of a concept word rather than copying its original form. Let $\tilde{x}^i = \{\tilde{x}^i_k\}_{k=1}^{|\tilde{x}^i|}$ denote all inflections of $x_i$. Then $y$ covers concept $x_i$ if at least one of $\{\tilde{x}^i_k\}_{k=1}^{|\tilde{x}^i|}$ appears. The constraint $\hat{P}_c$ is:

$$\hat{P}_c = \wedge_{i=1}^{n}\big(\vee_{j=1}^{|\tilde{x}^i|}\mathrm{Copy}(\tilde{x}^i_j)\big)$$

Baselines and Metrics We experiment with T5-Base and T5-Large. We equip NRETM into the T5-Large and T5-Base models to incorporate $P_c$ and $\hat{P}_c$, respectively (+ NRETM $P_c$) (+ NRETM $\hat{P}_c$).
Grid Beam Search (GBS) [20] (+ G) is a well-designed decoding method that ensures the generation model satisfies the lexical constraints. We only apply GBS to the T5-Base model due to memory constraints. Following the suggestions in [13], we use CIDEr [21] and SPICE [22] to automatically assess the quality of the generated texts. We calculate constraint satisfaction for all constraints (ALL), novel constraints (Novel), and seen constraints (Seen).

Main Results Table 2 shows that the NRETM model improves constraint satisfaction over the baselines in all cases, achieving close to 100% (i.e., 99.5% and 99.2%). While GBS achieves perfect constraint satisfaction (i.e., 100%), doing so significantly degrades the output text quality (by more than 50 CIDEr), indicating the necessity of integrating prior knowledge during training rather than at inference. In addition, both priors $P_c$ and $\hat{P}_c$ have a positive effect on our model, improving our T5-Large baseline by 3.1 and 5.0 CIDEr, respectively. Finally, our T5-Large + NRETM $\hat{P}_c$ model outperforms the previous state-of-the-art result [23], which integrates ConceptNet [24] into the BART model, suggesting that our incorporated task-specific prior knowledge can be as powerful as knowledge from a large-scale hand-crafted corpus. All of the above shows the potential of a method that can execute multiple rules effectively.

3.3 Document-Level Machine Translation

Document-level machine translation is a general text generation task, where the goal is to translate segments of text (up to an entire document). Following [14], we use TED15 Zh-En (from IWSLT 2014 and 2015 [25, 26]) as the training and validation sets and the 2010–2013 TED talks as the test set.

Predicate Logic Formulation The input is an ordered set of $n$ sentences in the source language that form a document $x = \{x_i\}_{i=1}^{n}$; the expected output is a translated document $y = \{y_i\}_{i=1}^{n}$ in the target language. We observed that neural models are prone to sentence correspondence confusion (the $i$-th sentence in the source document is translated as the $j$-th sentence in the target document) when doing document-level translation. To alleviate this problem, we propose to equip Doc-mBART25 with the prior knowledge that each source sentence should be translated only once. It is formulated as:

$$\mathrm{TranslatedOnce}(x_i) = \begin{cases} S_3 & \theta(y_{:t}) > i \\ S_2 & \theta(y_{:t}) = i \\ S_1 & \theta(y_{:t}) < i \end{cases} \quad (12)$$

where $\theta(\cdot)$ returns the sentence index of $y_t$ in $y$; as $t$ is monotonic during generation, the status is set to $S_2$ only once. To trace the sentence translation progress, we add an additional end-of-sentence token at the end of each sentence in the training data. Once NRETM finishes the $i$-th sentence (generating an end-of-sentence token) in the decoder, we assume that the $i$-th sentence in the encoder has been translated. The predicate logic constraint $P_c$ of this task can be formulated as:

$$P_c = \wedge_{i=1}^{n}\big(\mathrm{TranslatedOnce}(x_i)\big)$$

Baselines and Metrics We combine our NRETM $P_c$ component with the Doc-mBART25 model proposed in [3], which is a state-of-the-art multilingual pre-trained language model. We compare this model with state-of-the-art non-pretraining and pretraining approaches, including HAN (Hierarchical Attention Networks) [14] and the Doc-mBART25 and Sen-mBART25 models proposed in [3]. When implementing our model, we use the same pre-processing method, block segmentation strategy and beam search setting as [3]. TED15 Zh-En provides sentence-to-sentence translation from Chinese to English.
We use both document-level (d-BLEU) and sentence-level (s-BLEU) to measure the similarities between generated target document and the source document. We also report Sentence Aligned Ratio (SAR), the ratio of source and target documents with the same sentence count, to show the effectiveness of our control over this translation prior knowledge. Main Results Table 3 shows that the NRETM Pc component helps the Doc-mBART25 model to better capture the sentence-level corresponding relationship between the source and target documents. In particular, sentence-level alignment ratio is improved from 98.7% to 100%. The improvement in s-BLEU (+ 1.1 BLEU) also confirms that our final Doc-mBART25 + NRETM Pc model learns to translate sentences based on the sentence order in source documents. 3.4 Discussion Updating Progress in Encoder In Sec 2.3.3, we incorporate the Control Progress Matrix as relative position embeddings in the decoder. To show the importance of this design choice, we conduct an ablation study in Table 4 where the row of Control Progress Matrix is concatenated with the encoder output. We find that updating the rule execution progress information with the encoder output contributes little to improve the CSR. This shows that simply extracting rule execution intermediate values is not enough. This could be because the encoder that encodes the rule execution intermediate values cannot effectively broadcast this information into text decoders. NRETM Robustness The above experiment results are based on the perfect training data. In this section, we explore the effect of training data noise. We corrupt the training data by replacing the input commonsense keywords with a random sampled one under the probability 5%, 10%, 15%, 25%, and 50% (Validation and Test Split remain unchanged). As shown in Table 5, in all noise levels, NRETM successfully achieves higher constraint coverage (i.e, Cons) and CIDEr score than the T5 baseline model, showing that NRETM is robust to the training data noise. It is worthwhile to note that the main goal of NRETM is to incorporate constraints that are satisfied by the training data into transformer-based seq2seq text generators. It is reasonable to assume that in practice, the noise level should be relatively low (e.g., 0% - 10%). Zero-Shot Execution In Table 1, we show that the pre-trained language model T5 cannot handle complicated and fine-grained constraints even after fine-tuning. Here, we further demonstrate that NRETM model is capable to handle zero-shot rule execution. We train the T5 and NRETM model to only mention keywords in the 3rd, 4th and 5th sentence and test these models to mention keywords in the first and second sentence of the whole story. As shown in Table 6, although both T5 and NRETM model mention most of the keywords (95.7% and 98.3% respectively) in the generated story, the T5 model only mention 19.7% of keywords in the correct sentence and the NRETM model makes 97.7% of keywords correct. This is becuase the T5 model cannot recognize the novel sentence index (i.e., the first and second) during the generation. The logic tracker helps the NRETM model to generalize to handle these cases. Running Efficiency We compare the inference time (in minutes) for NRETM on the test split of commonsense generation task in Table 7. All models use the beam search decoding algorithm with beam size 5. Adding NRETM components to T5-Base and T5-Large approximately double the inference time. 
While the Grid Beam Search (GBS) algorithm uses a much longer inference time. Compared to existing constrained decoding approaches, NRETM uses much less computational costs. 4 Related Work NRETM is mainly related to two lines of research work in text generation: constrained decoding and prior knowledge integration. Constrained Decoding NEUROLOGICEarly work in constrained decoding can be traced back to dual decomposition and lagrangian relaxation [27, 28]. These works focus on sequence labelling and parsing problems where the solution space is relatively small, compared to text generation tasks. Research efforts in text generation tasks [29–31] involve controllable generation methods where the generators are trained on text data with the labeled target attributes. CTRL [4], PPLM [6] and CoCon [32] are recent approaches that built on the transformer-based large-scale pretrained LMs, they pay more attention on controlling high-level attributes, phrases and keywords. [33, 34] propose to trace the control task progress in the text generation decoder. [33] treats the control signal as training loss in memory network and [34] treats the control signal as additional input features. [35] controls the text generation outputs via mentoring the output gradient. However, these work only focus on specific controlling tasks such as phrases copying and generation length. While NRETM focuses on controlling text generation to follow arbitrary logical constraints, leading to a fine-grained control. They can be seen as special cases of NRETM. Recently, GDC [36] permits to specify both pointwise and distributional constraints over the target LMs. Very recently, NEUROLOGIC [7] was proposed to generate fluent text while satisfying complex lexical constraints (in a predicate logic form). There are three main differences between NRETM and NEUROLOGIC: 1) NEUROLOGIC only provides control constraints over the text generators. Instead, NRETM is a general framework that provides control constraints (e.g., copy or not copy words) and prior knowledge (e.g., translating sentences one by one). NEUROLOGIC can be viewed as a special case of NRETM; 2) NEUROLOGIC is an inference-only algorithm that only controls the model to generate or avoid specific words or phrases at decoding time; while NRETM fine-tunes the pre-trained transformer-based seq2seq text generators with the predicate logic constraints; 3) NEUROLOGIC only supports the “copy” predicate (i.e., to generate or not to generate specific words or phrases), while NRETM is a general framework that supports various control predicates. NRETM supports 6 kinds of logic operators in this paper, and it is also possible for users to expand new logic operators. Prior Knowledge Integration Existing efforts [37–41] to incorporate prior knowledge into sequence-to-sequence framework either resort to modifying model architectures, including adding external memory components, specialized decoding method or designing training objectives, including minimum risk training. These methods usually can only support to inject one narrow type of knowledge into the neural models. To the best of our knowledge, we first attempt to formalize the prior knowledge integration in seq2seq generation as text generation that conforms to predicate logic constraints. 5 Conclusion and Future Work In this paper, we propose a unified controllable generation framework that leverages predicate logic constraints to implement efficient complex fine-grained control and scalable prior knowledge integration. 
We explore and compare two controllable strategies: dynamic tracking and static strategy, and show that the proposed dynamic tracking mechanism significantly outperforms the static ones. Empirical results on three benchmarks indicate that NRETM could achieve accurate control and exhibits a superior generation ability over different tasks. Pre-trained models have been the dominant paradigm in natural language processing, and researchers resort to massive data and large-scale models to improve performance. We unify the rules used in various tasks into the form of predicate logic, provide the possibility to pretrain models on massive rules. In the future, we will explore pre-training large-scale neural rule-execution machine with massive rules and data. Broader Impact Our work proposes a unified and scalable approach to efficiently perform fine-grained controllable text generation and incorporate multiple prior knowledge for superior text generation performance. This work uses story generation, machine translation, commonsense generation as applications to verify the effectiveness. However, while our proposed method achieves promise performance on several benchmarks, deployment of our method in the real world requires a careful analysis of potential societal benefits and harms (e.g., the harms associated with furthering negative stereotypes against certain vulnerable groups). The potential ethical issues include: powerful language models might be used to generate abuse, faked or misleading content in the news or on social media; they might pose safety concerns if they are used to generate harassing or hateful materials. In order to mitigate these risks, it is possible to use AI systems to fight against misleading content and harassing material. However, as discussed in previous work [42, 43], mitigating these risks could be an extremely complex socio-technical problem that many are working to understand and solve. Acknowledgement We thank anonymous reviewers for their insightful suggestions to improve this paper. Funding Transparency Statement Yufei Wang, Can Xu, Huang Hu, Chongyang Tao and Daxin Jiang are supported by Microsoft Software Technology Center at Asia (STCA). Yufei Wang also receives a MQ Research Excellence Scholarship and a CSIRO’s DATA61 Top-up Scholarship.
1. What is the main contribution of the paper regarding incorporating prior knowledge into text generation via predicate logic constraints? 2. What are the strengths and weaknesses of the proposed approach compared to previous works? 3. What are some questions or concerns raised by the reviewer regarding the methodology, experiments, and results? 4. How does the paper address the issue of incorporating prior knowledge as soft constraints into the decoding process for text generation? 5. Can you provide examples or explanations for some of the key components of the proposed framework, such as the state flag matrix, NRETM module, and dynamic vs static strategy?
Summary Of The Paper Review
Summary Of The Paper The paper proposes a unified framework to incorporate prior knowledge into the text generation process via predicate logic constraints. The text generation model is a transformer and constraints are also encoded by a separate transformer. The encoded constraints then become part of the attention mechanism, therefore it can be considered as an approach that uses “soft” constraints. Contributions A general framework of incorporating prior knowledge as (soft) constraints into the decoding process for text generation. Review After Authors' Response The authors' response was very thorough and addressed all my raised concerns, therefore I have increased my score. Strength - The idea makes sense on a high level and having an efficient manner of incorporating constraints into the decoding process for text generation would be very beneficial for many tasks and applications. Weaknesses - Some important points about the method and the experiments are left unclear (see also questions below). - The writing could be improved (see also Typos & Additional Questions below) - Multiple runs and significance tests are missing. This makes it hard to judge the improvements (Table 2 & 3). Most Important Questions - Line 156: What is q_ij^k here exactly? I thought q_ij was a state flag, such as “2” or “0”. But you tokenize it and encode it, so it sounds more like it is something like “Copy(snow)”? (If it is the latter, then what is the meaning of tokenizing and encoding something like “Len(9)”?) - 192: What exactly is storyline and what do you need it for? - The baseline takes the predicate logic constraints as input: How does T5 know what to do with these inputs? Was the model trained on this but without the NRETM module? Can you give an example of what the input looks like? How do these inputs guide which sentences should be generated? Looking at the dataset, it feels like one would need at least the first 2 sentences or so to know how to continue. Maybe this information is now in your constraints but it would be important to understand what they look like and how they were created. Is there no other suitable baseline for this experiment? - What is the overhead of your method compared to standard decoding approaches? (you mention GBS can only be used with T5-Base, so your method is more efficient? That would be important to point out) - What happens if the decoding process cannot find a sequence that satisfies all constraints? - Document-level MT: How do you know at test time whether the system translates a particular sentence or not? - How many sentences are misaligned by Doc-mBART25? What are the s-BLEU and d-BLEU values on the subset that NRETM aligns correctly and Doc does not? - Why was NEUROLOGIC not used as a comparison baseline? - What is dynamic vs static strategy? In which experiment did you show that dynamic works better than static (from conclusion)?
- 98: “take[s]” & “generate[s]” - 108: “be all” -> “all be” - Paragraph at line 101: What is dynamic vs static strategy? - Paragraph at line 109: The state flag explanation would greatly benefit from an example. Does q_i refer to whether a particular U_i is satisfied? - Eq 2: What is the meaning of N? Can it change depending on the definition of U_k? Does it mean this constraint is not relevant for x_i? - 133: Figure 1 should be Figure 2 - Figure 2: What exactly do the “&” rows track? - Figure 2: Is the state flag matrix equal to the state matrix? If not, how do you go from one to the other? - Line 146: What does the inf in the superscript signify? - 177: What is the symbolic operator? - Paragraph at line 194: Without understanding what a storyline is, it is not clear what the constraints are. An example might be helpful here. - Line 204: what is the ROUGH-L metric? Do you mean ROUGE-L? - Line 223: How do you obtain the morphological inflections for the concepts? - 237: @necessity [of] integrating” - 3.3: How exactly is the document-level MT done? Is the entire input document the input to T5? - 293: “because” typo - 3.4 where/how exactly is the sentence index used?
NIPS
Title
Unfolding the Alternating Optimization for Blind Super Resolution
Abstract
Previous methods decompose the blind super resolution (SR) problem into two sequential steps: i) estimating the blur kernel from the given low-resolution (LR) image and ii) restoring the SR image based on the estimated kernel. This two-step solution involves two independently trained models, which may not be compatible with each other. A small estimation error in the first step can cause a severe performance drop in the second. On the other hand, the first step can only utilize limited information from the LR image, which makes it difficult to predict a highly accurate blur kernel. To address these issues, instead of considering these two steps separately, we adopt an alternating optimization algorithm, which can estimate the blur kernel and restore the SR image in a single model. Specifically, we design two convolutional neural modules, namely Restorer and Estimator. Restorer restores the SR image based on the predicted kernel, and Estimator estimates the blur kernel with the help of the restored SR image. We alternate these two modules repeatedly and unfold this process to form an end-to-end trainable network. In this way, Estimator utilizes information from both the LR and SR images, which makes the estimation of the blur kernel easier. More importantly, Restorer is trained with the kernel estimated by Estimator, instead of the ground-truth kernel, so Restorer can be more tolerant of the estimation error of Estimator. Extensive experiments on synthetic datasets and real-world images show that our model can largely outperform state-of-the-art methods and produce more visually favorable results at much higher speed. The source code is available at https://github.com/greatlog/DAN.git.
1 Introduction
Single image super resolution (SISR) aims to recover the high-resolution (HR) version of a given degraded low-resolution (LR) image. It has wide applications in video enhancement, medical imaging, as well as security and surveillance imaging. Mathematically, the degradation process can be expressed as

y = (x ⊗ k) ↓_s + n   (1)

where x is the original HR image, y is the degraded LR image, ⊗ denotes the two-dimensional convolution of x with the blur kernel k, n denotes Additive White Gaussian Noise (AWGN), and ↓_s denotes the standard s-fold downsampler, which means keeping only the upper-left pixel of each distinct s × s patch [35]. SISR then refers to the process of recovering x from y. It is a highly ill-posed problem due to this inverse property, and thus has always been a challenging task. Recently, deep neural networks (DNNs) have achieved remarkable results on SISR. But most of these methods [39, 2, 40, 23, 8, 21] assume that the blur kernel is predefined as the kernel of bicubic interpolation. In this way, a large number of training samples can be manually synthesized and further used to train powerful DNNs. However, blur kernels in real applications are much more complicated, and there is a domain gap between bicubically synthesized training samples and real images. This domain gap leads to a severe performance drop when these networks are applied to real applications. Thus, more attention should be paid to SR in the context of unknown blur kernels, i.e. blind SR. In blind SR, there is one more undetermined variable, i.e. the blur kernel k, and the optimization also becomes much more difficult.
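To make the degradation model in Equation (1) concrete, the following is a minimal sketch of how an LR image could be synthesized from an HR image. It assumes a PyTorch-style implementation and an illustrative function name (`degrade`); it is not taken from the released DAN code.

```python
import torch
import torch.nn.functional as F

def degrade(x, kernel, scale, noise_sigma=0.0):
    """Synthesize a LR image y = (x * k) downsampled by s, plus noise, from an HR image x.

    x:      HR image tensor of shape (B, C, H, W)
    kernel: blur kernel of shape (k, k), assumed normalized to sum to one
    scale:  integer downsampling factor s
    """
    b, c, h, w = x.shape
    k = kernel.shape[-1]
    # Blur every channel with the same kernel (depth-wise convolution).
    weight = kernel.view(1, 1, k, k).repeat(c, 1, 1, 1).to(x)
    x_pad = F.pad(x, [k // 2] * 4, mode='reflect')
    blurred = F.conv2d(x_pad, weight, groups=c)
    # Standard s-fold downsampler: keep the upper-left pixel of each s x s patch.
    y = blurred[..., ::scale, ::scale]
    # Additive white Gaussian noise.
    if noise_sigma > 0:
        y = y + noise_sigma * torch.randn_like(y)
    return y
```

This is only a sketch of the degradation process the paper assumes; the bicubic-interpolation setting discussed next corresponds to replacing the generic kernel by the bicubic one.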
To make this problem easier to solve, previous methods [37, 32, 38, 35] usually decompose it into two sequential steps: i) estimating the blur kernel from the LR image and ii) restoring the SR image based on the estimated kernel. This two-step solution involves two independently trained models, so they may not be compatible with each other. A small estimation error in the first step can cause a severe performance drop in the following one [14]. On the other hand, the first step can only utilize limited information from the LR image, which makes it difficult to predict a highly accurate blur kernel. As a result, although both models can perform well individually, the final result may be suboptimal when they are combined.

Instead of considering these two steps separately, we adopt an alternating optimization algorithm, which can estimate the blur kernel k and restore the SR image x in the same model. Specifically, we design two convolutional neural modules, namely Restorer and Estimator. Restorer restores the SR image based on the blur kernel predicted by Estimator, and the restored SR image is further used to help Estimator estimate a better blur kernel. Once the blur kernel is manually initialized, the two modules can cooperate with each other to form a closed loop, which can be iterated over and over. The iterating process is then unfolded into an end-to-end trainable network, which is called the deep alternating network (DAN). In this way, Estimator can utilize information from both the LR and SR images, which makes the estimation of the blur kernel easier. More importantly, Restorer is trained with the kernel estimated by Estimator, instead of the ground-truth kernel. Thus, during testing, Restorer can be more tolerant of the estimation error of Estimator. Besides, the results of both modules can be substantially improved during the iterations, so our alternating optimization algorithm is likely to reach better final results than the direct two-step solutions.

We summarize our contributions in three points:
1. We adopt an alternating optimization algorithm to estimate the blur kernel and restore the SR image for blind SR in a single network (DAN), which helps the two modules to be compatible with each other and is likely to reach better final results than previous two-step solutions.
2. We design two convolutional neural modules, which can be alternated repeatedly and then unfolded to form an end-to-end trainable network, without any pre/post-processing. It is easier to train and faster than previous two-step solutions. To the best of our knowledge, the proposed method is the first end-to-end network for blind SR.
3. Extensive experiments on synthetic datasets and real-world images show that our model can largely outperform state-of-the-art methods and produce more visually favorable results at much higher speed.

2 Related Work
2.1 Super Resolution in the Context of Bicubic Interpolation
Learning-based methods for SISR usually require a large number of paired HR and LR images as training samples. However, these paired samples are hard to obtain in the real world. As a result, researchers manually synthesize LR images from HR images with predefined downsampling settings. The most popular setting is bicubic interpolation, i.e. defining k in Equation 1 as the bicubic kernel. Since the emergence of SRCNN [9], various DNNs [21, 40, 39, 10, 16, 18] have been proposed based on this setting.
Recently, after the proposal of RCAN [39] and RRDB [30], the performance of these non-blind methods has even started to saturate on common benchmark datasets. However, the blur kernels for real images are much more complicated. In real applications, kernels are unknown and differ from image to image. As a result, although these methods have excellent performance in the context of bicubic downsampling, they still cannot be directly applied to real images due to the domain gap.

2.2 Super Resolution for Multiple Degradations
Another line of non-blind SR methods aims to build a single model for multiple degradations, i.e. the second step of the two-step solution for blind SR. These methods take both the LR image and its corresponding blur kernel as inputs. In [13, 29], the blur kernel is used to downsample images and synthesize training samples, which can be used to train a specific model for the given kernel and LR image. In [37], the blur kernel and LR image are directly concatenated at the first layer of a DNN. Thus, the SR result can be closely correlated with both the LR image and the blur kernel. In [38], Zhang et al. proposed a method based on the ADMM algorithm. They interpret this problem as MAP optimization and solve the data term and the prior term alternately. In [14], a spatial feature transform (SFT) layer is proposed to better preserve the details in the LR image while the blur kernel is an additional input. However, as pointed out in [14], the SR results of these methods are usually sensitive to the provided blur kernels. A small deviation of the provided kernel from the ground truth will cause a severe performance drop in these non-blind SR methods.

2.3 Blind Super Resolution
Previous methods for blind SR are usually sequential combinations of a kernel-estimation method and a non-blind SR method. Thus, kernel-estimation methods are also an important part of blind SR. In [26], Michaeli et al. estimate the blur kernel by utilizing internal patch recurrence. In [3] and [5], the LR image is first downsampled by a generative network, and then a discriminator is used to verify whether the downsampled image has the same distribution as the original LR image. In this way, the blur kernel can be learned by the generative network. In [14], Gu et al. not only train a network for kernel estimation, but also propose a correction network to iteratively correct the kernel. Although the accuracy of the estimated kernel is largely improved, this requires training two or even three networks, which is rather complicated. In contrast, DAN is an end-to-end trainable network that is much easier to train and much faster.

3 End-to-End Blind Super Resolution
3.1 Problem Formulation
As shown in Equation 1, there are three variables, i.e. x, k and n, to be determined in the blind SR problem. In practice, a denoising algorithm [36, 7, 15] can be applied first, so the blind SR algorithm only needs to focus on solving for k and x. This can be mathematically expressed as an optimization problem:

argmin_{k,x} ‖y − (x ⊗ k) ↓_s‖_1 + φ(x)   (2)

where the former part is the reconstruction term and φ(x) is the prior term for the HR image. The prior term is usually unknown and hard to express analytically. Thus, it is extremely difficult to solve this problem directly. Previous methods decompose this problem into two sequential steps:

k = M(y)
x = argmin_x ‖y − (x ⊗ k) ↓_s‖_1 + φ(x)   (3)

where M(·) denotes the function that estimates k from y, and the second step is usually solved by a non-blind SR method as described in Sec 2.2.
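For illustration, the reconstruction term of Equation (2) and the sequential baseline of Equation (3) can be written down directly, reusing the `degrade` sketch above. The function names and interfaces here are hypothetical placeholders, not an existing implementation.

```python
def reconstruction_error(x, y, kernel, scale):
    """L1 data-fidelity term of Equation (2), averaged over pixels (noise-free degradation)."""
    return (y - degrade(x, kernel, scale)).abs().mean()

def two_step_blind_sr(y, kernel_estimator, nonblind_sr):
    """Sequential baseline of Equation (3): estimate k from y alone, then run non-blind SR."""
    k = kernel_estimator(y)      # step i): k = M(y), using only the LR image
    x = nonblind_sr(y, k)        # step ii): restore given the (possibly inaccurate) estimate
    return x, k
```

The drawbacks discussed next follow from this structure: the two stages are trained separately, and the second stage never sees the estimation error of the first during training.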
This two-step solution has three drawbacks. Firstly, it usually requires training two or even more models, which is rather complicated. Secondly, M(·) can only utilize information from y, which treats k as a kind of prior of y. But in fact, k cannot be properly solved without information from x. Lastly, the non-blind SR model for the second step is trained with ground-truth kernels, while during testing it only has access to kernels estimated in the first step. The difference between ground-truth and estimated kernels usually causes a severe performance drop in the non-blind SR model [14].

To address these drawbacks, we propose an end-to-end network that largely alleviates these issues. We still split the problem into two subproblems, but instead of solving them sequentially, we adopt an alternating optimization algorithm, which restores the SR image and estimates the corresponding blur kernel alternately. The mathematical expression is

k_{i+1} = argmin_k ‖y − (x_i ⊗ k) ↓_s‖_1
x_{i+1} = argmin_x ‖y − (x ⊗ k_i) ↓_s‖_1 + φ(x)   (4)

We solve these two subproblems alternately, both via convolutional neural modules, namely Estimator and Restorer respectively. In fact, the Estimator subproblem even has an analytic solution, but we experimentally find that the analytic solution is more time-consuming and not robust enough (when noise is not fully removed). We fix the number of iterations as T and unfold the iterating process to form an end-to-end trainable network, which is called the deep alternating network (DAN). As shown in Figure 1, we initialize the kernel with a Dirac function, i.e. the center of the kernel is one and all other entries are zero. Following [14, 37], the kernel is also reshaped and then reduced by principal component analysis (PCA). We set T = 4 in practice, and both modules are supervised only at the last iteration by an L1 loss. The whole network can be well trained without any restrictions on intermediate results, because the parameters of both modules are shared between different iterations.

In DAN, Estimator takes both the LR and SR images as inputs, which makes the estimation of the blur kernel k much easier. More importantly, Restorer is trained with the kernel estimated by Estimator, instead of the ground-truth kernel as previous methods do. Thus, Restorer can be more tolerant of the estimation error of Estimator during testing. Besides, compared with previous two-step solutions, the results of both modules in DAN can be substantially improved, and DAN is likely to reach better final results. In particular, in the case where the scale factor s = 1, DAN becomes a deblurring network. Due to space limits, we only discuss SR cases in this paper.

3.2 Instantiate the Convolutional Neural Modules
Both modules in our network have two inputs. Estimator takes the LR and SR images, and Restorer takes the LR image and the blur kernel as inputs. We define the LR image as the basic input, and the other one as the conditional input. For example, the blur kernel is the conditional input of Restorer. During the iterations, the basic inputs of both modules stay the same, but their conditional inputs are repeatedly updated. We claim that it is significantly important to keep the output of each module closely related to its conditional input. Otherwise, the iterating results will collapse to a fixed point at the first iteration.
Specifically, if Estimator outputs the same kernel regardless of the value of the SR image, or Restorer outputs the same SR image regardless of the value of the blur kernel, their outputs will only depend on the basic input, and the results will stay the same during the iterations. To ensure that the outputs of Estimator and Restorer are closely related to their conditional inputs, we propose a conditional residual block (CRB). On the basis of the residual block in [39], we concatenate the conditional and basic inputs at the beginning:

f_out = R(Concat([f_basic, f_cond])) + f_basic   (5)

where R(·) denotes the residual mapping function of the CRB and Concat([·, ·]) denotes concatenation. f_basic and f_cond are the basic input and conditional input respectively. As shown in Figure 2 (c), the residual mapping function consists of two 3 × 3 convolutional layers and one channel attention layer [17]. Both Estimator and Restorer are built from CRBs.

Estimator. The whole structure of Estimator is shown in Figure 2 (b). We first downsample the SR image with a convolutional layer of stride s. Then the feature maps are sent to all CRBs as conditional inputs. At the end of the network, we squeeze the features by global average pooling to form the elements of the predicted kernel. Since the kernel is reduced by PCA, Estimator only needs to estimate the PCA representation of the blur kernel. In practice, Estimator has 5 CRBs, and both the basic input and the conditional input of each CRB have 32 channels.

Restorer. The whole structure of Restorer is shown in Figure 2 (a). In Restorer, we stretch the kernel spatially to the same spatial size as the LR image. Then the stretched kernel is sent to all CRBs of Restorer as conditional inputs. We use PixelShuffle [28] layers to upscale the features to the desired size. In practice, Restorer has 40 CRBs, and the basic input and conditional input of each CRB have 64 and 10 channels respectively.

4 Experiments
4.1 Experiments on Synthetic Test Images
4.1.1 Data, Training and Testing
We collect 3450 HR images from DIV2K [1] and Flickr2K [11] as the training set. To make a reasonable comparison with other methods, we train models with two different degradation settings. One is the setting in [14], which only focuses on cases with isotropic Gaussian blur kernels. The other is the setting in [3], which focuses on cases with more general and irregular blur kernels.

Setting 1. Following the setting in [14], the kernel size is set to 21. During training, the kernel width is uniformly sampled in [0.2, 4.0], [0.2, 3.0] and [0.2, 2.0] for scale factors 4, 3 and 2 respectively. For quantitative evaluation, we collect HR images from the commonly used benchmark datasets, i.e. Set5 [4], Set14 [34], Urban100 [19], BSD100 [24] and Manga109 [25]. Since determined kernels are needed for a reasonable comparison, we uniformly choose 8 kernels, denoted as Gaussian8, from the ranges [1.8, 3.2], [1.35, 2.40] and [0.80, 1.60] for scale factors 4, 3 and 2 respectively. The HR images are first blurred by the selected blur kernels and then downsampled to form synthetic test images.

Setting 2. Following the setting in [3], we set the kernel size to 11. We first generate anisotropic Gaussian kernels. The lengths of both axes are uniformly distributed in (0.6, 5), rotated by a random angle uniformly distributed in [−π, π]. To deviate from a regular Gaussian, we further apply uniform multiplicative noise (up to 25% of each pixel value of the kernel) and normalize it to sum to one. For testing, we use the benchmark dataset DIV2KRK that is used in [3].
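Before the quantitative results, it may help to make the pieces of Sections 3.1 and 3.2 concrete. First, a minimal PyTorch-style sketch of the conditional residual block in Equation (5); the channel-attention reduction factor and other details here are assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention used inside the CRB."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))

class ConditionalResidualBlock(nn.Module):
    """CRB of Equation (5): f_out = R(concat(f_basic, f_cond)) + f_basic."""
    def __init__(self, basic_channels, cond_channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(basic_channels + cond_channels, basic_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(basic_channels, basic_channels, 3, padding=1),
            ChannelAttention(basic_channels),
        )

    def forward(self, f_basic, f_cond):
        return self.body(torch.cat([f_basic, f_cond], dim=1)) + f_basic
```

Estimator and Restorer would then stack 5 and 40 such blocks respectively, feeding the downsampled SR features (Estimator) or the spatially stretched kernel (Restorer) to every block as f_cond. The unfolded alternation of Section 3.1 (Equation (4), Figure 1) can likewise be sketched as below, with the two modules treated as black boxes; the module interfaces and the handling of the PCA-reduced Dirac initialization are assumptions, not the released implementation.

```python
class DAN(nn.Module):
    """Sketch of the unfolded deep alternating network (illustrative only)."""

    def __init__(self, estimator, restorer, kernel_dim=10, iterations=4):
        super().__init__()
        self.estimator = estimator    # predicts the PCA-reduced kernel from (LR, SR)
        self.restorer = restorer      # predicts the SR image from (LR, reduced kernel)
        self.kernel_dim = kernel_dim  # dimension of the PCA-reduced kernel
        self.iterations = iterations  # T = 4 in the paper

    def forward(self, lr):
        b = lr.size(0)
        # The paper initializes with a Dirac kernel projected by PCA; a fixed
        # vector is used here as a simple stand-in for that initialization.
        kernel = lr.new_zeros(b, self.kernel_dim)
        sr = None
        for _ in range(self.iterations):
            sr = self.restorer(lr, kernel)    # restore the SR image given the current kernel
            kernel = self.estimator(lr, sr)   # re-estimate the kernel given the restored image
        return sr, kernel
```

Both outputs of the final iteration would then be supervised with L1 losses against the HR image and the PCA-reduced ground-truth kernel, as stated above; parameters are shared across iterations, so the depth of the unrolled loop can even be changed at test time.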
The input size during training is 64 × 64 for all scale factors. The batch size is 64. Each model is trained for 4 × 10^5 iterations. We use Adam [22] as our optimizer, with β1 = 0.9, β2 = 0.99. The initial learning rate is 2 × 10^−4 and decays by half every 1 × 10^5 iterations. All models are trained on RTX 2080Ti GPUs.

4.1.2 Quantitative Results
Setting 1. For the first setting, we evaluate our method on test images synthesized with the Gaussian8 kernels. We mainly compare our results with ZSSR [29] and IKC [14], which are methods designed for blind SR. We also include a comparison with CARN [2]. Since it is not designed for blind SR, we apply the deblurring method [27] before or after CARN. The PSNR and SSIM results on the Y channel of the transformed YCbCr space are shown in Table 1. Although CARN achieves remarkable results in the context of bicubic downsampling, it suffers a severe performance drop when applied to images with unknown blur kernels. Its performance is largely improved when it is followed by a deblurring method, but it is still inferior to blind-SR methods. ZSSR trains a specific network for each tested image by utilizing internal patch recurrence. However, ZSSR has an inherent drawback: the training samples for each image are limited, so it cannot learn a good prior for HR images. IKC is also a two-step solution for blind SR. Although the accuracy of the estimated kernel is largely improved in IKC, the final result is still suboptimal. DAN is trained in an end-to-end scheme, which is not only much easier to train than two-step solutions, but also likely to reach a better optimum. As shown in Table 1, the PSNR result of DAN on Manga109 for scale ×3 is even 4.95 dB higher than that of IKC. For other scales and datasets, DAN also largely outperforms IKC. The visual results of img 005 in Urban100 are shown in Figure 3 for comparison. As one can see, CARN and ZSSR cannot even restore the edges of the window. IKC performs better, but the edges are severely blurred. In contrast, DAN restores sharp edges and produces a more visually pleasant result.

Setting 2. The second setting involves irregular blur kernels, which is more general but also more difficult to solve. For Setting 2, we mainly compare methods of three different classes: i) SOTA SR algorithms trained on bicubically downsampled images, such as EDSR [23] and RCAN [39], ii) blind SR methods designed for the NTIRE competition, such as PDN [31] and WDSR [33], iii) two-step solutions, i.e. the combination of a kernel-estimation method and a non-blind SR method, such as KernelGAN [3] and ZSSR [29]. The PSNR and SSIM results on the Y channel are shown in Table 2. Similarly, the performance of methods trained on bicubically downsampled images is limited by the domain gap. Thus, their results are only slightly better than that of interpolation. The methods in Class 2 are trained on synthesized images provided in the NTIRE competition. Although these methods achieve remarkable results in the competition, they still cannot generalize well to irregular blur kernels. The comparison between methods of Class 3 is particularly instructive. Specifically, USRNet [35] achieves remarkable results when GT kernels are provided, and KernelGAN also performs well on kernel estimation. However, when they are combined, as shown in Table 2, the final SR results are worse than those of most other methods. This indicates that it is important for the Estimator and Restorer to be compatible with each other.
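For reference, the irregular Setting 2 kernels described in Section 4.1.1 could be synthesized roughly as follows. The exact parametrization (e.g. whether the sampled axis lengths act as variances) is an assumption, not the reference code of [3].

```python
import numpy as np

def random_anisotropic_kernel(size=11, length_range=(0.6, 5.0), noise_level=0.25, rng=None):
    """Sample an irregular blur kernel roughly following Setting 2.

    An anisotropic Gaussian is built from random axis lengths and a random rotation,
    uniform multiplicative noise (up to 25%) is applied, and the kernel is
    renormalized to sum to one.
    """
    rng = rng or np.random.default_rng()
    lam1, lam2 = rng.uniform(*length_range, size=2)    # axis lengths (treated as variances)
    theta = rng.uniform(-np.pi, np.pi)                  # random rotation angle
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    sigma = rot @ np.diag([lam1, lam2]) @ rot.T          # covariance of the Gaussian
    inv_sigma = np.linalg.inv(sigma)

    grid = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(grid, grid)
    coords = np.stack([xx, yy], axis=-1)                 # (size, size, 2)
    quad = np.einsum('hwi,ij,hwj->hw', coords, inv_sigma, coords)
    kernel = np.exp(-0.5 * quad)

    # Deviate from a regular Gaussian with uniform multiplicative noise, then normalize.
    kernel *= 1.0 + noise_level * rng.uniform(-1.0, 1.0, size=kernel.shape)
    kernel = np.clip(kernel, 0, None)
    return kernel / kernel.sum()
```

Applying such a kernel with the earlier `degrade` sketch would produce test pairs in the spirit of DIV2KRK, although the benchmark itself fixes one kernel per image.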
Additionally, although a better kernel-estimation method can benefit the SR results, the overall performance is still largely inferior to that of DAN. DAN outperforms the combination of KernelGAN and ZSSR by 2.20 dB and 0.74 dB for scales ×2 and ×4 respectively. The visual results of img 892 in DIV2KRK are shown in Figure 4. Although the combination of KernelGAN and ZSSR can produce slightly sharper edges than interpolation, it suffers from severe artifacts. The SR image of DAN is obviously much cleaner and has more reliable details.

4.1.3 Study of Estimated Kernels
To evaluate the accuracy of the predicted kernels, we calculate their L1 errors in the reduced space, and the results on Urban100 are shown in Figure 5 (a). As one can see, the L1 error of the reduced kernels predicted by DAN is much lower than that of IKC. This suggests that the overall improvements of DAN may partially come from more accurately retrieved kernels. We also plot the PSNR results with respect to kernels of different sigma in Figure 5 (b). As sigma increases, the performance gap between IKC and DAN also becomes larger. This indicates that DAN may have better generalization ability. We also replace the estimated kernel by the ground truth (GT) to further investigate the influence of Estimator. If GT kernels are provided, the iterating process becomes meaningless, so we test the Restorer with just one forward propagation. The results for Setting 1 are shown in Table 3. The result remains almost unchanged and sometimes even gets worse when GT kernels are provided. This indicates that Estimator may already satisfy the requirements of Restorer, and the superiority of DAN also partially comes from the good cooperation between its Estimator and Restorer.

4.1.4 Study of Iterations
After the model is trained, we also change the number of iterations to see whether the two modules have learned the property of convergence or have just 'remembered' the iteration number. The model is trained with 4 iterations, but during testing we increase the iteration number from 1 to 7. As shown in Figure 6 (a) and (c), the average PSNR results on Set5 and Set14 first increase rapidly and then gradually converge. It should be noted that when we iterate more times than during training, the performance does not become worse, and sometimes even becomes better. For example, the average PSNR on Set14 is 20.43 dB when the iteration number is 5, higher than the 20.42 dB obtained when we iterate 4 times. Although the increment is relatively small, it suggests that the two modules may have learned to cooperate with each other, instead of solving this problem like ordinary end-to-end networks, in which case the performance would drop significantly when the testing setting differs from that of training. It also suggests that the estimation error of intermediate results does not destroy the convergence of DAN. In other words, DAN is robust to various estimation errors.

4.2 Inference Speed
Another advantage of our end-to-end model is its higher inference speed. To make a quantitative comparison, we evaluate the average speed of different methods on the same platform. We choose the 40 images synthesized by the Gaussian8 kernels from Set5 as testing images, and all methods are evaluated on the same platform with an RTX 2080Ti GPU. We choose KernelGAN [3] + ZSSR [29] as one of the representative methods. Its speed is 415.7 seconds per image. IKC [14] has a much faster inference speed of only 3.93 seconds per image.
As a comparison, the average speed of DAN is 0.75 seconds per image, nearly 554 times faster than KernelGAN + ZSSR and 5 times faster than IKC. In other words, DAN not only largely outperforms SOTA blind SR methods on PSNR, but is also much faster.

4.3 Experiments on Real World Images
We also conduct experiments to show that DAN can generalize well to real-world images. In this case, we need to consider the influence of additive noise. As mentioned in Sec 3.1, we could apply a denoising algorithm first. But for simplicity, we retrain a different model by adding AWGN to the LR image during training. In this way, DAN is forced to generalize to noisy images. The covariance of the noise is set to 15. We use KernelGAN [3] + ZSSR [29] and IKC [14] as the representative methods for blind SR, and CARN [2] as the representative non-blind SR method. The commonly used image chip [12] is chosen as the test image. It should be noted that it is a real image and we do not have the ground truth. Thus, we can only provide a visual comparison in Figure [12]. As one can see, the result of KernelGAN + ZSSR is slightly better than bicubic interpolation, but it is still heavily blurred. The result of CARN is over-smoothed and the edges are not sharp enough. IKC produces a cleaner result, but there are still some artifacts. The letter 'X' restored by IKC has an obvious dark line at the top right part, but this dark line is much lighter in the image restored by DAN. This suggests that if trained with noisy images, DAN can also learn to denoise and produce more visually pleasant results with more reliable details. This is because both modules are implemented via convolutional layers, which are flexible enough to be adapted to different tasks.

5 Conclusion
In this paper, we have proposed an end-to-end algorithm for blind SR. This algorithm is based on alternating optimization, the two parts of which are both implemented by convolutional modules, namely Restorer and Estimator. We unfold the alternating process to form an end-to-end trainable network. In this way, Estimator can utilize information from both the LR and SR images, which makes it easier to estimate the blur kernel. More importantly, Restorer is trained with the kernel estimated by Estimator, instead of the ground-truth kernel, so Restorer can be more tolerant of the estimation error of Estimator. Besides, the results of both modules can be substantially improved during the iterations, so DAN is likely to reach better final results than previous two-step solutions. Experiments also show that DAN outperforms SOTA blind SR methods by a large margin. In the future, if the two parts of DAN can be implemented by more powerful modules, we believe that its performance can be further improved.

Broader Impact
Super resolution is a traditional task in computer vision. It has been studied for several decades and has wide applications in video enhancement, medical imaging, as well as security and surveillance imaging. These techniques have largely benefited society in various areas for years and have had no negative impact so far. The proposed method (DAN) could further improve the merits of these applications, especially in cases where the degradations are unknown. DAN has relatively better performance and much higher speed, and it is possible for DAN to be used in real-time video enhancement or surveillance imaging. This work does not present any foreseeable negative societal consequences.
Acknowledgements
This work is jointly supported by the National Key Research and Development Program of China (2016YFB1001000), the Key Research Program of Frontier Sciences, CAS (ZDBS-LY-JSC032), the Shandong Provincial Key Research and Development Program (2019JZZY010119), and CAS-AIR.
1. What is the main contribution of the paper regarding Blind-SR?
2. What are the strengths of the proposed approach, particularly in its simplicity and results?
3. What are the weaknesses of the paper, especially regarding experiments, comparisons, and understanding?
4. How does the reviewer suggest improving the ablation and architectural elements of the proposed method?
5. What is unclear about the comparison to ZSSR?
6. What additional information should be included in the supplementary material to enhance the paper's understanding and visualization?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper tackles blind SR by unfolding the two-step approach of IKC (Gu'19) into one trainable network that requires no test-time optimization.
Strengths
1. Simple and elegant way to idealize the two steps.
2. Impressive results, large meaningful margin.
Weaknesses
All weaknesses are related to experiments, analysis and understanding.
1. Missing methods to compare to:
- NTIRE'20 leaders in the real-SR tracks seem to be a must.
- Zhang et al., Deep Unfolding Network for Image Super-Resolution, CVPR'20 (cited [33] but not compared against)
- Cornillere et al., Blind Image Super-Resolution with Spatially Variant Degradations, SIGA'19
2. Comparison settings: Setting 2 - DIV2KRK is a great choice, but only a few methods are tested on it. As for Setting 1 - why use specifically Gauss8? Why not use a set of kernels, for example as in [33]? Also, a comparison in the non-blind setting with the bicubic kernel is important to understand whether the improvement is in the upscaling or in the kernel estimation.
3. Ablation needed: While the results are impressive, there is no analysis that can scientifically contribute to understanding the contributions. Use the GT kernel and compare, try different initializations, ablate architectural elements (what happens if you apply the high-level idea using the basic networks introduced in IKC? This would let us know if the advantage comes from the elegant idea or from an optimized architecture).
4. Visualizing retrieved kernels: This is important in order to see if the kernel is indeed estimated correctly as claimed, or some magic just happens to help produce the results.
5. Unclear comparison to ZSSR: See correctness section.
6. Supplementary material: Contains code, which is great. However, I would expect other things that are missing:
- Visual comparisons to other methods in order to understand the visual effects.
- Kernel visualizations
NIPS
Title Unfolding the Alternating Optimization for Blind Super Resolution Abstract Previous methods decompose blind super resolution (SR) problem into two sequential steps: i) estimating blur kernel from given low-resolution (LR) image and ii) restoring SR image based on estimated kernel. This two-step solution involves two independently trained models, which may not be well compatible with each other. Small estimation error of the first step could cause severe performance drop of the second one. While on the other hand, the first step can only utilize limited information from LR image, which makes it difficult to predict highly accurate blur kernel. Towards these issues, instead of considering these two steps separately, we adopt an alternating optimization algorithm, which can estimate blur kernel and restore SR image in a single model. Specifically, we design two convolutional neural modules, namely Restorer and Estimator. Restorer restores SR image based on predicted kernel, and Estimator estimates blur kernel with the help of restored SR image. We alternate these two modules repeatedly and unfold this process to form an end-to-end trainable network. In this way, Estimator utilizes information from both LR and SR images, which makes the estimation of blur kernel easier. More importantly, Restorer is trained with the kernel estimated by Estimator, instead of ground-truth kernel, thus Restorer could be more tolerant to the estimation error of Estimator. Extensive experiments on synthetic datasets and real-world images show that our model can largely outperform state-of-the-art methods and produce more visually favorable results at much higher speed. The source code is available at https://github.com/greatlog/DAN.git. 1 Introduction Single image super resolution (SISR) aims to recover the high-resolution (HR) version of a given degraded low-resolution (LR) image. It has wide applications in video enhancement, medical imaging, as well as security and surveillance imaging. Mathematically, the degradation process can be expressed as y = (x⊗ k) ↓s +n (1) where x is the original HR image, y is the degraded LR image, ⊗ denotes the two-dimensional convolution of x with blur kernel k, n denotes Additive White Gaussian Noise (AWGN), and ↓s denotes the standard s-fold downsampler, which means keeping only the upper-left pixel for each ∗Corresponding author 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. distinct s × s patch [35]. Then SISR refers to the process of recovering x from y. It is a highly ill-posed problem due to this inverse property, and thus has always been a challenging task. Recently, deep neural networks (DNNs) have achieved remarkable results on SISR. But most of these methods [39, 2, 40, 23, 8, 21] assume that the blur kernel is predefined as the kernel of bicubic interpolation. In this way, large number of training samples can be manually synthesized and further used to train powerful DNNs. However, blur kernels in real applications are much more complicated, and there is a domain gap between bicubically synthesized training samples and real images. This domain gap will lead to severe performance drop when these networks are applied to real applications. Thus, more attention should be paid to SR in the context of unknown blur kernels, i.e. blind SR. In blind SR, there is one more undetermined variable, i.e. blur kernel k, and the optimization also becomes much more difficult. 
To make this problem easier to be solved, previous methods [37, 32, 38, 35] usually decompose it into two sequential steps: i) estimating blur kernel from LR image and ii) restoring SR image based on estimated kernel. This two-step solution involves two independently trained models, thus they may be not well compatible to each other. Small estimation error of the first step could cause severe performance drop of the following one [14]. But on the other hand, the first step can only utilize limited information from LR image, which makes it difficult to predict highly accurate blur kernel. As a result, although both models can perform well individually, the final result may be suboptimal when they are combined together. Instead of considering these two steps separately, we adopt an alternating optimization algorithm, which can estimate blur kernel k and restore SR image x in the same model. Specifically, we design two convolutional neural modules, namely Restorer and Estimator. Restorer restores SR image based on blur kernel predicted by Estimator, and the restored SR image is further used to help Estimator estimate better blur kernel. Once the blur kernel is manually initialized, the two modules can well corporate with each other to form a closed loop, which can be iterated over and over. The iterating process is then unfolded to an end-to-end trainable network, which is called deep alternating network (DAN). In this way, Estimator can utilize information from both LR and SR images, which makes the estimation of blur kernel easier. More importantly, Restorer is trained with the kernel estimated by Estimator, instead of ground-truth kernel. Thus during testing Restorer could be more tolerant to the estimation error of Estimator. Besides, the results of both modules could be substantially improved during the iterations, thus it is likely for our alternating optimization algorithm to get better final results than the direct two-step solutions. We summarize our contributions into three points: 1. We adopt an alternating optimization algorithm to estimate blur kernel and restore SR image for blind SR in a single network (DAN), which helps the two modules to be well compatible with each other and likely to get better final results than previous two-step solutions. 2. We design two convolutional neural modules, which can be alternated repeatedly and then unfolded to form an end-to-end trainable network, without any pre/post-processing. It is easier to be trained and has higher speed than previous two-step solutions. To the best of our knowledge, the proposed method is the first end-to-end network for blind SR. 3. Extensive experiments on synthetic datasets and real-world images show that our model can largely outperform state-of-the art methods and produce more visually favorable results at much higher speed. 2 Related Work 2.1 Super Resolution in the Context of Bicubic Interpolation Learning based methods for SISR usually require a large number of paired HR and LR images as training samples. However, these paired samples are hard to get in real world. As a result, researchers manually synthesize LR images from HR images with predefined downsampling settings. The most popular setting is bicubic interpolation, i.e. defining k in Equation 1 as bicubic kernel. From the arising of SRCNN [9], various DNNs [21, 40, 39, 10, 16, 18] have been proposed based on this setting. 
Recently, after the proposal of RCAN [39] and RRDB [30], the performance of these non-blind methods even start to saturate on common benchmark datasets. However, the blur kernels for real images are indeed much more complicated. In real applications, kernels are unknown and differ from image to image. As a result, despite that these methods have excellent performance in the context of bicubic downsampling, they still cannot be directly applied to real images due to the domain gap. 2.2 Super Resolution for Multiple Degradations Another kind of non-blind SR methods aims to propose a single model for multiple degradations, i.e. the second step of the two-step solution for blind SR. These methods take both LR image and its corresponding blur kernel as inputs. In [13, 29], the blur kernel is used to downsample images and synthesize training samples, which can be used to train a specific model for given kernel and LR image. In [37], the blur kernel and LR image are directly concatenated at the first layer of a DNN. Thus, the SR result can be closely correlated to both LR image and blur kernel. In [38], Zhang et al. proposed a method based on ADMM algorithm. They interpret this problem as MAP optimization and solve the data term and prior term alternately. In [14], a spatial feature transform (SFT) layer is proposed to better preserve the details in LR image while blur kernel is an additional input. However, as pointed out in [14], the SR results of these methods are usually sensitive to the provided blur kernels. Small deviation of provided kernel from the ground truth will cause severe performance drop of these non-blind SR methods. 2.3 Blind Super Resolution Previous methods for blind SR are usually the sequential combinations of a kernel-estimation method and a non-blind SR method. Thus kernel-estimation methods are also an important part of blind SR. In [26], Michaeli et al. estimate the blur kernel by utilizing the internal patch recurrence. In [3] and [5], LR image is firstly downsampled by a generative network, and then a discriminator is used to verify whether the downsampled image has the same distribution with original LR image. In this way, the blur kernel can be learned by the generative network. In [14], Gu et al. not only train a network for kernel estimation, but also propose a correction network to iteratively correct the kernel. Although the accuracy of estimated kernel is largely improved, it requires training of two or even three networks, which is rather complicated. Instead, DAN is an end-to-end trainable network that is much easier to be trained and has much higher speed. 3 End-to-End Blind Super Resolution 3.1 Problem Formulation As shown in Equation 1, there are three variables, i.e. x, k and n, to be determined in blind SR problem. In literature, we can apply a denoise algorithm [36, 7, 15] in the first place. Then blind SR algorithm only needs to focus on solving k and x. It can be mathematically expressed an optimization problem: argmin k,x ‖y − (x⊗ k) ↓s ‖1 + φ(x) (2) where the former part is the reconstruction term, and φ(x) is prior term for HR image. The prior term is usually unknown and hard to be analytically expressed. Thus it is extremely difficult to solve this problem directly. Previous methods decompose this problem into two sequential steps:{ k =M(y) x = argmin x ‖y − (x⊗ k) ↓s ‖1 + φ(x) (3) where M(·) denotes the function that estimates k from y, and the second step is usually solved by a non-blind SR method described in Sec 2.2. 
This two-step solution has its drawbacks in threefold. Firstly, this algorithm usually requires training of two or even more models, which is rather complicated. Secondly, M(·) can only utilize information from y, which treats k as a kind of prior of y. But in fact, k could not be properly solved without information from x. At last, the non-blind SR model for the second step is trained with ground-truth kernels. While during testing, it can only have access to kernels estimated in the first step. The difference between ground-truth and estimated kernels will usually cause serve performance drop of the non-blind SR model [14]. Towards these drawbacks, we propose an end-to-end network that can largely release these issues. We still split it into two subproblems, but instead of solving them sequentially, we adopt an alternating optimization algorithm, which restores SR image and estimates corresponding blur kernel alternately. The mathematical expression is ki+1 = argmin k ‖y − (xi ⊗ k) ↓s ‖1 xi+1 = argmin x ‖y − (x⊗ ki) ↓s ‖1 + φ(x) (4) We alternately solve these two subproblems both via convolutional neural modules, namely Estimator and Restorer respectively. Actually, there even has an analytic solution for Estimator. But we experimentally find that analytic solution is more time-consuming and not robust enough (when noise is not fully removed). We fix the number of iterations as T and unfold the iterating process to form an end-to-end trainable network, which is called deep alternating network (DAN). As shown in Figure 1, we initialize the kernel by Dirac function, i.e. the center of the kernel is one and zeros otherwise. Following [14, 37], the kernel is also reshaped and then reduced by principal component analysis (PCA). We set T = 4 in practice and both modules are supervised only at the last iteration by L1 loss. The whole network could be well trained without any restrictions on intermediate results, because the parameters of both modules are shared between different iterations. In DAN, Estimator takes both LR and SR images as inputs, which makes the estimation of blur kernel k much easier. More importantly, Restorer is trained with the kernel estimated by Estimator, instead of ground-truth kernel as previous methods do. Thus, Restorer could be more tolerant to the estimation error of Estimator during testing. Besides, compared with previous two-step solutions, the results of both modules in DAN could be substantially improved, and it is likely for DAN to get better final results. Specially, in the case where the scale factor s = 1, DAN becomes an deblurring network. Due to limited pages, we only discuss SR cases in this paper. 3.2 Instantiate the Convolutional Neural Modules Both modules in our network have two inputs. Estimator takes LR and SR image, and Restorer takes LR image and blur kernel as inputs. We define LR image as basic input, and the other one is conditional input. For example, blur kernel is the conditional input of Restorer. During iterations, the basic inputs of both modules keep the same, but their conditional inputs are repeatedly updated. We claim that it is significantly important to keep the output of each module closely related to its conditional input. Otherwise, the iterating results will collapse to a fixed point at the first iteration. 
Specifically, if Estimator outputs the same kernel regardless the value of SR image, or Restorer outputs the same SR image regardless of the value of blur kernel, their outputs will only depend on the basic input, and the results will keep the same during the iterations. To ensure that the outputs of Estimator and Restorer are closely related to their conditional inputs, we propose a conditional residual block (CRB). On the basis of the residual block in [39], we concatenate the conditional and basic inputs at the beginning: fout = R(Concat([fbasic, fcond])) + fbasic (5) where R(·) denotes the residual mapping function of CRB and Concat([·, ·]) denotes concatenation. fbasic and fcond are the basic input and conditional input respectively. As shown in Figure 2 (c), the residual mapping function consists of two 3× 3 convolutional layers and one channel attention layer [17]. Both Estimator and Restorer are build by CRBs. Estimator. The whole structure of Estimator is shown in Figure 2 (b). We firstly downsample SR image by a convolutional layer with stride s. Then the feature maps are sent to all CRBs as conditional inputs. At the end of the network, we squeeze the features by global average pooling to form the elements of predicted kernel. Since the kernel is reduced by PCA, Estimator only needs to estimate the PCA result of blur kernel. In practice, Estimator has 5 CRBs, and both basic input and conditional input of each CRB have 32 channels. Restorer. The whole structure of Restorer is shown in Figure 2 (a). In Restorer, we stretch the kernel in spatial dimension to the same spatial size as LR image. Then the stretched kernel is sent to all CRBs of Restorer as conditional inputs. We use PixelShuffle [28] layers to upscale the features to desired size. In practice, Restorer has 40 CRBs, and the basic input and conditional input of each CRB has 64 and 10 channels respectively. 4 Experiments 4.1 Experiments on Synthetic Test Images 4.1.1 Data, Training and Testing We collect 3450 HR images from DIV2K [1] and Flickr2K [11] as training set. To make reasonable comparison with other methods, we train models with two different degradation settings. One is the setting in [14], which only focuses on cases with isotropic Gaussian blur kernels. The other is the setting in [3], which focuses on cases with more general and irregular blur kernels. Setting 1. Following the setting in [14], the kernel size is set as 21. During training, the kernel width is uniformly sampled in [0.2, 4.0], [0.2, 3.0] and [0.2, 2.0] for scale factors 4, 3 and 2 respectively. For quantitative evaluation, we collect HR images from the commonly used benchmark datasets, i.e. Set5 [4], Set14 [34], Urban100 [19], BSD100 [24] and Manga109 [25]. Since determined kernels are needed for reasonable comparison, we uniformly choose 8 kernels, denoted as Gaussian8, from range [1.8, 3.2], [1.35, 2.40] and [0.80, 1.60] for scale factors 4, 3 and 2 respectively. The HR images are first blurred by the selected blur kernels and then downsampled to form synthetic test images. Setting 2. Following the setting in [3], we set the kernel size as 11. We firstly generate anisotropic Gaussian kernels. The lengths of both axises are uniformly distributed in (0.6, 5), rotated by a random angle uniformly distributed in [−π, π]. To deviate from a regular Gaussian, we further apply uniform multiplicative noise (up to 25% of each pixel value of the kernel) and normalize it to sum to one. For testing, we use the benchmark dataset DIV2KRK that is used in [3]. 
The input size during training is 64× 64 for all scale factors. The batch size is 64. Each model is trained for 4× 105 iterations. We use Adam [22] as our optimizer, with β1 = 0.9, β2 = 0.99. The initial learning rate is 2× 10−4, and will decay by half at every 1× 105 iterations. All models are trained on RTX2080Ti GPUs. 4.1.2 Quantitative Results Setting 1. For the first setting, we evaluate our method on test images synthesized by Gaussian8 kernels. We mainly compare our results with ZSSR [29] and IKC [14], which are methods designed for blind SR. We also include a comparison with CARN [2]. Since it is not designed for blind SR, we perform deblurring method [27] before or after CARN. The PSNR and SSIM results on Y channel of transformed YCbCr space are shown in Table 1. Despite that CARN achieves remarkable results in the context of bicubic downsampling, it suffers severe performance drop when applied to images with unknown blur kernels. Its performance is largely improved when it is followed by a deblurring method, but still inferior to that of blind-SR methods. ZSSR trains specific network for each single tested image by utilizing the internal patch recurrence. However, ZSSR has an in-born drawback: the training samples for each image are limited, and thus it cannot learn a good prior for HR images. IKC is also a two-step solution for blind SR. Although the accuracy of estimated kernel is largely improved in IKC, the final result is still suboptimal. DAN is trained in an end-to-end scheme, which is not only much easier to be trained than two-step solutions, but also likely to a reach a better optimum point. As shown in Table 1, the PSNR result of DAN on Manga109 for scale ×3 is even 4.95dB higher than that of IKC. For other scales and datasets, DAN also largely outperforms IKC. The visual results of img 005 in Urban100 are shown in Figure 3 for comparison. As one can see, CARN and ZSSR even cannot restore the edges for the window. IKC performs better, but the edges are severely blurred. While DAN can restore sharp edges and produce more visually pleasant result. Setting 2. The second setting involves irregular blur kernels, which is more general, but also more difficult to solve. For Setting 2, we mainly compare methods of three different classes: i) SOTA SR algorithms trained on bicubically downsampled images such as EDSR [23] and RCAN [39] , ii) blind SR methods designed for NTIRE competition such as PDN [31] and WDSR [33], iii) the two-step solutions, i.e. the combination of a kernel estimation method and a non-blind SR method, such as Kernel-GAN [3] and ZSSR [29]. The PSNR and SSIM results on Y channl are shown in Table 2. Similarly, the performance of methods trained on bicubically downsampled images is limited by the domain gap. Thus, their results are only slightly better than that of interpolation. The methods in Class 2 are trained on synthesized images provided in NTIRE competition. Although these methods achieve remarkable results in the competition, they still cannot generalize well to irregular blur kernels. The comparison between methods of Class 3 can enlighten us a lot. Specifically, USRNet [35] achieves remarkable results when GT kernels are provided, and KernelGAN also performs well on kernel estimation. However, when they are combined together, as shown in Table 2, the final SR results are worse than most other methods. This indicates that it is important for the Estimator and Restorer to be compatible with each other. 
Additionally, although better kernel-estimation method can benefit the SR results, the overall performance is still largely inferior to that of DAN. DAN outperforms the combination of KernelGAN and ZSSR by 2.20dB and 0.74dB for scales ×2 and ×4 respectively. The visual results of img 892 in DIVKRK are shown in Figure 4. Although the combination of KernelGAN and ZSSR can produce slightly shaper edges than interpolation, it suffers from severe artifacts. The SR image of DAN is obviously much cleaner and has more reliable details. 4.1.3 Study of Estimated Kernels To evaluated the accuracy of predicted kernels, we calculate their L1 errors in the reduced space, and the results on Urban100 are shown in Figure 5 (a). As one can see that the L1 error of reduced kernels predicted by DAN are much lower than that of IKC. It suggests that the overall improvements of DAN may partially come from more accurate retrieved kernels. We also plot the PSNR results with respect to kernels with different sigma in Figure 5 (b). As sigma increases, the performance gap between IKC and DAN also becomes larger. It indicates that DAN may have better generalization ability. We also replace the estimated kernel by ground truth (GT) to further investigate the influence of Estimator. If GT kernels are provided, the iterating processing becomes meaningless. Thus we test the Restorer with just once forward propagation. The tested results for Setting 1 is shown in Table 3. The result almost keeps unchanged and sometimes even gets worser when GT kernels are provided. It indicates that Predictor may have already satisfied the requirements of Restorer, and the superiority of DAN also partially comes from the good cooperation between its Predictor and Restorer. 4.1.4 Study of Iterations After the model is trained, we also change the number of iterations to see whether the two modules have learned the property of convergence or just have ‘remembered’ the iteration number. The model is trained with 4 iterations, but during testing we increase the iteration number from 1 to 7. As shown in Figure 6 (a) and (c), the average PSNR results on Set5 and Set14 firstly increase rapidly and then gradually converge. It should be noted that when we iterate more times than training, the performance dose not becomes worse, and sometimes even becomes better. For example, the average PSNR on Set14 is 20.43dB when the iteration number is 5, higher than 20.42dB when we iterate 4 times. Although the incremental is relatively small, it suggests that the two modules may have learned to cooperate with each other, instead of solving this problem like ordinary end-to-end networks, in which cases, the performance will drop significantly when the setting of testing is different from that of training. It also suggests that the estimation error of intermediate results does not destroy the convergence of DAN. In other words, DAN is robust to various estimation error. 4.2 Inference Speed One more superiority of our end-to-end model is that it has higher inference speed. To make a quantitative comparison, we evaluate the average speed of different methods on the same platform. We choose the 40 images synthesized by Gaussian8 kernels from Set5 as testing images, and all methods are evaluated on the same platform with a RTX2080Ti GPU. We choose KernelGAN [3] + ZSSR [29] as the one of the representative methods. Its speed is 415.7 seconds per image. IKC [14] has much faster inference speed, which is only 3.93 seconds per image. 
As a comparison, the average speed of DAN is 0.75 seconds per image, nearly 554 times faster than KernelGAN + ZSSR, and 5 times faster than IKC. In other words, DAN not only can largely outperform SOTA blind SR methods on PSNR results, but also has much higher speed. 4.3 Experiments on Real World Images We also conduct experiments to prove that DAN can generalize well to real wold images. In this case, we need to consider the influence of additive noise. As we mentioned in Sec 3.1, we can perform an denoise algorithm in the first place. But for simplicity, we retrain a different model by adding AWGN to LR image during training. In this way, DAN would be forced to generalize to noisy images. The covariance of noise is set as 15. We use KernelGAN [3] + ZSSR [29] and IKC [14] as the representative methods for blind SR, and CARN [2] as the representative method for non-blind SR method. The commonly used image chip [12] is chosen as test image. It should be noted that it is a real image and we do not have the ground truth. Thus we can only provide a visual comparison in Figure [12]. As one can see, the result of KernelGAN + ZSSR is slightly better than bicubic interpolation, but is still heavily blurred. The result of CARN is over smoothed and the edge is not sharp enough. IKC produces cleaner result, but there are still some artifacts. The letter ‘X’ restored by IKC has an obvious dark line at the top right part. But this dark line is much lighter in the image restored by DAN. It suggests that if trained with noisy images, DAN can also learn to denoise, and produce more visually pleasant results with more reliable details. This is because that both modules are implemented via convolutional layers, which are flexible enough to be adapted to different tasks. 5 Conclusion In this paper, we have proposed an end-to-end algorithm for blind SR. This algorithm is based on alternating optimization, the two parts of which are both implemented by convolutional modules, namely Restorer and Estimator. We unfold the alternating process to form an end-to-end trainable network. In this way, Estimator can utilize information from both LR and SR images, which makes it easier to estimate blur kernel. More importantly, Restorer is trained with the kernel estimated by Estimator, instead of ground-truth kernel, thus Restorer could be more tolerant to with the estimation error of Estimator. Besides, the results of both modules could be substantially improved during the iterations, thus it is likely for DAN to get better final results than previous two-step solutions. Experiments also prove that DAN outperforms SOTA blind SR methods by a large margin. In the future, if the two parts of DAN can be implemented by more powerful modules, we believe that its performance could be further improved. Broader Impact Super Resolution is a traditional task in computer vision. It has been studied for several decades and has wide applications in video enhancement, medical imaging, as well as security and surveillance imaging. These techniques have largely benefited the society in various areas for years and have no negative impact yet. The proposed method (DAN) could further improve the merits of these applications especially in cases where the degradations are unknown. DAN has relatively better performance and much higher speed, and it is possible for DAN to be used in real-time video enhancement or surveillance imaging. This work does not present any negative foreseeable societal consequence. 
5 Conclusion In this paper, we have proposed an end-to-end algorithm for blind SR. The algorithm is based on alternating optimization, whose two parts are implemented by convolutional modules, namely the Restorer and the Estimator. We unfold the alternating process to form an end-to-end trainable network. In this way, the Estimator can utilize information from both LR and SR images, which makes it easier to estimate the blur kernel. More importantly, the Restorer is trained with the kernel estimated by the Estimator, instead of the ground-truth kernel, so the Restorer can be more tolerant to the estimation error of the Estimator. Besides, the results of both modules are substantially improved during the iterations, so DAN is likely to reach better final results than previous two-step solutions. Experiments also show that DAN outperforms state-of-the-art blind SR methods by a large margin. In the future, if the two parts of DAN can be implemented by more powerful modules, we believe that its performance could be further improved.

Broader Impact Super resolution is a traditional task in computer vision. It has been studied for several decades and has wide applications in video enhancement, medical imaging, and security and surveillance imaging. These techniques have benefited society in various areas for years and have shown no notable negative impact so far. The proposed method (DAN) could further improve the merits of these applications, especially in cases where the degradations are unknown. DAN has relatively better performance and much higher speed, making it possible to use DAN in real-time video enhancement or surveillance imaging. We do not foresee any negative societal consequences of this work.

Acknowledgements This work is jointly supported by National Key Research and Development Program of China (2016YFB1001000), Key Research Program of Frontier Sciences, CAS (ZDBS-LY-JSC032), Shandong Provincial Key Research and Development Program (2019JZZY010119), and CAS-AIR.
1. What is the focus and contribution of the paper on blind image super-resolution?
2. What are the strengths of the proposed approach, particularly in terms of efficiency and performance?
3. What are the weaknesses of the paper, especially regarding the accuracy of the blur kernel estimator and the upper bound performance?
4. How does the reviewer assess the limitations of the proposed solution and its potential failures?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
The authors propose to jointly optimize blur kernel estimation and SR using a CNN design with Estimator and Restorer modules, unfolding the optimization process into an end-to-end trainable network. The proposed solution is shown to improve over the compared methods in both accuracy and speed.
Strengths
+ novel blind SR design
+ efficiency with respect to prior work
+ improved PSNR and SSIM performance under different settings
Weaknesses
It would be interesting to know:
- how accurate the blur kernel estimator is
- given perfect (ground-truth) kernel estimation, what the upper bound in performance would be for the restorer / overall solution
- how the performance varies with the blur kernels
- what the limitations of the proposed solution are, and when it fails
NIPS
1. What is the main contribution of the paper in the field of image super-resolution?
2. What are the strengths of the proposed method, particularly in terms of its end-to-end training mechanism?
3. What are the weaknesses of the paper regarding the estimation of blur kernels and the use of estimated blur kernels for SR?
4. Do you have any concerns about the fairness of the comparisons made in the paper?
5. How do you assess the significance of the results achieved by the proposed method on real-world images?
Summary and Contributions Strengths Weaknesses
Summary and Contributions
This paper proposes a blind image SR method based on deep CNNs. The proposed network is motivated by the conventional optimization-based formulation (1). It contains an Estimator and a Restorer, which are jointly trained in an end-to-end manner. Experimental results show the effectiveness of the proposed method.
Strengths
The paper is well motivated. It proposes a blind image SR method based on deep CNNs, where the blur kernels and SR results are estimated alternately. It proposes a conditional residual block (CRB) to improve the final results. The proposed method generates better results on synthetic datasets and comparable results on real-world images. The code is also provided.
Weaknesses
The paper explicitly estimates the blur kernel for SR, but it is not clear whether the estimated blur kernels are accurate; no evaluations are provided. The use of the estimated blur kernels for SR is not explained clearly: it is mainly based on the CRB, yet it is not clear whether this operation can remove blur. The comparisons may be unfair; the authors should retrain the deep learning-based methods on the same training datasets for evaluation. Please provide model parameters to show whether the performance gains are mainly due to the use of large models. The results on the real-world images are not significant.
NIPS
1. What is the main contribution of the paper in the field of blind super-resolution? 2. What are the strengths of the proposed deep neural network model, particularly in handling unknown blur kernels? 3. What are the weaknesses of the paper, especially regarding the missing details and lack of comparisons? 4. Do you have any concerns about the quality and speed of the proposed method compared to previous approaches? 5. Are there any questions or suggestions you have for improving the paper's content or research methodology?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper presents a deep neural network model for blind super-resolution, which is inspired by the previous alternating optimization algorithm. The proposed network consists of two modules: Restorer and Estimator. The Restorer restores an SR image based on a predicted kernel, and the Estimator estimates the blur kernel using the restored SR image. The two modules are alternately applied in an end-to-end network. The proposed method outperforms previous methods in both quality and speed. Strengths Handling unknown blur kernels is important to make SR really practical. The paper presents a novel blind algorithm that achieves state-of-the-art performance. The proposed network is based on the traditional alternating estimation. This makes the proposed approach intuitive and easy to understand. The proposed network is also easy to train as it is end-to-end. The proposed network model outperforms previous blind SR methods in both quality and speed. Weaknesses While the evaluation is thorough, there are still some details missing, which makes the paper less convincing. - In Table 2, why is IKC not included? - It would also be interesting to see some comparisons and analysis on the quality of the estimated blur kernels, e.g., visualization of estimated blur kernels, and qualitative and quantitative comparison against the ground-truth blur kernels at different iterations. - There is only one real-world example in Fig. 6. There are also some correctness issues that must be addressed. See the next item.
NIPS
Title Best of Both Worlds: Practical and Theoretically Optimal Submodular Maximization in Parallel Abstract For the problem of maximizing a monotone, submodular function with respect to a cardinality constraint k on a ground set of size n, we provide an algorithm that achieves the state-of-the-art in both its empirical performance and its theoretical properties, in terms of adaptive complexity, query complexity, and approximation ratio; that is, it obtains, with high probability, query complexity of O (n) in expectation, adaptivity of O (log(n)), and approximation ratio of nearly 1 − 1/e. The main algorithm is assembled from two components which may be of independent interest. The first component of our algorithm, LINEARSEQ, is useful as a preprocessing algorithm to improve the query complexity of many algorithms. Moreover, a variant of LINEARSEQ is shown to have adaptive complexity of O(log(n/k)), which is smaller than that of any previous algorithm in the literature. The second component is a parallelizable thresholding procedure THRESHOLDSEQ for adding elements with gain above a constant threshold. Finally, we demonstrate that our main algorithm empirically outperforms, in terms of runtime, adaptive rounds, total queries, and objective values, the previous state-of-the-art algorithm FAST in a comprehensive evaluation with six submodular objective functions. 1 Introduction The cardinality-constrained optimization of a monotone, submodular function f : 2^N → R+, defined on subsets of a ground set N of size n, is a general problem formulation that is ubiquitous in wide-ranging applications, e.g. video or image summarization [30], network monitoring [26], information gathering [23], and MAP Inference for Determinantal Point Processes [20], among many others. The function f : 2^N → R+ is submodular iff for all S ⊆ T ⊆ N and x ∉ T, ∆ (x |T ) ≤ ∆ (x |S), where ∆ (x |S) denotes the marginal gain of x to S, i.e., f(S ∪ {x}) − f(S); and the function f is monotone if f(S) ≤ f(T ) for all S ⊆ T . In this paper, we study the following submodular maximization problem (SM): maximize f(S), subject to |S| ≤ k, (SM) where f is a monotone, submodular function; SM is an NP-hard problem. There has been extensive effort into the design of approximation algorithms for SM over the course of more than 45 years, e.g. [32, 12, 10, 21, 24]. For SM, the optimal ratio has been shown to be 1 − 1/e ≈ 0.63 [32]. As instance sizes have grown very large, there has been much effort into the design of efficient, parallelizable algorithms for SM. Since queries to the objective function can be very expensive, the overall efficiency of an algorithm for SM is typically measured by the query complexity, or number of calls made to the objective function f [2, 9]. The degree of parallelizability can be measured by the adaptive complexity of an algorithm, which is the minimum number of rounds into which the queries to f may be organized, such that within each round, the queries are independent and hence may be arbitrarily parallelized. Observe that the lower the adaptive complexity, the more parallelizable an algorithm is. To obtain a constant approximation factor, a lower bound of Ω(n) has been shown on the query complexity [24] and a lower bound of Ω(log(n)/ log log(n)) has been shown on the adaptive complexity [3]. 
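To make the definitions of submodularity and marginal gain above concrete, the following is a small illustrative Python sketch of a monotone, submodular coverage objective; it is not part of the paper or its benchmarks, and the ground set, neighborhood data, and names are hypothetical.

# A coverage objective f(S) = |union of the neighborhoods of the elements of S|.
# Functions of this form are monotone and submodular.
neighborhoods = {
    'a': {1, 2, 3},
    'b': {3, 4},
    'c': {4, 5, 6},
    'd': {1, 6},
}

def f(S):
    covered = set()
    for x in S:
        covered |= neighborhoods[x]
    return len(covered)

def marginal_gain(x, S):
    # Delta(x | S) = f(S union {x}) - f(S)
    return f(set(S) | {x}) - f(S)

# Submodularity in action: the gain of 'c' can only shrink as the base set grows.
print(marginal_gain('c', set()))    # 3
print(marginal_gain('c', {'b'}))    # 2, since element 4 is already covered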
Several algorithms have been developed recently that are nearly optimal in terms of query and adaptive complexities [14, 11, 17, 5]; that is, these algorithms achieve O (log n) adaptivity and O (n polylog(n)) query complexity (see Table 1). However, these algorithms use sampling techniques that result in very large constant factors that make these algorithms impractical. This fact is discussed in detail in Breuer et al. [9]; as an illustration, to obtain ratio 1 − 1/e − 0.1 with 95% confidence, all of these algorithms require more than 10^6 queries of sets of size k/ log(n) in every adaptive round [9]; moreover, even if these algorithms are run as heuristics using a single sample, other inefficiencies preclude these algorithms from running even on moderately sized instances [9]. For this reason, the FAST algorithm of Breuer et al. [9] has been recently proposed, which uses an entirely different sampling technique called adaptive sequencing. Adaptive sequencing was originally introduced in Balkanski et al. [6], but the original version has quadratic query complexity in the size of the ground set and hence is still impractical on large instances. To speed it up, the FAST algorithm sacrifices theoretical guarantees to yield an algorithm that parallelizes well and is faster than all previous algorithms for SM in an extensive experimental evaluation. The theoretical sacrifices of FAST include: the adaptivity of FAST is Ω(log(n) log^2(log n)), which is higher than the state-of-the-art, and more significantly, the algorithm obtains no approximation ratio for k < 850 (the approximation ratio 1 − 1/e − 4ε of FAST holds with probability 1 − δ only for k ≥ θ(ε, δ, k) = 2 log(2δ^{-1} log((1/ε) log(k))) / (ε^2(1 − 5ε))); since many applications require small choices for k, this limits the practical utility of FAST. A natural question is thus: is it possible to design an algorithm that is both practical and theoretically optimal in terms of adaptivity, ratio, and total queries? 1.1 Contributions In this paper, we provide three main contributions. The first contribution is the algorithm LINEARSEQ (LS, Section 2) that achieves with probability 1 − 1/n a constant factor (4 + O(ε))^{-1} in expected linear query complexity and with O(log n) adaptive rounds (Theorem 1). Although the ratio of ≈ 0.25 is smaller than the optimal 1 − 1/e ≈ 0.63, this algorithm can be used to improve the query complexity of many extant algorithms, as we describe in the related work section below. Interestingly, LINEARSEQ can be modified to have adaptivity O (log(n/k)) at a small cost to its ratio, as discussed in Appendix F. This version of LINEARSEQ is a constant-factor algorithm for SM with smaller adaptivity than any previous algorithm in the literature, especially for values of k that are large relative to n. Our second contribution is an improved parallelizable thresholding procedure THRESHOLDSEQ (TS, Section 3) for a commonly recurring task in submodular optimization: namely, add all elements that have a gain of at least a specified threshold τ to the solution. This subproblem arises not only in SM, but also e.g. in submodular cover [17] and non-monotone submodular maximization [4, 18, 15, 25]. Our TS accomplishes this task with probability 1 − 1/n in O(log n) adaptive rounds and expected O(n) query complexity (Theorem 2), while previous procedures for this task only add elements with an expected gain of τ and use expensive sampling techniques [17]; have Ω(log^2 n) adaptivity [22]; or have Ω(kn) query complexity [6]. 
Finally, we present in Section 3 the parallelized greedy algorithm PARALLELGREEDYBOOST (PGB), which is used in conjunction with LINEARSEQ and THRESHOLDSEQ to yield the final algorithm LS+PGB, which answers the above question affirmatively: LS+PGB obtains nearly the optimal 1 − 1/e ratio with probability 1 − 2/n in O(log n) adaptive rounds and O(n) queries in expectation; moreover, LS+PGB is faster than FAST in an extensive empirical evaluation (see Table 2). In addition, LS+PGB improves theoretically on the previous algorithms in query complexity while obtaining nearly optimal adaptivity (see Table 1). 1.2 Additional Related Work Adaptive Sequencing. The main inefficiency of the adaptive sequencing method of Balkanski et al. [6] (which causes the quadratic query complexity) is an explicit check that a constant fraction of elements will be filtered from the ground set. In this work, we adopt a sampling technique similar to adaptive sequencing, except that we design the algorithm to filter a constant fraction of elements with only constant probability. This method allows us to reduce the quadratic query complexity of adaptive sequencing to linear query complexity while only increasing the adaptive complexity by a small constant factor. In contrast, FAST of Breuer et al. [9] speeds up adaptive sequencing by increasing the adaptive complexity of the algorithm through adaptive binary search procedures, which, in addition to increasing the adaptivity by logarithmic factors, place restrictions on the k values for which the ratio can hold. This improved adaptive sequencing technique is the core of our THRESHOLDSEQ procedure, which has the additional benefit of being relatively simple to analyze. Algorithms with Linear Query Complexity. Our LINEARSEQ algorithm also uses the improved adaptive sequencing technique, but in addition this algorithm integrates ideas from the Ω(n)-adaptive linear-time streaming algorithm of Kuhnle [24] to achieve a constant-factor algorithm with low adaptivity in expected linear time. Integration of the improved adaptive sequencing with the ideas of Kuhnle [24] is non-trivial, and ultimately this integration enables the theoretical improvement in query complexity over previous algorithms with sublinear adaptivity that obtain a constant ratio with high probability (see Table 1). In Fahrbach et al. [17], a linear-time procedure SUBSAMPLEPREPROCESSING is described; this procedure is to the best of our knowledge the only algorithm in the literature that obtains a constant ratio with sublinear adaptive rounds and linear query complexity and hence is comparable to LINEARSEQ. However, SUBSAMPLEPREPROCESSING uses entirely different ideas from our LINEARSEQ and has much weaker theoretical guarantees: for input 0 < δ < 1, it obtains ratio δ^2/(2×10^6) with probability 1 − δ in O(log(n)/δ) adaptive rounds and O(n) queries in expectation – the small ratio renders SUBSAMPLEPREPROCESSING impractical; also, its ratio holds only with constant probability. By contrast, with ε = 0.1, our LINEARSEQ obtains ratio ≈ 0.196 with probability 1 − 1/n in O(log(n)) adaptive rounds and O(n) queries in expectation. Using LS for Preprocessing: Guesses of OPT. Many algorithms for SM, including FAST and all of the algorithms listed in Table 1 except for SM and our algorithm, use a strategy of guessing logarithmically many values of OPT. Our LINEARSEQ algorithm reduces the interval containing OPT from size k to a small constant size in expected linear time. 
Thus, LINEARSEQ could be used for preprocessing prior to running FAST or one of the other algorithms in Table 1, which would improve their query complexity without compromising their adaptive complexity or ratio; this illustrates the general utility of LINEARSEQ. For example, with this change, the theoretical adaptivity of FAST improves, although it remains worse than LS+PGB: the adaptive complexity of FAST becomes O ( 1 ε2 log(n) log ( 1 ε log(k) )) in contrast to the O ( 1 ε2 log (n/ε) ) of LS+PGB. Although SUBSAMPLEPREPROCESSING may be used for the same purpose, its ratio only holds with constant probability which would then limit the probability of success of any following algorithm. Relationship of THRESHOLDSEQ to Existing Methods. The first procedure in the literature to perform the same task is the THRESHOLDSAMPLING procedure of Fahrbach et al. [17]; however, THRESHOLDSAMPLING only ensures that the expected marginal gain of each element added is at least τ and has large constants in its runtime that make it impractical [9]. In contrast, THRESHOLDSEQ ensures that added elements contribute a gain of at least τ with high probability and is highly efficient empirically. A second procedure in the literature to perform the same task is the ADAPTIVESEQUENCING method of Balkanski et al. [6], which similarly to THRESHOLDSEQ uses random permutations of the ground set; however, ADAPTIVE-SEQUENCING focuses on explicitly ensuring a constant fraction of elements will be filtered in the next round, which is expensive to check: the query complexity of ADAPTIVE-SEQUENCING is O(kn). In contrast, our THRESHOLDSEQ algorithm ensures this property with a constant probability, which is sufficient to ensure the adaptivity with the high probability of 1− 1/n in O(n) expected queries. Finally, a third related procedure in the literature is THRESHOLDSAMPLING of Kazemi et al. [22], which also uses random permutations to sample elements. However, this algorithm has the higher adaptivity of O (log(n) log(k)), in contrast to the O (log(n)) of THRESHOLDSEQ. MapReduce Framework. Another line of work studying parallelizable algorithms for SM has focused on the MapReduce framework [13] in a distributed setting, e.g. [7, 8, 16, 28]. These algorithms divide the dataset over a large number of machines and are intended for a setting in which the data does not fit on a single machine. None of these algorithms has sublinear adaptivity and hence all have potentially large numbers of sequential function queries on each machine. In this work, our empirical evaluation is on a single machine with a large number of CPU cores; we do not evaluate our algorithms in a distributed setting. Organization. The constant-factor algorithm LINEARSEQ is described and analyzed in Section 2; the details of the analysis are presented in Appendix C. The variant of LINEARSEQ with lower adaptivity is described in Appendix F. The algorithms THRESHOLDSEQ and PARALLELGREEDYBOOST are discussed at a high level in Section 3, with detailed descriptions of these algorithms and theoretical analysis presented in Appendices D and E. Our empirical evaluation is summarized in Section 4 with more results and discussion in Appendix H. 2 A Parallelizable Algorithm with Linear Query Complexity: LINEARSEQ In this section, we describe the algorithm LINEARSEQ for SM (Alg. 1) that obtains ratio (4+O (ε))−1 in O ( 1 ε3 log(n) ) adaptive rounds and expected O ( n ε3 ) queries. 
If ε ≤ 0.21, the ratio of LINEARSEQ is lower-bounded by (4 + 16ε)−1 ≥ 0.135, which shows that a relatively large constant ratio is obtained even at large values of ε. An initial run of this algorithm is required for our main algorithm LS+PGB. Algorithm 1 The algorithm that obtains ratio (4 +O (ε))−1 in O ( log(n)/ε3 ) adaptive rounds and expected O ( n/ε3 ) queries. 1: procedure LINEARSEQ(f,N , k, ε) 2: Input: evaluation oracle f : 2N → R+, constraint k, error ε 3: a = arg maxu∈N f({u}) 4: Initialize A← {a} , V ← N , ` = d4(1 + 1/(βε)) log(n)e, β = ε/(16 log(8/(1− e−ε/2))) 5: for j ← 1 to ` do 6: Update V ← {x ∈ V : ∆ (x |A) ≥ f(A)/k} and filter out the rest 7: if |V | = 0 then break 8: V = {v1, v2, . . . , v|V |} ←random-permutation(V ) 9: Λ← {b(1 + ε)uc : 1 ≤ b(1 + ε)uc ≤ k, u ∈ N} ∪{bk + uεkc : bk + uεkc ≤ |V |, u ∈ N} ∪ {|V |} 10: B[λi] = false, for λi ∈ Λ 11: for λi ∈ Λ in parallel do 12: Tλi−1 ← {v1, v2, . . . , vλi−1} ; Tλi ← {v1, v2, . . . , vλi} ; T ′λi ← Tλi\Tλi−1 13: if ∆ ( T ′λi |A ∪ Tλi−1 ) /|T ′λi | ≥ (1− ε)f(A ∪ Tλi−1)/k then B[λi]← true 14: λ∗ ← max{λi ∈ Λ : B[λi] = false and ((λi ≤ k and B[1] to B[λi−1] are all true) or (λi > k and ∃m ≥ 1 s.t. | ⋃i−1 u=m T ′ λu | ≥ k and B[λm] to B[λi−1] are all true))} 15: A← A ∪ Tλ∗ 16: if |V | > 0 then return failure 17: return A′ ← last k elements added to A Description of LS. The work of LS is done within iterations of a sequential outer for loop (Line 5); this loop iterates at most O (log(n)) times, and each iteration requires two adaptive rounds; thus, the adaptive complexity of the algorithm is O (log(n)). Each iteration adds more elements to the set A, which is initially empty. Within each iteration, there are four high-level steps: 1) filter elements from V that have gain less than f(A)/k (Line 6); 2) randomly permute V (Line 8); 3) compute in parallel the marginal gain of adding blocks of the sequence of remaining elements in V to A (for loop on Line 11); 4) select a prefix of the sequence V = ( v1, v2, . . . , v|V | ) to add to A (Line 14). The selection of the prefix to add is carefully chosen to approximately satisfy, on average, Condition 1 for elements added; and also to ensure that, with constant probability, a constant fraction of elements of V are filtered on the next iteration. The following theorem states the theoretical results for LINEARSEQ. The remainder of this section proves this theorem, with intuition and discussion of the proof. The omitted proofs for all lemmata are provided in Appendix C. Theorem 1. Let (f, k) be an instance of SM. For any constant 0 < ε < 1/2, the algorithm LINEARSEQ has adaptive complexity O ( log(n)/ε3 ) and outputs A′ ⊆ N with |A′| ≤ k such that the following properties hold: 1) The algorithm succeeds with probability at least 1 − 1/n. 2) There are O ( (1/(εk) + 1)n/ε3 ) oracle queries in expectation. 3) If the algorithm succeeds,[ 4 + 4(2−ε)(1−ε)(1−2ε)ε ] f(A′) ≥ f(O), where O is an optimal solution to the instance (f, k). Overview. The goal of this section is to produce a constant factor, parallelizable algorithm with linear query complexity. As a starting point, consider an algorithm1 that takes one pass through the ground set, adding each element e to candidate set A iff ∆ (e |A) ≥ f(A)/k. (1) Condition 1 ensures two properties: 1) the last k elements in A contain a constant fraction of the value f(A); and 2) f(A) is within a constant fraction of OPT. 
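As an illustration (not one of the paper's algorithms), this one-pass rule can be sketched sequentially in Python as follows; the objective f is assumed to be any normalized monotone submodular set function, such as the coverage example sketched earlier, and the function name is hypothetical. The sketch caches f(A) so that exactly one new oracle query is issued per element, but each query depends on all previous ones, which is why the sequential version needs roughly n adaptive rounds.

def one_pass_condition_1(f, ground_set, k):
    # Single pass: add e to A whenever Delta(e | A) >= f(A) / k (Condition 1),
    # then return the last k elements added.
    A = []
    f_A = f(set())
    for e in ground_set:
        gain = f(set(A) | {e}) - f_A
        if gain >= f_A / k:
            A.append(e)
            f_A += gain
    return A[-k:]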
By these two properties, the last k elements of A are a constant factor approximation to SM with exactly one query of the objective function per element of the ground set. For completeness, we give a pseudocode (Alg. 3) and proof in Appendix B. However, each query depends on all of the previous ones and thus there are n adaptive rounds. Therefore, the challenge is to approximately simulate Alg. 3 in a lowly adaptive (highly parallelizable) manner, which is what LINEARSEQ accomplishes. 1This algorithm is a simplified version of the streaming algorithm of Kuhnle [24]. 2.1 Approximately Satisfying Condition 1 Discarding Elements. In one adaptive round during each iteration j of the outer for loop, all elements with gain to A of less than f (A) /k are discarded from V (Line 6). Since the size of A increases as the algorithm runs, by submodularity, the gain of these elements can only decrease and hence these elements cannot satisfy Condition 1 and can be safely discarded from consideration. The process of filtering thereby ensures the following lemma at termination. Lemma 1. At successful termination of LINEARSEQ, f(O) ≤ 2f(A), where O ⊆ N is an optimal solution of size k. Addition of Elements. Next, we describe the details of how elements are added to the set A. The random permutation of remaining elements on Line 8 constructs a sequence ( v1, v2, . . . , v|V | ) such that each element is uniformly randomly sampled from the remaining elements. By testing the marginal gains along the sequence in parallel, it is possible to determine a good prefix of the sequence (vi) to add to A to ensure the following: 1) Condition 1 is approximately satisfied; and 2) We will discard a constant fraction of V in the next iteration with constant probability. Condition 1 is important for the approximation ratio and discarding a constant fraction of V is important for the adaptivity and query complexity. Below, we discuss how to choose the prefix such that both are achieved. To speed up the algorithm, we do not test the marginal gain at each point in the sequence (vi), but rather test blocks of elements at once as determined by the index set Λ defined in the pseudocode. Prefix Selection. Say a block is bad if this block does not satisfy the condition checked on Line 13 (which is an approximate, average form of Condition 1); otherwise, the block is good. At the end of an iteration, we select the largest block index λ∗, where this block is bad and the previous consecutive blocks which together have at least k elements are all good; or this block is bad and all the previous blocks are good blocks. Then, we add the prefix Tλ∗ = (v1, v2, . . . , vλ∗) into A. Now, the relevance of Condition 1 for the approximation ratio is that it implies f(A) ≥ 2f(A \A′), where A′ are the last k elements added to A. Lemma 2 shows that the conditions required on the marginal gains of blocks added imply an approximate form of this fact is satisfied by LINEARSEQ. Indeed, the proof of Lemma 2 informs the choice Λ of blocks evaluated and the computation of λ∗. Lemma 2. Suppose LINEARSEQ terminates successfully. Then f(A) ≥ 2(1−ε+ε 2) 1+ε f(A\A ′). Proof. If |A| ≤ k, the lemma is immediate, so assume |A| > k. For iteration j, let Tj,λ∗j denote the set added to A during iteration j; and let Tj,λ∗j = ∅ if the algorithm terminates before iteration j. Let Aj denote the value of A after iteration j. Define c = max{c ∈ N : A′ ⊆ (∪`j=cTj,λ∗j )}. Then, |Tc,λ∗c | > 0; and for any j > c, |Tj,λ∗j | < k. 
It holds that (∪ ` j=c+1Tj,λ∗j ) ⊂ A ′ ⊆ (∪`j=cTj,λ∗j ). Figure 1 shows how A is composed of these sets Tj,λ∗j and how each set is composed of blocks. The following claim is proven in Appendix C.3. Claim 1. It holds that ∆ ( Tc,λ∗c |A\A ′) ≥ (1 − ε) max{0, |Tc,λ∗c ∩ A′| − 2εk} · f(A\A′)/k. For j > c, it holds that ∆ ( Tj,λ∗j |Aj−1 ) ≥ 1−ε1+ε |Tj,λ∗j | · f(A\A ′)/k. From Claim 1, f(A)−f(A\A′) = ∆ ( Tc,λ∗c |A\A ′)+ ∑̀ j=c+1 ∆ ( Tj,λ∗j |Aj−1 ) ≥ (1− ε)(1− 2ε) 1 + ε ·f(A\A′), (2) where Inequality 2 is proven in Appendix C.4. In the remainder of this subsection, we will show that a (βε)-fraction of V is discarded at each iteration j of the outer for loop with probability at least 1/2, where β is a constant in terms of ε as defined on Line 4 in the pseudocode. The remainder of the proofs in this section are implicitly conditioned on the behavior of the algorithm prior to iteration j. The next lemma describes the behavior of the number of elements that will be filtered at iteration j + 1. Observe that the set Si defined in the next lemma is the set of elements that would be filtered at the next iteration if prefix Ti is added to A. Lemma 3. Let Si = {x ∈ V : ∆ (x |A ∪ Ti) < f(A ∪ Ti)/k} . It holds that |S0| = 0, |S|V || = |V |, and |Si| ≤ |Si+1|. By Lemma 3, we know the number of elements in Si increases from 0 to |V | with i. Therefore, there exists a t such that t = min{i ∈ N : |Si| ≥ βε|V |}. If λ∗ ≥ t, |Sλ∗ | ≥ βε|V |, and we will successfully filter out more than (βε)-fraction of V at the next iteration. In this case, we say that the iteration j succeeds. Otherwise, if λ∗ < t, the iteration may fail. The remainder of the proof bounds the probability that λ∗ < t, which is an upper bound on the probability that iteration j fails. Let λt = max{λ ∈ Λ : λ < t}, and let λ′t = max({λ′ ∈ Λ : ∑ λ∈Λ,λ′≤λ≤λt |Tλ| ≥ k} ∪ {1}). If λ∗ < t, there must be at least one index λ between λ′t and λt such that the block T ′ λ is bad. The next lemma bounds the probability that any block T ′λ, with λ < λt, is bad. Lemma 4. Let t = min{i ∈ N : |Si| ≥ βε|V |}; λt = max{λ ∈ Λ : λ < t}; (Yi) be a sequence of independent and identically distributed Bernoulli trials, where the success probability is βε. Then for any λ < λt, Pr (B[λ] = false) ≤ Pr (∑|T ′λ| i=1 Yi > ε|T ′λ| ) . Finally, we bound the probability that an iteration j of the outer for loop fails. Let B1 = {λ ∈ Λ : λ ≤ k and λ < λt}, B2 = {λ ∈ Λ : |Λ ∩ [λ, λt]| ≤ d1/εe}. Then Pr (iteration j fails) ≤ Pr (∃λ ∈ B1 ∪B2 with B[λ] = false) ≤ 1/2, (3) where the proof of Inequality 3 is in Appendix C.7. 2.2 Proof of Theorem 1 From Section 2.1, the probability at any iteration of the outer for loop of successful filtering of an (βε)-fraction of V is at least 1/2. We can model the success of the iterations as a sequence of dependent Bernoulli random variables, with success probability that depends on the results of previous trials but is always at least 1/2. Success Probability of LINEARSEQ. If there are at leastm = dlog1−βε(1/n)e successful iterations, the algorithm LINEARSEQ will succeed. The number of successful iterations X` up to and including the `-th iteration is a sum of dependent Bernoulli random variables. With some work (Lemma 6 in Appendix A), the Chernoff bounds can be applied to ensure the algorithm succeeds with probability at least 1− 1/n, as shown in Appendix C.2. Adaptivity and Query Complexity. Oracle queries are made on Lines 6 and 13 of LINEARSEQ. The filtering on Line 6 is in one adaptive round, and the inner for loop is also in one adaptive round. 
Thus, the adaptivity is proportional to the number of iterations of the outer for loop, O (`) = O ( log(n)/ε3 ) . For the query complexity, let Yi be the number of iterations between the (i− 1)-th and i-th successful iterations of the outer for loop. By Lemma 6 in Appendix A, E [Yi] ≤ 2. From here, we show in Appendix C.8 that there are at most O ( n/ε3 ) queries in expectation. Approximation Ratio. Suppose LINEARSEQ terminates successfully. We have the approximation ratio as follows: f(A′) (a) ≥ f(A)− f(A\A′) (b) ≥ f(A)− 1 + ε 2(1− ε+ ε2) f(A) (c) ≥ 1 4 + 4(2−ε)(1−ε)(1−2ε) · ε f(O), where Inequality (a) is from submodularity of f , Inequality (b) is from Lemma 2, and Inequality (c) is from Lemma 1. 3 Improving to Nearly the Optimal Ratio In this section, we describe how to obtain the nearly optimal ratio in nearly optimal query and adaptive complexities (Section 3.2). First, in Section 3.1, we describe THRESHOLDSEQ, a parallelizable procedure to add all elements with gain above a constant threshold to the solution. In Section 3.2, we describe PARALLELGREEDYBOOST and finally the main algorithm LS+PGB. Because of space constraints, the algorithms are described in the main text at a high level only, with detailed descriptions and proofs deferred to Appendices D and E. 3.1 The THRESHOLDSEQ Procedure In this section, we discuss the algorithm THRESHOLDSEQ, which adds all elements with gain above an input threshold τ up to accuracy ε in O(log n) adaptive rounds and O(n) queries in expectation. Pseudocode is given in Alg. 4 in Appendix D. Overview. The goal of this algorithm is, given an input threshold τ and size constraint k, to produce a set of size at most k such that the average gain of elements added is at least τ . As discussed in Section 1, this task is an important subroutine of many algorithms for submodular optimization (including our final algorithm), although by itself it does not produce any approximation ratio for SM. The overall strategy of our parallelizable algorithm THRESHOLDSEQ is analagous to that of LINEARSEQ, although THRESHOLDSEQ is considerably simpler to analyze. The following theorem summarizes the theoretical guarantees of THRESHOLDSEQ and the proofs are in Appendix D. Theorem 2. Suppose THRESHOLDSEQ is run with input (f, k, ε, δ, τ). Then, the algorithm has adaptive complexity O(log(n/δ)/ε) and outputs A ⊆ N with |A| ≤ k such that the following properties hold: 1) The algorithm succeeds with probability at least 1− δ/n. 2) There are O(n/ε) oracle queries in expectation. 3) It holds that f(A)/|A| ≥ (1 − ε)τ/(1 + ε). 4) If |A| < k, then ∆ (x |A) < τ for all x ∈ N . 3.2 The PARALLELGREEDYBOOST Procedure and the Main Algorithm Algorithm 2 The PARALLELGREEDYBOOST procedure. 1: Input: evaluation oracle f : 2N → R+, constraint k, constant α, value Γ such that Γ ≤ f(O) ≤ Γ/α, accuracy parameter ε 2: Initialize τ ← Γ/(αk), δ← 1/(log1−ε(α/3) + 1), A← ∅ 3: while τ ≥ Γ/(3k) do 4: τ ← τ(1− ε) 5: S ← THRESHOLDSEQ(fA,N , k − |A|, δ, ε/3, τ) 6: A← A ∪ S 7: if |A| = k then 8: return A 9: return A In this section, we describe the greedy algorithm PARALLELGREEDYBOOST (PGB, Alg. 2) that uses multiple calls to THRESHOLDSEQ with descending thresholds. Next, our state-of-the-art algorithm LS+PGB is specified. Description of PARALLELGREEDYBOOST. This procedure takes as input the results from running an α-approximation algorithm on the instance (f, k) of SM; thus, PARALLELGREEDYBOOST is not meant to be used as a standalone algorithm. 
Namely, PARALLELGREEDYBOOST takes as input Γ, the solution value of an α-approximation algorithm for SM; this solution value Γ is then boosted to ensure the ratio 1− 1/e− ε on the instance. The values of Γ and α are used to produce an initial threshold value τ for THRESHOLDSEQ. Then, the threshold value is iteratively decreased by a factor of (1− ε) and the call to THRESHOLDSEQ is iteratively repeated to build up a solution, until a minimum value for the threshold of Γ/(3k) is reached. Therefore, THRESHOLDSEQ is called at most O (log(1/α)/ε) times. We remark that α is not required to be a constant approximation ratio. Theorem 3. Let (f, k) be an instance of SM. Suppose an α- approximation algorithm for SM is used to obtain Γ, where the approximation ratio α holds with probability 1− pα. For any constant ε > 0, the algorithm PARALLELGREEDYBOOST has adaptive complexity O ( logα−1 ε2 log ( n log(α−1) ε )) and outputs A ∈ N with |A| ≤ k such that the following properties hold: 1) The algorithm succeeds with probability at least 1− 1/n− pα. 2) If the algorithm succeeds, there are O ( n log ( α−1 ) /ε2 ) oracle queries in expectation. 3) If the algorithm succeeds, f(A) ≥ (1− 1/e− ε)f(O), where O is an optimal solution to the instance (f, k). Proof. Success Probability. For the while loop in Line 3-8, there are no more than dlog1−ε(α/3)e iterations. If THRESHOLDSEQ completes successfully at every iteration, Algorithm 2 also succeeds. The probability that this occurs is lower bounded in Appendix E.1.1. For the remainder of the proof of Theorem 3, we assume that every call to THRESHOLDSEQ succeeds. Adaptive and Query Complexity. There are at most dlog1−ε(α/3)e iterations of the while loop. Since log(x) ≤ x − 1, dlog1−ε(α/3)e = d log(α/3) log(1−ε)e, and ε < 1 − 1/e, it holds that dlog1−ε(α/3)e ≤ d log(3/α) ε e. And for each iteration, queries to the oracle happen only on Line 5, the call to THRESHOLDSEQ. Since the adaptive and query complexity of THRESHOLDSEQ is O (log(n/δ)/ε) and O (n/ε), the adaptive and query complexities for Algorithm 2 are O ( logα−1 ε2 log ( n log(α−1) ε )) , O ( logα−1 ε2 n ) , respectively. Approximation Ratio. Let Aj be the set A we get after Line 6, and let Sj be the set returned by THRESHOLDSEQ in iteration j of the while loop. Let ` be the number of iterations of the while loop. First, in the case that |A| < k at termination, THRESHOLDSEQ returns 0 ≤ |S`| < k − |A`−1| at the last iteration. From Theorem 2, for any o ∈ O, ∆ (o |A) < τ < Γ/(3k). By submodularity and monotonicity, f(O)− f(A) ≤ f(O ∪A)− f(A) ≤ ∑ o∈O\A ∆ (o |A) ≤ ∑ o∈O\A Γ/(3k) ≤ f(O)/3, and the ratio holds. Second, consider the case that |A| = k. Suppose in iteration j + 1, THRESHOLDSEQ returns a nonempty set Sj+1. Then, in the previous iteration j, THRESHOLDSEQ returns a set Sj that 0 ≤ |Sj | < k − |Aj−1|. From Theorem 2, f(O)− f(Aj+1) ≤ ( 1− (1− ε/3)(1− ε) (1 + ε/3)k |Aj+1\Aj | ) (f(O)− f(Aj)). (4) The above inequality also holds when Aj+1 = Aj . Therefore, it holds that f(O)− f(A) ≤ e− (1−ε/3)(1−ε) 1+ε/3 f(O) ≤ (1/e+ ε)f(O). (5) The detailed proof of Inequality 4 and 5 can be found in Appendix E.1.2. Main Algorithm: LS+PGB. To obtain the main algorithm of this paper (and its nearly optimal theoretical guarantees), we use PARALLELGREEDYBOOST with the solution value Γ and ratio α given by LINEARSEQ. Because this choice requires an initial run of LINEARSEQ, we denote this algorithm by LS+PGB. 
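As a rough Python illustration of the control flow of PARALLELGREEDYBOOST (Alg. 2), the following sequential sketch replaces THRESHOLDSEQ with a simple greedy threshold pass and omits parallelism, the failure probability δ, and the exact accuracy bookkeeping; the helper names are hypothetical and this is not the authors' implementation.

def threshold_add(f, ground_set, A, k, tau):
    # Sequential stand-in for THRESHOLDSEQ: add elements whose marginal gain to A
    # is at least tau, stopping once the solution reaches size k.
    A = set(A)
    for e in ground_set:
        if len(A) >= k:
            break
        if e not in A and f(A | {e}) - f(A) >= tau:
            A.add(e)
    return A

def parallel_greedy_boost_sketch(f, ground_set, k, gamma, alpha, eps):
    # gamma is the value of an alpha-approximate solution, so gamma <= f(O) <= gamma/alpha.
    # Thresholds start near gamma/(alpha*k) and decay by (1 - eps) until gamma/(3k).
    A = set()
    tau = gamma / (alpha * k)
    while tau >= gamma / (3.0 * k) and len(A) < k:
        tau *= (1.0 - eps)
        A = threshold_add(f, ground_set, A, k, tau)
    return A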
Thus, LS+PGB integrates LINEARSEQ and PARALLELGREEDYBOOST to get nearly the optimal 1− 1/e ratio with query complexity of O (n) and adaptivity of O (log(n)). 4 Empirical Evaluation In this section, we demonstrate that the empirical performance of LS+PGB outperforms that of FAST for the metrics of total time, total queries, adaptive rounds, and objective value across six applications of SM: maximum cover on random graphs (MaxCover), twitter feed summarization (TweetSumm), image summarization (ImageSumm), influence maximization (Influence), revenue maximization (RevMax), and Traffic Speeding Sensor Placement (Traffic). See Appendix H.2 for the definition of the objectives. The sizes n of the ground sets range from n = 1885 to 100000. Implementation and Environment. We evaluate the same implementation of FAST used in Breuer et al. [9]. Our implementation of LS+PGB is parallelized using the Message Passing Interface (MPI) within the same Python codebase as FAST (see the Supplementary Material for source code). Practical optimizations to LINEARSEQ are made, which do not compromise the theoretical guarantees, which are discussed in Appendix G. The hardware of the system consists of 40 Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz cores (with 80 threads available), of which up to 75 threads are made available to the algorithms for the experiments. On each instance, the algorithms are repeated independently for five repetitions, and the mean and standard deviation of the objective value, total queries, adaptive rounds and parallel (wall clock) runtime to the submodular function is plotted. Parameters. The parameters ε, δ of FAST are set to enforce the nominal ratio of 1−1/e−0.1 ≈ 0.53 with probability 0.95; these are the same parameter settings for FAST as in the Breuer et al. [9] evaluation. The ε parameter of LS+PGB is set to enforce the same ratio with probability 1−2/n. With these parameters, FAST ensures its ratio only if k ≥ θ(ε, δ, k) = 2 log(2δ−1 log( 1ε log(k)))/ε 2(1− 5ε) ≥ 7103. Since k < 7103 on many of our instances, FAST is evaluated in these instances as a theoretically motivated heuristic. In contrast, the ratio of LS+PGB holds on all instances evaluated. We use exponentially increasing k values from n/1000 to n/10 for each application to explore the behavior of each algorithm across a broad range of instance sizes. Overview of Results. Figure 2 illustrates the comparison with FAST across the ImageSumm and RevenueMax application; results on other applications are shown in Appendix H. Runtime: LS+PGB is faster than FAST by more than 1% on 80% of instances evaluated; and is faster by an order of magnitude on 14% of instances. Objective value: LS+PGB achieves higher objective by more than 1% on 50% of instances, whereas FAST achieves higher objective by more than 1% on 8% of instances. Adaptive rounds: LS+PGB achieves more than 1% fewer adaptive rounds on 75% of instances, while FAST achieves more than 1% fewer adaptive rounds on 22% of instances. Total queries: LS+PGB uses more than 1% fewer queries on 84% of scenarios with FAST using more than 1% fewer queries on 9% of scenarios. In summary, LS+PGB frequently gives substantial improvement in objective value, queries, adaptive rounds, and parallel runtime. Comparison of the arithmetic means of the metrics over all instances is given in Table 2. Finally, FAST and LS+PGB show very similar linear speedup with the number of processors employed: as shown in Fig. 7. 
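The near-linear speedup reflects the fact that, within a single adaptive round, the marginal-gain queries are mutually independent and can therefore be distributed across workers. The following is a minimal Python sketch of that idea using a multiprocessing pool rather than the MPI setup used in the paper's implementation; the names are hypothetical, and it assumes the objective f and the candidate elements are picklable.

from multiprocessing import Pool

def _gain(args):
    f, A, e, base = args
    return f(set(A) | {e}) - base

def evaluate_round(f, A, candidates, workers=8):
    # One adaptive round: all queries Delta(e | A) share the same base set A,
    # so they can be issued in parallel and gathered before the next round.
    base = f(set(A))
    with Pool(workers) as pool:
        gains = pool.map(_gain, [(f, A, e, base) for e in candidates])
    return dict(zip(candidates, gains))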
5 Concluding Remarks In this work, we have introduced the algorithm LS+PGB, which is highly parallelizable and achieves state-of-the-art empirical performance over any previous algorithm for SM; also, LS+PGB is nearly optimal theoretically in terms of query complexity, adaptivity, and approximation ratio. An integral component of LS+PGB is our preprocessing algorithm LINEARSEQ, which reduces the interval containing OPT to a small constant size in expected linear time and low adaptivity, which may be independently useful. Another component of LS+PGB is the THRESHOLDSEQ procedure, which adds all elements with gain above a threshold in a parallelizable manner and improves existing algorithms in the literature for the same task. Acknowledgements The work of Yixin Chen, Tonmoy Dey, and Alan Kuhnle was partially supported by Florida State University. The authors have received no third-party funding in direct support of this work. The authors have no additional revenues from other sources related to this work.
1. What is the focus of the paper regarding submodular maximization? 2. What are the strengths of the proposed algorithm in terms of adaptive complexity, query complexity, and approximation ratio? 3. What are some suggestions for improving the presentation of the paper, particularly regarding experiment comparisons and algorithm explanations?
Summary Of The Paper Review
Summary Of The Paper Submodular maximization is an important problem with many relevant applications in machine learning. With the huge amount of existing data and the increase in available computational power (increase in the number of machines), it is important to design scalable and parallelizable algorithms with provable guarantees from both practical and theoretical points of view. Many recent works have studied submodular maximization in the context of adaptivity (“The degree of parallelizability can be measured by the adaptive complexity of an algorithm, which is the minimum number of rounds into which the queries to f may be organized”). In this paper, the authors propose an algorithm that achieves the state-of-the-art in terms of adaptive complexity, query complexity, and approximation ratio. Review This is a nicely written paper. It studies an important problem. It adds another important piece to the collection of recent developments in the scalable submodular maximization domain. The authors have performed an extensive set of experiments. The code and the data for the experiments are shared. As far as I checked, the proofs look correct. As a result, I recommend this paper for acceptance. I have a few suggestions that could improve the presentation of the paper: It would be useful to add FAST [8] to Table 1 as it is used as the baseline in the experiments. In the experiments, it would be useful to report the adaptivity of the algorithms. Adding more details about the experiments to the main text could improve the comparison. You can reduce the number of datasets and instead report the objective, adaptivity, and number of queries too. You can add more context to the explanation of the main algorithm.
NIPS
Title Best of Both Worlds: Practical and Theoretically Optimal Submodular Maximization in Parallel Abstract For the problem of maximizing a monotone, submodular function with respect to a cardinality constraint k on a ground set of size n, we provide an algorithm that achieves the state-of-the-art in both its empirical performance and its theoretical properties, in terms of adaptive complexity, query complexity, and approximation ratio; that is, it obtains, with high probability, query complexity of O (n) in expectation, adaptivity of O (log(n)), and approximation ratio of nearly 1− 1/e. The main algorithm is assembled from two components which may be of independent interest. The first component of our algorithm, LINEARSEQ, is useful as a preprocessing algorithm to improve the query complexity of many algorithms. Moreover, a variant of LINEARSEQ is shown to have adaptive complexity of O(log(n/k)) which is smaller than that of any previous algorithm in the literature. The second component is a parallelizable thresholding procedure THRESHOLDSEQ for adding elements with gain above a constant threshold. Finally, we demonstrate that our main algorithm empirically outperforms, in terms of runtime, adaptive rounds, total queries, and objective values, the previous state-of-the-art algorithm FAST in a comprehensive evaluation with six submodular objective functions. 1 Introduction The cardinality-constrained optimization of a monotone, submodular function f : 2N → R+, defined on subsets of a ground set N of size n, is a general problem formulation that is ubiquitous in wideranging applications, e.g. video or image summarization [30], network monitoring [26], information gathering [23], and MAP Inference for Determinantal Point Processes [20], among many others. The function f : 2N → R+ is submodular iff for all S ⊆ T ⊆ N , x 6∈ T , ∆ (x |T ) ≤ ∆ (x |S)1; and the function f is monotone if f(S) ≤ f(T ) for all S ⊆ T . In this paper, we study the following submodular maximization problem (SM) maximizef(S), subject to |S| ≤ k, (SM) where f is a monotone, submodular function; SM is an NP-hard problem. There has been extensive effort into the design of approximation algorithms for SM over the course of more than 45 years, e.g. [32, 12, 10, 21, 24]. For SM, the optimal ratio has been shown to be 1− 1/e ≈ 0.63 [32]. 1∆ (x |S) denotes the marginal gain of x to S: f(S ∪ {x})− f(S). 35th Conference on Neural Information Processing Systems (NeurIPS 2021). As instance sizes have grown very large, there has been much effort into the design of efficient, parallelizable algorithms for SM. Since queries to the objective function can be very expensive, the overall efficiency of an algorithm for SM is typically measured by the query complexity, or number of calls made to the objective function f [2, 9]. The degree of parallelizability can be measured by the adaptive complexity of an algorithm, which is the minimum number of rounds into which the queries to f may be organized, such that within each round, the queries are independent and hence may be arbitrariliy parallelized. Observe that the lower the adaptive complexity, the more parallelizable an algorithm is. To obtain a constant approximation factor, a lower bound of Ω(n) has been shown on the query complexity [24] and a lower bound of Ω(log(n)/ log log(n)) has been shown on the adaptive complexity [3]. 
Several algorithms have been developed recently that are nearly optimal in terms of query and adaptive complexities [14, 11, 17, 5]; that is, these algorithms achieve O (log n) adaptivity and O (npolylog(n)) query complexity (see Table 1). However, these algorithms use sampling techniques that result in very large constant factors that make these algorithms impractical. This fact is discussed in detail in Breuer et al. [9]; as an illustration, to obtain ratio 1− 1/e− 0.1 with 95% confidence, all of these algorithms require more than 106 queries of sets of size k/ log(n) in every adaptive round [9]; moreover, even if these algorithms are run as heuristics using a single sample, other inefficiencies preclude these algorithms of running even on moderately sized instances [9]. For this reason, the FAST algorithm of Breuer et al. [9] has been recently proposed, which uses an entirely different sampling technique called adaptive sequencing. Adaptive sequencing was originally introduced in Balkanski et al. [6], but the original version has quadratic query complexity in the size of the ground set and hence is still impractical on large instances. To speed it up, the FAST algorithm sacrifices theoretical guarantees to yield an algorithm that parallelizes well and is faster than all previous algorithms for SM in an extensive experimental evaluation. The theoretical sacrifices of FAST include: the adaptivity of FAST is Ω(log(n) log2(log n)), which is higher than the state-of-the-art, and more significantly, the algorithm obtains no approximation ratio for k < 8502; since many applications require small choices for k, this limits the practical utility of FAST. A natural question is thus: is it possible to design an algorithm that is both practical and theoretically optimal in terms of adaptivity, ratio, and total queries? 1.1 Contributions In this paper, we provide three main contributions. The first contribution is the algorithm LINEARSEQ (LS, Section 2) that achieves with probability 1− 1/n a constant factor (4 +O(ε))−1 in expected linear query complexity and with O(log n) adaptive rounds (Theorem 1). Although the ratio of ≈ 0.25 is smaller than the optimal 1− 1/e ≈ 0.63, this algorithm can be used to improve the query 2The approximation ratio 1 − 1/e − 4ε of FAST holds with probability 1 − δ for k ≥ θ(ε, δ, k) = 2 log(2δ−1 log( 1 ε log(k)))/ε2(1− 5ε). complexity of many extant algorithms, as we decribe in the related work section below. Interestingly, LINEARSEQ can be modified to have adaptivity O (log(n/k)) at a small cost to its ratio as discussed in Appendix F. This version of LINEARSEQ is a constant-factor algorithm for SM with smaller adaptivity than any previous algorithm in the literature, especially for values of k that are large relative to n. Our second contribution is an improved parallelizable thresholding procedure THRESHOLDSEQ (TS, Section 3) for a commonly recurring task in submodular optimization: namely, add all elements that have a gain of a specified threshold τ to the solution. This subproblem arises not only in SM, but also e.g. in submodular cover [17] and non-monotone submodular maximization [4, 18, 15, 25]. Our TS accomplishes this task with probability 1− 1/n in O(log n) adaptive rounds and expected O(n) query complexity (Theorem 2), while previous procedures for this task only add elements with an expected gain of τ and use expensive sampling techniques [17]; have Ω(log2 n) adaptivity [22]; or have Ω(kn) query complexity [6]. 
Finally, we present in Section 3 the parallelized greedy algorithm PARALLELGREEDYBOOST (PGB), which is used in conjunction with LINEARSEQ and THRESHOLDSEQ to yield the final algorithm LS+PGB, which answers the above question affirmatively: LS+PGB obtains nearly the optimal 1− 1/e ratio with probability 1− 2/n in O(log n) adaptive rounds and O(n) queries in expectation; moreover, LS+PGB is faster than FAST in an extensive empirical evaluation (see Table 2). In addition, LS+PGB improves theoretically on the previous algorithms in query complexity while obtaining nearly optimal adaptivity (see Table 1). 1.2 Additional Related Work Adaptive Sequencing. The main inefficiency of the adaptive sequencing method of Balkanski et al. [6] (which causes the quadratic query complexity) is an explicit check that a constant fraction of elements will be filtered from the ground set. In this work, we adopt a similar sampling technique to adaptive sequencing, except that we design the algorithm to filter a constant fraction of elements with only constant probability. This method allows us to reduce the quadratic query complexity of adaptive sequencing to linear query complexity while only increasing the adaptive complexity by a small constant factor. In contrast, FAST of Breuer et al. [9] speeds up adaptive sequencing by increasing the adaptive complexity of the algorithm through adaptive binary search procedures, which, in addition to the increasing the adaptivity by logarithmic factors, place restrictions on the k values for which the ratio can hold. This improved adaptive sequencing technique is the core of our THRESHOLDSEQ procedure, which has the additional benefit of being relatively simple to analyze. Algorithms with Linear Query Complexity. Our LINEARSEQ algorithm also uses the improved adaptive sequencing technique, but in addition this algorithm integrates ideas from the Ω(n)-adaptive linear-time streaming algorithm of Kuhnle [24] to achieve a constant-factor algorithm with low adaptivity in expected linear time. Integration of the improved adaptive sequencing with the ideas of Kuhnle [24] is non-trivial, and ultimately this integration enables the theoretical improvement in query complexity over previous algorithms with sublinear adaptivity that obtain a constant ratio with high probability (see Table 1). In Fahrbach et al. [17], a linear-time procedure SUBSAMPLEPREPROCESSING is described; this procedure is to the best of our knowledge the only algorithm in the literature that obtains a constant ratio with sublinear adaptive rounds and linear query complexity and hence is comparable to LINEARSEQ. However, SUBSAMPLEPREPROCESSING uses entirely different ideas from our LINEARSEQ and has much weaker theoretical guarantees: for input 0 < δ < 1, it obtains ratio δ 2 2×106 with probability 1 − δ in O(log(n)/δ) adaptive rounds and O(n) queries in expectation – the small ratio renders SUBSAMPLEPREPROCESSING impractical; also, its ratio holds only with constant probability. By contrast, with ε = 0.1, our LINEARSEQ obtains ratio ≈ 0.196 with probability 1− 1/n in O(log(n)) adaptive rounds and O(n) queries in expectation. Using LS for Preprocessing: Guesses of OPT. Many algorithms for SM, including FAST and all of the algorithms listed in Table 1 except for SM and our algorithm, use a strategy of guessing logarithmically many values of OPT. Our LINEARSEQ algorithm reduces the interval containing OPT from size k to a small constant size in expected linear time. 
Thus, LINEARSEQ could be used for preprocessing prior to running FAST or one of the other algorithms in Table 1, which would improve their query complexity without compromising their adaptive complexity or ratio; this illustrates the general utility of LINEARSEQ. For example, with this change, the theoretical adaptivity of FAST improves, although it remains worse than LS+PGB: the adaptive complexity of FAST becomes O ( 1 ε2 log(n) log ( 1 ε log(k) )) in contrast to the O ( 1 ε2 log (n/ε) ) of LS+PGB. Although SUBSAMPLEPREPROCESSING may be used for the same purpose, its ratio only holds with constant probability which would then limit the probability of success of any following algorithm. Relationship of THRESHOLDSEQ to Existing Methods. The first procedure in the literature to perform the same task is the THRESHOLDSAMPLING procedure of Fahrbach et al. [17]; however, THRESHOLDSAMPLING only ensures that the expected marginal gain of each element added is at least τ and has large constants in its runtime that make it impractical [9]. In contrast, THRESHOLDSEQ ensures that added elements contribute a gain of at least τ with high probability and is highly efficient empirically. A second procedure in the literature to perform the same task is the ADAPTIVESEQUENCING method of Balkanski et al. [6], which similarly to THRESHOLDSEQ uses random permutations of the ground set; however, ADAPTIVE-SEQUENCING focuses on explicitly ensuring a constant fraction of elements will be filtered in the next round, which is expensive to check: the query complexity of ADAPTIVE-SEQUENCING is O(kn). In contrast, our THRESHOLDSEQ algorithm ensures this property with a constant probability, which is sufficient to ensure the adaptivity with the high probability of 1− 1/n in O(n) expected queries. Finally, a third related procedure in the literature is THRESHOLDSAMPLING of Kazemi et al. [22], which also uses random permutations to sample elements. However, this algorithm has the higher adaptivity of O (log(n) log(k)), in contrast to the O (log(n)) of THRESHOLDSEQ. MapReduce Framework. Another line of work studying parallelizable algorithms for SM has focused on the MapReduce framework [13] in a distributed setting, e.g. [7, 8, 16, 28]. These algorithms divide the dataset over a large number of machines and are intended for a setting in which the data does not fit on a single machine. None of these algorithms has sublinear adaptivity and hence all have potentially large numbers of sequential function queries on each machine. In this work, our empirical evaluation is on a single machine with a large number of CPU cores; we do not evaluate our algorithms in a distributed setting. Organization. The constant-factor algorithm LINEARSEQ is described and analyzed in Section 2; the details of the analysis are presented in Appendix C. The variant of LINEARSEQ with lower adaptivity is described in Appendix F. The algorithms THRESHOLDSEQ and PARALLELGREEDYBOOST are discussed at a high level in Section 3, with detailed descriptions of these algorithms and theoretical analysis presented in Appendices D and E. Our empirical evaluation is summarized in Section 4 with more results and discussion in Appendix H. 2 A Parallelizable Algorithm with Linear Query Complexity: LINEARSEQ In this section, we describe the algorithm LINEARSEQ for SM (Alg. 1) that obtains ratio (4+O (ε))−1 in O ( 1 ε3 log(n) ) adaptive rounds and expected O ( n ε3 ) queries. 
If ε ≤ 0.21, the ratio of LINEARSEQ is lower-bounded by (4 + 16ε)−1 ≥ 0.135, which shows that a relatively large constant ratio is obtained even at large values of ε. An initial run of this algorithm is required for our main algorithm LS+PGB. Algorithm 1 The algorithm that obtains ratio (4 +O (ε))−1 in O ( log(n)/ε3 ) adaptive rounds and expected O ( n/ε3 ) queries. 1: procedure LINEARSEQ(f,N , k, ε) 2: Input: evaluation oracle f : 2N → R+, constraint k, error ε 3: a = arg maxu∈N f({u}) 4: Initialize A← {a} , V ← N , ` = d4(1 + 1/(βε)) log(n)e, β = ε/(16 log(8/(1− e−ε/2))) 5: for j ← 1 to ` do 6: Update V ← {x ∈ V : ∆ (x |A) ≥ f(A)/k} and filter out the rest 7: if |V | = 0 then break 8: V = {v1, v2, . . . , v|V |} ←random-permutation(V ) 9: Λ← {b(1 + ε)uc : 1 ≤ b(1 + ε)uc ≤ k, u ∈ N} ∪{bk + uεkc : bk + uεkc ≤ |V |, u ∈ N} ∪ {|V |} 10: B[λi] = false, for λi ∈ Λ 11: for λi ∈ Λ in parallel do 12: Tλi−1 ← {v1, v2, . . . , vλi−1} ; Tλi ← {v1, v2, . . . , vλi} ; T ′λi ← Tλi\Tλi−1 13: if ∆ ( T ′λi |A ∪ Tλi−1 ) /|T ′λi | ≥ (1− ε)f(A ∪ Tλi−1)/k then B[λi]← true 14: λ∗ ← max{λi ∈ Λ : B[λi] = false and ((λi ≤ k and B[1] to B[λi−1] are all true) or (λi > k and ∃m ≥ 1 s.t. | ⋃i−1 u=m T ′ λu | ≥ k and B[λm] to B[λi−1] are all true))} 15: A← A ∪ Tλ∗ 16: if |V | > 0 then return failure 17: return A′ ← last k elements added to A Description of LS. The work of LS is done within iterations of a sequential outer for loop (Line 5); this loop iterates at most O (log(n)) times, and each iteration requires two adaptive rounds; thus, the adaptive complexity of the algorithm is O (log(n)). Each iteration adds more elements to the set A, which is initially empty. Within each iteration, there are four high-level steps: 1) filter elements from V that have gain less than f(A)/k (Line 6); 2) randomly permute V (Line 8); 3) compute in parallel the marginal gain of adding blocks of the sequence of remaining elements in V to A (for loop on Line 11); 4) select a prefix of the sequence V = ( v1, v2, . . . , v|V | ) to add to A (Line 14). The selection of the prefix to add is carefully chosen to approximately satisfy, on average, Condition 1 for elements added; and also to ensure that, with constant probability, a constant fraction of elements of V are filtered on the next iteration. The following theorem states the theoretical results for LINEARSEQ. The remainder of this section proves this theorem, with intuition and discussion of the proof. The omitted proofs for all lemmata are provided in Appendix C. Theorem 1. Let (f, k) be an instance of SM. For any constant 0 < ε < 1/2, the algorithm LINEARSEQ has adaptive complexity O ( log(n)/ε3 ) and outputs A′ ⊆ N with |A′| ≤ k such that the following properties hold: 1) The algorithm succeeds with probability at least 1 − 1/n. 2) There are O ( (1/(εk) + 1)n/ε3 ) oracle queries in expectation. 3) If the algorithm succeeds,[ 4 + 4(2−ε)(1−ε)(1−2ε)ε ] f(A′) ≥ f(O), where O is an optimal solution to the instance (f, k). Overview. The goal of this section is to produce a constant factor, parallelizable algorithm with linear query complexity. As a starting point, consider an algorithm1 that takes one pass through the ground set, adding each element e to candidate set A iff ∆ (e |A) ≥ f(A)/k. (1) Condition 1 ensures two properties: 1) the last k elements in A contain a constant fraction of the value f(A); and 2) f(A) is within a constant fraction of OPT. 
By these two properties, the last k elements of A are a constant-factor approximation to SM with exactly one query of the objective function per element of the ground set. For completeness, we give a pseudocode (Alg. 3) and proof in Appendix B. However, each query depends on all of the previous ones, and thus there are n adaptive rounds. Therefore, the challenge is to approximately simulate Alg. 3 in a low-adaptivity (highly parallelizable) manner, which is what LINEARSEQ accomplishes.

2.1 Approximately Satisfying Condition 1

Discarding Elements. In one adaptive round during each iteration j of the outer for loop, all elements with gain to A of less than f(A)/k are discarded from V (Line 6). Since the size of A increases as the algorithm runs, by submodularity, the gain of these elements can only decrease; hence these elements cannot satisfy Condition 1 and can be safely discarded from consideration. The process of filtering thereby ensures the following lemma at termination.

Lemma 1. At successful termination of LINEARSEQ, f(O) ≤ 2f(A), where O ⊆ N is an optimal solution of size k.

Addition of Elements. Next, we describe the details of how elements are added to the set A. The random permutation of the remaining elements on Line 8 constructs a sequence (v_1, v_2, ..., v_{|V|}) such that each element is uniformly randomly sampled from the remaining elements. By testing the marginal gains along the sequence in parallel, it is possible to determine a good prefix of the sequence (v_i) to add to A to ensure the following: 1) Condition 1 is approximately satisfied; and 2) we will discard a constant fraction of V in the next iteration with constant probability. Condition 1 is important for the approximation ratio, and discarding a constant fraction of V is important for the adaptivity and query complexity. Below, we discuss how to choose the prefix such that both are achieved. To speed up the algorithm, we do not test the marginal gain at each point in the sequence (v_i), but rather test blocks of elements at once, as determined by the index set Λ defined in the pseudocode.

Prefix Selection. Say a block is bad if it does not satisfy the condition checked on Line 13 (which is an approximate, average form of Condition 1); otherwise, the block is good. At the end of an iteration, we select the largest block index λ* such that the block at λ* is bad and either all of the previous blocks are good, or the previous consecutive blocks that together contain at least k elements are all good. Then, we add the prefix T_{λ*} = (v_1, v_2, ..., v_{λ*}) to A. Now, the relevance of Condition 1 for the approximation ratio is that it implies f(A) ≥ 2f(A \ A′), where A′ denotes the last k elements added to A. Lemma 2 shows that the conditions required on the marginal gains of the added blocks imply that an approximate form of this fact is satisfied by LINEARSEQ. Indeed, the proof of Lemma 2 informs the choice Λ of blocks evaluated and the computation of λ*.

Lemma 2. Suppose LINEARSEQ terminates successfully. Then f(A) ≥ [2(1 − ε + ε²)/(1 + ε)] f(A\A′).

Proof. If |A| ≤ k, the lemma is immediate, so assume |A| > k. For iteration j, let T_{j,λ*_j} denote the set added to A during iteration j, and let T_{j,λ*_j} = ∅ if the algorithm terminates before iteration j. Let A_j denote the value of A after iteration j. Define c = max{c ∈ ℕ : A′ ⊆ ∪_{j=c}^{ℓ} T_{j,λ*_j}}. Then |T_{c,λ*_c}| > 0, and for any j > c, |T_{j,λ*_j}| < k.
It holds that (∪_{j=c+1}^{ℓ} T_{j,λ*_j}) ⊂ A′ ⊆ (∪_{j=c}^{ℓ} T_{j,λ*_j}). Figure 1 shows how A is composed of these sets T_{j,λ*_j} and how each set is composed of blocks. The following claim is proven in Appendix C.3.

Claim 1. It holds that Δ(T_{c,λ*_c} | A\A′) ≥ (1 − ε) max{0, |T_{c,λ*_c} ∩ A′| − 2εk} · f(A\A′)/k. For j > c, it holds that Δ(T_{j,λ*_j} | A_{j−1}) ≥ [(1 − ε)/(1 + ε)] |T_{j,λ*_j}| · f(A\A′)/k.

From Claim 1,

f(A) − f(A\A′) = Δ(T_{c,λ*_c} | A\A′) + Σ_{j=c+1}^{ℓ} Δ(T_{j,λ*_j} | A_{j−1}) ≥ [(1 − ε)(1 − 2ε)/(1 + ε)] · f(A\A′),   (2)

where Inequality 2 is proven in Appendix C.4. Rearranging Inequality 2 gives f(A) ≥ [1 + (1 − ε)(1 − 2ε)/(1 + ε)] f(A\A′) = [2(1 − ε + ε²)/(1 + ε)] f(A\A′), which completes the proof.

In the remainder of this subsection, we will show that a (βε)-fraction of V is discarded at each iteration j of the outer for loop with probability at least 1/2, where β is a constant in terms of ε, as defined on Line 4 of the pseudocode. The remaining proofs in this section are implicitly conditioned on the behavior of the algorithm prior to iteration j. The next lemma describes the behavior of the number of elements that will be filtered at iteration j + 1. Observe that the set S_i defined in the next lemma is the set of elements that would be filtered at the next iteration if the prefix T_i were added to A.

Lemma 3. Let S_i = {x ∈ V : Δ(x | A ∪ T_i) < f(A ∪ T_i)/k}. It holds that |S_0| = 0, |S_{|V|}| = |V|, and |S_i| ≤ |S_{i+1}|.

By Lemma 3, the number of elements in S_i increases from 0 to |V| with i. Therefore, there exists a t such that t = min{i ∈ ℕ : |S_i| ≥ βε|V|}. If λ* ≥ t, then |S_{λ*}| ≥ βε|V|, and we will successfully filter out more than a (βε)-fraction of V at the next iteration. In this case, we say that iteration j succeeds. Otherwise, if λ* < t, the iteration may fail. The remainder of the proof bounds the probability that λ* < t, which is an upper bound on the probability that iteration j fails. Let λ_t = max{λ ∈ Λ : λ < t}, and let λ′_t = max({λ′ ∈ Λ : Σ_{λ ∈ Λ, λ′ ≤ λ ≤ λ_t} |T′_λ| ≥ k} ∪ {1}). If λ* < t, there must be at least one index λ between λ′_t and λ_t such that the block T′_λ is bad. The next lemma bounds the probability that any block T′_λ, with λ < λ_t, is bad.

Lemma 4. Let t = min{i ∈ ℕ : |S_i| ≥ βε|V|}; let λ_t = max{λ ∈ Λ : λ < t}; and let (Y_i) be a sequence of independent and identically distributed Bernoulli trials, each with success probability βε. Then for any λ < λ_t, Pr(B[λ] = false) ≤ Pr(Σ_{i=1}^{|T′_λ|} Y_i > ε|T′_λ|).

Finally, we bound the probability that an iteration j of the outer for loop fails. Let B_1 = {λ ∈ Λ : λ ≤ k and λ < λ_t} and B_2 = {λ ∈ Λ : |Λ ∩ [λ, λ_t]| ≤ ⌈1/ε⌉}. Then

Pr(iteration j fails) ≤ Pr(∃ λ ∈ B_1 ∪ B_2 with B[λ] = false) ≤ 1/2,   (3)

where the proof of Inequality 3 is in Appendix C.7.

2.2 Proof of Theorem 1

From Section 2.1, the probability that any given iteration of the outer for loop successfully filters a (βε)-fraction of V is at least 1/2. We can model the success of the iterations as a sequence of dependent Bernoulli random variables, with success probability that depends on the results of previous trials but is always at least 1/2.

Success Probability of LINEARSEQ. If there are at least m = ⌈log_{1−βε}(1/n)⌉ successful iterations, the algorithm LINEARSEQ will succeed. The number of successful iterations X_ℓ up to and including the ℓ-th iteration is a sum of dependent Bernoulli random variables. With some work (Lemma 6 in Appendix A), the Chernoff bounds can be applied to ensure the algorithm succeeds with probability at least 1 − 1/n, as shown in Appendix C.2.

Adaptivity and Query Complexity. Oracle queries are made on Lines 6 and 13 of LINEARSEQ. The filtering on Line 6 is in one adaptive round, and the inner for loop is also in one adaptive round.
Thus, the adaptivity is proportional to the number of iterations of the outer for loop, O(ℓ) = O(log(n)/ε³). For the query complexity, let Y_i be the number of iterations between the (i − 1)-th and i-th successful iterations of the outer for loop. By Lemma 6 in Appendix A, E[Y_i] ≤ 2. From here, we show in Appendix C.8 that there are at most O(n/ε³) queries in expectation.

Approximation Ratio. Suppose LINEARSEQ terminates successfully. We have the approximation ratio as follows:

f(A′) ≥ f(A) − f(A\A′)   (a)
     ≥ f(A) − [(1 + ε)/(2(1 − ε + ε²))] f(A)   (b)
     ≥ [4 + 4(2 − ε)ε/((1 − ε)(1 − 2ε))]⁻¹ f(O),   (c)

where Inequality (a) is from the submodularity of f, Inequality (b) is from Lemma 2, and Inequality (c) is from Lemma 1.

3 Improving to Nearly the Optimal Ratio

In this section, we describe how to obtain the nearly optimal ratio in nearly optimal query and adaptive complexities (Section 3.2). First, in Section 3.1, we describe THRESHOLDSEQ, a parallelizable procedure to add all elements with gain above a constant threshold to the solution. In Section 3.2, we describe PARALLELGREEDYBOOST and, finally, the main algorithm LS+PGB. Because of space constraints, the algorithms are described in the main text at a high level only, with detailed descriptions and proofs deferred to Appendices D and E.

3.1 The THRESHOLDSEQ Procedure

In this section, we discuss the algorithm THRESHOLDSEQ, which adds all elements with gain above an input threshold τ, up to accuracy ε, in O(log n) adaptive rounds and O(n) queries in expectation. Pseudocode is given in Alg. 4 in Appendix D.

Overview. The goal of this algorithm is, given an input threshold τ and size constraint k, to produce a set of size at most k such that the average gain of the elements added is at least τ. As discussed in Section 1, this task is an important subroutine of many algorithms for submodular optimization (including our final algorithm), although by itself it does not produce any approximation ratio for SM. The overall strategy of our parallelizable algorithm THRESHOLDSEQ is analogous to that of LINEARSEQ, although THRESHOLDSEQ is considerably simpler to analyze. The following theorem summarizes the theoretical guarantees of THRESHOLDSEQ; the proofs are in Appendix D.

Theorem 2. Suppose THRESHOLDSEQ is run with input (f, k, ε, δ, τ). Then, the algorithm has adaptive complexity O(log(n/δ)/ε) and outputs A ⊆ N with |A| ≤ k such that the following properties hold: 1) The algorithm succeeds with probability at least 1 − δ/n. 2) There are O(n/ε) oracle queries in expectation. 3) It holds that f(A)/|A| ≥ (1 − ε)τ/(1 + ε). 4) If |A| < k, then Δ(x | A) < τ for all x ∈ N.

3.2 The PARALLELGREEDYBOOST Procedure and the Main Algorithm

Algorithm 2 The PARALLELGREEDYBOOST procedure.
1: Input: evaluation oracle f : 2^N → R⁺, constraint k, constant α, value Γ such that Γ ≤ f(O) ≤ Γ/α, accuracy parameter ε
2: Initialize τ ← Γ/(αk), δ ← 1/(log_{1−ε}(α/3) + 1), A ← ∅
3: while τ ≥ Γ/(3k) do
4:   τ ← τ(1 − ε)
5:   S ← THRESHOLDSEQ(f_A, N, k − |A|, δ, ε/3, τ)
6:   A ← A ∪ S
7:   if |A| = k then
8:     return A
9: return A

In this section, we describe the greedy algorithm PARALLELGREEDYBOOST (PGB, Alg. 2), which uses multiple calls to THRESHOLDSEQ with descending thresholds. Next, our state-of-the-art algorithm LS+PGB is specified.

Description of PARALLELGREEDYBOOST. This procedure takes as input the results from running an α-approximation algorithm on the instance (f, k) of SM; thus, PARALLELGREEDYBOOST is not meant to be used as a standalone algorithm.
Namely, PARALLELGREEDYBOOST takes as input Γ, the solution value of an α-approximation algorithm for SM; this solution value Γ is then boosted to ensure the ratio 1 − 1/e − ε on the instance. The values of Γ and α are used to produce an initial threshold value τ for THRESHOLDSEQ. Then, the threshold value is iteratively decreased by a factor of (1 − ε), and the call to THRESHOLDSEQ is repeated to build up a solution, until a minimum threshold value of Γ/(3k) is reached. Therefore, THRESHOLDSEQ is called at most O(log(1/α)/ε) times. We remark that α is not required to be a constant approximation ratio.

Theorem 3. Let (f, k) be an instance of SM. Suppose an α-approximation algorithm for SM is used to obtain Γ, where the approximation ratio α holds with probability 1 − p_α. For any constant ε > 0, the algorithm PARALLELGREEDYBOOST has adaptive complexity O((log(1/α)/ε²) log(n log(1/α)/ε)) and outputs A ⊆ N with |A| ≤ k such that the following properties hold: 1) The algorithm succeeds with probability at least 1 − 1/n − p_α. 2) If the algorithm succeeds, there are O(n log(1/α)/ε²) oracle queries in expectation. 3) If the algorithm succeeds, f(A) ≥ (1 − 1/e − ε) f(O), where O is an optimal solution to the instance (f, k).

Proof. Success Probability. For the while loop on Lines 3–8, there are no more than ⌈log_{1−ε}(α/3)⌉ iterations. If THRESHOLDSEQ completes successfully at every iteration, Algorithm 2 also succeeds. The probability that this occurs is lower-bounded in Appendix E.1.1. For the remainder of the proof of Theorem 3, we assume that every call to THRESHOLDSEQ succeeds.

Adaptive and Query Complexity. There are at most ⌈log_{1−ε}(α/3)⌉ iterations of the while loop. Since log(x) ≤ x − 1, ⌈log_{1−ε}(α/3)⌉ = ⌈log(α/3)/log(1 − ε)⌉, and ε < 1 − 1/e, it holds that ⌈log_{1−ε}(α/3)⌉ ≤ ⌈log(3/α)/ε⌉. In each iteration, queries to the oracle occur only on Line 5, the call to THRESHOLDSEQ. Since the adaptive and query complexities of THRESHOLDSEQ are O(log(n/δ)/ε) and O(n/ε), the adaptive and query complexities of Algorithm 2 are O((log(1/α)/ε²) log(n log(1/α)/ε)) and O(n log(1/α)/ε²), respectively.

Approximation Ratio. Let A_j be the set A obtained after Line 6, and let S_j be the set returned by THRESHOLDSEQ in iteration j of the while loop. Let ℓ be the number of iterations of the while loop. First, consider the case that |A| < k at termination; then THRESHOLDSEQ returns a set with 0 ≤ |S_ℓ| < k − |A_{ℓ−1}| at the last iteration. From Theorem 2, for any o ∈ O, Δ(o | A) < τ < Γ/(3k). By submodularity and monotonicity,

f(O) − f(A) ≤ f(O ∪ A) − f(A) ≤ Σ_{o ∈ O\A} Δ(o | A) ≤ Σ_{o ∈ O\A} Γ/(3k) ≤ f(O)/3,

and the ratio holds. Second, consider the case that |A| = k. Suppose that in iteration j + 1, THRESHOLDSEQ returns a nonempty set S_{j+1}. Then, in the previous iteration j, THRESHOLDSEQ returned a set S_j such that 0 ≤ |S_j| < k − |A_{j−1}|. From Theorem 2,

f(O) − f(A_{j+1}) ≤ (1 − [(1 − ε/3)(1 − ε)/((1 + ε/3)k)] |A_{j+1}\A_j|) (f(O) − f(A_j)).   (4)

The above inequality also holds when A_{j+1} = A_j. Therefore, it holds that

f(O) − f(A) ≤ e^{−(1−ε/3)(1−ε)/(1+ε/3)} f(O) ≤ (1/e + ε) f(O).   (5)

The detailed proofs of Inequalities 4 and 5 can be found in Appendix E.1.2.

Main Algorithm: LS+PGB. To obtain the main algorithm of this paper (and its nearly optimal theoretical guarantees), we use PARALLELGREEDYBOOST with the solution value Γ and ratio α given by LINEARSEQ. Because this choice requires an initial run of LINEARSEQ, we denote this algorithm by LS+PGB.
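To make the composition concrete, the following is a hedged sketch of the Alg. 2 control flow and of how LS+PGB wires it to LINEARSEQ; here `linearseq` and `threshold_seq` are placeholders for the procedures of Sections 2 and 3.1, only the sequential control flow is shown (not the parallel internals), and α is instantiated with the Theorem 1 ratio.

```python
# Hedged sketch of Alg. 2 (PARALLELGREEDYBOOST) and of the LS+PGB composition.
# `linearseq` and `threshold_seq` are stand-ins for the procedures analyzed in
# Sections 2 and 3.1; their parallel internals are not shown here.
import math

def parallel_greedy_boost(f, ground, k, alpha, gamma, eps, threshold_seq):
    tau = gamma / (alpha * k)                                      # initial threshold
    delta = 1.0 / (math.log(alpha / 3) / math.log(1 - eps) + 1)    # 1/(log_{1-eps}(alpha/3) + 1)
    A = set()
    while tau >= gamma / (3 * k):
        tau *= (1 - eps)                                           # descending thresholds
        f_A = lambda S: f(A | set(S)) - f(A)                       # contracted objective f_A
        A |= set(threshold_seq(f_A, ground, k - len(A), delta, eps / 3, tau))
        if len(A) == k:
            return A
    return A

def ls_plus_pgb(f, ground, k, eps, linearseq, threshold_seq):
    A_prime = set(linearseq(f, ground, k, eps))                    # preprocessing run of LINEARSEQ
    gamma = f(A_prime)                                             # Gamma <= f(O) <= Gamma/alpha
    alpha = 1.0 / (4 + 4 * (2 - eps) * eps / ((1 - eps) * (1 - 2 * eps)))  # Theorem 1 ratio; needs 0 < eps < 1/2
    return parallel_greedy_boost(f, ground, k, alpha, gamma, eps, threshold_seq)
```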
Thus, LS+PGB integrates LINEARSEQ and PARALLELGREEDYBOOST to get nearly the optimal 1 − 1/e ratio with query complexity of O(n) and adaptivity of O(log(n)).

4 Empirical Evaluation

In this section, we demonstrate that the empirical performance of LS+PGB outperforms that of FAST on the metrics of total time, total queries, adaptive rounds, and objective value across six applications of SM: maximum cover on random graphs (MaxCover), Twitter feed summarization (TweetSumm), image summarization (ImageSumm), influence maximization (Influence), revenue maximization (RevMax), and traffic speeding sensor placement (Traffic). See Appendix H.2 for the definitions of the objectives. The sizes n of the ground sets range from n = 1885 to n = 100000.

Implementation and Environment. We evaluate the same implementation of FAST used in Breuer et al. [9]. Our implementation of LS+PGB is parallelized using the Message Passing Interface (MPI) within the same Python codebase as FAST (see the Supplementary Material for source code). Practical optimizations to LINEARSEQ, which do not compromise the theoretical guarantees, are discussed in Appendix G. The hardware of the system consists of 40 Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz cores (with 80 threads available), of which up to 75 threads are made available to the algorithms for the experiments. On each instance, the algorithms are run independently for five repetitions, and the mean and standard deviation of the objective value, total queries to the submodular function, adaptive rounds, and parallel (wall-clock) runtime are plotted.

Parameters. The parameters ε, δ of FAST are set to enforce the nominal ratio of 1 − 1/e − 0.1 ≈ 0.53 with probability 0.95; these are the same parameter settings for FAST as in the Breuer et al. [9] evaluation. The ε parameter of LS+PGB is set to enforce the same ratio with probability 1 − 2/n. With these parameters, FAST ensures its ratio only if k ≥ θ(ε, δ, k) = 2 log(2δ⁻¹ log((1/ε) log(k))) / (ε²(1 − 5ε)) ≥ 7103. Since k < 7103 on many of our instances, FAST is evaluated on these instances as a theoretically motivated heuristic. In contrast, the ratio of LS+PGB holds on all instances evaluated. We use exponentially increasing k values from n/1000 to n/10 for each application to explore the behavior of each algorithm across a broad range of instance sizes.

Overview of Results. Figure 2 illustrates the comparison with FAST on the ImageSumm and RevMax applications; results on the other applications are shown in Appendix H. Runtime: LS+PGB is faster than FAST by more than 1% on 80% of instances evaluated, and is faster by an order of magnitude on 14% of instances. Objective value: LS+PGB achieves higher objective by more than 1% on 50% of instances, whereas FAST achieves higher objective by more than 1% on 8% of instances. Adaptive rounds: LS+PGB achieves more than 1% fewer adaptive rounds on 75% of instances, while FAST achieves more than 1% fewer adaptive rounds on 22% of instances. Total queries: LS+PGB uses more than 1% fewer queries on 84% of instances, with FAST using more than 1% fewer queries on 9% of instances. In summary, LS+PGB frequently gives substantial improvements in objective value, queries, adaptive rounds, and parallel runtime. A comparison of the arithmetic means of the metrics over all instances is given in Table 2. Finally, FAST and LS+PGB show very similar linear speedup with the number of processors employed, as shown in Fig. 7.
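To connect the adaptivity numbers above to the implementation, the following is a hedged sketch of the basic primitive underlying all of the parallel loops in both algorithms: evaluating the marginal gains of many candidates against a fixed set in a single adaptive round. The actual LS+PGB code uses MPI; Python's multiprocessing is used here only as a stand-in, and `f` is assumed to be a picklable (module-level) oracle supplied by the caller.

```python
# Hedged sketch of one adaptive round: the gain queries below are mutually
# independent, so they may be issued concurrently. multiprocessing stands in
# for the MPI parallelism of the actual implementation; `f` must be picklable.
from multiprocessing import Pool

def _gain(args):
    f, A, e = args
    return e, f(A | {e}) - f(A)             # one independent oracle query per candidate

def one_adaptive_round(f, A, candidates, processes=4):
    jobs = [(f, frozenset(A), e) for e in candidates]
    with Pool(processes) as pool:
        return dict(pool.map(_gain, jobs))  # element -> marginal gain Delta(e | A)
```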
5 Concluding Remarks In this work, we have introduced the algorithm LS+PGB, which is highly parallelizable and achieves state-of-the-art empirical performance over any previous algorithm for SM; also, LS+PGB is nearly optimal theoretically in terms of query complexity, adaptivity, and approximation ratio. An integral component of LS+PGB is our preprocessing algorithm LINEARSEQ, which reduces the interval containing OPT to a small constant size in expected linear time and low adaptivity, which may be independently useful. Another component of LS+PGB is the THRESHOLDSEQ procedure, which adds all elements with gain above a threshold in a parallelizable manner and improves existing algorithms in the literature for the same task. Acknowledgements The work of Yixin Chen, Tonmoy Dey, and Alan Kuhnle was partially supported by Florida State University. The authors have received no third-party funding in direct support of this work. The authors have no additional revenues from other sources related to this work.
1. What is the focus and contribution of the paper on maximizing submodular functions?
2. What are the strengths of the proposed approach, particularly in terms of its adaptation and complexity?
3. How does the proposed method compare to other recent works, specifically FAST, regarding its performance and scalability?
4. Do you have any concerns or suggestions regarding the organization and clarity of the paper, especially regarding the placement of proofs and explanations?
5. Are there any limitations or areas for improvement in the proposed approach that could be explored further?
Summary Of The Paper Review
Summary Of The Paper
This paper proposes several new algorithms for maximizing submodular functions which, when combined, achieve a state-of-the-art combination of approximation ratio, adaptive complexity, and query complexity. Empirically, the combined LS+PGB algorithm is also equal to or better than the recent FAST algorithm.

Review
This paper proposes a new algorithm for maximizing submodular functions that achieves a state-of-the-art combination of approximation ratio, adaptive complexity, and query complexity. Empirically, the algorithm is also equal to or better than FAST.

Impact: This is a nice paper with impressive results that improve over the previous state of the art. The improved ThresholdSeq algorithm in particular may improve many existing algorithms that rely on this subproblem.

Quality/Originality: Builds on and expands previous work in a novel way. The experiments would be improved if additional large-scale datasets were used, such as ImageNet, MovieLens, or Uber Pickups.

Clarity/Organization: Very good overview of the previous state of the art, with guarantees as well as high-level explanations of the various algorithms. The paper ends abruptly, with many results deferred to the Supplementary Material. I suggest moving the complete proof of ThresholdSeq to the main paper as a warmup, then explaining how the analysis must change for LinearSeq and deferring the full proof of LinearSeq to the Supplementary Material. Please mention that "adaptivity", "adaptive complexity", and "adaptivity complexity" are used interchangeably.

Questions: How does the proposed method compare to FAST for larger values of k?

EDIT: The authors answered my main question in their response. Therefore, I am keeping my review the same.
1. What is the focus of the paper regarding monotone submodular maximization?
2. What are the strengths and weaknesses of the proposed LinearSeq subroutine compared to previous works?
3. How does the reviewer assess the novelty and significance of the paper's contributions?
4. Are there any suggestions or typos in the review that could improve the clarity and quality of the paper?
Summary Of The Paper Review
Summary Of The Paper This work introduces the LinearSeq subroutine, which achieves a 1/4 approximation ratio for monotone submodular maximization subject to a cardinality constraint w.h.p. while using O(n) queries in expectation and O(log n) adaptive rounds. The authors then show how to use LinearSeq together with a thresholding subroutine to achieve an algorithm with a (1 − 1/e − ε) approximation for monotone submodular maximization in O((1/ε^2) log(n/ε)) adaptive rounds and an expected O(n/ε^2) queries. This is an improvement over the previous works of [Balkanski-Rubinstein-Singer, SODA 2019], [Ene-Nguyen, SODA 2019], and [Fahrbach-Mirrokni-Zadimoghaddam, SODA 2019] in terms of ε factors (for the query complexity) and practicability. The authors compare their work extensively to the recent FAST algorithm of [Breuer-Balkanski-Singer, ICML 2020], which sacrifices some theoretical guarantees in favor of being extremely practical. Compared to all previous works, the main result here improves the query complexity without sacrificing the approximation quality or (much of) the adaptivity. The experiments in this paper compare against FAST and demonstrate that the algorithm in this paper is substantially faster than FAST without compromising on the solution quality (see Figure 3 in the full version of the paper). Review Originality. This paper builds on a recent line of work on low-adaptivity monotone submodular maximization. The overarching approach is reasonably well understood in the submodular literature, but the new subroutine (LinearSeq) that the authors introduce is a powerful, standalone technique for preprocessing the candidate space for a good guess of OPT, and therefore can be used to reduce the query complexity of all existing low-adaptivity algorithms (i.e., this idea allows us to find an interval [L, U] such that U/L = O(1) and OPT ∈ [L, U] using a nearly optimal number of queries and adaptive rounds). Quality. The LinearSeq subroutine is a low-adaptivity adaptation of the novel streaming algorithm of [Kuhnle, AISTATS 2021]; hence, this paper does a great job of bridging the two literatures (streaming and low-adaptivity) in a meaningful way. Overall, the paper is good, but it's not clear that the previous state-of-the-art algorithm for this problem is the FAST algorithm [Breuer-Balkanski-Singer, ICML 2020], even though it's the most recent in this line of work. The authors compare only against this, but it would be valuable to see how the algorithms from SODA 2019 compare as well, as they have better theoretical guarantees than FAST. The theory is interesting and seems to follow the high-level approach of [Fahrbach-Mirrokni-Zadimoghaddam, SODA 2019], which is a thresholding batch-greedy algorithm. The experiments could be improved in my opinion, since they only compare against FAST. In the first nine pages, only the running times are considered, which can be somewhat implementation-specific. For the revision, it seems that comparing the solution quality (i.e., Figure 3 in the appendix) would be more meaningful, since most of the running time information is captured in Table 2. Further, the drop in running time for FAST in the MaxCover(WS) experiment seems inconsistent with the other data points and should probably be explained (especially since this is averaged over five trials). In general, this algorithm runs roughly 10x faster than FAST with comparable solution quality and ~10x fewer oracle queries. Clarity.
The paper is written pretty well for a familiar audience. One comment here is that I would advertise all results of this paper clearly in the abstract, not just that LinearSeq achieves a 1/4 approximation with nearly optimal adaptivity and query complexity. At a quick glance, it's not clear that the paper contains a (1 − 1/e − ε)-approximation algorithm, which then makes the optimality component of the title somewhat confusing. Significance. Overall, this is a good result in an active area of research. The remaining problems in low-adaptivity monotone submodular maximization are (1) shaving off ε factors and (2) making the algorithms more practical. This paper tackles both of these problems. Typos / Suggestions. Suggestion: Consider removing "Best of Both Worlds" from the title since it somewhat implies that no previous works achieve this balance. [33] Suggestion: Consider using the author names + the citation in the Reference column of Figure 1. It seems like this would be easier to read than the algorithm acronym and citation number. [74] Suggestion: It would be good to include the line of low-adaptivity submodular maximization algorithms for non-monotone functions, instead of just citation [22]: Balkanski, Eric, Adam Breuer, and Yaron Singer. "Non-monotone submodular maximization in exponentially fewer iterations." Advances in Neural Information Processing Systems 31 (NeurIPS 2018). Fahrbach, Matthew, Vahab Mirrokni, and Morteza Zadimoghaddam. "Non-monotone submodular maximization with nearly optimal adaptivity and query complexity." International Conference on Machine Learning. PMLR, 2019. Ene, Alina, and Huy Nguyen. "Parallel algorithm for non-monotone DR-submodular maximization." International Conference on Machine Learning. PMLR, 2020. [93] Suggestion: The paper has a somewhat aggressive tone in the first few pages (in my opinion). The superior results are clear from the main theorem alone, so I'd let them speak for themselves. Examples: [93] "arguably, this is the right way to conduct adaptive sequencing"; [77] "highly impractical" -- maybe, but it's probably better to say "very expensive". [158] Typo: Line 2 of Algorithm 1 uses ϵ instead of ε. [280] The guarantees of ThresholdSeq in Theorem 2 have the same format as Lemma 3.2 in [Fahrbach-Mirrokni-Zadimoghaddam, SODA 2019] for their Threshold-Sampling algorithm. If the ThresholdSeq algorithm was inspired by that subroutine, it makes sense to at least point to that reference and their algorithm. [302] Is the α in "Γ, α, where the..." a typo?
NIPS
Title Best of Both Worlds: Practical and Theoretically Optimal Submodular Maximization in Parallel Abstract For the problem of maximizing a monotone, submodular function with respect to a cardinality constraint k on a ground set of size n, we provide an algorithm that achieves the state-of-the-art in both its empirical performance and its theoretical properties, in terms of adaptive complexity, query complexity, and approximation ratio; that is, it obtains, with high probability, query complexity of O (n) in expectation, adaptivity of O (log(n)), and approximation ratio of nearly 1− 1/e. The main algorithm is assembled from two components which may be of independent interest. The first component of our algorithm, LINEARSEQ, is useful as a preprocessing algorithm to improve the query complexity of many algorithms. Moreover, a variant of LINEARSEQ is shown to have adaptive complexity of O(log(n/k)) which is smaller than that of any previous algorithm in the literature. The second component is a parallelizable thresholding procedure THRESHOLDSEQ for adding elements with gain above a constant threshold. Finally, we demonstrate that our main algorithm empirically outperforms, in terms of runtime, adaptive rounds, total queries, and objective values, the previous state-of-the-art algorithm FAST in a comprehensive evaluation with six submodular objective functions. 1 Introduction The cardinality-constrained optimization of a monotone, submodular function f : 2N → R+, defined on subsets of a ground set N of size n, is a general problem formulation that is ubiquitous in wideranging applications, e.g. video or image summarization [30], network monitoring [26], information gathering [23], and MAP Inference for Determinantal Point Processes [20], among many others. The function f : 2N → R+ is submodular iff for all S ⊆ T ⊆ N , x 6∈ T , ∆ (x |T ) ≤ ∆ (x |S)1; and the function f is monotone if f(S) ≤ f(T ) for all S ⊆ T . In this paper, we study the following submodular maximization problem (SM) maximizef(S), subject to |S| ≤ k, (SM) where f is a monotone, submodular function; SM is an NP-hard problem. There has been extensive effort into the design of approximation algorithms for SM over the course of more than 45 years, e.g. [32, 12, 10, 21, 24]. For SM, the optimal ratio has been shown to be 1− 1/e ≈ 0.63 [32]. 1∆ (x |S) denotes the marginal gain of x to S: f(S ∪ {x})− f(S). 35th Conference on Neural Information Processing Systems (NeurIPS 2021). As instance sizes have grown very large, there has been much effort into the design of efficient, parallelizable algorithms for SM. Since queries to the objective function can be very expensive, the overall efficiency of an algorithm for SM is typically measured by the query complexity, or number of calls made to the objective function f [2, 9]. The degree of parallelizability can be measured by the adaptive complexity of an algorithm, which is the minimum number of rounds into which the queries to f may be organized, such that within each round, the queries are independent and hence may be arbitrariliy parallelized. Observe that the lower the adaptive complexity, the more parallelizable an algorithm is. To obtain a constant approximation factor, a lower bound of Ω(n) has been shown on the query complexity [24] and a lower bound of Ω(log(n)/ log log(n)) has been shown on the adaptive complexity [3]. 
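To make the oracle model above concrete, the following sketch implements a monotone, submodular objective (a coverage function of the kind used in the MaxCover benchmark later in the paper) together with the marginal gain ∆(x | S) = f(S ∪ {x}) − f(S). The neighborhood structure here is a made-up toy instance, not one of the evaluated datasets.

```python
from typing import Dict, Iterable, Set

# Toy coverage objective: f(S) = |union of N(v) for v in S|.
# Coverage functions of this form are monotone and submodular.
NEIGHBORS: Dict[int, Set[int]] = {
    0: {0, 1, 2},
    1: {2, 3},
    2: {4},
    3: {0, 4, 5},
}

def f(S: Iterable[int]) -> int:
    """Value oracle: number of vertices covered by the neighborhoods of S."""
    covered: Set[int] = set()
    for v in S:
        covered |= NEIGHBORS[v]
    return len(covered)

def marginal_gain(x: int, S: Set[int]) -> int:
    """Delta(x | S) = f(S union {x}) - f(S)."""
    return f(S | {x}) - f(S)

S = {0}
print(f(S))                  # 3
print(marginal_gain(3, S))   # adds {4, 5}: gain 2
print(marginal_gain(1, S))   # adds {3}: gain 1, smaller than its gain w.r.t. the empty set
```

The last two lines illustrate the diminishing-returns property: the gain of element 1 is smaller once element 0 is already in the set.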
Several algorithms have been developed recently that are nearly optimal in terms of query and adaptive complexities [14, 11, 17, 5]; that is, these algorithms achieve O (log n) adaptivity and O (npolylog(n)) query complexity (see Table 1). However, these algorithms use sampling techniques that result in very large constant factors that make these algorithms impractical. This fact is discussed in detail in Breuer et al. [9]; as an illustration, to obtain ratio 1− 1/e− 0.1 with 95% confidence, all of these algorithms require more than 106 queries of sets of size k/ log(n) in every adaptive round [9]; moreover, even if these algorithms are run as heuristics using a single sample, other inefficiencies preclude these algorithms of running even on moderately sized instances [9]. For this reason, the FAST algorithm of Breuer et al. [9] has been recently proposed, which uses an entirely different sampling technique called adaptive sequencing. Adaptive sequencing was originally introduced in Balkanski et al. [6], but the original version has quadratic query complexity in the size of the ground set and hence is still impractical on large instances. To speed it up, the FAST algorithm sacrifices theoretical guarantees to yield an algorithm that parallelizes well and is faster than all previous algorithms for SM in an extensive experimental evaluation. The theoretical sacrifices of FAST include: the adaptivity of FAST is Ω(log(n) log2(log n)), which is higher than the state-of-the-art, and more significantly, the algorithm obtains no approximation ratio for k < 8502; since many applications require small choices for k, this limits the practical utility of FAST. A natural question is thus: is it possible to design an algorithm that is both practical and theoretically optimal in terms of adaptivity, ratio, and total queries? 1.1 Contributions In this paper, we provide three main contributions. The first contribution is the algorithm LINEARSEQ (LS, Section 2) that achieves with probability 1− 1/n a constant factor (4 +O(ε))−1 in expected linear query complexity and with O(log n) adaptive rounds (Theorem 1). Although the ratio of ≈ 0.25 is smaller than the optimal 1− 1/e ≈ 0.63, this algorithm can be used to improve the query 2The approximation ratio 1 − 1/e − 4ε of FAST holds with probability 1 − δ for k ≥ θ(ε, δ, k) = 2 log(2δ−1 log( 1 ε log(k)))/ε2(1− 5ε). complexity of many extant algorithms, as we decribe in the related work section below. Interestingly, LINEARSEQ can be modified to have adaptivity O (log(n/k)) at a small cost to its ratio as discussed in Appendix F. This version of LINEARSEQ is a constant-factor algorithm for SM with smaller adaptivity than any previous algorithm in the literature, especially for values of k that are large relative to n. Our second contribution is an improved parallelizable thresholding procedure THRESHOLDSEQ (TS, Section 3) for a commonly recurring task in submodular optimization: namely, add all elements that have a gain of a specified threshold τ to the solution. This subproblem arises not only in SM, but also e.g. in submodular cover [17] and non-monotone submodular maximization [4, 18, 15, 25]. Our TS accomplishes this task with probability 1− 1/n in O(log n) adaptive rounds and expected O(n) query complexity (Theorem 2), while previous procedures for this task only add elements with an expected gain of τ and use expensive sampling techniques [17]; have Ω(log2 n) adaptivity [22]; or have Ω(kn) query complexity [6]. 
Finally, we present in Section 3 the parallelized greedy algorithm PARALLELGREEDYBOOST (PGB), which is used in conjunction with LINEARSEQ and THRESHOLDSEQ to yield the final algorithm LS+PGB, which answers the above question affirmatively: LS+PGB obtains nearly the optimal 1− 1/e ratio with probability 1− 2/n in O(log n) adaptive rounds and O(n) queries in expectation; moreover, LS+PGB is faster than FAST in an extensive empirical evaluation (see Table 2). In addition, LS+PGB improves theoretically on the previous algorithms in query complexity while obtaining nearly optimal adaptivity (see Table 1). 1.2 Additional Related Work Adaptive Sequencing. The main inefficiency of the adaptive sequencing method of Balkanski et al. [6] (which causes the quadratic query complexity) is an explicit check that a constant fraction of elements will be filtered from the ground set. In this work, we adopt a similar sampling technique to adaptive sequencing, except that we design the algorithm to filter a constant fraction of elements with only constant probability. This method allows us to reduce the quadratic query complexity of adaptive sequencing to linear query complexity while only increasing the adaptive complexity by a small constant factor. In contrast, FAST of Breuer et al. [9] speeds up adaptive sequencing by increasing the adaptive complexity of the algorithm through adaptive binary search procedures, which, in addition to the increasing the adaptivity by logarithmic factors, place restrictions on the k values for which the ratio can hold. This improved adaptive sequencing technique is the core of our THRESHOLDSEQ procedure, which has the additional benefit of being relatively simple to analyze. Algorithms with Linear Query Complexity. Our LINEARSEQ algorithm also uses the improved adaptive sequencing technique, but in addition this algorithm integrates ideas from the Ω(n)-adaptive linear-time streaming algorithm of Kuhnle [24] to achieve a constant-factor algorithm with low adaptivity in expected linear time. Integration of the improved adaptive sequencing with the ideas of Kuhnle [24] is non-trivial, and ultimately this integration enables the theoretical improvement in query complexity over previous algorithms with sublinear adaptivity that obtain a constant ratio with high probability (see Table 1). In Fahrbach et al. [17], a linear-time procedure SUBSAMPLEPREPROCESSING is described; this procedure is to the best of our knowledge the only algorithm in the literature that obtains a constant ratio with sublinear adaptive rounds and linear query complexity and hence is comparable to LINEARSEQ. However, SUBSAMPLEPREPROCESSING uses entirely different ideas from our LINEARSEQ and has much weaker theoretical guarantees: for input 0 < δ < 1, it obtains ratio δ 2 2×106 with probability 1 − δ in O(log(n)/δ) adaptive rounds and O(n) queries in expectation – the small ratio renders SUBSAMPLEPREPROCESSING impractical; also, its ratio holds only with constant probability. By contrast, with ε = 0.1, our LINEARSEQ obtains ratio ≈ 0.196 with probability 1− 1/n in O(log(n)) adaptive rounds and O(n) queries in expectation. Using LS for Preprocessing: Guesses of OPT. Many algorithms for SM, including FAST and all of the algorithms listed in Table 1 except for SM and our algorithm, use a strategy of guessing logarithmically many values of OPT. Our LINEARSEQ algorithm reduces the interval containing OPT from size k to a small constant size in expected linear time. 
Thus, LINEARSEQ could be used for preprocessing prior to running FAST or one of the other algorithms in Table 1, which would improve their query complexity without compromising their adaptive complexity or ratio; this illustrates the general utility of LINEARSEQ. For example, with this change, the theoretical adaptivity of FAST improves, although it remains worse than LS+PGB: the adaptive complexity of FAST becomes O ( 1 ε2 log(n) log ( 1 ε log(k) )) in contrast to the O ( 1 ε2 log (n/ε) ) of LS+PGB. Although SUBSAMPLEPREPROCESSING may be used for the same purpose, its ratio only holds with constant probability which would then limit the probability of success of any following algorithm. Relationship of THRESHOLDSEQ to Existing Methods. The first procedure in the literature to perform the same task is the THRESHOLDSAMPLING procedure of Fahrbach et al. [17]; however, THRESHOLDSAMPLING only ensures that the expected marginal gain of each element added is at least τ and has large constants in its runtime that make it impractical [9]. In contrast, THRESHOLDSEQ ensures that added elements contribute a gain of at least τ with high probability and is highly efficient empirically. A second procedure in the literature to perform the same task is the ADAPTIVESEQUENCING method of Balkanski et al. [6], which similarly to THRESHOLDSEQ uses random permutations of the ground set; however, ADAPTIVE-SEQUENCING focuses on explicitly ensuring a constant fraction of elements will be filtered in the next round, which is expensive to check: the query complexity of ADAPTIVE-SEQUENCING is O(kn). In contrast, our THRESHOLDSEQ algorithm ensures this property with a constant probability, which is sufficient to ensure the adaptivity with the high probability of 1− 1/n in O(n) expected queries. Finally, a third related procedure in the literature is THRESHOLDSAMPLING of Kazemi et al. [22], which also uses random permutations to sample elements. However, this algorithm has the higher adaptivity of O (log(n) log(k)), in contrast to the O (log(n)) of THRESHOLDSEQ. MapReduce Framework. Another line of work studying parallelizable algorithms for SM has focused on the MapReduce framework [13] in a distributed setting, e.g. [7, 8, 16, 28]. These algorithms divide the dataset over a large number of machines and are intended for a setting in which the data does not fit on a single machine. None of these algorithms has sublinear adaptivity and hence all have potentially large numbers of sequential function queries on each machine. In this work, our empirical evaluation is on a single machine with a large number of CPU cores; we do not evaluate our algorithms in a distributed setting. Organization. The constant-factor algorithm LINEARSEQ is described and analyzed in Section 2; the details of the analysis are presented in Appendix C. The variant of LINEARSEQ with lower adaptivity is described in Appendix F. The algorithms THRESHOLDSEQ and PARALLELGREEDYBOOST are discussed at a high level in Section 3, with detailed descriptions of these algorithms and theoretical analysis presented in Appendices D and E. Our empirical evaluation is summarized in Section 4 with more results and discussion in Appendix H. 2 A Parallelizable Algorithm with Linear Query Complexity: LINEARSEQ In this section, we describe the algorithm LINEARSEQ for SM (Alg. 1) that obtains ratio (4+O (ε))−1 in O ( 1 ε3 log(n) ) adaptive rounds and expected O ( n ε3 ) queries. 
If ε ≤ 0.21, the ratio of LINEARSEQ is lower-bounded by (4 + 16ε)−1 ≥ 0.135, which shows that a relatively large constant ratio is obtained even at large values of ε. An initial run of this algorithm is required for our main algorithm LS+PGB. Algorithm 1 The algorithm that obtains ratio (4 +O (ε))−1 in O ( log(n)/ε3 ) adaptive rounds and expected O ( n/ε3 ) queries. 1: procedure LINEARSEQ(f,N , k, ε) 2: Input: evaluation oracle f : 2N → R+, constraint k, error ε 3: a = arg maxu∈N f({u}) 4: Initialize A← {a} , V ← N , ` = d4(1 + 1/(βε)) log(n)e, β = ε/(16 log(8/(1− e−ε/2))) 5: for j ← 1 to ` do 6: Update V ← {x ∈ V : ∆ (x |A) ≥ f(A)/k} and filter out the rest 7: if |V | = 0 then break 8: V = {v1, v2, . . . , v|V |} ←random-permutation(V ) 9: Λ← {b(1 + ε)uc : 1 ≤ b(1 + ε)uc ≤ k, u ∈ N} ∪{bk + uεkc : bk + uεkc ≤ |V |, u ∈ N} ∪ {|V |} 10: B[λi] = false, for λi ∈ Λ 11: for λi ∈ Λ in parallel do 12: Tλi−1 ← {v1, v2, . . . , vλi−1} ; Tλi ← {v1, v2, . . . , vλi} ; T ′λi ← Tλi\Tλi−1 13: if ∆ ( T ′λi |A ∪ Tλi−1 ) /|T ′λi | ≥ (1− ε)f(A ∪ Tλi−1)/k then B[λi]← true 14: λ∗ ← max{λi ∈ Λ : B[λi] = false and ((λi ≤ k and B[1] to B[λi−1] are all true) or (λi > k and ∃m ≥ 1 s.t. | ⋃i−1 u=m T ′ λu | ≥ k and B[λm] to B[λi−1] are all true))} 15: A← A ∪ Tλ∗ 16: if |V | > 0 then return failure 17: return A′ ← last k elements added to A Description of LS. The work of LS is done within iterations of a sequential outer for loop (Line 5); this loop iterates at most O (log(n)) times, and each iteration requires two adaptive rounds; thus, the adaptive complexity of the algorithm is O (log(n)). Each iteration adds more elements to the set A, which is initially empty. Within each iteration, there are four high-level steps: 1) filter elements from V that have gain less than f(A)/k (Line 6); 2) randomly permute V (Line 8); 3) compute in parallel the marginal gain of adding blocks of the sequence of remaining elements in V to A (for loop on Line 11); 4) select a prefix of the sequence V = ( v1, v2, . . . , v|V | ) to add to A (Line 14). The selection of the prefix to add is carefully chosen to approximately satisfy, on average, Condition 1 for elements added; and also to ensure that, with constant probability, a constant fraction of elements of V are filtered on the next iteration. The following theorem states the theoretical results for LINEARSEQ. The remainder of this section proves this theorem, with intuition and discussion of the proof. The omitted proofs for all lemmata are provided in Appendix C. Theorem 1. Let (f, k) be an instance of SM. For any constant 0 < ε < 1/2, the algorithm LINEARSEQ has adaptive complexity O ( log(n)/ε3 ) and outputs A′ ⊆ N with |A′| ≤ k such that the following properties hold: 1) The algorithm succeeds with probability at least 1 − 1/n. 2) There are O ( (1/(εk) + 1)n/ε3 ) oracle queries in expectation. 3) If the algorithm succeeds,[ 4 + 4(2−ε)(1−ε)(1−2ε)ε ] f(A′) ≥ f(O), where O is an optimal solution to the instance (f, k). Overview. The goal of this section is to produce a constant factor, parallelizable algorithm with linear query complexity. As a starting point, consider an algorithm1 that takes one pass through the ground set, adding each element e to candidate set A iff ∆ (e |A) ≥ f(A)/k. (1) Condition 1 ensures two properties: 1) the last k elements in A contain a constant fraction of the value f(A); and 2) f(A) is within a constant fraction of OPT. 
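Before continuing, here is a minimal sketch of the fully sequential one-pass procedure that Condition 1 describes (Alg. 3 in Appendix B gives the authors' exact version); the Python oracle interface is an assumption made for illustration. It uses one oracle query per element but n adaptive rounds, which is exactly what LINEARSEQ is designed to avoid.

```python
def simple_threshold_greedy(f, ground_set, k):
    """One-pass sketch of the sequential baseline (cf. Alg. 3, Appendix B):
    add e whenever Delta(e | A) >= f(A) / k, then return the last k additions.
    One oracle query per element, but fully sequential (n adaptive rounds)."""
    A = []                        # ordered list of accepted elements
    fA = f(set())                 # current value f(A)
    for e in ground_set:
        gain = f(set(A) | {e}) - fA      # Delta(e | A): one query
        if gain >= fA / k:               # Condition 1
            A.append(e)
            fA += gain
    return A[-k:]                        # last k elements added
```

LINEARSEQ approximately simulates this pass in O(log n) adaptive rounds by permuting the surviving elements and testing whole blocks of the permutation in parallel, as described next.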
By these two properties, the last k elements of A are a constant factor approximation to SM with exactly one query of the objective function per element of the ground set. For completeness, we give a pseudocode (Alg. 3) and proof in Appendix B. However, each query depends on all of the previous ones and thus there are n adaptive rounds. Therefore, the challenge is to approximately simulate Alg. 3 in a lowly adaptive (highly parallelizable) manner, which is what LINEARSEQ accomplishes. 1This algorithm is a simplified version of the streaming algorithm of Kuhnle [24]. 2.1 Approximately Satisfying Condition 1 Discarding Elements. In one adaptive round during each iteration j of the outer for loop, all elements with gain to A of less than f (A) /k are discarded from V (Line 6). Since the size of A increases as the algorithm runs, by submodularity, the gain of these elements can only decrease and hence these elements cannot satisfy Condition 1 and can be safely discarded from consideration. The process of filtering thereby ensures the following lemma at termination. Lemma 1. At successful termination of LINEARSEQ, f(O) ≤ 2f(A), where O ⊆ N is an optimal solution of size k. Addition of Elements. Next, we describe the details of how elements are added to the set A. The random permutation of remaining elements on Line 8 constructs a sequence ( v1, v2, . . . , v|V | ) such that each element is uniformly randomly sampled from the remaining elements. By testing the marginal gains along the sequence in parallel, it is possible to determine a good prefix of the sequence (vi) to add to A to ensure the following: 1) Condition 1 is approximately satisfied; and 2) We will discard a constant fraction of V in the next iteration with constant probability. Condition 1 is important for the approximation ratio and discarding a constant fraction of V is important for the adaptivity and query complexity. Below, we discuss how to choose the prefix such that both are achieved. To speed up the algorithm, we do not test the marginal gain at each point in the sequence (vi), but rather test blocks of elements at once as determined by the index set Λ defined in the pseudocode. Prefix Selection. Say a block is bad if this block does not satisfy the condition checked on Line 13 (which is an approximate, average form of Condition 1); otherwise, the block is good. At the end of an iteration, we select the largest block index λ∗, where this block is bad and the previous consecutive blocks which together have at least k elements are all good; or this block is bad and all the previous blocks are good blocks. Then, we add the prefix Tλ∗ = (v1, v2, . . . , vλ∗) into A. Now, the relevance of Condition 1 for the approximation ratio is that it implies f(A) ≥ 2f(A \A′), where A′ are the last k elements added to A. Lemma 2 shows that the conditions required on the marginal gains of blocks added imply an approximate form of this fact is satisfied by LINEARSEQ. Indeed, the proof of Lemma 2 informs the choice Λ of blocks evaluated and the computation of λ∗. Lemma 2. Suppose LINEARSEQ terminates successfully. Then f(A) ≥ 2(1−ε+ε 2) 1+ε f(A\A ′). Proof. If |A| ≤ k, the lemma is immediate, so assume |A| > k. For iteration j, let Tj,λ∗j denote the set added to A during iteration j; and let Tj,λ∗j = ∅ if the algorithm terminates before iteration j. Let Aj denote the value of A after iteration j. Define c = max{c ∈ N : A′ ⊆ (∪`j=cTj,λ∗j )}. Then, |Tc,λ∗c | > 0; and for any j > c, |Tj,λ∗j | < k. 
It holds that (∪ ` j=c+1Tj,λ∗j ) ⊂ A ′ ⊆ (∪`j=cTj,λ∗j ). Figure 1 shows how A is composed of these sets Tj,λ∗j and how each set is composed of blocks. The following claim is proven in Appendix C.3. Claim 1. It holds that ∆ ( Tc,λ∗c |A\A ′) ≥ (1 − ε) max{0, |Tc,λ∗c ∩ A′| − 2εk} · f(A\A′)/k. For j > c, it holds that ∆ ( Tj,λ∗j |Aj−1 ) ≥ 1−ε1+ε |Tj,λ∗j | · f(A\A ′)/k. From Claim 1, f(A)−f(A\A′) = ∆ ( Tc,λ∗c |A\A ′)+ ∑̀ j=c+1 ∆ ( Tj,λ∗j |Aj−1 ) ≥ (1− ε)(1− 2ε) 1 + ε ·f(A\A′), (2) where Inequality 2 is proven in Appendix C.4. In the remainder of this subsection, we will show that a (βε)-fraction of V is discarded at each iteration j of the outer for loop with probability at least 1/2, where β is a constant in terms of ε as defined on Line 4 in the pseudocode. The remainder of the proofs in this section are implicitly conditioned on the behavior of the algorithm prior to iteration j. The next lemma describes the behavior of the number of elements that will be filtered at iteration j + 1. Observe that the set Si defined in the next lemma is the set of elements that would be filtered at the next iteration if prefix Ti is added to A. Lemma 3. Let Si = {x ∈ V : ∆ (x |A ∪ Ti) < f(A ∪ Ti)/k} . It holds that |S0| = 0, |S|V || = |V |, and |Si| ≤ |Si+1|. By Lemma 3, we know the number of elements in Si increases from 0 to |V | with i. Therefore, there exists a t such that t = min{i ∈ N : |Si| ≥ βε|V |}. If λ∗ ≥ t, |Sλ∗ | ≥ βε|V |, and we will successfully filter out more than (βε)-fraction of V at the next iteration. In this case, we say that the iteration j succeeds. Otherwise, if λ∗ < t, the iteration may fail. The remainder of the proof bounds the probability that λ∗ < t, which is an upper bound on the probability that iteration j fails. Let λt = max{λ ∈ Λ : λ < t}, and let λ′t = max({λ′ ∈ Λ : ∑ λ∈Λ,λ′≤λ≤λt |Tλ| ≥ k} ∪ {1}). If λ∗ < t, there must be at least one index λ between λ′t and λt such that the block T ′ λ is bad. The next lemma bounds the probability that any block T ′λ, with λ < λt, is bad. Lemma 4. Let t = min{i ∈ N : |Si| ≥ βε|V |}; λt = max{λ ∈ Λ : λ < t}; (Yi) be a sequence of independent and identically distributed Bernoulli trials, where the success probability is βε. Then for any λ < λt, Pr (B[λ] = false) ≤ Pr (∑|T ′λ| i=1 Yi > ε|T ′λ| ) . Finally, we bound the probability that an iteration j of the outer for loop fails. Let B1 = {λ ∈ Λ : λ ≤ k and λ < λt}, B2 = {λ ∈ Λ : |Λ ∩ [λ, λt]| ≤ d1/εe}. Then Pr (iteration j fails) ≤ Pr (∃λ ∈ B1 ∪B2 with B[λ] = false) ≤ 1/2, (3) where the proof of Inequality 3 is in Appendix C.7. 2.2 Proof of Theorem 1 From Section 2.1, the probability at any iteration of the outer for loop of successful filtering of an (βε)-fraction of V is at least 1/2. We can model the success of the iterations as a sequence of dependent Bernoulli random variables, with success probability that depends on the results of previous trials but is always at least 1/2. Success Probability of LINEARSEQ. If there are at leastm = dlog1−βε(1/n)e successful iterations, the algorithm LINEARSEQ will succeed. The number of successful iterations X` up to and including the `-th iteration is a sum of dependent Bernoulli random variables. With some work (Lemma 6 in Appendix A), the Chernoff bounds can be applied to ensure the algorithm succeeds with probability at least 1− 1/n, as shown in Appendix C.2. Adaptivity and Query Complexity. Oracle queries are made on Lines 6 and 13 of LINEARSEQ. The filtering on Line 6 is in one adaptive round, and the inner for loop is also in one adaptive round. 
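For concreteness, the block-boundary set Λ used on Line 9 of Alg. 1 and referenced throughout this analysis can be generated as follows. This is a transcription of the definition in the pseudocode (geometrically spaced indices up to k, then arithmetically spaced indices with step εk up to |V|, plus |V| itself); whether the exponent/counter u starts at 0 or 1 is a convention detail, so treat the exact boundary indices as illustrative.

```python
import math

def block_indices(eps, k, V_size):
    """Index set Lambda from Line 9 of Alg. 1: geometric spacing up to k,
    arithmetic spacing (step eps*k) from k up to |V|, plus |V| itself."""
    lam = set()
    u = 0
    while True:                              # floor((1+eps)^u), u = 0, 1, ...
        idx = math.floor((1 + eps) ** u)
        if idx > k:
            break
        lam.add(idx)
        u += 1
    u = 0
    while True:                              # floor(k + u*eps*k) up to |V|
        idx = math.floor(k + u * eps * k)
        if idx > V_size:
            break
        lam.add(idx)
        u += 1
    lam.add(V_size)
    return sorted(lam)

print(block_indices(eps=0.5, k=10, V_size=25))   # [1, 2, 3, 5, 7, 10, 15, 20, 25]
```

The geometric spacing keeps |Λ| at O(log(k)/ε + (|V| − k)/(εk)), so the parallel for loop on Line 11 evaluates only this many prefixes rather than all |V| of them.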
Thus, the adaptivity is proportional to the number of iterations of the outer for loop, O (`) = O ( log(n)/ε3 ) . For the query complexity, let Yi be the number of iterations between the (i− 1)-th and i-th successful iterations of the outer for loop. By Lemma 6 in Appendix A, E [Yi] ≤ 2. From here, we show in Appendix C.8 that there are at most O ( n/ε3 ) queries in expectation. Approximation Ratio. Suppose LINEARSEQ terminates successfully. We have the approximation ratio as follows: f(A′) (a) ≥ f(A)− f(A\A′) (b) ≥ f(A)− 1 + ε 2(1− ε+ ε2) f(A) (c) ≥ 1 4 + 4(2−ε)(1−ε)(1−2ε) · ε f(O), where Inequality (a) is from submodularity of f , Inequality (b) is from Lemma 2, and Inequality (c) is from Lemma 1. 3 Improving to Nearly the Optimal Ratio In this section, we describe how to obtain the nearly optimal ratio in nearly optimal query and adaptive complexities (Section 3.2). First, in Section 3.1, we describe THRESHOLDSEQ, a parallelizable procedure to add all elements with gain above a constant threshold to the solution. In Section 3.2, we describe PARALLELGREEDYBOOST and finally the main algorithm LS+PGB. Because of space constraints, the algorithms are described in the main text at a high level only, with detailed descriptions and proofs deferred to Appendices D and E. 3.1 The THRESHOLDSEQ Procedure In this section, we discuss the algorithm THRESHOLDSEQ, which adds all elements with gain above an input threshold τ up to accuracy ε in O(log n) adaptive rounds and O(n) queries in expectation. Pseudocode is given in Alg. 4 in Appendix D. Overview. The goal of this algorithm is, given an input threshold τ and size constraint k, to produce a set of size at most k such that the average gain of elements added is at least τ . As discussed in Section 1, this task is an important subroutine of many algorithms for submodular optimization (including our final algorithm), although by itself it does not produce any approximation ratio for SM. The overall strategy of our parallelizable algorithm THRESHOLDSEQ is analagous to that of LINEARSEQ, although THRESHOLDSEQ is considerably simpler to analyze. The following theorem summarizes the theoretical guarantees of THRESHOLDSEQ and the proofs are in Appendix D. Theorem 2. Suppose THRESHOLDSEQ is run with input (f, k, ε, δ, τ). Then, the algorithm has adaptive complexity O(log(n/δ)/ε) and outputs A ⊆ N with |A| ≤ k such that the following properties hold: 1) The algorithm succeeds with probability at least 1− δ/n. 2) There are O(n/ε) oracle queries in expectation. 3) It holds that f(A)/|A| ≥ (1 − ε)τ/(1 + ε). 4) If |A| < k, then ∆ (x |A) < τ for all x ∈ N . 3.2 The PARALLELGREEDYBOOST Procedure and the Main Algorithm Algorithm 2 The PARALLELGREEDYBOOST procedure. 1: Input: evaluation oracle f : 2N → R+, constraint k, constant α, value Γ such that Γ ≤ f(O) ≤ Γ/α, accuracy parameter ε 2: Initialize τ ← Γ/(αk), δ← 1/(log1−ε(α/3) + 1), A← ∅ 3: while τ ≥ Γ/(3k) do 4: τ ← τ(1− ε) 5: S ← THRESHOLDSEQ(fA,N , k − |A|, δ, ε/3, τ) 6: A← A ∪ S 7: if |A| = k then 8: return A 9: return A In this section, we describe the greedy algorithm PARALLELGREEDYBOOST (PGB, Alg. 2) that uses multiple calls to THRESHOLDSEQ with descending thresholds. Next, our state-of-the-art algorithm LS+PGB is specified. Description of PARALLELGREEDYBOOST. This procedure takes as input the results from running an α-approximation algorithm on the instance (f, k) of SM; thus, PARALLELGREEDYBOOST is not meant to be used as a standalone algorithm. 
Namely, PARALLELGREEDYBOOST takes as input Γ, the solution value of an α-approximation algorithm for SM; this solution value Γ is then boosted to ensure the ratio 1− 1/e− ε on the instance. The values of Γ and α are used to produce an initial threshold value τ for THRESHOLDSEQ. Then, the threshold value is iteratively decreased by a factor of (1− ε) and the call to THRESHOLDSEQ is iteratively repeated to build up a solution, until a minimum value for the threshold of Γ/(3k) is reached. Therefore, THRESHOLDSEQ is called at most O (log(1/α)/ε) times. We remark that α is not required to be a constant approximation ratio. Theorem 3. Let (f, k) be an instance of SM. Suppose an α- approximation algorithm for SM is used to obtain Γ, where the approximation ratio α holds with probability 1− pα. For any constant ε > 0, the algorithm PARALLELGREEDYBOOST has adaptive complexity O ( logα−1 ε2 log ( n log(α−1) ε )) and outputs A ∈ N with |A| ≤ k such that the following properties hold: 1) The algorithm succeeds with probability at least 1− 1/n− pα. 2) If the algorithm succeeds, there are O ( n log ( α−1 ) /ε2 ) oracle queries in expectation. 3) If the algorithm succeeds, f(A) ≥ (1− 1/e− ε)f(O), where O is an optimal solution to the instance (f, k). Proof. Success Probability. For the while loop in Line 3-8, there are no more than dlog1−ε(α/3)e iterations. If THRESHOLDSEQ completes successfully at every iteration, Algorithm 2 also succeeds. The probability that this occurs is lower bounded in Appendix E.1.1. For the remainder of the proof of Theorem 3, we assume that every call to THRESHOLDSEQ succeeds. Adaptive and Query Complexity. There are at most dlog1−ε(α/3)e iterations of the while loop. Since log(x) ≤ x − 1, dlog1−ε(α/3)e = d log(α/3) log(1−ε)e, and ε < 1 − 1/e, it holds that dlog1−ε(α/3)e ≤ d log(3/α) ε e. And for each iteration, queries to the oracle happen only on Line 5, the call to THRESHOLDSEQ. Since the adaptive and query complexity of THRESHOLDSEQ is O (log(n/δ)/ε) and O (n/ε), the adaptive and query complexities for Algorithm 2 are O ( logα−1 ε2 log ( n log(α−1) ε )) , O ( logα−1 ε2 n ) , respectively. Approximation Ratio. Let Aj be the set A we get after Line 6, and let Sj be the set returned by THRESHOLDSEQ in iteration j of the while loop. Let ` be the number of iterations of the while loop. First, in the case that |A| < k at termination, THRESHOLDSEQ returns 0 ≤ |S`| < k − |A`−1| at the last iteration. From Theorem 2, for any o ∈ O, ∆ (o |A) < τ < Γ/(3k). By submodularity and monotonicity, f(O)− f(A) ≤ f(O ∪A)− f(A) ≤ ∑ o∈O\A ∆ (o |A) ≤ ∑ o∈O\A Γ/(3k) ≤ f(O)/3, and the ratio holds. Second, consider the case that |A| = k. Suppose in iteration j + 1, THRESHOLDSEQ returns a nonempty set Sj+1. Then, in the previous iteration j, THRESHOLDSEQ returns a set Sj that 0 ≤ |Sj | < k − |Aj−1|. From Theorem 2, f(O)− f(Aj+1) ≤ ( 1− (1− ε/3)(1− ε) (1 + ε/3)k |Aj+1\Aj | ) (f(O)− f(Aj)). (4) The above inequality also holds when Aj+1 = Aj . Therefore, it holds that f(O)− f(A) ≤ e− (1−ε/3)(1−ε) 1+ε/3 f(O) ≤ (1/e+ ε)f(O). (5) The detailed proof of Inequality 4 and 5 can be found in Appendix E.1.2. Main Algorithm: LS+PGB. To obtain the main algorithm of this paper (and its nearly optimal theoretical guarantees), we use PARALLELGREEDYBOOST with the solution value Γ and ratio α given by LINEARSEQ. Because this choice requires an initial run of LINEARSEQ, we denote this algorithm by LS+PGB. 
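Since Alg. 2 is short, a direct Python transcription may help clarify its control flow. THRESHOLDSEQ is treated here as a black box (any procedure satisfying Theorem 2 works), and the restricted oracle f_A(S) = f(A ∪ S) − f(A) is built with a small closure; this is an illustrative sketch under those assumptions, not the released MPI implementation.

```python
import math

def parallel_greedy_boost(f, ground_set, k, alpha, Gamma, eps, threshold_seq):
    """Sketch of Alg. 2 (PARALLELGREEDYBOOST).

    f: value oracle on sets; Gamma satisfies Gamma <= f(OPT) <= Gamma/alpha
    (e.g., as returned by LINEARSEQ); threshold_seq(f_A, N, budget, delta,
    eps, tau) is assumed to return a set whose added elements each have gain
    of roughly at least tau with respect to f_A.
    """
    tau = Gamma / (alpha * k)
    # delta = 1 / (log_{1-eps}(alpha/3) + 1), as on Line 2 of Alg. 2
    delta = 1.0 / (math.log(alpha / 3) / math.log(1 - eps) + 1)
    A = set()
    while tau >= Gamma / (3 * k):
        tau *= (1 - eps)
        base = f(A)
        f_A = lambda S, frozen=frozenset(A), b=base: f(set(frozen) | set(S)) - b
        S = threshold_seq(f_A, ground_set, k - len(A), delta, eps / 3, tau)
        A |= set(S)
        if len(A) == k:
            return A
    return A
```

The loop lowers the threshold by a (1 − ε) factor each round, so THRESHOLDSEQ is invoked at most O(log(1/α)/ε) times before the stopping threshold Γ/(3k) is reached.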
Thus, LS+PGB integrates LINEARSEQ and PARALLELGREEDYBOOST to get nearly the optimal 1− 1/e ratio with query complexity of O (n) and adaptivity of O (log(n)). 4 Empirical Evaluation In this section, we demonstrate that the empirical performance of LS+PGB outperforms that of FAST for the metrics of total time, total queries, adaptive rounds, and objective value across six applications of SM: maximum cover on random graphs (MaxCover), twitter feed summarization (TweetSumm), image summarization (ImageSumm), influence maximization (Influence), revenue maximization (RevMax), and Traffic Speeding Sensor Placement (Traffic). See Appendix H.2 for the definition of the objectives. The sizes n of the ground sets range from n = 1885 to 100000. Implementation and Environment. We evaluate the same implementation of FAST used in Breuer et al. [9]. Our implementation of LS+PGB is parallelized using the Message Passing Interface (MPI) within the same Python codebase as FAST (see the Supplementary Material for source code). Practical optimizations to LINEARSEQ are made, which do not compromise the theoretical guarantees, which are discussed in Appendix G. The hardware of the system consists of 40 Intel(R) Xeon(R) Gold 5218R CPU @ 2.10GHz cores (with 80 threads available), of which up to 75 threads are made available to the algorithms for the experiments. On each instance, the algorithms are repeated independently for five repetitions, and the mean and standard deviation of the objective value, total queries, adaptive rounds and parallel (wall clock) runtime to the submodular function is plotted. Parameters. The parameters ε, δ of FAST are set to enforce the nominal ratio of 1−1/e−0.1 ≈ 0.53 with probability 0.95; these are the same parameter settings for FAST as in the Breuer et al. [9] evaluation. The ε parameter of LS+PGB is set to enforce the same ratio with probability 1−2/n. With these parameters, FAST ensures its ratio only if k ≥ θ(ε, δ, k) = 2 log(2δ−1 log( 1ε log(k)))/ε 2(1− 5ε) ≥ 7103. Since k < 7103 on many of our instances, FAST is evaluated in these instances as a theoretically motivated heuristic. In contrast, the ratio of LS+PGB holds on all instances evaluated. We use exponentially increasing k values from n/1000 to n/10 for each application to explore the behavior of each algorithm across a broad range of instance sizes. Overview of Results. Figure 2 illustrates the comparison with FAST across the ImageSumm and RevenueMax application; results on other applications are shown in Appendix H. Runtime: LS+PGB is faster than FAST by more than 1% on 80% of instances evaluated; and is faster by an order of magnitude on 14% of instances. Objective value: LS+PGB achieves higher objective by more than 1% on 50% of instances, whereas FAST achieves higher objective by more than 1% on 8% of instances. Adaptive rounds: LS+PGB achieves more than 1% fewer adaptive rounds on 75% of instances, while FAST achieves more than 1% fewer adaptive rounds on 22% of instances. Total queries: LS+PGB uses more than 1% fewer queries on 84% of scenarios with FAST using more than 1% fewer queries on 9% of scenarios. In summary, LS+PGB frequently gives substantial improvement in objective value, queries, adaptive rounds, and parallel runtime. Comparison of the arithmetic means of the metrics over all instances is given in Table 2. Finally, FAST and LS+PGB show very similar linear speedup with the number of processors employed: as shown in Fig. 7. 
5 Concluding Remarks In this work, we have introduced the algorithm LS+PGB, which is highly parallelizable and achieves state-of-the-art empirical performance among algorithms for SM; moreover, LS+PGB is nearly optimal theoretically in terms of query complexity, adaptivity, and approximation ratio. An integral component of LS+PGB is our preprocessing algorithm LINEARSEQ, which reduces the interval containing OPT to a small constant size in expected linear time and with low adaptivity; this procedure may be of independent interest. Another component of LS+PGB is the THRESHOLDSEQ procedure, which adds all elements with gain above a threshold in a parallelizable manner and improves upon existing algorithms in the literature for the same task. Acknowledgements The work of Yixin Chen, Tonmoy Dey, and Alan Kuhnle was partially supported by Florida State University. The authors have received no third-party funding in direct support of this work. The authors have no additional revenues from other sources related to this work.
1. What is the main contribution of the paper regarding submodular maximization? 2. What are the strengths of the proposed algorithm compared to previous works, particularly FAST? 3. How does the reviewer assess the practical performance of the proposed algorithm, and what factors may affect its magnitude? 4. What are some potential improvements that the authors could consider for their experimental setup and codebase?
Summary Of The Paper Review
Summary Of The Paper In recent years there has been a theoretical breakthrough in low-adaptivity O(log(n)) parallel algorithms for submodular maximization, but these algorithms are very slow in practice. Very recently, Breuer et al. proposed an algorithm, FAST, that trades off a small factor in asymptotic adaptive complexity to obtain the practically fastest algorithm. Building on this line of research, the authors propose a new algorithm with slightly better practical performance absent the tradeoff in asymptotic adaptive complexity. The authors demonstrate the practical performance advantage by benchmarking their approach against the FAST algorithm on numerous experiments. Review Overall, I agree with the authors that it is desirable to design a practically fast algorithm with O(log(n)) adaptivity. Some of the ideas used to obtain this, while based significantly on recent work, are also nontrivial. For example, the LinearSeq subroutine and also the ThresholdSeq subroutine to identify high-value elements are of general interest and might be used to accelerate other algorithms. In general, the ideas here refine and expand upon the sequencing techniques introduced in Balkanski et al., 2020 and Breuer et al., 2020. The paper and proofs are clearly written. However, the magnitude of the contribution of the paper hinges on the claim that the proposed algorithm is empirically faster than the FAST algorithm. The reported runtimes and objective values suggest that the algorithm is slightly faster for most (but not all) experiments and parameter combinations. This may be thought of as incremental progress, and to contextualize the magnitude of the obtained speedups, I would note that it is possible to achieve similar or greater speedups for the FAST algorithm by considering various factors such as e.g. better parallel load balancing, etc. (though these might also apply to the authors' algorithm). I will nonetheless acknowledge that if the authors' claims are correct, then their algorithm is currently the fastest for generic submodular maximization (even if by a small margin), which is still useful/interesting. Also, I note that the reported query counts do appear to significantly improve on FAST's queries for most experiments (Appendix Fig. H.3). This means that the performance advantage may actually be better on different (larger?) hardware. Because the magnitude of the contribution hinges on the experimental results, implementations and experimental setup are critical. In this respect, I appreciate that the majority of the applications replicate Breuer et al. (the FAST algorithm), and that the authors have also added the Image Summary and Information Monitoring objectives to add some diversity. This gives a nice balance of six established benchmarks (from the FAST paper) and two additional ones. The plots are clear; the values of k are chosen and plotted in a theoretically reasonable way (as values of k/n), and the authors show averages over multiple runs + error bars. I also dug around the codebase (which is nice to see in the submission). It appears to extend the FAST codebase, and it uses MPI functions, helper functions, etc. from the former. I also develop in MPI, and I appreciate that this is nontrivial. It is also advantageous here because it gives confidence that the authors' comparison to FAST uses a fair/consistent comparison of codes, subroutines, timings, etc.
It is also nice to see papers adopting and expanding a fast parallel codebase & benchmarks that can be used by other researchers in the future. I would note that because the hardware appears to be a server machine instead of AWS/GoogleCloud/Azure, the reported timings are not strictly replicable (absent buying an identical server machine). As a heuristic check, I compared the benchmark runtimes the authors obtain using FAST against those reported in the original FAST paper, and they do appear to be reasonably consistent. (Note: Please consider adding a \ref to Appendix H in the text of the paper where you discuss threads/processors.) One oddity in the experiments is that the Parameters paragraph (line 328) and also Appendix H state "these are the same parameter settings for FAST as in the Breuer et al.", but Breuer et al. (p.6) reports using probability 0.95, not 0.975 as reported here. 0.975 is a fine value to use for benchmarking, but consider revising these paragraphs in the paper and the Appendix (as they also use the probability parameter to compute the values of k under which the Breuer paper's guarantees are heuristic only). I will consider updating my review based on the authors' response. UPDATE after reading the authors' response Thank you for the response. I had a closer look at the code and experiments. There are two things that I think would improve the paper and code. First, I would just re-emphasize that you ought to consider benchmarking on AWS or a cloud computing platform so that your experimental results can be shown to not be dependent on your specific lab hardware. Second, I note the following line (line 148) inside your parallel_adaptiveAdd() method, which shoulders a lot of the computation for your algorithm: gain = objective.value( tmpS ) - valTmpS; This is technically fine, but if the goal is to do a fair comparison with the FAST code, consider instead: gain = objective.marginalval( new_tmpS, tmpS ); # where new_tmpS = list( set(tmpS) | set( Ti ) ) The reason is that the FAST codes allow you to specify a marginalval() method, which might use memoization or other speedups. Your code, in contrast, is memoizing the value of the original tmpS (once per processor), but computing the full function value of the new tmpS and taking the difference of the two. The point is that depending on the specific objective function, and also on how objective.marginalval() is written (e.g. whether it uses memoization or speedups on a particular objective), either the FAST code or your code will have a potentially significant advantage. Fixing this will allow the two to be on the same footing. Again, I do not believe this should lower your review score, as it could either help or hurt your algorithm's performance vis-a-vis FAST depending on the specific objective function; I merely suggest it as a means to improve the fairness of the comparison.
NIPS
Title Reinforcement Learning for Control with Multiple Frequencies Abstract Many real-world sequential decision problems involve multiple action variables whose control frequencies are different, such that actions take their effects at different periods. While these problems can be formulated with the notion of multiple action persistences in factored-action MDP (FA-MDP), it is non-trivial to solve them efficiently since an action-persistent policy constructed from a stationary policy can be arbitrarily suboptimal, rendering solution methods for the standard FA-MDPs hardly applicable. In this paper, we formalize the problem of multiple control frequencies in RL and provide its efficient solution method. Our proposed method, Action-Persistent Policy Iteration (AP-PI), provides a theoretical guarantee on the convergence to an optimal solution while incurring only a factor of |A| increase in time complexity during policy improvement step, compared to the standard policy iteration for FA-MDPs. Extending this result, we present ActionPersistent Actor-Critic (AP-AC), a scalable RL algorithm for high-dimensional control tasks. In the experiments, we demonstrate that AP-AC significantly outperforms the baselines on several continuous control tasks and a traffic control simulation, which highlights the effectiveness of our method that directly optimizes the periodic non-stationary policy for tasks with multiple control frequencies. 1 Introduction In recent years, reinforcement learning (RL) [23] has shown great promise in various domains, such as complex games [14, 21, 22] and high-dimensional continuous control [11, 19]. These problems have been mostly formulated as discrete-time Markov decision processes (MDPs) [17], assuming all decision variables are simultaneously determined at every time step. However, many real-world sequential decision-making problems involve multiple decision variables whose control frequencies are different by requirement. For example, when managing a financial portfolio of various assets, the frequency of rebalancing may need to be different for each asset, e.g. weekly for stock and monthly for real estate. Similarly, robotic systems typically consist of a number of controllers operating at different frequencies due to their system specification. Different control frequencies can be formulated with the notion of different action persistence in the discrete-time factored-action MDP (FA-MDP), where the base time interval is determined by the reciprocal of the least common multiple of the control frequencies. However, while algorithms for single action persistence has been proposed in order to improve the empirical performance of online [9] or offline [13] RL agents, to the best of our knowledge, addressing multiple action persistences in RL has been mostly unexplored due to its difficulty involved in the non-stationarity nature of the optimal policy. In this paper, we formalize the problem of multiple action persistences in FA-MDPs. We first show that any persistent policy induced by a stationary policy can be arbitrarily bad via a simple example. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Then, we introduce efficient methods for FA-MDPs that directly optimize a periodic non-stationary policy while circumventing the exponential growth of time complexity with respect to the periodicity of action persistence. 
We first present a tabular planning algorithm, Action-Persistent Policy Iteration (AP-PI), which provides the theoretical guarantee on the convergence to an optimal solution while incurring only a factor of |A| time complexity increase in the policy improvement step compared to the policy iteration for standard FA-MDPs. We then present Action-Persistent Actor-Critic (AP-AC), a scalable learning algorithm for high-dimensional tasks via practical approximations to AP-PI, with a neural network architecture designed to facilitate the direct optimization of a periodic non-stationary policy. In the experiments, we demonstrate that AP-AC significantly outperforms a number of baselines based on SAC, from the results on modified Mujoco continuous control benchmarks [3, 26] and the SUMO traffic control simulation [8], which highlights the effectiveness of our method that directly optimizes the periodic non-stationary policy for tasks with multiple control frequencies. 2 Preliminaries We assume the environment modeled as discrete-time factored-action MDP (FA-MDP) M = 〈S,A, P,R, γ〉 where S is the set of states s, A is the set of vector-represented actions a = (a1, . . . , am), P (s′|s, a) = Pr(st+1 = s′|st = s, at = a) is the transition probability, R(s, a) ∈ R is the immediate reward for taking action a in state s, and γ ∈ [0, 1) is the discount factor. A policy π = (πt)t≥0 ∈ Π is a sequence of functions where πt : H → ∆(A) is a mapping from history ht = (s0, a0, . . . , st−1, at−1, st) to a probability distribution over A, πt(at|ht) = Pr(at|ht). We call πt Markovian if πt depends only on the last state st and call it stationary if πt does not depend on t. The policy πt is called deterministic if it maps from history to some action with probability 1 and can be denoted as πt : H → A. For simplicity, we will only consider the fully factorized policy πt(a1t , . . . , a m t |ht) = ∏m k=1 π k t (a k t |ht), which comprises the set Π of all fully factorized policies. The action-value function Qπt of policy π is defined as Qπt (s, a) = Eπ [ ∑∞ τ=t γ τ−tR(sτ , aτ )|st = s, at = a]. We consider the sequential decision problem where each action variable ak has its own control frequency. The notion of control frequency can be formulated in terms of action persistence with FA-MDPM by considering how frequently ak should be decided inM. Specifically, we let ck be the action persistence of k-th action variable ak, i.e. ak is decided every ck time step inM. The overall action persistence of the decision problem is then described as a vector c = (c1, . . . , cm) ∈ Nm. Finally, we define the c-persistent policy π as follows: Definition 1. (c-persistent policy) Let π = (πt)t≥0 ∈ Π be a policy. Given the action persistence vector c ∈ Nm, the c-persistent policy π̄c = (π̄c,t)t≥0 induced by π is a non-stationary policy where ∀t, π̄c,t(a|ht) = m∏ k=1 π̄kc,t(a k|ht) s.t. π̄kc,t(ak|ht) = { πkt (a k|ht) if t mod ck = 0 δak t−(t mod ck) (ak) otherwise (1) where δx(y) = 1 if x = y and 0 otherwise. Additionally, we define the set of c-persistent policies Πc = {(π̄c,t)t≥0 : π ∈ Π}. Our goal is to find the c-persistent policy π∗c that maximizes expected cumulative rewards: π̄∗c = arg max π̄∈Πc Eπ̄ [ ∞∑ t=0 γtR(st, at) ] (2) Remark. When c = (1, . . . , 1), we have Πc = Π. Thus, Eq. (2) is reduced to the standard objective function of FA-MDP, which is known to always have a deterministic and Markovian stationary policy as an optimal solution [17]. 
Also, the c-persistent policy of Definition 1 is different from the k-persistent policy [13] in that our definition considers multiple action persistences and is not limited by Markovian policy π while [13] considers single action persistence and a non-stationary policy induced only by a Markovian policy. The agent with c-persistent policy π̄c induced by π interacts with the environment as follows: At time step t = 0, all action variables are selected according to π̄c,0 = π0, i.e. (a10, . . . , a m 0 ) ∼∏m k=1 π k 0 (·|h0). Then, each action variable ak is kept persistent for the subsequent ck − 1 time steps. At time step t = ck, the action variable ak is set by π̄kc,t(·|ht) = πkt (·|ht), and continue into the next time step. In other words, the agent decides the value for ak only at the time steps t that are multiples of ck, i.e. t mod ck = 0. Figure 1b illustrates an example of c-persistent policy π̄c. For the remainder of this paper, we will omit the subscript c in π̄c for notational brevity if there is no confusion. All the proofs of theorems are available in the Appendix. 3 Action-Persistence in FA-MDPs Finding the optimal policy via Eq. (2) is non-trivial since any c-persistent policy naively constructed from a stationary policy can be suboptimal, unlike in standard FA-MDPs where there always exists a stationary optimal policy. To see this, consider the FA-MDP depicted in Figure 1, where there are two action variables with action persistences 2 and 3, respectively. In this example task, in order to obtain a positive reward, the agent should take an action a = (1, 1) at state s0 to go to the rightmost state. However, when we use this to form a stationary deterministic policy with π(s0) = (1, 1) and construct a c-persistent policy in a naive manner, we see that the policy can never reach s3 due to the inherent action persistence c = (2, 3): The action (1, 1) taken at s0 when t = 0 will persist at the next time step t = 1 in s1, making the agent go back to s0. Then, the agent will select an action (1, 1) again by π(s0), and this will be repeated forever. As a consequence, the agent visits only s0 and s1, and thus cannot reach the rightmost state. In contrast, the non-stationary deterministic policy π̄ described in Figure 1b reaches s3. Careful readers may notice that a c-persistent policy "projected" from some stationary but stochastic policy can eventually reach s3, but its expected return is clearly less than the non-stationary deterministic policy in Figure 1b, thus suboptimal. Therefore, obtaining a c-persistent policy by ignoring the action persistence requirement and solving the corresponding standard FA-MDP would not work. However, one can observe that the action persistence scheme is repeated periodically at every L , LCM(c1, . . . , cm) time steps. From this observation, a naive approach to solving Eq. (2) would be redefining the action space to have L-step actions as elements. After redefining the transition and reward function corresponding to these actions, standard solution methods for FA-MDP such as dynamic programming can be applied. Still, this approach not only has exponential time complexity with respect to L due to the increase in the size of action space, i.e. |A|L, but also can be suboptimal unless the underlying transition dynamics is nearly deterministic due to the open-loop decision-making nature of L-step actions [27]. 
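A small rollout loop may make the interaction protocol above concrete: each action variable a^k is re-decided only at time steps t with t mod c_k = 0 and held fixed otherwise, which is exactly the projection performed by the operator Γ introduced in the next subsection. The environment and policy interfaces here (a gym-style reset/step and a generic callable policy) are stand-in assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def run_c_persistent(env, policy, c, horizon):
    """Roll out a c-persistent policy: action variable k is re-sampled by the
    policy only when t % c[k] == 0, and otherwise persists at its last value.

    env: object with reset() -> state and step(action) -> (state, reward, done)
    policy: callable policy(t, state, prev_action) -> proposed action vector
    c: per-variable persistences (c_1, ..., c_m)
    """
    c = np.asarray(c)
    s = env.reset()
    a = None
    total_return = 0.0
    for t in range(horizon):
        proposed = np.asarray(policy(t, s, a))
        if a is None:                       # t = 0: all variables are decided
            a = proposed.copy()
        else:                               # keep persistent variables, update the rest
            update_mask = (t % c == 0)
            a = np.where(update_mask, proposed, a)
        s, r, done = env.step(a)
        total_return += r
        if done:
            break
    return total_return
```

With c = (2, 3) as in the example above, the first action variable is refreshed at t = 0, 2, 4, ... and the second at t = 0, 3, 6, ..., so the persistence pattern repeats with period L = LCM(2, 3) = 6.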
A more principled approach is to consider an L-Markovian policy that memorizes which actions were taken during the last $L$ steps, but its straightforward conversion to a standard MDP via state augmentation still suffers from time complexity that is exponential in $L$.

3.1 Policy evaluation for c-persistent policies: c-persistent Bellman operators

As discussed in the previous section, augmenting the state or action space to store L-step information results in complexity exponential in $L$. Instead, we take a more direct approach that optimizes the c-persistent policy via a composition of Bellman operators within the space $\Pi_L$ of L-periodic, non-stationary, and deterministic policies:

$$\Pi_L = \{\pi \in \Pi : \forall t,\; \pi_t = \pi_{t+L} \text{ and } \pi_t : A \times S \to A\} \tag{3}$$

We will later prove that there always exists an optimal policy for Eq. (2) that is induced by some $\pi \in \Pi_L$. A policy in $\Pi_L$ will be denoted as $\pi = (\pi_0, \dots, \pi_{L-1})$ in the remainder of the paper. As the first step in the derivation of our algorithm, we define the function $\Gamma^c_{t,a}(a')$:

$$\Gamma^c_{t,a}(a') = (\bar{a}^1, \dots, \bar{a}^m) \quad \text{where} \quad \bar{a}^k \triangleq \begin{cases} a^k & \text{if } t \bmod c^k \neq 0 \\ a'^k & \text{if } t \bmod c^k = 0 \end{cases} \tag{4}$$

which projects the action $a'$ onto a feasible action at time step $t$, assuming the action taken at $t-1$ was $a$. This is done by extracting the dimensions of the "effectable" action variables at time step $t$ from $a'$ and the dimensions of the "uneffectable" variables at time step $t$ from $a$.

For an L-periodic non-stationary deterministic policy $\pi = (\pi_0, \dots, \pi_{L-1}) \in \Pi_L$, we first define the one-step c-persistent Bellman operator $\bar{T}^\pi_t$ induced by $\pi$. Specifically, for $t \in \{0, \dots, L-1\}$,

$$(\bar{T}^\pi_t Q)(s,a) \triangleq R(s,a) + \gamma \, \mathbb{E}_{\substack{s' \sim P(\cdot|s,a) \\ a' = \pi_{t+1}(a, s')}}\big[Q(s', \Gamma^c_{t+1,a}(a'))\big] \tag{5}$$

Then, we define the L-step c-persistent Bellman operator $\bar{H}^\pi_t$ as the composition of $L$ one-step c-persistent Bellman operators:

$$(\bar{H}^\pi_0 Q)(s,a) \triangleq (\bar{T}^\pi_0 \bar{T}^\pi_1 \cdots \bar{T}^\pi_{L-2} \bar{T}^\pi_{L-1} Q)(s,a) \tag{6}$$
$$(\bar{H}^\pi_1 Q)(s,a) \triangleq (\bar{T}^\pi_1 \bar{T}^\pi_2 \cdots \bar{T}^\pi_{L-1} \bar{T}^\pi_0 Q)(s,a)$$
$$\vdots$$
$$(\bar{H}^\pi_{L-1} Q)(s,a) \triangleq (\bar{T}^\pi_{L-1} \bar{T}^\pi_0 \cdots \bar{T}^\pi_{L-3} \bar{T}^\pi_{L-2} Q)(s,a)$$

The following theorem and corollary state that each L-step c-persistent Bellman operator $\bar{H}^\pi_t$ is a contraction mapping, and that the fixed points $Q^{\bar{\pi}}_0, \dots, Q^{\bar{\pi}}_{L-1}$ are related to one another through the one-step c-persistent Bellman operators $\bar{T}^\pi_0, \dots, \bar{T}^\pi_{L-1}$.

Theorem 1. For all $t \in \{0, \dots, L-1\}$, the L-step c-persistent Bellman operator $\bar{H}^\pi_t$ is a $\gamma^L$-contraction with respect to the infinity norm; thus $\bar{H}^\pi_t Q^{\bar{\pi}}_t = Q^{\bar{\pi}}_t$ has a unique fixed-point solution. In other words, for any $Q^0_t : S \times A \to \mathbb{R}$, define $Q^{n+1}_t = \bar{H}^\pi_t Q^n_t$. Then the sequence $Q^n_t$ converges to the $t$-th c-persistent value function of $\bar{\pi}$ as $n \to \infty$.

Corollary 1. $Q^{\bar{\pi}}_t = \bar{T}^\pi_t Q^{\bar{\pi}}_{(t+1) \bmod L}$ holds for all $t \in \{0, \dots, L-1\}$; thus the c-persistent value functions can be obtained by repeatedly applying the one-step c-persistent backup in an L-cyclic manner.

Note that the c-persistent value function of the policy $\pi$ obtained by $\bar{H}^\pi_t$ has the following form:

$$Q^{\bar{\pi}}_t(s,a) = \mathbb{E}_{\substack{\forall \tau,\; s_{\tau+1} \sim P(\cdot|s_\tau, \bar{a}_\tau) \\ \bar{a}_{\tau+1} = \Gamma^c_{\tau+1, \bar{a}_\tau}(\pi_{\tau+1}(\bar{a}_\tau, s_{\tau+1}))}}\Big[\sum_{\tau=t}^{\infty} \gamma^{\tau-t} R(s_\tau, \bar{a}_\tau) \;\Big|\; s_t = s,\; \bar{a}_t = a\Big] \tag{7}$$

which is obtained by unfolding the L-step c-persistent Bellman recursion of $\bar{H}^\pi_t$. One can easily show by mathematical induction that every action taken at every time step $t$, being projected by $\Gamma^c_{t,\bar{a}}(\cdot)$, abides by c-persistence. As a result, $Q^{\bar{\pi}}_t(s,a)$ has the intended interpretable meaning: the expected sum of rewards obtained when following the c-persistent policy $\bar{\pi}$ induced by $\pi$, except for the initial action $a$, starting from state $s$ at time step $t$.

Remark. The time complexity of applying the one-step c-persistent Bellman backup $\bar{T}^\pi_t$ of Eq. (5) for a deterministic policy $\pi$ is $O(|S|^2 |A|)$ for each $t$, which is identical to the time complexity of the standard non-persistent Bellman backup.

Now we have a complete policy evaluation operator for the c-persistent policy induced by an L-periodic non-stationary deterministic policy $\pi$.

3.2 Policy improvement for c-persistent policies

The remaining step for full policy iteration is policy improvement using $Q^{\bar{\pi}}_t(s,a)$.

Theorem 2. Given an L-periodic, non-stationary, and deterministic policy $\pi = (\pi_0, \dots, \pi_{L-1}) \in \Pi_L$, let $Q^{\bar{\pi}}_t$ be the c-persistent value of $\bar{\pi}$ given in Eq. (7). If we update the new policy $\pi^{\mathrm{new}} = (\pi^{\mathrm{new}}_0, \dots, \pi^{\mathrm{new}}_{L-1}) \in \Pi_L$ by

$$\forall t, a, s',\quad \pi^{\mathrm{new}}_t(a, s') = \arg\max_{a'} Q^{\bar{\pi}}_t\big(s', \Gamma^c_{t,a}(a')\big) \tag{8}$$

then $Q^{\bar{\pi}^{\mathrm{new}}}_t(s,a) \ge Q^{\bar{\pi}}_t(s,a)$ holds for all $t, s, a$.

Remark. The time complexity of the policy improvement step defined by Eq. (8) is $O(|S||A|^2)$ for each $t$, which is a factor of $|A|$ worse than standard non-persistent policy improvement, whose complexity is $O(|S||A|)$. Note also that the new policy $\pi^{\mathrm{new}}$ is not necessarily c-persistent, i.e. $\pi^{\mathrm{new}} \notin \Pi_c$ is possible, but the performance of the c-persistent policy it induces always improves.

Finally, Theorems 1 and 2 lead us to a full algorithm, Action-Persistent Policy Iteration (AP-PI). AP-PI alternates between c-persistent policy evaluation via Eq. (6) and c-persistent policy improvement via Eq. (8), and it is guaranteed to converge to the optimal c-persistent policy $\bar{\pi}^* \in \Pi_c$. The pseudo-code of AP-PI can be found in Appendix D.

Theorem 3. Starting from any $\bar{\pi}^0 \in \Pi_c$ induced by an L-periodic non-stationary deterministic policy $\pi^0 \in \Pi_L$, the sequence of value functions $Q^{\bar{\pi}^n}$ and improved policies $\bar{\pi}^{n+1}$ induced by $\pi^{n+1}$ converges to the optimal value function and the optimal c-persistent policy $\bar{\pi}^*$, i.e. $Q^{\bar{\pi}^*}_t(s,a) = \lim_{n \to \infty} Q^{\bar{\pi}^n}_{t \bmod L}(s,a) \ge Q^{\bar{\pi}}_t(s,a)$ for any $\bar{\pi} \in \Pi_c$, $t \in \mathbb{N}_0$, $s \in S$, and $a \in A$.

Corollary 2. There always exists a c-persistent optimal policy $\bar{\pi}^*_c$ that is induced by an L-periodic, non-stationary, and deterministic policy $\pi \in \Pi_L$.

The policy $\bar{\pi}^* = (\bar{\pi}^*_0, \dots, \bar{\pi}^*_{L-1})$ obtained by AP-PI is executed as follows. First, $\bar{a}$ is initialized randomly. Then, at every step $t$, $a_t = \Gamma^c_{t,\bar{a}}\big(\bar{\pi}^*_{t \bmod L}(\bar{a}, s_t)\big)$ is executed and $\bar{a}$ is updated by $\bar{a} \leftarrow a_t$. To the best of our knowledge, AP-PI is the first algorithm that addresses multiple action persistences, extending the single-action-persistence model recently analyzed in [13]. AP-PI can readily be made scalable using an actor-critic architecture to cope with large action spaces such as continuous actions, which we describe in the next section. This is a non-trivial extension of Persistent Fitted Q-iteration (PFQI) [13], which only applies to finite action spaces with a single action persistence.

4 Action-Persistent Actor-Critic

In this section, we present Action-Persistent Actor-Critic (AP-AC), an off-policy RL algorithm that can be applied to high-dimensional tasks via practical approximations to AP-PI. AP-AC extends Soft Actor-Critic (SAC) [4] to iteratively optimize parametric models of an L-periodic non-stationary policy (the actor) and its c-persistent action-value function (the critic). We assume that the action persistence vector 
$c = (c^1, \dots, c^m)$ is given as part of the environment specification.

As discussed in Section 3, the optimal c-persistent policy $\bar{\pi}$ can be induced by an L-periodic non-stationary policy $\pi = (\pi_0, \dots, \pi_{L-1})$, where $\pi_t : A \times S \to \Delta(A)$ for all $t$. The corresponding optimal value function is likewise represented by the L-periodic action-value function $Q^{\bar{\pi}} = (Q^{\bar{\pi}}_0, \dots, Q^{\bar{\pi}}_{L-1})$ with $Q^{\bar{\pi}}_t : S \times A \to \mathbb{R}$ for all $t$. We exploit this structure of the optimal solution in the neural network architecture. Specifically, the parameterized actor network $\pi_\phi(\bar{a}, s)$ and the critic network $Q_\theta(s, a)$ are designed to have $L$ heads, whose $t$-th head represents $\pi_t$ and $Q^{\bar{\pi}}_t$ respectively, sharing the parameters of the lower layers across different $t$. The $t$-th head of the critic recursively references the $((t+1) \bmod L)$-th head for the target value, reflecting the result of Corollary 1. The c-persistent value function is trained to minimize the squared temporal-difference error:

$$J_Q(\theta) = \frac{1}{L}\sum_{t=0}^{L-1} \mathbb{E}_{\substack{(s,a,r,s') \sim D \\ a' \sim \pi_{\phi,(t+1) \bmod L}(\cdot|a,s')}}\Big[\big(Q_{\theta,t}(s,a) - y_t(a,r,s',a')\big)^2\Big] \tag{9}$$
$$\text{s.t.}\quad y_t(a,r,s',a') = r + \gamma\, Q_{\bar{\theta},(t+1) \bmod L}\big(s', \Gamma^c_{t+1,a}(a')\big) - \alpha \log \pi_{\phi,(t+1) \bmod L}(a'|a,s'),$$

where $D$ denotes the replay buffer, $\bar{\theta}$ denotes the parameters of the target network, and $\Gamma^c_{t,a}(a')$ is the action projection function defined in Eq. (4). This objective is obtained from Eq. (5) with an (optional) entropy regularization term $\alpha \log \pi_{\phi,t}(a'|a,s')$, following the SAC formulation. Note that every term in Eq. (9) is agnostic to the actual time step at which $(s,a,r,s')$ was collected, owing to the way $y_t$ is computed using $Q_{(t+1) \bmod L}$ and $\Gamma$. Thus every $(s,a,r,s')$ sample in $D$ can be used to train $Q_{\theta,t}$ regardless of $t$. The policy parameters are then optimized by maximizing:

$$J_\pi(\phi) = \frac{1}{L}\sum_{t=0}^{L-1} \mathbb{E}_{\substack{(s,a,r,s') \sim D \\ a' \sim \pi_{\phi,t}(\cdot|a,s')}}\Big[Q_{\theta,t}\big(s', \Gamma^c_{t,a}(a')\big) - \alpha \log \pi_{\phi,t}(a'|a,s')\Big] \tag{10}$$

where the (optional) $\alpha \log \pi_{\phi,t}(a'|a,s')$ term again comes from the SAC formulation. In essence, maximizing $J_\pi(\phi)$ with respect to $\phi$ corresponds to c-persistent policy improvement, implementing Eq. (8) approximately. As in the case of the critic, every term in Eq. (10) is agnostic to the actual time step $t$ at which $(s,a,r,s')$ was collected, so every sample in $D$ can be used to train $\pi_{\phi,t}$ for all $t$. The overall network architecture and the computational graph for training AP-AC are visualized in Figure 2. To obtain a lower-variance gradient estimate $\hat{\nabla}_\phi J_\pi(\phi)$, we adopt exact reparameterization [7] for continuous-action tasks and relaxed reparameterization with Gumbel-softmax [5, 12] for discrete-action tasks. The remaining design choices follow SAC, such as the clipped double-Q trick and soft target updates. The pseudo-code for AP-AC can be found in Appendix E. (An illustrative sketch of the target computation in Eq. (9) is given after the related-work discussion below.)

5 Related Works

Action Repetition in RL. Recent deep RL algorithms have adopted action repetition to improve learning efficiency by reducing control granularity. Static action repetition, which repeats the same action over a fixed number of $k$ time steps, has been widely adopted in both on-policy [15] and off-policy [14] RL. Dynamic action repetition [9, 20] has also been explored to further improve the learning efficiency of online RL agents by adaptively changing the time scale of repeated actions per state. Recently, the notion of a single action persistence has been formalized by introducing persistent Bellman operators, and a corresponding offline RL algorithm has been proposed along with a heuristic method for finding a good persistence for empirical performance [13]. 
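As promised above, here is a minimal illustration of the target computation inside Eq. (9). It assumes lists of per-head actor and target-critic callables, `actor[j]` and `q_target[j]`; the function names, shapes, and the `gamma`/`alpha` constants are illustrative assumptions rather than the paper's actual implementation.

```python
import numpy as np

def projected_action(t, c, a_prev, a_new):
    """Gamma projection of Eq. (4): keep dimension k from a_prev unless t % c[k] == 0."""
    mask = np.array([(t % ck) == 0 for ck in c])
    return np.where(mask, a_new, a_prev)

def critic_target(t, batch, actor, q_target, c, L, gamma=0.99, alpha=0.2):
    """Compute y_t of Eq. (9) for a batch of (s, a, r, s') transitions.

    actor[j](a, s')   -> (sampled next action a', log-prob of a')
    q_target[j](s, a) -> target-network Q value of head j

    Every transition can be reused for every head t, since the head index only
    enters through (t + 1) % L and the projection below.
    """
    s, a, r, s_next = batch["s"], batch["a"], batch["r"], batch["s_next"]
    j = (t + 1) % L
    a_next, logp = actor[j](a, s_next)          # a' ~ pi_{phi,(t+1) mod L}(.|a, s')
    a_next_proj = np.stack([projected_action(t + 1, c, a_i, an_i)
                            for a_i, an_i in zip(a, a_next)])
    return r + gamma * q_target[j](s_next, a_next_proj) - alpha * logp
```

In an actual implementation, the heads would be neural networks and the critic would be trained on the squared error between `Q_theta[t](s, a)` and this target, exactly as Eq. (9) prescribes.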
In contrast to the existing works that consider a single action-persistence, we deal with arbitrarily multiple action-persistence where each decision variable has its own persistence, and our goal is to provide an efficient solution method for the given action persistence c rather than finding a proper c to speed up learning. Temporal Abstraction in RL The notion of action persistence is also naturally related to temporally abstract actions [16, 25] and semi-MDP framework [2]. Specifically, persisting actions with multiple frequencies can be seen as a particular instance of a semi-Markov option as follows: initiation set is the set of all states I = S, an internal policy is c-persistent π ∈ Πc, and the termination condition is defined as β(ht) = 1{t mod L=0}. Then, our off-policy learning scheme that exploits every transition sample to update every timestep’s actor and critic in Eq. (9-10) can also be understood as an intra-option learning [24] method in the constructed semi-Markov option framework. Still, the cardinality of the set of possible options has an exponential growth with respect to L, thus obtaining an optimal policy over the set of options will be computationally inefficient compared to AP-PI that enjoys a linear complexity with respect to L. 6 Experiments We conducted a set of experiments in order to evaluate the effectiveness of AP-AC on highdimensional tasks with different control frequencies. To the best of our knowledge, this work is the first to address multiple control frequencies in RL. Since there are no existing RL methods designed for multiple control frequencies, we take the variants of SAC as baselines for performance comparison, which are listed as follows: (1) SAC: this agent is trained on the standard non-persistent environment, while being evaluated on the environment where the action-persistence is enforced. This is intended to show the suboptimality of simply projecting a stationary policy to an action-persistent policy. (2) SAC in AP-Env: this agent is trained and evaluated on the action-persistent version of the environment, using the standard RL algorithm. This is to demonstrate the suboptimality of a stationary Markovian policy. (3) SAC-L: this agent takes a current observation, past L actions, and the one-hot indicator of the current time step (t mod L), which are sufficient for the optimal decision-making for the corresponding state augmentation approach discussed in Section 3. Still, this does not exploit the structure of the c-persistent optimal solution such as periodically recurrent policy/value representation and can take redundant information which is not fully compact. As a consequence, it is expected to show relatively weak performance. (4) SAC-L-compact: this agent takes a current observation, the last action which was actually taken, and the one-hot indicator of the current time step (t mod L), which is a compact representation of SAC-L. Still, this is unable to exploit every transition sample to update every timestep’s actor and critic, while AP-AC is capable of doing it in Eq. (9-10). Therefore, it is expected to be less sample-efficient than AP-AC. We conducted experiments on both continuous and discrete tasks, which will be detailed in the following section. The experimental setups including hyperparameters can be found in Appendix G. 
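To make the difference between the SAC-L and SAC-L-compact input representations concrete, the following sketch assembles the augmented observations described above; the helper names and the flat-vector encoding are illustrative assumptions.

```python
import numpy as np

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

def sac_l_obs(obs, last_L_actions, t, L):
    """SAC-L input: current observation, the past L actions, and one-hot of (t mod L)."""
    return np.concatenate([obs, np.concatenate(last_L_actions), one_hot(t % L, L)])

def sac_l_compact_obs(obs, last_executed_action, t, L):
    """SAC-L-compact input: current observation, the last action actually taken,
    and one-hot of (t mod L) -- a much smaller but still sufficient representation."""
    return np.concatenate([obs, last_executed_action, one_hot(t % L, L)])
```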
6.1 Task description

Mujoco tasks (continuous action space). In many real-world situations, complex robotic systems consist of a number of controllers whose operating control frequencies vary due to the system specification. To simulate this setting, we first conduct experiments on four OpenAI Gym continuous control tasks based on the Mujoco physics simulator [3, 26], where the controllable joints are modified to have different action persistences. Figure 3 depicts the detailed action-persistence setup for each task. For Hopper and Walker2d, the action persistences for the thigh(s), the leg(s), and the foot (feet) are set to 4, 2, and 1, respectively. For HalfCheetah, the action persistences for the thighs, the shins, and the feet are set to 4, 2, and 1, respectively. Finally, for Ant, the persistences for the hips and ankles are set to 4 and 2. We represent the policy $\pi_t$ as a Gaussian with diagonal covariance matrix and a tanh-squashing function to bound the output within $[-1, 1]$ for each dimension [4].

Traffic light control (discrete action space). We also tested AP-AC on a traffic control task, a realistic discrete-action sequential decision scenario with action persistence: in a traffic system, the control frequency of each traffic light can differ, for example depending on the number of lanes and the speed limit. We use SUMO (Simulation of Urban MObility) [8] as the traffic simulator and SUMO-RL [1] for the environment interface. The specific instance we use is the 2X2GRID implementation in SUMO-RL, depicted in Figure 4. The goal is to manipulate the traffic lights located at each junction to improve the overall traffic flow, where vehicles are generated randomly at the end of each road with probability 0.1 every second. The observation for each junction consists of the following four types of values: (1) the current traffic light status, represented as a 4D one-hot vector, (2) the elapsed time since the current traffic light status began, normalized to [0, 1] (1D), (3) the density of all vehicles for each lane (8D), and (4) the density of stopped vehicles for each lane (8D). Therefore, the overall dimension of the observation space is $4 \times 21 = 84$. The action space is described in Figure 4b. The reward, in the range [0, 1], is defined as $\min_{i \in \{1,2,3,4\}} 1/(\text{waiting time of junction } i)$, with the goal of improving the traffic flow of the junction with the heaviest traffic. The episode length is 1000. We use a factorized (relaxed) categorical distribution to represent the policy over the discrete action space, i.e. $\pi_{\phi,t}(a|\cdot) = \prod_{k=1}^{4} \mathrm{Cat}(a^k \mid p^k_\phi(\cdot))$, where $p_\phi(\cdot)$ denotes a probability vector of size 4. Although the cardinality of the entire joint action space is $|A| = 4^4 = 256$, the input and output dimensions used to represent actions in the actor/critic networks are $4 \times 4 = 16$ (i.e. four one-hot vectors of size 4), since we assume fully factorized policies.

6.2 Results

We performed a deterministic evaluation of each algorithm every 10K time steps, i.e. the performance of the mean policy for continuous control and of the greedy policy with respect to the categorical probabilities for traffic control. The results are presented in Figure 5. Since SAC (colored in green) is optimized for the non-persistent environment, its naive projection to a c-persistent policy suffers severe performance degradation. In contrast, SAC in AP-Env (colored in cyan) interacts with the c-persistent environment directly while optimizing a stationary Markovian policy. 
Still, as discussed in Section 3, stationary Markovian policies can be suboptimal in general, which resulted in performing worse than AP-AC. SAC-L (colored in magenta) takes the past L-step actions and indicator of the current time step (t mod L), which is sufficient information for optimal c-persistent decision-making. Nonetheless, it does not exploit the structure of optimal c-persistent solution and can take redundant information since not all the past L-step actions are required for optimal decision-making, resulting in inefficient learning. This can be observed from the results that as the action dimension increases (Hopper (3)→Walker/Halfcheetah (6)→ Ant (8)→ SUMO2X2GRID (16)), the performance of SAC-L gets relatively worse. SAC-L-compact (colored in red) takes the last action actually taken, and indicator of the current time step (t mod L), which is also sufficient as well as compact information for optimal c-persistent decision-making, showing better performance than SAC-L in high-dimensional action tasks. Still, it is unable to exploit every transition sample to update every timestep’s actor and critic, which leads to learning inefficiency compared to AP-AC. Finally, AP-AC significantly outperforms all of the baseline algorithms in all benchmark domains except for Hopper where AP-AC and baselines are on par. The experimental results highlight the effectiveness of our method that directly optimizes a periodic non-stationary policy for the tasks with multiple control frequencies. 7 Discussion and Conclusion In this work, we formalized the notion of multiple action persistences in RL, which generalizes the result of [13] that deals with single action persistence. We introduced AP-PI, an efficient tabular planning algorithm for c-persistent policy for FA-MDP, and showed a formal analysis on its optimal convergence guarantee while it has only a marginal increase in the time complexity compared to the standard policy iteration. We then presented AP-AC, an off-policy deep reinforcement learning algorithm that scales, which directly exploits the structure of the optimal solution from the formal analysis on AP-PI. We empirically demonstrated that AP-AC significantly outperforms a number of strong baselines, both on continuous and discrete problems with action persistence. Extending the results of this work to multi-agent or hierarchical RL would be an interesting direction for future work. Broader Impact In recent years, reinforcement learning (RL) has shown remarkable successes in various areas, where most of their results are based on the assumption that all decision variables are simultaneously determined at every discrete time step. However, many real-world sequential decision-making problems involve multiple decision variables whose control frequencies are different by the domain requirement. In this situation, standard RL algorithms without considering the control frequency requirement may suffer from severe performance degradation as discussed in Section 3. This paper provides a theoretical and algorithmic foundation of how to address multiple control frequencies in RL, which enables RL to be applied to more complex and diverse real-world problems that involve decision variables with different frequencies. Therefore, this work would be beneficial for those who want to apply RL to various tasks that inherently have multiple control frequencies. 
As we provide a general-purpose methodology, we believe this work has little to do with a particular system failure or a particular data bias. On the other hand, this work could contribute to accelerating industrial adoption of RL, which has the potential to adversely affect employment due to automation. Acknowledgments This work was supported by the National Research Foundation (NRF) of Korea (NRF2019R1A2C1087634 and NRF-2019M3F2A1072238), the Ministry of Science and Information communication Technology (MSIT) of Korea (IITP No. 2020-0-00940, IITP No. 2019-0-00075, IITP No. 2017-0-01779 XAI), and POSCO.
1. What is the focus and contribution of the paper on reinforcement learning?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical foundation?
3. What are the weaknesses of the paper, especially in the experimental section?
4. Do you have any concerns regarding the baselines used in the experiments?
5. How might the proposed algorithm be improved or refined?
Summary and Contributions

[Reinforcement Learning for Control with Multiple Frequencies] In this paper, the authors propose a variant of SAC in which action variables are controlled at different frequencies. The authors provide mathematical proofs that help deepen the understanding of the algorithm. The experiment section indicates that the proposed algorithm can obtain better performance.

Strengths

1. The algorithm is interesting and studies a real-life problem that could be beneficial to society.
2. Theory is provided for the proposed algorithm.

Weaknesses

The experiment section is weak, and the baselines do not seem strong enough. In Figure 5, the performance at 1 million steps for HalfCheetah is only around 3000, and SAC, a baseline that should reach around 10000 reward at 1 million steps, is not learning anything. It seems that the baselines are broken. The same issue appears quite clearly in all other environments (Hopper, Walker, Ant).
NIPS
Title
Reinforcement Learning for Control with Multiple Frequencies

Abstract
Many real-world sequential decision problems involve multiple action variables whose control frequencies differ, such that actions take effect over different periods. While these problems can be formulated with the notion of multiple action persistences in a factored-action MDP (FA-MDP), it is non-trivial to solve them efficiently, since an action-persistent policy constructed from a stationary policy can be arbitrarily suboptimal, rendering solution methods for standard FA-MDPs hardly applicable. In this paper, we formalize the problem of multiple control frequencies in RL and provide an efficient solution method. Our proposed method, Action-Persistent Policy Iteration (AP-PI), provides a theoretical guarantee on convergence to an optimal solution while incurring only a factor-of-|A| increase in time complexity during the policy improvement step, compared to standard policy iteration for FA-MDPs. Extending this result, we present Action-Persistent Actor-Critic (AP-AC), a scalable RL algorithm for high-dimensional control tasks. In the experiments, we demonstrate that AP-AC significantly outperforms the baselines on several continuous control tasks and a traffic control simulation, which highlights the effectiveness of our method that directly optimizes the periodic non-stationary policy for tasks with multiple control frequencies.

1 Introduction
In recent years, reinforcement learning (RL) [23] has shown great promise in various domains, such as complex games [14, 21, 22] and high-dimensional continuous control [11, 19]. These problems have mostly been formulated as discrete-time Markov decision processes (MDPs) [17], assuming all decision variables are simultaneously determined at every time step. However, many real-world sequential decision-making problems involve multiple decision variables whose control frequencies differ by requirement. For example, when managing a financial portfolio of various assets, the frequency of rebalancing may need to be different for each asset, e.g. weekly for stocks and monthly for real estate. Similarly, robotic systems typically consist of a number of controllers operating at different frequencies due to their system specifications. Different control frequencies can be formulated with the notion of different action persistence in a discrete-time factored-action MDP (FA-MDP), where the base time interval is determined by the reciprocal of the least common multiple of the control frequencies. However, while algorithms for a single action persistence have been proposed to improve the empirical performance of online [9] or offline [13] RL agents, to the best of our knowledge, addressing multiple action persistences in RL has remained largely unexplored due to the difficulty involved in the non-stationary nature of the optimal policy.

In this paper, we formalize the problem of multiple action persistences in FA-MDPs. We first show via a simple example that any persistent policy induced by a stationary policy can be arbitrarily bad. Then, we introduce efficient methods for FA-MDPs that directly optimize a periodic non-stationary policy while circumventing the exponential growth of time complexity with respect to the periodicity of action persistence. 
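As a small illustration of how control frequencies map onto action persistences, the snippet below computes the persistence vector c and the period L from a list of per-variable decision frequencies; the function name and the example frequencies are illustrative assumptions.

```python
from math import lcm

def persistence_from_frequencies(freqs_hz):
    """Given per-variable control frequencies (decisions per second), return
    (base_interval_seconds, persistence vector c, period L = LCM(c))."""
    base_rate = lcm(*freqs_hz)          # fastest common clock, in ticks per second
    base_interval = 1.0 / base_rate     # the base time step of the FA-MDP
    c = [base_rate // f for f in freqs_hz]
    return base_interval, c, lcm(*c)

# Example: two action variables controlled at 3 Hz and 2 Hz.
base_dt, c, L = persistence_from_frequencies([3, 2])
print(base_dt, c, L)   # 1/6-second base step, c = [2, 3], L = 6
```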
1. What is the focus of the paper regarding reinforcement learning?
2. What are the strengths of the proposed approach, particularly in handling factored action spaces?
3. Do you have any concerns or questions about the algorithm's practicality?
4. How does the reviewer assess the novelty and thoroughness of the work?
5. What are the weaknesses of the paper regarding its treatment of alternative solutions?
Summary and Contributions This work introduces an algorithm for reinforcement learning in settings with factored action spaces in which each element of the action space may have a different control frequency. To motivate the necessity of such an algorithm, it provides an argument that in this setting, a naive approach with a stationary Markovian policy on the states (which does not observe the timestep) can be suboptimal. Further, it argues that simply augmenting the state or action spaces and applying standard RL methods results in costs which are exponential in L, the least common multiple of the set of action persistences. In constructing the method this paper introduces c-persistent Bellman operators, a way of updating a Q-function in an environment with multiple action persistences, and proves its convergence. This leads to a method which uses L Q-functions, one for each step in the periodic structure of action persistences. Using these Q-functions, the paper introduces a policy improvement step and proves that it does, in fact, improve improve performance. Finally it shows that a policy iteration algorithm based on these components converges to the optimal policy. The paper proposes a practical implementation of these ideas comprising a neural network architecture and an actor-critic algorithm and validates its performance experimentally. **Post rebuttal** Thanks to the authors for the response! The clarification and experiments for the alternate baseline are very helpful. I think this paper does a meticulous job on a problem that's worth solving and I'd like to see it at NeurIPS this year. Strengths While the problem studied here is relatively niche within the reinforcement learning community, in practical systems it is common to have multiple control frequencies. This paper tackles the problem in a rigorous way and I could see its methods being used in the future. The algorithm that it proposes is not surprising but the work starts from a real problem, builds up the fundamentals, then solves it in a convincing way. It is refreshing to see a simple thing done very thoroughly. I do not know the work on this particular subproblem well enough to definitively state the novelty of this work, but as someone who has worked on related problems in the past it is new to me. The trick of using every timestep to train every head of the model (line 195) is particularly clever, and should be adopted in the hierarchical and temporally-abstract actions literature more generally. (I wish I had thought of it myself while working on abstract actions last year — it would have directly applied and improved the sample efficiency of a method I developed.) The claims and their proofs appear sound and the proposed algorithm is practical. Weaknesses The main weakness of this work as it stands is its treatment of the alternatives. The simplest solution to this problem, which I believe would have all the same properties, would be to condition the value function and the policy on the last action which was _actually taken_ and t mod L. This could be viewed as a modification to the environment rather than to the learning algorithm: the observations from the environment would simply include the last (realized, in the sense of 𝚪) action taken and the current point in the action period. This modification would make the environment satisfy the Markov property once again and permit the straightforward application of unmodified RL methods. 
Instead of having exponential complexity in L, as was suggested for the alternatives in this work, it would have linear complexity just like the proposed method. This environment modification method would not have all the advantages of the one proposed in this paper, in particular being able to update every timestep with every transition. However, the use of such a method seems less dire than this work implies. Is my understanding correct? Or am I missing something important?
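To make the alternative described above concrete, here is a minimal sketch of such an environment modification, assuming a gym-style reset/step interface; the wrapper class, helper, and attribute names are illustrative and are not taken from the paper or the review. The wrapper enforces the persistence (playing the role of the action-persistent environment) and exposes the last realized action together with a one-hot encoding of t mod L, so that a standard stationary agent faces a Markovian problem again.

```python
import numpy as np
from math import gcd
from functools import reduce


def lcm(values):
    """Least common multiple of a list of positive integers."""
    return reduce(lambda a, b: a * b // gcd(a, b), values)


class PersistenceObsWrapper:
    """Illustrative environment modification: hold each action dimension for its
    persistence c_k, and append (last realized action, one-hot of t mod L) to the
    observation so an unmodified stationary RL agent acts on a Markovian state."""

    def __init__(self, env, persistences):
        self.env = env                      # assumed gym-style: reset() / step(action)
        self.c = np.asarray(persistences)   # per-dimension persistence c_k
        self.L = lcm(list(persistences))    # period of the persistence pattern
        self.t = 0
        self.last_action = np.zeros(len(persistences))

    def _augment(self, obs):
        phase = np.zeros(self.L)
        phase[self.t % self.L] = 1.0        # one-hot indicator of the point in the period
        return np.concatenate([np.asarray(obs), self.last_action, phase])

    def reset(self):
        obs = self.env.reset()
        self.t = 0
        self.last_action = np.zeros(len(self.c))
        return self._augment(obs)

    def step(self, proposed_action):
        # Realized action: dimension k may only change when t mod c_k == 0.
        decidable = (self.t % self.c == 0)
        realized = np.where(decidable, proposed_action, self.last_action)
        obs, reward, done, info = self.env.step(realized)
        self.last_action = realized
        self.t += 1
        return self._augment(obs), reward, done, info
```

This is only a sketch of the reviewer's suggestion, not of the paper's method; it illustrates why the augmented observation (last realized action plus phase) restores the Markov property under the persistence constraint.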
NIPS
Title Reinforcement Learning for Control with Multiple Frequencies Abstract Many real-world sequential decision problems involve multiple action variables whose control frequencies are different, such that actions take their effects at different periods. While these problems can be formulated with the notion of multiple action persistences in the factored-action MDP (FA-MDP), it is non-trivial to solve them efficiently since an action-persistent policy constructed from a stationary policy can be arbitrarily suboptimal, rendering solution methods for standard FA-MDPs hardly applicable. In this paper, we formalize the problem of multiple control frequencies in RL and provide an efficient solution method. Our proposed method, Action-Persistent Policy Iteration (AP-PI), provides a theoretical guarantee on the convergence to an optimal solution while incurring only a factor of |A| increase in time complexity during the policy improvement step, compared to the standard policy iteration for FA-MDPs. Extending this result, we present Action-Persistent Actor-Critic (AP-AC), a scalable RL algorithm for high-dimensional control tasks. In the experiments, we demonstrate that AP-AC significantly outperforms the baselines on several continuous control tasks and a traffic control simulation, which highlights the effectiveness of our method that directly optimizes the periodic non-stationary policy for tasks with multiple control frequencies. 1 Introduction In recent years, reinforcement learning (RL) [23] has shown great promise in various domains, such as complex games [14, 21, 22] and high-dimensional continuous control [11, 19]. These problems have been mostly formulated as discrete-time Markov decision processes (MDPs) [17], assuming all decision variables are simultaneously determined at every time step. However, many real-world sequential decision-making problems involve multiple decision variables whose control frequencies are different by requirement. For example, when managing a financial portfolio of various assets, the frequency of rebalancing may need to be different for each asset, e.g. weekly for stocks and monthly for real estate. Similarly, robotic systems typically consist of a number of controllers operating at different frequencies due to their system specification. Different control frequencies can be formulated with the notion of different action persistence in the discrete-time factored-action MDP (FA-MDP), where the base time interval is determined by the reciprocal of the least common multiple of the control frequencies. However, while algorithms for single action persistence have been proposed in order to improve the empirical performance of online [9] or offline [13] RL agents, to the best of our knowledge, addressing multiple action persistences in RL has been mostly unexplored due to the difficulty arising from the non-stationary nature of the optimal policy. In this paper, we formalize the problem of multiple action persistences in FA-MDPs. We first show that any persistent policy induced by a stationary policy can be arbitrarily bad via a simple example. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Then, we introduce efficient methods for FA-MDPs that directly optimize a periodic non-stationary policy while circumventing the exponential growth of time complexity with respect to the periodicity of action persistence.
We first present a tabular planning algorithm, Action-Persistent Policy Iteration (AP-PI), which provides a theoretical guarantee on the convergence to an optimal solution while incurring only a factor of $|A|$ time complexity increase in the policy improvement step compared to the policy iteration for standard FA-MDPs. We then present Action-Persistent Actor-Critic (AP-AC), a scalable learning algorithm for high-dimensional tasks via practical approximations to AP-PI, with a neural network architecture designed to facilitate the direct optimization of a periodic non-stationary policy. In the experiments, we demonstrate that AP-AC significantly outperforms a number of baselines based on SAC, from the results on modified Mujoco continuous control benchmarks [3, 26] and the SUMO traffic control simulation [8], which highlights the effectiveness of our method that directly optimizes the periodic non-stationary policy for tasks with multiple control frequencies.

2 Preliminaries

We assume the environment is modeled as a discrete-time factored-action MDP (FA-MDP) $M = \langle S, A, P, R, \gamma \rangle$, where $S$ is the set of states $s$, $A$ is the set of vector-represented actions $a = (a^1, \dots, a^m)$, $P(s' \mid s, a) = \Pr(s_{t+1} = s' \mid s_t = s, a_t = a)$ is the transition probability, $R(s, a) \in \mathbb{R}$ is the immediate reward for taking action $a$ in state $s$, and $\gamma \in [0, 1)$ is the discount factor. A policy $\pi = (\pi_t)_{t \ge 0} \in \Pi$ is a sequence of functions where $\pi_t : H \to \Delta(A)$ is a mapping from the history $h_t = (s_0, a_0, \dots, s_{t-1}, a_{t-1}, s_t)$ to a probability distribution over $A$, $\pi_t(a_t \mid h_t) = \Pr(a_t \mid h_t)$. We call $\pi_t$ Markovian if $\pi_t$ depends only on the last state $s_t$, and call it stationary if $\pi_t$ does not depend on $t$. The policy $\pi_t$ is called deterministic if it maps from history to some action with probability 1, and can then be denoted as $\pi_t : H \to A$. For simplicity, we will only consider the fully factorized policy $\pi_t(a^1_t, \dots, a^m_t \mid h_t) = \prod_{k=1}^m \pi^k_t(a^k_t \mid h_t)$, which comprises the set $\Pi$ of all fully factorized policies. The action-value function $Q^\pi_t$ of policy $\pi$ is defined as $Q^\pi_t(s, a) = \mathbb{E}_\pi\big[\sum_{\tau = t}^{\infty} \gamma^{\tau - t} R(s_\tau, a_\tau) \mid s_t = s, a_t = a\big]$.

We consider the sequential decision problem where each action variable $a^k$ has its own control frequency. The notion of control frequency can be formulated in terms of action persistence with the FA-MDP $M$ by considering how frequently $a^k$ should be decided in $M$. Specifically, we let $c_k$ be the action persistence of the $k$-th action variable $a^k$, i.e. $a^k$ is decided every $c_k$ time steps in $M$. The overall action persistence of the decision problem is then described as a vector $c = (c_1, \dots, c_m) \in \mathbb{N}^m$. Finally, we define the $c$-persistent policy $\pi$ as follows:

Definition 1. ($c$-persistent policy) Let $\pi = (\pi_t)_{t \ge 0} \in \Pi$ be a policy. Given the action persistence vector $c \in \mathbb{N}^m$, the $c$-persistent policy $\bar\pi_c = (\bar\pi_{c,t})_{t \ge 0}$ induced by $\pi$ is a non-stationary policy where

$$\forall t,\quad \bar\pi_{c,t}(a \mid h_t) = \prod_{k=1}^m \bar\pi^k_{c,t}(a^k \mid h_t) \quad \text{s.t.} \quad \bar\pi^k_{c,t}(a^k \mid h_t) = \begin{cases} \pi^k_t(a^k \mid h_t) & \text{if } t \bmod c_k = 0 \\ \delta_{a^k_{t - (t \bmod c_k)}}(a^k) & \text{otherwise} \end{cases} \quad (1)$$

where $\delta_x(y) = 1$ if $x = y$ and $0$ otherwise. Additionally, we define the set of $c$-persistent policies $\Pi_c = \{(\bar\pi_{c,t})_{t \ge 0} : \pi \in \Pi\}$. Our goal is to find the $c$-persistent policy $\bar\pi^*_c$ that maximizes the expected cumulative reward:

$$\bar\pi^*_c = \arg\max_{\bar\pi \in \Pi_c} \mathbb{E}_{\bar\pi}\Big[\sum_{t=0}^{\infty} \gamma^t R(s_t, a_t)\Big] \quad (2)$$

Remark. When $c = (1, \dots, 1)$, we have $\Pi_c = \Pi$. Thus, Eq. (2) reduces to the standard objective function of an FA-MDP, which is known to always have a deterministic and Markovian stationary policy as an optimal solution [17].
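As a concrete reading of Definition 1, the following is a minimal sketch of a c-persistent rollout: each action dimension k is re-decided only at steps t with t mod c_k = 0 and held in between. The base policy, the environment step function, and the discount value are placeholders assumed purely for illustration.

```python
import numpy as np


def run_c_persistent(policy, env_step, init_state, c, horizon, gamma=0.99):
    """Roll out the c-persistent policy induced by `policy` (Definition 1, Eq. (1)).
    `policy(t, state)` proposes a full action vector; dimension k of the realized
    action is refreshed from the proposal only when t mod c[k] == 0 and is held
    (persisted) otherwise. `env_step(state, action)` -> (next_state, reward) is assumed."""
    c = np.asarray(c)
    state, held = init_state, None
    ret = 0.0
    for t in range(horizon):
        proposed = np.asarray(policy(t, state))
        if held is None:
            held = proposed                    # at t = 0, every dimension is decided
        else:
            decide = (t % c == 0)              # dimensions whose persistence expires at t
            held = np.where(decide, proposed, held)
        state, reward = env_step(state, held)
        ret += (gamma ** t) * reward
    return ret
```

For the example of Figure 1 with c = (2, 3), plugging in the stationary policy that always proposes (1, 1) reproduces the failure mode discussed in Section 3 below: the held action sends the agent back and forth between s0 and s1.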
Also, the $c$-persistent policy of Definition 1 is different from the $k$-persistent policy of [13] in that our definition considers multiple action persistences and is not limited to a Markovian policy $\pi$, while [13] considers a single action persistence and a non-stationary policy induced only by a Markovian policy. The agent with the $c$-persistent policy $\bar\pi_c$ induced by $\pi$ interacts with the environment as follows: At time step $t = 0$, all action variables are selected according to $\bar\pi_{c,0} = \pi_0$, i.e. $(a^1_0, \dots, a^m_0) \sim \prod_{k=1}^m \pi^k_0(\cdot \mid h_0)$. Then, each action variable $a^k$ is kept persistent for the subsequent $c_k - 1$ time steps. At time step $t = c_k$, the action variable $a^k$ is set by $\bar\pi^k_{c,t}(\cdot \mid h_t) = \pi^k_t(\cdot \mid h_t)$, and the process continues into the next time step. In other words, the agent decides the value of $a^k$ only at the time steps $t$ that are multiples of $c_k$, i.e. $t \bmod c_k = 0$. Figure 1b illustrates an example of a $c$-persistent policy $\bar\pi_c$. For the remainder of this paper, we will omit the subscript $c$ in $\bar\pi_c$ for notational brevity if there is no confusion. All the proofs of theorems are available in the Appendix.

3 Action-Persistence in FA-MDPs

Finding the optimal policy via Eq. (2) is non-trivial since any $c$-persistent policy naively constructed from a stationary policy can be suboptimal, unlike in standard FA-MDPs where there always exists a stationary optimal policy. To see this, consider the FA-MDP depicted in Figure 1, where there are two action variables with action persistences 2 and 3, respectively. In this example task, in order to obtain a positive reward, the agent should take the action $a = (1, 1)$ at state $s_0$ to go to the rightmost state. However, when we use this to form a stationary deterministic policy with $\pi(s_0) = (1, 1)$ and construct a $c$-persistent policy in a naive manner, we see that the policy can never reach $s_3$ due to the inherent action persistence $c = (2, 3)$: the action $(1, 1)$ taken at $s_0$ when $t = 0$ will persist at the next time step $t = 1$ in $s_1$, making the agent go back to $s_0$. Then, the agent will select the action $(1, 1)$ again by $\pi(s_0)$, and this will be repeated forever. As a consequence, the agent visits only $s_0$ and $s_1$, and thus cannot reach the rightmost state. In contrast, the non-stationary deterministic policy $\bar\pi$ described in Figure 1b reaches $s_3$. Careful readers may notice that a $c$-persistent policy "projected" from some stationary but stochastic policy can eventually reach $s_3$, but its expected return is clearly less than that of the non-stationary deterministic policy in Figure 1b, thus suboptimal. Therefore, obtaining a $c$-persistent policy by ignoring the action persistence requirement and solving the corresponding standard FA-MDP does not work. However, one can observe that the action persistence scheme repeats periodically every $L \triangleq \mathrm{LCM}(c_1, \dots, c_m)$ time steps. From this observation, a naive approach to solving Eq. (2) would be to redefine the action space to have $L$-step actions as elements. After redefining the transition and reward functions for these actions, standard solution methods for FA-MDPs such as dynamic programming can be applied. Still, this approach not only has exponential time complexity with respect to $L$ due to the increase in the size of the action space, i.e. $|A|^L$, but can also be suboptimal unless the underlying transition dynamics are nearly deterministic, due to the open-loop decision-making nature of $L$-step actions [27].
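As a small numeric illustration of the two costs just contrasted (the numbers below are only the toy example's, assumed for illustration): with c = (2, 3) the persistence pattern repeats every L = lcm(2, 3) = 6 steps, so the macro-action reformulation must consider |A|^L open-loop action sequences per decision, whereas the approach developed next keeps only L per-step value functions.

```python
from math import gcd
from functools import reduce

c = (2, 3)                                         # toy persistences from Figure 1
L = reduce(lambda a, b: a * b // gcd(a, b), c)     # L = lcm(c_1, ..., c_m) = 6
A = 4                                              # |A| = 2 x 2 joint actions in the toy example
print(f"period L = {L}")
print(f"L-step macro-actions: |A|**L = {A**L}")    # 4096 open-loop action sequences
print(f"per-step approach:    L = {L} Q-functions over |A| = {A} actions each")
```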
A more principled approach is to consider an $L$-Markovian policy that memorizes which action was taken during the last $L$ steps, but its straightforward conversion to a standard MDP via state augmentation still suffers from exponential time complexity with respect to $L$.

3.1 Policy evaluation for c-persistent policy: c-persistent Bellman operators

As discussed in the previous section, augmenting the state or action space to store $L$-step information results in exponential complexity with respect to $L$. Instead, we take a more direct approach that optimizes the $c$-persistent policy via a composition of Bellman operators within the space $\Pi_L$ of $L$-periodic, non-stationary, and deterministic policies:

$$\Pi_L = \{\pi \in \Pi : \forall t,\ \pi_t = \pi_{t+L} \text{ and } \pi_t : A \times S \to A\} \quad (3)$$

We will later prove that there always exists an optimal policy for Eq. (2) which is induced by some $\pi \in \Pi_L$. A policy in $\Pi_L$ will be denoted as $\pi = (\pi_0, \dots, \pi_{L-1})$ in the remainder of the paper. As the first step of the derivation of our algorithm, we define the function $\Gamma^c_{t,a}(a')$:

$$\Gamma^c_{t,a}(a') = (\bar a^1, \dots, \bar a^m) \quad \text{where } \bar a^k \triangleq \begin{cases} a^k & \text{if } t \bmod c_k \neq 0 \\ a'^k & \text{if } t \bmod c_k = 0 \end{cases} \quad (4)$$

which projects the action $a'$ into a feasible action at time step $t$ if the action taken at $t-1$ is assumed to be $a$. This is done by extracting the dimensions of "effectable" action variables at time step $t$ from $a'$ and the dimensions of "uneffectable" variables at time step $t$ from $a$. For the $L$-periodic non-stationary deterministic policy $\pi = (\pi_0, \dots, \pi_{L-1}) \in \Pi_L$, we first define the one-step $c$-persistent Bellman operator $\bar T^\pi_t$ induced by $\pi$. Specifically, for $t \in \{0, \dots, L-1\}$,

$$(\bar T^\pi_t Q)(s, a) \triangleq R(s, a) + \gamma\, \mathbb{E}_{\substack{s' \sim P(\cdot \mid s, a) \\ a' = \pi_{t+1}(a, s')}}\big[Q(s', \Gamma^c_{t+1,a}(a'))\big] \quad (5)$$

Then, we define the $L$-step $c$-persistent Bellman operator $\bar H^\pi_t$ as the composition of $L$ one-step $c$-persistent Bellman operators:

$$(\bar H^\pi_0 Q)(s, a) \triangleq (\bar T^\pi_0 \bar T^\pi_1 \cdots \bar T^\pi_{L-2} \bar T^\pi_{L-1} Q)(s, a) \qquad (6)$$
$$(\bar H^\pi_1 Q)(s, a) \triangleq (\bar T^\pi_1 \bar T^\pi_2 \cdots \bar T^\pi_{L-1} \bar T^\pi_0 Q)(s, a)$$
$$\vdots$$
$$(\bar H^\pi_{L-1} Q)(s, a) \triangleq (\bar T^\pi_{L-1} \bar T^\pi_0 \cdots \bar T^\pi_{L-3} \bar T^\pi_{L-2} Q)(s, a)$$

The following theorem and corollary state that each $L$-step $c$-persistent Bellman operator $\bar H^\pi_t$ is a contraction mapping, and that the fixed points $Q^{\bar\pi}_0, \dots, Q^{\bar\pi}_{L-1}$ are related to one another through the one-step $c$-persistent Bellman operators $\bar T^\pi_0, \dots, \bar T^\pi_{L-1}$.

Theorem 1. For all $t \in \{0, \dots, L-1\}$, the $L$-step $c$-persistent Bellman operator $\bar H^\pi_t$ is a $\gamma^L$-contraction with respect to the infinity norm, thus $\bar H^\pi_t Q^{\bar\pi}_t = Q^{\bar\pi}_t$ has a unique fixed-point solution. In other words, for any $Q^0_t : S \times A \to \mathbb{R}$, define $Q^{n+1}_t = \bar H^\pi_t Q^n_t$. Then the sequence $Q^n_t$ converges to the $t$-th $c$-persistent value function of $\bar\pi$ as $n \to \infty$.

Corollary 1. $Q^{\bar\pi}_t = \bar T^\pi_t Q^{\bar\pi}_{(t+1) \bmod L}$ holds for all $t \in \{0, \dots, L-1\}$, thus the $c$-persistent value functions can be obtained by repeatedly applying the one-step $c$-persistent backup in an $L$-cyclic manner.

Note that the $c$-persistent value function of the policy $\pi$ obtained by $\bar H^\pi_t$ has the following form:

$$Q^{\bar\pi}_t(s, a) = \mathbb{E}_{\substack{\forall \tau,\ s_{\tau+1} \sim P(\cdot \mid s_\tau, \bar a_\tau) \\ \bar a_{\tau+1} = \Gamma^c_{\tau+1, \bar a_\tau}(\pi_{\tau+1}(\bar a_\tau, s_{\tau+1}))}}\Big[\sum_{\tau=t}^{\infty} \gamma^{\tau - t} R(s_\tau, \bar a_\tau) \,\Big|\, s_t = s,\ \bar a_t = a\Big] \quad (7)$$

which is obtained by unfolding the $L$-step $c$-persistent Bellman recursion from $\bar H^\pi_t$. Here, one can easily show by mathematical induction that every action taken at every time step $t$, projected by $\Gamma^c_{t, \bar a}(\cdot)$, abides by the $c$-persistence. As a result, $Q^{\bar\pi}_t(s, a)$ has the intended interpretable meaning, i.e.
the expected sum of rewards that can be obtained when following the $c$-persistent policy $\bar\pi$ induced by $\pi$, except for the initial action $a$, starting from the state $s$ at time step $t$.

Remark. The time complexity of applying the one-step $c$-persistent Bellman backup $\bar T^\pi_t$ of Eq. (5) for a deterministic policy $\pi$ is $O(|S|^2|A|)$ for each $t$, which is identical to the time complexity of the non-persistent standard Bellman backup. Now, we have a complete policy evaluation operator for the $c$-persistent policy induced by an $L$-periodic non-stationary deterministic policy $\pi$.

3.2 Policy improvement for c-persistent policy

The remaining step for full policy iteration is policy improvement using $Q^{\bar\pi}_t(s, a)$.

Theorem 2. Given an $L$-periodic, non-stationary, and deterministic policy $\pi = (\pi_0, \dots, \pi_{L-1}) \in \Pi_L$, let $Q^{\bar\pi}_t$ be the $c$-persistent value of $\bar\pi$ denoted in Eq. (7). If we update the new policy $\pi^{\text{new}} = (\pi^{\text{new}}_0, \dots, \pi^{\text{new}}_{L-1}) \in \Pi_L$ by

$$\forall t, a, s',\quad \pi^{\text{new}}_t(a, s') = \arg\max_{a'} Q^{\bar\pi}_t\big(s', \Gamma^c_{t,a}(a')\big) \quad (8)$$

then $Q^{\bar\pi^{\text{new}}}_t(s, a) \ge Q^{\bar\pi}_t(s, a)$ holds for all $t$, $s$, $a$.

Remark. The time complexity of the policy improvement step defined by Eq. (8) is $O(|S||A|^2)$ for each $t$, which is $|A|$ times worse than the standard non-persistent policy improvement, whose complexity is $O(|S||A|)$. Note also that the new policy $\pi^{\text{new}}$ is not necessarily $c$-persistent, i.e. $\pi^{\text{new}} \notin \Pi_c$ is possible, but the performance of its induced $c$-persistent policy is always improved.

Finally, Theorems 1 and 2 lead us to a full algorithm, Action-Persistent Policy Iteration (AP-PI). AP-PI iterates between the $c$-persistent policy evaluation of Eq. (6) and the $c$-persistent policy improvement of Eq. (8), and it is guaranteed to converge to the optimal $c$-persistent policy $\bar\pi^* \in \Pi_c$. The pseudo-code of AP-PI can be found in Appendix D.

Theorem 3. Starting from any $\bar\pi^0 \in \Pi_c$ induced by an $L$-periodic non-stationary deterministic policy $\pi^0 \in \Pi_L$, the sequence of value functions $Q^{\bar\pi^n}$ and of improved policies $\bar\pi^{n+1}$ induced by $\pi^{n+1}$ converges to the optimal value function and the optimal $c$-persistent policy $\bar\pi^*$, i.e. $Q^{\bar\pi^*}_t(s, a) = \lim_{n \to \infty} Q^{\bar\pi^n}_{t \bmod L}(s, a) \ge Q^{\bar\pi}_t(s, a)$ for any $\bar\pi \in \Pi_c$, $t \in \mathbb{N}_0$, $s \in S$, and $a \in A$.

Corollary 2. There always exists a $c$-persistent optimal policy $\bar\pi^*_c$, which is induced by an $L$-periodic, non-stationary, and deterministic policy $\pi \in \Pi_L$.

The policy $\bar\pi^* = (\bar\pi^*_0, \dots, \bar\pi^*_{L-1})$ obtained by AP-PI is executed as follows. First, $\bar a$ is initialized randomly. Then, at every step $t$, the action $a_t = \Gamma^c_{t, \bar a}(\bar\pi^*_{t \bmod L}(\bar a, s_t))$ is executed, and $\bar a$ is updated by $\bar a \leftarrow a_t$. To the best of our knowledge, AP-PI is the first algorithm that addresses multiple action persistences, extending the single action persistence model that was recently analyzed in [13]. AP-PI can readily be made scalable using the actor-critic architecture to cope with large action spaces such as continuous actions, which we describe in the next section. This is a non-trivial extension of Persistent Fitted Q-iteration (PFQI) [13], which only applies to finite action spaces with a single action persistence.
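Before moving to the function-approximation setting, here is a compact tabular sketch of the two components just described: the projection $\Gamma^c_{t,a}(a')$ of Eq. (4) and an alternation of cyclic c-persistent backups (Eq. (5), applied per Corollary 1) with the improvement step of Eq. (8). The transition/reward containers, the toy discount, and the fixed sweep count are placeholders assumed for illustration; this is not the paper's pseudo-code, and full AP-PI would run the evaluation of Eq. (6) to convergence before each improvement.

```python
import numpy as np
from itertools import product
from math import gcd
from functools import reduce


def gamma_proj(t, a_prev, a_new, c):
    """Eq. (4): dimension k takes its value from a_new only when t mod c[k] == 0,
    otherwise it is carried over from the previous realized action a_prev."""
    return tuple(a_new[k] if t % c[k] == 0 else a_prev[k] for k in range(len(c)))


def ap_pi_sketch(P, R, action_sets, c, gamma=0.95, n_rounds=50):
    """Illustrative tabular loop in the spirit of AP-PI.
    P[s][a] is a dict {s': prob}, R[s][a] a scalar reward, both indexed by joint
    actions `a` given as tuples; `action_sets` lists the values of each action variable."""
    actions = list(product(*action_sets))            # joint action space A
    idx = {a: i for i, a in enumerate(actions)}
    L = reduce(lambda x, y: x * y // gcd(x, y), c)   # L = lcm(c_1, ..., c_m)
    S = len(P)
    Q = np.zeros((L, S, len(actions)))
    # L-periodic deterministic policy pi_t : (previous action, state) -> proposed action
    pi = [{(a, s): actions[0] for a in actions for s in range(S)} for _ in range(L)]

    for _ in range(n_rounds):
        # Policy evaluation: one-step c-persistent backups applied L-cyclically (Eq. (5)).
        for t in reversed(range(L)):
            nxt = (t + 1) % L        # (t+1) mod c_k == ((t+1) mod L) mod c_k since c_k divides L
            for s in range(S):
                for a in actions:
                    nxt_val = 0.0
                    for s2, p in P[s][a].items():
                        a_real = gamma_proj(nxt, a, pi[nxt][(a, s2)], c)
                        nxt_val += p * Q[nxt, s2, idx[a_real]]
                    Q[t, s, idx[a]] = R[s][a] + gamma * nxt_val
        # Policy improvement (Eq. (8)): maximize Q_t(s', Gamma^c_{t,a}(a')) over proposals a'.
        for t in range(L):
            for s in range(S):
                for a in actions:
                    pi[t][(a, s)] = max(
                        actions, key=lambda a2: Q[t, s, idx[gamma_proj(t, a, a2, c)]])
    return pi, Q
```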
4 Action-Persistent Actor-Critic

In this section, we present Action-Persistent Actor-Critic (AP-AC), an off-policy RL algorithm that can be applied to high-dimensional tasks via a practical approximation to AP-PI. AP-AC extends Soft Actor-Critic (SAC) [4] to perform iterative optimization of the parametric models of an $L$-periodic non-stationary policy (i.e. the actor) and its $c$-persistent action-value function (i.e. the critic). We assume that the action persistence vector $c = (c_1, \dots, c_m)$ is given as a part of the environment specification. As discussed in Section 3, the optimal $c$-persistent policy $\bar\pi$ can be induced by an $L$-periodic non-stationary policy $\pi = (\pi_0, \dots, \pi_{L-1})$, where $\pi_t : A \times S \to \Delta(A)$ for all $t$. The corresponding optimal value function is also represented by the $L$-periodic action-value function $Q^{\bar\pi} = (Q^{\bar\pi}_0, \dots, Q^{\bar\pi}_{L-1})$ with $Q^{\bar\pi}_t : S \times A \to \mathbb{R}$ for all $t$. We exploit this structure of the optimal solution in the neural network architecture. Specifically, the parameterized actor network $\pi_\phi(\bar a, s)$ and the critic network $Q_\theta(s, a)$ are designed to have $L$ heads, whose $t$-th heads represent $\pi_t$ and $Q^{\bar\pi}_t$, respectively, thus sharing the parameters of the lower layers across different $t$. The $t$-th head of the critic recursively references the $((t+1) \bmod L)$-th head for the target value, reflecting the result of Corollary 1. The $c$-persistent value function is trained to minimize the squared temporal difference error:

$$J_Q(\theta) = \frac{1}{L}\sum_{t=0}^{L-1} \mathbb{E}_{\substack{(s, a, r, s') \sim D \\ a' \sim \pi_{\phi, (t+1) \bmod L}(\cdot \mid a, s')}}\Big[\big(Q_{\theta, t}(s, a) - y_t(a, r, s', a')\big)^2\Big] \quad (9)$$

$$\text{s.t.}\quad y_t(a, r, s', a') = r + \gamma\, Q_{\bar\theta, (t+1) \bmod L}\big(s', \Gamma^c_{t+1, a}(a')\big) - \alpha \log \pi_{\phi, (t+1) \bmod L}(a' \mid a, s'),$$

where $D$ denotes the replay buffer, $\bar\theta$ denotes the parameters of the target network, and $\Gamma^c_{t,a}(a')$ is the action projection function defined in Eq. (4). This objective function is obtained from Eq. (5) with an (optional) entropy regularization term $\alpha \log \pi_{\phi, t}(a' \mid a, s')$, following the SAC formulation. Note that every term in Eq. (9) is agnostic to the actual time step at which $(s, a, r, s')$ was collected, which is due to the way we calculate $y_t$ using $Q_{(t+1) \bmod L}$ and $\Gamma$. Thus every $(s, a, r, s')$ sample in $D$ can be used to train $Q_{\theta, t}$ regardless of $t$. The policy parameters are then optimized by maximizing:

$$J_\pi(\phi) = \frac{1}{L}\sum_{t=0}^{L-1} \mathbb{E}_{\substack{(s, a, r, s') \sim D \\ a' \sim \pi_{\phi, t}(\cdot \mid a, s')}}\Big[Q_{\theta, t}\big(s', \Gamma^c_{t, a}(a')\big) - \alpha \log \pi_{\phi, t}(a' \mid a, s')\Big] \quad (10)$$

where the (optional) $\alpha \log \pi_{\phi, t}(a' \mid a, s')$ term comes from the SAC formulation. In essence, maximizing $J_\pi(\phi)$ with respect to $\phi$ corresponds to $c$-persistent policy improvement, implementing Eq. (8) approximately. As in the case of the critic, every term in Eq. (10) is agnostic to the actual time step $t$ at which $(s, a, r, s')$ was collected, thus every sample in $D$ can be used to train $\pi_{\phi, t}$ for all $t$. The overall network architecture and the computational graph for training AP-AC are visualized in Figure 2. In order to obtain a lower-variance gradient estimate $\hat\nabla_\phi J_\pi(\phi)$, we adopt the exact reparameterization [7] for continuous action tasks and the relaxed reparameterization with Gumbel-softmax [5, 12] for discrete action tasks. The rest of the design choices follow those of SAC, such as the clipped double-Q trick and soft target updates. The pseudo-code for AP-AC can be found in Appendix E.
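To show how a single replay batch can serve every head t in Eqs. (9)-(10), here is a minimal PyTorch-style sketch of the target computation $y_t$. The `actor` and `target_critic` callables, the batch layout, and the hyperparameter values are assumptions for illustration rather than the paper's implementation, and the entropy term is placed exactly as written in Eq. (9).

```python
import torch


def critic_targets(batch, actor, target_critic, c, L, gamma=0.99, alpha=0.2):
    """Targets y_t of Eq. (9) for every head t, computed from one replay batch.
    Assumed interfaces (illustrative): actor(t, a_prev, s_next) -> (a', log pi(a'|a_prev, s_next));
    target_critic(t, s, a) -> Q_{target,t}(s, a). `c` is the persistence vector, L = lcm(c)."""
    s, a, r, s2 = batch["s"], batch["a"], batch["r"], batch["s2"]
    targets = []
    for t in range(L):
        nxt = (t + 1) % L
        a2, logp2 = actor(nxt, a, s2)           # a' ~ pi_{phi,(t+1) mod L}(. | a, s')
        # Gamma^c_{t+1,a}(a'): dimension k of a' takes effect only if (t+1) mod c_k == 0;
        # using nxt here is equivalent because every c_k divides L.
        mask = torch.tensor([1.0 if nxt % ck == 0 else 0.0 for ck in c])
        a_real = mask * a2 + (1.0 - mask) * a
        # Entropy term placed as in Eq. (9) of the text (outside the discounted Q term).
        y = r + gamma * target_critic(nxt, s2, a_real) - alpha * logp2
        targets.append(y.detach())              # one target per head; the same batch is reused for all t
    return targets
```

Because the loop over t reuses the same $(s, a, r, s')$ tensors, nothing about the batch needs to record the wall-clock step at which it was collected, which is the sample-reuse property highlighted above.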
5 Related Works

Action Repetition in RL Recent deep RL algorithms have adopted action repetition to improve learning efficiency by reducing control granularity. Static action repetition, which repeats the same action over a fixed k time steps, has been widely adopted in both on-policy [15] and off-policy [14] RL. Dynamic action repetition [9, 20] has also been explored to further improve the learning efficiency of online RL agents by adaptively changing the time scale of repeated actions per state. Recently, the notion of a single action persistence has been formalized by introducing persistent Bellman operators, and a corresponding offline RL algorithm has been proposed along with a heuristic method for finding a good persistence for empirical performance [13]. In contrast to the existing works that consider a single action persistence, we deal with arbitrarily many action persistences, where each decision variable has its own persistence, and our goal is to provide an efficient solution method for the given action persistence c rather than to find a proper c that speeds up learning.

Temporal Abstraction in RL The notion of action persistence is also naturally related to temporally abstract actions [16, 25] and the semi-MDP framework [2]. Specifically, persisting actions with multiple frequencies can be seen as a particular instance of a semi-Markov option as follows: the initiation set is the set of all states $I = S$, the internal policy is the c-persistent $\pi \in \Pi_c$, and the termination condition is defined as $\beta(h_t) = \mathbb{1}\{t \bmod L = 0\}$. Then, our off-policy learning scheme that exploits every transition sample to update every timestep's actor and critic in Eqs. (9)-(10) can also be understood as an intra-option learning [24] method in the constructed semi-Markov option framework. Still, the cardinality of the set of possible options grows exponentially with L, thus obtaining an optimal policy over the set of options would be computationally inefficient compared to AP-PI, which enjoys linear complexity with respect to L.

6 Experiments

We conducted a set of experiments in order to evaluate the effectiveness of AP-AC on high-dimensional tasks with different control frequencies. To the best of our knowledge, this work is the first to address multiple control frequencies in RL. Since there are no existing RL methods designed for multiple control frequencies, we take variants of SAC as baselines for performance comparison, which are listed as follows: (1) SAC: this agent is trained on the standard non-persistent environment, while being evaluated on the environment where the action persistence is enforced. This is intended to show the suboptimality of simply projecting a stationary policy to an action-persistent policy. (2) SAC in AP-Env: this agent is trained and evaluated on the action-persistent version of the environment, using the standard RL algorithm. This is to demonstrate the suboptimality of a stationary Markovian policy. (3) SAC-L: this agent takes the current observation, the past L actions, and the one-hot indicator of the current time step (t mod L), which are sufficient for optimal decision-making under the corresponding state augmentation approach discussed in Section 3. Still, this does not exploit the structure of the c-persistent optimal solution, such as the periodically recurrent policy/value representation, and can take in redundant information which is not fully compact. As a consequence, it is expected to show relatively weak performance. (4) SAC-L-compact: this agent takes the current observation, the last action which was actually taken, and the one-hot indicator of the current time step (t mod L), which is a compact representation of SAC-L. Still, it is unable to exploit every transition sample to update every timestep's actor and critic, while AP-AC is capable of doing so via Eqs. (9)-(10). Therefore, it is expected to be less sample-efficient than AP-AC. We conducted experiments on both continuous and discrete tasks, which are detailed in the following section. The experimental setups including hyperparameters can be found in Appendix G.
6.1 Task description

Mujoco tasks (continuous action space) In many real-world situations, complex robotic systems consist of a number of controllers whose operating control frequencies vary due to the system specification. In order to simulate this setting, we first conduct experiments on four OpenAI Gym continuous control tasks based on the Mujoco physics simulator [3, 26], where the controllable joints are modified to have different action persistences. Figure 3 depicts the detailed experimental setup of the action persistences for each task. For Hopper and Walker2d, the action persistences for the thigh(s), the leg(s), and the foot (feet) are set to 4, 2, and 1, respectively. For HalfCheetah, the action persistences for the thighs, the shins, and the feet are set to 4, 2, and 1, respectively. Finally, for Ant, the persistences for the hips and ankles are set to 4 and 2. We represent the policy $\pi_t$ as a Gaussian with a diagonal covariance matrix and a tanh-squashing function to bound the output within $[-1, 1]$ for each dimension [4].

Traffic light control (discrete action space) We also tested AP-AC on a traffic control task, a realistic discrete-action sequential decision scenario with action persistence: in a traffic system, the control frequency of each traffic light can be different, for example depending on the number of lanes and the speed limit. We use SUMO (Simulation of Urban MObility) [8] as the traffic simulator and SUMO-RL [1] for the environment interface. The specific instance we use is the implementation of 2X2GRID in SUMO-RL, which is depicted in Figure 4. The goal is to manipulate the traffic lights located at each junction to improve the overall traffic flow, where vehicles are generated randomly with a probability of 0.1 every second at the end of the road. The observation for each junction consists of the following four types of values: (1) the current traffic light status, represented by a 4D one-hot vector, (2) the elapsed time in the current traffic light status, normalized within [0, 1] (1D), (3) the density of all vehicles in each lane (8D), and (4) the density of stopped vehicles in each lane (8D). Therefore, the overall dimension of the observation space is 4 × 21 = 84. The action space is described in Figure 4b. The reward, in the range [0, 1], is defined as $\min_{i \in \{1,2,3,4\}} 1/(\text{waiting time of junction } i)$, with the goal of improving the traffic flow at the junction with the heaviest traffic. The length of an episode is 1000 steps. We use a factorized (relaxed) categorical distribution to represent the policy over the discrete action space, i.e. $\pi_{\phi, t}(a \mid \cdot) = \prod_{k=1}^4 \mathrm{Cat}(a^k \mid p^k_\phi(\cdot))$, where $p_\phi(\cdot)$ denotes a probability vector of size 4. Though the cardinality of the entire joint action space is $|A| = 4^4 = 256$, the input and output dimensions used to represent actions in the actor/critic networks are 4 × 4 = 16 (i.e. four one-hot vectors of size 4), since we assume fully factorized policies.

6.2 Results

We performed a deterministic evaluation of each algorithm every 10K time steps, i.e. the performance of the mean policy for the continuous control tasks and of the greedy policy with respect to the categorical probabilities for the traffic control task. The results are presented in Figure 5. Since SAC (colored in green) is optimized for the non-persistent environment, its naive projection to the c-persistent policy suffers from severe performance degradation. In contrast, SAC in AP-Env (colored in cyan) interacts with the c-persistent environment directly while optimizing a stationary Markovian policy.
Still, as discussed in Section 3, stationary Markovian policies can be suboptimal in general, which resulted in performing worse than AP-AC. SAC-L (colored in magenta) takes the past L-step actions and indicator of the current time step (t mod L), which is sufficient information for optimal c-persistent decision-making. Nonetheless, it does not exploit the structure of optimal c-persistent solution and can take redundant information since not all the past L-step actions are required for optimal decision-making, resulting in inefficient learning. This can be observed from the results that as the action dimension increases (Hopper (3)→Walker/Halfcheetah (6)→ Ant (8)→ SUMO2X2GRID (16)), the performance of SAC-L gets relatively worse. SAC-L-compact (colored in red) takes the last action actually taken, and indicator of the current time step (t mod L), which is also sufficient as well as compact information for optimal c-persistent decision-making, showing better performance than SAC-L in high-dimensional action tasks. Still, it is unable to exploit every transition sample to update every timestep’s actor and critic, which leads to learning inefficiency compared to AP-AC. Finally, AP-AC significantly outperforms all of the baseline algorithms in all benchmark domains except for Hopper where AP-AC and baselines are on par. The experimental results highlight the effectiveness of our method that directly optimizes a periodic non-stationary policy for the tasks with multiple control frequencies. 7 Discussion and Conclusion In this work, we formalized the notion of multiple action persistences in RL, which generalizes the result of [13] that deals with single action persistence. We introduced AP-PI, an efficient tabular planning algorithm for c-persistent policy for FA-MDP, and showed a formal analysis on its optimal convergence guarantee while it has only a marginal increase in the time complexity compared to the standard policy iteration. We then presented AP-AC, an off-policy deep reinforcement learning algorithm that scales, which directly exploits the structure of the optimal solution from the formal analysis on AP-PI. We empirically demonstrated that AP-AC significantly outperforms a number of strong baselines, both on continuous and discrete problems with action persistence. Extending the results of this work to multi-agent or hierarchical RL would be an interesting direction for future work. Broader Impact In recent years, reinforcement learning (RL) has shown remarkable successes in various areas, where most of their results are based on the assumption that all decision variables are simultaneously determined at every discrete time step. However, many real-world sequential decision-making problems involve multiple decision variables whose control frequencies are different by the domain requirement. In this situation, standard RL algorithms without considering the control frequency requirement may suffer from severe performance degradation as discussed in Section 3. This paper provides a theoretical and algorithmic foundation of how to address multiple control frequencies in RL, which enables RL to be applied to more complex and diverse real-world problems that involve decision variables with different frequencies. Therefore, this work would be beneficial for those who want to apply RL to various tasks that inherently have multiple control frequencies. 
As we provide a general-purpose methodology, we believe this work has little to do with a particular system failure or a particular data bias. On the other hand, this work could contribute to accelerating industrial adoption of RL, which has the potential to adversely affect employment due to automation. Acknowledgments This work was supported by the National Research Foundation (NRF) of Korea (NRF2019R1A2C1087634 and NRF-2019M3F2A1072238), the Ministry of Science and Information communication Technology (MSIT) of Korea (IITP No. 2020-0-00940, IITP No. 2019-0-00075, IITP No. 2017-0-01779 XAI), and POSCO.
1. What is the main contribution of the paper in the field of reinforcement learning? 2. What are the strengths of the proposed policy iteration algorithm, particularly in its theoretical support and practical implementation? 3. What are the weaknesses of the paper regarding its extension from prior works and experimental comprehensiveness? 4. How does the reviewer assess the relevance and significance of multiple action persistences in real-world applications? 5. Are there any suggestions for improving the empirical results or providing more qualitative examples to better demonstrate the advantages of the proposed method?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper introduces multiple action persistences in RL, where each factored action is repeated for a certain number of steps with its own frequency, and proposes a new policy iteration that guarantees contraction and convergence to the optimal policy. The proposed algorithm was applied to soft actor-critic (SAC), and the results on MuJoCo and traffic control domain show that the proposed method (AP-AC) outperforms the naive stationary policy baseline (SAC) that is unaware of action persistence and other baselines that are aware of action persistence. Strengths - The notion of multiple action persistences in FA-MDPs is a nice generalization of the previous work [11]. Although this is not a popular topic, it is worth discussing in that real-world applications may require such constraints (multiple action persistence). So, I think this work is relevant to the RL community. - The proposed policy iteration algorithm is novel and is supported by the theoretical results. Also, its application to advanced methods such as SAC is implemented well. - The empirical result looks good, and the baselines are well-designed. Weaknesses - Although the proposed policy iteration is novel, it feels a bit like a straightforward extension of the previous work on Persistent Fitted Q-iteration [11]. It would be good to motivate why it is a non-trivial extension from the previous work and what is the new challenges. - The experimental results could be more comprehensive. For instance, it would be interesting to see how action persistence affects the performance by varying it. It would be also interesting to show some qualitative examples (e.g., traffic control) highlighting the limitation of the naive approaches in contrast to the proposed method.
NIPS
Title Reinforcement Learning for Control with Multiple Frequencies Abstract Many real-world sequential decision problems involve multiple action variables whose control frequencies are different, such that actions take their effects at different periods. While these problems can be formulated with the notion of multiple action persistences in factored-action MDP (FA-MDP), it is non-trivial to solve them efficiently since an action-persistent policy constructed from a stationary policy can be arbitrarily suboptimal, rendering solution methods for the standard FA-MDPs hardly applicable. In this paper, we formalize the problem of multiple control frequencies in RL and provide its efficient solution method. Our proposed method, Action-Persistent Policy Iteration (AP-PI), provides a theoretical guarantee on the convergence to an optimal solution while incurring only a factor of |A| increase in time complexity during policy improvement step, compared to the standard policy iteration for FA-MDPs. Extending this result, we present ActionPersistent Actor-Critic (AP-AC), a scalable RL algorithm for high-dimensional control tasks. In the experiments, we demonstrate that AP-AC significantly outperforms the baselines on several continuous control tasks and a traffic control simulation, which highlights the effectiveness of our method that directly optimizes the periodic non-stationary policy for tasks with multiple control frequencies. 1 Introduction In recent years, reinforcement learning (RL) [23] has shown great promise in various domains, such as complex games [14, 21, 22] and high-dimensional continuous control [11, 19]. These problems have been mostly formulated as discrete-time Markov decision processes (MDPs) [17], assuming all decision variables are simultaneously determined at every time step. However, many real-world sequential decision-making problems involve multiple decision variables whose control frequencies are different by requirement. For example, when managing a financial portfolio of various assets, the frequency of rebalancing may need to be different for each asset, e.g. weekly for stock and monthly for real estate. Similarly, robotic systems typically consist of a number of controllers operating at different frequencies due to their system specification. Different control frequencies can be formulated with the notion of different action persistence in the discrete-time factored-action MDP (FA-MDP), where the base time interval is determined by the reciprocal of the least common multiple of the control frequencies. However, while algorithms for single action persistence has been proposed in order to improve the empirical performance of online [9] or offline [13] RL agents, to the best of our knowledge, addressing multiple action persistences in RL has been mostly unexplored due to its difficulty involved in the non-stationarity nature of the optimal policy. In this paper, we formalize the problem of multiple action persistences in FA-MDPs. We first show that any persistent policy induced by a stationary policy can be arbitrarily bad via a simple example. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. Then, we introduce efficient methods for FA-MDPs that directly optimize a periodic non-stationary policy while circumventing the exponential growth of time complexity with respect to the periodicity of action persistence. 
We first present a tabular planning algorithm, Action-Persistent Policy Iteration (AP-PI), which provides the theoretical guarantee on the convergence to an optimal solution while incurring only a factor of |A| time complexity increase in the policy improvement step compared to the policy iteration for standard FA-MDPs. We then present Action-Persistent Actor-Critic (AP-AC), a scalable learning algorithm for high-dimensional tasks via practical approximations to AP-PI, with a neural network architecture designed to facilitate the direct optimization of a periodic non-stationary policy. In the experiments, we demonstrate that AP-AC significantly outperforms a number of baselines based on SAC, from the results on modified Mujoco continuous control benchmarks [3, 26] and the SUMO traffic control simulation [8], which highlights the effectiveness of our method that directly optimizes the periodic non-stationary policy for tasks with multiple control frequencies. 2 Preliminaries We assume the environment modeled as discrete-time factored-action MDP (FA-MDP) M = 〈S,A, P,R, γ〉 where S is the set of states s, A is the set of vector-represented actions a = (a1, . . . , am), P (s′|s, a) = Pr(st+1 = s′|st = s, at = a) is the transition probability, R(s, a) ∈ R is the immediate reward for taking action a in state s, and γ ∈ [0, 1) is the discount factor. A policy π = (πt)t≥0 ∈ Π is a sequence of functions where πt : H → ∆(A) is a mapping from history ht = (s0, a0, . . . , st−1, at−1, st) to a probability distribution over A, πt(at|ht) = Pr(at|ht). We call πt Markovian if πt depends only on the last state st and call it stationary if πt does not depend on t. The policy πt is called deterministic if it maps from history to some action with probability 1 and can be denoted as πt : H → A. For simplicity, we will only consider the fully factorized policy πt(a1t , . . . , a m t |ht) = ∏m k=1 π k t (a k t |ht), which comprises the set Π of all fully factorized policies. The action-value function Qπt of policy π is defined as Qπt (s, a) = Eπ [ ∑∞ τ=t γ τ−tR(sτ , aτ )|st = s, at = a]. We consider the sequential decision problem where each action variable ak has its own control frequency. The notion of control frequency can be formulated in terms of action persistence with FA-MDPM by considering how frequently ak should be decided inM. Specifically, we let ck be the action persistence of k-th action variable ak, i.e. ak is decided every ck time step inM. The overall action persistence of the decision problem is then described as a vector c = (c1, . . . , cm) ∈ Nm. Finally, we define the c-persistent policy π as follows: Definition 1. (c-persistent policy) Let π = (πt)t≥0 ∈ Π be a policy. Given the action persistence vector c ∈ Nm, the c-persistent policy π̄c = (π̄c,t)t≥0 induced by π is a non-stationary policy where ∀t, π̄c,t(a|ht) = m∏ k=1 π̄kc,t(a k|ht) s.t. π̄kc,t(ak|ht) = { πkt (a k|ht) if t mod ck = 0 δak t−(t mod ck) (ak) otherwise (1) where δx(y) = 1 if x = y and 0 otherwise. Additionally, we define the set of c-persistent policies Πc = {(π̄c,t)t≥0 : π ∈ Π}. Our goal is to find the c-persistent policy π∗c that maximizes expected cumulative rewards: π̄∗c = arg max π̄∈Πc Eπ̄ [ ∞∑ t=0 γtR(st, at) ] (2) Remark. When c = (1, . . . , 1), we have Πc = Π. Thus, Eq. (2) is reduced to the standard objective function of FA-MDP, which is known to always have a deterministic and Markovian stationary policy as an optimal solution [17]. 
Also, the c-persistent policy of Definition 1 is different from the k-persistent policy [13] in that our definition considers multiple action persistences and is not limited by Markovian policy π while [13] considers single action persistence and a non-stationary policy induced only by a Markovian policy. The agent with c-persistent policy π̄c induced by π interacts with the environment as follows: At time step t = 0, all action variables are selected according to π̄c,0 = π0, i.e. (a10, . . . , a m 0 ) ∼∏m k=1 π k 0 (·|h0). Then, each action variable ak is kept persistent for the subsequent ck − 1 time steps. At time step t = ck, the action variable ak is set by π̄kc,t(·|ht) = πkt (·|ht), and continue into the next time step. In other words, the agent decides the value for ak only at the time steps t that are multiples of ck, i.e. t mod ck = 0. Figure 1b illustrates an example of c-persistent policy π̄c. For the remainder of this paper, we will omit the subscript c in π̄c for notational brevity if there is no confusion. All the proofs of theorems are available in the Appendix. 3 Action-Persistence in FA-MDPs Finding the optimal policy via Eq. (2) is non-trivial since any c-persistent policy naively constructed from a stationary policy can be suboptimal, unlike in standard FA-MDPs where there always exists a stationary optimal policy. To see this, consider the FA-MDP depicted in Figure 1, where there are two action variables with action persistences 2 and 3, respectively. In this example task, in order to obtain a positive reward, the agent should take an action a = (1, 1) at state s0 to go to the rightmost state. However, when we use this to form a stationary deterministic policy with π(s0) = (1, 1) and construct a c-persistent policy in a naive manner, we see that the policy can never reach s3 due to the inherent action persistence c = (2, 3): The action (1, 1) taken at s0 when t = 0 will persist at the next time step t = 1 in s1, making the agent go back to s0. Then, the agent will select an action (1, 1) again by π(s0), and this will be repeated forever. As a consequence, the agent visits only s0 and s1, and thus cannot reach the rightmost state. In contrast, the non-stationary deterministic policy π̄ described in Figure 1b reaches s3. Careful readers may notice that a c-persistent policy "projected" from some stationary but stochastic policy can eventually reach s3, but its expected return is clearly less than the non-stationary deterministic policy in Figure 1b, thus suboptimal. Therefore, obtaining a c-persistent policy by ignoring the action persistence requirement and solving the corresponding standard FA-MDP would not work. However, one can observe that the action persistence scheme is repeated periodically at every L , LCM(c1, . . . , cm) time steps. From this observation, a naive approach to solving Eq. (2) would be redefining the action space to have L-step actions as elements. After redefining the transition and reward function corresponding to these actions, standard solution methods for FA-MDP such as dynamic programming can be applied. Still, this approach not only has exponential time complexity with respect to L due to the increase in the size of action space, i.e. |A|L, but also can be suboptimal unless the underlying transition dynamics is nearly deterministic due to the open-loop decision-making nature of L-step actions [27]. 
A more principled approach is to consider an L-Markovian policy that memorizes which action was taken during the last L steps, but its straightforward conversion to the standard MDP via state augmentation still suffers from the exponential time complexity with respect to L. 3.1 Policy evaluation for c-persistent policy: c-persistent Bellman operators As discussed in the previous section, augmenting state or action space for storing L-step information results in exponential complexity with respect to L. Instead, we take a more direct approach that optimizes the c-persistent policy via composition of Bellman operators within the space of L-periodic, non-stationary and deterministic policies ΠL: ΠL = {π ∈ Π : ∀t, πt = πt+L and πt : A× S → A} (3) We will later prove that there always exists an optimal policy for Eq. (2), which is induced by π ∈ ΠL. The policy in ΠL will be denoted as π = (π0, . . . , πL−1) in the remainder of the paper. As the first step of the derivation of our algorithm, we define function Γct,a(a ′): Γct,a(a ′) = (ā1, . . . ām) where āk , { ak if t mod ck 6= 0 a′k if t mod ck = 0 (4) which projects action a′ into a feasible action at time step t if the action taken at t− 1 is assumed to be a. This is done by extracting dimensions of "effectable" action variables at time step t from a′ and extracting dimensions of "uneffectable" variables at time step t from a. For the L-periodic non-stationary deterministic policy π = (π0, . . . , πL−1) ∈ ΠL, we first define the one-step c-persistent Bellman operator T̄ πt induced by π. Specifically, for t ∈ {0, . . . , L− 1}, (T̄ πt Q)(s, a) , R(s, a) + γE s′∼P (s′|s,a) a′=πt+1(a,s ′) [ Q(s′,Γct+1,a(a ′)) ] (5) Then, we define an L-step c-persistent Bellman operator H̄πt by making the composition of L one-step c-persistent Bellman operators: (H̄π0 Q)(s, a) , (T̄ π0 T̄ π1 · · · T̄ πL−2T̄ πL−1Q)(s, a) (6) (H̄π1 Q)(s, a) , (T̄ π1 T̄ π2 · · · T̄ πL−1T̄ π0 Q)(s, a) ... (H̄πL−1Q)(s, a) , (T̄ πL−1T̄ π0 · · · T̄ πL−3T̄ πL−2Q)(s, a) The following theorem and corollary state that each L-step c-persistent Bellman operator H̄πt is a contraction mapping, and each of the fixed points Qπ̄0 , . . . , Q π̄ L−1 has a recursive relationship with another by one-step c-persistent Bellman operators T̄ π0 , . . . , T̄ πL−1. Theorem 1. For all t ∈ {0, . . . , L − 1}, the L-step c-persistent Bellman operators H̄πt is γLcontraction with respect to infinity norm, thus H̄πt Q π̄ t = Q π̄ t has the unique fixed point solution. In other words, for any Q0t : S ×A → R, define Qn+1t = H̄πt Qnt . Then, the sequence Qnt converges to t-th c-persistent value function of π̄ as n→∞. Corollary 1. Qπ̄t = T̄ πt Qπ̄(t+1) mod L holds for all t ∈ {0, . . . , L − 1}, thus c-persistent value functions can be obtained by repeatedly applying 1-step c-persistent backup in a L-cyclic manner. Note that the c-persistent value function of the policy π obtained by H̄πt , has the following form: Qπ̄t (s, a) =E ∀τ, sτ+1∼P (·|sτ ,aτ ) āτ+1=Γ c τ+1,āτ (πτ+1(āτ ,sτ+1)) [ ∞∑ τ=t γτ−tR(sτ , āτ ) ∣∣∣ st = s, āt = a] (7) which is obtained by unfolding the L-step c-persistent Bellman recursion from H̄πt . Here, one can easily show that every action taken at every time step t, which is projected by Γct,ā(·), abides by c-persistence, by mathematical induction. As a result, Qπ̄t (s, a) has the intended interpretable meaning, i.e. 
the expected sum of rewards that can be obtained when following the c-persistent policy π̄ which is induced by π, except for the initial action a, starting from the state s at time step t. Remark. The time complexity of applying the one-step c-persistent Bellman backup T̄ πt of Eq. (5) for a deterministic policy π is O(|S|2|A|) for each t, which is identical to the time complexity of the non-persistent standard Bellman backup. Now, we have a complete policy evaluation operator for c-persistent policy induced by L-periodic non-stationary deterministic policy π. 3.2 Policy improvement for c-persistent policy The remaining step for full policy iteration is policy improvement using Qπ̄t (s, a). Theorem 2. Given aL-periodic, non-stationary, and deterministic policy π = (π0, . . . , πL−1) ∈ ΠL, let Qπ̄t be the c-persistent value of π̄ denoted in Eq. (7). If we update the new policy π new = (πnew0 , . . . , π new L−1) ∈ ΠL by ∀t, a, s′, πnewt (a, s′) = arg max a′ Qπ̄t (s ′,Γct,a(a ′)) (8) then Qπ̄ new t (s, a) ≥ Qπ̄t (s, a) holds for all t, s, a. Remark. The time complexity of policy improvement step defined by Eq. (8) is O(|S||A|2) for each t, which has |A| times worse time complexity compared to the standard non-persistent policy improvement whose complexity is O(|S||A|). Note also that the new policy πnew is not necessarily c-persistent, i.e. πnew /∈ Πc is possible, but the performance of its inducing c-persistent policy is always improved. Finally, Theorems 1 and 2 lead us to a full algorithm, action-persistent policy iteration (AP-PI). AP-PI iterates between c-persistent policy evaluation by Eq. (6) and the c-persistent policy improvement of Eq. (8), and it is guaranteed to converge to the optimal c-persistent policy π̄∗ ∈ Πc. The pseudo-code of AP-PI can be found in Appendix D. Theorem 3. Starting from any π̄0 ∈ Πc induced by L-periodic non-stationary deterministic policy π0 ∈ ΠL, the sequence of value functions Qπ̄ n and the improved policies π̄n+1 induced by πn+1 converge to the optimal value function and the optimal c-persistent policy π̄∗, i.e. Qπ̄ ∗ t (s, a) = limn→∞Q π̄n t mod L(s, a) ≥ Qπ̄t (s, a) for any π̄ ∈ Πc, t ∈ N0, s ∈ S, and a ∈ A. Corollary 2. There always exists a c-persistent optimal policy π̄∗c , which is induced by a L-periodic, non-stationary, and deterministic policy π ∈ ΠL. The policy π̄∗ = (π̄∗0 , . . . , π̄ ∗ L−1) obtained by AP-PI is executed as follows. First, ā is initialized randomly. Then, at every step t, at = Γct,ā(π̄ ∗ t mod L(ā, st)) is executed, and ā is updated by ā← at. To the best of our knowledge, AP-PI is the first algorithm that addresses multiple action persistences, extending the single action persistence model that has been recently analyzed in [13]. AP-PI can be readily made scalable using the actor-critic architecture, to cope with large action spaces such as continuous actions, which we describe in the next section. This is a non-trivial extension of Persistent Fitted Q-iteration (PFQI) [13] which only applies to finite action spaces with single action persistence. 4 Action-Persistent Actor-Critic In this section, we present Action-Persistent Actor-Critic (AP-AC), an off-policy RL algorithm that can be applied to high-dimensional tasks via practical approximation to AP-PI. AP-AC extends Soft Actor-Critic (SAC) [4] to perform iterative optimization of the parametric models of an L-periodic non-stationary policy (i.e. actor), and its c-persistent action-value function (i.e. critic). We assume that the action persistence vector c = (c1, . . . 
, cm) is given as a part of the environment specification. As discussed in Section 3, the optimal c-persistent policy π̄ can be induced by an L-periodic nonstationary policy π = (π0, . . . , πL−1), where πt : A× S → ∆(A) for all t. The corresponding optimal value function is also represented by the L-periodic action-value functionQπ̄ = (Qπ̄0 , . . . , Q π̄ L−1) with Qπ̄t : S ×A → R for all t. We exploit this structure of the optimal solution in the neural network architecture. Specifically, the parameterized actor network πφ(ā, s) and the critic network Qθ(s, a) are designed to have L heads, whose t-th head represents πt and Qπ̄t respectively, thus sharing the parameters of the lower layers among different t. The t-th head of the critic recursively references the ((t+ 1) mod L)-th head for the target value, reflecting the result of Corollary 1. The c-persistent value function is trained to minimize the squared temporal difference error: JQ(θ) = 1 L L−1∑ t=0 E (s,a,r,s′)∼D a′∼πφ,(t+1) mod L(·|a,s′) [( Qθ,t(s, a)− yt(a, r, s′, a′) )2] (9) s.t. yt(a, r, s′, a′) = r + γQθ̄,(t+1) mod L ( s′,Γct+1,a(a ′) ) − α log πφ,(t+1) mod L(a′|a, s′), where D denotes the replay buffer, θ̄ is the parameters of the target network, and Γct,a(a′) is the action projection function defined in Eq. (4). This objective function is obtained from Eq. (5) with an (optional) entropy regularization term α log πφ,t(a′|a, s′), following the SAC formulation. Note that every term in Eq. (9) is agnostic to the actual time step when (s, a, r, s′) was collected, which is due to the way we calculate yt using Q(t+1) mod L and Γ. Thus every (s, a, r, s′) sample in D can be used to train Qθ,t regardless of t. The policy parameters are then optimized by maximizing: Jπ(φ) = 1 L L−1∑ t=0 E (s,a,r,s′)∼D a′∼πφ,t(·|a,s′) [ Qθ,t(s ′,Γct,a(a ′))− α log πφ,t(a′|a, s′) ] (10) where the (optional) α log πφ,t(a′|a, s′) term comes from SAC formulation. In essence, maximizing Jπ(φ) with respect to φ corresponds to c-persistent policy improvement by implementing Eq. (8) approximately. As with the case with critic, every term in Eq. (10) is agnostic to the actual time step t of when (s, a, r, s′) was collected, thus every sample in D can be used to train πφ,t for all t. The overall network architecture and the computational graph for training AP-AC are visualized in Figure 2. In order to obtain lower-variance gradient estimate ∇̂φJπ(φ), we adopt the exact reparameterization [7] for continuous action tasks and the relaxed reparameterization with Gumbelsoftmax [5, 12] for discrete action tasks. The rest of the design choices follows that of SAC such as the clipped double Q trick and soft target update. The pseudo-code for AP-AC can be found in Appendix E. 5 Related Works Action Repetition in RL Recent deep RL algorithms have adopted action repetition to improve learning efficiency by reducing control granularity. Static action repetition, which repeats the same action over a fixed k time step, has been widely adopted in both on-policy [15] and off-policy [14] RL. Dynamic action repetition [9, 20] has also been explored to further improve the learning efficiency of online RL agents by adaptively changing the time scale of repeating actions per state. Recently, the notion of a single action-persistence has been formalized by introducing persistent Bellman operators, and its corresponding offline RL algorithm has been proposed along with a heuristic method for finding good persistence for empirical performance [13]. 
In contrast to the existing works that consider a single action-persistence, we deal with arbitrarily multiple action-persistence where each decision variable has its own persistence, and our goal is to provide an efficient solution method for the given action persistence c rather than finding a proper c to speed up learning. Temporal Abstraction in RL The notion of action persistence is also naturally related to temporally abstract actions [16, 25] and semi-MDP framework [2]. Specifically, persisting actions with multiple frequencies can be seen as a particular instance of a semi-Markov option as follows: initiation set is the set of all states I = S, an internal policy is c-persistent π ∈ Πc, and the termination condition is defined as β(ht) = 1{t mod L=0}. Then, our off-policy learning scheme that exploits every transition sample to update every timestep’s actor and critic in Eq. (9-10) can also be understood as an intra-option learning [24] method in the constructed semi-Markov option framework. Still, the cardinality of the set of possible options has an exponential growth with respect to L, thus obtaining an optimal policy over the set of options will be computationally inefficient compared to AP-PI that enjoys a linear complexity with respect to L. 6 Experiments We conducted a set of experiments in order to evaluate the effectiveness of AP-AC on highdimensional tasks with different control frequencies. To the best of our knowledge, this work is the first to address multiple control frequencies in RL. Since there are no existing RL methods designed for multiple control frequencies, we take the variants of SAC as baselines for performance comparison, which are listed as follows: (1) SAC: this agent is trained on the standard non-persistent environment, while being evaluated on the environment where the action-persistence is enforced. This is intended to show the suboptimality of simply projecting a stationary policy to an action-persistent policy. (2) SAC in AP-Env: this agent is trained and evaluated on the action-persistent version of the environment, using the standard RL algorithm. This is to demonstrate the suboptimality of a stationary Markovian policy. (3) SAC-L: this agent takes a current observation, past L actions, and the one-hot indicator of the current time step (t mod L), which are sufficient for the optimal decision-making for the corresponding state augmentation approach discussed in Section 3. Still, this does not exploit the structure of the c-persistent optimal solution such as periodically recurrent policy/value representation and can take redundant information which is not fully compact. As a consequence, it is expected to show relatively weak performance. (4) SAC-L-compact: this agent takes a current observation, the last action which was actually taken, and the one-hot indicator of the current time step (t mod L), which is a compact representation of SAC-L. Still, this is unable to exploit every transition sample to update every timestep’s actor and critic, while AP-AC is capable of doing it in Eq. (9-10). Therefore, it is expected to be less sample-efficient than AP-AC. We conducted experiments on both continuous and discrete tasks, which will be detailed in the following section. The experimental setups including hyperparameters can be found in Appendix G. 
6.1 Task description Mujoco tasks (continuous action space) In many real-world situations, complex robotic systems consist of a number of controllers whose operating control frequencies vary due to the system specification. In order to simulate this setting, we first conduct experiments on four OpenAI Gym continuous control tasks based on the Mujoco physics simulator [3, 26], where the controllable joints are modified to have different action persistences. Figure 3 depicts the detailed action-persistence setup for each task. For Hopper and Walker2d, the action persistences for the thigh(s), the leg(s), and the foot (feet) are set to 4, 2, and 1, respectively. For HalfCheetah, the action persistences for the thighs, the shins, and the feet are set to 4, 2, and 1, respectively. Finally, for Ant, the persistences for the hips and the ankles are set to 4 and 2. We represent the policy π_t as a Gaussian with a diagonal covariance matrix, followed by a tanh squashing function to bound the output within [−1, 1] for each dimension [4]. Traffic light control (discrete action space) We also tested AP-AC on a traffic control task, a realistic discrete-action sequential decision-making scenario with action persistence: in a traffic system, the control frequency of each traffic light can differ, for example depending on the number of lanes and the speed limit. We use SUMO (Simulation of Urban MObility) [8] as the traffic simulator and SUMO-RL [1] for the environment interface. The specific instance we use is the implementation of 2X2GRID in SUMO-RL, which is depicted in Figure 4. The goal is to manipulate the traffic lights located at each junction to improve the overall traffic flow, where vehicles are generated randomly at the ends of the roads with probability 0.1 per second. The observation for each junction consists of the following four types of values: (1) the current traffic light status, represented as a 4D one-hot vector, (2) the elapsed time since the current traffic light status, normalized to [0, 1] (1D), (3) the density of all vehicles in each lane (8D), and (4) the density of stopped vehicles in each lane (8D). Therefore, the overall dimension of the observation space is 4 × 21 = 84. The action space is described in Figure 4b. The reward, in the range [0, 1], is defined as min_{i∈{1,2,3,4}} 1/(waiting time of junction i), with the goal of improving the traffic flow at the junction with the heaviest traffic. The length of each episode is 1000 steps. We use a factorized (relaxed) categorical distribution to represent the policy over the discrete action space, i.e., π_{φ,t}(a | ·) = ∏_{k=1}^{4} Cat(a^k | p^k_φ(·)), where each p^k_φ(·) denotes a probability vector of size 4. Although the cardinality of the entire joint action space is |A| = 4^4 = 256, the input and output dimensions used to represent actions in the actor/critic networks are 4 × 4 = 16 (i.e., four one-hot vectors of size 4), since we assume fully factorized policies. 6.2 Results We performed a deterministic evaluation of each algorithm every 10K time steps, i.e., evaluating the performance of the mean policy for continuous control and of the greedy policy with respect to the categorical probabilities for traffic control. The results are presented in Figure 5. Since SAC (colored in green) is optimized for the non-persistent environment, its naive projection to the c-persistent policy suffers from severe performance degradation. In contrast, SAC in AP-Env (colored in cyan) interacts with the c-persistent environment directly while optimizing a stationary Markovian policy.
Still, as discussed in Section 3, stationary Markovian policies can be suboptimal in general, which results in worse performance than AP-AC. SAC-L (colored in magenta) takes the past L actions and the indicator of the current time step (t mod L), which is sufficient information for optimal c-persistent decision-making. Nonetheless, it does not exploit the structure of the optimal c-persistent solution and can take in redundant information, since not all of the past L actions are required for optimal decision-making, resulting in inefficient learning. This can be observed in the results: as the action dimension increases (Hopper (3) → Walker2d/HalfCheetah (6) → Ant (8) → SUMO 2X2GRID (16)), the performance of SAC-L becomes relatively worse. SAC-L-compact (colored in red) takes the last action actually executed and the indicator of the current time step (t mod L), which is also sufficient, as well as compact, information for optimal c-persistent decision-making, and it shows better performance than SAC-L on high-dimensional action tasks. Still, it is unable to exploit every transition sample to update every timestep's actor and critic, which leads to learning inefficiency compared to AP-AC. Finally, AP-AC significantly outperforms all of the baseline algorithms in all benchmark domains except for Hopper, where AP-AC and the baselines are on par. The experimental results highlight the effectiveness of our method, which directly optimizes a periodic non-stationary policy for tasks with multiple control frequencies. 7 Discussion and Conclusion In this work, we formalized the notion of multiple action persistences in RL, which generalizes the result of [13] that deals with a single action persistence. We introduced AP-PI, an efficient tabular planning algorithm for c-persistent policies in FA-MDPs, and provided a formal analysis of its convergence to the optimal c-persistent policy, showing that it incurs only a marginal increase in time complexity compared to standard policy iteration. We then presented AP-AC, a scalable off-policy deep reinforcement learning algorithm that directly exploits the structure of the optimal solution obtained from the formal analysis of AP-PI. We empirically demonstrated that AP-AC significantly outperforms a number of strong baselines on both continuous and discrete problems with action persistence. Extending the results of this work to multi-agent or hierarchical RL would be an interesting direction for future work. Broader Impact In recent years, reinforcement learning (RL) has shown remarkable successes in various areas, where most of the results are based on the assumption that all decision variables are determined simultaneously at every discrete time step. However, many real-world sequential decision-making problems involve multiple decision variables whose control frequencies differ by domain requirement. In this situation, standard RL algorithms that do not consider the control-frequency requirement may suffer from severe performance degradation, as discussed in Section 3. This paper provides a theoretical and algorithmic foundation for addressing multiple control frequencies in RL, which enables RL to be applied to more complex and diverse real-world problems that involve decision variables with different frequencies. Therefore, this work would be beneficial for those who want to apply RL to various tasks that inherently have multiple control frequencies.
As we provide a general-purpose methodology, we believe this work has little to do with a particular system failure or a particular data bias. On the other hand, this work could contribute to accelerating the industrial adoption of RL, which has the potential to adversely affect employment due to automation. Acknowledgments This work was supported by the National Research Foundation (NRF) of Korea (NRF-2019R1A2C1087634 and NRF-2019M3F2A1072238), the Ministry of Science and Information Communication Technology (MSIT) of Korea (IITP No. 2020-0-00940, IITP No. 2019-0-00075, IITP No. 2017-0-01779 XAI), and POSCO.
1. What is the focus and contribution of the paper regarding practical algorithms for solving MDPs? 2. What are the strengths of the proposed approach, particularly in its simplicity and theoretical grounding? 3. What are the weaknesses of the paper, especially regarding its potential applications, experimental limitations, and choice of baselines? 4. Do you have any concerns or suggestions regarding the paper's background and related work?
Summary and Contributions Strengths Weaknesses
Summary and Contributions This paper introduces a practical algorithm for solving a particular class of problems where the MDP has a factorized action space, each factor of which is controlled at a fixed frequency. The algorithm derived is theoretically sound and to the best of my knowledge is novel. The authors conducted experiments on simulated environments that show their method performing well against a few baselines. Strengths This paper introduces a cute idea for solving MDPs with factored persistent actions. On this problem, the proposed approach is very simple to work with and seems to be quite competitive. The paper is also theoretically grounded. Weaknesses - It is not very clear whether the proposed approach has many potential areas of application. The setting of the problem seems somewhat restrictive. That being said, I am not an expert in this area. - In all experiments conducted, the least common multiple is small. It is unclear whether the proposed method could generalize to much higher values of the least common multiple. In addition, it is not clear whether the proposed algorithm would remain competitive against SAC-L in settings where L is large, since in this case the large number of actor and critic heads could hurt performance. If the control frequencies are, say, (3, 5, 7), the least common multiple would be very high in value. It would be very interesting to see how well the proposed approach as well as the baselines handle this scenario. - "SAC" is a very weak baseline in this setting as it is trained essentially on a different environment. I think this baseline should not be introduced in the main text as it could mislead the readers. It is, however, good to be included in the appendix. - The paper lacks a bit of background in that it does not talk enough about related work.
NIPS
Title Provably Efficient Causal Reinforcement Learning with Confounded Observational Data Abstract Empowered by neural networks, deep reinforcement learning (DRL) achieves tremendous empirical success. However, DRL requires a large dataset by interacting with the environment, which is unrealistic in critical scenarios such as autonomous driving and personalized medicine. In this paper, we study how to incorporate the dataset collected in the offline setting to improve the sample efficiency in the online setting. To incorporate the observational data, we face two challenges. (a) The behavior policy that generates the observational data may depend on unobserved random variables (confounders), which affect the received rewards and transition dynamics. (b) Exploration in the online setting requires quantifying the uncertainty given both the observational and interventional data. To tackle such challenges, we propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner. DOVI explicitly adjusts for the confounding bias in the observational data, where the confounders are partially observed or unobserved. In both cases, such adjustments allow us to construct the bonus based on a notion of information gain, which takes into account the amount of information acquired from the offline setting. In particular, we prove that the regret of DOVI is smaller than the optimal regret achievable in the pure online setting when the confounded observational data are informative upon the adjustments. 1 Introduction Empowered by the breakthrough in neural networks, deep reinforcement learning (DRL) achieves significant empirical successes in various scenarios [19, 23, 36, 37]. Learning an expressive function approximator necessitates collecting a large dataset. Specifically, in the online setting, it requires the agent to interact with the environment for a large number of steps.
For example, to learn a human-level policy for playing Atari games, the agent has to interact with a simulator for more than 10^8 steps [13]. However, in most scenarios, we do not have access to a simulator that allows for trial and error without any cost. Meanwhile, in critical scenarios, e.g., autonomous driving and personalized medicine, trial and error in the real world is unsafe and even unethical. As a result, it remains challenging to apply DRL to more scenarios. To bypass such a barrier, we study how to incorporate the dataset collected offline, namely the observational data, to improve the sample efficiency of RL in the online setting [21]. In contrast to the interventional data collected online in possibly expensive ways, observational data are often abundantly available in various scenarios. For example, in autonomous driving, we have access to trajectories generated by the drivers. As another example, in personalized medicine, we have access to electronic health records from doctors. However, to incorporate the observational data in a provably efficient way, we have to address two challenges.
• The observational data are possibly confounded. Specifically, there often exist unobserved random variables, namely confounders, that causally affect the agent and the environment at the same time. In particular, the policy used to generate the observational data, namely the behavior policy, possibly depends on the confounders. Meanwhile, the confounders possibly affect the received rewards and the transition dynamics. In the example of autonomous driving [9, 22], the drivers may be affected by complicated traffic or poor road design, resulting in traffic accidents even without misconduct. The complicated traffic and poor road design subsequently affect both the action of the drivers and the outcome. Therefore, it is unclear from the observational data whether the accidents are due to the actions adopted by the drivers. Agents trained with such observational data may be unwilling to take any actions under complicated traffic, jeopardizing the safety of passengers. In the example of personalized medicine [8, 29], the patients may not be compliant with prescriptions and instructions, which subsequently affects both the treatment and the outcome. As another example, the doctor may prescribe medicine to patients based on the patients' socioeconomic status (which could be inferred by the doctor through interacting with the patients). Meanwhile, socioeconomic status affects the patients' health condition and subsequently plays the role of the confounder. In both scenarios, such confounders may be unavailable due to privacy or ethical concerns. Such a confounding issue makes the observational data uninformative and even misleading for identifying and estimating the causal effect, which is crucial for decision-making in the online setting. In all the examples, it is unclear from the observational data whether the outcome is due to the actions adopted.
• Even without the confounding issue, it remains unclear how the observational data may facilitate exploration in the online setting, which is the key to the sample efficiency of RL. At the core of exploration is uncertainty quantification. Specifically, quantifying the uncertainty that remains given the dataset collected up to the current step, including the observational data and the interventional data, allows us to construct a bonus.
When incorporated into the reward, such a bonus encourages the agent to explore the less visited state-action pairs with more uncertainty. In particular, constructing such a bonus requires quantifying the amount of information carried over by the observational data from the offline setting, which also plays a key role in characterizing the regret, especially how much the observational data may facilitate reducing the regret. Uncertainty quantification becomes even more challenging when the observational data are confounded. Specifically, as the behavior policy depends on the confounders, there is a mismatch between the data generating processes in the offline setting and the online setting. As a result, it remains challenging to quantify how much information carried over from the offline setting is useful for the online setting, as the observational data are uninformative and even misleading due to the confounding issue. Contribution. To study causal reinforcement learning, we propose a class of Markov decision processes (MDPs), namely confounded MDPs, which captures the data generating processes in both the offline setting and the online setting as well as their mismatch due to the confounding issue. In particular, we study two tractable cases of confounded MDPs in the episodic setting with linear function approximation [7, 16, 42, 43]. • In the first case, the confounders are partially observed in the observational data. Assuming that an observed subset of the confounders satisfies the backdoor criterion [32], we propose the deconfounded optimistic value iteration (DOVI) algorithm, which explicitly corrects for the confounding bias in the observational data using the backdoor adjustment. • In the second case, the confounders are unobserved in the observational data. Assuming that there exists an observed set of intermediate states that satisfies the frontdoor criterion [32], we propose an extension of DOVI, namely DOVI+, which explicitly corrects for the confounding bias in the observational data using the composition of two backdoor adjustments. We remark that DOVI+ follows the same principle of design as DOVI and defer the discussion of DOVI+ to §A. In both cases, the adjustments allow DOVI and DOVI+ to incorporate the observational data into the interventional data while bypassing the confounding issue. It further enables estimating the causal effect of a policy on the received rewards and the transition dynamics with enlarged effective sample size. Moreover, such adjustments allow us to construct the bonus based on a notion of information gain, which takes into account the amount of information carried over from the offline setting. In particular, we prove that DOVI and DOVI+ attain the ∆H · √ d3H3T -regret up to logarithmic factors, where d is the dimension of features, H is the length of each episode, and T = HK is the number of steps taken in the online setting, where K is the number of episodes. Here the multiplicative factor ∆H > 0 depends on d, H , and a notion of information gain that quantifies the amount of information obtained from the interventional data additionally when given the properly adjusted observational data. When the observational data are unavailable or uninformative upon the adjustments, ∆H is a logarithmic factor. Correspondingly, DOVI and DOVI+ attain the optimal√ T -regret achievable in the pure online setting [7, 16, 42, 43]. 
When the observational data are sufficiently informative upon the adjustments, ∆H decreases towards zero as the effective sample size of the observational data increases, which quantifies how much the observational data may facilitate exploration in the online setting. Related Work. Our work is related to the study of causal bandit [20]. The goal of causal bandit is to obtain the optimal intervention in the online setting where the data generating process is described by a causal diagram. The previous study establishes causal bandit algorithms in the online setting [26, 34], the offline setting [17, 18], and a combination of both settings [11]. In contrast to this line of work, we study causal RL in a combination of the online setting and the offline setting. Causal RL is more challenging than causal bandit, which corresponds toH = 1, as it involves the transition dynamics and is more challenging in exploration. See §B for a detailed literature review on causal bandit. Our work is related to the study of causal RL considered in various settings. [45] propose a modelbased RL algorithm that solves dynamic treatment regimes (DTR), which involve a combination of the online setting and the offline setting. Their algorithm hinges on the analysis of sensitivity [3, 27, 38, 44], which constructs a set of feasible models of the transition dynamics based on the confounded observational data. Correspondingly, their algorithm achieves exploration by choosing an optimistic model of the transition dynamics from such a feasible set. In contrast, we propose a model-free RL algorithm, which achieves exploration through the bonus based on a notion of information gain. It is worth mentioning that the assumption of [45] is weaker than ours as theirs does not allow for identifying the causal effect. As a result of partial identification, the regret of their algorithm is the same as the regret in the pure online setting as T → +∞. In contrast, our work instantiates the following framework in handling confounders for reinforcement learning. (a) First, we propose the estimation equation based on the observations, which identifies the causal effect of actions on the cumulative reward. (b) Second, we conduct point estimation and uncertainty quantification based on observations and the estimation equation. (c) Finally, we conduct exploration based on the uncertainty quantification and achieve the regret reduction in the online setting. Consequently, the regret of our algorithm is smaller than the regret in the pure online setting by a multiplicative factor for all T . [25] propose a model-based RL algorithm in a combination of the online setting and the offline setting. Their algorithm uses a variational autoencoder (VAE) for estimating a structural causal model (SCM) based on the confounded observational data. In particular, their algorithm utilizes the actor-critic algorithm to obtain the optimal policy in such an SCM. However, the regret of their algorithm remains unclear. [6] propose a model-based RL algorithm in the pure online setting that learns the optimal policy in a partially observable Markov decision process (POMDP). The regret of their algorithm also remains unclear. [35] utilize generative adversarial reinforcement learning to reconstruct transition dynamics with confounder, and [40] propose a model-based approach for POMDP based on adjustment with proxy variables. [30] consider offpolicy policy evaluation under one-decision confounding and constructs worst-case bounds with theoretical guarantee. 
[4] utilizes states and actions as proxy variables to tackle off-policy policy evaluation with confounders. In contrast, our work utilizes backdoor and frontdoor adjustments to handle confounded observation. 2 Confounded Reinforcement Learning Structural Causal Model. We denote a structural causal model (SCM) [32] by a tuple (A,B, F, P ). Here A is the set of exogenous (unobserved) variables, B is the set of endogenous (observed) variables, F is the set of structural functions capturing the causal relations, which determines an endogenous variable v ∈ B based on the other exogenous and endogenous variables, and P is the distribution of all the exogenous variables. We say that a pair of variables Y and Z are confounded by a variable W if they are both caused by W . An intervention on a set of endogenous variables X ⊆ B assigns a value x to X regardless of the other exogenous and endogenous variables as well as the structural functions. We denote by do(X = x) the intervention on X and write do(x) if it is clear from the context. Similarly, a stochastic intervention [10, 28] on a set of endogenous variables X ⊆ B assigns a distribution p to X regardless of the other exogenous and endogenous variables as well as the structural functions. We denote by do(X ∼ p) the stochastic intervention on X . Confounded Markov Decision Process. To characterize a Markov decision process (MDP) in the offline setting with observational data, which are possibly confounded, we introduce an SCM, where the endogenous variables are the states {sh}h∈[H], actions {ah}h∈[H], and rewards {rh}h∈[H]. Let {wh}h∈[H] be the confounders. In §3, we assume that the confounders are partially observed, while in §A, we assume that they are unobserved. The set of structural functions F consists of the transition of states sh+1 ∼ Ph(· | sh, ah, wh), the transition of confounders wh ∼ P̃h(· | sh), the behavior policy ah ∼ νh(· | sh, wh), which depends on the confounder wh, and the reward function rh(sh, ah, wh). See Figure 1 for the causal diagram that describes such an SCM. Here ah and sh+1 are confounded by wh in addition to sh. We denote such a confounded MDP by the tuple (S,A,W, H,P, r), where H is the length of an episode, S, A, andW are the spaces of states, actions, and confounders, respectively, r = {rh}h∈[H] is the set of reward functions, and P = {Ph, P̃h}h∈H is the set of transition kernels. In the sequel, we assume without loss of generality that rh takes value in [0, 1] for all h ∈ [H]. In the online setting that allows for intervention, we assume that the confounders {wh}h∈[H] are unobserved. A policy π = {πh}h∈[H] induces the stochastic intervention do(a1 ∼ π1(· | s1), . . . , aH ∼ πH(· | sH)), which does not depend on the confounders. In particular, an agent interacts with the environment as follows. At the beginning of the k-th episode, the environment arbitrarily selects an initial state sk1 and the agent selects a policy π k = {πkh}h∈[H]. At the h-th step of the k-th episode, the agent observes the state skh and takes the action a k h ∼ πkh(· | skh). The environment randomly selects the confounder wkh ∼ P̃h(· | skh), which is unobserved, and the agent receives the reward rkh = rh(s k h, a k h, w k h). The environment then transits into the next state skh+1 ∼ Ph(· | skh, akh, wkh). 
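To make the two data-generating processes concrete, the following sketch simulates a single step of the confounded offline process (where the behavior policy depends on the unobserved confounder) and of the online interventional process on a toy instance. All probabilities are invented for illustration and the code is not from the paper; the point is simply that frequency counts over the confounded offline samples converge to a conditional distribution that differs from the interventional one discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-step instance: binary confounder w, two actions, two next states.
# All probabilities are made-up numbers for illustration only.
p_w = np.array([0.7, 0.3])                   # confounder distribution P(w | s) at a fixed state s
behavior = np.array([[0.9, 0.1],             # nu(a | s, w=0): behavior policy depends on w
                     [0.2, 0.8]])            # nu(a | s, w=1)
trans = np.array([[[0.8, 0.2], [0.5, 0.5]],  # P(s' | s, a, w=0), rows indexed by a
                  [[0.3, 0.7], [0.1, 0.9]]]) # P(s' | s, a, w=1)

def offline_step():
    """Confounded behavior process: w is unobserved, and a depends on w."""
    w = rng.choice(2, p=p_w)
    a = rng.choice(2, p=behavior[w])
    s_next = rng.choice(2, p=trans[w, a])
    return a, s_next

def online_step(a):
    """Interventional process do(a): w is drawn independently of the chosen a."""
    w = rng.choice(2, p=p_w)
    return rng.choice(2, p=trans[w, a])

n = 100_000
offline = np.array([offline_step() for _ in range(n)])
mask = offline[:, 0] == 0                                  # offline samples with a = 0
p_hat_conditional = np.bincount(offline[mask, 1], minlength=2) / mask.sum()

online = np.array([online_step(0) for _ in range(n)])
p_hat_do = np.bincount(online, minlength=2) / n

print("offline  P(s' | s, a=0)     =", p_hat_conditional)  # roughly [0.76, 0.24]
print("online   P(s' | s, do(a=0)) =", p_hat_do)           # roughly [0.65, 0.35]
```

The gap between the two estimates under these made-up numbers is exactly the confounding bias discussed next, and it is what the backdoor adjustment in Section 3 removes.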
For a policy π = {πh}h∈H , which does not depend on the confounders {wh}h∈[H], we define the value function V π = {V πh }h∈[H] as follows, V πh (s) = Eπ [ H∑ j=h rj(sj , aj , wj) ∣∣∣∣ sh = s], ∀h ∈ [H], (2.1) where we denote by Eπ the expectation with respect to the confounders {wj}Hj=h and the trajectory {(sj , aj)}Hj=h, starting from the state sj = s and following the policy π. Correspondingly, we define the action-value function Qπ = {Qπh}h∈[H] as follows, Qπh(s, a) = Eπ [ H∑ j=h rj(sj , aj , wj) ∣∣∣∣ sh = s,do(ah = a)], ∀h ∈ [H]. (2.2) We assess the performance of an algorithm using the regret against the globally optimal policy π∗ = {π∗h}h∈[H] in hindsight after K episodes, which is defined as follows, Regret(T ) = max π K∑ k=1 ( V π1 (s k 1)− V π k 1 (s k 1) ) = K∑ k=1 ( V π ∗ 1 (s k 1)− V π k 1 (s k 1) ) . (2.3) Here T = HK is the total number of steps. Our goal is to design an algorithm that minimizes the regret defined in (2.3), where π∗ does not depend on the confounders {wh}h∈[H]. In the online setting that allows for intervention, it is well understood how to minimize such a regret [2, 14–16]. However, it remains unclear how to efficiently utilize the observational data obtained in the offline setting, which are possibly confounded. In realworld applications, e.g., autonomous driving and personalized medicine, such observational data are often abundant, whereas intervention in the online setting is often restricted. We refer to §C for a comparison between the confounded MDP and other extensions of MDP, including the dynamics treatment regime (DTR), partially observable MDP (POMDP), and contextual MDP (CMDP). Why is Incorporating Confounded Observational Data Challenging? Straightforwardly incorporating the confounded observational data into an online algorithm possibly leads to an undesirable regret due to the mismatch between the online and offline data generating processes. In particular, due to the existence of the confounders {wh}h∈[H], which are partially observed (§3) or unobserved (§A), the conditional probability P(sh+1 | sh, ah) in the offline setting is different from the causal effect P(sh+1 | sh,do(ah)) in the online setting [33]. More specifically, it holds that P(sh+1 | sh, ah) = Ewh∼P̃h(· | sh) [ Ph(sh+1 | sh, ah, wh) · νh(ah | sh, wh) ] Ewh∼P̃h(· | sh) [ νh(ah | sh, wh) ] , P ( sh+1 ∣∣ sh,do(ah)) = Ewh∼P̃h(· | sh)[Ph(· | sh, ah, wh)]. In other words, without proper covariate adjustments [32], the confounded observational data may be not informative for estimating the transition dynamics and the associated action-value function in the online setting. To this end, we propose an algorithm that incorporates the confounded observational data in a provably efficient manner. Moreover, our analysis quantifies the amount of information carried over by the confounded observational data from the offline setting and to what extent it helps reducing the regret in the online setting. 3 Algorithm and Theory for Partially Observed Confounder In this section, we propose the Deconfounded Optimistic Value Iteration (DOVI) algorithm. DOVI handles the case where the confounders are unobserved in the online setting but are partially observed in the offline setting. We then characterize the regret of DOVI. We defer the extension of DOVI, namely DOVI+, to §A which handles the case where the confounders are unobserved in both the online setting and the offline setting. 3.1 Algorithm Backdoor Adjustment. 
In the online setting that allows for intervention, the causal effect of ah on sh+1 given sh, that is, P(sh+1 | sh,do(ah)), plays a key role in the estimation of the action-value function. Meanwhile, the confounded observational data may not allow us to identify the causal effect P(sh+1 | sh,do(ah)) if the confounder wh is unobserved. However, if the confounder wh is partially observed in the offline setting, the observed subset uh of wh allows us to identify the causal effect P(sh+1 | sh,do(ah)), as long as uh satisfies the following backdoor criterion. Assumption 3.1 (Backdoor Criterion [32, 33]). In the SCM defined in §2 and its induced directed acyclic graph (DAG), for all h ∈ [H], there exists an observed subset uh of wh that satisfies the backdoor criterion, that is, • the elements of uh are not the descendants of ah, and • conditioning on sh, the elements of uh d-separate every path between ah and sh+1, rh that has an incoming arrow into ah. See Figure 2 for an example that satisfies the backdoor criterion. In particular, we identify the causal effect P(sh+1 | sh,do(ah)) as follows. Proposition 3.2 (Backdoor Adjustment [32]). Under Assumption 3.1, it holds for all h ∈ [H] that P ( sh+1 ∣∣ sh,do(ah)) = Euh∼P(· | sh)[P(sh+1 | sh, ah, uh)], E [ rh(sh, ah, wh) ∣∣ sh,do(ah)] = Euh∼P(· | sh)[E[rh(sh, ah, wh) ∣∣ sh, ah, uh]]. Here (sh+1, sh, ah, uh) follows the SCM defined in §2, which generates the confounded observational data. Proof. See [32] for a detailed proof. With a slight abuse of notation, we write P(sh+1 | sh, ah, uh) as Ph(sh+1 | sh, ah, uh) and P(uh | sh) as P̃h(uh | sh), since they are induced by the SCM defined in §2. In the sequel, we define U the space of observed state uh and write rh = rh(sh, ah, wh) for notational simplicity. Backdoor-Adjusted Bellman Equation. We now formulate the Bellman equation for the confounded MDP. It holds for all (sh, ah) ∈ S ×A that Qπh(sh, ah) = Eπ [ H∑ j=h rj(sj , aj , uj) ∣∣∣∣ sh,do(ah)] = E[rh ∣∣ sh,do(ah)]+ Esh+1[V πh+1(sh+1)], where Esh+1 denotes the expectation with respect to sh+1 ∼ P(· ∣∣ sh,do(ah)). Here E[rh ∣∣ sh,do(ah)] and P(· ∣∣ sh,do(ah)) are characterized in Proposition 3.2. In the sequel, we define the following transition operator and counterfactual reward function, (PhV )(sh, ah) = Esh+1∼P(· | sh,do(ah)) [ V (sh+1) ] , ∀V : S 7→ R, (sh, ah) ∈ S ×A, (3.1) Rh(sh, ah) = E [ rh ∣∣ sh,do(ah)], ∀(sh, ah) ∈ S ×A. (3.2) We have the following Bellman equation, Qπh(sh, ah) = Rh(sh, ah) + (PhV πh+1)(sh, ah), ∀h ∈ [H], (sh, ah) ∈ S ×A. (3.3) Correspondingly, the Bellman optimality equation takes the following form, Q∗h(sh, ah) = Rh(sh, ah) + (PhV ∗h+1)(sh, ah), V ∗h (sh) = max ah∈A Q∗h(sh, ah), (3.4) which holds for all h ∈ [H] and (sh, ah) ∈ S × A. Such a Bellman optimality equation allows us to adapt the least-squares value iteration (LSVI) algorithm [2, 5, 14, 16, 31]. Linear Function Approximation. We focus on the following setting with linear transition kernels and reward functions [7, 16, 42, 43], which corresponds to a linear SCM [33]. Assumption 3.3 (Linear Confounded MDP). We assume that Ph(sh+1 | sh, ah, uh) = 〈φh(sh, ah, uh), µh(sh+1)〉, ∀h ∈ [H], (sh+1, sh, ah) ∈ S × S ×A, where φh(·, ·, ·) and µh(·) = (µ1,h(·), . . . , µd,h(·))> are Rd-valued functions. We assume that∑d i=1 ‖µi,h‖21 ≤ d and ‖φh(sh, ah, uh)‖2 ≤ 1 for all h ∈ [H] and (sh, ah, uh) ∈ S × A × U . Meanwhile, we assume that E[rh | sh, ah, uh] = φh(sh, ah, uh)>θh, ∀h ∈ [H], (sh, ah, uh) ∈ S ×A× U , (3.5) where θh ∈ Rd and ‖θh‖2 ≤ √ d for all h ∈ [H]. 
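As a sanity check of Proposition 3.2, the following sketch reuses the invented numbers from the simulation sketch in Section 2: with the confounder subset u observed offline, P(u | s), P(s′ | s, a, u), and E[r_h | s, a, u] are all estimable from the confounded data, and averaging them over P(u | s), rather than over the action-dependent reweighting induced by the behavior policy, recovers the interventional transition and reward. This is an illustrative sketch with toy quantities, not the paper's implementation.

```python
import numpy as np

# Toy tabular illustration of Proposition 3.2; every number below is invented.
# u is the observed confounder subset assumed to satisfy the backdoor criterion.
p_u_given_s = np.array([0.7, 0.3])            # P(u | s), estimable from offline data
nu = np.array([[0.9, 0.1],                    # behavior policy nu(a | s, u=0)
               [0.2, 0.8]])                   # nu(a | s, u=1)
p_next = np.array([[[0.8, 0.2], [0.5, 0.5]],  # P(s' | s, a, u=0), rows indexed by a
                   [[0.3, 0.7], [0.1, 0.9]]]) # P(s' | s, a, u=1)
r_mean = np.array([[1.0, 0.0],                # E[r | s, a, u=0], indexed by a
                   [0.2, 0.9]])               # E[r | s, a, u=1]
a = 0

# Naive conditionals implied by the confounded behavior policy:
# u is reweighted by nu(a | s, u), so the average over u is biased.
post_u = p_u_given_s * nu[:, a]
post_u = post_u / post_u.sum()
p_next_naive = post_u @ p_next[:, a, :]
r_naive = post_u @ r_mean[:, a]

# Backdoor adjustment (Proposition 3.2): average over P(u | s) itself.
p_next_do = p_u_given_s @ p_next[:, a, :]     # P(s' | s, do(a))
r_do = p_u_given_s @ r_mean[:, a]             # E[r | s, do(a)]

print("naive     P(s'|s,a=0) =", p_next_naive, " E[r|s,a=0] =", round(float(r_naive), 3))
print("adjusted  P(s'|s,do(a=0)) =", p_next_do, " E[r|s,do(a=0)] =", round(float(r_do), 3))
```

Under Assumption 3.3, the same averaging applied to the feature map, ψ_h(s, a) = E_{u∼P̃_h(·|s)}[φ_h(s, a, u)], is what lets the linear structure carry over to the interventional quantities, as formalized next.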
Such a linear setting generalizes the tabular setting where S , A, and U are finite. Proposition 3.4. We define the backdoor-adjusted feature as follows, ψh(sh, ah) = Euh∼P̃h(· | sh) [ φh(sh, ah, uh) ] , ∀h ∈ [H], (sh, ah) ∈ S ×A. (3.6) Under Assumption 3.1, it holds that P(sh+1 | sh,do(ah)) = 〈ψh(sh, ah), µh(sh+1)〉, ∀h ∈ [H], (sh+1, sh, ah) ∈ S × S ×A. Moreover, the action-value functions Qπh and Q ∗ h are linear in the backdoor-adjusted feature ψh for all π. Proof. See §F.1 for a detailed proof. Such an observation allows us to estimate the action-value function based on the backdoor-adjusted features {ψh}h∈[H] in the online setting. See §D for a detailed discussion. In the sequel, we assume that either the density of {P̃h(· | sh)}h∈[H] is known or the backdoor-adjusted feature {ψh}h∈[H] is known. In the sequel, we introduce the DOVI algorithm (Algorithm 1). Each iteration of DOVI consists of two components, namely point estimation, where we estimateQ∗h based on the confounded observational data and the interventional data, and uncertainty quantification, where we construct the upper confidence bound (UCB) of the point estimator. Algorithm 1 Deconfounded Optimistic Value Iteration (DOVI) for Confounded MDP Require: Observational data {(sih, aih, uih, rih)}i∈[n],h∈[H], tuning parameters λ, β > 0, backdooradjusted feature {ψh}h∈[H], which is defined in (3.6). 1: Initialization: Set {Q0h, V 0h }h∈[H] as zero functions and V kH+1 as a zero function for k ∈ [K]. 2: for k = 1, . . . ,K do 3: for h = H, . . . , 1 do 4: Set ωkh ← argminω∈Rd ∑k−1 τ=1(r τ h + V τ h+1(s τ h+1) − ω>ψh(sτh, aτh))2 + λ‖ω‖22 + Lkh(ω), where Lkh is defined in (3.8). 5: Set Qkh(·, ·)← min{ψh(·, ·)>ωkh + Γkh(·, ·), H − h}, where Γkh is defined in (3.12). 6: Set πkh(· | sh)← argmaxah∈AQ k h(sh, ah) for all sh ∈ S. 7: Set V kh (·)← 〈πkh(· | ·), Qkh(·, ·)〉A. 8: end for 9: Obtain sk1 from the environment. 10: for h = 1, . . . ,H do 11: Take akh ∼ πkh(· | skh). Obtain rkh = rh(skh, akh, ukh) and skh+1. 12: end for 13: end for Point Estimation. To solve the Bellman optimality equation in (3.4), we minimize the empirical mean-squared Bellman error as follows at each step, ωkh ← argmin ω∈Rd k−1∑ τ=1 ( rτh + V τ h+1(s τ h+1)− ω>ψh(sτh, aτh) )2 + λ‖ω‖22 + Lkh(ω), h = H, . . . , 1, (3.7) where we set V kH+1 = 0 for all k ∈ [K] and V τh+1 is defined in Line 7 of Algorithm 1 for all (τ, h) ∈ [K] × [H − 1]. Here k is the index of episode, λ > 0 is a tuning parameter, and Lkh is a regularizer, which is constructed based on the confounded observational data. More specifically, we define Lkh(ω) = n∑ i=1 ( rih + V k h+1(s i h+1)− ω>φh(sih, aih, uih) )2 , ∀(k, h) ∈ [K]× [H], (3.8) which corresponds to the least-squares loss for regressing rih + V k h+1(s i h+1) against φh(s i h, a i h, u i h) for all i ∈ [n]. Here {(sih, aih, uih, rih)}(i,h)∈[n]×[H] are the confounded observational data, where uih ∼ P̃h(· | sih), sih+1 ∼ Ph(· | sih, aih, uih), and aih ∼ νh(· | sih, wih) with ν = {νh}h∈[H] being the behavior policy. Here recall that, with a slight abuse of notation, we write P(sh+1 | sh, ah, uh) as Ph(sh+1 | sh, ah, uh) and P(uh | sh) as P̃h(uh | sh), since they are induced by the SCM defined in §2. The update in (3.7) takes the following explicit form, ωkh ← (Λkh)−1 ( k−1∑ τ=1 ψh(s τ h, a τ h) · ( V kh+1(s τ h+1) + r τ h ) + n∑ i=1 φh(s i h, a i h, u i h) · ( V kh+1(s i h+1) + r i h )) , (3.9) where Λkh = k−1∑ τ=1 ψh(s τ h, a τ h)ψh(s τ h, a τ h) > + n∑ i=1 φh(s i h, a i h, u i h)φh(s i h, a i h, u i h) > + λI. 
(3.10) Uncertainty Quantification. We now construct the UCB Γkh(·, ·) of the point estimator ψh(·, ·)>ωkh obtained from (3.9), which encourages the exploration of the less visited state-action pairs. To this end, we employ the following notion of information gain to motivate the UCB, Γkh(s k h, a k h) ∝ H(ωkh | ξk−1)−H ( ωkh | ξk−1 ∪ {(skh, akh)} ) , (3.11) where H(ωkh | ξk−1) is the differential entropy of the random variable ωkh given the data ξk−1. In particular, ξk−1 = {(sτh, aτh, rτh)}(τ,h)∈[k−1]×[H] ∪ {(sih, aih, uih, rih)}(i,h)∈[n]×[H] consists of the confounded observational data and the interventional data up to the (k − 1)-th episode. However, it is challenging to characterize the distribution of ωkh. To this end, we consider a Bayesian counterpart of the confounded MDP, where the prior of ωkh is N(0, I/λ) and the residual of the regression problem in (3.7) is N(0, 1). In such a “parallel” confounded MDP, the posterior of ωkh follows N(µk,h, (Λ k h) −1), where Λkh is defined in (3.10) and µk,h coincides with the right-hand side of (3.9). Moreover, it holds for all (skh, a k h) ∈ S ×A that H(ωkh | ξk−1) = 1/2 · log det ( (2πe)d · (Λkh)−1 ) , H ( ωkh ∣∣ ξk−1 ∪ {(skh, akh)}) = 1/2 · log det((2πe)d · (Λkh + ψh(skh, akh)ψh(skh, akh)>)−1). Correspondingly, we employ the following UCB, which instantiates (3.11), that is, Γkh(s k h, a k h) = β · ( log det ( Λkh + ψh(s k h, a k h)ψh(s k h, a k h) >)− log det(Λkh))1/2 (3.12) for all (skh, a k h) ∈ S × A. Here β > 0 is a tuning parameter. We highlight that, although the information gain in (3.11) relies on the “parallel” confounded MDP, the UCB in (3.12), which is used in Line 5 of Algorithm 1, does not rely on the Bayesian perspective. Also, our analysis establishes the frequentist regret. Regularization with Observational Data: A Bayesian Perspective. In the “parallel” confounded MDP, it holds that ωkh ∼ N(0, I/λ), ωkh | ξ0 ∼ N ( µ1,h, (Λ 1 h) −1), ωkh | ξk−1 ∼ N(µk,h, (Λkh)−1), where µk,h coincides with the right-hand side of (3.9) and µ1,h is defined by setting k = 1 in µk,h. Here ξ0 = {(sih, aih, uih, rih)}(i,h)∈[n]×[H] are the confounded observational data. Hence, the regularizer Lkh in (3.8) corresponds to using ω k h | ξ0 as the prior for the Bayesian regression problem given only the interventional data ξk−1 \ ξ0 = {(sτh, aτh, rτh)}(τ,h)∈[k−1]×[H]. 3.2 Theory The following theorem characterizes the regret of DOVI, which is defined in (2.3). Theorem 3.5 (Regret of DOVI). Let β = CdH √ log(d(T + nH)/ζ) and λ = 1, where C > 0 and ζ ∈ (0, 1] are absolute constants. Under Assumptions 3.1 and 3.3, it holds with probability at least 1− 5ζ/2 that Regret(T ) ≤ C ′ ·∆H · √ d3H3T · √ log ( d(T + nH)/ζ ) , (3.13) where C ′ > 0 is an absolute constant and ∆H = 1√ dH2 H∑ h=1 ( log det(ΛK+1h )− log det(Λ 1 h) )1/2 . (3.14) Proof. See §F.3 for a detailed proof. Note that ΛK+1h (n + K + λ)I and Λ1h λI for all h ∈ [H]. Hence, it holds that ∆H = O( √ log(n+K + 1)) in the worst case. Thus, the regret of DOVI isO( √ d3H3T ) up to logarithmic factors, which is optimal in the total number of steps T if we only consider the online setting. However, ∆H is possibly much smaller than O( √ log(n+K + 1)), depending on the amount of information carried over by the confounded observational data from the offline setting, which is quantified in the following. Interpretation of ∆H : An Information-Theoretic Perspective. Let ω∗h be the parameter of the globally optimal action-value function Q∗h, which corresponds to π ∗ in (2.3). 
Recall that we denote by ξ0 and ξK the confounded observational data {(sih, aih, uih, rih)}(i,h)∈[n]×[H] and the union {(sih, aih, uih, rih)}(i,h)∈[n]×[H] ∪ {(skh, akh, rkh)}(k,h)∈[K]×[H] of the confounded observational data and the interventional data up to the K-th episode, respectively. We consider the aforementioned Bayesian counterpart of the confounded MDP, where the prior of ω∗h is also N(0, I/λ). In such a “parallel” confounded MDP, we have ω∗h ∼ N(0, I/λ), ω∗h | ξ0 ∼ N ( µ∗1,h, (Λ 1 h) −1), ω∗h | ξK ∼ N(µ∗K,h, (ΛK+1h )−1), (3.15) where µ∗1,h = (Λ 1 h) −1 n∑ i=1 φh(s i h, a i h, u i h) · ( V ∗h+1(s i h+1) + r i h ) , µ∗K,h = (Λ K+1 h ) −1 ( Λ1hµ ∗ 1,h + K∑ τ=1 ψh(s τ h, a τ h) · ( V ∗h+1(s τ h+1) + r τ h )) . It then holds for the right-hand side of (3.14) that 1/2 · log det(ΛK+1h )− 1/2 · log det(Λ 1 h) = H(ω ∗ h | ξ0)−H(ω∗h | ξK). (3.16) The left-hand side of (3.16) characterizes the information gain of intervention in the online setting given the confounded observational data in the offline setting. In other words, if the confounded observational data are sufficiently informative upon the backdoor adjustment, then ∆H is small, which implies that the regret is small. More specifically, the matrices (Λ1h) −1 and (ΛK+1h ) −1 defined in (3.10) characterize the ellipsoidal confidence sets given ξ0 and ξK , respectively. If the confounded observational data are sufficiently informative upon the backdoor adjustment, ΛK+1h is close to Λ1h. To illustrate, let {ψh(sτh, aτh)}(τ,h)∈[K]×[H] and {φh(sih, aih, uih)}(i,h)∈[n]×[H] be sampled uniformly at random from the canonical basis {e`}`∈[d] of Rd. It then holds that ΛK+1h ≈ (K + n)I/d + λI and Λ1h ≈ nI/d + λI . Hence, for λ = 1 and sufficiently large n and K, we have ∆H = O( √ log(1 +K/(n+ d))) = O( √ K/(n+ d)). For example, for n = Ω(K2), it holds that ∆H = O(n−1/2), which implies that the regret of DOVI is O(n−1/2 · √ d3H3T ). In other words, if the confounded observational data are sufficiently informative upon the backdoor adjustment, the regret of DOVI can be arbitrarily small given a sufficiently large sample size n of the confounded observational data, which is often the case in practice [8, 9, 21, 22, 29]. 4 Conclusion In this paper, we propose the deconfounded optimistic value iteration (DOVI) algorithm and its variant DOVI+, which incorporate the confounded observational data to the online reinforcement learning in a provably efficient manner. DOVI and DOVI+ explicitly adjust for the confounding bias in the observational data via the backdoor and frontdoor adjustments, respectively. In both cases, such adjustments allow us to construct the bonus based on a notion of information gain, which considers the amount of information acquired from the offline dataset. We further conduct regret analysis of DOVI and DOVI+. Our analysis suggests that practitioners can tackle the confounding issue in the offline dataset by estimating the counterfactual reward for value function estimations, given that a proper adjustment such as the backdoor or frontdoor adjustment is available. In the case of backdoor and frontdoor adjustment, we prove that the regret of DOVI is smaller than the optimal regret achievable in the pure online setting when the confounded observational data are informative upon the adjustments, suggesting that one can exploit the confounded observational data in reinforcement learning upon proper adjustments. 
In future work, we wish to incorporate proxy variables that are native to MDPs for the adjustment of the offline dataset, such as the variables exploited by [4, 24, 40]. Acknowledgements Zhaoran Wang acknowledges the National Science Foundation (Awards 2048075, 2008827, 2015568, 1934931), the Simons Institute (Theory of Reinforcement Learning), Amazon, J.P. Morgan, and Two Sigma for their support. Zhuoran Yang acknowledges the Simons Institute (Theory of Reinforcement Learning). The authors also thank the anonymous reviewers, whose invaluable suggestions helped the authors improve the paper.
1. How does the proposed method, DOVI, compare to other algorithms in tackling confounded MDPs? 2. Can the authors provide a clearer description of how DOVI relates to existing least-squares based VI algorithms? 3. How does one obtain the back-door adjusted feature in general, given that only samples of s_h and u_h are observed? 4. How will the estimation error of the back-door adjusted feature play a role in the regret? 5. What is the significance of the sublinear regret bound provided for DOVI in linear confounded MDPs? 6. Are there any sensitivity analyses performed on the assumption of linear confounded MDPs? If so, what were the results?
Summary Of The Paper Review
Summary Of The Paper In the main paper, the authors study the problem of performing value iteration using confounded observational data where the confounders are partially observable. Applying for backdoor adjustment to correct the observational data, the authors propose de-confounded optimistic value iteration (DOVI) in this setting. Finally, a sublinear regret bound of DOVI is provided for linear confounded MDPs and cases where the observed subset of the confounders satisfy the backdoor criterion. Review The paper proposes an interesting problem to study and has a clear motivation. DOVI is an adaptation of least squares value iteration in linear confounded MDPs. The authors have given a clear description of the algorithm and its guarantee. (1) Can the authors comment on whether DOVI can be considered an instantiation of https://arxiv.org/pdf/2011.04622.pdf with a particular feature map and regularization? Given that we only observe samples of s_h, u_h, how should one obtain the back-door adjusted feature in general? And how will the estimation error of the back-door adjusted feature play a role in the regret? (2) Experiments: It would be very helpful to showcase how DOVI performs empirically compared to VI methods that do not take confounding into account, i.e., treating u_h as part of the state variable. Since the regret guarantee greatly depends on the linear confounded MDPs assumption, performing some sensitivity analysis on that would be very important. Overall, the strengths of the paper are (1) a clean formulation of confounded MDPs and (2) DOVI which achieves sublinear regret for linear confounded MDPs. The main weaknesses of the paper are (1) (theoretical and empirical) comparison of DOVI with other algorithms in tackling confounded MDPs (e.g., general algorithms for solving POMDPs or algorithms that are designed for unconfounded MDPs); (2) a clearer description on how DOVI is related to existing least-squares based VI algorithms. Minor typo: L4: "semple" -> "sample" Update after the author response I have read through the author response and other reviews. The author response has clarified some of my concerns. Similar to reviewer czeu and 8cdv, I believe that an adequate experiment section is needed for the paper. Hence, I am maintaining my score.